Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1954–1964, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Cross-Lingual Morphological Tagging for Low-Resource Languages Jan Buys Department of Computer Science University of Oxford [email protected] Jan A. Botha Google Inc. London [email protected] Abstract Morphologically rich languages often lack the annotated linguistic resources required to develop accurate natural language processing tools. We propose models suitable for training morphological taggers with rich tagsets for low-resource languages without using direct supervision. Our approach extends existing approaches of projecting part-of-speech tags across languages, using bitext to infer constraints on the possible tags for a given word type or token. We propose a tagging model using Wsabie, a discriminative embeddingbased model with rank-based learning. In our evaluation on 11 languages, on average this model performs on par with a baseline weakly-supervised HMM, while being more scalable. Multilingual experiments show that the method performs best when projecting between related language pairs. Despite the inherently lossy projection, we show that the morphological tags predicted by our models improve the downstream performance of a parser by +0.6 LAS on average. 1 Introduction Morphologically rich languages pose significant challenges for Natural Language Processing (NLP) due to data-sparseness caused by large vocabularies. Intermediate processing is often required to address the limitations of only using surface forms, especially for small datasets. Common morphological processing tasks include segmentation (Creutz and Lagus, 2007; Snyder and Barzilay, 2008), paradigm learning (Durrett and DeNero, 2013; Ahlberg et al., 2015) and morphological tagging (M¨uller and Schuetze, 2015). In this paper we focus on the latter. Parts-of-speech (POS) tagging is the most common form of syntactic annotation. However, the granularity of POS varies across languages and annotation-schemas, and tagsets have often been extended to include tags for morphologicallymarked properties such as number, case or degree. To enable cross-lingual learning, a small set of universal (coarse-grained) POS tags have been proposed (Petrov et al., 2012). For morphological processing this can be complemented with a set of attribute-feature values that makes the annotation more fine-grained (Zeman, 2008; Sylak-Glassman et al., 2015b). Tagging text with morphologically-enriched labels has been shown to benefit downstream tasks such as parsing (Tsarfaty et al., 2010) and semantic role labelling (Hajiˇc et al., 2009). In generation tasks such as machine translation these tags can help to generate the right form of a word and to model agreement (Toutanova et al., 2008). Morphological information can also benefit automatic speech recognition for low-resource languages (Besacier et al., 2014). However, annotating sufficient data to learn accurate morphological taggers is expensive and relies on linguistic expertise, and is therefore currently only feasible for the world’s most widelyused languages. In this paper we are interested in learning morphological taggers without the availability of supervised data. 
A successful paradigm for learning without direct supervision is to make use of word-aligned parallel text, with a resourcerich language on one side and a resource-poor language on the other side (Yarowsky et al., 2001; Fossum and Abney, 2005; Das and Petrov, 2011; T¨ackstr¨om et al., 2013). In this paper we extend these methods, that have mostly been proposed for universal POS-taggers, to learn weakly-supervised morphological taggers. 1954 Our approach is based on projecting token and type constraints across parallel text, learning a tagger in a weakly-supervised manner from the projected constraints (T¨ackstr¨om et al., 2013). We propose an embedding-based model trained with the Wsabie algorithm (Weston et al., 2011), and compare this approach against a baseline HMM model. We evaluate the projected tags for a set of languages for which morphological tags are available in the Universal Dependency corpora. To show the feasibility of our approach, and to compare the performance of different models, we use English as source language. Then we perform an evaluation on all language pairs in the set of target languages which shows that the best performance is obtained when projecting between genealogically related languages. As an extrinsic evaluation of our approach, we show that NLP models can benefit from using these induced tags even if they are not as accurate as tags produced by supervised models, by evaluating the effect of features obtained from tags predicted by the induced morphological taggers in dependency parsing. 2 Universal Morphological Tags In order to do cross-lingual learning we require a common morphological tagset. To evaluate these models we require datasets in multiple languages which have been annotated with such a consistent schema. The treebanks annotated in the Universal Dependencies (UD) project (de Marneffe et al., 2014) are suitable for this purpose. All the data is annotated with universal POS tags, a set of 17 tags1. We use UD v1.2 (Nivre et al., 2015), which contain 25 languages annotated with morphological attributes (called features). In addition to POS, there are 17 universal attributes, which each takes one of a set of values when annotated. The morphological tag of a token denotes the union of its morphological attributevalue pairs, including its POS. Although the schema is consistent across languages, there are language-specific phenomena and considerations that result in some mismatches for a given pair of languages. One source of this is that the UD treebanks were mostly constructed by fully or semi-automatic conversion of exist1This extends, but is not fully consistent with, the set of 12 tags proposed by Petrov et al. (2012). ing treebanks which had used different annotation schemes. Furthermore, not all the attributes and values appear in all languages (e.g. additional cases in morphologically-rich languages such as Finnish), and there are still a number of languagespecific tags not in the universal schema. Finally, in some instances properties that are not realised in the surface word form are absent from the annotation (e.g. in English the person and number of verbs are only annotated for third-person singular, as there are no distinct morphological forms for their other values). An example of the morphological annotation employed is given in Figure 1. Note that the annotations for aligned word-pairs are not fully consistent. Some attributes appear only in the English treebank (e.g. Voice), while others appear only in the Dutch treebank (e.g. 
Aspect, Subcat). 3 Tag Projection across Bitext Our approach to train morphological taggers is based on the paradigm of projecting token and type constraints as proposed by T¨ackstr¨om et al. (2013). The training data consist of parallel text with the resource-rich language on the source-side and the low-resource language on the target side. The source-side text is tagged with a supervised morphological tagger. For every target-side sentence, the type and token constraints are used to construct a set of permitted tags for each token in the sentence. These constraints will then be used to train morphological taggers. 3.1 Type and token constraints To extract constraints from the parallel text, we first obtain bidirectional word alignments. To ensure high quality alignments, alignment pairs with a confidence below a fixed threshold α are removed. The motivation for using only highconfidence alignments is that incorrect alignments will hurt the performance of the model, while it is easier to use more parallel text to obtain a sufficient number of alignments for training. The first class of constraints that we extract from the parallel text is type constraints. For each word type, we construct a distribution over tags for the word by accumulating counts of the morphological tags of source-side tokens that are aligned to instances of the word type. The set of tags with probability above some threshold β is taken as the tag dictionary entry for that word type. To 1955 POS=PRON Number=Plur Person=1 Poss=Yes PronType=Prs POS=NOUN Number=Sing POS=AUX Mood=Ind Number=Sing Person=3 Tense=Pres VerbForm=Fin POS=VERB Tense=Past VerbForm=Part Voice=Pass POS=ADP POS=NOUN Number=Sing POS=NOUN Number=Sing POS=PUNCT Our independence is guaranteed by law today . Onze onafhankelijkheid wordt vandaag bij wet gegarandeerd . POS=PRON Number=Plur Person=1 Poss=Yes PronType=Prs POS=NOUN Number=Sing POS=AUX Aspect=Imp Mood=Ind Number=Sing Person=3 Tense=Pres VerbForm=Fin POS=ADV Degree=Pos POS=ADP AdpType=Prep POS=NOUN Number=Sing POS=VERB Tense=Past VerbForm=Part SubCat=Tran POS=PUNCT Figure 1: A parallel sentence in English and Dutch annotated with universal morphological tags, showing high-confidence automatic word-alignments. Attribute-value pairs that occur only on one side of an aligned pair of tokens are indicated in italics. The dashed line indicates a low-confidence alignment point, which is ignored in our projection method. construct the training examples, each token whose type occurs in the tag dictionary is restricted to the set of tags in the dictionary entry. For tokens for which the dictionary entry is empty, all the tags are included in the set of permitted tags (this happens when the tag distribution is too flat and all the probabilities are below the threshold). In principle, type constraints can also be obtained from an external dictionary, but in this paper we assume we do not have such a resource. The second class of constraints places restrictions on word tokens. Every target token is constrained to the tag of its aligned source token, while unaligned tokens can take any tag. Token constraints are combined with type constraints as proposed by T¨ackstr¨om et al. (2013): If a token is unaligned, its type constraints are used. If the token is aligned, and there is no dictionary entry for the token type, the token constraint is used. If there is a dictionary entry for the token type, and the token constraint tag is in the dictionary, the token constraint is used. 
If the token constraint tag is not in the dictionary entry, the type constraints are used. 4 Learning from Projected Tags Next we propose models to learn a morphological tagger from cross-lingually projected constraints. 4.1 Related work HMMs have previously been used for weaklysupervised learning from token or type constraints (Das and Petrov, 2011; Li et al., 2012; T¨ackstr¨om et al., 2013). HMMs are generative models, and in this setting the words in the target sentence form the observed sequence and the morphological tags the hidden sequence. The projected constraints are used as partially observed training data for the hidden sequence. T¨ackstr¨om et al. (2013) proposed a discriminative CRF model that relies on incorporating two sets of constraints, of which one is a subset of the other. Ganchev and Das (2013) used a similar CRF model, but instead of using the projected tags as hard constraints, they were employed as soft constraints with posterior regularization. The model of Wisniewski et al. (2014) makes greedy predictions with a history-based model, that includes previously predicted tags in the sequence, during training and testing. The model is trained with a variant of the perceptron algorithm that allows a set of positive labels. When an incorrect prediction is made during training, the parameters are updated in the direction of all the positive labels. 4.2 HMM model As a baseline model we use an HMM where the transition and emission distributions are parameterized by log-linear models (a feature-HMM). Training is performed with L-BFGS rather than with the EM algorithm. This parameterization was proposed by Berg-Kirkpatrick et al. (2010) and applied to cross-lingual POS induction by Das and 1956 Petrov (2011) and T¨ackstr¨om et al. (2013). Let w be the target sentence and t the sequence of tags for the sentence. The marginal probability of a sequence during training is p(w1:n) = X t1:n∈T n Y i=1 p(ti|ti−1)p(wi|ti), where T is the set of tag sequences allowed by the type and token constraints. The probability of all other tag sequences are assumed to be 0. The features in our model are similar to those used by T¨ackstr¨om et al. (2013), including features based on word and tag identity, suffixes up to length 3, punctuation and word clusters. Word clusters are obtained by clustering frequent words into 256 clusters with the Exchange algorithm (Uszkoreit and Brants, 2008), using the data and methodology detailed in T¨ackstr¨om et al. (2012). 4.3 Wsabie model We propose a discriminative model based on Wsabie (Weston et al., 2011), a shallow neural network that learns to optimize precision at the top of a ranked list of labels. In our application, the goal is to learn to rank the set of tags allowed by the projected constraints in the training data above all other tags. In contrast to the HMM, which performs inference over the entire sequence, Wsabie makes the predictions at each token independently, based on a large context-size. Therefore, Wsabie inference is linear in the number of tags, while for an HMM it is quadratic, making the Wsabie model much faster during training and decoding. Wsabie maps the input features and output labels into a low-dimensional joint space. The input vector x for a word w consists of the concatenation of word embeddings and sparse features extracted from w and the surrounding context. A mapping ΘI(x) = V x maps x ∈Rd into RD, with matrix V ∈RD×d of parameters. 
The output tag t is mapped into the same space by ΘO(t) = Wt, where W ∈RD×L is a matrix of output tag embeddings and Wt selects the column embedding of tag t. The model score for tag t given input token with feature vector x is the dot product ft(x) = ΘO(t)T ΘI(x), where the tags are ranked by the magnitude of ft(x). The norms of the columns of V and W are constrained, which acts as a regularizer. The loss function is a margin-based hinge loss based on the rank of a tag given by ft(x). The rank is estimated by sampling an incorrect tag uniformly with replacement until the sampled tag violates the margin with a correct tag. Training is performed with stochastic gradient descent by performing a gradient step against the violating tag. The word embedding features for the Wsabie models consist of 64-dimensional word vectors of the 5 words on either side of a token and of the token itself. The embeddings are trained with word2vec (Mikolov et al., 2013) on large corpora of newswire text. Sparse features are based on prefixes and suffixes up to length 3 as well as word cluster features for a window size 3 around the token, using the clusters described in the previous section. 5 Experiments We evaluate our model in two settings. The first evaluation measures the accuracy of the crosslingual taggers on language pairs where annotated data is available for both languages. The annotated target language data is used only during evaluation and not for training. Second, we perform a downstream evaluation by including the morphological attributes predicted by the tagger as features in a dependency parser to guage the effectiveness of our approach in a setting where one does not have access to gold morphological annotations. 5.1 Experimental setup As source of parallel training data we use Europarl2 (Koehn, 2005) version 7. Sentences are tokenized but not lower-cased, and sentences longer than 80 words are excluded. In our experiments we learn taggers for a set of 11 European languages that have both UD training data with morphological features, and parallel data in Europarl: Bulgarian, Czech, Danish, Dutch, Finnish, Italian, Polish, Portuguese, Slovene, Spanish and Swedish. We train cross-lingual models in two setups: The first uses English as source language; in the second we train models with different source languages for each target language. Word alignments over the parallel data are obtained using FastAlign (Dyer et al., 2013). High2http://www.statmt.org/europarl/ 1957 confidence bidirectional word alignments are constructed by intersecting the alignments in the two directions and including alignment points only if the posterior probabilities in both directions are above the alignment threshold α. For each language pair all the word-aligned parallel data available (between 10 and 50 million target-side tokens per language) are used to extract the type constraints, and the models are trained on a subset of 2 million target-side tokens (optionally with their token constraints). The number of distinct attribute-value pairs appearing in the tagsets depends on the language pair and ranges between 35 and 79, with 54 on average (including POS tags). The number of distinct composite morphological tags is 423 on average, with a much larger range, between 81 and 1483. The English UD data has 116 tags composed out of 51 distinct attribute-value pairs. 
Therefore, we can project a reasonable number of morpho-syntactic attributes from English, although the number of attribute combinations that occur in the data is less than for morphologically richer languages. The source text is tagged with supervised taggers, trained with Wsabie on the UD training data for each of the source languages used. For each language pair, we train a distinct source-side model covering only the attribute types appearing in both languages. This is meant to obtain a maximally accurate source-side tagger, while accepting that our approach cannot predict target-side attributes that are absent from the source language. The average accuracy of the English taggers on the UD test data is 94.96%. The source-side taggers over all the language pairs we experiment on have an average accuracy of 95.75%, with a minimum of 89.14% and a maximum of 98.59%. 5.2 Tuning The hyperparameters of the Wsabie taggers are tuned on the English development set, and the same parameters are used for the Wsabie targetside models trained on the projected tags. The optimal setting is a learning rate of 0.01, embedding dimension size D = 50, margin 0.1, and 25 training iterations. Hyperparameters for the projection models are set by tuning on the UD dev set accuracy for English to Danish. English was chosen as it is the language with the most available data and the most likely to be used when projecting to other languages; Danish simply because its corpus size is typical of the larger languages in Europarl. Using a small grid search, we choose the parameters that give the best average accuracy across all four projection model instances we consider. This allows using the same hyperparameters for all these models, an important factor in making them comparable in the evaluation, since the hyperparameters determine the effective training data. The parameters tuned in this manner are the alignment threshold α, which is set to 0.8, and the type distribution threshold β, set to 0.3. 5.3 Tagging evaluation setup In order to evaluate the induced taggers on the annotated UD data for the target languages, we define two settings that circumvent mismatches between source and target language annotations to different degrees. The STANDARD setting involves first making minor corrections to certain predicted POS values to account for inconsistencies in the original annotated data. When predicted by the model, the POS tag values absent from the target language training corpus are deterministically mapped to the mostrelated value present in the target language in the following way: PROPN to NOUN; SYM and INTJ to X; SYM and X to PUNCT. Besides POS, the evaluation considers only those attribute types that appear in both languages’ training corpora, i.e., the set of attributes for which the model was trained. Note that this leaves cases intact where the model predicts certain attribute values that appear only in one of the two languages; it is thus penalised for making mistakes on values that it cannot learn under our projection approach. The second evaluation setting, INTERSECTED, relaxes the latter aspect: it only considers attribute-value pairs appearing in the training corpora of both languages. The motivation for this is to get a better measurement of the accuracy of our method, assuming that the tagsets are consistent. In both settings we report macro-averaged F1 scores over all the considered attribute types. 
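To make the evaluation protocol concrete, the following is a minimal sketch (not the authors' evaluation script) of the macro-averaged F1 computation with the STANDARD POS remapping, assuming gold and predicted analyses are given as per-token dictionaries mapping attribute types to values; the remapping table covers only the mappings listed above, and all function and variable names are illustrative.

```python
from collections import defaultdict

# STANDARD setting: predicted POS values absent from the target-language
# training corpus are mapped to the most-related value present in it.
POS_REMAP = {"PROPN": "NOUN", "INTJ": "X", "SYM": "X"}  # SYM/X fall back to PUNCT if X is also absent

def macro_f1(gold_sents, pred_sents, attributes, target_pos_values):
    """Macro-averaged F1 over attribute types (POS plus the shared attributes)."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for gold_sent, pred_sent in zip(gold_sents, pred_sents):
        for gold, pred in zip(gold_sent, pred_sent):      # per-token attribute dicts
            for attr in attributes:
                g, p = gold.get(attr), pred.get(attr)
                if attr == "POS" and p is not None and p not in target_pos_values:
                    p = POS_REMAP.get(p, p)
                if p is None and g is None:
                    continue                              # attribute absent on both sides
                if p == g:
                    tp[attr] += 1
                else:
                    if p is not None:
                        fp[attr] += 1
                    if g is not None:
                        fn[attr] += 1
    f1s = []
    for attr in attributes:
        prec = tp[attr] / (tp[attr] + fp[attr]) if tp[attr] + fp[attr] else 0.0
        rec = tp[attr] / (tp[attr] + fn[attr]) if tp[attr] + fn[attr] else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

The INTERSECTED setting would additionally filter out attribute-value pairs that do not occur in both training corpora before scoring.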
Results for Wsabie are averaged over 3 random restarts because it uses stochastic optimization during training. 5.4 Tagging results projecting from English Following previous work on projecting POS tags and the assumption that it is easier to obtain paral1958 Model STANDARD INTERSECTED POS HMM projected type 53.86 (-) 58.67 (-) 79.45 (-) HMM projected type and token 48.49 (-) 52.40 (-) 73.61 (-) unambiguous type 51.72 (0.33) 56.22 (0.36) 79.58 (0.22) projected type 53.60 (0.16) 58.11 (0.18) 80.09 (0.12) projected type and token 53.36 (0.19) 57.77 (0.21) 79.94 (0.11) supervised 1K 62.44 (1.52) 61.74 (1.55) 72.51 (0.82) supervised type 75.55 (1.88) 74.72 (1.95) 75.91 (1.16) Table 1: Cross-lingual morphological tagging from English: Macro F1 scores averaged across 11 languages. All the results except for the first two rows are for Wsabie models. The standard deviation over 3 runs is given in brackets. lel data between a low-resource language and English than with another language, we start by training cross-lingual taggers using English as source language. The overall tagging results are given in Table 1. In addition to evaluating the morphological tags in the two settings described above, we also report accuracies for POS tags only, projected jointly with the morphological attributes. We find that for both the HMM and Wsabie models the performance with type and token constraints is worse than when only using type constraints. T¨ackstr¨om et al. (2013) similarly found that for HMMs for POS projection, models with joint constraints do not perform better than those using only type constraints. They postulated that this is due to the type dictionaries having the same biases as token projections, and therefore the model with joint constraints not being able to filter out systematic errors in the projections. For both sets of constraints the performance of the Wsabie model is close to that of the corresponding HMM, despite the Wsabie model having a linear runtime against the quadratic runtime of the HMM. As another baseline we train a Wsabie model on unambiguous type constraints, i.e., we only extract training examples for words which only have a single tag in the tag dictionary. Including ambiguous type constraints gives an average improvement of 2.2%. As a target ceiling on performance we train a Wsabie model with supervised type constraints. This model uses type constraints based on an oracle morphological tag dictionary extracted from the gold training data of the target language. It is trained on the same training data as the projected models (without token constraints). The model scores higher on STANDARD than on INTERSECTED, as it has access to annotations for the full set of tags used in the target language, not just the restricted set that can be projected. This oracle performs on average 17% better than the projected type constraints model on INTERSECTED. Therefore, despite the promising results of our approach, there is still a considerable amount of noise in the type constraints extracted from the aligned data. We also compare the performance of the model to that of a supervised model trained on a small annotated corpus. Average performance when training on 1000 annotated tokens is only a few points higher than that of the best projected model for INTERSECTED. 
Given that is it expensive to let annotators learn to annotate a large set of attributes, even for a small corpus, it shows that our model can bring considerable benefits in practice to the development of NLP models for low-resource languages. It is possible to obtain further improvements in performance by learning jointly from a small annotated dataset and parallel data (Duong et al., 2014), but we leave that for future work. The results when evaluating only the POS tags follow the same pattern, except that the overall level of accuracy is much higher than when considering all morphological attributes. For POS, the models with projected constraints actually perform better than those with supervised type constraints. In this case the benefits from learning constraints from a larger set of word types seem to outweigh the noise in the projections. The projected models are also more accurate than the supervised model trained on 1000 tokens. 5.5 Multilingual tagging results Results for cross-lingual experiments on all pairs of the target languages under consideration are 1959 bg cs da es fi it nl pl pt sl sv Avg. en 46.7 49.7 58.0 55.7 54.0 59.6 64.1 45.0 57.8 51.0 47.9 53.6 bg 58.3 59.2 51.2 52.6 43.2 38.7 52.8 41.1 49.2 53.6 50.0 cs 55.2 54.5 42.3 48.4 51.3 45.0 56.8 33.6 67.5 53.2 50.8 da 61.9 61.6 41.8 49.1 45.5 49.6 53.7 44.0 49.3 72.1 52.9 es 54.3 58.8 41.3 53.0 74.4 52.1 52.2 69.2 53.8 46.9 55.6 fi 46.6 48.7 45.3 39.5 50.9 36.8 37.4 30.1 55.5 57.8 44.9 it 43.6 59.4 44.0 74.0 53.3 54.3 46.5 69.2 55.9 47.0 54.7 nl 44.7 59.5 56.2 54.8 54.0 60.3 55.9 58.6 48.6 51.6 54.4 pl 52.7 58.6 46.3 37.5 42.1 47.9 42.1 40.7 56.0 42.6 46.6 pt 45.4 45.0 49.6 66.2 42.6 69.5 50.1 43.5 47.8 43.9 50.3 sl 46.6 60.7 35.2 40.9 49.2 49.8 36.0 54.1 35.0 40.4 44.8 sv 50.1 54.6 70.7 47.7 57.2 49.7 46.9 41.6 46.3 43.5 50.8 Avg 49.8 55.9 50.9 50.1 50.5 54.7 46.9 49.0 47.8 52.6 50.6 Table 2: Cross-lingual morphological tagging results (STANDARD F1 scores) per source and target language, Wsabie projected model with type constraints. Rows indicate source language and columns target language. given in Table 2, using the STANDARD evaluation setup. We make use of Wsabie for these experiments, as it is a more efficient model, which is especially significant when training models with large tagsets. We see that there is large variance in the morphological tagging accuracies across language pairs. In most cases the source language for which we learn the most accurate model for morphological tagging on the target language is a related language. The Romance languages we consider (Spanish, Italian and Portuguese) seem to transfer particularly well across each other. Swedish and Danish also transfer well to each other, while English transfers best to Dutch, which the former is most closely related to among the languages compared here. However, there are also some cases of unrelated source languages performing best: Using Danish as source language gives the highest performing models for both Bulgarian and Czech. When comparing these results, however, one should keep in mind that the attribute type sets used to train taggers from different source languages for the same target language is not always the same (due to our definition of the STANDARD evaluation), therefore these results should not be interpreted directly as indicating which source language gives the best target language performance on a particular tagset. 
We compare the results of the STANDARD and INTERSECTED evaluations, both when using English as source language, and when using the source language which gives the highest accuracy on STANDARD for each target language (Table 3). We see that the gap in performance between the two evaluations tends to be larger when projectSTANDARD INTERSECTED enbestenbestbg 46.7 61.88 51.6 64.97 cs 49.7 61.57 55.7 63.97 da 58.0 70.74 65.4 73.14 es 55.7 74.01 60.7 74.62 fi 54.0 57.23 59.1 59.11 it 59.6 74.42 66.1 75.32 nl 64.1 64.12 64.7 64.66 pl 45.0 56.83 47.3 60.39 pt 57.8 69.22 60.2 73.10 sl 51.0 67.48 53.4 69.86 sv 47.9 72.07 55.1 74.60 Table 3: Comparison of the performance of the most accurate cross-lingual taggers for each target language, compared to having English as source language. ing from English than when projecting from the source language which performs best for each target language. One of the main causes of variation in performance is annotation differences. Languages that are morphologically rich tend to have lower performance, but we also see variation between similar languages: There is a 10% performance gap between Danish and Swedish when projecting from English, even though they are closely related. We also investigate the effect of the choice of source language on the accuracy of the projected POS tags (Table 4). Again, we compare the performance with English as source (which is standard for previous work on POS projection) to that of the best source language for each target. Although the gap in performance is smaller than for 1960 Target enbestbg 81.84 81.84 (en) cs 80.41 86.29 (sl) da 80.69 84.85 (sv) es 86.02 89.04 (it) fi 77.07 77.48 (cs) it 83.46 86.91 (es) nl 73.05 76.02 (da) pl 79.38 82.66 (cs) pt 84.30 87.98 (es) sl 74.71 83.21 (cs) sv 80.37 86.47 (da) Table 4: Wsabie projected model with type constraints, POS accuracy with English and the best language for each target as source. the full evaluation, we see that for most target languages we can still do better by projecting from a language other than English. Detailed per attribute results for the STANDARD evaluation are given in Table 5, again comparing the results of projecting from English to that of the most accurate model for each target language. We see that there are large differences in accuracy across attributes and across languages. In some cases, the transfer is unsuccessful. For example, degree accuracy in Italian is 2% F1 when projecting from English and 14% F1 projecting from Portuguese. Some of the cases can be explained by differences in where an attribute is marked: For example, for definiteness the performance is 1% from English to Bulgarian, as Bulgarian marks definiteness on nouns and adjectives rather than on determiners. Other attributes are very languagedependent. Gender transfers well between Romance languages, but poorly when transferring from English. 5.6 Parsing evaluation To evaluate the effect of our models on a downstream task, we apply the cross-lingual taggers induced using English as source language to dependency parsing. This is applicable to a scenario where a language might have a corpus annotated with dependency trees and universal POS, but not morphological attributes. We want to determine how much of the performance gain from features based on supervised morphological tags we can recover with the tags predicted by our model. 
As baseline we use a reimplementation of no morph projected type supervised bg 79.14 78.99 79.62 cs 76.88 77.25 79.03 da 69.73 70.04 71.51 es 77.66 78.08 78.64 fi 61.78 62.68 70.42 it 81.51 81.49 82.24 nl 64.76 65.80 65.92 pl 70.83 71.89 74.03 pt 75.92 76.71 77.98 sl 77.17 77.46 79.25 sv 72.92 74.09 74.58 Avg. 73.48 74.04 75.75 Table 6: Dependency parsing results (LAS) with no, projected and supervised morphological tags. Zhang and Nivre (2011), an arc-eager transitionbased dependency parser with a rich feature-set, with beam-size 8, trained for 10 epochs with a structured perceptron. We assume that universal POS tags are available, using a supervised SVM POS tagger for training and evaluation. To include the morphology, we add features based on the predicted tags of the word on top of the stack and the first two words on the buffer. Parsing results are given in Table 6. We report labelled attachment scores (LAS) for the baseline with no morphological tags, the model with features predicted by Wsabie with projected type constraints, and the model with features predicted by the supervised morphological tagger. We obtain improvements in parsing accuracies for all languages except Bulgarian when adding the induced morphological tags. Using the projected tags as features recovers 24.67% (0.6 LAS absolute) of the average gain that supervised morphology features delivers over the baseline parser. The parser with features from the supervised tagger trained on 1000 tokens obtains 73.63 LAS on average. This improvement of +0.15 LAS over the baseline versus the +0.6 of our method shows that the tags predicted by our projected models are more useful as features than those predicted by a small supervised model. To investigate the effect of source language choice for the projected models in this evaluation, we trained a model for Swedish using Danish as source language. The parsing performance is insignificantly different from using English as source, despite the accuracy of the tags projected 1961 Target bg cs da es fi it nl pl pt sl sv Source en da en it en sv en it en sv en pt en en en nl en it en cs en da Case 40 62 2 62 18 4 5 26 16 16 4 4 50 2 68 10 14 Definite 1 68 0 64 97 97 89 91 89 89 93 93 2 19 66 Degree 67 63 69 2 72 77 5 26 50 47 2 14 56 56 57 47 2 18 63 74 70 81 Gender 1 6 2 46 7 78 0 85 2 80 0 0 3 0 1 77 2 61 7 81 Mood 61 66 55 83 81 94 72 80 69 79 76 83 69 69 58 63 74 75 68 91 73 94 Number 69 71 67 75 60 82 54 92 67 68 57 90 78 78 69 63 62 75 68 91 64 94 NumType 64 62 91 86 84 82 85 86 89 66 78 78 63 65 86 73 Person 54 28 63 68 56 58 79 51 57 80 82 82 55 59 61 74 67 91 Poss 76 77 90 84 97 98 94 93 87 88 64 64 96 98 67 62 99 97 PronType 72 71 41 38 46 2 82 74 42 0 76 71 81 81 38 50 81 79 43 73 0 3 Reflex 0 85 0 0 62 0 0 61 0 0 60 60 0 97 0 0 Tense 60 63 68 70 81 85 69 81 67 77 75 75 74 74 66 66 64 72 62 74 65 86 VerbForm 43 49 59 78 75 79 79 81 64 65 82 81 78 78 56 66 79 72 59 74 73 86 Voice 9 75 9 6 89 10 76 55 15 90 Table 5: Cross-lingual tagging results (F1 scores) per language and per attribute (not showing POS and a small number of attribute types that only appear with 1 or 2 language pairs), for Wsabie projected with type constraints. English and best source language. from Danish being higher. Faruqui et al. (2016) show that features from induced morpho-syntactic lexicons can also improve dependency parsing accuracy. 
However, their method relies on having a seed lexicon of 1000 annotated word types, while our method does not require any morphological annotations in the target language. 6 Future Work A big challenge in cross-lingual morphology is that of relatedness between source and target languages. Although we evaluate our models on multiple source-target language pairs, more work is required to investigate strategies for choosing which source language to use for a low-resource target language. A related direction is to constructing models from multiple source languages, as our results show that the overall best-performing source language for a given target language may not always have the best performance on all attributes. Another direction is to make use of dictionaries such as Wiktionary to obtain type constraints, similar to previous work on weakly-supervised POS tagging (Li et al., 2012; T¨ackstr¨om et al., 2013). Sylak-Glassman et al. (2015b) and SylakGlassman et al. (2015a) proposed a morphological schema and method to extract annotations in that schema from Wiktionary. Although different from the schema used in this paper, their method can be used to extract type dictionaries for morphological tags that can be used to complement constraints extracted from parallel data. Finally, greater use can be made of syntactic information: There is a close relation between the syntactic structure expressed in dependency parses and inflections in morphologically rich languages; by including this syntactic structure in our models we can induce morphological tags, e.g. related to case, that is also expressed in dependency parses. 7 Conclusion In this paper we proposed a method that can successfully induce morphological taggers for resource-scarce languages using tags projected across bitext. It relies on access to a morphological tagger for a source-language and a moderate amount of bitext. The method obtains strong performance on a range of language pairs. We showed that downstream tasks such as dependency parsing can be improved by using the predictions from the tagger as features. Our results provide a strong baseline for future work in weaklysupervised morphological tagging. Acknowledgments This research was primarily performed while the first author was an intern at Google Inc. We thank Oscar T¨ackstr¨om, Kuzman Ganchev, Bernd Bohnet and Ryan McDonald for valuable assistance and discussions about this work. References Malin Ahlberg, Markus Forsberg, and Mans Hulden. 2015. Paradigm classification in supervised learning of morphology. In Proceedings of NAACL, pages 1024–1029. 1962 Taylor Berg-Kirkpatrick, Alexandre Bouchard-Cˆot´e, John DeNero, and Dan Klein. 2010. Painless unsupervised learning with features. In Proceedings of NAACL, pages 582–590. Lauent Besacier, Ettiene Barnard, Alexey Karpov, and Tanja Schultz. 2014. Automatic speech recognition for under-resourced languages: A survey. Speech Communication, 56:85–100. Mathias Creutz and Krista Lagus. 2007. Unsupervised models for morpheme segmentation and morphology learning. ACM Transactions on Speech and Language Processing, 4(1). Dipanjan Das and Slav Petrov. 2011. Unsupervised part-of-speech tagging with bilingual graph-based projections. In Proceedings of ACL, pages 600–609. Marie-Catherine de Marneffe, Timothy Dozat, Natalia Silveira, Katri Haverinen, Filip Ginter, Joakim Nivre, and Christopher D. Manning. 2014. Universal dependencies: A cross-linguistic typology. In Proceedings of LREC. 
Long Duong, Trevor Cohn, Karin Verspoor, Steven Bird, and Paul Cook. 2014. What can we get from 1000 tokens? A case study of multilingual pos tagging for resource-poor languages. In Proceedings of EMNLP, pages 886–897. Greg Durrett and John DeNero. 2013. Supervised learning of complete morphological paradigms. In Proceedings of NAACL, pages 1185–1195. Chris Dyer, Victor Chahuneau, and Noah A Smith. 2013. A simple, fast, and effective reparameterization of IBM Model 2. In Proceeding of NAACL, pages 682–686. Manaal Faruqui, Ryan McDonald, and Radu Soricut. 2016. Morpho-syntactic lexicon generation using graph-based semi-supervised learning. Transactions of the Association for Computational Linguistics, 4:1–16. Victoria Fossum and Steven Abney. 2005. Automatically inducing a part-of-speech tagger by projecting from multiple source languages across aligned corpora. In Proceedings of IJCNLP, pages 862–873. Kuzman Ganchev and Dipanjan Das. 2013. Crosslingual discriminative learning of sequence models with posterior regularization. In Proceedings of EMNLP, pages 1996–2006. Jan Hajiˇc, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, Maria Ant`onia Mart´ı, Llu´ıs M`arquez, Adam Meyers, Joakim Nivre, Sebastian Pad´o, Jan ˇStˇep´anek, et al. 2009. The CoNLL-2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of CoNLL: Shared Task, pages 1–18. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit, volume 5, pages 79–86. Shen Li, Jo˜ao Grac¸a, and Ben Taskar. 2012. Wiki-ly supervised part-of-speech tagging. In Proceedings of EMNLP-CoNLL, pages 1389–1398, July. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Thomas M¨uller and Hinrich Schuetze. 2015. Robust morphological tagging with word representations. In Proceedings of NAACL, pages 526–536, Denver, Colorado, May–June. Joakim Nivre, ˇZeljko Agi´c, Maria Jesus Aranzabe, Masayuki Asahara, Aitziber Atutxa, Miguel Ballesteros, John Bauer, Kepa Bengoetxea, Riyaz Ahmad Bhat, Cristina Bosco, Sam Bowman, Giuseppe G. A. Celano, Miriam Connor, Marie-Catherine de Marneffe, Arantza Diaz de Ilarraza, Kaja Dobrovoljc, Timothy Dozat, Tomaˇz Erjavec, Rich´ard Farkas, Jennifer Foster, Daniel Galbraith, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Yoav Goldberg, Berta Gonzales, Bruno Guillaume, Jan Hajiˇc, Dag Haug, Radu Ion, Elena Irimia, Anders Johannsen, Hiroshi Kanayama, Jenna Kanerva, Simon Krek, Veronika Laippala, Alessandro Lenci, Nikola Ljubeˇsi´c, Teresa Lynn, Christopher Manning, Cˇatˇalina Mˇarˇanduc, David Mareˇcek, H´ector Mart´ınez Alonso, Jan Maˇsek, Yuji Matsumoto, Ryan McDonald, Anna Missil¨a, Verginica Mititelu, Yusuke Miyao, Simonetta Montemagni, Shunsuke Mori, Hanna Nurmi, Petya Osenova, Lilja Øvrelid, Elena Pascual, Marco Passarotti, CenelAugusto Perez, Slav Petrov, Jussi Piitulainen, Barbara Plank, Martin Popel, Prokopis Prokopidis, Sampo Pyysalo, Loganathan Ramasamy, Rudolf Rosa, Shadi Saleh, Sebastian Schuster, Wolfgang Seeker, Mojgan Seraji, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simk´o, Kiril Simov, Aaron Smith, Jan ˇStˇep´anek, Alane Suhr, Zsolt Sz´ant´o, Takaaki Tanaka, Reut Tsarfaty, Sumire Uematsu, Larraitz Uria, Viktor Varga, Veronika Vincze, Zdenˇek ˇZabokrtsk´y, Daniel Zeman, and Hanzhi Zhu. 2015. Universal dependencies 1.2. 
LINDAT/CLARIN digital library at Institute of Formal and Applied Linguistics, Charles University in Prague. Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A universal part-of-speech tagset. In Proceedings of LREC. Benjamin Snyder and Regina Barzilay. 2008. Unsupervised multilingual learning for morphological segmentation. In Proceedings of ACL, pages 737– 745. John Sylak-Glassman, Christo Kirov, Matt Post, Roger Que, and David Yarowsky. 2015a. A universal feature schema for rich morphological annotation and fine-grained cross-lingual part-of-speech tagging. In 1963 Proceedings of Systems and Frameworks for Computational Morphology: Fourth International Workshop, pages 72–93. Springer International Publishing, Cham. John Sylak-Glassman, Christo Kirov, David Yarowsky, and Roger Que. 2015b. A language-independent feature schema for inflectional morphology. In Proceedings of ACL-IJCNLP (short papers), pages 674– 680. Oscar T¨ackstr¨om, Ryan McDonald, and Jakob Uszkoreit. 2012. Cross-lingual word clusters for direct transfer of linguistic structure. In Proceedings of NAACL, pages 477–487. Oscar T¨ackstr¨om, Dipanjan Das, Slav Petrov, Ryan McDonald, and Joakim Nivre. 2013. Token and type constraints for cross-lingual part-of-speech tagging. Transactions of the Association for Computational Linguistics, 1:1–12. Kristina Toutanova, Hisami Suzuki, and Achim Ruopp. 2008. Applying morphology generation models to machine translation. In Proceedings of ACL-HLT, pages 558–566. Reut Tsarfaty, Djam´e Seddah, Yoav Goldberg, Sandra K¨ubler, Marie Candito, Jennifer Foster, Yannick Versley, Ines Rehbein, and Lamia Tounsi. 2010. Statistical parsing of morphologically rich languages (SPMRL): what, how and whither. In Proceedings of the NAACL HLT 2010 First Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 1–12. Jakob Uszkoreit and Thorsten Brants. 2008. Distributed word clustering for large scale class-based language modeling in machine translation. In Proceedings of ACL-HLT, pages 755–762. Jason Weston, Samy Bengio, and Nicolas Usunier. 2011. Wsabie: Scaling up to large vocabulary image annotation. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI). Guillaume Wisniewski, Nicolas Pcheux, Souhir Gahbiche-Braham, and Franois Yvon. 2014. Crosslingual part-of-speech tagging through ambiguous learning. In Proceedings of EMNLP, pages 1779– 1785. David Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Incuding multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of HLT. Daniel Zeman. 2008. Reusable tagset conversion using tagset drivers. In Proceedings of LREC. Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of ACL-HLT, pages 188–193. 1964
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1965–1974, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Semi-Supervised Learning for Neural Machine Translation Yong Cheng#, Wei Xu#, Zhongjun He+, Wei He+, Hua Wu+, Maosong Sun† and Yang Liu† ∗ #Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China †State Key Laboratory of Intelligent Technology and Systems Tsinghua National Laboratory for Information Science and Technology Department of Computer Science and Technology, Tsinghua University, Beijing, China +Baidu Inc., Beijing, China [email protected] [email protected] {hezhongjun,hewei06,wu hua}@baidu.com {sms,liuyang2011}@tsinghua.edu.cn Abstract While end-to-end neural machine translation (NMT) has made remarkable progress recently, NMT systems only rely on parallel corpora for parameter estimation. Since parallel corpora are usually limited in quantity, quality, and coverage, especially for low-resource languages, it is appealing to exploit monolingual corpora to improve NMT. We propose a semisupervised approach for training NMT models on the concatenation of labeled (parallel corpora) and unlabeled (monolingual corpora) data. The central idea is to reconstruct the monolingual corpora using an autoencoder, in which the sourceto-target and target-to-source translation models serve as the encoder and decoder, respectively. Our approach can not only exploit the monolingual corpora of the target language, but also of the source language. Experiments on the ChineseEnglish dataset show that our approach achieves significant improvements over state-of-the-art SMT and NMT systems. 1 Introduction End-to-end neural machine translation (NMT), which leverages a single, large neural network to directly transform a source-language sentence into a target-language sentence, has attracted increasing attention in recent several years (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015). Free of latent structure design and feature engineering that are critical in conventional statistical machine translation (SMT) (Brown et al., 1993; Koehn et al., 2003; Chiang, 2005), NMT has proven to excel in model∗Yang Liu is the corresponding author. ing long-distance dependencies by enhancing recurrent neural networks (RNNs) with the gating (Hochreiter and Schmidhuber, 1993; Cho et al., 2014; Sutskever et al., 2014) and attention mechanisms (Bahdanau et al., 2015). However, most existing NMT approaches suffer from a major drawback: they heavily rely on parallel corpora for training translation models. This is because NMT directly models the probability of a target-language sentence given a source-language sentence and does not have a separate language model like SMT (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015). Unfortunately, parallel corpora are usually only available for a handful of researchrich languages and restricted to limited domains such as government documents and news reports. In contrast, SMT is capable of exploiting abundant target-side monolingual corpora to boost fluency of translations. Therefore, the unavailability of large-scale, high-quality, and wide-coverage parallel corpora hinders the applicability of NMT. As a result, several authors have tried to use abundant monolingual corpora to improve NMT. Gulccehre et al. 
(2015) propose two methods, which are referred to as shallow fusion and deep fusion, to integrate a language model into NMT. The basic idea is to use the language model to score the candidate words proposed by the translation model at each time step or concatenating the hidden states of the language model and the decoder. Although their approach leads to significant improvements, one possible downside is that the network architecture has to be modified to integrate the language model. Alternatively, Sennrich et al. (2015) propose two approaches to exploiting monolingual corpora that is transparent to network architectures. The first approach pairs monolingual sentences with dummy input. Then, the parameters of encoder 1965 bushi yu shalong juxing le huitan bushi yu shalong juxing le huitan Bush held a talk with Sharon Bush held a talk with Sharon Bush held a talk with Sharon bushi yu shalong juxing le huitan encoder decoder encoder decoder (a) (b) Figure 1: Examples of (a) source autoencoder and (b) target autoencoder on monolingual corpora. Our idea is to leverage autoencoders to exploit monolingual corpora for NMT. In a source autoencoder, the source-to-target model P(y|x; −→θ ) serves as an encoder to transform the observed source sentence x into a latent target sentence y (highlighted in grey), from which the target-to-source model P(x′|y; ←−θ ) reconstructs a copy of the observed source sentence x′ from the latent target sentence. As a result, monolingual corpora can be combined with parallel corpora to train bidirectional NMT models in a semi-supervised setting. and attention model are fixed when training on these pseudo parallel sentence pairs. In the second approach, they first train a nerual translation model on the parallel corpus and then use the learned model to translate a monolingual corpus. The monolingual corpus and its translations constitute an additional pseudo parallel corpus. Similar ideas have also been suggested in conventional SMT (Ueffing et al., 2007; Bertoldi and Federico, 2009). Sennrich et al. (2015) report that their approach significantly improves translation quality across a variety of language pairs. In this paper, we propose semi-supervised learning for neural machine translation. Given labeled (i.e., parallel corpora) and unlabeled (i.e., monolingual corpora) data, our approach jointly trains source-to-target and target-to-source translation models. The key idea is to append a reconstruction term to the training objective, which aims to reconstruct the observed monolingual corpora using an autoencoder. In the autoencoder, the source-to-target and target-to-source models serve as the encoder and decoder, respectively. As the inference is intractable, we propose to sample the full search space to improve the efficiency. Specifically, our approach has the following advantages: 1. Transparent to network architectures: our approach does not depend on specific architectures and can be easily applied to arbitrary end-to-end NMT systems. 2. Both the source and target monolingual corpora can be used: our approach can benefit NMT not only using target monolingual corpora in a conventional way, but also the monolingual corpora of the source language. Experiments on Chinese-English NIST datasets show that our approach results in significant improvements in both directions over state-of-the-art SMT and NMT systems. 
2 Semi-Supervised Learning for Neural Machine Translation

2.1 Supervised Learning

Given a parallel corpus $D = \{\langle x^{(n)}, y^{(n)}\rangle\}_{n=1}^{N}$, the standard training objective in NMT is to maximize the likelihood of the training data:

$$L(\theta) = \sum_{n=1}^{N} \log P(y^{(n)}|x^{(n)}; \theta), \quad (1)$$

where $P(y|x; \theta)$ is a neural translation model and $\theta$ is a set of model parameters. $D$ can be seen as labeled data for the task of predicting a target sentence $y$ given a source sentence $x$. As $P(y|x; \theta)$ is modeled by a single, large neural network, there does not exist a separate target language model $P(y; \theta)$ in NMT. Therefore, parallel corpora have been the only resource for parameter estimation in most existing NMT systems. Unfortunately, even for a handful of resource-rich languages, the available domains are unbalanced and restricted to government documents and news reports. Therefore, the availability of large-scale, high-quality, and wide-coverage parallel corpora becomes a major obstacle for NMT.

2.2 Autoencoders on Monolingual Corpora

It is appealing to explore the more readily available, abundant monolingual corpora to improve NMT. Let us first consider an unsupervised setting: how to train NMT models on a monolingual corpus $T = \{y^{(t)}\}_{t=1}^{T}$? Our idea is to leverage autoencoders (Vincent et al., 2010; Socher et al., 2011): (1) encoding an observed target sentence into a latent source sentence using a target-to-source translation model and (2) decoding the source sentence to reconstruct the observed target sentence using a source-to-target model. For example, as shown in Figure 1(b), given an observed English sentence “Bush held a talk with Sharon”, a target-to-source translation model (i.e., encoder) transforms it into a Chinese translation “bushi yu shalong juxing le huitan” that is unobserved on the training data (highlighted in grey). Then, a source-to-target translation model (i.e., decoder) reconstructs the observed English sentence from the Chinese translation.

More formally, let $P(y|x; \overrightarrow{\theta})$ and $P(x|y; \overleftarrow{\theta})$ be source-to-target and target-to-source translation models respectively, where $\overrightarrow{\theta}$ and $\overleftarrow{\theta}$ are corresponding model parameters. An autoencoder aims to reconstruct the observed target sentence via a latent source sentence:

$$P(y'|y; \overrightarrow{\theta}, \overleftarrow{\theta}) = \sum_{x} P(y', x|y; \overrightarrow{\theta}, \overleftarrow{\theta}) = \sum_{x} \underbrace{P(x|y; \overleftarrow{\theta})}_{\text{encoder}} \, \underbrace{P(y'|x; \overrightarrow{\theta})}_{\text{decoder}}, \quad (2)$$

where $y$ is an observed target sentence, $y'$ is a copy of $y$ to be reconstructed, and $x$ is a latent source sentence. We refer to Eq. (2) as a target autoencoder.¹

¹Our definition of autoencoders is inspired by Ammar et al. (2014). Note that our autoencoders inherit the same spirit from conventional autoencoders (Vincent et al., 2010; Socher et al., 2011) except that the hidden layer is denoted by a latent sentence instead of real-valued vectors.

Likewise, given a monolingual corpus of source language $S = \{x^{(s)}\}_{s=1}^{S}$, it is natural to introduce a source autoencoder that aims at reconstructing the observed source sentence via a latent target sentence:

$$P(x'|x; \overrightarrow{\theta}, \overleftarrow{\theta}) = \sum_{y} P(x', y|x; \overrightarrow{\theta}, \overleftarrow{\theta}) = \sum_{y} \underbrace{P(y|x; \overrightarrow{\theta})}_{\text{encoder}} \, \underbrace{P(x'|y; \overleftarrow{\theta})}_{\text{decoder}}. \quad (3)$$

Please see Figure 1(a) for illustration.
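As a concrete reading of Eq. (2), here is a hedged sketch of how the target-autoencoder reconstruction probability could be scored once the sum over latent source sentences is restricted to a k-best list (the approximation the paper introduces in Section 2.4); the `kbest_with_scores` and `log_prob` interfaces are hypothetical stand-ins for whatever NMT toolkit implements the two translation models.

```python
import math

def target_autoencoder_logprob(y, model_st, model_ts, k=10):
    """Approximate log P(y'|y; theta_st, theta_ts) from Eq. (2):
    encode y into latent source sentences with the target-to-source model,
    then score the reconstruction of y with the source-to-target model."""
    # (x, log P(x|y; theta_ts)) pairs for the k-best latent source sentences (assumed API)
    latent = model_ts.kbest_with_scores(y, k)
    # each term is log [ P(x|y; theta_ts) * P(y'|x; theta_st) ]
    terms = [log_p_x + model_st.log_prob(y, cond=x) for x, log_p_x in latent]
    # log-sum-exp over the latent sentences approximates the sum in Eq. (2)
    m = max(terms)
    return m + math.log(sum(math.exp(t - m) for t in terms))
```

The source autoencoder of Eq. (3) would be scored symmetrically, with the roles of the two models swapped.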
2.3 Semi-Supervised Learning

As the autoencoders involve both source-to-target and target-to-source models, it is natural to combine parallel corpora and monolingual corpora to learn bidirectional NMT translation models in a semi-supervised setting. Formally, given a parallel corpus $D = \{\langle x^{(n)}, y^{(n)}\rangle\}_{n=1}^{N}$, a monolingual corpus of target language $T = \{y^{(t)}\}_{t=1}^{T}$, and a monolingual corpus of source language $S = \{x^{(s)}\}_{s=1}^{S}$, we introduce our new semi-supervised training objective as follows:

$$J(\overrightarrow{\theta}, \overleftarrow{\theta}) = \underbrace{\sum_{n=1}^{N} \log P(y^{(n)}|x^{(n)}; \overrightarrow{\theta})}_{\text{source-to-target likelihood}} + \underbrace{\sum_{n=1}^{N} \log P(x^{(n)}|y^{(n)}; \overleftarrow{\theta})}_{\text{target-to-source likelihood}} + \lambda_1 \underbrace{\sum_{t=1}^{T} \log P(y'|y^{(t)}; \overrightarrow{\theta}, \overleftarrow{\theta})}_{\text{target autoencoder}} + \lambda_2 \underbrace{\sum_{s=1}^{S} \log P(x'|x^{(s)}; \overrightarrow{\theta}, \overleftarrow{\theta})}_{\text{source autoencoder}}, \quad (4)$$

where $\lambda_1$ and $\lambda_2$ are hyper-parameters for balancing the preference between likelihood and autoencoders. Note that the objective consists of four parts: source-to-target likelihood, target-to-source likelihood, target autoencoder, and source autoencoder. In this way, our approach is capable of exploiting abundant monolingual corpora of both source and target languages.

The optimal model parameters are given by

$$\overrightarrow{\theta}^{\,*} = \underset{\overrightarrow{\theta}}{\operatorname{argmax}} \Bigg\{ \sum_{n=1}^{N} \log P(y^{(n)}|x^{(n)}; \overrightarrow{\theta}) + \lambda_1 \sum_{t=1}^{T} \log P(y'|y^{(t)}; \overrightarrow{\theta}, \overleftarrow{\theta}) + \lambda_2 \sum_{s=1}^{S} \log P(x'|x^{(s)}; \overrightarrow{\theta}, \overleftarrow{\theta}) \Bigg\} \quad (5)$$

$$\overleftarrow{\theta}^{\,*} = \underset{\overleftarrow{\theta}}{\operatorname{argmax}} \Bigg\{ \sum_{n=1}^{N} \log P(x^{(n)}|y^{(n)}; \overleftarrow{\theta}) + \lambda_1 \sum_{t=1}^{T} \log P(y'|y^{(t)}; \overrightarrow{\theta}, \overleftarrow{\theta}) + \lambda_2 \sum_{s=1}^{S} \log P(x'|x^{(s)}; \overrightarrow{\theta}, \overleftarrow{\theta}) \Bigg\} \quad (6)$$

It is clear that the source-to-target and target-to-source models are connected via the autoencoder and can hopefully benefit each other in joint training.

2.4 Training

We use mini-batch stochastic gradient descent to train our joint model. For each iteration, besides the mini-batch from the parallel corpus, we also construct two additional mini-batches by randomly selecting sentences from the source and target monolingual corpora. Then, gradients are collected from these mini-batches to update model parameters.

The partial derivative of $J(\overrightarrow{\theta}, \overleftarrow{\theta})$ with respect to the source-to-target model $\overrightarrow{\theta}$ is given by

$$\frac{\partial J(\overrightarrow{\theta}, \overleftarrow{\theta})}{\partial \overrightarrow{\theta}} = \sum_{n=1}^{N} \frac{\partial \log P(y^{(n)}|x^{(n)}; \overrightarrow{\theta})}{\partial \overrightarrow{\theta}} + \lambda_1 \sum_{t=1}^{T} \frac{\partial \log P(y'|y^{(t)}; \overrightarrow{\theta}, \overleftarrow{\theta})}{\partial \overrightarrow{\theta}} + \lambda_2 \sum_{s=1}^{S} \frac{\partial \log P(x'|x^{(s)}; \overrightarrow{\theta}, \overleftarrow{\theta})}{\partial \overrightarrow{\theta}}. \quad (7)$$

The partial derivative with respect to $\overleftarrow{\theta}$ can be calculated similarly. Unfortunately, the second and third terms in Eq. (7) are intractable to calculate due to the exponential search space. For example, the derivative of the target autoencoder term in Eq. (7) is given by

$$\frac{\sum_{x \in \mathcal{X}(y)} P(x|y; \overleftarrow{\theta}) \, P(y'|x; \overrightarrow{\theta}) \, \frac{\partial \log P(y'|x; \overrightarrow{\theta})}{\partial \overrightarrow{\theta}}}{\sum_{x \in \mathcal{X}(y)} P(x|y; \overleftarrow{\theta}) \, P(y'|x; \overrightarrow{\theta})}. \quad (8)$$

It is prohibitively expensive to compute the sums due to the exponential search space of $\mathcal{X}(y)$. Alternatively, we propose to use a subset of the full space $\tilde{\mathcal{X}}(y) \subset \mathcal{X}(y)$ to approximate Eq. (8):

$$\frac{\sum_{x \in \tilde{\mathcal{X}}(y)} P(x|y; \overleftarrow{\theta}) \, P(y'|x; \overrightarrow{\theta}) \, \frac{\partial \log P(y'|x; \overrightarrow{\theta})}{\partial \overrightarrow{\theta}}}{\sum_{x \in \tilde{\mathcal{X}}(y)} P(x|y; \overleftarrow{\theta}) \, P(y'|x; \overrightarrow{\theta})}. \quad (9)$$

In practice, we use the top-k list of candidate translations of $y$ as $\tilde{\mathcal{X}}(y)$. As $|\tilde{\mathcal{X}}(y)| \ll |\mathcal{X}(y)|$, it is possible to calculate Eq. (9) efficiently by enumerating all candidates in $\tilde{\mathcal{X}}(y)$. In practice, we find this approximation results in significant improvements and k = 10 seems to suffice to keep the balance between efficiency and translation quality.
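Following the same assumed model interface as the sketch after Section 2.2, here is a minimal illustration (not the authors' code) of the top-k weighted gradient of Eq. (9) for the target autoencoder; the `kbest`, `log_prob`, and `grad_log_prob` methods, and the use of NumPy arrays for gradients, are assumptions standing in for a concrete NMT implementation.

```python
import numpy as np

def target_autoencoder_grad(y, model_st, model_ts, k=10):
    """Approximate d log P(y'|y) / d theta_st as in Eq. (9):
    a normalised, weighted sum of gradients over the top-k latent
    source sentences proposed by the target-to-source model."""
    candidates = model_ts.kbest(y, k)                    # latent source sentences x (assumed API)
    log_weights, grads = [], []
    for x in candidates:
        # unnormalised weight = P(x|y; theta_ts) * P(y'|x; theta_st), kept in log space
        log_weights.append(model_ts.log_prob(x, cond=y) + model_st.log_prob(y, cond=x))
        grads.append(model_st.grad_log_prob(y, cond=x))  # d log P(y'|x) / d theta_st
    # normalise the weights; subtracting the max is the usual numerical-stability trick
    w = np.exp(np.array(log_weights) - max(log_weights))
    w /= w.sum()
    return sum(wi * gi for wi, gi in zip(w, grads))
```

In a full update step this term, scaled by λ1, would be added to the likelihood gradients from the parallel mini-batch and to the analogous source-autoencoder term scaled by λ2, matching Eq. (7) and the mini-batch scheme described above.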
The vocabulary sizes of Chinese and English are 0.21M and 0.16M, respectively. We use the Chinese and English parts of the Xinhua portion of the GIGAWORD corpus as the monolingual corpora. The Chinese monolingual corpus contains 18.75M sentences with 451.94M words. The English corpus contains 22.32M sentences with 399.83M words. The vocabulary sizes of Chinese and English are 0.97M and 1.34M, respectively.
[Figure 2: Effect of sample size k on the Chinese-to-English validation set (BLEU over training iterations; curves for k = 1, 5, 10, 15).]
[Figure 3: Effect of sample size k on the English-to-Chinese validation set (BLEU over training iterations; curves for k = 1, 5, 10, 15).]
For Chinese-to-English translation, we use the NIST 2006 Chinese-English dataset as the validation set for hyper-parameter optimization and model selection. The NIST 2002, 2003, 2004, and 2005 datasets serve as test sets. Each Chinese sentence has four reference translations. For English-to-Chinese translation, we use the NIST datasets in a reverse direction: treating the first English sentence in the four reference translations as a source sentence and the original input Chinese sentence as the single reference translation. The evaluation metric is case-insensitive BLEU (Papineni et al., 2002) as calculated by the multi-bleu.perl script. We compared our approach with two state-of-the-art SMT and NMT systems: 1. MOSES (Koehn et al., 2007): a phrase-based SMT system; 2. RNNSEARCH (Bahdanau et al., 2015): an attention-based NMT system.
[Figure 4: Effect of OOV ratio on the Chinese-to-English validation set (BLEU over training iterations; curves for 0%, 10%, 20%, and 30% OOV).]
[Figure 5: Effect of OOV ratio on the English-to-Chinese validation set (BLEU over training iterations; curves for 0%, 10%, 20%, and 30% OOV).]
For MOSES, we use the default setting to train the phrase-based translation on the parallel corpus and optimize the parameters of log-linear models using the minimum error rate training algorithm (Och, 2003). We use the SRILM toolkit (Stolcke, 2002) to train 4-gram language models. For RNNSEARCH, we use the parallel corpus to train the attention-based neural translation models. We set the vocabulary size of word embeddings to 30K for both Chinese and English. We follow Luong et al. (2015) to address rare words. On top of RNNSEARCH, our approach is capable of training bidirectional attention-based neural translation models on the concatenation of parallel and monolingual corpora. The sample size k is set to 10. We set the hyper-parameter λ1 = 0.1 and λ2 = 0 when we add the target monolingual corpus, and λ1 = 0 and λ2 = 0.1 for source monolingual corpus incorporation. The threshold of gradient clipping is set to 0.05. The parameters of our model are initialized by the model trained on parallel corpus. 3.2 Effect of Sample Size k As the inference of our approach is intractable, we propose to approximate the full search space with the top-k list of candidate translations to improve efficiency (see Eq. (9)). Figure 2 shows the BLEU scores of various settings of k over time. Only the English monolingual corpus is appended to the training data. We observe that increasing the size of the approximate search space generally leads to improved BLEU scores. There are significant gaps between k = 1 and k = 5.
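For reference, the importance weights behind the top-k approximation of Eq. (9) can be computed directly from the two models' probabilities; the gradient is then a weighted average of the per-candidate gradients of log P(y'|x; →θ). A small sketch of the weight computation only, with hypothetical probability functions standing in for the two translation models:

```python
# Sketch of the normalized weights used in the top-k approximation of Eq. (9).
# topk_sources plays the role of the candidate set X~(y); p_tgt2src and
# p_src2tgt are hypothetical stand-ins for the two translation models.

def candidate_weights(y, topk_sources, p_tgt2src, p_src2tgt):
    scores = [p_tgt2src(x, y) * p_src2tgt(y, x) for x in topk_sources]
    z = sum(scores)
    return [s / z for s in scores]  # weight of each candidate's gradient
```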
However, keeping increasing k does not result in significant improvements and decreases the training efficiency. We find that k = 10 achieves a balance between training efficiency and translation quality. As shown in Figure 3, similar findings are also observed on the English-to-Chinese validation set. Therefore, we set k = 10 in the following experiments. 3.3 Effect of OOV Ratio Given a parallel corpus, what kind of monolingual corpus is most beneficial for improving translation quality? To answer this question, we investigate the effect of OOV ratio on translation quality, which is defined as ratio = P y∈yJy /∈VDtK |y| , (10) where y is a target-language sentence in the monolingual corpus T , y is a target-language word in y, VDt is the vocabulary of the target side of the parallel corpus D. Intuitively, the OOV ratio indicates how a sentence in the monolingual resembles the parallel corpus. If the ratio is 0, all words in the monolingual sentence also occur in the parallel corpus. Figure 4 shows the effect of OOV ratio on the Chinese-to-English validation set. Only English monolingual corpus is appended to the parallel corpus during training. We constructed four monolingual corpora of the same size in terms of sentence pairs. “0% OOV” means the OOV ratio is 0% for all sentences in the monolingual corpus. “10% OOV” suggests that the OOV ratio is no greater 10% for each sentence in the monolingual corpus. We find that using a monolingual corpus with a lower OOV ratio generally leads to higher BLEU scores. One possible reason is that low-OOV monolingual corpus is relatively easier to reconstruct than its high-OOV counterpart and results in better estimation of model parameters. Figure 5 shows the effect of OOV ratio on the English-to-Chinese validation set. Only English monolingual corpus is appended to the parallel corpus during training. We find that “0% OOV” still achieves the highest BLEU scores. 3.4 Comparison with SMT Table 2 shows the comparison between MOSES and our work. MOSES used the monolingual corpora as shown in Table 1: 18.75M Chinese sentences and 22.32M English sentences. We find that exploiting monolingual corpora dramatically improves translation performance in both Chinese-to-English and English-to-Chinese directions. Relying only on parallel corpus, RNNSEARCH outperforms MOSES trained also only on parallel corpus. But the capability of making use of abundant monolingual corpora enables MOSES to achieve much higher BLEU scores than RNNSEARCH only using parallel corpus. Instead of using all sentences in the monolingual corpora, we constructed smaller monolingual corpora with zero OOV ratio: 2.56M Chinese sentences with 47.51M words and 2.56M English English sentences with 37.47M words. In other words, the monolingual corpora we used in the experiments are much smaller than those used by MOSES. By adding English monolingual corpus, our approach achieves substantial improvements over RNNSEARCH using only parallel corpus (up to +4.7 BLEU points). In addition, significant improvements are also obtained over MOSES using both parallel and monolingual corpora (up to +3.5 BLEU points). An interesting finding is that adding English monolingual corpora helps to improve English-toChinese translation over RNNSEARCH using only parallel corpus (up to +3.2 BLEU points), suggesting that our approach is capable of improving NMT using source-side monolingual corpora. In the English-to-Chinese direction, we obtain similar findings. 
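The OOV ratio of Eq. (10) and the construction of the low-OOV monolingual subsets used above are straightforward to express in code. The sketch below shows one possible implementation; the data structures (token lists and a vocabulary set) are assumptions for illustration.

```python
# Sketch: OOV ratio of Eq. (10) and filtering a monolingual corpus by it.
# Each sentence is a list of tokens; parallel_vocab is the vocabulary of the
# relevant side of the parallel corpus.

def oov_ratio(sentence, parallel_vocab):
    if not sentence:
        return 0.0
    oov = sum(1 for w in sentence if w not in parallel_vocab)
    return oov / len(sentence)

def low_oov_subset(monolingual, parallel_vocab, max_ratio=0.0):
    """E.g. max_ratio=0.0 reproduces a '0% OOV'-style subset."""
    return [s for s in monolingual if oov_ratio(s, parallel_vocab) <= max_ratio]
```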
In particular, adding Chi1970 System Training Data Direction NIST06 NIST02 NIST03 NIST04 NIST05 CE C E MOSES √ × × C →E 32.48 32.69 32.39 33.62 30.23 E →C 14.27 18.28 15.36 13.96 14.11 √ × √ C →E 34.59 35.21 35.71 35.56 33.74 √ √ × E →C 20.69 25.85 19.76 18.77 19.74 RNNSEARCH √ × × C →E 30.74 35.16 33.75 34.63 31.74 E →C 15.71 20.76 16.56 16.85 15.14 √ × √ C →E 35.61∗∗++ 38.78∗∗++ 38.32∗∗++ 38.49∗∗++ 36.45∗∗++ E →C 17.59++ 23.99 ++ 18.95++ 18.85++ 17.91++ √ √ × C →E 35.01++ 38.20∗∗++ 37.99∗∗++ 38.16∗∗++ 36.07∗∗++ E →C 21.12∗++ 29.52∗∗++ 20.49∗∗++ 21.59∗∗++ 19.97++ Table 2: Comparison with MOSES and RNNSEARCH. MOSES is a phrase-based statistical machine translation system (Koehn et al., 2007). RNNSEARCH is an attention-based neural machine translation system (Bahdanau et al., 2015). “CE” donates Chinese-English parallel corpus, “C” donates Chinese monolingual corpus, and “E” donates English monolingual corpus. “√” means the corpus is included in the training data and × means not included. “NIST06” is the validation set and “NIST02-05” are test sets. The BLEU scores are case-insensitive. “*”: significantly better than MOSES (p < 0.05); “**”: significantly better than MOSES (p < 0.01);“+”: significantly better than RNNSEARCH (p < 0.05); “++”: significantly better than RNNSEARCH (p < 0.01). Method Training Data Direction NIST06 NIST02 NIST03 NIST04 NIST05 CE C E Sennrich et al. (2015) √ × √ C →E 34.10 36.95 36.80 37.99 35.33 √ √ × E →C 19.85 28.83 20.61 20.54 19.17 this work √ × √ C →E 35.61∗∗ 38.78∗∗ 38.32∗∗ 38.49∗ 36.45∗∗ E →C 17.59 23.99 18.95 18.85 17.91 √ √ × C →E 35.01∗∗ 38.20∗∗ 37.99∗∗ 38.16 36.07∗∗ E →C 21.12∗∗ 29.52∗∗ 20.49 21.59∗∗ 19.97∗∗ Table 3: Comparison with Sennrich et al. (2015). Both Sennrich et al. (2015) and our approach build on top of RNNSEARCH to exploit monolingual corpora. The BLEU scores are case-insensitive. “*”: significantly better than Sennrich et al. (2015) (p < 0.05); “**”: significantly better than Sennrich et al. (2015) (p < 0.01). nese monolingual corpus leads to more benefits to English-to-Chinese translation than adding English monolingual corpus. We also tried to use both Chinese and English monolingual corpora through simply setting all the λ to 0.1 but failed to obtain further significant improvements. Therefore, our findings can be summarized as follows: 1. Adding target monolingual corpus improves over using only parallel corpus for source-totarget translation; 2. Adding source monolingual corpus also improves over using only parallel corpus for source-to-target translation, but the improvements are smaller than adding target monolingual corpus; 3. Adding both source and target monolingual corpora does not lead to further significant improvements. 3.5 Comparison with Previous Work We re-implemented Sennrich et al. (2015)’s method on top of RNNSEARCH as follows: 1. Train the target-to-source neural translation model P(x|y; ←−θ ) on the parallel corpus D = {⟨x(n), y(n)⟩}N n=1. 2. The trained target-to-source model ←−θ ∗is used to translate a target monolingual corpus T = {y(t)}T t=1 into a source monolingual corpus ˜S = {˜x(t)}T t=1. 3. The target monolingual corpus is paired with its translations to form a pseudo parallel corpus, which is then appended to the original parallel corpus to obtain a larger parallel corpus: ˜D = D ∪⟨˜S, T ⟩. 4. Re-train the the source-to-target neural translation model on ˜D to obtain the final model parameters −→θ ∗. 
1971 Monolingual hongsen shuo , ruguo you na jia famu gongsi dangan yishenshifa , name tamen jiang zihui qiancheng . Reference hongsen said, if any logging companies dare to defy the law, then they will destroy their own future . Translation hun sen said , if any of those companies dare defy the law , then they will have their own fate . [iteration 0] hun sen said if any tree felling company dared to break the law , then they would kill themselves . [iteration 40K] hun sen said if any logging companies dare to defy the law , they would destroy the future themselves . [iteration 240K] Monolingual dan yidan panjue jieguo zuizhong queding , ze bixu zai 30 tian nei zhixing . Reference But once the final verdict is confirmed , it must be executed within 30 days . Translation however , in the final analysis , it must be carried out within 30 days . [iteration 0] however , in the final analysis , the final decision will be carried out within 30 days . [iteration 40K] however , once the verdict is finally confirmed , it must be carried out within 30 days . [iteration 240K] Table 4: Example translations of sentences in the monolingual corpus during semi-supervised learning. We find our approach is capable of generating better translations of the monolingual corpus over time. Table 3 shows the comparison results. Both the two approaches use the same parallel and monolingual corpora. Our approach achieves significant improvements over Sennrich et al. (2015) in both Chinese-to-English and English-to-Chinese directions (up to +1.8 and +1.0 BLEU points). One possible reason is that Sennrich et al. (2015) only use the pesudo parallel corpus for parameter estimation for once (see Step 4 above) while our approach enables source-to-target and targetto-source models to interact with each other iteratively on both parallel and monolingual corpora. To some extent, our approach can be seen as an iterative extension of Sennrich et al. (2015)’s approach: after estimating model parameters on the pseudo parallel corpus, the learned model parameters are used to produce a better pseudo parallel corpus. Table 4 shows example Viterbi translations on the Chinese monolingual corpus over iterations: x∗= argmax x n P(y′|x; −→θ )P(x|y; ←−θ ) o . (11) We observe that the quality of Viterbi translations generally improves over time. 4 Related Work Our work is inspired by two lines of research: (1) exploiting monolingual corpora for machine translation and (2) autoencoders in unsupervised and semi-supervised learning. 4.1 Exploiting Monolingual Corpora for Machine Translation Exploiting monolingual corpora for conventional SMT has attracted intensive attention in recent years. Several authors have introduced transductive learning to make full use of monolingual corpora (Ueffing et al., 2007; Bertoldi and Federico, 2009). They use an existing translation model to translate unseen source text, which can be paired with its translations to form a pseudo parallel corpus. This process iterates until convergence. While Klementiev et al. (2012) propose an approach to estimating phrase translation probabilities from monolingual corpora, Zhang and Zong (2013) directly extract parallel phrases from monolingual corpora using retrieval techniques. Another important line of research is to treat translation on monolingual corpora as a decipherment problem (Ravi and Knight, 2011; Dou et al., 2014). 1972 Closely related to Gulccehre et al. (2015) and Sennrich et al. 
(2015), our approach focuses on learning birectional NMT models via autoencoders on monolingual corpora. The major advantages of our approach are the transparency to network architectures and the capability to exploit both source and target monolingual corpora. 4.2 Autoencoders in Unsupervised and Semi-Supervised Learning Autoencoders and their variants have been widely used in unsupervised deep learning ((Vincent et al., 2010; Socher et al., 2011; Ammar et al., 2014), just to name a few). Among them, Socher et al. (2011)’s approach bears close resemblance to our approach as they introduce semi-supervised recursive autoencoders for sentiment analysis. The difference is that we are interested in making a better use of parallel and monolingual corpora while they concentrate on injecting partial supervision to conventional unsupervised autoencoders. Dai and Le (2015) introduce a sequence autoencoder to reconstruct an observed sequence via RNNs. Our approach differs from sequence autoencoders in that we use bidirectional translation models as encoders and decoders to enable them to interact within the autoencoders. 5 Conclusion We have presented a semi-supervised approach to training bidirectional neural machine translation models. The central idea is to introduce autoencoders on the monolingual corpora with source-totarget and target-to-source translation models as encoders and decoders. Experiments on ChineseEnglish NIST datasets show that our approach leads to significant improvements. As our method is sensitive to the OOVs present in monolingual corpora, we plan to integrate Jean et al. (2015)’s technique on using very large vocabulary into our approach. It is also necessary to further validate the effectiveness of our approach on more language pairs and NMT architectures. Another interesting direction is to enhance the connection between source-to-target and target-tosource models (e.g., letting the two models share the same word embeddings) to help them benefit more from interacting with each other. Acknowledgements This work was done while Yong Cheng was visiting Baidu. This research is supported by the 973 Program (2014CB340501, 2014CB340505), the National Natural Science Foundation of China (No. 61522204, 61331013, 61361136003), 1000 Talent Plan grant, Tsinghua Initiative Research Program grants 20151080475 and a Google Faculty Research Award. We sincerely thank the viewers for their valuable suggestions. References Waleed Ammar, Chris Dyer, and Noah Smith. 2014. Conditional random field autoencoders for unsupervised structred prediction. In Proceedings of NIPS 2014. Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR. Nicola Bertoldi and Marcello Federico. 2009. Domain adaptation for statistical machine translation. In Proceedings of WMT. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguisitics. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL. Kyunhyun Cho, Bart van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of SSST-8. Andrew M. Dai and Quoc V. Le. 2015. Semisupervised sequence learning. In Proceedings of NIPS. Qing Dou, Ashish Vaswani, and Kevin Knight. 2014. 
Beyond parallel data: Joint word alignment and decipherment improves machine translation. In Proceedings of EMNLP. Caglar Gulccehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Lo¨ıc Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On using monolingual corpora in neural machine translation. arXiv:1503.03535 [cs.CL]. Sepp Hochreiter and J¨urgen Schmidhuber. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguisitics. 1973 Sebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of ACL. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of EMNLP. Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. 2014. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems. Alexandre Klementiev, Ann Irvine, Chris CallisonBurch, and David Yarowsky. 2012. Toward statistical machine translation without paralel corpora. In Proceedings of EACL. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of NAACL. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of ACL (demo session). Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Proceedings of ACL. Franz Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of ACL. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a methof for automatic evaluation of machine translation. In Proceedings of ACL. Sujith Ravi and Kevin Knight. 2011. Deciphering foreign language. In Proceedings of ACL. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Improving nerual machine translation models with monolingual data. arXiv:1511.06709 [cs.CL]. Richard Socher, Jeffrey Pennington, Eric Huang, Andrew Ng, and Christopher Manning. 2011. Semisupervised recursive autoencoders for predicting sentiment distributions. In Proceedings of EMNLP. Andreas Stolcke. 2002. Srilm - am extensible language modeling toolkit. In Proceedings of ICSLP. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of NIPS. Nicola Ueffing, Gholamreza Haffari, and Anoop Sarkar. 2007. Trasductive learning for statistical machine translation. In Proceedings of ACL. Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Autoine Manzagol. 2010. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research. Jiajun Zhang and Chengqing Zong. 2013. Learning a phrase-based translation model from monolingual data with application to domain adaptation. In Proceedings of ACL. 1974
2016
185
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1975–1985, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Strategies for Training Large Vocabulary Neural Language Models Wenlin Chen David Grangier Michael Auli Facebook, Menlo Park, CA Abstract Training neural network language models over large vocabularies is computationally costly compared to count-based models such as Kneser-Ney. We present a systematic comparison of neural strategies to represent and train large vocabularies, including softmax, hierarchical softmax, target sampling, noise contrastive estimation and self normalization. We extend self normalization to be a proper estimator of likelihood and introduce an efficient variant of softmax. We evaluate each method on three popular benchmarks, examining performance on rare words, the speed/accuracy trade-off and complementarity to Kneser-Ney. 1 Introduction Neural network language models (Bengio et al., 2003; Mikolov et al., 2010) have gained popularity for tasks such as automatic speech recognition (Arisoy et al., 2012) and statistical machine translation (Schwenk et al., 2012; Vaswani et al., 2013; Baltescu and Blunsom, 2014). Similar models are also developed for translation (Le et al., 2012; Devlin et al., 2014; Bahdanau et al., 2015), summarization (Chopra et al., 2015) and language generation (Sordoni et al., 2015). Language models assign a probability to a word given a context of preceding, and possibly subsequent, words. The model architecture determines how the context is represented and there are several choices including recurrent neural networks (Mikolov et al., 2010; Jozefowicz et al., 2016), or log-bilinear models (Mnih and Hinton, 2010). This paper does not focus on architecture or context representation but rather on how to efficiently deal with large output vocabularies, a problem common to all approaches to neural language modeling and related tasks (machine translation, language generation). We therefore experiment with a classical feed-forward neural network model similar to Bengio et al. (2003). Practical training speed for these models quickly decreases as the vocabulary grows. This is due to three combined factors: (i) model evaluation and gradient computation become more time consuming, mainly due to the need of computing normalized probabilities over a large vocabulary; (ii) large vocabularies require more training data in order to observe enough instances of infrequent words which increases training times; (iii) a larger training set often allows for larger models which requires more training iterations. This paper provides an overview of popular strategies to model large vocabularies for language modeling. This includes the classical softmax over all output classes, hierarchical softmax which introduces latent variables, or clusters, to simplify normalization, target sampling which only considers a random subset of classes for normalization, noise contrastive estimation which discriminates between genuine data points and samples from a noise distribution, and infrequent normalization, also referred as self-normalization, which computes the partition function at an infrequent rate. We also extend self-normalization to be a proper estimator of likelihood. Furthermore, we introduce differentiated softmax, a novel variation of softmax which assigns more parameters, or capacity, to frequent words and which we show to be faster and more accurate than softmax (§2). 
Our comparison assumes a reasonable budget of one week for training models on a high end GPU (Nvidia K40). We evaluate on three benchmarks differing in the amount of training data and vocabulary size, that is Penn Treebank, Gigaword and the Billion Word benchmark (§3). Our results show that conclusions drawn from small datasets do not always generalize to larger settings. For instance, hierarchical softmax is less accurate than softmax on the small vocabulary Penn Treebank task but performs best on the very large vocabulary Billion Word benchmark. This is because hierarchical softmax is the fastest method for training and can perform more training updates in the same period of time. Furthermore, our re1975 sults with differentiated softmax demonstrate that assigning capacity where it has the most impact allows to train better models in our time budget (§4). Our analysis also shows clearly that traditional Kneser-Ney models are competitive on rare words, contrary to the common belief that neural models are better on infrequent words (§5). 2 Modeling Large Vocabularies We first introduce our model architecture with a classical softmax and then describe various other methods including a novel variation of softmax. 2.1 Softmax Neural Language Model Our feed-forward neural network implements an n-gram language model, i.e., it is a parametric function estimating the probability of the next word wt given n −1 previous context words, wt−1, . . . , wt−n+1. Formally, we take as input a sequence of discrete indexes representing the n−1 previous words and output a vocabulary-sized vector of probability estimates, i.e., f : {1, . . . , V }n−1 →[0, 1]V , where V is the vocabulary size. This function results from the composition of simple differentiable functions or layers. Specifically, f composes an input mapping from discrete word indexes to continuous vectors, a succession of linear operations followed by hyperbolic tangent non-linearities, plus one final linear operation, followed by a softmax normalization. The input layer maps each context word index to a continuous d′ 0-dimensional vector. It relies on a matrix W 0 ∈RV ×d′ 0 to convert the input x = [wt−1, . . . , wt−n+1] ∈{1, . . . , V }n−1 to n −1 vectors of dimension d′ 0. These vectors are concatenated into a single (n−1)×d′ 0 matrix, h0 = [W 0 wt−1; . . . ; W 0 wt−n+1] ∈Rn−1×d′ 0. This state h0 is considered as a d0 = (n −1) × d′ 0 vector by the next layer. The subsequent states are computed through k layers of linear mappings followed by hyperbolic tangents, i.e. ∀i = 1, . . . , k, hi = tanh(W ihi−1 + bi) ∈Rdi where W i ∈ Rdi×di−1, b ∈ Rdi are learnable weights and biases and tanh denotes the component-wise hyperbolic tangent. Finally, the last layer performs a linear operation followed by a softmax normalization, i.e., hk+1 = W k+1hk + bk+1 ∈RV and y = 1 Z exp(hk+1) ∈[0, 1]V (1) where Z = PV j=1 exp(hk+1 j ) and exp denotes the component-wise exponential. The network output y is therefore a vocabulary-sized vector of probability estimates. We use the standard cross-entropy loss with respect to the computed log probabilities ∂log yi ∂hk+1 j = δij −yj where δij = 1 if i = j and 0 otherwise The gradient update therefore increases the score of the correct output hk+1 i and decreases the score of all other outputs hk+1 j for j ̸= i. A downside of the classical softmax formulation is that it requires computation of the activations for all output words, Eq. (1). 
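To make the architecture of Section 2.1 concrete, the following NumPy sketch implements the forward pass up to the softmax of Eq. (1). The dimensions and random initialization are purely illustrative; this is not the authors' implementation.

```python
import numpy as np

# Sketch of the feed-forward n-gram LM of Section 2.1 (Eq. 1).
# Sizes and random parameters are for illustration only.
V, n, d0p, d_hidden, k = 10000, 6, 128, 512, 2

W0 = np.random.randn(V, d0p) * 0.01                  # input embeddings W^0
hidden_W = [np.random.randn(d_hidden, (n - 1) * d0p if i == 0 else d_hidden) * 0.01
            for i in range(k)]
hidden_b = [np.zeros(d_hidden) for _ in range(k)]
Wout = np.random.randn(V, d_hidden) * 0.01           # output layer W^{k+1}
bout = np.zeros(V)

def next_word_probs(context_ids):
    """context_ids: indices of the n-1 previous words."""
    h = np.concatenate([W0[w] for w in context_ids])  # h^0
    for W, b in zip(hidden_W, hidden_b):
        h = np.tanh(W @ h + b)                        # h^i
    scores = Wout @ h + bout                          # h^{k+1}
    scores -= scores.max()                            # numerical stability
    e = np.exp(scores)
    return e / e.sum()                                # softmax, Eq. (1)
```

The V-by-d_k product in the last layer is the cost that the techniques discussed next try to avoid or reduce.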
The output layer with V activations is much larger than any other layer in the network and its matrix multiplication dominates the complexity of the entire network. 2.2 Hierarchical Softmax Hierarchical Softmax (HSM) organizes the output vocabulary into a tree where the leaves are the words and the intermediate nodes are latent variables, or classes (Morin and Bengio, 2005). The tree has potentially many levels and there is a unique path from the root to each word. The probability of a word is the product of the probabilities of the latent variables along the path from the root to the leaf, including the probability of the leaf. We follow Goodman (2001) and Mikolov et al. (2011b) and model a two-level tree. Given context x, HSM predicts the class of the next word ct and the actual word wt p(wt|x) = p(ct|x) p(wt|ct, x) (2) If the number of classes is O( √ V ) and classes are balanced, then we only need to compute O(2 √ V ) outputs. In practice, this strategy results in weight matrices whose largest dimension is < 1, 000, a setting for which GPU hardware is fast. A popular strategy is frequency clustering. It sorts the vocabulary by frequency and then forms clusters of words with similar frequency. Each cluster contains an equal share of the total unigram probability. We compare this strategy to random class assignment and to clustering based on word 1976 W k+1 hk dA dB dC |A| |B| |C| dA dB dC Figure 1: Output weight matrix W k+1 and hidden layer hk for differentiated softmax for vocabulary partitions A, B, C with embedding dimensions dA, dB, dC; non-shaded areas are zero. contexts, relying on PCA (Lebret and Collobert, 2014). A full comparison of context-based clustering is beyond the scope of this work (Brown et al., 1992; Mikolov et al., 2013). 2.3 Differentiated Softmax This section introduces a novel variation of softmax that assigns a variable number of parameters to each word in the output layer. The weight matrix of the final layer W k+1 ∈Rdk×V stores output embeddings of size dk for the V words the language model may predict: W k+1 1 ; . . . ; W k+1 V . Differentiated softmax (D-Softmax) varies the dimension of the output embeddings dk across words depending on how much model capacity, or parameters, are deemed suitable for a given word. We assign more parameters to frequent words than to rare words since more training occurrences allow for fitting more parameters. We partition the output vocabulary based on word frequency and the words in each partition share the same embedding size. Partitioning the vocabulary in this way results in a sparse final weight matrix W k+1 which arranges the embeddings of the output words in blocks, each block corresponding to a separate partition (Figure 1). The size of the final hidden layer hk is the sum of the embedding sizes of the partitions. The final hidden layer is effectively a concatenation of separate features for each partition which are used to compute the dot product with the corresponding embedding type in W k+1. In practice, we efficiently compute separate matrix-vector products, or in batched form, matrix-matrix products, for each partition in W k+1 and hk. Overall, differentiated softmax can lead to large speed-ups as well as accuracy gains since we can greatly reduce the complexity of computing the output layer. Most significantly, this strategy speeds up both training and inference. 
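A minimal sketch of the differentiated softmax output layer of Figure 1: the final hidden state is treated as a concatenation of per-band features, and each frequency band only multiplies its own, smaller block of output embeddings. The partition sizes and dimensions below are illustrative assumptions, not the tuned settings of the paper.

```python
import numpy as np

# Sketch of the differentiated softmax output layer (Figure 1).
# The vocabulary is frequency-ordered and split into bands; band i has
# band_sizes[i] words with output embeddings of dimension band_dims[i].
band_sizes = [4000, 16000, 80000]       # illustrative partition sizes
band_dims  = [512, 128, 64]             # illustrative embedding sizes

blocks = [np.random.randn(s, d) * 0.01 for s, d in zip(band_sizes, band_dims)]

def d_softmax_probs(h_k):
    """h_k: final hidden state of size sum(band_dims), i.e. the
    concatenation of per-band features."""
    scores, offset = [], 0
    for block, d in zip(blocks, band_dims):
        feat = h_k[offset:offset + d]    # slice of h^k for this band
        scores.append(block @ feat)      # partial scores for this band
        offset += d
    s = np.concatenate(scores)
    s -= s.max()
    e = np.exp(s)
    return e / e.sum()                   # softmax over the full vocabulary
```

The same blocked product serves both training and inference.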
This is in contrast to hierarchical softmax which is fast during training but requires even more effort than softmax for computing the most likely next word. 2.4 Target Sampling Sampling-based methods approximate the softmax normalization, Eq. (1), by summing over a sub-sample of impostor classes. This can significantly speed-up each training iteration, depending on the size of the impostor set. Target sampling builds upon the importance sampling work of Bengio and Sen´ecal (2008). We follow Jean et al. (2014) who choose as impostors all positive examples in a mini-batch as well as a subset of the remaining words. This subset is sampled uniformly and its size is chosen by validation. 2.5 Noise Contrastive Estimation Noise contrastive estimation (NCE) is another sampling-based technique (Hyv¨arinen, 2010; Mnih and Teh, 2012; Chen et al., 2015). Contrary to target sampling, it does not maximize the training data likelihood directly. Instead, it solves a two-class problem of distinguishing genuine data from noise samples. The training algorithm samples a word w given the preceding context x from a mixture p(w|x) = 1 k + 1ptrain(w|x) + k k + 1pnoise(w|x) where ptrain is the empirical distribution of the training set and pnoise is a known noise distribution which is typically a context-independent unigram distribution. The training algorithm fits the model ˆp(w|x) to recover whether a mixture sample came from the data or the noise distribution, this amounts to minimizing the binary crossentropy −y log ˆp(y = 1|w, x)−(1−y) log ˆp(y = 0|w, x) where y is a binary variable indicating where the current sample originates from ( ˆp(y = 1|w, x) = ˆp(w|x) ˆp(w|x)+kpnoise(w|x) (data) ˆp(y = 0|w, x) = 1 −ˆp(y = 1|w, x) (noise). This formulation still involves a softmax over the vocabulary to compute ˆp(w|x). However, Mnih and Teh (2012) suggest to forego normalization and replace ˆp(w|x) with unnormalized exponentiated scores. This makes the training complexity independent of the vocabulary size. At test time, softmax normalization is reintroduced to get a proper distribution. We also follow Mnih and Teh (2012) recommendations for pnoise and rely on a unigram distribution of the training set. 1977 2.6 Infrequent Normalization Devlin et al. (2014), followed by Andreas and Klein (2015), proposed to relax score normalization. Their strategy (here referred to as WeaknormSQ) associates unnormalized likelihood maximization with a penalty term that favors normalized predictions. This yields the following loss over the training set T L(2) α = − X (w,x)∈T s(w|x) + α X (w,x)∈T (log Z(x))2 where s(w|x) refers to the unnormalized score of word w given context x and Z(x) = P w exp(s(w|x)) refers to the partition function for context x. This strategy therefore pushes the log partition towards zero. For efficient training, the second term can be down-sampled L(2) α,γ = − X (w,x)∈T s(w|x)+ α γ X (w,x)∈Tγ (log Z(x))2 where Tγ is the training set sampled at rate γ. A small rate implies computing the partition function only for a small fraction of the training data. We extend this strategy to the case where the log partition term is not squared (Weaknorm), i.e., L(1) α,γ = − X (w,x)∈T s(w|x) + α γ X (w,x)∈Tγ log Z(x) For α = 1, this loss is an unbiased estimator of the negative log-likelihood of the training data L(2) 1 = −P (w,x)∈T s(w|x) + log Z(x). 
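The down-sampled losses above can be written compactly: the unnormalized score of the observed word is always used, while log Z(x), or its square, is only computed for a γ-fraction of examples. The sketch below assumes a hypothetical scores(x) function returning unnormalized scores over the vocabulary, indexed by word id; it is an illustration, not the paper's code.

```python
import math
import random

# Sketch of the infrequent-normalization losses (Weaknorm and WeaknormSQ),
# with a hypothetical scores(x) -> list of unnormalized scores per word id.

def infrequent_norm_loss(batch, scores, alpha=1.0, gamma=0.1, squared=False):
    loss = 0.0
    for w, x in batch:
        s = scores(x)
        loss -= s[w]                     # unnormalized likelihood term
        if random.random() < gamma:      # evaluate the partition at rate gamma
            log_z = math.log(sum(math.exp(v) for v in s))
            loss += (alpha / gamma) * (log_z ** 2 if squared else log_z)
    return loss
```

With squared=False and alpha=1 this is, in expectation, the negative log-likelihood, which is the unbiasedness property noted above.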
3 Experimental Setup Datasets We run experiments over three news datasets of different sizes: Penn Treebank (PTB), WMT11-lm (billionW) and English Gigaword, version 5 (gigaword). Penn Treebank (Marcus et al., 1993) is the smallest corpus with 1M tokens and we use a vocabulary size of 10k (Mikolov et al., 2011a). The billion word benchmark (Chelba et al., 2013) comprises almost one billion tokens and a vocabulary of about 800k words1. Gigaword (Parker et al., 2011) is even larger with 5 billion tokens and was previously used for language modeling (Heafield, 2011) but there is no standard train/test split or vocabulary for this set. We split according to time: training covers 1994–2009 and test covers 2010. The vocabulary comprises the 100k most frequent words in train. Table 1 summarizes the data statistics. 1T. Robinson version http://tiny.cc/1billionLM . Dataset Train Test Vocab OOV PTB 1M 0.08M 10k 5.8% gigaword 4,631M 279M 100k 5.6% billionW 799M 8.1M 793k 0.3% Table 1: Dataset statistics. Number of tokens for train and test, vocabulary size, fraction of OOV. Evaluation We measure perplexity on the test set. For PTB and billionW, we report results on a per sentence basis, i.e., models do not use context words across sentence boundaries and we score end-of-sentence markers. This is the standard setting for these benchmarks and allows comparison with other work. On gigaword, we use contexts across sentence boundaries and evaluation does not include end-of-sentence markers. Our baseline is an interpolated Kneser-Ney (KN) model. We use KenLM (Heafield, 2011) to train 5-gram models without pruning. For neural models, we train 11-gram models for gigaword and billionW; for PTB we train a 6-gram model. The model parameters (weights W i and biases bi for i = 0, . . . , k + 1) are learned to maximize the training log-likelihood relying on stochastic gradient descent (SGD; LeCun et al.. 1998). Validation Hyper-parameters are the number of layers k and the dimension of each layer di, ∀i = 0, . . . , k. We tune the following settings for each technique on the validation set: the number of clusters, the clustering technique for hierarchical softmax, the number of frequency bands and their allocated capacity for differentiated softmax, the number of distractors for target sampling, the noise/data ratio for NCE, as well as the regularization rate and strength for infrequent normalization. Similarly, SGD parameters (learning rate and mini-batch size) are set to maximize validation likelihood. We also tune the dropout rate (Srivastava et al., 2014); dropout is employed after each tanh non-linearity.2 Training Time We train for 168 hours (one week) on the large datasets (billionW, gigaword) and 24 hours (one day) for Penn Treebank. All experiments are performed on the same hardware, a single K40 GPU. We select the hyper-parameters which yield the best validation perplexity after the allocated time and report the perplexity of the resulting model on the test set. This training time 2More parameter settings are available in an extended version of the paper at http://arxiv.org/abs/1512.04906. 1978 is a trade-off between being able to do a comprehensive exploration of the various settings for each method and good accuracy. The chosen training times are not long enough to observe over-fitting, i.e. validation performance is still improving – albeit very slowly – at the end of the training session. 
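Since all models are compared on test-set perplexity, a brief reminder of what is being computed: the exponentiated average negative log-likelihood over test tokens. A small sketch, assuming a hypothetical model_prob(word, context) function:

```python
import math

# Sketch: test-set perplexity from per-token probabilities, assuming a
# hypothetical model_prob(word, context) function.

def perplexity(test_ngrams, model_prob):
    """test_ngrams: iterable of (context, word) pairs."""
    log_sum, count = 0.0, 0
    for context, word in test_ngrams:
        log_sum += math.log(model_prob(word, context))
        count += 1
    return math.exp(-log_sum / count)
```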
As a general observation, even on the small PTB where 24 hours is rather long, we always found better results using the full training time, possibly increasing the dropout rate. A concern may be that a fixing the training time favors models with better implementations. However, all models are very similar and their core computations are always matrix/matrix products. Training differs mostly in the size and frequency of large matrix/matrix products. Matrix products rely on CuBLAS3, using torch4. For the matrix sizes involved (> 500×1, 000), the time complexity of matrix product is linear in each dimension, both on CPU (Intel MKL5) and GPU (CuBLAS), with a 10X speedup for GPU (Nvidia K40) compared to CPU (Intel Xeon E5-2680). Therefore, the speed trade-off applies to both CPU and GPU hardware, albeit with a different time scale. 4 Results The test perplexities (Table 2) and validation learning curves (Figures 2, 3, and 4) show that the competitiveness of softmax diminishes with larger vocabularies. Softmax does well on the small vocabulary PTB but poorly on the large vocabulary billionW corpus. Faster methods such as sampling, hierarchical softmax, and infrequent normalization (Weaknorm, WeaknormSQ) are much better in the large vocabulary setting of billionW. D-Softmax is performing well on all sets and shows that assigning higher capacity where it benefits most results in better models. Target sampling performs worse than softmax on gigaword but better on billionW. Hierarchical softmax performs poorly on Penn Treebank which is in stark contrast to billionW where it does well. Noise contrastive estimation has good accuracy on billionW, where speed is essential to achieving good accuracy. Of all the methods, hierarchical softmax processes most training examples in a given time frame (Table 3). Our test time speed comparison assumes that we would like to find the highest 3http://docs.nvidia.com/cuda/cublas/ 4http://torch.ch 5https://software.intel.com/en-us/intel-mkl PTB gigaW billionW KN 141.2 57.1 70.26 Softmax 123.8 56.5 108.3 D-Softmax 121.1 52.0 91.2 Sampling 124.2 57.6 101.0 HSM 138.2 57.1 85.2 NCE 143.1 78.4 104.7 Weaknorm 124.4 56.9 98.7 WeaknormSQ 122.1 56.1 94.9 KN+Softmax 108.5 43.6 59.4 KN+D-Softmax 107.0 42.0 56.3 KN+Sampling 109.4 43.8 58.1 KN+HSM 115.0 43.9 55.6 KN+NCE 114.6 49.0 58.8 KN+Weaknorm 109.2 43.8 58.1 KN+WeaknormSQ 108.8 43.8 57.7 Table 2: Test perplexity of individual models and interpolation with Kneser-Ney. 120 130 140 150 160 170 180 190 0 5 10 15 20 Perplexity Training time (hours) Softmax Sampling HSM D-Softmax Weaknorm WeaknormSQ NCE Figure 2: PTB validation learning curve. scoring next word rather than rescoring an existing string. This scenario requires scoring all output words and D-Softmax can process nearly twice as many tokens per second than the other methods whose complexity is similar to softmax. 4.1 Softmax Despite being our baseline, softmax ranks among the most accurate methods on PTB and it is second best on gigaword after D-Softmax (with WeaknormSQ performing similarly). For billionW, the extremely large vocabulary makes softmax training too slow to compete with faster alterna6This perplexity is higher than reported in (Chelba et al., 2013), in which Kneser Ney is not trained on the 800m token training set, but on a larger corpus of 1.1B tokens. 1979 50 60 70 80 90 100 110 0 20 40 60 80 100 120 140 160 180 Perplexity Training time (hours) Softmax Sampling HSM D-Softmax Weaknorm WeaknormSQ NCE Figure 3: Gigaword validation learning curve. 
80 100 120 140 160 180 0 20 40 60 80 100 120 140 160 180 Perplexity Training time (hours) Softmax Sampling HSM D-Softmax Weaknorm WeaknormSQ NCE Figure 4: Billion Word validation learning curve. train test Softmax 510 510 D-Softmax 960 960 Sampling 1,060 510 HSM 12,650 510 NCE 4,520 510 Weaknorm 1,680 510 WeaknormSQ 2,870 510 Table 3: Training and test speed on billionW in tokens per second for generation of the next word. Most techniques are identical to softmax at test time. HSM can be faster for rescoring. 50 60 70 80 90 100 110 120 0 10 20 30 40 50 60 70 80 90 100 Perplexity Distractors per Sample (% of vocabulary) Sampling Figure 5: Number of Distractors versus Perplexity for Target Sampling over Gigaword tives. However, of all the methods softmax has the simplest implementation and it has no additional hyper-parameters compared to other methods. 4.2 Target Sampling Figure 5 shows that target sampling is most accurate for distractor sets that amount to a large fraction of the vocabulary, i.e. > 30% on gigaword (billionW best setting > 50% is even higher). Target sampling is faster and performs more iterations than softmax in the same time. However, its perplexity reduction per iteration is less than softmax. Overall, it is not much better than softmax. A reason might be that sampling chooses distractors independently from context and current model performance. This does not favor distractors the model incorrectly considers likely for the current context. These distractors would yield higher gradients that could update the model faster. 4.3 Hierarchical Softmax Hierarchical softmax is very efficient for large vocabularies and it is the best method on billionW. On the other hand, HSM does poorly on small vocabularies as seen on PTB. We found that a good word clustering structure is crucial: when clusters gather words occurring in similar contexts, cluster likelihoods are easier to learn; when the cluster structure is uninformative, cluster likelihoods converge to the uniform distribution. This affects accuracy since words cannot have higher probability than their clusters, Eq. (2). Our experiments organize words into a two level hierarchy and compare four clustering strategies on billionW and gigaword (§2.2). Random clustering shuffles the vocabulary and splits it into equally sized partitions. Frequency-based clustering first orders words based on their frequency and assigns words to clusters such that each cluster represents an equal share of the total frequency (Mikolov et al., 2011b). Kmeans runs the well-known clustering algorithm on Hellinger PCA word embeddings. Weighted kmeans weights each word by its frequency.7 Random clusters perform worst (Table 4) followed by frequency-based clustering but k-means does best; weighted k-means performs similarly to its unweighted version. In earlier experiments, plain k-means performed very poorly since the most frequent cluster captured up to 40% of the 7The time to compute the clustering (multi-threaded word co-occurrence counts, PCA and k-means) is under one hour, which is negligible given a one week training budget. 1980 billionW gigaword random 98.51 62,27 frequency-based 92.02 59.47 k-means 85.70 57.52 weighted k-means 85.24 57.09 Table 4: HSM with different clustering. token occurrences. We then explicitly capped the frequency budget of each cluster to 10% which brought k-means on par with weighted k-means. 4.4 Differentiated Softmax D-Softmax is the best technique on gigaword and second best on billionW after HSM. 
On PTB it ranks among the best techniques whose perplexities cannot be reliably distinguished. The variable-capacity scheme of D-Softmax can assign large embeddings to frequent words, while keeping computational complexity manageable through small embeddings for rare words. Unlike for hierarchical softmax, NCE or Weaknorm, the computational advantage of DSoftmax is preserved at test time (Table 3). DSoftmax is the fastest technique at test time, while ranking among the most accurate methods. This speed advantage is due to the low dimensional representation of rare words which negatively affects the model accuracy on these words (Table 5). 4.5 Noise Contrastive Estimation Although we report better perplexities than the original NCE paper on PTB (Mnih and Teh, 2012), we found NCE difficult to use for large vocabularies. In order to work in this setting where models are larger, we had to dissociate the number of noise samples from the data to noise ratio in the modeled mixture. For instance, a data/noise ratio of 1/50 gives good performance in our experiments but estimating only 50 noise sample posteriors per data point is wasteful given the cost of network evaluation. Moreover, 50 samples do not allow frequent sampling of every word in a large vocabulary. Our setting considers more noise samples and up-weights the data sample. This allows to set the data/noise ratio independently from the number of noise samples. Overall, NCE results are better than softmax only for billionW, a setting for which softmax is very slow due to the very large vocabulary. Why does NCE perform so poorly? Figure 6 shows entropy on the validation set versus the NCE loss for several models. The results clearly show that sim 4 5 6 7 8 9 10 0.054 0.056 0.058 0.06 0.062 0.064 Entropy NCE Loss Figure 6: Validation entropy versus NCE loss on gigaword for experiments differing only in learning rates and initial weights. Each color corresponds to one experiment, with one point per hour. ilar NCE loss values can result in very different validation entropy. Although NCE might make sense for other metrics such as BLEU (Baltescu and Blunsom, 2014), it is not among the best techniques for minimizing perplexity. Jozefowicz et al. (2016) recently drew similar conclusions. 4.6 Infrequent Normalization Infrequent normalization (Weaknorm and WeaknormSQ) performs better than softmax on billionW and comparably to softmax on Penn Treebank and gigaword (Table 2). The speedup from skipping partition function computations is substantial. For instance, WeaknormSQ on billionW evaluates the partition only on 10% of the examples. In one week, the model is evaluated and updated on 868M tokens (with 86.8M partition evaluations) compared to 156M tokens for softmax. Although referred to as self-normalizing (Andreas and Klein, 2015), the trained models still need normalization after training. The partition varies greatly between data samples. On billionW, the partition ranges between 9.4 to 10.3 in log scale for 10th to 90th percentile, i.e. a ratio of 2.5. We observed the squared version (WeaknormSQ) to be unstable at times. Regularization strength could be found too low (collapse) or too high (blow-up) after a few days of training. We added an extra unit to bound unnormalized predictions x →10 tanh(x/5), which yields stable training and better generalization. For the non-squared Weaknorm, stability was not an issue. A regularization strength of 1 was the best setting for Weaknorm. 
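For reference, the NCE objective of Section 2.5 only needs the unnormalized score of the observed word and of the sampled noise words. The sketch below shows the standard binary cross-entropy form with a data/noise ratio k; the paper's modification, which up-weights the data term so that the ratio can be set independently of the number of noise samples, is not reproduced here. score(w, x) and p_noise(w) are hypothetical placeholders.

```python
import math

# Sketch of the standard NCE binary cross-entropy of Section 2.5 with
# unnormalized scores; k is the data/noise mixture ratio.

def nce_example_loss(w, x, noise_words, score, p_noise, k):
    def p_data(word):
        u = math.exp(score(word, x))          # unnormalized model score
        return u / (u + k * p_noise(word))    # posterior P(y=1 | word, x)
    loss = -math.log(p_data(w))               # observed (data) word
    for wn in noise_words:                    # words drawn from p_noise
        loss -= math.log(1.0 - p_data(wn))
    return loss
```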
This choice makes the loss an unbiased estimator of the data likelihood. 1981 1-4K 4-20K 20-40K 40-70K 70-100K Kneser-Ney 3.48 7.85 9.76 10.76 11.57 Softmax 3.46 7.87 9.76 11.09 12.39 D-Softmax 3.35 7.79 10.13 12.22 12.69 Target sampling 3.51 7.62 9.51 10.81 12.06 HSM 3.49 7.86 9.38 10.30 11.24 NCE 3.74 8.48 10.60 12.06 13.37 Weaknorm 3.46 7.86 9.77 11.12 12.40 WeaknormSQ 3.46 7.79 9.67 10.98 12.32 Table 5: Test entropy on gigaword over subsets of the frequency ranked vocabulary; rank 1 is the most frequent word. 5 Analysis 5.1 Model Capacity Training neural language models over large corpora highlights that training time, not training data, is the main factor limiting performance. The learning curves on gigaword and billionW indicate that most models are still making progress after one week. Training time has therefore to be taken into account when considering increasing capacity. Figure 7 shows validation perplexity versus the number of iterations for a week of training. This figure shows that a softmax model with 1024 hidden units in the last layer could perform better than the 512-hidden unit model with a longer training horizon. However, in the allocated time, 512 hidden units yield the best validation performance. D-softmax shows that it is possible to selectively increase capacity, i.e., to allocate more hidden units to the most frequent words at the expense of rarer words. This captures most of the benefit of a larger softmax model while staying within a reasonable training budget. 5.2 Effect of Initialization We consider initializing both the input word embeddings and the output matrix from Hellinger PCA embeddings. Several alternative techniques for pre-training embeddings have been proposed (Mikolov et al., 2013; Lebret and Collobert, 2014; Pennington et al., 2014). Our experiment highlights the advantage of initialization and do not aim to compare embedding techniques. Figure 8 shows that PCA is better than random for initializing both input and output word representations; initializing both from PCA is even better. We see that even after long training sessions, the initial conditions still impact the validation perplexity. We observed this trend also with 80 100 120 140 160 180 200 0 50 100 150 200 250 300 Perplexity Training tokens (millions) D-Softmax 1024x50K, 512x100K, 64x640K D-Softmax 1024x50K, 256x740K Softmax 1024 Softmax 512 Figure 7: Validation perplexity per iteration on billionW for softmax and D-softmax. Softmax uses the same number of units for all words. The first D-Softmax experiment uses 1024 units for the 50K most frequent words, 512 for the next 100K, and 64 units for the rest; similarly for the second experiment. All experiments end after one week. other strategies than softmax. After one week of training, HSM is the only method which can reach comparable accuracy to PCA initialization when the output matrix is randomly initialized. 5.3 Training Set Size Large training sets and a fixed training time introduce competition between slower models with more capacity and observing more training data. This trade-off only applies to iterative SGD optimization and does not apply to classical countbased models, which visit the training set once and then solve training in closed form. We compare Kneser-Ney and softmax, trained for one week, with gigaword on differently sized subsets of the training data. For each setting we take care to include all data from the smaller subsets. 
Figure 9 shows that the performance of the neural model improves very little on more than 500M tokens. In order to benefit from the full training set we would require a much higher training budget, faster hardware, or parallelization.
[Figure 8: Effect of random initialization and with Hellinger PCA on gigaword for softmax (validation perplexity vs. training time in hours, for PCA/random choices of input and output initialization).]
[Figure 9: Effect of training set size measured on test of gigaword for Softmax and Kneser-Ney (perplexity vs. training data size in billions of tokens).]
Scaling training to large datasets can have a significant impact on perplexity, even when data from the distribution of interest is limited. As an illustration, we adapted a softmax model trained on billionW to Penn Treebank and achieved a perplexity of 96 - a far better result than with any model we trained from scratch on PTB (cf. Table 2). 5.4 Rare Words How well do neural models perform on rare words? To answer this question, we computed entropy across word frequency bands for Kneser-Ney and neural models. Table 5 reports entropy for the 4,000 most frequent words, then the next most frequent 16,000 words, etc. For frequent words, neural models are on par or better than Kneser-Ney. For rare words, Kneser-Ney is very competitive. Although neural models might eventually close this gap with much longer training, one should consider that Kneser-Ney trains on gigaword in only 8 hours on CPU which contrasts with 168 hours of training for neural models on high end GPUs. This result highlights the complementarity of both approaches, as observed in our interpolation experiments (Table 2). For neural models, D-Softmax excels on frequent words but performs poorly on rare ones. This is because D-Softmax assigns more capacity to frequent words at the expense of rare words. Overall, hierarchical softmax is the best neural technique for rare words. HSM does more iterations than any other technique and so it can observe every rare word more often. 6 Conclusions This paper presents a comprehensive analysis of strategies to train neural language models with large vocabularies. This setting is very challenging for neural networks as they need to compute the partition function over the entire vocabulary at each evaluation. We compared classical softmax to hierarchical softmax, target sampling, noise contrastive estimation and infrequent normalization, commonly referred to as self-normalization. Furthermore, we extend infrequent normalization to be a proper estimator of likelihood and we introduce differentiated softmax, a novel variant of softmax assigning less capacity to rare words to reduce computation. Our results show that methods which are effective on small vocabularies are not necessarily equally so on large vocabularies. In our setting, target sampling and noise contrastive estimation failed to outperform the softmax baseline. Overall, differentiated softmax and hierarchical softmax are the best strategies for large vocabularies. Compared to classical Kneser-Ney models, neural models are better at modeling frequent words, but are less effective for rare words. A combination of the two is therefore very effective. We conclude that there is a lot to explore in training from a combination of normalized and unnormalized objectives.
An interesting future direction is to combine complementary approaches, either through combined parameterization (e.g. hierarchical softmax with differentiated capacity per word) or through a curriculum (e.g. transitioning from target sampling to regular softmax as training progresses). Further promising areas are parallel training as well as better rare word modeling. References Jacob Andreas and Dan Klein. 2015. When and why are log-linear models self-normalizing? In Proc. of NAACL. 1983 Ebru Arisoy, Tara N. Sainath, Brian Kingsbury, and Bhuvana Ramabhadran. 2012. Deep Neural Network Language Models. In NAACL-HLT Workshop on the Future of Language Modeling for HLT, pages 20–28, Stroudsburg, PA, USA. Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. of ICLR. Association for Computational Linguistics, May. Paul Baltescu and Phil Blunsom. 2014. Pragmatic neural language modelling in machine translation. Technical Report arXiv 1412.7119. Yoshua Bengio and Jean-S´ebastien Sen´ecal. 2008. Adaptive importance sampling to accelerate training of a neural probabilistic language model. IEEE Transactions on Neural Networks. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A Neural Probabilistic Language Model. Journal of Machine Learning Research, 3:1137–1155. Peter F. Brown, Peter V. deSouza, Robert L. Mercer, Vincent J. Della Pietra, and Jenifer C. Lai. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467–479, Dec. Ciprian Chelba, Tom´aˇs Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2013. One billion word benchmark for measuring progress in statistical language modeling. Technical report, Google. Xie Chen, Xunying Liu, MJF Gales, and PC Woodland. 2015. Recurrent neural network language model training with noise contrastive estimation for speech recognition. In Acoustics, Speech and Signal Processing (ICASSP). Sumit Chopra, Jason Weston, and Alexander M. Rush. 2015. Tuning as ranking. In Proc. of EMNLP. Association for Computational Linguistics, Sep. Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard Schwartz, , and John Makhoul. 2014. Fast and Robust Neural Network Joint Models for Statistical Machine Translation. In Proc. of ACL. Association for Computational Linguistics, June. Joshua Goodman. 2001. Classes for Fast Maximum Entropy Training. In Proc. of ICASSP. Kenneth Heafield. 2011. KenLM: Faster and Smaller Language Model Queries. In Workshop on Statistical Machine Translation, pages 187–197. Michael Gutmann Aapo Hyv¨arinen. 2010. Noisecontrastive estimation: A new estimation principle for unnormalized statistical models. In Proc. of AISTATS. S´ebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2014. On Using Very Large Target Vocabulary for Neural Machine Translation. CoRR, abs/1412.2007. Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. Technical Report arXiv 1602.02410. Hai-Son Le, Alexandre Allauzen, and Franc¸ois Yvon. 2012. Continuous Space Translation Models with Neural Networks. In Proc. of HLT-NAACL, pages 39–48, Montr´eal, Canada. Association for Computational Linguistics. Remi Lebret and Ronan Collobert. 2014. Word Embeddings through Hellinger PCA. In Proc. of EACL. Yann LeCun, Leon Bottou, Genevieve Orr, and KlausRobert Mueller. 1998. 
2016
186
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1986–1997, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Predicting the Compositionality of Nominal Compounds: Giving Word Embeddings a Hard Time Silvio Cordeiro1,2, Carlos Ramisch1, Marco Idiart3, Aline Villavicencio2 1 Aix Marseille Universit´e, CNRS, LIF UMR 7279 (France) 2 Institute of Informatics, Federal University of Rio Grande do Sul (Brazil) 3 Institute of Physics, Federal University of Rio Grande do Sul (Brazil) [email protected] [email protected] [email protected] [email protected] Abstract Distributional semantic models (DSMs) are often evaluated on artificial similarity datasets containing single words or fully compositional phrases. We present a large-scale multilingual evaluation of DSMs for predicting the degree of semantic compositionality of nominal compounds on 4 datasets for English and French. We build a total of 816 DSMs and perform 2,856 evaluations using word2vec, GloVe, and PPMI-based models. In addition to the DSMs, we compare the impact of different parameters, such as level of corpus preprocessing, context window size and number of dimensions. The results obtained have a high correlation with human judgments, being comparable to or outperforming the state of the art for some datasets (Spearman’s ρ=.82 for the Reddy dataset). 1 Introduction Distributional semantic models (DSMs) use context information to represent the meaning of lexical units as vectors. They normally focus on the accurate semantic representation of single words. It is based on single words that many optimizations for these models have been proposed (Lin, 1999; Erk and Pad´o, 2010; Baroni and Lenci, 2010). This is particularly true for word embeddings, that is, a type of DSM where distributional vectors are obtained as a by-product of training a neural network to learn a function between words and their contexts (Mikolov et al., 2013a). Simultaneously, there has been intensive research on models to compose individual word vectors in order to create representations for larger units such as phrases, sentences and even whole documents (Mitchell and Lapata, 2010; Mikolov et al., 2013a). Larger units can often be assumed to have their meanings derived from their parts according to the language’s grammar, but this is not always the case (Sag et al., 2002). Many multiword units are associated with idiomatic interpretations, unrelated to the meaning of the component words (e.g. silver bullet, eager beaver). Precision-oriented NLP applications need to be able to identify partly-compositional and idiomatic cases and ensure meaning preservation during processing. Compositionality identification is a first step towards complete semantic interpretation in tasks such as machine translation (to translate non-compositional compounds as a unit), word sense disambiguation (to avoid assigning a sense to parts of non-compositional compounds), and semantic parsing (to identify complex predicates and their arguments). Even when larger units are explicitly represented in DSMs (McCarthy et al., 2003; Reddy et al., 2011; Mikolov et al., 2013c; Ferret, 2014), it is not clear whether the quality of these representations is comparable to the representations of single words. 
In particular, when building vectors for larger units, their generally lower frequencies in corpora (Kim and Baldwin, 2006) may combine with morphosyntactic phenomena to increase sparsity even further, often requiring non-trivial preprocessing (lemmatization and word reordering) to conflate variants. This paper presents a large-scale multilingual evaluation of DSMs and their parameters for the task of compositionality prediction of nominal compounds in French and English. We examine parameters like the level of corpus preprocessing, the size of the context window and the number of dimensions for context representation. Additionally, we compare standard DSMs based on positive pointwise mutual information (PPMI) 1986 against widely used word embedding tools such as word2vec, henceforth w2v (Mikolov et al., 2013c), and GloVe (Pennington et al., 2014). We start with a discussion of related work (§2) and the materials and methods used (§3). We report on the evaluations performed (§4) and finish with conclusions and future work (§5). 2 Related Work We define nominal compounds as conventional noun phrases composed by two or more words, such as science fiction (Nakov, 2013). In English, they are often expressed as noun compounds but their syntactic realization may vary for different languages. For instance, one of the equivalent forms in French involves a denominal adjective used as modifier (e.g. cell death and the corresponding mort cellulaire).1 In this paper, we focus on 2-word nominal compounds involving modifiers that are nouns (e.g. word embedding) or adjectives (e.g. hard time). Semantically, nominal compounds may display a wide range of idiomaticity, from compositional cases like access road to idiomatic or noncompositional cases like gravy train, whose meaning is unrelated to its parts.2 Even when there is a level of compositionality in the compound, the contribution of each word may vary considerably, independently from its status as a syntactic head or modifier, as cash in cash cow versus tears in crocodile tears. Indeed, various annotation scales have been proposed as means to collect human judgments about compositionality. Particularly for nominal compounds, Reddy et al. (2011) used a 6-point scale to collect judgments on the literal or figurative use of nominal compounds and its components in English. Similar judgments have also been collected for 244 German compounds, for which an average of 30 judgments on a scale from 1 to 7 were gathered through crowdsourcing (Roller et al., 2013). An alternative to multi-point scales is the binary judgment adopted by Farahmand et al. (2015), for a dataset of English nominal compounds. There has been much interest in creating semantic representations of larger units, such as phrases (Mikolov et al., 2013b), sentences and 1In French, one can also use a preposition and optional determiner, like cancer du poumon (lung cancer). 2It refers to an initiative that provides money to many people without much effort. documents (Le and Mikolov, 2014), and in examining whether it is possible to accurately derive the semantics of a compound or multiword expression from its parts (McCarthy et al., 2003; Baldwin et al., 2003; Tratz and Hovy, 2010; Reddy et al., 2011). 
For the latter, proposals include using additive and multiplicative functions to combine vector representations of component words (Mitchell and Lapata, 2008; Reddy et al., 2011), calculating the overlap between the components and the expression (McCarthy et al., 2003) and looking at the literality of translations into multiple languages (Salehi et al., 2014). Other proposals to explicitly represent the semantics of nominal compounds include the use of paraphrases (Lauer, 1995; Nakov, 2008; Hendrickx et al., 2013), and inventories of semantic relations (Girju et al., 2005). The ability of DSMs for accurately capturing semantic information may be affected by a number of factors involved in constructing the models, such as the source corpus, context representation, and parameters of the model. Relevant corpus parameters include size (Ferret, 2013; Mikolov et al., 2013c) and quality (Lapesa and Evert, 2014). Factors related to context representation include the context window size and the number of context dimensions adopted for a model (Lapesa and Evert, 2014); the choice of contexts to be used with targets (syntactic dependencies vs. bag-of-words) (Agirre et al., 2009); the use of morphosyntactic information (Pad´o and Lapata, 2003; Pad´o and Lapata, 2007); context filtering (Riedl and Biemann, 2012; Padr´o et al., 2014a); and dimensionality reduction methods (van de Cruys et al., 2012). Important model parameters that have been studied include the choice of association and similarity measures (Curran and Moens, 2002) and the use of subsampling and negative sampling techniques (Mikolov et al., 2013c). However, the particular effects may be heterogeneous and depend on the task and model (Lapesa and Evert, 2014). In this paper, we examine the impact of both corpus and context parameters for a variety of models, for the task of nominal compound compositionality prediction in English and French. For the choice of particular DSM, contradictory results have been published showing the superiority of neural models (Baroni et al., 2014) and of more traditional but carefully designed models (Levy et al., 2015). The former were also reported as a better fit to behavioral data on semantic prim1987 ing tasks (Mandera et al., 2016). Moreover, these evaluations are often performed on single-word similarity tasks (Freitag et al., 2005; CamachoCollados et al., 2015) and little has been said about the use of word embeddings for the compositionality prediction of multiword expressions. Two notable exceptions are the recent works of Salehi et al. (2015) and Yazdani et al. (2015). Salehi et al. (2015) show that word embeddings are more accurate in predicting compositionality than a simplistic count-based DSM. Yazdani et al. (2015) focus on the composition function, using a lightly supervised neural network to learn the best combination strategy for individual word vectors. In order to consolidate previous punctual results, we present a large-scale and systematic evaluation, comparing DSMs and their parameters, on several compositionality datasets. 3 Materials and Methods We examine the impact of corpus parameters related to the target language and the degree of corpus preprocessing adopted. We also investigate context parameters related to the size of the context window and the number of dimensions used to represent context. 
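Before turning to the materials, a brief illustration of the additive and multiplicative composition functions discussed in the related work above (Mitchell and Lapata, 2008); this is only a hedged sketch, and the toy vectors and their dimensionality are assumptions rather than data from the evaluation.

import numpy as np

def compose_additive(u, v):
    # additive model: element-wise sum of the two component word vectors
    return u + v

def compose_multiplicative(u, v):
    # multiplicative model: element-wise (Hadamard) product of the two vectors
    return u * v

# Toy 300-dimensional vectors standing in for v(word1) and v(word2).
u, v = np.random.default_rng(1).standard_normal((2, 300))
phrase_add = compose_additive(u, v)
phrase_mult = compose_multiplicative(u, v)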
3.1 Corpora Preprocessing We use the lemmatized and POS-tagged versions of the ukWaC for English (∼2 billion tokens) and frWaC (∼1.6 billion tokens) for French (Baroni et al., 2009) to train the models and build vector representations of words and compounds. For each corpus, we re-tokenize all target compounds as a single word with a separator (e.g. monkey business →monkey business) and re-tag them using a single manually selected tag per compound to handle POS-tagging errors.3 All forms are then lowercased (surface forms, lemmas and POS-tags); and noisy tokens, with special characters, numbers or punctuation, are removed. Additionally, ligatures are normalized for French (e.g. œ →oe) and a spellchecker4 is applied to normalize words across English spelling variants (e.g. color →colour). To test the influence of preprocessing in model accuracy, for each corpus, we generate four variants with different degrees of abstraction: 1. surface+: the original corpus with no preprocessing, containing surface forms. 3We use a simplified tag set (e.g. v instead of vvz). 4https://hunspell.github.io 2. surface: stopword removal; generating a corpus of surface forms of content words. 3. lemma: stopword removal and lemmatization; generating a corpus of lemmas of content words. 4. lemmaPOS: stopword removal, lemmatization and POS-tagging; generating a corpus of content words, represented as lemma/tag. The operation of stopword removal eliminates from the corpus all function words, leaving only nouns, adjectives, adverbs and verbs. In lemmatized corpora, the lemmas of proper names are replaced by placeholders. 3.2 Compositionality Datasets For evaluation, we use nominal compound compositionality datasets for English (Reddy, Reddy++ and Farahmand) and for French (FR-comp). They provide annotations as to whether a given compound is more idiomatic or more compositional. Reddy contains compositionality judgments for 90 compounds and their individual word components, in a scale of literality from 0 (idiomatic) to 5 (literal), collected with Mechanical Turk (Reddy et al., 2011). For each compound, compositionality scores are averaged over its annotators. Compounds included in the dataset were selected to balance frequency range and degree of compositionality (low, middle and high). We use only the global compositionality score, ignoring individual word judgments. With a few exceptions (e.g. sacred cow), most compounds are formed exclusively by nouns. Reddy++ is a new resource created for this evaluation (Ramisch et al., 2016). It extends the Reddy set with an additional 90 English nominal compounds, in a total of 180 entries. Scores also range from 0 to 5 and were collected through Mechanical Turk and averaged over the annotators. The extra 90 entries include some adjective-noun compounds and are balanced with respect to frequency and compositionality. We focus our evaluation on this combined dataset, since it includes Reddy. However, to allow comparison with state of the art, we also report results individually for Reddy. Farahmand contains 1042 English compounds extracted from Wikipedia with binary noncompositionality judgments by four experts (Farahmand et al., 2015). We consider a compound as non-compositional if at least two judges agree that it is non-compositional, following Yaz1988 dani et al. (2015). In our evaluations, we use the sum of all judgments in order to have a single numeral compositionality score, ranging from 0 (compositional) to 4 (idiomatic). 
FR-comp is also a new resource created for this evaluation (Ramisch et al., 2016). It contains 180 adjective-noun and noun-adjective compounds in French, such as belle-m`ere (mother-in-law, lit. beautiful-mother) and carte bleue (credit card, lit. blue card). This dataset was constructed in the same manner as the extension to Reddy, that is, using crowdsourcing and average numerical scores. Special care was taken to guarantee that annotators were native speakers by asking them to provide paraphrases along with compositionality scores. The new datasets Reddy++ and FR-comp are similar to Reddy. For instance, the average standard deviation of compound scores given by different annotators is σ = 1.17 for the new compounds in Reddy++, σ = 1.15 for FR-comp and σ = 0.99 for Reddy. Their detailed evaluation is presented by Ramisch et al. (2016). 3.3 DSM Models We build three types of DSMs: models based on sparse PPMI cooccurrence vectors, as well as those constructed with word2vec and GloVe. PPMI For each target word or compound, we extract from the corpus its neighboring nouns and verbs in a symmetric sliding window of w words to the left/right5, using a linear decay weighting scheme with respect to its distance d to the target (Levy et al., 2015). In other words, each cooccurrence count of target-context pairs is incremented by w + 1 −d instead of 1. The representation of a target is a vector containing the positive pointwise mutual information (PPMI) association scores between the target and its contexts.6 In PPMI-thresh, we follow Padr´o et al. (2014b) to select the top k most relevant contexts (highest PPMI) for each target. No further dimensionality reduction is applied. In PPMI-TopK, we use a fixed global list of 1000 contexts, built by looking at the most frequent words in the corpus: the top 50 are skipped, and the next 1000 are taken (Salehi et al., 2015). No further dimensionality reduction is applied. 5Syntactic context definition is planned as future work. 6PPMI vectors are built using minimantics https:// github.com/ceramisch/minimantics. In PPMI-SVD, for each target, contexts that appear less than 1000 times are discarded.7 We then use the Dissect toolkit8 (Dinu et al., 2013) in order to build a PPMI matrix and reduce its dimensionality using singular value decomposition (SVD) to factorize the matrix. w2v Uses the word2vec toolkit based on neural networks to predict target/context cooccurrence (Mikolov et al., 2013a). We build models from two variants of word2vec: CBOW (w2v-cbow) and skipgram (w2v-sg). In both cases, the configurations are the default ones, except for the following: no hierarchical softmax; negative sampling of 25; frequent-word downsampling weight of 10−6; runs 15 training iterations. We use the default minimum word count threshold of 5. glove We use the count-based DSM of Pennington et al. (2014), which implements a factorization of the co-occurrence count matrix. The configurations are the default ones, except for the following: internal cutoff parameter xmax = 75; builds co-occurrence matrix in 15 iterations. Due to the large vocabulary size, we use a minimum word count threshold of 5 for lemma-based models, 15 for surface and 20 for surface+. For each DSM, we evaluate the influence of a set of parameters. By varying the values of these parameters, we build a total of 408 models per language. The parameters are: • WORDFORM: Refers to one of the four variants of each corpus: surface+, surface, lemma, and lemmaPOS. 
• WINDOWSIZE: Indicates within how many words to the left/right we are searching for target-context co-occurrence pairs. In this work we explore windows of sizes 1, 4 and 8. • DIMENSION: Each model is constructed to have a maximum number of final dimensions for each vector. We generate models with 250, 500 and 750 dimensions. [Footnote 7: Aggressive filtering was required because SVD seems quite sensitive to low-frequency contexts.] [Footnote 8: http://clic.cimec.unitn.it/composes/toolkit/index.html] 3.4 Compositionality Prediction To predict the compositionality of a nominal compound w1w2 using the DSMs, we use as a measure the cosine similarity between the compound vector representation v(w1w2) and the sum of the vector representations of the component words: cos(v(w1w2), v(w1 + w2)), where for v(w1 + w2) we use the normalized sum v(w1 + w2) = v(w1)/||v(w1)|| + v(w2)/||v(w2)||. In this framework, a compound is compositional if the compound representation is close to the sum of its components' representations (cosine is close to 1), and it is idiomatic otherwise. One possible improvement of the predictive model would consist in using more sophisticated composition functions instead of the sum, such as the multiplicative model of Mitchell and Lapata (2008). However, we want to first assess the performance of a simple additive function. Other optimized functions like the ones proposed by Yazdani et al. (2015) could also be verified, but are out of the scope of this paper, since they are based on supervised learning. 3.5 Evaluation Setup We evaluate the compositionality models and their parameters on the datasets described in Section 3.2. For Reddy, Reddy++ and FR-comp, we report Spearman's ρ correlation between the ranking provided by humans and those calculated from the models. We follow Yazdani et al. (2015) and report the best F1 score (BF1) obtained for the Farahmand dataset, by calculating the F1 score for the top k compounds classified as positive (non-compositional), for all possible values of k. Given the high number of experiments we performed, we report the best performance of each model type. For instance, the performances reported for w2v-cbow using different values of WINDOWSIZE are the best configurations across all possible values of other parameters such as DIMENSION and WORDFORM. This avoids reporting local maxima that can arise if one fixes all other parameters when evaluating a given one (Lapesa and Evert, 2014). For Reddy++ and Farahmand, we distinguish between strict evaluation, reported in the form of wider bars in the figures, and loose evaluation, shown as narrow blue bars in the figures. Strict evaluation corresponds to the performance of the model only on those compounds that have a vector representation in all underlying DSMs, 175 out of 180 for Reddy++ and 913 out of 1042 for Farahmand. Loose evaluation considers the full dataset, using a fallback strategy for the imputation of missing values, assigning the average compositionality score to absent compounds (Salehi et al., 2015). This is particularly important for Farahmand, which contains more rare compounds such as universe human and mankind instruction, so that 129 compounds are missing in the corpus. Only strict evaluation is reported for FR-comp, as all compounds are frequent enough in FRWaC. The vectors generated by w2v and glove have some non-determinism due to random initialization.
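The prediction and evaluation steps just described (Sections 3.4 and 3.5) can be sketched as follows; this is a hedged illustration, and the toy lookup table, the 300-dimensional random vectors, and the hypothetical judgments are assumptions rather than the DSM output used in the experiments.

import numpy as np
from scipy.stats import spearmanr

def normalize(v):
    return v / np.linalg.norm(v)

def compositionality_score(vectors, w1, w2, compound):
    # Cosine between the compound vector and the normalized sum of its components:
    # values near 1 suggest a compositional compound, values near 0 an idiomatic one.
    combined = normalize(vectors[w1]) + normalize(vectors[w2])
    c = vectors[compound]
    return float(np.dot(c, combined) / (np.linalg.norm(c) * np.linalg.norm(combined)))

# Toy DSM: random vectors standing in for the PPMI/w2v/glove representations.
compounds = [("monkey", "business", "monkey_business"), ("access", "road", "access_road"),
             ("gravy", "train", "gravy_train"), ("science", "fiction", "science_fiction")]
rng = np.random.default_rng(0)
vectors = {w: rng.standard_normal(300) for triple in compounds for w in triple}

predicted = [compositionality_score(vectors, *c) for c in compounds]
human = [1.2, 4.6, 0.4, 3.9]   # hypothetical averaged judgments on the 0-5 literality scale
rho, _ = spearmanr(human, predicted)
print(predicted, rho)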
To assess its impact on results, we report the average of 3 runs using identical configurations and use error bars in the graphics. [Footnote 9: Error bars are barely visible because results are stable.] 4 Results We report results on each dataset separately and then discuss findings that hold for all datasets. 4.1 Reddy++ and Reddy Datasets Figure 1 summarizes the results for the Reddy++ dataset. [Footnote 10: In the remainder of this section, we will discuss strict evaluation results (outer bars).] [Figure 1: Spearman's ρ for different DSM parameters on the Reddy++ dataset; panels: (a) overall best Spearman's ρ per DSM, (b) best per DSM and WORDFORM, (c) best per DSM and WINDOWSIZE, (d) best per DSM and DIMENSION.] Overall, w2v-cbow (ρ = 0.73), w2v-sg (ρ = 0.73), PPMI-SVD (ρ = 0.72) and PPMI-thresh (ρ = 0.71) obtain similar results. In spite of this, except for the two best w2v models, all differences were deemed statistically significant (Wilcoxon rank correlation test, p < 0.05). Figure 1(b) shows the influence of the degree of corpus preprocessing (shown as WORDFORM in these figures). The results are heterogeneous, as the best w2v models seem to profit from the presence of stopwords, unlike the other models, for which more preprocessing (lemma and lemmaPOS) leads to better results. One exception is PPMI-SVD, for which the use of lemmaPOS drastically reduces performance. [Footnote 11: Further investigation must be done to determine the cause of this reduction, as an increase in vocabulary size alone is insufficient to explain the effect, given that both surface forms outperform it.] For WINDOWSIZE, Figure 1(c), although increasing context size seems to help DSMs (at least up to 4), for the best w2v models a better result is obtained with a limited context of 1 word left/right. Probably the interaction between the subsampling strategy and the randomized window size explains why increasing this value does not improve the results. PPMI-SVD can use extra information from larger window sizes (WINDOWSIZE=8) better than models based on context filtering. This is probably related to the aggressive context filter, which keeps only very salient cooccurrences even in large windows. The results for context vector dimensionality, Figure 1(d), show, as expected, that the best results are obtained with larger dimensions (DIMENSION=750) for all models, except for glove, which displays very similar results independently of the number of dimensions. Examining the Reddy dataset alone, the same trends for all parameters were found, but with higher results. The overall best performances on Reddy were quite similar: w2v-cbow (ρ = 0.82), w2v-sg (ρ = 0.81), PPMI-SVD (ρ = 0.80) and PPMI-thresh (ρ = 0.79), and the differences are significant except for the two best w2v models. The 90 compounds added to Reddy++ seem to be more difficult to assess than the original ones, probably because they include many adjectives, which have been found harder to judge for compositionality than nouns (Ramisch et al., 2016). 4.2 Farahmand Dataset Figure 2(a) shows the overall best model for the Farahmand dataset. PPMI-SVD reached a BF1 score of 0.52, with DIMENSION=750, WINDOWSIZE=4, using lemma, and both w2v models (BF1=0.51) obtain comparable results with similar configurations. These results show a marked difference between the loose (the narrower bars in the figures) and the strict evaluation (wider bars). The former uses a fallback strategy for the imputation of missing values that does not accurately reflect how the compositionality scores vary.
Indeed, we observed that compounds that do not appear very often in our corpora tend to be non-compositional, whereas most of the compound occurrences are compositional, increasing average compositionality. For instance, the 10 most compositional compounds in Reddy++ occur an average of 26551 times in the UKWaC vs 1096 times for the 10 least compositional ones. Spearman rank correlation between frequency and compositionality in Reddy++ is ρ = 0.43. [Footnote 12: We report these figures for Reddy++ because Farahmand has many ties, given the binary nature of compositionality annotations.] In short, even if a fallback strategy is adopted as the means to obtain a lower bound for performance, it may be unrelated to the real performance for the missing compounds. [Figure 2: BF1 scores for different DSM parameters on the Farahmand dataset; panels: (a) overall best BF1 score per DSM, (b) best per DSM and WORDFORM, (c) best per DSM and WINDOWSIZE, (d) best per DSM and DIMENSION.] For most models, corpus preprocessing resulted in better scores, with WORDFORM=lemma outperforming all other forms of preprocessing, especially for French. Concatenating lemmas and POS tags does not seem to help, probably due to decreasing word frequencies without substantial gain in informativeness (Figure 2(b)). The impact of WINDOWSIZE has a similar trend to the one found for the Reddy++ and Reddy datasets (Figure 2(c)). That is, the larger window was preferred by most models, but the average difference between the best and the worst size for each DSM is only 0.01. For DIMENSION, a larger number resulted in better scores, as expected, with 750 being the best for all models in Figure 2(d). Nonetheless, here too the average difference in scores between DIMENSION=750 and 250 is 0.01. 4.3 FR-comp Dataset Globally, for the FR-comp dataset, PPMI-thresh (ρ = 0.70) outperforms glove (ρ = 0.68) and w2v (ρ = 0.66), as can be seen in Figure 3(a). [Footnote 13: As all compounds in the dataset occur in the corpus, only strict evaluation results are reported.] [Figure 3: Spearman's ρ for different DSM parameters on the FR-comp dataset; panels: (a) overall best Spearman's ρ per DSM, (b) best per DSM and WORDFORM, (c) best per DSM and WINDOWSIZE, (d) best per DSM and DIMENSION.] For morphologically rich languages like French, Figure 3(b) indicates that working on lemmatized data often yields better results than working on surface forms. Lemmas conflate the frequencies of the many morphologically inflected variants which would otherwise be dispersed in different surface forms. Therefore, it is not surprising that the best results concerning WORDFORM are achieved by lemma. These results differ from English, where a corpus without any preprocessing yields more accurate results. Moreover, a smaller WINDOWSIZE leads to better results for most models, as shown in Figure 3(c). But just as in English, all models except glove benefit from an increase in dimension, as shown in Figure 3(d). [Footnote 14: For w2v, the same parameters used for English were adopted also for French. As a sanity check, we tested a range of negative sampling values [5, 15, 25, 35, 50], as well as subsampling rates for powers of 10 in [10^-3 to 10^-7]. Variations in ρ are minor and do not show any clear trend.] 4.4 Discussion When comparing DIMENSION across languages and datasets, larger values often bring better performance. Likewise, the lemma is usually the better WORDFORM. The recommended WINDOWSIZE depends on the model and language, but for the best models in all datasets, a window of 1 outperforms the others. This may be a consequence of the linear decay context weighting process, which assigns higher weights to closer words as window size increases.
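For reference, the linear decay weighting just mentioned (defined in Section 3.3) increments each target-context pair found within a window of size w by w + 1 - d, where d is the distance to the target. The sketch below is a simplified illustration: the tokenized sentence is a toy assumption, and it ignores the paper's restriction of contexts to neighbouring nouns and verbs.

from collections import Counter

def decayed_cooccurrences(tokens, window):
    """Count target-context pairs, weighting each pair by window + 1 - distance."""
    counts = Counter()
    for i, target in enumerate(tokens):
        for d in range(1, window + 1):
            weight = window + 1 - d
            if i - d >= 0:
                counts[(target, tokens[i - d])] += weight
            if i + d < len(tokens):
                counts[(target, tokens[i + d])] += weight
    return counts

counts = decayed_cooccurrences("the monkey_business at the office amused nobody".split(), window=4)
print(counts[("monkey_business", "office")])   # closer contexts receive higher weights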
As an overall conclusion, in combination with a large dimension and a small window size, investing in preprocessing provides a good balance of a small vocabulary (of lemmas) and good accuracy. This is especially clear for a morphologically richer language like French, where lemmatization is homogeneously better for all models, even in w2v, for which surface forms were better for English. In terms of models, the w2v models performed better than PPMI for Reddy++, both were in a tie for Farahmand, and w2v was outperformed by PPMI-thresh for French. The performance of glove for English was underwhelming, probably because we did not perform parameter tuning. As shown by Salehi et al. (2015), PPMI-TopK is not an appropriate DSM for this task, as it does not model relevant cooccurrence very well. The average Spearman's ρ for Reddy over all tested parameter configurations was 0.71 for both w2v models and 0.67 for PPMI-SVD and PPMI-thresh, and this was also observed for the other datasets. In short, both types of models can obtain good results. While PPMI-thresh is a simple, fast and inexpensive model to build, w2v has a free and push-button implementation and requires less hyper-parameter tuning, as it seems more robust to parameter variation. More generally, the best results obtained for Reddy and Farahmand are comparable to and even outperform the state of the art, as shown in Tables 1 and 2, when strict evaluation is adopted (that is, when not using a fallback strategy for missing compounds).

Table 1: Comparison of our best models with state-of-the-art ρ for Reddy.
Model & Parameters                           | Result
Reddy et al. (2011)                          | .71
Salehi et al. (2014)                         | .74
Salehi et al. (2015)                         | .80
Best w2v (sg, WF=surface, D=750, W=1)        | .82 (.80)
Best PPMI (thresh, WF=surface, D=750, W=8)   | .80 (.80)
Best glove (WF=lemmapos, D=250, W=8)         | .76 (.76)

Table 2: Comparison of our best models with state-of-the-art BF1 for Farahmand.
Model & Parameters                           | Result
Yazdani et al. (2015)                        | .49
Best w2v (sg, WF=lemma, D=500, W=1)          | .51 (.47)
Best PPMI (svd, WF=lemma, D=750, W=4)        | .52 (.45)
Best glove (WF=lemma, D=500, W=8)            | .40 (.36)

Table note: DSM parameters: WF: WORDFORM, D: DIMENSION, W: WINDOWSIZE. Results in parentheses are for loose evaluation, using fallback.

5 Conclusions In this paper we presented a multilingual, large-scale evaluation of DSMs for compound compositionality prediction. We have built 816 DSMs and performed 2,856 evaluations, examining the impact of corpus and context parameters, namely the level of corpus preprocessing, the context window size and the number of dimensions. Evaluation on 3 English datasets and a French one revealed that a large dimension is consistently better, and corpus preprocessing is usually beneficial. The choice of window size varies according to language and dataset, but a small window can often provide a good performance. The DSMs w2v and PPMI alternated in providing the best results. Moreover, the results obtained were comparable to and even outperformed the state of the art. As future work, we plan to examine the use of a voting scheme for combining the output of complementary DSMs.
Moreover, we also plan to combine additional sources of information for building the models, such as multilingual resources or translation data, to improve even further the compositionality prediction. We would also like to propose and evaluate more sophisticated compositionality functions that take into account the unbalanced contribution of individual words to the global meaning of a compound. Acknowledgments This work has been partly funded by projects PARSEME (Cost Action IC1207), PARSEMEFR (ANR-14-CERA-0001), AIM-WEST (FAPERGS-INRIA 1706-2551/13-7), CNPq 482520/2012-4, 312114/2015-0, “Simplificac¸˜ao Textual de Express˜oes Complexas”, sponsored by Samsung Eletrˆonica da Amazˆonia Ltda. under the terms of Brazilian federal law No. 8.248/91. References Eneko Agirre, Enrique Alfonseca, Keith B. Hall, Jana Kravalova, Marius Pasca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and wordnet-based approaches. In Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings, May 31 - June 5, 2009, Boulder, Colorado, USA, pages 19–27. The Association for Computational Linguistics. Timothy Baldwin, Colin Bannard, Takaaki Tanaka, and Dominic Widdows. 2003. An empirical model of multiword expression decomposability. In Francis Bond, Anna Korhonen, Diana McCarthy, and Aline Villavicencio, editors, Proc. of the ACL Workshop on MWEs: Analysis, Acquisition and Treatment (MWE 2003), pages 89–96, Sapporo, Japan, Jul. ACL. Marco Baroni and Alessandro Lenci. 2010. Distributional memory: A general framework for corpus-based semantics. Computational Linguistics, 36(4):673–721. Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The wacky wide web: a collection of very large linguistically processed web-crawled corpora. Lang. Res. & Eval., 43(3):209–226, Sep. Marco Baroni, Georgiana Dinu, and Germ´an Kruszewski. 2014. Don’t count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long 1994 Papers), pages 238–247, Baltimore, Maryland, June. Association for Computational Linguistics. Jos´e Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. A framework for the construction of monolingual and cross-lingual word similarity datasets. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 1–7, Beijing, China, July. Association for Computational Linguistics. James Curran and Marc Moens. 2002. Scaling context space. In Proc. of the 40th ACL (ACL 2002), pages 231–238, Philadelphia, PA, USA, Jul. ACL. Georgiana Dinu, Nghia The Pham, and Marco Baroni. 2013. DISSECT - DIStributional SEmantics composition toolkit. In Proc. of the ACL 2013 System Demonstrations, pages 31–36, Sofia, Bulgaria, Aug. ACL. Katrin Erk and Sebastian Pad´o. 2010. Exemplar-based models for word meaning in context. In ACL 2010, Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, July 11-16, 2010, Uppsala, Sweden, Short Papers, pages 92–97. The Association for Computer Linguistics. Meghdad Farahmand, Aaron Smith, and Joakim Nivre. 2015. A multiword expression data set: Annotating non-compositionality and conventionalization for english noun compounds. 
In Proceedings of the 11th Workshop on Multiword Expressions, pages 29–33, Denver, Colorado, June. Association for Computational Linguistics. Olivier Ferret. 2013. Identifying bad semantic neighbors for improving distributional thesauri. In Proc. of the 51st ACL (Volume 1: Long Papers), pages 561–571, Sofia, Bulgaria, Aug. ACL. Olivier Ferret. 2014. Compounds and distributional thesauri. In Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Hrafn Loftsson, Bente Maegaard, Joseph Mariani, Asunci´on Moreno, Jan Odijk, and Stelios Piperidis, editors, Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014), Reykjavik, Iceland, May 26-31, 2014., pages 2979–2984. European Language Resources Association (ELRA). Dayne Freitag, Matthias Blume, John Byrnes, Edmond Chow, Sadik Kapadia, Richard Rohwer, and Zhiqiang Wang. 2005. New experiments in distributional representations of synonymy. In Ido Dagan and Dan Gildea, editors, Proc. of the Ninth CoNLL (CoNLL-2005), pages 25–32, University of Michigan, MI, USA, Jun. ACL. Roxana Girju, Dan Moldovan, Marta Tatu, and Daniel Antohe. 2005. On the semantics of noun compounds. Computer speech & language, 19(4):479– 496. Iris Hendrickx, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Stan Szpakowicz, and Tony Veale. 2013. Semeval-2013 task 4: Free paraphrases of noun compounds. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 138–143, Atlanta, Georgia, USA, June. Association for Computational Linguistics. Su Nam Kim and Timothy Baldwin. 2006. Interpreting semantic relations in noun compounds via verb semantics. In James Curran, editor, Proc. of the COLING/ACL 2006 Main Conference Poster Sessions, pages 491–498, Sidney, Australia, Jul. ACL. Gabriella Lapesa and Stefan Evert. 2014. A large scale evaluation of distributional semantic models: Parameters, interactions and model selection. Transactions of the Association for Computational Linguistics, 2:531–545. Mark Lauer. 1995. How much is enough?: Data requirements for statistical NLP. CoRR, abs/cmplg/9509001. Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the 31th International Conference on Machine Learning, ICML 2014, Beijing, China, 2126 June 2014, volume 32 of JMLR Proceedings, pages 1188–1196. JMLR.org. Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211–225. Dekang Lin. 1999. Automatic identification of noncompositional phrases. In Proc. of the 37th ACL (ACL 1999), pages 317–324, College Park, MD, USA, Jun. ACL. Paweł Mandera, Emmanuel Keuleers, and Marc Brysbaert. 2016. Explaining human performance in psycholinguistic tasks with models of semantic similarity based on prediction and counting: A review and empirical validation. Journal of Memory and Language. Diana McCarthy, Bill Keller, and John Carroll. 2003. Detecting a continuum of compositionality in phrasal verbs. In Francis Bond, Anna Korhonen, Diana McCarthy, and Aline Villavicencio, editors, Proc. of the ACL Workshop on MWEs: Analysis, Acquisition and Treatment (MWE 2003), pages 73–80, Sapporo, Japan, Jul. ACL. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013a. Distributed representations of words and phrases and their compositionality. 
In Advances in neural information processing systems, pages 3111–3119. 1995 Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Christopher J. C. Burges, L´eon Bottou, Zoubin Ghahramani, and Kilian Q. Weinberger, editors, Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States., pages 3111–3119. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continuous space word representations. In Lucy Vanderwende, Hal Daum´e III, and Katrin Kirchhoff, editors, Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings, June 9-14, 2013, Westin Peachtree Plaza Hotel, Atlanta, Georgia, USA, pages 746–751. The Association for Computational Linguistics. Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proc. of the 46th ACL: HLT (ACL-08: HLT), pages 236–244, Columbus, OH, USA, Jun. ACL. Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive science, 34(8):1388–1429. Preslav Nakov. 2008. Paraphrasing verbs for noun compound interpretation. In Proc. of the LREC Workshop Towards a Shared Task for MWEs (MWE 2008), pages 46–49. Preslav Nakov. 2013. On the interpretation of noun compounds: Syntax, semantics, and entailment. Nat. Lang. Eng. Special Issue on Noun Compounds, 19(3):291–330. Sebastian Pad´o and Mirella Lapata. 2003. Constructing semantic space models from parsed corpora. In Proc. of the 41st ACL (ACL 2003), pages 128–135, Sapporo, Japan, Jul. ACL. Sebastian Pad´o and Mirella Lapata. 2007. Dependency-based construction of semantic space models. Computational Linguistics, 33(2):161–199. Muntsa Padr´o, Marco Idiart, Aline Villavicencio, and Carlos Ramisch. 2014a. Comparing similarity measures for distributional thesauri. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC 2014), Reykjavik, Iceland, May. European Language Resources Association. Muntsa Padr´o, Marco Idiart, Aline Villavicencio, and Carlos Ramisch. 2014b. Nothing like good old frequency: Studying context filters for distributional thesauri. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2014) - short papers, Doha, Qatar, Oct. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar, October. Association for Computational Linguistics. Carlos Ramisch, Silvio Cordeiro, Leonardo Zilio, Marco Idiart, Aline Villavicencio, and Rodrigo Wilkens. 2016. How naked is the naked truth? a multilingual lexicon of nominal compound compositionality. In Proc. of the 55th ACL (Volume 2: Short Papers), Berlin, Germany, Aug. ACL. Siva Reddy, Diana McCarthy, and Suresh Manandhar. 2011. An empirical study on compositionality in compound nouns. In Proceedings of The 5th International Joint Conference on Natural Language Processing 2011 (IJCNLP 2011), Chiang Mai, Thailand, November. Martin Riedl and Chris Biemann. 2012. Topictiling: A text segmentation algorithm based on LDA. In Proc. of the ACL 2012 SRW, pages 37–42, Jeju, Republic of Korea, Jul. 
ACL. Stephen Roller, Sabine Schulte im Walde, and Silke Scheible. 2013. The (un)expected effects of applying standard cleansing models to human ratings on compositionality. In Valia Kordoni, Carlos Ramisch, and Aline Villavicencio, editors, Proc. of the 9th Workshop on MWEs (MWE 2013), pages 32– 41, Atlanta, GA, USA, Jun. ACL. Ivan Sag, Timothy Baldwin, Francis Bond, Ann Copestake, and Dan Flickinger. 2002. Multiword expressions: A pain in the neck for NLP. In Proc. of the 3rd CICLing (CICLing-2002), volume 2276/2010 of LNCS, pages 1–15, Mexico City, Mexico, Feb. Springer. Bahar Salehi, Paul Cook, and Timothy Baldwin. 2014. Using distributional similarity of multi-way translations to predict multiword expression compositionality. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 472–481, Gothenburg, Sweden, April. Association for Computational Linguistics. Bahar Salehi, Paul Cook, and Timothy Baldwin. 2015. A word embedding approach to predicting the compositionality of multiword expressions. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 977–983, Denver, Colorado, May–June. Association for Computational Linguistics. Stephen Tratz and Eduard Hovy. 2010. ISI: Automatic classification of relations between nominals using a maximum entropy classifier. In Katrin Erk and Carlo Strapparava, editors, Proc. of the 5th SemEval (SemEval 2010), pages 222–225, Uppsala, Sweden, Jul. ACL. 1996 Tim van de Cruys, Laura Rimell, Thierry Poibeau, and Anna Korhonen. 2012. Multi-way tensor factorization for unsupervised lexical acquisition. In Proc. of the 24th COLING (COLING 2012), pages 2703– 2720, Mumbai, India, Dec. The Coling 2012 Organizing Committee. Majid Yazdani, Meghdad Farahmand, and James Henderson. 2015. Learning semantic composition to detect non-compositionality of multiword expressions. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1733–1742, Lisbon, Portugal, September. Association for Computational Linguistics. 1997
2016
187
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1998–2008, Berlin, Germany, August 7-12, 2016. © 2016 Association for Computational Linguistics Learning-Based Single-Document Summarization with Compression and Anaphoricity Constraints Greg Durrett Computer Science Division UC Berkeley [email protected] Taylor Berg-Kirkpatrick School of Computer Science Carnegie Mellon University [email protected] Dan Klein Computer Science Division UC Berkeley [email protected] Abstract We present a discriminative model for single-document summarization that integrally combines compression and anaphoricity constraints. Our model selects textual units to include in the summary based on a rich set of sparse features whose weights are learned on a large corpus. We allow for the deletion of content within a sentence when that deletion is licensed by compression rules; in our framework, these are implemented as dependencies between subsentential units of text. Anaphoricity constraints then improve cross-sentence coherence by guaranteeing that, for each pronoun included in the summary, the pronoun's antecedent is included as well or the pronoun is rewritten as a full mention. When trained end-to-end, our final system outperforms prior work on both ROUGE as well as on human judgments of linguistic quality. [Footnote 1: Available at http://nlp.cs.berkeley.edu] 1 Introduction While multi-document summarization is well-studied in the NLP literature (Carbonell and Goldstein, 1998; Gillick and Favre, 2009; Lin and Bilmes, 2011; Nenkova and McKeown, 2011), single-document summarization (McKeown et al., 1995; Marcu, 1998; Mani, 2001; Hirao et al., 2013) has received less attention in recent years and is generally viewed as more difficult. Content selection is tricky without redundancy across multiple input documents as a guide, and simple positional information is often hard to beat (Penn and Zhu, 2008). In this work, we tackle the single-document problem by training an expressive summarization model on a large naturally occurring corpus—the New York Times Annotated Corpus (Sandhaus, 2008), which contains around 100,000 news articles with abstractive summaries—learning to select important content with lexical features. This corpus has been explored in related contexts (Dunietz and Gillick, 2014; Hong and Nenkova, 2014), but to our knowledge it has not been directly used for single-document summarization. To increase the expressive capacity of our model we allow more aggressive compression of individual sentences by combining two different formalisms—one syntactic and the other discursive. Additionally, we incorporate a model of anaphora resolution and give our system the ability to rewrite pronominal mentions, further increasing expressivity. In order to guide the model, we incorporate (1) constraints from coreference ensuring that critical pronoun references are clear in the final summary and (2) constraints from syntactic and discourse parsers ensuring that sentence realizations are well-formed. Despite the complexity of these additional constraints, we demonstrate an efficient inference procedure using an ILP-based approach. By training our full system end-to-end on a large-scale dataset, we are able to learn a high-capacity structured model of the summarization process, contrasting with past approaches to the single-document task which have typically been heuristic in nature (Daumé and Marcu, 2002; Hirao et al., 2013).
We focus our evaluation on the New York Times Annotated corpus (Sandhaus, 2008). According to ROUGE, our system outperforms a document prefix baseline, a bigram coverage baseline adapted from a strong multi-document system (Gillick and Favre, 2009), and a discourse-informed method from prior work (Yoshida et al., 2014). Imposing discursive and referential constraints improves human judgments of linguistic clarity and referential structure—outperforming the method of Yoshida et al. (2014) and approaching the clarity of a sentence-extractive baseline—and still achieves substantially higher ROUGE score than either method. These results indicate that our model has the expressive capacity to extract important content, but is sufficiently constrained to ensure fluency is not sacrificed as a result. Past work has explored various kinds of structure for summarization. Some work has focused on improving content selection using discourse structure (Louis et al., 2010; Hirao et al., 2013), topical structure (Barzilay and Lee, 2004), or related techniques (Mithun and Kosseim, 2011). Other work has used structure primarily to reorder summaries and ensure coherence (Barzilay et al., 2001; Barzilay and Lapata, 2008; Louis and Nenkova, 2012; Christensen et al., 2013) or to represent content for sentence fusion or abstraction (Thadani and McKeown, 2013; Pighin et al., 2014). Similar to these approaches, we appeal to structures from upstream NLP tasks (syntactic parsing, RST parsing, and coreference) to restrict our model's capacity to generate. However, we go further by optimizing for ROUGE subject to these constraints with end-to-end learning.
[Figure 1: ILP formulation of our single-document summarization model.
maximize over x^UNIT, x^REF: Σ_i x^UNIT_i (w^T f(u_i)) (extraction score) + Σ_(i,j) x^REF_ij (w^T f(r_ij)) (anaphora score)
subject to: Σ_i x^UNIT_i |u_i| + Σ_(i,j) x^REF_ij (|r_ij| − 1) ≤ k (length constraint, with an adjustment for explicit mentions);
∀i,k: x^UNIT_i ≤ x^UNIT_k if u_i requires u_k (grammaticality constraints, Section 2.1);
∀i,k: x^UNIT_i ≤ x^UNIT_k if u_i requires u_k on the basis of pronoun anaphora (anaphora constraints, Section 2.2);
∀j: x^REF_ij = 1 iff no prior included textual unit mentions the entity that r_ij refers to;
∀i,k: x^UNIT_i ≤ x^UNIT_k if ∃j with x^REF_ij = 0 where the antecedent of r_ij is in u_k.
The basic model extracts a set of textual units with binary variables x^UNIT subject to a length constraint. These textual units u are scored with weights w and features f. Next, we add constraints derived from both syntactic parses and Rhetorical Structure Theory (RST) to enforce grammaticality. Finally, we add anaphora constraints derived from coreference in order to improve summary coherence. We introduce additional binary variables x^REF that control whether each pronoun is replaced with its antecedent using a candidate replacement r_ij. These are also scored in the objective and are incorporated into the length constraint.]
2 Model Our model is shown in Figure 1. Broadly, our ILP takes a set of textual units u = (u1, . . . , un) from a document and finds the highest-scoring extractive summary by optimizing over variables x^UNIT = x^UNIT_1, . . . , x^UNIT_n, which are binary indicators of whether each unit is included. Textual units are contiguous parts of sentences that serve as the fundamental units of extraction in our model.
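To make the shape of this optimization concrete, here is a minimal sketch of the basic extraction ILP only; the unit scores, lengths, and requirement pairs are made-up inputs, the PuLP modeling library stands in for whatever solver setup the authors used, and the full system additionally includes the anaphora variables and constraints of Figure 1.

from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

def extractive_ilp(scores, lengths, requires, budget):
    """scores[i]: model score of textual unit i; lengths[i]: its length in words;
    requires: pairs (i, k) meaning unit i may only be included if unit k is;
    budget: maximum summary length in words."""
    n = len(scores)
    x = [LpVariable(f"x_unit_{i}", cat=LpBinary) for i in range(n)]
    prob = LpProblem("summarization", LpMaximize)
    prob += lpSum(scores[i] * x[i] for i in range(n))                # extraction score
    prob += lpSum(lengths[i] * x[i] for i in range(n)) <= budget     # length constraint
    for i, k in requires:                                            # grammaticality constraints
        prob += x[i] <= x[k]
    prob.solve()
    return [i for i in range(n) if x[i].value() == 1]

# Toy example: unit 1 (say, an ELABORATION clause) requires unit 0.
print(extractive_ilp(scores=[2.0, 0.5, 1.2], lengths=[8, 6, 7], requires=[(1, 0)], budget=15))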
For a sentence-extractive model, these would be entire sentences, but for our compressive models we will have more fine-grained units, as shown in Figure 2 and described in Section 2.1. Textual units are scored according to features f and model parameters w learned on training data. Finally, the extraction process is subject to a length constraint of k words. This approach is similar in spirit to ILP formulations of multi-document summarization systems, though in those systems content is typically modeled in terms of bigrams (Gillick and Favre, 2009; Berg-Kirkpatrick et al., 2011; Hong and Nenkova, 2014; Li et al., 2015). For our model, type-level n-gram scoring only arises when we compute our loss function in max-margin training (see Section 3). In Section 2.1, we discuss grammaticality constraints, which take the form of introducing dependencies between textual units, as shown in Figure 2. If one textual unit requires another, it cannot be included unless its prerequisite is. We will show that different sets of requirements can capture both syntactic and discourse-based compression schemes. Furthermore, we introduce anaphora constraints (Section 2.2) via a new set of variables that capture the process of rewriting pronouns to make them explicit mentions. That is, x^REF_ij = 1 if we should rewrite the jth pronoun in the ith unit with its antecedent. These pronoun rewrites are scored in the objective and introduced into the length constraint to make sure they do not cause our summary to be too long. Finally, constraints on these variables control when they are used and also require the model to include antecedents of pronouns when the model is not confident enough to rewrite them. [Figure 2: Compression constraints on the example sentence "Ms. Johnson, dressed in jeans and a sweatshirt, is a claims adjuster with Aetna." (a) RST-based compression structure like that in Hirao et al. (2013), where we can delete the ELABORATION clause. (b) Two syntactic compression options from Berg-Kirkpatrick et al. (2011), namely deletion of a coordinate and deletion of a PP modifier. (c) Textual units and requirement relations (arrows) after merging all of the available compressions. (d) Process of augmenting a textual unit with syntactic compressions, e.g. the possible deletion of "with Aetna".] 2.1 Grammaticality Constraints Following work on isolated sentence compression (McDonald, 2006; Clarke and Lapata, 2008) and compressive summarization (Lin, 2003; Martins and Smith, 2009; Berg-Kirkpatrick et al., 2011; Woodsend and Lapata, 2012; Almeida and Martins, 2013), we wish to be able to compress sentences so we can pack more information into a summary. During training, our model learns how to take advantage of available compression options and select content to match human-generated summaries as closely as possible. [Footnote 2: The features in our model are actually rich enough to learn a sophisticated compression model, but the data we have (abstractive summaries) does not directly provide examples of correct compressions; past work has gotten around this with multi-task learning (Almeida and Martins, 2013), but we simply treat grammaticality as a constraint from upstream models.] We explore two ways of deriving units for compression: the RST-based compressions of Hirao et al. (2013) and the syntactic compressions of Berg-Kirkpatrick et al. (2011). RST compressions Figure 2a shows how to derive compressions from Rhetorical Structure Theory (Mann and Thompson, 1988; Carlson et al., 2001).
We show a sentence broken into elementary discourse units (EDUs) with RST relations between them. Units marked as SAME-UNIT must both be kept or both be deleted, but other nodes in the tree structure can be deleted as long as we do not delete the parent of an included node. For example, we can delete the ELABORATION clause, but we can delete neither the first nor last EDU. Arrows depict the constraints this gives rise to in the ILP (see Figure 1): u2 requires u1, and u1 and u3 mutually require each other. This is a more constrained form of compression than was used in past work (Hirao et al., 2013), but we find that it improves human judgments of fluency (Section 4.3). Syntactic compressions Figure 2b shows two examples of compressions arising from syntactic patterns (Berg-Kirkpatrick et al., 2011): deletion of the second part of a coordinated NP and deletion of a PP modifier to an NP. These patterns were curated to leave sentences grammatical after being compressed, though perhaps with damaged semantic content. Combined compressions Figure 2c shows the textual units and requirement relations yielded by combining these two types of compression. On this example, the two schemes capture orthogonal compressions, and more generally we find that they stack to give better results for our final system (see Section 4.3). To actually synthesize textual units and the constraints between them, we start from the set of RST textual units and introduce syntactic compressions as new children when they don't cross existing brackets; because syntactic compressions are typically narrower in scope, they are usually completely contained in EDUs. Figure 2d shows an example of this process: the possible deletion of "with Aetna" is grafted onto the textual unit and appropriate requirement relations are introduced. The net effect is that the textual unit is wholly included, partially included (with "with Aetna" removed), or not included at all. Formally, we define an RST tree as T_rst = (S_rst, π_rst), where S_rst is a set of EDU spans (i, j) and π : S → 2^S is a mapping from each EDU span to the EDU spans it depends on. Syntactic compressions can be expressed in a similar way with trees T_syn. These compressions are typically smaller-scale than EDU-based compressions, so we use the following modification scheme. Denote by T_syn(kl) a nontrivial subtree of T_syn (i.e. one that supports some compression) that is completely contained in an EDU (i, j). We build the following combined compression tree, which we refer to as the augmentation of T_rst with T_syn(kl): T_comb = (S ∪ S_syn(kl) ∪ {(i,k), (l,j)}, π_rst ∪ π_syn(kl) ∪ {(i,k) → (l,j), (l,j) → (i,k), (k,l) → (i,k)}). That is, we maintain the existing tree structure except for the EDU (i, j), which is broken into three parts: the outer two depend on each other ("is a claims adjuster" and "." from Figure 2d) and the inner one depends on the others and preserves the tree structure from T_syn. We augment T_rst with all maximal subtrees of T_syn, i.e. all trees that are not contained in other trees that are used in the augmentation process. This is broadly similar to the combined compression scheme in Kikuchi et al.
(2014) but we use a different set of constraints that more strictly enforce grammaticality.3 2.2 Anaphora Constraints What kind of cross-sentential coherence do we need to ensure for the kinds of summaries our system produces? Many notions of coherence are useful, including centering theory (Grosz et al., 1995) and lexical cohesion (Nishikawa et al., 2014), but one of the most pressing phenomena to deal with is pronoun anaphora (Clarke and Lapata, 2010). Cases of pronouns being “orphaned” during extraction (their antecedents are deleted) are 3We also differ from past work in that we do not use crosssentential RST constraints (Hirao et al., 2013; Yoshida et al., 2014). We experimented with these and found no improvement from using them, possibly because we have a featurebased model rather than a heuristic content selection procedure, and possibly because automatic discourse parsers are less good at recovering cross-sentence relations. This hasn’t been Kellogg’s year . Replacement (2.2.1): If : The oat-bran craze has cost it market share. Otherwise (i.e. if no replacement is possible): xunit 2 xunit 1 u1 u2 p1 p2 p3 Allow pronoun replacement with the predicted antecedent and add the following constraint: Add the following constraint: Kellogg it year it No replacement necessary Replace the first pronoun in the second textual unit max(p1, p2, p3) > ↵ p1 + p2 > β Antecedent inclusion (2.2.2): If xref 2,1 = 1 i↵xunit 1 = 0 an d xunit 2 = 1 Figure 3: Modifications to the ILP to capture pronoun coherence. It, which refers to Kellogg, has several possible antecedents from the standpoint of an automatic coreference system (Durrett and Klein, 2014). If the coreference system is confident about its selection (above a threshold α on the posterior probability), we allow for the model to explicitly replace the pronoun if its antecedent would be deleted (Section 2.2.1). Otherwise, we merely constrain one or more probable antecedents to be included (Section 2.2.2); even if the coreference system is incorrect, a human can often correctly interpret the pronoun with this additional context. relatively common: they occur in roughly 60% of examples produced by our summarizer when no anaphora constraints are enforced. This kind of error is particularly concerning for summary interpretation and impedes the ability of summaries to convey information effectively (Grice, 1975). Our solution is to explicitly impose constraints on the model based on pronoun anaphora resolution.4 Figure 3 shows an example of a problem case. If we extract only the second textual unit shown, the pronoun it will lose its antecedent, which in this case is Kellogg. We explore two types of constraints for dealing with this: rewriting the pronoun explicitly, or constraining the summary to include the pronoun’s antecedent. 2.2.1 Pronoun Replacement One way of dealing with these pronoun reference issues is to explicitly replace the pronoun with what it refers to. This replacement allows us to maintain maximal extraction flexibility, since we 4We focus on pronoun coreference because it is the most pressing manifestation of this problem and because existing coreference systems perform well on pronouns compared to harder instances of coreference (Durrett and Klein, 2013). 2001 can make an isolated textual unit meaningful even if it contains a pronoun. Figure 3 shows how this process works. We run the Berkeley Entity Resolution System (Durrett and Klein, 2014) and compute posteriors over possible links for the pronoun. 
If the coreference system is sufficiently confident in its prediction (i.e. maxi pi > α for a specified threshold α > 1 2), we allow ourselves to replace the pronoun with the first mention of the entity corresponding to the pronoun’s most likely antecedent. In Figure 3, if the system correctly determines that Kellogg is the correct antecedent with high probability, we enable the first replacement shown there, which is used if u2 is included the summary without u1.5 As shown in the ILP in Figure 1, we instantiate corresponding pronoun replacement variables xREF where xREF ij = 1 implies that the jth pronoun in the ith sentence should be replaced in the summary. We use a candidate pronoun replacement if and only if the pronoun’s corresponding (predicted) entity hasn’t been mentioned previously in the summary.6 Because we are generally replacing pronouns with longer mentions, we also need to modify the length constraint to take this into account. Finally, we incorporate features on pronoun replacements in the objective, which helps the model learn to prefer pronoun replacements that help it to more closely match the human summaries. 2.2.2 Pronoun Antecedent Constraints Explicitly replacing pronouns is risky: if the coreference system makes an incorrect prediction, the intended meaning of the summary may be damaged. Fortunately, the coreference model’s posterior probabilities have been shown to be wellcalibrated (Nguyen and O’Connor, 2015), meaning that cases where it is likely to make errors are signaled by flatter posterior distributions. In this case, we enable a more conservative set of constraints that include additional content in the summary to make the pronoun reference clear without explicitly replacing it. This is done by requiring the inclusion of any textual unit which contains 5If the proposed replacement is a proper mention, we replace the pronoun just with the subset of the mention that constitutes a named entity (rather than the whole noun phrase). We control for possessive pronouns by deleting or adding ’s as appropriate. 6Such a previous mention may be a pronoun; however, note that that pronoun would then be targeted for replacement unless its antecedent were included somehow. possible pronoun references whose posteriors sum to at least a threshold parameter β. Figure 3 shows that this constraint can force the inclusion of u1 to provide additional context. Although this could still lead to unclear pronouns if text is stitched together in an ambiguous or even misleading way, in practice we observe that the textual units we force to be added almost always occur very recently before the pronoun, giving enough additional context for a human reader to figure out the pronoun’s antecedent unambiguously. 2.3 Features The features in our model (see Figure 1) consist of a set of surface indicators capturing mostly lexical and configurational information. Their primary role is to identify important document content. The first three types of features fire over textual units, the last over pronoun replacements. Lexical These include indicator features on nonstopwords in the textual unit that appear at least five times in the training set and analogous POS features. We also use lexical features on the first, last, preceding, and following words for each textual unit. Finally, we conjoin each of these features with an indicator of bucketed position in the document (the index of the sentence containing the textual unit). 
Structural These features include various conjunctions of the position of the textual unit in the document, its length, the length of its corresponding sentence, the index of the paragraph it occurs in, and whether it starts a new paragraph (all values are bucketed). Centrality These features capture rough information about the centrality of content: they consist of bucketed word counts conjoined with bucketed sentence index in the document. We also fire features on the number of times of each entity mentioned in the sentence is mentioned in the rest of the document (according to a coreference system), the number of entities mentioned in the sentence, and surface properties of mentions including type and length Pronoun replacement These target properties of the pronoun replacement such as its length, its sentence distance from the current mention, its type (nominal or proper), and the identity of the pronoun being replaced. 2002 3 Learning We learn weights w for our model by training on a large corpus of documents u paired with reference summaries y. We formulate our learning problem as a standard instance of structured SVM (see Smith (2011) for an introduction). Because we want to optimize explicitly for ROUGE-1,7 we define a ROUGE-based loss function that accommodates the nature of our supervision, which is in terms of abstractive summaries y that in general cannot be produced by our model. Specifically, we take: ℓ(xNGRAM, y) = maxx∗ROUGE-1(x∗, y) −ROUGE-1(xNGRAM, y) i.e. the gap between the hypothesis’s ROUGE score and the oracle ROUGE score achievable under the model (including constraints). Here xNGRAM are indicator variables that track, for each n-gram type in the reference summary, whether that n-gram is present in the system summary. These are the sufficient statistics for computing ROUGE. We train the model via stochastic subgradient descent on the primal form of the structured SVM objective (Ratliff et al., 2007; Kummerfeld et al., 2015). In order to compute the subgradient for a given training example, we need to find the most violated constraint on the given instance through a loss-augmented decode, which for a linear model takes the form arg maxx w⊤f(x)+ℓ(x, y). To do this decode at training time in the context of our model, we use an extended version of our ILP in Figure 1 that is augmented to explicitly track typelevel n-grams: max xUNIT,xREF,xNGRAM "X i h x UNIT i (w⊤f(ui)) i + X (i,j) h x REF ij (w⊤f(rij)) i −ℓ(x NGRAM, y)   subject to all constraints from Figure 1, and x NGRAM i = 1 iff an included textual unit or replacement contains the ith reference n-gram These kinds of variables and constraints are common in multi-document summarization systems 7We found that optimizing for ROUGE-1 actually resulted in a model with better performance on both ROUGE-1 and ROUGE-2. We hypothesize that this is because framing our optimization in terms of ROUGE-2 would lead to a less nuanced set of constraints: bigram matches are relatively rare when the reference is a short, abstractive summary, so a loss function based on ROUGE-2 will express a flatter preference structure among possible outputs. that score bigrams (Gillick and Favre, 2009 inter alia). Note that since ROUGE is only computed over non-stopword n-grams and pronoun replacements only replace pronouns, pronoun replacement can never remove an n-gram that would otherwise be included. 
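For concreteness, the sketch below computes the ROUGE-1-based loss and the value of the loss-augmented objective for a fixed candidate selection. It is illustrative only, not the implementation used in this work: the toy stopword list, the precomputed oracle ROUGE term and all names are assumptions, the pronoun-replacement variables are omitted for brevity, and the actual arg max is found with the extended ILP described above.

```python
from typing import List, Set

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is"}  # toy list, assumption

def content_types(tokens: List[str]) -> Set[str]:
    # Type-level non-stopword unigrams: the sufficient statistics for ROUGE-1 here.
    return {t.lower() for t in tokens if t.isalnum() and t.lower() not in STOPWORDS}

def rouge1(covered: Set[str], reference: List[str]) -> float:
    # Unigram recall of the reference summary given the covered unigram types.
    ref = content_types(reference)
    return len(covered & ref) / max(1, len(ref))

def rouge_loss(included_units: List[List[str]], reference: List[str],
               oracle_rouge1: float) -> float:
    # loss(x_NGRAM, y): gap between the oracle ROUGE-1 achievable under the model's
    # constraints (a per-example constant, assumed precomputed) and the hypothesis's.
    covered: Set[str] = set()
    for unit in included_units:
        covered |= content_types(unit)
    return oracle_rouge1 - rouge1(covered, reference)

def loss_augmented_value(selection: List[int], unit_scores: List[float],
                         units: List[List[str]], reference: List[str],
                         oracle_rouge1: float) -> float:
    # Value of w^T f(x) + loss(x, y) for one 0/1 selection of textual units;
    # the loss-augmented decode maximises this over all feasible selections.
    model_score = sum(s for x, s in zip(selection, unit_scores) if x == 1)
    included = [u for x, u in zip(selection, units) if x == 1]
    return model_score + rouge_loss(included, reference, oracle_rouge1)
```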
For all experiments, we optimize our objective using AdaGrad (Duchi et al., 2011) with ℓ1 regularization (λ = 10−8, chosen by grid search), with a step size of 0.1 and a minibatch size of 1. We train for 10 iterations on the training data, at which point held-out model performance no longer improves. Finally, we set the anaphora thresholds α = 0.8 and β = 0.6 (see Section 2.2). The values of these and other hyperparameters were determined on a held-out development set from our New York Times training data. All ILPs are solved using GLPK version 4.55. 4 Experiments We primarily evaluate our model on a roughly 3000-document evaluation set from the New York Times Annotated Corpus (Sandhaus, 2008). We also investigate its performance on the RST Discourse Treebank (Carlson et al., 2001), but because this dataset is only 30 documents it provides much less robust estimates of performance.8 Throughout this section, when we decode a document, we set the word budget for our summarizer to be the same as the number of words in the corresponding reference summary, following previous work (Hirao et al., 2013; Yoshida et al., 2014). 4.1 Preprocessing We preprocess all data using the Berkeley Parser (Petrov et al., 2006), specifically the GPUaccelerated version of the parser from Hall et al. (2014), and the Berkeley Entity Resolution System (Durrett and Klein, 2014). For RST discourse analysis, we segment text into EDUs using a semiMarkov CRF trained on the RST treebank with features on boundaries similar to those of Hernault et al. (2010), plus novel features on spans including span length and span identity for short spans. To follow the conditions of Yoshida et al. (2014) as closely as possible, we also build a discourse parser in the style of Hirao et al. (2013), since their parser is not publicly available. Specifically, 8Tasks like DUC and TAC have focused on multidocument summarization since around 2003, hence the lack of more standard datasets for single-document summarization. 2003 Article on Speak-Up, program begun by Westchester County Office for the Aging to bring together elderly and college students. National Center for Education Statistics reports students in 4th, 8th and 12th grades scored modestly higher on American history test than five years earlier. Says more than half of high school seniors still show poor command of basic facts. Only 4th graders made any progress in civics test. New exam results are another ingredient in debate over renewing Pres Bush’s signature No Child Left Behind Act. Filtered article: NYT50 article: Summary: Summary: Federal officials reported yesterday that students in 4th, 8th and 12th grades had scored modestly higher on an American history test than five years earlier, although more than half of high school seniors still showed poor command of basic facts like the effect of the cotton gin on the slave economy or the causes of the Korean War. Federal officials said they considered the results encouraging because at each level tested, student performance had improved since the last time the exam was administered, in 2001. “In U.S. history there were higher scores in 2006 for all three grades,” said Mark Schneider, commissioner of the National Center for Education Statistics, which administers the test, at a Boston news conference that the Education Department carried by Webcast. The results were less encouraging on a national civics test, on which only fourth graders made any progress. 
The best results in the history test were also in fourth grade, where 70 percent of students attained the basic level of achievement or better. The test results in the two subjects are likely to be closely studied, because Congress is considering the renewal of President Bush's signature education law, the No Child Left Behind Act. A number of studies have shown that because No Child Left Behind requires states… Long before President Bush's proposal to rethink Social Security became part of the national conversation, Westchester County came up with its own dialogue to bring issues of aging to the forefront. Before the White House Conference on Aging scheduled in October, the county's Office for the Aging a year ago started Speak-Up, which stands for Student Participants Embrace Aging Issues of Key Concern, to reach students in the county's 13 colleges and universities. Through a variety of events to bring together the elderly and college students, organizers said they hoped to have by this spring a series of recommendations that could be given to Washington… Figure 4: Examples of an article kept in the NYT50 dataset (top) and an article removed because the summary is too short. The top summary has a rich structure to it, corresponding to various parts of the document (bolded) and including some text that is essentially a direct extraction. Count 0 250 500 750 1000 Sentence index in document 0 5 10 15 20 Oracle sentences First k sentences ≥ Figure 5: Counts on a 1000-document sample of how frequently both a document prefix baseline and a ROUGE oracle summary contain sentences at various indices in the document. There is a long tail of useful sentences later in the document, as seen by the fact that the oracle sentence counts drop off relatively slowly. Smart selection of content therefore has room to improve over taking a prefix of the document. we use the first-order projective parsing model of McDonald et al. (2005) and features from Soricut and Marcu (2003), Hernault et al. (2010), and Joty et al. (2013). When using the same head annotation scheme as Yoshida et al. (2014), we outperform their discourse dependency parser on unlabeled dependency accuracy, getting 56% as opposed to 53%. 4.2 New York Times Corpus We now provide some details about the New York Times Annotated corpus. This dataset contains 110,540 articles with abstractive summaries; we split these into 100,834 training and 9706 test examples, based on date of publication (test is all articles published on January 1, 2007 or later). Examples of two documents from this dataset are shown in Figure 4. The bottom example demonstrates that some summaries are extremely short and formulaic (especially those for obituaries and editorials). To counter this, we filter the raw dataset by removing all documents with summaries that are shorter than 50 words. One benefit of filtering is that the length distribution of our resulting dataset is more in line with standard summarization evaluations like DUC; it also ensures a sufficient number of tokens in the budget to produce nontrivial summaries. The filtered test set, which we call NYT50, includes 3,452 test examples out of the original 9,706. Interestingly, this dataset is one where the classic document prefix baseline can be substantially outperformed, unlike in some other summarization settings (Penn and Zhu, 2008). We show this fact explicitly in Section 4.3, but Figure 5 provides additional analysis in this regard. 
We compute oracle ROUGE-1 sentence-extractive summaries on a 1000-document subset of the training set and look at where the extracted sentences lie in the document. While they certainly skew earlier in the document, they do not all fall within the doc2004 R-1 ↑ R-2 ↑ CG ↑ UP ↓ Baselines First sentences 28.6 17.3 8.21 0.28 First k words 35.7 21.6 − − Bigram Frequency 25.1 9.8 − − Past work Tree Knapsack 34.7 19.6 7.20 0.42 This work Sentence extraction 38.8 23.5 7.93 0.32 EDU extraction 41.9 25.3 6.38 0.65 Full 42.2 25.9 *†7.52 *0.36 Ablations from Full No Anaphoricity 42.5 26.3 7.46 0.44 No Syntactic Compr 41.1 25.0 − − No Discourse Compr 40.5 24.7 − − Table 1: Results on the NYT50 test set (documents with summaries of at least 50 tokens) from the New York Times Annotated Corpus (Sandhaus, 2008). We report ROUGE-1 (R-1), ROUGE-2 (R-2), clarity/grammaticality (CG), and number of unclear pronouns (UP) (lower is better). On content selection, our system substantially outperforms all baselines, our implementation of the tree knapsack system (Yoshida et al., 2014), and learned extractive systems with less compression, even an EDU-extractive system that sacrifices grammaticality. On clarity metrics, our final system performs nearly as well as sentence-extractive systems. The symbols * and † indicate statistically significant gains compared to No Anaphoricity and Tree Knapsack (respectively) with p < 0.05 according to a bootstrap resampling test. We also see that removing either syntactic or EDU-based compressions decreases ROUGE. ument prefix summary. One reason for this is that many of the articles are longer-form pieces that begin with a relatively content-free lede of several sentences, which should be identifiable with lexicosyntactic indicators as are used in our discriminative model. 4.3 New York Times Results We evaluate our system along two axes: first, on content selection, using ROUGE9 (Lin and Hovy, 2003), and second, on clarity of language and referential structure, using annotators from Amazon Mechanical Turk. We follow the method of Gillick and Liu (2010) for this evaluation and ask Turkers to rate a summary on how grammatical it is using a 10-point Likert scale. Furthermore, we ask how many unclear pronouns references there were in the text. The Turkers do not see the original document or the reference summary, and rate each summary in isolation. Gillick and Liu (2010) showed that for linguistic quality judgments (as opposed to content judgments), Turkers reproduced the ranking of systems according to expert judgments. To speed up preprocessing and training time 9We use the ROUGE 1.5.5 script with the following command line arguments: -n 2 -x -m -s. All given results are macro-averaged recall values over the test set. on this corpus, we further restrict our training set to only contain documents with fewer than 100 EDUs. All told, the final system takes roughly 20 hours to make 10 passes through the subsampled training data (22,000 documents) on a single core of an Amazon EC2 r3.4xlarge instance. Table 1 shows the results on the NYT50 corpus. We compare several variants of our system and baselines. For baselines, we use two variants of first k: one which must stop on a sentence boundary (which gives better linguistic quality) and one which always consumes k tokens (which gives better ROUGE). 
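The two first-k variants just described can be made precise with the following sketch (illustrative only; the interface is an assumption, not the evaluation code used here):

```python
from typing import List

def first_k_tokens(sentences: List[List[str]], k: int) -> List[str]:
    # Variant that always consumes exactly k tokens, even if this cuts a sentence.
    out: List[str] = []
    for sent in sentences:
        for tok in sent:
            if len(out) == k:
                return out
            out.append(tok)
    return out

def first_k_sentences(sentences: List[List[str]], k: int) -> List[str]:
    # Variant that must stop on a sentence boundary: keep whole leading sentences
    # while they fit within the k-token budget.
    out: List[str] = []
    for sent in sentences:
        if len(out) + len(sent) > k:
            break
        out.extend(sent)
    return out
```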
We also use a heuristic sentence-extractive baseline that maximizes the document counts (term frequency) of bigrams covered by the summary, similar in spirit to the multi-document method of Gillick and Favre (2009).10 We also compare to our implementation of the Tree Knapsack method of Yoshida et al. (2014), which matches their results very closely on the RST Discourse Treebank when discourse trees are controlled for. Finally, we compare several variants of our system: purely extractive systems operating over sentences and EDUs respectively, our full system, and ablations removing either the anaphoricity component or parts of the compression module. In terms of content selection, we see that all of the systems that incorporate end-to-end learning (under “This work”) substantially outperform our various heuristic baselines. Our full system using the full compression scheme is substantially better on ROUGE than ablations where the syntactic or discourse compressions are removed. These improvements reflect the fact that more compression options give the system more flexibility to include key content words. Removing the anaphora resolution constraints actually causes ROUGE to increase slightly (as a result of granting the model flexibility), but has a negative impact on the linguistic quality metrics. On our linguistic quality metrics, it is no surprise that the sentence prefix baseline performs the best. Our sentence-extractive system also does well on these metrics. Compared to the EDUextractive system with no constraints, our constrained compression method improves substantially on both linguistic quality and reduces the 10Other heuristic multi-document approaches could be compared to, e.g. He et al. (2012), but a simple term frequency method suffices to illustrate how these approaches can underperform in the single-document setting. 2005 ROUGE-1 ROUGE-2 First k words 23.5 8.3 Tree Knapsack 25.1 8.7 Full 26.3 8.0 Table 2: Results for RST Discourse Treebank (Carlson et al., 2001). Differences between our system and the Tree Knapsack system of Yoshida et al. (2014) are not statistically significant, reflecting the high variance in this small (20 document) test set. number of unclear pronouns, and adding the pronoun anaphora constraints gives further improvement. Our final system is approaches the sentenceextractive baseline, particularly on unclear pronouns, and achieves substantially higher ROUGE score. 4.4 RST Treebank We also evaluate on the RST Discourse Treebank, of which 30 documents have abstractive summaries. Following Hirao et al. (2013), we use the gold EDU segmentation from the RST corpus but automatic RST trees. We break this into a 10document development set and a 20-document test set. Table 2 shows the results on the RST corpus. Our system is roughly comparable to Tree Knapsack here, and we note that none of the differences in the table are statistically significant. We also observed significant variation between multiple runs on this corpus, with scores changing by 1-2 ROUGE points for slightly different system variants.11 5 Conclusion We presented a single-document summarization system trained end-to-end on a large corpus. We integrate a compression model that enforces grammaticality as well as pronoun anaphoricity constraints that enforce coherence. Our system improves substantially over baseline systems on ROUGE while still maintaining good linguistic quality. Our system and models are publicly available at http://nlp.cs.berkeley.edu 11The system of Yoshida et al. 
(2014) is unavailable, so we use a reimplementation. Our results differ from theirs due to having slightly different discourse trees, which cause large changes in metrics due to high variance on the test set. Acknowledgments This work was partially supported by NSF Grant CNS-1237265 and a Google Faculty Research Award. Thanks to Tsutomu Hirao for providing assistance with our reimplementation of the Tree Knapsack model, and thanks the anonymous reviewers for their helpful comments. References Miguel Almeida and Andre Martins. 2013. Fast and Robust Compressive Summarization with Dual Decomposition and Multi-Task Learning. In Proceedings of the Association for Computational Linguistics (ACL). Regina Barzilay and Mirella Lapata. 2008. Modeling Local Coherence: An Entity-based Approach. Computational Linguistics, 34(1):1–34, March. Regina Barzilay and Lillian Lee. 2004. Catching the Drift: Probabilistic Content Models, with Applications to Generation and Summarization. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL). Regina Barzilay, Noemie Elhadad, and Kathleen R. McKeown. 2001. Sentence Ordering in Multidocument Summarization. In Proceedings of the International Conference on Human Language Technology Research. Taylor Berg-Kirkpatrick, Dan Gillick, and Dan Klein. 2011. Jointly Learning to Extract and Compress. In Proceedings of the Association for Computational Linguistics (ACL). Jaime Carbonell and Jade Goldstein. 1998. The Use of MMR, Diversity-based Reranking for Reordering Documents and Producing Summaries. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval. Lynn Carlson, Daniel Marcu, and Mary Ellen Okurowski. 2001. Building a Discourse-tagged Corpus in the Framework of Rhetorical Structure Theory. In Proceedings of the Second SIGDIAL Workshop on Discourse and Dialogue. Janara Christensen, Mausam, Stephen Soderland, and Oren Etzioni. 2013. Towards Coherent MultiDocument Summarization. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL). James Clarke and Mirella Lapata. 2008. Global Inference for Sentence Compression an Integer Linear Programming Approach. Journal of Artificial Intelligence Research, 31(1):399–429, March. 2006 James Clarke and Mirella Lapata. 2010. Discourse Constraints for Document Compression. Computational Linguistics, 36(3):411–441, September. Hal Daum´e, III and Daniel Marcu. 2002. A NoisyChannel Model for Document Compression. In Proceedings of the Association for Computational Linguistics (ACL). John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research, 12:2121–2159, July. Jesse Dunietz and Daniel Gillick. 2014. A New Entity Salience Task with Millions of Training Examples. In Proceedings of the European Chapter of the Association for Computational Linguistics (EACL). Greg Durrett and Dan Klein. 2013. Easy Victories and Uphill Battles in Coreference Resolution. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), October. Greg Durrett and Dan Klein. 2014. A Joint Model for Entity Analysis: Coreference, Typing, and Linking. In Transactions of the Association for Computational Linguistics (TACL). Dan Gillick and Benoit Favre. 2009. A Scalable Global Model for Summarization. 
In Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing. Dan Gillick and Yang Liu. 2010. Non-Expert Evaluation of Summarization Systems is Risky. In Proceedings of the NAACL Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk. H.P. Grice. 1975. Logic and Conversation. Syntax and Semantics 3: Speech Acts, pages 41–58. Barbara J. Grosz, Scott Weinstein, and Aravind K. Joshi. 1995. Centering: A Framework for Modeling the Local Coherence of Discourse. Computational Linguistics, 21(2):203–225, June. David Hall, Taylor Berg-Kirkpatrick, John Canny, and Dan Klein. 2014. Sparser, Better, Faster GPU Parsing. In Proceedings of the Association for Computational Linguistics (ACL). Zhanying He, Chun Chen, Jiajun Bu, Can Wang, Lijun Zhang, Deng Cai, and Xiaofei He. 2012. Document Summarization Based on Data Reconstruction. In Proceedings of the Association for the Advancement of Artificial Intelligence (AAAI). Hugo Hernault, Helmut Prendinger, David A. Duverle, Mitsuru Ishizuka, and Tim Paek. 2010. HILDA: A discourse parser using support vector machine classification. Dialogue and Discourse, 1:1–33. Tsutomu Hirao, Yasuhisa Yoshida, Masaaki Nishino, Norihito Yasuda, and Masaaki Nagata. 2013. Single-Document Summarization as a Tree Knapsack Problem. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Kai Hong and Ani Nenkova. 2014. Improving the Estimation of Word Importance for News MultiDocument Summarization. In Proceedings of the European Chapter of the Association for Computational Linguistics (EACL). Shafiq Joty, Giuseppe Carenini, Raymond Ng, and Yashar Mehdad. 2013. Combining Intra- and Multi-sentential Rhetorical Parsing for Documentlevel Discourse Analysis. In Proceedings of the Association for Computational Linguistics (ACL). Yuta Kikuchi, Tsutomu Hirao, Hiroya Takamura, Manabu Okumura, and Masaaki Nagata. 2014. Single Document Summarization based on Nested Tree Structure. In Proceedings of the Association for Computational Linguistics (ACL). Jonathan K. Kummerfeld, Taylor Berg-Kirkpatrick, and Dan Klein. 2015. An Empirical Analysis of Optimization for Max-Margin NLP. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Chen Li, Yang Liu, and Lin Zhao. 2015. Using External Resources and Joint Learning for Bigram Weighting in ILP-Based Multi-Document Summarization. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL). Hui Lin and Jeff Bilmes. 2011. A Class of Submodular Functions for Document Summarization. In Proceedings of the Association for Computational Linguistics (ACL). Chin-Yew Lin and Eduard Hovy. 2003. Automatic Evaluation of Summaries Using N-gram CoOccurrence Statistics. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL). Chin-Yew Lin. 2003. Improving Summarization Performance by Sentence Compression: A Pilot Study. In Proceedings of the International Workshop on Information Retrieval with Asian Languages. Annie Louis and Ani Nenkova. 2012. A Coherence Model Based on Syntactic Patterns. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). Annie Louis, Aravind Joshi, and Ani Nenkova. 2010. Discourse Indicators for Content Selection in Summarization. In Proceedings of the SIGDIAL 2010 Conference. 2007 Inderjeet Mani. 2001. AutomaticSummarization. 
John Benjamins Publishing. William C. Mann and Sandra A. Thompson. 1988. Rhetorical Structure Theory: Toward a Functional Theory of Text Organization. Text, 8(3):243–281. Daniel Marcu. 1998. Improving summarization through rhetorical parsing tuning. In Proceedings of the Workshop on Very Large Corpora. Andre Martins and Noah A. Smith. 2009. Summarization with a Joint Model for Sentence Extraction and Compression. In Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online Large-margin Training of Dependency Parsers. In Proceedings of the Association for Computational Linguistics (ACL). Ryan McDonald. 2006. Discriminative Sentence Compression With Soft Syntactic Evidence. In Proceedings of the European Chapter of the Association for Computational Linguistics (EACL). Kathleen McKeown, Jacques Robin, and Karen Kukich. 1995. Generating Concise Natural Language Summaries. Information Processing and Management, 31(5):703–733, September. Shamima Mithun and Leila Kosseim. 2011. Discourse Structures to Reduce Discourse Incoherence in Blog Summarization. In Proceedings of Recent Advances in Natural Language Processing. Ani Nenkova and Kathleen McKeown. 2011. Automatic summarization. Foundations and Trends in Information Retrieval, 5(2?3):103–233. Khanh Nguyen and Brendan O’Connor. 2015. Posterior Calibration and Exploratory Analysis for Natural Language Processing Models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Hitoshi Nishikawa, Kazuho Arita, Katsumi Tanaka, Tsutomu Hirao, Toshiro Makino, and Yoshihiro Matsuo. 2014. Learning to Generate Coherent Summary with Discriminative Hidden Semi-Markov Model. In Proceedings of the International Conference on Computational Linguistics (COLING). Gerald Penn and Xiaodan Zhu. 2008. A Critical Reassessment of Evaluation Baselines for Speech Summarization. In Proceedings of the Association for Computational Linguistics (ACL). Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning Accurate, Compact, and Interpretable Tree Annotation. In Proceedings of the Conference on Computational Linguistics and the Association for Computational Linguistics (ACLCOLING). Daniele Pighin, Marco Cornolti, Enrique Alfonseca, and Katja Filippova. 2014. Modelling Events through Memory-based, Open-IE Patterns for Abstractive Summarization. In Proceedings of the Association for Computational Linguistics (ACL). Nathan J. Ratliff, Andrew Bagnell, and Martin Zinkevich. 2007. (Online) Subgradient Methods for Structured Prediction. In Proceedings of the International Conference on Artificial Intelligence and Statistics. Evan Sandhaus. 2008. The New York Times Annotated Corpus. In Linguistic Data Consortium. Noah A. Smith. 2011. Linguistic Structure Prediction. Morgan & Claypool Publishers, 1st edition. Radu Soricut and Daniel Marcu. 2003. Sentence Level Discourse Parsing Using Syntactic and Lexical Information. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL). Kapil Thadani and Kathleen McKeown. 2013. Supervised Sentence Fusion with Single-Stage Inference. In Proceedings of the International Joint Conference on Natural Language Processing (IJCNLP). Kristian Woodsend and Mirella Lapata. 2012. Multiple Aspect Summarization Using Integer Linear Programming. 
In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). Yasuhisa Yoshida, Jun Suzuki, Tsutomu Hirao, and Masaaki Nagata. 2014. Dependency-based Discourse Parser for Single-Document Summarization. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). 2008
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2009–2018, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Set-Theoretic Alignment for Comparable Corpora Thierry Etchegoyhen and Andoni Azpeitia Vicomtech-IK4 Mikeletegi Pasalekua, 57 Donostia / San Sebasti´an, Gipuzkoa, Spain {tetchegoyhen, aazpeitia}@vicomtech.org Abstract We describe and evaluate a simple method to extract parallel sentences from comparable corpora. The approach, termed STACC, is based on expanded lexical sets and the Jaccard similarity coefficient. We evaluate our system against state-of-theart methods on a large range of datasets in different domains, for ten language pairs, showing that it either matches or outperforms current methods across the board and gives significantly better results on the noisiest datasets. STACC is a portable method, requiring no particular adaptation for new domains or language pairs, thus enabling the efficient mining of parallel sentences in comparable corpora. 1 Introduction With the rise of data-driven machine translation, be it statistical (Brown et al., 1990), examplebased (Nagao, 1984), or rooted in neural networks (Bahdanau et al., 2014), the need for large parallel corpora has increased accordingly. Although quality bitexts have been made available over the years (Tiedemann, 2012), creating parallel corpora is a resource-consuming effort involving professional human translation of large volumes of texts in multiple languages. As a consequence, there is still a lack of parallel data to properly model translation across languages and domains. To overcome this limitation, special emphasis has been placed in the last two decades on the exploitation of comparable corpora, with the development of a range of methods to mine parallel sentences from texts addressing similar topics in different languages. The work we present follows this line of research, describing and evaluating a simple method that allows parallel sentences to be efficiently mined in different languages and domains with minimal adaptation effort. The method we describe, termed STACC, is based on expanded lexical sets and the Jaccard similarity coefficient (Jaccard, 1901), which is computed as the ratio of set intersection over union. We evaluate this simple approach against state-of-the-art methods for comparable sentence alignment on a variety of datasets for ten different language pairs, showing that STACC either matches or outperforms competing approaches. The paper is organised as follows: Section 2 describes related work on parallel sentence mining in comparable corpora; Section 3 presents the STACC method; Section 4 describes the experiments in comparable sentence alignment, including the description of test corpora and systems, and an analysis of the results; Section 5 presents results obtained with an optimised version of the alignment process, beyond system comparison; finally, Section 6 draws conclusions from the work described in the paper. 2 Related work A large variety of techniques have been proposed to mine parallel sentences in comparable corpora. One of the first approaches was proposed by (Zhao and Vogel, 2002), who combined sentence length and bilingual lexicon models under a maximum likelihood criterion. 
(Munteanu and Marcu, 2002) explored the use of suffix trees, later opting for maximum entropy-based binary classification using a modified version of IBM Model 1 word translation probabilities (Brown et al., 1993) 2009 and both general and alignment-specific features (Munteanu and Marcu, 2005). (Fung and Cheung, 2004) describe the first approach to tackle parallel sentence mining in very non-parallel corpora, using cosine similarity as their sentence selection criterion. Several approaches have employed full statistical machine translation models instead of relying only on lexical tables. (Abdul-Rauf and Schwenk, 2009), for instance, apply the TER metric (Snover et al., 2006) on fully machine translated output to identify parallel sentences; (Sarikaya et al., 2009) use a similar approach but with BLEU (Papineni et al., 2002) as their similarity metric. One of the noted advantages of including full machine translation is the ability to better model the complex factors found in translation, e.g. fertility and contextual information, as compared to lexicon-based approaches. The latter enable, in principle, the capture of a larger set of lexical translation variants, and do not require the training of complete translation models. Sophisticated feature-based approaches have been developed in recent years in order to provide a method that may apply to larger sets of language pairs and domains. (Stef˘anescu et al., 2012) report improvements over previous methods with a feature-based sentence similarity measure, an approach which is described in more detail in Section 4.2.1. Another feature-rich approach is described in (Smith et al., 2010), showing improvements over standard and improved binary classifiers; we describe their model in more details in Section 4.2.2. Jaccard similarity, a core component of the approach we describe, has been standardly used as a text similarity measure in information retrieval and text summarisation tasks, or to compute semantic similarity (Pilehvar et al., 2013). For comparable corpora, it has been notably employed by (Paramita et al., 2013), who estimate document comparability by computing the coefficient on a subset of translated source sentences, discarding those containing large amounts of named entities or numbers, and taking the average of these sentence-level scores. The method we present in the next section builds on a related similarity measure as a direct indicator of comparable sentence similarity. 3 STACC STACC is an approach to sentence similarity based on expanded lexical sets, whose main goal is to provide a simple yet effective procedure that can be applied across domains and corpora with minimal adaptation and deployment costs. We start with the minimal set of bilingual information that can be automatically extracted from a seed parallel corpus, using lexical translations determined and ranked according to IBM models; word translations are computed in both directions using the GIZA++ toolkit (Och and Ney, 2003). STACC relies on the Jaccard index, which defines set similarity as the ratio of set intersection over union. We base our comparable sentence similarity measure strictly on this index, applying it to expanded lexical sets as described below. Let si and sj be two tokenised and truecased sentences in languages l1 and l2, respectively, Si the set of tokens in si, Sj the set of tokens in sj, Tij the set of expanded translations into l2 for all tokens in Si, and Tji the set of expanded translations into l1 for all tokens in Sj. 
The STACC similarity score is then computed as in Equation 1: simstacc = |Tij∩Sj| |Tij∪Sj| + |Tji∩Si| |Tji∪Si| 2 (1) That is, the score is defined as the average of the Jaccard similarity coefficients obtained between sentence token sets and expanded lexical translations in both directions. The translation sets Tij and Tji are initially computed from sentences si and sj by retaining the k-best lexical translations found in GIZA tables, if any. Lexical translations are selected according to the ranking provided by the precomputed lexical probabilities but the specific probability values are not used any further to compute similarity:1 all potential translations are members of the translation set as tokens. Discarding this source of potentially exploitable information is mostly motivated by the relative reliability of lexical translation probabilities across domains. Lexical translations are usually extracted from a different domain than that of the comparable corpora at hand, typically using professionally created institutional corpora such as Europarl (Koehn, 2005), and lexical distributions across 1This differs from (Skadin¸a et al., 2012), who include a lexical translation feature where actual probabilities are used to compute the final score. 2010 domains can be expected to be quite different. This casts doubt on the usefulness of using precomputed translation probabilities and simple set membership was favoured in our approach. The initial lexical translation sets undergo a first expansion step to capture morphological variation, using longest common prefix matching (hereafter, LCP). To apply prefix matching to the minimal set of elements necessary, we compute the following two set differences: • Set of elements in the source to target translation set that are not members of the target token set: T ′ ij = Tij −Sj • Set of elements in the target to source translation set that are not members of the source token set: T ′ ji = Tji −Si For each element in T ′ ij (respectively T ′ ji) and each element in Sj (respectively Si), if a common prefix is found with a minimal length of more than n characters, the prefix is added to both translation sets.2 This simplified approach to stemming removes the need to rely on manually constructed endings lists to compute similarity or on a complete morphological analyser, which might not be available at all for under-resourced languages. It is also computationally more efficient as it exploits the nature of the alignment problem to reduce the search space: instead of matching each source and target word against every potential ending, with hundreds of possible endings in some languages, only the prefixes of word pairs within the subsets created through set difference need to be compared using LCP. Another set expansion operation is defined to handle named entities, which are strong indicators of potential alignment, given their low relative frequency, and are likely to be missing from translation tables trained on a different domain. While creating the previously defined lexical translation sets from truecased sentences, capitalised tokens that are not found in the translation tables are added to the translation sets. Numbers are similarly handled and added to the expanded sets, as they can also act as alignment indicators, in particular when they denote dates. These two expansions steps are essential to a successful use of Jaccard similarity for comparable sentence alignment. 
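To make the procedure explicit, the following sketch implements Equation 1 together with the two expansion steps under stated assumptions: translation tables are given as precomputed k-best lists, and the LCP step is read as adding the shared prefix to both sets being compared so that it contributes to the intersection. Names and data layout are illustrative, not the released implementation.

```python
from typing import Dict, List, Set, Tuple

def jaccard(a: Set[str], b: Set[str]) -> float:
    return len(a & b) / len(a | b) if (a or b) else 0.0

def translation_set(tokens: List[str], lex: Dict[str, List[str]], k: int = 5) -> Set[str]:
    # k-best lexical translations per token; capitalised out-of-table tokens and
    # numbers are passed through unchanged, as in the NE/number expansion.
    out: Set[str] = set()
    for tok in tokens:
        if tok in lex:
            out.update(lex[tok][:k])
        elif tok and (tok[0].isupper() or tok.isdigit()):
            out.add(tok)
    return out

def lcp_expand(trans: Set[str], tokens: Set[str], n: int = 3) -> Tuple[Set[str], Set[str]]:
    # For translation-set elements missing from the token set, add any common prefix
    # longer than n characters to both sets (one reading of the LCP step).
    trans_x, tokens_x = set(trans), set(tokens)
    for t in trans - tokens:
        for s in tokens:
            i = 0
            limit = min(len(t), len(s))
            while i < limit and t[i] == s[i]:
                i += 1
            if i > n:
                trans_x.add(t[:i])
                tokens_x.add(t[:i])
    return trans_x, tokens_x

def stacc_score(src: List[str], tgt: List[str],
                lex_st: Dict[str, List[str]], lex_ts: Dict[str, List[str]]) -> float:
    # Average of the two directional Jaccard coefficients (Equation 1).
    s_i, s_j = set(src), set(tgt)
    t_ij, s_j_x = lcp_expand(translation_set(src, lex_st), s_j)
    t_ji, s_i_x = lcp_expand(translation_set(tgt, lex_ts), s_i)
    return (jaccard(t_ij, s_j_x) + jaccard(t_ji, s_i_x)) / 2.0
```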
For instance, LCP gives 2Throughout the experiments we describe, n was set to 3. a 2.9 points improvement in F1 measure on the initial Basque-Spanish test set described in Section 4.1, whereas the NE/Number expansion resulted in a 1.3 points gain; the two expansions combined gave a 4.3 points increase in terms of F1 measure. For the English-Bulgarian pair on the initial Wikipedia test set, the gains were 3.7, 2.6 and 5.5, respectively. Combining the two operations thus contributed to the improvements over the state of the art described in Section 4.3. No additional operations are performed on the created sets, and in particular no filtering is applied, with punctuation and functional words kept alongside content words in the final sets. This notably eliminates the use of stop word lists from the computation of similarity. Although it builds on fairly standard ideas, such as the use of GIZA tables or the Jaccard index, the approach is original in its conjoined use of these elements with surface-based information and simple set-theoretic operations to form a similarity assessment mechanism that proved efficient on comparable corpora, as shown in the next section. 4 Comparable sentence alignment We performed a systematic comparison between different approaches to comparable sentence alignment on a variety of comparable corpora and language pairs. This section describes the components of the experimental setup. 4.1 Corpora Three core sets of corpora were used in the evaluation, which we describe in turn. The selected test sets, all manually aligned, were used in different settings with gradual amounts of alignment noise added to the original sets. The goal of noisification is to assess the behavior of each approach in different scenarios and evaluate their ability to properly align data from ideal conditions to gradually noisier environments, the latter being a more realistic case when dealing with comparable corpora. The first corpus consists in the public datasets created within the Accurat project.3 The corpus covers 7 language pairs, each one composed of English and an under-resourced language. The datasets contain manually verified alignments that were created from news articles. We noisified these datasets by adding sentences from the 3http://www.accurat-project.eu/. The corpus is available from: http://metashare.elda.org/repository/search/?q=accurat 2011 TEST SETS EN-DE EN-EL EN-ET EN-LT EN-LV EN-RO EN-SL 1:1 ATS: 512 ATS: 512 ATS: 512 ATS: 512 ATS: 512 ATS: 512 ATS: 512 2:1 ATS: 512 AOC: 512 ATS: 512 AOC: 512 ATS: 512 AOC: 512 ATS: 512 AOC: 512 ATS: 512 AOC: 512 ATS: 512 AOC: 512 ATS: 512 AOC: 512 100:1 ATS: 512 AOC: 6891 EUP: 43797 ATS: 512 AOC: 24276 EUP: 26412 ATS: 512 AOC: 50688 ATS: 512 AOC: 50688 ATS: 512 AOC: 50688 ATS: 512 AOC: 50688 ATS: 512 AOC: 15857 EUP: 34831 Table 1: Accurat evaluation sets TEST SETS BG-EN DE-EN ES-EN 1:1 WTS: 516 WTS: 314 WTS: 500 100:1 WTS: 516 EUP: 51084 WTS: 314 NC: 31086 WTS: 500 NC: 49500 Table 2: Wikipedia evaluation sets TEST SETS ES-EU 1:1 500-500 EITB NOISE1 1000-1000 EITB NOISE2 1000-1500 Table 3: EITB evaluation sets original comparable corpora collected within the project, creating the following additional variants: (i) a 2:1 noisified version, where for each sentence in the original sets, 2 additional sentences without corresponding alignments were added; and (ii) a 100:1 noisified version with 100 sentences added for each sentence in the test sets. 
For each language pair, the additional sentences were taken from the initial portion of the selected additional corpora in one language and the final portion in the other language. For the 2:1 datasets, and the 100:1 variants in some language pairs, the original comparable corpora were used as additional data. For other language pairs, creating the 100:1 variant required adding sentences from different corpora to reach the required amount of data. Table 1 describes the final datasets used in the evaluation.4 As a second corpus, we used the data described in (Smith et al., 2010).5 The texts were extracted from Wikipedia articles in 3 language pairs (English-German, English-Spanish and EnglishBulgarian) and manually annotated for parallelism. We used the provided test sets (hereafter, WTS) and added a 100:1 noisified variant using sentences from the News Crawl corpus6 for English-German and English-Spanish, and from Europarl for the English-Bulgarian pair. Table 2 4In the table, ATS refers to the Accurat test sets, AOC to the Accurat original corpora, and EUP to the Europarl corpus. 5Available at: http://research.microsoft.com/enus/people/chrisq/wikidownload.aspx. 6Refered to as NC here and available from: http://www.statmt.org/wmt13/translation-task.html. describes these datasets, to which we will refer collectively as the Wikipedia corpus. Finally, we used the EITB corpus, composed of news generated by the Basque Country’s public broadcasting service.7 The news are written independently in Basque and Spanish but refer to the same specific events and the corpus can thus be categorized as strongly comparable. We defined initial test sets of 500 manually aligned sentences in each language, and created two noisified variants: (i) a test set with 500 additional sentences in both languages, and (ii) a test set with 500 additional sentences in Spanish and 1000 in Basque. All additional sentences were taken from unaligned portions of the same EITB corpus. Table 3 summarises the EITB test sets. The selected corpora thus cover 10 different language pairs and different domains, with varying degrees of noisification, and provide for a large and diverse comparison set. 4.2 Systems Three approaches were evaluated against the previously described corpora: LEXACC (Stef˘anescu et al., 2012), the STACC method described in Section 3, and the approach based on Conditional Random Fields described in (Smith et al., 2010), to which we will refer as CRF. The latter was only evaluated on the Wikipedia corpus, using the re7Euskal Irrati Telebista (EITB): http://www.eitb.eus. The corpus was provided courtesy of EITB and will be made available to the research community. 2012 sults reported in the aforementioned article, as the tools to apply this method were not available to us; both LEXACC and STACC were evaluated on all test sets. LEXACC was selected given its reported performance and its aim at portability across domains and language pairs; the system is also available as part of the Accurat toolkit,8 which allowed for a direct comparison with STACC on all datasets. The CRF approach has proven more effective than standard classifier-based methods on the Wikipedia datasets, with published results on publically available test sets, and was thus selected as an alternative approach to comparable sentence alignment. Both approaches are based on sophisticated methods with demonstrated improvements over the state-of-the-art, thus providing strong baselines for system comparison. 
4.2.1 LEXACC LEXACC is a fast parallel sentence mining system based on a cross-linguistic information retrieval (CLIR) approach. It uses the Lucene search engine9 in two major steps: target sentences are first indexed by the search engine, and a search query is built from a translation of content words in the source sentence to retrieve alignment candidates. The query is constructed using IBM Model 1 lexical translation tables, extracted from seed parallel corpora The alignment metric in LEXACC is a translation similarity measure based on 5 feature functions briefly described here (see (Stef˘anescu et al., 2012) for a detailed description): • f1 measures source-target candidate pairs strength in terms of content word translation and string similarity; • f2 is similar to f1 but applies to functional words, as identified in manually created stop word lists; • f3 measures content word alignment obliqueness defined as a discounted correlation measure; • f4 is a binary feature that compares the number of initial/final aligned word translations over a pre-defined threshold; 8http://www.accurat-project.eu/index.php?p=accurattoolkit 9http://lucenenet.apache.org/ • f5 is a second binary feature which evaluates if the source and target sentences end with the same punctuation. The similarity measure is then computed according to the sum of weighted feature functions, with optimal weights determined by means of logistic regression. We used the optimal feature weights described in (Stef˘anescu et al., 2012) for the language pairs in the Accurat corpus and the provided default weights for English-Spanish and English-Bulgarian; for Basque-Spanish, optimal weights were estimated through logistic regression on a training set formed with 9500 positive parallel examples from the IVAP corpus10 and an equal amount of non-parallel negative examples. For the experiments, all lexical translation tables were created with GIZA++ on the JRC-Acquis Communautaire corpus.11 Lucene searches were set to return a maximum of 100 candidates for each source sentence. We used the default setup for LEXACC, except for two minor changes. First, we removed the initial Lucene search constraint which was set to discard identical source and target sentences, a setting which prevented the retrieval of valid news candidates such as sports results. Secondly, we increased the length ratio filter from 1.5 to 7.5, as the initial value was too restrictive for the Basque-Spanish corpus. Both changes were thus meant to retrieve the most accurate set of alignment candidates, in order to get meaningful results on the test sets with both methods. 4.2.2 Conditional Random Fields The model we refer to as CRF (Smith et al., 2010) is a first order linear chain Conditional Random Field (Lafferty et al., 2001), where for each source sentence a hidden variable indicates the corresponding target sentence to which it is aligned, or null if there is no such target sentence. This system was compared to the standard binary classifier of (Munteanu and Marcu, 2005) and to a ranking variant designed by the authors to avoid class imbalance issues that arise with binary classification. On the Wikipedia test sets, the CRF approach gave 10Extracted from the translation memories released by the Basque Public Administration Institute (http://opendata.euskadi.eus/catalogo/-/memorias-detraduccion-del-servicio-oficial-de-traductores-del-ivap/), which consist of professional translations of public administration texts. 
11We used the latest available version of the corpus, as of November 2015, in the OPUS repository: http://opus.lingfil.uu.se/JRC-Acquis.php. 2013 the best results overall and was thus selected for our system comparison. The sequence model comprises the following features: • A word alignment feature set, based on IBM Model 1 and HMM alignments, which includes: log probability of the alignment; number of aligned/unaligned words; longest aligned/unaligned sequence of words; and number of words for different degrees of fertility. • Two sentence-related features: source and target length ratio modeled through a Poisson distribution (Moore, 2002), and relative position of source and target sentences in the document. • A set of distortion features measuring the difference in position between the previous and current aligned sentences. • A set of features based on Wikipedia markup, including matching and non-matching links for alignment candidates. • A set of lexicon features based on a probabilistic model of word pair alignments, trained on a set of annotated Wikipedia articles. The lexicon-based feature set includes the HMM translation probability, word-based positional differences, orthographic similarity, context translation similarity and distributional similarity. The seed parallel data were based on the Europarl corpus for Spanish and German and the JRC-Aquis corpus for Bulgarian. The authors also included article titles of parallel Wikipedia documents and Wiktionary translations as additional seed data. 4.2.3 STACC In order to establish a fair comparison between LEXACC and STACC, all shared settings were identical. Thus, lexical translations were based on the same previously described GIZA tables extracted from the JRC corpus, and STACC alignment was performed on the same sets of candidates retrieved from the Lucene searches by LEXACC for each language pair. As described in Section 3, STACC is based on the k-best translations provided by lexical translation tables. For the experiments, k was set to 5, a value arbitrarily determined to be an optimal compromise between overcrowding the sets with unlikely translations and limiting translation candidates to minimal translation variants. Experimenting with different values on the test sets showed that this value for k was not actually the optimal one for some language pairs, with e.g. a 2.9 point gain in F1 measure when setting k to 2 for EnglishGreek on the initial Accurat test set.12 The results we present in the next section are thus not the best achievable ones using the STACC approach. Nonetheless, we maintained the use of a default value because of the lack of in-domain development sets on which an optimal value could be fairly computed. 4.3 Results To evaluate the accuracy of the tested methods, precision was taken as the ratio of correct alignments over predicted alignments, and recall as the ratio of correct alignments over true alignments. We present results in terms of F1 measure, as we seek an optimal balance between alignment precision and recall. Table 4 presents the results on the Accurat test sets for LEXACC and STACC using their respective optimal similarity thresholds.13 On the 21 test sets, the two systems were tied on two occasions, with STACC obtaining better results in 89.5% of the remaining cases. On the noisiest datasets, STACC was consistently and markedly better across language pairs. The results on the Wikipedia test sets are shown in Table 5. 
For English-Spanish and EnglishGerman, both approaches performed quite similarily on the initial test sets, with STACC obtaining the best results on the noisier sets. The results for English-Bulgarian are interesting, as this is the only case where LEXACC outperforms STACC on both the clean and noisy datasets. The data used for noisification in this case may have had an effect on the results. Data extracted from Europarl, which compose the entire noisifi12Note that similar issues would arise if the selected translations were determined based on thresholds over translation probabilities, as the thresholds would need to be empirically set as well. 13The optimal thresholds were determined as the values providing the best results on the test sets. This would obviously not be an available threshold selection method when mining comparable corpora, where a default value would have to be used instead. Such a default value would however not allow for a fair comparison of the systems. 2014 SYSTEM TEST SETS EN-DE EN-EL EN-ET EN-LT EN-LV EN-RO EN-SL LEXACC 1:1 96.0 89.5 88.9 93.1 95.0 99.4 88.5 STACC 1:1 96.7 88.0 92.0 96.1 96.6 98.8 89.5 LEXACC 2:1 83.4 83.2 73.9 81.2 83.8 95.3 81.6 STACC 2:1 89.2 83.2 79.9 86.9 88.2 95.3 82.3 LEXACC 100:1 16.6 22.7 34.2 45.1 45.1 70.4 24.9 STACC 100:1 33.7 37.3 42.5 56.0 56.2 75.7 35.3 Table 4: Best F1 measures on the Accurat evaluation sets SYSTEM TEST SETS EN-BG EN-DE EN-ES LEXACC 1:1 87.1 82.7 98.2 STACC 1:1 84.9 82.0 99.7 LEXACC 100:1 27.6 31.0 66.2 STACC 100:1 16.6 35.8 73.3 Table 5: Best F1 measures on the Wikipedia evaluation sets CRF LEXACC STACC LANGUAGE PAIR R@90 R@80 R@90 R@80 R@90 R@80 EN-BG 72.0 81.8 80.4↑ 80.4↑ 80.2 81.6↑ EN-DE 58.7 68.8 75.2 78.7 68.8 81.8↑ EN-ES 90.4 93.7 97.0↑ 97.0↑ 99.6↑ 99.6↑ Table 6: Targeted recall on the Wikipedia evaluation sets SYSTEM TEST SETS ES-EU LEXACC 1:1 77.2 LEXACC DF 1:1 80.2 STACC 1:1 90.9 LEXACC EITB NOISE1 59.2 LEXACC DF EITB NOISE1 62.2 STACC EITB NOISE1 82.8 LEXACC EITB NOISE2 54.5 LEXACC DF EITB NOISE2 57.4 STACC EITB NOISE2 79.5 Table 7: Best F1 measures on the EITB evaluation sets 33.7 37.3 42.5 56.0 56.2 75.7 35.3 38.5 42.2 43.6 59.2 57.9 78.3 37.8 en-de en-el en-et en-lt en-lv en-ro en-sl F1 stacc stacc_opt Figure 1: STACC optimisation results on the Accurat 100:1 test sets 2015 cation set for this language pair, is closer to the JRC vocabulary than the original comparable data on which the alignment process would take place in real-world conditions. Although we have not thoroughly tested the impact of this variable, it is possible that those datasets are more confusing for an approach such as STACC, which is based mostly on lexical information extracted from seed parallel data, than for a feature-based approach where some features, like the boolean punctuation-based ones in LEXACC, may compensate for erroneous alignments due to artificial domain vocabulary overlap. Determining if this hypothesis is indeed correct would require further experiments beyond the scope of this paper To include the CRF approach in the comparison, we used two of the provided measures, namely recall obtained at precisions of 80 and 90 percent on the 1:1 test sets.14 We report results obtained with the best variant of CRF, namely the model which includes Wikipedia and lexicon features, with intersected results from both directions. Results are reported in Table 6. Although the comparison was limited in this case, results were in favour of LEXACC and STACC on targeted recall measures for the Wikipedia datasets. 
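For readers who want a concrete picture of the score being compared in these tables, the following is a deliberately simplified sketch of a STACC-style similarity: token sets expanded with k-best lexical translations, a longest-common-prefix test standing in for the morphological matching step, and a Jaccard-style overlap. The exact definition is the one given in Section 3; the helper names, the value of k and the prefix length below are our own illustrative choices.

```python
# Simplified illustration only; the exact STACC definition is given in Section 3.
# Each sentence is treated as a token set, the source set is expanded with the
# k-best lexical translations of its tokens, and similarity is a Jaccard-style
# overlap between the expanded source set and the target set. A longest-common-
# prefix test stands in here for the morphological-variation matching.

def expand_with_translations(tokens, lex_table, k=5):
    expanded = set(tokens)
    for tok in tokens:
        expanded.update(lex_table.get(tok, [])[:k])   # k-best translations, if available
    return expanded

def prefix_match(a, b, min_prefix=4):
    # crude approximation of matching morphological variants via a shared prefix
    return min(len(a), len(b)) >= min_prefix and a[:min_prefix] == b[:min_prefix]

def stacc_like_score(src_tokens, tgt_tokens, lex_table, k=5):
    src_exp = expand_with_translations(src_tokens, lex_table, k)
    tgt_set = set(tgt_tokens)
    matched = {t for t in tgt_set
               if t in src_exp or any(prefix_match(t, s) for s in src_exp)}
    union = src_exp | tgt_set
    return len(matched) / len(union) if union else 0.0
```

The full method also applies expansion operations targeting named entities and numbers, which are omitted here for brevity.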
Finally, both LEXACC and STACC were compared against the EITB test sets, with results shown in Table 7. For this language pair, STACC performed markedly better with differences of up to 25 points. A likely explanation for these results is the nature of the features that compose the LEXACC model. In particular the features related to alignment obliqueness and number of initial/final aligned words might be detrimental in the case of Basque, which exhibits free word order. Given the poor results obtained with feature weights optimised on the IVAP corpus, we also checked the results using the provided default weights. This resulted in slightly better performance, as shown in the rows named LEXACC DF in Table 7, though still far from the results achieved with STACC. 4.4 Discussion Overall, STACC provided the best results across domains and language pairs, in particular for noisier datasets. Additionally, the approach has several 14Note that, for both LEXACC and STACC, in some scenarios even the lowest thresholds gave precisions higher than 90, rendering the comparison moot. We indicate these cases with a ↑sign next to the highest recall obtained at the closest precision to the arbitrary 80 and 90 precision points. advantages over existing methods and systems for comparable segment alignment. First, it is undoubtedly simpler, as it requires but minimal information to reach optimal results. Lexical tables and simple set expansion operations based on surface properties of the tokens are the only components of the approach, as compared to the more sophisticated feature-based approaches which rely on larger sets of components for which optimal weights need to be computed prior to applying the models. Secondly, because of its simplicity, STACC is a more portable method, as is it is not necessary to perform any type of adaptation for new domains and language pairs, nor to rely on domain-specific information such as link structure in Wikipedia. In actual practice, portability is an important issue which hinders on the exploitation of comparable corpora. An efficient yet easily deployable method is therefore a welcome addition to the toolset for parallel data extraction. Finally, STACC results in fewer computational steps when compared to more complex featurebased methods. First, it involves simple binary set intersection and union operations for the computation of similarity, instead of conjoined feature computation on larger component sets. Secondly, the approach relies on tractable set differences for its most computationally expensive operation of longest common prefix matching, compared to matching all tokens against lists of word endings which can be quite large, notably in the case of agglutinative languages. Although promising, the approach could be further evaluated, and potentially improved, along two main lines. It might be worth exploring for instance the impact of filtering alignment candidates according to the relative position of sentence pairs in the original source and target documents, a documentlevel property notably exploited by (Smith et al., 2010). As the STACC approach is featureless, and meant to remain as such in order to maintain its portability and ease of deployment, filtering distant sentence pairs would need to take place prior to the computation of alignment scores. A simple approach compatible with STACC would consist in constraining candidate sets by including sentence position information when performing indexing and candidate querying in a CLIR approach. 
This would provide an additional evalua2016 tion of the accuracy of the approach in scenarios where document-level information is exploitable. Additionally, given the importance of k-best lexical translations in computing STACC similarity, variations in lexical coverage obtained with different translation tables can be expected to impact alignment accuracy. Although mining comparable corpora usually requires the use of seed translation knowledge extracted from a domain that differs from the one being mined, default tables with wide lexical coverage can be built from existing parallel corpora in different domains. Thus, improvements might be obtained with larger and more diverse tables than the ones used in the experiments reported here, which were based on translations extracted from a single domain. A precise assessment of the evolution of alignment accuracy given variations in lexical translation coverage is left for future research. 5 Alignment optimisation As previously mentioned, for both LEXACC and STACC, alignments were computed for every source sentence against candidate translations retrieved by Lucene and all cases where a given target sentence has more than one source alignment were left as is. Although this methodology enabled a fair comparison between the two systems, it evidently impacts alignment accuracy. One simple optimisation is to retain only the best overall source-target alignments, discarding all alignments established between a given source sentence and a target sentence if the latter is linked to better scoring source sentences. The net effect of this procedure is the promotion of better alignments, as some correct alignments would not be hidden anymore by other better scoring shared alignments. This is most likely to occur with source-target pairs that are close variants of each other, with close similarity scores. We applied this simple optimisation to the Accurat test sets and observed improvements across the board, as shown in Figure 1. Depending on actual usage, this optimised version of STACC alignment can constitute the best alternative for the extraction of parallel sentences from comparable corpora. 6 Conclusions We described a simple approach to comparable sentence alignment, termed STACC, which is based on automatically extracted seed lexical translations, the Jaccard similarity coefficient, and simple set expansion operations that target named entities, numbers, and morphological variation using longest common prefixes. Building on fairly standard components for the computation of similarity, this method is shown to perform better than current alternatives. The approach was evaluated on a large range of datasets from various domains for ten language pairs, giving the best results overall when compared to sophisticated state-of-the-art methods. STACC also performed better than competing approaches on noisier corpora, showing promises for the exploitation of the typically noisy data found when mining comparable corpora. STACC is a highly portable method which requires no adaptation for its application to new domains and language pairs. It thus allows for the fast deployment of a crucial component in comparable corpora alignment, which opens the path for an increase in the amount of such corpora that can be exploited in the future. 
Acknowledgments This work was partially funded by the Spanish Ministry of Economy and Competitiveness and the Department of Economic Development and Competitiveness of the Basque Government through the AdapTA (RTC-2015-3627-7), PLATA (IG2014/00037) and TRADIN (IG-2015/0000347) projects. We would like to thank MondragonLingua Translation & Communication as coordinator of these projects and the three anonymous reviewers for their helpful feedback and suggestions. References Sadaf Abdul-Rauf and Holger Schwenk. 2009. On the use of comparable corpora to improve SMT performance. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, EACL ’09, pages 16–23, Stroudsburg, PA, USA. Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv:1409.0473. 2017 Peter F Brown, John Cocke, Stephen A Della Pietra, Vincent J Della Pietra, Fredrick Jelinek, John D Lafferty, Robert L Mercer, and Paul S Roossin. 1990. A statistical approach to machine translation. Computational linguistics, 16(2):79–85. Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational linguistics, 19(2):263–311. Pascale Fung and Percy Cheung. 2004. Mining Very Non-Parallel Corpora: Parallel Sentence and Lexicon Extraction via Bootstrapping and E.M. In Proceedings of Empirical Methods in Natural Language Processing, pages 57–63. Paul Jaccard. 1901. Distribution de la flore alpine dans le bassin des Dranses et dans quelques r´egions voisines. Bulletin de la Soci´et´e Vaudoise des Sciences Naturelles, 37:241 – 272. Philipp Koehn. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. In Proceedings of the 10th Machine Translation Summit, pages 79–86. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML ’01, pages 282–289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Robert C. Moore. 2002. Fast and accurate sentence alignment of bilingual corpora. In Proceedings of the 5th Conference of the Association for Machine Translation in the Americas on Machine Translation: From Research to Real Users, AMTA ’02, pages 135–144, London, UK, UK. Springer-Verlag. Dragos Stefan Munteanu and Daniel Marcu. 2002. Processing Comparable Corpora With Bilingual Suffix Trees. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 289–295. Association for Computational Linguistics. Dragos Stefan Munteanu and Daniel Marcu. 2005. Improving machine translation performance by exploiting non-parallel corpora. Computational Linguistics, 31(4):477–504. Makoto Nagao. 1984. A Framework for a Mechanical Translation Between Japanese and English by Analogy Principle. In Proceedings of the International NATO Symposium on Artificial and Human Intelligence, pages 173–180, New York, NY, USA. Elsevier North-Holland, Inc. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational linguistics, 29(1):19–51. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. 
In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318. Association for Computational Linguistics. Monica Lestari Paramita, David Guthrie, Evangelos Kanoulas, Rob Gaizauskas, Paul Clough, and Mark Sanderson. 2013. Methods for collection and evaluation of comparable documents. In Building and Using Comparable Corpora, pages 93–112. Springer. Mohammad Taher Pilehvar, David Jurgens, and Roberto Navigli. 2013. Align, disambiguate and walk: A unified approach for measuring semantic similarity. In Proceedings of the 51st meeting of the Association for Computational Linguistics, pages 1341–1351. The Association for Computational Linguistics. Ruhi Sarikaya, Sameer Maskey, R Zhang, Ea-Ee Jan, D Wang, Bhuvana Ramabhadran, and Salim Roukos. 2009. Iterative sentence-pair extraction from quasi-parallel corpora for machine translation. In Proceedings of InterSpeech, pages 432–435. Inguna Skadin¸a, Ahmet Aker, Nikos Mastropavlos, Fangzhong Su, Dan Tufis, Mateja Verlic, Andrejs Vasil¸jevs, Bogdan Babych, Paul Clough, Robert Gaizauskas, et al. 2012. Collecting and using comparable corpora for statistical machine translation. In Proceedings of the 8th International Conference on Language Resources and Evaluation. Jason R. Smith, Chris Quirk, and Kristina Toutanova. 2010. Extracting parallel sentences from comparable corpora using document level alignment. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10, pages 403–411, Stroudsburg, PA, USA. Association for Computational Linguistics. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of Association for Machine Translation in the Americas, pages 223–231. Dan Stef˘anescu, Radu Ion, and Sabine Hunsicker. 2012. Hybrid parallel sentence mining from comparable corpora. In Proceedings of the 16th Conference of the European Association for Machine Translation, pages 137–144. J¨org Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of the 8th Language Resources and Evaluation Conference, pages 2214– 2218. Bing Zhao and Stephan Vogel. 2002. Adaptive parallel sentences mining from web bilingual news collection. In Proceedings of the 2002 IEEE International Conference on Data Mining, pages 745–748. IEEE. 2018
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 194–204, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Idiom Token Classification using Sentential Distributed Semantics Giancarlo D. Salton and Robert J. Ross and John D. Kelleher Applied Intelligence Research Centre School of Computing Dublin Institute of Technology Ireland [email protected] {robert.ross,john.d.kelleher}@dit.ie Abstract Idiom token classification is the task of deciding for a set of potentially idiomatic phrases whether each occurrence of a phrase is a literal or idiomatic usage of the phrase. In this work we explore the use of Skip-Thought Vectors to create distributed representations that encode features that are predictive with respect to idiom token classification. We show that classifiers using these representations have competitive performance compared with the state of the art in idiom token classification. Importantly, however, our models use only the sentence containing the target phrase as input and are thus less dependent on a potentially inaccurate or incomplete model of discourse context. We further demonstrate the feasibility of using these representations to train a competitive general idiom token classifier. 1 Introduction Idioms are a class of multiword expressions (MWEs) whose meaning cannot be derived from their individual constituents (Sporleder et al., 2010). Idioms often present idiosyncratic behaviour such as violating selection restrictions or changing the default semantic roles of syntactic categories (Sporleder and Li, 2009). Consequently, they present many challenges for Natural Language Processing (NLP) systems. For example, in Statistical Machine Translation (SMT) it has been shown that translations of sentences containing idioms receive lower scores than translations of sentences that do not contain idioms (Salton et al., 2014). Idioms are pervasive across almost all languages and text genres and as a result broad coverage NLP systems must explicitly handle idioms (Villavicencio et al., 2005). A complicating factor, however, is that many idiomatic expressions can be used both literally or figuratively. In general, idiomatic usages are more frequent, but for some expressions the literal meaning may be more common (Li and Sporleder, 2010a). As a result, there are two fundamental tasks in NLP idiom processing: idiom type classification is the task of identifying expressions that have possible idiomatic interpretations and idiom token classification is the task of distinguishing between idiomatic and literal usages of potentially idiomatic phrases (Fazly et al., 2009). In this paper we focus on this second task, idiom token classification. Previous work on idiom token classification, such as (Sporleder and Li, 2009) and (Peng et al., 2014), often frame the problem in terms of modelling the global lexical context. For example, these models try to capture the fact that the idiomatic expression break the ice is likely to have a literal meaning in a context containing words such as cold, frozen or water and an idiomatic meaning in a context containing words such as meet or discuss (Li and Sporleder, 2010a). Frequently these global lexical models create a different idiom token classifier for each phrase. 
However, a number of papers on idiom type and token classification have pointed to a range of other features that could be useful for idiom token classification; including local syntactic and lexical patterns (Fazly et al., 2009) and cue words (Li and Sporleder, 2010a). However, in most cases these non-global features are specific to a particular phrase. So a key challenge is to identify from a range of features which features are the correct features to use for idiom token classification for a specific expression. Meanwhile, in recent years there has been an explosion in the use of neural networks for learning distributed representations for language (e.g., 194 Socher et al. (2013), Kalchbrenner et al. (2014) and Kim (2014)). These representations are automatically trained from data and can simultaneously encode multiple linguistics features. For example, word embeddings can encode gender distinctions and plural-singular distinctions (Mikolov et al., 2013b) and the representations generated in sequence to sequence mappings have been shown to be sensitive to word order (Sutskever et al., 2014). The recent development of Skip-Thought Vectors (or Sent2Vec) (Kiros et al., 2015) has provided an approach to learn distributed representations of sentences in an unsupervised manner. In this paper we explore whether the representations generated by Sent2Vec encodes features that are useful for idiom token classification. This question is particularly interesting because the Sent2Vec based models only use the sentence containing the phrase as input whereas the baselines systems use full the paragraph surrounding the sentence. We further investigate the construction of a “general” classifier that can predict if a sentence contains literal or idiomatic language (independent of the expression) using just the distributed representation of the sentence. This approach contrasts with previous work that has primarily adopted a “per expression” classifier approach and has been based on more elaborate context features, such as discourse and lexical cohesion between and sentence and the larger context. We show that our method needs less contextual information than the state-of-the-art method and achieves competitive results, making it an important contribution to a range of applications that do not have access to a full discourse context. We proceed by introducing that previous work in more detail. 2 Previous Work One of the earliest works on idiom token classification was on Japanese idioms (Hashimoto and Kawahara, 2008). This work used a set of features, commonly used in Word Sense Disambiguation (WSD) research, that were defined over the text surrounding a phrase, as well as a number of idiom specific features, which were in turn used to train an SVM classifier based on a corpus of sentences tagged as either containing an idiomatic usage or a literal usage of a phrase. Their results indicated that the WSD features worked well on idiom token classification but that their idioms specific features did not help on the task. Focusing on idiom token classification in English, Fazly et al. (2009) developed the concept of a canonical form (defined in terms of local syntactic and lexical patterns) and argued that for each idiom there is a distinct canonical form (or small set of forms) that mark idiomatic usages of a phrase. Meanwhile Sporleder and Li (2009) proposed a model based on how strongly an expression is linked to the overall cohesive structure of the discourse. 
Strong links result in a literal classification, otherwise an idiomatic classification is returned. In related work, Li and Sporleder (2010a) experimented with a range of features for idiom token classification models, including: global lexical context, discourse cohesion, syntactic structures based on dependency parsing, and local lexical features such as cue words, occurring just before or after a phrase. An example of a local lexical feature is when the word between occurs directly after break the ice; here this could mark an idiomatic usage of the phrase: it helped to break the ice between Joe and Olivia. The results of this work indicated that features based on global lexical context and discourse cohesion were the best features to use for idiom token classification. The inclusion of syntactic structures in the feature set provided a boost to the performance of the model trained on global lexical context and discourse cohesion. Interestingly, unlike the majority of previous work on idiom token classification Li and Sporleder (2010a) also investigated building general models that could work across multiple expressions. Again they found that global lexical context and discourse cohesion were the best features in their experiments. Continuing work on this topic, Li and Sporleder (2010b) present research based on the assumption that literal and figurative language are generated by two different Gaussians. The model representation is based on semantic relatedness features similar to those used earlier in (Sporleder and Li, 2009). A Gaussian Mixture Model was trained using an Expectation Maximization method with the classification of instances performed by choosing the category which maximises the probability of fitting either of the Gaussian components. Li and Sporleder (2010b)’s results confirmed the findings from previous work that figurative language exhibits less cohesion with the surrounding context then literal language. 195 More recently, Feldman and Peng (2013) describes an approach to idiom token identification that frames the problem as one of outlier detection. The intuition behind this work is that because idiomatic usages of phrases have weak cohesion with the surrounding context they are semantically distant from local topics. As a result, phrases that are semantic outliers with respect to the context are likely to be idioms. Feldman and Peng (2013) explore two different approaches to outlier detection based on principle component analysis (PCA) and linear discriminant analysis (LDA) respectively. Building on this work, Peng et al. (2014) assume that phrases within a given text segment (e.g., a paragraph) that are semantically similar to the main topic of discussion in the segment are likely to be literal usages. They use Latent Dirichlet Allocation (LDA) (Blei et al., 2003) to extract a topic representation, defined as a topic term document matrix, of each text segment within a corpus. They then trained a number of models that classify a phrase in a given text segment as a literal or idiomatic usage by using the topic term document matrix to project the phrase into a topic space representation and label outliers within the topic space as idiomatic. To the best of our knowledge, Peng et al. (2014) is currently the best performing approach to idiom token classification and we use their models as our baseline1. 
3 Skip-Thought Vectors While idiom token classification based on long range contexts, such as is explored in a number of the models outlined in the previous section, generally achieve good performance, an NLP system may not always have access to the surrounding context, or may indeed find it challenging to construct a reliable interpretation of that context. Moreover, the construction of classifiers for each individual idiom case is resource intensive, and we argue fails to easily scale to under-resourced languages. In light of this, in our work we are exploring the potential of distributed compositional semantic models to produce reliable estimates of idiom token classification. Skip-Thought Vectors (Sent2Vec) (Kiros et al., 1However, it is not possible for us to reproduce their results directly as they “apply the (modified) Google stop list before extracting the topics” (Peng et al., 2014, p. 2023) and, to date, we do not have access to the modified list. So in our experiments we compare our results with the results they report on the same data. 2015) are a recent prominent example of such distributed models. Skip-Thought Vectors are an application of the Encoder/Decoder framework (Sutskever et al., 2014), a popular architecture for NMT (Bahdanau et al., 2015) based on recurrent neural networks (RNN). The encoder takes an input sentence and maps it into a distributed representation (a vector of real numbers). The decoder is a language model that is conditioned on the distributed representation and, in Sent2Vec, is used to “predict” the sentences surrounding the input sentence. Consequently, the Sent2Vec encoder learns (among other things) to encode information about the context of an input sentence without the need of explicit access to it. Figure 1 presents the architecture of Sent2Vec. More formally, assume a given tuple (si−1, si, si+1) where si is the input sentence, si−1 is the previous sentence to si and si+1 is the next sentence to si. Let wt i denote the t-th word for si and xt i denote its word embedding. We follow Kiros et al. (2015) and describe the model in three parts: encoder, decoder and objective function. Encoder. Given the sentence si of length N, let w1 i , . . . , wN i denote the words in si. At each timestep t, the encoder (in this case an RNN with Gated Recurrent Units - GRUs (Cho et al., 2014)) produces a hidden state ht i that represents the sequence w1 i , . . . , wt i. Therefore, hN i represents the full sentence. Each hN i is produced by iterating the following equations (without the subscript i): rt = σ(We rxt + Ue rht−1) (1) zt = σ(We zxt + Ue zht−1) (2) ˜ht = tanh(Wext + Ue(rt ⊙ht−1)) (3) ht = (1 −zt) ⊙ht−1 + zt ⊙˜ht (4) where rt is the reset gate, zt is the update gate, ˜ht is the proposed update state at time t and ⊙ denotes a component-wise product. Decoder. The decoder is essentially a neural language model conditioned on the input sentence representation hN i . However, two RNNs are used (one for the sentence si−1 and the other for the sentence si+1) with different parameters except the embedding matrix (E), and a new set of matrices (Cr, Cz and C) are introduced to condition the GRU on hN i . Let ht i+1 denote the hidden state of the decoder of the sentence si+1 at time t. De196 Figure 1: Picture representing the Encoder/Decoder architecture used in the Sent2Vec as shown in Kiros et al. (2015). 
The gray circles represent the Encoder unfolded in time, the red and the green circles represent the Decoder for the previous and the next sentences respectively also unfolded in time. In this example, the input sentence presented to the Encoder is I could see the cat on the steps. The previous sentence is I got back home and the next sentence is This was strange. Unattached arrows are connected to the encoder output (which is the last gray circle). coding si+1 requires iterating the following equations: rt = σ(Wd rxt + Ud rht−1 + CrhN i ) (5) zt = σ(Wd zxt + Ud zht−1 + CzhN i ) (6) ˜ht = tanh(Wdxt + Ud(rt ⊙ht−1) + ChN i ) (7) ht i+1 = (1 −zt) ⊙ht−1 + zt ⊙˜ht (8) where rt is the reset gate, zt is the update gate, ˜ht is the proposed update state at time t and ⊙ denotes a component-wise product. An analogous computation is required to decode si−1. Given ht i+1, the probability of the word wt i+1 conditioned on the previous w<t i+1 words and the encoded representation produced by the encoder (hN i ) is: P(wt i+1|w<t i+1, hN i ) ∝exp(Ewt i+1ht i+1) (9) where Ewt i+1 denotes the embedding for the word wt i+1. An analogous computation is performed to find the probability of si−1. Objective. Given the tuple (si−1, si, si+1), the objective is to optimize the sum of the logprobabilities of the next (si+1) and previous (si−1) sentences given the distributed representation (hN i ) of si: X log P(wt i+1|w<t i+1, hN i ) + P(wt i−1|w<t i−1, hN i ) (10) where the total objective is summed over all training tuples (si−1, si, si+1). The utility of Sent2Vec is that it is possible to infer properties of the surrounding context only from the input sentence. Therefore, we can assume that the Sent2Vec distributed representation is also carrying information regarding its context (without the need to explicitly access it). Following that intuition, we can train a supervised classifier only using the labelled sentences containing examples of idiomatic or literal language use without modelling long windows of context or using methods to extract topic representations. 4 Experiments In the following we describe a study that evaluates the predictiveness of the distributed representations generated by Sent2Vec for idiom token classifier. We first evaluate these representations using a “per expression” study design (i.e., one classifier per expression) and compare our results to those of Peng et al. (2014) who applied multiparagraphs contexts to generate best results. We also experiment with a “general” classifier trained and tested on a set of mixed expressions. 4.1 Dataset In order to make our results comparable with (Peng et al., 2014) we used the same VNC-Tokens dataset (Cook et al., 2008) that they used in their experiments. The dataset used is a collection of sentences containing 53 different Verb Noun Constructions2 (VNCs) extracted from the British National Corpus (BNC) (Burnard, 2007). In total, the VNC-Token dataset has 2984 sentences where each sample sentence is labelled with one of three labels: I (idiomatic); L (literal); or Q (unknown). 2This verb-noun constructions can be used either idiomatically or literally. 197 Of the 56 VNCs in the dataset 28 of these expressions have a reasonably balanced representation (with similar numbers of idiomatic and literal occurrences in the corpus) and the other 28 expressions have a skewed representation (with one class much more common then the other). 
Following the approach taken by (Peng et al., 2014), in this study we use the “balanced” part of the dataset and considered only those sentences labelled as “I” and “L” (1205 sentences - 749 labelled as “I” and 456 labelled as “L”). Peng et al. (2014) reported the precision, recall and f1-score of their models on 4 of the expressions from the balanced section of dataset: BlowWhistle; MakeScene; LoseHead; and TakeHeart. So, our first experiment is designed to compare our models with these baseline systems on a “per-expression” basis. For this experiment we built a training and test set for each of these expressions by randomly sampling expressions following the same distributions presented in Peng et al. (2014). In Table 1 we present those distribution and the split into training and test sets. The numbers in parentheses denote the number of samples labelled as “I”. Expression Samples Train Size Test Size BlowWhistle 78 (27) 40 (20) 38 (7) LoseHead 40 (21) 30 (15) 10 (6) MakeScene 50 (30) 30 (15) 20 (15) TakeHeart 81 (61) 30 (15) 51 (46) Table 1: The sizes of the samples for each expression and the split into training and test set. The numbers in parentheses indicates the number of idiomatic labels within the set. We follow the same split as described in Peng et al. (2014). While we wish to base our comparison on the work of Peng et al. (2014) as it is the current state of the art, this is not without its own challenges. In particular we see the choice of these 4 expression as a somewhat random decision as other expressions could also be selected for the evaluation with similar ratios to those described in Table 1. Moreover, the choosen expressions are all semi-compositional and do not consider fully non-compositional expressions (although we believe the task of classifying non-compositional expressions would be easier for any method aimed at idiom token classification as these expressions are high-fixed) .A better evaluation would consider all the 28 expressions of the balanced part of the VNC-tokens dataset. In addition, we also see this choice of training and test splits as somewhat arbitrary. For two of the expressions the test set contain samples in a way that one of the classes outnumber the other by a great amount: for BlowWhistle, the literal class contains roughly 4 times more samples than the idiomatic class; and for TakeHeart the idiomatic class contains roughly 9 times more samples than the literal class. Our concerns with these very skewed test set ratios is that it is very easy when applying a per expression approach (i.e., a separate model for each expression) for a model to achieve good performance (in terms of precision, recall, ad f1) if the positive class is the majority class in the test set. However, despite these concerns, in our first experiment in order to facilitate comparison with the prior art we follow the expression selections and training/test splits described in Peng et al. (2014). Studies on the characteristics of distributed semantic representations of words have shown that similar words tend to be represented by points that are close to each other in the semantic feature space (e.g. Mikolov et al. (2013a)). Inspired by these results we designed a second experiment to test whether the Sent2Vec representations would cluster idiomatic sentences in one part of the feature space and literal sentences in another part of the space. 
For this experiment we used the entire “balanced” part of the VNC-tokens dataset to train and test our “general” (multi-expression) models. In this experiment we wanted the data to reflect, as much as possible, the real distribution of the idiomatic and literal usages of each expression. So, in constructing our training and test set we tried to maintain for each expression the same ratio of idiomatic and literal examples across the training and test set. To create the training and test sets, we split the dataset into roughly 75% for training (917 samples) and 25% for testing (288 samples). We randomly sample the expressions ensuring that the ratio of idiomatic to literal expressions of each expression were maintained across both sets. In Table 2 we show the expressions used and their split into training and testing. The numbers in parentheses are the number of samples labelled as “I”. 4.2 Sent2Vec Models To encode the sentences into their distributed representations we used the code and models made available3 by Kiros et al. (2015). Using their 3https://github.com/ryankiros/skip-thoughts 198 Expression Samples Train Size Test Size BlowTop 28 (23) 21 (18) 7 (5) BlowTrumpet 29 (19) 21 (14) 8 (5) BlowWhistle 78 (27) 59 (20) 19 (7) CutFigure 43 (36) 33 (28) 10 (8) FindFoot 53 (48) 39 (36) 14 (12) GetNod 26 (23) 19 (17) 7 (6) GetSack 50 (43) 40 (34) 10 (9) GetWind 28 (13) 20 (9) 8 (4) HaveWord 91 (80) 69 (61) 22 (19) HitRoad 32 (25) 24 (19) 8 (6) HitRoof 18 (11) 14 (9) 4 (2) HitWall 63 (7) 50 (6) 13 (1) HoldFire 23 (7) 19 (5) 4 (2) KickHeel 39 (31) 30 (23) 9 (8) LoseHead 40 (21) 29 (15) 11 (6) LoseThread 20 (18) 16 (15) 4 (3) MakeFace 41 (27) 31 (21) 10 (6) MakeHay 17 (9) 12 (6) 5 (3) MakeHit 14 (5) 9 (3) 5 (2) MakeMark 85 (72) 66 (56) 19 (16) MakePile 25 (8) 18 (6) 7 (2) MakeScene 50 (30) 37 (22) 13 (8) PullLeg 51 (11) 40 (8) 11 (3) PullPlug 64 (44) 49 (33) 15 (11) PullPunch 22 (18) 18 (15) 4 (3) PullWeight 33 (27) 24 (20) 9 (7) SeeStar 61 (5) 49 (3) 12 (2) TakeHeart 81 (61) 61 (45) 20 (16) Table 2: The sizes of the samples for each expression and the split into training and test set. The numbers in parentheses indicates the number of idiomatic labels within the set. models it is possible to encode the sentences into three different formats: uni-skip (which uses a regular RNN to encode the sentence into a 2400dimensional vector); bi-skip (that uses a bidirectional RNN to encode the sentence also into a 2400-dimensional vector); and the comb-skip (a concatenation of uni-skip and bi-skip which has 4800 dimensions). Their models were trained using the BookCorpus dataset (Zhu et al., 2015) and has been tested in several different NLP tasks as semantic relatedness, paraphrase detection and image-sentence ranking. Although we experimented with all the three models, in this paper we only report the results of classifiers trained and tested using the comb-skip features. 4.3 Classifiers 4.3.1 “Per-expression” models The idea behind Sent2Vec is similar to those of word embeddings experiments: sentences containing similar meanings should be represented by points close to each other in the feature space. Following this intuition we experiment first with a similarity based classifier, the K-Nearest Neighbours (k-NN). For the k-NNs we experimented with k = {2, 3, 5, 10}. We also experimented with a more advanced algorithm, namely the Support Vector Machine (SVM) (Vapnik, 1995). We trained the SVM under three different configurations: Linear-SVM-PE4. 
This model used a “linear” kernel with C = 1.0 on all the classification setups. Grid-SVM-PE. For this model we performed a grid search for the best parameters for each expression. The parameters are: BlowWhiste = { kernel: ’rbf’, C = 100}; LoseHead = { kernel: ’rbf’, C = 1 }; MakeSene = { kernel: ’rbf’, C = 100 }; TakeHeart = { kernel: ’rbf’, C = 1000 }. SGD-SVM-PE. This model is a SVM with linear kernel but trained using stochastic gradient descent (Bottou, 2010). We set the SGD‘s learning rates (α) using a grid search: BlowWhiste = {α = 0.001 }; LoseHead = {α = 0.01 }; MakeSene = {α = 0.0001 }; TakeHeart = {α = 0.0001 }; FullDataset = {α = 0.0001 }. We trained these classifiers for 15 epochs. 4.3.2 “General” models We consider the task of creating a “general” classifier that takes an example of any potential idiom and classifying it into idiomatic or literal usage more difficult than the “per-expression” classification task. Hence we executed this part of the study with the SVM models only. We trained the same three types of SVM models used in the “perexpression” approach but with the following parameters: Linear-SVM-GE5. This model used a linear kernel with C = 1.0 for all the classification sets. Grid-SVM-GE. For this model we also performed a grid search and set the kernel to “polynomial kernel” of degree = 2 with C = 1000. SGD-SVM-GE. We also experimented with a SVM with linear kernel trained using stochastic gradient descent. We set the SGD‘s learning rate α = 0.0001 after performing a grid search. We trained this classifier for 15 epochs. 5 Results and Discussion We first present the results for the per expression comparison with Peng et al. (2014) and then in 4PE stands for “per-expression” 5GE stands for “general”. 199 Models BlowWhistle LoseHead MakeScene TakeHeart P. R. F1 P. R. F1 P. R. F1 P. R. F1 Peng et. al (2014) FDA-Topics 0.62 0.60 0.61 0.76 0.97 0.85 0.79 0.95 0.86 0.93 0.99 0.96 FDA-Topics+A 0.47 0.44 0.45 0.74 0.93 0.82 0.82 0.69 0.75 0.92 0.98 0.95 FDA-Text 0.65 0.43 0.52 0.72 0.73 0.72 0.79 0.95 0.86 0.46 0.40 0.43 FDA-Text+A 0.45 0.49 0.47 0.67 0.88 0.76 0.80 0.99 0.88 0.47 0.29 0.36 SVMs-Topics 0.07 0.40 0.12 0.60 0.83 0.70 0.46 0.57 0.51 0.90 1.00 0.95 SVMs-Topics+A 0.21 0.54 0.30 0.66 0.77 0.71 0.42 0.29 0.34 0.91 1.00 0.95 SVMs-Text 0.17 0.90 0.29 0.30 0.50 0.38 0.10 0.01 0.02 0.65 0.21 0.32 SVMs-Text+A 0.24 0.87 0.38 0.66 0.85 0.74 0.07 0.01 0.02 0.74 0.13 0.22 Distributed Representations KNN-2 0.61 0.41 0.49 0.30 0.64 0.41 0.55 0.89 0.68 0.46 0.96 0.62 KNN-3 0.84 0.32 0.46 0.58 0.65 0.61 0.88 0.88 0.88 0.72 0.94 0.81 KNN-5 0.79 0.28 0.41 0.57 0.65 0.61 0.87 0.83 0.85 0.73 0.94 0.82 KNN-10 0.83 0.30 0.44 0.28 0.68 0.40 0.85 0.83 0.84 0.78 0.94 0.85 Linear SVM 0.77 0.50 0.60 0.72 0.84 0.77 0.81 0.91 0.86 0.73 0.96 0.83 Grid SVM 0.80 0.51 0.62 0.83 0.89 0.85 0.80 0.91 0.85 0.72 0.96 0.82 SGD SVM 0.70 0.40 0.51 0.73 0.79 0.76 0.85 0.91 0.88 0.61 0.95 0.74 Table 3: Results in terms of precision (P.), recall (R.) and f1-score (F1) on the four chosen expressions. The results of (Peng et al., 2014) are those of the multi-paragraphs method. The bold values indicates the best results for that expression in terms of f1-score. Section 5.2 we present the results for the “general’ classifier approach. 5.1 Per-Expression Classification The averaged results over 10 runs in terms of precision, recall and f1-score are presented in Table 3. When calculating these metrics, we considered the positive class to be the “I” (idiomatic) label. 
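For concreteness, the classifier configurations of Section 4.3.1 can be set up with standard toolkit calls. The sketch below assumes X already holds the 4800-dimensional comb-skip vectors produced by the skip-thoughts encoder for the training sentences and y the corresponding “I”/“L” labels; the dictionary keys, the rbf C value shown and the mapping of “15 epochs” to max_iter are our own illustrative choices, not the authors' code.

```python
# Minimal sketch of the per-expression classifier setups described in Section 4.3.1.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import SGDClassifier

def build_classifiers():
    return {
        "KNN-3": KNeighborsClassifier(n_neighbors=3),                  # k in {2, 3, 5, 10} was tried
        "Linear-SVM-PE": SVC(kernel="linear", C=1.0),
        "Grid-SVM-PE": SVC(kernel="rbf", C=100),                       # e.g. the BlowWhistle grid-search setting
        "SGD-SVM-PE": SGDClassifier(loss="hinge", alpha=0.001, max_iter=15),  # 15 training epochs
    }

def train_all(X, y):
    """Fit every configuration on the encoded training sentences X with labels y."""
    models = build_classifiers()
    for model in models.values():
        model.fit(X, y)
    return models
```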
We used McNemar‘s test (McNemar, 1947) to check the statistical significance of our models‘ results and found all our results to be significant at p < 0.05. We can see in Table 3 that some of our models outperform the baselines on 1 expression (BlowWhistle) and achieved the same f1-scores on 2 expressions (LoseHead and MakeScene). For theses 3 expressions, our best models generally had higher precision than the baselines, finding more idioms on the test sets. In addition, for MakeScene, 2 of our models achieved the same f1scores (KNN-3 and SGD-SVM-PE), although they have different precision and recall. The only expression on which a baseline model outperformed all our models was TakeHeart where it achieved higher precision, recall and f1-scores. Nevertheless, this expression had the most imbalanced test set, with roughly 9 times more idioms than literal samples. Therefore, if the baseline label all the test set samples as idiomatic (including the literal examples), it would still have the best results. It is thus worth emphasizing that the choices of distributions for training and test sets in Peng et al’s work seems arbitrary and does not reflect the real distribution of the data in a balanced corpus. Also, Peng et al. (2014) did not provide the confusion matrices for their models so we cannot analyse their model behaviour across the classes. That aside, while our best models share the same f1-score with the baseline on 2 of the expressions, we believe that our method is more powerful if we take into account that we do not explicitly access the context surrounding our input sentences. We can also consider that our method is cheaper than the baseline in the sense that we do not need to process words other than the words in the input sentence. In addition, we note that the SVMs generally outperform the KNNs, although no single model perform best across all expressions. Regardless of the fact that the KNN-3 achieved the same f1score as SGD-SVM on MakeScene, the SVM consistently scored higher than the KNNs on all expressions. This is an interesting finding if we consider that our feature vector is 4800-dimensional and the SVMs are projecting these features into a space that has much more than 4800 dimensions and not incurring into the “curse of dimensionality”. Furthermore, other work using Sent2vec have shown the capabilities of the Sent2Vec representations to capture features that are suited to various NLP tasks where semantics is involved (e.g., paraphrase detection and semantic relatedness (Kiros et al., 2015)). These results together with our findings suggests that the factors in200 Expressions Linear-SVM-GE Grid-SVM-GE SGD-SVM-GE P. R. F1 P. R. F1 P. R. 
F1 BlowTop 0.91 0.96 0.94 0.91 0.93 0.94 0.80 0.98 0.88 BlowTrumpet 0.98 0.88 0.93 0.98 0.88 0.93 0.89 0.93 0.90 BlowWhistle* 0.84 0.67 0.75 0.84 0.68 0.75 0.67 0.59 0.63 CutFigure 0.91 0.85 0.88 0.89 0.85 0.87 0.86 0.85 0.86 FindFoot 0.96 0.93 0.94 0.97 0.93 0.95 0.85 0.90 0.87 GetNod 0.98 0.91 0.95 0.98 0.91 0.95 0.91 0.91 0.91 GetSack 0.87 0.89 0.88 0.86 0.88 0.87 0.81 0.89 0.84 GetWind 0.86 0.82 0.84 0.92 0.85 0.88 0.69 0.81 0.75 HaveWord 0.99 0.89 0.94 0.99 0.89 0.94 0.95 0.91 0.93 HitRoad 0.86 0.98 0.92 0.89 0.98 0.93 0.83 0.98 0.90 HitRoof 0.88 0.88 0.88 0.92 0.88 0.90 0.80 0.83 0.82 HitWall 0.74 0.58 0.65 0.74 0.58 0.65 0.74 0.45 0.56 HoldFire 1.00 0.63 0.77 1.00 0.63 0.77 0.82 0.67 0.74 KickHeel 0.92 0.96 0.94 0.92 0.99 0.95 0.89 0.92 0.91 LoseHead* 0.78 0.66 0.72 0.75 0.64 0.69 0.75 0.67 0.71 LoseThread 1.00 0.88 0.93 1.00 0.86 0.92 0.81 0.85 0.83 MakeFace 0.70 0.83 0.76 0.69 0.76 0.72 0.62 0.81 0.70 MakeHay 0.81 0.78 0.79 0.81 0.84 0.82 0.73 0.76 0.75 MakeHit 0.10 0.54 0.70 0.10 0.54 0.70 0.85 0.55 0.67 MakeMark 0.99 0.92 0.95 0.98 0.91 0.94 0.93 0.93 0.93 MakePile 0.84 0.67 0.74 0.84 0.70 0.76 0.74 0.70 0.72 MakeScene* 0.92 0.84 0.88 0.92 0.81 0.86 0.78 0.81 0.79 PullLeg 0.79 0.71 0.75 0.82 0.72 0.77 0.75 0.70 0.72 PullPlug 0.91 0.91 0.91 0.91 0.91 0.91 0.90 0.92 0.91 PullPunch 0.85 0.87 0.86 0.87 0.87 0.87 0.70 0.85 0.77 PullWeight 1.00 0.96 0.98 1.00 0.96 0.98 0.89 0.93 0.93 SeeStar 0.17 0.13 0.15 0.17 0.13 0.15 0.17 0.17 0.17 TakeHeart* 0.94 0.79 0.86 0.94 0.80 0.86 0.86 0.80 0.83 Total 0.84 0.80 0.83 0.84 0.80 0.83 0.79 0.79 0.78 Table 4: Precision (P.), recall (R.) and f1-scores (F1) calculated on the expressions of the balanced part of the VNC-Tokens dataset. The expressions marked with * indicate the expressions also evaluated with the “per-expression” classifiers. volved in distinguishing between the semantics of idiomatic and literal language are deeply entrenched in language generation and only a highdimensional representation can enable a classifier to make that distinction. This observation also implies that the contribution of each feature (generated by the distributed representation) is very small, given the fact that we need that many dimensions and the space needed to unpack the components of literal and idiomatic language has many more dimensions than the input space. Therefore, the current manually engineered features (i.e., the features used in previous idiom token classification) are only capturing a small portion of these dimensions and assigning more weight to these dimensions while other dimensions (not captured) are not considered (i.e., as they are not considered, the features represented by these dimensions have their weight equal to 0) Another point for consideration is the fact that the combination of our model with the work of Peng et al. (2014) may result in a stronger model on this “per-expression” setting. Nevertheless, as previously highlighted, it was not possible for us to directly re-implement their work. 5.2 General Classification Moving on to the general classification case, we present the average results (in terms of precision, recall and f1-score) over 10 runs to our “general” classifiers on the balanced part of the VNC-Tokens dataset. Once again, the positive class is assumed to be the “I” (idiomatic) label and we split the outcomes per expression. 
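The ratio-preserving 75%/25% split described in Section 4.1 can be approximated with a standard stratified split over (expression, label) pairs; the sketch below is our own illustration rather than the exact sampling procedure used, and it assumes every (expression, label) combination occurs at least twice.

```python
# Illustrative only: a 75/25 split that keeps, for every expression, the same
# idiomatic ("I") to literal ("L") ratio in the training and test sets.
from sklearn.model_selection import train_test_split

def split_dataset(sentences, expressions, labels, seed=0):
    # Stratify on (expression, label) pairs so each expression's class ratio is preserved.
    # Note: stratification requires each (expression, label) stratum to have >= 2 samples.
    strata = [f"{expr}|{lab}" for expr, lab in zip(expressions, labels)]
    return train_test_split(
        sentences, labels, test_size=0.25, stratify=strata, random_state=seed
    )
```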
It should be noted that the “per-expression” evaluation was performed using a balanced set to train the classifiers, while in this experiment we maintained the ratio of idiomatic to literal usages for each expression across the training and test sets. Our motivation for maintaining this ratio was to simulate the real distribution of the classes in the corpus. We present results for the four individual MWEs used in the per-expression evaluation, as well as a set of averages computed over all 28 expressions in the “balanced” portion of the dataset. Referring to the results, we first of all note that the overall performance of the “general” classifiers is
We followed the intuition presented by previous experiments with distributed representations that words with similar meaning are clustered together in feature space and experimented with a “general” classifier that is trained on a dataset of mixed expressions. We have shown that the “general” classifier is feasible but the traditional “per-expression” does achieve better results in some cases. In future work we plan to investigate the use of Sent2Vec to encode larger samples of text - not only the sentence containing idioms. We also plan to further analyse the errors made by our “general” model and investigate the “general” approach on the skewed part of the VNC-tokens dataset. We also plan to investigate an end-to-end approach based on deep learning-based representations to classify literal and idiomatic language use. In addition, we also plan to compare our work to the method of Sporleder et al. (2010) as well apply our work on the IDX Corpus (Sporleder et al., 2010) and to other languages. The focus of these future experiments will be to test how our approach which is relatively less dependent on NLP resources compares with these other methods for idiom token classification. Acknowledgments We would like to thank the anonymous reviewers for their valuable comments and feedback. Giancarlo D. Salton would like to thank CAPES (“Coordenac¸˜ao de Aperfeic¸oamento de Pessoal de N´ıvel Superior”) for his Science Without Borders scholarship, proc n. 9050-13-2. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. J. Mach. Learn. Res., 3:993–1022, March. 202 L´eon Bottou. 2010. Large-scale machine learning with stochastic gradient descent. In Proceedings of the 19th International Conference on Computational Statistics (COMPSTAT’2010), pages 177–187. Lou Burnard. 2007. Reference guide for the british national corpus (xml edition). Technical report, http://www.natcorp.ox.ac.uk/. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734, Doha, Qatar, October. Association for Computational Linguistics. Paul Cook, Afsaneh Fazly, and Suzanne Stevenson. 2008. The VNC-Tokens Dataset. In Proceedings of the LREC Workshop: Towards a Shared Task for Multiword Expressions (MWE 2008), Marrakech, Morocco. Afsanesh Fazly, Paul Cook, and Suzanne Stevenson. 2009. Unsupervised type and token identification of idiomatic expressions. In Computational Linguistics, volume 35, pages 61–103. Anna Feldman and Jing Peng. 2013. Automatic detection of idiomatic clauses. In Proceedings of the 14th International Conference on Computational Linguistics and Intelligent Text Processing - Volume Part I, CICLing’13, pages 435–446. Chikara Hashimoto and Daisuke Kawahara. 2008. Construction of an idiom corpus and its application to idiom identification based on wsd incorporating idiom-specific features. In Proceedings of the conference on empirical methods in natural language processing, pages 992–1001. Association for Computational Linguistics. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. June. Yoon Kim. 
2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1746–1751. Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Systems 28, pages 3276–3284. Linlin Li and Caroline Sporleder. 2010a. Linguistic cues for distinguishing literal and non-literal usages. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 683– 691. Linlin Li and Caroline Sporleder. 2010b. Using gaussian mixture models to detect figurative language in context. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10, pages 297–300, Stroudsburg, PA, USA. Association for Computational Linguistics. Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153–157. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In The 2013 Conference of the North Americal Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 746–751. Jing Peng, Anna Feldman, and Ekaterina Vylomova. 2014. Classifying idiomatic and literal expressions using topic models and intensity of emotions. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2019–2027, October. Giancarlo D. Salton, Robert J. Ross, and John D. Kelleher. 2014. An Empirical Study of the Impact of Idioms on Phrase Based Statistical Machine Translation of English to Brazilian-Portuguese. In Third Workshop on Hybrid Approaches to Translation (HyTra) at 14th Conference of the European Chapter of the Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642. Caroline Sporleder and Linlin Li. 2009. Unsupervised recognition of literal and non-literal use of idiomatic expressions. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pages 754–762. Caroline Sporleder, Linlin Li, Philip Gorinski, and Xaver Koch. 2010. Idioms in context: The idix corpus. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC-2010), pages 639–646. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104–3112. 203 Vladimir N. Vapnik. 1995. The Nature of Statistical Learning Theory. Springer-Verlag New York, Inc., New York, NY, USA. Aline Villavicencio, Francis Bond, Anna Korhonen, and Diana McCarthy. 2005. 
Editorial: Introduction to the special issue on multiword expressions: Having a crack at a hard nut. Comput. Speech Lang., 19(4):365–377. Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. arXiv preprint arXiv:1506.06724.
2016
19
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2019–2028, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Jointly Learning to Embed and Predict with Multiple Languages Daniel C. Ferreira∗ André F. T. Martins†∗♯ Mariana S. C. Almeida∗♯ ∗Priberam Labs, Alameda D. Afonso Henriques, 41, 2o, 1000-123 Lisboa, Portugal †Unbabel Lda, Rua Visconde de Santarém, 67-B, 1000-286 Lisboa, Portugal ♯Instituto de Telecomunicações, Instituto Superior Técnico, 1049-001 Lisboa, Portugal {dcf,mla}@priberam.pt, {andre.martins}@unbabel.com Abstract We propose a joint formulation for learning task-specific cross-lingual word embeddings, along with classifiers for that task. Unlike prior work, which first learns the embeddings from parallel data and then plugs them in a supervised learning problem, our approach is oneshot: a single optimization problem combines a co-regularizer for the multilingual embeddings with a task-specific loss. We present theoretical results showing the limitation of Euclidean co-regularizers to increase the embedding dimension, a limitation which does not exist for other co-regularizers (such as the ℓ1distance). Despite its simplicity, our method achieves state-of-the-art accuracies on the RCV1/RCV2 dataset when transferring from English to German, with training times below 1 minute. On the TED Corpus, we obtain the highest reported scores on 10 out of 11 languages. 1 Introduction Distributed representations of text (embeddings) have been the target of much research in natural language processing (Collobert and Weston, 2008; Mikolov et al., 2013; Pennington et al., 2014; Levy et al., 2015). Word embeddings partially capture semantic and syntactic properties of text in the form of dense real vectors, making them apt for a wide variety of tasks, such as language modeling (Bengio et al., 2003), sentence tagging (Turian et al., 2010; Collobert et al., 2011), sentiment analysis (Socher et al., 2011), parsing (Chen and Manning, 2014), and machine translation (Zou et al., 2013). At the same time, there has been a consistent progress in devising “universal” multilingual models via cross-lingual transfer techniques of various kinds (Hwa et al., 2005; Zeman and Resnik, 2008; McDonald et al., 2011; Ganchev and Das, 2013; Martins, 2015). This line of research seeks ways of using data from resourcerich languages to solve tasks in resource-poor languages. Given the difficulty of handcrafting language-independent features, it is highly appealing to obtain rich, delexicalized, multilingual representations embedded in a shared space. A string of work started with Klementiev et al. (2012) on learning bilingual embeddings for text classification. Hermann and Blunsom (2014) proposed a noise-contrastive objective to push the embeddings of parallel sentences to be close in space. A bilingual auto-encoder was proposed by Chandar et al. (2014), while Faruqui and Dyer (2014) applied canonical correlation analysis to parallel data to improve monolingual embeddings. Other works optimize a sum of monolingual and cross-lingual terms (Gouws et al., 2015; Soyer et al., 2015), or introduce bilingual variants of skip-gram (Luong et al., 2015; Coulmance et al., 2015). Recently, Pham et al. (2015) extended the non-compositional paragraph vectors of Le and Mikolov (2014) to a bilingual setting, achieving a new state of the art at the cost of more expensive (and non-deterministic) prediction. 
In this paper, we propose an alternative joint formulation that learns embeddings suited to a particular task, together with the corresponding classifier for that task. We do this by minimizing a combination of a supervised loss function and a multilingual regularization term. Our approach leads to a convex optimization problem and makes a bridge between classical co-regularization approaches for semi-supervised learning (Sindhwani et al., 2005; Altun et al., 2005; Ganchev et al., 2019 2008) and modern representation learning. In addition, we show that Euclidean co-regularizers have serious limitations to learn rich embeddings, when the number of task labels is small. We establish this by proving that the resulting embedding matrices have their rank upper bounded by the number of labels. This limitation does not exist for other regularizers (convex or not), such as the ℓ1-distance and noise-contrastive distances. Our experiments in the RCV1/RCV2 dataset yield state-of-the-art accuracy (92.7%) with this simple convex formulation, when transferring from English to German, without the need of negative sampling, extra monolingual data, or nonadditive representations. For the reverse direction, our best number (79.3%), while far behind the recent para_doc approach (Pham et al., 2015), is on par with current compositional methods. On the TED corpus, we obtained general purpose multilingual embeddings for 11 target languages, by considering the (auxiliary) task of reconstructing pre-trained English word vectors. The resulting embeddings led to cross-lingual multi-label classifiers that achieved the highest reported scores on 10 out of these 11 languages.1 2 Cross-Lingual Text Classification We consider a cross-lingual classification framework, where a classifier is trained on a dataset from a source language (such as English) and applied to a target language (such as German). Later, we generalize this setting to multiple target languages and to other tasks besides classification. The following data are assumed available: 1. A labeled dataset Dl := {⟨x(m), y(m))}M m=1, consisting of text documents x in the source language categorized with a label y ∈{1, . . . , L}. 2. An unlabeled parallel corpus Du := {(s(n), t(n))}N n=1, containing sentences s in the source language paired with their translations t in the target language (but no information about their categories). Let VS and VT be the vocabulary size of the source and target languages, respectively. Throughout, we represent sentences s ∈RVS and t ∈RVT as vectors of word counts, and documents x as an average of sentence vectors. We assume that 1We provide the trained embeddings at http://www. cs.cmu.edu/~afm/projects/multilingual_ embeddings.html. the unlabeled sentences largely outnumber the labeled documents, N ≫M, and that the number of labels L is relatively small. The goal is to use the data above to learn a classifier h : RVT → {1, . . . , L} for the target language. This problem is usually tackled with a two-stage approach: in the first step, bilingual word embeddings P ∈RVS×K and Q ∈RVT×K are learned from Du, where each row of these matrices contains a Kth dimensional word representation in a shared vector space. In the second step, a standard classifier is trained on Dl, using the source embeddings P ∈RVS×K. Since the embeddings are in a shared space, the trained model can be applied directly to classify documents in the target language. We describe next these two steps in more detail. 
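Before turning to those two steps, a minimal sketch of the count-vector representation just described (the vocabulary and sentences below are invented for illustration; this is not code from the paper):

```python
# Toy sketch: sentences as bag-of-words count vectors over a language's
# vocabulary, and a document as the average of its sentence vectors.
import numpy as np

def count_vector(tokens, vocab):
    """Return a |V|-dimensional vector of word counts for one sentence."""
    v = np.zeros(len(vocab))
    for tok in tokens:
        if tok in vocab:              # out-of-vocabulary tokens are simply dropped
            v[vocab[tok]] += 1.0
    return v

# hypothetical source-language vocabulary (word -> index)
vocab_src = {"the": 0, "economy": 1, "grew": 2, "markets": 3, "fell": 4}

sentences = [["the", "economy", "grew"], ["markets", "fell"]]
doc = np.mean([count_vector(s, vocab_src) for s in sentences], axis=0)
print(doc)  # -> [0.5 0.5 0.5 0.5 0.5]
```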
We assume throughout an additive representation for sentences and documents (denoted ADD by Hermann and Blunsom (2014)). These representations can be expressed algebraically as P ⊤x, P ⊤s, Q⊤t ∈RK, respectively. Step 1: Learning the Embeddings. The crosslingual embeddings P and Q are trained so that the representations of paired sentences (s, t) ∈ Du have a small (squared) Euclidean distance dℓ2(s, t) = 1 2∥P ⊤s −Q⊤t∥2. (1) Since a direct minimization of Eq. 1 leads to a degenerate solution (P = 0, Q = 0), Hermann and Blunsom (2014) use instead a noise-contrastive large-margin distance obtained via negative sampling, dns(s, t, n) = [m + dℓ2(s, t) −dℓ2(s, n)]+, (2) where n is a random (unpaired) target sentence, m is a “margin” parameter, and [x]+ := max{0, x}. Letting J be the number of negative examples in each sample, they arrive at the following objective function to be minimized: Rns(P , Q) := 1 N N X n=1 J X j=1 dns(s(n), t(n), n(n,j)). (3) This minimization can be carried out efficiently with gradient-based methods, such as stochastic gradient descent or AdaGrad (Duchi et al., 2011). Note however that the objective function in Eq. 3 is not convex. Therefore, one may land at different local minima, depending on the initialization. 2020 Step 2: Training the Classifier. Once we have the bilingual embeddings P and Q, we can compute the representation P ⊤x ∈RK of each document x in the labeled dataset Dl. Let V ∈RK×L be a matrix of parameters (weights), with one column vy per label. A linear model is used to make predictions, according to by = argmaxy∈{1,...,L}v⊤ y P ⊤x = argmaxy∈{1,...,L}w⊤ y x, (4) where wy is a column of the matrix W := P V ∈ RVS×L. In prior work, the perceptron algorithm was used to learn the weights V from the labeled examples in Dl (Klementiev et al., 2012; Hermann and Blunsom, 2014). Note that, at test time, it is not necessary to store the full embeddings: if L ≪K, we may simply precompute W := P V (one weight per word and label) if the input is in the source language—or QV , if the input is in the target language—and treat this as a regular bag-ofwords linear model. 3 Jointly Learning to Embed and Classify Instead of a two-stage approach, we propose to learn the bilingual embeddings and the classifier jointly on Dl ∪Du, as described next. Our formulation optimizes a combination of a co-regularization function R, whose goal is to push the embeddings of paired sentences in Du to stay close, and a loss function L, which fits the model to the labeled data in Dl. The simplest choice for R is a simple Euclidean co-regularization function: Rℓ2(P , Q) = 1 N N X n=1 dℓ2(s(n), t(n)) (5) = 1 2N N X n=1 ∥P ⊤s(n) −Q⊤t(n)∥2. An alternative is the ℓ1-distance: Rℓ1(P , Q) = 1 N N X n=1 ∥P ⊤s(n) −Q⊤t(n)∥1. (6) One possible advantage of Rℓ1(P , Q) over Rℓ2(P , Q) is that the ℓ1-distance is more robust to outliers, hence it is less sensitive to differences in the parallel sentences. Note that both functions in Eqs. 5–6 are jointly convex on P and Q, unlike the one in Eq. 3. They are also simpler and do not require negative sampling. While these functions have a degenerate behavior in isolation (since they are both minimized by P = 0 and Q = 0), we will see that they become useful when plugged into a joint optimization framework. The next step is to define the loss function L to leverage the labeled data in Dl. We consider a loglinear model P(y | x; W ) ∝exp(w⊤ y x), which leads to the following logistic loss function: LLL(W ) = −1 M M X m=1 log P(y(m) | x(m); W ). 
(7) We impose that W is of the form W = P V for a fixed V ∈RK×L, whose choice we discuss below. Putting the pieces together and adding some extra regularization terms, we formulate our joint objective function as follows: F(P , Q) = µR(P , Q) + L(P V ) + µS 2 ∥P ∥2 F + µT 2 ∥Q∥2 F, (8) where µ, µS, µT ≥0 are regularization constants. By minimizing a combination of L(P V ) and R(P , Q), we expect to obtain embeddings Q∗ that lead to an accurate classifier h for the target language. Note that P = 0 and Q = 0 is no longer a solution, due to the presence of the loss term L(P V ) in the objective. Choice of V . In Eq. 8, we chose to keep V fixed rather than optimize it. The rationale is that there are many more degrees of freedom in the embedding matrices P and Q than in V (concretely, O(K(VS + VT)) versus O(KL), where we are assuming a small number of labels, L ≪VS + VT). Our assumption is that we have enough degrees of freedom to obtain an accurate model, regardless of the choice of V . These claims will be backed in §4 by a more rigorous theoretical result. Keeping V fixed has another important advantage: it allows to minimize F with respect to P and Q only, which makes it a convex optimization problem if we choose R and L to be both convex—e.g., setting R ∈{Rℓ2, Rℓ1} and L := LLL. Relation to Multi-View Learning. An interesting particular case of this formulation arises if K = L and V = IL (the identity matrix). In that case, we have W = P and the embedding matrices P and Q are in fact weights for every pair of word and label, as in standard bag-of-word 2021 models. In this case, we may interpret the coregularizer R(P , Q) in Eq. 8 as a term that pushes the label scores of paired sentences P ⊤s(n) and Q⊤t(n) to be similar, while the source-based loglinear model is fit via L(W ). The same idea underlies various semi-supervised co-regularization methods that seek agreement between multiple views (Sindhwani et al., 2005; Altun et al., 2005; Ganchev et al., 2008). In fact, we may regard the joint optimization in Eq. 8 as a generalization of those methods, making a bridge between those methods and representation learning. Multilingual Embeddings. It is straightforward to extend the framework herein presented to the case where there are multiple target languages (say R of them), and we want to learn one embedding matrix for each, {Q1, . . . , QR}. The simplest way is to consider a sum of pairwise co-regularizers, R′(P , {Q1, . . . , QR}) := R X r=1 R(P , Qr). (9) If R is additive over the parallel sentences (which is the case for Rℓ2, Rℓ1 and Rns), then this procedure is equivalent to concatenating all the parallel sentences (regardless of the target language) and adding a language suffix to the words to distinguish them. This reduces directly to a problem in the same form as Eq. 8. Pre-Trained Source Embeddings. In practice, it is often the case that pre-trained embeddings for the source language are already available (let ¯P be the available embedding matrix). It would be foolish not to exploit those resources. In this scenario, the goal is to use ¯P and the dataset Du to obtain “good” embeddings for the target languages (possibly tweaking the source embeddings too, P ≈¯P ). Our joint formulation in Eq. 8 can also be used to address this problem. It suffices to set K = L and V = IL (as in the multi-view learning case discussed above) and to define an auxiliary task that pushes P and ¯P to be similar. The simplest way is to use a reconstruction loss: Lℓ2(P , ¯P ) := 1 2∥P −¯P ∥2 F. 
(10) The resulting optimization problem has resemblances with the retrofitting approach of Faruqui et al. (2015), except that the goal here is to extend the embeddings to other languages, instead of pushing monolingual embeddings to agree with a semantic lexicon. We will present some experiments in §5.2 using this framework. 4 Limitations of the Euclidean Co-Regularizer One may wonder how much the embedding dimension K influences the learned classifier. The next proposition shows the (surprising) result that, with the formulation in Eq. 8 with R = Rℓ2, it makes absolutely no difference to increase K past the number of labels L. Below, T ∈RVT×N denotes the matrix with columns t(1), . . . , t(N). Proposition 1. Let R = Rℓ2 and assume T has full row rank.2 Then, for any choice of V ∈ RK×L, possibly with K > L, the following holds: 1. There is an alternative, low-dimensional, V ′ ∈ RK′×L with K′ ≤L such that the classifier obtained (for both languages) by optimizing Eq. 8 using V ′ is the same as if using V .3 2. This classifier depends on V only via the L-byL matrix V ⊤V . 3. If P ∗, Q∗are the optimal embeddings obtained with V , then we always have rank(P ∗) ≤L and rank(Q∗) ≤L regardless of K. Proof. See App. A.1 in the supplemental material. Let us reflect for a moment on the practical impact of Prop. 1. This result shows the limitation of the Euclidean co-regularizer Rℓ2 in a very concrete manner: when R = Rℓ2, we only need to consider representations of dimension K ≤L. Note also that a corollary of Prop. 1 arises when V ⊤V = IL, i.e., when V is chosen to have orthonormal columns (a sensible choice, since it corresponds to seeking embeddings that leave the label weights “uncorrelated”). Then, the second statement of Prop. 1 tells us that the resulting classifier will be the same as if we had simply set V = IL (the particular case discussed in §3). We will see in §5.1 that, despite this limitation, this classifier is actually a very strong baseline. Of course, if the number of labels L is large enough, 2This assumption is not too restrictive: it holds if N ≥VT and if no target sentence can be written as a linear combination of the others (this can be accomplished if we remove redundant parallel sentences). 3Let P ∗, Q∗and P ′∗, Q′∗be the optimal embeddings obtained with V and V ′, respectively. Since we are working with linear classifiers, the two classifiers are the same in the sense that P ∗V = P ′∗V ′ and Q∗V = Q′∗V ′. 2022 this limitation might not be a reason for concern.4 An instance will be presented in §5.2, where we will see that the Euclidean co-regularizer excels. Finally, one might wonder whether Prop. 1 applies only to the (Euclidean) ℓ2 norm or if it holds for arbitrary regularizers. In fact, we show in App. A.2 that this limitation applies more generally to Mahalanobis-Frobenius norms, which are essentially Euclidean norms after a linear transformation of the vector space. However, it turns out that for general norms such limitation does not exist, as shown below. Proposition 2. If R = Rℓ1 in Eq. 8, then the analogous to Proposition 1 does not hold. It also does not hold for the ℓ∞-norm and the ℓ0-“norm.” Proof. See App. A.3 in the supplemental material. This result suggests that, for other regularizers R ̸= Rℓ2, we may eventually obtain better classifiers by increasing K past L. As such, in the next section, we experiment with R ∈ {Rℓ2, Rℓ1, Rns}, where Rns is the (non-convex) noise-contrastive regularizer of Eq. 3. 
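As a concrete reference point for the experiments that follow, a minimal numpy sketch (not the authors' implementation) of the joint objective in Eq. 8, with the ℓ2 or ℓ1 co-regularizer of Eqs. 5–6 and the logistic loss of Eq. 7. Here S and T stack the parallel sentence count vectors as rows, X the labeled source documents, y their integer labels, and V is kept fixed as discussed in §3; the variable names are ours, not the paper's:

```python
import numpy as np

def co_regularizer(P, Q, S, T, norm="l2"):
    D = S @ P - T @ Q                      # N x K gaps between paired representations
    if norm == "l2":
        return 0.5 * np.mean(np.sum(D ** 2, axis=1))      # Eq. 5
    return np.mean(np.sum(np.abs(D), axis=1))             # Eq. 6 (l1 distance)

def logistic_loss(P, V, X, y):
    Z = X @ (P @ V)                        # M x L label scores, with W = P V
    Z -= Z.max(axis=1, keepdims=True)      # numerical stability
    log_probs = Z - np.log(np.exp(Z).sum(axis=1, keepdims=True))
    return -np.mean(log_probs[np.arange(len(y)), y])      # Eq. 7

def joint_objective(P, Q, V, S, T, X, y,
                    mu=1.0, mu_s=1e-3, mu_t=1e-3, norm="l2"):
    # Eq. 8: co-regularizer + supervised loss + Frobenius regularization
    return (mu * co_regularizer(P, Q, S, T, norm)
            + logistic_loss(P, V, X, y)
            + 0.5 * mu_s * np.sum(P ** 2)
            + 0.5 * mu_t * np.sum(Q ** 2))
```

Any gradient-based optimizer can be applied to this function; the experiments below use AdaGrad.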
5 Experiments We report results on two experiments: one on cross-lingual classification on the Reuters RCV1/RCV2 dataset, and another on multi-label classification with multilingual embeddings on the TED Corpus.5 5.1 Reuters RCV1/RCV2 We evaluate our framework on the cross-lingual document classification task introduced by Klementiev et al. (2012). Following prior work, our dataset Du consists of 500,000 parallel sentences from the Europarl v7 English-German corpus (Koehn, 2005); and our labeled dataset Dl consists of English and German documents from the RCV1/RCV2 corpora (Lewis et al., 2004), each categorized with one out of L = 4 labels. We used the same split as Klementiev et al. (2012): 1,000 documents for training, of which 200 are held out as validation data, and 5,000 for testing. 4For regression tasks (such as the one presented in the last paragraph of 3), instead of the “number of labels,” L should be regarded as the number of output variables to regress. 5Our code is available at https: //github.com/dcferreira/ multilingual-joint-embeddings. Note that, in this dataset, we are classifying documents based on their bag-of-word representations, and learning word embeddings by bringing the bag-of-word representations of parallel sentences to be close together. In this sense, we are bringing together these multiple levels of representations (document, sentence and word). We experimented with the joint formulation in Eq. 8, with L := LLL and R ∈{Rℓ2, Rℓ1, Rns}. We optimized with AdaGrad (Duchi et al., 2011) with a stepsize of 1.0, using mini-batches of 100 Reuters RCV1/RCV2 documents and 50,000 Europarl v7 parallel sentences. We found no need to run more than 100 iterations, with most of our runs converging under 50. Our vocabulary has 69,714 and 175,650 words for English and German, respectively, when training on the English portion of the Reuters RCV1/RCV2 corpus, and 61,120 and 183,888 words for English and German, when training in the German portion of the corpus. This difference is due to the inclusion of words in the training data into the vocabulary. We do not remove any words from the vocabulary, for simplicity. We used the validation set to tune the hyperparameters {µ, µS, µT} and to choose the iteration number. When using K = L, we chose V = IL; otherwise, we chose V randomly, sampling its entries from a Gaussian N(0, 0.1). Table 1 shows the results. We include for comparison the most competitive systems published to date. The first thing to note is that our joint system with Euclidean co-regularization performs very well for this task, despite the theoretical limitations shown in §4. Although its embedding size is only K = 4 (one dimension per label), it outperformed all the two-stage systems trained on the same data, in both directions. For the EN→DE direction, our joint system with ℓ1 co-regularization achieved state-of-the-art results (92.7%), matching two-stage systems that use extra monolingual data, negative sampling, or non-additive document representations. It is conceivable that the better results of Rℓ1 over Rℓ2 come from its higher robustness to differences in the parallel sentences. For the DE→EN direction, our best result (79%) was obtained with the noise-contrastive coregularizer, which outperformed all systems except para_doc (Pham et al., 2015). 
While the accuracy of para_doc is quite impressive, note that it requires 500-dimensional embeddings 2023 K EN→DE DE→EN I-Matrix [KTB12] 40 77.6 71.1 ADD [HB14] 40 83.7 71.4 ADD [HB14] 128 86.4 74.7 BI [HB14] 40 83.4 69.2 BI [HB14] 128 86.1 79.0 BilBOWA [GBC15] 40 86.5 75.0 Binclusion [SSA15] 40 86.8 76.7 Bincl.+RCV [SSA15] (‡) 40 92.7 84.4 CLC-WA [SLLS15] (†) 40 91.3 77.2 para_sum [PLM15] (†) 100 90.6 78.8 para_doc [PLM15] (†) 500 92.7 91.5 Joint, Rℓ2 4 91.2 78.2 Joint, Rℓ1 4 92.7 76.0 Joint, Rℓ1 40 92.7 76.2 Joint, Rns 4 91.2 76.8 Joint, Rns 40 91.4 79.3 Table 1: Accuracies in the RCV1/RCV2 dataset. Shown for comparison are Klementiev et al. (2012) [KTB12], Hermann and Blunsom (2014) [HB14], Gouws et al. (2015) [GBC15], Soyer et al. (2015) [SSA15], Shi et al. (2015) [SLLS15], and Pham et al. (2015) [PLM15]. Systems marked with (†) used the full 1.8M parallel sentences in Europarl. The one with (‡) used additional target monolingual data from RCV1/RCV2. The bottom rows refer to our joint method, with Euclidean (ℓ2), ℓ1, and noisecontrastive co-regularization. (hence many more parameters), was trained on more parallel sentences, and requires more expensive (and non-deterministic) computation at test time to compute a document’s embedding. Our method has the advantage of being simple and very fast to train: it took less than 1 minute to train the joint-Rℓ1 system for EN→DE, using a single core on an Intel Xeon @2.5 GHz. This can be compared with Klementiev et al. (2012), who took 10 days on a single core, or Coulmance et al. (2015), who took 10 minutes with 6 cores.6 Although our theoretical results suggest that increasing K when using the ℓ1 norm may increase the expressiveness of our embeddings, our results do not support this claim (the improvements in DE→EN from K = 4 to K = 40 were tiny). However, it led to a gain of 2.5 points when using negative sampling. For K = 40, this system is much more accurate than Hermann and Blunsom (2014), which confirms that learning the embeddings together with the task is highly beneficial. 6Coulmance et al. (2015) reports accuracies of 87.8% (EN→DE) and 78.7% (DE→EN), when using 10,000 training documents from the RCV1/RCV2 corpora. 5.2 TED Corpus To assess the ability of our framework to handle multiple target languages, we ran a second set of experiments on the TED corpus (Cettolo et al., 2012), using the training and test partitions created by Hermann and Blunsom (2014), downloaded from http://www.clg.ox.ac. uk/tedcorpus. The corpus contains English transcriptions and multilingual, sentence-aligned translations of talks from the TED conference in 12 different languages, with 12,078 parallel documents in the training partition (totalling 1,641,985 parallel sentences). Following their prior work, we used this corpus both as parallel data (Du) and as the task dataset (Dl). There are L = 15 labels and documents can have multiple labels. We experimented with two different strategies: • A one-stage system (Joint), which jointly trains the multilingual embeddings and the multi-label classifier (similarly as in §5.1). To cope with multiple target languages, we used a sum of pairwise co-regularizers as described in Eq. 9. For classification, we use multinomial logistic regression, where we select those labels with a posterior probability above 0.18 (tuned on vali2024 dation data). 
• A two-stage approach (Joint w/ Aux), where we first obtain multilingual embeddings by applying our framework with an auxiliary task with pre-trained English embeddings (as described in Eq. 10 and in the last paragraph of §3), and then use the resulting multilingual representations to train the multi-label classifier. We address this multi-label classification problem with independent binary logistic regressors (one per label), trained by running 100 iterations of L-BFGS (Liu and Nocedal, 1989). At test time, we select those labels whose posterior probability are above 0.5. For the Joint w/ Aux strategy, we used the 300-dimensional GloVe-840B vectors (Pennington et al., 2014), downloaded from http:// nlp.stanford.edu/projects/glove/. Table 2 shows the results for cross-lingual classification, where we use English as source and each of the other 11 languages as target. We compare our two strategies above with the strong Machine Translation (MT) baseline used by Hermann and Blunsom (2014) (which translates the input documents to English with a state-of-theart MT system) and with their two strongest systems, which build document-level representations from embeddings trained bilingually or multilingually (called DOC/ADD single and DOC/ADD joint, respectively).7 Overall, our Joint system with ℓ2 regularization outperforms both Hermann and Blunsom (2014)’s systems (but not the MT baseline) for 8 out of 11 languages, performing generally better than our ℓ1-regularized system. However, the clear winner is our ℓ2-regularized Joint w/ Aux system, which wins over all systems (including the MT baseline) by a substantial margin, for all languages. This shows that pre-trained source embeddings can be extremely helpful in bootstrapping multilingual ones.8 On the other hand, the performance of the Joint w/ Aux system with ℓ1 regularization is rather disappointing. Note that the limitations of Rℓ2 shown in §4 are not a concern here, since the auxiliary task has 7Note that, despite the name, the Hermann and Blunsom (2014)’s joint systems are not doing joint training as we are. 8Note however that, overall, our Joint w/ Aux systems have access to more data than our Joint systems and also than Hermann and Blunsom (2014)’s systems, since the pretrained embeddings were trained on a large amount of English monolingual data. Yet, the amount of target language data is the same. L = 300 dimensions (the dimension of the pretrained embeddings). A small sample of the multilingual embeddings produced by the winner system is shown in Table 4. Finally, we did a last experiment in which we use our multilingual embeddings obtained with Joint w/ Aux to train monolingual systems for each language. This time, we compare with a bag-ofwords naïve Bayes system (reported by Hermann and Blunsom (2014)), a system trained on the Polyglot embeddings from Al-Rfou et al. (2013) (which are multilingual, but not in a shared representation space), and the two systems developed by Hermann and Blunsom (2014). The results are shown in Table 3. We observe that, with the exception of Turkish, our systems consistently outperform all the competitors. Comparing the bottom two rows of Tables 2 and 3 we also observe that, for the ℓ2-regularized system, there is not much degradation caused by cross-lingual training versus training on the target language directly (in fact, for Spanish, Polish, and Brazilian Portuguese, the former scores are even higher). This suggests that the multilingual embeddings have high quality. 
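A minimal sketch of the classifier stage of the Joint w/ Aux setup described above, assuming scikit-learn as a stand-in for the L-BFGS-trained independent binary logistic regressors (the helper names doc_embed, train_multilabel and predict_labels are ours; P and Q denote the learned source- and target-language embedding matrices, and documents are represented additively):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

def doc_embed(x_bow, E):
    """Additive document representation E^T x from a bag-of-words vector."""
    return x_bow @ E

def train_multilabel(X_bow, Y, P):
    """Y is an M x L binary indicator matrix (documents may carry several labels)."""
    Z = np.vstack([doc_embed(x, P) for x in X_bow])
    base = LogisticRegression(solver="lbfgs", max_iter=100)   # one regressor per label
    return OneVsRestClassifier(base).fit(Z, Y)

def predict_labels(clf, X_bow, Q, threshold=0.5):
    """Cross-lingual prediction: embed target documents with Q and keep the
    labels whose posterior probability exceeds the threshold."""
    Z = np.vstack([doc_embed(x, Q) for x in X_bow])
    return (clf.predict_proba(Z) >= threshold).astype(int)
```

Because the source and target embeddings live in a shared space, the same classifier can score documents embedded with either P or Q.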
6 Conclusions We proposed a new formulation which jointly minimizes a combination of a supervised loss function with a multilingual co-regularization term using unlabeled parallel data. This allows learning task-specific multilingual embeddings together with a classifier for the task. Our method achieved state-of-the-art accuracy on the Reuters RCV1/RCV2 cross-lingual classification task in the English to German direction, while being extremely simple and computationally efficient. Our results in the Reuters RCV1/RCV2 task, obtained using Europarl v7 as parallel data, show that our method has no trouble handling different levels of representations simutaneously (document, sentence and word). On the TED Corpus, we obtained the highest reported scores for 10 out of 11 languages, using an auxiliary task with pre-trained English embeddings. Acknowledgments We would like to thank the three anonymous reviewers. This work was partially supported by the European Union under H2020 project SUMMA, grant 688139, and by Fundação para a Ciência e Tecnologia (FCT), 2025 Ara. Ger. Spa. Fre. Ita. Dut. Pol. Br. Pt. Rom. Rus. Tur. MT Baseline [HB14] 42.9 46.5 51.8 52.6 51.4 50.5 44.5 47.0 49.3 43.2 40.9 DOC/ADD single [HB14] 41.0 42.4 38.3 47.6 48.5 26.4 40.2 35.4 41.8 44.8 45.2 DOC/ADD joint [HB14] 39.2 40.5 44.3 44.7 47.5 45.3 39.4 40.9 44.6 47.6 41.7 Joint, Rℓ2, K = 15 41.8 46.6 46.6 46.0 48.7 52.5 39.5 40.8 47.6 44.9 47.2 Joint, Rℓ1, K = 15 44.0 44.7 49.4 40.1 46.1 49.4 35.7 43.5 40.5 42.2 43.4 Joint w/ Aux, Rℓ2, K = 300 46.9 52.0 59.4 54.6 56.0 53.6 51.0 51.7 53.9 52.3 49.5 Joint w/ Aux, Rℓ1, K = 300 44.0 40.4 40.4 39.5 38.6 38.1 43.2 36.6 35.1 44.3 44.4 Table 2: Cross-lingual experiments on the TED Corpus using English as a source language. Reported are the micro-averaged F1 scores for a machine translation baseline and the two strongest systems of Hermann and Blunsom (2014), our one-stage joint system (Joint), and our two-stage system that trains the multilingual embeddings jointly with the auxiliary task of fitting pre-trained English embeddings (Joint w/ Aux), with both ℓ1 and ℓ2 regularization. Bold indicates the best result for each target language. Ara. Ger. Spa. Fre. Ita. Dut. Pol. Br. Pt. Rom. Rus. Tur. BOW baseline [HB14] 46.9 47.1 52.6 53.2 52.4 52.2 41.5 46.5 50.9 46.5 51.3 Polyglot [HB14] 41.6 27.0 41.8 36.1 33.2 22.8 32.3 19.4 30.0 40.2 29.5 DOC/ADD Single [HB14] 42.2 42.9 39.4 48.1 45.8 25.2 38.5 36.3 43.1 47.1 43.5 DOC/ADD Joint [HB14] 37.1 38.6 47.2 45.1 39.8 43.9 30.4 39.4 45.3 40.2 44.1 Joint w/ Aux, Rℓ2, K = 300 48.6 54.4 57.5 55.8 56.9 54.5 46.1 51.3 56.5 53.0 49.5 Joint w/ Aux, Rℓ1, K = 300 52.4 47.8 57.8 50.0 53.3 52.3 47.6 49.0 49.2 51.4 50.9 Table 3: Monolingual experiments on the TED Corpus. Shown are the micro-averaged F1 scores for a bag-of-words baseline, a system trained on Polyglot embeddings, the two strongest systems of Hermann and Blunsom (2014), and our Joint w/ Aux system with ℓ1 and ℓ2 regularization. 
january_en science_en oil_en road_en speak_en januari_nl ˜`lw_ar óleo_pb route_fr spreken_nl ¸subat_tr ˜`lœ_ar olie_nl strada_it fala_pb gennaio_it ciência_pb petrolio_it weg_nl ˜k®_ar februarie_ro science_fr öl_de drum_ro gesproken_nl br§r_ar ¸stiin¸ta_ro pétrole_fr ˜syr_ar habla_es ianuarie_ro wetenschap_nl petrol_tr estrada_pb konu¸sma_tr febrero_es scienza_it petróleo_es drogi_pl ãîâîðèòü_ru janvier_fr ciencia_es ˜nfX_ar lopen_nl horen_nl §nA§r_ar wissenschaft_de petróleo_pb strade_it mowy_pl janeiro_pb científica_pb petrol_ro drodze_pl vorbeasc˘a_ro enero_es nauka_pl aceite_es wegen_nl spreekt_nl september_nl bilim_tr rop˛e_pl yol_tr ˜d§_ar settembre_it s,tiint,a_ro íåôòü_ru camino_es sprechen_de septiembre_es s,tiint,˘a_ro petrolul_ro conduce_ro ii_ro september_de nauki_pl íåôòè_ru andar_pb discours_fr ekim_tr íàóêà_ru žfX_ar ïóòè_ru sentire_it Fbtmbr_ar ˆlœ_ar ropy_pl syr_ar contar_pb febbraio_it ˆlw_ar E§ _ar äàëåêî_ru ñåáÿ_ru septembrie_ro scientifica_it ulei_ro yolculuk_tr JP_ar setembro_pb scienze_it ˜z§ yola_tr poser_fr Table 4: Examples of nearest neighbor words for the multilingual embeddings trained with our Joint w/ Aux system with ℓ2 regularization. Shown for each English word are the 20 closest target words in Euclidean distance, regardless of language. 2026 through contracts UID/EEA/50008/2013, through the LearnBig project (PTDC/EEISII/7092/2014), and the GoLocal project (grant CMUPERI/TIC/0046/2014). References Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual NLP. arXiv preprint arXiv:1307.1662 . Yasemin Altun, Mikhail Belkin, and David A. McAllester. 2005. Maximum Margin SemiSupervised Learning for Structured Variables. In Advances in Neural Information Processing Systems 18. pages 33–40. Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research 3:1137–1155. Mauro Cettolo, Christian Girardi, and Marcello Federico. 2012. Wit3: Web inventory of transcribed and translated talks. In Proc. of the 16th Conference of the European Association for Machine Translation. pages 261–268. Sarath Chandar, Stanislas Lauly, Hugo Larochelle, Mitesh M. Khapra, Balaraman Ravindran, Vikas Raykar, and Amrita Saha. 2014. An Autoencoder Approach to Learning Bilingual Word Representations. Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In Proc. of Empirical Methods for Natural Language Processing. pages 740–750. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proc. of the International Conference on Machine Learning. ACM, pages 160–167. Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12:2493–2537. Jocelyn Coulmance, Jean-Marc Marty, Guillaume Wenzek, and Amine Benhalloum. 2015. Transgram, Fast Cross-lingual Word-embeddings. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP) pages 1109–1113. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research 12:2121–2159. Manaal Faruqui, Jesse Dodge, Sujay K Jauhar, Chris Dyer, Eduard Hovy, and Noah A Smith. 2015. 
Retrofitting word vectors to semantic lexicons. In Proc. of Annual Meeting of the NorthAmerican Chapter of the Association for Computational Linguistics. Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proc. of Annual Meeting of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics. Kuzman Ganchev and Dipanjan Das. 2013. Crosslingual discriminative learning of sequence models with posterior regularization. In Proc. of Empirical Methods in Natural Language Processing. Kuzman Ganchev, Joao Graca, John Blitzer, and Ben Taskar. 2008. Multi-view learning over structured and non-identical outputs. In Proc. of Conference on Uncertainty in Artificial Intelligence. Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. BilBOWA: Fast Bilingual Distributed Representations without Word Alignments. Proceedings of the 32nd International Conference on Machine Learning (2015) pages 748–756. Karl Moritz Hermann and Phil Blunsom. 2014. Multilingual Models for Compositional Distributed Semantics. Proceedings of ACL pages 58–68. Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. Natural language engineering 11(3):311–325. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. 24th International Conference on Computational Linguistics - Proceedings of COLING 2012: Technical Papers (2012) pages 1459–1474. Philipp Koehn. 2005. Europarl: A parallel corpus 2027 for statistical machine translation. MT summit 11. Quoc Le and Tomas Mikolov. 2014. Distributed Representations of Sentences and Documents. International Conference on Machine Learning - ICML 2014 32:1188–1196. Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics 3:211–225. David D. Lewis, Yiming Yang, Tony G. Rose, and Fan Li. 2004. RCV1: A New Benchmark Collection for Text Categorization Research. Journal of Machine Learning Research 5:361–397. D. C. Liu and J. Nocedal. 1989. On the limited memory BFGS method for large scale optimization. Mathematical Programming 45:503–528. Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Bilingual Word Representations with Monolingual Quality in Mind. Workshop on Vector Modeling for NLP pages 151–159. André F. T. Martins. 2015. Transferring coreference resolvers with posterior regularization. In ACL. Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In Proc. of Empirical Methods in Natural Language Processing. Tomas Mikolov, Greg Corrado, Kai Chen, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. Proceedings of the International Conference on Learning Representations (ICLR 2013) pages 1–12. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. pages 1532–1543. Kaare Brandt Petersen and Michael Syskind Pedersen. 2012. The Matrix Cookbook. Hieu Pham, Minh-Thang Luong, and Christopher D. Manning. 2015. Learning Distributed Representations for Multilingual Text Sequences. Workshop on Vector Modeling for NLP pages 88–94. 
Tianze Shi, Zhiyuan Liu, Yang Liu, and Maosong Sun. 2015. Learning Cross-lingual Word Embeddings via Matrix Co-factorization. Annual Meeting of the Association for Computational Linguistics pages 567–572. Vikas Sindhwani, Partha Niyogi, and Mikhail Belkin. 2005. A co-regularization approach to semi-supervised learning with multiple views. In Proceedings of ICML workshop on learning with multiple views. Citeseer, pages 74–79. Richard Socher, Jeffrey Pennington, and Eh Huang. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Conference on Empirical Methods in Natural Language Processing. pages 151–161. Hubert Soyer, Pontus Stenetorp, and Akiko Aizawa. 2015. Leveraging Monolingual Data for Crosslingual Compositional Word Representations. Proceedings of the 2015 International Conference on Learning Representations (ICLR) . Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proc. of the Annual Meeting of the Association for Computational Linguistics. Daniel Zeman and Philip Resnik. 2008. Crosslanguage parser adaptation between related languages. In IJCNLP. pages 35–42. Will Y Zou, Richard Socher, Daniel M Cer, and Christopher D Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proc. of Empirical Methods for Natural Language Processing. pages 1393–1398. 2028
2016
190
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2029–2041, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Supersense Embeddings: A Unified Model for Supersense Interpretation, Prediction, and Utilization Lucie Flekova† and Iryna Gurevych†‡ † Ubiquitous Knowledge Processing Lab (UKP-TUDA) Department of Computer Science, Technische Universit¨at Darmstadt ‡ Ubiquitous Knowledge Processing Lab (UKP-DIPF) German Institute for Educational Research www.ukp.tu-darmstadt.de Abstract Coarse-grained semantic categories such as supersenses have proven useful for a range of downstream tasks such as question answering or machine translation. To date, no effort has been put into integrating the supersenses into distributional word representations. We present a novel joint embedding model of words and supersenses, providing insights into the relationship between words and supersenses in the same vector space. Using these embeddings in a deep neural network model, we demonstrate that the supersense enrichment leads to a significant improvement in a range of downstream classification tasks. 1 Introduction The effort of understanding the meaning of words is central to the NLP community. The word sense disambiguation (WSD) task has therefore received a substantial amount of attention (see Navigli (2009) or Pal and Saha (2015) for an overview). Words in training and evaluation data are usually annotated with senses taken from a particular lexical semantic resource, most commonly WordNet (Miller, 1995). However, WordNet has been criticized to provide too fine-grained distinctions for end level applications. e.g. in machine translation or information retrieval (Izquierdo et al., 2009). Although some researchers report an improvement in sentiment prediction using WSD (Rentoumi et al., 2009; Akkaya et al., 2011; Sumanth and Inkpen, 2015), the publication bias toward positive results (Plank et al., 2014) impedes the comparison to experiments with the opposite conclusion, and the contribution of WSD to downstream document classification tasks remains “mostly speculative”(Ciaramita and Altun, 2006), which can be attributed to the too subtle sense distinctions (Navigli, 2009). This is why supersenses, the coarse-grained word labels based on WordNet’s (Fellbaum, 1998) lexicographer files, have recently gained attention for text classification tasks. Supersenses contain 26 labels for nouns, such as ANIMAL, PERSON or FEELING and 15 labels for verbs, such as COMMUNICATION, MOTION or COGNITION. Usage of supersense labels has been shown to improve dependency parsing (Agirre et al., 2011), named entity recognition (Marrero et al., 2009; R¨ud et al., 2011), non-factoid question answering (Surdeanu et al., 2011), question generation (Heilman, 2011), semantic role labeling (Laparra and Rigau, 2013), personality profiling (Flekova and Gurevych, 2015), semantic similarity (Severyn et al., 2013) and metaphor detection (Tsvetkov et al., 2013). An alternative path to semantic interpretation follows the distributional hypothesis (Harris, 1954). Recently, word vector representations learned with neural-network based language models have contributed to state-of-the-art results on various linguistic tasks (Bordes et al., 2011; Mikolov et al., 2013b; Pennington et al., 2014; Levy et al., 2015). 
In this work, we present a novel approach for incorporating the supersense information into the word embedding space and propose a new methodology for utilizing these to label the text with supersenses and to exploit these joint word and supersense embeddings in a range of applied text classification tasks. Our contributions in this work include the following: • We are the first to provide a joint wordand supersense-embedding model, which we make publicly available1 for the research community. This provides an insight into the word and supersense positions in the vector space 1https://github.com/UKPLab/ acl2016-supersense-embeddings 2029 through similarity queries and visualizations, and can be readily used in any word embedding application. • Using this information, we propose a supersense tagging model which achieves competitive performance on recently published social media datasets. • We demonstrate how these predicted supersenses and their embeddings can be used in a range of text classification tasks. Using a deep neural network architecture, we achieve an improvement of 2-6% in accuracy for the tasks of sentiment polarity classification, subjectivity classification and metaphor prediction. 2 Related Work 2.1 Semantically Enhanced Word Embeddings An idea of combining the distributional information with the expert knowledge is attractive and has been newly pursued in multiple directions. One of them is creating the word sense or synset embeddings (Iacobacci et al., 2015; Chen et al., 2014; Rothe and Sch¨utze, 2015; Bovi et al., 2015). While the authors demonstrate the utility of these embeddings in tasks such as WSD, knowledge base unification or semantic similarity, the contribution of such vectors to downstream document classification problems can be challenging, given the fine granularity of the WordNet senses (cf. the discussion in Navigli (2009)). As discussed above, supersenses have been shown to be better suited for carrying the relevant amount of semantic information. An alternative approach focuses on altering the objective of the learning mechanism to capture relational and similarity information from knowledge bases (Bordes et al., 2011; Bordes et al., 2012; Yu and Dredze, 2014; Bian et al., 2014; Faruqui and Dyer, 2014; Goikoetxea et al., 2015). While, in principle, supersenses could be seen as a relation between a word and its hypernym, to our knowledge they have not been explicitly employed in these works. Moreover, an important advantage of our explicit supersense embeddings compared to the retrained vectors is their direct interpretability. 2.2 Supersense Tagging Supersenses, also known as lexicographer files or semantic fields, were originally used to organize lexical-semantic resources (Fellbaum, 1990). The supersense tagging task was introduced by Ciaramita and Johnson (2003) for nouns and later expanded for verbs (Ciaramita and Altun, 2006). Their state-of-the-art system is trained and evaluated on the SemCor data (Miller et al., 1994) with an F-score of 77.18%, using a hidden Markov model. Since then, the system, resp. its reimplementation by Heilman2, was widely used in applied tasks (Agirre et al., 2011; Surdeanu et al., 2011; Laparra and Rigau, 2013). Supersense taggers have then been built also for Italian (Picca et al., 2008), Chinese (Qiu et al., 2011) and Arabic (Schneider et al., 2013). Tsvetkov et al. 
(2015) proposes the usage of SemCor supersense frequencies as a way to evaluate word embedding models, showing that a good alignment of embedding dimensions to supersenses correlates with performance of the vectors in word similarity and text classification tasks. Recently, Johannsen et al. (2014) introduced a task of multiword supersense tagging on Twitter. On their newly constructed dataset, they show poor domain adaptation performance of previous systems, achieving a maximum performance with a searchbased structured prediction model (Daum´e III et al., 2009) trained on both Twitter and SemCor data. In parallel, Schneider and Smith (2015) expanded a multiword expression (MWE) annotated corpus of online reviews with supersense information, following an alternative annotation scheme focused on MWE. Similarly to Johannsen et al. (2014), they find that SemCor may not be a sufficient resource for supersense tagging adaption to different domains. Therefore, in our work, we explore the potential of using an automatically annotated Babelfied Wikipedia corpus (Scozzafava et al., 2015) for this task. 3 Building Supersense Embeddings To learn our embeddings, we adapt the freely available sample of 500k articles of Babelfied English Wikipedia (Scozzafava et al., 2015). To our knowledge, this is one of the largest published and evaluated sense-annotated corpora, containing over 500 million words, of which over 100 million are annotated with Babel synsets, with an estimated synset annotation accuracy of 77.8%. Few other automatically sense-annotated Wikipedia corpora are available (Jordi Atserias and Attardi, 2008; Reese et 2https://github.com/kutschkem/ SmithHeilmann_fork/tree/master/ MIRATagger 2030 1 About 10.9% of families were below the poverty line, including 13.6% of those under age 18. 2 About 10.9% of N.GROUP were below the N.POSSESSION V.CHANGE 13.6% of those under N.ATTRIBUTE 18. 3 About 10.9% of FAMILIES N.GROUP were below the POVERTY LINE N.POSSESSION INCLUDING V.CHANGE 13.6% of those under AGE N.ATTRIBUTE 18. Table 1: Example of plain (1), generalized (2) and disambiguated (3) Wikipedia al., 2010). However, their annotation quality was assessed only on the training domain and as Atserias et al. state (p.2316): “Wikipedia text differs significantly ... from the corpora used to train the taggers ... Therefore the quality of these NLP processors is considerably lower than the results of the evaluation in-domain.” We map the Babel synsets to WordNet 3.0 synsets (Miller, 1995) using the BabelNet API (Navigli and Ponzetto, 2012), and map these synsets to their corresponding WordNet’s supersense categories (Miller, 1990; Fellbaum, 1990). For the nested named entities, only the largest BabelNet span is considered, hence there are no nested supersense labels in our data. In this manner we obtain an alternative Wikipedia corpus, where each word is replaced by its corresponding supersense (see Table 1, second row) and another alternative corpus where each word has its supersense appended (Table 1, third row). Using the Gensim ( ˇReh˚uˇrek and Sojka, 2010) implementation of Word2vec (Mikolov et al., 2013a), we applied the skip-gram model with negative sampling on these three Wikipedia corpora jointly (i.e., on the rows 1, 2 and 3 in Table 1) to produce continuous representations of words, supersense-disambiguated words and standalone supersenses in one vector space based on the distributional information obtained from the data. 3 The benefits of learning this information jointly are threefold: 1. 
Vectorial representations of the original words are altered (compared to training on text only), taking into account the similarity to supersenses in the vector space 3The embeddings are learned using skip-gram as training algorithm with downsampling of 0.001 higher-frequency words, negative sampling of 5 noise words, minimal word frequency of 100, window of size 2 and alpha of 0.025, using 10 epochs to produce 300-dimensional vectors. Our experiments with less dimensions and with the CBOW model performed worse. 2. Standalone supersenses are positioned in the vector space, enabling insightful similarity queries between words and supersenses, esp. for unannotated words 3. Disambiguated word+supersense vectors of annotated words can be employed similarly to sense embeddings (Iacobacci et al., 2015; Chen et al., 2014) to improve downstream tasks and serve as input for supersense disambiguation or contextual similarity systems In the following, the designation WORDS denotes the experiments with the word embeddings learned on plain Wikipedia text (as in row 1 of Table 1) while the designation SUPER denotes the experiments with the word embeddings learned jointly on the supersense-enriched Wikipedia (i.e., rows 1, 2 and 3 in Table 1 together). 4 Qualitative Analysis 4.1 Verb Supersenses Table 2 shows the most similar word vectors to each of the verb supersense vectors using cosine similarity. Note that while no explicit part-of-speech information is specified, the most similar words hold both the semantic and syntactic information most of the assigned words are verbs. VERBS BODY wearing, injured, worn, wear, wounded, bitten, soaked, healed, cuffed, dressed CHANGE changed, started, added, dramatically, expanded drastically, begun, altered, shifted, transformed COGNITION known, thought, consider, regarded, remembered attributed, considers, accepted, believed, read COMMUNICATION stated, said, argued, jokingly, called, noted, suggested, described, claimed, referred COMPETITION won, played, lost, beat, scored defeated, win, competed, winning, playing CONSUMPTION feed, fed, employed, based, hosted feeds, utilized, applied, provided, consumed CONTACT thrown, set, carried, opened, laid pulled, placed, cut, dragged, broken CREATION produced, written, created, designed, developed directed, built, published, penned, constructed EMOTION want, felt, loved, wanted, delighted disappointed, feel, like, saddened, thrilled MOTION brought, led, headed, returned, followed left, turned, sent, travelled, entered PERCEPTION seen, shown, revealed, appeared, appears shows, noticed, see, showing, presented POSSESSION received, obtained, awarded, acquired, provided donated, gained, bought, found, sold SOCIAL appointed, established, elected, joined, assisted led, succeeded, encouraged, initiated, organized STATIVE included, held, includes, featured, served, represented, referred, holds, continued, related WEATHER glow, emitted, ignited, flare, emitting smoke, fumes, sunlight, lit, darkened Table 2: Top 10 most similar word embeddings for verb supersense vectors 2031 Figure 1: Verb supersense embeddings visualized in the vector space (t-SNE) Furthermore, using a large corpus such as Wikipedia conveniently reduces the current need of lemmatization for supersense tagging, as the words are sufficiently represented in all their forms. The most frequent error originates from assigning the adverbs to their related verb categories, e.g. jokingly to COMMUNICATION and drastically to CHANGE. 
Such information, however, can be beneficial for context analysis in supersense tagging. Figure 1 displays the verb supersenses using the t-distributed Stochastic Neighbor Embedding (Van der Maaten and Hinton, 2008), a technique designed to visualize structures in high-dimensional data. While many of the distances are probable to be dataset-agnostic, such as the proximity of BODY, CONSUMPTION and EMOTION, other appear emphasized by the nature of Wikipedia corpus, e.g. the proximity of supersenses COMMUNICATION and CREATION or SOCIAL and MOTION, as can be explained by table 2 (see e.g. led and followed). Figure 2: Noun supersense embeddings (t-SNE) 4.2 Noun Supersenses Table 3 displays the most similar word embeddings for noun supersenses. In accordance with previous work on suppersense tagging (Ciaramita and Altun, 2006; Schneider et al., 2012; Johannsen et al., 2014), the assignments of more specific supersenses such as FOOD, PLANT, TIME or PERSON are in general more plausible than those for abstract concepts such as ACT, ARTIFACT or COGNITION. The same is visible in Figure 2, where these supersense embeddings are more central, with closer neighbors. In contrast to the observations by Schneider et al. (2012) and Johannsen et al. (2014), the COMMUNICATION supersense appears well defined, likely due to the character of Wikipedia. NOUNS ACT participation, activities, involvement, undertaken ongoing, conduct, efforts, large-scale, success ANIMAL peccaries, capybaras, frogs, echidnas, birds marmosets, rabits, hatchling, ciconiidae, species ARTIFACT wooden, two-floor, purpose-built, installed, wall fittings, turntable, racks, wrought-iron, ceramic, stone ATTRIBUTE height, strength, age, versatility, hardness power, fluidity, mastery, brilliance, inherent BODY abdomen, bone, femur, anterior, forearm femoral, skin, neck, muscles, thigh COGNITION ideas, concepts, empirical, philosophy, knowledge, epistemology, analysis, atomistic, principles COMMUNICATION written, excerpts, text, music, excerpted, translation, lyrics, subtitle, transcription, words EVENT sudden, death, occurred, event, catastrophic unexpected, accident, victory, final, race FEELING sadness, love, sorrow, frustration, disgust anger, affection, feelings, grief, fear FOOD cheese, butter, coffee, milk, yogurt dessert, meat, bread, vegetables, sauce GROUP members, school, phtheochroa, ypsolophidae pitcairnia, cryptanthus, group, division, schools LOCATION northern, southern, northeastern, area, south capital, town, west, region, city MOTIVE motivation, reasons, rationale, justification, motive justifications, motives, incentive, desire, why OBJECT river, valley, lake, hills, floodplain lakes, rivers, mountain, estuary, ocean PERSON greatgrandfather, son, nephew, son-in-law, father halfbrother, brother, who, mentor, fellow PHENOMENON wind, forces, self-focusing, radiation, ionizing result, intensity, gravitational, dissipation, energy PLANT fruit, fruits, magnifera, sativum, flowers caesalpinia, shrubs, trifoliate, vines, berries POSSESSION property, payment, money, payments, taxes tax, cash, fund, pay, $100 PROCESS growth, decomposition, oxidative, mechanism rapid, reaction, hydrolysis, inhibition, development QUANTITY miles, square, meters, kilometer, cubic, ton, number, megabits, volume, kilowatthours RELATION southeast, southwest, northeast, northwest, east portion, link, correlation, south, west SHAPE semicircles, right-angled, concave, parabola, ellipse, angle, circumcircle, semicircle, lines STATE chronic, condition, 
debilitating, problems, health worsening, illness, illnesses, exacerbation, disease SUBSTANCE magnesium, zinc, silica, manganese, sulfur oxide, sulphate, phosphate, salts, phosphorus TIME september, december, november, july, april january, august, february, year, days TOPS time, group, event, person, groups individuals, events, animals, individual, plant Table 3: Top 10 most similar word embeddings for noun supersense vectors 2032 4.3 Word Analogy and Word Similarity Tasks We also assess the changes between the individual word embeddings learned on plain Wikipedia text (WORDS) and jointly with the supersense-enriched Wikipedia (SUPER). With this aim we perform two standard embedding evaluation tasks: word similarity and word analogy. Mikolov et al. (2013b) introduce a word analogy dataset containing 19544 analogy questions that can be answered with word vector operations (Paris is to France as Athens are to...?). The questions are grouped into 13 categories. Table 4 presents our results. Word vectors trained in the SUPER setup achieve better results on groups related to entities, e.g. Family Relations and Citizen to State questions, where the PERSON and LOCATION supersenses can provide additional information to reduce noise. At the same time, performance on questions such as Opposites or Plurals drops, as this information is pushed to the background. Enriching our data with the recently proposed adjective supersenses (Tsvetkov et al., 2014) could be of interest for these categories. Group/Vectors: WORDS SUPER Capitals - common 91.1 94.7±0.99 Capitals - world 87.6 89.5±0.69 City in state 65.2 65.7±1.03 Nationality to state 94.5 95.2±0.58 Family relations 93.0 94.4±1.28 Opposites 56.7 54.6±3.21 Plurals 89.4 86.4±1.08 Comparatives 90.6 90.4±0.85 Superlatives 79.4 79.6±1.83 Adjective to adverb 20.2 22.2±1.53 Present to participle 64.2 64.6±1.57 Present to past 60.0 59.2±1.30 3rd person verbs 84.3 82.1±1.44 Total 75.0 76.0±0.28 Table 4: Accuracy and standard error on analogy tasks. Tasks related to noun supersense distinctions show the tendency to improve, while syntax-related information is pushed to the background. In most cases, however, the difference is not significant. Without explicitly exploiting the sense infromation, we compare the performance of our texttrained (WORDS) to our jointly trained (SUPER) word vectors on the following word similarity datasets: WordSim353-Similarity (353-S) and WordSim353-Relatedness (353-R) (Agirre et al., 2009), MEN dataset (Bruni et al., 2014), RG-65 dataset (Rubenstein and Goodenough, 1965) and MC-30 (Miller and Charles, 1991). Data: MEN 353-S 353-R RG-65 MC-30 WORDS 73.18 76.93 62.11 79.13 79.49 SUPER 74.26 78.63 61.22 79.75 80.94 Table 5: Performance of our vectors (Spearman’s ρ) on five similarity datasets. Results indicate a trend of better performance of vectors trained jointly with supersenses. The word embeddings for words trained jointly with supersenses achieve higher performance than those trained solely on the same text without supersenses on 4 out of 5 tasks (Table 5). In addition, the explicit supersense information could be further exploited, similarly to previous sense embedding works (Iacobacci et al., 2015; Rothe and Sch¨utze, 2015; Chen et al., 2014). 
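A minimal sketch of how analogy and word-similarity evaluations of this kind can be run over the trained vectors with Gensim is given below; the saved-vector file name is hypothetical and the bundled dataset paths may differ across Gensim versions, so the snippet is illustrative only and is not claimed to reproduce the exact scores in Tables 4 and 5:

# Sketch: analogy and word-similarity evaluation of the trained vectors.
# The vector file name is an assumption; questions-words.txt and
# wordsim353.tsv are files shipped with Gensim's test data.
from gensim.models import KeyedVectors
from gensim.test.utils import datapath

wv = KeyedVectors.load("super_vectors.kv")  # hypothetical saved vectors

# a single analogy query: Paris is to France as Athens is to ...?
print(wv.most_similar(positive=["france", "athens"],
                      negative=["paris"], topn=1))

# accuracy per analogy category (cf. Table 4)
overall, sections = wv.evaluate_word_analogies(datapath("questions-words.txt"))
print(overall)

# Spearman correlation on a similarity dataset (cf. Table 5)
pearson, spearman, oov_ratio = wv.evaluate_word_pairs(datapath("wordsim353.tsv"))
print(spearman)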
Furthermore, note that while we report the performance of our embeddings on the word similarity tasks for completeness, there has been a substantial discussion on seeking alternative ways to quantify embedding quality with the focus on their purpose in downstream applications (Li and Jurafsky, 2015; Faruqui et al., 2016). Therefore, in the remainder of this paper we explore the usefulness of supersense embeddings in text classification tasks. 5 Building a Supersense Tagger The task of predicting supersenses has recently regained its popularity (Johannsen et al., 2014; Schneider and Smith, 2015), since supersenses provide disambiguating information, useful for numerous downstream NLP tasks, without the need of tedious fine-grained WSD. Exploiting our joint embeddings, we build a deep neural network model to predict supersenses on the Twitter supersense corpus created by Johannsen et al. (2014), based on the Twitter NER task (Ritter et al., 2011), using the same training data as the authors. 45 The datasets follow the token-level annotation which combines the B-I-O flags (Ramshaw and Marcus, 1995) with the supersense class labels to represent the multiword expression segmentation and supersense labeling in a sentence. 5.1 Experimental Setup We implement a window-based approach with a multi-channel multi-layer perceptron model using 4https://github.com/kutschkem/ SmithHeilmann_fork/tree/master/ MIRATagger/data 5https://github.com/coastalcph/ supersense-data-twitter 2033 the Theano framework (Bastien et al., 2012). With a sliding window of size 5 for the sequence learning setup we extract for each word the following seven feature vectors: 1. 300-dimensional word embedding, 2. 41 cosine similarities of the word to each standalone supersense embedding, 3. 41 cosine similarities of the word to each of its word SUPERSENSE embeddings, 4. fixed vector of frequencies of each supersense in Wikipedia, in order to simulate the MFS backoff strategy, 5. for the given word, the frequency of each word SUPERSENSE in our Wikipedia corpus, 6. part-of-speech information as a unit vector, 7. casing information as a 3-dimensional (upper/lower/mixed) unit vector After a dropout regularization, the embedding sets are flattened, concatenated and fed into fully connected dense layers with a rectified linear unit (ReLU) activation function and a final softmax. 5.2 Supersense Prediction We evaluate our system on the same Twitter dataset with provided training and development (Tw-R-dev) set and two test sets: Tw-R-eval, reported by Johannsen et al. as RITTER, and Tw-J-eval, reported by Johannsen et al. as INHOUSE. Our results are shown in table 6 and compared to results reported in previous work by Johannsen et al. (2014), with two additional baselines: The SemCor system of Ciaramita and Altun (2006) and the most frequent sense. Our system achieves comparable performance to the best previously used supervised systems, without using any explicit gazetteers. To get an intuition.6 of how the individual feature vectors contribute to the prediction, we perform an ablation test by removing one feature group at a time. The biggest performance drop in the F-score (2.7–5.4) occurs when removing the the part of 6Intuition, since there are many additional aspects that may affect the performance. For example, we keep the network parameters fixed for the ablation, although the feature vectors are of different lengths. 
Furthermore, our model performs a concatenation of the feature vectors, hence only the ablation extended to all possible permutations would verify the feature order effect. speech information, followed by the supersense similarity features and supersense frequency priors (0.2–3.0). The casing information has only a minor contribution to Twitter supersense tagging (0–0.9). System/Data: Tw-R-dev Tw-R-eval Tw-J-eval Baseline and upper bound Most frequent sense 47.54 44.98 38.65 Inter-annotator agreement 69.15 61.15 SemCor-trained systems (Ciaramita and Altun, 2006)† 48.96 45.03 39.65 Searn (Johannsen et al., 2014) 56.59 50.89 40.50 HMM (Johannsen et al., 2014) 57.14 50.98 41.84 Ours Semcor 54.47 50.30 35.61 Twitter-trained systems Searn (Johannsen et al., 2014) 67.72 57.14 42.42 HMM (Johannsen et al., 2014) 60.66 51.40 41.60 Ours Twitter (all features) 61.12 57.16 41.97 Ours Twitter no casing 61.06 56.20 41.13 Ours Twitter no similarities 63.47 56.78 39.44 Ours Twitter no frequencies 61.10 57.32 39.02 Ours Twitter no part-of-speech 57.08 54.45 36.50 Ours Twitter no word embed. 57.57 53.43 34.91 Table 6: Weighted F-score performance on supersense prediction for the development set and two test sets provided by Johannsen et al. (2004). Our system performs comparably to state-of-the-art systems. † For the system of Ciaramita et al, the publicly avaliable reimplementation of Heilman was used 6 Using Supersense Embeddings in Document Classification Tasks Word sense disambiguation is to some extent an artificial stand-alone task. Despite its popularity, its contribution to downstream document classification tasks remains rather limited, which might be attributed to the complexity of document preprocessing and the errors cumulated along the pipeline. In this section, we demonstrate an alternative, deep learning approach, in which we process the original text in parallel to the supersense information. The model can then flexibly learn the usefulness of provided input. We demonstrate that the model extended with supersense embeddings outperforms the same model using only word-based features on a range of classification tasks. 6.1 Experimental Setup Both Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) are state-of-the-art semantic composition models for a variety of text classification tasks (Kim, 2014; Li et al., 2015; Johnson and Zhang, 2014). Recently, their combinations have been proposed, achieving an unprecedented performance (Sainath et al., 2015). We extend the CNN-LSTM approach from the publicly available 2034 Cat  sat  on  a  mat   Noun.Animal  Verb.Contact    on  a  Noun.Ar4fact   LSTM   LSTM   LSTM   LSTM   Figure 3: Network architecture. Each of the four different embedding channels serves as input to its CNN layer, followed by an LSTM layer. Afterwards, the outputs are concatenated and fed into a dense layer. Keras demo7, into which we incorporate the supersense information. Figure 3 displays our network architecture. First, we use three channels of word embeddings on the plain textual input. The first channel are the 300-dimensional word embeddings obtained from our enriched Wikipedia corpus. The second embedding channel consists of 41-dimensional vectors capturing the cosine similarity of the word to each supersense embedding. The third channel contains the vector of relative frequencies of the word occurring in the enriched Wikipedia together with its supersense, i.e. 
providing the background supersense distribution for the word. Each of the document embeddings is then convoluted with the filter size of 3, followed by a pooling layer of length 2 and fed into a longshort-term-memory (LSTM) layer. In parallel, we feed as input a processed document text, where the words are replaced by their predicted supersenses. Given that we have the Wikipedia-based supersense embeddings in the same vector space as the word embeddings, we can now proceed to creating the 300-dimensional embedding channel also for the supersense text. As in the plain text channels, we feed also these embeddings into the 7https://github.com/fchollet/keras/ blob/master/examples/imdb_cnn_lstm.py convolutional and LSTM layers in a similar fashion. Afterwards, we concatenate all LSTM outputs and feed them into a standard fully connected neural network layer, followed by the sigmoid for the binary output. The following subsections discuss our results on a range of classification tasks: subjectivity prediction, sentiment polarity classification and metaphor detection. 6.2 Sentiment Polarity Classification Sentiment classification has been a widely explored task which received a lot of attention. The Movie Review dataset, published by Pang and Lee (2005)8, has become a standard machine learning benchmark task for binary sentence classification. Socher et al. (2011) address this task with recursive autoencoders and Wikipedia word embeddings, later improving their score using recursive neural network with parse trees (Socher et al., 2012). Competitive results were achieved also by a sentimentanalysis-specific parser (Dong et al., 2015), with a fast dropout logistic regression (Wang and Manning, 2013), and with convolutional neural networks (Kim, 2014). Table 7 compares these approaches to our results for a 10-fold crossvalidation with 10% of the data withheld for parameter tuning. The line WORDS displays the performance using only the leftmost part of our architecture, i.e. only the text input with our word embeddings. The line SUPER shows the result of using the full supersense architecture. As it can be seen from the table, the supersense features improve the accuracy by about 2%. Both systems are significantly different (p < 0.01), using the McNemar’s test. System Accuracy Socher et al. (2011) 77.7 Socher et al. (2012) 79.0 Wang and Manning (2013) 79.1 Dong et al. (2015) 79.5 Kim (2014) 81.5 WORDS 79.4 SUPER 81.7±0.37 Table 7: 10-fold cross-validation accuracy and standard error of our system and as reported in previous work for the sentiment classification task on Pang and Lee (2005) movie review data A detailed analysis of the supersense-tagged data and the classification output revealed that supersenses help to generalize over rare terms. Noun 8http://www.cs.uic.edu/liub/FBS/ sentiment-analysis.html 2035 Positive reviews Text Supersenses beating the austin powers film at their own game , verbstative the nounlocation nouncognition nounartifact at their own nouncommunication , this blaxploitation spoof downplays the raunch in favor this nounact nouncommunication verbstative the nouncognition in nouncommunication of gags that rely on the strength of their own cleverness of that verbcognition on the nouncognition of their own nouncognition as oppose to the extent of their outrageousness . as verbcommunication to the nounevent of their nounattribute . 
there is problem with this film that there verbstative nouncognition with this nouncommunication that even 3 oscar winner ca n’t overcome , even 3 nounevent nounperson ca n’t verbemotion , but it ’s a nice girl-buddy movie but it verbstative a nice girl-buddy nouncommunication once it get rock-n-rolling . once it verbstative rock-n-rolling godard ’s ode to tackle life ’s wonderment is a nounperson nouncommunication to verbstative nouncognition ’s nouncognition verbstative rambling and incoherent manifesto about the vagueness of topical a rambling and incoherent nouncommunication about the nounattribute of topical excess . in praise of love remain a ponderous and pretentious excess . in nouncognition of nouncognition verbstative a ponderous and pretentious endeavor that ’s unfocused and tediously exasperating . nounact that verbstative unfocused and tediously exasperating Negative reviews Text Supersenses the action scene has all the suspense of a 20-car pileup , the nounact nounlocation verbstative all the nouncognition of a 20-car nouncognition , while the plot hole is big enough for a train car to drive while the nounlocation verbstative big enough for a nounartifact nounartifact to verbmotion through – if kaos have n’t blow them all up . through – if nounperson have n’t verbcommunication them all up . the scriptwriter is no less a menace to society the nounperson verbstative no less nounstate to noungroup than the film ’s character . than the nouncommunication nounperson . a very slow , uneventful ride a very slow , uneventful nounact around a pretty tattered old carousel . around a pretty tattered old nounartifact . the milieu is wholly unconvincing . . . the nouncognition verbstative wholly unconvincing and the histrionics reach a truly annoying pitch . and the nouncommunication verbstative a truly annoying nounattribute . Table 8: Example of documents classified incorrectly with word embeddings and correctly with word and supersense embeddings on Pang and Lee (2005) movie review data. concepts such as GROUP, LOCATION, TIME and PERSON appear somewhat more frequently in positive reviews while certain verb supersenses such as PERCEPTION, SOCIAL and COMMUNICATION are more frequent in the negative ones. On the other hand, the supersense tagging introduces additional errors too - for example the director’s cut is persistently classified into FOOD. Table 8 shows an example of positive and negative reviews which were consistently (5x in repeated experiments with different random seeds) classified incorrectly with word embeddings and classified correctly with supersense embeddings. Often the wit of unusual expressions is lost for the benefit of generalization. Some improvements appear to be a result of replacing proper names by NOUN.PERSON. 6.3 Subjectivity Classification Pang and Lee (2004) demonstrate that the subjectivity detection can be a useful input for a sentiment classifier. They compose a publicly available dataset9 of 5000 subjective and 5000 objective sentences, classifying them with a reported accuracy of 90-92% and further show that predicting this information improves the end-level sentiment classification on a movie review dataset. Kim (2014) and Wang and Manning (2013) further improve the performance through different machine learning methods. 
Supersenses are a natural candidate for subjectivity prediction, as we 9https://www.cs.cornell.edu/people/ pabo/movie-review-data/ hypothesize that the nouns and verbs in the subjective and objective sentences often come from different semantic classes (e.g. VERB.FEELING vs. VERB.COGNITION). We employ the same architecture as in previous task, automatically annotating the words in the documents with their supersenses. Our results are reported in Table 9. The supersenses (SUPER) provide an additional information, improving the model performance by up to 2% over word embeddings (WORDS). The difference between both systems is significant. Based on a manual error analysis, the supersense information contributes here in a similar manner as in the previous case. Subjective sentences contain more verbs of supersense PERCEPTION, while objective ones more frequently feature the supersenses POSSESSION and SOCIAL. Nouns in the subjective category are characterized by supersenses COMMUNICATION and ATTRIBUTE, while in objective ones the PERSON and POSSESSION are more frequent. System Accuracy SVM (Pang and Lee, 2004) 90.0 NB (Pang and Lee, 2004) 92.0 CNN (Kim, 2014) 93.4 F-Dropout (Wang and Manning, 2013) 93.6 MV-CNN (Zhang et al., 2016) 93.9 WORDS 92.1 SUPER 93.9±0.26 Table 9: 10-fold cross-validation accuracy and standard error of our system and as reported in previous work for binary classification on the subjectivity dataset of Pang and Lee (2004) 2036 6.4 Metaphor Identification Supersenses have recently been shown to provide improvements in metaphor prediction tasks (Gershman et al., 2014), as they hold the information of coarse semantic concepts. Turney et al. (2011) explore the task of discriminating literal and metaphoric adjective-noun expressions. They report an accuracy of 79% on a small dataset rated by five annotators. Tsvetkov et al. (2013) pursue this work further by constructing and publishing a dataset of 985 literal and 985 methaphorical adjective-noun pairs10 and classify them. Gershman et al. (2014) further expand on this work using 64-dimensional vector-space word representations constructed by Faruqui and Dyer (2014) for classification. They report a state-of-the-art F-score of 85% with random decision forests, including also abstractness and imageability features (Wilson, 1988) and supersenses from WordNet, averaged across senses. System F1-score on test set (Gershman et al., 2014) 85 WORDS 81.91±2.81 SUPER 87.23±2.36 Table 10: F1-score and a standard error on a provided test set for the adjective-noun metaphor prediction task Gershman et al. (2014). WORDS: word embeddings only, SUPER: multi-channel word embeddings with the supersense similarity and frequency vectors added Since this setup is simpler than the sentence classification tasks, we use only a subset of our architecture, specifically the left half of Figure 3, i.e. our word embeddings, similarity vectors and supersense frequency vectors. Since there are only two words in each document, we leave out the LSTM layer. We merge the similarity and frequency layers by multiplication and concatenate the result to the word embedding convolution, feeding the output of the concatenation directly to the dense layer. Table 10 shows our results on a provided test set. Based on McNemar’s test, there is a significant difference (p < 0.01) between our system based on words only and the one with supersenses. 
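The significance statements in this and the preceding sections rely on McNemar's test over paired per-example correctness of the WORDS and SUPER systems; a small self-contained sketch of that comparison is shown below (the continuity-corrected chi-square variant and the toy correctness vectors are assumptions, since the exact variant used is not stated):

# Sketch: McNemar's test on paired per-example correctness of two
# classifiers. The continuity-corrected chi-square form is an assumption.
from scipy.stats import chi2

def mcnemar(correct_a, correct_b):
    # b: A right / B wrong, c: A wrong / B right (discordant pairs)
    b = sum(1 for x, y in zip(correct_a, correct_b) if x and not y)
    c = sum(1 for x, y in zip(correct_a, correct_b) if y and not x)
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)  # test statistic and p-value

# toy example with hypothetical per-example correctness vectors
words_correct = [1, 0, 1, 1, 0, 0, 1, 1, 1, 1] * 100
super_correct = [1, 1, 1, 1, 0, 1, 1, 0, 1, 1] * 100
stat, p = mcnemar(words_correct, super_correct)
print("chi2 = %.2f, p = %.4g" % (stat, p))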
7 Discussion Unlike previous research on supersenses, our work is not based on a manually produced gold stan10http://www.cs.cmu.edu/˜ytsvetko/ metaphor/datasets.zip dard, but on an automatically annotated large corpus. While Scozzafava et al. (2015) report a high accuracy estimate of 77.8% on sense level, the performance and possible bias on tagged supersenses are yet to be evaluated. We are also aware that some of the previously proposed approaches for building word sense embeddings (Rothe and Sch¨utze, 2015; Chen et al., 2014; Iacobacci et al., 2015) could be eventually extended to supersenses. We strongly encourage the authors to do so and perform a contrastive evaluation comparing these methods. Additionaly, a different level of granularity of the concepts, such as WordNet Domains (Magnini and Cavaglia, 2000) could be explored. 8 Conclusions and Future Work We have presented a novel joint embedding set of words and supersenses, which provides a new insight into the word and supersense positions in the vector space. We demonstrated the utility of these embeddings for predicting supersenses and manifested that the supersense enrichment can lead to a significant improvement in a range of downstream classification tasks, using our embeddings in a neural network model. The outcomes of this work are available to the research community.11. In follow-up work, we aim to apply our embedding method on smaller, yet gold-standard corpora such as SemCor (Miller et al., 1994) and STREUSLE (Schneider and Smith, 2015) to examine the impact of the corpus choice in detail and extend the training data beyond WordNet vocabulary. Moreover, the coarse semantic categorization contained in supersenses was shown to be preserved in translation (Schneider et al., 2013), making them a perfect candidate for a multilingual adaptation of the vector space, e.g. extending Faruqui and Dyer (2014). Acknowledgments This work has been supported by the Volkswagen Foundation as part of the Lichtenberg Professorship Program under grant No. I/82806 and by the German Research Foundation under grant No. GU 798/14-1. Additional support was provided by the German Federal Ministry of Education and Research (BMBF) as a part of the Software Campus program under the reference 01-S12054 and by the German Institute for Educational Research (DIPF). We thank the anonymous reviewers for their input. 11https://github.com/UKPLab/ acl2016-supersense-embeddings 2037 References Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pas¸ca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and WordNet-based approaches. In Proceedings of the 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies, pages 19–27. Association for Computational Linguistics. Eneko Agirre, Kepa Bengoetxea, Koldo Gojenola, and Joakim Nivre. 2011. Improving dependency parsing with semantic classes. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: short papers-Volume 2, pages 699–703. Association for Computational Linguistics. Cem Akkaya, Janyce Wiebe, Alexander Conrad, and Rada Mihalcea. 2011. Improving the impact of subjectivity word sense disambiguation on contextual opinion analysis. In Proceedings of the 15th Conference on Computational Natural Language Learning, pages 87–96. Association for Computational Linguistics. Fr´ed´eric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. 
Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. 2012. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop. Jiang Bian, Bin Gao, and Tie-Yan Liu. 2014. Knowledge-powered deep learning for word embedding. In Machine Learning and Knowledge Discovery in Databases, pages 132–148. Springer. Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio. 2011. Learning structured embeddings of knowledge bases. In Conference on Artificial Intelligence. Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. 2012. Joint learning of words and meaning representations for open-text semantic parsing. In International Conference on Artificial Intelligence and Statistics, pages 127–135. Claudio Delli Bovi, Luis Espinosa Anke, and Roberto Navigli. 2015. Knowledge base unification via sense embeddings and disambiguation. In Proceedings of the 2015 Conference on Empirical Methods on Natural Language Processing, pages 726–736. Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Artificial Intelligence Research (JAIR), 49(1-47). Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense representation and disambiguation. In Proceedings of the 2014 Conference on Empirical Methods on Natural Language Processing, pages 1025–1035. Massimiliano Ciaramita and Yasemin Altun. 2006. Broad-coverage sense disambiguation and information extraction with a supersense sequence tagger. In Proceedings of the 2006 Conference on Empirical Methods on Natural Language Processing, pages 594–602. Association for Computational Linguistics. Massimiliano Ciaramita and Mark Johnson. 2003. Supersense tagging of unknown nouns in WordNet. In Proceedings of the 2003 Conference on Empirical Methods on Natural Language Processing, pages 168–175. Association for Computational Linguistics. Hal Daum´e III, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine learning, 75(3):297–325. Li Dong, Furu Wei, Shujie Liu, Ming Zhou, and Ke Xu. 2015. A statistical parsing framework for sentiment classification. Computational Linguistics. Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 462–471, Gothenburg, Sweden, April. Association for Computational Linguistics. Manaal Faruqui, Yulia Tsvetkov, Pushpendre Rastogi, and Chris Dyer. 2016. Problems with evaluation of word embeddings using word similarity tasks. arXiv preprint, arXiv:1605.02276. Christiane Fellbaum. 1990. English verbs as a semantic net. International Journal of Lexicography, 3(4):278–301. Christiane Fellbaum. 1998. WordNet. Wiley Online Library. Lucie Flekova and Iryna Gurevych. 2015. Personality profiling of fictional characters using sense-level links between lexical resources. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1805–1816. Anatole Gershman, Yulia Tsvetkov, Leonid Boytsov, Eric Nyberg, and Chris Dyer. 2014. Metaphor detection with cross-lingual model transfer. In Proceedings of the 52nd annual meeting on Association for Computational Linguistics. Josu Goikoetxea, Aitor Soroa, Eneko Agirre, and Basque Country Donostia. 2015. Random walks and neural network language models on knowledge bases. 
In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies, pages 1434–1439. Zellig S Harris. 1954. Distributional structure. Word, 10(2-3):146–162. 2038 Michael Heilman. 2011. Automatic factual question generation from text. Ph.D. thesis, Carnegie Mellon University. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. Sensembed: learning sense embeddings for word and relational similarity. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pages 95– 105. Rub´en Izquierdo, Armando Su´arez, and German Rigau. 2009. An empirical study on class-based word sense disambiguation. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pages 389–397. Association for Computational Linguistics. Anders Johannsen, Dirk Hovy, H´ector Martınez Alonso, Barbara Plank, and Anders Søgaard. 2014. More or less supervised supersense tagging of Twitter. Proceedings of the 3rd Joint Conference on Lexical and Computational Semantics, pages 1–11. Rie Johnson and Tong Zhang. 2014. Effective use of word order for text categorization with convolutional neural networks. arXiv preprint arXiv:1412.1058. Massimiliano Ciaramita Jordi Atserias, Hugo Zaragoza and Giuseppe Attardi. 2008. Semantically annotated snapshot of the english wikipedia. In Bente Maegaard Joseph Mariani Jan Odijk Stelios Piperidis Daniel Tapias Nicoletta Calzolari (Conference Chair), Khalid Choukri, editor, Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC’08), Marrakech, Morocco, may. European Language Resources Association (ELRA). Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746– 1751, Doha, Qatar, October. Association for Computational Linguistics. Egoitz Laparra and German Rigau. 2013. Impar: A deterministic algorithm for implicit semantic role labelling. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1180–1189. Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211–225. Jiwei Li and Dan Jurafsky. 2015. Do multi-sense embeddings improve natural language understanding? In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1722–1732, Lisbon, Portugal, September. Association for Computational Linguistics. Jiwei Li, Thang Luong, Dan Jurafsky, and Eduard Hovy. 2015. When are tree structures necessary for deep learning of representations? In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2304–2314, Lisbon, Portugal, September. Association for Computational Linguistics. Bernardo Magnini and Gabriela Cavaglia. 2000. Integrating subject field codes into WordNet. In Proceedings of the Third International Conference on Language Resources and Evaluation. M´onica Marrero, Sonia S´anchez-Cuadrado, Jorge Morato Lara, and George Andreadakis. 2009. Evaluation of named entity extraction systems. Advances in Computational Linguistics, Research in Computing Science, 41:47–58. 
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013a. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In North American Chapter of the Association for Computational Linguistics - Human Language Technologies, pages 746–751. George A. Miller and Walter G. Charles. 1991. Contextual correlates of semantic similarity. Language and Cognitive Processes, 6(1):1–28. George A Miller, Martin Chodorow, Shari Landes, Claudia Leacock, and Robert G Thomas. 1994. Using a semantic concordance for sense identification. In Proceedings of the workshop on Human Language Technology, pages 240–243. Association for Computational Linguistics. George A Miller. 1990. Nouns in WordNet: a lexical inheritance system. International Journal of Lexicography, 3(4):245–264. George A Miller. 1995. WordNet: a lexical database for english. Communications of the ACM, 38(11):39–41. Roberto Navigli and Simone Paolo Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence, 193:217– 250. Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM Computing Surveys (CSUR), 41(2):10. Alok Ranjan Pal and Diganta Saha. 2015. Word sense disambiguation: a survey. arXiv preprint arXiv:1508.01346. Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings 2039 of the 52nd Annual Meeting of the Association for Computational Linguistics, page 271. Association for Computational Linguistics. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 115–124. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In Conference on Empirical Methods on Natural Language Processing, volume 14, pages 1532–1543. Davide Picca, Alfio Massimiliano Gliozzo, and Massimiliano Ciaramita. 2008. Supersense tagger for italian. In Proceedings of the Sixth International Conference on Language Resources and Evaluation. Citeseer. Barbara Plank, Anders Johannsen, and Anders Søgaard. 2014. Importance weighting and unsupervised domain adaptation of pos taggers: a negative result. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 968–973, Doha, Qatar, October. Association for Computational Linguistics. Likun Qiu, Yunfang Wu, and Yanqiu Shao. 2011. Combining contextual and structural information for supersense tagging of Chinese unknown words. In Computational Linguistics and Intelligent Text Processing, pages 15–28. Springer. Lance Ramshaw and Mitch Marcus. 1995. Text chunking using transformation-based learning. In Third Workshop on Very Large Corpora, pages 82–94. Samuel Reese, Gemma Boleda Torrent, Montserrat Cuadros Oller, Llu´ıs Padr´o, and German Rigau Claramunt. 2010. Word-sense disambiguated multilingual Wikipedia corpus. In Proceedings of the 7th International Conference on Language Resources and Evaluation. Radim ˇReh˚uˇrek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. 
In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45–50, Valletta, Malta, May. ELRA. Vassiliki Rentoumi, George Giannakopoulos, Vangelis Karkaletsis, and George A Vouros. 2009. Sentiment analysis of figurative language using a word sense disambiguation approach. In Proceedings of the Conference on Recent Advances in Natural Language Processing, pages 370–375. Alan Ritter, Sam Clark, Oren Etzioni, et al. 2011. Named entity recognition in tweets: an experimental study. In Proceedings of the 2011 Conference on Empirical Methods on Natural Language Processing, pages 1524–1534. Association for Computational Linguistics. Sascha Rothe and Hinrich Sch¨utze. 2015. Autoextend: Extending word embeddings to embeddings for synsets and lexemes. In Proceedings of the 53rd Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1793–1803, Beijing, China, July. Association for Computational Linguistics. Herbert Rubenstein and John B. Goodenough. 1965. Contextual correlates of synonymy. Commun. ACM, 8(10):627–633, October. Stefan R¨ud, Massimiliano Ciaramita, Jens M¨uller, and Hinrich Sch¨utze. 2011. Piggyback: Using search engines for robust cross-domain named entity recognition. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 965– 975. Association for Computational Linguistics. Tara N Sainath, Oriol Vinyals, Andrew Senior, and Hasim Sak. 2015. Convolutional, long shortterm memory, fully connected deep neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pages 4580–4584. IEEE. Nathan Schneider and Noah A Smith. 2015. A corpus and model integrating multiword expressions and supersenses. Nathan Schneider, Behrang Mohit, Kemal Oflazer, and Noah A Smith. 2012. Coarse lexical semantic annotation with supersenses: an arabic case study. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short PapersVolume 2, pages 253–258. Association for Computational Linguistics. Nathan Schneider, Behrang Mohit, Chris Dyer, Kemal Olflazer, and Noah A Smith. 2013. Supersense tagging for arabic: the mt-in-the-middle attack. Association for Computational Linguistics. Federico Scozzafava, Alessandro Raganato, Andrea Moro, and Roberto Navigli. 2015. Automatic identification and disambiguation of concepts and named entities in the multilingual wikipedia. In AI* IA 2015 Advances in Artificial Intelligence, pages 357– 366. Springer. Aliaksei Severyn, Massimo Nicosia, and Alessandro Moschitti. 2013. Learning semantic textual similarity with structural representations. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 714–718. Richard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the 2011 Conference on Empirical Methods on Natural Language Processing, pages 151–161. Association for Computational Linguistics. 2040 Richard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1201– 1211. Association for Computational Linguistics. 
Chiraag Sumanth and Diana Inkpen. 2015. How much does word sense disambiguation help in sentiment analysis of micropost data? In 6TH Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis (WASSA 2015), page 115. Mihai Surdeanu, Massimiliano Ciaramita, and Hugo Zaragoza. 2011. Learning to rank answers to nonfactoid questions from web collections. Computational Linguistics, 37(2):351–383. Yulia Tsvetkov, Anatole Gershman, and Elena Mukomel. 2013. Cross-lingual metaphor detection using common semantic features. In The First Workshop on Metaphor in NLP, page 45. Yulia Tsvetkov, Nathan Schneider, Dirk Hovy, Archna Bhatia, Manaal Faruqui, and Chris Dyer. 2014. Augmenting english adjective senses with supersenses. Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Guillaume Lample, and Chris Dyer. 2015. Evaluation of word vector representations by subspace alignment. In Proceedings of the 2015 Conference on Empirical Methods on Natural Language Processing. Association for Computational Linguistics. Peter D Turney, Yair Neuman, Dan Assaf, and Yohai Cohen. 2011. Literal and metaphorical sense identification through concrete and abstract context. In Proceedings of the 2011 Conference on the Empirical Methods in Natural Language Processing, pages 680–690. Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(2579-2605):85. Sida Wang and Christopher Manning. 2013. Fast dropout training. In Proceedings of the 30th International Conference on Machine Learning, pages 118– 126. Michael Wilson. 1988. MRC psycholinguistic database: Machine-usable dictionary, version 2.00. Behavior Research Methods, Instruments, & Computers, 20(1):6–10. Mo Yu and Mark Dredze. 2014. Improving lexical embeddings with semantic knowledge. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, pages 545–550. Ye Zhang, Stephen Roller, and Byron Wallace. 2016. MGNC-CNN: A simple approach to exploiting multiple word embeddings for sentence classification. arXiv preprint arXiv:1603.00968. 2041
2016
191
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2042–2051, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Efficient techniques for parsing with tree automata Jonas Groschwitz and Alexander Koller Department of Linguistics University of Potsdam Germany groschwi|[email protected] Mark Johnson Department of Computing Macquarie University Australia [email protected] Abstract Parsing for a wide variety of grammar formalisms can be performed by intersecting finite tree automata. However, naive implementations of parsing by intersection are very inefficient. We present techniques that speed up tree-automata-based parsing, to the point that it becomes practically feasible on realistic data when applied to context-free, TAG, and graph parsing. For graph parsing, we obtain the best runtimes in the literature. 1 Introduction Grammar formalisms that go beyond context-free grammars have recently enjoyed renewed attention throughout computational linguistics. Classical grammar formalisms such as TAG (Joshi and Schabes, 1997) and CCG (Steedman, 2001) have been equipped with expressive statistical models, and high-performance parsers have become available (Clark and Curran, 2007; Lewis and Steedman, 2014; Kallmeyer and Maier, 2013). Synchronous grammar formalisms such as synchronous context-free grammars (Chiang, 2007) and tree-to-string transducers (Galley et al., 2004; Graehl et al., 2008; Seemann et al., 2015) are being used as models that incorporate syntactic information in statistical machine translation. Synchronous string-to-tree (Wong and Mooney, 2006) and string-to-graph grammars (Chiang et al., 2013) have been applied to semantic parsing; and so forth. Each of these grammar formalisms requires its users to develop new algorithms for parsing and training. This comes with challenges that are both practical and theoretical. From a theoretical perspective, many of these algorithms are basically the same, in that they rest upon a CKY-style parsing algorithm which recursively explores substructures of the input object and assigns them nonterminal symbols, but their exact relationship is rarely made explicit. On the practical side, this parsing algorithm and its extensions (e.g. to EM training) have to be implemented and optimized from scratch for each new grammar formalism. Thus, development time is spent on reinventing wheels that are slightly different from previous ones, and the resulting implementations still tend to underperform. Koller and Kuhlmann (2011) introduced Interpreted Regular Tree Grammars (IRTGs) in order to address this situation. An IRTG represents a language by describing a regular language of derivation trees, each of which is mapped to a term over some algebra and evaluated there. Grammars from a wide range of monolingual and synchronous formalisms can be mapped into IRTGs by using different algebras: Context-free and treeadjoining grammars use string algebras of different kinds, graph grammars can be captured by using graph algebras, and so on. In addition, IRTGs come with a universal parsing algorithm based on closure results for tree automata. Implementing and optimizing this parsing algorithm once, one could apply it to all grammar formalisms that can be mapped to IRTG. However, while Koller and Kuhlmann show that asymptotically optimal parsing is possible in theory, it is non-trivial to implement their algorithm optimally. 
In this paper, we introduce practical algorithms for the two key operations underlying IRTG parsing: computing the intersection of two tree automata and applying an inverse tree homomorphism to a tree automaton. After defining IRTGs (Section 2), we will first illustrate that a naive bottom-up implementation of the intersection algorithm yields asymptotic parsing complexities that are too high (Section 3). We will then 2042 show how the parsing complexity can be improved by combining algebra-specific index data structures with a generic parsing algorithm (Section 4), and by replacing bottom-up with top-down queries (Section 5). In contrast to the naive algorithm, both of these methods achieve the expected asymptotic complexities, e.g. O(n3) for context-free parsing, O(n6) for TAG parsing, etc. Furthermore, an evaluation with realistic grammars shows that our algorithms improve practical parsing times with IRTG grammars encoding context-free grammars, tree-adjoining grammars, and graph grammars by orders of magnitude (Section 6). Thus our algorithms make IRTG parsing practically feasible for the first time; for graph parsing, we obtain the fastest reported runtimes. 2 Interpreted Regular Tree Grammars We will first define IRTGs and explain how the universal parsing algorithm for IRTGs works. 2.1 Formal foundations First, we introduce some fundamental theoretical concepts and notation. A signature Σ is a finite set of symbols r, f, . . ., each of which has an arity ar(r) ≥0. A tree t over the signature Σ is a term of the form r(t1, . . . , tn), where the ti are trees and r ∈Σ has arity n. We identify the nodes of t by their Gorn addresses, i.e. paths π ∈N∗from the root to the node, and write t(π) for the label of π. We write TΣ for the set of all trees over Σ, and TΣ(Xk) for the trees in which each node either has a label from Σ, or is a leaf labeled with one of the variables {x1, . . . , xk}. A (linear, nondeleting) tree homomorphism h from a signature Σ to a signature ∆is a mapping h : TΣ →T∆. It is defined by specifying, for each symbol r ∈Σ of arity k, a term h(r) ∈T∆(Xk) in which each variable occurs exactly once. This symbol-wise mapping is lifted to entire trees by letting h(r(t1, . . . , tk)) = h(r)[h(t1), . . . , h(tk)], i.e. by replacing the variable xi in h(r) by the recursively computed value h(ti). Let ∆be a signature. A ∆-algebra A consists of a nonempty set A, called the domain, and for each symbol f ∈∆with arity k, a function fA : Ak →A, the operation associated with f. We can evaluate any term t ∈T∆to a value tA ∈A, by evaluating the operation symbols bottom-up. In this paper, we will be particularly interested in the string algebra E∗over the finite automaton rule homomorphism S →r1(NP, VP) ∗(x1, x2) NP →r2 John VP →r3 walks VP →r4(VP, NP) ∗(x1, ∗(on, x2)) NP →r5 Mars Figure 1: An example IRTG. alphabet E. Its domain is the set of all strings over E. For each symbol a ∈E, it has a nullary operation symbol a with aE∗= a. It also has a single binary operation symbol ∗, such that ∗E∗(w1, w2) is the concatenation of the strings w1 and w2. Thus the term ∗(John, ∗(walks, ∗(on, Mars))) in Fig. 2b evaluates to the string “John walks on Mars”. A finite tree automaton M over the signature Σ is a structure M = (Σ, Q, R, XF ), where Q is a finite set of states and XF ∈Q is a final state. R is a finite set of transition rules of the form X →r(X1, . . . , Xk), where the terminal symbol r ∈Σ is of arity k and X, X1, . . . , Xk ∈Q. 
A tree automaton can run non-deterministically on a tree t ∈TΣ by assigning states to the nodes of t bottom-up. If we have t = r(t1, . . . , tn) and M can assign the state Xi to each ti, written Xi →∗ti, then we also have X →∗t. We say that M accepts t if XF →∗t, and define the language L(M) ⊆TΣ of M as the (possibly infinite) set of all trees that M accepts. An example of a tree automaton (with states S, NP, etc.) is shown in the “automaton rule” column of Fig. 1. It accepts, among others, the tree τ1 in Fig. 2a. Tree automata can be defined top-down or bottom-up, and are equivalent to regular tree grammars. The languages that can be accepted by finite tree automata are called the regular tree languages. See e.g. Comon et al. (2008) for details. 2.2 Interpreted regular tree grammars We can combine tree automata, homomorphisms, and algebras into grammars that can describe languages of arbitrary objects, as well as relations between such objects – in a way that inherits many technical properties from context-free grammars, while extending the expressive capacity. An interpreted regular tree grammar (IRTG, Koller and Kuhlmann (2011)) G = (M, (h1, A1), . . . , (hn, An)) consists of a tree automaton M over some signature Σ, together with an arbitrary number n of inter2043 r1 r2 r4 r3 r5 (a) Tree τ1. h−→ ∗ John ∗ walks ∗ on Mars (b) Term h (τ1). evaluate −−−−→ “John walks on Mars” (c) h (τ1) evaluated in E∗. Figure 2: The tree τ1, evaluated by the homomorphism h and the algebra E∗ pretations (hi, Ai), where each Ai is an algebra over some signature ∆i and each hi is a tree homomorphism from Σ to ∆i. The automaton M describes a language L(M) of derivation trees which represent abstract syntactic structures. Each derivation tree τ is then interpreted n ways: we map it to a term hi(τ) ∈T∆i, and then we evaluate hi(τ) to a value ai = hi(τ)Ai ∈Ai of the algebra Ai. Thus, the IRTG G defines a language L(G) = {(h1(τ)A1, . . . , hn(τ)An) | τ ∈L(M)}, which is an n-place relation between the domains of the algebras. Consider the IRTG G shown in Fig. 1. The “automaton rule” column indicates the five rules of M; the final state is S. We already saw the derivation tree τ1 ∈L(M). G has a single interpretation, into a string algebra E∗, and with a homomorphism that is specified by the “homomorphism” column; for instance, h(r1) = ∗(x1, x2) and h(r2) = John. Applying this homomorphism to τ1, we obtain the term h(τ1) in Fig. 2b. As we saw earlier, this term evaluates in the string algebra to the string “John walks on Mars” (Fig. 2c). Thus this string is an element of L(G). We assume that no two rules of M use the same terminal symbol; this is generally not required in tree automata, but every IRTG can be brought into this convenient form. Furthermore, we focus (but only for simplicity of presentation) on IRTGs that use a single string-algebra interpretation, as in Fig. 1. Such grammars capture context-free grammars. However, IRTGs can capture a wide variety of grammar formalisms by using different algebras. For instance, an interpretation that uses a TAG string algebra (or TAG derived-tree algebra) models a tree-adjoining grammar (Koller and Kuhlmann, 2012), and an interpretation into an s-graph algebra models a hyperedge replacement graph grammar (HRG, Groschwitz et al. (2015)). By using multiple algebras, IRTGs can also represent synchronous grammars and (bottom-up) treeto-tree and tree-to-string transducers. 
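To make the interpretation pipeline of Figs. 1 and 2 concrete, the following minimal Python sketch applies the homomorphism h of Fig. 1 to the derivation tree τ1 and evaluates the resulting term in the string algebra; it is an illustration of the definitions only, not the authors' IRTG implementation, and the tuple encoding of trees is our own convention:

# Sketch of the interpretation pipeline of Figs. 1-2: apply the tree
# homomorphism h to a derivation tree and evaluate the resulting term
# in the string algebra E*. Trees/terms are tuples (label, child, ...);
# constants are 1-tuples and variables are the strings "x1", "x2".
H = {"r1": ("*", "x1", "x2"),
     "r2": ("John",),
     "r3": ("walks",),
     "r4": ("*", "x1", ("*", ("on",), "x2")),
     "r5": ("Mars",)}

def apply_hom(tree):
    label, children = tree[0], [apply_hom(c) for c in tree[1:]]
    def subst(term):
        if isinstance(term, str):            # a variable x_i
            return children[int(term[1:]) - 1]
        return (term[0],) + tuple(subst(t) for t in term[1:])
    return subst(H[label])

def eval_string_algebra(term):
    if term[0] == "*":                       # binary concatenation
        return (eval_string_algebra(term[1]) + " "
                + eval_string_algebra(term[2]))
    return term[0]                           # nullary constant a evaluates to a

tau1 = ("r1", ("r2",), ("r4", ("r3",), ("r5",)))
print(eval_string_algebra(apply_hom(tau1)))  # John walks on Mars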
In general, any grammar formalism whose grammars describe derivations in terms of a finite set of states can typically be converted into IRTG. 2.3 Parsing IRTGs Koller and Kuhlmann (2011) present a uniform parsing algorithm for IRTGs based on tree automata. The (monolingual) parsing problem of IRTG consists in determining, for an IRTG G and an input object a ∈A, a representation of the set parses(a) = {τ ∈L(M) | h(τ)A = a}, i.e. of the derivation trees that are grammatically correct and are mapped to a by the interpretation. In the example, we have parses(“John walks on Mars”) = {τ1}, where τ1 is as above. In general, parses(a) may be infinite, and thus we aim to represent it using a tree automaton Cha with L(Cha) = parses(a), the parse chart of a. We can compute Cha as follows. First, observe that parses(a) = L(M) ∩h−1(terms(a)), where h−1(L) = {τ ∈TΣ | h(τ) ∈L} (the inverse homomorphic image, or invhom, of L) and terms(a) = {t ∈T∆| tA = a}, i.e. the set of all terms that evaluate to a. Now assume that the algebra A is regularly decomposable, which means that every a ∈A has a decomposition automaton Da, i.e. there is a tree automaton Da such that L(Da) = terms(a). Because regular tree languages are closed under invhom and intersection, we can then compute a tree automaton Cha by intersecting M with the invhom of Da. To illustrate the IRTG parsing algorithm, let us compute a chart for the sentence s = “John walks on Mars” with the example grammar G of Fig. 1. The states of the decomposition automaton Ds are spans [i, k] of s; the final state is XF = [1, 5]. The automaton contains fourteen rules, including the ones shown in Fig. 3a. 2044 [1, 5] →∗([1, 2], [2, 5]) [2, 5] →∗([2, 3], [3, 5]) [3, 5] →∗([3, 4], [4, 5]) [3, 4] →on [4, 5] →Mars (a) Some rules of Ds. [1, 5] →r1([1, 2], [2, 5]) [2, 5] →r4([2, 3], [4, 5]) [2, 4] →r1([2, 3], [3, 4]) [1, 2] →r2 [2, 3] →r3 (b) Some rules of I = h−1 (Ds). S[1, 5] →r1(NP[1, 2], VP[2, 5]) VP[2, 5] →r4(VP[2, 3], NP[4, 5]) NP[1, 2] →r2 VP[2, 3] →r3 NP[4, 5] →r5 (c) The parse chart Chs. Figure 3: Example rules for the sentence s = “John walks on Mars” Algorithm 1 Naive bottom-up intersection 1: initialize agenda with state pairs for constants 2: initialize P as empty 3: while agenda is not empty do 4: T ′X′ ←pop(agenda) 5: add T ′X′ to P 6: for T ′′X′′ ∈P do 7: for {T1X1, T2X2} = {T ′X′, T ′′X′′} do 8: for T →r(T1, T2) in ML do 9: for X →r(X1, X2) in MR do 10: store TX →r(T1X1, T2X2) 11: add TX to agenda if new We can then compute the invhom automaton I, such that L(I) = h−1(L(Ds)). I uses the same states as Ds, but uses terminal symbols from Σ instead of ∆. Some rules of the invhom automaton I in the example are shown in Fig. 3b. Notice that I also contains rules that are not consistent with M, i.e. that would not occur in a grammatical parse of the sentence, such as [2, 4] →r1([2, 3], [3, 4]). Finally, the chart Chs is computed by intersecting M with I (see Fig. 3c). The states of Chs are pairs of states from M and states from I. It accepts τ1, because τ1 ∈parses(s). Observe the similarity to a traditional context-free parse chart. 3 Bottom-up intersection Both the practical efficiency of this algorithm and its asymptotic complexity depend crucially on how we compute intersection and invhom. We illustrate this using an overly naive intersection algorithm as a strawman, and then analyze the problem to lay the foundations for the improved algorithms in Sections 4 and 5. 
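Before turning to the intersection itself, the decomposition automaton Ds of Section 2.3 can be made concrete with the following sketch, which enumerates its rules explicitly for a given string (one possible eager representation chosen for illustration, not the lazy representation used in practice): single tokens yield the nullary rules and the concatenation operation combines adjacent spans, giving exactly the fourteen rules for the example sentence.

# Sketch: enumerate the rules of the decomposition automaton D_s for a
# string in the string algebra. States are spans (i, k) with 1-based
# positions; rules are (parent, label, children) triples.
def decomposition_rules(tokens):
    n = len(tokens)
    rules = []
    # nullary rules: [i, i+1] -> w_i
    for i in range(1, n + 1):
        rules.append(((i, i + 1), tokens[i - 1], ()))
    # binary rules: [i, k] -> *([i, j], [j, k])
    for width in range(2, n + 1):
        for i in range(1, n - width + 2):
            k = i + width
            for j in range(i + 1, k):
                rules.append(((i, k), "*", ((i, j), (j, k))))
    return rules

rules = decomposition_rules("John walks on Mars".split())
print(len(rules))                      # 14 rules for the example sentence
for parent, label, children in rules[:5]:
    print(parent, "->", label, children)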
Let’s say that we want to compute a tree automaton C for the intersection of a “left” automaton ML and a “right” automaton MR both over the same signature Σ. In the application to IRTG parsing, ML is typically the derivation tree automaton (called M above) and MR is the invhom of a decomposition automaton. As in the product construction for finite string automata, the states of C will be pairs TX of states T of ML and states X of MR, and the rules of C will all have the form TX →r(T1X1, . . . , TnXn), where T →r(T1, . . . , Tn) is a rule in ML, and X →r(X1, . . . , Xn) is a rule in MR. 3.1 Naive intersection A naive bottom-up algorithm is shown in Alg. 1.1 This algorithm maintains an agenda of state pairs that have been discovered, but not explored as children of bottom-up rule applications; and a chart-like set P of all state pairs that have ever been popped off the agenda. The algorithm maintains the invariant that if TX is on the agenda or in P, then T and X are partners (written T ≈X), i.e. there is a tree t ∈TΣ such that T →∗t in ML and X →∗t in MR. The agenda is initialized with all state pairs TX, for which ML has a rule T →r and MR has a rule X →r for some nullary symbol r ∈Σ. Then, while there are state pairs left on the agenda, Alg. 1 pops a state pair T ′X′ off the agenda and adds it to P; iterates over all state pairs T ′′X′′ in P; and queries ML and MR bottom-up for rules in which these states appear as children.2 The iteration in line 7 allows T ′ and X′ to be either left or right children in these rules. For each pair of left and right rules, the rules are combined into a rule of C, and the pair of the parent states T and X is added to the agenda. This naive intersection algorithm yields an asymptotic complexity for IRTG parsing that is higher than expected. Assume, for example, 1We assume binary symbols for simplicity; all algorithms generalize to arbitrary arities. 2For the invhom automaton this can be done by substituting the variables in the homomorphic image h(r) with the corresponding states X′ and X′′, and running the decomposition automaton on the resulting tree. 2045 Algorithm 2 Bottom-up intersection with BU 1: initialize agenda with state pairs for constants 2: generate new Sr = S(MR, r) for every r ∈Σ 3: while agenda is not empty do 4: T ′X′ ←pop(agenda) 5: for T →r(T1, T2) in ML s.t. Ti = T ′ do 6: for X →r(X1, X2) ∈BU(Sr, i, X′) do 7: store rule TX →r(T1X1, T2X2) 8: add TX to agenda if new that we are parsing with an IRTG encoding of a context-free grammar, i.e. with a string algebra (as in Fig. 1). Then the states of MR are spans [i, k], i.e. MR has O(n2) states. Once line 4 has picked a span X′ = [i, j], line 6 iterates over all spans X′′ = [k, l] that have been discovered so far – including ones in which j ̸= k and i ̸= l. Thus the bottom-up lookup in line 9 is executed O(n4) times, most of which will yield no rules. The overall runtime of Alg. 1 is therefore higher than the asymptotic runtime of O(n3) expected for context-free parsing. Similar problems arise for other algebras; for instance, the runtime of Alg. 1 for TAG parsing is O(n8) rather than O(n6). 3.2 Indexing In context-free parsing algorithms, such as CKY or Earley, this issue is addressed through appropriate index datastructures, which organize P such that the lookup in line 5 only returns state pairs where X′′ is of the form [j, k] or [k, i]. This reduces the runtime to cubic. 
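The contrast is easiest to see in code. The following sketch (ours) mirrors the control flow of Alg. 1 on automata given as explicit rule sets, including the wasteful pairing of unrelated state pairs in line 6 that the index structures just described avoid; it is meant only to make the complexity argument concrete, not to be an efficient implementation.

def naive_intersection(rules_L, rules_R):
    # rules are triples (parent, label, children); binary rules are indexed
    # by their ordered pair of child states
    binL, binR = {}, {}
    for parent, label, ch in rules_L:
        if len(ch) == 2:
            binL.setdefault(ch, []).append((parent, label))
    for parent, label, ch in rules_R:
        if len(ch) == 2:
            binR.setdefault(ch, []).append((parent, label))

    # line 1: seed the agenda with state pairs licensed by matching constants
    agenda = [(T, X) for T, a, ch in rules_L if not ch
                     for X, b, ch2 in rules_R if not ch2 and a == b]
    P, out = set(), set()
    while agenda:                                              # line 3
        pair = agenda.pop()                                    # line 4
        if pair in P:
            continue
        P.add(pair)                                            # line 5
        for other in list(P):                                  # line 6
            for c1, c2 in {(pair, other), (other, pair)}:      # line 7
                for T, r in binL.get((c1[0], c2[0]), []):      # line 8
                    for X, s in binR.get((c1[1], c2[1]), []):  # line 9
                        if r == s:
                            out.add(((T, X), r, (c1, c2)))     # line 10
                            if (T, X) not in P:
                                agenda.append((T, X))          # line 11
    return out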
The idea of obtaining optimal asymptotic complexities in IRTG parsing through appropriate indexing was already mentioned from a theoretical perspective by Koller and Kuhlmann (2011). However, they assumed an optimal indexing data structure as given. In practice, indexing requires algebra-specific knowledge about X′′: A CKYstyle index only works if we assume that the states of the decomposition automaton are spans (this is not the case in other algebras), and that the only binary operation in the string algebra is ∗, which composes spans in a certain way. Furthermore, in IRTG parsing the rules of the invhom automaton do not directly correspond to algebra operations, but to terms of operations, which further complicates indexing. In this paper, we incorporate indexing into the intersection algorithm through sibling-finders. A sibling-finder S = S(M, r) for an automaton M and a label r in M’s signature is a data structure that supports a single operation, BU(S, i, X′). We require that a call to BU(S, i, X′) returns the set of rules X →r(X1, . . . , Xn) of M such that X′ is the i-th child state, and for every j ̸= i, Xj must be a state for which we previously called BU(S, j, Xj). Thus a sibling-finder performs a bottom-up rule lookup, changing its state after each call by caching the state and position. Assume that we have sibling-finders for MR. Then we can modify the naive Alg. 1 to the closely related algorithm shown as Alg. 2. This algorithm maintains the same agenda as Alg. 1, but instead of iterating over all explored partner states T ′′X′′, it first iterates over all rules in ML that have T ′ as a child (line 5). In line 6, Alg. 2 then queries MR sibling-finders – we maintain one for each rule label – for right rules with matching rule label and child positions. Note that because there is only one rule with label r in ML, the sibling-finders implicitly keep track of the partners of T2 we have seen so far. Thus they play the role of a more structured variant of P. There are a number of ways in which siblingfinders can be implemented. First, they could simply maintain sets chi(Sr, i) where a call to BU(Sr, i, X′) first adds X′ to chi(Sr, i). The query can then iterate over the set chi(Sr, 3−i), to check for each state X′′ in that set whether MR actually contains a rule with terminal symbol r and children X′ and X′′ (in the right order). This essentially reimplements the behavior of Alg. 1, and comes with the same complexity issues. Second, we could theoretically iterate over all rules of MR to implement the sibling finders via a bottom-up index (e.g., a trie) that supports efficient BU queries. However, in IRTG parsing MR is the invhom of a decomposition automaton. Because the decomposition automaton represents all the ways in which the input object can be built recursively out of smaller structures, including ones which will later be rejected by the grammar, such automata can be very large in practice. Thus we would like to work with a lazy representation of MR and avoid iterating over all rules. 4 Efficient bottom-up lookup Finally, we can exploit the fact that in IRTG parsing, MR is the invhom of a decomposition automaton. Below, we first show how to define algebra-specific sibling-finders for decompo2046 Algorithm 3 passUpwards(Y, π, i, r) 1: rules ←BU(Sr,π, i, Y ) 2: if π = π′k ̸= ϵ then 3: for X →f(X1, . . . , Xn) ∈rules do 4: passUpwards(X, π′, k, r) sition automata. 
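As a reference point for the algebra-specific constructions that follow, here is a sketch (ours) of the sibling-finder contract and of Alg. 2's main loop. The sibling-finder below is backed by a precomputed bottom-up index over all rules of MR — the second of the naive options just discussed — so it only illustrates the BU interface, not the lazy behaviour we actually want.

class SiblingFinder:
    # S(M_R, r): BU(S, i, X) returns rules whose i-th child is X and whose
    # other child state was passed to an earlier BU call
    def __init__(self, rules_R, label):
        self.by_children = {}
        for parent, lab, ch in rules_R:
            if lab == label and len(ch) == 2:
                self.by_children.setdefault(ch, []).append(parent)
        self.seen = {1: set(), 2: set()}

    def bu(self, i, state):
        self.seen[i].add(state)
        found = []
        for other in self.seen[3 - i]:
            ch = (state, other) if i == 1 else (other, state)
            for parent in self.by_children.get(ch, []):
                found.append((parent, ch))
        return found

def intersect(rules_L, rules_R, labels):
    finders = {r: SiblingFinder(rules_R, r) for r in labels}      # line 2
    agenda = [(T, X) for T, a, ch in rules_L if not ch
                     for X, b, ch2 in rules_R if not ch2 and a == b]
    P, out = set(), set()
    while agenda:
        TX = agenda.pop()                                         # line 4
        if TX in P:
            continue
        P.add(TX)
        Tp, Xp = TX
        for T, r, ch in rules_L:
            if len(ch) != 2:
                continue
            for i, c in enumerate(ch, start=1):                   # line 5
                if c != Tp:
                    continue
                for X, (X1, X2) in finders[r].bu(i, Xp):          # line 6
                    out.add(((T, X), r, ((ch[0], X1), (ch[1], X2))))  # line 7
                    if (T, X) not in P:
                        agenda.append((T, X))                     # line 8
    return out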
Then we develop an algebraindependent way to generate invhom siblingfinders out of those for the decomposition automata. These can be plugged into Alg. 2 to achieve the expected parsing complexity. 4.1 Sibling-finders for decomposition automata First, consider the special case of sibling-finders for a decomposition automaton D. The terminal symbols f of D are the operation symbols of an algebra. If we have information about the operations of this algebra, and how they operate on the states of D, a sibling-finder S = S(D, f) can use indexing specific to the operation f to look up potential siblings, and only for them query D to answer BU(S, i, X) For instance, a sibling-finder for the ‘∗’ operation of the string algebra may store all states [k, l] for i = 1 under the index l. Thus a lookup BU(S, 2, [l, m]) can directly retrieve siblings from the l-bin, just as a traditional parse chart would. Spans which do not end at l are never considered. Different algebras require different index structures. For instance, sibling-finders for the string-wrapping operation in the TAG string algebra might retrieve all pairs of substrings [k, l, m, o] that wrap around [l, m] instead. Analogous data structures can be defined for the s-graph algebra. 4.2 Sibling-finders for invhom We can build upon the D-sibling-finders to construct sibling-finders for the invhom I of D. The basic idea is as follows. Consider the term h (r1) = ∗(x1, x2) from Fig. 1. It contains a single operation symbol ∗(plus variables); the homomorphism only replaces one symbol with another. Thus a sibling-finder S(D, ∗) from the decomposition automaton can directly serve as a siblingfinder S(I, r1). We only need to replace the ∗label on the returned rules with r1. In general, the situation is more complicated, because t = h(r) may be a complex term consisting of many algebra operations. In such a case, we construct a separate sibling-finder Sr,π = new S(D, t(π)) for each node π with at least two children. For instance, consider the term t = h (r4) in Fig. 1. It contains three nodes which are labeled by algebra operations, two of which are the concatenation. We decorate these with the sibling-finders Sr4,ϵ and Sr4,1. Each of these is a sibling-finder for the algebra’s concatenation operation; but they may have different state because they received different queries. We can then construct an invhom siblingfinder Sr = S(I, r), which answers a query BU(Sr, i, X′) in two steps. First, we substitute the variable xi by the state X′ and percolate it upward through t using the D-sibling-finders on the path from xi to the root. If π = π′k is the path to xi, we do this by calling passUpwards(X′, π′, k, r), as defined in Alg. 3. If the local sibling-finder returns rules and we are not at the root yet, we recursively call passUpwards at the parent node π′ with each parent state of these rules. As we do this, we let each sibling-finder maintain the set of rules it found, indexed by their parent state. This allows us to perform the second step: we traverse t top-down from the root to extract the rules of the invhom automaton that answer the BU query. Recall that BU(Sr, i, X′) should return only rules X →r(X1, X2) where BU(Sr, 3 −i, X3−i) was called before. Here, this is guaranteed by having distinct D-sibling-finders Sπ,r for every node π at every tree h(r). A final detail is that before the first query to r, we initialize the sibling-finders by calling passUpwards for all the leaves that are labeled by constants. This process is illustrated in Fig. 
4, on the sibling-finder S = S(I, r4) and the input string “John walks on a hill on Mars”, parsed with a suitable extension of the IRTG in Fig. 1. The decomposition automaton can accept the word “on” from states [3, 4] and [6, 7], which during initialization are entered into position 1 of the lower D-sibling-finder Sr4,2, indexed by their end positions (a). Alg. 2 may then generate the query BU(S, 2, [4, 6]). This enters [4, 6] into the lower D-sibling-finder, and because there is a state with end position 4 on the left side of this siblingfinder, BU(Sr4,2, 2, [4, 6]) returns a rule with parent [3, 6]. The parent is subsequently entered into the upper sibling-finder (b). Finally, the query BU(S, 1, [2, 3]) enters [2, 3] into the upper Dsibling-finder and discovers its sibling [3, 6] (c). This yields a state X = [2, 6] for the whole phrase 2047 ∗  ∅ ∅  x1 ∗  4 : [3, 4] ∅  7 : [6, 7] on x2 (a) After initialization. ∗  ∅ 3 : [3, 6]  x1 ∗  4 : [3, 4] 4 : [4, 6]  7 : [6, 7] on x2 (b) After BU(S, 2, [4, 6]). ∗  3 : [2, 3] 3 : [3, 6]  x1 ∗  4 : [3, 4] 4 : [4, 6]  7 : [6, 7] on x2 (c) After BU(S, 1, [2, 3]). Figure 4: Three stages of BU on S(I, r4) for the sentence “John walks on a hill on Mars”. Algorithm 4 Top-down intersection 1: function expand(X): 2: if X /∈visited then 3: add X to visited 4: for X →r(X1, X2) in MR do 5: call expand(Xi) for i = 1, 2 6: for T→r(T1, T2) s.t. Ti ∈prt(Xi) do 7: store rule TX→r(T1X1, T2X2) 8: add T to prt(X) “walks on a hill”. The top-down traversal of the sibling-finders reveals that this state is reached by combining x1 = [2, 3], for which this BU query asked, with x2 = [4, 6], and thus the BU query yields the rule [2, 6] →r4([2, 3], [4, 6]). A subsequent query for BU(S, 2, [4, 8]) would yield the rule [2, 8] →r4([2, 3], [4, 8]), and so on. The overall construction allows us to answer BU queries on invhom automata while making use of algebra-specific index structures. Given suitable index structures, the asymptotic complexity drops down to the expected levels, e.g. O(n3) for IRTGs using the string algebra, O(n6) for the TAG string algebra, and so on. This yields a practical algorithm that can be flexibly adapted to new algebras by implementing their sibling-finders. 5 Top-down intersection Instead of investing into efficient bottom-up queries, we can also explore the use of top-down queries instead. These ask for all rules with parent state X and terminal symbol r. Such queries completely avoid the problem of finding siblings in MR. An invhom automaton can answer top-down queries for r efficiently by running the decomposition automaton top-down on h(r), collecting child states at the variable nodes. For instance, if we query I from Section 2 top-down for rules with the parent [1, 5] and symbol r1, it will enumerate the rules [1, 5] →r1([1, 2], [2, 5]), [1, 5] → r1([1, 3], [3, 5]), and [1, 5] → r1([1, 4], [4, 5]), without ever considering any other combination of child states. This is the idea underlying the intersection algorithm in Alg. 4. It recursively visits states X of MR, collecting for each X a set prt(X) of states T of ML such that T ≈X. Line 5 ensures that the prt sets have been computed for both child states of the rule X →r(X1, X2). Line 6 then does a bottom-up lookup of ML rules with the terminal symbol r and with child states that are partners of X1 and X2. Applied to our running example, Alg. 4 parses “John walks on Mars” by recursive calls on expand([1, 5]) and expand([2, 5]), following the rules of I top-down. 
Recursive calls for [2, 3] and [4, 5] establish VP ∈ prt([2, 3]) and NP ∈ prt([4, 5]), which enables the recursive call for [2, 5] to apply r4 in line 6 and consequently add VP to prt([2, 5]) in line 8.

The algorithm mixes top-down queries to MR with bottom-up queries to ML. Line 6 implements the core idea of the CKY parser, in that it performs bottom-up queries on sets of nonterminals that are partners of adjacent spans – but generalized to arbitrary IRTGs instead of just the string algebra. The top-down query to MR in line 4 is bounded by the number of rules that actually exist in MR, which is O(n^3) for the string algebra, O(n^6) in the TAG string algebra, and O(n^s · 3^{ds} ds) for graphs of degree d and treewidth s − 1 in the graph algebra. Thus Alg. 4 achieves the same asymptotic complexity as native parsing algorithms.

Condensed top-down intersection. One weakness of Alg. 4 is that it iterates over all rules X → r(X1, X2) of MR individually. This can be extremely wasteful when MR is the invhom of a decomposition automaton, because it may contain a great number of rules that have the same states and only differ in the terminal symbol r. For instance, when we encode a context-free grammar as an IRTG, for every rule r of the form A → B C we have h(r) = ∗(x1, x2). The rules of the invhom automaton are the same for all terminal symbols r with the same term h(r). But Alg. 4 iterates over rules r and not over different terms h(r), repeating the exact same computation for every binary rule of the context-free grammar.

To solve this, we define condensed tree automata, which have rules of the form X → ρ(X1, . . . , Xn), where ρ ⊆ Σ is a nonempty set of symbols with arity n. A condensed automaton represents the tree automaton which for all condensed rules X → ρ(X1, . . . , Xn) and all r ∈ ρ has the rule X → r(X1, . . . , Xn). It is straightforward to represent an invhom automaton as a condensed automaton, by determining for each distinct homomorphic image t the set ρt = {r1, . . . , rk} of symbols with h(ri) = t. We can modify Alg. 4 to iterate over condensed rules in line 4, and to iterate in line 6 over the rules T → r(T1, T2) for which Ti ∈ prt(Xi) and r ∈ ρ. This bottom-up query to ML can be answered efficiently from an appropriate index on the rules of ML. Altogether, this condensed intersection algorithm can be dramatically faster than the original version, if the grammar contains many symbols with the same homomorphic image.

6 Evaluation

We compare the runtime performance of the proposed algorithms on practical grammars and inputs, from three very different grammar formalisms: context-free grammars, TAG, and HRG graph grammars. In each setting, we measure the runtime of four algorithms: the naive bottom-up baseline of Section 3; the sibling-finder algorithm from Section 4; and the non-condensed and the condensed version of the top-down algorithm from Section 5. The results are shown in Figures 5, 6 and 7.

Figure 5: Runtimes for context-free parsing (log-scale runtime in ms by sentence length; bottom-up, top-down, top-down cond., sibling-finder).
Figure 6: Runtimes for TAG parsing (log-scale runtime in ms by sentence length; same four algorithms).
Figure 7: Runtimes for graph parsing (log-scale runtime in ms by node count; same four algorithms, plus GKT 15).
We measure the runtimes for computing the complete chart, and plot the geometric mean of runtimes for each input size on a log scale. We measured all runtimes on an Intel Xeon E7-8857 CPU at 3 GHz using Java 8. The JVM was warmed up before the measurements. The parser filtered each grammar automatically, removing all rules whose homomorphic image contained a constant that could not be used for a given input (e.g., a word that did not occur in the sentence).

PCFG. We extracted a binarized context-free grammar with 6929 rules from Section 00 of the Penn Treebank, and parsed the sentences of Section 00 with it. The homomorphism in the corresponding IRTG assigns every terminal symbol a constant or the term ∗(x1, x2), as in Fig. 1. As a consequence, the condensed automaton optimization from Section 5 outperforms all other algorithms, achieving a 100x speedup over the naive bottom-up algorithm at the point where the latter had to be cancelled.

TAG. We also extracted a tree-adjoining grammar from Section 00 of the PTB as described by Chen and Vijay-Shanker (2000), converted it to an IRTG as described by Koller and Kuhlmann (2012), and binarized it, yielding an IRTG with 26652 rules. Each term h(r) in this grammar represents an entire TAG elementary tree, which means the terms are much more complex than for the PCFG and there are far fewer terminal symbols with the same homomorphic term. As a consequence, condensing the invhom is much less helpful. However, the sibling-finder algorithm excels at maintaining state information within each elementary tree, yielding a 1000x speedup over the naive bottom-up algorithm at the point where the latter had to be cancelled.

Graphs. Finally, we parsed a corpus of graphs instead of strings, using the 13681-rule graph grammar of Groschwitz et al. (2015) to parse the 1258 graphs with up to 10 nodes from the "Little Prince" AMR-Bank (Banarescu et al., 2013). The top-down algorithms are slow in this experiment, confirming Groschwitz et al.'s findings. Again, the sibling-finder algorithm outperforms all other algorithms. Note that Groschwitz et al.'s parser ("GKT 15" in Fig. 7) shares much code with our system. It uses the same decomposition automata, but a less mature version of the sibling-finder method which fully computes the invhom automaton. Our new system achieves a 9x speedup for parsing the whole corpus, compared to GKT 15.

7 Related Work

Describing parsing algorithms at a high level of abstraction has a long tradition in computational linguistics, e.g. in deductive parsing with parsing schemata (Shieber et al., 1995). A key challenge under this view is to index chart entries so they can be retrieved efficiently, which parallels the situation in automata intersection discussed here. Gómez-Rodríguez et al. (2009) present an algorithm that automatically establishes index structures guaranteeing optimal asymptotic runtime, but it also requires algebra-specific extensions for grammar formalisms that go beyond context-free string grammars. Efficient parsing has also been studied in other generalized grammar formalisms beyond IRTG. Kanazawa (to appear) shows how the parsing problem of Abstract Categorial Grammars (de Groote, 2001) can be translated into Datalog, which enables the use of generic indexing strategies for Datalog to achieve optimal asymptotic complexity. Ranta (2004) discusses parsing for his Grammatical Framework formalism in terms of partial evaluation techniques from functional programming, which are related to the step-by-step evaluation of sibling-finders in Figure 4.
Like the approach of Gomez-Rodriguez et al., these methods have not been evaluated for large-scale grammars and realistic evaluation data, which makes it hard to judge their relative practical merits. Most work in the tree automata community has a theoretical slant, and there is less research on the efficient implementation of algorithms for tree automata than one would expect; Cleophas (2009) and Lengal et al. (2012) are notable exceptions. Even these tend to be motivated by applications such as specification and verification, where the tree automata are much smaller and much less ambiguous than in computational linguistics. This makes these systems hard to apply directly. 8 Conclusion We have presented novel algorithms for computing the intersection and the inverse homomorphic image of finite tree automata. These can be used to implement a generic algorithm for IRTG parsing, and apply directly to any grammar formalism that can be represented as an IRTG. An evaluation on practical data from three different grammar formalisms shows consistent speed improvements of several orders of magnitude, and our graph parser has the fastest published runtimes. A Java implementation of our algorithms is available as part of the Alto parser, http:// bitbucket.org/tclup/alto. We focused here purely on symbolic parsing, and on computing complete parse charts. In the presence of a probability model (e.g. for IRTG encodings of PCFGs), our algorithms could be made faster through the use of appropriate pruning techniques. It would also be interesting to combine the strengths of the condensed and sibling-finder algorithms for further efficiency gains. Acknowledgments. We thank the anonymous reviewers for their comments. We are grateful to Johannes Gontrum for an early implementation of Alg. 4, and to Christoph Teichmann for many fruitful discussions. This work was supported by the DFG grant KO 2916/2-1. 2050 References Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the Linguistic Annotation Workshop (LAW VII-ID). John Chen and K. Vijay-Shanker. 2000. Automated extraction of TAGs from the Penn Treebank. In Proceedings of IWPT. David Chiang, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, Bevan Jones, and Kevin Knight. 2013. Parsing graphs with hyperedge replacement grammars. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL). David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. Stephen Clark and James Curran. 2007. Widecoverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4):493–552. Loek Cleophas. 2009. Forest FIRE and FIRE Wood: Tools for tree automata and tree algorithms. In Proceedings of the Conference on Finite-State Methods and Natural Language Processing (FSMNLP). Hubert Comon, Max Dauchet, R´emi Gilleron, Florent Jacquemard, Denis Lugiez, Christof L¨oding, Sophie Tison, and Marc Tommasi. 2008. Tree automata techniques and applications. http:// tata.gforge.inria.fr/. Philippe de Groote. 2001. Towards abstract categorial grammars. In Proceedings of the 39th ACL/10th EACL. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Proceedings of HLT/NAACL. Carlos G´omez-Rodr´ıguez, Jes´us Vilares, and Miguel A. Alonso. 2009. 
A compiler for parsing schemata. Software: Practice and Experience, 39(5):441–470. Jonathan Graehl, Kevin Knight, and Jonathan May. 2008. Training tree transducers. Computational Linguistics, 34(3). Jonas Groschwitz, Alexander Koller, and Christoph Teichmann. 2015. Graph parsing with s-graph grammars. In Proceedings of the 53rd ACL and 7th IJCNLP. Aravind K. Joshi and Yves Schabes. 1997. TreeAdjoining Grammars. In G. Rozenberg and A. Salomaa, editors, Handbook of Formal Languages, volume 3, pages 69–123. Springer-Verlag, Berlin. Laura Kallmeyer and Wolfgang Maier. 2013. Datadriven parsing using probabilistic linear contextfree rewriting systems. Computational Linguistics, 39(1):87–119. Makoto Kanazawa. to appear. Parsing and generation as datalog query evaluation. IfCoLog Journal of Logics and Their Applications. Alexander Koller and Marco Kuhlmann. 2011. A generalized view on parsing and translation. In Proceedings of the 12th International Conference on Parsing Technologies (IWPT). Alexander Koller and Marco Kuhlmann. 2012. Decomposing TAG algorithms using simple algebraizations. In Proceedings of the 11th TAG+ Workshop. Ondrej Lengal, Jiri Simacek, and Tomas Vojnar. 2012. Vata: A library for efficient manipulation of nondeterministic tree automata. In C. Flanagan and B. K¨onig, editors, Tools and Algorithms for the Construction and Analysis of Systems: 18th International Conference, TACAS 2012. Springer. Mike Lewis and Mark Steedman. 2014. A* CCG parsing with a supertag-factored model. In Proceedings of EMNLP. Aarne Ranta. 2004. Grammatical framework: A typetheoretical grammar formalism. Journal of Functional Programming, 14(2):145–189. Nina Seemann, Fabienne Braune, and Andreas Maletti. 2015. String-to-tree multi bottom-up tree transducers. In Proceedings of the 53rd ACL and 7th IJCNLP. Stuart M Shieber, Yves Schabes, and Fernando CN Pereira. 1995. Principles and implementation of deductive parsing. The Journal of logic programming, 24(1):3–36. Mark Steedman. 2001. The Syntactic Process. MIT Press, Cambridge, MA. Yuk Wah Wong and Raymond J. Mooney. 2006. Learning for semantic parsing with statistical machine translation. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT/NAACL-2006). 2051
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2052–2062, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics A Vector Space for Distributional Semantics for Entailment ∗ James Henderson and Diana Nicoleta Popa Xerox Research Centre Europe [email protected] and [email protected] Abstract Distributional semantics creates vectorspace representations that capture many forms of semantic similarity, but their relation to semantic entailment has been less clear. We propose a vector-space model which provides a formal foundation for a distributional semantics of entailment. Using a mean-field approximation, we develop approximate inference procedures and entailment operators over vectors of probabilities of features being known (versus unknown). We use this framework to reinterpret an existing distributionalsemantic model (Word2Vec) as approximating an entailment-based model of the distributions of words in contexts, thereby predicting lexical entailment relations. In both unsupervised and semi-supervised experiments on hyponymy detection, we get substantial improvements over previous results. 1 Introduction Modelling entailment is a fundamental issue in computational semantics. It is also important for many applications, for example to produce abstract summaries or to answer questions from text, where we need to ensure that the input text entails the output text. There has been a lot of interest in modelling entailment in a vector-space, but most of this work takes an empirical, often ad-hoc, approach to this problem, and achieving good results has been difficult (Levy et al., 2015). In this work, we propose a new framework for modelling entailment in a vector-space, and illustrate its effective∗This work was partially supported by French ANR grant CIFRE N 1324/2014. ⇒ unk f g ¬f unk 1 0 0 0 f 1 1 0 0 g 1 0 1 0 ¬f 1 0 0 1 Table 1: Pattern of logical entailment between nothing known (unk), two different features f and g known, and the complement of f (¬f) known. ness with a distributional-semantic model of hyponymy detection. Unlike previous vector-space models of entailment, the proposed framework explicitly models what information is unknown. This is a crucial property, because entailment reflects what information is and is not known; a representation y entails a representation x if and only if everything that is known given x is also known given y. Thus, we model entailment in a vector space where each dimension represents something we might know. As illustrated in Table 1, knowing that a feature f is true always entails knowing that same feature, but never entails knowing that a different feature g is true. Also, knowing that a feature is true always entails not knowing anything (unk), since strictly less information is still entailment, but the reverse is never true. Table 1 also illustrates that knowing that a feature f is false (¬f) patterns exactly the same way as knowing that an unrelated feature g is true. This illustrates that the relevant dichotomy for entailment is known versus unknown, and not true versus false. Previous vector-space models have been very successful at modelling semantic similarity, in particular using distributional semantic models (e.g. (Deerwester et al., 1990; Sch¨utze, 1993; Mikolov et al., 2013a)). Distributional semantics uses the distributions of words in contexts to induce vectorspace embeddings of words, which have been 2052 shown to be useful for a wide variety of tasks. 
Two words are predicted to be similar if the dot product between their vectors is high. But the dot product is an anti-symmetric operator, which makes it more natural to interpret these vectors as representing whether features are true or false, whereas the dichotomy known versus unknown is asymmetric. We surmise that this is why distributional semantic models have had difficulty modelling lexical entailment (Levy et al., 2015). To develop a vector-space model of whether features are known or unknown, we start with discrete binary vectors, where 1 means known and 0 means unknown. Entailment between these discrete binary vectors can be calculated by independently checking each dimension. But as soon as we try to do calculations with distributions over these vectors, we need to deal with the case where the features are not independent. For example, if feature f has a 50% chance of being true and a 50% chance of being false, we can’t assume that there is a 25% chance that both f and ¬f are known. This simple case of mutual exclusion is just one example of a wide range of constraints between features which we need to handle in semantic models. These constraints mean that the different dimensions of our vector space are not independent, and therefore exact models are not factorised. Because the models are not factorised, exact calculations of entailment and exact inference of vectors are intractable. Mean-field approximations are a popular approach to efficient inference for intractable models. In a mean-field approximation, distributions over binary vectors are represented using a single probability for each dimension. These vectors of real values are the basis of our proposed vector space for entailment. In this work, we propose a vector-space model which provides a formal foundation for a distributional semantics of entailment. This framework is derived from a mean-field approximation to entailment between binary vectors, and includes operators for measuring entailment between vectors, and procedures for inferring vectors in an entailment graph. We validate this framework by using it to reinterpret existing Word2Vec (Mikolov et al., 2013a) word embedding vectors as approximating an entailment-based model of the distribution of words in contexts. This reinterpretation allows us to use existing word embeddings as an unsupervised model of lexical entailment, successfully predicting hyponymy relations using the proposed entailment operators in both unsupervised and semi-supervised experiments. 2 Modelling Entailment in a Vector Space To develop a model of entailment in a vector space, we start with the logical definition of entailment in terms of vectors of discrete known features: y entails x if and only if all the known features in x are also included in y. We formalise this relation with binary vectors x, y where 1 means known and 0 means unknown, so this discrete entailment relation (y⇒x) can be defined with the binary formula: P((y⇒x) | x, y) = Y k (1 −(1−yk)xk) Given prior probability distributions P(x), P(y) over these vectors, the exact joint and marginal probabilities for an entailment relation are: P(x, y, (y⇒x)) = P(x) P(y) Y k (1−(1−yk)xk) P((y⇒x)) = EP(x)EP(y) Y k (1−(1−yk)xk) (1) We cannot assume that the priors P(x) and P(y) are factorised, because there are many important correlations between features and therefore we cannot assume that the features are independent. 
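For concreteness, the following sketch (ours) computes the discrete entailment check of Table 1 and the probability in Eq. (1) in the special case where the features really are independent Bernoulli variables – exactly the factorisation assumption that, as just noted, we cannot make for the prior.

import numpy as np

def entails(y, x):
    # binary vectors, 1 = known, 0 = unknown: y => x iff every feature
    # known in x is also known in y (Table 1)
    return bool(np.all(1 - (1 - y) * x))

def p_entails_factorised(px, py):
    # E_P(x) E_P(y) prod_k (1 - (1 - y_k) x_k) for independent features
    # with P(x_k = 1) = px[k] and P(y_k = 1) = py[k]
    return float(np.prod(1 - (1 - py) * px))

x = np.array([1, 0, 0, 1])
y = np.array([1, 1, 0, 1])
print(entails(y, x), entails(x, y))                            # True False
print(p_entails_factorised(np.full(4, 0.5), np.full(4, 0.5)))  # 0.75**4, about 0.32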
As discussed in Section 1, even just representing both a feature f and its negation ¬f requires two different dimensions k and k′ in the vector space, because 0 represents unknown and not false. Given valid feature vectors, calculating entailment can consider these two dimensions separately, but to reason with distributions over vectors we need the prior P(x) to enforce the constraint that xk and xk′ are mutually exclusive. In general, such correlations and anti-correlations exist between many semantic features, which makes inference and calculating the probability of entailment intractable. To allow for efficient inference in such a model, we propose a mean-field approximation. This in effect assumes that the posterior distribution over vectors is factorised, but in practice this is a much weaker assumption than assuming the prior is factorised. The posterior distribution has less uncertainty and therefore is influenced less by nonfactorised prior constraints. By assuming a factorised posterior, we can then represent distributions over feature vectors with simple vectors of 2053 probabilities of individual features (or as below, with their log-odds). These real-valued vectors are the basis of the proposed vector-space model of entailment. In the next two subsections, we derive a meanfield approximation for inference of real-valued vectors in entailment graphs. This derivation leads to three proposed vector-space operators for approximating the log-probability of entailment, summarised in Table 2. These operators will be used in the evaluation in Section 5. This inference framework will also be used in Section 3 to model how existing word embeddings can be mapped to vectors to which the entailment operators can be applied. 2.1 A Mean-Field Approximation A mean-field approximation approximates the posterior P using a factorised distribution Q. First of all, this gives us a concise description of the posterior P(x| . . .) as a vector of continuous values Q(x=1), where Q(x=1)k = Q(xk=1) ≈ EP(x|...)xk = P(xk=1| . . .) (i.e. the marginal probabilities of each bit). Secondly, as is shown below, this gives us efficient methods for doing approximate inference of vectors in a model. First we consider the simple case where we want to approximate the posterior distribution P(x, y|y⇒x). In a mean-field approximation, we want to find a factorised distribution Q(x, y) which minimises the KL-divergence DKL(Q(x, y)||P(x, y|y⇒x)) with the true distribution P(x, y|y⇒x). L = DKL(Q(x, y)||P(x, y|(y⇒x))) ∝ X x Q(x) log Q(x, y) P(x, y, (y⇒x)) = X k EQ(xk) log Q(xk) + X k EQ(yk) log Q(yk) −EQ(x) log P(x) −EQ(y) log P(y) − X k EQ(xk)EQ(yk) log(1−(1−yk)xk) In the final equation, the first two terms are the negative entropy of Q, −H(Q), which acts as a maximum entropy regulariser, the final term enforces the entailment constraint, and the middle two terms represent the prior for x and y. One approach (generalised further in the next subsection) to the prior terms −EQ(x) log P(x) is to bound them by assuming P(x) is a function in the exponential family, giving us: EQ(x) log P(x) ≥EQ(x) log exp(P k θx kxk) Zθ = X k EQ(xk)θx kxk −log Zθ where the log Zθ is not relevant in any of our inference problems and thus will be dropped below. As typically in mean-field approximations, inference of Q(x) and Q(y) can’t be done efficiently with this exact objective L, because of the nonlinear interdependence between xk and yk in the last term. 
Thus, we introduce two approximations to L, one for use in inferring Q(x) given Q(y) (forward inference), and one for the reverse inference problem (backward inference). In both cases, the approximation is done with an application of Jensen’s inequality to the log function, which gives us an upper bound on L, as is standard practice in mean-field approximations. For forward inference: L ≤−H(Q) −Q(xk=1)θx k −EQ(yk)θy kyk (2) −Q(xk=1) log Q(yk=1) ) which we can optimise for Q(xk=1): Q(xk=1) = σ( θx k + log Q(yk=1) ) (3) where σ() is the sigmoid function. The sigmoid function arises from the entropy regulariser, making this a specific form of maximum entropy model. And for backward inference: L ≤−H(Q) −EQ(xk)θx kxk −Q(yk=1)θy k (4) −(1−Q(yk=1)) log(1−Q(xk=1)) ) which we can optimise for Q(yk=1): Q(yk=1) = σ( θy k −log(1−Q(xk=1)) ) (5) Note that in equations (2) and (4) the final terms, Q(xk=1) log Q(yk=1) and (1−Q(yk=1)) log(1−Q(xk=1)) respectively, are approximations to the log-probability of the entailment. We define two vector-space operators, < ⃝and > ⃝, to be these same approximations. log Q(y⇒x) ≈ X k EQ(xk) log(EQ(yk)(1 −(1−yk)xk)) = Q(x=1) · log Q(y=1) ≡X < ⃝Y log Q(y⇒x) ≈ X k EQ(yk) log(EQ(xk)(1 −(1−yk)xk)) = (1−Q(y=1)) · log(1−Q(x=1)) ≡Y > ⃝X 2054 X < ⃝Y ≡σ(X) · log σ(Y ) Y > ⃝X ≡σ(−Y ) · log σ(−X) Y ˜⇒X ≡ X k log(1 −σ(−Yk)σ(Xk)) Table 2: The proposed entailment operators, approximating log P(y⇒x). We parametrise these operators with the vectors X, Y of log-odds of Q(x), Q(y), namely X = log Q(x=1) Q(x=0) = σ-1(Q(x=1)). The resulting operator definitions are summarised in Table 2. Also note that the probability of entailment given in equation (1) becomes factorised when we replace P with Q. We define a third vector-space operator, ˜⇒, to be this factorised approximation, also shown in Table 2. 2.2 Inference in Entailment Graphs In general, doing inference for one entailment is not enough; we want to do inference in a graph of entailments between variables. In this section we generalise the above mean-field approximation to entailment graphs. To represent information about variables that comes from outside the entailment graph, we assume we are given a prior P(x) over all variables xi in the graph. As above, we do not assume that this prior is factorised. Instead we assume that the prior P(x) is itself a graphical model which can be approximated with a mean-field approximation. Given a set of variables xi each representing vectors of binary variables xik, a set of entailment relations r = {(i, j)|(xi⇒xj)}, and a set of negated entailment relations ¯r = {(i, j)|(xi /⇒xj)}, we can write the joint posterior probability as: P(x, r, ¯r) = 1 Z P(x) Y i  ( Y j:r(i,j) Y k P(xik⇒xjk|xik, xjk)) ( Y j:¯r(i,j) (1 − Y k P(xik⇒xjk|xik, xjk)))  We want to find a factorised distribution Q that minimises L = DKL(Q(x)||P(x|r, ¯r)). As above, we bound this loss for each element Xik= σ-1(Q(xik=1)) of each vector we want to infer, using analogous Jensen’s inequalities for the terms involving nodes i and j such that r(i, j) or r(j, i). For completeness, we also propose similar inequalities for nodes i and j such that ¯r(i, j) or ¯r(j, i), and bound them using the constants Cijk ≥ Y k′̸=k (1−σ(−Xik′)σ(Xjk′)). To represent the prior P(x), we use the terms θik(X ¯ik) ≤log EQ(x ¯ ik)P(x ¯ik, xik=1) 1 −EQ(x ¯ ik)P(x ¯ik, xik=1) where x ¯ik is the set of all xi′k′ such that either i′̸=i or k′̸=k. 
These terms can be thought of as the logodds terms that would be contributed to the loss function by including the prior’s graphical model in the mean-field approximation. Now we can infer the optimal Xik as: Xik = θik(X ¯ik) + X j:r(i,j) −log σ(−Xjk) (6) + X j:r(j,i) log σ(Xjk) + X j:¯r(j,i) log 1−Cijkσ(Xjk) 1−Cijk + X j:¯r(i,j) −log 1−Cijkσ(−Xjk) 1−Cijk In summary, the proposed mean-field approximation does inference in entailment graphs by iteratively re-estimating each Xi as the sum of: the prior log-odds, −log σ(−Xj) for each entailed variable j, and log σ(Xj) for each entailing variable j.1 This inference optimises Xi< ⃝Xj for each entailing j plus Xi > ⃝Xj for each entailed j, plus a maximum entropy regulariser on Xi. Negative entailment relations, if they exist, can also be incorporated with some additional approximations. Complex priors can also be incorporated through their log-odds, simulating the inclusion of the prior within the mean-field approximation. Given its dependence on mean-field approximations, it is an empirical question to what extent we should view this model as computing real entailment probabilities and to what extent we should view it as a well-motivated non-linear mapping for which we simply optimise the input-output behaviour (as for neural networks (Henderson and Titov, 2010)). In Sections 3 and 5 we argue for the former (stronger) view. 3 Interpreting Word2Vec Vectors To evaluate how well the proposed framework provides a formal foundation for the distributional semantics of entailment, we use it to re-interpret an 1It is interesting to note that −log σ(−Xj) is a nonnegative transform of Xj, similar to the ReLU nonlinearity which is popular in deep neural networks (Glorot et al., 2011). log σ(Xj) is the analogous non-positive transform. 2055 existing model of distributional semantics in terms of semantic entailment. There has been a lot of work on how to use the distribution of contexts in which a word occurs to induce a vector representation of the semantics of words. In this paper, we leverage this previous work on distributional semantics by re-interpreting a previous distributional semantic model and using this understanding to map its vector-space word embeddings to vectors in the proposed framework. We then use the proposed operators to predict entailment between words using these vectors. In Section 5 below, we evaluate these predictions on the task of hyponymy detection. In this section we motivate three different ways to interpret the Word2Vec (Mikolov et al., 2013a; Mikolov et al., 2013b) distributional semantic model as an approximation to an entailment-based model of the semantic relationship between a word and its context. Distributional semantics learns the semantics of words by looking at the distribution of contexts in which they occur. To model this relationship, we assume that the semantic features of a word are (statistically speaking) redundant with those of its context words, and consistent with those of its context words. We model these properties using a hidden vector which is the consistent unification of the features of the middle word and the context. In other words, there must exist a hidden vector which entails both of these vectors, and is consistent with prior constraints on vectors. 
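For later reference, here is a compact sketch (ours) of the three operators of Table 2 and of one mean-field update from Eq. (6), for a node whose neighbours are given as lists of log-odds vectors; in the comments, <o, >o and ~=> stand for the circled forward, backward and factorised operators, and the negated-entailment terms with their Cijk constants are omitted for brevity.

import numpy as np

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_op(X, Y):        # X <o Y  =  sigma(X) . log sigma(Y)
    return float(np.dot(sigma(X), np.log(sigma(Y))))

def backward_op(Y, X):       # Y >o X  =  sigma(-Y) . log sigma(-X)
    return float(np.dot(sigma(-Y), np.log(sigma(-X))))

def factorised_op(Y, X):     # Y ~=> X  =  sum_k log(1 - sigma(-Y_k) sigma(X_k))
    return float(np.sum(np.log(1.0 - sigma(-Y) * sigma(X))))

def mean_field_update(prior_logodds, entailed, entailing):
    # Eq. (6): prior log-odds, plus -log sigma(-X_j) for each entailed
    # neighbour j, plus log sigma(X_j) for each entailing neighbour j
    X = np.array(prior_logodds, dtype=float)
    for Xj in entailed:
        X += -np.log(sigma(-np.asarray(Xj, dtype=float)))
    for Xj in entailing:
        X += np.log(sigma(np.asarray(Xj, dtype=float)))
    return X

Y = np.array([2.0, -1.0, 0.5])
X = np.array([1.0, -2.0, 0.0])
print(backward_op(Y, X), factorised_op(Y, X))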
We split this into two steps, inference of the hidden vector Y from the middle vector Xm, context vectors Xc and prior, and computing the log-probability (7) that this hidden vector entails the middle and context vectors: max Y (log P(y, y⇒xm, y⇒xc)) (7) We interpret Word2Vec’s Skip-Gram model as learning its context and middle word vectors so that the log-probability of this entailment is high for the observed context words and low for other (sampled) context words. The word embeddings produced by Word2Vec are only related to the vectors Xm assigned to the middle words; context vectors are computed but not output. We model the context vectors X′ c as combining (as in equation (5)) information about a context word itself with information which can be inferred from this word given the prior, X′ c = θc −log σ(−Xc). The numbers in the vectors output by Word2Vec are real numbers between negative infinity and infinity, so the simplest interpretation of them is as the log-odds of a feature being known. In this case we can treat these vectors directly as the Xm in the model. The inferred hidden vector Y can then be calculated using the model of backward inference from the previous section. Y = θc −log σ(−Xc) −log σ(−Xm) = X′ c −log σ(−Xm) Since the unification Y of context and middle word features is computed using backward inference, we use the backward-inference operator > ⃝to calculate how successful that unification was. This gives us the final score: log P(y, y⇒xm, y⇒xc) ≈Y > ⃝Xm + Y > ⃝Xc + −σ(−Y )·θc = Y > ⃝Xm + −σ(−Y )·X′ c This is a natural interpretation, but it ignores the equivalence in Word2Vec between pairs of positive values and pairs of negative values, due to its use of the dot product. As a more accurate interpretation, we interpret each Word2Vec dimension as specifying whether its feature is known to be true or known to be false. Translating this Word2Vec vector into a vector in our entailment vector space, we get one copy Y + of the vector representing known-to-be-true features and a second negated duplicate Y −of the vector representing known-to-be-false features, which we concatenate to get our representation Y . Y + = X′ c −log σ(−Xm) Y −= −X′ c −log σ(Xm) log P(y, y⇒xm, y⇒xc) ≈Y + > ⃝Xm + −σ(−Y +)·X′ c + Y −> ⃝(−Xm) + −σ(−Y −)·(−X′ c) As a third alternative, we modify this latter interpretation with some probability mass reserved for unknown in the vicinity of zero. By subtracting 1 from both the original and negated copies of each dimension, we get a probability of unknown of 1−σ(Xm−1) −σ(−Xm−1). This gives us: Y + = X′ c −log σ(−(Xm−1)) Y −= −X′ c −log σ(−(−Xm−1)) log P(y, y⇒xm, y⇒xc) ≈Y + > ⃝(Xm−1) + −σ(−Y +)·X′ c + Y −> ⃝(−Xm−1)) + −σ(−Y −)·(−X′ c) 2056 Figure 1: The learning gradients for Word2Vec, the log-odds > ⃝, and the unk dup > ⃝interpretation of its vectors. To understand better the relative accuracy of these three interpretations, we compared the training gradient which Word2Vec uses to train its middle-word vectors to the training gradient for each of these interpretations. We plotted these gradients for the range of values typically found in Word2Vec vectors for both the middle vector and the context vector. Figure 1 shows three of these plots. As expected, the second interpretation is more accurate than the first because its plot is antisymmetric around the diagonal, like the Word2Vec gradient. 
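The following sketch (ours) shows how a (middle word, context word) pair would be scored under the third, "unk dup" interpretation. Xm is the raw Word2Vec vector of the middle word, and Xc_prime plays the role of X′c = θc − log σ(−Xc), which is treated here as given; this is our reading of the equations above, not released code.

import numpy as np

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))

def backward_op(Y, X):       # Y >o X  =  sigma(-Y) . log sigma(-X)
    return float(np.dot(sigma(-Y), np.log(sigma(-X))))

def unk_dup_score(Xm, Xc_prime):
    # duplicate and negate the middle-word vector, shifting both copies by 1
    # so that values near zero carry probability mass for "unknown"
    Y_pos = Xc_prime - np.log(sigma(-(Xm - 1)))
    Y_neg = -Xc_prime - np.log(sigma(-(-Xm - 1)))
    return (backward_op(Y_pos, Xm - 1) + float(np.dot(-sigma(-Y_pos), Xc_prime))
            + backward_op(Y_neg, -Xm - 1) + float(np.dot(-sigma(-Y_neg), -Xc_prime)))

rng = np.random.default_rng(0)
Xm, Xc_prime = rng.normal(size=300), rng.normal(size=300)
print(unk_dup_score(Xm, Xc_prime))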
In the third alternative, the constant 1 was chosen to optimise this match, producing a close match to the Word2Vec training gradient, as shown in Figure 1 (Word2Vec versus Unk dup). Thus, Word2Vec can be seen as a good approximation to the third model, and a progressively worse approximation to the second and first models. Therefore, if the entailment-based distributional semantic model we propose is accurate, then we would expect the best accuracy in hyponymy detection using the third interpretation of Word2Vec vectors, and progressively worse accuracy for the other two interpretations. As we will see in Section 5, this prediction holds. 4 Related Work There has been a significant amount of work on using distributional-semantic vectors for hyponymy detection, using supervised, semi-supervised or unsupervised methods (e.g. (Yu et al., 2015; Necsulescu et al., 2015; Vylomova et al., 2015; Weeds et al., 2014; Fu et al., 2015; Rei and Briscoe, 2014)). Because our main concern is modelling entailment within a vector space, we do not do a thorough comparison to models which use measures computed outside the vector space (e.g. symmetric measures (LIN (Lin, 1998)), asymmetric measures (WeedsPrec (Weeds and Weir, 2003; Weeds et al., 2004), balAPinc (Kotlerman et al., 2010), invCL (Lenci and Benotto, 2012)) and entropy-based measures (SLQS (Santus et al., 2014))), nor to models which encode hyponymy in the parameters of a vector-space operator or classifier (Fu et al., 2015; Roller et al., 2014; Baroni et al., 2012)). We also limit our evaluation of lexical entailment to hyponymy, not including other related lexical relations (cf. (Weeds et al., 2014; Vylomova et al., 2015; Turney and Mohammad, 2014; Levy et al., 2014)), leaving more complex cases to future work on compositional semantics. We are also not concerned with models or evaluations which require supervised learning about individual words, instead limiting ourselves to semisupervised learning where the words in the training and test sets are disjoint. For these reasons, in our evaluations we replicate the experimental setup of Weeds et al. (2014), for both unsupervised and semi-supervised models. Within this setup, we compare to the results of the models evaluated by Weeds et al. (2014) and to previously proposed vector-space operators. This includes one vector space operator for hyponymy which doesn’t have trained parameters, proposed by Rei and Briscoe (2014), called weighted cosine. The dimensions of the dot product (normalised to make it a cosine measure) are weighted to put more weight on the larger values in the entailed (hypernym) vector. We base this evaluation on the Word2Vec (Mikolov et al., 2013a; Mikolov et al., 2013b) distributional semantic model and its publicly available word embeddings. We choose it because it is popular, simple, fast, and its embeddings have been derived from a very large corpus. Levy and Goldberg (2014) showed that it is closely related to the previous PMI-based distributional semantic models (e.g. (Turney and Pantel, 2010)). The most similar previous work, in terms of motivation and aims, is that of Vilnis and McCallum (2015). They also model entailment directly using a vector space, without training a classifier. But instead of representing words as a point in a vector space (as in this work), they represent words as a Gaussian distribution over points in a vector space. 
This allows them to represent the extent to which a feature is known versus unknown as the 2057 amount of variance in the distribution for that feature’s dimension. While nicely motivated theoretically, the model appears to be more computationally expensive than the one proposed here, particularly for inferring vectors. They do make unsupervised predictions of hyponymy relations with their learned vector distributions, using KL-divergence between the distributions for the two words. They evaluate their models on the hyponymy data from (Baroni et al., 2012). As discussed further in section 5.2, our best models achieve non-significantly better average precision than their best models. The semi-supervised model of Kruszewski et al. (2015) also models entailment in a vector space, but they use a discrete vector space. They train a mapping from distributional semantic vectors to Boolean vectors such that feature inclusion respects a training set of entailment relations. They then use feature inclusion to predict hyponymy, and other lexical entailment relations. This approach is similar to the one used in our semisupervised experiments, except that their discrete entailment prediction operator is very different from our proposed entailment operators. 5 Evaluation To evaluate whether the proposed framework is an effective model of entailment in vector spaces, we apply the interpretations from Section 3 to publicly available word embeddings and use them to predict the hyponymy relations in a benchmark dataset. This framework predicts that the more accurate interpretations of Word2Vec result in more accurate unsupervised models of hyponymy. We evaluate on detecting hyponymy relations between words because hyponymy is the canonical type of lexical entailment; most of the semantic features of a hypernym (e.g. “animal”) must be included in the semantic features of the hyponym (e.g. “dog”). We evaluate in both a fully unsupervised setup and a semi-supervised setup. 5.1 Hyponymy with Word2Vec Vectors For our evaluation on hyponymy detection, we replicate the experimental setup of Weeds et al. (2014), using their selection of word pairs2 from the BLESS dataset (Baroni and Lenci, 2011).3 2https://github.com/SussexCompSem/ learninghypernyms 3Of the 1667 word pairs in this data, 24 were removed because we do not have an embedding for one of the words. These noun-noun word pairs include positive hyponymy pairs, plus negative pairs consisting of some other hyponymy pairs reversed, some pairs in other semantic relations, and some random pairs. Their selection is balanced between positive and negative examples, so that accuracy can be used as the performance measure. For their semisupervised experiments, ten-fold cross validation is used, where for each test set, items are removed from the associated training set if they contain any word from the test set. Thus, the vocabulary of the training and testing sets are always disjoint, thereby requiring that the models learn about the vector space and not about the words themselves. We had to perform our own 10-fold split, but apply the same procedure to filter the training set. We could not replicate the word embeddings used in Weeds et al. (2014), so instead we use publicly available word embeddings.4 These vectors were trained with the Word2Vec software applied to about 100 billion words of the Google-News dataset, and have 300 dimensions. 
The hyponymy detection results are given in Table 3, including both unsupervised (upper box) and semi-supervised (lower box) experiments. We report two measures of performance, hyponymy detection accuracy (50% Acc) and direction classification accuracy (Dir Acc). Since all the operators only determine a score, we need to choose a threshold to get detection accuracies. Given that the proportion of positive examples in the dataset has been artificially set at 50%, we threshold each model’s score at the point where the proportion of positive examples output is 50%, which we call “50% Acc”. Thus the threshold is set after seeing the testing inputs but not their target labels. Direction classification accuracy (Dir Acc) indicates how well the method distinguishes the relative abstractness of two nouns. Given a pair of nouns which are in a hyponymy relation, it classifies which word is the hypernym and which is the hyponym. This measure only considers positive examples and chooses one of two directions, so it is inherently a balanced binary classification task. Classification is performed by simply comparing the scores in both directions. If both directions produce the same score, the expected random accuracy (50%) is used. As representative of previous work, we report 4https://code.google.com/archive/p/ word2vec/ 2058 operator supervision 50% Acc Dir Acc Weeds et.al. None 58% – log-odds < ⃝ None 54.0% 55.9% weighted cos None 55.5% 57.9% dot None 56.3% 50% dif None 56.9% 59.6% log-odds ˜⇒ None 57.0% 59.4% log-odds > ⃝ None 60.1%* 62.2% dup > ⃝ None 61.7% 68.8% unk dup ˜⇒ None 63.4%* 68.8% unk dup > ⃝ None 64.5% 68.8% Weeds et.al. SVM 75% – mapped dif cross ent 64.3% 72.3% mapped < ⃝ cross ent 74.5% 91.0% mapped ˜⇒ cross ent 77.5% 92.3% mapped > ⃝ cross ent 80.1% 90.0% Table 3: Accuracies on the BLESS data from Weeds et al. (2014), for hyponymy detection (50% Acc) and hyponymy direction classification (Dir Acc), in the unsupervised (upper box) and semisupervised (lower box) experiments. For unsupervised accuracies, * marks a significant difference with the previous row. the best results from Weeds et al. (2014), who try a number of unsupervised and semi-supervised models, and use the same testing methodology and hyponymy data. However, note that their word embeddings are different. For the semisupervised models, Weeds et al. (2014) trains classifiers, which are potentially more powerful than our linear vector mappings. We also compare the proposed operators to the dot product (dot),5 vector differences (dif), and the weighted cosine of Rei and Briscoe (2014) (weighted cos), all computed with the same word embeddings as for the proposed operators. In Section 3 we argued for three progressively more accurate interpretations of Word2Vec vectors in the proposed framework, the log-odds interpretation (log-odds > ⃝), the negated duplicate interpretation (dup > ⃝), and the negated duplicate interpretation with unknown around zero (unk dup > ⃝). We also evaluate using the factorised calculation of entailment (log-odds ˜⇒, unk dup ˜⇒), and the backward-inference entailment operator (log-odds < ⃝), neither of which match the proposed interpre5We also tested the cosine measure, but results were very slightly worse than dot. tations. For the semi-supervised case, we train a linear vector-space mapping into a new vector space, in which we apply the operators (mapped operators). All these results are discussed in the next two subsections. 
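A small sketch (ours) of the two measures just described: "50% Acc" thresholds the scores so that (up to ties) half of the test pairs are predicted positive, and "Dir Acc" compares the score of each gold hyponym–hypernym pair in both directions, counting ties as chance.

import numpy as np

def fifty_percent_accuracy(scores, labels):
    # threshold at the median score so that roughly 50% of the pairs are
    # predicted positive, then score against the gold labels
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    predictions = scores > np.median(scores)
    return float(np.mean(predictions == labels))

def direction_accuracy(gold_direction_scores, reverse_direction_scores):
    # for each positive pair: 1 if the gold direction scores higher,
    # 0.5 for a tie (chance), 0 otherwise
    g = np.asarray(gold_direction_scores, dtype=float)
    r = np.asarray(reverse_direction_scores, dtype=float)
    return float(np.mean(np.where(g > r, 1.0, np.where(g == r, 0.5, 0.0))))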
5.2 Unsupervised Hyponymy Detection The first set of experiments evaluate the vectorspace operators in unsupervised models of hyponymy detection. The proposed models are compared to the dot product, because this is the standard vector-space operator and has been shown to capture semantic similarity very well. However, because the dot product is a symmetric operator, it always performs at chance for direction classification. Another vector-space operator which has received much attention recently is vector differences. This is used (with vector sum) to perform semantic transforms, such as “king - male + female = queen”, and has previously been used for modelling hyponymy (Vylomova et al., 2015; Weeds et al., 2014). For our purposes, we sum the pairwise differences to get a score which we use for hyponymy detection. For the unsupervised results in the upper box of table 3, the best unsupervised model of Weeds et al. (2014), and the operators dot, dif and weighted cos all perform similarly on accuracy, as does the log-odds factorised entailment calculation (logodds ˜⇒). The forward-inference entailment operator (log-odds < ⃝) performs above chance but not well, as expected given the backward-inferencebased interpretation of Word2Vec vectors. By definition, dot is at chance for direction classification, but the other models all perform better, indicating that all these operators are able to measure relative abstractness. As predicted, the > ⃝operator performs significantly better than all these results on accuracy, as well as on direction classification, even assuming the log-odds interpretation of Word2Vec vectors. When we move to the more accurate interpretation of Word2Vec vectors as specifying both original and negated features (dup > ⃝), we improve (non-significantly) on the log-odds interpretation. Finally, the third and most accurate interpretation, where values around zero can be unknown (unk dup > ⃝), achieves the best results in unsupervised hyponymy detection, as well as for direction classification. Changing to the factorised entailment operator (unk dup ˜⇒) is worse but also signifi2059 cantly better than the other accuracies. To allow a direct comparison to the model of Vilnis and McCallum (2015), we also evaluated the unsupervised models on the hyponymy data from (Baroni et al., 2012). Our best model achieved 81% average precision on this dataset, non-significantly better than the 80% achieved by the best model of Vilnis and McCallum (2015). 5.3 Semi-supervised Hyponymy Detection Since the unsupervised learning of word embeddings may reflect many context-word correlations which have nothing to do with hyponymy, we also consider a semi-supervised setting. Adding some supervision helps distinguish features that capture semantic properties from other features which are not relevant to hyponymy detection. But even with supervision, we still want the resulting model to be captured in a vector space, and not in a parametrised scoring function. Thus, we train mappings from the Word2Vec word vectors to new word vectors, and then apply the entailment operators in this new vector space to predict hyponymy. Because the words in the testing set are always disjoint from the words in the training set, this experiment measures how well the original unsupervised vector space captures features that generalise entailment across words, and not how well the mapping can learn about individual words. 
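As a point of reference before the details (next paragraph): with a single linear map and the summed-difference operator, the simplest of these mapped models (mapped dif) collapses to logistic regression on difference vectors trained with cross entropy. The sketch below shows only that collapsed baseline, using our variable names and plain batch gradient descent; it does not reproduce the proposed entailment operator, which is applied in the mapped space without collapsing the map.

import numpy as np

def train_mapped_dif(train_pairs, vectors, dim=300, lr=0.1, epochs=200, seed=0):
    # Logistic regression on (second word - first word) difference vectors,
    # trained with cross-entropy loss; with the summed-difference operator a
    # full linear map collapses to the single weight vector learned here.
    rng = np.random.RandomState(seed)
    w = 0.01 * rng.randn(dim)
    b = 0.0
    X = np.array([vectors[w2] - vectors[w1] for (w1, w2, _) in train_pairs])
    y = np.array([label for (_, _, label) in train_pairs], dtype=float)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(entailment)
        w -= lr * (X.T @ (p - y)) / len(y)       # cross-entropy gradients
        b -= lr * float(np.mean(p - y))
    return w, b

def mapped_dif_score(w, b, u, v):
    return float(np.dot(w, v - u) + b)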
Our objective is to learn a mapping to a new vector space in which an operator can be applied to predict hyponymy. We train linear mappings for the > ⃝operator (mapped > ⃝) and for vector differences (mapped dif), since these were the best performing proposed operator and baseline operator, respectively, in the unsupervised experiments. We do not use the duplicated interpretations because these transforms are subsumed by the ability to learn a linear mapping.6 Previous work on using vector differences for semi-supervised hyponymy detection has used a linear SVM (Vylomova et al., 2015; Weeds et al., 2014), which is mathematically equivalent to our vector-differences model, except that we use cross entropy loss and they use a large-margin loss and SVM training. The semi-supervised results in the bottom box of table 3 show a similar pattern to the unsupervised results.7 The > ⃝operator achieves the best 6Empirical results confirm that this is in practice the case, so we do not include these results in the table. 7It is not clear how to measure significance for crossvalidation results, so we do not attempt to do so. generalisation from training word vectors to testing word vectors. The mapped > ⃝model has the best accuracy, followed by the factorised entailment operator mapped ˜⇒and Weeds et al. (2014). Direction accuracies of all the proposed operators (mapped > ⃝, mapped ˜⇒, mapped < ⃝) reach into the 90’s. The dif operator performs particularly poorly in this mapped setting, perhaps because both the mapping and the operator are linear. These semisupervised results again support our distributionalsemantic interpretations of Word2Vec vectors and their associated entailment operator > ⃝. 6 Conclusion In this work, we propose a vector-space model which provides a formal foundation for a distributional semantics of entailment. We developed a mean-field approximation to probabilistic entailment between vectors which represent known versus unknown features. And we used this framework to derive vector operators for entailment and vector inference equations for entailment graphs. This framework allows us to reinterpret Word2Vec as approximating an entailment-based distributional semantic model of words in context, and show that more accurate interpretations result in more accurate unsupervised models of lexical entailment, achieving better accuracies than previous models. Semi-supervised evaluations confirm these results. A crucial distinction between the semisupervised models here and much previous work is that they learn a mapping into a vector space which represents entailment, rather than learning a parametrised entailment classifier. Within this new vector space, the entailment operators and inference equations apply, thereby generalising naturally from these lexical representations to the compositional semantics of multi-word expressions and sentences. Further work is needed to explore the full power of these abilities to extract information about entailment from both unlabelled text and labelled entailment data, encode it all in a single vector space, and efficiently perform complex inferences about vectors and entailments. This future work on compositional distributional semantics should further demonstrate the full power of the proposed framework for modelling entailment in a vector space. 2060 References Marco Baroni and Alessandro Lenci. 2011. How we blessed distributional semantic evaluation. 
In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics, GEMS ’11, pages 1–10. Association for Computational Linguistics. Marco Baroni, Raffaella Bernardi, Ngoc-Quynh Do, and Chung-chieh Shan. 2012. Entailment above the word level in distributional semantics. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 23–32, Avignon, France. Association for Computational Linguistics. Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391–407. Ruiji Fu, Jiang Guo, Bing Qin, Wanxiang Che, Haifeng Wang, and Ting Liu. 2015. Learning semantic hierarchies: A continuous vector space approach. Audio, Speech, and Language Processing, IEEE/ACM Transactions on, 23(3):461–471. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Deep sparse rectifier neural networks. In International Conference on Artificial Intelligence and Statistics, pages 315–323. James Henderson and Ivan Titov. 2010. Incremental sigmoid belief networks for grammar learning. Journal of Machine Learning Research, 11(Dec):3541–3570. Lili Kotlerman, Ido Dagan, Idan Szpektor, and Maayan Zhitomirsky-geffet. 2010. Directional distributional similarity for lexical inference. Natural Language Engineering, 16(4):359–389. Germn Kruszewski, Denis Paperno, and Marco Baroni. 2015. Deriving boolean structures from distributional vectors. Transactions of the Association for Computational Linguistics, 3:375–388. Alessandro Lenci and Giulia Benotto. 2012. Identifying hypernyms in distributional semantic spaces. In Proceedings of the First Joint Conference on Lexical and Computational Semantics, SemEval ’12, pages 75–79. Association for Computational Linguistics. Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2177–2185. Curran Associates, Inc. Omer Levy, Ido Dagan, and Jacob Goldberger. 2014. Focused entailment graphs for open ie propositions. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 87–97, Ann Arbor, Michigan. Association for Computational Linguistics. Omer Levy, Steffen Remus, Chris Biemann, and Ido Dagan. 2015. Do supervised distributional methods really learn lexical inference relations? In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 970–976, Denver, Colorado, May–June. Association for Computational Linguistics. Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of the 17th International Conference on Computational Linguistics Volume 2, COLING ’98, pages 768–774. Association for Computational Linguistics. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 3111–3119. Curran Associates, Inc. 
Silvia Necsulescu, Sara Mendes, David Jurgens, N´uria Bel, and Roberto Navigli. 2015. Reading between the lines: Overcoming data sparsity for accurate classification of lexical relationships. In Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics, pages 182–192, Denver, Colorado. Association for Computational Linguistics. Marek Rei and Ted Briscoe. 2014. Looking for hyponyms in vector space. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 68–77, Ann Arbor, Michigan. Association for Computational Linguistics. Stephen Roller, Katrin Erk, and Gemma Boleda. 2014. Inclusive yet selective: Supervised distributional hypernymy detection. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1025– 1036, Dublin, Ireland. Dublin City University and Association for Computational Linguistics. Enrico Santus, Alessandro Lenci, Qin Lu, and Sabine Schulte im Walde. 2014. Chasing hypernyms in vector spaces with entropy. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2014, April 26-30, 2014, Gothenburg, Sweden, pages 38–42. Hinrich Sch¨utze. 1993. Word space. In Advances in Neural Information Processing Systems 5, pages 895–902. Morgan Kaufmann. 2061 Peter D. Turney and Saif M. Mohammad. 2014. Experiments with three approaches to recognizing lexical entailment. CoRR, abs/1401.8269. Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. J. Artif. Int. Res., 37(1):141–188. Luke Vilnis and Andrew McCallum. 2015. Word representations via Gaussian embedding. In Proceedings of the International Conference on Learning Representations 2015 (ICLR). Ekaterina Vylomova, Laura Rimell, Trevor Cohn, and Timothy Baldwin. 2015. Take and took, gaggle and goose, book and read: Evaluating the utility of vector differences for lexical relation learning. In CoRR 2015. Julie Weeds and David Weir. 2003. A general framework for distributional similarity. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, EMNLP ’03, pages 81– 88. Julie Weeds, David Weir, and Diana McCarthy. 2004. Characterising measures of lexical distributional similarity. In Proceedings of the 20th International Conference on Computational Linguistics, COLING ’04, pages 1015–1021. Association for Computational Linguistics. Julie Weeds, Daoud Clarke, Jeremy Reffin, David Weir, and Bill Keller. 2014. Learning to distinguish hypernyms and co-hyponyms. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2249–2259, Dublin, Ireland. Dublin City University and Association for Computational Linguistics. Zheng Yu, Haixun Wang, Xuemin Lin, and Min Wang. 2015. Learning term embeddings for hypernymy identification. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015. AAAI Press / International Joint Conferences on Artificial Intelligence. 2062
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2063–2072, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Hidden Softmax Sequence Model for Dialogue Structure Analysis Zhiyang He1, Xien Liu2, Ping Lv2, Ji Wu1 1Department of Electronic Engineering, Tsinghua University, Beijing, China 2Tsinghua-iFlytek Joint Laboratory for Speech Technology, Beijing, China {zyhe ts, xeliu, luping ts, wuji ee}@mail.tsinghua.edu.cn Abstract We propose a new unsupervised learning model, hidden softmax sequence model (HSSM), based on Boltzmann machine for dialogue structure analysis. The model employs three types of units in the hidden layer to discovery dialogue latent structures: softmax units which represent latent states of utterances; binary units which represent latent topics specified by dialogues; and a binary unit that represents the global general topic shared across the whole dialogue corpus. In addition, the model contains extra connections between adjacent hidden softmax units to formulate the dependency between latent states. Two different kinds of real world dialogue corpora, Twitter-Post and AirTicketBooking, are utilized for extensive comparing experiments, and the results illustrate that the proposed model outperforms sate-ofthe-art popular approaches. 1 Introduction Dialogue structure analysis is an important and fundamental task in the natural language processing domain. The technology provides essential clues for solving real-world problems, such as producing dialogue summaries (Murray et al., 2006; Liu et al., 2010), controlling conversational agents (Wilks, 2006), and designing interactive dialogue systems (Young, 2006; Allen et al., 2007) etc. The study of modeling dialogues always assumes that for each dialogue there exists an unique latent structure (namely dialogue structure), which consists of a series of latent states.1 1Also called dialogue acts or speech acts in some past work. In this paper, for simplicity we will only use the term “latent state” to describe the sequential dialogue structure. Some past works mainly rely on supervised or semi-supervised learning, which always involve extensive human efforts to manually construct latent state inventory and to label training samples. Cohen et al. (2004) developed an inventory of latent states specific to E-mail in an office domain by inspecting a large corpus of e-mail. Jeong et al. (2009) employed semi-supervised learning to transfer latent states from labeled speech corpora to the Internet media and e-mail. Involving extensive human efforts constrains scaling the training sample size (which is essential to supervised learning) and application domains. In recent years, there has been some work on modeling dialogues with unsupervised learning methods which operate only on unlabeled observed data. Crook et al. (2009) employed Dirichlet process mixture clustering models to recognize latent states for each utterance in dialogues from a travel-planning domain, but they do not inspect dialogues’ sequential structure. Chotimongkol (2008) proposed a hidden Markov model (HMM) based dialogue analysis model to study structures of task-oriented conversations from indomain dialogue corpus. More recently, Ritter et al. (2010) extended the HMM based conversation model by introducing additional word sources for topic learning process. Zhai et al. 
(2014) assumed words in an utterance are emitted from topic models under HMM framework, and topics were shared across all latent states. All these dialogue structure analysis models are directed generative models, in which the HMMs, language models and topic models are combined together. In this study, we attempt to develop a Boltzmann machine based undirected generative model for dialogue structure analysis. As for the document modeling using undirected generative model, Hinton and Salakhutdinov (2009) proposed a general framework, replicated soft2063 max model (RSM), for topic modeling based on restricted Boltzmann machine (RBM). The model focuses on the document-level topic analysis, it cannot be applied for the structure analysis. We propose a hidden softmax sequence model (HSSM) for the dialogue modeling and structure analysis. HSSM is a two-layer special Boltzmann machine. The visible layer contains softmax units used to model words in a dialogue, which are the same with the visible layer in RSM (Hinton and Salakhutdinov, 2009). However, the hidden layer has completely different design. There are three kinds of hidden units: softmax hidden units, which is utilized for representing latent states of dialogues; binary units used for representing dialogue specific topics; and a special binary unit used for representing the general topic of the dialogue corpus. Moreover, unlike RSM whose hidden binary units are conditionally independent when visible units are given, HSSM has extra connections utilized to formulate the dependency between adjacent softmax units in the hidden layer. The connections are the latent states of two adjacent utterances. Therefore, HSSM can be considered as a special Boltzmann machine. The remainder of this paper is organized as follows. Section 2 introduces two real world dialogue corpora utilized in our experiments. Section 3 describes the proposed hidden softmax sequence model. Experimental results and discussions are presented in Section 4. Finally, Section 5 presents our conclusions. 2 Data Set Two different datasets are utilized to test the effectiveness of our proposed model: a corpus of post conversations drawn from Twitter (TwitterPost), and a corpus of task-oriented human-human dialogues in the airline ticket booking domain (AirTicketBooking). 2.1 Twitter-Post Conversations in Twitter are carried out by replying or responding to specific posts with short 140-character messages. The post length restriction makes Twitter keep more chat-like interactions than blog posts. The style of writing used on Twitter is widely varied, highly ungrammatical, and often with spelling errors. For example, the terms “be4”, “b4”, and “bef4” are always appeared in the Twitter posts to represent the word “before”. Here, we totally collected about 900, 000 raw Twitter dialogue sessions. The majority of conversation sessions are very short; and the frequencies of conversation session lengths follow a power law relationship as described in (Ritter et al., 2010). For simplicity , in the data preprocessing stage non-English sentences were dropped; and nonEnglish characters, punctuation marks, and some non-meaning tokens (such as “&”) were also filtered from dialogues. We filtered short Twitter dialogue sessions and randomly sampled 5,000 dialogues (the numbers of utterances in dialogues rang from 5 to 25) to build the Twitter-Post dataset. 
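As an illustration of this preprocessing, the sketch below shows one plausible way to clean and subsample such sessions. The regular expressions, and the ASCII-based filter standing in for the non-English check, are our simplifications rather than the authors' pipeline.

import random
import re

def clean_utterance(text):
    # Strip HTML-style entities, non-English (here: non-ASCII) characters,
    # punctuation marks and other non-meaning tokens such as "&".
    text = re.sub(r"&\w+;", " ", text)
    text = re.sub(r"[^\x00-\x7f]+", " ", text)
    text = re.sub(r"[^A-Za-z0-9'\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

def build_twitter_post(sessions, n_sample=5000, min_len=5, max_len=25, seed=0):
    # sessions: list of dialogues, each a list of raw utterance strings.
    kept = []
    for utterances in sessions:
        cleaned = [clean_utterance(u) for u in utterances]
        cleaned = [u for u in cleaned if u]   # drop turns left empty by cleaning
        if min_len <= len(cleaned) <= max_len:
            kept.append(cleaned)
    random.seed(seed)
    return random.sample(kept, min(n_sample, len(kept)))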
2.2 AirTicketBooking The AirTicketBooking corpus consists of a set of task-oriented human-human mandarin dialogues from an airline ticket booking service center. The manual transcripts of the speech dialogues are utilized in our experiments. In the dataset, there is always a relative clear structure underlying each dialogue. A dialogue often begins with a customer’s request about airline ticket issues. And the service agent always firstly checks the client’s personal information, such as name, phone number and credit card numberm, etc. Then the agent starts to deal with the client’s request. We totally collected 1,890 text-based dialogue sessions obtaining about 40,000 conversation utterances with length ranging from 15 to 100. 3 Dialogue Structure Analysis 3.1 Model Design Figure 1: Hidden layer that consists of different types of latent variables We design an undirected generative model based on Boltzmann machine. As we known, dialogue structure analysis models are always based on an underlying assumption: each utterance in the dialogues is generated from one latent state, which has a causal effect on the words. For instance, an utterance in AirTicketBooking dataset, “Tomorrow afternoon, about 3 o’clock” corre2064 sponds to the latent state “Time Information”. However, by carefully examining words in dialogues we can observe that not all words are generated from the latent states (Ritter et al., 2010; Zhai and Williams, 2014). There are some words relevant to a global or background topic shared across dialogues. For example, “about” and “that” belong to a global (general English) topic. Some other words in a dialogue may be strongly related to the dialogue specific topic. For example, “cake”, “toast” and “pizza” may appear in a Twitter dialogue with respect to a specific topic, “food”. From the perspective of generative model, we can also consider that words in a dialogue are generated by the mixture model of latent states, a global/background topic, and a dialogue specific topic. Therefore, there are three kinds of units in the hidden layer of our proposed model, which are displayed in Figure 1. hφ is a softmax unit, which indicates the latent state for a utterance. hψ and hξ represent the general topic, and the dialogue specific topic, respectively. For the visible layer, we utilize the softmax units to model words in each utterance, which is the same with the approach in RSM (Hinton and Salakhutdinov, 2009). In Section 3.2, We propose a basic model based on Boltzmann machine to formulate each word in utterances of dialogues. A dialogue can be abstractly viewed as a sequence of latent states in a certain reasonable order. Therefore, formulating the dependency between latent states is another import issue for dialogue structure analysis. In our model, we assume that each utterance’s latent state is dependent on its two neighbours. So there exist connections between each pair of adjacent hidden softmax units in the hidden layer. The details of the model will be presented in Section 3.3. 
3.2 HSM: Hidden Softmax Model Notation Explanation K dictionary size J number of latent states V observed visibles representing words in dialogues b bias term of V hφ latent variables representing latent states hψ latent variable representing corpus general topic hξ latent variables representing dialogue specific topics aφ bias terms of hφ aψ bias term of hψ aξ bias terms of hξ Wφ weights connecting hφ to V Wψ weights connecting hψ to V Wξ weights connecting hξ to V F, Fs, Fe weights between hidden softmax units Table 1: Definition of notations. Words of utterance 1 ... ... ... Words of utterance 2 Words of utterance 3 Utterance 1 Utterance 2 Utterance 3 Figure 2: Hidden Softmax Model. The bottom layer are softmax visible units and the top layer consists of three types of hidden units: softmax hidden units used for representing latent states, a binary stochastic hidden unit used for representing the dialogue specific topic, and a special binary stochastic hidden unit used for representing corpus general topic. Upper: The model for a dialogue session containing three utterances. Connection lines in the same color related to a latent state represent the same weight matrix. Lower: A different interpretation of the Hidden Softmax Model, in which Dr visible softmax units in the rth utterance are replaced by one single multinomial unit which is sampled Dr times. Table 1 summarizes important notations utilized in this paper. Before introducing the ultimate learning model for dialogue structure analysis, we firstly discuss a simplified version, Hidden Softmax Model (HSM), which is based on Boltzmann machine and assumes that the latent variables are independent given visible units. HSM has a twolayer architecture as shown in Figure 2. The energy of the state {V, hφ, hψ, hξ} is defined as follows: E(V, hφ, hψ, hξ) = ¯Eφ(V, hφ) + ¯Eψ(V, hψ) + ¯Eξ(V, hξ) + C(V), (1) where ¯Eφ(V, hφ), ¯Eψ(V, hψ) and ¯Eξ(V, hξ) are sub-energy functions related to hidden variables hφ, hψ, and hξ, respectively. C(V) is the shared visible units bias term. Suppose K is the dictionary size, Dr is the rth utterance size (i.e. the 2065 number of words in the rth utterance), and R is the number of utterances in the a dialogue. For each utterance vr(r = 1, .., R) in the dialogue session we have a hidden variable vector hφ r (with size of J ) as a latent state of the utterance, the sub-energy function ¯Eφ(V, hφ) is defined by ¯Eφ(V, hφ) = − R X r=1 J X j=1 Dr X i=1 K X k=1 hφ rjW φ rjikvrik − R X r=1 J X j=1 hφ rjaφ rj, (2) where vrik = 1 means the ith visible unit vri in the rth utterance takes on kth value, hφ rj = 1 means the rth softmax hidden units takes on jth value, and aφ rj is the corresponding bias. W φ rjik is a symmetric interaction term between visible unit vri that takes on kth value and hidden variable hφ r that takes on jth value. The sub-energy function ¯Eψ(V, hψ), related to the global general topic of the corpus, is defined by ¯Eψ(V, hψ) = − R X r=1 Dr X i=1 K X k=1 hψW ψ rikvrik −hψaψ. (3) The sub-energy function ¯Eξ(V, hξ) corresponds to the dialogue specific topic, and is defined by ¯Eξ(V, hξ) = − R X r=1 Dr X i=1 K X k=1 hξW ξ rikvrik −hξaξ. (4) W ψ rik in Eq. (3) and W ξ rik in Eq. (4) are two symmetric interaction terms between visible units and the corresponding hidden units, which are similar to W φ rjik in (2); aψ and aξ are the corresponding biases. C(V) is defined by C(V) = − R X r=1 Dr X i=1 K X k=1 vrikbrik, (5) where brik is the corresponding bias. 
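Because the formulas above suffered in extraction, the HSM energy and its components (Eqs. (1)-(5)) are restated here in consistent LaTeX; the notation follows Table 1 and no new terms are introduced.

E(\mathbf{V}, \mathbf{h}^{\phi}, h^{\psi}, h^{\xi}) = \bar{E}^{\phi}(\mathbf{V}, \mathbf{h}^{\phi}) + \bar{E}^{\psi}(\mathbf{V}, h^{\psi}) + \bar{E}^{\xi}(\mathbf{V}, h^{\xi}) + C(\mathbf{V})  \quad (1)

\bar{E}^{\phi}(\mathbf{V}, \mathbf{h}^{\phi}) = -\sum_{r=1}^{R}\sum_{j=1}^{J}\sum_{i=1}^{D_r}\sum_{k=1}^{K} h^{\phi}_{rj} W^{\phi}_{rjik} v_{rik} \; - \; \sum_{r=1}^{R}\sum_{j=1}^{J} h^{\phi}_{rj} a^{\phi}_{rj}  \quad (2)

\bar{E}^{\psi}(\mathbf{V}, h^{\psi}) = -\sum_{r=1}^{R}\sum_{i=1}^{D_r}\sum_{k=1}^{K} h^{\psi} W^{\psi}_{rik} v_{rik} \; - \; h^{\psi} a^{\psi}  \quad (3)

\bar{E}^{\xi}(\mathbf{V}, h^{\xi}) = -\sum_{r=1}^{R}\sum_{i=1}^{D_r}\sum_{k=1}^{K} h^{\xi} W^{\xi}_{rik} v_{rik} \; - \; h^{\xi} a^{\xi}  \quad (4)

C(\mathbf{V}) = -\sum_{r=1}^{R}\sum_{i=1}^{D_r}\sum_{k=1}^{K} v_{rik}\, b_{rik}  \quad (5)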
The probability that the model assigns to a visible binary matrix V = {v1, v2, ..., vD} (where D = PR r=1 Dr is the dialogue session size) is P(V) = 1 Z X hφ, hψ,hξ exp(−E(V, hφ, hψ, hξ)) Z = X V X hφ, hψ,hξ exp(−E(V, hφ, hψ, hξ), (6) where Z is known as the partition function or normalizing constant. In our proposed model, for each word in the document we use a softmax unit to represent it. For the sake of simplicity, assume that the order of words in an utterance is ignored. Therefore, all of these softmax units can share the same set of weights that connect them to hidden units, thus the visible bias term C(V) and the sub-energy functions ¯Eφ(V, hφ), ¯Eψ(V, hψ) and ¯Eξ(V, hξ) in Eq. (1) can be redefined as follows: ¯Eφ(V, hφ) = − R X r=1 J X j=1 K X k=1 hφ rjW φ jkˆvrk − R X r=1 (Dr J X j=1 hφ rjaφ j ) (7) ¯Eψ(V, hψ) = − K X k=1 hψW ψ k ˆvk −Dhψaψ (8) ¯Eξ(V, hξ) = − K X k=1 hξW ξ k ˆvk −Dhξaξ (9) C(V) = − K X k=1 ˆvkbk, (10) where ˆvrk = PDr i=1 vrik denotes the count for the kth word in the rth utterance of the dialogue, ˆvk = PR r=1 ˆvrk is the count for the kth word in whole dialogue session. Dr and D (D = PR r=1 Dr) are employed as the scaling parameters, which can make hidden units behave sensibly when dealing with dialogues of different lengths (Hinton and Salakhutdinov, 2009). The conditional distributions are given by softmax and logistic functions: P(hφ rj = 1|V) = exp(PK k=1 W φ jkˆvrk + Draφ j ) PJ j′=1 exp(PK k=1 W φ j′kˆvrk + Draφ j′) (11) P(hψ = 1|V) = σ( K X k=1 W ψ k ˆvk + Daψ) (12) P(hξ = 1|V) = σ( K X k=1 W ξ k ˆvk + Daξ) (13) P(vrik = 1|hφ, hψ, hξ) = exp(PJ j=1 hφ rjW φ jk + hψW ψ k + hξW ξ k + bk) PK k′=1 exp(PJ j=1 hφ rjW φ jk′ + hψW ψ k′ + hξW ξ k′ + bk′) , (14) where σ(x) = 1/(1 + exp(−x)) is the logistic function. 2066 3.3 HSSM: Hidden Softmax Sequence Model In this section, we consider the dependency between the adjacent latent states of utterances, and extend the HSM to hidden softmax sequence model (HSSM), which is displayed in Figure 3. We define the energy of the state {V, hφ, hψ, hξ} in HSSM as follows: E(V, hφ, hψ, hξ) = ¯Eφ(V, hφ) + ¯Eψ(V, hψ) + ¯Eξ(V, hξ) + C(V) + ¯EΦ(hφ, hφ), (15) where C(V), ¯Eφ(V, hφ), ¯Eψ(V, hψ) and ¯Eξ(V, hξ) are the same with that in HSM. The last term ¯EΦ(hφ, hφ) is utilized to formulate the dependency between latent variables hφ, which is defined as follows: ¯EΦ(hφ, hφ) = − J X q=1 hφ s F s q hφ 1q − J X q=1 hφ RqF e q hφ e − R−1 X r=1 J X j=1 J X q=1 hφ rjFjqhφ r+1,q, (16) where hφ s and hφ e are two constant scalar variables (hφ s ≡1, hφ e ≡1), which represent the virtual beginning state unit and ending state unit of a dialogue. F s is a vector with size J, and its elements measure the dependency between hφ s and the latent softmax units of the first utterance. F e also contains J elements, and in contrast to F s, F e represents the dependency measure between hφ e and the latent softmax units of the last utterance. F is a symmetric matrix for formulating dependency between each two adjacent hidden units pair (hφ r , hφ r+1), r = 1, ..., R −1. Utterance 1 Utterance 2 Utterance 3 Figure 3: Hidden softmax sequence model. A connection between each pair of adjacent hidden softmax units is added to formulate the dependency between the two corresponding latent states. 3.4 Parameter Learning Exact maximum likelihood learning in the proposed model is intractable. 
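The conditionals of Eqs. (11)-(13), by contrast, remain cheap to evaluate from the per-utterance word counts, which is what the approximate procedures discussed next rely on. A minimal numpy sketch is given below; the array names and shapes (W_phi of size J x K, W_psi and W_xi of length K) are ours, and it is meant only to make the scaling by D_r and D concrete.

import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hsm_posteriors(v_hat, W_phi, a_phi, W_psi, a_psi, W_xi, a_xi):
    # v_hat: R x K matrix of per-utterance word counts (the \hat{v}_{rk}).
    D_r = v_hat.sum(axis=1, keepdims=True)            # utterance lengths, scale Eq. (11)
    D = float(D_r.sum())                              # dialogue length, scales (12)-(13)
    p_state = softmax(v_hat @ W_phi.T + D_r * a_phi)  # R x J matrix, Eq. (11)
    v_all = v_hat.sum(axis=0)                         # whole-dialogue counts \hat{v}_k
    p_general = sigmoid(v_all @ W_psi + D * a_psi)    # scalar, Eq. (12)
    p_specific = sigmoid(v_all @ W_xi + D * a_xi)     # scalar, Eq. (13)
    return p_state, p_general, p_specific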
“Contrastive Divergence” (Hinton, 2002) can be used for HSM’s learning, however, it can not be utilized for HSSM, because the hidden-to-hidden interaction term, {F, F s, F e}, result in the intractability when obtaining exact samples from the conditional distribution P(hφ rj = 1|V), r = [1, R], j ∈[1, J]. We use the mean-field variational inference (Hinton and Zemel, 1994; Neal and Hinton, 1998; Jordan et al., 1999) and a stochastic approximation procedure (SAP) (Tieleman, 2008) to estimate HSSM’s parameters. The variational learning is utilized to get the data-dependent expectations, and SAP is utilized to estimate the model’s expectation. The log-likelihood of the HSSM has the following variational lower bound: log P(V; θ) ≥ X h Q(h) log P(V, h; θ) + H(Q). (17) Q(h) can be any distribution of h in theory. θ = {W φ, W ψ, W ξ, F, F s, F e} (the bias terms are omitted for clarity) are the model parameters. h = {hφ, hψ, hξ} represent all the hidden variables. H(·) is the entropy functional. In variational learning, we try to find parameters that minimize the Kullback-Leibler divergences between Q(h) and the true posterior P(h|V; θ). A naive mean-field approach can be chosen to obtain a fully factorized distribution for Q(h): Q(h) = " R Y r=1 q(hφ) # q(hψ) q(hξ), (18) where q(hφ rj = 1) = µφ rj, q(hψ = 1) = µψ, q(hξ = 1) = µξ. µ = {µφ, µψ, µξ} are the parameters of Q(h). Then the lower bound on the log-probability log P(V; θ) has the form: log P(V; θ) ≥−¯Eφ(V, µφ) −¯Eψ(V, µψ) −¯Eξ(V, µξ) −C(V) −¯EΦ(µφ, µφ) −log Z, (19) where ¯Eφ(V, µφ), ¯Eψ(V, µψ), ¯Eξ(V, µξ), and ¯EΦ(µφ, µφ) have the same forms, by replacing µ with h, as Eqs. (7), (8), (9), and (16), respectively. We can maximize this lower bound with respect to parameters µ for fixed θ, and obtain the meanfield fixed-point equations: µφ rj = exp(PK k=1 W φ jkˆvrk + Draφ j + Dj prev + Dj next −1) PJ j′=1 exp(PK k=1 W φ j′kˆvrk + Draφ j′ + Dj′ prev + Dj′ next −1) , (20) 2067 µψ = σ( K X k=1 W ψ k ˆvk + Daψ) (21) µξ = σ( K X k=1 W ξ k ˆvk + Daξ), (22) where Dj prev and Dj next are two terms relevant to the derivative of the RHS of Eq. (19) with respect to µφ rj, defined by Dj prev =  F s j , r = 1 PJ q=1 µφ r−1,qFqj, r > 1 Dj next = PJ q=1 Fjqµφ r+1,q, r < R. F e j , r = R The updating of µ can be carried out iteratively until convergence. Then, (V, µ) can be considered as a special “state” of HSSM, thus the SAP can be applied to update the model’s parameters, θ, for fixed (V, µ). 4 Experiments and Discussions It’s not easy to evaluate the performance of a dialogue structure analysis model. In this study, we examined our model via qualitative visualization and quantitative analysis as done in (Ritter et al., 2010; Zhai and Williams, 2014). We implemented five conventional models to conduct an extensive comparing study on the two corpora: Twitter-Post and AirTicketBooking. Conventional models include: LMHMM (Chotimongkol, 2008), LMHMMS (Ritter et al., 2010), TMHMM, TMHMMS, and TMHMMSS (Zhai and Williams, 2014). In our experiments, for each corpus we randomly select 80% dialogues for training, and use the rest 20% for testing. We select three different number (10, 20 and 30) of latent states to evaluate all the models. In TMHMM, TMHMMS and TMHMMSS, the number of “topics” in the latent states and a dialogue is a hyper-parameter. We conducted a series of experiments with varying numbers of topics, and the results illustrated that 20 is the best choice on the two corpora. 
So, for all the following experimental results of TMHMM, TMHMMS and TMHMMSS, the corresponding topic configurations are set to 20. The number of estimation iterations for all the models on training sets is set to 10,000; and on held-out test sets, the numver of iterations for inference is set to 1000. In order to speed-up the learning of HSSM, datasets are divided into minibatches, each has 15 dialogues. In addition, the learning rate and momentum are set to 0.1 and 0.9, respectively. 4.1 Qualitative Evaluation Dialogues in Twitter-Post always begin with three latent states: broadcasting what they (Twitter users) are doing now (“Status”), broadcasting an interesting link or quote to their followers (“Reference Broadcast”), or asking a question to their followers (“Question to Followers”).2 We find that structures discoverd by HSSM and LMHMMS with 10 latent states are most reasonable to interpret. For example, after the initiating state (“Status”, “Reference Broadcast”, or “Question to Followers”), it was often followed a “Reaction” to “Reference Broadcast” (or “Status”), or a “Comment” to “Status”, or a “Question” to “Status” ( “Reference Broadcast”, or “Question to Followers”’) etc. Compared with LMHMMS, besides obtaining similar latent states, HSSM exhibits powerful ability in learning sequential dependency relationship between latent states. Take the following simple Twitter dialogue session as an example: : rt i like katy perry lt lt we see tht lol : lol gd morning : lol gd morning how u : i’m gr8 n urself : i’m good gettin ready to head out : oh ok well ur day n up its cold out here ... LMHMMS labelled the second utterance (“lol gd morning ”) and the third utterance (“lol good morning how u ” ) into the same latent state, while HSSM treats them as two different latent states (Though they both have almost the same words). The result is reasonable: the first “gd morning” is a greeting, while the second “gd morning” is a response. For AirTicketBooking dataset, the statetransition diagram generated with our model under the setting of 10 latent states is presented in Figure 4. And several utterance examples corresponding to the latent staes are also showed in Table 2. In general, conversations begin with sever agent’s short greeting, such as “Hi, very glad to be of service.”, and then transit to checking the passenger’s identity information or 2For simplicity and readability in consistent, we follow the same latent state names used in (Ritter et al., 2010) 2068 inquiring the passenger’s air ticket demand; or it’s directly interrupted by the passenger with booking demand which is always associated with place information. After that, conversations are carried out with other booking related issues, such as checking ticket price or flight time. The flowchart produced by HSSM can be reasonably interpreted with knowledge of air ticket booking domain, and it most consistent with the agent’s real workflow of the Ticket Booking Corporation3 compared with other models. We notice that conventional models can not clearly distinguish some relevant latent states from each other. For example, these baseline models always confound the latent state “Price Info” with the latent state “Reservation”, due to certain words assigned large weights in the two states, such as “打折(discount)”, and “信用卡(credit card)” etc. 
Furthermore, Only HSSM and LMHMMS have dialogue specific topics, and experimental results illustrate that HSSM can learn much better than LMHMMS which always mis-recognize corpus general words as belonging to dialogue specific topic (An example is presented in Table 3). Please Waiting Confirmation Inquiry Start Place Info Price Info Time Info Passenger Info End Reservation 0.27 0.29 0.10 0.26 0.21 0.36 0.19 0.17 0.26 0.18 0.31 0.25 0.12 0.11 0.13 Figure 4: Transitions between latent states on AirTicketBooking generated by our HSSM model under the setting of J = 10 latent states. Transition probability cut-off is 0.10. 4.2 Quantitative Evaluation For quantitative evaluation, we examine HSSM and traditional models with log likelihood and an ordering task on the held-out test set of TwitterPost and AirTicketBooking. 3We hide the corporation’s real name for privacy reasons. Latent States Utterance Examples Utterance Examples (Chinese) (English Translation) Start 您好,很高兴为您服务。 Hello, very glad to be of service. Inquiry 您想预定机票吗? Do you want to make a flight reservation? Place Info 我想预定一张北京到上海的 机票。 I want to book an air ticket from Beijing to Shanghai. Time Info 明天上午10点左右。 Tomorrow morning, about 10 o’clock. Price Info 成人机票1300元一张。 The adult ticket is 1300 Yuan. Passenger Info 姓名李东,身份证 号12345。 My name is Li Dong, and my ID number is 12345. Confirmation 好的,可以。 Yes, that’s OK. Please Waiting 请稍等,我帮您查询。 Please wait a moment, I’ll check for you. Reservation 请预定一张,我想用信用卡 支付。 Please make a reservation, I want to use a credit card to pay. End 欢迎下次来电,再见。 Welcome to call next time. Bye. Table 2: Utterance examples of latent states discovered by our model. Model Top Words HSSM 十点, 李东, 福州, 厦门, 上航, ... ten o’clock, Dong Li (name), Fuzhou (city), Xiamen (city), Shanghai Airlines, ... LMHMMS 有, 十点, 额, 李东, 预留, ... have, ten o’clock, er, Dong Li (name), reserve, ... Table 3: One example of dialogue specific topic learned on the same dialogue session with HSSM and LMHMMS, respectively. Log Likelihood The likelihood metric measures the probability of generating the test set using a specified model. The likelihood of LMHMM and TMHMM can be directed computed with the forward algorithm. However, since likelihoods of LMHMMS, TMHMMS and TMHMMSS are intractable to compute due to the local dependencies with respect to certain latent variables, Chibstyle estimating algorithms (Wallach et al., 2009) are employed in our experiments. For HSSM, the partition function is a key problem for calculating the likelihood, and it can be effectively estimated by Annealed Importance Sampling (AIS) (Neal, 2001; Salakhutdinov and Murray, 2008). Figure 5 presents the likelihood of different models on the two held-out datasets. We can observe that HSSM achieves better performance on likelihood than all the other models under different number of latent states. On Twitter-Post dataset our model slightly surpasses LMHMMS, and it performs much better than all traditional models on AirTicketBooking dataset. Ordering Test Following previous work (Barzilay and Lee, 2004; Ritter et al., 2010; Zhai and Williams, 2014), we utilize Kendall’s τ (Kendall, 1938) as evaluation metric, which measures the similarity between any two sequential data and ranges from −1 (indicating a reverse ordering) to +1 (indicating an identical 2069 J = 10 J = 20 J = 30 Figure 5: Negative log likelihood (smaller is better) on held-out datasets of Twitter-Post (upper) and AirTicketBooking (lower) under different number of latent states J. ordering). 
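For reference, when there are no tied positions this statistic can be computed against the original order 0, 1, ..., n-1 with a few lines; the helper below is ours, and scipy.stats.kendalltau would serve equally well. How it is used in the ordering test is described next.

from itertools import combinations

def kendall_tau(permutation):
    # permutation[i] is the position utterance i receives in the candidate
    # ordering; tau = (concordant - discordant) / C(n, 2) relative to the
    # original order 0, 1, ..., n-1.
    n = len(permutation)
    concordant = sum(1 for i, j in combinations(range(n), 2)
                     if permutation[i] < permutation[j])
    discordant = n * (n - 1) // 2 - concordant
    return (concordant - discordant) / (n * (n - 1) / 2.0)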
This is the basic idea: for each dialogue session with n utterances in the test set, we firstly generate all n! permutations of the utterances; then evaluate the probability of each permutation, and measure the similarity, i.e. Kendall’s τ, between the max-probability permutation and the original order; finally, we average τ values for all dialogue sessions as the model’s ordering test score. As pointed out by Zhai et al. (2014), it’s however infeasible to enumerate all possible permutations of dialogue sessions when the number of utterances in large. In experiments, we employ the incrementally adding permutation strategy, as used by Zhai et al. (2014), to build up the permutation set. The results of ordering test are presented in Figure 6. We can see that HSSM exhibits better performance than all the other models. For the conventional models, it is interesting that LMHMMS, TMHMMS and TMHMMSS achieve worse performances than LMHMM and TMHMM. This is likely because the latter two models allow words to be emitted only from latent states (Zhai and Williams, 2014), while the former three models allow words to be generated from additional sources. This also implies HSSM’s effectiveness of modeling distinct information uderlying dialogues. 4.3 Discussion The expermental results illustrate the effectiveness of the proposed undirected dialogue structure analysis model based on Boltzmann machine. The conducted experiments also demonstrate that undirected models have three main merits for text modeling, which are also demonstrated by Hinton and Salakhutdinov (2009), Srivastava et al. (2013) through other tasks. Boltzmann machine based undirected models are able to generalize much better than traditional directed generative model; and model learning is more stable. Besides, an undirected model is more suitable for describing complex dependencies between different kinds of variables. We also notice that all the models can, to some degree, capture the sequential structure in the dialogues, however, each model has a special characteristic which makes itself fit a certain kind of dataset better. HSSM and LMHMMS are more appropriate for modeling the open domain dataset, such as Twitter-Post used in this paper, and the task-oriented domain dataset with one relatively concentrated topic in the corpus and special information for each dialogue, such as AirTicketBooking. As we known, dialogue specific topics in HSSM or LMHMMS are used and trained only within corresponding dialogues. They are crucial for absorbing certain words that have important meaning but do not belongs to latent states. In addition, for differet dataset, dialogue specific topics may have different effect to the modeling. Take the Twitter-Post for an example, dialogue specific topics formulate actual themes of dialogues, such as a pop song, a sport news. As for the AirTicketBooking dataset, dialogue specific 2070 J = 10 0 20 40 60 80 100 0 0.2 0.4 0.6 0.8 1.0 average kendall’s τ J = 20 0 20 40 60 80 100 0 0.2 0.4 0.6 0.8 1.0 J = 30 0 20 40 60 80 100 0 0.2 0.4 0.6 0.8 1.0 0 20 40 60 80 100 0 0.2 0.4 0.6 0.8 1.0 average kendall’s τ 0 20 40 60 80 100 0 0.2 0.4 0.6 0.8 1.0 # of random permutations 0 20 40 60 80 100 0 0.2 0.4 0.6 0.8 1.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 HSSM LMHMM LMHMMS TMHMM TMHMMS TMHMMSS Figure 6: Average Kendall’s τ measure (larger is better) on held-out datasets of Twitter-Post (upper) and AirTicketBooking (lower) under different number of latent states J. 
topics always represent some special information, such as the personal information, including name, phone number, birthday, etc. In summary, each dialogue specific topic reflects special information which is different from other dialogues. The three models, TMHMM, TMHMMS and TMHMMSS, which do not include dialogue specific topics, should be utilized on the task-oriented domain dataset, in which each dialogue has little special or personnal information. For example, the three models perform well on the the BusTime and TechSupport datasets (Zhai and Williams, 2014), in which name entities are all replaced by different semantic types (e.g. phone numbers are replaced by “<phone>”, E-mail addresses are replaced by “<email>”, etc). 5 Conclusions We develope an undirected generative model, HSSM, for dialogue structure analysis, and examine the effectiveness of our model on two different datasets, Twitter posts occurred in open-domain and task-oriented dialogues from airline ticket booking domain. Qualitative evaluations and quantitative experimental results demonstrate that the proposed model achieves better performance than state-of-the-art approaches. Compared with traditional models, the proposed HSSM has more powerful ability of discovering structures of latent states and modeling different word sources, including latent states, dialogue specific topics and global general topic. According to recent study (Srivastava et al., 2013), a deep network model exhibits much benefits for latent variable learning. A dialogue may actually have a hierarchy structure of latent states, therefore the proposed model can be extended to a deep model to capture more complex structures. Another possible way to extend the model is to consider modeling long distance dependency between latent states. This may further improve the model’s performance. Acknowledgments We are grateful to anonymous reviewers for their helpful comments and suggestions. We would like to thank Alan Ritter for kindly providing the raw Twitter dataset. This work is supported in part by the National Natural Science Funds of China under Grant 61170197 and 61571266, and in part by the Electronic Information Industry Development Fund under project “The R&D and Industrialization on Information Retrieval System Based on ManMachine Interaction with Natural Speech”. 2071 References James Allen, Nathanael Chambers, George Ferguson, Lucian Galescu, Hyuckchul Jung, Mary Swift, and William Taysom. 2007. Plow: A collaborative task learning agent. In Proceedings of the National Conference on Artificial Intelligence, volume 22, page 1514. Menlo Park, CA; Cambridge, MA; London; AAAI Press; MIT Press; 1999. Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models with applications to generation and summarization. In proceedings of HLT-NAACL 2004, pages 113–120. Ananlada Chotimongkol. 2008. Learning the structure of task-oriented conversations from the corpus of indomain dialogs. Ph.D. thesis, SRI International. William W Cohen, Vitor R Carvalho, and Tom M Mitchell. 2004. Learning to classify email into“speech acts”. In EMNLP, pages 309–316. Nigel Crook, Ramon Granell, and Stephen Pulman. 2009. Unsupervised classification of dialogue acts using a dirichlet process mixture model. In Proceedings of the SIGDIAL 2009 Conference: The 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 341–348. Association for Computational Linguistics. Geoffrey E Hinton and Ruslan R Salakhutdinov. 2009. 
Replicated softmax: an undirected topic model. In Advances in neural information processing systems, pages 1607–1614. Geoffrey E Hinton and Richard S Zemel. 1994. Autoencoders, minimum description length, and helmholtz free energy. Advances in neural information processing systems, pages 3–3. Geoffrey E Hinton. 2002. Training products of experts by minimizing contrastive divergence. Neural computation, 14(8):1771–1800. Minwoo Jeong, Chin-Yew Lin, and Gary Geunbae Lee. 2009. Semi-supervised speech act recognition in emails and forums. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 3-Volume 3, pages 1250–1259. Association for Computational Linguistics. Michael I Jordan, Zoubin Ghahramani, Tommi S Jaakkola, and Lawrence K Saul. 1999. An introduction to variational methods for graphical models. Machine learning, 37(2):183–233. Maurice G Kendall. 1938. A new measure of rank correlation. Biometrika, 30(1/2):81–93. Jingjing Liu, Stephanie Seneff, and Victor Zue. 2010. Dialogue-oriented review summary generation for spoken dialogue recommendation systems. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 64–72. Association for Computational Linguistics. Gabriel Murray, Steve Renals, Jean Carletta, and Johanna Moore. 2006. Incorporating speaker and discourse features into speech summarization. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 367–374. Association for Computational Linguistics. Radford M Neal and Geoffrey E Hinton. 1998. A view of the em algorithm that justifies incremental, sparse, and other variants. In Learning in graphical models, pages 355–368. Springer. Radford M Neal. 2001. Annealed importance sampling. Statistics and Computing, 11(2):125–139. Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised modeling of twitter conversations. Ruslan Salakhutdinov and Iain Murray. 2008. On the quantitative analysis of deep belief networks. In Proceedings of the 25th international conference on Machine learning, pages 872–879. ACM. Nitish Srivastava, Ruslan R Salakhutdinov, and Geoffrey E Hinton. 2013. Modeling documents with deep boltzmann machines. UAI. Tijmen Tieleman. 2008. Training restricted boltzmann machines using approximations to the likelihood gradient. In Proceedings of the 25th international conference on Machine learning, pages 1064– 1071. ACM. Hanna M Wallach, Iain Murray, Ruslan Salakhutdinov, and David Mimno. 2009. Evaluation methods for topic models. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1105–1112. ACM. Yorick Wilks. 2006. Artificial companions as a new kind of interface to the future internet. Steve J Young. 2006. Using pomdps for dialog management. In SLT, pages 8–13. Ke Zhai and Jason D Williams. 2014. Discovering latent structure in task-oriented dialogues. In ACL (1), pages 36–46. 2072
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2073–2083, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Summarizing Source Code using a Neural Attention Model Srinivasan Iyer Ioannis Konstas Alvin Cheung Luke Zettlemoyer Computer Science & Engineering University of Washington Seattle, WA 98195 {sviyer,ikonstas,akcheung,lsz}@cs.washington.edu Abstract High quality source code is often paired with high level summaries of the computation it performs, for example in code documentation or in descriptions posted in online forums. Such summaries are extremely useful for applications such as code search but are expensive to manually author, hence only done for a small fraction of all code that is produced. In this paper, we present the first completely datadriven approach for generating high level summaries of source code. Our model, CODE-NN , uses Long Short Term Memory (LSTM) networks with attention to produce sentences that describe C# code snippets and SQL queries. CODE-NN is trained on a new corpus that is automatically collected from StackOverflow, which we release. Experiments demonstrate strong performance on two tasks: (1) code summarization, where we establish the first end-to-end learning results and outperform strong baselines, and (2) code retrieval, where our learned model improves the state of the art on a recently introduced C# benchmark by a large margin. 1 Introduction Billions of lines of source code reside in online repositories (Dyer et al., 2013), and high quality code is often coupled with natural language (NL) in the form of instructions, comments, and documentation. Short summaries of the overall computation the code performs provide a particularly useful form of documentation for a range of applications, such as code search or tutorials. However, such summaries are expensive to manually author. 1. Source Code (C#): public int TextWidth(string text) { TextBlock t = new TextBlock (); t.Text = text; return (int)Math.Ceiling(t.ActualWidth ); } Descriptions: a. Get rendered width of string rounded up to the nearest integer b. Compute the actual textwidth inside a textblock 2. Source Code (C#): var input = "Hello"; var regEx = new Regex("World"); return !regEx.IsMatch(input ); Descriptions: a. Return if the input doesn’t contain a particular word in it b. Lookup a substring in a string using regex 3. Source Code (SQL): SELECT Max(marks) FROM stud_records WHERE marks < (SELECT Max(marks) FROM stud_records ); Descriptions: a. Get the second largest value of a column b. Retrieve the next max record in a table Figure 1: Code snippets in C# and SQL and their summaries in NL, from StackOverflow. Our goal is to automatically generate summaries from code snippets. As a result, this laborious process is only done for a small fraction of all code that is produced. In this paper, we present the first completely data-driven approach for generating short highlevel summaries of source code snippets in natural language. We focus on C#, a general-purpose imperative language, and SQL, a declarative language for querying databases. Figure 1 shows example code snippets with descriptions that summarize the overall function of the code, with the goal to generate high level descriptions, such as 2073 lookup a substring in a string. Generating such a summary is often challenging because the text can include complex, non-local aspects of the code (e.g., consider the phrase ‘second largest’ in Example 3 in Figure 1). 
In addition to being directly useful for interpreting uncommented code, high-quality generation models can also be used for code retrieval, and in turn, for natural language programming by applying nearest neighbor techniques to a large corpus of automatically summarized code. Natural language generation has traditionally been addressed as a pipeline of modules that decide ‘what to say’ (content selection) and ‘how to say it’ (realization) separately (Reiter and Dale, 2000; Wong and Mooney, 2007; Chen et al., 2010; Lu and Ng, 2011). Such approaches require supervision at each stage and do not scale well to large domains. We instead propose an end-to-end neural network called CODE-NN that jointly performs content selection using an attention mechanism, and surface realization using Long Short Term Memory (LSTM) networks. The system generates a summary one word at a time, guided by an attention mechanism over embeddings of the source code, and by context from previously generated words provided by a LSTM network (Hochreiter and Schmidhuber, 1997). The simplicity of the model allows it to be learned from the training data without the burden of feature engineering (Angeli et al., 2010) or the use of an expensive approximate decoding algorithm (Konstas and Lapata, 2013). Our model is trained on a new dataset of code snippets with short descriptions, created using data gathered from Stackoverflow,1 a popular programming help website. Since access is open and unrestricted, the content is inherently noisy (ungrammatical, non-parsable, lacking content), but as we will see, it still provides strong signal for learning. To reliably evaluate our model, we also collect a clean, human-annotated test set.2 We evaluate CODE-NN on two tasks: code summarization and code retrieval (Section 2). For summarization, we evaluate using automatic metrics such as METEOR and BLEU-4, together with a human study for naturalness and informativeness of the output. The results show that CODENN outperforms a number of strong baselines and, 1http://stackoverflow.com 2Data and code are available at https://github.com/ sriniiyer/codenn. to the best of our knowledge, CODE-NN is the first approach that learns to generate summaries of source code from easily gathered online data. We further use CODE-NN for code retrieval for programming related questions on a recent C# benchmark, and results show that CODE-NN improves the state of the art (Allamanis et al. (2015b)) for mean reciprocal rank (MRR) by a wide margin. 2 Tasks CODE-NN generates a NL summary of source code snippets (GEN task). We have also used CODE-NN on the inverse task to retrieve source code given a question in NL (RET task). Formally, let UC be the set of all code snippets and UN be the set of all summaries in NL. For a training corpus with J code snippet and summary pairs (cj, nj), 1 ≤j ≤J, cj ∈UC, nj ∈UN, we define the following two tasks: GEN For a given code snippet c ∈UC, the goal is to produce a NL sentence n∗∈UN that maximizes some scoring function s ∈(UC × UN → R): n∗= argmax n s(c, n) (1) RET We also use the scoring function s to retrieve the highest scoring code snippet c∗ j from our training corpus, given a NL question n ∈UN: c∗ j = argmax cj s(cj, n), 1 ≤j ≤J (2) In this work, s is computed using an LSTM neural attention model, to be described in Section 5. 3 Related Work Although we focus on generating high-level summaries of source code snippets, there has been work on producing code descriptions at other levels of abstraction. 
Movshovitz-Attias and Cohen (2013) study the task of predicting class-level comments by learning n-gram and topic models from open source Java projects and testing it using a character-saving metric on existing comments. Allamanis et al. (2015a) create models for suggesting method and class names by embedding them in a high dimensional continuous space. Sridhara et al. (2010) present a pipeline that generates summaries of Java methods by selecting relevant content and generating phrases using templates to describe them. There is also work on improving program comprehension (Haiduc et al., 2074 2010), identifying cross-cutting source code concerns (Rastkar et al., 2011), and summarizing software bug reports (Rastkar et al., 2010). To the best of our knowledge, we are the first to use learning techniques to construct completely new sentences from arbitrary code snippets. Source code summarization is also related to generation from formal meaning representations. Wong and Mooney (2007) present a system that learns to generate sentences from lambda calculus expressions by inverting a semantic parser. Mei et al. (2016), Konstas and Lapata (2013), and Angeli et al. (2010) create learning algorithms for text generation from database records, again assuming data that pairs sentences with formal meaning representations. In contrast, we present algorithms for learning from easily gathered web data. In the database community, Simitsis and Ioannidis (2009) recognize the need for SQL database systems to talk back to users. Koutrika et al. (2010) built an interactive system (LOGOS) that translates SQL queries to text using NL templates and database schemas. Similarly there has been work on translating SPARQL queries to natural language using rules to create dependency trees for each section of the query, followed by a transformation step to make the output more natural (Ngonga Ngomo et al., 2013). These approaches are not learning based, and require significant manual template-engineering efforts. We use recurrent neural networks (RNN) based on LSTMs and neural attention to jointly model source code and NL. Recently, RNN-based approaches have gained popularity for text generation and have been used in machine translation (Sutskever et al., 2011), image and video description (Karpathy and Li, 2015; Venugopalan et al., 2015; Devlin et al., 2015), sentence summarization (Rush et al., 2015), and Chinese poetry generation (Zhang and Lapata, 2014). Perhaps most closely related, Wen et al. (2015) generate text for spoken dialogue systems with a two-stage approach, comprising an LSTM decoder semantically conditioned on the logical representation of speech acts, and a reranker to generate the final output. In contrast, we design an end-to-end attention-based model for source code. For code retrieval, Allamanis et al. (2015b) proposed a system that uses Stackoverflow data and web search logs to create models for retrieving C# code snippets given NL questions and vice versa. They construct distributional representations of code structure and language and combine them using additive and multiplicative models to score (code, language) pairs, an approach that could work well for retrieval but cannot be used for generation. We learn a neural generation model without using search logs and show that it can also be used to score code for retrieval, with much higher accuracy. Synthesizing code from language is an alternative to code retrieval and has been studied in both the Systems and NLP research communities. 
Giordani and Moschitti (2012), Li and Jagadish (2014), and Gulwani and Marron (2014) synthesize source code from NL queries for database and spreadsheet applications. Similarly, Lei et al. (2013) interpret NL instructions to machine-executable code, and Kushman and Barzilay (2013) convert language to regular expressions. Unlike most synthesis methods, CODE-NN is domain agnostic, as we demonstrate its applications on both C# and SQL. 4 Dataset We collected data from StackOverflow (SO), a popular website for posting programming-related questions. Anonymized versions of all the posts can be freely downloaded.3 Each post can have multiple tags. Using the C# tag for C# and the sql, database and oracle tags for SQL, we were able to collect 934,464 and 977,623 posts respectively.4 Each post comprises a short title, a detailed question, and one or more responses, of which one can be marked as accepted. We found that the text in the question and responses is domain-specific and verbose, mixed with details that are irrelevant for our tasks. Also, code snippets in responses that were not accepted were frequently incorrect or tangential to the question asked. Thus, we extracted only the title from the post and use the code snippet from those accepted answers that contain exactly one code snippet (using <code> tags). We add the resulting (title, query) pairs to our corpus, resulting in a total of 145,841 pairs for C# and 41,340 pairs for SQL. Cleaning We train a semi-supervised classifier to filter titles like ‘Difficult C# if then logic’ or ‘How can I make this query easier to write?’ that bear no relation to the corresponding code snippet. 3http://archive.org/details/stackexchange 4The data was downloaded in Dec 2014. 2075 To do so, we annotate 100 titles as being clean or not clean for each language and use them to bootstrap the algorithm. We then use the remaining titles in our training set as an unsupervised signal, and obtain a classification accuracy of over 73% on a manually labeled test set for both languages. For the final dataset, we retain 66,015 C# (title, query) pairs and 32,337 SQL pairs that are classified as clean, and use 80% of these datasets for training, 10% for validation and 10% for testing. Parsing Given the informal nature of StackOverflow, the code snippets are approximate answers that are usually incomplete. For example, we observe that only 12% of the SQL queries parse without any syntactic errors (using zql5). We therefore aim to perform a best-effort parse of the code snippet, using modified versions of an ANTLR parser for C# (Parr, 2013) and pythonsqlparse (Albrecht, 2015) for SQL. We strip out all comments and to avoid being context specific, we replace literals with tokens denoting their types. In addition, for SQL, we replace table and column names with numbered placeholder tokens while preserving any dependencies in the query. For example, the SQL query in Figure 1 is represented as SELECT MAX(col0) FROM tab0 WHERE col0 < (SELECT MAX(col0) FROM tab0). Data Statistics The structural complexity and size of the code snippets in our dataset makes our tasks challenging. More than 40% of our C# corpus comprises snippets with three or more statements and functions, and 20% contains loops and conditionals. Also, over a third of our SQL queries contain one or more subqueries and multiple tables, columns and functions (like MIN, MAX, SUM). On average, our C# snippets are 38 tokens long and the queries in our corpus are 46 tokens long, while titles are 9-12 words long. 
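The anonymization step described above can be illustrated with a small sketch. The actual pipeline uses a modified ANTLR parser for C# and python-sqlparse for SQL; the regex-based function below (the name anonymize_sql and the keyword list are illustrative choices, not part of the released code) only approximates the idea of replacing literals with type tokens and renaming tables and columns to numbered placeholders while preserving repeated references.

import re

def anonymize_sql(query):
    # Replace string and numeric literals with type placeholder tokens.
    query = re.sub(r"'[^']*'", "str_literal", query)
    query = re.sub(r"\b\d+(\.\d+)?\b", "num_literal", query)

    keywords = {"select", "from", "where", "group", "by", "order", "max",
                "min", "sum", "and", "or", "limit", "as", "interval", "day"}
    tables, columns = {}, {}

    def rename(match):
        tok = match.group(0)
        low = tok.lower()
        if low in keywords or low.endswith("_literal"):
            return tok
        before = match.string[:match.start()].rstrip().lower()
        # Crude heuristic: identifiers right after FROM/JOIN are tables, otherwise columns.
        if before.endswith(("from", "join")):
            mapping, prefix = tables, "tab"
        else:
            mapping, prefix = columns, "col"
        if low not in mapping:
            mapping[low] = "%s%d" % (prefix, len(mapping))
        return mapping[low]

    return re.sub(r"\b[A-Za-z_][A-Za-z_0-9]*\b", rename, query)

print(anonymize_sql(
    "SELECT Max(marks) FROM stud_records WHERE marks < "
    "(SELECT Max(marks) FROM stud_records)"))
# -> SELECT Max(col0) FROM tab0 WHERE col0 < (SELECT Max(col0) FROM tab0)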
Table 2 shows the complete data statistics. Human Annotation For the GEN task, we use n-gram based metrics (see Section 6.1.2) of the summary generated by our model with respect to the actual title in our corpus. Titles can be short, and a given code snippet can be described in many different ways with little overlapping content between them. For example, the descriptions for the second code snippet in Figure 1 share very few words with each other. To address these limita5http://zql.sourceforge.net C# # Statements # Functions ≥3 23,611 (44.7%) ≥3 26,541 (51.0%) ≥4 17,822 (33.7%) ≥4 20,221 (38.2%) # Loops # Conditionals ≥1 10,676 (20.0%) ≥1 11,819 (22.3%) SQL # Subqueries # Tables ≥1 11,418 (35%) ≥3 14,695 (44%) ≥2 3,625 (11%) ≥4 10,377 (31%) # Columns # Functions ≥5 12,366 (37%) ≥3 6,290 (19%) ≥6 9,050 (27%) ≥4 3,973 (12%) Table 1: Statistics for code snippets in our dataset. C# Avg. code length 38 tokens # tokens 91,156 Avg. title length 12 words # words 24,857 SQL Avg. query length 46 tokens # tokens 1,287 Avg. title length 9 words # words 10,086 Table 2: Average code and title lengths together with vocabulary sizes for C# and SQL after postprocessing. tions, we extend our test set by asking human annotators to provide two additional titles for 200 snippets chosen at random from the test set, making a total of three reference titles for each code snippet. To collect this data, annotators were shown only the code snippets and were asked to write a short summary after looking at a few example summaries. They were also asked to “think of a question that they could ask on a programming help website, to get the code snippet as a response.” This encouraged them to briefly describe the key feature that the code is trying to demonstrate. We use half of this test set for model tuning (DEV, see Section 5) and the rest for evaluation (EVAL). 5 The CODE-NN Model Description We present an end-to-end generation system that performs content selection and surface realization jointly. Our approach uses an attention-based neural network to model the conditional distribution of a NL summary n given a code snippet c. Specifically, we use an LSTM model that is guided by attention on the source code snippet to generate a summary one word at a time, as shown in Figure 2.6 Formally, we represent a NL summary n = n1, . . . , nl as a sequence of 1-hot vectors 6We experimented with other sequence (Sutskever et al., 2014) and tree based architectures (Tai et al., 2015) as well. None of these models significantly improved performance, however, this is an important area for future work. 2076 LSTM LSTM LSTM . . n1 E E n1 nl−1 ∝ ∅ ∝ A + A + ∝ A + END h1 h2 hl t1 t2 tl F ∝ α ⊙ hi ti ⊙ c = c1, c2, ..., ck n2 F c c c h1; m1 h2; m2 hl−1; ml−1 Figure 2: Generation of a title n = n1, . . . , END given code snippet c1, ..., ck. The attention cell computes a distributional representation ti of the code snippet based on the current LSTM hidden state hi. A combination of ti and hi is used to generate the next word, ni, which feeds back into the next LSTM cell. This is repeated until a fixed number of words or END is generated. ∝blocks denote softmax operations. n1, . . . , nl ∈{0, 1}|N|, where N is the vocabulary of the summaries. Our model computes the probability of n (scoring function s in Eq. 1) as a product of the conditional next-word probabilities s(c, n) = lY i=1 p(ni|n1, . . . , ni−1) with, p(ni|n1, . . . 
, n_{i-1}) ∝ W tanh(W_1 h_i + W_2 t_i), where W ∈ R^{|N|×H} and W_1, W_2 ∈ R^{H×H}, H being the embedding dimensionality of the summaries. t_i is the contribution from the attention model on the source code (see below). h_i represents the hidden state of the LSTM cell at the current time step and is computed based on the previously generated word, the previous LSTM cell state m_{i-1} and the previous LSTM hidden state h_{i-1} as m_i; h_i = f(n_{i-1} E, m_{i-1}, h_{i-1}; θ), where E ∈ R^{|N|×H} is a word embedding matrix for the summaries. We compute f using the LSTM cell architecture used by Zaremba et al. (2014).
Attention: The generation of each word is guided by a global attention model (Luong et al., 2015), which computes a weighted sum of the embeddings of the code snippet tokens based on the current LSTM state (see right part in Figure 2). Formally, we represent c as a set of 1-hot vectors c_1, ..., c_k ∈ {0, 1}^{|C|} for each source code token; C is the vocabulary of all tokens in our code snippets. Our attention model computes t_i = Σ_{j=1}^{k} α_{i,j} · c_j F, where F ∈ R^{|C|×H} is a token embedding matrix and each α_{i,j} is proportional to the dot product between the current internal LSTM hidden state h_i and the corresponding token embedding c_j: α_{i,j} = exp(h_i^T c_j F) / Σ_{j'=1}^{k} exp(h_i^T c_{j'} F).
Training: We perform supervised end-to-end training using backpropagation (Werbos, 1990) to learn the parameters of the embedding matrices F and E, transformation matrices W, W_1 and W_2, and parameters θ of the LSTM cell that computes f. We use multiple epochs of minibatch stochastic gradient descent and update all parameters to minimize the negative log likelihood (NLL) of our training set. To prevent over-fitting we make use of dropout layers (Srivastava et al., 2014) at the summary embeddings and the output softmax layer. Using pre-trained embeddings (Mikolov et al., 2013) for the summary embedding matrix or adding additional LSTM layers did not improve performance for the GEN task. Since the NLL training objective does not directly optimize for our evaluation metric (METEOR), we compute METEOR (see Section 6.1.2) on a small development set (DEV) after every epoch and save the intermediate model that gives the maximum score as the final model.
Decoding: Given a trained model and an input code snippet c, finding the optimal title entails generating the title n* that maximizes s(c, n) (see Eq. 1). We approximate n* by performing beam search over the space of all possible summaries using the model output.
Implementation Details: We add special START and END tokens to our training sequences and replace all tokens and output words occurring with a frequency of less than 3 with an UNK token, making |C| = 31,667 and |N| = 7,470 for C# and |C| = 747 and |N| = 2,506 for SQL. Our hyper-parameters are set based on performance on the validation set. We use a minibatch size of 100 and set the dimensionality of the LSTM hidden states, token embeddings, and summary embeddings (H) to 400. We initialize all model parameters uniformly between −0.35 and 0.35. We start with a learning rate of 0.5 and decay it by a factor of 0.8 after 60 epochs if accuracy on the validation set goes down, and terminate training when the learning rate goes below 0.001. We cap the parameter gradients to 5 and use a dropout rate of 0.5. We use the Torch framework to train our models on GPUs. Training runs for about 80 epochs and takes approximately 7 hours.
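As a rough illustration of the decoder just described, the following PyTorch sketch wires together the summary LSTM, the global attention over code-token embeddings, and the W tanh(W_1 h_i + W_2 t_i) output layer. It is a simplified re-implementation for exposition (the released system is written in Torch/Lua), and it omits dropout, gradient clipping, batching, and beam-search decoding.

import torch
import torch.nn as nn
import torch.nn.functional as F_

class CodeNNSketch(nn.Module):
    """Rough sketch of an LSTM decoder with global attention over code tokens."""
    def __init__(self, code_vocab, summary_vocab, hidden=400):
        super().__init__()
        self.F = nn.Embedding(code_vocab, hidden)     # code-token embeddings
        self.E = nn.Embedding(summary_vocab, hidden)  # summary-word embeddings
        self.cell = nn.LSTMCell(hidden, hidden)
        self.W1 = nn.Linear(hidden, hidden, bias=False)
        self.W2 = nn.Linear(hidden, hidden, bias=False)
        self.W = nn.Linear(hidden, summary_vocab, bias=False)

    def forward(self, code_ids, summary_in):
        # code_ids: (k,) token ids of one snippet.
        # summary_in: (l,) word ids starting with START; the model is trained to
        # predict summary_in shifted by one position (ending with END).
        code_emb = self.F(code_ids)                       # (k, H)
        h = code_emb.new_zeros(1, code_emb.size(1))       # hidden state h_i
        m = code_emb.new_zeros(1, code_emb.size(1))       # cell (memory) state m_i
        logits = []
        for i in range(summary_in.size(0)):
            h, m = self.cell(self.E(summary_in[i:i + 1]), (h, m))
            # Attention: softmax over dot products of h with each code-token embedding.
            alpha = F_.softmax(code_emb @ h.squeeze(0), dim=0)            # (k,)
            t = (alpha.unsqueeze(1) * code_emb).sum(dim=0, keepdim=True)  # (1, H)
            logits.append(self.W(torch.tanh(self.W1(h) + self.W2(t))))
        # Returning logits and training with cross-entropy is equivalent to the
        # softmax/NLL objective in the text.
        return torch.cat(logits)  # (l, |N|)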
We compute METEOR score at every epoch on the development set (DEV) to choose the best final model, with the best results obtained between 60 and 70 epochs. For decoding, we set the beam size to 10, and the maximum summary length to 20 words. 6 Experimental Setup 6.1 GEN Task 6.1.1 Baselines For the GEN task, we compare CODE-NN with a number of competitive systems, none of which had been previously applied to generate text from source code, and hence we adapt them slightly for this task, as explained below. IR is an information retrieval baseline that outputs the title associated with the code cj in the training set that is closest to the input code c in terms of token Levenshtein distance. In this case s from Eq.1 becomes, s(c, nj) = −1 × lev(cj, c), 1 ≤j ≤J MOSES (Koehn et al., 2007) is a popular phrase-based machine translation system. We perform generation by treating the tokenized code snippet as the source language, and the title as the target. We train a 3-gram language model using KenLM (Heafield, 2011) to use with MOSES, and perform MIRA-based tuning (Cherry and Foster, 2012) of hyper-parameters using DEV. SUM-NN is the neural attention-based abstractive summarization model of Rush et al. (2015). 7http://torch.ch It uses an encoder-decoder architecture with an attention mechanism based on a fixed context window of previously generated words. The decoder is a feed-forward neural language model that generates the next word based on previous words in a context window of size k. In contrast, we decode using an LSTM network that can model long range dependencies and our attention weights are tied to the LSTM hidden states. We set the embedding and hidden state dimensions and context window size by tuning on our validation set. We found this model to generate overly short titles like ‘sql server 2008’ when a length restriction was not imposed on the output text. Therefore, we fix the output length to be the average title length in the training set while decoding. 6.1.2 Evaluation Metrics We evaluate the GEN task using automatic metrics, and also perform a human study. Automatic Evaluation We report METEOR (Banerjee and Lavie, 2005) and sentence level BLEU-4 (Papineni et al., 2002) scores. METEOR is recall-oriented and measures how well our model captures content from the references in our output. BLEU-4 measures the average n-gram precision on a set of reference sentences, with a penalty for overly short sentences. Since the generated summaries are short and there are multiple alternate summaries for a given code snippet, higher order n-grams may not overlap. We remedy this problem by using +1 smoothing (Lin and Och, 2004). We compute these metrics on the tuning set DEV and the held-out evaluation set EVAL. Human Evaluation Since automatic metrics do not always agree with the actual quality of the results (Stent et al., 2005), we perform human evaluation studies to measure the output of our system and baselines across two modalities, namely naturalness and informativeness. For the former, we asked 5 native English speakers to rate each title against grammaticality and fluency, on a scale between 1 and 5. For informativeness (i.e., the amount of content carried over from the input code to the NL summary, ignoring fluency of the text), we asked 5 human evaluators familiar with C# and SQL to evaluate the system output by rating the factual overlap of the summary with the reference titles, on a scale between 1 and 5. 
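The smoothed sentence-level BLEU-4 used in the automatic evaluation above can be sketched as follows. This is a simplified reimplementation in the spirit of the +1 smoothing of Lin and Och (2004), not the exact evaluation script; smoothing and tokenization details may differ.

import math
from collections import Counter

def sentence_bleu_plus1(candidate, references, max_n=4):
    """Sentence-level BLEU with add-one smoothing on the n-gram precisions.
    candidate: list of tokens; references: list of token lists."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand = ngrams(candidate, n)
        # Clip each candidate n-gram count by its maximum count in any reference.
        max_ref = Counter()
        for ref in references:
            for gram, cnt in ngrams(ref, n).items():
                max_ref[gram] = max(max_ref[gram], cnt)
        clipped = sum(min(cnt, max_ref[gram]) for gram, cnt in cand.items())
        total = sum(cand.values())
        # +1 smoothing so higher-order precisions never hit zero on short titles.
        log_prec += math.log((clipped + 1.0) / (total + 1.0)) / max_n
    # Brevity penalty against the closest reference length.
    ref_len = min((abs(len(r) - len(candidate)), len(r)) for r in references)[1]
    bp = 1.0 if len(candidate) > ref_len else math.exp(1.0 - ref_len / max(len(candidate), 1))
    return bp * math.exp(log_prec)

print(sentence_bleu_plus1("how to get random rows from a mysql database".split(),
                          ["select random rows from mysql table".split()]))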
2078 6.2 RET task 6.2.1 Model and Baselines CODE-NN As described in Section 2, for a given NL question n in the RET task, we rank all code snippets cj in our corpus by computing the scoring function s(cj, n), and return the query c∗ j that maximizes it (Eq. 2). RET-IR is an information retrieval baseline that ranks the candidate code snippets using cosine similarity between the given NL question n and all summaries nj in the retrieval set, based on their vector representations using TF-IDF weights over unigrams. The scoring function s in Eq. 2 becomes: s(cj, n) = tf-idf(nj) · tf-idf(n) ∥tf-idf(nj)∥∥tf-idf(n)∥, 1 ≤j ≤J 6.2.2 Evaluation Metrics We assess ranking quality by computing the Mean Reciprocal Rank (MRR) of c∗ j. For every snippet cj in EVAL (and DEV), we use two of the three references (title and human annotation), namely nj,1, nj,2. We then build a retrieval set comprising (cj, nj,1) together with 49 random distractor pairs (c′, n′), c′ ̸= cj from the test set. Using nj,2 as the natural language question, we rank all 50 items in this retrieval set and use the rank of query c∗ j to compute MRR. We average MRR over all returned queries c∗ j in the test set, and repeat this experiment for several different random sets of distractors. 6.3 Tasks from Allamanis et al. (2015b) Allamanis et al. (2015b) take a retrieval approach to answer C# related natural language questions (L to C), similar to our RET task. In addition, they also use retrieval to summarize C# source code (C to L) and evaluate both tasks using the MRR metric. Although they also use data from Stackoverflow, their dataset preparation and cleaning methods differs significantly from ours. For example, they filter out posts where the question has fewer than 2 votes, the answer has fewer than 3 votes, or the post has fewer than 1000 views. Additionally, they also filter code snippets that cannot be parsed by Roslyn (.NET compiler) or are longer than 300 characters. Thus, to directly compare with their model, we re-train our generation model on their dataset and use our model score for retrieval of both code and summaries. Model METEOR BLEU-4 C# IR 7.9 (6.1) 13.7 (12.6) MOSES 9.1 (9.7) 11.6 (11.5) SUM-NN 10.6 (10.3) 19.3 (18.2) CODE-NN 12.3 (13.4) 20.5 (20.4) SQL IR 6.3 (8.0) 13.5 (13.0) MOSES 8.3 (9.7) 15.4 (15.9) SUM-NN 6.4 (8.7) 13.3 (14.2) CODE-NN 10.9 (14.0) 18.4 (17.0) Table 3: Performance on EVAL for the GEN task. Performance on DEV is indicated in parentheses. Model Naturalness Informativeness C# IR 3.42 2.25 MOSES 1.41 2.42 SUM-NN 4.61* 1.99 CODE-NN 4.48 2.83 SQL IR 3.21 2.58 MOSES 2.80 2.54 SUM-NN 4.44 2.75 CODE-NN 4.54 3.12 Table 4: Naturalness and Informativeness measures of model outputs. Stat. sig. between CODENN and others is computed with a 2-tailed Student’s t-test; p < 0.05 except for *. 7 Results 7.1 GEN Task Table 3 shows automatic evaluation metrics for our model and baselines. CODE-NN outperforms all the other methods in terms of METEOR and BLEU-4 score. We attribute this to its ability to perform better content selection, focusing on the more salient parts of the code by using its attention mechanism jointly with its LSTM memory cells. The neural models have better performance on C# than SQL. This is in part because, unlike SQL, C# code contains informative intermediate variable names that are directly related to the objective of the code. 
On the other hand, SQL is more challenging in that it only has a handful of keywords and functions, and summarization models need to rely on other structural aspects of the code. Informativeness and naturalness scores for each model from our human evaluation study are presented in Table 4. In general, CODE-NN performs well across both dimensions. Its superior performance in terms of informativeness further supports our claim that it manages to select content more effectively. Although SUM-NN performs similar to CODE-NN on naturalness, its output lacks content and has very little variation (see Section 7.4), which also explains its surprisingly low 2079 Model MRR C# RET-IR 0.42 ± 0.02 (0.44 ± 0.01) CODE-NN 0.58 ± 0.01 (0.66 ± 0.02) SQL RET-IR 0.28 ±0.01(0.4 ± 0.01) CODE-NN 0.44 ± 0.01 (0.54 ± 0.02) Table 5: MRR for the RET task. Dev set results in parentheses. Model MRR L to C Allamanis 0.182 ±0.009 CODE-NN 0.590 ± 0.044 C to L Allamanis 0.434 ±0.003 CODE-NN 0.461 ± 0.046 Table 6: MRR values for the Language to Code (L to C) and the Code to Language (C to L) tasks using the C# dataset of Allamanis et al. (2015b) score on informativeness. 7.2 RET Task Table 5 shows the MRR on the RET task for CODE-NN and RET-IR, averaged over 20 runs for C# and SQL. CODE-NN outperforms the baseline by about 16% for C# and SQL. RET-IR can only output code snippets that are annotated with NL as potential matches. On the other hand, CODENN can rank even unannotated code snippets and nominate them as potential candidates. Hence, it can leverage vast amounts of such code available in online repositories like Github. To speed up retrieval when using CODE-NN , it could be one of the later stages in a multi-stage retrieval system and candidates may also be ranked in parallel. 7.3 Comparison with Allamanis et al. We train CODE-NN on their dataset and evaluate using the same MRR testing framework (see Table 6). Our model performs significantly better for the Language to Code task (L to C) and slightly better for the Code to Language task (C to L). The attention mechanism together with the LSTM network is able to generate better scores for (language, code) pairs. 7.4 Qualitative Analysis Figure 3 shows the relative magnitudes of the attention weights (αi,j) for example C# and SQL code snippets while generating their corresponding summaries. Darker regions represent stronger weights. CODE-NN automatically learns to do how to get selected cell value in datagridview ? MessageBox . Show ( dataGridView1 . SelectedCells [ 1 ] . Value . ToString ( ) ) how to get the difference between two dates in mysql ? select col0 from tab0 where col0 <= now ( ) interval 29 day ; Figure 3: Heatmap of attention weights αi,j for example C# (left) and SQL (right) code snippets. The model learns to align key summary words (like cell) with the corresponding tokens in the input (SelectedCells). high-quality content selection by aligning key summary words with informative tokens in the code snippet. Table 8 shows examples of the output generated by our model and baselines for code snippets in DEV. Most of the models produce meaningful output for simple code snippets (first example) but degrade on longer, compositional inputs. For example, the last SQL query listed in Table 8 includes a subquery, where a complete description should include both summing and concatenation. CODE-NN describes the summation (but not concatenation), while others return non-relevant descriptions. 
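The MRR evaluation protocol of Section 6.2.2 (rank the gold snippet against 49 random distractors and average the reciprocal ranks) can be sketched as follows; score stands in for any scoring function s(c, n), such as the trained model or the TF-IDF baseline, and the helper name is illustrative.

import random

def mean_reciprocal_rank(test_pairs, score, num_distractors=49, seed=0):
    """test_pairs: list of (code, question) items, assumed to contain at least
    num_distractors + 1 entries; score(code, question) is any scoring function s."""
    rng = random.Random(seed)
    rr_total = 0.0
    for idx, (gold_code, question) in enumerate(test_pairs):
        # Candidate pool: the gold snippet plus randomly sampled distractor snippets.
        others = [c for i, (c, _) in enumerate(test_pairs) if i != idx]
        pool = [gold_code] + rng.sample(others, num_distractors)
        ranked = sorted(pool, key=lambda c: score(c, question), reverse=True)
        rank = ranked.index(gold_code) + 1
        rr_total += 1.0 / rank
    return rr_total / len(test_pairs)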
Finally, we performed manual error analysis on 50 randomly selected examples from DEV (Table 7) for each language. Redundancy is a major source of error, i.e., generation of extraneous content-bearing phrases, along with missing content, e.g., in the last example of Table 8 there is no reference to the concatenation operations present in the beginning of the query. Sometimes the output from our model can be out of context, in the sense that it does not match the input code. This often happens for low frequency tokens (7% of cases), for which CODE-NN realizes them with generic phrases. This also happens when there are very long range dependencies or compositional structures in the input, such as nested queries (13% of the cases). 8 Conclusion In this paper, we presented CODE-NN , an endto-end neural attention model using LSTMs to 2080 Error % Cases Correct 37% Redundancy 17% Missing Info 26% Out of context 20% Table 7: Error analysis on 50 examples in DEV generate summaries of C# and SQL code by learning from noisy online programming websites. Our model outperforms competitive baselines and achieves state of the art performance on automatic metrics, namely METEOR and BLEU, as well as on a human evaluation study. We also used CODE-NN to answer programming questions by retrieving the most appropriate code snippets from a corpus, and beat previous baselines for this task in terms of MRR. We have published our C# and SQL datasets, the accompanying human annotated test sets, and our code for the tasks described in this paper. In future work, we plan to develop better models for capturing the structure of the input, as well as extend the use of our system to other applications such as automatic documentation of source code. Acknowledgements We thank Mike Lewis, Chlo´e Kiddon, Kenton Lee, Eunsol Choi and the anonymous reviewers for comments on an earlier version. We also thank Bill Howe, Dan Halperin and Mark Yatskar for helpful discussions and Miltiadis Allamanis for providing the dataset for the comparison study. This research was supported in part by the NSF (IIS-1252835), an Allen Distinguished Investigator Award, and a gift from Amazon. References Andi Albrecht. 2015. python-sqlparse. Miltiadis Allamanis, Earl T Barr, Christian Bird, and Charles Sutton. 2015a. Suggesting accurate method and class names. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, pages 38–49. Miltiadis Allamanis, Daniel Tarlow, Andrew Gordon, and Yi Wei. 2015b. Bimodal modelling of source code and natural language. In Proceedings of The 32nd International Conference on Machine Learning, pages 2123–2132. Gabor Angeli, Percy Liang, and Dan Klein. 2010. A simple domain-independent probabilistic approach Method Output C# code var x = "FundList [10]. Amount"; int xIndex = Convert.ToInt32( Regex.Match(x,@"\d+"). Value ); Gold Identify the number in given string IR Convert string number to integer MOSES How to xIndex numbers in C#? SUM-NN How can I get the value of a string? CODE-NN How to convert string to int? C# code foreach (string pTxt in xml.parent) { TreeNode parent = new TreeNode (); foreach (string cTxt in xml.child) { TreeNode child = new TreeNode (); parent.Nodes.Add(child ); } } Gold Adding childs to a treenode dynamically in C# IR How to set the name of a tabPage programmatically MOSES How can TreeView nodes from XML parentText string to a treeview node SUM-NN How to get data from xml file in C# CODE-NN How to get all child nodes in TreeView? 
C# code string url = baseUrl + "/api/Entry/SendEmail?emailId=" + emailId; WebRequest req = WebRequest.Create(url); req.Method = "GET"; req.BeginGetResponse(null , null); Gold Execute a get request on a web server and receive the response asynchronously IR How to download a file from another Sharepoint Domain MOSES How baseUrl emailId C how to a page in BeginGetResponse to SUM-NN How to get data from a file in C CODE-NN How to call a URL from a web api post ? SQL Query SELECT * FROM table ORDER BY Rand() LIMIT 10 Gold Select random rows from mysql table IR How to select a random record from a mysql database? MOSES How to select all records in mysql ? SUM-NN How can I select random rows from a table CODE-NN How to get random rows from a mysql database? SQL Query SELECT Group concat( Concat ws(',', playerid , r1 , r2) SEPARATOR ';') FROM (SELECT playerid , Sum(rank = 1) r1 , Sum(rank < 5) r2 FROM result GROUP BY playerid) t; Gold Get sum of group values based on condition and concatenate them into a string IR Mysql: counting occurences in a table, return as a single row MOSES Mysql query to get this result of the result of one column value in mysql SUM-NN How do i combine these two queries into one? CODE-NN How to get the sum of a column in a single query? Table 8: Examples of outputs generated by each model for code snippets in DEV to generation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 502–512. 2081 Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, volume 29, pages 65–72. David L Chen, Joohyun Kim, and Raymond J Mooney. 2010. Training a multilingual sportscaster: Using perceptual context to learn language. Journal of Artificial Intelligence Research, pages 397–435. Colin Cherry and George Foster. 2012. Batch tuning strategies for statistical machine translation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 427–436. Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He, Geoffrey Zweig, and Margaret Mitchell. 2015. Language models for image captioning: The quirks and what works. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 100–105. Robert Dyer, Hoan Anh Nguyen, Hridesh Rajan, and Tien N Nguyen. 2013. Boa: A language and infrastructure for analyzing ultra-large-scale software repositories. In Proceedings of the 2013 International Conference on Software Engineering, pages 422–431. Alessandra Giordani and Alessandro Moschitti. 2012. Translating questions to SQL queries with generative parsers discriminatively reranked. In Proceedings of COLING 2012: Posters, pages 401–410. Sumit Gulwani and Mark Marron. 2014. Nlyze: Interactive programming by natural language for spreadsheet data analysis and manipulation. In Proceedings of the 2014 ACM SIGMOD international conference on Management of data, pages 803–814. Sonia Haiduc, Jairo Aponte, and Andrian Marcus. 2010. Supporting program comprehension with source code summarization. In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering-Volume 2, pages 223–226. Kenneth Heafield. 2011. 
Kenlm: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187–197. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Andrej Karpathy and Fei-Fei Li. 2015. Deep visualsemantic alignments for generating image descriptions. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, pages 3128–3137. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions, pages 177–180. Ioannis Konstas and Mirella Lapata. 2013. A global model for concept-to-text generation. Journal of Artificial Intelligence Research, 48(1):305–346. Georgia Koutrika, Alkis Simitsis, and Yannis E Ioannidis. 2010. Explaining structured queries in natural language. In Data Engineering (ICDE), 2010 IEEE 26th International Conference on, pages 333–344. Nate Kushman and Regina Barzilay. 2013. Using semantic unification to generate regular expressions from natural language. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 826–836. Tao Lei, Fan Long, Regina Barzilay, and Martin Rinard. 2013. From natural language specifications to program input parsers. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1294–1303. Fei Li and Hosagrahar V Jagadish. 2014. Nalir: An interactive natural language interface for querying relational databases. In Proceedings of the 2014 ACM SIGMOD international conference on Management of data, pages 709–712. Chin-Yew Lin and Franz Josef Och. 2004. Orange: a method for evaluating automatic evaluation metrics for machine translation. In Proceedings of the 20th international conference on Computational Linguistics, page 501. Wei Lu and Hwee Tou Ng. 2011. A probabilistic forest-to-string model for language generation from typed lambda calculus expressions. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1611–1622. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421. Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. 2016. What to talk about and how? selective generation using lstms with coarse-to-fine alignment. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In Proceedings of the International Conference on Learning Representations. 2082 Dana Movshovitz-Attias and William W. Cohen. 2013. Natural language models for predicting programming comments. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 35–40. Axel-Cyrille Ngonga Ngomo, Lorenz B¨uhmann, Christina Unger, Jens Lehmann, and Daniel Gerber. 2013. Sorry, i don’t speak sparql: Translating sparql queries into natural language. 
In Proceedings of the 22Nd International Conference on World Wide Web, pages 977–988. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Terence Parr. 2013. The definitive ANTLR 4 reference. Pragmatic Bookshelf. Sarah Rastkar, Gail C Murphy, and Gabriel Murray. 2010. Summarizing software artifacts: a case study of bug reports. In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering-Volume 1, pages 505–514. Sarah Rastkar, Gail C Murphy, and Alexander WJ Bradley. 2011. Generating natural language summaries for crosscutting source code concerns. In Software Maintenance (ICSM), 2011 27th IEEE International Conference on, pages 103–112. Ehud Reiter and Robert Dale. 2000. Building natural language generation systems. Cambridge University Press, New York, NY. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389. Alkis Simitsis and Yannis E. Ioannidis. 2009. Dbmss should talk back too. In CIDR 2009, Fourth Biennial Conference on Innovative Data Systems Research, Online Proceedings. Giriprasad Sridhara, Emily Hill, Divya Muppaneni, Lori Pollock, and K Vijay-Shanker. 2010. Towards automatically generating summary comments for java methods. In Proceedings of the IEEE/ACM international conference on Automated software engineering, pages 43–52. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Amanda Stent, Matthew Marge, and Mohit Singhai. 2005. Evaluating evaluation methods for generation in the presence of variation. In Computational Linguistics and Intelligent Text Processing, pages 341– 351. Ilya Sutskever, James Martens, and Geoffrey E Hinton. 2011. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 1017–1024. Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, pages 1556–1566. Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond J. Mooney, and Kate Saenko. 2015. Translating videos to natural language using deep recurrent neural networks. In In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1494–1504. Tsung-Hsien Wen, Milica Gasic, Nikola Mrkˇsi´c, PeiHao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned lstm-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711–1721. Paul J Werbos. 1990. 
Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10):1550–1560. Yuk Wah Wong and Raymond J Mooney. 2007. Generation by inverting a semantic parser that uses statistical machine translation. In In Proceedings of the 2007 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 172–179. Wojciech Zaremba and Ilya Sutskever. 2014. Learning to execute. CoRR, abs/1410.4615. Xingxing Zhang and Mirella Lapata. 2014. Chinese poetry generation with recurrent neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 670–680. 2083
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2084–2093, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Continuous Profile Models in ASL Syntactic Facial Expression Synthesis Hernisa Kacorri Carnegie Mellon University Human-Computer Interaction Institute 5000 Forbes Avenue Pittsburgh, PA 15213, USA [email protected] Matt Huenerfauth Rochester Institute of Technology B. Thomas Golisano College of Computing and Information Sciences 152 Lomb Memorial Drive Rochester, NY 14623, USA [email protected] Abstract To create accessible content for deaf users, we investigate automatically synthesizing animations of American Sign Language (ASL), including grammatically important facial expressions and head movements. Based on recordings of humans performing various types of syntactic face and head movements (which include idiosyncratic variation), we evaluate the efficacy of Continuous Profile Models (CPMs) at identifying an essential “latent trace” of the performance, for use in producing ASL animations. A metric-based evaluation and a study with deaf users indicated that this approach was more effective than a prior method for producing animations. 1 Introduction and Motivation While there is much written content online, many people who are deaf have difficulty reading text or may prefer sign language. For example, in the U.S., standardized testing indicates that a majority of deaf high school graduates (age 18+) have a fourth-grade reading level or below (Traxler, 2000) (U.S. fourth-grade students are typically age 9). While it is possible to create video-recordings of a human performing American Sign Language (ASL) for use on websites, updating such material is expensive (i.e., re-recording). Thus, researchers investigate technology to automate the synthesis of animations of a signing virtual human, to make it more cost-effective for organizations to provide sign language content online that is easily updated and maintained. Animations can be automatically synthesized from a symbolic specification of the message authored by a human or perhaps by machine translation, e.g. (Ebling and Glauert, 2013; Filhol et al., 2013; Stein et al., 2012). 1.1 ASL Syntactic Facial Expressions Facial expressions are essential in ASL, conveying emotion, semantic variations, and syntactic structure. Prior research has verified that ASL animations with missing or poor facial expressions are significantly less understandable for deaf users (Kacorri et al., 2014; Kacorri et al., 2013b; Kacorri et al., 2013a). While artists can produce individual animations with beautiful expressions, such work is time-consuming. For efficiently maintainable online content, we need automatic synthesis of ASL from a sparse script representing the lexical items and basic elements of the sentence. Specifically, we are studying how to model and generate ASL animations that include syntactic facial expressions, conveying grammatical information during entire phrases and therefore constrained by the timing of the manual signs in a phrase (Baker-Shenk, 1983). Generally speaking, in ASL, upper face movements (examined in this paper) convey syntactic information across entire phrases, with the mouth movements conveying lexical or adverbial information. The meaning of a sequence of signs performed with the hands depends on the co-occuring facial expression. (While we use the term “facial expressions,” these phenomena also include movements of the head.) 
For instance, the ASL sentence “BOB LIKE CHOCOLATE” (English: “Bob likes chocolate.”) becomes a yes/no question (English: “Does Bob like chocolate?”), with the addition of a YesNo facial expression during the sentence. The addition of a Negative facial expression during the verb phrase “LIKE CHOCOLATE” changes the meaning of the sentence to “Bob doesn’t like chocolate.” (The lexical item NOT may optionally be used.) For interrogative questions, a WhQuestion facial expression must occur during the sentence, e.g., “BOB LIKE 2084 WHAT.” The five types of ASL facial expressions investigated in this paper include: • YesNo: The signer raises his eyebrows while tilting the head forward to indicate that the sentence is a polar question. • WhQuestion: The signer furrows his eyebrows and tilts his head forward during a sentence to indicate an interrogative question, typically with a “WH” word such as what, who, where, when, how, which, etc. • Rhetorical: The signer raises his eyebrows and tilts his head backward and to the side to indicate a rhetorical question. • Topic: The signer raises his eyebrows and tilts his head backward during a clause-initial phrase that should be interpreted as a topic. • Negative: The signer shakes his head left and right during the verb phrase to indicate negated meaning, often with the sign NOT. 1.2 Prior Work A survey of recent work of several researchers on producing animations of sign language with facial expressions appears in (Kacorri, 2015). There is recent interest in data-driven approaches using facial motion-capture of human performances to generate sign language animations: For example, (Schmidt et al., 2013) used clustering techniques to select facial expressions that co-occur with individual lexical items, and (Gibet et al., 2011) studied how to map facial motion-capture data to animation controls. In the most closely related prior work, we had investigated how to generate a face animation based on a set of video recordings of a human signer performing facial expressions (Kacorri et al., 2016), with head and face movement data automatically extracted from the video, and with individual recordings labeled as each of the five syntactic types, as listed in section 1.1. We wanted to identify a single exemplar recording in our dataset, for each of the syntactic types, that could be used as the basis for generating the movements of virtual human character. (In a collection of recordings of face and head movement, there will naturally be non-essential individual variation in the movements; thus, it may be desirable to select a recording that is maximally stereotypical of a set of recordings.) To do so, we made use of a variant of Dynamic Time Warping (DTW) as a distance metric to select the recording with minimal pairwise normalized DTW distance from all of the examples of each syntactic type. We had used this “centroid” recording as the basis for producing a novel animation of the face and head movements for a sign language sentence. 2 Method In this paper, we present a new methodology for generating face and head movements for sign language animations, given a set of human recordings of various syntactic types of facial expressions. 
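The centroid selection used in that prior approach can be sketched as follows: compute pairwise multivariate DTW distances between recordings and keep the one with the smallest total distance to the others. This illustrates the idea only; the exact DTW variant and normalization used in the earlier work may differ.

import numpy as np

def dtw_distance(a, b):
    """Multivariate DTW between two (frames x features) arrays,
    with Euclidean local cost, normalized by the combined length."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Length normalization so longer recordings are not penalized.
    return D[n, m] / (n + m)

def select_centroid(recordings):
    """Return the recording with minimal summed DTW distance to all others."""
    totals = [sum(dtw_distance(r, other) for other in recordings if other is not r)
              for r in recordings]
    return recordings[int(np.argmin(totals))]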
Whereas we had previously selected a single exemplar recording of a human performance to serve as a basis for producing an animation (Kacorri et al., 2016), in this work, we investigate how to construct a model that generalizes across the entire set of recordings, to produce an “average” of the face and head movements, which can serve as a basis for generating an animation. To enable comparison of our new methodology to our prior technique, we make use of an identical training dataset as in (Kacorri et al., 2016) and an identical animation rendering pipeline, described in (Huenerfauth and Kacorri, 2015a). Briefly, the animation pipeline accepts a script of the hand location, hand orientation, and hand-shape information to pose and move the arms of the character over time, and it also accepts a file containing a stream of face movement information in MPEG4 Facial Animation Parameters format (ISO/IEC, 1999) to produce a virtual human animation. 2.1 Dataset and Feature Extraction ASL is a low-resource language, and it does not have a writing system in common use. Therefore, ASL corpora are generally small in size and in limited supply; they are usually produced through manual annotation of video recordings. Thus, researchers generally work with relatively small datasets. In this work, we make use of two datasets that consist of video recordings of humans performing ASL with annotation labeling the times in the video when each of the five types of syntactic facial expressions listed in section 1.1 occur. The training dataset used in this study was described in (Kacorri et al., 2016), and consists of 199 examples of facial expressions performed by a female signer recorded at Boston University. While the Training dataset can naturally be partitioned into five subsets, based on each of the five syntactic facial expression types, because adjacent 2085 Type Subgroup “ A” (Num. of Videos) Subgroup “ B” (Num. of Videos) YesNo Immediately preceded by a facial expression with raised eyebrows, e.g. Topic. (9) Not immediately preceded by an eyebrow-raising expression. (10) WhQuestion Performed during a single word, namely the whword (e.g., what, where, when). (4) Performed during a phrase consisting of multiple words. (8) Rhetorical Performed during a single word, namely the whword (e.g., what, where, when). (2) Performed during a phrase consisting of multiple words. (8) Topic Performed during a single word. (29) Performed during a phrase consisting of multiple words. (15) Negative Immediately preceded by a facial expression with raised eyebrows, e.g. Topic. (16) Not immediately preceded by eyebrow-raising expression. (25) Table 1: Ten subgroups of the training dataset. facial expressions or phrase durations may affect the performance of ASL facial expressions, in this work, we sub-divide the dataset further, into ten sub-groups, as summarized in Table 1. The “gold-standard” dataset used in this study was shared with the research community by (Huenerfauth and Kacorri, 2014); we use 10 examples of ASL facial expressions (one for each sub-group listed in Table 1) performed by a male signer who was recorded at the Linguistic and Assistive Technologies laboratory. 
To extract face and head movement information from the video, a face-tracker (Visage, 2016) was used to produce a set of MPEG4 facial animation parameters for each frame of video: These values represent face-landmark or head movements of the human appearing in the video, including 14 features used in this study: head x, head y, head z, head pitch, head yaw, head roll, raise l i brow, raise r i brow, raise l m brow, raise r m brow, raise l o brow, raise r o brow, squeeze l brow, squeeze r brow. The first six values represent head location and orientation. The next six values represent vertical movement of the outer (“o ”), middle (“m ”), or inner (“i ”) portion of the right (“r ”) or left (“l ”) eyebrows. The final values represent horizontal movement of the eyebrows. 2.2 Continuous Profile Models (CPM) Continuous Profile Model (CPM) aligns a set of related time series data while accounting for changes in amplitude. This model has been previously evaluated on speech signals and on other biological time-series data (Listgarten et al., 2004). With the assumption that a noisy, stochastic process generates the observed time series data, the approach automatically infers the underlying noiseless representation of the data, the so-called “latent trace.” Figure 6 (on the last page of this paper) shows an example of multiple time series in unaligned and aligned space, with CPM identifying the the latent trace. Given a set K of observed time series ⃗xk = (xk 1, xk 2, ..., xk N), CPM assumes there is a latent trace ⃗z = (z1, z2, ..., zM). While not a requirement of the model, the length of the time series data is assumed to be the same (N) and the length of the latent trace used in practice is M = (2+ε)N, where an ideal M would be large relative to N to allow precise mapping between observed data and an underlying point on the latent trace. Higher temporal resolution of the latent trace also accommodates flexible alignments by allowing an observational series to advance along the latent trace in small or large jumps (Listgarten, 2007). Continuous Profile Models (CPMs) build on Hidden Markov Models (HMMs) (Poritz, 1988) and share similarities with Profile HMMs which augment HMMs by two constrained-transition states: ‘Insert’ and ‘Delete’ (emitting no observations). Similar to the Profile HMM, the CPM has strict left-to-right transition rules, constrained to only move forward along a sequence. Figure 1 includes a visualization we created, which illustrates the graphical model of a CPM. 2.3 Obtaining the CPM Latent Trace We applied the CPM model to time align and coherently integrate time series data from multiple ASL facial expression performances of a particular type, e.g., Topic A as listed in section 2.1, with the goal of using the inferred ‘latent traces’ to drive ASL animations with facial expressions of that type. This section describes our work to train the CPM and to obtain the latent traces; implementation details appear in Appendix A. The input time-series data for each CPM model is the face and head movement data extracted from ASL videos of one of the facial expression types, 2086 Figure 1: Depiction of a CPM for series xk, with hidden state variables πk i underlying each observation xk i . The table illustrates the state-space: time-state/scale-state pairs mapped to the hidden variables, where time states belong to the integer set (1...M) and scale states belong to an ordered set, here with 7 evenly spaced scales in logarithmic space as in (Listgarten et al., 2004). as shown in Table 2. 
For each dataset, all the training examples are stretched (resampled using cubic interpolation) to meet the length of the longest example in the set. The length of time series, N, corresponds to the duration in video frames of the longest example in the data set. The recordings in the training set have 14 dimensions, corresponding to the 14 facial features listed in Section 2.1. As discussed above, the latent trace has a time axis of length M, which is approximately double the temporal resolution of the original training examples. CPM Models Training Data #Examples × N × #F eatures Latent Trace M × #F eatures where M = (2 + ε)N YesNo A 9 x 51 x 14 105 x 14 YesNo B 10 x 78 x 14 160 x 14 WhQuestion A 4 x 24 x 14 50 x 14 WhQuestion B 8 x 41 x 14 84 x 14 Rhetorical A 2 x 16 x 14 33 x 14 Rhetorical B 8 x 55 x 14 113 x 14 Topic A 29 x 29 x 14 60 x 14 Topic B 15 x 45 x 14 93 x 14 Negative A 16 x 67 x 14 138 x 14 Negative B 25 x 76 x 14 156 x 14 Table 2: Training data and the obtained latent traces for each of the CPM models on ASL facial expression subcategories. To demonstrate our experiments, Figure 6 illustrates one of the subcategories, Rhetorical B. (This figure appears at the end of the paper, due to its large size.) We illustrate the training set, before and after the alignment and amplitude normalization with the CPM, and the obtained latent trace for this subcategory. Figure 6a and Figure 6b illustrate each of the 8 training examples with a subplot extending from [0, N] in the x-axis, which is the observed time axis in video frames. Each of the 14 plots represents one of the head or face features. Figure 6c illustrates the learned latent trace with a subplot extending from [0, M] in the x-axis, which is the latent time axis. While the training set for this subcategory is very small and has high variability, upon visual inspection of Figure 6, we can observe that the learned latent trace shares similarities with most of the time series in the training set without being identical to any of them. We expect that during the Rhetorical facial expression (Section 2.1), the signer’s eyebrows will rise and the head will be tilted back and to the side. In the latent trace, the inner, middle, and outer portions of the left eyebrow rise (Figure 6c, plots 7, 9, 11), and so do the inner, middle, and outer portions of the right eyebrow (Figure 6c, plots 8, 10, 12). Note how the height of the lines in those plots rise, which indicates increased eyebrow height. For the Rhetorical facial expression, we would also expect symmetry in the horizontal displacement of the eyebrows, and we see such mirroring in the latent-trace: In (Figure 6c, plots 13-14), note the tendency for the line in plot 13 (left eyebrow) to increase in height as the line in plot 14 (right eyebrow) decreases in height, and vice versa. 3 Evaluation This section presents two forms of evaluation of the CPM latent trace model for ASL facial expression synthesis. In Section 3.1, the CPM model will be compared to a “gold-standard” performance of each sub-category of ASL facial expression using a distance-metric-based evaluation, and in Section 3.2, the results of a user-study will be presented, in which ASL signers evaluated animations of ASL based upon the CPM model. 
To provide a basis of comparison, in this section, we evaluate the CPM approach in comparison to an alternative approach that we call ‘Centroid’, which we described in prior work in (Ka2087 corri et al., 2016), where we used a multivariate DTW to select one of the time series in the training set as a representative performance of the facial expression. The centroid examples are actual recordings of human ASL signers that are used to drive an animation. Appendix A lists the codenames of the videos from the training dataset selected as centroids and the codenames of the videos used in the gold-standard dataset (Huenerfauth and Kacorri, 2014). 3.1 Metric Evaluation The gold-standard recordings of a male ASL signer were described in Section 2.1. In addition to the video recordings (which were processed to extract face and head movement data), we have annotation of the timing of the facial expressions and the sequence of signs performed on the hands. To compare the quality of our CPM model and that of the Centroid approach, we used each method to produce a candidate sequence of face and head movements for the sentence performed by the human in the gold-standard recording. Thus, the extracted facial expressions from the human recording can serve as a gold standard for how the face and head should move. In this section, we compare: (a) the distance of the CPM latent trace from the gold standard to (b) the distance of the centroid form the gold standard. It is notable that these gold-standard recordings were previously “unseen” during the creation of the CPM or Centroid models, that is, they were not used in the training data set during the creation of either model. Since there was variability in the length of the latent trace, centroid, and gold-standard videos, for a fairer comparison, we first resampled these time series, using cubic interpolation, to match the duration (in milliseconds) of the gold-standard ASL sentence, and then we used multivariate DTW to estimate their distance, following the methodology of (Kacorri et al., 2016) and (Kacorri and Huenerfauth, 2015). In prior work (Kacorri and Huenerfauth, 2015), we had shown that a scoring algorithm based on DTW had moderate (yet significant) correlation with scores that participants assigned to ASL animation with facial expressions. Figure 2 shows an example of a DTW distance scoring between the gold standard and each of the latent trace and the centroid, for one face feature Figure 2: DTW distances on the squeeze l brow feature (left eyebrow horizontal movement), during a Negative A facial expression: (left) between the CPM latent trace and gold standard and (right) between the centroid and gold standard. The timeline is given in milliseconds. Figure 3: Overall normalized DTW distances for latent trace and centroid (left) and per each subcategory of ASL facial expression (right). (horizontal movement of the left eyebrow) during a Negative A facial expression. Given that the centroid and the training data for the latent trace are driven by recordings of a (female) signer and the gold standard is a different (male) signer, there are differences between these facial expressions due to idiosyncratic aspects of individual signers. Thus the metric evaluation in this section is challenging because it is an inter-signer evaluation. Figure 3 illustrates the overall calculated DTW distances, including a graph with the results broken down per subcategory of ASL facial expression. 
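A rough sketch of this comparison step, assuming SciPy for the cubic-interpolation resampling and reusing a DTW function like the one sketched earlier; it illustrates the procedure rather than reproducing the exact evaluation code.

import numpy as np
from scipy.interpolate import interp1d

def resample_to_length(series, target_len):
    """Cubic-interpolation resampling of a (frames x features) array."""
    old_t = np.linspace(0.0, 1.0, num=len(series))
    new_t = np.linspace(0.0, 1.0, num=target_len)
    return interp1d(old_t, series, kind='cubic', axis=0)(new_t)

def compare_to_gold(candidate, gold):
    """Resample a candidate (latent trace or centroid) to the gold-standard
    length, then return its multivariate DTW distance from the gold standard."""
    resampled = resample_to_length(candidate, len(gold))
    return dtw_distance(resampled, gold)   # dtw_distance as sketched earlier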
The results indicate that the CPM latent trace is closer to the gold standard than the centroid is. Note that the distance values are not zero since the latent trace and the centroid are being compared to a recording from a different signer on novel, previously unseen, ASL sentences. The results in these graphs suggest that the latent trace model out-performed the centroid approach. 2088 Figure 4: Screenshots of YesNo A stimuli of three types: a) neutral, b) centroid, and c) latent trace. 3.2 User Evaluation To further assess our ASL synthesis approach, we conducted a user study where ASL signers watched short animations of ASL sentences with identical hand movements but differing in their face, head, and torso movements. There were three conditions in this between-subjects study: a) animations with a static neutral face throughout the animation (as a lower baseline), b) animations with facial expressions driven by the centroid human recording, and c) animations with facial expressions driven by the CPM latent trace based on multiple recordings of a human performing that type of facial expression. Figure 4 illustrates screenshots of each stimulus type for a YesNo A facial expression. The specific sentences used for this study were drawn from a standard test set of stimuli released to the research community by (Huenerfauth and Kacorri, 2014) for evaluating animations of sign language with facial expressions. All three types of stimuli (neutral, centroid and latent trace), shared identical animation-control scripts specifying the hand and arm movements; these scripts were hand-crafted by ASL signers in a pose-by-pose manner. For the neutral animations, we did not specify any torso, head, nor face movements; rather, we left them in their neutral pose throughout the sentences. As for the centroid and latent trace animations, we applied the head and face movements (as specified by the centroid model or by the latent trace model) only to the portion of the animation where the facial expression of interest occurs, leaving the head and face for the rest of the animation to a neutral pose. For instance, during a stimulus that contains a Whquestion, the face and head are animated only during the Wh-question, but they are left in a neutral pose for the rest of the stimulus (which may include other sentences). The period of time when the facial expression occurred was time-aligned with the subset of words (the sequence of signs performed on the hands) for the appropriate syntactic domain; the phrase-beginning and phraseending was aligned with the performance of the facial expression. Thus, the difference in appearance between our animation stimuli was subtle: The only portion of the animations that differed between the three conditions (neutral, centroid, and latent-trace) was the face and the head movements during the span of time when the syntactic facial expression should occur (e.g., during the Wh-question). We resampled the centroid and CPM time series, using cubic interpolation, to match the duration (in milliseconds) of the animation they would be applied to. To convert the centroid and latent trace time series into the input for the animationgeneration system, we used the MPEG4-featuresto-animation pipeline described in (Kacorri et al., 2016). That platform is based upon the opensource EMBR animation system for producing human animation (Heloir and Kipp, 2009); specifically, the facial expressions were represented as an EMBR PoseSequence with a pose defined every 133 milliseconds. 
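As an illustration of this last conversion step, the sketch below resamples a face-feature time series to one pose every 133 ms over the span of the animation where the facial expression occurs. The function and variable names are hypothetical and only approximate the MPEG4-features-to-animation pipeline of Kacorri et al. (2016); the real conversion to EMBR controls is richer than this.

# Hypothetical sketch: turn a (frames x features) time series into a pose
# sequence sampled every 133 ms over the span [start_ms, end_ms] of the
# facial expression of interest.
import numpy as np
from scipy.interpolate import interp1d

def to_pose_sequence(series, start_ms, end_ms, step_ms=133):
    """series: (num_frames, num_features); returns a list of (time_ms, pose)."""
    src_t = np.linspace(start_ms, end_ms, series.shape[0])
    f = interp1d(src_t, series, axis=0, kind='cubic')
    times = np.arange(start_ms, end_ms + 1, step_ms)
    return [(int(t), f(t)) for t in times]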
In prior work (Huenerfauth and Kacorri, 2015b), we investigated key methodological considerations in conducting a study to evaluate sign language animations with deaf users, including the use of appropriate baselines for comparison, appropriate presentation of questions and instructions, demographic and technology experience factors influencing acceptance of signing avatars, and other factors that we have considered in the design of this current study. Our recent work (Kacorri et al., 2015) has established a set of demographic and technology experience questions which can be used to screen for the most critical participants in a user study of ASL signers to evaluate animation. Specifically, we screened for participants that identified themselves as “deaf/Deaf” or “hard-of-hearing,” who had grown up using ASL at home or had attended an ASL-based school as a young child, such as a residential or daytime school. Deaf researchers (all fluent ASL signers) recruited and collected data from participants, during meetings conducted in ASL. Initial advertise2089 ments were sent to local email distribution lists and Facebook groups. A total of 17 participants met the above criteria, where 14 participants selfidentified as deaf/Deaf and 3 as hard-of-hearing. Of our participants in the study, 10 had attended a residential school for deaf students, and 7, a daytime school for deaf students. 14 participants had learned ASL prior to age 5, and the remaining 3 had been using ASL for over 7 years. There were 8 men and 9 women of ages 19-29 (average age 22.8). In prior work, we (Kacorri et al., 2015) have advocated that participants in studies evaluating sign language animation complete a two standardized surveys about their technology experience (MediaSharing and AnimationAttitude) and that researchers report these values for participants, to enable comparison across studies. In our study, participant scores for MediaSharing varied between 3 and 6, with a mean score of 4.3, and scores for AnimationAttitude varied from 2 to 6, with a mean score of 3.8. At the beginning of the study, participants viewed a sample animation, to familiarize them with the experiment and the questions they would be asked about each animation. (This sample used a different stimulus than the other ten animations shown in the study.) Next, they responded to a set of questions that measured their subjective impression of each animation, using a 1-to-10 scalar response. Each question was conveyed using ASL through an onscreen video, and the following English question text was shown on the questionnaire: (a) Good ASL grammar? (10=Perfect, 1=Bad); (b) Easy to understand? (10=Clear, 1=Confusing); (c) Natural? (10=Moves like person, 1=Like robot). These questions have been used in many prior experimental studies to evaluate animations of ASL, e.g. (Kacorri and Huenerfauth, 2015), and were shared with research community as a standard evaluation tool in (Huenerfauth and Kacorri, 2014). To calculate a single score for each animation, the scalar response scores for the three questions were averaged. Figure 5 shows distributions of subjective scores as boxplots with a 1.5 interquartile range (IQR). For comparison, means are denoted with a star and their values are labeled above each boxplot. 
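The scoring and the statistical comparison reported in the next paragraph can be sketched as follows: the three 1-to-10 responses for each animation are averaged into a single score, and the three conditions are compared with a Kruskal-Wallis test. The response tuples below are placeholders, not the study data.

# Sketch of the subjective-score analysis. The response lists are placeholders.
import numpy as np
from scipy.stats import kruskal

def animation_score(grammar, understand, natural):
    """Average of the three 1-to-10 scalar responses for one animation."""
    return float(np.mean([grammar, understand, natural]))

# Placeholder (grammar, understandability, naturalness) tuples per condition.
neutral_responses  = [(4, 5, 3), (5, 4, 4), (3, 4, 3)]
centroid_responses = [(6, 6, 5), (5, 6, 5), (6, 5, 6)]
latent_responses   = [(7, 7, 6), (8, 7, 7), (7, 6, 7)]

groups = [[animation_score(*r) for r in g]
          for g in (neutral_responses, centroid_responses, latent_responses)]
H, p = kruskal(*groups)   # non-parametric comparison across the three conditions
print(H, p)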
When comparing the subjective scores that participants assigned to the animations in Figure 5, we found a significant difference (KruskalWallis test used since the data was not normally Figure 5: Subjective scores for centroid, latent trace, and neutral animations. distributed) between the latent trace and centroid (p < 0.005) and between the latent trace and neutral (p < 0.05). In summary, our CPM modeling approach for generating an animation out-performed an animation produced from an actual recording of a single human performance (the “centroid” approach). In prior methodological studies, we demonstrated that it is valid to use either videos of humans or animations (driven by a human performance) as the baseline for comparison in a study of ASL animation (Kacorri et al., 2013a). As suggested by Figure 4, the differences in face and head movements between the Centroid and CPM conditions were subtle, yet fluent ASL signers rated the CPM animations higher in this study. 4 Conclusion and Future Work To facilitate the creation of ASL content that can easily be updated or maintained, we have investigated technologies for automating the synthesis of ASL animations from a sparse representation of the message. Specifically, this paper has focused on the synthesis of syntactic ASL facial expressions, which are essential to sentence meaning, using a data-driven methodology in which recordings of human ASL signers are used as a basis for generating face and head movements for animation. To avoid idiosyncratic aspects of a single performance, we have modeled a facial expression based on the underlying trace of the movement trained on multiple recordings of different sentences where this type of facial expression occurs. We obtain the latent trace with Continuous Profile Model (CPM), a probabilistic generative model that relies on Hidden Markov Models. We 2090 assessed our modeling approach through comparison to an alternative centroid approach, where a single performance was selected as a representative. Through both a metric evaluation and an experimental user study, we found that the facial expressions driven by our CPM models produce high-quality facial expressions that are more similar to human performance of novel sentences. While this work used the latent trace as the basis for animation, in future work, we also plan to explore methods for sampling from the model to produce variations in face and head movement. In addition, to aid CPM convergence to a good local optimum, in future work we will investigate dimensionality reduction approaches that are reversible such as Principal Component Analysis (Pearson, 1901) and other pre-processing approaches similar to (Listgarten, 2007), where the training data set is coarsely pre-aligned and pre-scaled based on the center of mass of the time series. In addition we plan to further investigate how to fine-tune some of the hyper parameters of the CPM such as spline scaling, single global scaling factor, convergence tolerance, and initialization of the latent trace with a centroid. In subsequent work, we would like to explore alternatives for enhancing CPMs by incorporating contextual features in the training data set such as timing of hand movements, and preceding, succeeding, and co-occurring facial expressions. Acknowledgments This material is based upon work supported by the National Science Foundation under award number 1065009 and 1506786. 
This material is also based upon work supported by the Science Fellowship and Dissertation Fellowship programs of The Graduate Center, CUNY. We are grateful for support and resources provided by Ali Raza Syed at The Graduate Center, CUNY, and by Carol Neidle at Boston University. References Charlotte Baker-Shenk. 1983. A microanalysis of the nonmanual components of questions in american sign language. Sarah Ebling and John Glauert. 2013. Exploiting the full potential of jasigning to build an avatar signing train announcements. In Proceedings of the Third International Symposium on Sign Language Translation and Avatar Technology (SLTAT), Chicago, USA, October, volume 18, page 19. Michael Filhol, Mohamed N Hadjadj, and Benoˆıt Testu. 2013. A rule triggering system for automatic text-to-sign translation. Universal Access in the Information Society, pages 1–12. Sylvie Gibet, Nicolas Courty, Kyle Duarte, and Thibaut Le Naour. 2011. The signcom system for data-driven animation of interactive virtual signers: Methodology and evaluation. ACM Transactions on Interactive Intelligent Systems (TiiS), 1(1):6. Alexis Heloir and Michael Kipp. 2009. Embr–a realtime animation engine for interactive embodied agents. In Intelligent Virtual Agents, pages 393– 404. Springer. Matt Huenerfauth and Hernisa Kacorri. 2014. Release of experimental stimuli and questions for evaluating facial expressions in animations of american sign language. In Proceedings of the 6th Workshop on the Representation and Processing of Sign Languages: Beyond the Manual Channel, The 9th International Conference on Language Resources and Evaluation (LREC 2014), Reykjavik, Iceland. Matt Huenerfauth and Hernisa Kacorri. 2015a. Augmenting embr virtual human animation system with mpeg-4 controls for producing asl facial expressions. In International Symposium on Sign Language Translation and Avatar Technology, volume 3. Matt Huenerfauth and Hernisa Kacorri. 2015b. Best practices for conducting evaluations of sign language animation. Journal on Technology and Persons with Disabilities, 3. ISO/IEC. 1999. Information technology—Coding of audio-visual objects—Part 2: Visual. ISO 144962:1999, International Organization for Standardization, Geneva, Switzerland. Hernisa Kacorri and Matt Huenerfauth. 2015. Evaluating a dynamic time warping based scoring algorithm for facial expressions in asl animations. In 6th Workshop on Speech and Language Processing for Assistive Technologies (SLPAT), page 29. Hernisa Kacorri, Pengfei Lu, and Matt Huenerfauth. 2013a. Effect of displaying human videos during an evaluation study of american sign language animation. ACM Transactions on Accessible Computing (TACCESS), 5(2):4. Hernisa Kacorri, Pengfei Lu, and Matt Huenerfauth. 2013b. Evaluating facial expressions in american sign language animations for accessible online information. In Universal Access in Human-Computer Interaction. Design Methods, Tools, and Interaction Techniques for eInclusion, pages 510–519. Springer. Hernisa Kacorri, Allen Harper, and Matt Huenerfauth. 2014. Measuring the perception of facial expressions in american sign language animations with eye tracking. In Universal Access in Human-Computer Interaction. Design for All and Accessibility Practice, pages 553–563. Springer. 2091 Hernisa Kacorri, Matt Huenerfauth, Sarah Ebling, Kasmira Patel, and Mackenzie Willard. 2015. Demographic and experiential factors influencing acceptance of sign language animation by deaf users. 
In Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility, pages 147–154. ACM. Hernisa Kacorri, Ali Raza Syed, Matt Huenerfauth, and Carol Neidle. 2016. Centroid-based exemplar selection of asl non-manual expressions using multidimensional dynamic time warping and mpeg4 features. In Proceedings of the 7th Workshop on the Representation and Processing of Sign Languages: Corpus Mining, The 10th International Conference on Language Resources and Evaluation (LREC 2016), Portoroz, Slovenia. http://huenerfauth.ist.rit.edu/pubs/lrec2016.pdf. Hernisa Kacorri. 2015. Tr-2015001: A survey and critique of facial expression synthesis in sign language animation. Technical report, The Graduate Center, CUNY. http://academicworks.cuny.edu/gc cs tr/403. Jennifer Listgarten, Radford M Neal, Sam T Roweis, and Andrew Emili. 2004. Multiple alignment of continuous time series. In Advances in neural information processing systems, pages 817–824. Jennifer Listgarten. 2007. Analysis of sibling time series data: alignment and difference detection. Ph.D. thesis, University of Toronto. Carol Neidle, Jingjing Liu, Bo Liu, Xi Peng, Christian Vogler, and Dimitris Metaxas. 2014. Computerbased tracking, analysis, and visualization of linguistically significant nonmanual events in american sign language (asl). In LREC Workshop on the Representation and Processing of Sign Languages: Beyond the Manual Channel. Citeseer. Karl Pearson. 1901. Principal components analysis. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 6(2):559. Alan B Poritz. 1988. Hidden markov models: A guided tour. In Acoustics, Speech, and Signal Processing, 1988. ICASSP-88., 1988 International Conference on, pages 7–13. IEEE. Christoph Schmidt, Oscar Koller, Hermann Ney, Thomas Hoyoux, and Justus Piater. 2013. Enhancing gloss-based corpora with facial features using active appearance models. In International Symposium on Sign Language Translation and Avatar Technology, volume 2. Daniel Stein, Christoph Schmidt, and Hermann Ney. 2012. Analysis, preparation, and optimization of statistical sign language machine translation. Machine Translation, 26(4):325–357. Carol Bloomquist Traxler. 2000. The stanford achievement test: National norming and performance standards for deaf and hard-of-hearing students. Journal of deaf studies and deaf education, 5(4):337–348. Technologies Visage. 2016. Face tracking. https://visagetechnologies.com/products-andservices/visagesdk/facetrack. Accessed: 2016-0310. A Appendix: Supplemental Material In Section 2.3, we made use of a freely available CPM implementation available from http://www.cs.toronto.edu/∼jenn/CPM/ in MATLAB, Version 8.5.0.197613 (R2015a). One parameter for regularizing the latent trace (Listgarten, 2007) is a smoothing parameter (λ), with values being dataset-dependent. To select a good λ, we experimented with held-out data and found that λ = 4 and NumberOfIterations = 3 resulted in a latent trace curve that captures the shape of the ASL features well. Other CPM parameters were: • USE SPLINE = 0: if set to 1, uses spline scaling rather than HMM scale states • oneScaleOnly = 0: no HMM scale states (only a single global scaling factor is applied to each time series.) • extraPercent(ε) = 0.05: slack on the length of the latent trace M, where M = (2 + ε)N. 
• learnStateTransitions = 0: whether to learn the HMM state-transition probabilities • learnGlobalScaleFactor = 1: learn single global scale factor for each time series Section 3.1 described how the centroids were selected from among videos in the Boston University dataset (Neidle et al., 2014), and the gold standard videos were selected from among videos in a different dataset (Huenerfauth and Kacorri, 2014). Table 3 lists the code names of the selected videos, using the nomenclature of each dataset. Subcategory Centroid Codename Gold-Standard Codename YesNo A 2011-12-01 0037-cam2-05 Y4 YesNo B 2011-12-01 0037-cam2-09 Y3 WhQuestion A 2011-12-01 0038-cam2-05 W1 WhQuestion B 2011-12-01 0038-cam2-07 W2 Rhetorical A 2011-12-01 0041-cam2-04 R3 Rhetorical B 2011-12-01 0041-cam2-02 R9 Topic A 2012-01-27 0050-cam2-05 T4 Topic B 2012-01-27 0051-cam2-09 T3 Negative A 2012-01-27 0051-cam2-03 N2 Negative B 2012-01-27 0051-cam2-30 N5 Table 3: Codenames of videos selected as centoids and gold standards for comparison in section 3.1. 2092 Figure 6: Example of CPM modeling for Rhetorical B: (a) training examples before CPM (each plot shows one of the 14 face features over time, with 8 colored lines in each plot showing each of the 8 training examples), (b) after CPM time-alignment and rescaling, and (c) the final latent trace based upon all 8 examples. 2093
2016
196
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2094–2103, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Evaluating Sentiment Analysis in the Context of Securities Trading Siavash Kazemian and Shunan Zhao and Gerald Penn Department of Computer Science University of Toronto {kazemian,szhao,gpenn}@cs.toronto.edu Abstract There are numerous studies suggesting that published news stories have an important effect on the direction of the stock market, its volatility, the volume of trades, and the value of individual stocks mentioned in the news. There is even some published research suggesting that automated sentiment analysis of news documents, quarterly reports, blogs and/or twitter data can be productively used as part of a trading strategy. This paper presents just such a family of trading strategies, and then uses this application to re-examine some of the tacit assumptions behind how sentiment analyzers are generally evaluated, in spite of the contexts of their application. This discrepancy comes at a cost. 1 Introduction The proliferation of opinion-rich text on the World Wide Web, which includes anything from product reviews to political blog posts, led to the growth of sentiment analysis as a research field more than a decade ago. The market need to quantify opinions expressed in social media and the blogosphere has provided a great opportunity for sentiment analysis technology to make an impact in many sectors, including the financial industry, in which interest in automatically detecting news sentiment in order to inform trading strategies extends back at least 10 years. In this case, sentiment takes on a slightly different meaning; positive sentiment is not the emotional and subjective use of laudatory language. Rather, a news article that contains positive sentiment is optimistic about the future financial prospects of a company. Zhang and Skiena (2010) experimented with news sentiment to inform simple market neutral trading algorithms, and produced an impressive maximum yearly return of around 30% — even more when using sentiment from blogs and twitter data. They did so, however, without an appropriate baseline, making it very difficult to appreciate the significance of this number. Using a very standard, and in fact somewhat dated sentiment analyzer, we are regularly able to garner annualized returns over twice that percentage, and in a manner that highlights two of the better design decisions that Zhang and Skiena (2010) made, viz., (1) their decision to trade based upon numerical SVM scores rather than upon discrete positive or negative sentiment classes, and (2) their decision to go long (resp., short) in the n best- (worst-) ranking securities rather than to treat all positive (negative) securities equally. On the other hand, we trade based upon the raw SVM score itself, rather than its relative rank within a basket of other securities as Zhang and Skiena (2010) did, and we experimentally tune a threshold for that score that determines whether to go long, neutral or short. We sampled our stocks for both training and evaluation in two runs, one without survivor bias, the tendency for long positions in stocks that are publicly traded as of the date of the experiment to pay better using historical trading data than long positions in random stocks sampled on the trading days themselves. 
Most of the evaluations of sentiment-based trading either unwittingly adopt this bias, or do not need to address it because their returns are computed over very brief historical periods. We also provide appropriate trading baselines as well as Sharpe ratios (Sharpe, 1966) to attempt to quantify the relative risk inherent to our experimental strategies. As tacitly assumed by most of the work on this subject, our trading strategy is not portfolio-limited, and our returns are calculated on a percentage basis with theoretical, commission-free trades. 2094 It is important to understand at the outset, however, that the purpose of this research was not to beat Zhang and Skiena’s (2010) returns (although we have), nor merely to conduct the first properly controlled, sufficiently explicit, scientific test of the descriptive hypothesis that sentiment analysis is of benefit to securities trading (although, to our knowledge, we did). The main purpose of this study was in fact to reappraise the evaluation standards used by the sentiment analysis community. It is not at all uncommon within this community to evaluate a sentiment analyzer with a variety of classification accuracy or hypothesis testing scores such as F-measures, SARs, kappas or Krippendorf alphas derived from human-subject annotations — even when more extensional measures are available, such as actual market returns from historical data in the case of securities trading. With Hollywood films, another popular domain for automatic sentiment analysis, one might refer to box-office returns or the number of award nominations that a film receives rather than to its star-rankings on review websites where pile-on and confirmation biases are widely known to be rampant. Are the opinions of human judges, paid or unpaid, a sufficient proxy for the business cases that actually drive the demand for sentiment analyzers? We regret to report that they do not seem to be. As a case study to demonstrate this point (Section 4.3), we exhibit one particular modification to our experimental financial sentiment analyzer that, when evaluated against an evaluation test set sampled from the same pool of human-subject annotations as the analyzer’s training data, returns poorer performance, but when evaluated against actual market returns, yields better performance. This should worry any researcher who relies on classification accuracies, because the improvements that they report, whether due to better feature selection or different pattern recognition algorithms, may in fact not be improvements at all. Differences in the amount or degree of improvement might arguably be rescalable, but Section 4.3 shows that such intrinsic measures are not even accurate up to a determination of the delta’s sign. On the other hand, the results reported here should not be construed as an indictment of sentiment analysis as a technology or its potential application. In fact, one of our baselines alternatively attempts to train the same classifier directly on market returns, and the experimental approach handily beats that, too. It is important to train on human-annotated sentiments, but then it is equally important to tune, and eventually evaluate, on an empirically grounded task-specific measure, such as market returns. This paper thus presents, to our knowledge, the first real proof that sentiment is worth analyzing in this or any other domain. 
A likely machine-learning explanation for this experimental result is that whenever two unbiased estimators are pitted against each other, they often result in an improved combined performance because each acts as a regularizer against the other. If true, this merely attests to the relative independence of task-based and human-annotated knowledge sources. A more HCI-oriented view, however, would argue that direct human-subject annotations are highly problematic unless the annotations have been elicited in manner that is ecologically valid. When human subjects are paid to annotate quarterly reports or business news, they are paid regardless of the quality of their annotations, the quality of their training, or even their degree of comprehension of what they are supposed to be doing. When human subjects post film reviews on web-sites, they are participating in a cultural activity in which the quality of the film under consideration is only one factor. These sources of annotation have not been properly controlled in previous experiments on sentiment analysis. Regardless of the explanation, this is a lesson that applies to many more areas of NLP than just sentiment analysis, and to far more recent instances of sentiment analysis than the one that we based our experiments on here. Indeed, we chose sentiment analysis because this is an area that can set a higher standard; it has the right size for an NLP component to be embedded in real applications and to be evaluated properly. This is noteworthy because it is challenging to explain why recent publications in sentiment analysis research would so dramatically increase the value that they assign to sentence-level sentiment scoring algorithms based on syntactically compositional derivations of “good-for/ bad-for” annotation (Anand and Reschke, 2010; Deng et al., 2013), when statistical parsing itself has spent the last twenty-five years staggering through a linguistically induced delirium as it attempts to document any of its putative advances without recourse to clear empirical evidence that PTB-style syntactic derivations are a reliable approximation of seman2095 tic content or structure. We submit, in light of our experience with the present study, that the most crucial obstacle facing the state of the art in sentiment analysis is not a granularity problem, nor a pattern recognition problem, but an evaluation problem. Those evaluations must be task-specific to be reliable, and sentiment analysis, in spite of our careless use of the term in the NLP community, is not a task. Stock trading is a task — one of many in which a sentiment analyzer is a potentially useful component. This paper provides an example of how to test that utility. 2 Related Work in Financial Sentiment Analysis Studies confirming the relationship between media and market performance date back to at least Niederhoffer (1971), who looked at NY Times headlines and determined that large market changes were more likely following world events than on random days. Conversely, Tetlock (2007) looked at media pessimism and concluded that high media pessimism predicts downward prices. Tetlock (2007) also developed a trading strategy, achieving modest annualized returns of 7.3%. Engle and Ng (1993) looked at the effects of news on volatility, showing that bad news introduces more volatility than good news. Chan (2003) claimed that prices are slow to reflect bad news and stocks with news exhibit momentum. 
Antweiler and Frank (2004) showed that there is a significant, but negative correlation between the number of messages on financial discussion boards about a stock and its returns, but that this trend is economically insignificant. Aside from Tetlock (2007), none of this work evaluated the effectiveness of an actual sentiment-based trading strategy. There is, of course, a great deal of work on automated sentiment analysis itself; see Pang and Lee (2008) for a survey. More recent developments germane to our work include the use of information retrieval weighting schemes (Paltoglou and Thelwall, 2010), with which accuracies of up to 96.6% have models based upon Latent Dirichlet Allocation (LDA) (Lin and He, 2009). There has also been some work that analyzes the sentiment of financial documents without actually using those results in trading strategies (Koppel and Shtrimberg, 2004; Ahmad et al., 2006; Fu et al., 2008; O’Hare et al., 2009; Devitt and Ahmad, 2007; Drury and Almeida, 2011). As to the relationship between sentiment and stock price, Das and Chen (2007) performed sentiment analysis on discussion board posts. Using this, they built a “sentiment index” that computed the timevarying sentiment of the 24 stocks in the Morgan Stanley High-Tech Index (MSH), and tracked how well their index followed the aggregate price of the MSH itself. Their sentiment analyzer was based upon a voting algorithm, although they also discussed a vector distance algorithm that performed better. Their baseline, the Rainbow algorithm, also came within 1 percentage point of their reported accuracy. This is one of the very few studies that has evaluated sentiment analysis itself (as opposed to a sentiment-based trading strategy) against market returns (versus gold-standard sentiment annotations). Das and Chen (2007) focused exclusively on discussion board messages and their evaluation was limited to the stocks on the MSH, whereas we focus on Reuters newswire and evaluate over a wide range of NYSE-listed stocks and market capitalization levels. Butler and Keselj (2009) try to determine sentiment from corporate annual reports using both character n-gram profiles and readability scores. They also developed a sentiment-based trading strategy with high returns, but do not report how the strategy works or how they computed the returns, making the results difficult to compare to ours. Basing a trading strategy upon annual reports also calls into question the frequency with which the trading strategy could be exercised. The work most similar to ours is Zhang and Skiena’s (2010). They look at both financial blog posts and financial news, forming a market-neutral trading strategy whereby each day, companies are ranked by their reported sentiment. The strategy then goes long and short on equal numbers of positive- and negative-sentiment stocks, respectively. They conduct their trading evaluation over the period from 2005 to 2009, and report a yearly return of roughly 30% when using news data, and yearly returns of up to 80% when they use Twitter and blog data. Crucially, they trade based upon the ranked relative order of documents by sentiment rather than upon the documents’ raw sentiment scores. Zhang and Skiena (2010) compare their strategy to two baselines. The “Worst-sentiment” Strategy trades the opposite of their strategy: short 2096 on positive-sentiment stocks and long on negative sentiment stocks. The “Random-selection” Strategy randomly picks stocks to go long and short on. 
As trading strategies, these baselines set a very low standard. Our evaluation uses standard trading benchmarks such as momentum trading and holding the S&P, as well as oracle trading strategies over the same holding periods. 3 Method and Materials 3.1 News Data Our dataset combines two collections of Reuters news documents. The first was obtained for a roughly evenly weighted collection of 22 small, mid- and large-cap companies, randomly sampled from the list of all companies traded on the NYSE as of 10th March, 1997. The second was obtained for a collection of 20 companies randomly sampled from those companies that were publicly traded in March, 1997 and still listed on 10th March, 2013. For both collections of companies, we collected every chronologically third Reuters news document about them from the period March, 1997 to March, 2013. The news articles prior to 10th March, 2005 were used as training data, and the news articles on or after 10th March, 2005 were reserved as testing data.1 We split the dataset at a fixed date rather than randomly in order not to incorporate future news into the classifier through lexical choice. In total, there were 1256 financial news documents. Each was labelled by two human annotators as being negative, positive, or neutral in sentiment. The annotators were instructed to gauge the author’s belief about the company, rather than to make a personal assessment of the company’s prospects. Only the 991 documents that were labelled twice as negative or positive were used for training and evaluation. 3.2 Sentiment Analysis Algorithm For each selected document, we first filter out all punctuation characters and the most common 429 stop words. Because this is a documentlevel sentiment scoring task, not sentence-level, 1An anonymous reviewer expressed concern about chronological bias in the training data relative to the test data because of this decision. While this may indeed influence our results, ecological validity requires us to situate all training data before some date, and all testing data after that date, because traders only have access to historical data before making a future trade. Representation Accuracy bm25 freq 81.143% term presence 80.164% bm25 freq sw 79.827% freq with sw 75.564% freq 79.276% Table 1: Average 10-fold cross validation accuracy of the sentiment classifier using different term-frequency weighting schemes. The same folds were used in all feature sets. our sentiment analyzer is a support-vector machine with a linear kernel function implemented using SVMlight (Joachims, 1999), using all of its default parameters.2 We have experimented with raw term frequencies, binary term-presence features, and term frequencies weighted by the BM25 scheme, which had the most resilience in the study of information-retrieval weighting schemes for sentiment analysis by Paltoglou and Thelwall (2010). We performed 10 fold cross-validation on the training data, constructing our folds so that each contains an approximately equal number of negative and positive examples. This ensures that we do not accidentally bias a fold. Pang et al. (2002) use word presence features with no stop list, instead excluding all words with frequencies of 3 or less. Pang et al. (2002) normalize their word presence feature vectors, rather than term weighting with an IR-based scheme like BM25, which also involves a normalization step. Pang et al. 
(2002) also use an SVM with a linear kernel on their features, but they train and compute sentiment values on film reviews rather than financial texts, and their human judges also classified the training films on a scale from 1 to 5, whereas ours used a scale that can be viewed as being from -1 to 1, with specific qualitative interpretations assigned to each number. Antweiler and Frank (2004) use SVMs with a polynomial kernel (of unstated degree) to train on word frequencies relative to a three-valued classification, but they only count frequencies for the 1000 words with the highest mutual information scores relative to the classification labels. Butler and Keselj (2009) also use an SVM trained upon a very different set of features, and with a polynomial kernel of degree 2There has been one important piece of work (Tang et al., 2015) on neural computing architectures for document-level sentiment scoring (most neural computing architectures for sentiment scoring are sentence-level), but the performance of this type of architecture is not mature enough to replace SVMs just yet. 2097 3. As a sanity check, we measured our sentiment analyzer’s accuracy on film reviews by training and evaluating on Pang and Lee’s (2004) film review dataset, which contains 1000 positively and 1000 negatively labelled reviews. Pang and Lee conveniently labelled the folds that they used when they ran their experiments. Using these same folds, we obtain an average accuracy of 86.85%, which is comparable to Pang and Lee’s 86.4% score for subjectivity extraction. The purpose of this comparison is simply to demonstrate that our implementation is a faithful rendering of Pang and Lee’s (2004) algorithm. Table 1 shows the performance of SVM with BM25 weighting on our Reuters evaluation set versus several baselines. All baselines are identical except for the term weighting schemes used, and whether stop words were removed. As can be observed, SVM-BM25 has the highest sentiment classification accuracy: 80.164% on average over the 10 folds. This compares favourably with previous reports of 70.3% average accuracy over 10 folds on financial news documents (Koppel and Shtrimberg, 2004). We will nevertheless adhere to normalized term presence for now, in order to stay close to Pang and Lee’s (2004) implementation. 3.3 Trading Algorithm Overall, our trading strategy is simple: go long when the classifier reports positive sentiment in a news article about a company, and short when the classifier reports negative sentiment. We will embed the aforementioned sentiment analyzer into three different trading algorithms. In Section 4.1, we use the discrete polarity returned by the classifier to decide whether go long/abstain/short a stock. In Section 4.2.1 we instead use the distance of the current document from the classifier’s decision boundary reported by the SVM. These distances do have meaningful interpretations apart from their internal use in assigning class labels. Platt (Platt, 1999) showed that they can be converted into posterior probabilities, for example, by fitting a sigmoid function onto them, but we will simply use the raw distances. In Section 4.2.2, we impose a safety zone onto the interpretation of these raw distance scores. 4 Experiments In the experiments of this section, we will evaluate an entire trading strategy, which includes the sentiment analyzer and the particulars of the trading algorithm itself. 
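For reference, the Section 3.2 classifier that all of these strategies consume can be approximated with the sketch below. This is a stand-in rather than the exact system: scikit-learn's LinearSVC replaces SVMlight, TF-IDF replaces the BM25 weighting, and sklearn's English stop list replaces the 429-word list used in the paper. Its decision_function provides the raw SVM score used in Section 4.2.

# Minimal stand-in for the Section 3.2 classifier (approximation only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def build_classifier():
    return make_pipeline(
        TfidfVectorizer(stop_words='english'),   # document-level term weighting
        LinearSVC())                             # linear-kernel SVM

def raw_svm_scores(pipeline, documents):
    """Signed distances from the decision boundary (the 'raw SVM score')."""
    return pipeline.decision_function(documents)

# 10-fold cross-validation accuracy on labelled documents (docs, labels):
# scores = cross_val_score(build_classifier(), docs, labels, cv=10)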
The purpose of these experiments is to refine the trading strategy itself and so the sentiment analyzer will be held constant. In Section 4.3, we will hold the trading strategy constant, and instead vary the document representation features in the underlying sentiment analyzer. In all three experiments, we compare the perposition returns of the following four standard strategies, where the number of days for which a position is held remains constant: 1. The momentum strategy computes the price of the stock h days ago, where h is the holding period. Then, it goes long for h days if the previous price is lower than the current price. It goes short otherwise. 2. The S&P strategy simply goes long on the S&P 500 for the holding period. This strategy completely ignores the stock in question and the news about it. 3. The oracle S&P strategy computes the value of the S&P 500 index h days into the future. If the future value is greater than the current day’s value, then it goes long on the S&P 500 index. Otherwise, it goes short. 4. The oracle strategy computes the value of the stock h days into the future. If the future value is greater than the current day’s value, then it goes long on the stock. Otherwise, it goes short. The oracle and oracle S&P strategies are included as toplines to determine how close the experimental strategies come to ones with perfect knowledge of the future. “Market-trained” is the same as “experimental” at test time, but trains the sentiment analyzer on the market return of the stock in question for h days following a training article’s publication, rather than the article’s annotation. 4.1 Experiment One: Utilizing Sentiment Labels Given a news document for a publicly traded company, the trading agent first computes the sentiment class of the document. If the sentiment is positive, the agent goes long on the stock on the date the news is released; if negative, it goes short. 2098 Strategy Period Return S. Ratio Experimental 30 days -0.037% -0.002 5 days 0.763% 0.094 3 days 0.742% 0.100 1 day 0.716% 0.108 Momentum 30 days 1.176% 0.066 5 days 0.366% 0.045 3 days 0.713% 0.096 1 day 0.017% -0.002 S&P 30 days 0.318% 0.059 5 days -0.038% -0.016 3 days -0.035% -0.017 1 day 0.046% 0.036 Oracle S&P 30 days 3.765% 0.959 5 days 1.617% 0.974 3 days 1.390% 0.949 1 day 0.860% 0.909 Oracle 30 days 11.680% 0.874 5 days 5.143% 0.809 3 days 4.524% 0.761 1 day 3.542% 0.630 Market-trained 30 days 0.286% 0.016 5 days 0.447% 0.054 3 days 0.358% 0.048 1 day 0.533% 0.080 Table 2: Returns and Sharpe ratios for the Experimental, baseline and topline trading strategies over 30, 5, 3, and 1 day(s) holding periods. All trades are made based on the adjusted closing price on this date. We evaluate the performance of this strategy using four different holding periods: 30, 5, 3, and 1 day(s). The returns and Sharpe ratios are presented in Table 2 for the four different holding periods and the five different trading strategies. The Sharpe ratio is a return-to-risk ratio, with a high value indicating good return for relatively low risk. The Sharpe ratio is calculated as: S = E[Ra−Rb] √ var(Ra−Rb), where Ra is the return of a single asset and Rb is the risk-free return of a 10-year U.S. Treasury note. The returns from this experimental trading system are fairly low, although they do beat the baselines. 
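The Sharpe ratios in Table 2 follow the definition above, S = E[Ra - Rb] / sqrt(var(Ra - Rb)); a minimal sketch of the computation from per-position returns is given below. The risk-free returns Rb are assumed to come from 10-year U.S. Treasury yields over matching holding periods, and the numbers in the usage comment are made up.

# Sketch of the Sharpe ratio used above: S = E[Ra - Rb] / sqrt(var(Ra - Rb)).
import numpy as np

def sharpe_ratio(strategy_returns, risk_free_returns):
    excess = np.asarray(strategy_returns) - np.asarray(risk_free_returns)
    return excess.mean() / excess.std(ddof=1)    # sample standard deviation

# Example with made-up per-position percent returns:
# sharpe_ratio([0.8, -0.2, 1.1, 0.4], [0.05, 0.05, 0.05, 0.05])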
A one-way ANOVA test among the experimental, momentum and S&P strategies using the percent returns from the individual trades yields p values of 0.06493, 0.08162, 0.1792, and 0.4164, respectively, thus failing to reject the null hypothesis that the returns are not significantly higher.3 3An anonymous reviewer observed that Tetlock (2007) showed a statistically significant improvement from the use of sentiment, apparently contradicting this result. Tetlock’s (2007) sentiment-based trading strategy used a safety zone (see Section 4.2.2), and was never compared to a realistic baseline or control strategy. Instead, Tetlock’s (2007) significance test was conducted to demonstrate that his returns (positive in 12 of 15 calendar years of historical market data) Figure 1: Percent returns for 1 day holding period versus market capitalization of the traded stocks. Furthermore, the means and medians of all three trading strategies are approximately the same and centred around 0. The standard deviations of the experimental strategy and the momentum strategy are nearly identical, differing only in the thousandths digit. The standard deviations for the S&P strategy differ from the other two strategies due to the fact that the strategy buys and sells the entire S&P 500 index and not the individual stocks described in the news articles. There is, in fact, no convincing evidence that discrete sentiment class leads to an improved trading strategy from this or any other study with which we are familiar, based on their published details. One may note, however, that the returns from the experimental strategy have slightly higher Sharpe ratios than either of the baselines. One may also note that using a sentiment analyzer mostly beats training directly on market data. This vindicates using sentiment annotation as an information source. Figure 1 shows the market capitalizations of each individual trade’s companies plotted against their percent return with a 1 day holding period. The correlation between the two variables is not significant. Returns for the other holding periods are similarly dispersed. The importance of having good baselines is demonstrated by the fact that when we annualize our returns for the 3-day holding period, we get 70.086%. This number appears very high, but the annualized return from the momentum strategy is were unlikely to have been generated by chance from a normal distribution centred at zero. 2099 70.066%4, which is not significantly lower. Figure 2 shows the percent change in share value plotted against the raw SVM score for the different holding periods. We can see a weak correlation between the two. For the 30 days, 5 days, 3 days, and 1 day holding periods, the correlations are 0.017, 0.16, 0.16, and 0.16, respectively. The line of best fit is shown. This prompts our next experiment. 4.2 Utilizing SVM scores 4.2.1 Experiment Two: Variable Single Threshold Before, we labelled documents as positive (negative) when the score was above (below) 0, because 0 was the decision boundary. But 0 might not be the best threshold, θ, for high returns. To determine θ, we divided the evaluation dataset, i.e. the dataset with news articles dated on or after March 10, 2005, into two folds having an equal number of documents with positive and negative sentiment. We used the first fold to determine θ and traded using the data from the second fold and θ. 
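A sketch of this tuning procedure appears below; the trading rule itself is spelled out in the next paragraph. Here simulate_return is a hypothetical helper that applies the long/short rule to a development fold and reports the average per-position return, and the grid of candidate thresholds follows the sweep from -1 to 1 in increments of 0.1 described below.

# Sketch of Experiment Two's threshold tuning (simulate_return is hypothetical).
import numpy as np

def simulate_return(scores, returns, theta):
    scores, returns = np.asarray(scores), np.asarray(returns)
    positions = np.where(scores > theta, 1.0, -1.0)   # +1 long, -1 short
    return float(np.mean(positions * returns))        # average per-position return

def tune_threshold(dev_scores, dev_returns):
    grid = np.arange(-1.0, 1.0 + 1e-9, 0.1)           # theta from -1 to 1, step 0.1
    return max(grid, key=lambda t: simulate_return(dev_scores, dev_returns, t))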
For every news article, if the SVM score for that article is above (below) θ, then we go long (short) on the appropriate stock on the day the article was released. A separate theta was determined for each holding period. We varied θ from −1 to 1 in increments of 0.1. Using this method, we were able to obtain significantly higher returns. In order of 30, 5, 3, and 1 day holding periods, the returns were 0.057%, 1.107%, 1.238%, and 0.745% (p < 0.001 in every case). This is a large improvement over the previous returns, as they are average per-position figures.5 4.2.2 Experiment Three: Safety Zones For every news item classified, SVM outputs a score. For a binary SVM with a linear kernel function f, given some feature vector x, f(x) can be viewed as the signed distance of x from the decision boundary (Boser et al., 1992). It is then possibly justified to interpret raw SVM scores as degrees to which an article is positive or negative. As in the previous section, we separate the evaluation set into the same two folds, only now we 4The momentum strategy has a different number of possible trades in any actual calendar year because it is a function of the holding period. 5Training directly on market data, by comparison, yields -0.258%, -0.282%, -0.036% and -0.388%, respectively. Representation Accuracy 30 days 5 days 3 days 1 day term presence 80.164% 3.843% 1.851% 1.691% 2.251% bm25 freq 81.143% 1.110% 1.770% 1.781% 0.814% bm25 freq dnc 62.094% 3.458% 2.834% 2.813% 2.586% bm25 freq sw 79.827% 0.390% 1.685% 1.581% 1.250% freq 79.276% 1.596% 1.221% 1.344% 1.330% freq with sw 75.564% 1.752% 0.638% 1.056% 2.205% Table 3: Sentiment classification accuracy (average 10-fold cross-validation) and trade returns of different feature sets and term frequency weighting schemes in Exp. 3. The same folds were used for the different representations. The nonannualized returns are presented in columns 3-6. use two thresholds, θ ≥ζ. We will go long when the SVM score is above θ, abstain when the SVM score is between θ and ζ, and go short when the SVM score is below ζ. This is a strict generalization of the above experiment, in which ζ = θ. For convenience, we will assume in this section that ζ = −θ, leaving us again with one parameter to estimate. We again vary θ from 0 to 1 in increments of 0.1. Figure 3 shows the returns as a function of θ for each holding period on the development dataset. If we increased the upper bound on θ to be greater than 1, then there would be too few trading examples (less than 10) to reliably calculate the Sharpe ratio. Using this method with θ = 1, we were able to obtain even higher returns: 3.843%, 1.851%, 1.691, and 2.251% for the 30, 5, 3, and 1 day holding periods, versus 0.057%, 1.107%, 1.238%, and 0.745% in the second taskbased experiment. 4.3 Experiment Four: Feature Selection In our final experiment, let us now hold the trading strategy fixed (at the third one, with safety zones) and turn to the underlying sentiment analyzer. With a good trading strategy in place, it is clearly possible to vary some aspect of the sentiment analyzer in order to determine its best setting in this context. We will measure both market return and classifier accuracy to determine whether they agree. Is the latter a suitable proxy for the former? Indeed, we may hope that classifier accuracy will be more portable to other possible tasks, but then it must at least correlate well with task-based performance. 
In addition to evaluating those feature sets attempted in Section 3.2, we now hypothesize that the passive voice may be useful to emphasize in our representations, as the existential passive can be used to evade responsibility. So we add to the 2100 Figure 2: Percent change of trade returns plotted against SVM values for the 1, 3, 5, and 30 day holding periods in Exp. 1. Graphs are cropped to zoom in. Figure 3: Returns for different thresholds on the development data for 30, 5, 3, and 1 day holding periods in Exp. 2 with safety zone. 2101 BM25 weighted vector the counts of word tokens ending in “n” or “d” as well as the total count of every conjugated form of the copular verb: “be”, “is”, “am”, “are”, “were”, “was”, and “been”. These three features are superficial indicators of the passive voice. Clearly, we could have used a part-of-speech tagger to detect the passive voice more reliably, but we are more interested here in how well our task-based evaluation will correspond to a more customary classifier-accuracy evaluation, rather than finding the world’s best indicators of the passive voice. Table 3 presents returns obtained from these 6 feature sets. The feature set with BM25-weighted term frequencies plus the number of copulars and tokens ending in “n”, “d” (bm25 freq dnc) yields higher returns than any other representation attempted on the 5, 3, and 1 day holding periods, and the second-highest on the 30 days holding period. But it has the worst classification accuracy by far: a full 18 percentage points below term presence. This is a very compelling illustration of how misleading an intrinsic evaluation can be. 5 Conclusion In this paper, we examined sentiment analysis applied to stock trading strategies. We built a binary sentiment classifier that achieves high accuracy when tested on movie data and financial news data from Reuters. In four task-based experiments, we evaluated the usefulness of sentiment analysis to simple trading strategies. Although high annual returns are achieved simply by utilizing sentiment labels while trading, they can be increased by incorporating the output of the SVM’s decision function. But classification accuracy alone is not an accurate predictor of task-based performance. This calls into question the suitability of intrinsic sentiment classification accuracy, particularly (as here) when the relative cost of a task-based evaluation may be comparably low. We have also determined that training on human-annotated sentiment does in fact perform better than training on market returns themselves. So sentiment analysis is an important component, but it must be tuned against task data. Our price data only included adjusted opening and closing prices and most of our news data contain only the date of the article, with no specific time. This limits our ability to test much shorterterm trading strategies. Deriving sentiment labels for supervised training is an important topic for future study, as is inferring the sentiment of published news from stock price fluctuations instead of the reverse. We should also study how “sentiment” is defined in the financial world. This study has used a rather general definition of news sentiment, and a more precise definition may improve trading performance. Acknowledgments This research was supported by the Canadian Network Centre of Excellence in Graphics, Animation and New Media (GRAND). References Khurshid Ahmad, David Cheng, and Yousif Almas. 2006. Multi-lingual sentiment analysis of financial news streams. 
In Proceedings of the 1st International Conference on Grid in Finance. Pranav Anand and Kevin Reschke. 2010. Verb classes as evaluativity functor classes. In Interdisciplinary Workshop on Verbs: The Identification and Representation of Verb Features (Verb 2010). Werner Antweiler and Murray Z Frank. 2004. Is all that talk just noise? the information content of internet stock message boards. The Journal of Finance, 59(3):1259–1294. Bernhard E. Boser, Isabelle M. Guyon, and Vladimir N. Vapnik. 1992. A training algorithm for optimal margin classifiers. In Proceedings of the fifth annual workshop on Computational learning theory, COLT ’92, pages 144–152, New York, NY, USA. ACM. Matthew Butler and Vlado Keselj. 2009. Financial forecasting using character n-gram analysis and readability scores of annual reports. In Proceedings of Canadian AI’2009, Kelowna, BC, Canada, May. Wesley S. Chan. 2003. Stock price reaction to news and no-news: Drift and reversal after headlines. Journal of Financial Economics, 70(2):223–260. Sanjiv R. Das and Mike Y. Chen. 2007. Yahoo! for amazon: Sentiment extraction from small talk on the web. Management Science, 53(9):1375–1388. Lingjia Deng, Yoonjung Choi, and Janyce Wiebe. 2013. Benefactive/malefactive event and writer attitude annotation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 120–125. Association for Computational Linguistics. Ann Devitt and Khurshid Ahmad. 2007. Sentiment polarity identification in financial news: A cohesionbased approach. In Proceedings of the ACL. 2102 Brett Drury and J. J. Almeida. 2011. Identification of fine grained feature based event and sentiment phrases from business news stories. In Proceedings of the International Conference on Web Intelligence, Mining and Semantics, WIMS ’11, pages 27:1–27:7, New York, NY, USA. ACM. Robert F. Engle and Victor K. Ng. 1993. Measuring and testing the impact of news on volatility. The Journal of Finance, 48(5):1749–1778. Tak-Chung Fu, Ka ki Lee, Donahue C. M. Sze, Fu-Lai Chung, Chak man Ng, and Chak man Ng. 2008. Discovering the correlation between stock time series and financial news. In Web Intelligence, pages 880–883. Thorsten Joachims. 1999. Making large-scale svm learning practical. advances in kernel methodssupport vector learning, b. sch¨olkopf and c. burges and a. smola. Moshe Koppel and Itai Shtrimberg. 2004. Good news or bad news? let the market decide. In AAAI Spring Symposium on Exploring Attitude and Affect in Text, pages 86–88. Press. Chenghua Lin and Yulan He. 2009. Joint sentiment/topic model for sentiment analysis. In Proceedings of the 18th ACM conference on Information and knowledge management, CIKM ’09, pages 375–384, New York, NY, USA. ACM. Victor Niederhoffer. 1971. The analysis of world events and stock prices. Journal of Business, pages 193–219. Neil O’Hare, Michael Davy, Adam Bermingham, Paul Ferguson, P´araic Sheridan, Cathal Gurrin, and Alan F. Smeaton. 2009. Topic-dependent sentiment analysis of financial blogs. In Proceedings of the 1st international CIKM workshop on Topicsentiment analysis for mass opinion measurement. Georgios Paltoglou and Mike Thelwall. 2010. A study of information retrieval weighting schemes for sentiment analysis. In Proceedings of the ACL, pages 1386–1395. Association for Computational Linguistics. Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the ACL, pages 271–278. 
Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and trends in information retrieval, 2(1-2):1–135. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing - Volume 10, EMNLP ’02, pages 79–86, Stroudsburg, PA, USA. Association for Computational Linguistics. John C. Platt. 1999. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In Advances in Large Margin Classifiers, pages 61–74. MIT Press. William F Sharpe. 1966. Mutual fund performance. The Journal of business, 39(1):119–138. Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classification. In Proceedings of EMNLP, pages 1422–1432. Paul C. Tetlock. 2007. Giving content to investor sentiment: The role of media in the stock market. The Journal of Finance, 62(3):1139–1168. Wenbin Zhang and Steven Skiena. 2010. Trading strategies to exploit blog and news sentiment. In The 4th International AAAI Conference on Weblogs and Social Media. 2103
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2104–2113, Berlin, Germany, August 7-12, 2016. ©2016 Association for Computational Linguistics Edge-Linear First-Order Dependency Parsing with Undirected Minimum Spanning Tree Inference Effi Levi1 Roi Reichart2 Ari Rappoport1 1Institute of Computer Science, The Hebrew University 2Faculty of Industrial Engineering and Management, Technion, IIT {efle|arir}@cs.huji.ac.il [email protected] Abstract The run time complexity of state-of-the-art inference algorithms in graph-based dependency parsing is super-linear in the number of input words (n). Recently, pruning algorithms for these models have been shown to cut a large portion of the graph edges, with minimal damage to the resulting parse trees. Solving the inference problem in run time complexity determined solely by the number of edges (m) is hence of obvious importance. We propose such an inference algorithm for first-order models, which encodes the problem as a minimum spanning tree (MST) problem in an undirected graph. This allows us to utilize state-of-the-art undirected MST algorithms whose run time is O(m) at expectation and with a very high probability. A directed parse tree is then inferred from the undirected MST and is subsequently improved with respect to the directed parsing model through local greedy updates, both steps running in O(n) time. In experiments with 18 languages, a variant of the first-order MSTParser (McDonald et al., 2005b) that employs our algorithm performs very similarly to the original parser that runs an O(n2) directed MST inference. 1 Introduction Dependency parsers are major components of a large number of NLP applications. As application models are applied to constantly growing amounts of data, efficiency becomes a major consideration. In graph-based dependency parsing models (Eisner, 2000; McDonald et al., 2005a; McDonald et al., 2005b; Carreras, 2007; Koo and Collins, 2010b), given an n word sentence and a model order k, the run time of exact inference is O(n3) for k = 1 and O(nk+1) for k > 1 in the projective case (Eisner, 1996; McDonald and Pereira, 2006). In the non-projective case it is O(n2) for k = 1 and NP-hard for k ≥2 (McDonald and Satta, 2007). 1 Consequently, a number of approximate parsers have been introduced, utilizing a variety of techniques: the Eisner algorithm (McDonald and Pereira, 2006), belief propagation (Smith and Eisner, 2008), dual decomposition (Koo and Collins, 2010b; Martins et al., 2013) and multi-commodity flows (Martins et al., 2009; Martins et al., 2011). The run time of all these approximations is super-linear in n. Recent pruning algorithms for graph-based dependency parsing (Rush and Petrov, 2012; Riedel et al., 2012; Zhang and McDonald, 2012) have been shown to cut a very large portion of the graph edges, with minimal damage to the resulting parse trees. For example, Rush and Petrov (2012) demonstrated that a single O(n) pass of vine-pruning (Eisner and Smith, 2005) can preserve > 98% of the correct edges, while ruling out > 86% of all possible edges. Such results give strong motivation to solving the inference problem in a run time complexity that is determined solely by the number of edges (m). 2 1We refer to parsing approaches that produce only projective dependency trees as projective parsing and to approaches that produce all types of dependency trees as non-projective parsing.
2Some pruning algorithms require initial construction of the full graph, which requires exactly n(n −1) edge weight computations. Utilizing other techniques, such as lengthdictionary pruning, graph construction and pruning can be 2104 In this paper we propose to formulate the inference problem in first-order (arc-factored) dependency parsing as a minimum spanning tree (MST) problem in an undirected graph. Our formulation allows us to employ state-of-the-art algorithms for the MST problem in undirected graphs, whose run time depends solely on the number of edges in the graph. Importantly, a parser that employs our undirected inference algorithm can generate all possible trees, projective and non-projective. Particularly, the undirected MST problem (§ 2) has a randomized algorithm which is O(m) at expectation and with a very high probability ((Karger et al., 1995)), as well as an O(m · α(m, n)) worst-case deterministic algorithm (Pettie and Ramachandran, 2002), where α(m, n) is a certain natural inverse of Ackermann’s function (Hazewinkel, 2001). As the inverse of Ackermann’s function grows extremely slowly 3 the deterministic algorithm is in practice linear in m (§ 3). In the rest of the paper we hence refer to the run time of these two algorithms as practically linear in the number of edges m. Our algorithm has four steps (§ 4). First, it encodes the first-order dependency parsing inference problem as an undirected MST problem, in up to O(m) time. Then, it computes the MST of the resulting undirected graph. Next, it infers a unique directed parse tree from the undirected MST. Finally, the resulting directed tree is greedily improved with respect to the directed parsing model. Importantly, the last two steps take O(n) time, which makes the total run time of our algorithm O(m) at expectation and with very high probability. 4 We integrated our inference algorithm into the first-order parser of (McDonald et al., 2005b) and compared the resulting parser to the original parser which employs the Chu-Liu-Edmonds algorithm (CLE, (Chu and Liu, 1965; Edmonds, 1967)) for inference. CLE is the most efficient exact inference algorithm for graph-based first-order nonprojective parsers, running at O(n2) time.5 jointly performed in O(n) steps. We therefore do not include initial graph construction and pruning in our complexity computations. 3α(m, n) is less than 5 for any practical input sizes (m, n). 4The output dependency tree contains exactly n−1 edges, therefore m ≥n −1, which makes O(m) + O(n) = O(m). 5CLE has faster implementations: O(m+nlogn) (Gabow et al., 1986) as well as O(mlogn) for sparse graphs (Tarjan, 1977), both are super-linear in n for connected graphs. We reWe experimented (§ 5) with 17 languages from the CoNLL 2006 and 2007 shared tasks on multilingual dependency parsing (Buchholz and Marsi, 2006; Nilsson et al., 2007) and in three English setups. Our results reveal that the two algorithms perform very similarly. While the averaged unlabeled attachment accuracy score (UAS) of the original parser is 0.97% higher than ours, in 11 of 20 test setups the number of sentences that are better parsed by our parser is larger than the number of sentences that are better parsed by the original parser. 
Importantly, in this work we present an edge-linear first-order dependency parser which achieves similar accuracy to the existing one, making it an excellent candidate to be used for efficient MST computation in k-best trees methods, or to be utilized as an inference/initialization subroutine as a part of more complex approximation frameworks such as belief propagation. In addition, our model produces a different solution compared to the existing one (see Table 2), paving the way for using methods such as dual decomposition to combine these two models into a superior one. Undirected inference has been recently explored in the context of transition based parsing (G´omez-Rodr´ıguez and Fern´andez-Gonz´alez, 2012; G´omez-Rodr´ıguez et al., 2015), with the motivation of preventing the propagation of erroneous early edge directionality decisions to subsequent parsing decisions. Yet, to the best of our knowledge this is the first paper to address undirected inference for graph based dependency parsing. Our motivation and algorithmic challenges are substantially different from those of the earlier transition based work. 2 Undirected MST with the Boruvka Algorithm In this section we define the MST problem in undirected graphs. We then discuss the Burovka algorithm (Boruvka, 1926; Nesetril et al., 2001) which forms the basis for the randomized algorithm of (Karger et al., 1995) we employ in this paper. In the next section we will describe the Karger et al. (1995) algorithm in more details. Problem Definition. For a connected undirected graph G(V, E), where V is the set of n vertices fer here to the classical implementation employed by modern parsers (e.g. (McDonald et al., 2005b; Martins et al., 2013)). 2105 and E the set of m weighted edges, the MST problem is defined as finding the sub-graph of G which is the tree (a connected acyclic graph) with the lowest sum of edge weights. The opposite problem – finding the maximum spanning tree – can be solved by the same algorithms used for the minimum variant by simply negating the graph’s edge weights. Graph Contraction. In order to understand the Boruvka algorithm, let us first define the Graph Contraction operation. For a given undirected graph G(V, E) and a subset ˜E ⊆E, this operation creates a new graph, GC(VC, EC). In this new graph, VC consists of a vertex for each connected component in ˜G(V, ˜E) (these vertices are referred to as super-vertices). EC, in turn, consists of one edge, (ˆu, ˆv), for each edge (u, v) ∈E \ ˜E, where ˆu, ˆv ∈VC correspond to ˜G’s connected components to which u and v respectively belong. Note that this definition may result in multiple edges between two vertices in VC (denoted repetitive edges) as well as in edges from a vertex in VC to itself (denoted self edges). Algorithm 1 The basic step of the Boruvka algorithm for the undirected MST problem. Contract graph Input: a graph G(V, E), a subset ˜E ⊆E C ←connected components of ˜G(V, ˜E) return GC(C, E \ ˜E) Boruvka-step Input: a graph G(V, E) 1: for all (u, v) ∈E do 2: if w(u, v) < w(u.minEdge) then 3: u.minEdge ←(u, v) 4: end if 5: if w(u, v) < w(v.minEdge) then 6: v.minEdge ←(u, v) 7: end if 8: end for 9: for all v ∈V do 10: Em ←Em ∪{v.minEdge} 11: end for 12: GB(VB, EB) ←Contract graph(G(V, E),Em) 13: Remove from EB self edges and non-minimal repetitive edges 14: return GB(VB, EB), Em The Boruvka-Step. 
Next, we define the basic step of the Borukva algorithm (see example in Fig(a) (b) (c) (d) Figure 1: An illustration of a Boruvka step: (a) The original graph; (b) Choosing the minimal edge for each vertex (marked in red); (c) The contracted graph; (d) The contracted graph after removing one self edge and two non-minimal repetitive edges. ure 1 and pseudocode in Algorithm 1). In each such step, the algorithm creates a subset Em ⊂E by selecting the minimally weighted edge for each vertex in the input graph G(V, E) (Figure 1 (a,b) and Algorithm 1 (lines 1-11)). Then, it performs the contraction operation on the graph G and Em to receive a new graph GB(VB, EB) (Figure 1 (c) and Algorithm 1 (12)). Finally, it removes from EB all self-edges and repetitive edges that are not the minimal edges between the vertices VB’s which they connect (Figure 1 (d) and Algorithm 1 (13)). The set Em created in each such step is guaranteed to consist only of edges that belong to G’s MST and is therefore also returned by the Boruvka step. The Boruvka algorithm runs successive Boruvka-steps until it is left with a single supervertex. The MST of the original graph G is given by the unification of the Em sets returned in each step. The resulting computational complexity is O(m log n) (Nesetril et al., 2001). We now turn to describe how the undirected MST problem can be solved in a time practically linear in the number of graph edges. 3 Undirected MST in Edge Linear Time There are two algorithms that solve the undirected MST problem in time practically linear in the number of edges in the input graph. These algorithms are based on substantially different approaches: one is deterministic and the other is randomized 6. 6Both these algorithms deal with a slightly more general case where the graph is not necessarily connected, in which case the minimum spanning forest (MSF) is computed. In our case, where the graph is connected, the MSF reduces to an MST. 2106 The complexity of the first, deterministic, algorithm (Chazelle, 2000; Pettie and Ramachandran, 2002) is O(m·α(m, n)), where α(m, n) is a natural inverse of Ackermann’s function, whose value for any practical values of n and m is lower than 5. As this algorithm employs very complex datastructures, we do not implement it in this paper. The second, randomized, algorithm (Karger et al., 1995) has an expected run time of O(m + n) (which for connected graphs is O(m)), and this run time is achieved with a high probability of 1 −exp(−Ω(m)). 7 In this paper we employ only this algorithm for first-order graph-based parsing inference, and hence describe it in details in this section. Definitions and Properties. We first quote two properties of undirected graphs (Tarjan, 1983): (1) The cycle property: The heaviest edge in a cycle in a graph does not appear in the MSF; and (2) The cut property: For any proper nonempty subset V ′ of the graph vertices, the lightest edge with exactly one endpoint in V ′ is included in the MSF. We continue with a number of definitions and observations. Given an undirected graph G(V, E) with weighted edges, and a forest F in that graph, F(u, v) is the path in that forest between u and v (if such a path exists), and sF (u, v) is the maximum weight of an edge in F(u, v) (if the path does not exist then sF (u, v) = ∞). An edge (u, v) ∈E is called F-heavy if s(u, v) > sF (u, v), otherwise it is called F-light. An alternative equivalent definition is that an edge is F-heavy if adding it to F creates a cycle in which it is the heaviest edge. 
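To make the two undirected MST components concrete, the Boruvka step of Algorithm 1 above and the randomized recursion presented below as Algorithm 2 (Karger et al., 1995) can be sketched in Python as follows. This is an illustration only, not the authors' implementation: the edge representation as (u, v, weight, eid) tuples, where eid is a label we add so that original edges can be recovered after repeated contractions, is our own assumption, and the F-heavy test uses a plain DFS rather than the linear-time verification procedures of Dixon et al. (1992) and King (1995) that the complexity guarantees rely on.

```python
from collections import defaultdict
import random

def boruvka_step(vertices, edges):
    """One Boruvka step (Algorithm 1): select each vertex's lightest
    incident edge, contract the induced components, then drop self edges
    and non-minimal repetitive edges.

    vertices: iterable of hashable vertex ids
    edges:    list of (u, v, weight, eid) tuples; eid labels the underlying
              original edge and is kept through contractions
    returns:  (super_vertices, contracted_edges, Em)"""
    # Lines 1-11: lightest incident edge per vertex.
    min_edge = {}
    for e in edges:
        u, v, w, _ = e
        for x in (u, v):
            if x not in min_edge or w < min_edge[x][2]:
                min_edge[x] = e
    Em = set(min_edge.values())

    # Connected components of (V, Em) via union-find.
    parent = {v: v for v in vertices}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v, _, _ in Em:
        parent[find(u)] = find(v)
    comp = {v: find(v) for v in vertices}

    # Lines 12-13: contract; keep one minimal edge per super-vertex pair.
    best = {}
    for u, v, w, eid in edges:
        cu, cv = comp[u], comp[v]
        if cu == cv:                      # self edge after contraction
            continue
        key = (min(cu, cv), max(cu, cv))
        if key not in best or w < best[key][2]:
            best[key] = (cu, cv, w, eid)
    return set(comp.values()), list(best.values()), Em

def _f_heavy(forest_adj, u, v, w):
    """Edge (u, v, w) is F-heavy iff u and v are connected in the forest
    and every edge on their forest path is lighter than w. Plain DFS for
    clarity; the paper relies on an O(m) verification procedure instead."""
    stack, seen = [(u, None)], {u}
    while stack:
        x, mx = stack.pop()
        if x == v:
            return mx is not None and mx < w
        for y, pw in forest_adj[x]:
            if y not in seen:
                seen.add(y)
                stack.append((y, pw if mx is None or pw > mx else mx))
    return False                          # disconnected in F -> F-light

def randomized_msf(vertices, edges):
    """Sketch of Algorithm 2 (Karger et al., 1995). Returns the eid labels
    of the minimum spanning forest edges of the input graph."""
    if not edges:
        return set()
    v1, e1, em1 = boruvka_step(vertices, edges)          # Boruvka-step^2
    v2, e2, em2 = boruvka_step(v1, e1)                   # (line 4)
    kept = {e[3] for e in em1 | em2}
    if not e2:
        return kept
    sampled = [e for e in e2 if random.random() < 0.5]   # lines 5-10
    s_verts = {x for u, v, _, _ in sampled for x in (u, v)}
    f_ids = randomized_msf(s_verts, sampled)             # line 11
    by_id = {eid: (u, v, w) for u, v, w, eid in e2}
    forest_adj = defaultdict(list)
    for eid in f_ids:                 # re-express F over e2's vertices
        u, v, w = by_id[eid]
        forest_adj[u].append((v, w))
        forest_adj[v].append((u, w))
    light = [e for e in e2 if not _f_heavy(forest_adj, *e[:3])]  # line 12
    return kept | randomized_msf(v2, light)              # lines 13-14
```

Iterating boruvka_step on its own until a single super-vertex remains, and unioning the successive Em sets, gives the O(m log n) Boruvka algorithm described above; randomized_msf uses the same step as its first stage.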
An important observation (derived from the cycle property) is that for any forest F, no F-heavy edge can possibly be a part of an MSF for G. It has been shown that given a forest F, all the F-heavy edges in G can be found in O(m) time (Dixon et al., 1992; King, 1995). Algorithm. The randomized algorithm can be outlined as follows (see pseudocode in algorithm 2): first, two successive Boruvka-steps are applied to the graph (line 4, Boruvka-step2 stands for two successive Boruvka-steps), reducing the number of vertices by (at least) a factor of 4 to receive a contracted graph GC and an edge set Em (§ 2). Then, a subgraph Gs is randomly constructed, such that each edge in GC, along with 7This complexity analysis is beyond the scope of this paper. Algorithm 2 Pseudocode for the Randomized MSF algorithm of(Karger et al., 1995). Randomized MSF Input: a graph G(V, E) 1: if E is empty then 2: return ∅ 3: end if 4: GC(VC, EC), Em ←Boruvka-step2(G) 5: for all (u, v) ∈EC do 6: if coin-flip == head then 7: Es ←Es ∪{(u, v)} 8: Vs ←Vs ∪{u, v} 9: end if 10: end for 11: F ←Randomized MSF(Gs(Vs, Es)) 12: remove all F-heavy edges from GC(VC, EC) 13: FC ←Randomized MSF(GC(VC, EC)) 14: return FC ∪Em the vertices which it connects, is included in Gs with probability 1 2 (lines 5-10). Next, the algorithm is recursively applied to Gs to obtain its minimum spanning forest F (line 11). Then, all Fheavy edges are removed from GC (line 12), and the algorithm is recursively applied to the resulting graph to obtain a spanning forest FC (line 13). The union of that forest with the edges Em forms the requested spanning forest (line 14). Correctness. The correctness of the algorithm is proved by induction. By the cut property, every edge returned by the Boruvka step (line 4), is part of the MSF. Therefore, the rest of the edges in the original graph’s MSF form an MSF for the contracted graph. The removed F-heavy edges are, by the cycle property, not part of the MSF (line 12). By the induction assumption, the MSF of the remaining graph is then given by the second recursive call (line 13). 4 Undirected MST Inference for Dependency Parsing There are several challenges in the construction of an undirected MST parser: an MST parser that employs an undirected MST algorithm for inference.8 These challenges stem from the mismatch between the undirected nature of the inference algorithm and the directed nature of the resulting 8Henceforth, we refer to an MST parser that employs a directed MST algorithm for inference as directed MST parser. 2107 parse tree. The first problem is that of undirected encoding. Unlike directed MST parsers that explicitly encode the directed nature of dependency parsing into a directed input graph to which an MST algorithm is applied (McDonald et al., 2005b), an undirected MST parser needs to encode directionality information into an undirected graph. In this section we consider two solutions to this problem. The second problem is that of scheme conversion. The output of an undirected MST algorithm is an undirected tree while the dependency parsing problem requires finding a directed parse tree. In this section we show that for rooted undirected spanning trees there is only one way to define the edge directions under the constraint that the root vertex has no incoming edges and that each nonroot vertex has exactly one incoming edge in the resulting directed spanning tree. 
As dependency parse trees obey the first constraint and the second constraint is a definitive property of directed trees, the output of an undirected MST parser can be transformed into a directed tree using a simple O(n) time procedure. Unfortunately, as we will see in § 5, even with our best undirected encoding method, an undirected MST parser does not produce directed trees of the same quality as its directed counterpart. At the last part of this section we therefore present a simple, O(n) time, local enhancement procedure, that improves the score of the directed tree generated from the output of the undirected MST parser with respect to the edge scores of a standard directed MST parser. That is, our procedure improves the output of the undirected MST parser with respect to a directed model without having to compute the MST of the latter, which would take O(n2) time. We conclude this section with a final remark stating that the output class of our inference algorithm is non-projective. That is, it can generate all possible parse trees, projective and non-projective. Undirected Encoding Our challenge here is to design an encoding scheme that encodes directionality information into the graph of the undirected MST problem. One approach would be to compute directed edge weights according to a feature representation scheme for directed edges (e.g. one of the schemes employed by existing directed MST parsers) and then transform these directed weights into undirected ones. Specifically, given two vertices u and v with directed edges (u, v) and (v, u), weighted with sd(u, v) and sd(v, u) respectively, the goal is to compute the weight su( ˆ u, v) of the undirected edge ( ˆ u, v) connecting them in the undirected graph. We do this using a pre-determined function f : R × R →R, such that f(sd(u, v), sd(v, u)) = su( ˆ u, v). f can take several forms including mean, product and so on. In our experiments the mean proved to be the best choice. Training with the above approach is implemented as follows. w, the parameter vector of the parser, consists of the weights of directed features. At each training iteration, w is used for the computation of sd(u, v) = w · φ(u, v) and sd(v, u) = w · φ(v, u) (where φ(u, v) and φ(v, u) are the feature representations of these directed edges). Then, f is applied to compute the undirected edge score su( ˆ u, v). Next, the undirected MST algorithm is run on the resulting weighted undirected graph, and its output MST is transformed into a directed tree (see below). Finally, this directed tree is used for the update of w with respect to the gold standard (directed) tree. At test time, the vector w which resulted from the training process is used for sd computations. Undirected graph construction, undirected MST computation and the undirected to directed tree conversion process are conducted exactly as in training. 9 Unfortunately, preliminary experiments in our development setup revealed that this approach yields parse trees of much lower quality compared to the trees generated by the directed MST parser that employed the original directed feature set. In § 5 we discuss these results in details. An alternative approach is to employ an undirected feature set. To implement this approach, we employed the feature set of the MST parser ((McDonald et al., 2005a), Table 1) with one difference: some of the features are directional, distinguishing between the properties of the source (parent) and the target (child) vertices. 
We stripped those features from that information, which resulted in an undirected version of the feature set. Under this feature representation, training with undirected inference is simple. w, the parameter vector of the parser, now consists of the weights 9In evaluation setup experiments we also considered a variant of this model where the training process utilized directed MST inference. As this variant performed poorly, we exclude it from our discussion in the rest of the paper. 2108 (a) (b) (c) (d) Figure 2: An illustration of directing an undirected tree, given a constrained root vertex: (a) The initial undirected tree; (b) Directing the root’s outgoing edge; (c) Directing the root’s child’s outgoing edges; (d) Directing the last remaining edge, resulting in a directed tree. (a) (b) (c) (d) (e) Figure 3: An illustration of the local enhancement procedure for an edge (u, v) in the du-tree. Solid lines indicate edges in the du-tree, while dashed lines indicate edges not in the dutree. (a) Example subtree; (b) Evaluate gain = sd(t, u) + sd(u, v)−(sd(t, v)+sd(v, u)) = 4+3−(5+1) = 1; (c) In case a modification is made, first replace (u, v) with (v, u); and then (d) Remove the edge (t, u); and, finally, (e) Add the edge (t, v). of undirected features. Once the undirected MST is computed by an undirected MST algorithm, w can be updated with respect to an undirected variant of the gold parse trees. At test time, the algorithm constructs an undirected graph using the vector w resulted from the training process. This graph’s undirected MST is computed and then transformed into a directed tree. Interestingly, although this approach does not explicitly encode edge directionality information into the undirected model, it performed very well in our experiments (§ 5), especially when combined with the local enhancement procedure described below. Scheme Conversion Once the undirected MST is found, we need to direct its edges in order for the end result to be a directed dependency parse tree. Following a standard practice in graph-based dependency parsing (e.g. (McDonald et al., 2005b)), before inference is performed we add a dummy root vertex to the initial input graph with edges connecting it to all of the other vertices in the graph. Consequently, the final undirected tree will have a designated root vertex. In the resulting directed tree, this vertex is constrained to have only outgoing edges. As observed by G´omezRodr´ıguez and Fern´andez-Gonz´alez (2012), this effectively forces the direction for the rest of the edges in the tree. Given a root vertex that follows the above constraint, and together with the definitive property of directed trees stating that each non-root vertex in the graph has exactly one incoming edge, we can direct the edges of the undirected tree using a simple BFS-like algorithm (Figure 2). Starting with the root vertex, we mark its undirected edges as outgoing, mark the vertex itself as done and its descendants as open. We then recursively repeat the same procedure for each open vertex until there are no such vertices left in the tree, at which point we have a directed tree. Note that given the constraints on the root vertex, there is no other way to direct the undirected tree edges. This procedure runs in O(n) time, as it requires a constant number of operations for each of the n−1 edges of the undirected spanning tree. In the rest of the paper we refer to the directed tree generated by the undirected and directed MST parsers as du-tree and dd-tree respectively. 
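To make the two steps of this section concrete, the directed-to-undirected score combination described earlier (with f set to the mean, the choice that worked best in the paper's development experiments) and the root-constrained edge directing just described, here is a minimal Python sketch. The score container (a dict keyed by (head, modifier) pairs), the dummy-root index 0 and the function names are our own assumptions for illustration, not the MSTParser's API.

```python
from collections import defaultdict, deque

def undirected_weight(s_directed, u, v):
    """Combine the two directed scores s_d(u, v) and s_d(v, u) into a
    single undirected weight using the mean. s_directed is assumed to
    map (head, modifier) pairs to directed edge weights."""
    return 0.5 * (s_directed[(u, v)] + s_directed[(v, u)])

def direct_tree(undirected_mst, root=0):
    """Direct an undirected spanning tree away from the dummy root
    (Figure 2): the root gets no incoming edge and every other vertex
    exactly one, which fixes all edge directions. Runs in O(n).

    undirected_mst: iterable of (u, v) vertex pairs of the spanning tree
    returns: dict mapping each non-root vertex to its head"""
    adj = defaultdict(list)
    for u, v in undirected_mst:
        adj[u].append(v)
        adj[v].append(u)
    head, queue, seen = {}, deque([root]), {root}
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:        # orient u -> v, away from the root
                seen.add(v)
                head[v] = u
                queue.append(v)
    return head
```

The same direct_tree routine applies under either encoding scheme, since the two schemes differ only in how the undirected edge weights are computed.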
Local Enhancement Procedure As noted above, experiments in our development setup (§ 5) revealed that the directed parser performs somewhat better than the undirected one. This motivated us to develop a local enhancement procedure that improves the tree produced by the undirected model with respect to the directed model without compromising our O(m) run time. Our enhancement procedure is motivated by development experiments, revealing the much smaller gap between the quality of the du-tree and dd-tree of the same sentence under undirected evaluation compared to directed evaluation (§ 5 demonstrates this for test results). For a du-tree that contains the vertex u and the edges (t, u) and (u, v), we therefore consider the replacement of (u, v) with (v, u). Note that after this change our graph would no longer be a directed tree, since it would cause u to have two parents, v and t, and v to have no parent. This, however, can be rectified by replacing the edge (t, u) with the edge (t, v). 2109 It is easy to infer whether this change results in a better (lower weight) spanning tree under the directed model by computing the equation: gain = sd(t, u) + sd(u, v) −(sd(t, v) + sd(v, u)), where sd(x, y) is the score of the edge (x, y) according to the directed model. This is illustrated in Figure 3. Given the du-tree, we traverse its edges and compute the above gain for each. We then choose the edge with the maximal positive gain, as this forms the maximal possible decrease in the directed model score using modifications of the type we consider, and perform the corresponding modification. In our experiments we performed this procedure five times per inference problem.10 This procedure performs a constant number of operations for each of the n −1 edges of the du-tree, resulting in O(n) run time. Output Class. Our undirected MST parser is non-projective. This stems from the fact that the undirected MST algorithms we discuss in § 3 do not enforce any structural constraint, and particularly the non-crossing constraint, on the resulting undirected MST. As the scheme conversion (edge directing) and the local enhancement procedures described in this section do not enforce any such constraint as well, the resulting tree can take any possible structure. 5 Experiments and Results Experimental setup We evaluate four models: (a) The original directed parser (D-MST, (McDonald et al., 2005b)); (b) Our undirected MST parser with undirected features and with the local enhancement procedure (U-MST-uf-lep);11 (c) Our undirected MST parser with undirected features but without the local enhancement procedure (UMST-uf); and (d) Our undirected MST parser with directed features (U-MST-df). All models are implemented within the MSTParser code12. The MSTParser does not prune its input graphs. To demonstrate the value of undirected parsing for sparse input graphs, we implemented the lengthdictionary pruning strategy which eliminates all edges longer than the maximum length observed 10This hyperparameter was estimated once on our English development setup, and used for all 20 multilingual test setups. 11The directed edge weights for the local enhancement procedure (sd in § 4) were computed using the trained DMST parser. 12http://www.seas.upenn.edu/˜strctlrn/ MSTParser/MSTParser.html for each directed head-modifier POS pair in the training data. An undirected edge ˆ (u, v) is pruned iff both directed edges (u, v) and (v, u) are to be pruned according to the pruning method. 
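The length-dictionary pruning rule just stated can be read as the following sketch. The training-data representation (a list of (head index, modifier index, head POS, modifier POS) tuples per tree) is our own assumption, and POS pairs never observed in training are treated here as having maximum length 0, i.e. always pruned, a corner case the paper does not spell out.

```python
from collections import defaultdict

def build_length_dictionary(training_trees):
    """Maximum observed dependency length per directed
    (head POS, modifier POS) pair in the training data."""
    max_len = defaultdict(int)
    for tree in training_trees:
        for h, m, h_pos, m_pos in tree:
            length = abs(h - m)
            if length > max_len[(h_pos, m_pos)]:
                max_len[(h_pos, m_pos)] = length
    return max_len

def keep_undirected_edge(u, v, pos, max_len):
    """An undirected edge (u, v) is pruned iff both of its directed
    counterparts exceed the observed length limit, i.e. it is kept if
    at least one direction survives. pos maps word index to POS tag."""
    length = abs(u - v)
    return (length <= max_len[(pos[u], pos[v])] or
            length <= max_len[(pos[v], pos[u])])
```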
To estimate the accuracy/graph-size tradeoff provided by undirected parsing (models (b)-(d)), we apply the pruning strategy only to these models leaving the the D-MST model (model (a)) untouched. This way D-MST runs on a complete directed graph with n2 edges. Our models were developed in a monolingual setup: training on sections 2-21 of WSJ PTB (Marcus et al., 1993) and testing on section 22. The development phase was devoted to the various decisions detailed throughout this paper and to the tuning of the single hyperparameter: the number of times the local enhancement procedure is executed. We tested the models in 3 English and 17 multilingual setups. The English setups are: (a) PTB: training on sections 2-21 of the WSJ PTB and testing on its section 23; (b) GENIA: training with a random sample of 90% of the 4661 GENIA corpus (Ohta et al., 2002) sentences and testing on the other 10%; and (c) QBank: a setup identical to (b) for the 3987 QuestionBank (Judge et al., 2006) sentences. Multilingual parsing was performed with the multilingual datasets of the CoNLL 2006 (Buchholz and Marsi, 2006) and 2007 (Nilsson et al., 2007) shared tasks on multilingual dependency parsing, following their standard train/test split. Following previous work, punctuation was excluded from the evaluation. Length-dictionary pruning reduces the number of undirected edges by 27.02% on average across our 20 setups (std = 11.02%, median = 23.85%), leaving an average of 73.98% of the edges in the undirected graph. In 17 of 20 setups the reduction is above 20%. Note that the number of edges in a complete directed graph is twice the number in its undirected counterpart. Therefore, on average, the number of input edges in the pruned undirected models amounts to 73.98% 2 = 36.49% of the number of edges in the complete directed graphs. In fact, every edge-related operation (such as feature extraction) in the undirected model is actually performed on half of the number of edges compared to the directed model, saving run-time not only in the MST-inference stage but in every stage involving these operations. In addition, some pruning methods, such as length-dictionary pruning (used 2110 Swedish Danish Bulgarian Slovene Chinese Hungarian Turkish German Czech Dutch D-MST 87.7/88.9 88.5/89.5 90.4/90.9 80.4/83.4 86.1/87.7 82.9/84.3 75.2/75.3 89.6/90.2 81.7/84.0 81.3/83.0 U-MST-uf-lep 86.9/88.4 87.7/88.9 89.7/90.6 79.4/82.8 84.8/86.7 81.8/83.3 74.9/75.3 88.7/89.5 79.6/82.5 78.7/80.7 U-MST-uf 84.3/87.8 85.1/89.0 87.0/90.2 76.1/82.4 81.1/86.4 79.9/82.9 73.1/75.0 86.9/89.0 76.1/81.9 73.4/80.5 U-MST-df 72.0/79.2 74.3/82.9 69.5/81.4 66.8/75.8 65.9/76.5 68.2/72.1 57.4/62.6 77.7/82.5 57.3/70.9 59.0/71.3 Japanese Spanish Catalan Greek Basque Portuguese Italian PTB QBank GENIA D-MST 92.5/92.6 83.8/86.0 91.8/92.2 82.7/84.9 72.1/75.8 89.2/89.9 83.4/85.4 92.1/92.8 95.8/96.3 88.9/90.0 U-MST-uf-lep 92.1/92.2 83.5/85.9 91.3/91.9 81.8/84.4 71.6/75.8 88.3/89.3 82.4/84.7 90.6/91.7 95.6/96.2 87.2/88.9 U-MST-uf 91.4/92.4 80.4/85.4 89.7/91.7 78.7/84 68.8/75.4 85.8/89.3 79.4/84.4 88.5/91.8 94.8/96.0 85.0/89.0 U-MST-df 74.4/85.2 73.1/81.3 73.1/83.5 71.3/78.7 62.8/71.4 67.9/79.7 65.2/77.2 77.2/85.4 89.1/92.9 72.4/81.6 Table 1: Directed/undirected UAS for the various parsing models of this paper. 
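Returning briefly to the local enhancement procedure of Section 4 before the result discussion, a hedged sketch is given below. The du-tree is assumed to be stored as a child-to-head map, s_directed maps (head, modifier) pairs to directed edge weights under the minimum-spanning-tree convention used in the paper, and rounds=5 follows the hyperparameter reported in the experimental setup; none of this is the authors' actual implementation.

```python
def local_enhancement(head, s_directed, rounds=5):
    """Greedy improvement of a du-tree with respect to the directed model
    (Figure 3): repeatedly pick the tree edge (u, v) whose flip to (v, u),
    together with re-attaching u under v's new head t, gives the largest
    positive gain, and apply that single modification. O(n) per round."""
    for _ in range(rounds):
        best_gain, best_move = 0.0, None
        for v, u in head.items():           # tree edge u -> v
            if u not in head:               # u is the root: no edge (t, u)
                continue
            t = head[u]                     # tree edge t -> u
            gain = (s_directed[(t, u)] + s_directed[(u, v)]
                    - (s_directed[(t, v)] + s_directed[(v, u)]))
            if gain > best_gain:
                best_gain, best_move = gain, (t, u, v)
        if best_move is None:               # no modification lowers the score
            break
        t, u, v = best_move
        head[v] = t                         # replace (t, u) with (t, v)
        head[u] = v                         # replace (u, v) with (v, u)
    return head
```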
Swedish Danish Bulgarian Slovene Chinese Hungarian Turkish German Czech Dutch D-MST 20.6 20.8 15.1 25.4 15.5 26.4 22.3 21.3 29.7 27.7 U-MST-uf-lep 18.0 24.5 22.1 29.6 16.7 27.2 19.3 17.9 26.2 24.4 Oracle 88.9 89.7 91.6 81.9 87.8 83.9 77.1 90.6 82.8 82.8 (+1.2) (+1.4) (+1.2) (+1.5) (+1.7) (+1) (+1.9) (+1) (+1.1) (+1.5) Japanese Spanish Catalan Greek Basque Portuguese Italian PTB QBank GENIA D-MST 5.7 26.7 23.4 28.9 23.4 22.6 22.5 27.8 5.3 33.7 U-MST-uf-lep 4.0 30.1 26.3 30.5 30.8 21.9 24.9 20.9 6.0 23.8 Oracle 93.1 84.8 92.6 83.9 74.1 89.9 84.4 92.8 96.4 89.7 (+0.6) (+1) (+0.8) (+1.2) (+2) (+0.7) (+1) (+0.7) (+0.8) (+0.8) Table 2: Top two lines (per language): percentage of sentences for which each of the models performs better than the other according to the directed UAS. Bottom line (Oracle): Directed UAS of an oracle model that selects the parse tree of the best performing model for each sentence. Improvement over the directed UAS score of D-MST is given in parenthesis. in this work) perform feature extraction only for existing (un-pruned) edges, meaning that any reduction in the number of edges also reduces feature extraction operations. For each model we report the standard directed unlabeled attachment accuracy score (D-UAS). In addition, since this paper explores the value of undirected inference for a problem that is directed in nature, we also report the undirected unlabeled attachment accuracy score (U-UAS), hoping that these results will shed light on the differences between the trees generated by the different models. Results Table 1 presents our main results. While the directed MST parser (D-MST) is the best performing model across almost all test sets and evaluation measures, it outperforms our best model, U-MST-uf-lep, by a very small margin. Particularly, for D-UAS, D-MST outperforms U-MST-uf-lep by up to 1% in 14 out of 20 setups (in 6 setups the difference is up to 0.5%). In 5 other setups the difference between the models is between 1% and 2%, and only in one setup it is above 2% (2.6%). Similarly, for U-UAS, in 2 setups the models achieve the same performance, in 15 setups the difference is less than 1% and in the other setups the differences is 1.1% - 1.5%. The average differences are 0.97% and 0.67% for DUAS and U-UAS respectively. The table further demonstrates the value of the local enhancement procedure. Indeed, U-MSTuf-lep outperforms U-MST in all 20 setups in DUAS evaluation and in 15 out of 20 setups in UUAS evaluation (in one setup there is a tie). However, the improvement this procedure provides is much more noticeable for D-UAS, with an averaged improvement of 2.35% across setups, compared to an averaged U-UAS improvement of only 0.26% across setups. While half of the changes performed by the local enhancement procedure are in edge directions, its marginal U-UAS improvement indicates that almost all of its power comes from edge direction changes. This calls for an improved enhancement procedure. Finally, moving to directed features (the UMST-df model), both D-UAS and U-UAS substantially degrade, with more noticeable degradation in the former. We hypothesize that this stems from the idiosyncrasy between the directed parameter update and the undirected inference in this model. Table 2 reveals the complementary nature of our U-MST-uf-lep model and the classical D-MST: each of the models outperforms the other on an average of 22.2% of the sentences across test setups. 
An oracle model that selects the parse tree of the best model for each sentence would improve DUAS by an average of 1.2% over D-MST across the test setups. The results demonstrate the power of first-order graph-based dependency parsing with undirected inference. Although using a substantially different inference algorithm, our U-MST-uf-lep model performs very similarly to the standard MST parser which employs directed MST inference. 2111 6 Discussion We present a first-order graph-based dependency parsing model which runs in edge linear time at expectation and with very high probability. In extensive multilingual experiments our model performs very similarly to a standard directed firstorder parser. Moreover, our results demonstrate the complementary nature of the models, with our model outperforming its directed counterpart on an average of 22.2% of the test sentences. Beyond its practical implications, our work provides a novel intellectual contribution in demonstrating the power of undirected graph based methods in solving an NLP problem that is directed in nature. We believe this contribution has the potential to affect future research on additional NLP problems. The potential embodied in this work extends to a number of promising research directions: • Our algorithm may be used for efficient MST computation in k-best trees methods which are instrumental in margin-based training algorithms. For example, McDonald et al. (2005b) observed that k calls to the CLU algorithm might prove to be too inefficient; our more efficient algorithm may provide the remedy. • It may also be utilized as an inference/initialization subroutine as a part of more complex approximation frameworks such as belief propagation (e.g. Smith and Eisner (2008), Gormley et al. (2015)). • Finally, the complementary nature of the directed and undirected parsers motivates the development of methods for their combination, such as dual decomposition (e.g. Rush et al. (2010), Koo et al. (2010a)). Particularly, we have shown that our undirected inference algorithm converges to a different solution than the standard directed solution while still maintaining high quality (Table 2). Such techniques can exploit this diversity to produce a higher quality unified solution. We intend to investigate all of these directions in future work. In addition, we are currently exploring potential extensions of the techniques presented in this paper to higher order, projective and non-projective, dependency parsing. Acknowledgments The second author was partly supported by a GIF Young Scientists’ Program grant No. I-2388407.6/2015 - Syntactic Parsing in Context. References Otakar Boruvka. 1926. O Jist´em Probl´emu Minim´aln´ım (About a Certain Minimal Problem) (in Czech, German summary). Pr´ace Mor. Pr´ırodoved. Spol. v Brne III, 3. Sabine Buchholz and Erwin Marsi. 2006. Conll-x shared task on multilingual dependency parsing. In Proceedings of the Tenth Conference on Computational Natural Language Learning, pages 149–164. Xavier Carreras. 2007. Experiments with a higherorder projective dependency parser. In Proc. of CoNLL. Bernard Chazelle. 2000. A minimum spanning tree algorithm with inverse-ackermann type complexity. J. ACM, 47(6):1028–1047. Y. J. Chu and T. H. Liu. 1965. On the shortest arborescence of a directed graph. Science Sinica, 14. Brandon Dixon, Monika Rauch, Robert, and Robert E. Tarjan. 1992. Verification and sensitivity analysis of minimum spanning trees in linear time. SIAM J. Comput, 21:1184–1192. J. Edmonds. 1967. 
Optimum branchings. Journal of Research of the National Bureau of Standards, 71B:233–240. Jason Eisner and Noah Smith. 2005. Parsing with soft and hard constraints on dependency length. In Proc. IWPT. Jason Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proc. of COLING. Jason Eisner. 2000. Bilexical grammars and their cubic-time parsing algorithms. In Harry Bunt and Anton Nijholt, editors, Advances in Probabilistic and Other Parsing Technologies. Kluwer Academic Publishers. Harold N Gabow, Zvi Galil, Thomas Spencer, and Robert E Tarjan. 1986. Efficient algorithms for finding minimum spanning trees in undirected and directed graphs. Combinatorica, 6(2):109–122. Carlos G´omez-Rodr´ıguez and Daniel Fern´andezGonz´alez. 2012. Dependency parsing with undirected graphs. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 66–76. Association for Computational Linguistics. 2112 Carlos G´omez-Rodr´ıguez, Daniel Fern´andezGonz´alez, and V´ıctor Manuel Darriba Bilbao. 2015. Undirected dependency parsing. Computational Intelligence, 31(2):348–384. Matthew Gormley, Mark Dredze, and Jason Eisner. 2015. Approximation-aware dependency parsing by belief propagation. Transactions of the Association for Computational Linguistics, 3:489–501. Michiel Hazewinkel. 2001. Ackermann function. In Encyclopedia of Mathematics. Springer. John Judge, Aoife Cahill, and Josef Van Genabith. 2006. Questionbank: Creating a corpus of parseannotated questions. In Proceedings of ACLCOLING, pages 497–504. David Karger, Philip Klein, and Robert Tarjan. 1995. A randomized linear-time algorithm to find minimum spanning trees. J. ACM, 42(2):321–328. Valerie King. 1995. A simpler minimum spanning tree verification algorithm. Algorithmica, 18:263–270. Terry Koo and Michael Collins. 2010b. Efficient thirdorder dependency parsers. In Proc. of ACL. T. Koo, A. M. Rush, M. Collins, T. Jaakkola, and D. Sontag. 2010a. Dual decomposition for parsing with non-projective head automata. In Proc. of EMNLP. Mitchell Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313–330. A.F.T. Martins, N.A. Smith, and E.P. Xing. 2009. Concise integer linear programming formulations for dependency parsing. In Proc. of ACL. A.F.T. Martins, N.A. Smith, P.M.Q. Aguiar, and M.A.T. Figueiredo. 2011. Dual decomposition with many overlapping components. In Proc. of EMNLP. A.F.T. Martins, Miguel Almeida, and N.A. Smith. 2013. Turning on the turbo: Fast third-order nonprojective turbo parsers. In Proc. of ACL. Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proc. of EACL. Ryan McDonald and Giorgio Satta. 2007. On the complexity of non-projective data-driven dependency parsing. In Proc. of IWPT. Ryan McDonald, Koby Crammer, and Giorgio Satta. 2005a. Online large-margin training of dependency parsers. In Proc. of ACL. Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajic. 2005b. Non-projective dependency parsing using spanning tree algorithms. In Proc. of HLTEMNLP. Jaroslav Nesetril, Eva Milkov´a, and Helena Nesetrilov´a. 2001. Otakar boruvka on minimum spanning tree problem translation of both the 1926 papers, comments, history. Discrete Mathematics, 233(1-3):3–36. Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The conll 2007 shared task on dependency parsing. 
In Proceedings of the CoNLL shared task session of EMNLP-CoNLL, pages 915–932. sn. Tomoko Ohta, Yuka Tateisi, and Jin-Dong Kim. 2002. The genia corpus: An annotated research abstract corpus in molecular biology domain. In Proceedings of the second international conference on Human Language Technology Research, pages 82–86. Seth Pettie and Vijaya Ramachandran. 2002. An optimal minimum spanning tree algorithm. J. ACM, 49(1):16–34. Sebastian Riedel, David Smith, and Andrew McCallum. 2012. Parse, price and cut – delayed column and row generation for graph based parsers. In Proc. of EMNLP-CoNLL 2012. Alexander Rush and Slav Petrov. 2012. Vine pruning for efficient multi-pass dependency parsing. In Proc. of NAACL. Alexander M. Rush, David Sontag, Michael Collins, and Tommi Jaakkola. 2010. On dual decomposition and linear programming relaxations for natural language processing. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP ’10, pages 1–11, Stroudsburg, PA, USA. Association for Computational Linguistics. David Smith and Jason Eisner. 2008. Dependency parsing by belief propagation. In Proc. of EMNLP. Robert Endre Tarjan. 1977. Finding optimum branchings. Networks, 7(1):25–35. Robert Endre Tarjan. 1983. Data Structures and Network Algorithms. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA. Hao Zhang and Ryan McDonald. 2012. Generalized higher-order dependency parsing with cube pruning,. In Proc. of EMNLP-CoNLL 2012. 2113
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2114–2123, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Topic Extraction from Microblog Posts Using Conversation Structures Jing Li1,2∗, Ming Liao1,2, Wei Gao3, Yulan He4 and Kam-Fai Wong1,2 1The Chinese University of Hong Kong, Shatin, N.T., Hong Kong 2MoE Key Laboratory of High Confidence Software Technologies, China 3Qatar Computing Research Institute, Hamad Bin Khalifa University, Doha, Qatar 4School of Engineering and Applied Science, Aston University, UK {lijing,mliao,kfwong}@se.cuhk.edu.hk1,2 [email protected], [email protected] Abstract Conventional topic models are ineffective for topic extraction from microblog messages since the lack of structure and context among the posts renders poor message-level word co-occurrence patterns. In this work, we organize microblog posts as conversation trees based on reposting and replying relations, which enrich context information to alleviate data sparseness. Our model generates words according to topic dependencies derived from the conversation structures. In specific, we differentiate messages as leader messages, which initiate key aspects of previously focused topics or shift the focus to different topics, and follower messages that do not introduce any new information but simply echo topics from the messages that they repost or reply. Our model captures the different extents that leader and follower messages may contain the key topical words, thus further enhances the quality of the induced topics. The results of thorough experiments demonstrate the effectiveness of our proposed model. 1 Introduction The increasing popularity of microblog platforms results in a huge volume of user-generated short posts. Automatically modeling topics out of such massive microblog posts can uncover the hidden semantic structures of the underlying collection and can be useful to downstream applications such as microblog summarization (Harabagiu and Hickl, 2011), user profiling (Weng et al., 2010), event tracking (Lin et al., 2010) and so on. Popular topic models, like Probabilistic Latent Semantic Analysis (pLSA) (Hofmann, 1999) ∗* Part of this work was conducted when the first author was visiting Aston University. and Latent Dirichlet Allocation (LDA) (Blei et al., 2003b), model the semantic relationships between words based on their co-occurrences in documents. They have demonstrated their success in conventional documents such as news reports and scientific articles, but perform poorly when directly applied to short and colloquial microblog content due to severe sparsity in microblog messages (Wang and McCallum, 2006; Hong and Davison, 2010). A common way to deal with short text sparsity is to aggregate short messages into long pseudodocuments. Most of the studies heuristically aggregate messages based on authorship (Zhao et al., 2011; Hong and Davison, 2010), shared words (Weng et al., 2010), or hashtags (Ramage et al., 2010; Mehrotra et al., 2013). Some works directly take into account the word relations to alleviate document-level word sparseness (Yan et al., 2013; Sridhar, 2015). More recently, a self-aggregation-based topic model called SATM (Quan et al., 2015) was proposed to aggregate texts jointly with topic inference. However, we argue that the existing aggregation strategies are suboptimal for modeling topics in short texts. 
Microblogs allow users to share and comment on messages with friends through reposting or replying, similar to our everyday conversations. Intuitively, the conversation structures can not only enrich context, but also provide useful clues for identifying relevant topics. This is nonetheless ignored in previous approaches. Moreover, the occurrence of non-topic words such as emotional, sentimental, functional and even meaningless words are very common in microblog posts, which may distract the models from recognizing topic-related key words and thus fail to produce coherent and meaningful topics. We propose a novel topic model by utilizing the structures of conversations in microblogs. We link microblog posts using reposting and replying rela2114 tions to build conversation trees. Particularly, the root of a conversation tree refers to the original post and its edges represent the reposting/replying relations. [O] Just an hour ago, a series of coordinated terrorist attacks occurred in Paris !!! [R2] Gunmen and suicide bombers hit a concert hall. More than 100 are killed already. [R1] OMG! I can’t believe it’s real. Paris?! I’ve just been there last month. [R3] Oh no! @BonjourMarc r u OK? please reply me for god’s sake!!! [R4] My gosh!!! that sucks // Poor on u guys… [R7] For the safety of US, I’m for Trump to be president, especially after this. [R8] I repost to support @realDonaldTrump. Can’t agree more [R10] R U CRAZY?! Trump is just a bigot sexist and racist. …… …… …… [R9] thanks dude, you’d never regret …… [R5] Don’t worry. I was home. [R6] poor guys, terrible Figure 1: An example of conversation tree. [O]: the original post; [Ri]: the i-th repost/reply; Arrow lines: reposting/replying relations; Dark black posts: leaders to be detected; Underlined italic words: key words representing topics Figure 1 illustrates an example of a conversation tree, in which messages can initiate a new topic such as [O] and [R7] or raise a new aspect (subtopic) of the previously discussed topics such as [R2] and [R10]. These messages are named as leaders, which contain salient content in topic description, e.g., the italic and underlined words in Figure 1. The remaining messages, named as followers, do not raise new issues but simply respond to their reposted or replied messages following what has been raised by the leaders and often contain non-topic words, e.g., OMG, OK, agree, etc. Conversation tree structures from microblogs have been previously shown helpful to microblog summarization (Li et al., 2015), but have never been explored for topic modeling. We follows Li et al. (2015) to detect leaders and followers across paths of conversation trees using Conditional Random Fields (CRF) trained on annotated data. The detected leader/follower information is then incorporated as prior knowledge into our proposed topic model. Our experimental results show that our model, which captures parent-child topic correlations in conversation trees and generates topics by considering messages being leaders or followers separately, is able to induce high-quality topics and outperforms a number of competitive baselines. In summary, our contributions are three-fold: • We propose a novel topic model, which explicitly exploits the topic dependencies contained in conversation structures to enhance topic assignments. • Our model differentiates the generative process of topical and non-topic words, according to the message where a word is drawn from being a leader or a follower. 
This helps the model distinguish the topic-specific information from background noise. • Our model outperforms state-of-the-art topic models when evaluated on a large real-world microblog dataset containing over 60K conversation trees, which is publicly available1. 2 Related Works Topic models aim to discover the latent semantic information, i.e., topics, from texts and have been extensively studied. One of the most popular and well-known topic models is LDA (Blei et al., 2003b). It utilizes Dirichlet priors to generate document-topic and topic-word distributions, and has been shown effective in extracting topics from conventional documents. Nevertheless, prior research has demonstrated that standard topic models, essentially focusing on document-level word co-occurrences, are not suitable for short and informal microblog messages due to severe data sparsity exhibited in short texts (Wang and McCallum, 2006; Hong and Davison, 2010). Therefore, how to enrich and exploit context information becomes a main concern. Weng et al. (2010), Hong et al. (2010) and Zhao et al. (2011) first heuristically aggregated messages posted by the same user or sharing the same words before applying classic topic models to extract topics. However, such a simple strategy poses some problems. For example, it is common that a user has various interests and posts messages covering a wide range of topics. Ramage et al. (2010) and Mehrotra et al. (2013) used hashtags as labels to train supervised topic models. But these models depend on large-scale hashtag-labeled data for model training, and their performance is inevitably compromised when facing unseen topics irrelevant to any hashtag in training data due to the rapid change and wide variety of topics in social media. SATM (Quan et al., 2015) combined short texts aggregation and topic induction into a unified model. But in their work, no prior knowledge 1http://www1.se.cuhk.edu.hk/˜lijing/ data/microblog-topic-extraction-data.zip 2115 was given to ensure the quality of text aggregation, which therefore can affect the performance of topic inference. In this work, we organize microblog messages as conversation trees based on reposting/reply relations, which is a more advantageous message aggregation strategy. Another line of research tackled the word sparseness by modeling word relations instead of word occurrences in documents. For example, Gaussian Mixture Topic Model (GMTM) (Sridhar, 2015) utilized word embeddings to model the distributional similarities of words and then inferred clusters of words represented by word distributions using Gaussian Mixture Model (GMM) that capture the notion of latent topics. However, GMTM heavily relies on meaningful word embeddings that require a large volume of high-quality external resources for training. Biterm Topic Model (BTM) (Yan et al., 2013) directly explores unordered word-pair cooccurrence patterns in each individual message. Our model learns topics from aggregated messages based on conversation trees, which naturally provide richer context since word co-occurrence patterns can be captured from multiple relevant messages. 3 LeadLDA Topic Model In this section, we describe how to extract topics from a microblog collection utilizing conversation tree structures, where the trees are organized based on reposting and replying relations among the messages2. To identify key topic-related content from colloquial texts, we differentiate the messages as leaders and followers. Following Li et al. 
(2015), we extract all root-to-leaf paths on conversation trees and utilize the state-of-the-art sequence learning model CRF (Lafferty et al., 2001) to detect the leaders3. As a result, the posterior probability of each node being a leader or follower is obtained by averaging the different marginal probabilities of the same node over all the tree paths that contain the node. Then, the obtained probability distribution is considered as the observed prior variable input into our model. 2Reposting/replying relations are straightforward to obtain by using microblog APIs from Twitter and Sina Weibo. 3The CRF model for leader detection was trained on a public corpus with all the messages annotated on the tree paths. Details are described in Section 4. 3.1 Topics and Conversation Trees Previous works (Zhao et al., 2011; Yan et al., 2013; Quan et al., 2015) have proven that assuming each short post contains a single topic is useful to alleviate the data sparsity problem. Thus, given a corpus of microblog posts organized as conversation trees and the estimated leader probabilities of tree nodes, we assume that each message only contains a single topic and a tree covers a mixture of multiple topics. Since leader messages subsume the content of their followers, the topic of a leader can be generated from the topic distribution of the entire tree. Consequently, the topic mixture of a conversation tree is determined by the topic assignments to the leader messages on it. The topics of followers, however, exhibit strong and explicit dependencies on the topics of their ancestors. So, their topics need to be generated in consideration of local constraints. Here, we mainly address how to model the topic dependencies of followers. Enlighten by the general Structural Topic Model (strTM) (Wang et al., 2011), which incorporates document structures into topic model by explicitly modeling topic dependencies between adjacent sentences, we exploit the topical transitions between parents and children in the trees for guiding topic assignments. Intuitively, the emergence of a leader results in potential topic shift. It tends to weaken the topic similarities between the emerging leaders and their predecessors. For example, [R7] in Figure 1 transfers the topic to a new focus, thus weakens the tie with its parent. We can simplify our case by assuming that followers are topically responsive just up to (hence not further than) their nearest ancestor leaders. Thus, we can dismantle each conversation tree into forest by removing the links between leaders and their parents hence producing a set of subgraphs like [R2]–[R6] and [R7]–[R9] in Figure 1. Then, we model the internal topic dependencies within each subgraph by inferring the parent-child topic transition probabilities satisfying the first-order Markov properties in a similar way as estimating the transition distribution of adjacent sentences in strTM (Wang et al., 2011). At topic assignment stage, the topic of a follower will be assigned by referring to its parent’s topic and the transition distribution that captures topic similarities of followers to their parents (see Section 3.2). In addition, every word in the corpus is either 2116 a topical or non-topic (i.e., background) word, which highly depends on whether it occurs in a leader or a follower message. Figure 2 illustrates the graphical model of our generative process, which is named as LeadLDA. 
T Mt  𝛽 K  𝜙$  𝜙%  𝜃'  𝑧',* Nt,m      𝑧',+(*)    𝑦',*  𝑙',*  𝛾 K  𝜋$  𝑤',*,3  𝑥',*,3  𝛿 2  𝜏7  α Figure 2: Graphical Model of LeadLDA 3.2 Topic Modeling Formally, we assume that the microblog posts are organized as T conversation trees. Each tree t contains Mt message nodes and each message m contains Nt,m words in the vocabulary. The vocabulary size is V and there are K topics embedded in the corpus represented by word distribution φk ∼Dir(β) (k = 1, 2, ..., K). Also, a background word distribution φB ∼Dir(β) is included to capture the general information, which is not topic specific. φk and φB are multinomial distributions over the vocabulary. A tree t is modeled as a mixture of topics θt ∼Dir(α) and any message m on t is assumed to contain a single topic zt,m ∈{1, 2, ..., K}. (1) Topic assignments: The topic assignments of LeadLDA is inspired by Griffiths et al. (2004) that combines syntactic and semantic dependencies between words. LeadLDA integrates the outcomes of leader detection with a binomial switcher yt,m ∈{0, 1} indicating whether m is a leader (yt,m = 1) or a follower (yt,m = 0), given each message m on the tree t. yt,m is parameterized by its leader probability lt,m, which is the posterior probability output from the leader detection model and serves as an observed prior variable. According to the notion of leaders, they initiate key aspects of previously discussed topics or signal a new topic shifting the focus of its descendant followers. So, the topics of leaders on tree t are directly sampled from the topic mixture θt. To model the internal topic correlations within the subgraph of conversation tree consisting of a leader and all its followers, we capture parentchild topic transitions πk ∼Dir(γ), which is a distribution over K topics, and use πk,j to denote the probability of a follower assigned topic j when the topic of its parent is k. Specifically, if message m is sampled as a follower and the topic assignment to its parent message is zt,p(m), where p(m) indexes the parent of m, then zt,m (i.e., the topic of m) is generated from topic transition distribution πzt,p(m). In particular, since the root of a conversation tree has no parent and can only be a leader, we make the leader probability lt,root = 1 to force its topic only to be generated from the topic distribution of tree t. (2) Topical and non-topic words: We separately model the distributions of leader and follower messages emitting topical or non-topic words with τ0 and τ1, respectively, both of which are drawn from a symmetric Beta prior parametererized by δ. Specifically, for each word n in message m on tree t, we add a binomial background switcher xt,m,n controlled by whether m is a leader or a follower, i.e., xt,m,n ∼Bi(τyt,m), which indicates n is a topical word if xt,m,n = 0 or a background word if xt,m,n = 1, and xt,m,n controls n to be generated from the topic-word distribution φzt,m, where zt,m is the topic of m, or from background word distribution φB modeling non-topic information. 
(3) Generation process: To sum up, conditioned on the hyper-parameters Θ = (α, β, γ, δ), the generation process of a conversation tree t can be described as follows: • Draw θt ∼Dir(α) • For message m = 1 to Mt on tree t – Draw yt,m ∼Bi(lt,m) – If yt,m == 1 ∗Draw zt,m ∼Mult(θt) – If yt,m == 0 ∗Draw zt,m ∼Mult(πzt,p(m)) – For word n = 1 to Nt,m in m ∗Draw xt,m,n ∼Bi(τyt,m) ∗If xt,m,n == 0 · Draw wt,m,n ∼Mult(φzt,m) ∗If xt,m,n == 1 · Draw wt,m,n ∼Mult(φB) 2117 CLB s,(r) # of words with background switchers assigned as r and occurring in messages with leader switchers s. CLB s,(·) # of words occurring in messages whose leader switchers are s, i.e., P r∈{0,1} CLB s,(r). NB (r) # of words occurring in message (t, m) and with background switchers assigned as r. NB (·) # of words in message (t, m), i.e., NB (·) = P r∈{0,1} NB (r). CT W k,(v) # of words indexing v in vocabulary, sampled as topic (nonbackground) words, and occurring in messages assigned topic k. CT W k,(·) # of words assigned as topic (non-background) word and occurring in messages assigned topics k, i.e., CT W k,(·) = PV v=1 CT W k,(v). NW (v) # of words indexing v in vocabulary that occur in message (t, m) and are assigned as topic (non-background) word. NW (·) # of words assigned as topic (non-background) words and occurring in message (t, m), i.e., NW (·) = PV v=1 NW (v). CT R i,(j) # of messages sampled as followers and assigned topic j, whose parents are assigned topic i. CT R i,(·) # of messages sampled as followers whose parents are assigned topic i, i.e., CT R i,(·) = PK j=1 CT R i,(j). I(·) An indicator function, whose value is 1 when its argument inside () is true, and 0 otherwise. NCT (j) # of messages that are children of message (t, m), sampled as followers and assigned topic j. NCT (·) # of message (t, m)’s children sampled as followers, i.e., NCT (·) = PK j=1 NCT (j) CT T t,(k) # of messages on conversation tree t sampled as leaders and assigned topic k. CT T t,(·) # of messages on conversation tree t sampled as leaders, i.e., CT T t,(·) = PK k=1 CT T t,(k) CBW (v) # of words indexing v in vocabulary and assigned as background (non-topic) words CBW (·) # of words assigned as background (non-topic) words, i.e., CBW (·) = PV v=1 CBW (v) Table 1: The notations of symbols in the sampling formulas (1) and (2). (t, m): message m on conversation tree t. 3.3 Inference for Parameters We use collapsed Gibbs Sampling (Griffiths, 2002) to carry out posterior inference for parameter learning. The hidden multinomial variables, i.e., message-level variables (y and z) and wordlevel variables (x) are sampled in turn, conditioned on a complete assignment of all other hidden variables. Due to the space limitation, we leave out the details of derivation but give the core formulas in the sampling steps. We first define the notations of all variables needed by the formulation of Gibbs sampling, which are described in Table 1. In particular, the various C variables refer to counts excluding the message m on conversation tree t. 
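The generative story in (3) above can also be written as a short forward sampler. The sketch below is illustrative rather than the paper's code (toy dimensions, a fixed number of words per message, and a parent map whose keys list parents before children are all assumptions); it is shown only to make explicit the conditional structure that the collapsed sampler inverts:

```python
import numpy as np

rng = np.random.default_rng(0)
K, V = 5, 100                                   # topics, vocabulary size (toy values)
alpha, beta, gamma, delta = 50.0 / K, 0.1, 50.0 / K, 0.5

phi = rng.dirichlet([beta] * V, size=K)         # topic-word distributions phi_k
phi_B = rng.dirichlet([beta] * V)               # background word distribution phi_B
pi = rng.dirichlet([gamma] * K, size=K)         # parent-child topic transitions pi_k
tau = rng.beta(delta, delta, size=2)            # Bernoulli parameter of the background
                                                # switcher for follower (0) / leader (1) messages

def generate_tree(parents, leader_prob, n_words=8):
    theta = rng.dirichlet([alpha] * K)          # topic mixture of tree t
    y, z, words = {}, {}, {}
    for m, parent in parents.items():           # assumes parents precede children
        y[m] = 1 if parent is None else rng.binomial(1, leader_prob[m])   # l_{t,root} = 1
        z[m] = rng.choice(K, p=theta) if y[m] == 1 else rng.choice(K, p=pi[z[parent]])
        tokens = []
        for _ in range(n_words):
            x = rng.binomial(1, tau[y[m]])      # background switcher x_{t,m,n}
            tokens.append(rng.choice(V, p=phi_B if x == 1 else phi[z[m]]))
        words[m] = tokens
    return theta, y, z, words
```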
For each message m on a tree t, we sample the leader switcher yt,m and topic assignment zt,m according to the following conditional probability distribution: p(yt,m = s, zt,m = k|y¬(t,m), z¬(t,m), w, x, l, Θ) ∝ Γ(CLB s,(·) + 2δ) Γ(CLB s,(·) + N B (·) + 2δ) Y r∈{0,1} Γ(CLB s,(r) + N B (r) + δ) Γ(CLB s,(r) + δ) · Γ(CT W k,(·) + V β) Γ(CT W k,(·) + N W (·) + V β) V Y v=1 Γ(CT W k,(v) + N W (v) + β) Γ(CT W k,(v) + β) ·g(s, k, t, m) (1) where g(s, k, t, m) takes different forms depending on the value of s: g(0, k, t, m) = Γ(CT R zt,p(m),(·) + Kγ) Γ(CT R zt,p(m),(·) + I(zt,p(m) ̸= k) + Kγ) · Γ(CT R k,(·) + Kγ) Γ(CT R k,(·) + I(zt,p(m) = k) + N CT (·) + Kγ) · K Y j=1 Γ(CT R k,(j) + N CT (j) + I(zt,p(m) = j = k) + γ) Γ(CT R k,(j) + γ) · Γ(CT R zt,p(m),(k) + I(zt,p(m) ̸= k) + γ) Γ(CT R zt,p(m),(k) + γ) · (1 −lt,m) and g(1, k, t, m) = CTT t,(k) + α CTT t,(·) + Kα · lt,m For each word n in m on t, the sampling formula of its background switcher is given as the following: p(xt,m,n = r|x¬(t,m,n), y, z, w, l, Θ) ∝ CLB yt,m,(r) + δ CLB yt,m,(·) + 2δ · h(r, t, m, n) (2) where h(r, t, m, n) =      CT W zt,m,(wt,m,n)+β CT W zt,m,(·)+V β if r = 0 CBW (wt,m,n)+β CBW (·) +V β if r = 1 4 Data Collection and Experiment Setup To evaluate our LeadLDA model, we conducted experiments on real-world microblog dataset collected from Sina Weibo that has the same 140character limitation and shares the similar market penetration as Twitter (Rapoza, 2011). For the hyper-parameters of LeadLDA, we fixed α = 50/K, β = 0.1, following the common practice in previous works (Griffiths and Steyvers, 2004; Quan et al., 2015). Since there is no analogue of γ and δ in prior works, where γ controls topic dependencies of follower messages to their ancestors and δ controls the different tendencies of 2118 Month # of trees # of messages Vocab size May 10,812 38,926 6,011 June 29,547 98,001 9,539 July 26,103 102,670 10,121 Table 2: Statistics of our three evaluation datasets leaders and followers covering topical and nontopic words. We tuned γ and δ by grid search on a large development set containing around 120K posts and obtained γ = 50/K, δ = 0.5. Because the content of posts are often incomplete and informal, it is difficult to manually annotate topics in a large scale. Therefore, we follow Yan et al. (2013) to utilize hashtags led by ‘#’, which are manual topic labels provided by users, as ground-truth categories of microblog messages. We collected the real-time trending hashtags on Sina Weibo and utilized the hashtag-search API4 to crawl the posts matching the given hashtag queries. In the end, we built a corpus containing 596,318 posts during May 1 – July 31, 2014. To examine the performance of models on various topic distributions, we split the corpus into 3 datasets, each containing messages of one month. Similar to Yan et al. (2013), for each dataset, we manually selected 50 frequent hashtags as topics, e.g. #mh17, #worldcup, etc. The experiments were conducted on the subsets of posts with the selected hashtags. Table 2 shows the statistics of the three subsets used in our experiments. 
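Returning briefly to the word-level update in Equation (2) of Section 3.3, the conditional reduces to a two-way draw over r ∈ {0, 1}. A minimal sketch, with count arrays following the notation of Table 1 and assumed to already exclude the word being resampled:

```python
import numpy as np

def sample_background_switcher(rng, w, y, z, C_LB, C_TW, C_BW, beta, delta, V):
    """Resample x_{t,m,n} for vocabulary index w in a message whose leader
    switcher is y and whose topic is z (Equation (2)).
    C_LB: 2x2 counts, C_TW: K x V topic-word counts, C_BW: V background counts."""
    probs = np.empty(2)
    for r in (0, 1):                                   # r = 0: topical, r = 1: background
        switch_term = (C_LB[y, r] + delta) / (C_LB[y].sum() + 2 * delta)
        if r == 0:
            word_term = (C_TW[z, w] + beta) / (C_TW[z].sum() + V * beta)
        else:
            word_term = (C_BW[w] + beta) / (C_BW.sum() + V * beta)
        probs[r] = switch_term * word_term
    probs /= probs.sum()
    return rng.choice(2, p=probs)                      # new value of the switcher
```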
We preprocessed the datasets before topic extraction in the following steps: 1) Use FudanNLP toolkit (Qiu et al., 2013) for word segmentation, stop words removal and POS tagging for Chinese Weibo messages; 2) Generate a vocabulary for each dataset and remove words occurring less than 5 times; 3) Remove all hashtags in texts before input them to models, since the models are expected to extract topics without knowing the hashtags, which are ground-truth topics; 4) For LeadLDA, we use the CRF-based leader detection model (Li et al., 2015) to classify messages as leaders and followers. The leader detection model was implemented by using CRF++5, which was trained on the public dataset composed of 1,300 conversation paths and achieved state-of-the-art 73.7% F1score of classification accuracy (Li et al., 2015). 4http://open.weibo.com/wiki/2/search/ topics 5https://taku910.github.io/crfpp/ 5 Experimental Results We evaluated topic models with two sets of K, i.e., the number of topics. One is K = 50, to match the count of hashtags following Yan et al. (2013), and the other is K = 100, much larger than the “real” number of topics. We compared LeadLDA with the following 5 state-of-the-art basedlines. TreeLDA: Analogous to Zhao et al. (2011), where they aggregated messages posted by the same author, TreeLDA aggregates messages from one conversation tree as a pseudo-document. Additionally, it includes a background word distribution to capture non-topic words controlled by a general Beta prior without differentiating leaders and followers. TreeLDA can be considered as a degeneration of LeadLDA, where topics assigned to all messages are generated from the topic distributions of the conversation trees they are on. StructLDA: It is another variant of LeadLDA, where topics assigned to all messages are generated based on topic transitions from their parents. The strTM (Wang et al., 2011) utilized a similar model to capture the topic dependencies of adjacent sentences in a document. Following strTM, we add a dummy topic Tstart emitting no word to the “pseudo parents” of root messages. Also, we add the same background word distribution to capture non-topic words as TreeLDA does. BTM: Biterm Topic Model (BTM)6 (Yan et al., 2013) directly models topics of all word pairs (biterms) in each post, which outperformed LDA, Mixture of Unigrams model, and the model proposed by Zhao et al. (2011) that aggregated posts by authorship to enrich context. SATM: A general unified model proposed by Quan et al. (2015) that aggregates documents and infers topics simultaneously. We implemented SATM and examined its effectiveness specifically on microblog data. GMTM: To tackle word sparseness, Sridhar et al. (2015) utilized Gaussian Mixture Model (GMM) to cluster word embeddings generated by a log-linear word2vec model7. The hyper-parameters of BTM, SATM and GMTM were set according to the best hyperparameters reported in their original papers. For TreeLDA and StructLDA, the parameter settings were kept the same as LeadLDA since they are its 6https://github.com/xiaohuiyan/BTM 7https://code.google.com/archive/p/ word2vec/ 2119 variants. And the background switchers were parameterized by symmetric Beta prior on 0.5, following Chemudugunta et al. (2006). We ran Gibbs samplings (in LeadLDA, TreeLDA, StructLDA, BTM and SATM) and EM algorithm (in GMTM) with 1,000 iterations to ensure convergence. Topic model evaluation is inherently difficult. 
In previous works, perplexity is a popular metric to evaluate the predictive abilities of topic models given held-out dataset with unseen words (Blei et al., 2003b). However, Chang et al. (2009) have demonstrated that models with high perplexity do not necessarily generate semantically coherent topics in human perception. Therefore, we conducted objective and subjective analysis on the coherence of produced topics. 5.1 Objective Analysis The quality of topics is commonly measured by coherence scores (Mimno et al., 2011), assuming that words representing a coherent topic are likely to co-occur within the same document. However, due to the severe sparsity of short text posts, we modify the calculation of commonly-used topic coherence measure based on word co-occurrences in messages tagged with the same hashtag, named as hashtag-document, assuming that those messages discuss related topics8. Specifically, we calculate the coherence score of a topic given the top N words ranked by likelihood as below: C = 1 K · K X k=1 N X i=2 i−1 X j=1 log D(wk i , wk j ) + 1 D(wk j ) , (3) where wk i represents the i-th word in topic k ranked by p(w|k), D(wk i , wk j ) refers to the count of hashtag-documents where word wk i and wk j cooccur, and D(wk i ) denotes the number of hashtagdocuments that contain word wk i . Table 3 shows the absolute values of C scores for topics produced on three evaluation datasets (May, June and July), and the top 10, 15, 20 words of topics were selected for evaluation. Lower scores indicate better coherence in the induced topic. We have the following observations: • GMTM gave the worst coherence scores, which may be ascribed to its heavy reliance on relevant large-scale high-quality external data, with8We sampled posts and their corresponding hashtags in our evaluation set and found only 1% mismatch. N Model May June July K50 K100 K50 K100 K50 K100 10 TREE 27.9 30.5 24.0 23.8 23.9 26.1 STR 29.9 30.8 24.0 24.1 24.4 26.4 BTM 26.7 28.9 27.8 25.5 25.4 25.2 SATM 30.6 29.9 23.8 23.7 24.3 27.5 GMTM 40.8 40.1 44.0 44.2 41.7 40.8 LEAD 28.4 26.9 19.8 23.4 22.6 25.1 15 TREE 71.9 76.4 55.3 60.4 61.2 66.2 STR 76.4 74.1 57.6 62.2 58.1 61.1 BTM 69.6 71.4 58.5 60.3 59.1 63.0 SATM 74.3 73.0 54.8 60.4 61.2 65.3 GMTM 96.4 93.1 100.4 105.1 94.6 94.9 LEAD 67.4 65.2 52.8 57.7 55.3 57.8 20 TREE 138.8 138.6 102.0 115.0 115.8 119.7 STR 134.0 136.9 104.3 112.7 111.0 117.3 BTM 125.2 131.1 109.4 115.7 115.3 120.2 SATM 134.6 131.9 105.5 114.3 113.5 118.9 GMTM 173.5 169.0 184.7 190.9 167.4 171.2 LEAD 120.9 127.2 101.6 106.0 97.2 104.9 Table 3: Absolute values of coherence scores. Lower is better. K50: 50 topics; K100: 100 topics; N: # of top words ranked by topic-word probabilities; TREE: TreeLDA; STR: StructLDA; LEAD: LeadLDA. out which the trained word embedding model failed to capture meaningful semantic features for words, and hence could not yield coherent topics. • TreeLDA and StructLDA produced competitive results compared to the state-of-the-art baseline models, which indicates the effectiveness of using conversation structures to enrich context and thus generate topics of reasonably good quality. • The coherence of topics generated by LeadLDA outperformed all the baselines on the three datasets, most of time by large margins and was only outperformed by BTM on the May dataset when K = 50 and N = 10. 
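For concreteness, a small sketch of the modified coherence measure in Equation (3). It assumes `hashtag_docs` is a list of word sets, one per hashtag-document, that every top word occurs in at least one of them (so D(w) >= 1), and that `topics_top_words` holds the top-N words of each topic; Table 3 reports the absolute values of the resulting scores:

```python
import math

def coherence(topics_top_words, hashtag_docs):
    """Equation (3): average pairwise co-occurrence score of each topic's
    top words, counted over hashtag-documents."""
    def D(*words):
        return sum(all(w in doc for w in words) for doc in hashtag_docs)

    score = 0.0
    for top_words in topics_top_words:            # one list of N ranked words per topic
        for i in range(1, len(top_words)):
            for j in range(i):
                score += math.log((D(top_words[i], top_words[j]) + 1)
                                  / D(top_words[j]))
    return score / len(topics_top_words)
```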
The generally higher performance of LeadLDA is due to three reasons: 1) It effectively identifies topics using the conversation tree structures, which provide richer context information; 2) It jointly models the topics of leaders and the topic dependencies of other messages on a tree. TreeLDA and StructLDA, each only considering one of the factors, performed worse than LeadLDA; 3) LeadLDA separately models the probabilities of leaders and followers containing topical or nontopic words while the baselines only model the general background information regardless of the different types of messages. This implies that leaders and followers do have different capacities in covering key topical words or background noise, which is useful to identify key words for topic representation. 2120 TreeLDA StructLDA BTM SATM LeadLDA 香港微博马航家属证 实入境处客机消息曹 格投给二胎选项教父 滋养飞机外国心情坠 毁男子同胞 乌克兰航空亲爱国民 绕开飞行航班领空所 有避开宣布空域东部 俄罗斯终于忘记公司 绝望看看珍贵 香港入境处家属证实 男子护照外国消息坠 毁马航报道联系电台 客机飞机同胞确认事 件霍家直接 马航祈祷安息生命逝 者世界艾滋病恐怖广 州飞机无辜默哀远离 事件击落公交车中国 人国际愿逝者真的 乌克兰马航客机击落 飞机坠毁导弹俄罗斯 消息乘客中国马来西 亚香港遇难事件武装 航班恐怖目前证实 Hong Kong, microblog, family, confirm, immigration, airliner, news, Grey Chow, vote, second baby, choice, god father, nourish, airplane, foreign, feeling, crash, man, fellowman Ukraine, airline, dear, national, bypass, fly, flight, airspace, all, avoid, announce, airspace, eastern, Russia, finally, forget, company, disappointed, look, valuable Hong Kong, immigration, family, confirm, man, passport, foreign, news, crash, Malaysia Airlines, report, contact, broadcast station, airliner, airplane, fellowman, confirm, event, Fok’s family, directly Malaysia Airlines, prey, rest in peace, life, dead, world, AIDS, terror, Guangzhou, airplane, innocent, silent tribute, keep away from, event, shoot down, bus, Chinese, international, wish the dead, really Ukraine, Malaysia Airlines, airliner, shoot down, airplane, crash, missile, Russia, news, passenger, China, Malaysia, Hong Kong, killed, event, militant, flight, terror, current, confirm Figure 3: The extracted topics describing MH17 crash. Each column represents the similar topic generated by the corresponding model with the top 20 words. The 2nd row: original Chinese words; The 3rd row: English translations. 5.2 Subjective Analysis To evaluate the coherence of induced topics from human perspective, we invited two annotators to subjectively rate the quality of every topic (by displaying the top 20 words) generated by different models on a 1-5 Likert scale. A higher rating indicates better quality of topics. The Fless’s Kappa of annotators’ ratings measured for various models on different datasets given K = 50 and 100 range from 0.62 to 0.70, indicating substantial agreements (Landis and Koch, 1977). Table 4 shows the overall subjective ratings. We noticed that humans preferred topics produced given K = 100 to K = 50, but coherence scores gave generally better grades to models for K = 50, which matched the number of topics in ground truth. This is because models more or less mixed more common words when K is larger. Coherence score calculation (Equation (3)) penalizes common words that occur in many documents, whereas humans could somehow “guess” the meaning of topics based on the rest of words thus gave relatively good ratings. Nevertheless, annotators gave remarkably higher ratings to LeadLDA than baselines on all datasets regardless of K being 50 or 100, which confirmed that LeadLDA effectively yielded goodquality topics. 
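A brief sketch of the Fleiss' Kappa computation behind the reported agreement figures; here `ratings[i][c]` counts how many annotators gave topic i the rating c+1, and the toy matrix (two annotators, a 1-5 scale) is illustrative only:

```python
import numpy as np

def fleiss_kappa(ratings):
    ratings = np.asarray(ratings, dtype=float)   # shape: (n_topics, n_categories)
    n_items = ratings.shape[0]
    n_raters = ratings[0].sum()                  # assumes every topic receives the same number of ratings
    p_cat = ratings.sum(axis=0) / (n_items * n_raters)          # category proportions
    P_i = (np.square(ratings).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar, P_e = P_i.mean(), np.square(p_cat).sum()
    return (P_bar - P_e) / (1 - P_e)

toy = [[0, 0, 1, 1, 0],   # one annotator rated the topic 3, the other 4
       [0, 0, 0, 2, 0],   # both rated 4
       [0, 0, 0, 0, 2]]   # both rated 5
print(fleiss_kappa(toy))
```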
For a detailed analysis, Figure 3 lists the top 20 words about “MH17 crash” induced by different models9 when K = 50. We have the following 9As shown in Table 3 and 4, the topic coherence scores of GMTM were the worst. Hence, the topic generated by Model May June July K50 K100 K50 K100 K50 K100 TREE 3.12 3.41 3.42 3.44 3.03 3.48 STR 3.05 3.45 3.38 3.48 3.08 3.53 BTM 3.04 3.26 3.40 3.37 3.15 3.57 SATM 3.08 3.43 3.30 3.55 3.09 3.54 GMTM 2.02 2.37 1.99 2.27 1.97 1.90 LEAD 3.40 3.57 3.52 3.63 3.55 3.72 Table 4: Subjective ratings of topics. The meanings of K50, K100, TREE, STR and LEAD are the same as in Table 3. observations: • BTM, based on word-pair co-occurrences, mistakenly grouped “Fok’s family” (a tycoon family in Hong Kong), which co-occurred frequently with “Hong Kong” in other topics, into the topic of “MH17 crash”. “Hong Kong” is relevant here as a Hong Kong passenger died in the MH17 crash. • The topical words generated by SATM were mixed with words relevant to the bus explosion in Guangzhou, since it aggregated messages according to topic affinities based on the topics learned in the previous step. Thus the posts about bus explosion and MH17 crash, both pertaining to disasters, were aggregated together mistakenly, which generated spurious topic results. • Both TreeLDA and StructLDA generated topics containing non-topic words like “microblog” and “dear”. This means that without distinguishing leaders and followers, it is difficult to filter out non-topic words. The topic quality of StructLDA nevertheless seems better than GMTM is not shown due to space limitation. 2121 TreeLDA, which implies the usefulness of exploiting topic dependencies of posts in conversation structures. • LeadLDA not only produced more semantically coherent words describing the topic, but also revealed some important details, e.g., MH17 was shot down by a missile. 6 Conclusion and Future Works This paper has proposed a novel topic model by considering the conversation tree structures of microblog posts. By rigorously comparing our proposed model with a number of competitive baselines on real-world microblog datasets, we have demonstrated the effectiveness of using conversation structures to help model topics embedded in short and colloquial microblog messages. This work has proven that detecting leaders and followers, which are coarse-grained discourse derived from conversation structures, is useful to model microblogging topics. In the next step, we plan to exploit fine-grained discourse structures, e.g., dialogue acts (Ritter et al., 2010), and propose a unified model that jointly inferring discourse roles and topics of posts in context of conversation tree structures. Another extension is to extract topic hierarchies by integrating the conversation structures into hierarchical topic models like HLDA (Blei et al., 2003a) to extract fine-grained topics from microblog posts. Acknowledgment This work is supported by General Research Fund of Hong Kong (417112), the Innovation and Technology Fund of Hong Kong SAR (ITP/004/16LP), Shenzhen Peacock Plan Research Grant (KQCX20140521144507925) and Innovate UK (101779). We would like to thank Shichao Dong for his efforts on data processing and anonymous reviewers for the useful comments. References David M. Blei, Thomas L. Griffiths, Michael I. Jordan, and Joshua B. Tenenbaum. 2003a. Hierarchical topic models and the nested chinese restaurant process. In Proceedings of the 17th Annual Conference on Neural Information Processing Systems, NIPS, pages 17–24. David M. 
Blei, Andrew Y. Ng, and Michael I. Jordan. 2003b. Latent dirichlet allocation. Journal of Machine Learning Research, 3:993–1022. Jonathan Chang, Jordan L. Boyd-Graber, Sean Gerrish, Chong Wang, and David M. Blei. 2009. Reading tea leaves: How humans interpret topic models. In Proceedings of the 23rd Annual Conference on Neural Information Processing Systems, NIPS, pages 288– 296. Chaitanya Chemudugunta, Padhraic Smyth, and Mark Steyvers. 2006. Modeling general and specific aspects of documents with a probabilistic topic model. In Proceedings of the 20th Annual Conference on Neural Information Processing Systems, NIPS, pages 241–248. Thomas L Griffiths and Mark Steyvers. 2004. Finding scientific topics. Proceedings of the National Academy of Sciences, 101(suppl 1):5228–5235. Thomas L. Griffiths, Mark Steyvers, David M. Blei, and Joshua B. Tenenbaum. 2004. Integrating topics and syntax. In Proceedings of the 18th Annual Conference on Neural Information Processing Systems, NIPS, pages 537–544. Tom Griffiths. 2002. Gibbs sampling in the generative model of latent dirichlet allocation. Sanda M. Harabagiu and Andrew Hickl. 2011. Relevance modeling for microblog summarization. In Proceedings of the 5th International Conference on Web and Social Media, ICWSM. Thomas Hofmann. 1999. Probabilistic latent semantic indexing. In In Proceedings of the 22nd Annual International, ACM SIGIR, pages 50–57. Liangjie Hong and Brian D Davison. 2010. Empirical study of topic modeling in twitter. In Proceedings of the first workshop on social media analytics, pages 80–88. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning, ICML, pages 282–289. J Richard Landis and Gary G Koch. 1977. The measurement of observer agreement for categorical data. biometrics, pages 159–174. Jing Li, Wei Gao, Zhongyu Wei, Baolin Peng, and Kam-Fai Wong. 2015. Using content-level structures for summarizing microblog repost trees. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 2168–2178. Cindy Xide Lin, Bo Zhao, Qiaozhu Mei, and Jiawei Han. 2010. PET: a statistical model for popular 2122 events tracking in social communities. In Proceedings of the 16th International Conference on Knowledge Discovery and Data Mining, ACM SIGKDD, pages 929–938. Rishabh Mehrotra, Scott Sanner, Wray L. Buntine, and Lexing Xie. 2013. Improving LDA topic models for microblogs via tweet pooling and automatic labeling. In Proceedings of the 36th International conference on research and development in Information Retrieval, ACM SIGIR, pages 889–892. David M. Mimno, Hanna M. Wallach, Edmund M. Talley, Miriam Leenders, and Andrew McCallum. 2011. Optimizing semantic coherence in topic models. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 262–272. Xipeng Qiu, Qi Zhang, and Xuanjing Huang. 2013. Fudannlp: A toolkit for chinese natural language processing. In 51st Annual Meeting of the Association for Computational Linguistics, ACL, pages 49–54. Xiaojun Quan, Chunyu Kit, Yong Ge, and Sinno Jialin Pan. 2015. Short and sparse text topic modeling via self-aggregation. In Proceedings of the 24th International Joint Conference on Artificial Intelligence, IJCAI, pages 2270–2276. Daniel Ramage, Susan T. Dumais, and Daniel J. Liebling. 2010. Characterizing microblogs with topic models. 
In Proceedings of the 4th International Conference on Web and Social Media, ICWSM. Kenneth Rapoza. 2011. China’s weibos vs us’s twitter: And the winner is? Forbes (May 17, 2011). Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised modeling of twitter conversations. In Proceedings of the 2010 Conference of the North American Chapter of the Association of Computational Linguistics, NAACL, pages 172–180. Vivek Kumar Rangarajan Sridhar. 2015. Unsupervised entity linking with abstract meaning representation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, pages 1130–1139. Xuerui Wang and Andrew McCallum. 2006. Topics over time: a non-markov continuous-time model of topical trends. In Proceedings of the 12th International Conference on Knowledge Discovery and Data Mining, ACM SIGKDD, pages 424–433. Hongning Wang, Duo Zhang, and ChengXiang Zhai. 2011. Structural topic model for latent topical structure analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, ACL, pages 1526–1535. Jianshu Weng, Ee-Peng Lim, Jing Jiang, and Qi He. 2010. Twitterrank: finding topic-sensitive influential twitterers. In Proceedings of the 3rd International Conference on Web Search and Web Data Mining, WSDM, pages 261–270. Xiaohui Yan, Jiafeng Guo, Yanyan Lan, and Xueqi Cheng. 2013. A biterm topic model for short texts. In Proceedings of the 22nd International World Wide Web Conference, WWW, pages 1445–1456. Wayne Xin Zhao, Jing Jiang, Jianshu Weng, Jing He, Ee-Peng Lim, Hongfei Yan, and Xiaoming Li. 2011. Comparing twitter and traditional media using topic models. In Advances in Information Retrieval - 33rd European Conference on IR Research, ECIR, pages 338–349. 2123
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 12–22, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Data Recombination for Neural Semantic Parsing Robin Jia Computer Science Department Stanford University [email protected] Percy Liang Computer Science Department Stanford University [email protected] Abstract Modeling crisp logical regularities is crucial in semantic parsing, making it difficult for neural models with no task-specific prior knowledge to achieve good results. In this paper, we introduce data recombination, a novel framework for injecting such prior knowledge into a model. From the training data, we induce a highprecision synchronous context-free grammar, which captures important conditional independence properties commonly found in semantic parsing. We then train a sequence-to-sequence recurrent network (RNN) model with a novel attention-based copying mechanism on datapoints sampled from this grammar, thereby teaching the model about these structural properties. Data recombination improves the accuracy of our RNN model on three semantic parsing datasets, leading to new state-of-the-art performance on the standard GeoQuery dataset for models with comparable supervision. 1 Introduction Semantic parsing—the precise translation of natural language utterances into logical forms—has many applications, including question answering (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007; Liang et al., 2011; Berant et al., 2013), instruction following (Artzi and Zettlemoyer, 2013b), and regular expression generation (Kushman and Barzilay, 2013). Modern semantic parsers (Artzi and Zettlemoyer, 2013a; Berant et al., 2013) are complex pieces of software, requiring handcrafted features, lexicons, and grammars. Meanwhile, recurrent neural networks (RNNs) what are the major cities in utah ? what states border maine ? Original Examples Train Model Sequence-to-sequence RNN Sample New Examples Synchronous CFG Induce Grammar what are the major cities in [states border [maine]] ? what are the major cities in [states border [utah]] ? what states border [states border [maine]] ? what states border [states border [utah]] ? Recombinant Examples Figure 1: An overview of our system. Given a dataset, we induce a high-precision synchronous context-free grammar. We then sample from this grammar to generate new “recombinant” examples, which we use to train a sequence-to-sequence RNN. have made swift inroads into many structured prediction tasks in NLP, including machine translation (Sutskever et al., 2014; Bahdanau et al., 2014) and syntactic parsing (Vinyals et al., 2015b; Dyer et al., 2015). Because RNNs make very few domain-specific assumptions, they have the potential to succeed at a wide variety of tasks with minimal feature engineering. However, this flexibility also puts RNNs at a disadvantage compared to standard semantic parsers, which can generalize naturally by leveraging their built-in awareness of logical compositionality. 
In this paper, we introduce data recombination, a generic framework for declaratively inject12 GEO x: “what is the population of iowa ?” y: _answer ( NV , ( _population ( NV , V1 ) , _const ( V0 , _stateid ( iowa ) ) ) ) ATIS x: “can you list all flights from chicago to milwaukee” y: ( _lambda $0 e ( _and ( _flight $0 ) ( _from $0 chicago : _ci ) ( _to $0 milwaukee : _ci ) ) ) Overnight x: “when is the weekly standup” y: ( call listValue ( call getProperty meeting.weekly_standup ( string start_time ) ) ) Figure 2: One example from each of our domains. We tokenize logical forms as shown, thereby casting semantic parsing as a sequence-to-sequence task. ing prior knowledge into a domain-general structured prediction model. In data recombination, prior knowledge about a task is used to build a high-precision generative model that expands the empirical distribution by allowing fragments of different examples to be combined in particular ways. Samples from this generative model are then used to train a domain-general model. In the case of semantic parsing, we construct a generative model by inducing a synchronous context-free grammar (SCFG), creating new examples such as those shown in Figure 1; our domain-general model is a sequence-to-sequence RNN with a novel attention-based copying mechanism. Data recombination boosts the accuracy of our RNN model on three semantic parsing datasets. On the GEO dataset, data recombination improves test accuracy by 4.3 percentage points over our baseline RNN, leading to new state-of-the-art results for models that do not use a seed lexicon for predicates. 2 Problem statement We cast semantic parsing as a sequence-tosequence task. The input utterance x is a sequence of words x1, . . . , xm ∈V(in), the input vocabulary; similarly, the output logical form y is a sequence of tokens y1, . . . , yn ∈V(out), the output vocabulary. A linear sequence of tokens might appear to lose the hierarchical structure of a logical form, but there is precedent for this choice: Vinyals et al. (2015b) showed that an RNN can reliably predict tree-structured outputs in a linear fashion. We evaluate our system on three existing semantic parsing datasets. Figure 2 shows sample input-output pairs from each of these datasets. • GeoQuery (GEO) contains natural language questions about US geography paired with corresponding Prolog database queries. We use the standard split of 600 training examples and 280 test examples introduced by Zettlemoyer and Collins (2005). We preprocess the logical forms to De Brujin index notation to standardize variable naming. • ATIS (ATIS) contains natural language queries for a flights database paired with corresponding database queries written in lambda calculus. We train on 4473 examples and evaluate on the 448 test examples used by Zettlemoyer and Collins (2007). • Overnight (OVERNIGHT) contains logical forms paired with natural language paraphrases across eight varied subdomains. Wang et al. (2015) constructed the dataset by generating all possible logical forms up to some depth threshold, then getting multiple natural language paraphrases for each logical form from workers on Amazon Mechanical Turk. We evaluate on the same train/test splits as Wang et al. (2015). In this paper, we only explore learning from logical forms. In the last few years, there has an emergence of semantic parsers learned from denotations (Clarke et al., 2010; Liang et al., 2011; Berant et al., 2013; Artzi and Zettlemoyer, 2013b). 
While our system cannot directly learn from denotations, it could be used to rerank candidate derivations generated by one of these other systems. 3 Sequence-to-sequence RNN Model Our sequence-to-sequence RNN model is based on existing attention-based neural machine translation models (Bahdanau et al., 2014; Luong et al., 2015a), but also includes a novel attention-based copying mechanism. Similar copying mechanisms have been explored in parallel by Gu et al. (2016) and Gulcehre et al. (2016). 3.1 Basic Model Encoder. The encoder converts the input sequence x1, . . . , xm into a sequence of context13 sensitive embeddings b1, . . . , bm using a bidirectional RNN (Bahdanau et al., 2014). First, a word embedding function φ(in) maps each word xi to a fixed-dimensional vector. These vectors are fed as input to two RNNs: a forward RNN and a backward RNN. The forward RNN starts with an initial hidden state hF 0, and generates a sequence of hidden states hF 1, . . . , hF m by repeatedly applying the recurrence hF i = LSTM(φ(in)(xi), hF i−1). (1) The recurrence takes the form of an LSTM (Hochreiter and Schmidhuber, 1997). The backward RNN similarly generates hidden states hB m, . . . , hB 1 by processing the input sequence in reverse order. Finally, for each input position i, we define the context-sensitive embedding bi to be the concatenation of hF i and hB i Decoder. The decoder is an attention-based model (Bahdanau et al., 2014; Luong et al., 2015a) that generates the output sequence y1, . . . , yn one token at a time. At each time step j, it writes yj based on the current hidden state sj, then updates the hidden state to sj+1 based on sj and yj. Formally, the decoder is defined by the following equations: s1 = tanh(W (s)[hF m, hB 1]). (2) eji = s⊤ j W (a)bi. (3) αji = exp(eji) Pm i′=1 exp(eji′). (4) cj = m X i=1 αjibi. (5) P(yj = w | x, y1:j−1) ∝exp(Uw[sj, cj]). (6) sj+1 = LSTM([φ(out)(yj), cj], sj). (7) When not specified, i ranges over {1, . . . , m} and j ranges over {1, . . . , n}. Intuitively, the αji’s define a probability distribution over the input words, describing what words in the input the decoder is focusing on at time j. They are computed from the unnormalized attention scores eji. The matrices W (s), W (a), and U, as well as the embedding function φ(out), are parameters of the model. 3.2 Attention-based Copying In the basic model of the previous section, the next output word yj is chosen via a simple softmax over all words in the output vocabulary. However, this model has difficulty generalizing to the long tail of entity names commonly found in semantic parsing datasets. Conveniently, entity names in the input often correspond directly to tokens in the output (e.g., “iowa” becomes iowa in Figure 2).1 To capture this intuition, we introduce a new attention-based copying mechanism. At each time step j, the decoder generates one of two types of actions. As before, it can write any word in the output vocabulary. In addition, it can copy any input word xi directly to the output, where the probability with which we copy xi is determined by the attention score on xi. Formally, we define a latent action aj that is either Write[w] for some w ∈V(out) or Copy[i] for some i ∈{1, . . . , m}. We then have P(aj = Write[w] | x, y1:j−1) ∝exp(Uw[sj, cj]), (8) P(aj = Copy[i] | x, y1:j−1) ∝exp(eji). (9) The decoder chooses aj with a softmax over all these possible actions; yj is then a deterministic function of aj and x. During training, we maximize the log-likelihood of y, marginalizing out a. 
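A compact sketch of one decoder step with attention-based copying (Equations (3)-(9)). The shapes and names below are assumptions for illustration rather than the paper's Theano implementation: `b` stacks the context-sensitive encoder embeddings b_1..b_m as rows, `s_j` is the current decoder state, and `W_a`, `U` play the roles of W^(a) and U:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def decoder_step(s_j, b, W_a, U):
    # Assumed shapes: b (m, 2d), s_j (d_s,), W_a (d_s, 2d), U (|V_out|, d_s + 2d).
    e = b @ (W_a.T @ s_j)               # unnormalized attention scores e_ji     (3)
    alpha = softmax(e)                  # attention over input positions         (4)
    c_j = alpha @ b                     # context vector c_j                     (5)
    state = np.concatenate([s_j, c_j])  # [s_j, c_j]
    write_scores = U @ state            # one score per word in V_out            (8)
    copy_scores = e                     # one score per input position           (9)
    # Single softmax over Write[w] and Copy[i] actions; y_j is then a
    # deterministic function of the chosen action and the input x.
    action_probs = softmax(np.concatenate([write_scores, copy_scores]))
    return action_probs, c_j
```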
Attention-based copying can be seen as a combination of a standard softmax output layer of an attention-based model (Bahdanau et al., 2014) and a Pointer Network (Vinyals et al., 2015a); in a Pointer Network, the only way to generate output is to copy a symbol from the input. 4 Data Recombination 4.1 Motivation The main contribution of this paper is a novel data recombination framework that injects important prior knowledge into our oblivious sequence-tosequence RNN. In this framework, we induce a high-precision generative model from the training data, then sample from it to generate new training examples. The process of inducing this generative model can leverage any available prior knowledge, which is transmitted through the generated examples to the RNN model. A key advantage of our two-stage approach is that it allows us to declare desired properties of the task which might be hard to capture in the model architecture. 1On GEO and ATIS, we make a point not to rely on orthography for non-entities such as “state” to _state, since this leverages information not available to previous models (Zettlemoyer and Collins, 2005) and is much less languageindependent. 14 Examples (“what states border texas ?”, answer(NV, (state(V0), next_to(V0, NV), const(V0, stateid(texas))))) (“what is the highest mountain in ohio ?”, answer(NV, highest(V0, (mountain(V0), loc(V0, NV), const(V0, stateid(ohio)))))) Rules created by ABSENTITIES ROOT →⟨“what states border STATEID ?”, answer(NV, (state(V0), next_to(V0, NV), const(V0, stateid(STATEID ))))⟩ STATEID →⟨“texas”, texas ⟩ ROOT →⟨“what is the highest mountain in STATEID ?”, answer(NV, highest(V0, (mountain(V0), loc(V0, NV), const(V0, stateid(STATEID )))))⟩ STATEID →⟨“ohio”, ohio⟩ Rules created by ABSWHOLEPHRASES ROOT →⟨“what states border STATE ?”, answer(NV, (state(V0), next_to(V0, NV), STATE ))⟩ STATE →⟨“states border texas”, state(V0), next_to(V0, NV), const(V0, stateid(texas))⟩ ROOT →⟨“what is the highest mountain in STATE ?”, answer(NV, highest(V0, (mountain(V0), loc(V0, NV), STATE )))⟩ Rules created by CONCAT-2 ROOT →⟨SENT1 </s> SENT2, SENT1 </s> SENT2⟩ SENT →⟨“what states border texas ?”, answer(NV, (state(V0), next_to(V0, NV), const(V0, stateid(texas)))) ⟩ SENT →⟨“what is the highest mountain in ohio ?”, answer(NV, highest(V0, (mountain(V0), loc(V0, NV), const(V0, stateid(ohio))))) ⟩ Figure 3: Various grammar induction strategies illustrated on GEO. Each strategy converts the rules of an input grammar into rules of an output grammar. This figure shows the base case where the input grammar has rules ROOT →⟨x, y⟩for each (x, y) pair in the training dataset. Our approach generalizes data augmentation, which is commonly employed to inject prior knowledge into a model. Data augmentation techniques focus on modeling invariances— transformations like translating an image or adding noise that alter the inputs x, but do not change the output y. These techniques have proven effective in areas like computer vision (Krizhevsky et al., 2012) and speech recognition (Jaitly and Hinton, 2013). In semantic parsing, however, we would like to capture more than just invariance properties. Consider an example with the utterance “what states border texas ?”. Given this example, it should be easy to generalize to questions where “texas” is replaced by the name of any other state: simply replace the mention of Texas in the logical form with the name of the new state. 
Underlying this phenomenon is a strong conditional independence principle: the meaning of the rest of the sentence is independent of the name of the state in question. Standard data augmentation is not sufficient to model such phenomena: instead of holding y fixed, we would like to apply simultaneous transformations to x and y such that the new x still maps to the new y. Data recombination addresses this need. 4.2 General Setting In the general setting of data recombination, we start with a training set D of (x, y) pairs, which defines the empirical distribution ˆp(x, y). We then fit a generative model ˜p(x, y) to ˆp which generalizes beyond the support of ˆp, for example by splicing together fragments of different examples. We refer to examples in the support of ˜p as recombinant examples. Finally, to train our actual model pθ(y | x), we maximize the expected value of log pθ(y | x), where (x, y) is drawn from ˜p. 4.3 SCFGs for Semantic Parsing For semantic parsing, we induce a synchronous context-free grammar (SCFG) to serve as the backbone of our generative model ˜p. An SCFG consists of a set of production rules X →⟨α, β⟩, where X is a category (non-terminal), and α and β are sequences of terminal and non-terminal symbols. Any non-terminal symbols in α must be aligned to the same non-terminal symbol in β, and vice versa. Therefore, an SCFG defines a set of joint derivations of aligned pairs of strings. In our case, we use an SCFG to represent joint deriva15 tions of utterances x and logical forms y (which for us is just a sequence of tokens). After we induce an SCFG G from D, the corresponding generative model ˜p(x, y) is the distribution over pairs (x, y) defined by sampling from G, where we choose production rules to apply uniformly at random. It is instructive to compare our SCFG-based data recombination with WASP (Wong and Mooney, 2006; Wong and Mooney, 2007), which uses an SCFG as the actual semantic parsing model. The grammar induced by WASP must have good coverage in order to generalize to new inputs at test time. WASP also requires the implementation of an efficient algorithm for computing the conditional probability p(y | x). In contrast, our SCFG is only used to convey prior knowledge about conditional independence structure, so it only needs to have high precision; our RNN model is responsible for boosting recall over the entire input space. We also only need to forward sample from the SCFG, which is considerably easier to implement than conditional inference. Below, we examine various strategies for inducing a grammar G from a dataset D. We first encode D as an initial grammar with rules ROOT →⟨x, y⟩for each (x, y) ∈D. Next, we will define each grammar induction strategy as a mapping from an input grammar Gin to a new grammar Gout. This formulation allows us to compose grammar induction strategies (Section 4.3.4). 4.3.1 Abstracting Entities Our first grammar induction strategy, ABSENTITIES, simply abstracts entities with their types. We assume that each entity e (e.g., texas) has a corresponding type e.t (e.g., state), which we infer based on the presence of certain predicates in the logical form (e.g. stateid). 
For each grammar rule X →⟨α, β⟩in Gin, where α contains a token (e.g., “texas”) that string matches an entity (e.g., texas) in β, we add two rules to Gout: (i) a rule where both occurrences are replaced with the type of the entity (e.g., state), and (ii) a new rule that maps the type to the entity (e.g., STATEID →⟨“texas”, texas⟩; we reserve the category name STATE for the next section). Thus, Gout generates recombinant examples that fuse most of one example with an entity found in a second example. A concrete example from the GEO domain is given in Figure 3. 4.3.2 Abstracting Whole Phrases Our second grammar induction strategy, ABSWHOLEPHRASES, abstracts both entities and whole phrases with their types. For each grammar rule X →⟨α, β⟩in Gin, we add up to two rules to Gout. First, if α contains tokens that string match to an entity in β, we replace both occurrences with the type of the entity, similarly to rule (i) from ABSENTITIES. Second, if we can infer that the entire expression β evaluates to a set of a particular type (e.g. state) we create a rule that maps the type to ⟨α, β⟩. In practice, we also use some simple rules to strip question identifiers from α, so that the resulting examples are more natural. Again, refer to Figure 3 for a concrete example. This strategy works because of a more general conditional independence property: the meaning of any semantically coherent phrase is conditionally independent of the rest of the sentence, the cornerstone of compositional semantics. Note that this assumption is not always correct in general: for example, phenomena like anaphora that involve long-range context dependence violate this assumption. However, this property holds in most existing semantic parsing datasets. 4.3.3 Concatenation The final grammar induction strategy is a surprisingly simple approach we tried that turns out to work. For any k ≥2, we define the CONCAT-k strategy, which creates two types of rules. First, we create a single rule that has ROOT going to a sequence of k SENT’s. Then, for each rootlevel rule ROOT →⟨α, β⟩in Gin, we add the rule SENT →⟨α, β⟩to Gout. See Figure 3 for an example. Unlike ABSENTITIES and ABSWHOLEPHRASES, concatenation is very general, and can be applied to any sequence transduction problem. Of course, it also does not introduce additional information about compositionality or independence properties present in semantic parsing. However, it does generate harder examples for the attention-based RNN, since the model must learn to attend to the correct parts of the now-longer input sequence. Related work has shown that training a model on more difficult examples can improve generalization, the most canonical case being dropout (Hinton et al., 2012; Wager et al., 2013). 16 function TRAIN(dataset D, number of epochs T, number of examples to sample n) Induce grammar G from D Initialize RNN parameters θ randomly for each iteration t = 1, . . . , T do Compute current learning rate ηt Initialize current dataset Dt to D for i = 1, . . . , n do Sample new example (x′, y′) from G Add (x′, y′) to Dt end for Shuffle Dt for each example (x, y) in Dt do θ ←θ + ηt∇log pθ(y | x) end for end for end function Figure 4: The training procedure with data recombination. We first induce an SCFG, then sample new recombinant examples from it at each epoch. 4.3.4 Composition We note that grammar induction strategies can be composed, yielding more complex grammars. 
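To make this concrete, a grammar induction strategy can be viewed as a function from grammars to grammars, and composition as function nesting. The sketch below is a deliberately simplified stand-in, not the paper's implementation: rules are represented as (category, utterance tokens, logical-form tokens) triples, the entity table is assumed given, and the strategy shown only approximates ABSENTITIES from Section 4.3.1:

```python
# Grammar rules are (category, utterance_tokens, logical_form_tokens) triples.
ENTITY_TYPES = {"texas": "STATEID", "ohio": "STATEID"}   # entity -> type (illustrative)

def abs_entities(grammar):
    """Simplified ABSENTITIES: abstract matched entities with their type and
    add a type -> entity rule."""
    out = list(grammar)
    for cat, x, y in grammar:
        for ent, typ in ENTITY_TYPES.items():
            if ent in x and ent in y:
                out.append((cat,
                            [typ if tok == ent else tok for tok in x],
                            [typ if tok == ent else tok for tok in y]))
                out.append((typ, [ent], [ent]))
    return out

def compose(f1, f2):
    """f1 o f2: apply f2 to the grammar first, then f1."""
    return lambda grammar: f1(f2(grammar))

seed = [("ROOT",
         ["what", "states", "border", "texas", "?"],
         ["_answer", "(", "NV", ",", "(", "_state", "(", "V0", ")", ",",
          "_next_to", "(", "V0", ",", "NV", ")", ",", "_const", "(", "V0", ",",
          "_stateid", "(", "texas", ")", ")", ")", ")"])]
abstracted = abs_entities(seed)   # adds a ROOT rule with STATEID slots and a STATEID -> texas rule
```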
Given any two grammar induction strategies f1 and f2, the composition f1 ◦f2 is the grammar induction strategy that takes in Gin and returns f1(f2(Gin)). For the strategies we have defined, we can perform this operation symbolically on the grammar rules, without having to sample from the intermediate grammar f2(Gin). 5 Experiments We evaluate our system on three domains: GEO, ATIS, and OVERNIGHT. For ATIS, we report logical form exact match accuracy. For GEO and OVERNIGHT, we determine correctness based on denotation match, as in Liang et al. (2011) and Wang et al. (2015), respectively. 5.1 Choice of Grammar Induction Strategy We note that not all grammar induction strategies make sense for all domains. In particular, we only apply ABSWHOLEPHRASES to GEO and OVERNIGHT. We do not apply ABSWHOLEPHRASES to ATIS, as the dataset has little nesting structure. 5.2 Implementation Details We tokenize logical forms in a domain-specific manner, based on the syntax of the formal language being used. On GEO and ATIS, we disallow copying of predicate names to ensure a fair comparison to previous work, as string matching between input words and predicate names is not commonly used. We prevent copying by prepending underscores to predicate tokens; see Figure 2 for examples. On ATIS alone, when doing attention-based copying and data recombination, we leverage an external lexicon that maps natural language phrases (e.g., “kennedy airport”) to entities (e.g., jfk:ap). When we copy a word that is part of a phrase in the lexicon, we write the entity associated with that lexicon entry. When performing data recombination, we identify entity alignments based on matching phrases and entities from the lexicon. We run all experiments with 200 hidden units and 100-dimensional word vectors. We initialize all parameters uniformly at random within the interval [−0.1, 0.1]. We maximize the loglikelihood of the correct logical form using stochastic gradient descent. We train the model for a total of 30 epochs with an initial learning rate of 0.1, and halve the learning rate every 5 epochs, starting after epoch 15. We replace word vectors for words that occur only once in the training set with a universal <unk> word vector. Our model is implemented in Theano (Bergstra et al., 2010). When performing data recombination, we sample a new round of recombinant examples from our grammar at each epoch. We add these examples to the original training dataset, randomly shuffle all examples, and train the model for the epoch. Figure 4 gives pseudocode for this training procedure. One important hyperparameter is how many examples to sample at each epoch: we found that a good rule of thumb is to sample as many recombinant examples as there are examples in the training dataset, so that half of the examples the model sees at each epoch are recombinant. At test time, we use beam search with beam size 5. We automatically balance missing right parentheses by adding them at the end. On GEO and OVERNIGHT, we then pick the highest-scoring logical form that does not yield an executor error when the corresponding denotation is computed. On ATIS, we just pick the top prediction on the beam. 5.3 Impact of the Copying Mechanism First, we measure the contribution of the attentionbased copying mechanism to the model’s overall 17 GEO ATIS OVERNIGHT No Copying 74.6 69.9 76.7 With Copying 85.0 76.3 75.8 Table 1: Test accuracy on GEO, ATIS, and OVERNIGHT, both with and without copying. On OVERNIGHT, we average across all eight domains. 
GEO ATIS Previous Work Zettlemoyer and Collins (2007) 84.6 Kwiatkowski et al. (2010) 88.9 Liang et al. (2011)2 91.1 Kwiatkowski et al. (2011) 88.6 82.8 Poon (2013) 83.5 Zhao and Huang (2015) 88.9 84.2 Our Model No Recombination 85.0 76.3 ABSENTITIES 85.4 79.9 ABSWHOLEPHRASES 87.5 CONCAT-2 84.6 79.0 CONCAT-3 77.5 AWP + AE 88.9 AE + C2 78.8 AWP + AE + C2 89.3 AE + C3 83.3 Table 2: Test accuracy using different data recombination strategies on GEO and ATIS. AE is ABSENTITIES, AWP is ABSWHOLEPHRASES, C2 is CONCAT-2, and C3 is CONCAT-3. performance. On each task, we train and evaluate two models: one with the copying mechanism, and one without. Training is done without data recombination. The results are shown in Table 1. On GEO and ATIS, the copying mechanism helps significantly: it improves test accuracy by 10.4 percentage points on GEO and 6.4 points on ATIS. However, on OVERNIGHT, adding the copying mechanism actually makes our model perform slightly worse. This result is somewhat expected, as the OVERNIGHT dataset contains a very small number of distinct entities. It is also notable that both systems surpass the previous best system on OVERNIGHT by a wide margin. We choose to use the copying mechanism in all subsequent experiments, as it has a large advantage in realistic settings where there are many distinct entities in the world. The concurrent work of Gu et al. (2016) and Gulcehre et al. (2016), both of whom propose similar copying mechanisms, provides additional evidence for the utility of copying on a wide range of NLP tasks. 5.4 Main Results 2The method of Liang et al. (2011) is not comparable to For our main results, we train our model with a variety of data recombination strategies on all three datasets. These results are summarized in Tables 2 and 3. We compare our system to the baseline of not using any data recombination, as well as to state-of-the-art systems on all three datasets. We find that data recombination consistently improves accuracy across the three domains we evaluated on, and that the strongest results come from composing multiple strategies. Combining ABSWHOLEPHRASES, ABSENTITIES, and CONCAT-2 yields a 4.3 percentage point improvement over the baseline without data recombination on GEO, and an average of 1.7 percentage points on OVERNIGHT. In fact, on GEO, we achieve test accuracy of 89.3%, which surpasses the previous state-of-the-art, excluding Liang et al. (2011), which used a seed lexicon for predicates. On ATIS, we experiment with concatenating more than 2 examples, to make up for the fact that we cannot apply ABSWHOLEPHRASES, which generates longer examples. We obtain a test accuracy of 83.3 with ABSENTITIES composed with CONCAT-3, which beats the baseline by 7 percentage points and is competitive with the state-of-theart. Data recombination without copying. For completeness, we also investigated the effects of data recombination on the model without attention-based copying. We found that recombination helped significantly on GEO and ATIS, but hurt the model slightly on OVERNIGHT. On GEO, the best data recombination strategy yielded test accuracy of 82.9%, for a gain of 8.3 percentage points over the baseline with no copying and no recombination; on ATIS, data recombination gives test accuracies as high as 74.6%, a 4.7 point gain over the same baseline. However, no data recombination strategy improved average test accuracy on OVERNIGHT; the best one resulted in a 0.3 percentage point decrease in test accuracy. 
We hypothesize that data recombination helps less on OVERNIGHT in general because the space of possible logical forms is very limited, making it more like a large multiclass classification task. Therefore, it is less important for the model to learn good compositional representations that generalize to new logical forms at test time. ours, as they as they used a seed lexicon mapping words to predicates. We explicitly avoid using such prior knowledge in our system. 18 BASKETBALL BLOCKS CALENDAR HOUSING PUBLICATIONS RECIPES RESTAURANTS SOCIAL Avg. Previous Work Wang et al. (2015) 46.3 41.9 74.4 54.0 59.0 70.8 75.9 48.2 58.8 Our Model No Recombination 85.2 58.1 78.0 71.4 76.4 79.6 76.2 81.4 75.8 ABSENTITIES 86.7 60.2 78.0 65.6 73.9 77.3 79.5 81.3 75.3 ABSWHOLEPHRASES 86.7 55.9 79.2 69.8 76.4 77.8 80.7 80.9 75.9 CONCAT-2 84.7 60.7 75.6 69.8 74.5 80.1 79.5 80.8 75.7 AWP + AE 85.2 54.1 78.6 67.2 73.9 79.6 81.9 82.1 75.3 AWP + AE + C2 87.5 60.2 81.0 72.5 78.3 81.0 79.5 79.6 77.5 Table 3: Test accuracy using different data recombination strategies on the OVERNIGHT tasks. Depth-2 (same length) x: “rel:12 of rel:17 of ent:14” y: ( _rel:12 ( _rel:17 _ent:14 ) ) Depth-4 (longer) x: “rel:23 of rel:36 of rel:38 of rel:10 of ent:05” y: ( _rel:23 ( _rel:36 ( _rel:38 ( _rel:10 _ent:05 ) ) ) ) Figure 5: A sample of our artificial data. 0 100 200 300 400 500 0 20 40 60 80 100 Number of additional examples Test accuracy (%) Same length, independent Longer, independent Same length, recombinant Longer, recombinant Figure 6: The results of our artificial data experiments. We see that the model learns more from longer examples than from same-length examples. 5.5 Effect of Longer Examples Interestingly, strategies like ABSWHOLEPHRASES and CONCAT-2 help the model even though the resulting recombinant examples are generally not in the support of the test distribution. In particular, these recombinant examples are on average longer than those in the actual dataset, which makes them harder for the attention-based model. Indeed, for every domain, our best accuracy numbers involved some form of concatenation, and often involved ABSWHOLEPHRASES as well. In comparison, applying ABSENTITIES alone, which generates examples of the same length as those in the original dataset, was generally less effective. We conducted additional experiments on artificial data to investigate the importance of adding longer, harder examples. We experimented with adding new examples via data recombination, as well as adding new independent examples (e.g. to simulate the acquisition of more training data). We constructed a simple world containing a set of entities and a set of binary relations. For any n, we can generate a set of depth-n examples, which involve the composition of n relations applied to a single entity. Example data points are shown in Figure 5. We train our model on various datasets, then test it on a set of 500 randomly chosen depth-2 examples. The model always has access to a small seed training set of 100 depth-2 examples. We then add one of four types of examples to the training set: • Same length, independent: New randomly chosen depth-2 examples.3 • Longer, independent: Randomly chosen depth-4 examples. • Same length, recombinant: Depth-2 examples sampled from the grammar induced by applying ABSENTITIES to the seed dataset. • Longer, recombinant: Depth-4 examples sampled from the grammar induced by applying ABSWHOLEPHRASES followed by ABSENTITIES to the seed dataset. 
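A short sketch of how such depth-n examples can be generated; the relation/entity naming follows Figure 5, while the id ranges and the sampling-with-replacement scheme are assumptions:

```python
import random

def make_example(depth, rng, n_rels=40, n_ents=20):
    """Build one artificial example: a chain of `depth` relations applied to an entity."""
    rels = [f"rel:{rng.randrange(n_rels):02d}" for _ in range(depth)]
    ent = f"ent:{rng.randrange(n_ents):02d}"
    x = " of ".join(rels + [ent])
    y = "".join(f"( _{r} " for r in rels) + f"_{ent} " + " ".join(")" * depth)
    return x, y

rng = random.Random(0)
seed_set = [make_example(2, rng) for _ in range(100)]   # depth-2 seed training set
deeper = [make_example(4, rng) for _ in range(500)]     # longer, depth-4 examples
```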
To maintain consistency between the independent and recombinant experiments, we fix the recombinant examples across all epochs, instead of resampling at every epoch. In Figure 6, we plot accuracy on the test set versus the number of additional examples added of each of these four types. As 3Technically, these are not completely independent, as we sample these new examples without replacement. The same applies to the longer “independent” examples. 19 expected, independent examples are more helpful than the recombinant ones, but both help the model improve considerably. In addition, we see that even though the test dataset only has short examples, adding longer examples helps the model more than adding shorter ones, in both the independent and recombinant cases. These results underscore the importance training on longer, harder examples. 6 Discussion In this paper, we have presented a novel framework we term data recombination, in which we generate new training examples from a highprecision generative model induced from the original training dataset. We have demonstrated its effectiveness in improving the accuracy of a sequence-to-sequence RNN model on three semantic parsing datasets, using a synchronous context-free grammar as our generative model. There has been growing interest in applying neural networks to semantic parsing and related tasks. Dong and Lapata (2016) concurrently developed an attention-based RNN model for semantic parsing, although they did not use data recombination. Grefenstette et al. (2014) proposed a non-recurrent neural model for semantic parsing, though they did not run experiments. Mei et al. (2016) use an RNN model to perform a related task of instruction following. Our proposed attention-based copying mechanism bears a strong resemblance to two models that were developed independently by other groups. Gu et al. (2016) apply a very similar copying mechanism to text summarization and singleturn dialogue generation. Gulcehre et al. (2016) propose a model that decides at each step whether to write from a “shortlist” vocabulary or copy from the input, and report improvements on machine translation and text summarization. Another piece of related work is Luong et al. (2015b), who train a neural machine translation system to copy rare words, relying on an external system to generate alignments. Prior work has explored using paraphrasing for data augmentation on NLP tasks. Zhang et al. (2015) augment their data by swapping out words for synonyms from WordNet. Wang and Yang (2015) use a similar strategy, but identify similar words and phrases based on cosine distance between vector space embeddings. Unlike our data recombination strategies, these techniques only change inputs x, while keeping the labels y fixed. Additionally, these paraphrasing-based transformations can be described in terms of grammar induction, so they can be incorporated into our framework. In data recombination, data generated by a highprecision generative model is used to train a second, domain-general model. Generative oversampling (Liu et al., 2007) learns a generative model in a multiclass classification setting, then uses it to generate additional examples from rare classes in order to combat label imbalance. Uptraining (Petrov et al., 2010) uses data labeled by an accurate but slow model to train a computationally cheaper second model. Vinyals et al. 
(2015b) generate a large dataset of constituency parse trees by taking sentences that multiple existing systems parse in the same way, and train a neural model on this dataset. Some of our induced grammars generate examples that are not in the test distribution, but nonetheless aid in generalization. Related work has also explored the idea of training on altered or out-of-domain data, often interpreting it as a form of regularization. Dropout training has been shown to be a form of adaptive regularization (Hinton et al., 2012; Wager et al., 2013). Guu et al. (2015) showed that encouraging a knowledge base completion model to handle longer path queries acts as a form of structural regularization. Language is a blend of crisp regularities and soft relationships. Our work takes RNNs, which excel at modeling soft phenomena, and uses a highly structured tool—synchronous context free grammars—to infuse them with an understanding of crisp structure. We believe this paradigm for simultaneously modeling the soft and hard aspects of language should have broader applicability beyond semantic parsing. Acknowledgments This work was supported by the NSF Graduate Research Fellowship under Grant No. DGE-114747, and the DARPA Communicating with Computers (CwC) program under ARO prime contract no. W911NF-15-1-0462. Reproducibility. All code, data, and experiments for this paper are available on the CodaLab platform at https: //worksheets.codalab.org/worksheets/ 0x50757a37779b485f89012e4ba03b6f4f/. 20 References Y. Artzi and L. Zettlemoyer. 2013a. UW SPF: The University of Washington semantic parsing framework. arXiv preprint arXiv:1311.3011. Y. Artzi and L. Zettlemoyer. 2013b. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics (TACL), 1:49– 62. D. Bahdanau, K. Cho, and Y. Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. J. Berant, A. Chou, R. Frostig, and P. Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP). J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio. 2010. Theano: a CPU and GPU math expression compiler. In Python for Scientific Computing Conference. J. Clarke, D. Goldwasser, M. Chang, and D. Roth. 2010. Driving semantic parsing from the world’s response. In Computational Natural Language Learning (CoNLL), pages 18–27. L. Dong and M. Lapata. 2016. Language to logical form with neural attention. In Association for Computational Linguistics (ACL). C. Dyer, M. Ballesteros, W. Ling, A. Matthews, and N. A. Smith. 2015. Transition-based dependency parsing with stack long short-term memory. In Association for Computational Linguistics (ACL). E. Grefenstette, P. Blunsom, N. de Freitas, and K. M. Hermann. 2014. A deep architecture for semantic parsing. In ACL Workshop on Semantic Parsing, pages 22–27. J. Gu, Z. Lu, H. Li, and V. O. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Association for Computational Linguistics (ACL). C. Gulcehre, S. Ahn, R. Nallapati, B. Zhou, and Y. Bengio. 2016. Pointing the unknown words. In Association for Computational Linguistics (ACL). K. Guu, J. Miller, and P. Liang. 2015. Traversing knowledge graphs in vector space. In Empirical Methods in Natural Language Processing (EMNLP). G. E. Hinton, N. Srivastava, A. Krizhevsky, I. 
Sutskever, and R. R. Salakhutdinov. 2012. Improving neural networks by preventing coadaptation of feature detectors. arXiv preprint arXiv:1207.0580. S. Hochreiter and J. Schmidhuber. 1997. Long shortterm memory. Neural Computation, 9(8):1735– 1780. N. Jaitly and G. E. Hinton. 2013. Vocal tract length perturbation (vtlp) improves speech recognition. In International Conference on Machine Learning (ICML). A. Krizhevsky, I. Sutskever, and G. E. Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 1097–1105. N. Kushman and R. Barzilay. 2013. Using semantic unification to generate regular expressions from natural language. In Human Language Technology and North American Association for Computational Linguistics (HLT/NAACL), pages 826–836. T. Kwiatkowski, L. Zettlemoyer, S. Goldwater, and M. Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification. In Empirical Methods in Natural Language Processing (EMNLP), pages 1223–1233. T. Kwiatkowski, L. Zettlemoyer, S. Goldwater, and M. Steedman. 2011. Lexical generalization in CCG grammar induction for semantic parsing. In Empirical Methods in Natural Language Processing (EMNLP), pages 1512–1523. P. Liang, M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In Association for Computational Linguistics (ACL), pages 590–599. A. Liu, J. Ghosh, and C. Martin. 2007. Generative oversampling for mining imbalanced datasets. In International Conference on Data Mining (DMIN). M. Luong, H. Pham, and C. D. Manning. 2015a. Effective approaches to attention-based neural machine translation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1412–1421. M. Luong, I. Sutskever, Q. V. Le, O. Vinyals, and W. Zaremba. 2015b. Addressing the rare word problem in neural machine translation. In Association for Computational Linguistics (ACL), pages 11– 19. H. Mei, M. Bansal, and M. R. Walter. 2016. Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. In Association for the Advancement of Artificial Intelligence (AAAI). S. Petrov, P. Chang, M. Ringgaard, and H. Alshawi. 2010. Uptraining for accurate deterministic question parsing. In Empirical Methods in Natural Language Processing (EMNLP). H. Poon. 2013. Grounded unsupervised semantic parsing. In Association for Computational Linguistics (ACL). 21 I. Sutskever, O. Vinyals, and Q. V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 3104–3112. O. Vinyals, M. Fortunato, and N. Jaitly. 2015a. Pointer networks. In Advances in Neural Information Processing Systems (NIPS), pages 2674–2682. O. Vinyals, L. Kaiser, T. Koo, S. Petrov, I. Sutskever, and G. Hinton. 2015b. Grammar as a foreign language. In Advances in Neural Information Processing Systems (NIPS), pages 2755–2763. S. Wager, S. I. Wang, and P. Liang. 2013. Dropout training as adaptive regularization. In Advances in Neural Information Processing Systems (NIPS). W. Y. Wang and D. Yang. 2015. That’s so annoying!!!: A lexical and frame-semantic embedding based data augmentation approach to automatic categorization of annoying behaviors using #petpeeve tweets. In Empirical Methods in Natural Language Processing (EMNLP). Y. Wang, J. Berant, and P. Liang. 2015. Building a semantic parser overnight. In Association for Computational Linguistics (ACL). Y. W. Wong and R. J. Mooney. 2006. 
Learning for semantic parsing with statistical machine translation. In North American Association for Computational Linguistics (NAACL), pages 439–446. Y. W. Wong and R. J. Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In Association for Computational Linguistics (ACL), pages 960–967. M. Zelle and R. J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Association for the Advancement of Artificial Intelligence (AAAI), pages 1050–1055. L. S. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Uncertainty in Artificial Intelligence (UAI), pages 658– 666. L. S. Zettlemoyer and M. Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL), pages 678–687. X. Zhang, J. Zhao, and Y. LeCun. 2015. Characterlevel convolutional networks for text classification. In Advances in Neural Information Processing Systems (NIPS). K. Zhao and L. Huang. 2015. Type-driven incremental semantic parsing with polymorphism. In North American Association for Computational Linguistics (NAACL). 22
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 205–215, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Adaptive Joint Learning of Compositional and Non-Compositional Phrase Embeddings Kazuma Hashimoto and Yoshimasa Tsuruoka The University of Tokyo, 3-7-1 Hongo, Bunkyo-ku, Tokyo, Japan {hassy,tsuruoka}@logos.t.u-tokyo.ac.jp Abstract We present a novel method for jointly learning compositional and noncompositional phrase embeddings by adaptively weighting both types of embeddings using a compositionality scoring function. The scoring function is used to quantify the level of compositionality of each phrase, and the parameters of the function are jointly optimized with the objective for learning phrase embeddings. In experiments, we apply the adaptive joint learning method to the task of learning embeddings of transitive verb phrases, and show that the compositionality scores have strong correlation with human ratings for verb-object compositionality, substantially outperforming the previous state of the art. Moreover, our embeddings improve upon the previous best model on a transitive verb disambiguation task. We also show that a simple ensemble technique further improves the results for both tasks. 1 Introduction Representing words and phrases in a vector space has proven effective in a variety of language processing tasks (Pham et al., 2015; Sutskever et al., 2014). In most of the previous work, phrase embeddings are computed from word embeddings by using various kinds of composition functions. Such composed embeddings are called compositional embeddings. An alternative way of computing phrase embeddings is to treat phrases as single units and assigning a unique embedding to each candidate phrase (Mikolov et al., 2013; Yazdani et al., 2015). Such embeddings are called noncompositional embeddings. Relying solely on non-compositional embeddings has the obvious problem of data sparsity (i.e. rare or unknown phrase problems). At the same time, however, using compositional embeddings is not always the best option since some phrases are inherently non-compositional. For example, the phrase “bear fruits” means “to yield results”1 but it is hard to infer its meaning by composing the meanings of “bear” and “fruit”. Treating all phrases as compositional also has a negative effect in learning the composition function because the words in those idiomatic phrases are not just uninformative but can serve as noisy samples in the training. These problems have motivated us to adaptively combine both types of embeddings. Most of the existing methods for learning phrase embeddings can be divided into two approaches. One approach is to learn compositional embeddings by regarding all phrases as compositional (Pham et al., 2015; Socher et al., 2012). The other approach is to learn both types of embeddings separately and use the better ones (Kartsaklis et al., 2014; Muraoka et al., 2014). Kartsaklis et al. (2014) show that non-compositional embeddings are better suited for a phrase similarity task, whereas Muraoka et al. (2014) report the opposite results on other tasks. These results suggest that we should not stick to either of the two types of embeddings unconditionally and could learn better phrase embeddings by considering the compositionality levels of the individual phrases in a more flexible fashion. 
In this paper, we propose a method that jointly learns compositional and non-compositional embeddings by adaptively weighting both types of phrase embeddings using a compositionality scoring function. The scoring function is used to quantify the level of compositionality of each phrase 1The definition is found at http://idioms. thefreedictionary.com/bear+fruit. 205 Figure 1: The overview of our method and examples of the compositionality scores. Given a phrase p, our method first computes the compositionality score α(p) (Eq. (3)), and then computes the phrase embedding v(p) using the compositional and non-compositional embeddings, c(p) and n(p), respectively (Eq. (2)). and learned in conjunction with the target task for learning phrase embeddings. In experiments, we apply our method to the task of learning transitive verb phrase embeddings and demonstrate that it allows us to achieve state-of-the-art performance on standard datasets for compositionality detection and verb disambiguation. 2 Method In this section, we describe our approach in the most general form, without specifying the function to compute the compositional embeddings or the target task for optimizing the embeddings. Figure 1 shows the overview of our proposed method. At each iteration of the training (i.e. gradient calculation) of a certain target task (e.g. language modeling or sentiment analysis), our method first computes a compositionality score for each phrase. Then the score is used to weight the compositional and non-compositional embeddings of the phrase in order to compute the expected embedding of the phrase which is to be used in the target task. Some examples of the compositionality scores are also shown in the figure. 2.1 Compositional Phrase Embeddings The compositional embedding c(p) ∈Rd×1 of a phrase p = (w1, · · · , wL) is formulated as c(p) = f(v(w1), · · · , v(wL)), (1) where d is the dimensionality, L is the phrase length, v(·) ∈Rd×1 is a word embedding, and f(·) is a composition function. The function can be simple ones such as element-wise addition or multiplication (Mitchell and Lapata, 2008). More complex ones such as recurrent neural networks (Sutskever et al., 2014) are also commonly used. The word embeddings and the composition function are jointly learned on a certain target task. Since compositional embeddings are built on word-level (i.e. unigram) information, they are less prone to the data sparseness problem. 2.2 Non-Compositional Phrase Embeddings In contrast to the compositional embedding, the non-compositional embedding of a phrase n(p) ∈ Rd×1 is independently parameterized, i.e., the phrase p is treated just like a single word. Mikolov et al. (2013) show that non-compositional embeddings are preferable when dealing with idiomatic phrases. Some recent studies (Kartsaklis et al., 2014; Muraoka et al., 2014) have discussed the (dis)advantages of using compositional or non-compositional embeddings. However, in most cases, a phrase is neither completely compositional nor completely non-compositional. To the best of our knowledge, there is no method that allows us to jointly learn both types of phrase embeddings by incorporating the levels of compositionality of the phrases as real-valued scores. 
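As a minimal illustration of the two kinds of embeddings introduced above, the sketch below uses element-wise addition as the composition function f and a per-phrase lookup table for the non-compositional embeddings; the dimensionality, initialization, and tiny vocabulary are illustrative assumptions only.

import numpy as np

d = 25  # embedding dimensionality (illustrative)
rng = np.random.RandomState(0)
word_emb   = {w: rng.randn(d) / np.sqrt(d) for w in ["buy", "car", "bear", "fruit"]}
phrase_emb = {p: rng.randn(d) / np.sqrt(d) for p in ["buy car", "bear fruit"]}

def c(p):
    """Compositional embedding c(p), Eq. (1), with element-wise addition as f."""
    return np.sum([word_emb[w] for w in p.split()], axis=0)

def n(p):
    """Non-compositional embedding n(p): the phrase is parameterized as a single unit."""
    return phrase_emb[p]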
2.3 Adaptive Joint Learning To simultaneously consider both compositional and non-compositional aspects of each phrase, we compute a phrase embedding v(p) by adaptively weighting c(p) and n(p) as follows: v(p) = α(p)c(p) + (1 −α(p))n(p), (2) where α(·) is a scoring function that quantifies the compositionality levels, and outputs a real value ranging from 0 to 1. What we expect from the scoring function is that large scores indicate high levels of compositionality. In other words, when α(p) is close to 1, the compositional embedding is mainly considered, and vice versa. For example, we expect α(buy car) to be large and α(bear fruit) to be small as shown in Figure 1. We parameterize the scoring function α(p) as logistic regression: α(p) = σ(W · φ(p)), (3) where φ(p) ∈RN×1 is a feature vector of the phrase p, W ∈RN×1 is a weight vector, N is the number of features, and σ(·) is the logistic function. The weight vector W is jointly optimized in conjunction with the objective J for the target task of learning phrase embeddings v(p). 206 Updating the model parameters Given the partial derivative δp = ∂J ∂v(p) ∈Rd×1 for the target task, we can compute the partial derivative for updating W as follows: δα = α(p)(1 −α(p)){δp · (c(p) −n(p))} (4) ∂J ∂W = δαφ(p). (5) If φ(p) is not constructed by static features but is computed by a feature learning model such as neural networks, we can propagate the error term δα into the feature learning model by the following equation: ∂J ∂φ(p) = δαW . (6) When we use only static features, as in this work, we can simply compute the partial derivatives of J with respect to c(p) and n(p) as follows: ∂J ∂c(p) = α(p)δp (7) ∂J ∂n(p) = (1 −α(p))δp. (8) As mentioned above, Eq. (7) and (8) show that the non-compositional embeddings are mainly updated when α(p) is close to 0, and vice versa. The partial derivative ∂J ∂c(p) is used to update the model parameters in the composition function via the backpropagation algorithm. Any differentiable composition functions can be used in our method. Expected behavior of our method The training of our method depends on the target task; that is, the model parameters are updated so as to minimize the cost function as described above. More concretely, α(p) for each phrase p is adaptively adjusted so that the corresponding parameter updates contribute to minimizing the cost function. As a result, different phrases will have different α(p) values depending on their compositionality. If the size of the training data were almost infinitely large, α(p) for all phrases would become nearly zero, and the non-compositional embeddings n(p) are dominantly used (since that would allow the model to better fit the data). In reality, however, the amount of the training data is limited, and thus the compositional embeddings c(p) are effectively used to overcome the data sparseness problem. 3 Learning Verb Phrase Embeddings This section describes a particular instantiation of our approach presented in the previous section, focusing on the task of learning the embeddings of transitive verb phrases. 3.1 Word and Phrase Prediction in Predicate-Argument Relations Acquisition of selectional preference using embeddings has been widely studied, where word and/or phrase embeddings are learned based on syntactic links (Bansal et al., 2014; Hashimoto and Tsuruoka, 2015; Levy and Goldberg, 2014; Van de Cruys, 2014). As with language modeling, these methods perform word (or phrase) prediction using (syntactic) contexts. 
In this work, we focus on verb-object relationships and employ a phrase embedding learning method presented in Hashimoto and Tsuruoka (2015). The task is a plausibility judgment task for predicate-argument tuples. They extracted Subject-Verb-Object (SVO) and SVO-PrepositionNoun (SVOPN) tuples using a probabilistic HPSG parser, Enju (Miyao and Tsujii, 2008), from the training corpora. Transitive verbs and prepositions are extracted as predicates with two arguments. For example, the extracted tuples include (S, V, O) = (“importer”, “make”, “payment”) and (SVO, P, N) = (“importer make payment”, “in”, “currency”). The task is to discriminate between observed and unobserved tuples, such as the (S, V, O) tuple mentioned above and (S, V’, O) = (“importer”, “eat”, “payment”), which is generated by replacing “make” with “eat”. The (S, V’, O) tuple is unlikely to be observed. For each tuple (p, a1, a2) observed in the training data, a cost function is defined as follows: −log σ(s(p, a1, a2))−log σ(−s(p′, a1, a2)) −log σ(−s(p, a′ 1, a2)) −log σ(−s(p, a1, a′ 2)), (9) where s(·) is a plausibility scoring function, and p, a1 and a2 are a predicate and its arguments, respectively. Each of the three unobserved tuples (p′, a1, a2), (p, a′ 1, a2), and (p, a1, a′ 2) is generated by replacing one of the entries with a random sample. In their method, each predicate p is represented with a matrix M(p) ∈Rd×d and each argument a with an embedding v(a) ∈Rd×1. The matrices and embeddings are learned by minimizing the cost function using AdaGrad (Duchi et al., 2011). The scoring function is parameterized as s(p, a1, a2) = v(a1) · (M(p)v(a2)), (10) 207 and the VO and SVO embeddings are computed as v(V O) = M(V )v(O) (11) v(SV O) = v(S) ⊙v(V O), (12) as proposed by Kartsaklis et al. (2012). The operator ⊙denotes element-wise multiplication. In summary, the scores are computed as s(V, S, O) = v(S) · v(V O) (13) s(P, SV O, N) = v(SV O) · (M(P)v(N)). (14) With this method, the word and composed phrase embeddings are jointly learned based on cooccurrence statistics of predicate-argument structures. Using the learned embeddings, they achieved state-of-the-art accuracy on a transitive verb disambiguation task (Grefenstette and Sadrzadeh, 2011). 3.2 Applying the Adaptive Joint Learning In this section, we apply our adaptive joint learning method to the task described in Section 3.1. We here redefine the computation of v(V O) by first replacing v(V O) in Eq. (11) with c(V O) as, c(V O) = M(V )v(O), (15) and then assigning V O to p in Eq. (2) and (3): v(V O) = α(V O)c(V O) + (1 −α(V O))n(V O), (16) α(V O) = σ(W · φ(V O)). (17) The v(V O) in Eq. (16) is used in Eq. (12) and (13). We assume that the candidates of the phrases are given in advance. For the phrases not included in the candidates, we set v(V O) = c(V O). This is analogous to the way a human guesses the meaning of an idiomatic phrase she does not know. We should note that φ(V O) can be computed for phrases not included in the candidates, using partial features among the features described below. If any features do not fire, φ(V O) becomes 0.5 according to the logistic function. For the feature vector φ(V O), we use the following simple binary and real-valued features: • indices of V, O, and VO • frequency and Pointwise Mutual Information (PMI) values of VO. More concretely, the first set of the features (indices of V, O, and VO) is the concatenation of traditional one-hot vectors. 
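Before turning to the remaining features, here is a minimal sketch of the verb-object instantiation above (Eq. (15)-(17)) together with the SVO representation and plausibility score (Eq. (12)-(13)). The feature vector phi(VO) is passed in as a stub (its frequency and PMI components are described next), and all names and shapes are illustrative.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def vo_embedding(M_V, v_O, n_VO, W, phi_VO):
    """Adaptive VO embedding, Eq. (15)-(17)."""
    c_VO = M_V.dot(v_O)                # c(VO) = M(V) v(O), Eq. (15)
    alpha = sigmoid(W.dot(phi_VO))     # compositionality score alpha(VO), Eq. (17)
    # alpha also scales the gradients: updates mostly reach c(VO) when alpha is
    # close to 1 and mostly reach n(VO) when it is close to 0 (Eq. (7)-(8)).
    return alpha * c_VO + (1.0 - alpha) * n_VO   # Eq. (16)

def svo_score(v_S, M_V, v_O, n_VO, W, phi_VO):
    """SVO representation and plausibility score, Eq. (12)-(13)."""
    v_VO = vo_embedding(M_V, v_O, n_VO, W, phi_VO)
    v_SVO = v_S * v_VO                 # element-wise product, Eq. (12)
    score = v_S.dot(v_VO)              # s(V, S, O), Eq. (13)
    return v_SVO, score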
The second set of features, frequency and PMI (Church and Hanks, 1990) features, have proven effective in detecting the compositionality of transitive verbs in McCarthy et al. (2007) and Venkatapathy and Joshi (2005). Given the training corpus, the frequency feature for a VO pair is computed as freq(V O) = log(count(V O)), (18) where count(V O) counts how many times the VO pair appears in the training corpus, and the PMI feature is computed as PMI(V O) = log count(V O)count(∗) count(V )count(O) , (19) where count(V ), count(O), and count(∗) are the counts of the verb V , the object O, and all VO pairs in the training corpus, respectively. We normalize the frequency and PMI features so that their maximum absolute value becomes 1. 4 Experimental Settings 4.1 Training Data As the training data, we used two datasets, one small and one large: the British National Corpus (BNC) (Leech, 1992) and the English Wikipedia. More concretely, we used the publicly available data2 preprocessed by Hashimoto and Tsuruoka (2015). The BNC data consists of 1.38 million SVO tuples and 0.93 million SVOPN tuples. The Wikipedia data consists of 23.6 million SVO tuples and 17.3 million SVOPN tuples. Following the provided code3, we used exactly the same train/development/test split (0.8/0.1/0.1) for training the overall model. As the third training data, we also used the concatenation of the two data, which is hereafter referred to as BNC-Wikipedia. We applied our adaptive joint learning method to verb-object phrases observed more than K times in each corpus. K was set to 10 for the BNC data and 100 for the Wikipedia and BNC-Wikipedia data. Consequently, the non-compositional embeddings were assigned to 17,817, 28,933, and 30,682 verb-object phrase types in the BNC, Wikipedia, and BNC-Wikipedia data, respectively. 2http://www.logos.t.u-tokyo.ac.jp/ ˜hassy/publications/cvsc2015/ 3https://github.com/hassyGo/ SVOembedding 208 4.2 Training Details The model parameters consist of d-dimensional word embeddings for nouns, non-compositional phrase embeddings, d×d-dimensional matrices for verbs and prepositions, and a weight vector W for α(V O). All the model parameters are jointly optimized. We initialized the embeddings and matrices with zero-mean gaussian random values with a variance of 1 d and 1 d2 , respectively, and W with zeros. Initializing W with zeros forces the initial value of each α(V O) to be 0.5 since we use the logistic function to compute α(V O). The optimization was performed via minibatch AdaGrad (Duchi et al., 2011). We fixed d to 25 and the mini-batch size to 100. We set candidate values for the learning rate ε to {0.01, 0.02, 0.03, 0.04, 0.05}. For the weight vector W , we employed L2norm regularization and set the coefficient λ to {10−3, 10−4, 10−5, 10−6, 0}. For selecting the hyperparameters, each training process was stopped when the evaluation score on the development split decreased. Then the best performing hyperparameters were selected for each training dataset. Consequently, ε was set to 0.05 for all training datasets, and λ was set to 10−6, 10−3, and 10−5 for the BNC, Wikipedia, and BNCWikipedia data, respectively. Once the training is finished, we can use the learned embeddings and the scoring function in downstream target tasks. 5 Evaluation on the Compositionality Detection Function 5.1 Evaluation Settings Datasets First, we evaluated the learned compositionality detection function on two datasets, VJ’054 and MC’075, provided by Venkatapathy and Joshi (2005) and McCarthy et al. 
(2007), respectively. VJ’05 consists of 765 verb-object pairs with human ratings for the compositionality. MC’07 is a subset of VJ’05 and consists of 638 verb-object pairs. For example, the rating of “buy car” is 6, which is the highest score, indicating the phrase is highly compositional. The rating of “bear fruit ” is 1, which is the lowest score, indicating the phrase is highly non-compositional. 4http://www.dianamccarthy.co.uk/ downloads/SVAJ2005compositionality_ rating.txt 5http://www.dianamccarthy.co.uk/ downloads/emnlp2007data.txt Method MC’07 VJ’05 Proposed method (Wikipedia) 0.508 0.514 Proposed method (BNC) 0.507 0.507 Proposed method (BNC-Wikipedia) 0.518 0.527 Proposed method (Ensemble) 0.550 0.552 Kiela and Clark (2013) w/ WordNet n/a 0.461 Kiela and Clark (2013) n/a 0.420 DSPROTO (McCarthy et al., 2007) 0.398 n/a PMI (McCarthy et al., 2007) 0.274 n/a Frequency (McCarthy et al., 2007) 0.141 n/a DSPROTO+ (McCarthy et al., 2007) 0.454 n/a Human agreement 0.702 0.716 Table 1: Compositionality detection task. Evaluation metric The evaluation was performed by calculating Spearman’s rank correlation scores6 between the averaged human ratings and the learned compositionality scores α(V O). Ensemble technique We also produced the result by employing an ensemble technique. More concretely, we used the averaged compositionality scores from the results of the BNC and Wikipedia data for the ensemble result. 5.2 Results and Discussion 5.2.1 Result Overview Table 1 shows our results and the state of the art. Our method outperforms the previous state of the art in all settings. The result denoted as Ensemble is the one that employs the ensemble technique, and achieves the strongest correlation with the human-annotated datasets. Even without the ensemble technique, our method performs better than all of the previous methods. Kiela and Clark (2013) used window-based cooccurrence vectors and improved their score using WordNet hypernyms. By contrast, our method does not rely on such external resources, and only needs parsed corpora. We should note that Kiela and Clark (2013) reported that their score did not improve when using parsed corpora. Our method also outperforms DSPROTO+, which used a small amount of the labeled data, while our method is fully unsupervised. We calculated confidence intervals (P < 0.05) using bootstrap resampling (Noreen, 1989). For example, for the results using the BNC-Wikipedia data, the intervals on MC’07 and VJ’05 are (0.455, 0.574) and (0.475, 0.579), respectively. These results show that our method significantly outperforms the previous state-of-the-art results. 6We used the Scipy 0.12.0 implementation in Python. 209 Phrase Gold standard (a) BNC (b) Wikipedia BNC-Wikipedia Ensemble ((a)+(b))×0.5 (A) buy car 6 0.78 0.71 0.80 0.74 own land 6 0.79 0.73 0.76 0.76 take toll 1.5 0.14 0.11 0.06 0.13 shed light 1 0.21 0.07 0.07 0.14 bear fruit 1 0.15 0.19 0.17 0.17 (B) make noise 6 0.37 0.33 0.30 0.35 have reason 5 0.26 0.39 0.33 0.33 (C) smoke cigarette 6 0.56 0.90 0.78 0.73 catch eye 1 0.48 0.14 0.17 0.31 Table 2: Examples of the compositionality scores. Figure 2: Trends of α(V O) during the training on the BNC data. 5.2.2 Analysis of Compositionality Scores Figure 2 shows how α(V O) changes for the seven phrases during the training on the BNC data. As shown in the figure, starting from 0.5, α(V O) for each phrase converges to its corresponding value. The differences in the trends indicate that our method can adaptively learn compositionality levels for the phrases. 
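The evaluation protocol above can be sketched as follows. The sketch assumes aligned arrays of model scores alpha(VO) and averaged human ratings, uses scipy.stats.spearmanr for the correlation (the SciPy implementation was used for the reported scores), and a simple percentile bootstrap over item pairs, which may differ in detail from the exact resampling behind the reported intervals.

import numpy as np
from scipy.stats import spearmanr

def evaluate(model_scores, human_ratings, n_boot=10000, seed=0):
    """Spearman's rho and a 95% bootstrap confidence interval."""
    s = np.asarray(model_scores)
    h = np.asarray(human_ratings)
    rho = spearmanr(s, h).correlation
    rng = np.random.RandomState(seed)
    resampled = []
    for _ in range(n_boot):
        idx = rng.randint(len(s), size=len(s))
        resampled.append(spearmanr(s[idx], h[idx]).correlation)
    low, high = np.percentile(resampled, [2.5, 97.5])
    return rho, (low, high)

# Ensemble: average the compositionality scores of the BNC and Wikipedia models
# before computing the correlation, e.g.
#   ensemble_scores = 0.5 * (scores_bnc + scores_wiki)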
Table 2 shows the learned compositionality scores for the three groups of the examples along with the gold-standard scores given by the annotators. The group (A) is considered to be consistent with the gold-standard scores, the group (B) is not, and the group (C) shows examples for which the difference between the compositionality scores of our results is large. Characteristics of light verbs The verbs “take”, “make”, and “have” are known as light verbs 7, and the scoring function tends to assign low scores to light verbs. In other words, our 7In Section 5.2.2 in Newton (2006), the term light verb is used to refer to verbs which can be used in combination with some other element where their contribution to the meaning of the whole construction is reduced in some way. Highest average scores Lowest average scores approve 0.83 bear 0.37 reject 0.72 play 0.38 discuss 0.71 have 0.38 visit 0.70 make 0.39 want 0.70 break 0.40 describe 0.70 take 0.40 involve 0.69 raise 0.41 own 0.68 reach 0.41 attend 0.68 gain 0.42 reflect 0.67 draw 0.42 Table 3: The 10 highest and lowest average compositionality scores with the corresponding verbs on the BNC data. method can recognize that the light verbs are frequently used to form idiomatic (i.e. noncompositional) phrases. To verify the assumption, we calculated the average compositionality score for each verb by averaging the compositionality scores paired with its candidate objects. Here we used 135 verbs which take more than 30 types of objects in the BNC data. Table 3 shows the 10 highest and lowest average scores with the corresponding verbs. We see that relatively low scores are assigned to the light verbs as well as other verbs which often form idiomatic phrases. As shown in the group (B) in Table 2, however, light verb phrases are not always non-compositional. Despite this, the learned function assigns low scores to compositional phrases formed by the light verbs. These results suggest that using a more flexible scoring function may further strengthen our method. Context dependence Both our method and the two datasets, VJ’05 and MC’07, assume that the compositionality score can be computed for each phrase with no contextual information. However, in general, the compositionality level of a phrase depends on its contextual information. For example, the meaning of the idiomatic phrase “bear 210 fruit” can be compositionaly interpreted as “to yield fruit” for a plant or tree. We manually inspected the BNC data to check whether the phrase “bear fruit” is used as the compositional meaning or the idiomatic meaning (“to yield results”). As a result, we have found that most of the usage was its idiomatic meaning. In the model training, our method is affected by the majority usage and fits the evaluation datasets where the phrase “bear fruit” is regarded as highly non-compositional. Incorporating contextual information into the compositionality scoring function is a promising direction of future work. 5.2.3 Effects of Ensemble We used the two different corpora for constructing the training data, and our method achieves the state-of-the-art results in all settings. To inspect the results on VJ’05, we calculated the correlation score between the outputs from our results of the BNC and Wikipedia data. The correlation score is 0.674 and that is, the two different corpora lead to reasonably consistent results, which indicates the robustness of our method. 
However, the correlation score is still much lower than perfect correlation; in other words, there are disagreements between the outputs learned with the corpora. The group (C) in Table 2 shows such two examples. In these cases, the ensemble technique is helpful in improving the results as shown in the examples. Another interesting observation in our results is that the result of the ensemble technique outperforms that of the BNC-Wikipedia data as shown in Table 1. This shows that separately using the training corpora of different nature and then performing the ensemble technique can yield better results. By contrast, many of the previous studies on embedding-based methods combine different corpora into a single dataset, or use multiple corpora just separately and compare them (Hashimoto and Tsuruoka, 2015; Muraoka et al., 2014; Pennington et al., 2014). It would be worth investigating whether the results in the previous work can be improved by ensemble techniques. 6 Evaluation on the Phrase Embeddings 6.1 Evaluation Settings Dataset Next, we evaluated the learned embeddings on the transitive verb disambiguation dataset GS’118 provided by Grefenstette and Sadrzadeh (2011). GS’11 consists of 200 pairs of transitive verbs and each verb pair takes the same subject and object. For example, the transitive verb “run” is known as a polysemous word and this task requires one to identify the meanings of “run” and “operate” as similar to each other when taking “people” as their subject and “company” as their object. In the same setting, however, the meanings of “run” and “move” are not similar to each other. Each pair has multiple human ratings indicating how similar the phrases of the pair are. Evaluation metric The evaluation was performed by calculating Spearman’s rank correlation scores between the human ratings and the cosine similarity scores of v(SV O) in Eq. (12). Following the previous studies, we used the goldstandard ratings in two ways: averaging the human ratings for each SVO tuple (GS’11a) and treating each human rating separately (GS’11b). Ensemble technique We used the same ensemble technique described in Section 5.1. In this task we produced two ensemble results: Ensemble A and Ensemble B. The former used the averaged cosine similarity from the results of the BNC and Wikipedia data, and the latter further incorporated the result of the BNC-Wikipedia data. Baselines We compared our adaptive joint learning method with two baseline methods. One is the method in Hashimoto and Tsuruoka (2015) and it is equivalent to fixing α(V O) to 1 in our method. The other is fixing α(V O) to 0.5 in our method, which serves as a baseline to evaluate how effective the proposed adaptive weighting method is. 6.2 Results and Discussion 6.2.1 Result Overview Table 4 shows our results and the state of the art, and our method outperforms almost all of the previous methods in both datasets. Again, the ensemble technique further improves the results, and overall, Ensemble B yields the best results. The scores in Hashimoto and Tsuruoka (2015), the baseline results with α(V O) = 1 in our method, have been the best to date. 
As shown in Table 4, our method outperforms the baseline results with α(V O) = 0.5 as well as those 8http://www.cs.ox.ac.uk/activities/ compdistmeaning/GS2011data.txt 211 Proposed method α(V O) = 1 α(V O) = 0.5 take toll put strain deplete division put strain place strain necessitate monitoring cause lack α(take toll) = 0.11 cause strain deplete pool befall army have affect create pollution exacerbate weakness exacerbate injury deplete field cause strain catch eye catch attention catch ear grab attention grab attention catch heart make impression α(catch eye) = 0.14 make impression catch e-mail catch attention lift spirit catch imagination become legend become favorite catch attention inspire playing bear fruit accentuate effect bear herb increase richness enhance beauty bear grain reduce biodiversity α(bear fruit) = 0.19 enhance atmosphere bear spore fuel boom rejuvenate earth bear variety enhance atmosphere enhance habitat bear seed worsen violence make noise attack intruder make sound burn can attack trespasser do beating kill monster α(make noise) = 0.33 avoid predator get bounce wash machine attack diver get pulse lightn flash attack pedestrian lose bit cook raman buy car buy bike buy truck buy bike buy machine buy bike buy instrument α(buy car) = 0.71 buy motorcycle buy automobile buy chip buy automobile buy motorcycle buy scooter purchase coins buy vehicle buy motorcycle Table 5: Examples of the closest neighbors in the learned embedding space. All of the results were obtained by using the Wikipedia data, and the values of α(V O) are the same as those in Table 2. Method GS’11a GS’11b Proposed method (Wikipedia) 0.598 0.461 Proposed method (BNC) 0.595 0.463 Proposed method (BNC-Wikipedia) 0.623 0.483 Proposed method (Ensemble A) 0.661 0.511 Proposed method (Ensemble B) 0.680 0.524 α(V O) = 0.5 (Wikipedia) 0.491 0.386 α(V O) = 0.5 (BNC) 0.599 0.462 α(V O) = 0.5 (BNC-Wikipedia) 0.610 0.477 α(V O) = 0.5 (Ensemble A) 0.612 0.474 α(V O) = 0.5 (Ensemble B) 0.638 0.495 α(V O) = 1 (Wikipedia) 0.576 n/a α(V O) = 1 (BNC) 0.574 n/a Milajevs et al. (2014) 0.456 n/a Polajnar et al. (2014) n/a 0.370 Hashimoto et al. (2014) 0.420 0.340 Polajnar et al. (2015) n/a 0.330 Grefenstette and Sadrzadeh (2011) n/a 0.210 Human agreement 0.750 0.620 Table 4: Transitive verb disambiguation task. The results for α(V O) = 1 are reported in Hashimoto and Tsuruoka (2015). with α(V O) = 1. We see that our method improves the baseline scores by adaptively combining compositional and non-compositional embeddings. Along with the results in Table 1, these results show that our method allows us to improve the composition function by jointly learning noncompositional embeddings and the scoring function for compositionality detection. 6.2.2 Analysis of the Learned Embeddings We inspected the effects of adaptively weighting the compositional and non-compositional embeddings. Table 5 shows the five closest neighbor phrases in terms of the cosine similarity for the three idiomatic phrases “take toll”, “catch eye”, and “bear fruit” as well as the two non-idiomatic phrases “make noise” and “buy car”. The examples trained with the Wikipedia data are shown for our method and the two baselines, i.e., α(V O) = 1 and α(V O) = 0.5. As shown in Table 2, the compositionality levels of the first three phrases are low and their non-compositional embeddings are dominantly used to represent their meaning. One observation with α(V O) = 1 is that head words (i.e. 
verbs) are emphasized in the shown examples except “take toll” and “make noise”. As with other embedding-based methods, the compositional embeddings are highly affected by their component words. As a result, the phrases consisting of the same verb and the similar objects are often listed as the closest neighbors. By contrast, our method flexibly allows us to adaptively omit the information about the component words. Therefore, our method puts more weight on capturing the idiomatic aspects of the example phrases by 212 adaptively using the non-compositional embeddings. The results of α(V O) = 0.5 are similar to those with our proposed method, but we can see some differences. For example, the phrase list for “make noise” of our proposed method captures offensive meanings, whereas that of α(V O) = 0.5 is somewhat ambiguous. As another example, the phrase lists for “buy car” show that our method better captures the semantic similarity between the objects than α(V O) = 0.5. This is achieved by adaptively assigning a relatively large compositionality score (0.71) to the phrase to use the information about the object “car”. We should note that “make noise” is highly compositional but our method outputs α(make noise) = 0.33, and the phrase list of α(V O) = 1 is the most appropriate in this case. Improving the compositionality detection function should thus further improve the learned embeddings. 7 Related Work Learning embeddings of words and phrases has been widely studied, and the phrase embeddings have proven effective in many language processing tasks, such as machine translation (Cho et al., 2014; Sutskever et al., 2014), sentiment analysis and semantic textual similarity (Tai et al., 2015). Most of the phrase embeddings are constructed by word-level information via various kinds of composition functions like long short-term memory (Hochreiter and Schmidhuber, 1997) recurrent neural networks. Such composition functions should be powerful enough to efficiently encode information about all the words into the phrase embeddings. By simultaneously considering the compositionality of the phrases, our method would be helpful in saving the composition models from having to be powerful enough to perfectly encode the non-compositional phrases. As a first step towards this purpose, in this paper we have shown the effectiveness of our method on the task of learning verb phrase embeddings. Many studies have focused on detecting the compositionality of a variety of phrases (Lin, 1999), including the ones on verb phrases (Diab and Bhutada, 2009; McCarthy et al., 2003) and compound nouns (Farahmand et al., 2015; Reddy et al., 2011). Compared to statistical feature-based methods (McCarthy et al., 2007; Venkatapathy and Joshi, 2005), recent methods use word and phrase embeddings (Kiela and Clark, 2013; Yazdani et al., 2015). The embedding-based methods assume that word embeddings are given in advance and as a post-processing step, learn or simply employ composition functions to compute phrase embeddings. In other words, there is no distinction between compositional and noncompositional phrases. Yazdani et al. (2015) further proposed to incorporate latent annotations (binary labels) for the compositionality of the phrases. However, binary judgments cannot consider numerical scores of the compositionality. By contrast, our method adaptively weights the compositional and non-compositional embeddings using the compositionality scoring function. 
8 Conclusion and Future Work We have presented a method for adaptively learning compositional and non-compositional phrase embeddings by jointly detecting compositionality levels of phrases. Our method achieves the state of the art on a compositionality detection task of verb-object pairs, and also improves upon the previous state-of-the-art method on a transitive verb disambiguation task. In future work, we will apply our method to other kinds of phrases and tasks. Acknowledgments We thank the anonymous reviewers for their helpful comments and suggestions. This work was supported by CREST, JST. References Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2014. Tailoring Continuous Word Representations for Dependency Parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 809–815. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder– Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724–1734. Kenneth Church and Patrick Hanks. 1990. Word Association Norms, Mutual Information and Lexicography. Computational Linguistics, 19(2):263–312. 213 Mona Diab and Pravin Bhutada. 2009. Verb Noun Construction MWE Token Classification. In Proceedings of the Workshop on Multiword Expressions: Identification, Interpretation, Disambiguation and Applications, pages 17–22. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research, 12:2121–2159. Meghdad Farahmand, Aaron Smith, and Joakim Nivre. 2015. A Multiword Expression Data Set: Annotating Non-Compositionality and Conventionalization for English Noun Compounds. In Proceedings of the 11th Workshop on Multiword Expressions, pages 29–33. Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011. Experimental Support for a Categorical Compositional Distributional Model of Meaning. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1394– 1404. Kazuma Hashimoto and Yoshimasa Tsuruoka. 2015. Learning Embeddings for Transitive Verb Disambiguation by Implicit Tensor Factorization. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 1– 11. Kazuma Hashimoto, Pontus Stenetorp, Makoto Miwa, and Yoshimasa Tsuruoka. 2014. Jointly Learning Word Representations and Composition Functions Using Predicate-Argument Structures. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1544–1555. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation, 9(8):1735–1780. Dimitri Kartsaklis, Mehrnoosh Sadrzadeh, and Stephen Pulman. 2012. A Unified Sentence Space for Categorical Distributional-Compositional Semantics: Theory and Experiments. In Proceedings of the 24th International Conference on Computational Linguistics, pages 549–558. Dimitri Kartsaklis, Nal Kalchbrenner, and Mehrnoosh Sadrzadeh. 2014. Resolving Lexical Ambiguity in Tensor Regression Models of Meaning. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 212–217. Douwe Kiela and Stephen Clark. 2013. 
Detecting Compositionality of Multi-Word Expressions using Nearest Neighbours in Vector Space Models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1427–1432. Geoffrey Leech. 1992. 100 Million Words of English: the British National Corpus. Language Research, 28(1):1–13. Omer Levy and Yoav Goldberg. 2014. DependencyBased Word Embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 302–308. Dekang Lin. 1999. Automatic Identification of Noncompositional Phrases. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 317–324. Diana McCarthy, Bill Keller, and John Carroll. 2003. Detecting a Continuum of Compositionality in Phrasal Verbs. In Proceedings of the ACL 2003 Workshop on Multiword Expressions: Analysis, Acquisition and Treatment, pages 73–80. Diana McCarthy, Sriram Venkatapathy, and Aravind Joshi. 2007. Detecting Compositionality of VerbObject Combinations using Selectional Preferences. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 369–379. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed Representations of Words and Phrases and their Compositionality. In Advances in Neural Information Processing Systems 26, pages 3111–3119. Dmitrijs Milajevs, Dimitri Kartsaklis, Mehrnoosh Sadrzadeh, and Matthew Purver. 2014. Evaluating Neural Word Representations in Tensor-Based Compositional Settings. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 708–719. Jeff Mitchell and Mirella Lapata. 2008. Vector-based Models of Semantic Composition. In Proceedings of 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 236–244. Yusuke Miyao and Jun’ichi Tsujii. 2008. Feature Forest Models for Probabilistic HPSG Parsing. Computational Linguistics, 34(1):35–80, March. Masayasu Muraoka, Sonse Shimaoka, Kazeto Yamamoto, Yotaro Watanabe, Naoaki Okazaki, and Kentaro Inui. 2014. Finding The Best Model Among Representative Compositional Models. In Proceedings of the 28th Pacific Asia Conference on Language, Information, and Computation, pages 65–74. Mark Newton. 2006. Basic English Syntax with Exercises. B¨olcs´esz Konzorcium. Eric W. Noreen. 1989. Computer-Intensive Methods for Testing Hypotheses: An Introduction. WileyInterscience. 214 Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Nghia The Pham, Germ´an Kruszewski, Angeliki Lazaridou, and Marco Baroni. 2015. Jointly optimizing word representations for lexical and sentential tasks with the C-PHRASE model. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 971–981. Tamara Polajnar, Laura Rimell, and Stephen Clark. 2014. Using Sentence Plausibility to Learn the Semantics of Transitive Verbs. In Proceedings of Workshop on Learning Semantics at the 2014 Conference on Neural Information Processing Systems. Tamara Polajnar, Laura Rimell, and Stephen Clark. 2015. 
An Exploration of Discourse-Based Sentence Spaces for Compositional Distributional Semantics. In Proceedings of the First Workshop on Linking Computational Models of Lexical, Sentential and Discourse-level Semantics, pages 1–11. Siva Reddy, Diana McCarthy, and Suresh Manandhar. 2011. An Empirical Study on Compositionality in Compound Nouns. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 210–218. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic Compositionality through Recursive Matrix-Vector Spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1201–1211. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to Sequence Learning with Neural Networks. In Advances in Neural Information Processing Systems 27, pages 3104–3112. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1556–1566. Tim Van de Cruys. 2014. A Neural Network Approach to Selectional Preference Acquisition. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 26–35. Sriram Venkatapathy and Aravind Joshi. 2005. Measuring the Relative Compositionality of Verb-Noun (V-N) Collocations by Integrating Features. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 899–906. Majid Yazdani, Meghdad Farahmand, and James Henderson. 2015. Learning Semantic Composition to Detect Non-compositionality of Multiword Expressions. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1733–1742. 215
Neural Relation Extraction with Selective Attention over Instances Yankai Lin1, Shiqi Shen1, Zhiyuan Liu1,2∗, Huanbo Luan1, Maosong Sun1,2 1 Department of Computer Science and Technology, State Key Lab on Intelligent Technology and Systems, National Lab for Information Science and Technology, Tsinghua University, Beijing, China 2 Jiangsu Collaborative Innovation Center for Language Competence, Jiangsu, China Abstract Distant supervised relation extraction has been widely used to find novel relational facts from text. However, distant supervision inevitably accompanies with the wrong labelling problem, and these noisy data will substantially hurt the performance of relation extraction. To alleviate this issue, we propose a sentence-level attention-based model for relation extraction. In this model, we employ convolutional neural networks to embed the semantics of sentences. Afterwards, we build sentence-level attention over multiple instances, which is expected to dynamically reduce the weights of those noisy instances. Experimental results on real-world datasets show that, our model can make full use of all informative sentences and effectively reduce the influence of wrong labelled instances. Our model achieves significant and consistent improvements on relation extraction as compared with baselines. The source code of this paper can be obtained from https: //github.com/thunlp/NRE. 1 Introduction In recent years, various large-scale knowledge bases (KBs) such as Freebase (Bollacker et al., 2008), DBpedia (Auer et al., 2007) and YAGO (Suchanek et al., 2007) have been built and widely used in many natural language processing (NLP) tasks, including web search and question answering. These KBs mostly compose of relational facts with triple format, e.g., (Microsoft, founder, Bill Gates). Although existing KBs contain a ∗ Corresponding author: Zhiyuan Liu ([email protected]). massive amount of facts, they are still far from complete compared to the infinite real-world facts. To enrich KBs, many efforts have been invested in automatically finding unknown relational facts. Therefore, relation extraction (RE), the process of generating relational data from plain text, is a crucial task in NLP. Most existing supervised RE systems require a large amount of labelled relation-specific training data, which is very time consuming and labor intensive. (Mintz et al., 2009) proposes distant supervision to automatically generate training data via aligning KBs and texts. They assume that if two entities have a relation in KBs, then all sentences that contain these two entities will express this relation. For example, (Microsoft, founder, Bill Gates) is a relational fact in KB. Distant supervision will regard all sentences that contain these two entities as active instances for relation founder. Although distant supervision is an effective strategy to automatically label training data, it always suffers from wrong labelling problem. For example, the sentence “Bill Gates ’s turn to philanthropy was linked to the antitrust problems Microsoft had in the U.S. and the European union.” does not express the relation founder but will still be regarded as an active instance. Hence, (Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012) adopt multi-instance learning to alleviate the wrong labelling problem. The main weakness of these conventional methods is that most features are explicitly derived from NLP tools such as POS tagging and the errors generated by NLP tools will propagate in these methods. 
Some recent works (Socher et al., 2012; Zeng et al., 2014; dos Santos et al., 2015) attempt to use deep neural networks in relation classification without handcrafted features. These methods build classifier based on sentence-level annotated data, which cannot be applied in large-scale x1 x2 x3 xn CNN CNN CNN CNN x1 x2 x3 xn s α1 α2 α3 αn Figure 1: The architecture of sentence-level attention-based CNN, where xi and xi indicate the original sentence for an entity pair and its corresponding sentence representation, αi is the weight given by sentence-level attention, and s indicates the representation of the sentence set. KBs due to the lack of human-annotated training data. Therefore, (Zeng et al., 2015) incorporates multi-instance learning with neural network model, which can build relation extractor based on distant supervision data. Although the method achieves significant improvement in relation extraction, it is still far from satisfactory. The method assumes that at least one sentence that mentions these two entities will express their relation, and only selects the most likely sentence for each entity pair in training and prediction. It’s apparent that the method will lose a large amount of rich information containing in neglected sentences. In this paper, we propose a sentence-level attention-based convolutional neural network (CNN) for distant supervised relation extraction. As illustrated in Fig. 1, we employ a CNN to embed the semantics of sentences. Afterwards, to utilize all informative sentences, we represent the relation as semantic composition of sentence embeddings. To address the wrong labelling problem, we build sentence-level attention over multiple instances, which is expected to dynamically reduce the weights of those noisy instances. Finally, we extract relation with the relation vector weighted by sentence-level attention. We evaluate our model on a real-world dataset in the task of relation extraction. The experimental results show that our model achieves significant and consistent improvements in relation extraction as compared with the state-of-the-art methods. The contributions of this paper can be summarized as follows: • As compared to existing neural relation extraction model, our model can make full use of all informative sentences of each entity pair. • To address the wrong labelling problem in distant supervision, we propose selective attention to de-emphasize those noisy instances. • In the experiments, we show that selective attention is beneficial to two kinds of CNN models in the task of relation extraction. 2 Related Work Relation extraction is one of the most important tasks in NLP. Many efforts have been invested in relation extraction, especially in supervised relation extraction. Most of these methods need a great deal of annotated data, which is time consuming and labor intensive. To address this issue, (Mintz et al., 2009) aligns plain text with Freebase by distant supervision. However, distant supervision inevitably accompanies with the wrong labelling problem. To alleviate the wrong labelling problem, (Riedel et al., 2010) models distant supervision for relation extraction as a multiinstance single-label problem, and (Hoffmann et al., 2011; Surdeanu et al., 2012) adopt multiinstance multi-label learning in relation extraction. Multi-instance learning was originally proposed to address the issue of ambiguously-labelled training data when predicting the activity of drugs (Dietterich et al., 1997). 
Multi-instance learning considers the reliability of the labels for each instance. (Bunescu and Mooney, 2007) connects weak supervision with multi-instance learning and extends it to relation extraction. But all the feature-based methods depend strongly on the quality of the features generated by NLP tools, which will suffer from error propagation problem. Recently, deep learning (Bengio, 2009) has been widely used for various areas, including computer vision, speech recognition and so on. It has also been successfully applied to different NLP tasks such as part-of-speech tagging (Collobert et al., 2011), sentiment analysis (dos Santos and Gatti, 2014), parsing (Socher et al., 2013), and machine translation (Sutskever et al., 2014). Due to the recent success in deep learning, many researchers have investigated the possibility of using neural networks to automatically learn features for relation extraction. (Socher et al., 2012) uses a recursive neural network in relation extraction. They parse the sentences first and then represent each node in the parsing tree as a vector. Moreover, (Zeng et al., 2014; dos Santos et al., 2015) adopt an end-to-end convolutional neural network for relation extraction. Besides, (Xie et al., 2016) attempts to incorporate the text information of entities for relation extraction. Although these methods achieve great success, they still extract relations on sentence-level and suffer from a lack of sufficient training data. In addition, the multi-instance learning strategy of conventional methods cannot be easily applied in neural network models. Therefore, (Zeng et al., 2015) combines at-least-one multi-instance learning with neural network model to extract relations on distant supervision data. However, they assume that only one sentence is active for each entity pair. Hence, it will lose a large amount of rich information containing in those neglected sentences. Different from their methods, we propose sentencelevel attention over multiple instances, which can utilize all informative sentences. The attention-based models have attracted a lot of interests of researchers recently. The selectivity of attention-based models allows them to learn alignments between different modalities. It has been applied to various areas such as image classification (Mnih et al., 2014), speech recognition (Chorowski et al., 2014), image caption generation (Xu et al., 2015) and machine translation (Bahdanau et al., 2014). To the best of our knowledge, this is the first effort to adopt attention-based model in distant supervised relation extraction. 3 Methodology Given a set of sentences {x1, x2, · · · , xn} and two corresponding entities, our model measures the probability of each relation r. In this section, we will introduce our model in two main parts: • Sentence Encoder. Given a sentence x and two target entities, a convolutional neutral network (CNN) is used to construct a distributed representation x of the sentence. • Selective Attention over Instances. When the distributed vector representations of all sentences are learnt, we use sentence-level attention to select the sentences which really express the corresponding relation. 3.1 Sentence Encoder Bill_Gates is the founder of Microsoft. Sentence Vector Representaion word position Convolution Layer Max Pooling x W * + b Non-linear Layer Figure 2: The architecture of CNN/PCNN used for sentence encoder. As shown in Fig. 2, we transform the sentence x into its distributed representation x by a CNN. 
First, words in the sentence are transformed into dense real-valued feature vectors. Next, convolutional layer, max-pooling layer and non-linear transformation layer are used to construct a distributed representation of the sentence, i.e., x. 3.1.1 Input Representation The inputs of the CNN are raw words of the sentence x. We first transform words into lowdimensional vectors. Here, each input word is transformed into a vector via word embedding matrix. In addition, to specify the position of each entity pair, we also use position embeddings for all words in the sentence. Word Embeddings. Word embeddings aim to transform words into distributed representations which capture syntactic and semantic meanings of the words. Given a sentence x consisting of m words x = {w1, w2, · · · , wm}, every word wi is represented by a real-valued vector. Word representations are encoded by column vectors in an embedding matrix V ∈Rda×|V |where V is a fixed-sized vocabulary. Position Embeddings. In the task of relation extraction, the words close to the target entities are usually informative to determine the relation between entities. Similar to (Zeng et al., 2014), we use position embeddings specified by entity pairs. It can help the CNN to keep track of how close each word is to head or tail entities. It is defined as the combination of the relative distances from the current word to head or tail entities. For example, in the sentence “Bill Gates is the founder of Microsoft.”, the relative distance from the word “founder” to head entity Bill Gates is 3 and tail entity Microsoft is 2. In the example shown in Fig. 2, it is assumed that the dimension da of the word embedding is 3 and the dimension db of the position embedding is 1. Finally, we concatenate the word embeddings and position embeddings of all words and denote it as a vector sequence w = {w1, w2, · · · , wm}, where wi ∈Rd(d = da + db × 2). 3.1.2 Convolution, Max-pooling and Non-linear Layers In relation extraction, the main challenges are that the length of the sentences is variable and the important information can appear in any area of the sentences. Hence, we should utilize all local features and perform relation prediction globally. Here, we use a convolutional layer to merge all these features. The convolutional layer first extracts local features with a sliding window of length l over the sentence. In the example shown in Fig. 2, we assume that the length of the sliding window l is 3. Then, it combines all local features via a max-pooling operation to obtain a fixed-sized vector for the input sentence. Here, convolution is defined as an operation between a vector sequence w and a convolution matrix W ∈Rdc×(l×d), where dc is the sentence embedding size. Let us define the vector qi ∈Rl×d as the concatenation of a sequence of w word embeddings within the i-th window: qi = wi−l+1:i (1 ≤i ≤m + l −1). (1) Since the window may be outside of the sentence boundaries when it slides near the boundary, we set special padding tokens for the sentence. It means that we regard all out-of-range input vectors wi(i < 1 or i > m) as zero vector. Hence, the i-th filter of convolutional layer is computed as: pi = [Wq + b]i (2) where b is bias vector. And the i-th element of the vector x ∈Rdc as follows: [x]i = max(pi), (3) Further, PCNN (Zeng et al., 2015), which is a variation of CNN, adopts piecewise max pooling in relation extraction. Each convolutional filter pi is divided into three segments (pi1, pi2, pi3) by head and tail entities. 
And the max pooling procedure is performed in three segments separately, which is defined as: [x]ij = max(pij), (4) And [x]i is set as the concatenation of [x]ij. Finally, we apply a non-linear function at the output, such as the hyperbolic tangent. 3.2 Selective Attention over Instances Suppose there is a set S contains n sentences for entity pair (head, tail), i.e., S = {x1, x2, · · · , xn}. To exploit the information of all sentences, our model represents the set S with a real-valued vector s when predicting relation r. It is straightforward that the representation of the set S depends on all sentences’ representations x1, x2, · · · , xn. Each sentence representation xi contains information about whether entity pair (head, tail) contains relation r for input sentence xi. The set vector s is, then, computed as a weighted sum of these sentence vector xi: s = X i αixi, (5) where αi is the weight of each sentence vector xi. In this paper, we define αi in two ways: Average: We assume that all sentences in the set X have the same contribution to the representation of the set. It means the embedding of the set S is the average of all the sentence vectors: s = X i 1 nxi, (6) It’s a naive baseline of our selective attention. Selective Attention: However, the wrong labelling problem inevitably occurs. Thus, if we regard each sentence equally, the wrong labelling sentences will bring in massive of noise during training and testing. Hence, we use a selective attention to de-emphasize the noisy sentence. Hence, αi is further defined as: αi = exp(ei) P k exp(ek), (7) where ei is referred as a query-based function which scores how well the input sentence xi and the predict relation r matches. We select the bilinear form which achieves best performance in different alternatives: ei = xiAr, (8) where A is a weighted diagonal matrix, and r is the query vector associated with relation r which indicates the representation of relation r. Finally, we define the conditional probability p(r|S, θ) through a softmax layer as follows: p(r|S, θ) = exp(or) Pnr k=1 exp(ok), (9) where nr is the total number of relations and o is the final output of the neural network which corresponds to the scores associated to all relation types, which is defined as follows: o = Ms + d, (10) where d ∈Rnr is a bias vector and M is the representation matrix of relations. (Zeng et al., 2015) follows the assumption that at least one mention of the entity pair will reflect their relation, and only uses the sentence with the highest probability in each set for training. Hence, the method which they adopted for multi-instance learning can be regarded as a special case as our selective attention when the weight of the sentence with the highest probability is set to 1 and others to 0. 3.3 Optimization and Implementation Details Here we introduce the learning and optimization details of our model. We define the objective function using cross-entropy at the set level as follows: J(θ) = s X i=1 log p(ri|Si, θ), (11) where s indicates the number of sentence sets and θ indicates all parameters of our model. To solve the optimization problem, we adopt stochastic gradient descent (SGD) to minimize the objective function. For learning, we iterate by randomly selecting a mini-batch from the training set until converge. In the implementation, we employ dropout (Srivastava et al., 2014) on the output layer to prevent overfitting. 
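To make Equations (5)-(10) concrete, the following NumPy sketch computes the attention weights, the set representation s and the relation scores o for one sentence set. The dimensions and random toy inputs are assumptions for illustration; this is not the authors' released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d_c, n_r, n_sent = 230, 53, 4            # sentence embedding size, number of relations, sentences in the set
X = rng.standard_normal((n_sent, d_c))   # x_1..x_n: sentence representations from the CNN/PCNN encoder
A = np.diag(rng.standard_normal(d_c))    # weighted diagonal matrix A of Eq. (8)
R = rng.standard_normal((n_r, d_c))      # query vector r for each relation
M = rng.standard_normal((n_r, d_c))      # relation representation matrix M of Eq. (10)
d = np.zeros(n_r)                        # bias vector d

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

def set_probability(X, r_idx):
    """Attention-weighted set representation and p(r|S), Eqs. (5)-(10)."""
    e = X @ A @ R[r_idx]                 # e_i = x_i A r                      (Eq. 8)
    alpha = softmax(e)                   # attention weights alpha_i          (Eq. 7)
    # alpha = np.full(n_sent, 1.0 / n_sent)  # <- the AVE baseline of Eq. (6)
    s = alpha @ X                        # s = sum_i alpha_i x_i              (Eq. 5)
    o = M @ s + d                        # scores over all relations          (Eq. 10)
    return softmax(o)                    # conditional probabilities p(r|S)   (Eq. 9)

# During training the query index r_idx is the gold relation of the set; here it is arbitrary.
probs = set_probability(X, r_idx=7)
print(probs.shape, float(probs[7]))
```

Uncommenting the uniform-weight line recovers the AVE baseline, and setting the weight of the highest-scoring sentence to 1 and the rest to 0 recovers the at-least-one strategy of (Zeng et al., 2015) discussed above.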
The dropout layer is defined as an element-wise multiplication with a a vector h of Bernoulli random variables with probability p. Then equation (10) is rewritten as: o = M(s ◦h) + d. (12) In the test phase, the learnt set representations are scaled by p, i.e., ˆsi = psi. And the scaled set vector ˆri is finally used to predict relations. 4 Experiments Our experiments are intended to demonstrate that our neural models with sentence-level selective attention can alleviate the wrong labelling problem and take full advantage of informative sentences for distant supervised relation extraction. To this end, we first introduce the dataset and evaluation metrics used in the experiments. Next, we use cross-validation to determine the parameters of our model. And then we evaluate the effects of our selective attention and show its performance on the data with different set size. Finally, we compare the performance of our method to several state-of-the-art feature-based methods. 4.1 Dataset and Evaluation Metrics We evaluate our model on a widely used dataset1 which is developed by (Riedel et al., 2010) and has also been used by (Hoffmann et al., 2011; Surdeanu et al., 2012). This dataset was generated by aligning Freebase relations with the New York Times corpus (NYT). Entity mentions are found using the Stanford named entity tagger (Finkel et al., 2005), and are further matched to the names of Freebase entities. The Freebase relations are divided into two parts, one for training and one for testing. It aligns the the sentences from the corpus of the years 2005-2006 and regards them as training instances. And the testing instances are the aligned sentences from 2007. There are 53 possible relationships including a special relation NA which indicates there is no relation between head and tail entities. The training data contains 522,611 sentences, 281,270 entity pairs and 18,252 relational facts. The testing set contains 172,448 sentences, 96,678 entity pairs and 1,950 relational facts. Similar to previous work (Mintz et al., 2009), we evaluate our model in the held-out evaluation. It evaluates our model by comparing the relation 1http://iesl.cs.umass.edu/riedel/ecml/ facts discovered from the test articles with those in Freebase. It assumes that the testing systems have similar performances in relation facts inside and outside Freebase. Hence, the held-out evaluation provides an approximate measure of precision without time consumed human evaluation. We report both the aggregate curves precision/recall curves and Precision@N (P@N) in our experiments. 4.2 Experimental Settings 4.2.1 Word Embeddings In this paper, we use the word2vec tool 2 to train the word embeddings on NYT corpus. We keep the words which appear more than 100 times in the corpus as vocabulary. Besides, we concatenate the words of an entity when it has multiple words. 4.2.2 Parameter Settings Following previous work, we tune our models using three-fold validation on the training set. We use a grid search to determine the optimal parameters and select learning rate λ for SGD among {0.1, 0.01, 0.001, 0.0001}, the sliding window size l ∈{1, 2, 3, · · · , 8}, the sentence embedding size n ∈{50, 60, · · · , 300}, and the batch size B among {40, 160, 640, 1280}. For other parameters, since they have little effect on the results, we follow the settings used in (Zeng et al., 2014). For training, we set the iteration number over all the training data as 25. In Table 1 we show all parameters used in the experiments. 
Table 1: Parameter settings
Window size l: 3
Sentence embedding size dc: 230
Word dimension da: 50
Position dimension db: 5
Batch size B: 160
Learning rate λ: 0.01
Dropout probability p: 0.5
4.3 Effect of Sentence-level Selective Attention
To demonstrate the effect of sentence-level selective attention, we empirically compare different methods through held-out evaluation. We select the CNN model proposed in (Zeng et al., 2014) and the PCNN model proposed in (Zeng et al., 2015) as our sentence encoders and implement them ourselves, achieving results comparable to those reported by the authors. We then compare the performance of the two kinds of CNN combined with sentence-level attention (ATT), with its naive version (AVE), which represents each sentence set as the average vector of the sentences inside the set, and with the at-least-one multi-instance learning (ONE) used in (Zeng et al., 2015).
2 https://code.google.com/p/word2vec/
Figure 3: Top: aggregate precision/recall curves of CNN, CNN+ONE, CNN+AVE and CNN+ATT. Bottom: aggregate precision/recall curves of PCNN, PCNN+ONE, PCNN+AVE and PCNN+ATT.
From Fig. 3, we have the following observations:
(1) For both CNN and PCNN, the ONE method outperforms the plain CNN/PCNN. The reason is that the original distant supervision training data contains a lot of noise, and the noisy data damages the performance of relation extraction.
(2) For both CNN and PCNN, the AVE method is also useful for relation extraction compared to plain CNN/PCNN. It indicates that considering more sentences is beneficial, since the noise can be reduced by mutual complementation of information.
(3) For both CNN and PCNN, the AVE method performs similarly to the ONE method. It indicates that, although the AVE method brings in information from more sentences, it treats every sentence equally and therefore also brings in noise from the wrongly labelled sentences, which can hurt the performance of relation extraction.
(4) For both CNN and PCNN, the ATT method achieves the highest precision over the entire range of recall compared to the other methods, including the AVE method. It indicates that the proposed selective attention is beneficial: it can effectively filter out meaningless sentences and alleviate the wrong labelling problem in distant supervised relation extraction.
4.4 Effect of Sentence Number
In the original testing data set, there are 74,857 entity pairs that correspond to only one sentence, nearly 3/4 of all entity pairs. Since the superiority of our selective attention lies in entity pairs with multiple sentences, we compare the performance of CNN/PCNN+ONE, CNN/PCNN+AVE and CNN/PCNN+ATT on the entity pairs which have more than one sentence. We then examine these three methods in three test settings:
• One: For each testing entity pair, we randomly select one sentence and use this sentence to predict the relation.
• Two: For each testing entity pair, we randomly select two sentences and perform relation extraction.
• All: We use all sentences of each entity pair for relation extraction.
Note that we use all the sentences in training. We report P@100, P@200, P@300 and their mean for each model in the held-out evaluation. Table 2 shows the P@N for the compared models in the three test settings.
Table 2: P@N (%) for relation extraction on entity pairs with different numbers of sentences. Each cell block gives P@100 / P@200 / P@300 / Mean for the One, Two and All test settings.
CNN+ONE: One 68.3 / 60.7 / 53.8 / 60.9; Two 70.3 / 62.7 / 55.8 / 62.9; All 67.3 / 64.7 / 58.1 / 63.4
CNN+AVE: One 75.2 / 67.2 / 58.8 / 67.1; Two 68.3 / 63.2 / 60.5 / 64.0; All 64.4 / 60.2 / 60.1 / 60.4
CNN+ATT: One 76.2 / 65.2 / 60.8 / 67.4; Two 76.2 / 65.7 / 62.1 / 68.0; All 76.2 / 68.6 / 59.8 / 68.2
PCNN+ONE: One 73.3 / 64.8 / 56.8 / 65.0; Two 70.3 / 67.2 / 63.1 / 66.9; All 72.3 / 69.7 / 64.1 / 68.7
PCNN+AVE: One 71.3 / 63.7 / 57.8 / 64.3; Two 73.3 / 65.2 / 62.1 / 66.9; All 73.3 / 66.7 / 62.8 / 67.6
PCNN+ATT: One 73.3 / 69.2 / 60.8 / 67.8; Two 77.2 / 71.6 / 66.1 / 71.6; All 76.2 / 73.1 / 67.4 / 72.2
From the table, we can see that:
(1) For both CNN and PCNN, the ATT method achieves the best performance in all test settings. This demonstrates the effectiveness of sentence-level selective attention for multi-instance learning.
(2) For both CNN and PCNN, the AVE method is comparable to the ATT method in the One test setting. However, when the number of testing sentences per entity pair grows, the performance of the AVE method shows almost no improvement; it even drops gradually at P@100 and P@200 as the sentence number increases. The reason is that, since AVE treats each sentence equally, the noise contained in sentences that do not express any relation negatively influences the performance of relation extraction.
(3) CNN+AVE and CNN+ATT achieve 5% to 8% improvements over CNN+ONE in the One test setting. Since each entity pair has only one sentence in this test setting, the only difference between these methods comes from training. Hence, it shows that utilizing all sentences brings in more information, although it may also bring in some extra noise.
(4) For both CNN and PCNN, the ATT method outperforms the other two baselines by over 5% and 9% in the Two and All test settings. It indicates that, by taking more useful information into account, the relational facts that CNN+ATT ranks higher are more reliable and beneficial to relation extraction.
4.5 Comparison with Feature-based Approaches
To evaluate the proposed method, we select the following three feature-based methods for comparison through held-out evaluation: Mintz (Mintz et al., 2009) is a traditional distant supervised model. MultiR (Hoffmann et al., 2011) proposes a probabilistic graphical model of multi-instance learning which handles overlapping relations. MIML (Surdeanu et al., 2012) jointly models both multiple instances and multiple relations. We implement them with the source code released by the authors.
Figure 4: Performance comparison of the proposed model and traditional methods.
Fig. 4 shows the precision/recall curves for each method. We can observe that:
(1) CNN/PCNN+ATT significantly outperforms all feature-based methods over the entire range of recall. When the recall is greater than 0.1, the performance of the feature-based methods drops quickly. In contrast, our model maintains a reasonable precision until the recall approximately reaches 0.3. This demonstrates that human-designed features cannot concisely express the semantic meaning of the sentences, and the inevitable errors brought by NLP tools hurt the performance of relation extraction. In contrast, CNN/PCNN+ATT, which learns the representation of each sentence automatically, can express each sentence well.
(2) PCNN+ATT performs much better than CNN+ATT over the entire range of recall.
It means that the selective attention considers the global information of all sentences except the information inside each sentence. Hence, the performance of our model can be further improved if we have a better sentence encoder. 4.6 Case Study Table 3 shows two examples of selective attention from the testing data. For each relation, we show the corresponding sentences with highest and lowest attention weight respectively. And we highlight the entity pairs with bold formatting. From the table we find that: The former example is related to the relation employer of. The sentence with low attention weight does not express the relation between two entities, while the high one shows that Mel Karmazin is the chief executive of Sirius Satellite Radio. The later example is related to the relation place of birth. The sentence with low attention weight expresses where Ernst Haefliger is died in, while the high one expresses where he is born in. Table 3: Some examples of selective attention in NYT corpus Relation employer of Low When Howard Stern was preparing to take his talk show to Sirius Satellite Radio, following his former boss, Mel Karmazin, Mr. Hollander argued that ... High Mel Karmazin, the chief executive of Sirius Satellite Radio, made a lot of phone calls ... Relation place of birth Low Ernst Haefliger, a Swiss tenor who ... roles , died on Saturday in Davos, Switzerland, where he maintained a second home. High Ernst Haefliger was born in Davos on July 6, 1919, and studied at the Wettinger Seminary ... 5 Conclusion and Future Works In this paper, we develop CNN with sentencelevel selective attention. Our model can make full use of all informative sentences and alleviate the wrong labelling problem for distant supervised relation extraction. In experiments, we evaluate our model on relation extraction task. The experimental results show that our model significantly and consistently outperforms state-of-the-art featurebased methods and neural network methods. In the future, we will explore the following directions: • Our model incorporates multi-instance learning with neural network via instance-level selective attention. It can be used in not only distant supervised relation extraction but also other multi-instance learning tasks. We will explore our model in other area such as text categorization. • CNN is one of the effective neural networks for neural relation extraction. Researchers also propose many other neural network models for relation extraction. In the future, we will incorporate our instance-level selective attention technique with those models for relation extraction. Acknowledgments This work is supported by the 973 Program (No. 2014CB340501), the National Natural Science Foundation of China (NSFC No. 61572273, 61303075) and the Tsinghua University Initiative Scientific Research Program (20151080406). References S¨oren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. Springer. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Yoshua Bengio. 2009. Learning deep architectures for ai. Foundations and trends R⃝in Machine Learning, 2(1):1–127. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of KDD, pages 1247–1250. 
Razvan Bunescu and Raymond Mooney. 2007. Learning to extract relations from the web using minimal supervision. In Proceedings of ACL, volume 45, page 576. Jan Chorowski, Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. End-to-end continuous speech recognition using attention-based recurrent nn: first results. arXiv preprint arXiv:1412.1602. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. JMLR, 12:2493–2537. Thomas G Dietterich, Richard H Lathrop, and Tom´as Lozano-P´erez. 1997. Solving the multiple instance problem with axis-parallel rectangles. Artificial intelligence, 89(1):31–71. Cıcero Nogueira dos Santos and Maıra Gatti. 2014. Deep convolutional neural networks for sentiment analysis of short texts. In Proceedings of COLING. Cıcero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. In Proceedings of ACL, volume 1, pages 626–634. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In Proceedings of ACL, pages 363–370. Association for Computational Linguistics. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledgebased weak supervision for information extraction of overlapping relations. In Proceedings of ACLHLT, pages 541–550. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of ACLIJCNLP, pages 1003–1011. Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. 2014. Recurrent models of visual attention. In Proceedings of NIPS, pages 2204–2212. Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Proceedings of ECML-PKDD, pages 148–163. Richard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of EMNLP-CoNLL, pages 1201–1211. Richard Socher, John Bauer, Christopher D Manning, and Andrew Y Ng. 2013. Parsing with compositional vector grammars. In Proceedings of ACL. Citeseer. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 15(1):1929–1958. Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In Proceedings of WWW, pages 697–706. ACM. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In Proceedings of EMNLP, pages 455–465. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of NIPS, pages 3104–3112. Ruobing Xie, Zhiyuan Liu, Jia Jia, Huanbo Luan, and Maosong Sun. 2016. Representation learning of knowledge graphs with entity descriptions. Kelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. Proceedings of ICML. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING, pages 2335–2344. Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. 
Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of EMNLP.
2016
200
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2134–2143, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Leveraging FrameNet to Improve Automatic Event Detection Shulin Liu, Yubo Chen, Shizhu He, Kang Liu and Jun Zhao National Laboratory of Pattern Recognition Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China {shulin.liu, yubo.chen, shizhu.he, kliu, jzhao}@nlpr.ia.ac.cn Abstract Frames defined in FrameNet (FN) share highly similar structures with events in ACE event extraction program. An event in ACE is composed of an event trigger and a set of arguments. Analogously, a frame in FN is composed of a lexical unit and a set of frame elements, which play similar roles as triggers and arguments of ACE events respectively. Besides having similar structures, many frames in FN actually express certain types of events. The above observations motivate us to explore whether there exists a good mapping from frames to event-types and if it is possible to improve event detection by using FN. In this paper, we propose a global inference approach to detect events in FN. Further, based on the detected results, we analyze possible mappings from frames to event-types. Finally, we improve the performance of event detection and achieve a new state-of-the-art result by using the events automatically detected from FN. 1 Introduction In the ACE (Automatic Context Extraction) event extraction program, an event is represented as a structure consisting of an event trigger and a set of arguments. This paper tackles with the event detection (ED) task, which is a crucial component in the overall task of event extraction. The goal of ED is to identify event triggers and their corresponding event types from the given documents. FrameNet (FN) (Baker et al., 1998; Fillmore et al., 2003) is a linguistic resource storing considerable information about lexical and predicateargument semantics. In FN, a frame is defined as a composition of a Lexical Unit (LU) and a set of Frame Elements (FE). Most frames contain a set of exemplars with annotated LUs and FEs (see Figure 2 and Section 2.2 for details). From the above definitions of events and frames, it is not hard to find that the frames defined in FN share highly similar structures as the events defined in ACE. Firstly, the LU of a Frame plays a similar role as the trigger of an event. ACE defines the trigger of an event as the word or phrase which most clearly expresses an event occurrence. For example, the following sentence “He died in the hospital.” expresses a Die event, whose trigger is the word died. Analogously, the LU of a frame is also the word or phrase which is capable of indicating the occurrence of the expressed semantic frame. For example, the sentence “Aeroplanes bombed London.” expresses an Attack1 frame, whose LU is the word bombed. Secondly, the FEs of a frame also play similar roles as arguments of an event. Both of them indicate the participants involved in the corresponding frame or event. For example, in the first sentence, He and hospital are the arguments, and in the second sentence, Aeroplanes and London are the FEs. Besides having similar structure as events, many frames in FN actually express certain types of events defined in ACE. Table 1 shows some examples of frames which also express events. Frame Event Sample in FN Attack Attack Aeroplanes bombed London. Invading Attack Hitler invaded Austria . Fining Fine The court fined her $40. 
Execution Execute He was executed yesterday. Table 1: Examples of frames expressing events. The aforementioned observations motivate us to 1The notation of frames distinguishes from that of events by the italic decoration. 2134 explore: (1) whether there exists a good mapping from frames to event-types, and (2) whether it is possible to improve ED by using FN. Figure 1: Our framework for detecting events in FN (including training and detecting processes). For the first issue, we investigate whether a frame could be mapped to an event-type based on events expressed by exemplars annotated for that frame. Therefore the key is to detect events from the given exemplar sentences in FN. To achieve this goal, we propose a global inference approach (see figure 1). We firstly learn a basic ED model based on the ACE labeled corpus and employ it to yield initial judgements for each sentence in FN. Then, we apply a set of soft constraints for global inference based on the following hypotheses: 1). Sentences belonging to the same LU tend to express events of the same type; 2). Sentences belonging to related frames tend to express events of the same type; 3). Sentences belonging to the same frame tend to express events of the same type. All of the above constraints and initial judgments are formalized as first-order logic formulas and modeled by Probabilistic Soft Logic (PSL) (Kimmig et al., 2012; Bach et al., 2013). Finally, we obtain the final results via PSL-based global inference. We conduct both manual and automatic evaluations for the detected results. For the second issue, ED generally suffers from data sparseness due to lack of labeled samples. Some types, such as Nominate and Extradite, contain even less than 10 labeled samples. Apparently, from such a small scale of training data is difficult to yield a satisfying performance. We notice that ACE corpus only contains about 6,000 labeled instances, while FN contains more than 150,000 exemplars. Thus, a straightforward solution to alleviate the data sparseness problem is to expand the ACE training data by using events detected from FN. The experimental results show that events from FN significantly improve the performance of the event detection task. Figure 2: The hierarchy of FN corpus, where each Sk under a LU is a exemplar annotated for that LU. Inheritance is a semantic relation between the frames Invading and Attack. To sum up, our main contributions are: (1) To our knowledge, this is the first work performing event detection over ACE and FN to explore the relationships between frames and events. (2) We propose a global inference approach to detect events in FN, which is demonstrated very effective by our experiments. Moreover, based on the detected results, we analyze possible mappings from frames to event-types (all the detecting and mapping results are released for further use by the NLP community2). (3) We improve the performance of event detection significantly and achieve a new state-of-the-art result by using events automatically detected from FN as extra training data. 2 Background 2.1 ACE Event Extraction In ACE evaluations, an event is defined as a specific occurrence involving several participants. ACE event evaluation includes 8 types of events, with 33 subtypes. Following previous work, we treat them simply as 33 separate event types and ignore the hierarchical structure among them. In this paper, we use the ACE 2005 corpus3 in our experiments. It contains 599 documents, which include about 6,000 labeled events. 
2.2 FrameNet The FrameNet is a taxonomy of manually identified semantic frames for English4. Figure 2 shows 2Available at https://github.com/subacl/acl16 3https://catalog.ldc.upenn.edu/LDC2006T06 4We use the latest released version, FrameNet 1.5 in this work (http://framenet.icsi.berkeley.edu). 2135 the hierarchy of FN corpus. Listed in the FN with each frame are a set of lemmas with part of speech (i.e “invade.v”) that can evoke the frame, which are called lexical units (LUs). Accompanying most LUs in the FN is a set of exemplars annotated for them. Moreover, there are a set of labeled relations between frames, such as Inheritance. FN contains more than 1,000 various frames and 10,000 LUs with 150,000 annotated exemplars. Eight relations are defined between frames in FN, but in this paper we only use the following three of them because the others do not satisfy our hypotheses (see section 4.2): Inheritance: A inherited from B indicates that A must correspond to an equally or more specific fact about B. It is a directional relation. See also: A and B connected by this relation indicates that they are similar frames. Perspective on: A and B connected by this relation means that they are different points-of-view about the same fact (i.e. Receiving vs. Transfer). 2.3 Related Work Event extraction is an increasingly hot and challenging research topic in NLP. Many approaches have been proposed for this task. Nearly all the existing methods on ACE event task use supervised paradigm. We further divide them into featurebased methods and representation-based methods. In feature-based methods, a diverse set of strategies has been exploited to convert classification clues into feature vectors. Ahn (2006) uses the lexical features(e.g., full word), syntactic features (e.g., dependency features) and externalknowledge features(WordNet (Miller, 1995)) to extract the event. Inspired by the hypothesis of One Sense Per Discourse (Yarowsky, 1995), Ji and Grishman (2008) combined global evidence from related documents with local decisions for the event extraction. To capture more clues from the texts, Gupta and Ji (2009), Liao and Grishman (2010) and Hong et al. (2011) proposed the crossevent and cross-entity inference for the ACE event task. Li et al. (2013) proposed a joint model to capture the combinational features of triggers and arguments. Liu et al. (2016) proposed a global inference approach to employ both latent local and global information for event detection. In representation-based methods, candidate event mentions are represented by embedding, which typically are fed into neural networks. Two similarly related work has been proposed on event detection (Chen et al., 2015; Nguyen and Grishman, 2015). Nguyen and Grishman (2015) employed Convolutional Neural Networks (CNNs) to automatically extract sentence-level features for event detection. Chen et al. (2015) proposed dynamic multi-pooling operation on CNNs to capture better sentence-level features. FrameNet is a typical resource for framesemantic parsing, which consists of the resolution of predicate sense into a frame, and the analysis of the frame’s participants (Thompson et al., 2003; Giuglea and Moschitti, 2006; Hermann et al., 2014; Das et al., 2014). Other tasks which have been studied based on FN include question answering (Narayanan and Harabagiu, 2004; Shen and Lapata, 2007), textual entailment (Burchardt et al., 2009) and paraphrase recognition (Pad´o and Lapata, 2005). 
This is the first work to explore the application of FN to event detection. 3 Basic Event Detection Model Alike to existing work, we model event detection (ED) as a word classification task. In the ED task, each word in the given sentence is treated as a candidate trigger and the goal is to classify each of these candidates into one of 34 classes (33 event types plus a NA class). However, in this work, as we assumed that the LU of a frame is analogical to the trigger of an event, we only treat the LU annotated in the given sentence as a trigger candidate. Each sentence in FN only contains one candidate trigger, thus “the candidate” denotes both the candidate trigger of a sentence and the sentence itself for FN in the remainder of this paper. Another notable difference is that we train the detection model on one corpus (ACE) but apply it on another (FN). That means our task is also a cross-domain problem. To tackle with it, our basic ED approach follows representation-based paradigm, which has been demonstrated effective in the cross-domain situation (Nguyen and Grishman, 2015). 3.1 Model We employ a simple three-layer (a input layer, a hidden layer and a soft-max output layer) Artificial Neural Networks (ANNs) (Hagan et al., 1996) to model the ED task. In our model, adjacent layers are fully connected. Word embeddings learned from large amount of unlabeled data have been shown to be able to capture the meaningful semantic regularities of words 2136 (Bengio et al., 2003; Erhan et al., 2010). This paper uses unsupervised learned word embeddings as the source of base features. We use the Skipgram model (Mikolov et al., 2013) to learn word embeddings on the NYT corpus5. Given a sentence, we concatenate the embedding vector of the candidate trigger and the average embedding vector of the words in the sentence as the input to our model. We train the model using a simple optimization technique called stochastic gradient descent (SGD) over shuffled mini-batches with the Adadelta update rule (Zeiler, 2012). Regularization is implemented by a dropout (Kim, 2014; Hinton et al., 2012). The experiments show that this simple model is surprisingly effective for event detection. 4 Event Detection in FrameNet To detect events in FN, we first learned the basic ED model based on ACE labeled corpus and then employ it to generate initial judgements (possible event types with confidence values) for each sentence in FN. Then, we apply a set of constraints for global inference based on the PSL model. 4.1 Probabilistic Soft Logic PSL is a framework for collective, probabilistic reasoning in relational domains (Kimmig et al., 2012; Bach et al., 2013). Similar to Markov Logic Networks (MLNs) (Richardson and Domingos, 2006), it uses weighted first-order logic formulas to compactly encode complex undirected probabilistic graphical models. However, PSL brings two remarkable advantages compared with MLNs. First, PSL relaxes the boolean truth values of MLNs to continuous, soft truth values. This allows for easy integration of continuous values, such as similarity scores. Second, PSL restricts the syntax of first order formulas to that of rules with conjunctive bodies. Together with the soft truth values constraint, the inference in PSL is a convex optimization problem in continuous space and thus can be solved using efficient inference approaches. For further details, see the references (Kimmig et al., 2012; Bach et al., 2013). 
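As a rough sketch of the basic three-layer ANN detector described in Section 3, the forward pass below concatenates the candidate-trigger embedding with the average sentence embedding and scores the 34 classes (33 event types plus NA). The toy vocabulary, random parameters and the tanh activation are illustrative assumptions rather than the authors' code; the layer sizes follow the settings reported in Section 5.2.

```python
import numpy as np

rng = np.random.default_rng(0)

emb_dim, hidden, n_classes = 200, 300, 34        # sizes follow the settings reported in Section 5.2
vocab = {"he": 0, "died": 1, "in": 2, "the": 3, "hospital": 4}   # toy vocabulary (illustrative)
E = rng.standard_normal((len(vocab), emb_dim))   # stands in for the pre-trained Skip-gram embeddings

W1 = 0.01 * rng.standard_normal((2 * emb_dim, hidden))
b1 = np.zeros(hidden)
W2 = 0.01 * rng.standard_normal((hidden, n_classes))
b2 = np.zeros(n_classes)

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

def initial_judgement(tokens, cand_idx):
    """Confidence over 33 event types + NA for one candidate trigger."""
    ids = [vocab[w] for w in tokens]
    x = np.concatenate([E[ids[cand_idx]],        # embedding of the candidate trigger
                        E[ids].mean(axis=0)])    # average embedding of the sentence words
    h = np.tanh(x @ W1 + b1)                     # hidden layer (tanh is an assumption; dropout omitted at test time)
    return softmax(h @ W2 + b2)                  # soft-max output layer

conf = initial_judgement(["he", "died", "in", "the", "hospital"], cand_idx=1)
print(conf.argmax(), float(conf.max()))          # these confidences later initialise CandEvt(c, t)
```

The per-candidate confidence values produced this way are exactly the initial judgements that the global constraints below refine.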
4.2 Global Constraints Our global inference approach is based on the following three hypotheses. 5https://catalog.ldc.upenn.edu/LDC2008T19 H1: Same Frame Same Event This hypothesis indicates that sentences under the same frame tend to express events of the same type. For example, all exemplars annotated for the frame Rape express events of type Attack, and all sentences under the frame Clothing express NA (none) events. With this hypothesis, sentences annotated for the same frame help each other to infer their event types during global inference. H2: Related Frame Same Event This hypothesis is an extension of H1, which relaxes “the same frame” constraint to “related frames”. In this paper, frames are considered to be related if and only if they are connected by one of the following three relations: Inheritance, See also and Perspective on (see section 2.2). For example, the frame Invading is inherited from Attack, and they actually express the same type of event, Attack. With this hypothesis, sentences under related frames help each other to infer their event types during global inference. The previous two hypotheses are basically true for most frames but not perfect. For example, for the frame Dead or alive, only a few of the sentences under it express Die events while the remainder do not. To amend the this flaw, we introduce the third hypothesis. H3: Same LU Same Event This hypothesis indicates that sentences under the same LU tend to express events of the same type (as a remind, LUs are under frames). It is looser than the previous two hypotheses thus holds true in more situations. For example, H3 holds true for the frame Dead or alive which violates H1 and H2. In FN, LUs annotated for that frame are alive.a, dead.a, deceased.a, lifeless.a, living.n, undead.a and undead.n. All exemplars under dead.a, deceased.a and lifeless.a express Die events. Therefore, this hypothesis amends the flaws of the former two hypotheses. On the other hand, the first two hypotheses also help H3 in some cases. For example, most of the sentences belonging to the LU suit.n under the frame Clothing are misidentified as Sue events due to the ambiguity of the word “suit”. However, in this situation, H1 can help to rectify it because the majority of LUs under Clothing are not ambiguous words. Thus, under the first hypothesis, the misidentified results are expected to be corrected by the the results of other exemplars belonging to Clothing. 2137 4.3 Inference To model the above hypotheses as logic formulas in PSL, we introduce a set of predicates (see Table 2), which are grouped into two categories: observed predicates and target predicates. Observed predicates are used to encode evidences, which are always assumed to be known during the inference, while target predicates are unknown and thus need to be predicted. CandEvt(c, t) is introduced to represent conf(c, t), which is the confidence value generated by the basic ED model for classifying the candidate c as an event of the type t. SameFr(c1, c2) indicates whether the candidates c1 and c2 belong to the same frame. It is initialized by the indicator function Isf(c1, c2), which is defined as follows: Isf(c1, c2) = ( 1 c1, c2 from the same frame 0 otherwise (1) SameLU(c1, c2) is similar, but applies for candidates under the same LU. The last three observed predicates in Table 2 are used to encode the aforementioned semantic relations between frames. 
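Before these predicates are grounded, the pairwise evidence they encode has to be collected from the FN hierarchy. The sketch below, shown before the predicate examples that follow, illustrates one way to derive same-frame, same-LU and related-frame exemplar pairs from exemplar metadata; the records and relations are toy examples, not the FrameNet 1.5 release or a PSL implementation.

```python
from itertools import combinations

# Toy exemplar records (sentence id, frame, lexical unit) and toy frame-frame relations;
# in practice these would be read from the FrameNet 1.5 release, not hard-coded.
exemplars = [
    ("s1", "Invading", "invade.v"),
    ("s2", "Invading", "invade.v"),
    ("s3", "Attack",   "bomb.v"),
    ("s4", "Clothing", "suit.n"),
]
related_frames = {("Invading", "Attack")}   # e.g. Inheritance, See_also or Perspective_on

same_frame, same_lu, related = set(), set(), set()
for (c1, f1, l1), (c2, f2, l2) in combinations(exemplars, 2):
    if f1 == f2:
        same_frame.add((c1, c2))            # evidence for SameFr(c1, c2)  -- hypothesis H1
    if (f1, l1) == (f2, l2):
        same_lu.add((c1, c2))               # evidence for SameLU(c1, c2)  -- hypothesis H3
    if (f1, f2) in related_frames or (f2, f1) in related_frames:
        related.add((c1, c2))               # evidence for Inherit/SeeAlso/Perspect -- hypothesis H2

print(sorted(same_frame))   # [('s1', 's2')]
print(sorted(same_lu))      # [('s1', 's2')]
print(sorted(related))      # [('s1', 's3'), ('s2', 's3')]
```

A full system would additionally record which of the three frame relations links each related pair, since the PSL formulas weight them separately.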
For example, Inherit(c1, c2) indicates whether the frame of c1 is inherited from that of c2, and it is initialized by the indicator function Iih(c1, c2), which is set to 1 if and only if the frame of c1 is inherited from that of c2, otherwise 0. Evt(c, t) is the only target predicate, which indicates that the candidate c triggers an event of type t. Type Predicate Assignment Observed CandEvt(c, t) conf(c, t) SameFr(c1, c2) Isf(c1, c2) SameLU(c1, c2) Isl(c1, c2) Inherit(c1, c2) Iih(c1, c2) SeeAlso(c1, c2) Isa(c1, c2) Perspect(c1, c2) Ipe(c1, c2) Target Evt(c, t) — Table 2: Predicates and their initial assignments. Putting all the predicates together, we design a set of formulas to apply the aforementioned hypotheses in PSL (see Table 3). Formula f1 connects the target predicate with the initial judgements from the basic ED model. Formulas f2 and f3 respectively encode H1 and H3. Finally, the remaining formulas are designed for various relations between frames in H2. We tune the formulas’s weights via grid search (see Section 5.4). The inference results provide us with the most likely Formulas f1 CandEvt(c, t) →Evt(c, t) f2 SameFr(c1, c2) ∧Evt(c1, t) →Evt(c2, t) f3 SameLU(c1, c2) ∧Evt(c1, t) →Evt(c2, t) f4 Inherit(c1, c2) ∧Evt(c1, t) →Evt(c2, t) f5 SeeAlso(c1, c2) ∧Evt(c1, t) →Evt(c2, t) f6 Perspect(c1, c2) ∧Evt(c1, t) →Evt(c2, t) Table 3: Formulas in the PSL model interpretation, that is, the soft-truth values of the predicate Evt. The final detected event type t of candidate c is decided by the the equation: t = argmax t′ Evt(c, t′) (2) 5 Evaluations In this section, we present the experiments and the results achieved. We first manually evaluate our novel PSL-based ED model on the FN corpus. Then, we also conduct automatic evaluations for the events detected from FN based on ACE corpus. Finally, we analyze possible mappings from frames/LUs to event types. 5.1 Data We learned the basic ED model on ACE2005 dataset. In order to evaluate the learned model, we followed the evaluation of (Li et al., 2013): randomly selected 30 articles from different genres as the development set, and we subsequently conducted a test on a separate set of 40 ACE 2005 newswire documents. We used the remaining 529 articles as the training data set. We apply our proposed PSL-based approach to detect events in FrameNet. Via collecting all exemplars annotated in FN, we totally obtain 154,484 sentences for detection. 5.2 Setup and Performance of Basic Model We have presented the basic ED model in Section 3. Hyperparameters were tuned by grid search on the development data set. In our experiments, we set the size of the hidden layer to 300, the size of word embedding to 200, the batch size to 100 and the dropout rate to 0.5. Table 4 shows the experimental results, from which we can see that the three-layer ANN model is surprisingly effective for event detection, which even yields competitive results compared with Nguyen’s CNN and Chen’s DMCNN. We believe the reason is that, compared with CNN and DMCNN, 2138 Methods Pre Rec F1 Nguyen’s CNN (2015) 71.8 66.4 69.0 Chen’s DMCNN (2015) 75.6 63.6 69.1 Liu’s Approach (2016) 75.3 64.4 69.4 ANN (ours) 79.5 60.7 68.8 ANN-Random (ours) 81.0 49.5 61.5 Table 4: Performance of the basic ED model. ANN uses pre-trained word embeddings while ANNRandom uses randomly initialized embeddings. ANN focuses on capturing lexical features which have been proved much more important than sentence features for the ED task by (Chen et al., 2015). 
Moreover, our basic model achieves much higher precision than state-of-the-art approaches (79.5% vs. 75.6%). We also investigate the performance of the basic ED model without pre-trained word embeddings6 (denoted by ANN-Random). The result shows that randomly initialized word embeddings decrease the F1 score by 7.3 (61.5 vs. 68.8). The main reasons are: 1). ACE corpus only contains 599 articles, which are far insufficient to train good embeddings. 2). Words only existing in the test dataset always retain random embeddings. 5.3 Baselines For comparison, we designed four baseline systems that utilize different hypotheses to detect events in FN. (1) ANN is the first baseline, which directly uses a basic ED model learned on ACE training corpus to detect events in FN. This system does not apply any hypotheses between frames and events. (2) SameFrame (SF) is the second baseline system, which applies H1 over the results from ANN. For each frame, we introduce a score function φ(f, t) to estimate the probability that the frame f could be mapped to the event type t as follows: φ(f, t) = 1 ||Sf|| X c∈Sf I(c, t) (3) where Sf is the set of sentences under the frame f; I(c, t) is an indicator function which is true if and only if ANN predicts the candidate c as an event of type t. Then for each frame f satisfying φ(f, t) > α, we mapped it to event type t, where α is a hyperparameter. Finally, all sentences under mapped frames are labeled as events. Note that, 6We thank the anonymous reviewer for this suggestion. unlike the PSL-based approach which applies constraints as soft rules, this system utilizes H1 as a hard constraint. (3) RelatedFrame (RF) is the third baseline system, which applies H2 over the results from ANN. For each frame f, we merge it and its related frames into a super frame, f ′. Similar with SF, a score function ζ(f ′, t), which shares the same expression to equation 3, is introduced. For the merged frame satisfying ζ(f ′, t) > β, we mapped it to the event type t. Finally, all sentences under f ′ are labeled as events. (4) SameLU (SL) is the last baseline, which applies the hypothesis H3 over the results from ANN. Also, a score function ψ(l, t) is introduced: ψ(l, t) = 1 ||Sl|| X c∈Sl I(c, t) (4) where Sl is the set of sentences under the LU l. For each LU satisfying ψ(l, t) > γ, we mapped it to the event type t. Finally, all sentences under l are labeled as events. 5.4 Manual Evaluations In this section, we manually evaluate the precision of the baseline systems and our proposed PSLbased approach. For fair comparison, we set α, β and γ to 0.32, 0.29 and 0.42 respectively to ensure they yield approximately the same amount of events as the first baseline system ANN. We tune the weights of formulas in PSL via grid search by using ACE development dataset. In details, we firstly detect events in FN under different configurations of formulas’ weights and add them to ACE training dataset, respectively. Consequently, we obtain several different expanded training datasets. Then, we separately train a set of basic ED models based on each of these training datasets and evaluate them over the development corpus. Finally, the best weights are selected according to their performances on the development dataset. The weights of f1 :f5 used in this work are 100, 10, 100, 5, 5 and 1, respectively. Manual Annotations Firstly, we randomly select 200 samples from the results of each system. Each selected sample is a sentence with a highlighted trigger and a predicted event type. 
Figure 3 illustrates three samples. The first line of each sample is a sentence labeled with the trigger. The next line is the predicted event 2139 type of that sentence. Annotators are asked to assign one of two labels to each sample (annotating in the third line): Y: the word highlighted in the given sentence indeed triggers an event of the predicted type. N: the word highlighted in the given sentence does not trigger any event of the predicted type. We can see that, it is very easy to annotate a sample for annotators, thus the annotated results are expected to be of high quality. Figure 3: Examples of manual annotations. To make the annotation more credible, each sample is independently annotated by three annotators7 (including one of the authors and two of our colleagues who are familiar with ACE event task) and the final decision is made by voting. Results Table 5 shows the results of manual evaluations. Through the comparison of ANN and SF, we can see that the application of H1 caused a loss of 5.5 point. It happens mainly because the performance of SF is very sensitive to the wrongly mapped frames. That is, if a frame is mismapped, then all sentences under it would be mislabeled as events. Thus, even a single mismapped frame could significantly hurt the performance. This result also proves that H1 is inappropriate to be used as a hard constraint. As H2 is only an extension of H1, RF performs similarly with SF. Moreover, SL obtains a gain of 2.0% improvement compared with ANN, which demonstrates that the ”same LU” hypothesis is very useful. Finally, with all the hypotheses, the PSL-based approach achieves the best performance, which demonstrates that our hypotheses are useful and it is an effective way to jointly utilize them as soft constraints through PSL for event detection in FN. 5.5 Automatic Evaluations To prepare for automatic evaluations, we respectively add the events detected from FN by each of the aforementioned five systems to ACE training corpus. Consequently, we obtain five ex7The inter-agreement rate is 86.1% Methods Precision (%) Baselines ANN 77.5 SF 72.0 RF 71.0 SL 79.5 PSL-based Approach 81.0 Table 5: Results of manual evaluations. panded training datasets: ACE-ANN-FN, ACE-SFFN, ACE-RF-FN, ACE-SL-FN and ACE-PSL-FN. Then, we separately train five basic ED models on each of these corpus and evaluate them on the ACE testing data set. This experiment is an indirect evaluation of the events detected from FN, which is based on the intuition that events with higher accuracy are expected to bring more improvements to the basic model. Training Corpus Pre Rec F1 ACE-ANN-FN 77.2 63.5 69.7 ACE-SF-FN 73.2 64.1 68.4 ACE-RF-FN 72.6 63.9 68.0 ACE-SL-FN 77.5 64.3 70.3 ACE-PSL-FN 77.6 65.2 70.7 Table 6: Automatic evaluations of events from FN. Table 6 presents the results where we measure precision, recall and F1. Compared with ACEANN-FN, events from SF and RF hurt the performance. As analyzed in previous section, SF and RF yield quite a few false events, which dramatically hurt the accuracy. Moreover, ACE-SL-FN obtains a score of 70.3% in F1 measure, which outperforms ACE-ANN-FN. This result illustrates the effectiveness of our “same LU” hypothesis. Finally and most importantly, consistent with the results of manual evaluations, ACE-PSL-FN performs the best, which further proves the effectiveness of our proposed approach for event detection in FN. 5.6 Improving Event Detection Using FN Event detection generally suffers from data sparseness due to lack of labeled samples. 
In this section, we investigate the effects of alleviating the aforementioned problem by using the events detected from FN as extra training data. Our investigation is conducted by the comparison of two basic ED models, ANN and ANN-FN: the former is trained on ACE training corpus and the latter is trained on the new training corpus ACE-PSL-FN (introduced in the previous section), which contains 3,816 extra events detected from FN. 2140 Methods Pre Rec F1 Nguyen’s CNN(2015) 71.8 66.4 69.0 Chen’s DMCNN(2015) 75.6 63.6 69.1 Liu’s Approach(2016) 75.3 64.4 69.4 ANN (Ours) 79.5 60.7 68.8 ANN-FN (Ours) 77.6 65.2 70.7 Table 7: Effects of expanding training data using events automatically detected from FN. Table 7 presents the experimental results. Compared with ANN, ANN-FN achieves a significant improvement of 1.9% in F1 measure. It happens mainly because that the high accurate extra training data makes the model obtain a higher recall (from 60.7% to 65.2%) with less decrease of precision (from 79.5% to 77.6%). The result demonstrates the effectiveness of alleviating the data sparseness problem of ED by using events detected from FN. Moreover, compared with state-ofthe-art methods, ANN-FN outperforms all of them with remarkable improvements (more than 1.3%). 5.7 Analysis of Frame-Event Mapping In this section, we illustrate the details of mappings from frames to event types. The mapping pairs are obtained by computing the function φ (see Section 5.3) for each (frame, event-type) pair (f, t) based on the events detected by the PSLbased approach. Table 8 presents the top 10 mappings. We manually evaluate their quality by investigating: (1) whether the definition of each frame is compatible with its mapped event type; (2) whether exemplars annotated for each frame actually express events of its mapped event type. For the first issue, we manually compare the definitions of each mapped pair. Except Relational nat features8, definitions of all the mapped pairs are compatible. For the second issue, we randomly sample 20 exemplars (if possible) from each frame and manually annotate them. Except the above frame and Invading, exemplars of the remaining frames all express the right events. The only exemplar of Invading failing to express its mapped event is as follows: “The invasion of China by western culture has had a number of far-reaching effects on Confucianism.” ACE requires an Attack event to be a physical act, while the invasion of culture is unphysical. Thus, the above sentence does not express an 8The full name is Relational natural relations in FN. Frame Event Ne/||Sf|| φ Hit target Attack 2/2 1.0 Relational nat features Meet 1/1 1.0 Invading Attack 120/121 0.99 Fining Fine 26/27 0.96 Being born Be-Born 32/36 0.88 Rape Attack 104/125 0.83 Sentencing Sentence 57/70 0.81 Attack Attack 99/129 0.77 Quitting End-Position 102/137 0.74 Notification of charges Charge-Indict 73/103 0.71 Table 8: Top 10 mappings from frames to event types. Ne is the number of exemplars detected as events; ||Sf|| and φ hold the same meanings as mentioned in Section 5.3. Attack event. To sum up, the quality of our mappings is good, which demonstrates that the hypothesis H1 is basically true. 5.8 Analysis of LU-Event Mapping This section illustrates the details of mappings from LUs to event types. The mapping pairs are obtained by computing the function ψ (see Section 5.3). Table 9 presents the top 10 mappings. In FN, each LU belongs to a frame. In table 9, we omit the frame of each LU because of space limitation9. 
LU Event Ne/||Sl|| ψ gunfight.n Attack 14/14 1.0 injure.v Injure 14/14 1.0 divorce.n Divorce 11/11 1.0 decapitation.n Die 5/5 1.0 trial.n Trial-Hearing 25/25 1.0 assault.v Attack 21/21 1.0 fight.v Attack 12/12 1.0 arrest.n Arrest-Jail 38/38 1.0 divorce.v Divorce 35/35 1.0 shoot.v Attack 2/2 1.0 Table 9: Top 10 mappings from LUs to event types. Ne is the number of exemplars detected as events; ||Sl|| and ψ hold the same meanings as mentioned in Section 5.3. To investigate the mapping quality, we manu9Their frames separately are Hostile encounter, Cause harm, Forming relationships, Killing, Trial, Attack, Quarreling, Arrest, Forming relationships and Hit target. 2141 ally annotate the exemplars under these LUs. The result shows that all exemplars are rightly mapped. These mappings are quite good. We believe the reason is that an LU is hardly ambiguous due to its high specificity, which is not only specified by a lemma but also by a frame and a part of speech tag. Table 9 only presents the top 10 mappings. In fact, we obtain 54 mappings in total with ψ = 1.0. We released all the detected events and mapping results for further use by the NLP community. 6 Conclusions and Future Work Motivated by the high similarity between frames and events, we conduct this work to study their relations. The key of this research is to detect events in FN. To solve this problem, we proposed a PSL-based global inference approach based on three hypotheses between frames and events. For evaluation, we first conduct manual evaluations on events detected from FN. The results reveal that our hypotheses are very useful and it is an effective way to jointly utilize them as soft rules through PSL. In addition, we also perform automatic evaluations. The results further demonstrate the effectiveness of our proposed approach for detecting events in FN. Furthermore, based on the detected results, we analyze the mappings from frames/LUs to event types. Finally, we alleviate the data sparseness problem of ED by using events detected from FN as extra training data. Consequently, we obtain a remarkable improvement and achieve a new state-of-the-art result for the ED task. Event detection is only a component of the overall task of event extraction, which also includes event role detection. In the future, we will extend this work to the complete event extraction task. Furthermore, event schemas in ACE are quite coarse. For example, all kinds of violent acts, such as street fights and wars, are treated as a single event type Attack. We plan to refine the event schemas by the finer-grained frames defined in FN (i.e. Attack may be divided into Terrorism, Invading, etc.). Acknowledgements This work was supported by the Natural Science Foundation of China (No. 61533018), the National Basic Research Program of China (No. 2014CB340503) and the National Natural Science Foundation of China (No. 61272332). And this work was also supported by Google through focused research awards program. References David Ahn. 2006. The stages of event extraction. In Proceedings of the Workshop on Annotating and Reasoning about Time and Events, pages 1–8. Stephen Bach, Bert Huang, Ben London, and Lise Getoor. 2013. Hinge-loss markov random fields: convex inference for structured prediction. In Proceedings of 29th Annual Meeting of the Association for Uncertainty in Artificial Inteligence, pages 1–10. Collin F Baker, Charles J Fillmore, and John B Lowe. 1998. The berkeley framenet project. 
In Proceedings of 17th Annual Meeting of the Association for Computational Linguistics, pages 86–90. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. The Journal of Machine Learning Research, 3:1137–1155. Aljoscha Burchardt, Marco Pennacchiotti, Stefan Thater, and Manfred Pinkal. 2009. Assessing the impact of frame semantics on textual entailment. Natural Language Engineering, 15(04):527–550. Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In Proceedings of 53rd Annual Meeting of the Association for Computational Linguistics, pages 167–176. Dipanjan Das, Desai Chen, Andr´e FT Martins, Nathan Schneider, and Noah A Smith. 2014. Framesemantic parsing. Computational Linguistics, 40(1):9–56. Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. 2010. Why does unsupervised pre-training help deep learning? The Journal of Machine Learning Research, 11:625–660. Charles J Fillmore, Christopher R Johnson, and Miriam RL Petruck. 2003. Background to framenet. International Journal of Lexicography, 16(3):235– 250. Ana-Maria Giuglea and Alessandro Moschitti. 2006. Shallow semantic parsing based on framenet, verbnet and propbank. European Conference on Artificial Intelligence, 141:563–567. Prashant Gupta and Heng Ji. 2009. Predicting unknown time arguments based on cross-event propagation. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 369–372. 2142 Martin T Hagan, Howard B Demuth, Mark H Beale, et al. 1996. Neural network design. Pws Pub. Boston. Karl Moritz Hermann, Dipanjan Das, Jason Weston, and Kuzman Ganchev. 2014. Semantic frame identification with distributed word representations. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics, pages 1448– 1458. Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing coadaptation of feature detectors. arXiv preprint arXiv:1207.0580. Yu Hong, Jianfeng Zhang, Bin Ma, Jianmin Yao, Guodong Zhou, and Qiaoming Zhu. 2011. Using cross-entity inference to improve event extraction. In Proceedings of 49th Annual Meeting of the Association for Computational Linguistics, pages 1127– 1136. Heng Ji and Ralph Grishman. 2008. Refining event extraction through cross-document inference. In Proceedings of 46th Annual Meeting of the Association for Computational Linguistics, pages 254–262. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1746–1751. Angelika Kimmig, Stephen Bach, Matthias Broecheler, Bert Huang, and Lise Getoor. 2012. A short introduction to probabilistic soft logic. In Proceedings of the NIPS Workshop on Probabilistic Programming: Foundations and Applications, pages 1–4. Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In Proceedings of 51st Annual Meeting of the Association for Computational Linguistics, pages 73–82. Shasha Liao and Ralph Grishman. 2010. Using document level cross-event inference to improve event extraction. In Proceedings of 48th Annual Meeting of the Association for Computational Linguistics, pages 789–797. 
Shulin Liu, Kang Liu, Shizhu He, and Jun Zhao. 2016. A probabilistic soft logic based approach to exploiting latent and global information in event classification. In Proceedings of the thirtieth AAAI Conference on Artificail Intelligence. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. George A. Miller. 1995. Wordnet: a lexical database for english. Communications of the Acm, 38(11):39–41. Srini Narayanan and Sanda Harabagiu. 2004. Question answering based on semantic structures. International Conference on Computational Linguistics. Thien Huu Nguyen and Ralph Grishman. 2015. Event detection and domain adaptation with convolutional neural networks. In Proceedings of 53rd Annual Meeting of the Association for Computational Linguistics, pages 365–371. Sebastian Pad´o and Mirella Lapata. 2005. Crosslinguistic projection of role-semantic information. In Proceedings of the conference on human language technology and empirical methods in natural language processing, pages 859–866. Association for Computational Linguistics. Matthew Richardson and Pedro Domingos. 2006. Markov logic networks. Machine learning, pages 107–136. Dan Shen and Mirella Lapata. 2007. Using semantic roles to improve question answering. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 12–21. Cynthia A Thompson, Roger Levy, and Christopher D Manning. 2003. A generative model for semantic role labeling. European Conference on Machine Learning, pages 397–408. David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of 14th Annual Meeting of the Association for Computational Linguistics, pages 189–196. Matthew D Zeiler. 2012. Adadelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701. 2143
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2144–2153, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Learning To Use Formulas To Solve Simple Arithmetic Problems Arindam Mitra Arizona State University [email protected] Chitta Baral Arizona State University [email protected] Abstract Solving simple arithmetic word problems is one of the challenges in Natural Language Understanding. This paper presents a novel method to learn to use formulas to solve simple arithmetic word problems. Our system, analyzes each of the sentences to identify the variables and their attributes; and automatically maps this information into a higher level representation. It then uses that representation to recognize the presence of a formula along with its associated variables. An equation is then generated from the formal description of the formula. In the training phase, it learns to score the <formula, variables> pair from the systematically generated higher level representation. It is able to solve 86.07% of the problems in a corpus of standard primary school test questions and beats the state-of-the-art by a margin of 8.07%. 1 Introduction Developing algorithms to solve math word problems (Table 1) has been an interest of NLP researchers for a long time (Feigenbaum and Feldman, 1963). It is an interesting topic of study from the point of view of natural language understanding and reasoning for several reasons. First, it incorporates rigorous standards of accurate comprehension. Second, we know of a good representation to solve the word problems, namely algebraic equations. Finally, the evaluation is straightforward and the problems can be collected easily. In the recent years several challenges have been proposed for natural language understanding. This includes the Winograd Schema challenge for commonsense reasoning (Levesque, 2011), Story Comprehension Challenge (Richardson et al., 2013), Facebook bAbl task (Weston et al., 2015), Semantic Textual Similarity (Agirre et al., 2012) and Textual Entailment (Bowman et al., 2015; Dagan et al., 2010). The study of word math problems is also an important problem as quantitative reasoning is inextricably related to human life. Clark & Etzioni (Clark, 2015; Clark and Etzioni, 2016) discuss various properties of math word (and science) problems emphasizing elementary school science and math tests as a driver for AI. Researchers at Allen AI Institute have published two standard datasets as part of the Project Euclid1 for future endeavors in this regard. One of them contains simple addition-subtraction arithmetic problems (Hosseini et al., 2014) and the other contains general arithmetic problems (KoncelKedziorski et al., 2015). In this research, we focus on the former one, namely the AddSub dataset. Dan grew 42 turnips and 38 cantelopes . Jessica grew 47 turnips . How many turnips did they grow in total ? Formula Associated variables part-whole whole: x, parts: {42, 47} Equation x = 42 + 47 Table 1: Solving a word problem using part-whole Broadly speaking, common to the existing approaches (Kushman et al., 2014; Hosseini et al., 2014; Zhou et al., 2015; Shi et al., 2015; Roy and Roth, 2015) is the task of grounding, that takes as input a word problem in the natural language and represents it in a formal language, such as, a system of equations, expression trees or states (Hosseini et al., 2014), from which the answer can be easily computed. 
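To make the target representation concrete, the following is a minimal sketch (not the authors' released implementation; the slot names are illustrative) of how the part-whole instantiation in Table 1 can be turned into an equation and an answer once the formula and its variables have been recognized:

```python
# Hypothetical slot-filled part-whole instance for the Table 1 problem:
# whole = x (the unknown), parts = {42, 47}; the 38 is irrelevant to the question.
whole, parts = "x", [42, 47]

# "The whole equals the sum of its parts": with the unknown in the whole slot,
# the equation x = 42 + 47 is solved by simply summing the parts.
equation = f"{whole} = " + " + ".join(str(p) for p in parts)
answer = sum(parts) if whole == "x" else None

print(equation, "->", answer)  # x = 42 + 47 -> 89
```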
In this work, we divide this task of grounding into two parts as follows: 1http://allenai.org/euclid.html 2144 In the first step, the system learns to connect the assertions in a word problem to abstract mathematical concepts or formulas. In the second step, it maps that formula into an algebraic equation. Examples of such formulas in the arithmetic domain includes part whole which says, ‘the whole is equal to the sum of its parts’, or the Unitary Method that is used to solve problems like ‘A man walks seven miles in two hours. What is his average speed?’. Consider the problem in Table 1. If the system can determine it is a ‘part whole’ problem where the unknown quantity X plays the role of whole and its parts are 42 and 47, it can easily express the relation as X = 42 + 47. The translation of a formula to an equation requires only the knowledge of the formula and can be formally encoded. Thus, we are interested in the question, ‘how can an agent learn to apply the formulas for the word problems?’ Solving a word problem in general, requires several such applications in series or parallel, generating multiple equations. However, in this research, we restrict the problems to be of a single equation which requires only one application. Our system currently considers three mathematical concepts: 1) the concept of part whole, 2) the concept of change and 3) the concept of comparison. These concepts are sufficient to solve the arithmetic word problems in AddSub. Table 2 illustrates each of these three concepts with examples. The part whole problems deal with the part whole relationships and ask for either the part or the whole. The change problems make use of the relationship between the new value of a quantity and its original value after the occurrence of a series of increase or decrease. The question then asks for either the initial value of the quantity or the final value of the quantity or the change. In case of comparison problems, the equation can be visualized as a comparison between two quantities and the question typically looks for either the larger quantity or the smaller quantity or the difference. While the equations are simple, the problems describe a wide variety of scenarios and the system needs to make sense of multiple sentences without a priori restrictions on the syntax or the vocabulary to solve the problem. Training has been done in a supervised fashion. For each example problem, we specify the formula that should be applied to generate the apChange RESULT UNKNOWN Mary had 18 baseball cards , and 8 were torn . Fred gave Mary 26 new baseball cards . Mary bought 40 baseball cards . How many baseball cards does Mary have now ? CHANGE UNKNOWN There were 28 bales of hay in the barn . Tim stacked bales in the barn today . There are now 54 bales of hay in the barn . How many bales did he store in the barn ? START UNKNOWN Sam ’s dog had puppies and 8 had spots . He gave 2 to his friends . He now has 6 puppies . How many puppies did he have to start with? Part Whole TOTAL SET UNKNOWN Tom went to 4 hockey games this year , but missed 7 . He went to 9 games last year . How many hockey games did Tom go to in all ? PART UNKNOWN Sara ’s high school played 12 basketball games this year . The team won most of their games . They were defeated during 4 games . How many games did they win ? Comparision DIFFERENCE UNKNOWN Last year , egg producers in Douglas County produced 1416 eggs . This year , those same farms produced 4636 eggs . How many more eggs did the farms produce this year ? 
LARGE QUANTITY UNKNOWN Bill has 9 marbles. Jim has 7 more marbles than Bill. How many marbles does Jim have? SMALL QUANTITY UNKNOWN Bill has 9 marbles. He has 7 more marbles than Jim. How many marbles does Jim have? Table 2: Examples of Add-Sub Word Problems propriate equation and the relevant variables. The system then learns to apply the formulas for new problems. It achieves an accuracy of 86.07% on the AddSub corpus containing 395 word arithmetic problems with a margin of 8.07% with the current state-of-the-art (Roy and Roth, 2015). Our contributions are three-fold: (a) We model the application of a formula and present a novel method to learn to apply a formula; (b) We annotate the publicly available AddSub corpus with the 2145 correct formula and its associated variables; and (c) We make the code publicly available. 2 The rest of the paper is organized as follows. In section 2, we formally define the problem and describe our learning algorithm. In section 3, we define our feature function. In section 4, we discuss related works. Section 5 provides a detailed description of the experimental evaluation. Finally, we conclude the paper in section 6. 2 Problem Formulation A single equation word arithmetic problem P is a sequence of k words ⟨w1, ..., wk⟩and contains a set of variables VP = {v0, v1, ..., vn−1, x} where v0, v1, ..., vn−1 are numbers in P and x is the unknown whose value is the answer we seek (Koncel-Kedziorski et al., 2015). Let Paddsub be the set of all such problems, where each problem P ∈Paddsub can be solved by a evaluating a valid mathematical equation E formed by combining the elements of VP and the binary operators from O = {+, −}. We assume that each target equation E of P ∈ Paddsub is generated by applying one of the possible mathematical formulas from C = {Cpartwhole, Cchange, Ccomparision}. Let P1 addsub ⊆Paddsub be the set of all problems where the target equation E can be generated by a single application of one of the possible formulas from C. The goal is then to find the correct application of a formula for the problem P ∈P1 addsub. 2.1 Modelling Formulas And their Applications We model each formula as a template that has predefined slots and can be mapped to an equation when the slots are filled with variables. Application of a formula C ∈C to the problem P, is then defined as the instantiation of the template by a subset of VP that contains the unknown. Part Whole The concept of part whole has two slots, one for the whole that accepts a single variable and the other for its parts that accepts a set of variables of size at least two. If the value of the whole is w and the value of the parts are p1, p2, ..., pm, then that application is mapped to the equation, w = p1 + p2 + ... + pm, denoting that whole is equal to the sum of its parts. 2The code and data is publicly available at https://github.com/ari9dam/MathStudent. Change The change concept has four slots, namely start, end, gains, losses which respectively denote the original value of a variable, the final value of that variable, and the set of increments and decrements that happen to the original value of the variable. The start slot can be empty; in that case it is assumed to be 0. For example, consider the problem, ‘Joan found 70 seashells on the beach . she gave Sam some of her seashells. She has 27 seashell . How many seashells did she give to Sam?’. In this case, our assumption is that before finding the 70 seashells Joan had an empty hand. 
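As an illustration of the template just described, the sketch below (the class and slot names are our own, not those of the released code) represents a change application for the Joan example, with the empty start slot defaulting to 0:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Union

Var = Union[float, str]  # a number from the problem text, or "x" for the unknown

@dataclass
class ChangeApplication:
    """One instantiation of the change formula: start, end, gains, losses."""
    end: Var
    gains: List[Var] = field(default_factory=list)
    losses: List[Var] = field(default_factory=list)
    start: Optional[Var] = None  # an empty start slot is interpreted as 0

    def start_value(self) -> Var:
        return 0 if self.start is None else self.start

# Joan found 70 seashells (a gain), gave Sam an unknown number x (a loss),
# and is left with 27; nothing is said about the start, so it defaults to 0.
joan = ChangeApplication(end=27, gains=[70], losses=["x"])
assert joan.start_value() == 0
```

The part whole template can be represented analogously, with a single-variable slot for the whole and a set-valued slot for its parts.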
Given an instantiation of change concept the equation is generated as follows: valstart + X g∈gains valg = X l∈losses vall + valend Comparision The comparision concept has three slots namely the large quantity, the small quantity and their difference. An instantiation of the comparision concept is mapped to the following equation: large = small + difference. 2.2 The Space of Possible Applications Consider the problem in Table 1. Even though the correct application is an instance of part whole formula with whole = x and the parts being {42, 47}, there are many other possible applications, such as, partWhole(whole=47, parts=x,42), change(start=47, losses={x}, gains={}, end = 42), comparison(large=47, small=x, difference=42). Note that, comparison(large=47, small=38, difference=42) is not a valid application since none of the associated variables is an unknown. Let AP be the set of all possible applications to the problem P. The following lemma characterizes the size of AP as a function of the number of variables in P. Lemma 2.2.1. Let P ∈P1 addsub be an arithmetic word problem with n variables (|VP | = n), then the following are true: 1. The number of possible applications of part whole formula to the problem P, Npartwhole is (n + 1)2n−2 + 1. 2. The number of possible applications of change formula to the problem P, Nchange is 3n−3(2n2 + 6n + 1) −2n + 1. 3. The number of possible applications of comparison formula to the problem P, Ncomparison is 3(n −1)(n −2). 2146 4. The number of all possible applications to the problem P is Npartwhole + Nchange + Ncomparison. Proof of lemma 2.2.1 is provided in the Appendix. The total number of applications for problems having 3, 6, 7, 8 number of variables are 47, 3, 105, 11, 755, 43, 699 respectively. AdditionSubtraction arithmetic problems hardly contain more than 6 variables. So, the number of possible applications is not intractable in practice. The total number of applications increases rapidly mainly due to the change concept. Since, the template involves two sets, there is a 3n−3 factor present in the formula of Nchange. However, any application of change concept with gains and losses slots containing a collection of variables can be broken down into multiple instances of change concept where the gains and losses slots accepts only a single variable by introducing more intermediate unknown variables. Since, for any formula that does not have a slot that accepts a set, the number of applications is polynomial in the number of variables, there is a possibility to reduce the application space. We plan to explore this possibility in our future work. For the part whole concept, even though there is a exponential term involved, it is practically tractable (for n = 10, Npartwhole = 2, 817 ). In practice, we believe that there will hardly be any part whole application involving more than 10 variables. For formulas that are used for other categories of word math problems (algebraic or arithmetic), such as the unitary method, formulas for ratio, percentage, time-distance and rate of interest, none of them have any slot that accepts sets of variables. Thus, further increase in the space of possible applications will be polynomial. 2.3 Probabilistic Model For each problem P there are different possible applications y ∈AP , however not all of them are meaningful. To capture the semantics of the word problem to discriminate between competing applications we use the log-linear model, which has a feature function φ and parameter vector θ ∈Rd. 
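As a rough sketch of how such a log-linear model ranks the competing applications of a problem (assuming the candidate set A_P and a sparse feature function phi are already available; this is an illustration, not the released implementation):

```python
import math

def score(theta, phi, problem, application):
    """theta . phi(P, y), with phi returning a sparse {feature_index: value} map."""
    return sum(theta[i] * v for i, v in phi(problem, application).items())

def predict(theta, phi, problem, candidates):
    """Return the most probable application under p(y | P; theta).

    The softmax denominator is shared by all candidates of the same problem,
    so the arg max over probabilities coincides with the arg max over scores;
    the probabilities are still computed here for inspection."""
    scores = [score(theta, phi, problem, y) for y in candidates]
    z = sum(math.exp(s) for s in scores)
    probs = [math.exp(s) / z for s in scores]
    best = max(range(len(candidates)), key=probs.__getitem__)
    return candidates[best], probs[best]
```

Training then amounts to adjusting theta so that the annotated application receives the highest score among its problem's candidates, as described next in Section 2.4.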
The feature function φ : H →Rd takes as input a problem P and a possible application y and maps it to a d-dimensional real vector (feature vector) that aims to capture the important information required to discriminate between competing applications. Here, the set H is defined as {(P, y) : P ∈P1 addsub ∧y ∈AP }, to accommodate the dependency of the possible applications on the problem instance. Given the definition of the feature function φ and the parameter vector θ, the probability of an application y given a problem P is defined as, p(y|P; θ) = eθ.φ(P,y) P y′∈AP eθ.φ(P,y′) Here, . denotes dot product. Section 3 defines the feature function. Assuming that the parameter θ is known, the function f that computes the correct application is defined as, f(P) = arg max y∈AP p(y|P; θ) 2.4 Parameter Estimation To learn the function f, we need to estimate the parameter vector θ. For that, we assume access to n training examples, {Pi, y∗ i : i = 1 . . . n}, each containing a word problem Pi and the correct application y∗ i for the problem Pi. We estimate θ by minimizing the negative of the conditional loglikelihood of the data: O(θ) = − n X i=1 log p(y∗ i |Pi; θ) = − n X i=1 [θ.φ(Pi, y∗ i ) −log X y∈APi eθ.φ(Pi,y)] We use stochastic gradient descent to optimize the parameters. The gradient of the objective function is given by: ∇O ∇θ = − n X i=1 [φ(Pi, y∗ i ) − X y∈APi p(y|Pi; θ) × φ(Pi, y)] (1) Note that, even though the space of possible applications vary with the problem Pi, the gradient for the example containing the problem Pi can be easily computed. 3 Feature Function φ A formula captures the relationship between variables in a compact way which is sufficient to generate an appropriate equation. In a word problem, those relations are hidden in the assertions 2147 of the story. The goal of the feature function is thus to gather enough information from the story so that underlying mathematical relation between the variables can be discovered. The feature function thus needs to be aware of the mathematical relations so that it knows what information it needs to find. It should also be “familiar” with the word problem language so that it can extract the information from the text. In this research, the feature function has access to machine readable dictionaries such as WordNet (Miller, 1995), ConceptNet (Liu and Singh, 2004) which captures inter word relationships such as hypernymy, synonymy, antonymy etc, and syntactic and dependency parsers that help to extract the subject, verb, object, preposition and temporal information from the sentences in the text. Given these resources, the feature function first computes a list of attributes for each variable. Then, for each application y it uses that information, to compute if some aspects of the expected relationship described in y is satisfied by the variables in y. Let the first b dimensions of the feature vector contain part whole related features, the next c dimensions are for change related features and the remaining d features are for comparison concept. 
Then the feature vector for a problem P and an application of a formula y is computed in the following way: Data: A word problem P, an application y Result: d-dimensional feature vector, fv Initialize fv := 0 if y is instance of part whole then compute fv[1 : b] end if y is instance of change then compute fv[b + 1 : b + c] end if y is instance of comparision then compute fv[b + c + 1 : b + c + d] end Algorithm 1: Skeleton of the feature function φ The rest of the section is organized as follows. We first describe the attributes of the variables that are computed from the text. Then, we define a list of boolean variables which computes semantic relations between the attributes of each pair of variables. Finally, we present the complete definition of the feature function using the description of the attributes and the boolean variables. 3.1 Attributes of Variables For each occurrence of a number in the text a variable is created with the attribute value referring to that numeric value. An unknown variable is created corresponding to the question. A special attribute type denotes the kind of object the variable refers to. Table 3 shows several examples of the type attribute. It plays an important role in identifying irrelevant numbers while answering the question. Text Type John had 70 seashells seashells 70 seashells and 8 were broken seashells 61 male and 78 female salmon male, salmon 35 pears and 27 apples pear Table 3: Example of type for highlighted variables. The other attributes of a variable captures its linguistic context to surrogate the meaning of the variable. This includes the verb attribute i.e. the verb attached to the variable, and attributes corresponding to Stanford dependency relations (De Marneffe and Manning, 2008), such as nsubj, tmod, prep in, that spans from either the words in associated verb or words in the type. These attributes were computed using Stanford Core NLP (Manning et al., 2014). For the sentence, “John found 70 seashells on the beach.” the attributes of the variable are the following: { value : {70}, verb : {found} , nsubj : {John}, prep on : {beach }}. 3.2 Cross Attribute Relations Once the variables are created and their attributes are extracted, our system computes a set of boolean variables, each denoting whether the attribute a1 of the variable v1 has the same value as the attribute a2 of the variable v2. The value of each attribute is a set of words, consequently set equality is used to calculate attribute equality. Two words are considered equal if their lemma matches. Four more boolean variables are computed for each pair of variables based on the attribute type and they are defined as follows: subType: Variable v1 is a subType of variable v2 if v2.type ⊂v1.type or their type consists of a single word and there exists the IsA relation between them in ConceptNet (Speer and Havasi, 2013; Liu and Singh, 2004). 2148 disjointType is true if v1.type T v2.type = φ intersectingType is true if v1 is neither a subType of v2 nor is disjointType nor equal. We further compute some more variables by utilizing several relations that exist between words: antonym: For every pair of variables v1 and v2, we compute an antonym variable that is true if there exists a pair of word in (v1.verb S v1.adj)× (v2.verb S v2.adj) that are antonym to each other in WordNet irrespective of their part of speech tag. relatedVerbs: The verbs of two variables are related if there exists a RelatedTo relations in ConceptNet between them. 
subjConsume: The nsubj of v1 consumes the nsubj of v2 if the formers refers to a group and the latter is a part of that group. For example, in the problem, ‘Joan grew 29 carrots and 14 watermelons . Jessica grew 11 carrots . How many carrots did they grow in all ?’, the nsubj of the unknown variable consumes others. This is computed using Stanford co-reference resolution. For the situation where there is a variable with nsubj as ‘they’ and it does not refer to any entity, the subjConsume variable is assumed to be implicitly true for any variable having a nsubj of type person. 3.3 Features: Part Whole The part whole features look for some combinations of the boolean variables and the presence of some cue words (e.g. ‘all’) in the attribute list. These features capture the underlying reasonings that can affect the decision of applying a part whole concept. We describe the conditions which when satisfied activate the features. If active, the value of a feature is the number of variables associated with the application y and 0 otherwise. This is also true for change and comparision features also. Part whole features are computed only when the y is an instance of the formula part whole. The same applies for change and comparision features. Generic Word Cue This feature is activated if y.whole has a word in its attributes that belongs to the “total words set” containing the followings words “all”, “total”, “overall”, “altogether”, “together” and “combine”; and none of the variables in parts are marked with these words. ISA Type Cue is active if all the part variables are subType of the whole. Type-Verb Cue is active if the type and verb attributes of vwhole matches that of all the variables in the part slot of y. Type-Individual Group Cue is active if the variable vwhole subjConsume each part variable vp in y and their type matches. Type-Verb-Tmod Cue is active if the variable in the slot whole is the unknown and for each part variable vp their verb, type and tmod (time modifier of the verb) attributes match. Type-SubType-Verb Cue is active if the variable in the slot whole is either the unknown or marked with a word in “total words set” and for all parts vp, their verb matches and one of the type or subType boolean variable is true. Type-SubType-Related Verb Cue is similar to Type-SubType-Verb Cue however relaxes the verb match conditions to related verb match. This is helpful in problems like ‘Mary went to the mall. She spent $ 13.04 on a shirt and $ 12.27 on a jacket . She went to 2 shops . In total , how much money did Mary spend on clothing ? ’. Type-Loose Verb Cue ConceptNet does not contain all relations between verbs. For example, according to ConceptNet ‘buy’ and ‘spend’ are related however there is no relation in ConceptNet between ‘purchase’ and ‘spend’. To handle these situations, we use this feature which is similar to the previous one. The difference is that it assumes that the verbs of part-whole variable pairs are related if all verbs associated with the parts are same, even though there is no relation in ConceptNet. Type-Verb-Prep Cue is active if type and verb matches. The whole does not have a “preposition” but parts have and they are different. Other Cues There are also features that add nsubj match criteria to the above ones. The prior feature for part whole is that the whole if not unknown, is smaller than the sum of the parts. There is one more feature that is active if the two part variables are antonym to each other; one of type or subType should be true. 
3.4 Features: Change The change features are computed from a set of 10 simple indicator variables, which are computed in the following way: 2149 Start Cue is active if the verb associated with the variable in start slot has one of the following possessive verbs : {‘call for’, ‘be’, ‘contain’, ‘remain’, ‘want’, ‘has’, ‘have’, ‘hold’, ...}; the type and nsubj of start variable match with the end variable and the tense of the end does not precede the start. The list of ‘possessive verbs’ is automatically constructed by adding all the verbs associated with the start and the end slot variables in annotated corpus. Start Explicit Cue is active if one of following words, “started with”, “initially”, “begining”, “originally” appear in the context of the start variable and the type of start and end variables match. Start prior is active if the verb associated with the variable in start slot is a member of the set ‘possessive verbs’ and the variable appears in first sentence. Start Default Cue is active if the start variable has a “possessive verb” with past tense. End Cue is active if the verb associated with the variable in slot end has a possessive verb with the tense of the verb not preceding the tense of the start, in case the start is not missing. The type and nsubj should match with either the start or the gains in case the start is missing. End Prior is true if vend has a possessive verb and an unknown quantity and at least one of vend or vstart does not have a nsubj attribute. Gain Cue is active if for all variables in the gains slot, the type matches with either vend or vstart and one of the following is true: 1) the nsubj of the variable matches with vend or vstart and the verb implies gain (such as ‘find’) and 2) the nsubj of the variable does not match with vend or vstart and the verb implies losing (e.g. spend). The set of gain and loss verbs are collected from the annotated corpus by following the above procedure. Gain Prior is true if the problem contains only three variables, with vstart < vend and the only variable in the gain slot, associated with nonpossessive verb is the unknown. Loss Cue & Loss prior are designed in a fashion similar to the Gain cue and Gain Prior. Let us say badgains denotes that none of the gain prior or gain cue is active even though the gain slot is not empty. badlosses is defined similarly and let bad = badgains ∨badlosses . Then the change features are computed from these boolean indicators using logical operators and, or, not. Table4 shows some of the change features. !bad ∧gaincue ∧startdefault ∧endcue !bad∧!gaincue∧losscue∧startdefault∧endcue !bad ∧ (gaincue ∨ losscue) ∧ startcue∧!startdefault ∧endcue !bad ∧ (gaincue ∨ losscue) ∧ startexplicit∧!startdefault ∧endcue !bad ∧(gaincue ∨losscue) ∧startprior ∧ (endcue||endprior) !bad ∧(gaincue ∨losscue) ∧(startprior ∨ startcue)∧!startdefault ∧endprior Table 4: Activation criteria of some change related features. 3.5 Features: Comparison The features for the “compare” concept are relatively straight forward. Difference Unknown Que If the application y states that the unknown quantity is the difference between the larger and smaller quantity, it is natural to see if the variable in the difference slot is marked with a comparative adjective or comparative adverb. The prior is that the value of the larger quantity must be bigger than the small one. Another two features add the type and subject matching criteria along with the previous ones. 
Large & Small Unknown Que These features can be active only when the variable in the large or small slot is unknown. To detect if the referent is bigger or smaller, it is important to know the meaning of the comparative words such as ‘less’ and ‘longer’. Since, the corpus contains only 33 comparison problems we collect these comparative words from web which are then divided into two categories. With these categories, the features are designed in a fashion similar to change features that looks for type, subject matches. 3.6 Handling Arbitrary Number of Variables This approach can handle arbitrary number of variables. To see that consider the problem, ‘Sally found 9 seashells , Tom found 7 seashells , and Jessica found 5 seashells on the beach . How many seashells did they find together ?’. Let us say that feature vector contains only the ‘TypeIndividual Group Cue’ feature and the weight 2150 of that feature is 1. Consider the two following applications: y1 = partWhole(x,{9,7}) and y2 = partWhole(x,{9,7, 5}). For both y1 and y2 the ‘Type-Individual Group Cue’ feature is active since the subject of the unknown x refers to a group that contains the subject of all part variables in y1 and y2 and their types match. However, as mentioned in section 3.3, when active, the value of a feature is the number of variables associated with the application. Thus p(y2;P,θ) p(y1;P,θ) = e4 e3 = e. Thus, y2 is more probable than y1. 4 Related Works Researchers in early years have studied math word problems in a constrained domain by either limiting the input sentences to a fixed set of patterns (Bobrow, 1964b; Bobrow, 1964a; Hinsley et al., 1977) or by directly operating on a propositional representation instead of a natural language text (Kintsch and Greeno, 1985; Fletcher, 1985). Mukherjee and Garain (2008) survey these works. Among the recent algorithms, the most general ones are the work in (Kushman et al., 2014; Zhou et al., 2015) . Both algorithms try to map a word math problem to a ‘system template’ that contains a set of ‘equation templates’ such as ax + by = c. These ‘system templates’ are collected from the training data. They implicitly assume that these templates will reoccur in the new examples which is a major drawback of these algorithms. Also, Koncel-Kedziorski et al. (2015) show that the work of Kushman et al. (2014) heavily relies on the overlap between train and test data and when this overlap is reduced the system performs poorly. Work of (Koncel-Kedziorski et al., 2015; Roy and Roth, 2015) on the other hand try to map the math word problem to an expression tree. Even though, these algorithms can handle all the four arithmetic operators they cannot solve problems that require more than one equation. Moreover, experiments show that our system is much more robust to diversity in the problem types between training and test data for the problems it handles. The system ARIS in (Hosseini et al., 2014) solves the addition-subtraction problems by categorizing the verbs into seven categories such as ‘positive transfer’, ‘loss’ etc. It represents the information in a problem as a state and then updates the state according to the category of a verb as the story progresses. Both ARIS and our system share the property that they give some explanation behind the equation they create. 
However, the verb categorization approach of ARIS can only solve a subset of addition-subtraction problems (see error analysis in (Hosseini et al., 2014)); whereas the usage of formulas to model the word problem world, gives our system the ability to accommodate other math word problems as well. 5 Experimental Evaluation 5.1 Dataset The AddSub dataset consist of a total of 395 addition-subtraction arithmetic problems for third, fourth, and fifth graders. The dataset is divided into three diverse set MA1, MA2, IXL containing 134, 140 and 121 problems respectively. As mentioned in (Hosseini et al., 2014), the problems in MA2 have more irrelevant information compared to the other two datasets, and IXL includes more information gaps. 5.2 Result Hosseini et al. (2014) evaluate their system using 3-fold cross validation. We follow that same procedure. Table 5 shows the accuracy of our system on each dataset (when trained on the other two datasets). Table 6 shows the distribution of the part whole, change, comparison problems and the accuracy on recognizing the correct formula. MA1 IXL MA2 Avg ARIS 83.6 75.0 74.4 77.7 KAZB 89.6 51.1 51.2 64.0 ALGES 77.0 Roy & Roth 78.0 Majority 45.5 71.4 23.7 48.9 Our System 96.27 82.14 79.33 86.07 Table 5: Comparision with ARIS, KAZB (Kushman et al., 2014), ALGES (Koncel-Kedziorski et al., 2015) and the state of the art Roy & Roth on the accuracy of solving arithmetic problems. As we can see in Table 6 only IXL contains problems of type ‘comparison’. So, to study the accuracy in detecting the compare formula we uniformly distribute the 33 examples over the 3 datasets. Doing that results in only two errors in the recognition of a compare formula and also increases the overall accuracy of solving arithmetic problems to 90.38%. 2151 5.3 Error Analysis An equation that can be generated from a change or comparision formula can also be generated by a part whole formula. Four such errors happened for the change problems and out of the 33 compare problems, 18 were solved by part whole. Also, there are 3 problems that require two applications. One example of such problem is, “There are 48 erasers in the drawer and 30 erasers on the desk. Alyssa placed 39 erasers and 45 rulers on the desk. How many erasers are now there in total ?”. To solve this we need to first combine the two numbers 48 and 30 to find the total number of erasers she initially had. This requires the knowledge of ‘part-whole’. Now, that sum of 48 and 30, 39 and x can be connected together using the ‘change’ formula. With respect to ‘solving’ arithmetic problems, we find the following categories as the major source of errors: Problem Representation: Solving problems in this category requires involved representation. Consider the problem, ‘Sally paid $ 12.32 total for peaches , after a ‘3 dollar’ coupon , and $ 11.54 for cherries . In total , how much money did Sally spend?’. Since the associated verb for the variable 3 dollar is ‘pay’, our system incorrectly thinks that Sally did spend it. Information Gap: Often, information that is critical to solve a problem is not present in the text. E.g. Last year , 90171 people were born in a country , and 16320 people immigrated to it . How many new people began living in the country last year ?. To correctly solve this problem, it is important to know that both the event ‘born’ and ‘immigration’ imply the ‘began living’ event, however that information is missing in the text. 
Another example is the problem, “Keith spent $6.51 on a rabbit toy , $5.79 on pet food , and a cage cost him $12.51 . He found a dollar bill on the ground. What was the total cost of Keith ’s purchases? ”. It is important to know here that if a cage cost Keith $12.51 then Keith has spent $12.51 for cage. Modals: Consider the question ‘Jason went to 11 football games this month . He went to 17 games last month , and plans to go to 16 games next month . How many games will he attend in all?’ To solve this question one needs to understand the meanings of the verb “plan” and “will”. If we replace “will” in the question by “did” the answer will be different. Currently our algorithm Type MA1 IXL MA2 part whole Total 59 89 51 correct 59 81 40 change Total 74 18 68 correct 70 15 56 compare Total 0 33 0 correct 0 0 0 Table 6: Accuracy on recognizing the correct application. None of the MA1 and MA2 dataset contains “compare” problems so the cross validation accuracy on “IXL” for “compare” problems is 0. cannot solve this problem and we need to either use a better representation or a more powerful learning algorithm to be able to answer correctly. Another interesting example of this kind is the following: “For his car , Mike spent $118.54 on speakers and $106.33 on new tires . Mike wanted 3 CD ’s for $4.58 but decided not to . In total , how much did Mike spend on car parts?” Incomplete IsA Knowledge: For the problem “Tom bought a skateboard for $ 9.46 , and spent $ 9.56 on marbles . Tom also spent $ 14.50 on shorts . In total , how much did Tom spend on toys ? ”, it is important to know that ‘skateboard’ and ‘marbles’ are toys but ‘shorts’ are not. However, such knowledge is not always present in ConceptNet which results in error. Parser Issue: Error in dependency parsing is another source of error. Since the attribute values are computed from the dependency parse tree, a wrong assignment (mostly for verbs) often makes the entity irrelevant to the computation. 6 Conclusion Solving math word problems often requires explicit modeling of the word. In this research, we use well-known math formulas to model the word problem and develop an algorithm that learns to map the assertions in the story to the correct formula. Our future plan is to apply this model to general arithmetic problems which require multiple applications of formulas. 7 Acknowledgement We thank NSF for the DataNet Federation Consortium grant OCI-0940841 and ONR for their grant N00014-13-1-0334 for partially supporting this research. 2152 References Eneko Agirre, Mona Diab, Daniel Cer, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A pilot on semantic textual similarity. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 385–393. Association for Computational Linguistics. Daniel G Bobrow. 1964a. Natural language input for a computer problem solving system. Daniel G. Bobrow. 1964b. A question-answering system for high school algebra word problems. In Proceedings of the October 27-29, 1964, Fall Joint Computer Conference, Part I, AFIPS ’64 (Fall, part I), pages 591–614, New York, NY, USA. ACM. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326. Peter Clark and Oren Etzioni. 2016. 
My computer is an honor student but how intelligent is it? standardized tests as a measure of ai. AI Magazine.(To appear). Peter Clark. 2015. Elementary school science and math tests as a driver for ai: Take the aristo challenge! In AAAI, pages 4019–4021. Ido Dagan, Bill Dolan, Bernardo Magnini, and Dan Roth. 2010. Recognizing textual entailment: Rational, evaluation and approaches–erratum. Natural Language Engineering, 16(01):105–105. Marie-Catherine De Marneffe and Christopher D Manning. 2008. Stanford typed dependencies manual. Technical report, Technical report, Stanford University. Edward A Feigenbaum and Julian Feldman. 1963. Computers and thought. Charles R Fletcher. 1985. Understanding and solving arithmetic word problems: A computer simulation. Behavior Research Methods, Instruments, & Computers, 17(5):565–571. Dan A Hinsley, John R Hayes, and Herbert A Simon. 1977. From words to equations: Meaning and representation in algebra word problems. Cognitive processes in comprehension, 329. Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 523–533. Walter Kintsch and James G Greeno. 1985. Understanding and solving word arithmetic problems. Psychological review, 92(1):109. Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. 2015. Parsing algebraic word problems into equations. Transactions of the Association for Computational Linguistics, 3:585–597. Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. Association for Computational Linguistics. Hector J Levesque. 2011. The winograd schema challenge. Hugo Liu and Push Singh. 2004. Conceptneta practical commonsense reasoning tool-kit. BT technology journal, 22(4):211–226. Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In ACL (System Demonstrations), pages 55–60. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39–41. Anirban Mukherjee and Utpal Garain. 2008. A review of methods for automatic understanding of natural language mathematical problems. Artificial Intelligence Review, 29(2):93–122. Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, volume 1, page 2. Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. EMNLP. Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang Liu, and Yong Rui. 2015. Automatically solving number word problems by semantic parsing and reasoning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Lisbon, Portugal. Robert Speer and Catherine Havasi. 2013. Conceptnet 5: A large semantic network for relational knowledge. In The Peoples Web Meets NLP, pages 161– 176. Springer. Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698. Lipu Zhou, Shuaixiang Dai, and Liwei Chen. 2015. Learn to solve algebra word problems using quadratic programming. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 817–822. 2153
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2154–2163, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Unravelling Names of Fictional Characters Katerina Papantoniou Institute of Computer Science, FORTH Heraklion, Greece [email protected] Stasinos Konstantopoulos Institute of Informatics & Telecommunications, NCSR ‘Demokritos’ Ag. Paraskevi 153 10, Athens, Greece [email protected] Abstract In this paper we explore the correlation between the sound of words and their meaning, by testing if the polarity (‘good guy’ or ‘bad guy’) of a character’s role in a work of fiction can be predicted by the name of the character in the absence of any other context. Our approach is based on phonological and other features proposed in prior theoretical studies of fictional names. These features are used to construct a predictive model over a manually annotated corpus of characters from motion pictures. By experimenting with different mixtures of features, we identify phonological features as being the most discriminative by comparison to social and other types of features, and we delve into a discussion of specific phonological and phonotactic indicators of a character’s role’s polarity. 1 Introduction Could it be possible for fictional characters’ names such as ‘Dr. No’ and ‘Hannibal Lecter’ to be attributed to positive characters whereas names such as ‘Jane Eyre’ and ‘Mary Poppins’ to negative ones? Could someone guess who is the hero and who is the competitor based only on the name of the character and what would be the factors that contribute to such intuition? Literary theory suggests that it should be possible, because fictional character names function as expressions of experience, ethos, teleology, values, culture, ideology, and attitudes of the character. However, work in literary theory, psychology, linguistics and philosophy has studied fictional names by analysing individual works or small clusters of closely related works, such as those of a particular author. By contrast, we apply tools from computational linguistics at a larger scale aiming to identify more general patterns that are not tied to any specific creator’s idiosyncrasies and preferences; in the hope that extracting such patterns can provide valuable insights about how the sound of names and, more generally, words correlates with their meaning. At the core of our approach is the idea that the names of fictional characters follow (possibly subconsciously) a perception of what a positive or a negative name ought to sound like that is shared between the creator and the audience. Naturally the personal preferences or experiences of the creator might add noise, but fictional characters’ names will at least not suffer (or suffer less) from the systematic cultural bias bound to exist in real persons’ names. In the remainder of this paper, we first present the relevant background, including both theoretical work and computational work relevant to peoples’ names (Section 2). Based on this theoretical work, we then proceed to formulate a set of features that can be computationally extracted from names, and which we hypothesise to be discriminative enough to allow for the construction of a model that accurately predicts whether a character plays a positive or negative role in a work of fiction (Section 3). In order to test this hypothesis, we constructed a corpus of characters from popular English-language motion pictures. 
After describing corpus construction and presenting results (Section 4), we proceed to discuss these results (Section 5) and conclude (Section 6). 2 Background 2.1 Onomastics The procedure of naming an individual, a location or an object is of particular importance and serves 2154 purposes beyond the obvious purpose of referring to distinct entities. Characteristics such as place of origin, gender, and socioeconomic status can often be guessed from the name or nickname that has been attributed to an individual. Onomastics, the study of the origin, history, and use of proper names has attracted scholarly attention as early as antiquity and Plato’s ‘Cratylos’ (Hajd´u, 1980). In fiction and art, in particular, names are chosen or invented without having to follow the naming conventions that are common in many cultures. This allows creators to apply other criteria in selecting a name for their characters, one of which being the intuitions and preconceptions about the character that the name alone implies to the audience. Black and Wilcox (2011) note that writers take informed and careful decisions when attributing names to their characters. Specifically, while care is taken to have names that are easily identifiable and phonologically attractive, or that are important for personal reasons, these are not the only considerations: names are chosen so that they match the personality, the past, and the cultural background of a character. According to Algeo (2010) behind each name lies a story while Ashley (2003) suggests that a literary name must be treated as a small poem with all the wealth of information that implies. Markey (1982) and Nicolaisen (2008) raised concerns on whether onomastics can be applied to names in art given the different functional roles of names as well as their intrinsic characteristics, namely sensitivity and creativity. ‘Redende namen’ (significant names) is a widespread theory that seeks the relationship between name and form (Rudnyckyj, 1959). According to this theory, there is a close relationship between the form of a name and its role. This consideration is still prevalent to date as shown by Chen (2008) in her analysis of names in comic books, where names transparently convey the intentions of the creator for the role of each character. Another concern is whether the study of literary names should be examined individually for each creative work or if generalizations can be made (Butler, 2013). However, the scope of most studies is limited to individual projects or creators, creating an opportunity for computational methods that can identify generalizations and patterns across larger bodies of literary work than what is manually feasible. 2.2 Related Work Although serving radically different purposes and applications than our investigation, various methods for the computational analysis of proper nouns have been developed in natural language processing. Without a doubt, some of the oldest and most mature technologies that exploit the properties of proper nouns are those addressing named entity recognition and categorization (NERC). In this direction, there is a recently ongoing effort for the extension of NERC tools so that they cover the needs of literary texts (Borin et al., 2007; Volk et al., 2009; Kokkinakis and Malm, 2011). Moving beyond recognition, effort has been made to explore characteristics and relationships of literary characters (Nastase et al., 2007). 
Typically, however, these efforts take advantage of the context, and very little work tries to extract characteristics of literary characters from their names alone. One example is the application of language identification methods in order to extract the cultural background of proper names (Konstantopoulos, 2007; Bhargava and Kondrak, 2010; Florou and Konstantopoulos, 2011). This work showed that people’s names in isolation are more amenable to language identification than common nouns. Konstantopoulos (2007), in particular, reports inconclusive results at pinpointing the discriminative features that are present in people’s names but not in other words. Another relatively recent and related research direction that does not focus on proper nouns investigates elements of euphony mostly by examining phonetic devices. The focus is to identify how the sound of words can foster its effectiveness in terms of persuasion (Guerini et al., 2015) or memorability (Danescu-Niculescu-Mizil et al., 2012). 3 Approach These earlier attempts relied on the examination of predictive models of n-grams in order to identify the n-grams that are the best discriminants. The aim was that by inspecting these most discriminative n-grams, meaningful patterns would emerge and serve as the vehicle for formulating hypotheses about the correlation between what names sound like and the cultural background of the persons bearing them. This approaches largely ignored the background in onomastics and literary research. By contrast, we exploit this prior body of theoretical work 2155 ID Feature Category Type 1 words count phonological numeric 2 vowels count phonological numeric 3 consonants count phonological numeric 4 plosives count phonological numeric 5 fricatives count phonological numeric 6 affricates count phonological numeric 7 nasals count phonological numeric 8 vowel start phonological categorical 9 vowel end phonological categorical 10 voice start phonological categorical 11 subsequent letters count phonological categorical 12 low vowel phonological categorical 13 high vowel phonological categorical 14 definite article lexical form categorical 15 consonance poetic numeric 16 assonance poetic numeric 17 alliteration poetic numeric 18 name and title resemblance domain numeric 19 credit index domain numeric 20 genre domain categorical 21 sentiment soundex wordnet emotions numeric 22 sentiment levenshtein wordnet emotions numeric 23 gender social categorical 24 foreign suffix social categorical 25 first name frequency social numeric 26 last name frequency social numeric 27 full name frequency social numeric 28 honor social categorical Table 1: List of features to define more sophisticated features that directly correspond to theoretical hypotheses. Our empirical experiments are now aimed at identifying the features (and thus hypotheses) that are the most discriminative, rather than at hoping that a coherent hypothesis can be formulated by observing patterns in n-gram features. In the remainder of this section, we will present these hypotheses and the machine-extracted features that reflect them. The features are also collected in Table 1. 3.1 Emotions Hypothesis 1 The (positive or negative) polarity of the sentiment that a character’s name evokes is associated with the polarity of the character’s role. The understanding of how the language transmits emotions has attracted significant research attention in the field of Computational Linguistics. 
Most of the relevant literature is directed towards calculating sentiment for units at the document or sentence level. These works are usually supported by semantic dictionaries that provide information about the emotional hue of concepts, such as the Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2001), the Harvard General Inquirer (Stone et al., 1966), WordNet Affect (Strapparava and Valitutti, 2004) and SentiWordNet (Esuli and Sebastiani, 2006). In our task, the absence of context and the inherent arbitrariness of naming (even for fictional names) make it harder to assign an emotional quality to names. More specifically, the challenging part is to associate fictional names with concepts from a semantic sentiment resource in order to approximate a sentiment value. To achieve this we used SentiWordNet, a linguistic resource derived from the annotation of WordNet synsets according to their estimated degree of positive, negative or neutral hue. The overall valence for a given name is calculated as the sum of the valences of its elements (first name, surname). The valence of each name element is the average valence of all SentiWordNet concepts that are associated with it. To associate a name element with a SentiWordNet concept we used the Soundex phonetic distance and the Levenshtein lexicographic distance (Levenshtein, 1966). A heuristic threshold is used to decide whether a name and a SentiWordNet concept are associated. More formally, the valence val(n) of a name n comprising name elements e_i is calculated as follows (a code sketch of this computation is given at the end of Section 3):

\[
\mathrm{val}(e_i) = \frac{\sum_{u \in \mathrm{ass}_S(e_i)} \mathrm{swn}(u) + \sum_{v \in \mathrm{ass}_L(e_i)} \mathrm{swn}(v)}{|\mathrm{ass}_S(e_i)| + |\mathrm{ass}_L(e_i)|}
\qquad
\mathrm{val}(n) = \sum_i \mathrm{val}(e_i)
\]

where ass_S(·) is the set of SentiWordNet concepts that are Soundex-associated with the given name element, ass_L(·) the set of SentiWordNet concepts that are Levenshtein-associated with the given name element, and swn(·) the valence assigned to the given concept by SentiWordNet.

3.2 Stylistic and poetic features
Hypothesis 2 Assuming Ashley's (2003) and Butler's (2013) position that 'a name can be a whole "poem" in as little as a single word', we assume that stylistic features usually found in poems can be extracted from the names of fictional characters, and that such features correlate with the polarity of their roles.
The first quantitative analyses of poetic style can be found in the 1940s, in the work of the poet and literary critic Josephine Miles (1946; 1967), who studied the features of poems over time. Despite the great contribution of this work and others that followed, quantitative poetic style analysis remained limited to a small number of poems and much of the work was done manually. The work of Kaplan and Blei (2007) is an attempt to automate the analysis of large volumes of poems, exploring phonological, spelling and syntactic features. For our work, we identified the following poetic devices that can be applied to isolated names:
• Alliteration: a stylistic literary device identified by the repeated sound of the first consonant in a series of multiple words, or by the repetition of the same sounds, or of the same kinds of sounds, at the beginning of words or in stressed syllables of a phrase. Examples: Peter Parker, Peter Pan
• Consonance: a poetic device characterized by the repetition of the same consonant two or more times in short succession.
Examples: Lillian Hellman, Freddy Krueger, Hannibal Lecter, Kristen Parker • Assonance: a poetic device characterized by the repetition of the same vowel two or more times in short succession. Examples: Bobbie Ritchie 3.3 Phonological features Hypothesis 3 The presence of specific phonological features can reveal evidence of the role of a character in an artistic artifact. Linguistic theory widely adopts the concept of arbitrary relationship between the signifier and the signified (de Saussure, 1916 1983; Jakobson, 1965). However, an increasing volume of works in various fields investigates the existence of nonarbitrary relations between phonological representation and semantics, a phenomenon known as phonological iconicity. Standing from the side of Computational Linguistics and with the intuition that in fictional names the correlation between a word’s form and the emotion it expresses will be stronger, we examined a wide range of phonologyrelated features, shown in Table 1. It should be noted that these features are extracted from the phonetic representation of names derived by applying the spelling-to-phoneme module of the espeak speech synthesizer.1 3.4 Sociolinguistic features Hypothesis 4 We hypothesize that social aspects of names — such as frequency of use or use of foreign names in a given environment — can relate to role of a fictional character. For instance, a ‘girl next door’ role is more likely to be assigned a very popular female name than a name that sounds hostile or foreign. The frequency of names in U.S.A was calculated based on the Social Security Death Index (SSDI), a publicly available database that records deaths of U.S.A citizens since 1936.2 The same dataset was also used to build a model for recognizing foreignlooking names. More specifically, we trained ngram language models of order 2–5 against the dataset for both orthographic and phonetic representation using the berkeleylm library (Pauls and Klein, 2011). We then heuristically defined a threshold that correlates well with foreign-looking suffixes. Analogously with the name frequency we extract the gender of each name using a baby names dataset that includes gender information.3 For unisex names the prevalent gender was picked. Finally, honorific titles (e.g. Professor, Phd, Mr, Mrs etc.) were also extracted from names. Honorific titles are intriguing due to their ambiguous meaning since they can express respect and irony in different contexts. 3.5 Domain features Hypothesis 5 We pursued indications to check if domain-related features such as the appearance time of a character in a movie, the movie title or the movie genre is associated (correlates) with the problem under study. In this category lies the featuresameastitle since anyone with a quick glance in a list of films would notice that a fictional name often consists of, or is the part of, the movie title, as in, There’s Something about Mary, Hannibal, Thelma & Louise, Rocky, etc. On IMDB character names are presented in the form of a list in 1Please cf. http://espeak.sourceforge.net 2Please cf. https://archive.org/details/ DeathMasterFile 3Specifically, we used https://www.ssa.gov/ oact/babynames/state/index.html 2157 descending order based on screen credits. In the featurecreditindex we want to check if the naming process is more assiduous for the roles of protagonists based on this list. In the same direction, we examine the featuregenre for a possible correlation between the role of a character and the genre of a film. 
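To make the valence formula of Section 3.1 concrete, the following is a minimal sketch, not the authors' code: it assumes a precomputed mapping swn_valence from lexicon entries to SentiWordNet scores (positive minus negative), uses the jellyfish library for Soundex and Levenshtein distance, and the Levenshtein threshold of 2 is an arbitrary placeholder for the paper's unstated heuristic.

```python
from typing import Dict
import jellyfish  # provides soundex() and levenshtein_distance()

def associated_concepts(element: str, swn_valence: Dict[str, float],
                        lev_threshold: int = 2):
    """Split lexicon entries into Soundex- and Levenshtein-associated sets."""
    element = element.lower()
    soundex_assoc, leven_assoc = [], []
    for lemma in swn_valence:
        if jellyfish.soundex(lemma) == jellyfish.soundex(element):
            soundex_assoc.append(lemma)
        if jellyfish.levenshtein_distance(lemma, element) <= lev_threshold:
            leven_assoc.append(lemma)
    return soundex_assoc, leven_assoc

def element_valence(element: str, swn_valence: Dict[str, float]) -> float:
    """val(e_i): average SentiWordNet valence over all associated concepts."""
    s_assoc, l_assoc = associated_concepts(element, swn_valence)
    total = sum(swn_valence[u] for u in s_assoc) + sum(swn_valence[v] for v in l_assoc)
    denom = len(s_assoc) + len(l_assoc)
    return total / denom if denom else 0.0

def name_valence(name: str, swn_valence: Dict[str, float]) -> float:
    """val(n): sum of element valences over first name, surname, etc."""
    return sum(element_valence(e, swn_valence) for e in name.split())

# Toy usage with an invented valence lexicon.
toy_lexicon = {"lecture": -0.1, "hector": -0.3, "poppet": 0.4, "merry": 0.6}
print(name_valence("Hannibal Lecter", toy_lexicon))
print(name_valence("Mary Poppins", toy_lexicon))
```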
4 Experiments and Results 4.1 Data Collection and Annotation In order to validate our approach, we first need a corpus of names of fictional characters, annotated with the polarity of their role. As such a resource does not exist to the best of our knowledge, we have created it for the purposes of the work described here. Our decision to use motion pictures rather than other fictional work is motivated by the relative ease of finding annotators familiar with the plot of these works, so that we could get reliable annotations of the polarity of the leading roles. We compiled a list of 409 movies based on the following criteria: • That they are widely known films, covering all genres of film production. We automatically crosschecked if the candidate movies are included in DBPedia4 and YAGO5, as these are indicators that the films are known to the general public. • That they have received some award or are positively evaluated by users (i.e., have an IMDB rating of 5.0 or higher). The underlying assumption is that this criterion selects major productions where care has been given to even the most minute detail, including the names of the major characters and what these names connote to the audience. • That they are recent productions, so that annotators can easily recall the plot and the characters. We then asked volunteers to select any movie from the list that they where very familiar with, and to assign one of positive, negative or neutral to the top-most characters in the credits list, working only as far down the credits list as they felt confident to. The three categories were defined as follows in the annotation guidelines: 4http://wiki.dbpedia.org 5http://www.mpi-inf.mpg.de/yago Figure 1: Character annotation tool • Positive: when the role of the character in the plot left a positive impression on you when you saw the movie. • Negative: when the role of a character left a negative impression on you when you saw the movie. • Neutral: when the role of the character is important for the plot, but you are in doubt or cannot recall whether it was a positive or a negative role. Neutral tags are ignored in our experiments. They were foreseen only to allow annotators to skip characters and still have a sense of accomplishment, so that they only make choices that they are confident with. We used the Hypothes.is6 open source annotation application. The annotation was carried out by having volunteers install the Hypothes.is Web browser extension and then visit the IMDB7 page of any of the movies on our list (direct links were provided to them in the guidelines). IMDB was chosen due to its popularity, so that annotators would already be familiar with the online environment. The annotators tagged the character names directly on the IMDB page and the annotations where collected for us by Hypothes.is (Figure 1). Eight annotators participated in the procedure and provided 1102 positive and 434 negative tags for characters of 202 movies, out of the 409 movies in the original list. Table 2 gives the annotation distribution per movie genre. The reliability of the annotated collection by means of inter-rater agreement was also measured. For this purpose, various standard agreement measures (Meyer et al., 2014) were calculated, all showing very high agreement among the annotators (Table 3). 
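The agreement figures reported in Table 3 below were obtained with DKPro Agreement; as a rough illustration of how such numbers are computed, the sketch below derives raw percentage agreement and Fleiss' kappa with statsmodels. The data layout (one row per annotated character, one column per annotator, binary polarity labels) is hypothetical.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical matrix: rows = annotated characters, columns = annotators,
# values = 0 (negative) or 1 (positive).
ratings = np.array([
    [1, 1, 1],
    [0, 0, 1],
    [1, 1, 1],
    [0, 0, 0],
])

def percentage_agreement(r):
    """Fraction of annotator pairs that agree, averaged over all items."""
    n_items, n_raters = r.shape
    agree, total = 0, 0
    for row in r:
        for i in range(n_raters):
            for j in range(i + 1, n_raters):
                agree += int(row[i] == row[j])
                total += 1
    return agree / total

table, _ = aggregate_raters(ratings)  # items x categories count table
print("percentage agreement:", percentage_agreement(ratings))
print("Fleiss kappa:", fleiss_kappa(table))
```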
This demonstrates that the annota6https://hypothes.is 7http://www.imdb.com 2158 Original Resampled Pos Neg Pos neg Action 262 107 244 102 Adventure 126 63 133 62 Animation 73 22 63 27 Biography 28 6 39 8 Comedy 78 25 23 21 Crime 68 25 81 18 Drama 81 40 76 32 Horror 16 12 28 13 Musical 0 0 0 0 Mystery 20 13 26 17 Sci-Fi 2 0 2 0 Thriller 0 0 0 0 Western 1 2 3 2 Sum 755/315 768/302 Table 2: Number of annotations per genre before and after resampling Measure Value Percentage Agreement 0.963 Hubert Kappa Agreement 0.980 Fleiss Kappa Agreement 0.973 Krippendorff Alpha Agreement 0.979 Table 3: Inter-annotator agreement tion task is well-formulated, but does not guarantee that our classification task is consistent, since the latter will use different information than that used by the annotators. That is to say, the annotators had access to their understanding of the movies’ plot to carry out the task, whereas our classification task will be performed over the characters’ names alone. The collection is publicly available, including the guidelines and instructions to the annotators, the source code for the annotation tool, and the source code for the tool that compiles Weka ARFF files from the JSON output of the annotation tool.8 4.2 Experimental Design The experimental design consisted of an iterated approach performing experiments with different sets of features. This process was driven by a preliminary chi-squared analysis in order to exploit feature significance. The algorithms that are used for the experiments are Naive Bayes and J48 8https://bitbucket.org/ dataengineering/fictionalnames Figure 2: Learning curve for the number of instances All Without domain NB J48 NB J48 Recall 0.723 0.824 0.718 0.803 Prec. 0.731 0.822 0.515 0.801 F-score 0.618 0.823 0.6 0.802 Table 4: Comparison of Naive Bayes and J48 (Salzberg, 1994) decision trees. Each experiment is done using a 10-fold cross validation on the available data, using a confidence factor of 0.25 for post-pruning. For all the experiments we used the Weka toolkit (Hall et al., 2009). Due to the imbalance of our dataset in favor of positive classes (see Table 2), we sub-sampled the dataset maintaining the initial genre distribution. We also applied principal component analysis (PCA) in order to guarantee the independence of the classification features, as required by the Naive Bayes algorithm. To explore the behavior of the algorithms to the change of trained data we generated the learning curves shown in Figure 2. In both cases the learning curves are well-behaved since the error rate grows monotonically as the training set shrinks. However, the precision, recall, and Fscores achieved by J48 are significantly better that those of Naive Bayes (Table 4). This preliminary experiment led us to use J48 for the main experiment, where we try different features in order to understand which are the most discriminative ones. These results are collected in Table 5 and discussed immediately below. 5 Discussion of Results A first observation that can be easily made is that the domain features are good discriminants. As these features exploit information such as credit2159 Rec. Prec. 
F-score Without domain features 0.803 0.801 0.802 Only domain features 0.725 0.699 0.667 Only phonological features 0.790 0.786 0.787 Without poetic features 0.836 0.832 0.833 Without consonance feature 0.823 0.820 0.821 Without emotions features 0.814 0.810 0.811 Without phonological features 0.798 0.792 0.793 Without social features 0.807 0.803 0.804 All features 0.824 0.822 0.823 Table 5: Performance of J48 for different feature settings Most frequent in positive characters Phoneme n-gram Examples /lI/ Ned Alleyn (Shakespeare in Love) /an/ Anouk Rocher (Chocolat) /aI/ Eliza Doolittle (My Fair Lady) /nI/ Linguini (Ratatouille) /Ist/ Kevin McCallister (Home Alone) /ô@U/ Frodo (The Lord of the Rings) /and/ Dylan Sanders (Charlie’s Angels) /st@/ C.C. Baxter (The Apartment) Most frequent in negative characters Phoneme n-gram Examples /@n/ Tom Buchanan (The Great Gatsby) /@U/ Iago (Aladdin) /t@/ Norrington (Pirates of the Caribbean) /ôI/ Tom Ripley (The Talented Mr. Ripley) /m@n/ Norman Bates (Psycho) /mIs/ Mystique (X-Men) /kt@/ Hannibal Lecter (Hannibal) Table 6: Frequent phoneme {2,3}-grams ing order that is outside the scope of our hypotheses, there were expected to be good discriminants and are included for comparison only. By comparing the performance of all features (F = 82%), domain-only features (F = 68%), and allexcept-domain features (F = 80%), we can immediately understand that our name-intrinsic features are better discriminants than domain features; in fact, name-intrinsic features not just better than domain features, they are by themselves almost as good as domain and name-intrinsic features combined. This is a significant finding, as it validates our core hypothesis that there is a correlation between what fictional character names look and sound like and the role they play in the plot of the fictional work they appear in. We will now proceed to look in more detail into the different categories of features used, in order to gain further insights about specific discriminants. 5.1 Phonological Features The phonological features are important separation criteria as evidenced by the drop in performance when they are excluded from the experimental setup (Table 5). Specifically, using all features except phonological features is equivalent to using phonological features alone (about F = 79% in both cases) and slightly worse that using all name-intrinsic features (about F = 80%). By comparison, removing any other category increases performance, leading us to believe that all other features are actually adding noise (rather than discriminatory power) to the feature space. In order to delve more into this category of features, we proceeded with an n-gram analysis (of order 1 through 4) to look for correlations between phonemes. The results clearly demonstrated the positive effect of the number of vowels (normalized by the length of the utterance) to the positive category. As far as the consonants are concerned, voiced (e.g. /2/, /g/, /d/, /w/) seem to relate more to the negative class. Table 7 summarizes a more fine-grained analysis for the consonants based on their categorization. The environment plays an important role, with specific combinations showing tendencies that are not observed with isolated phonemes. For example, diphoneme /an/ relates to positive class while /@n/ to negative. Table 6 lists some frequent phoneme 2- and 3-gram examples. 
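The n-gram analysis behind Table 6 can be approximated as follows. This is a sketch under assumptions: names are already converted to phoneme sequences (e.g. via espeak), and the association score here is a simple smoothed log-odds ratio rather than whatever statistic the authors actually used.

```python
from collections import Counter
from math import log

def ngrams(phonemes, n_min=2, n_max=3):
    """All phoneme n-grams of the requested orders."""
    for n in range(n_min, n_max + 1):
        for i in range(len(phonemes) - n + 1):
            yield tuple(phonemes[i:i + n])

def ranked_ngrams(names_by_class, top_k=10):
    """Rank n-grams by smoothed log-odds of appearing in positive vs negative names."""
    counts = {"pos": Counter(), "neg": Counter()}
    for label, names in names_by_class.items():
        for phonemes in names:
            counts[label].update(ngrams(phonemes))
    vocab = set(counts["pos"]) | set(counts["neg"])
    n_pos = sum(counts["pos"].values()) + len(vocab)
    n_neg = sum(counts["neg"].values()) + len(vocab)
    scores = {
        g: log((counts["pos"][g] + 1) / n_pos) - log((counts["neg"][g] + 1) / n_neg)
        for g in vocab
    }
    ranked = sorted(scores, key=scores.get)
    return ranked[-top_k:], ranked[:top_k]  # most positive, most negative

# Hypothetical phonemized names (IPA-ish symbols, invented for illustration).
data = {
    "pos": [["f", "r", "@U", "d", "@U"], ["l", "I", "N", "g", "w", "i", "n", "i"]],
    "neg": [["l", "e", "k", "t", "@"], ["m", "I", "s", "t", "i", "k"]],
}
pos_grams, neg_grams = ranked_ngrams(data, top_k=3)
print(pos_grams, neg_grams)
```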
The position of each phoneme also seems to play an crucial role 2160 Phonemes Class /p/, /b/ (bilabial plosive) P /l/ (alveolar lateral) P /f/, /v/ (labiodental africative) N /k/, /g/ (velar plosive) N /t/, /d/ (alveolar plosive) N /dZ/, /tS/ (affricate) N /m/, /n/ (nasal) N /ô/ (alveolar retroflex) N Table 7: Consonants behavior in the classification task. Specifically, we note that starting with a vowel or a consonant are among the most discriminating features. These observations are consistent to a great extent with work in psychology and literary theory that studied phonological iconicity for common words (Nastase et al., 2007; Auracher et al., 2011; Schmidtke et al., 2014). Some contradictory conclusions in these works are attributed by researchers to the methodologies applied, while at the same time concerns are raised whether such methodologies can inductively lead to cross-language and general conclusions (Auracher et al., 2011). Table 8 summarizes some of the outcomes of these works. 5.2 Emotion and Affect The analysis showed that the features that calculate the emotional load of fictional names based on SentiWordNet contribute to the classification task. However, we believe that there is still room for improvement for the performance of this feature mainly towards the optimization of the selection threshold in order to reduce the degree of false positive matches as well as the addition of more lexical resources for example WordNet Affect or LIWC. 5.3 Social Features The annual publication It’s a Man’s (Celluloid) World examines the representation of female characters every year. According to its 2015 results (Lauzen, 2015), gender stereotypes were abundant with female characters being younger than their male counterparts and more likely to have prosocial goals including supporting and helping others. This bias makes the gender feature discriminative, but in a way that is not linguistically interesting: female characters are simply related to the Reference Description Taylor and Taylor (1965) evidence that pleasantness relations are language specific Fonagy (1961) sonorants (e.g., /l/,/m/) more common in tender poems, plosives (e.g., /k/,/t/) in aggressive ones Miall (2001) Passages about Hell from Miltons “Paradise Lost” were found to contain significantly more front vowels and hard consonants than passages about Eden while the latter contained more medium back vowels Whissell (1999) plosives correlate with unpleasant words Auracher et al. (2011) nasals (e.g., /m/) relate to sadness, plosives (e.g., /p/) to happiness, parallels across remote languages Zajonc et al. (1989) umlaut /y/ causes negative affective states Table 8: Phonological iconicity studies positive class. A somewhat surprising result was that the foreign suffix feature is not discriminative. The hypothesis that the concept of the ‘other’ is stereotyped negatively does not seem to be true in our dataset. A closer investigation might identify genres where this hypothesis holds (e.g., war movies), but this would be implicit pragmatic information about the context of the film rather than a linguistically interesting finding. 5.4 Poetic and Stylistic Features The experimental findings show that literary devices can actually be identified in fictional characters names, but the same findings also indicate that they do not contribute significantly to the classification task. More specifically, consonance is the only stylistic/poetic feature that affects classification. 
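The ablation pattern of Table 5 (train with all features, then drop one category at a time) is straightforward to reproduce outside Weka. The sketch below uses scikit-learn's decision tree as a rough stand-in for J48 and assumes a hypothetical feature matrix; the column indices per category are illustrative only and the data is random, so the printed scores are meaningless beyond demonstrating the loop.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical data: 200 names x 28 features, binary polarity labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 28))
y = rng.integers(0, 2, size=200)

# Column indices per feature category (loosely mirroring Table 1; illustrative only).
groups = {
    "phonological": list(range(0, 13)),
    "poetic": list(range(14, 17)),
    "domain": list(range(17, 20)),
    "emotions": [20, 21],
    "social": list(range(22, 28)),
}

def f1_with_columns(cols):
    clf = DecisionTreeClassifier(random_state=0)  # rough analogue of J48
    return cross_val_score(clf, X[:, cols], y, cv=10, scoring="f1_macro").mean()

all_cols = list(range(X.shape[1]))
print("all features:", f1_with_columns(all_cols))
for name, cols in groups.items():
    kept = [c for c in all_cols if c not in cols]
    print(f"without {name}:", f1_with_columns(kept))
```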
6 Conclusions and Future Work In this paper we test the hypothesis that the sound and the form of fictional characters’ names correlates with meaning, in our particular case with 2161 the respective characters’ role in the work of fiction. We restricted our study to fictional characters since they are not tied to cultural conventions of naming, such as names that run in a family, so that we are able to look for patterns that are perceived as positive or negative by the audience and used as such (consciously or not) by the creator. Our experiments have verified that features intrinsic to the names and without any reference to the plot or, in general, any other context are discriminative. Furthermore, we have discovered that the most discriminative features are of phonological nature, rather than features that hint at pragmatic information such as the gender or origin of the character. A further contribution of our work is that we ran an annotation campaign and created an annotated corpus of fictional movie characters and their corresponding polarity. This corpus is offered publicly, and can serve experimentation in the digital humanities beyond the scope of the experiments presented here. Our future research will test the correlation between the polarity and the name of a fictional character beyond the movie domain. It would, for example, be interesting to seek differences between spoken names (as in films) and names that are only meant to be read (as in literature). In addition, using written literature will allow us to compare texts from different periods, pushing earlier than the relatively young age of motion pictures. Character polarity annotations in written literature could be created by, for example, applying sentiment analysis to the full text of the work. References [Algeo2010] John Algeo. 2010. Is a theory of names possible? Names, 58(2):90–96. [Ashley2003] Leonard R. N. Ashley. 2003. Names in Literature. Bloomington, IN: Authorhouse (formerly 1st Books). [Auracher et al.2011] Jan Auracher, Sabine Albers, Yuhui Zhai, Gulnara Gareeva, and Tetyana Stavniychuk. 2011. P is for happiness, N is for sadness: Universals in sound iconicity to detect emotions in poetry. Discourse Processes, 48(1):1–25. [Bhargava and Kondrak2010] Aditya Bhargava and Grzegorz Kondrak. 2010. Language identification of names with SVMs. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 693–696, Los Angeles, California, June. Association for Computational Linguistics. [Black and Wilcox2011] Sharon Black and Brad Wilcox. 2011. 188 unexplainable names: Book of Mormon names no fiction writer would choose. Religious Educator, 12(2). [Borin et al.2007] Lars Borin, Dimitrios Kokkinakis, and Leif-J¨oran Olsson. 2007. Naming the past: Named entity and animacy recognition in 19th century Swedish literature. In ACL 2007 Workshop on Language Technology for Cultural Heritage Data (LaTeCH 2007), pages 1–8. [Butler2013] James Odelle Butler. 2013. Name, Place, and Emotional Space: Themed Semantics in Literary Onomastic Research. Ph.D. thesis, University of Glasgow. [Chen2008] Lindsey N. Chen. 2008. Ethnic marked names as a reflection of United States isolationist attitudes in Uncle $crooge comic books. Names, 56(1):19–22. [Danescu-Niculescu-Mizil et al.2012] Cristian Danescu-Niculescu-Mizil, Justin Cheng, Jon M. Kleinberg, and Lillian Lee. 2012. You had me at hello: How phrasing affects memorability. CoRR, abs/1203.6360. 
[de Saussure1916 1983] Ferdinand de Saussure. [1916] 1983. Course in General Linguistics. Duckworth, London. (translation Roy Harris). [Esuli and Sebastiani2006] Andrea Esuli and Fabrizio Sebastiani. 2006. SENTIWORDNET: A publicly available lexical resource for opinion mining. In Proceedings of the 5th Conference on Language Resources and Evaluation (LREC’06), pages 417–422. [Florou and Konstantopoulos2011] Eirini Florou and Stasinos Konstantopoulos. 2011. A quantitative and qualitative analysis of Nordic surnames. In Proceedings of the 18th Nordic Conference of Computational Linguistics (NODALIDA 2011), May 11-13, 2011, Riga, Latvia, volume 11 of NEALT Proceedings Series. [Fonagy1961] Ivan Fonagy. 1961. Communication in Poetry. William Clowes. [Guerini et al.2015] Marco Guerini, G¨ozde ¨Ozbal, and Carlo Strapparava. 2015. Echoes of Persuasion: The Effect of Euphony in Persuasive Communication. CoRR, abs/1508.05817. [Hajd´u1980] Mih´aly Hajd´u. 1980. The history of Onomastics. Onomastica Uralica, 2:7–45. [Hall et al.2009] Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA data mining software: An update. SIGKDD Explor. Newsl., 11(1):10–18, November. 2162 [Jakobson1965] Roman Jakobson. 1965. Quest for the Essence of Language. Diogenes, 13(51):21–37. [Kaplan and Blei2007] David M. Kaplan and David M. Blei. 2007. A computational approach to style in American poetry. In Proceedings of the 7th IEEE International Conference on Data Mining (ICDM 2007), pages 553–558, October. [Kokkinakis and Malm2011] Dimitrios Kokkinakis and Mats Malm. 2011. Character profiling in 19th century fiction. In Workshop: Language Technologies for Digital Humanities and Cultural Heritage in conjunction with the Recent Advances in Natural Language Processing (RANLP). Hissar, Bulgaria. [Konstantopoulos2007] Stasinos Konstantopoulos. 2007. What’s in a name? In Petya Osenova, Erhard Hinrichs, and John Nerbonne, editors, Proceedings of Computational Phonology Workshop, International Conf. on Recent Advances in NLP, (RANLP), Borovets, Bulgaria, September 2007. [Lauzen2015] Martha Lauzen. 2015. It’s a man’s (celluloid) world: On-screen representations of female characters in the top 100 films of 2011. Technical report, San Diego State University Center for the Study of Women in Television and Film, School of Theatre, Television and Film, San Diego State University, San Diego, CA. [Levenshtein1966] Vladimir I. Levenshtein. 1966. Binary codes capable of correcting deletions, insertions and reversals. Soviet Physics Doklady, 10:707, feb. [Markey1982] T. L. Markey. 1982. Crisis and cognition in onomastics. Names, 30(3):129–142. [Meyer et al.2014] Christian M. Meyer, Margot Mieskes, Christian Stab, and Iryna Gurevych. 2014. DKPro Agreement: An Open-Source Java Library for Measuring Inter-Rater Agreement. In Proceedings of the 25th International Conference on Computational Linguistics: System Demonstrations (COLING), pages 105–109, Dublin, Ireland, August. [Miall2001] David Miall. 2001. Sounds of contrast: An empirical approach to phonemic iconicity. Poetics, 29(1):55–70. [Miles1946] Josephine Miles. 1946. Major adjectives in English poetry: From Wyatt to Auden. University of California Publications in English, 12. [Miles1967] Josephine Miles. 1967. Style and Proportion: The Language of Prose and Poetry. Little, Brown and Co., Boston. [Nastase et al.2007] Vivi Nastase, Marina Sokolova, and Jelber Sayyad Shirabad. 2007. Do happy words sound happy? 
A study of the relation between form and meaning for English words expressing emotions. In Recent Advances in Natural Language Processing (RANLP 2007). [Nicolaisen2008] William F. H. Nicolaisen. 2008. On names in literature. Nomina, 31:89–98. [Pauls and Klein2011] Adam Pauls and Dan Klein. 2011. Faster and smaller n-gram language models. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. [Pennebaker et al.2001] JW Pennebaker, ME Francis, and RJ Booth. 2001. Linguistic inquiry and word count [computer software]. Mahwah, NJ: Erlbaum Publishers. [Rudnyckyj1959] Jaroslav B. Rudnyckyj. 1959. Function of proper names in literary works. Internationalen Vereinigung f¨ur moderne Sprachen und Literaturen, 61:378–383. [Salzberg1994] Steven L. Salzberg. 1994. C4.5: Programs for machine learning. Machine Learning, 16(3):235–240. [Schmidtke et al.2014] David S. Schmidtke, Markus Conrad, and Jacobs Arthur M. 2014. Phonological iconicity. Frontiers in Psychology, 12. [Stone et al.1966] Philip J. Stone, Dexter C. Dunphy, Marshall S. Smith, and Daniel M. Ogilvie. 1966. The General Inquirer: A Computer Approach to Content Analysis. MIT Press, Cambridge, MA. [Strapparava and Valitutti2004] Carlo Strapparava and Alessandro Valitutti. 2004. WordNet-Affect: An affective extension of WordNet. In Proceedings of the 4th International Conference on Language Resources and Evaluation, pages 1083–1086. ELRA. [Taylor and Taylor1965] I. K. Taylor and M. M. Taylor. 1965. Another look at phonetic symbolism. Psychological Bulletin, 65. [Volk et al.2009] Martin Volk, Noah Bubenhofer, Adrian Althaus, and Maya Bangerter. 2009. Classifying named entities in an Alpine heritage corpus. K¨unstliche Intelligenz, pages 40–43. [Whissell1999] Cynthia Whissell. 1999. Phonosymbolism and the emotional nature of sounds: Evidence of the preferential use of particular phonemes in texts of differing emotional tone. Perceptual and Motor Skills, 89(1):19–48, August. [Zajonc et al.1989] R. B. Zajonc, Sheila T. Murphy, and Marita Inglehart. 1989. Feeling and facial efference: Implications of the vascular theory of emotion. Psychological Review, 96(3):395–416, July. 2163
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2164–2173, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Most babies are little and most problems are huge: Compositional Entailment in Adjective-Nouns Ellie Pavlick University of Pennsylvania [email protected] Chris Callison-Burch University of Pennsylvania [email protected] Abstract We examine adjective-noun (AN) composition in the task of recognizing textual entailment (RTE). We analyze behavior of ANs in large corpora and show that, despite conventional wisdom, adjectives do not always restrict the denotation of the nouns they modify. We use natural logic to characterize the variety of entailment relations that can result from AN composition. Predicting these relations depends on context and on commonsense knowledge, making AN composition especially challenging for current RTE systems. We demonstrate the inability of current stateof-the-art systems to handle AN composition in a simplified RTE task which involves the insertion of only a single word. 1 Overview The ability to perform inference over utterances is a necessary component of natural language understanding (NLU). Determining whether one sentence reasonably implies another is a complex task, often requiring a combination of logical deduction and simple common-sense. NLU tasks are made more complicated by the fact that language is compositional: understanding the meaning of a sentence requires understanding not only the meanings of the individual words, but also understanding how those meanings combine. Adjectival modification is one of the most basic types of composition in natural language. Most existing work in NLU makes a simplifying assumption that adjectives tend to be restrictive– i.e. adding an adjective modifier limits the set of things to which the noun phrase can refer. For example, the set of little dogs is a subset of the set of dogs, and we cannot in general say that dog entails little dog. This assumption has been exploited by high-performing RTE systems (MacCartney and Manning, 2008; Stern and Dagan, 2012), as well as used as the basis for learning new entailment rules (Baroni et al., 2012; Young et al., 2014). However, this simplified view of adjectival modification often breaks down in practice. Consider the question of whether laugh entails bitter laugh in the following sentences: 1. Again his laugh echoed in the gorge. 2. Her laugh was rather derisive. In (1), we have no reason to believe the man’s laugh is bitter. In (2), however, it seems clear from context that we are dealing with an unpleasant person for whom laugh entails bitter laugh. Automatic NLU should be capable of similar reasoning, taking both context and common sense into account when making inferences. This work aims to deepen our understanding of AN composition in relation to automated NLU. The contributions of this paper are as follows: • We conduct an empirical analysis of ANs and their entailment properties. • We define a task for directly evaluating a system’s ability to predict compositional entailment of ANs in context. • We benchmark several state-of-the-art RTE systems on this task. 2 Recognizing Textual Entailment The task of recognizing textual entailment (RTE) (Dagan et al., 2006) is commonly used to evaluate the state-of-the-art of automatic NLU. The RTE task is: given two utterances, a premise (p) and a hypothesis (h), would a human reading p typically infer that h is most likely true? 
Systems are expected to produce either a binary (YES/NO) or trinary (ENTAILMENT/CONTRADICTION/UNKNOWN) output. The type of knowledge tested in the RTE task has shifted in recent years. While older datasets mostly captured logical reasoning (Cooper et al., 1996) and lexical knowledge (Giampiccolo et al., 2007) (see Examples (1) and (2) in Table 1), the recent datasets have become increasingly reliant on common-sense knowledge of scenes and events (Marelli et al., 2014). In Example (4) in Table 1, for which the gold label is ENTAILMENT, it is perfectly reasonable to assume the dogs are playing. However, this is not necessarily true that running entails playing– maybe the dogs are being 2164 (1) FraCas p No delegate finished the report on time. Quantifiers h Some Scandinavian delegate finished the report on time. (no →¬some) (2) RTE2 p Trade between China and India is expected to touch $20 bn this year. . . Definitions h There is a profitable trade between China and India. ($20 bn →profitable) (3) NA p Some delegates finished the report on time. Implicature h Not all of the delegates finished the report on time. (some →¬all) (4) SICK p A couple of white dogs are running along a beach. Common Sense h Two dogs are playing on the beach. (running →playing) Table 1: Examples of sentence pairs coming from various RTE datasets, and the types of inference highlighted by each. While linguistic phenomena like implicature (3) have yet to be explicitly included in RTE tasks, commonsense inferences like those in (4) (from the SICK dataset) have become a common part of NLU tasks like RTE, question answering, and image labeling. chased by a bear and are running for their lives! Example (4) is just one of many RTE problems which rely on intuition rather than strict logical inference. Transformation-based RTE. There have been an enormous range of approaches to automatic RTE– from those based on theorem proving (Bjerva et al., 2014) to those based on vector space models of semantics (Bowman et al., 2015a). Transformation-based RTE systems attempt to solve the RTE problem by identifying a sequence of atomic edits (MacCartney, 2009) which can be applied, one by one, in order to transform p into h. Each edit can be associated with some entailment relation. Then, the entailment relation that holds between p and h overall is a function of the entailment relations associated with each atomic edit. This approach is appealing in that it breaks potentially complex p/h pairs into a series of bite-sized pieces. Transformation-based RTE is widely used, not only in rule-based approaches (MacCartney and Manning, 2008; Young et al., 2014), but also in statistical RTE systems (Stern and Dagan, 2012; Pad´o et al., 2014). MacCartney (2009) defines an atomic edit applied to a linguistic expression as the deletion DEL, insertion INS, or substitution SUB of a subexpression. If x is a linguistic expression and e is an atomic edit, than e(x) is the result of applying the edit e to the expression x. For example: x = a1 girl2 in3 a4 red5 dress6 e = DEL(red, 5) e(x) = a1 girl2 in3 a4 dress5 We say that the entailment relation that holds between x and e(x) is generated by the edit e. In the above example, we would say that e generates a forward entailment (⊏) since a girl in a red dress entails a girl in a dress. 3 Natural Logic Entailment Relations Natural logic (MacCartney, 2009) is a formalism that describes entailment relationships between natural language strings, rather than operating over mathematical formulae. 
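The atomic-edit formulation above is easy to operationalize. The sketch below is our own illustration rather than MacCartney's implementation: it applies DEL and INS edits to token lists and shows how a pair of (INS, DEL) entailment labels can be mapped back to a basic relation in the spirit of Table 2 later in this section.

```python
from typing import List

def apply_del(tokens: List[str], index: int) -> List[str]:
    """DEL(token, index): remove the subexpression at the 1-based index."""
    return tokens[:index - 1] + tokens[index:]

def apply_ins(tokens: List[str], token: str, index: int) -> List[str]:
    """INS(token, index): insert the subexpression at the 1-based index."""
    return tokens[:index - 1] + [token] + tokens[index - 1:]

def relation_from_labels(ins_label: str, del_label: str) -> str:
    """Recover the basic entailment relation between AN and N (cf. Table 2)."""
    table = {
        ("ENTAILMENT", "ENTAILMENT"): "equivalence",
        ("ENTAILMENT", "UNKNOWN"): "forward entailment",
        ("UNKNOWN", "ENTAILMENT"): "reverse entailment",
        ("UNKNOWN", "UNKNOWN"): "independence",
        ("CONTRADICTION", "CONTRADICTION"): "alternation",
    }
    return table.get((ins_label, del_label), "undefined")

x = "a girl in a red dress".split()
print(apply_del(x, 5))                       # ['a', 'girl', 'in', 'a', 'dress']
print(apply_ins(apply_del(x, 5), "red", 5))  # back to the original expression
print(relation_from_labels("ENTAILMENT", "UNKNOWN"))
```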
Natural logic enables both light-weight representation and robust inference, and is an increasingly popular choice for NLU tasks (Angeli and Manning, 2014; Bowman et al., 2015b; Pavlick et al., 2015). There are seven “basic entailment relations” described by natural logic, five of which we explore here.1 These five relations, as they might hold between an AN and the head N, are summarized in Figure 1. The forward entailment relation is the restrictive case, in which the AN (brown dog) is a subset of (and thus entails) the N (dog) but the N does not entail the AN (dog does not entail brown dog). The symmetric reverse entailment can also occur, in which the N is a subset of the set denoted by the AN. An example of this is the AN possible solution: i.e. all actual solutions are possible solutions, but there are an abundance of possible solutions that are not and will never be actual solutions. In the equivalence relation, AN and N denote the same set (e.g. the entire universe is the same as the universe), whereas in the alternation relation, AN and N denote disjoint sets (e.g. a former senator is not a senator). In the independence relation, the AN has no determinable entailment relationship to the N (e.g. an alleged criminal may or may not be a criminal). 4 Simplified RTE Task The focus of this work is to determine the entailment relation that exists between an AN and its head N in a given context. To do this, we define a simplified entailment task identical to the normal RTE task, with the constraint that p and h differ only by one atomic edit e as defined in Section 2. We look only at insertion INS(A) and deletion DEL(A), where A must be a single adjective. We use a 3-way entailment classification where the possible labels are ENTAILMENT, CONTRADICTION, and UNKNOWN. This allows us to recover the basic entailment relation from Section 3: by determining the labels associated with the INS operation and the DEL 1We omit two relationships: negation and cover. These relations require that the sets denoted by the strings being compared are “exhaustive.” In this work, this requirement would be met when everything in the universe is either an instance of the noun or it is an instance of the adjective-noun (or possibly both). This is a hard constraint to meet, and we believe that the interesting relations that result from AN composition are adequately captured by the remaining 5 relations. 2165 N does not entail AN N entails AN Alternation (AN | N) Forward Entailment (AN ⊏ N) Independent (AN # N) Equivalence (AN ≣ N) Reverse Entailment (AN ⊐ N) N AN alleged criminal N AN former senator AN brown dog N possible solution AN entire universe Figure 1: Different entailment relations that can exist between an adjective-noun and the head noun. The best-known case is that of forward entailment, in which the AN denotes a subset of the N (e.g. brown dog). However, many other relationships may exist, as modeled by natural logic. operation, we can uniquely identify each of the five relations (Table 2). INS DEL Equivalence ENTAILMENT ENTAILMENT Forward Entail. ENTAILMENT UNKNOWN Reverse Entail. UNKNOWN ENTAILMENT Independence UNKNOWN UNKNOWN Alternation CONTRADICTION CONTRADICTION Table 2: Entailment generated by INS(A) or DEL(A) for possible relations holding between AN and N. Both INS and DEL are required to distinguish all five entailment relations. 4.1 Limitations Modeling denotations of ANs and N. 
We note that this task design does not directly ask about the relationship between the sets denoted by the AN and by the N (as shown in Figure 1). Rather than asking “Is this instance of AN an instance of N?” we ask “Is this statement that is true of AN also true of N?” While these are not the same question, they are often conflated in NLP, for example, in information extraction, when we use statements about ANs as justification for extracting facts about the head N (Angeli et al., 2015). We focus on the latter question and accept that this prevents us from drawing conclusions about the actual set theoretic relation between the denotation of AN and the denotation of N. However, we are able to draw conclusions about the practical entailment relation between statements about the AN and statements about the N. Monotonicity. In this simplified RTE task, we assume that the entailment relation that holds overall between p and h is attributable wholly to the atomic edit (i.e. the inserted or deleted adjective). This is an over-simplification. In practice, several factors can cause the entailment relation that holds between the sentences overall to differ from the relation that holds between the AN and the N. For example, quantifiers and other downward-monotone operators can block or reverse entailments (brown dog →dog, but no brown dog ̸→no dog). While we make some effort to avoid selecting such sentences for our analysis (Section 5.3), fully identifying and handling such cases is beyond the scope of this paper. We acknowledge that monotone operators and other complicating factors (e.g. multiword expressions) might be present in our data, but we believe, based on manual inspection, that they not frequent enough to substantially effect our analyses. 5 Experimental Design To build an intuition about the behavior of ANs in practice, we collect human judgments of the entailments generated by inserting and deleting adjectives from sentences drawn from large corpora. In this section, we motivate our design decisions, before carrying out our full analysis in Section 6. 5.1 Human judgments of entailment People often draw conclusions based on “assumptions that seem plausible, rather than assumptions that are known to be true” (Kadmon, 2001). We therefore collect annotations on a 5-point scale, ranging from 1 (definite contradiction) to 5 (definite entailment), with 2 and 4 capturing likely (but not certain) contradiction/entailment respectively. We recruit annotators on Amazon Mechanical Turk. We tell each annotator to assume that the premise “is true, or describes a real scenario” and then, using their best judgement, to indicate how likely it is, on a scale of 1 to 5, that the hypothesis “is also true, or describes the same scenario.” They are given short descriptions and several examples of sentence pairs that constitute each score along the 1 to 5 scale. They are also given the option to say that “the sentence does not make sense,” to account for poorly constructed p/h pairs, or errors in our parsing. We use the mean score of the three annotators as the true score for each sentence pair. Inter-annotator agreement. To ensure that our judgements are reproducible, we re-annotate a random 10% of our pairs, using the same annotation setup but a different set of annotators. We compute the intra-class correlation (ICC) between the scores received on the first round of annotation, and those received in the second pass. 
ICC is related to Pearson correlation, and is used to measure consistency among annotations when the group of annotators measuring each observation is not fixed, as opposed to metrics like Fleiss’s κ which assume a fixed set of annotators. On our data, the ICC is 0.77 (95% CI 0.73 - 0.81) indicating very high agreement. These twice-annotated pairs will become our test set in Section 7. 2166 5.2 Data Selecting contexts. We first investigate whether, in naturally occurring data, there is a difference between contexts in which the author uses the AN and contexts in which the author uses only the (unmodified) N. In other words, in order to study the effect of an A (e.g. financial) on the denotation of an N (e.g. system), is it better to look at contexts like (a) below, in which the author originally used the AN financial system, or to use contexts like (b), in which the author used only the N system? (a) The TED spread is an indication of investor confidence in the U.S. financial system. (b) Wellers hopes the system will be fully operational by 2015. We will refer to contexts like (a) as natural contexts, and those like (b) as artificial. We take sample of 500 ANs from the Annotated Gigaword corpus (Napoles et al., 2012), and choose three natural and three artificial contexts for each. We generate p/h pairs by deleting/inserting the A for the natural/artificial contexts, respectively, and collect human judgements on the effect of the INS(A) operation for both cases. Figure 2 displays the results of this pilot study. In sentences which contain the AN naturally, there is a clear bias toward judgements of “entailment.” That is, in contexts when an AN appears, it is often the case that this A is superfluous: the information carried by the A is sufficiently entailed by the context that removing it does not remove information. Sentences (a) and (b) above provide intuition: in the case of sentence (a), trigger phrases like investor confidence make it clear that the system we are discussing is the financial system, whether or not the adjective financial actually appears. No such triggers exist in sentence (b). Figure 2: p/h pairs derived from natural contexts result in a notable bias toward judgements of “entailment” for the INS(A) operation, compared to p/h pairs derived from artificial contexts. Selecting ANs. We next investigate whether the frequency with which an AN is used effects its tendency to entail/be entailed by the head N. Again, we run a small pilot study. We choose 500 ANs stratified across different levels of frequency of occurrence in order to determine if sampling the most frequent ANs introduces bias into our annotation. We see no significant relationship between the frequency with which an AN appears and the entailment judgements we received. 5.3 Final design decisions As a result of the above pilot experiments, we proceed with our study as follows. First, we use only artificial contexts, as we believe this will result in a greater variety of entailment relations and will avoid systematically biasing our judgements toward entailments. Second, we use the most frequent AN pairs, as these will better represent the types of ANs that NLU systems are likely to encounter in practice. 
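As an aside, the two-pass reliability check described in Section 5.1 can be reproduced with off-the-shelf tools; the sketch below uses pingouin's intraclass correlation on an invented long-format table (one row per p/h pair per annotation round). The layout, column names, and numbers are all hypothetical.

```python
import pandas as pd
import pingouin as pg

# Hypothetical reliability data: each p/h pair receives a mean score in the
# first and second annotation rounds; the two rounds act as "raters".
df = pd.DataFrame({
    "pair":  [1, 1, 2, 2, 3, 3, 4, 4],
    "round": ["first", "second"] * 4,
    "score": [4.3, 4.0, 1.7, 2.0, 3.0, 3.3, 5.0, 4.7],
})

icc = pg.intraclass_corr(data=df, targets="pair", raters="round", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```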
We look at four different corpora capturing four different genres: Annotated Gigaword (Napoles et al., 2012) (News), image captions (Young et al., 2014) (Image Captions), the Internet Argument Corpus (Walker et al., 2012) (Forums), and the prose fiction subset of GutenTag dataset (Brooke et al., 2015) (Literature). From each corpus, we select the 100 nouns which occur with the largest number of unique adjectives. Then, for each noun, we take the 10 adjectives with which the noun occurs most often. For each AN, we choose 3 contexts2 in which the N appears unmodified, and generate p/h pairs by inserting the A into each. We collect 3 judgements for each p/h pair. Since this task is subjective, and we want to focus our analysis on clean instances on which human agreement is high, we remove pairs for which one or more of the annotators chose the “does not make sense” option and pairs for which we do not have at least 2 out of 3 agreement (i.e. at least two workers must have chosen the same score on the 5-point scale). In the end, we have a total of 5,560 annotated p/h pairs3 coming roughly evenly from our 4 genres. 6 Empirical Analysis Figure 3 shows how the entailment relations are distributed in each genre. In Image Captions, the vast majority of ANs are in a forward entailment (restrictive) relation with their head N. In the other genres, however, a substantial fraction (36% for Forums) are in equivalence relations: i.e. the AN denotes the same set as is denoted by the N alone. When does N entail AN? If it is possible to insert adjectives into a sentence without adding new information, when does this happen? When is adjectival 2As a heuristic, we skip sentences containing obvious downward-monotone operators, e.g. not, every (Section 4). 3Our data is available at http://www.seas.upenn. edu/˜nlp/resources/AN-composition.tgz 2167 Figure 3: Basic entailment relations assigned to ANs according to the 5,560 p/h pairs our data. modification not restrictive? Based on our qualitative analysis, two clear patterns stand out: 1) When the adjective is prototypical of the noun it modifies. In general, we see that adding adjectives which are seen as attributes of the “prototypical” instance of the noun tend to generate entailments. E.g. people are generally comfortable concluding that beach→sandy beach. The same adjective may be prototypical and thus entailed in the context of one noun, but generate a contradiction in the context of another. E.g. if someone has a baby, it is probably fine to say they have a little baby, but if someone has control, it would be a lie to say they have little control (Figure 4).4 Empirical Analysis Empirical Analysis Figure 4: Inserting adjectives that are seen as “prototypical” of the noun tends to generate entailments. E.g., beach generally entails sandy beach. 2) When the adjective invokes a sense of salience or importance. Nouns are assumed to be salient and relevant. E.g. answers are assumed (perhaps naively) to be correct, and problems are assumed (perhaps melodramatically) to be current and huge. Inserting adjectives like false or empty tend to generate contradictions (Figure 5). What do the different natural logic relations look like in practice? Table 3 shows examples of ANs and 4These curves show the distribution over entailment scores associated with the INS(A) operation. Yellow curves show, for a single N, the distribution over all the As that modify it. Blue curves show, for a single A, the distribution over all the Ns it modifies. 
Figure 5: Unless otherwise specified, nouns are considered to be salient and relevant. Answers are assumed to be correct, and problems to be current. contexts exhibiting each of the basic entailment relations. Some entailment inferences depend entirely on contextual information (Example 2a) while others arise from common-sense inference (Example 2b). Many of the most interesting examples fall into the independence relation. Recall from Section 3 that independence, in theory, covers ANs such as alleged criminal, in which the AN may or may not entail the N. In practice, the cases we observe falling into the independence relation tend to be those which are especially effected by world knowledge. In Example 3, local economy is considered to be independent of economy when used in the context of President Obama: i.e. the assumption that the president would be discussing the national economy is so strong that even when the president says the local economy is improving, people do not take this to mean that he has said the economy is improving. Undefined entailment relations. Our annotation methodology– i.e. inferring entailment relations based on the entailments generated by INS and DEL edits– does not enforce that all of the ANs fit into one of the five entailment relations defined by natural logic. Specifically, we observe many instances (∼5% of p/h pairs) in which INS is determined to generate a contradiction, while DEL is said to generate an entailment. In terms of set theory, this is equivalent to the (non-sensical) setting in which “every AN is an instance of N, but no N is an instance of AN.” On inspection, these again represent cases in which commonsense assumptions dominate the inference. In Example 6, when given the premise Bush travels to Michigan to discuss the economy, annotators are confident enough that economy does not entail Japanese economy (why on earth would Bush travel to Michigan to discuss the Japanese economy?) that they label the insertion of Japanese as generating a contradiction. However, when presented with the p/h in the opposite direction, annotators agree that the Japanese economy does indeed entail the economy. These examples highlight the flexibility with which humans perform natural language inference, and the need for automated systems to 2168 (1) AN ⊏N He underwent a [successful] operation on his leg at a Lisbon hospital in December. (2a) AN ≡N The [deadly] attack killed at least 12 civilians. (2b) AN ≡N The [entire] bill is now subject to approval by the parliament. (3) AN # N President Obama cited the data as evidence that the [local] economy is improving. (4) AN ⊐N The [militant] movement was crushed by the People’s Liberation Army. (5) AN | N Red numbers spelled out their [perfect] record: 9-2. (6) AN ? N Bush travels Monday to Michigan to make remarks on the [Japanese] economy. Table 3: Examples of ANs in context exhibiting each of the different entailment relations. Note that these are “artificial” contexts (Section 5.2), meaning the adjective was not originally a part of the sentence. be equally flexible. Take aways. Our analysis in this section results in three key conclusions about AN composition. 1) Despite common assumptions, adjectives do not always restrict the denotation of a noun. Rather, adjectival modification can result in a range of entailment relations, including equivalence and contradiction. 
2) There are patterns to when the insertion of an adjective is or is not entailment-preserving, but recognizing these patterns requires common-sense and a notion of “prototypical” instances of nouns. 3) The entailment relation that holds between an AN and the head N is highly context dependent. These observations describe sizable obstacles for automatic NLU systems. Common-sense reasoning is still a major challenge for computers, both in terms of how to learn world knowledge and in how to represent it. In addition, context-sensitivity means that entailment properties of ANs cannot be simply stored in a lexicon and looked-up at run time. Such properties make AN composition an important problem on which to focus NLU research. 7 Benchmarking Current SOTA We have highlighted why AN composition is an interesting and likely challenging phenomenon for automated NLU systems. We now turn our investigation to the performance of state-of-the-art RTE systems, in order to quantify how well AN composition is currently handled. The Add-One Entailment Task. We define the “Add-One Entailment” task to be identical to the normal RTE task, except with the constraint that the premise p and the hypothesis h differ only by the atomic insertion of an adjective: h = e(p) where e=INS(A) and A is a single adjective. To provide a consistent interface with a range of RTE systems, we use a binary label set: NON-ENTAILMENT (which encompasses both CONTRADICTION and UNKNOWN) and ENTAILMENT. We want to test on only straightforward examples, so as not to punish systems for failing to classify examples which humans themselves find difficult to judge. In our test set, therefore, we label pairs with mean human scores ≤3 as NONENTAILMENT, pairs with scores ≥4 as ENTAILMENT, and throw away the pairs which fall into the ambiguous range in between.5 Our resulting train, dev, and test sets contain 4,481, 510, and 387 pairs, respectively. These splits cover disjoint sets of ANs– i.e. none of the ANs appearing in test were seen in train. Individual adjectives and/or nouns can appear in both train and test. The dataset consists of roughly 85% NONENTAILMENT and 15% ENTAILMENT. Inter-annotator agreement achieves 93% accuracy. 7.1 RTE Systems We test a variety of state-of-the-art RTE systems, covering several popular approaches to RTE. These systems are described in more detail below. Classifier-based. The Excitement Open RTE platform (Magnini et al., 2014) includes a suite of RTE systems, including baseline systems as well as featurerich supervised systems which provide state-of-the-art performance on the RTE3 datasets (Giampiccolo et al., 2007). We test two systems from Excitement: the simple Maximum Entropy (MaxEnt) model which uses a suite of dense, similarity-based features (e.g. word overlap, cosine similarity), and the more sophisticated Maximum Entropy model (MaxEnt+LR) which uses the same similarity-based features but additionally incorporates features from external lexical resources such as WordNet (Miller, 1995) and VerbOcean (Chklovski and Pantel, 2004). We also train a standard unigram model (BOW). Transformation-based. The Excitement platform also includes a transformation-based RTE system called BIUTEE (Stern and Dagan, 2012). The BIUTEE system derives a sequence of edits that can be used to transform the premise into the hypothesis. These edits are represented using feature vectors, and the system searches over edit sequences for the lowest cost “proof” of either entailment or non-entailment. 
The feature weights are set by logistic regression during training. Deep learning. Bowman et al. (2015a) recently reported very promising results using deep learning ar5For our training and dev sets, we include all pairs, considering scores < 3.5 as NON-ENTAILMENT and scores ≥ 3.5 as ENTAILMENT. We tried removing “ambiguous” pairs from the training and dev sets as well, but it did not improve the systems’ performances on the test set. 2169 chitectures and large training data for the RTE task. We test the performance of those same implementations on our Add-One task. Specifically, we test the following models: a basic Sum-of-words model (Sum), which represents both p and h as the sum of their word embeddings, an RNN model, and an LSTM model. We also train a bag-of-vectors model (BOV), which is simply a logistic regression whose features are the concatenated averaged word embeddings of p and h. For the LSTM, in addition to the normal training setting– i.e. training only on the 5K Add-One training pairs– we test a transfer-learning setting (Transfer). In transfer learning, the model trains first on a large general dataset before fine-tuning its parameters on the smaller set of target-domain training data. For our Transfer model, we train first on the 500K pair SNLI dataset (Bowman et al., 2015a) until convergence, and then fine-tune on the 5K Add-One pairs. This setup enabled Bowman et al. (2015a) to train a high-performance LSTM for the SICK dataset, which is of similar size to our Add-One dataset (∼5K training pairs). 7.2 Results Out of the box performances. To calibrate expectations, we first report the performance of each of the systems on the datasets for which they were originally designed. For the Excitement systems, this is the RTE3 dataset (Table 6a). For the deep learning systems, this is the SNLI dataset (Table 6b). For the deep learning systems, in addition to reporting performance when trained on the SNLI corpus (500K p/h pairs), we report the performance in a reduced training setting in which systems only have access to 5K p/h pairs. This is equivalent to the amount of data we have available for the Add-One task, and is intended to give a sense of the performance improvements we should expect from these systems given the size of the training data. RTE3 Majority 51.3 BOW 51.0 Edit Dist. 61.9 MaxEnt+LR 63.6 BIUTEE 65.6 (a) Systems from Magnini et al. (2014) on RTE3. SNLI 500K / 5K Majority 65.7 BOV 74.4 / 71.5 RNN 82.1 / 67.0 Sum 85.3 / 69.2 LSTM 86.2 / 68.0 (b) Systems from Bowman et al. (2015a) on SNLI. Figure 6: Performance of SOTA systems on the datasets for which they were originally developed. 7.3 Performance on Add-One RTE. Finally, we train each of the systems on the 5,000 AddOne p/h pairs in our dataset and test on our heldout set of 387 pairs. Figure 7 reports the results in terms of accuracy and precision/recall for the ENTAILMENT class. The baseline strategy of predicting the majority class for each adjective, based on the training data, reaches close to human performance (92% accuracy). Given the simplicity of the task (p and h differ by a single word), this baseline strategy should be achievable. However, none of the systems tested come close to this level of performance, suggesting that they fail to learn even the most-likely entailment generated by adjectives (e.g. that INS(brown) probably generates NON-ENTAILMENT and INS(possible) probably generates ENTAILMENT). 
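The per-adjective majority-class strategy mentioned above admits a very short implementation; the Python sketch below is our illustration (the data format and names are assumptions), and it simply memorizes, for each adjective seen in training, its most frequent label, backing off to the corpus-wide majority class for adjectives unseen at test time.

    from collections import Counter, defaultdict

    def train_adjective_majority(train_pairs):
        # train_pairs: iterable of (adjective, label) tuples, with label in
        # {"ENTAILMENT", "NON-ENTAILMENT"}.
        per_adj, overall = defaultdict(Counter), Counter()
        for adj, label in train_pairs:
            per_adj[adj][label] += 1
            overall[label] += 1
        majority = {adj: counts.most_common(1)[0][0]
                    for adj, counts in per_adj.items()}
        fallback = overall.most_common(1)[0][0]  # NON-ENTAILMENT in this data
        return majority, fallback

    def predict(adj, majority, fallback):
        # Adjectives unseen in training back off to the corpus-wide majority class.
        return majority.get(adj, fallback)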
The best performing system is the RNN, which achieves 87% accuracy, only two points above the baseline of always guessing NON-ENTAILMENT. Figure 7: Performances of all systems on AddOne RTE task. The strategy of predicting the majority class for each adjective– based on the training data– reaches near human performance. None of the systems tested come close to human levels, indicating that the systems fail even to memorize the most-likely class for each adjective in training. 8 Related Work Past work, both in linguistics and in NLP, has explored different classes of adjectives (e.g. privative, intensional) as they relate to entailment (Kamp and Partee, 1995; Partee, 2007; Boleda et al., 2013; Nayak et al., 2014). In general, prior studies have focused on modeling properties of the adjectives alone, ignoring the context-dependent nature of AN/N entailments– i.e. in prior work little is always restrictive, whether it is modifying baby or control. Pustejovsky (2013) offer a preliminary analysis of the contextual complexities surrounding adjective inference, which reinforces many of the observations we have made here. Hartung and Frank (2011) analyze adjectives in terms of the properties they modify but don’t address them from an entailment perspective. Tien Nguyen et al. (2014) look at the 2170 adjectives in the restricted domain of computer vision. Other past work has employed first-order logic and other formal representations of adjectives in order to provide compositional entailment predictions (Amoia and Gardent, 2006; Amoia and Gardent, 2007; McCrae et al., 2014). Although theoretically appealing, such rigid logics are unlikely to provide the flexibility needed to handle the type of common-sense inferences we have discussed here. Distributional representations provide much greater flexibility in terms of representation (Baroni and Zamparelli, 2010; Guevara, 2010; Boleda et al., 2013). However, work on distributional AN composition has so far remained out-of-context, and has mostly been evaluated in terms of overall “similarity” rather than directly addressing the entailment properties associated with composition. 9 Conclusion We have investigated the problem of adjective-noun composition, specifically in relation to the task of RTE. AN composition is capable of producing a range of natural logic entailment relationship, at odds with commonly-used heuristics which treat all adjectives a restrictive. We have shown that predicting these entailment relations is dependent on context and on world knowledge, making it a difficult problem for current NLU technologies. When tested, state-of-the-art RTE systems fail to learn to differentiate entailmentpreserving insertions of adjectives from non-entailing ones. This is an important distinction for carrying out human-like reasoning, and our results reveal important weaknesses in the representations and algorithms employed by current NLU systems. The Add-One Entailment task we have introduced will allow ongoing RTE research to better diagnose systems’ abilities to capture these subtleties of ANs, which that have practical effects on natural language inference. Acknowledgments This research was supported by a Facebook Fellowship, and by gifts from the Alfred P. Sloan Foundation, Google, and Facebook. This material is based in part on research sponsored by the NSF grant under IIS-1249516 and DARPA under number FA8750-132-0017 (the DEFT program). The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes. 
The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA and the U.S. Government. We would like to thank Sam Bowman for helping us to replicate his prior work and ensure a fair comparison. We would also like to thank the anonymous reviewers for thoughtful comments, and the Amazon Mechanical Turk annotators for their contributions. References Marilisa Amoia and Claire Gardent. 2006. Adjective based inference. In Proceedings of the Workshop KRAQ’06 on Knowledge and Reasoning for Language Processing, pages 20–27. Association for Computational Linguistics. Marilisa Amoia and Claire Gardent. 2007. A first order semantic approach to adjectival inference. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 185–192, Prague, June. Association for Computational Linguistics. Gabor Angeli and Christopher D. Manning. 2014. NaturalLI: Natural logic inference for common sense reasoning. In Empirical Methods in Natural Language Processing (EMNLP), October. Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging linguistic structure for open domain information extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 344–354, Beijing, China, July. Association for Computational Linguistics. Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1183–1193, Cambridge, MA, October. Association for Computational Linguistics. Marco Baroni, Raffaella Bernardi, Ngoc-Quynh Do, and Chung-chieh Shan. 2012. Entailment above the word level in distributional semantics. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 23–32, Avignon, France, April. Association for Computational Linguistics. Johannes Bjerva, Johan Bos, Rob van der Goot, and Malvina Nissim. 2014. The meaning factory: Formal semantics for recognizing textual entailment and determining semantic similarity. SemEval 2014, page 642. Gemma Boleda, Marco Baroni, Louise McNally, and Nghia Pham. 2013. Intensionality was only alleged: On adjective-noun composition in distributional semantics. In Proceedings of the 10th International Conference on Computational Semantics, pages 35– 46. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015a. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal, September. Association for Computational Linguistics. 2171 Samuel R. Bowman, Christopher Potts, and Christopher D. Manning. 2015b. Learning distributed word representations for natural logic reasoning. In 2015 AAAI Spring Symposium Series. Julian Brooke, Adam Hammond, and Graeme Hirst. 2015. GutenTag: an NLP-driven tool for digital humanities research in the Project Gutenberg corpus. In Proceedings of the Fourth Workshop on Computational Linguistics for Literature, pages 42–47, Denver, Colorado, USA, June. Association for Computational Linguistics. Timothy Chklovski and Patrick Pantel. 2004. 
VerbOcean: Mining the web for fine-grained semantic verb relations. In EMNLP, volume 2004, pages 33–40. Robin Cooper, Dick Crouch, Jan Van Eijck, Chris Fox, Johan Van Genabith, Jan Jaspars, Hans Kamp, David Milward, Manfred Pinkal, Massimo Poesio, et al. 1996. A framework for computational semantics. Technical report, Technical Report LRE 62-051 D16, The FraCaS Consortium. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognizing textual entailment challenge. In Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment, pages 177–190. Springer. Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 1–9, Prague, June. Association for Computational Linguistics. Emiliano Guevara. 2010. A regression model of adjective-noun compositionality in distributional semantics. In Proceedings of the 2010 Workshop on GEometrical Models of Natural Language Semantics, pages 33–37, Uppsala, Sweden, July. Association for Computational Linguistics. Matthias Hartung and Anette Frank. 2011. Exploring supervised LDA models for assigning attributes to adjective-noun phrases. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 540–551, Edinburgh, Scotland, UK., July. Association for Computational Linguistics. Nirit Kadmon. 2001. Formal Pragmatics: Semantics, Pragmatics, Presupposition, and Focus. Willey. Blackwell. Oxford. Hans Kamp and Barbara Partee. 1995. Prototype theory and compositionality. Cognition, 57(2):129– 191. Bill MacCartney and Christopher D. Manning. 2008. Modeling semantic containment and exclusion in natural language inference. In Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1, COLING ’08, pages 521– 528. Bill MacCartney. 2009. Natural language inference. Ph.D. thesis, Citeseer. Bernardo Magnini, Roberto Zanoli, Ido Dagan, Kathrin Eichler, Guenter Neumann, Tae-Gil Noh, Sebastian Pad´o, Asher Stern, and Omer Levy. 2014. The Excitement Open Platform for textual inferences. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 43–48, Baltimore, Maryland, June. Association for Computational Linguistics. Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), pages 216–223, Reykjavik, Iceland, May. ACL Anthology Identifier: L14-1314. John P. McCrae, Francesca Quattri, Christina Unger, and Philipp Cimiano. 2014. Modelling the semantics of adjectives in the ontology-lexicon interface. In Proceedings of the 4th Workshop on Cognitive Aspects of the Lexicon (CogALex), pages 198–209, Dublin, Ireland, August. Association for Computational Linguistics and Dublin City University. George A. Miller. 1995. WordNet: A lexical database for English. Communications of the ACM, 38(11):39–41, November. Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated gigaword. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction, pages 95–100. Neha Nayak, Mark Kowarsky, Gabor Angeli, and Christopher D. Manning. 2014. 
A dictionary of nonsubsective adjectives. Technical Report CSTR 2014-04, Department of Computer Science, Stanford University, October. Sebastian Pad´o, Tae-Gil Noh, Asher Stern, Rui Wang, and Roberto Zanoli. 2014. Design and realization of a modular architecture for textual entailment. Journal of Natural Language Engineering. Barbara Partee. 2007. Compositionality and coercion in semantics: The dynamics of adjective meaning. Cognitive foundations of interpretation, pages 145– 161. Ellie Pavlick, Johan Bos, Malvina Nissim, Charley Beller, Benjamin Van Durme, and Chris CallisonBurch. 2015. Adding semantics to data-driven paraphrasing. In Association for Computational Linguistics, Beijing, China, July. Association for Computational Linguistics. James Pustejovsky. 2013. Inference patterns with intensional adjectives. In Proceedings of the 9th Joint ISO - ACL SIGSEM Workshop on Interoperable Semantic Annotation, pages 85–89, Potsdam, Germany, March. Association for Computational Linguistics. 2172 Asher Stern and Ido Dagan. 2012. BIUTEE: A modular open-source system for recognizing textual entailment. In Proceedings of the ACL 2012 System Demonstrations, pages 73–78, Jeju Island, Korea, July. Association for Computational Linguistics. Dat Tien Nguyen, Angeliki Lazaridou, and Raffaella Bernardi. 2014. Coloring objects: Adjective-noun visual semantic compositionality. In Proceedings of the Third Workshop on Vision and Language, pages 112–114, Dublin, Ireland, August. Dublin City University and the Association for Computational Linguistics. Marilyn A. Walker, Jean E. Fox Tree, Pranav Anand, Rob Abbott, and Joseph King. 2012. A corpus for research on deliberation and debate. In LREC, pages 812–817. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics (TACL), 2(Feb):67–78. 2173
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2174–2184, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Modeling Stance in Student Essays Isaac Persing and Vincent Ng Human Language Technology Research Institute University of Texas at Dallas Richardson, TX 75083-0688 {persingq,vince}@hlt.utdallas.edu Abstract Essay stance classification, the task of determining how much an essay’s author agrees with a given proposition, is an important yet under-investigated subtask in understanding an argumentative essay’s overall content. We introduce a new corpus of argumentative student essays annotated with stance information and propose a computational model for automatically predicting essay stance. In an evaluation on 826 essays, our approach significantly outperforms four baselines, one of which relies on features previously developed specifically for stance classification in student essays, yielding relative error reductions of at least 11.3% and 5.3%, in micro and macro F-score, respectively. 1 Introduction State-of-the-art automated essay scoring engines such as E-rater (Attali and Burstein, 2006) do not grade essay content, focusing instead on providing diagnostic trait feedback on categories such as grammar, usage, mechanics, style and organization. Hence, persuasiveness and other content-dependent dimensions of argumentative essay quality are largely ignored in existing automated essay scoring research. While full-fledged content-based essay scoring is still beyond the reach of state-of-the-art essay scoring engines, recent work has enabled us to move one step closer to this ambitious goal by analyzing essay content, attempting to determine the argumentative structure of student essays (Stab and Gurevych, 2014) and the persuasiveness of the arguments made in these essays (Persing and Ng, 2015). Stance classification is an important first step in determining how persuasive an argumentative student essay is because persuasiveness depends on how well the author argues w.r.t. the stance she takes using the supporting evidence she provides. For instance, if her stance is Agree Somewhat, a persuasive argument would involve explaining what reservations she has about the given proposition. As another example, an argumentative essay in which the author takes a neutral stance or the author presents evidence that does not support the stance she claims to take should receive a low persuasiveness score. Given the important role played by stance classification in determining an essay’s persuasiveness, our goal in this paper is to examine stance classification in argumentative student essays. While there is a large body of work on stance classification1, stance classification in argumentative essays is largely under-investigated and is different from previous work in several respects. First, in automated essay grading, the majority of the essays to be assessed are written by students who are learners of English. Hence our stance classification task could be complicated by the authors’ lack of fluency in English. Second, essays are longer and more formally written than the text typically used in previous stance classification research (e.g., debate posts). In particular, a student essay writer typically expresses her stance on the essay’s topic in a thesis sentence/clause, while a debate post’s author may never even explicitly express her stance. 
Although the explicit expression of stance in essays seems to make our task easier, 1Previous approaches to stance classification have focused on three discussion/debate settings, namely congressional floor debates (Thomas et al., 2006; Bansal et al., 2008; Balahur et al., 2009; Yessenalina et al., 2010; Burfoot et al., 2011), company-internal discussions (Agrawal et al., 2003; Murakami and Raymond, 2010), and online social, political, and ideological debates (Wang and Ros´e, 2010; Biran and Rambow, 2011; Walker et al., 2012; Abu-Jbara et al., 2013; Hasan and Ng, 2013; Boltuˇzi´c and ˇSnajder, 2014; Sobhani et al., 2015; Sridhar et al., 2015). 2174 Prompt Prompt Parts Most university degrees are theoretical and do not prepare students for the real world. They are therefore of very little value. 1) Most university degrees are theoretical. 2) Most university degrees do not prepare students for the real world. 3) Most university degrees are of very little value. The prison system is outdated. No civilized society should punish its criminals: it should rehabilitate them. 1) The prison system is outdated. 2) No civilized society should punish its criminals. 3) Civilized societies should rehabilitate criminals. Table 1: Some examples of essay prompts and their associated parts. identifying stancetaking text in the midst of nonstancetaking sentences in a potentially long essay, as we will see, is by no means a trivial task. To our knowledge, the essay stance classification task has only been attempted by Faulkner (2014). However, the version of the task we address is different from his. First, Faulkner only performed two-class stance classification: while his corpus contains essays labeled with For (Agree), Against (Disagree), and Neither, he simplified the task by leaving out the arguably most difficult-to-identify stance, Neither. In contrast, we perform fine-grained stance classification, where we allow essay stance to take one of six values: Agree Strongly, Agree Somewhat, Neutral, Disagree Somewhat, Disagree Strongly, and Never Addressed, given the practical need to perform fine-grained stance classification in student essays, as discussed above. Second, given that many essay prompts are composed of multiple simpler propositions (e.g., the prompt “Most university degrees are theoretical and do not prepare students for the real world” has two parts, “Most university degrees are theoretical” and “Most university degrees do not prepare students for the real world.”), we manually split such prompts into prompt parts and determine the stance of the author w.r.t. each part, whereas Faulkner assigned an overall stance to a given prompt regardless of whether it is composed of multiple propositions. The distinction is important because an analysis of our annotations described in Section 2 shows that essay authors take different stances w.r.t. different prompt parts in 49% of essays, and in 39% of essays, authors even take stances with different polarities w.r.t. different prompt parts. In sum, our contributions in this paper are twofold. First, we propose a computational model for essay stance classification that outperforms four baselines, including our re-implementation of Faulkner’s approach. Second, in order to stimulate further research on this task, we make our annotations publicly available. Since progress on this task is hindered in part by the lack of a publicly annotated corpus, we believe that our data set will be a valuable resource for the NLP community. 
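To make the task formulation concrete, the six-way label set and the essay/prompt-part pairing can be represented as in the following Python sketch; this is our illustration, and its names (Stance, Instance, make_instances) are not taken from the released annotations.

    from dataclasses import dataclass
    from enum import Enum

    class Stance(Enum):
        AGREE_STRONGLY = "Agree Strongly"
        AGREE_SOMEWHAT = "Agree Somewhat"
        NEUTRAL = "Neutral"
        DISAGREE_SOMEWHAT = "Disagree Somewhat"
        DISAGREE_STRONGLY = "Disagree Strongly"
        NEVER_ADDRESSED = "Never Addressed"

    @dataclass
    class Instance:
        essay_id: str      # one ICLE essay
        prompt_part: str   # one proposition split out of the essay's prompt
        stance: Stance     # the label assigned to this essay/prompt-part pairing

    def make_instances(prompt_parts_by_essay, label_of):
        # Each essay is paired with every part of the prompt it responds to;
        # the same (unsegmented) essay text answers all of its prompt parts.
        return [Instance(essay, part, label_of(essay, part))
                for essay, parts in prompt_parts_by_essay.items()
                for part in parts]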
2 Corpus We use as our corpus the 4.5 million word International Corpus of Learner English (ICLE) (Granger et al., 2009), which consists of more than 6000 essays on a variety of different topics written by university undergraduates from 16 countries and 16 native languages who are learners of English as a Foreign Language. 91% of the ICLE texts are written in response to prompts that trigger argumentative essays, and thus are expected to take a stance on some issue. We select 11 such prompts, and from the subset of argumentative essays written in response to them, we select 826 essays to annotate for training and testing our stance classification system.2 Table 1 shows two of the 11 topics selected for annotation. We pair each of the 826 essays with each of the prompt parts to which it responds, resulting in 1,593 instances.3 We then familiarize two human annotators, both of whom are native speakers of English, with the stance definitions in Table 2 and ask them to assign each instance the stance label they believe the essay’s author would have chosen if asked how strongly she agrees with the prompt part. We additionally furnish the annotators with descriptions of situations that might cause an author to select the more ambiguous classes. For example, an author might choose Agree Somewhat if she appears to mostly agree with the prompt part, but qualifies her opinion in a way that is not captured by the prompt part’s bluntness (e.g. an author who claims the prison system in a lot of countries is outdated would Agree Somewhat with the first part of Table 1’s second prompt). Or she may choose Disagree Somewhat if she appears to dis2See our website at http://www.hlt.utdallas. edu/˜persingq/ICLE/ for the complete list of essay stance annotations. 3We do not segment the essays’ texts according to which prompt part is being responded to. Each (entire) essay is viewed as a response to all of its associated prompt parts. 2175 Stance Definition Agree Strongly (885) The author seems to agree with and care about the claim. Agree Somewhat (148) The author generally agrees with the claim, but might be hesitant to choose “Agree Strongly”. Neutral (28) The author agrees with the claim as much as s/he disagrees with it. Disagree Somewhat (91) The author generally disagrees with the claim, but might be hesitant to choose “Disagree Strongly”. Disagree Strongly (416) The author seems to disagree with and care about the claim. Never Addressed (25) A stance cannot be inferred because the proposition was never addressed. Table 2: Stance label counts and definitions. agree with the prompt part, but mentions the disagreement only in passing because she does not care much about the topic. To ensure consistency in annotation, we randomly select 100 essays (187 instances) for annotation by both annotators. Their labels agree in 84.5% of the instances, yielding a Cohen’s (1960) Kappa of 0.76. Each case of disagreement is resolved through discussion between the annotators. 3 Baseline Stance Classification Systems In this section, we describe four baseline systems. 3.1 Agree Strongly Baseline Given the imbalanced stance distribution shown in Table 2, we create a simple but by no means weak baseline, which predicts that every instance has most frequent class label (Agree Strongly), regardless of the prompt part or the essay’s contents. 
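The agreement figures reported in Section 2 (84.5% raw agreement and a Kappa of 0.76 on the doubly annotated essays) correspond to a computation like the following Python sketch, assuming the two annotators' labels are available as parallel lists; the code is ours, for illustration only.

    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        # Cohen's (1960) kappa for two annotators over the same instances.
        assert len(labels_a) == len(labels_b)
        n = len(labels_a)
        observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        counts_a, counts_b = Counter(labels_a), Counter(labels_b)
        # Chance agreement: the probability that both annotators pick the same
        # class if each labels independently according to her own distribution.
        expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in counts_a)
        return (observed - expected) / (1 - expected)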
3.2 N-Gram Baseline Previous work on stance classification, which assumes that stance-annotated training data is available for every topic for which stance classification is performed, has shown that the N-Gram baseline is a strong baseline. Not only is this assumption unrealistic in practice, but it has led to undesirable consequences. For instance, the proposition “feminists have done more harm to the cause of women than good” elicits much more disagreement than normal. So, if instances from this proposition appeared in both the training and test sets, the unigram feature “feminist” would be strongly correlated with the disagreement classes even though intuitively it tells us nothing about stance. This partly explains why the N-Gram baseline was strong in previous work (Somasundaran and Wiebe, 2010). In light of this problem, we perform leave-one-out cross validation where we partition the instances by prompt, leaving the instances created for one prompt out in each test set. To understand how strong n-grams are when evaluated in our leave-one-prompt-out crossvalidation setting, we employ them as features in our second baseline. Specifically, we train a multiclass classifier on our data set using a feature set composed solely of unigram, bigram, and trigram features, each of which indicates the number of times the corresponding n-gram is present in the associated essay. 3.3 Duplicated Faulkner Baseline While it is true that no system exists for solving our exact problem, the system proposed by Faulkner (2014) comes fairly close. Hence, as our third baseline, we train a multiclass classifier on our data set for fine-grained essay stance classification using the two types of features proposed by Faulkner, as described below. Part-of-speech (POS) generalized dependency subtrees. Faulkner first constructs a lexicon of stance words in the style of Somasundaran and Wiebe (2010). The lexicon consists of (1) the set of stemmed first unigrams appearing in all stanceannotated text spans in the Multi-Perspective Question Answering (MPQA) corpus (Wiebe et al., 2005), and (2) the set of boosters (clearly, decidedly), hedges (claim, estimate), and engagement markers (demonstrate, evaluate) from the appendix of Hyland (2005). He then manually removes from this list any words that appear not to be stancetaking, resulting in a 453 word lexicon. Stance words target propositions, which Faulkner notes, usually contain some opinionbearing language that can serve as a proxy for the targeted proposition. In order to find the locations in an essay where a stance is being taken, he first finds each stance word in the essay. Then he finds the shortest path from the stance word to an opinion word in the sentence’s dependency tree, using the MPQA subjectivity lexicon of opinion words (Wiebe et al., 2005). If this nearest opinion word appears in the stance word’s immediate or embedded clause, he creates a binary feature by concatenating all the words in the dependency path, POS generalizing all words other than the stance and opinion word, and finally prepending 2176 “not” if the stance word is adjacent to a negator in the dependency tree. Thus given the sentence “I can only say that this statement is completely true.” he would add the feature can-V-true, which suggests agreement with the prompt. Prompt topic words. Recall that for the previous feature type, a feature was generated whenever an opinion word occurred in a stance word’s immediate or embedded clause. 
Each content word in this clause is used as a binary feature if its similarity with one of the prompt’s content words meets an empirically determined threshold. 3.4 N-Gram+Duplicated Faulkner Baseline To build a stronger baseline, we employ as our fourth baseline a classifier trained on both n-gram features and duplicated Faulkner’s features. 4 Our Approach Our approach to stance classification is a learningbased approach where we train a multiclass classifier using four types of features: n-gram features (Section 3.2), duplicated Faulkner’s features (Section 3.3), and two novel types of features, stancetaking path-based features (Section 4.1) and knowledge-based features (Section 4.2). 4.1 Stancetaking Path-Based Features Recall that, in order to identify his POS generalized dependency subtrees, Faulkner relies on two lexica, a lexicon of stancetaking words and a lexicon of opinion-bearing words. He then extracts a feature any time words from the two lexica are syntactically close enough. A major problem with this approach is that the lexica are so broad that nearly 80% of sentences in our corpus contain text that can be identified as stancetaking using this method. Intuitively, an essay may state its stance w.r.t. a prompt part in a thesis or conclusion sentence, but most of essay’s text will be at most tangentially related to any particular prompt part. For this reason, we propose to identifying stancetaking text to target only text that appears directly related to the prompt part. Below we first show how we identify and stance-labeling relevant stancetaking dependency paths, and then describe the features we derive from these paths. 4.1.1 Identifying relevant stancetaking paths As noted above, we first identify stancetaking text that appears directly related to the prompt part. Figure 1: Automatic dependency parse of a prompt part. To begin, we note that the prompt parts themselves must express a stance on a topic if they can be agreed or disagreed with. By examining the dependency parses4 of the prompt parts, we can recognize elements of how stancetaking text is structured. From the prompt part shown in Figure 1, for example, we notice that the important words that express a stance in the sentence are “money”, “root”, and “evil”. By analyzing the dependency structure in this and other prompt parts, we discovered that stancetaking text often consists of (1) a subject word, which is the child in an nsubj or nsubjpass relation, (2) a governor word which is the subject’s parent, and (3) an object, which is a content word from which there is a (not always direct) dependency path from the governor. We therefore abstract a stance in an essay as a dependency path from a subject to an object that passes through the governor. Thus, the stancetaking dependency path we identify from the prompt part shown in Figure 1 could be represented as moneyroot-evil. The obvious problem with identifying stancetaking text in this way is that nearly all sentences contain this kind of stancetaking structure, and just as with Faulkner’s dependency paths, there is little reason to believe that any particular path is relevant to an instance’s prompt part. Does this mean that nearly all sentences are stancetaking? We would argue that they can be, as even sentences that appear on their face to be mere statements of fact with no apparent value judgment can be viewed as taking a stance on the factuality of the statement, and people often disagree about the factuality of statements. 
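A sketch of how such subject-governor-object paths might be read off a dependency parse is given below. The token representation is a generic (index, lemma, head, relation) record rather than the output format of any particular parser, and content_idxs is assumed to hold the indices of the candidate object (content) words; the code illustrates the idea in Section 4.1.1 and is not the authors' implementation.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Token:
        idx: int
        lemma: str
        head: Optional[int]   # index of the parent token, None for the root
        deprel: str           # dependency relation to the parent

    def stancetaking_paths(tokens: List[Token], content_idxs):
        # A subject is the child of an nsubj/nsubjpass relation, its parent is
        # the governor, and an object is any candidate content word from which
        # following head links upward eventually reaches the governor.
        by_idx = {t.idx: t for t in tokens}
        paths = []
        for tok in tokens:
            if tok.deprel not in ("nsubj", "nsubjpass") or tok.head is None:
                continue
            governor = by_idx[tok.head]
            for obj_idx in content_idxs:
                if obj_idx in (governor.idx, tok.idx):
                    continue
                cur = by_idx[obj_idx]
                while cur.head is not None:
                    cur = by_idx[cur.head]
                    if cur.idx == governor.idx:
                        paths.append((tok, governor, by_idx[obj_idx]))
                        break
        return paths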
For this reason, after we have identified a stancetaking path, we must determine whether the stance being expressed is relevant to the prompt part before extracting features from it. 4Dependency parsing, POS tagging, and lemmatization are performed automatically using the Stanford CoreNLP system (Manning et al., 2014) 2177 Figure 2: Automatic dependency parse of an essay sentence. For this reason, we ignore all stancetaking paths that do not meet the following three relevance conditions. First, the lemma of the path’s governor must match the lemma of a governor in the prompt part. Second, the lemma of the path’s object must match the lemma of some content word5 in the prompt part. Finally, the containing sentence must not contain a question mark or a quotation, as such sentences are usually rhetorical in nature. We do not require that the subject word match the prompt part’s subject word because this substantially reduces coverage for various reasons. For one, of the three words (subject, governor, object), the subject is the word most likely to be replaced with some other word like a pronoun, and possibly because the essays were written by non-native English speakers, automatic coreference resolution cannot reliably identify these cases. We also do not fully trust that the subject identified by the dependency parser will reliably match the subject we are looking for. Given these constraints, we can automatically identify the “itself-root-of-evil” dependency path in Figure 2 as a relevant stancetaking path. 4.1.2 Stance-labeling the paths Next, we determine whether a stancetaking path identified in the previous step appears to agree or disagree with the prompt part. To begin, we count the number of negations occurring in the prompt part. Any word like “no”, “not”, or “none” counts as a negation unless it begins a non-negation phrase like “no doubt” or “not only”.6 Thus, the count of negations in the prompt part in Figure 1 is 0. After that, we count the number of times the identified stancetaking path is negated. Because 5For our purpose, a content word (1) is a noun, pronoun, verb, adjective, or adverb, (2) is not a stopword, and (3) is at the root, is a child in a dobj or pobj relation, or is the child in a conj relation whose parent is the child in a dobj or pobj relation in the dependency tree. 6See our website at http://www.hlt.utdallas. edu/˜persingq/ICLE/ for our list of manually constructed negation words and non-negation phrases. these paths occur in student essays and are therefore often not as simply-stated as the prompt parts, this is a little bit more complicated than just counting the containing sentence’s negations since the sentence may contain a lot of additional material. To do this, we construct a list of all the dependency nodes in the stancetaking path as well as all of their dependency tree children. We then remove from this list any node that, in the sentence, occurs after the last node in the stancetaking path. The total negation count we are looking for is the number of nodes in this list that correspond to negation words (unless the negation word begins a negation phrase). Thus, because the word “not” is the child of “root” in the path “itself-root-of-evil” we identified in Figure 2, we consider this path to have been negated one time. Finally, we sum the prompt part negations and the stancetaking path negations. If this sum is even, we believe that the relevant stancetaking path agrees with the prompt part in the instance. 
If it is odd, however (as in the case of the prompt part and stancetaking text in the dependency tree figures), we believe that it disagrees with the prompt part. To illustrate why we are concerned with whether this sum is even, consider the following examples. If both the prompt part and the stancetaking text are negated, both disagree with the opposite of the prompt part’s stance. Thus, they agree with each other, and their negation sum is even (2). If the stancetaking path was negated twice, however, the sum would be odd (3) due to the stance path’s double negations canceling each other out, and the stancetaking path would disagree with the prompt part. 4.1.3 Deriving path-based features We extract four features from the relevant stancetaking dependency paths identified and stancelabeled so far, as described below. The first feature encodes the count of relevant stancetaking paths that appear to agree with the prompt part. The second feature encodes the count of relevant stancetaking paths that appear to disagree with the prompt part. While we expect these first two features to be correlated with the agreement and disagreement classes, respectively, they may not be sufficient to distinguish between agreeing and disagreeing instances. It is possible, for example, that both features may be greater than zero in a single instance if we have identified one stancetaking path that appears to agree 2178 with the prompt part and another stancetaking path that appears to disagree with the prompt part. It is not clear whether this situation is indicative of only the Neutral class, or perhaps it indicates partial (Somewhat) (Dis)Agreement, or maybe our method of detecting disagreement is not reliable enough, and it therefore makes sense, when we get these conflicting signals, to ignore them entirely and just assign the instance to the most frequent (Agree Strongly) class. For that matter, if neither feature is greater than zero, does this mean that the instance Never Addressed the prompt part, or does it instead mean that our method for identifying stancetaking paths doesn’t have high enough recall to work on all instances? We let our learner sort these problems out by adding two more binary features to our instances, one which indicates that both of the first two features are zero, and one that indicates whether both are greater than zero. 4.2 Knowledge-Based Features Our second feature type is composed of five linguistically informed binary features that correspond to five of the six classes in our fine-grained stance classification task. Intuitively, if an instance has one of these features turned on, it should be assigned to the feature’s corresponding class. 1. Neutral. Stancetaking text indicating neutrality tends to be phrased somewhat differently than stancetaking text indicating any other class. In particular, neutral text often makes claims that are about the prompt part’s subject, but which are tangential to the proposition expressed in the prompt part. For this reason, we search the essay for words that match the prompt part’s subject lemmatically. After identifying a sentence that is about the prompt part’s subject in this way, we check whether the sentence begins with any neutral indicating phrase.7 If we find a sentence that both begins with a neutral phrase and is about the prompt part’s subject, we turn the Neutral feature on. 
Thus, sentences like the following can be captured: “In all probability university students wonder whether or not they spend their time uselessly in studying through four or five years in order to take their degree.” 7We construct a list of neutral phrases for introducing another person’s ideas from a writing skills website (http://www.myenglishteacher.eu/question/ other-ways-to-say-according-to/). 2. (Dis)Agree Somewhat. In order to set the values of the features associated with the Somewhat classes, we first identify relevant stancetaking paths as described above. We then trim the list of paths by removing any path whose governor or subject does not have a hedge word as an adverb modifier child in the dependency tree.8 Thus, we are able to determine that the essay containing the sentence “There is nearly no place left for dream and imagination” is likely to belong to one of the Somewhat classes w.r.t. the prompt part “There is no longer a place for dreaming and imagination.” The question now is how to determine which (if any) of the Somewhat classes it should belong to. We analyze all the paths from the list for negation in much the same way we described above, but with one major difference. We hypothesize that when taking a Somewhat stance, students are more likely to explicitly state that the stance being taken is their opinion rather than stating the stance bluntly without attribution. For example, one Disagree Somewhat essay includes the sentence, “I never believed these people were honest if saying that money is just the root of all evil.” In order to determine that this sentence contains an indication of the Disagree Somewhat class, we need to account for the negation that occurs at the beginning, far away from the stancetaking path (money-rootof-evil). To do this, we semantically parse the sentence using SEMAFOR (Das et al., 2010). Each of the semantic frames detected by SEMAFOR describes an event that occurs in a sentence, and the event’s frame elements may be the people or other entities that participate in the event. One of the semantic frames detected in this example sentence describes a Believer (I) and the content of his or her belief (all the text after “believed”). Because the sentence includes a semantic frame that (1) contains a first person (I, we) Cognizer, Speaker, Perceiver, or Believer element, (2) contains an element that covers all the text in the dependency path (a Content frame element, in this case), and (3) the word that triggers the frame (“believed”) has a negator child in the dependency tree, we add one to this relevant stancetaking path’s negation count. This makes this hedged stancetaking path’s negation count odd, so we believe that this sentence likely disagrees with its instance’s prompt part somewhat. If we find a hedged stancetaking 8See our website at http://www.hlt.utdallas. edu/˜persingq/ICLE/ for our manually constructed list of hedge words. 2179 path with an odd negation count, we turn on the Disagree Somewhat feature. Similarly, if we find a hedged stancetaking path with an even negation count, we turn on the Agree Somewhat feature. 3. (Dis)Agree Strongly. When we believe there is strong evidence that an instance should belong to one of the Strongly classes, we turn on the corresponding (Dis)Agree Strongly feature. In particular, if we find a relevant stancetaking path that appears to agree with the prompt part (as described in Section 4.1.2), but do not find any such path that appears to disagree with it, we turn on the Agree Strongly feature. 
Similarly, if we find a relevant stancetaking path that appears to disagree with the prompt part, but do not find a relevant stancetaking path that appears to agree with it, we turn on the Disagree Strongly feature. 5 Evaluation 5.1 Experimental Setup Data partition. All our results are obtained via leave-one-prompt-out cross-validation experiments. So, in each fold experiment, we partition the instances from our 11 prompts into a training set (10 prompts) and a test set (1 prompt). Evaluation metrics. We employ two metrics to evaluate our systems: (1) micro F-score, which treats each instance as having equal weight; and (2) macro F-score, which treats each class as having equal weight.9 To gain insights into how different systems perform on different classes, we additionally report per-class F-scores. Training. We train the baselines and our approach using two learning algorithms, MALLET’s (McCallum, 2002) implementation of maximum entropy (MaxEnt) classification and our own implementation of the one nearest neighbor (1NN) algorithm using the cosine similarity metric. Note that these two learners have their own strengths and weaknesses: in comparison to 1NN, MaxEnt is better at exploiting high-dimensional features but less robust to skewed class distributions. For the baseline systems, we select the learner by performing cross validation on the training folds to maximize the average of micro and macro F-scores in each fold experiment. When training our approach, we perform exhaustive feature selection to determine which sub9Since stance classification is a multiclass, single-label task, micro F-score, precision, recall, and accuracy are all equivalent. set of the four sets of features (i.e., n-gram, duplicated Faulkner, path-based, and knowledge-based features) should be used. Specifically, we select the feature groups and learner jointly by performing cross validation on the training folds, choosing the combination yielding the highest average of micro and macro F-scores in each fold experiment. To prevent any feature type from dominating the others, to each feature we apply a weight of one divided by the number of features having its type. Testing. In case of a tie when applying 1NN, the tie is broken by selecting the class appearing higher in Table 2. 5.2 Results and Discussion Results on fine-grained essay stance classification are shown in Table 3. The first four rows show our baselines’ performances. Among the four baselines, Always Agree Strongly performs best w.r.t. micro F-score, obtaining a score of 55.6%, whereas Duplicated Faulkner performs best w.r.t. macro F-score, obtaining a score of 15.6%. Despite its poor performance, Duplicated Faulkner is a state-of-the-art approach on this task. Its poor performance can be attributed to three major factors. First, it was intended to identify only Agree and Disagree instances (note that Faulkner simply removed neutral instances from his experimental setup), which should not prevent them from performing well w.r.t. micro F-score. Second, it is far too permissive, generating features from a large majority of sentences while relevant sentences are far rarer. Third, while it does succeed at predicting Disagree Strongly far more frequently than either of the other baselines that excludes the Faulkner feature set, the problem’s class skewness means that a learner is much more likely to be punished for predicting minority classes, which are more difficult to predict with high precision. 
The fact that it makes an attempt to solve the problem rather than relying on class skewness for good performance makes Duplicated Faulkner a more interesting baseline than either N-Gram or Always Agree Strongly, even though both technically outperform it w.r.t. micro F-score. Similarly, the statistically significant improvements in micro and macro F-score our approach achieves over the best baselines are more impressive when taking the skewness problem into consideration. The results of our approach, which has access 2180 System Micro-F Macro-F A+ A− Neu D− D+ Nev 1 Always Agree Strongly 55.6 11.9 71.4 .0 .0 .0 .0 .0 2 N-Gram 55.4 12.0 71.3 .0 .0 .0 .5 .0 3 Duplicated Faulkner 50.8 15.6 66.8 4.0 .0 .0 22.9 .0 4 N-Gram + Duplicated Faulkner 53.4 15.4 69.1 2.5 .0 .0 20.6 .0 5 Our approach 60.6 20.1 73.6 .0 .0 2.1 44.8 .0 Table 3: Cross-validation results for fine-grained essay stance classification, including per-class F-scores for Agree Strongly (A+), Agree Somewhat (A−), Neutral (Neu), Disagree Somewhat (D−), Disagree Strongly (D+), and Never Addressed (Nev). to all four feature groups, are shown in row 5 of the table. It obtains micro and macro F-scores of 60.6% and 20.1%, which correspond to statistically significant relative error reductions over the best baselines of 11.3% and 5.3%, respectively.10 Recall that we turned on one of our knowledgebased features only when we believed there was strong evidence that an instance belonged to its associated class. To get an idea of how useful these features are, we calculate the precision, recall, and F-score that would be obtained for each class if we treated our knowledge-based features as heuristic classifiers. The respective precisions, recalls, and F-scores we obtained are: 0.66/0.28/0.40 (A+), 0.50/0.02/0.04 (A−), 0.00/0.00/0.00 (Neu), 0.50/0.01/0.02 (D−), and 0.63/0.31/0.42 (D+). Since the rule predictions are encoded as features for the learner, they may not necessarily be used by the learner even if the underlying rules are precise. For instance, despite the rule’s high precision on the Agree Somewhat class, the learner did not make use of its predictions due to its low coverage. 5.3 Additional Experiments Since all the systems we examined fared poorly on identifying Somewhat classes, one may wonder how these systems would perform if we considered a simplified version of the task where we merged each Somewhat class with the corresponding Strongly class. In particular, since Faulkner’s approach was originally not designed to distinguish between Strongly and Somewhat classes, it may seem fairer to compare our approach against Duplicated Faulkner on the four-class essay stance classification task, where stance can take one of four values: Agree (created by merging Agree 10All significance tests are approximate randomization tests with p < 0.01. Boldfaced results are significant w.r.t. micro F-score for the Always Agree Strongly baseline, and macro F-score w.r.t. the Duplicated Faulkner baseline. Strongly and Agree Somewhat), Disagree (created by merging Disagree Strongly and Disagree Somewhat), Neutral, and Never Addressed. In the results for the different systems on this four-class stance classification task, shown in Table 4, we see that the same patterns we noticed in the six-class version of the task persist. The approaches’ relative order w.r.t. micro and macro Fscore remains the same, though they are adjusted upwards due to the problem’s increased simplicity. 
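The micro and macro F-scores and the relative error reductions reported above can be computed from per-instance predictions with a short routine such as the following Python sketch (ours, for illustration); as noted in footnote 9, micro F-score reduces to accuracy in this single-label multiclass setting.

    def per_class_f1(gold, pred, label):
        tp = sum(g == p == label for g, p in zip(gold, pred))
        if tp == 0:
            return 0.0
        prec = tp / sum(p == label for p in pred)
        rec = tp / sum(g == label for g in gold)
        return 2 * prec * rec / (prec + rec)

    def micro_macro_f1(gold, pred, labels):
        # Micro F: every instance weighted equally; with one label per instance
        # this is just accuracy. Macro F: every class weighted equally.
        micro = sum(g == p for g, p in zip(gold, pred)) / len(gold)
        macro = sum(per_class_f1(gold, pred, l) for l in labels) / len(labels)
        return micro, macro

    def relative_error_reduction(baseline_f, system_f):
        # e.g. (60.6 - 55.6) / (100 - 55.6) is roughly 0.113, i.e. the 11.3%
        # micro-F error reduction over Always Agree Strongly reported above.
        return (system_f - baseline_f) / (100.0 - baseline_f)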
Our approach’s performance on Agree increases (compared to Agree Strongly) because Agree is a bigger class, making predictions of the class safer. Our approach’s performance decreases on Disagree (compared to Disagree Strongly) since it is not good at predicting Disagree Somewhat instances which are part of the class. 5.4 Error Analysis To gain additional insights into our approach, we analyze its six major sources of error below. Stances not presented in a straightforward manner. As an example, consider “To my opinion this technological progress triggers off the imagination in a certain way.” To identify this sentence as strongly disagreeing with the proposition “there is no longer a place for dreaming and imagination”, we need to understand (1) the world knowledge that technological progress is occurring, (2) that “triggers off the imagination in a certain way” means that the technological progress coincides with imagination occurring, (3) that if imagination is occurring, there must be “a place for dreaming and imagination”, and (4) that the prompt part is negated. In general, in order to construct reliable features to increase our coverage of essays that express their stance like this, we would need additional world knowledge and a deeper understanding of the text. Rhetorical statements occasionally misidentified as stancetaking. For example, our method 2181 System Micro-F Macro-F A Neu D Nev 1 Always Agree Strongly 64.8 19.7 78.7 .0 .0 .0 2 N-Gram 64.3 19.7 78.2 .0 .8 .0 3 Duplicated Faulkner 62.3 25.1 75.1 .0 25.2 .0 4 N-Gram + Duplicated Faulkner 62.6 23.7 75.8 .0 19.0 .0 5 Our approach 67.6 29.1 78.5 .0 38.0 .0 Table 4: Cross-validation results for four-class essay stance classification for Agree (A), Neutral (Neu), Disagree (D), and Never Addressed (Nev). for identifying stancetaking paths misidentifies “I am going to discuss the topic that television is the opium of the masses in modern society” as stancetaking. To handle this, we need to incorporate more sophisticated methods for detecting rhetorical statements than those we are using (e.g., ignoring sentences ending in question marks). Negation expressed without negation words. Our techniques for capturing negation are unable to detect when negation is expressed without the use of simple negation words. For example, “In this sense money is the root of life” should strongly disagree with “money is the root of all evil”. The author replaced “life” with “evil”, and detecting that this constitutes something like negation would require semantic knowledge about words that are somehow opposite of each other. Insufficient feature/heuristic coverage of the Disagree Strongly class. Our stancetaking pathbased features that we identified as intuitively having a connection to the Disagree Strongly class together cover only 51% of Disagree Strongly instances, meaning that it is in principle impossible for our system to identify the remaining 49%. However, our decision to incorporate only features that are expected to have fairly high precision for some class was intentional, as the lesson we learned from the Faulkner-based system is that it is difficult to learn a good classifier for stance classification using a large number of weakly or non-predictive features. To solve this problem, we would therefore need to exploit other aspects of strongly disagreeing essays that act as reliable predictors of the class. Rarity of minority class instances. 
It is perhaps not surprising that our learning-based approach performs poorly on the minority classes. Even though the knowledge-based features were designed in part to improve the prediction of minority classes, our results suggest that the resulting features were not effectively exploited by the learners. To address this problem, one could employ a hybrid rule-based and learning-based approach where we use our machine-learned classifier to classify an instance only if it cannot be classified by any of these rules. Lack of obvious similarity between instances of the same class. For example, if the most straightforward stancetaking sentence in an Agree Somewhat instance reads something like this, “In conclusion, I will not go to such extremes as to declare nihilistically that university does not prepare me for the real world in the least”, (given the prompt part “Most university degrees do not prepare us for real life”), and we somehow managed to identify the instance’s class as Agree Somewhat, what would the instance have in common with other Agree Somewhat instances? Given the numerous ways of expressing a stance, we believe a deeper understanding of essay text is required in order automatically detect how instances like this are similar to instances of the same class, and such similarities are required for learning in general. 6 Conclusion We examined the new task of fine-grained essay stance classification, in which we determine stance for each prompt part and allow stance to take one of six values. We addressed this task by proposing two novel types of features, stancetaking path-based features and knowledge-based features. In an evaluation on 826 argumentative essays, our learning-based approach, which combines our novel features with n-gram features and Faulkner’s features, significantly outperformed four baselines, including our re-implementation of Faulkner’s system. Compared to the best baselines, our approach yielded relative error reductions of 11.3% and 5.3%, in micro and macro Fscore, respectively. Nevertheless, accurately predicting the Somewhat, Neutral, and Never Addressed stances remains a challenging task. To stimulate further research on this task, we make all of our stance annotations publicly available. 2182 Acknowledgments We thank the three anonymous reviewers for their detailed comments. This work was supported in part by NSF Grants IIS-1219142 and IIS-1528037. Any opinions, findings, conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views or official policies, either expressed or implied, of NSF. References Amjad Abu-Jbara, Ben King, Mona Diab, and Dragomir Radev. 2013. Identifying opinion subgroups in arabic online discussions. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 829–835. Rakesh Agrawal, Sridhar Rajagopalan, Ramakrishnan Srikant, and Yirong Xu. 2003. Mining newsgroups using networks arising from social behavior. In Proceedings of the 12th International Conference on World Wide Web, pages 529–535. Yigal Attali and Jill Burstein. 2006. Automated essay scoring with E-rater v.2.0. Journal of Technology, Learning, and Assessment, 4(3). Alexandra Balahur, Zornitsa Kozareva, and Andr´es Montoyo. 2009. Determining the polarity and source of opinions expressed in political debates. In Proceedings of the 10th International Conference on Computational Linguistics and Intelligent Text Processing, pages 468–480. 
Mohit Bansal, Claire Cardie, and Lillian Lee. 2008. The power of negative thinking: Exploiting label disagreement in the min-cut classification framework. In Proceedings of the 22nd International Conference on Computational Linguistics: Companion volume: Posters, pages 15–18. Or Biran and Owen Rambow. 2011. Identifying justifications in written dialogs. In Proceedings of the 2011 IEEE Fifth International Conference on Semantic Computing, pages 162–168. Filip Boltuˇzi´c and Jan ˇSnajder. 2014. Back up your stance: Recognizing arguments in online discussions. In Proceedings of the First Workshop on Argumentation Mining, pages 49–58. Clinton Burfoot, Steven Bird, and Timothy Baldwin. 2011. Collective classification of congressional floor-debate transcripts. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1506–1515. Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37. Dipanjan Das, Nathan Schneider, Desai Chen, and Noah A. Smith. 2010. Probabilistic frame-semantic parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 948–956. Adam Faulkner. 2014. Automated classification of stance in student essays: An approach using stance target information and the Wikipedia link-based measure. In Proceedings of the Twenty-Seventh International Florida Artificial Intelligence Research Society Conference. Sylviane Granger, Estelle Dagneaux, Fanny Meunier, and Magali Paquot. 2009. International Corpus of Learner English (Version 2). Presses universitaires de Louvain. Kazi Saidul Hasan and Vincent Ng. 2013. Stance classification of ideological debates: Data, models, features, and constraints. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 1348–1356. Ken Hyland. 2005. Metadiscourse: Exploring interaction in writing. Continuum Discourse. Continuum, London. Andrew Kachites McCallum. 2002. MALLET: A Machine Learning for Language Toolkit. http: //mallet.cs.umass.edu. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60. Akiko Murakami and Rudy Raymond. 2010. Support or oppose? classifying positions in online debates from reply activities and opinion expressions. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 869–875. Isaac Persing and Vincent Ng. 2015. Modeling argument strength in student essays. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 543–552. Parinaz Sobhani, Diana Inkpen, and Stan Matwin. 2015. From argumentation mining to stance classification. In Proceedings of the 2nd Workshop on Argumentation Mining, pages 67–77. Swapna Somasundaran and Janyce Wiebe. 2010. Recognizing stances in ideological on-line debates. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 116–124. 2183 Dhanya Sridhar, James Foulds, Bert Huang, Lise Getoor, and Marilyn Walker. 2015. Joint models of disagreement and stance in online debate. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 116– 125. Christian Stab and Iryna Gurevych. 2014. Identifying argumentative discourse structures in persuasive essays. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 46–56. Matt Thomas, Bo Pang, and Lillian Lee. 2006. Get out the vote: Determining support or opposition from congressional floor-debate transcripts. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 327–335. Marilyn Walker, Pranav Anand, Rob Abbott, and Ricky Grant. 2012. Stance classification using dialogic properties of persuasion. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 592–596. Yi-Chia Wang and Carolyn P. Ros´e. 2010. Making conversational structure explicit: Identification of initiation-response pairs within online discussions. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 673–676. Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 39(2–3):165–210. Ainur Yessenalina, Yisong Yue, and Claire Cardie. 2010. Multi-level structured models for documentlevel sentiment classification. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1046–1056. 2184
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2185–2194, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics A New Psychometric-inspired Evaluation Metric for Chinese Word Segmentation Peng Qian Xipeng Qiu∗ Xuanjing Huang Shanghai Key Laboratory of Intelligent Information Processing, Fudan University School of Computer Science, Fudan University 825 Zhangheng Road, Shanghai, China {pqian11, xpqiu, xjhuang}@fudan.edu.cn Abstract Word segmentation is a fundamental task for Chinese language processing. However, with the successive improvements, the standard metric is becoming hard to distinguish state-of-the-art word segmentation systems. In this paper, we propose a new psychometric-inspired evaluation metric for Chinese word segmentation, which addresses to balance the very skewed word distribution at different levels of difficulty1. The performance on a real evaluation shows that the proposed metric gives more reasonable and distinguishable scores and correlates well with human judgement. In addition, the proposed metric can be easily extended to evaluate other sequence labelling based NLP tasks. 1 Introduction Word segmentation is a fundamental task for Chinese language processing. In recent years, Chinese word segmentation (CWS) has undergone great development, which is, to some degree, driven by evaluation conferences of CWS, such as SIGHAN Bakeoffs (Emerson, 2005; Levow, 2006; Jin and Chen, 2008; Zhao and Liu, 2010). The current state-of-the-art methods regard word segmentation as a sequence labeling problem (Xue, 2003; Peng et al., 2004). The goal of sequence labeling is to assign labels to all elements in a sequence, which can be handled with supervised learning algorithms, such as maximum entropy (ME) (Berger et al., 1996), conditional random fields (CRF) (Lafferty et al., 2001) and Perceptron (Collins, 2002). ∗Corresponding author. 1We release the word difficulty of the popular word segmentation datasets at http://nlp.fudan.edu.cn/data/ . Benefiting from the public datasets and feature engineering, Chinese word segmentation achieves quite high precision after years of intensive research. To evaluate a word segmenter, the standard metric consists of precision p, recall r, and an evenly-weighted F-score f1. However, with the successive improvement of performance, state-of-the-art segmenters are hard to be distinguished under the standard metric. Therefore, researchers also report results with some other measures, such as out-of-vocabulary (OOV) recall, to show their strengths besides p, r and f1. Furthermore, although state-of-the-art methods have achieved high performances on p, r and f1, there exists inconsistence between the evaluation ranking and the intuitive feelings towards the segmentation results of these methods. The inconsistence is caused by two reasons: (1) The high performance is due to the fact that the distribution of difficulties of words is unbalanced. The proportion of trivial cases is very high, such as ‘的(’s)’,‘我们(we)’, which results in that the non-trivial cases are relatively despised. Therefore, a good measure should have a capability to balance the skewed distribution by weighting the test cases. (2) Human judgement depends on difficulties of segmentations. A segmenter can earn extra credits when correctly segmenting a difficult word than an easy word. Conversely, a segmenter can take extra penalties when wrongly segmenting an easy word than a difficult word. 
Taking a sentence and two predicted segmentations as an example: S : 白藜芦醇 是 一 种 酚类 物质 (Trans: Resveratrol is a kind of phenols material.) P1: 白 藜芦 醇 是 一种 酚类 物质 P2: 白藜 芦醇 是 一 种 酚类物 质 We can see that the two segmentations have the 2185 same scores in p, r and f1. But intuitively, P1 should be better than P2, since P2 is worse even on the trivial cases, such as ‘酚类(phenols)’ and ‘物质(material)’. Therefore, we think that an appropriate evaluation metric should not only provide an all-around quantitative analysis of system performances, but also explicitly reveal the strengths and potential weaknesses of a model. Inspired by psychometrics, we propose a new evaluation metric for Chinese word segmentation in this paper. Given a labeled dataset, not all words have the same contribution to judge the performance of a segmenter. Based on psychometric research (Lord et al., 1968), we assign a difficulty value to each word. The difficulty of a word is automatically rated by a committee of segmenters, which are diversified by training on different datasets and features. We design a balanced precision, recall to pay different attentions to words according to their difficulties. We also give detailed analysis on a real evaluation of Chinese word segmentation with our proposed metric. The analysis result shows that the new metric gives a more balanced evaluation result towards the human intuition of the segmentation quality. We will release the weighted datasets focused this paper to the academic community. Although our proposed metric is applied to Chinese word segmentation for a case study, it can be easily extended to other sequence labelling based NLP tasks. 2 Standard Evaluation Metric The standard evaluation usually uses three measures: precision, recall and balanced F-score. Precision p is defined as the number of correctly segmented words divided by the total number of words in the automatically segmented corpus. Recall r is defined as the number of correctly segmented words divided by the total number of words in the gold standard, which is the manually annotated corpus. F-score f1 the harmonic mean of precision and recall. Given a sentence, the gold-standard segmentation of a sentence is w1, · · · , WN, N is the number of words. The predicted segmentation is w′ 1, · · · , w′ N′, N′ is the number of words. Among that, the number of words correctly identified by the predicted segmentation is c, and the number of incorrectly predicted words is e. p, r and f1 are defined as follows: p = c N′ , (1) r = c N , (2) f1 = 2 × p × r p + r . (3) As a complement to these metrics, researchers also use the recall of out-of-vocabulary (OOV) words to measure the segmenter’s performance in detecting unknown words. 3 A New Psychometric-inspired Evaluation Metric We involve the basic idea from psychometrics and improve the evaluation metric by assigning weights to test cases. 3.1 Background Theory This work is inspired by the test theory in psychometrics (Lord et al., 1968). Psychologists, as well as educators, have studied the way of analyzing items in a psychological test, such as IQ test. The general idea is that test cases should be given different weights, which reflects the effectiveness of a certain item to a certain measuring object. Similarly, we consider an evaluation task as a kind of special psychological test. The psychological traits, or the ability of the model, is not an explicit characteristics. 
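Before turning to the weighted variant, here is a small, self-contained sketch of the standard metric in Eqs. (1)–(3), representing words as character spans (an implementation choice, not something the paper prescribes). Run on the example above, P1 and P2 indeed receive identical p, r and f1, which is the behaviour the new metric is designed to correct.

```python
def to_spans(words):
    """Word list -> set of (start, end) character spans."""
    spans, start = set(), 0
    for w in words:
        spans.add((start, start + len(w)))
        start += len(w)
    return spans

def standard_prf(gold_words, pred_words):
    """Standard p, r, f1 over exactly matching word spans (Eqs. 1-3)."""
    gold, pred = to_spans(gold_words), to_spans(pred_words)
    c = len(gold & pred)                    # correctly segmented words
    p = c / len(pred)                       # c / N'
    r = c / len(gold)                       # c / N
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = "白藜芦醇 是 一 种 酚类 物质".split()
p1   = "白 藜芦 醇 是 一种 酚类 物质".split()
p2   = "白藜 芦醇 是 一 种 酚类物 质".split()
print(standard_prf(gold, p1))   # same p, r, f1 ...
print(standard_prf(gold, p2))   # ... as this clearly worse segmentation
```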
We propose that the test cases for an NLP task should also be assigned a real value to account for the credits that the tagger earns from answering the test case. In analogy to the way of computing difficulty in psychometrics, the difficulty of a target word $w_i$ is defined as the error rate of a committee in the case of word segmentation. Given a committee of $K$ base segmenters, we can get $K$ segmentations for the sentence $w_1, \cdots, w_N$. We use a mark $m_i^k \in \{0, 1\}$ to indicate whether word $w_i$ is correctly segmented by the $k$-th segmenter. The number of words $c_k$ correctly identified by the $k$-th segmenter is

$$c_k = \sum_{i=1}^{N} m_i^k. \quad (4)$$

Thus, we can calculate the degree of difficulty of each word $w_i$:

$$d_i = \frac{1}{K} \sum_{k=1}^{K} (1 - m_i^k). \quad (5)$$

This methodology of measuring test item difficulty is also widely applied in assessing standardized exams such as TOEFL (Service, 2000).

3.2 Psychometric-Inspired Evaluation Metric

Since the distribution of the difficulties of words is very skewed, we design a new metric to balance the weights of different words according to their difficulties. In addition, we should also keep a strictly fair rule for rewards and punishments. Intuitively, if the difficulty of a word is high, a correct segmentation should be given an extra reward; otherwise, if the difficulty of a word is low, it is reasonable to give an extra punishment to a wrong segmentation. Our new metric of precision, recall and balanced F-score is designed as follows.

Balanced Recall Given a new predicted segmentation, the mark $m_i \in \{0, 1\}$ indicates whether word $w_i$ is correctly segmented, and $d_i$ is the degree of difficulty of word $w_i$. According to the difficulties of each word, we can calculate the reward recall $r_{reward}$, which is biased towards the difficult cases:

$$r_{reward} = \frac{\sum_{i=1}^{N} d_i \times m_i}{\sum_{i=1}^{N} d_i}, \quad (6)$$

where $r_{reward} \in [0, 1]$ is a biased recall, which places more attention on the difficult cases and less attention on the easy cases. Conversely, we can calculate another punishment recall $r_{punishment}$, which is biased towards the easy cases:

$$r_{punishment} = \frac{\sum_{i=1}^{N} (1 - d_i) \times m_i}{\sum_{i=1}^{N} (1 - d_i)}, \quad (7)$$

where $r_{punishment} \in [0, 1]$ is a biased recall, which places more attention on the easy cases and less attention on the difficult cases. $r_{punishment}$ can be interpreted as a punishment as follows:

$$r_{punishment} = \frac{\sum_{i=1}^{N} (1 - d_i) \times m_i}{\sum_{i=1}^{N} (1 - d_i)} \quad (8)$$
$$= 1 - \frac{\sum_{i=1}^{N} (1 - d_i) \times (1 - m_i)}{\sum_{i=1}^{N} (1 - d_i)}. \quad (9)$$

From Eq. (9), we can see that an extra punishment is given to a wrong segmentation of a low-difficulty word. In detail, for a word $w_i$ that is easy to segment, its weight $(1 - d_i)$ is relatively high. When its segmentation is wrong, $m_i = 0$. Therefore, $(1 - d_i) \times (1 - m_i) = (1 - d_i)$ will be larger, which results in a smaller final score. To balance the reward and punishment, a balanced recall $r_b$ is used, which is the harmonic mean of $r_{reward}$ and $r_{punishment}$:

$$r_b = \frac{2 \times r_{punishment} \times r_{reward}}{r_{punishment} + r_{reward}}. \quad (10)$$

Balanced Precision Given a new predicted segmentation, the mark $m'_i \in \{0, 1\}$ indicates whether segment $s'_i$ is correctly segmented. $d'_i$ is the degree of difficulty of segment $s'_i$, which is an average difficulty of the corresponding gold words. Similar to balanced recall, we use the same way to calculate the balanced precision $p_b$. Here $N'$ is the number of words in the predicted segmentation, and $d'_i$ is the weight for the predicted segmentation unit $w'_i$. It equals the word difficulty of the corresponding word $w$ that covers the right boundary of $w'_i$ in the gold segmentation.
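Before the corresponding precision formulas, the committee-based difficulty (Eqs. 4–5) and the reward/punishment/balanced recall (Eqs. 6–10) can be sketched as follows, reusing to_spans from the previous block. This is an illustrative reading rather than the released implementation, and it assumes at least one easy and one hard word so the denominators are non-zero.

```python
def word_difficulty(gold_words, committee_outputs):
    """Eq. (5): d_i of each gold word = fraction of the K committee
    segmenters that segment it incorrectly. committee_outputs is a list
    of predicted segmentations (each a word list) for the same sentence."""
    gold = to_spans(gold_words)
    K = len(committee_outputs)
    wrong = {s: 0 for s in gold}
    for pred_words in committee_outputs:
        pred = to_spans(pred_words)
        for s in gold:
            if s not in pred:
                wrong[s] += 1
    spans = sorted(gold)                      # left-to-right gold word order
    return [wrong[s] / K for s in spans], spans

def balanced_recall(difficulty, correct):
    """Eqs. (6)-(10): reward recall (biased towards hard words),
    punishment recall (biased towards easy words) and their harmonic
    mean r_b. `difficulty` holds d_i and `correct` holds m_i in {0, 1},
    both aligned with the gold words."""
    r_reward = (sum(d * m for d, m in zip(difficulty, correct))
                / sum(difficulty))
    r_punish = (sum((1 - d) * m for d, m in zip(difficulty, correct))
                / sum(1 - d for d in difficulty))
    return 2 * r_reward * r_punish / (r_reward + r_punish)
```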
preward = ∑N i=1 (1 −di) × mi ∑N′ i=1 (1 −d′ i) , (11) ppunishment = ∑N i=1 (1 −di) × mi ∑N′ i=1 (1 −d′ i) , (12) pb = 2 × preward × ppunishment preward + ppunishment . (13) (14) Balanced F-score The final balanced F-score is fb = 2 × pbalanced × rbalanced pbalanced + rbalanced . (15) 4 Committee of Segmenters It is infeasible to manually judge the difficulty of each word in a dataset. Therefore, an empirical method is needed to rate each word. Since the difficulty is also not derivable from the observation of the surface forms of the text, we use a committee of automatic segmenters instead. To keep fairness 2187 F1 CiT0, (i = −1, 0, 1) Ci:i+1T0, (i = −1, 0) T−1,0 F2 CiT0, (i = −2, −1, 0, 1, 2) Ci:i+1T0, (i = −2, −1, 0, 1) T−1,0 F3 CiT0, (i = −2, −1, 0, 1, 2) Ci:i+1T0, (i = −2, −1, 0, 1) Ci:i+2T0, (i = −2, −1, 0) T−1,0 Table 1: Feature templates. C represents a Chinese character, and T represents the character-based tag in set {B, M, E, S}. The subscript indicates its position relative to the current character, whose subscript is 0. Ci:j represents the subsequence of characters form relative position i to j. and justice of the committee, we need a large number of diversified committee members. Thus, the grading result of committee is fair and accurate, avoiding the laborious human annotation and the deviation caused by the subjective factor of the artificial judgement. 4.1 Building the Committee Base Segmenters The committee is composed of a series of base segmenters, which are based on discriminative character-based sequence labeling method. Each character is labeled as one of {B, M, E, S} to indicate the segmentation. ‘B’ indicates the beginning character of a word. ‘M’ indicates the middle character of a word. ‘E’ indicates the end character of a word. ‘S’ indicates that the word consists of only a single character. Diversity of Committee To objectively assess the difficulty of a word, we need to maintain a large enough committee with diversity. To encourage diversity among committee members, we train them with different datasets and features. Specifically, each base segmenter adopts one of three types of feature templates (shown in Table 1), and are trained on randomly sampling training sets. To keep a large diversity, we set sampling ratio to be 10%, 20% and 30%. In short, each base segmenter is constructed with a random combination of the candidate feature template and the sampling ratio for training dataset. Size of Committee To obtain a valid and reliable assessment for a word, we need to choose the 10 30 50 70 90 0.0 0.2 0.4 0.6 0.8 1.0 The Size of the Committee Difficulty Figure 1: Judgement of difficulty against the committee size. Each line represents a sampled word. appropriate size of committee. For a given test case, the judgement of its difficulty should be relatively stable. We analyze how the judgement of its difficulty changes as the size of committee increases. Figure 2 show PKU data from SIGHAN 2005 (Emerson, 2005) the difficulty is stable when the sample size is large enough. 4.2 Interpreting Difficulty with Linguistic Features Since we get the difficulty for each word empirically, we naturally want to know whether the difficulty is explainable, as what TOEFL researchers have done (Freedle and Kostin, 1993; Kostin, 2004). We would like to know whether the variation of word difficulty can be partially expalined by a series of traceable linguistic features. 
Based on the knowledge about the characteristics of Chinese grammar and the practical experiences of corpus annotation, we consider the following surface linguistic features. In order to explicitly display the relationship between the linguistic predictors and the distribution of the word difficulty at a micro level, we divide the difficulty scale into ten discrete intervals and calculate the distributions of these linguistic features on different ranges of difficulty. Here, we interpret the difficulties of the words from the perspective of three important linguistic features: Idiom In Chinese, the 4-character idioms have special linguistic structures. These structure usually form a different pattern that is hard for the machine algorithm to understand. Therefore, 2188 11.43% 58.1% (a) Idiom 90.53% (b) Disyllabic words 47.55% 20.06% 0.0-0.1 0.1-0.2 0.2-0.3 0.3-0.4 0.4-0.5 0.5-0.6 0.6-0.7 0.7-0.8 0.8-0.9 0.9-1.0 (c) OOV Figure 2: Difficulty distribution of (a) idioms, (b) dysyllabic words and (c) Out-of-vocabulary words from PKU dataset. Similar pattern has also been found in other datasets. it is reasonable to hypothesize that the an idiom phrase is more likely to be a difficult word for word segmentation task. We can see from Figure 2a that 58.1% of idioms have a difficulty at (0.9,1]. The proportion does increase with the degree of difficulty, which corresponds with the human intuition. Dissyllabic Word Disyllabic word is a word formed by two consecutive Chinese characters. We can see from Figure 2b that the frequency of disyllabic words has a negative correlations with the degree of difficulty. This is an interesting result. It means that a two-syllable word pattern is easy for a machine algorithm to recognize. This is consistent with the lexical statistics (Yip, 2000), which shows that dissyllabic words account for 64% of the common words in Chinese. Out-of-vocabulary Word Processing out-ofvocabulary (OOV) word is regarded as one of the key factors to the improvement of model performance. Since these words never occur in the training dataset, it is for sure that the word segmentation system will find it hard to correctly recognize these words from the contexts. We can see from Figure 2c that OOV generally has high difficulty. However, a lot of OOV is relatively easy for segmenters. All the linguistic predictors above prove that the degree of difficulty, namely the weight for each word, is not only rooted in the foundation of test theory, but also correlated with linguistic intuition. 5 Evaluation with New Metric Here we demonstrate the effectiveness of the proposed method in a real evaluation by reanalyzing the submission results from NLPCC P1 P2 P3 P4 P5 P6 P7 0.8 0.9 Participants ID Score 0.2 0.4 0.6 0.8 f1 fb H (a) Closed Track P2 P1 P8 P5 P7 0.8 0.9 Participants ID Score 0.2 0.4 0.6 0.8 f1 fb H (b) Open Track Figure 3: Comparisons of standard metric and our new metric for the closed track and the open track of NLPCC 2015 Weibo Text Word Segmentation Shared Task. The black lines for f1 and fb are plotted against the left y-axis. The red lines for human judgement scores are plotted against the right y-axis. 2015 Shared Task2 of Chinese word segmentation. The dataset of this shared task is collected from micro-blog text. For convenience, we use WB to represent this dataset in the following discussions. We select the submissions of all 7 participants from the closed track and the submissions of all 2Conference on Natural Language Processing and Chinese Computing. 
http://tcci.ccf.org.cn/conference/2015/ 2189 0 - 0.1 0.1 - 0.2 0.2 - 0.3 0.3 - 0.4 0.4 - 0.5 0.5 - 0.6 0.6 - 0.7 0.7 - 0.8 0.8 - 0.9 0.9 - 1.0 0.7 0.8 0.9 1 Degree of Difficulty Accuracy P1 P2 P3 P4 P5 P6 P7 Figure 4: Accuracies of different participants in Closed Track by different difficulties on WB dataset. 5 participants from the open track. In the closed track, participants could only use information found in the provided training data. In the open track, participants could use the information which should be public and be easily obtained. We compare the standard precision, recall and F-score with our new metric. The result is displayed in Figure 3. Considering the related privacy issues, we will refer to the participants as P1, P2, etc. The order of these participants in the sub-figures is sorted according to the original ranking given by the standard metric in each track. The same ID number refers to the same participants. It is interesting to see that the proposed metric gives out significantly different rankings for the participants, compared to the original rankings. Based on the standard metric, Participant 1 (P1) ranks the top in closed track while P7 is ranked as the worst in both tracks. However, P2 ranks first under the evaluation of the new metric in the Closed track. P7 also get higher ranking than its original one. 5.1 Correlation with Human Judgement To tell whether the standard metric or the proposed metric is more reasonable, we asked three experts to evaluate the quality of the submissions from the participants. We randomly selected 50 test sentences from the WB dataset. For each test sentence, we present all the submitted candidate segmentation results to the human judges in random order. Then, the judges are asked to choose the best candidate(s) with the highest segmentation quality as well as the second-best candidate(s) among all the submissions. Human judges had no access to the source of the sentences. Once we collect the human judgement of the segmentation quality, we can compute the score for each participants. If a candidate segmentation result from a certain participant is ranked first for n times, then this participants earned n point. If second for m times, then this participants earned m 2 points. Then we can get the probability of a participants being ranked the best or sub-best by computing n+ m 2 50 . Finally, we get the humanintuition-based gold ranking of the participants through the means of scores from all the human judges. It is worth noticing that the ranking result of our proposed metric correlates with the human judgements better than that of the standard metric, as is shown in Figure 3. The Pearson correlation between our proposed metric and human judgements are 0.9056 (p = 0.004) for closed session and 0.8799 (p = 0.04) for open session while the Pearson correlation between standard metric and human judgements are only 0.096 (p = 0.836) for closed session and 0.670 (p = 0.216). This evidence strongly supports that the proposed method is a good approximate of human judgements. 5.2 Detailed Analysis Since we have empirically got the degree of difficulty for each word in the test dataset, we can compute the distribution of the difficulty for words that have been correctly segmented. We divided the whole range of difficulty into 10 intervals. Then, we count the ratio of the correct segmented units for each difficulty interval. 
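A minimal sketch of this per-interval accuracy computation (the quantity plotted in Figure 4), assuming a difficulty list and a 0/1 correctness list aligned with the gold words as in the earlier sketches:

```python
def accuracy_by_difficulty(difficulty, correct, n_bins=10):
    """Ratio of correctly segmented gold words in each of the ten
    difficulty intervals [0, 0.1), [0.1, 0.2), ..., [0.9, 1.0]."""
    hits = [0] * n_bins
    totals = [0] * n_bins
    for d, m in zip(difficulty, correct):
        b = min(int(d * n_bins), n_bins - 1)   # place d == 1.0 in the last bin
        totals[b] += 1
        hits[b] += m
    # None marks intervals that contain no gold words at all.
    return [hits[b] / totals[b] if totals[b] else None for b in range(n_bins)]
```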
In this way, we can quantitatively measure to what extent the segmentation system performs on difficult test cases and easy test cases. As is shown in Figure 4, P7 works better on difficult cases than other systems, but the worst on easy cases. This explains why P7 gets good rank based on the new evaluation metric. Besides, 2190 P1 P2 P3 P4 P5 P6 P7 0.6 0.7 0.8 0.9 1 Participants ID Score p ppunishment preward pb (a) Standard and Weighted Precision P1 P2 P3 P4 P5 P6 P7 0.6 0.8 1 Participants ID Score r rpunishment rreward rb (b) Standard and Weighted Recall Figure 5: Comparisons of standard and weighted precision and recall on NLPCC Closed Track. if we compare P1 and P2, we will notice that P2 performs just slightly worse than P1 on easy cases, but much better than P1 on difficult cases. Therefore, conventional evaluation metric rank P1 as the top system because the P1 gets a lot of credits from a large portion of easy cases. Unlike conventional metric, our new metric achieves balance between hard cases and easy cases and ranks P2 as the top system. The experiment result indicates that the new metric can reveal the implicit difference and improvement of the model, while standard metric cannot provide us with such a fine-grained result. 0.3 0.4 0.5 0.6 0.7 0.4 0.6 fb on parallel test 1 fb on parallel test 2 Figure 6: Correlation between the evaluation results fb of two parallel testsets with the proposed metrics on a collection of models. The Pearson correlation is 0.9961, p = 0.000. 5.3 Validity and Reliability Jones (1994) concluded some important criteria for the evaluation metrics of NLP system. It is very important to check the validity and reliability of a new metric. Previous section has displayed the validity of the proposed evaluation metric by comparing the evaluation results with human judgements. The evaluation results with our new metric correlated with human intuition well. Regarding reliability, we perform the paralleltest experiment. We randomly split the test dataset into two halves. These two halves have similar difficulty distribution and, therefore, can be considered as a parallel test. Then different models, including those used in the first experiment, are evaluated on the first half and the second half. The results in Figure 6 shows that the performances of different models with our proposed evaluation metric are significantly correlated in two parallel tests. 5.4 Visualization of the Weight As is known, there might be some annotation inconsistency in the dataset. We find that most of the cases with high weight are really valuable difficult test cases, such as the visualized sentences from WB dataset in Figure 7. In the first sentence, the word ‘BMW 族’ (NOUN.People who take bus, metro and then walk to the destination) is an OOV word and contains English characters. The weight of this word, as expected, is very high. In the second sentence, the word ‘素不相 识’ (VERB.not familiar with each other) is a 4character Chinese idiom. the conjunction word ‘就 算’ (CONJ.even if) has structural ambiguity. It can also be decomposed into a two-word phrase ‘就’ (ADV.just) and ‘算’ (VERB.count). From the visualization of the weight, we can see that these difficult words are all given high weights. 
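For completeness, the precision side of the weighted metric (Eqs. 11–15), which together with balanced recall yields the f_b values used in Figure 6 and in the comparisons of the next section, can be sketched by analogy with balanced recall, as Section 3.2 describes. The assignment of d'_i via the gold word covering the right boundary of each predicted unit follows that description; the exact form below is a reconstruction, not copied from released code.

```python
def balanced_precision(gold_difficulty, gold_spans, pred_spans):
    """Eqs. (11)-(14): precision weighted by difficulty. Each predicted
    unit w'_i takes the difficulty d'_i of the gold word whose span
    covers its right character boundary."""
    gold_d = dict(zip(gold_spans, gold_difficulty))

    def covering_difficulty(span):
        # Gold word (s, e) covers the right boundary of `span` if s < end <= e.
        return next(d for (s, e), d in gold_d.items() if s < span[1] <= e)

    pred_d = [covering_difficulty(s) for s in pred_spans]
    correct = [1 if s in gold_d else 0 for s in pred_spans]   # m'_i
    p_reward = (sum(d * m for d, m in zip(pred_d, correct))
                / sum(pred_d))
    p_punish = (sum((1 - d) * m for d, m in zip(pred_d, correct))
                / sum(1 - d for d in pred_d))
    return 2 * p_reward * p_punish / (p_reward + p_punish)

def balanced_f(p_b, r_b):
    """Eq. (15): harmonic mean of balanced precision and balanced recall."""
    return 2 * p_b * r_b / (p_b + r_b)
```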
2191 Data Corpus Size p r f1 pb rb fb PKU 20% 90.04 89.90 89.97 45.22 43.37 44.28 50% 92.87 91.58 92.22 54.24 49.12 51.55 80% 94.07 92.21 93.13 61.80 54.74 58.05 100% 94.03 92.91 93.47 64.22 59.16 61.59 MSR 20% 92.93 92.58 92.76 45.76 44.13 44.93 50% 95.22 95.18 95.20 63.00 62.22 62.60 80% 95.68 95.74 95.71 67.26 66.96 67.11 100% 96.19 96.02 96.11 70.80 69.45 70.12 NCC 20% 87.32 86.37 86.84 42.16 40.23 41.17 50% 89.34 89.03 89.19 50.31 49.26 49.78 80% 91.42 91.10 91.26 60.48 59.25 59.86 100% 92.00 91.77 91.89 63.72 62.70 63.20 SXU 20% 89.70 89.31 89.50 43.53 42.35 42.93 50% 93.04 92.42 92.73 56.21 54.27 55.23 80% 94.45 93.94 94.19 64.55 62.50 63.51 100% 94.89 94.61 94.75 68.10 66.63 67.36 Table 2: Model evaluation with standard metric and our new metric. Models vary in the amount of training data and feature types. 6 Comparisons on SIGHAN datasets In this section, we give comparisons on SIGHAN datasets. We use four simplified Chinese datasets: PKU and MSR (SIGHAN 2005) as well as NCC and SXU (SIGHAN 2008). For each dataset, we train four segmenters with varying abilities, based on 20%, 50%, 80% and 100% of training data respectively. The used feature template is F2 in Table 1. Table 2 shows the different evaluation results with standard metric and our balanced metric. We can see that the proposed evaluation metric generally gives lower and more distinguishable score, compared with the standard metric. 7 Related work Evaluation metrics has been a focused topic for a long time. Researchers have been trying to evaluate various NLP tasks towards human intuition (Papineni et al., 2002; Graham, 2015a; Graham, 2015b). Previous work (Fournier and Inkpen, 2012; Fournier, 2013; Pevzner and Hearst, 2002) mainly deal with the near-miss error case on the context of text segmentation. Much attention has been given to different penalization for the error. These work criticize that traditional metrics such as precision, recall and F-score, consider all the error similar. In this sense, some studies aimed at assigning different penalization to the word. We think that these explorations can be regarded as the foreshadowing of our evaluation metric that balances reward and punishment. Our paper differs from previous research in that we take the difficulty of the test case into consideration, while previous works only focus on the variation of error types and penalisation. We involve the basic idea from psychometrics and improve the evaluation with a balance between difficult cases and easy cases, reward and punishment. We would like to emphasize that our weighted evaluation metric is not a replacement of the traditional precision, recall, and F-score. Instead, our new weighted metrics can reveal more details that traditional evaluation may not be able to present. 8 Conclusion In this paper, we put forward a new psychometric-inspired method for Chinese word segmentation evaluation by weighting all the words in test dataset based on the methodology applied to psychological tests and standardized exams. We empirically analyze the validity and reliability of the new metric on a real evaluation dataset. Experiment results reveal that our weighted evaluation metrics gives more reasonable and distinguishable scores and 2192 B M W 0.00 0.15 0.30 0.45 0.60 0.75 0.90 by this way travel of people thus be called BMW man (a) Sentence 1043 in WB dataset 0.00 0.15 0.30 0.45 0.60 0.75 0.90 even if not familiar still will give a hand help (b) Sentence 3852 in WB dataset Figure 7: Visualising the word weight of WB dataset. 
correlates well with human judgement. We will release the weighted datasets to the academic community. Additionally, the proposed evaluation metric can be easily extended to word segmentation task for other languages (e.g. Japanese) and other sequence labelling-based NLP tasks, with just tiny changes. Our metric also points out a promising direction for the researchers to take into the account of the biased distribution of test case difficulty and focus on tackling the hard bones of natural language processing. Acknowledgments We would like to thank the anonymous reviewers for their valuable comments. This work was partially funded by National Natural Science Foundation of China (No. 61532011, 61473092, and 61472088), the National High Technology Research and Development Program of China (No. 2015AA015408). References A.L. Berger, V.J. Della Pietra, and S.A. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing. T. Emerson. 2005. The second international Chinese word segmentation bakeoff. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, pages 123–133. Jeju Island, Korea. Chris Fournier and Diana Inkpen. 2012. Segmentation similarity and agreement. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 152–161. Association for Computational Linguistics. Chris Fournier. 2013. Evaluating text segmentation using boundary edit distance. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 1702–1712. Roy Freedle and Irene Kostin. 1993. The prediction of toefl reading item difficulty: Implications for construct validity. Language Testing, 10(2):133–170. Yvette Graham. 2015a. Improving evaluation of machine translation quality estimation. In Proceedings of the 53th Annual Meeting on Association for Computational Linguistics. Yvette Graham. 2015b. Re-evaluating automatic summarization with bleu and 192 shades of rouge. In Proceedings of EMNLP. C. Jin and X. Chen. 2008. The fourth international Chinese language processing bakeoff: Chinese word segmentation, named entity recognition and Chinese pos tagging. In Sixth SIGHAN Workshop on Chinese Language Processing, page 69. Karen Sparck Jones. 1994. Towards better nlp system evaluation. In Proceedings of the workshop on Human Language Technology, pages 102–107. Association for Computational Linguistics. Irene Kostin. 2004. Exploring item characteristics that are related to the difficulty of toefl dialogue items. ETS Research Report Series, 2004(1):i–59. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning. Gina-Anne Levow. 2006. The third international chinese language processing bakeoff: Word segmentation and named entity recognition. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, pages 108–117, Sydney, Australia, July. 2193 Frederic M Lord, Melvin R Novick, and Allan Birnbaum. 1968. Statistical Theories of Mental Test Scores. Addison-Wesley. 
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. F. Peng, F. Feng, and A. McCallum. 2004. Chinese segmentation and new word detection using conditional random fields. Proceedings of the 20th international conference on Computational Linguistics. Lev Pevzner and Marti A Hearst. 2002. A critique and improvement of an evaluation metric for text segmentation. Computational Linguistics, 28(1):19–36. Educational Testing Service. 2000. Computer-Based TOEFL Score User Guide. Princeton, NJ. N. Xue. 2003. Chinese word segmentation as character tagging. Computational Linguistics and Chinese Language Processing, 8(1):29–48. Po-Ching Yip. 2000. The Chinese Lexicon: A Comprehensive Survey. Psychology Press. H. Zhao and Q. Liu. 2010. The cips-sighan clp 2010 chinese word segmentation bakeoff. In Proceedings of the First CPS-SIGHAN Joint Conference on Chinese Language Processing. 2194
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2195–2204, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Temporal Anchoring of Events for the TimeBank Corpus Nils Reimers†‡, Nazanin Dehghani†‡ ∗, Iryna Gurevych†‡ †Ubiquitous Knowledge Processing Lab (UKP-TUDA) Department of Computer Science, Technische Universit¨at Darmstadt ‡ Research Training Group AIPHES Technische Universit¨at Darmstadt http://www.ukp.tu-darmstadt.de Abstract Today’s extraction of temporal information for events heavily depends on annotated temporal links. These so called TLINKs capture the relation between pairs of event mentions and time expressions. One problem is that the number of possible TLINKs grows quadratic with the number of event mentions, therefore most annotation studies concentrate on links for mentions in the same or in adjacent sentences. However, as our annotation study shows, this restriction results for 58% of the event mentions in a less precise information when the event took place. This paper proposes a new annotation scheme to anchor events in time. Not only is the annotation effort much lower as it scales linear with the number of events, it also gives a more precise anchoring when the events have happened as the complete document can be taken into account. Using this scheme, we annotated a subset of the TimeBank Corpus and compare our results to other annotation schemes. Additionally, we present some baseline experiments to automatically anchor events in time. Our annotation scheme, the automated system and the annotated corpus are publicly available.1 1 Introduction In automatic text analysis, it is often important to precisely know when an event occurred. A user ∗Guest researcher from the School of Electrical and Computer Engineering, University of Tehran. 1https://www.ukp.tu-darmstadt.de/data /timeline-generation/temporal-anchoring -of-events/ might be interested in retrieving news articles that discuss certain events which happened in a given time period, for example articles discussing car bombings in the 1990s. The user might not only be interested in articles from that time period, but also in more recent articles that cover events from that period. Knowing when an event happened is also essential for time aware summarization, automated timeline generation as well as automatic knowledge base creation. In many cases, time plays a crucial role for facts stored in a knowledge base, for example for the facts when a person was born or died. Also, some facts are only true for a certain time period, like being the president of a country. Event extraction can be used to automatically infer many facts for knowledge bases, however, to be useful, it is crucial that the date when the event happened can precisely be extracted. The TimeBank Corpus (Pustejovsky et al., 2003) is a widely used corpus using the TimeML specifications (Saur´ı et al., 2004) for the annotations of event mentions and temporal expressions. In order to anchor events in time, the TimeBank Corpus uses the concept of temporal links (TLINKs) that were introduced by Setzer (2001). A TLINK states the temporal relation between two events or an event and a time expression. For example, an event could happen before, simultaneous, or after a certain expression of time. The TimeBank Corpus served as dataset for the shared tasks TempEval-1, 2 and 3 (Verhagen et al., 2007; Verhagen et al., 2010; UzZaman et al., 2013). 
In this paper we describe a new approach to anchor every event in time. Instead of using temporal links between events and temporal expressions, we consider the event time as an argument of the event mention. The annotators are asked to write down the date when an event happened in a normalized format for every event mention. The annotation effort is for this reason identical 2195 to the number of event mentions, i.e. for a document with 200 event mentions, the annotators must perform 200 annotations. When annotating the event mentions, the annotators are asked to take the complete document into account. Section 3 presents our annotation scheme, and section 4 gives details about the conducted annotation study. The number of possible TLINKs scales quadratic with the number of events and temporal expressions. Some documents of the TimeBank Corpus contain more than 200 events and temporal expressions, resulting in more than 20.000 possible TLINKs. Hand-labeling all links is extremely time-consuming and even when using transitive closures and computational support, it is not feasible to annotate all possible TLINKs for a larger set of documents. Therefore, all annotation studies limited the number of TLINKs to annotate. For example, in the original TimeBank Corpus, only links that are salient were annotated. Which TLINKs are salient is fairly vague and results in a comparably low reported inter-annotator agreement. Furthermore, around 62% of all events do not have any attached TLINK, i.e. for most of the events in the original TimeBank Corpus, no temporal statement can be made. In contrast to the sparse annotation of TLINKs used in the TimeBank Corpus, the TimeBankDense Corpus (Cassidy et al., 2014) used a dense annotation and all temporal links for events and time expressions in the same sentence and in directly succeeding sentences were annotated. For a subset of 36 documents with 1729 events and 289 time expressions, they annotated 12,715 temporal links, which is around 6.3 links per event and time expression. Besides the large effort needed for a dense annotation, a major downside is the limitation that events and time expressions must be in the same or in adjacent sentences. Our annotation study showed that in 58.72% of the cases the most informative temporal expression is more than one sentence apart from the event mention. For around 25% of the events, the most informative temporal expression is even five or more sentences away. Limiting the TLINKs to pairs that are at most one sentence apart poses the risk that important TLINKs are not annotated and consequently cannot be learned by automated systems. A further drawback of TLINKs is that it can be difficult or even impossible to encode temporal information that originates from different parts in the text. Given the sentence: December 30th, 2015 - During New Year’s Eve, it is traditionally very busy in the center of Brussels and people gather for the fireworks display. But the upcoming [display]Event was canceled today due to terror alerts. For a human it is simple to infer the date for the event display. But it is not possible to encode this knowledge using TLINKs, as the date is not explicitly mentioned in the text. To make our annotations comparable to the dense TLINK annotation scheme of the TimeBank-Dense Corpus (Cassidy et al., 2014), we annotated the same documents and compare the results in section 5. For 385 out of 872 events (44.14%), our annotation scheme results in a more precise value on which date an event happened. 
Section 6 presents a baseline system to extract event times. For a subset of events, it achieves an F1-score of 49.01% while human agreement for these events is 80.50%. 2 Previous Annotation Work The majority of corpora on events uses sparse temporal links (TLINKs) to enable anchoring of events in time. The original TimeBank (Pustejovsky et al., 2003) only annotated salient temporal relations. The subsequent TempEval competitions (Verhagen et al., 2007; Verhagen et al., 2010; UzZaman et al., 2013) are based on the original TimeBank annotations, but tried to improve the coverage and added some further temporal links for mentions in the same sentence. The MEANtime corpus (van Erp et al., 2015) applied a sparse annotation and only temporal links between events and temporal expressions in the same and in succeeding sentences were annotated. The MEANtime corpus distinguishes between main event mentions and subordinated event mentions, and the focus for TLINKs was on main events. More dense annotations were applied by Bramsen et al. (2006), Kolomiyets et al. (2012), Do et al. (2012) and by Cassidy et al. (2014). While Bramsen et al., Kolomiyets et al., and Do et al. only annotated some temporal links, Cassidy et al. annotated all Event-Event, Event-Time, and TimeTime pairs in the same sentence as well as in directly succeeding sentences leading to the densest annotation for the TimeBank Corpus. 2196 A drawback of the previous annotation works is the limitation that only links between expressions in the same or in succeeding sentences are annotated. In case the important temporal expression, that defines when the event occurred, is more than one sentence away, the TLINK will not be annotated. Consequently, retrieving the information when the event occurred is not possible. Increasing this window size would result in a significantly increased annotation effort as the number of links grows quadratic with the number of expressions. Our annotation is the first for the TimeBank Corpus that does not try to annotate the quadratic growing number of temporal links. Instead, we consider the event time as an argument of the individual event mention and it is annotated directly by the annotators. This reduces the annotation effort by 85% in comparison to the TimeBankDense Corpus. This allows an annotator to annotate significant more documents in the same time. Also, all temporal information, independent where it is mentioned in the document, can be taken into account resulting in a much more precise anchoring of events in time, as section 5 shows. 3 Event Time Annotation Scheme The annotation guidelines for the TimeBank Corpus (Saur´ı et al., 2004) define an event as a cover term for situations that happen or occur. Events can be punctual or last for a period of time. They also consider as events those predicates describing states or circumstances in which something holds true. For the TimeBank Corpus, the smallest extent of text (usually a single word) that expresses the occurrence of an event is annotated. The aspectual type of the annotated events in the TimeBank Corpus can be distinguished into achievement events, accomplishment events, and states (Pustejovsky, 1991). An achievement is an event that results into an instantaneous change of some sort. Examples of achievement events are to find, to be born, or to die. Accomplishment events also result into a change of some sort, however, the change spans over a longer time period. Examples are to build something or to walk somewhere. 
States on the other hand do not describe a change of some sort, but that something holds true for some time, for example, being sick or to love someone. The aspectual type of an event does not only depend on the event itself, but also on the context in which the event is expressed. Our annotation scheme was created with the goal of being able to create a knowledge base from the extracted events in combination with their event times. Punctual events are a single dot on the time axis while events that last for a period of time have a begin and an end point. It can be difficult to distinguish between punctual events and events with a short duration. Furthermore, the documents typically do not report precise starting and ending times for events, hence we decided to distinguish between events that happened at a Single Day and Multi-Day Events that span over multiple days. We used days as the smallest granularity for the annotation as none of the annotated articles contained any information on the hour, the minute or the second when the event happened. In case a corpus contains this information, the annotation scheme could be extended to include this information as well. For Single Day Events, the event time is written in the format YYYY-MM-DD. For Multi-Day Events, the annotator annotates the begin point and the end point of the event. In case no statement can be made on when an event happened, the event will be annotated with the label not applicable. This applies only to 0.67% of the annotated events in the TimeBank Corpus which is mainly due to annotation errors in the TimeBank Corpus. He was sent into space on May 26, 1980. He spent six days aboard the Salyut 6 spacecraft. The first event in this text, sent, will be annotated with the event time 1980-05-26. The second event, spent, is a Multi-Day Event and is annotated with the event time beginPoint=1980-05-26 and endPoint=1980-06-01. In case the exact event time is not stated in the document, the annotators are asked to narrow down the possible event time as precisely as possible. For this purpose, they can annotate the event time with after YYYY-MM-DD and before YYYYMM-DD. In 1996 he was appointed military attache at the Hungarian embassy in Washington. [...] McBride was part of a seven-member crew aboard the Orbiter Challenger in October 1984 The event appointed is annotated after 1996-0101 before 1996-12-31 as the event must have happened sometime in 1996. The Multi-Day Event 2197 part is annotated with beginPoint=after 1984-1001 before 1984-10-31 and endPoint=after 198410-01 before 1984-10-31. To speed up the annotation process, annotators were allowed to write YYYY-MM-xx to express that something happened sometime within the specified month and YYYY-xx-xx to express that the event happened sometime during the specified year. Annotators were also allowed to annotate events that happened at the Document Creation Time with the label DCT. The proposed annotation scheme requires that event mentions are already annotated. For our annotation study we used the event mentions that were already defined in the TimeBank Corpus. In contrast to the annotation of TLINKs, temporal expressions must not be annotated in the corpus. 4 Annotation Study The annotation study was performed on the same subset of documents as used by the TimeBankDense Corpus (Cassidy et al., 2014) with the event mentions that are present in the TempEval-3 dataset (UzZaman et al., 2013). Cassidy et al. 
selected 36 random documents from the TimeBank Corpus (Pustejovsky et al., 2003). These 36 documents include a total of 1498 annotated events. This allows to compare our annotations to those of the TimeBank-Dense Corpus (see section 5). Each document has been independently annotated by two annotators according to the annotation scheme introduced above. We used the freely available WebAnno (Yimam et al., 2013). To speed up the annotation process, the existent temporal expressions that are defined in the TimeBank Corpus were highlighted. These temporal expressions are in principle not required to perform our annotations, but the highlighting of them helps to determine the event time. Figure 1 depicts a sample annotation made by WebAnno. The two annotators were trained on 15 documents distinct from the 36 documents annotated for the study. During the training stage, the annotators discussed the decisions they have made with each other. After both annotators completed the annotation task, the two annotations were curated by one person to derive one final annotation. The curator examined the events where the annotators disagreed and decided on the final annotation. The final annotation might be a merge of the two provided annotations. Figure 1: Sample Annotation made with WebAnno. The violet annotations are existing annotations of temporal expressions from the TimeBank Corpus. The span for the beige annotations, the event mentions, come also from the TimeBank Corpus. Our annotators added the value for the event time for those beige annotations. 4.1 Inter-Annotator-Agreement We use Krippendorff’s α (Krippendorff, 2004) with the nominal metric to compute the InterAnnotator-Agreement (IAA). The nominal metric considers all distinct labels equally distant from one another, i.e. partial agreement is not measured. The annotators must therefore completely agree. Using this metric, the Krippendorff’s α for the 36 annotated documents is α = 0.617. Cassidy et al. (2014) reported a Kappa agreement between 0.56−0.64 for their annotation of TLINKs. Comparing these numbers is difficult, as the annotation tasks were different. According to Landis and Koch (1977), these numbers lie on the border of a moderate and a substantial level of agreement. 4.2 Disagreement Analysis In 648 out of 1498 annotated events, the annotators disagreed on the event time. In 42.3% of the disagreements, the annotators disagreed on whether the event mention is a Single Day Event or a Multi-Day Event. Such disagreement occurs when it is unclear from the text whether the event lasted for one or for several days. For example, an article reported on a meeting and due to a lack of precise temporal information in the document, one annotator assumed that the meeting lasted for one day, the other that it lasted for several days. A different source for the disagreement has been the annotation of states. They can either be annotated with the date where the text gives evidence that they hold true, or they can be annotated as a Multi-Day Event that begins before that date and ends after that date. Different annotations for Multi-Day Events account for 231 out of the 648 disagreements (35.6%). In this category, the annotators disagreed 2198 on the begin point in 110 cases (47.6%), on the end point in 57 cases (24.7%) and on the begin as well as on the end point in 64 cases (27.7%). The Krippendorff’s α for all begin point annotations is 0.629 and for all end point annotations it is 0.737. 
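A compact sketch of the two-annotator Krippendorff's α computation used in Section 4.1 follows, assuming every event mention is labelled by both annotators and that more than one distinct label occurs. With the nominal metric, any two distinct labels count as full disagreement; a different distance function can be plugged in for the relaxed comparison discussed in Section 4.3. The toy labels at the end are illustrative only.

```python
from collections import defaultdict

def krippendorff_alpha(ann1, ann2, delta):
    """Krippendorff's alpha for exactly two annotators, no missing values.
    ann1/ann2: label lists aligned by event mention. delta: distance
    between two labels (0 = identical, 1 = maximally distinct)."""
    assert len(ann1) == len(ann2)
    coincidence = defaultdict(float)   # coincidence matrix o_{c,c'}
    n_c = defaultdict(float)           # marginal label counts
    for a, b in zip(ann1, ann2):
        coincidence[(a, b)] += 1.0
        coincidence[(b, a)] += 1.0
        n_c[a] += 1.0
        n_c[b] += 1.0
    n = sum(n_c.values())              # = 2 * number of event mentions
    d_obs = sum(cnt * delta(a, b) for (a, b), cnt in coincidence.items()) / n
    d_exp = sum(n_c[a] * n_c[b] * delta(a, b)
                for a in n_c for b in n_c) / (n * (n - 1))
    return 1.0 - d_obs / d_exp

# Nominal metric: all distinct labels are equally distant from one another.
nominal = lambda a, b: 0.0 if a == b else 1.0

# Illustrative usage with event-time labels in the scheme's format.
a1 = ["1980-05-26", "DCT", "after 1996-01-01 before 1996-12-31"]
a2 = ["1980-05-26", "1980-05-27", "after 1996-01-01 before 1996-12-31"]
print(krippendorff_alpha(a1, a2, nominal))
```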
A disagreement on Single Day Events was observed for 143 event mentions and accounts for 22.1% of the disagreements. The observed agreement for Single Day Events is 80.5% or α = 0.799. Most disagreements for Single Day Events concerned whether the event occurred on the same date the document was written or before the document was written. 4.3 Measuring Partial Agreement One issue with the strict nominal metric is that it does not take partial agreement into account. In several cases, the two annotators agreed in principle on the event time, but might have labeled it slightly differently. One annotator might have taken more clues from the text into account to narrow down when an event happened. One annotator, for example, annotated an event with the label after 1998-08-01 before 1998-08-31. The second annotator took an additional textual clue into account, namely that the event must have happened in the first half of August 1998, and annotated it as after 1998-08-01 before 1998-08-15. Even though both annotators agree in principle, under the nominal metric this counts as a distinct annotation. To measure this effect, we created a relaxed metric to measure mutual exclusivity:

d_{ME}(a, b) = \begin{cases} 1 & \text{if } a \text{ and } b \text{ are mutually exclusive} \\ 0 & \text{otherwise} \end{cases}

The metric measures whether two annotations can be satisfied at the same time. If the event happened on August 5th, 1998, then the two annotations after 1998-08-01 before 1998-08-31 and after 1998-08-01 before 1998-08-15 would both be satisfied. In contrast, the two annotations after 1998-02-01 and before 1997-12-31 can never be satisfied at the same time and are therefore mutually exclusive. Out of the 648 disagreements, 71 annotations were mutually exclusive. Computing Krippendorff's α with the above metric yields a value of αME = 0.912. 4.4 Annotation Statistics Table 1 gives an overview of the assigned labels. Around 58.21% of the events are either instantaneous events or their duration is at most one day. 41.12% of the events are Multi-Day Events that take place over multiple days. While for Single Day Events there is a precise date for 55.73% of the events, the fraction is much lower for Multi-Day Events. In this category, the begin point is precisely mentioned in the article in only 19.81% of the cases, and the end point in only 15.75%. The most prominent label for Single Day Events is the Document Creation Time (DCT). 48.28% of Single Day Events happened on the day the article was created, 33.49% of these events happened at least one day before the DCT, and 17.43% of the mentions refer to future events. This distribution shows that the news articles and TV broadcast transcripts from the TimeBank Corpus mainly report on events that happened on the same day. For Multi-Day Events, the distribution looks different. In 76.46% of the cases, the event started in the past, and in 65.10% of the cases, it is still ongoing. 4.5 Most Informative Temporal Expression Not all temporal expressions in a text are of the same relevance for an event. In fact, in many cases only a single temporal expression is of importance, namely the expression stating when the event occurred. Our annotations allow us to determine the most informative temporal expression for an event. We define the most informative temporal expression as the expression that has been used by the annotator to determine the event time.
We checked for all annotations whether the event date can be found as a temporal expression in the document and computed the distance to the closest one with a matching value. The distance is measured as the number of sentences. 421 out of 1498 events happened on the Document Creation Time and were excluded from this computation. The Document Creation Time is provided as additional metadata in the TimeBank Corpus, and it is often not explicitly mentioned in the document text. Figure 2 shows the distance between the most informative temporal expression and the event mention. In 23.68% of the cases, the time expression is in the same sentence, and in 17.59% of the cases, the time expression is either in the next or in the previous sentence.

                            # Events      %
Single Day Events                872   58.21%
  with precise date              486   55.73%
  after + before                 145   16.63%
  after                          124   14.22%
  before                         117   13.42%
  past events                    292   33.49%
  events at DCT                  421   48.28%
  future events                  152   17.43%
Multi-Day Events                 616   41.12%
  precise begin point            122   19.81%
  precise end point               97   15.75%
  begins in the past             471   76.46%
  begins on the DCT               38    6.17%
  begins in the future           105   17.05%
  ends in the past               179   29.06%
  ends on the DCT                 26    4.22%
  ends in the future             401   65.10%
Not applicable                    10    0.67%

Table 1: Statistics on the annotated event times. Single Day Events happen on a single day, Multi-Day Events take place over multiple days. The event time can either be precise, or the annotators used before and after to narrow down the event time, e.g. the event happened in a certain month and year. DCT = Document Creation Time.

It follows that in 58.72% of the cases, the most informative time expression cannot be found in the same or in the preceding or succeeding sentence. This is important to note, as previous shared tasks like TempEval-1, -2, and -3 (Verhagen et al., 2007; Verhagen et al., 2010; UzZaman et al., 2013) and previous annotation studies like the TimeBank-Dense Corpus (Cassidy et al., 2014) only considered the relation between event mentions and temporal expressions in the same and in adjacent sentences. However, for the majority of events, the most informative temporal expression is not in the same or in the preceding/succeeding sentence. For 7.31% of the annotated events, no matching temporal expression was found in the document. Those were mainly events where the event time was inferred by the annotators from multiple temporal expressions in the document. An example is that the year of the event was mentioned in the beginning of the document and the month of the event was mentioned in a later part of the document.

[Figure 2, "Distribution of Distances": bar chart over the distance in sentences (bins ≤−5 … ≥5 and MS), with relative frequencies between 0 and 0.3.] Figure 2: Distribution of distances in sentences between the event mention and the most informative temporal expression. For 58.72% of the event mentions, the most informative time expression is not in the same or in the previous/next sentence. For 7.3% of the mentions, the time expression originates from multiple sources (MS).

5 Comparison of Annotation Schemes Depending on the application scenario and the text domain, either TLINKs or the proposed annotation scheme may be the better choice. TLINKs can capture the temporal order of events even when temporal expressions are completely absent from a document, which is often the case for novels. The proposed annotation scheme has the advantage that temporal information, independent of where and in which form it is mentioned in the document, can be taken into account.
However, the proposed scheme requires that the events can be anchored on a time axis, which is easy for news articles and encyclopedic text but hard for novels and narratives. In this section, we evaluate the application scenarios of temporal knowledge base population and time-aware information retrieval. For temporal knowledge base population, it is important to derive the date for facts and events as precisely as possible (Surdeanu, 2013). Those facts can either be instantaneous, e.g. a person died, or last for a longer time, like a military conflict. Similar requirements hold for time-aware information retrieval, where it can be important to know at which point in time something occurred (Kanhabua and Nørvåg, 2012). We use the TimeBank-Dense Corpus (Cassidy et al., 2014) with its TLINK annotations and compare those to our event time annotations. The TimeBank-Dense Corpus annotated all TLINKs between Event-Event, Event-Time, and Time-Time pairs in the same sentence and between succeeding sentences, as well as all Event-DCT and Time-DCT pairs. Six different link types were defined: BEFORE, AFTER, INCLUDES, IS INCLUDED, SIMULTANEOUS, and VAGUE, where VAGUE encodes that the annotators were not able to make a statement on the temporal relation of the pair. We studied how well the event time is captured by the dense TLINK annotation. We used transitive closure rules as described by Chambers et al. (2014) to also deduce TLINKs for pairs that were not annotated. For example, when event1 happened before event2 and event2 happened before date1, we can infer that event1 happened before date1. Using this transitivity allows us to infer relations for pairs that are more than one sentence apart. For all annotated events, we evaluated all TLINKs, including the TLINKs inferred from the transitivity rules, and derived the event time as precisely as possible. We then computed how precise the inferred event times are in comparison to our annotations. Preciseness is measured in the number of days. An event that is annotated with 1998-02-13 has a preciseness of 1 day. If the inferred event time from the TLINKs is after 1998-02-01 and before 1998-02-15, then the preciseness is 15 days. A more precise anchoring is preferred. The TimeBank-Dense Corpus does not have a link type to mark that an event has started or ended at a certain time point. This makes the TLINK annotation impractical for durative events that span multiple days. According to our annotation study, 41.12% of the events in the TimeBank Corpus last for longer time periods. For these 41.12%, it cannot be inferred when the events began and ended. For 487 out of the 872 Single Day Events (55.85%), the TLINKs give a result with the same precision as our annotations. For 198 events (22.71%), our annotation is more precise, i.e. the time window where the event might have happened is smaller. For 187 events (21.44%), no event time could be inferred from the TLINKs. This is due to the fact that there was no link to any temporal expression, even when transitivity was taken into account. For the 487 events where the TLINKs resulted in an event time as precise as our annotation, the vast majority were events that happened at the Document Creation Time. As depicted in Table 1, 421 events happened at the DCT. For those events the precise date can directly be derived from the annotated link between each event mention and the DCT.
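Before turning to the remaining events, here is a minimal sketch of the transitive-closure step used above to propagate TLINKs across sentence boundaries. It covers only BEFORE/AFTER chaining between events and dated expressions and is our simplified illustration, not the full rule set of Chambers et al. (2014).

```python
def transitive_closure(links):
    """Expand a set of (source, relation, target) TLINKs with BEFORE/AFTER chains.

    links : set of triples such as ("event1", "BEFORE", "event2"). Only the
            BEFORE and AFTER relations are chained in this simplified sketch.
    """
    # Normalise everything to BEFORE edges so we only chain in one direction.
    before = {(a, b) for a, rel, b in links if rel == "BEFORE"}
    before |= {(b, a) for a, rel, b in links if rel == "AFTER"}

    # Standard closure: keep adding a BEFORE c whenever a BEFORE b and b BEFORE c.
    changed = True
    while changed:
        changed = False
        for a, b in list(before):
            for b2, c in list(before):
                if b == b2 and (a, c) not in before:
                    before.add((a, c))
                    changed = True

    closed = set(links)
    for a, b in before:
        closed.add((a, "BEFORE", b))
        closed.add((b, "AFTER", a))
    return closed

# Toy usage: event1 BEFORE event2 and event2 BEFORE the date 1998-02-13
# lets us infer event1 BEFORE 1998-02-13, even across sentences.
links = {("event1", "BEFORE", "event2"), ("event2", "BEFORE", "1998-02-13")}
print(("event1", "BEFORE", "1998-02-13") in transitive_closure(links))   # True
```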
For all other events that did not happen at the Document Creation Time, the TLINKs result in most cases in a less precise anchoring in time, and for around a fifth of these cases in no temporal anchoring at all, while our annotations do anchor them. We can conclude that even a dense TLINK annotation gives suboptimal information on when events happened, and, due to the restriction that TLINKs are only annotated within the same and adjacent sentences, a lot of relevant temporal information is lost. 6 Automated Event Time Extraction In this section, we present a baseline system for automatic event time extraction. The system uses temporal relations in which the event is involved and anchors the event to the most precise time. For this purpose, we have defined a two-step process to determine the events' time. Given a set of documents in which the events and time expressions are already annotated, the system first obtains a set of possible times for each of the events. Second, the most precise time is selected or generated for each event. For the first step, we use the multi-pass architecture introduced by Chambers et al. (2014) that was trained and evaluated on the TimeBank-Dense Corpus (Cassidy et al., 2014). Chambers et al. describe multiple rules and machine-learning-based classifiers to extract relations between events and temporal expressions. This architecture extracts temporal relations of the type BEFORE, AFTER, INCLUDES, IS INCLUDED, and SIMULTANEOUS. The classifiers are combined into a precision-ranked cascade of sieves. The architecture presented by Chambers et al. does not produce temporal information indicating that an event has started or ended at a certain time point and can therefore only be used for Single Day Events. We use these sieves to add the value of the temporal expression and the corresponding relation to a set of possible times for each event. In fact, for each event we generate a set of <relation, time> tuples in which the event is involved. Police confirmed Friday that the body found along a highway ... For example, one sieve adds [IS INCLUDED, Friday (1998-02-13)] and a second sieve adds [BEFORE, DCT (1998-02-14)] to the set of possible event times for the confirmed event. Applying the sequence of sieves yields all the temporal links for each event. In the next step, if the event has a relation of type SIMULTANEOUS, IS INCLUDED or INCLUDES, the system sets the event time to the value of the time expression. If the event has a relation of type BEFORE and/or AFTER, the system narrows down the event time as precisely as possible. If the sieve determines the relation type as VAGUE, the set of possible event times remains unchanged. Algorithm 1 demonstrates how the event time is selected or generated from a set of possible times.
Algorithm 1: Automatic Event Time Extraction

function EVENTTIME(times)
    if times is empty then
        return "Not Available"                ▷ the event has no non-vague relation
    end if
    min_before_time = DATE.MAX_VALUE
    max_after_time = DATE.MIN_VALUE
    for [relation, time] in times do
        if relation is SIMULTANEOUS or IS INCLUDED or INCLUDES then
            return time
        else if relation is BEFORE and time < min_before_time then
            min_before_time = time
        else if relation is AFTER and time > max_after_time then
            max_after_time = time
        end if
    end for
    event_time = AFTER + max_after_time + BEFORE + min_before_time
    return event_time
end function

Applying the proposed method to the TimeBank-Dense Corpus, we obtained some value for the event time for 593 of 872 (68%) Single Day Events. For 359 events (41%), the system generates the event time with the same precision as our annotations. Table 2 gives statistics of the automatically obtained event times.

Single Day Events          # Events      %
  with precise date             260   29.82%
  after + before                 16    1.84%
  after                          99   11.35%
  before                        218   25.00%
  not available                 279   31.99%

Table 2: Statistics on the automatically obtained event times for events that happened on a single day. The obtained event time can either be precise, or the system used before and after to narrow down the event time. For 279 events, the system cannot infer any event time.

To evaluate the output of the proposed system, we evaluated how precise the automatically obtained event times are in comparison with our annotations. Table 3 shows that for 41% of events, the proposed system generates the same event time as our annotations. For 21% of events, our annotation is more precise, i.e. the time window where the event might have happened is smaller. For 47 events (5.38%), the system infers an event time that is in conflict with the human annotation, for example a disagreement on whether an event happened before or after the DCT. Considering event times that have the same preciseness as our annotations as true positives, the precision of the proposed system is 60.54% and the recall is 41.17% for Single Day Events. As presented in Section 4, human annotators agree in 80.50% of the cases on the label for Single Day Events. The less precise and non-inferred event times are mainly due to the fact that temporal expressions that are more than one sentence apart are not taken into account by the sieve architecture.

Obtained event time             # Events      %
  same as human annotation          359   41.17%
  less precise                      187   21.44%
  conflicting annotations            47    5.38%
  cannot infer event time           279   31.99%
Precision        60.54%
Recall           41.17%
F1-Score         49.01%
Human F1-Score   80.50%

Table 3: Evaluation results of the proposed system in comparison with our annotations.

In this work we focused on the automated anchoring of Single Day Events and presented a baseline system that relies on the work of Chambers et al. (2014). The F1-score of 49.01% is low in comparison to the human score of 80.50%. However, only in 5.38% of the cases is the automatically inferred event time plainly wrong. In most cases, no event time could be inferred (31.99%) or it was less precise than the human annotation (21.44%). Extending the described approach to Multi-Day Events is not straightforward. The TimeBank-Dense Corpus, and consequently the system by Chambers et al., does not include a TLINK type to note that an event has started or ended at a certain date; hence, extracting the begin point and end point for Multi-Day Events is not possible.
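Returning to the Single Day case, the following is a minimal Python rendering of Algorithm 1 above. The relation labels, the use of datetime.date bounds, and the textual after/before output format are our simplifications for illustration, not part of the sieve architecture of Chambers et al.

```python
from datetime import date

def event_time(times):
    """Select or generate an event time from a set of <relation, time> tuples.

    times : list of (relation, time) pairs, where relation is one of
            'SIMULTANEOUS', 'IS INCLUDED', 'INCLUDES', 'BEFORE', 'AFTER'
            and time is a datetime.date extracted by the sieves.
    """
    if not times:
        return "Not Available"            # the event has no non-vague relation

    min_before_time = date.max            # smallest upper bound seen so far
    max_after_time = date.min             # largest lower bound seen so far
    for relation, time in times:
        if relation in ("SIMULTANEOUS", "IS INCLUDED", "INCLUDES"):
            return time                   # a containing expression pins the date
        elif relation == "BEFORE" and time < min_before_time:
            min_before_time = time
        elif relation == "AFTER" and time > max_after_time:
            max_after_time = time
    return f"after {max_after_time.isoformat()} before {min_before_time.isoformat()}"

# The 'confirmed' example from the text: IS INCLUDED in Friday (1998-02-13)
# takes precedence over BEFORE the DCT (1998-02-14).
print(event_time([("IS INCLUDED", date(1998, 2, 13)),
                  ("BEFORE", date(1998, 2, 14))]))       # -> 1998-02-13
```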
To handle Multi-Day Events, a fundamental adaptation of the system by Chambers et al. would be required. In contrast to Single Day Events, extracting the event time for Multi-Day Events requires more advanced logic. The start date of the event must be before the end date of the event. The relation to events that are included in the Multi-Day Events must be checked to avoid inconsistencies. The development of an automated system for Multi-Day Events is the subject of our ongoing work. 7 Conclusion We presented a new annotation scheme for anchoring events in time and annotated a subset of the TimeBank Corpus (Pustejovsky et al., 2003) using this annotation scheme. The annotation guidelines as well as the annotated corpus are publicly available.2 [Footnote 2: https://www.ukp.tu-darmstadt.de/data/timeline-generation/temporal-anchoring-of-events/] In the annotation study, the Krippendorff's α inter-annotator agreement was reasonably high at α = 0.617. The largest source of disagreement was events for which it was not explicitly mentioned when the event happened. Using a more relaxed measure for Krippendorff's α, which only assigns a distance to mutually exclusive annotations, the agreement increased to αME = 0.912. We can conclude that after a short training phase, annotators are able to perform the annotation with high agreement. The effort for annotating TLINKs, on the other hand, scales quadratically with the number of events and temporal expressions. This leads to the often-used restriction that only temporal links between events and temporal expressions in the same or in succeeding sentences are annotated. Even with this restriction, the annotation effort is quite significant, as on average 6.3 links per mention must be annotated. As Figure 2 depicts, in more than 58.72% of the cases the most informative temporal expression is more than one sentence apart from the event mention. As a consequence, inferring from TLINKs when an event happened is less precise, as temporal information that is more than one sentence away can often not be taken into account. For the 872 Single Day Events, the correct event time could be inferred from the TLINKs in only 487 cases. For 187 Single Day Events, no event time at all could be inferred, as no temporal expression was within the one-sentence window of that event. A drawback of the proposed scheme is the lack of temporal ordering of events beyond the smallest unit of granularity, which in our case was one day. The scheme is suitable for noting that several events occurred on the same date, but their order on that date cannot be encoded. If the temporal ordering is important for the application scenario, the annotation scheme could be extended and TLINKs could be annotated for events that fall on the same date. Another option is to increase the granularity, but this requires that the information in the documents also allows this more precise anchoring. Acknowledgement This work has been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant No. GRK 1994/1 and by the Volkswagen Foundation as part of the Lichtenberg-Professorship Program under grant No. I/82806. Additional support was provided by the German Federal Ministry of Education and Research (BMBF) as a part of the Software Campus program under the promotional reference 01-S12054. References Philip Bramsen, Pawan Deshpande, Yoong Keok Lee, and Regina Barzilay. 2006. Inducing Temporal Graphs.
In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, EMNLP '06, pages 189–198, Stroudsburg, PA, USA. Association for Computational Linguistics. Taylor Cassidy, Bill McDowell, Nathanael Chambers, and Steven Bethard. 2014. An Annotation Framework for Dense Event Ordering. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 501–506, Baltimore, Maryland, USA. Association for Computational Linguistics. Nathanael Chambers, Taylor Cassidy, Bill McDowell, and Steven Bethard. 2014. Dense Event Ordering with a Multi-Pass Architecture. Transactions of the Association for Computational Linguistics, 2:273–284. Quang Xuan Do, Wei Lu, and Dan Roth. 2012. Joint Inference for Event Timeline Construction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL '12, pages 677–687, Stroudsburg, PA, USA. Association for Computational Linguistics. Nattiya Kanhabua and Kjetil Nørvåg. 2012. Learning to Rank Search Results for Time-sensitive Queries. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, CIKM '12, pages 2463–2466, New York, NY, USA. ACM. Oleksandr Kolomiyets, Steven Bethard, and Marie-Francine Moens. 2012. Extracting Narrative Timelines As Temporal Dependency Structures. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1, ACL '12, pages 88–97, Stroudsburg, PA, USA. Association for Computational Linguistics. Klaus Krippendorff. 2004. Content Analysis: An Introduction to Its Methodology (second edition). Sage Publications. J. Richard Landis and Gary G. Koch. 1977. The Measurement of Observer Agreement for Categorical Data. Biometrics, 33(1):159–174. James Pustejovsky, Patrick Hanks, Roser Saurí, A. See, Robert Gaizauskas, Andrea Setzer, Dragomir Radev, Beth Sundheim, D. Day, Lisa Ferro, and Marcia Lazo. 2003. The TIMEBANK Corpus. In Proceedings of Corpus Linguistics 2003, pages 647–656, Lancaster, UK. James Pustejovsky. 1991. The Syntax of Event Structure. Cognition, 41:47–81. Roser Saurí, Jessica Littman, Robert Gaizauskas, Andrea Setzer, and James Pustejovsky. 2004. TimeML Annotation Guidelines, Version 1.2.1. Andrea Setzer. 2001. Temporal Information in Newswire Articles: An Annotation Scheme and Corpus Study. Ph.D. thesis, University of Sheffield, Sheffield, UK. Mihai Surdeanu. 2013. Overview of the TAC 2013 Knowledge Base Population Evaluation: English Slot Filling and Temporal Slot Filling. In Proceedings of the TAC-KBP 2013 Workshop, Gaithersburg, Maryland, USA. Naushad UzZaman, Hector Llorens, Leon Derczynski, Marc Verhagen, James F. Allen, and James Pustejovsky. 2013. SemEval-2013 Task 1: TempEval-3: Evaluating Time Expressions, Events, and Temporal Relations. In Proceedings of the 7th International Workshop on Semantic Evaluation (SemEval 2013), pages 1–9, Atlanta, Georgia, USA. Marieke van Erp, Piek Vossen, Rodrigo Agerri, Anne-Lyse Minard, Manuela Speranza, Ruben Urizar, Egoitz Laparra, Itziar Aldabe, and German Rigau. 2015. Annotated Data, version 2. Technical report, Amsterdam, Netherlands. Marc Verhagen, Robert Gaizauskas, Frank Schilder, Mark Hepple, Graham Katz, and James Pustejovsky. 2007. SemEval-2007 Task 15: TempEval Temporal Relation Identification.
In Proceedings of the 4th International Workshop on Semantic Evaluations, SemEval '07, pages 75–80, Stroudsburg, PA, USA. Association for Computational Linguistics. Marc Verhagen, Roser Saurí, Tommaso Caselli, and James Pustejovsky. 2010. SemEval-2010 Task 13: TempEval-2. In Proceedings of the 5th International Workshop on Semantic Evaluation, SemEval '10, pages 57–62, Stroudsburg, PA, USA. Association for Computational Linguistics. Seid Muhie Yimam, Iryna Gurevych, Richard Eckart de Castilho, and Chris Biemann. 2013. WebAnno: A Flexible, Web-based and Visually Supported System for Distributed Annotations. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 1–6, Sofia, Bulgaria, August. Association for Computational Linguistics.
2016
207
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2205–2215, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Grammatical Error Correction: Machine Translation and Classifiers Alla Rozovskaya Department of Computer Science Virginia Tech Blacksburg, VA 24060 [email protected] Dan Roth Department of Computer Science University of Illinois Urbana, IL 61820 [email protected] Abstract We focus on two leading state-of-the-art approaches to grammatical error correction – machine learning classification and machine translation. Based on the comparative study of the two learning frameworks and through error analysis of the output of the state-of-the-art systems, we identify key strengths and weaknesses of each of these approaches and demonstrate their complementarity. In particular, the machine translation method learns from parallel data without requiring further linguistic input and is better at correcting complex mistakes. The classification approach possesses other desirable characteristics, such as the ability to easily generalize beyond what was seen in training, the ability to train without human-annotated data, and the flexibility to adjust knowledge sources for individual error types. Based on this analysis, we develop an algorithmic approach that combines the strengths of both methods. We present several systems based on resources used in previous work with a relative improvement of over 20% (and 7.4 F score points) over the previous state-of-the-art. 1 Introduction For the majority of English speakers today, English is not the first language. These writers make a variety of grammar and usage mistakes that are not addressed by standard proofing tools. Recently, there has been a spike in research on grammatical error correction (GEC), correcting writing mistakes made by learners of English as a Second Language, including four shared tasks: HOO (Dale and Kilgarriff, 2011; Dale et al., 2012) and System Method Performance P R F0.5 CoNLL-2014 top 3 MT 41.62 21.40 35.01 CoNLL-2014 top 2 Classif. 41.78 24.88 36.79 CoNLL-2014 top 1 MT, rules 39.71 30.10 37.33 Susanto et al. (2014) MT, classif. 53.55 19.14 39.39 Miz. & Mats. (2016) MT 45.80 26.60 40.00 This work MT, classif. 60.17 25.64 47.40 Table 1: (Lack of) progress in GEC over the last few years. CoNLL (Ng et al., 2013; Ng et al., 2014). These shared tasks facilitated progress on the problem within the framework of two leading methods – machine learning classification and statistical machine translation (MT). The top CoNLL system combined a rule-based module with MT (Felice et al., 2014). The second system that scored almost as highly used machine learning classification (Rozovskaya et al., 2014), and the third system used MT (Junczys-Dowmunt and Grundkiewicz, 2014). Furthermore, Susanto et al. (2014) showed that a combination of the two methods is beneficial, but the advantages of each method have not been fully exploited. Despite success of various methods and the growing interest in the task, the key differences between the leading approaches have not been identified or made explicit, which could explain the lack of progress on the task. Table 1 shows existing state-of-the-art since CoNLL-2014. The top results are close, suggesting that several groups have competitive systems. Two improvements (of <3 points) were published since then (Susanto et al., 2014; Mizumoto and Matsumoto, 2016). 
The purpose of this work is to gain a better understanding of the values offered by each method and to facilitate progress on the task, building on the advantages of each approach. Through better understanding of the methods, we exploit the strengths of each technique and, building on existing architecture, develop superior systems within 2205 each framework. Further combination of these systems yields even more significant improvements over existing state-of-the-art. We make the following contributions: • We examine two state-of-the-art approaches to GEC and identify strengths and weaknesses of the respective learning frameworks. • We perform an error analysis of the output of two state-of-the-art systems, and demonstrate how the methods differ with respect to the types of language misuse handled by each. • We exploit the strengths of each framework: with classifiers, we explore the ability to learn from native data, i.e. without supervision, and the flexibility to adjust knowledge sources to specific error types; with MT, we leverage the ability to learn without further linguistic input and to better identify complex mistakes that cannot be easily defined in a classifier framework. • As a result, we build several systems that combine the strengths of both frameworks and demonstrate substantial progress on the task. Specifically, the best system outperforms the previous best result by 7.4 F score points. Section 2 describes related work. Section 3 presents error analysis. In Section 4, we develop classifier and MT systems that make use of the strengths of each framework. Section 5 shows how to combine the two approaches. Section 6 concludes. 2 Related Work We first introduce the CoNLL-2014 shared task and briefly describe the state-of-the-art GEC systems in the competition and beyond. Next, an overview of the two leading methods is presented. 2.1 CoNLL-2014 shared task and approaches CoNLL-2014 training data (henceforth CoNLLtrain) is a corpus of learner essays (1.2M words) written by students at the National University of Singapore (Dahlmeier et al., 2013), corrected and error-tagged. The CoNLL-2013 test set was included in CoNLL-2014 and is used as development. Both the development and the test sets are also from the student population studying at the same University but annotated separately. We report results on the CoNLL-2014 test. The annotation includes specifying the relevant correction as well as the information about each error type. The tagset consists of 28 categories. Table 2 illustrates the 11 most frequent errors in the development data; errors are marked with an asterisk, and ∅denotes a missing word. The majority of these errors are related to grammar but also include mechanical, collocation, and other errors. An F-based scorer, named M2, was used to score the systems (Dahlmeier and Ng, 2012). The metric in CoNLL-2014 was F0.5, i.e. weighing precision twice as much as recall. Two types of annotations were used: original and revised. We follow the recommendations of the organizers and use the original data (Ng et al., 2014). The approaches varied widely: classifiers, MT, rules, hybrid systems. Table 3 summarizes the top five systems. The top team used a hybrid system that combined rules and MT. The second system developed classifiers for common grammatical errors. The third system used MT. 
As for external resources, the top 1 and top 3 teams used additional learner data to train their MT systems, the Cambridge University Press Learners’ Corpus and the Lang-8 corpus (Mizumoto et al., 2011), respectively. Many teams also used native English datasets. The most common ones are the Web1T corpus (Brants and Franz, 2006), the CommonCrawl dataset, which is similar to Web1T, and the English Wikipedia. Several teams used off-the-shelf spellcheckers. In addition, Susanto et al. (2014) made an attempt at combining MT and classifiers. They used CoNLL-train and Lang-8 as non-native data and English Wikipedia as native data. We believe that the reason this study did not yield significant improvements (Table 1) is that individual strengths of each framework have not been fully exploited. Further, each system was applied separately and decisions were combined using a general MT combination technique (Heafield et al., 2009). Finally, Mizumoto and Matsumoto (2016) attempt to improve an MT system also trained on Lang-8 with discriminative re-ranking using part-of-speech (POS) and dependency features but only obtain a small improvement. These results suggest that standard combination and re-ranking techniques are not sufficient. 2.2 Overview of the State-of-the-Art The statistical machine translation approach is based on the noisy-channel model. The best translation for a foreign sentence f is: e∗= arg max e p(e)p(f|e) 2206 Error type Rel. freq. (%) Examples Article (ArtOrDet) 19.93 *∅/The government should help encourage *the/∅breakthroughs as well as *a/∅complete medication system . Wrong collocation (Wci) 12.51 Some people started to *think/wonder if electronic products can replace human beings for better performances . Noun number (Nn) 11.44 There are many reports around the internet and on newspaper stating that some users ’ *iPhone/iPhones exploded . Preposition (Prep) 8.98 I do not agree *on/with this argument... Word form (Wform) 6.56 ...the application of surveillance technology serves as a warning to the *murders/murderers and they might not commit more murder . Orthography/punctuation (Mec) 5.75 Even British Prime Minister , Gordon Brown *∅/, has urged that all cars in *britain/Britain to be green by 2020 . Verb tense (Vt) 4.56 Through the thousands of years , most Chinese scholars *are/{have been} greatly affected by Confucianism . Linking words/phrases (Trans) 4.10 *However/Although , video surveillance may be a great help . Local redundancy (Rloc-) 3.70 Some solutions *{as examples}/∅would be to design plants/fertilizers that give higher yield ... Subject-verb agreement (SVA) 3.58 However , tracking people *are/is different from tracking goods . Verb form (Vform) 3.52 Travelers survive in desert thanks to GPS *guide/guiding them . Table 2: Example errors. In the parentheses, the error codes used in the shared task are shown. Errors exemplifying the relevant phenomena are marked; the sentences may contain other mistakes. Rank System F0.5 Approach External training data External name Native data Learner data error modules 1 CAMB 37.33 Rules and MT Microsoft Web LM Cambridge Corpus, Eng. Vocab Profile Cambridge “Write and Improve” 2 CUUI 36.79 Classif.; patterns Web1T 3 AMU 35.01 MT Wikipedia, CommonCrawl Lang-8 4 POST 30.88 LM and rules Web1T PyEnchant Spell 5 NTHU 29.92 Rules, MT, classif. Web1T, Gigaword, BNC, Google Books Spellcheckers: Aspell, GingerIt Table 3: The top 5 systems in CoNLL-2014. The last column lists external proofing tools used. LM stands for language models. 
The model consists of two components: a language model assigning a probability p(e) for any target sentence e, and a translation model that assigns a conditional probability p(f|e). The language model is learned using a monolingual corpus in the target language. The parameters of the translation model are estimated from a parallel corpus, i.e. the set of foreign sentences and their corresponding translations into the target language. In error correction, the task is cast as translating from erroneous learner writing into corrected well-formed English. The MT approach relies on the availability of a parallel corpus for learning the translation model. In the case of error correction, a set of learner sentences and their corrections functions as a parallel corpus. State-of-the-art MT systems are phrase-based, i.e. parallel data is used to derive a phrase-based lexicon (Koehn et al., 2003). The resulting lexicon consists of a list of pairs (seq_f, seq_e), where seq_f is a sequence of one or more foreign words and seq_e is a predicted translation. Each pair comes with an associated score. At decoding time, all phrases from sentence f are collected with their corresponding translations observed in training. These are scored together with the language modeling scores and may include other features. The phrase-based approach by Koehn et al. (2003) uses a log-linear model (Och and Ney, 2002), and the best correction maximizes the following:

e^* = \arg\max_e P(e|f) = \arg\max_e \exp\Big( \sum_{m=1}^{M} \lambda_m h_m(e, f) \Big)    (1)

where h_m is a feature function, such as the language model score and the translation scores, and λ_m corresponds to a feature weight. The classifier approach is based on the context-sensitive spelling correction methodology (Golding and Roth, 1996; Golding and Roth, 1999; Banko and Brill, 2001; Carlson et al., 2001; Carlson and Fette, 2007) and goes back to earlier approaches to article and preposition error correction (Izumi et al., 2003; Han et al., 2006; Gamon et al., 2008; Felice and Pulman, 2008; Tetreault et al., 2010; Gamon, 2010; Dahlmeier and Ng, 2011; Dahlmeier and Ng, 2012). The classifier approach to error correction has been prominent for a long time before MT, since building a classifier does not require having annotated learner data.

Property | MT | Classifier
(1a) Error coverage: ability to address a wide variety of error phenomena | + All errors occurring in the training data are automatically covered | - Only errors covered by the classifiers; new errors need to be added explicitly
(1b) Error complexity: ability to handle complex and interacting mistakes that go beyond word boundaries | + Automatically through parallel data, via phrase-based lexicons | - Need to develop via specific approaches
(2) Generalizability: going beyond the error confusions observed in training | - Only confusions observed in training can be corrected | + Easily generalizable via confusion sets and features
(3) Supervision/Annotation: role of learner data in training the system | - Required | + Not required
(4) System flexibility: adapting knowledge sources per error phenomena | - Not easy to integrate error-specific knowledge resources | + Flexible; phenomenon-specific knowledge sources

Table 4: Summary of the key properties of the MT and the classifier-based approaches. We use + and − to indicate a positive or a negative value with respect to each factor.

Classifiers are trained individually for a specific error type. Because an error type needs to be defined, typically only well-defined mistakes can be addressed in a straightforward way.
Given an error type, a confusion set is specified and includes a list of confusable words. For some errors, confusion sets are constructed using a closed list (e.g. prepositions). For other error types, NLP tools are required. To identify locations where an article was likely omitted incorrectly, for example, a phrase chunker is used. Each occurrence of a confusable word in text is represented as a vector of features derived from a context window around the target. The problem is cast as a multi-class classification task. In the classifier paradigm, there are various algorithms – generative (Gamon, 2010; Park and Levy, 2011), discriminative (Han et al., 2006; Gamon et al., 2008; Felice and Pulman, 2008; Tetreault et al., 2010), and joint approaches (Dahlmeier and Ng, 2012; Rozovskaya and Roth, 2013). Earlier works trained on native data (due to lack of annotation). Later approaches incorporated learner data in training in various ways (Han et al., 2010; Gamon, 2010; Rozovskaya and Roth, 2010a; Dahlmeier and Ng, 2011). 3 Error Analysis of MT and Classifiers This section presents error analysis of the MT and classifier approaches. We begin by identifying several key properties that distinguish between MT systems and classifier systems and that we use to characterize the learning frameworks and the outputs of the systems: (1a) Error coverage denotes the ability of a system to identify and correct a variety of error types. (1b) Error complexity indicates the capacity of a system to address complex mistakes such as those where multiple errors interact. (2) Generalizibility refers to the ability of a system to identify mistakes in new unseen contexts and propose corrections beyond those observed in training data. (3) The role of supervision or having annotated learner data for training. (4) System flexibility is a property of the system that allows it to adapt resources specially to correct various phenomena. The two paradigms are summarized in Table 4. We use + and −to indicate whether a learning framework has desirable (+) or undesirable characteristic with regard to each factor. The first three properties characterize system output, while (3) and (4) arise from the system frameworks. Below we analyze the output of several state-of-the-art CoNLL-2014 systems in more detail.1 Section 4 explores (3) and (4) that relate to the learning frameworks. 3.1 Error Coverage and Complexity Error coverage To understand how systems differ with respect to error coverage, we consider recall of each system per error type. Error-type recall can be easily computed using error tags and is reported in the CoNLL overview paper. The recall numbers show substantial variations among the systems. If we consider error categories that have non-negligible recall numbers (higher than 10%), classifier-based approaches have a much lower proportion of error types for which 10% recall was achieved. Among the 28 error types, the top classifier systems – Columbia University-University of Illinois (CUUI, top-2) and National Tsing Hua University (NTHU, top5) – have a recall higher than 10% for 8 and 9 error types, respectively. In contrast, the two MTbased systems – Cambridge University (CAMB, 1Outputs are available on the CoNLL-2014 website. 2208 (1) It is a concern that will be with us *{during our whole life}/{for our entire life} . (2) The decision to inform relatives of *{such genetic disorder}/{such genetic disorders} will be dependent . . . (3) .. 
we need to respect it and we have no right *{in saying}/{to say} that he must tell his relatives about it . (4) ...and his family might be a *{genetically risked}/{genetic risk} family . (5) ...he was *diagnosis/{diagnosed with} a kind of genetic disease which is very serious . (6) The situation may become *worst/worse if the child has diseases like cancer or heart disease . . . Table 5: Complex and interacting mistakes that MT successfully addresses. Output of the MT-based AMU system. top-1) and the Adam Mickiewicz University system (AMU, top-3) – have 15 and 17 error types, respectively, for which the recall is at least 10%. These recall discrepancies indicate that the MT approach has a better overall coverage, which is intuitive given that all types of confusions are automatically added through phrase-based translation tables in MT, while classifiers must explicitly model each error type. Note, however, that these numbers do not necessarily indicate good typebased performance, since high recall may correspond to low precision. Error complexity In the MT approach, error confusions are learned automatically via the phrase translation tables extracted from the parallel training data. Thus, an MT system can easily handle interacting and complex errors where replacements involve a sequence of words. Table 5 illustrates complex and interacting mistakes that the MT approach is able to handle. Example (1) contains a phrase-level correction that includes both a preposition replacement and an adjective change. (2) is an instance of an interacting mistake where there is a dependency between the article and the noun number, and a mistake can be corrected by changing one of the properties but not both. (3), (4) and (5) require multiple simultaneous corrections on various words in a phrase. (6) is an example of an incorrect adjectival form, an error that is typically not modeled with standard classifiers. 3.2 Generalizability Because MT systems extract error/correction pairs from phrase-translation tables, they can only identify erroneous surface forms observed in training and propose corrections that occurred with the corresponding surface forms. Crucially, in a standard MT scenario, any resulting translation consists of “matches” mined from the translation tables, so a standard MT model lacks lexical abstractions that might help generalize, thus out-of-vocabulary words is a well-known problem in MT (Daume and Jagarlamudi, 2011). While more advanced MT models can abstract by adding higher-level Error AMU (MT) CUUI (Classif.) type P R F0.5 P R F0.5 Orthog./punc. (Mec) 61.6 16.3 39.6 53.3 8.7 26.4 Article (ArtOrDet) 38.0 10.9 25.4 31.8 47.9 34.0 Preposition (Prep) 54.9 10.4 29.5 31.7 8.8 20.9 Noun number (Nn) 49.6 43.2 48.2 42.5 46.2 43.2 Verb tense (Vt) 30.2 9.3 20.8 61.1 5.4 19.9 Subj.-verb agr. (SVA) 48.3 14.9 33.3 57.7 57.7 57.7 Verb form (Vform) 40.5 16.8 31.8 69.2 15.1 40.3 Word form (Wform) 59.0 36.6 52.6 60.0 13.5 35.6 Table 6: Performance of MT and classifier systems from CoNLL-2014 on common errors. features such as POS, previous attempt yielded only marginal improvements (Mizumoto and Matsumoto, 2016), since one typically needs different types of abstractions depending on the error type, as we show below. With classifiers, it is easy to generalize using higher-level information that goes beyond surface form and to adjust the abstraction to the error type. 
Many grammatical errors may benefit from generalizations based on POS or parse information; we can thus expect that classifiers will do better on errors that require linguistic abstractions. To validate this hypothesis, we evaluate typebased performance of two systems: a top-3 MTbased AMU system and a top-2 classifier-based CUUI; we do not include the top-1 system, since it is a hybrid system that also uses rules. Unlike recall, estimating type-based precision requires knowing the type of the correction supplied by the system, which is not specified in the output. We thus manually analyze the output of the AMU and CUUI systems for seven common error categories and assign to each correction an appropriate type to estimate precision and F0.5 (Table 6). The CUUI system addresses all of these errors, with the exception of mechanical (Mec), of which it handles a small subset. The AMU system does better on mechanical, preposition, word form, and noun number. CUUI does better on articles, verb agreement, and verb form. We now consider examples of errors that are corrected by the classifier-based CUUI system in these three categories but are missed by the MTbased AMU system (Table 7). Examples (1) and 2209 Long-distance dependencies: verb agreement (1) As a result , in the case that when one of the members *happen/happens to feel uncomfortable or abnormal , he or she should be aware that . . . (2) A study of New York University in 2010 shown that patients with family members around generally *recovers/recover 2-4 days faster than those taken care by professional nurses . Confusions not found in training: verb agreement and verb form (3) Hence , the social media sites *serves/serve as a platform for the connection . (4) After *came/coming back from the hospital , the man told his parents that the problem was that he carried . . . (5) social media is the only resource they can approach to know everything *happened/happening in their country . . . Superfluous words: articles (6) For *an/∅example , if exercising is helpful, we can always look for more chances for the family to go exercise . (7) . . . as soon a person is made aware of his or her genetic profile , he or she has *a/∅knowledge about others . Omissions: articles (8) In this case , if one of the family members or close relatives is found to carry *∅/a genetic risk . . . Table 7: Generalizing beyond surface form: Examples of mistakes that classifiers successfully address. Output of the classifier-based CUUI system. (2) illustrate verb errors with long-distance subjects (“one” and “patients”). This is handled in the classification approach via syntactic features. An MT system misses these errors because it is limited to edits within short spans. Examples (3), (4), and (5) illustrate verb mistakes for which the correct replacements were not observed in training but that are nonetheless corrected by generalizing beyond surface form. Finally, (6) and (7) illustrate omission and insertion errors, a majority of article mistakes. The MT system is especially bad at correcting such mistakes. Notably, the classifier-based CUUI system correctly identified twice as many omitted articles and more than 20 times more superfluous articles than the MTbased AMU system. This happens because an MT system is restricted to suggesting deletions and insertions in those contexts that were observed in training, whereas a classifier uses shallow parse information, which allows it to insert or delete an article in front of every eligible noun phrase. 
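To make the last point concrete, here is a hypothetical sketch of how a classifier-based system can turn every eligible noun phrase into an article decision, so that insertions and deletions are proposed even in contexts never seen verbatim in training. The chunker interface (noun_phrase_starts) and the model object (article_model) are placeholders for illustration, not the actual CUUI components.

```python
ARTICLE_CLASSES = ["a", "an", "the", "<none>"]    # <none> models an absent article

def article_sites(tokens, noun_phrase_starts):
    """Yield (position, observed_article) for every NP-initial position.

    tokens             : list of word tokens for one sentence.
    noun_phrase_starts : callable returning NP start indices (e.g. from a shallow parser).
    """
    for i in noun_phrase_starts(tokens):
        if tokens[i].lower() in ("a", "an", "the"):
            yield i, tokens[i].lower()            # possible replacement or deletion site
        else:
            yield i, "<none>"                     # possible insertion site

def propose_article_edits(tokens, noun_phrase_starts, article_model):
    """Compare the writer's choice with the classifier's prediction at each site."""
    edits = []
    for i, observed in article_sites(tokens, noun_phrase_starts):
        context = tokens[max(0, i - 3):i + 4]     # small word window as features
        predicted = article_model.predict(context, candidates=ARTICLE_CLASSES)
        if predicted != observed:
            edits.append((i, observed, predicted))
    return edits

# Toy usage with stand-ins for the chunker and the trained classifier:
class AlwaysThe:
    def predict(self, context, candidates):
        return "the"                              # dummy model, for illustration only

tokens = "we regret banning of cell phones".split()
print(propose_article_edits(tokens, lambda toks: [2], AlwaysThe()))
# -> [(2, '<none>', 'the')]  i.e. propose inserting "the" before "banning"
```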
These examples demonstrate that the ability of a system to generalize beyond the surface forms is indeed beneficial for long-distance dependencies, for abstracting away from surface forms when formulating confusion sets, and for mistakes involving omitting or inserting a word. 4 Developing New State-of-the-Art MT and Classifier Systems In this section, we explore the advantages of each learning approach, as identified in the previous section, within each learning framework. To this end, drawing on the strengths of each framework, we develop new state-of-the-art MT and classifier systems.2 In the next section, we will use these 2Implementation details can be found at cogcomp.cs. illinois.edu/page/publication view/793 System Learner Native CoNLLtrain Lang-8 Eng. Wiki. Web1T 1.2M 48M 2B 1T MT ✓ ✓ ✓ Classif. ✓ ✓ Table 8: Data used in the experiments. Corpora sizes are in the number of words. MT and classifier components and show how to exploit the strengths of each framework in combination. Table 8 summarizes the data used. Results are reported with respect to all errors in the test data. This is different from performance for individual errors in Table 6. 4.1 Machine Translation Systems A key advantage of the MT framework is that, unlike with classifiers, error confusions are learned from parallel data automatically, without further (linguistic) input. We build two MT systems that differ only in the use of parallel data: the CoNLL2014 training data and Lang-8. Our MT systems are trained using Moses (Koehn et al., 2007) and follow the standard approach (Junczys-Dowmunt and Grundkiewicz, 2014; Susanto et al., 2014). Both systems use two 5-gram language models – English Wikipedia and the corrected side of CoNLL-train – trained with KenLM (Heafield et al., 2013). Table 9 reports the performance of the systems. As shown, performance increases by more than 11 points when a larger parallel corpus is used. The best MT system outperforms the top CoNLL system by 2 points. 4.2 Classifiers We now present several classifier systems, exploring the two important properties of the classification framework – the ability to train without super2210 Parallel data Performance P R F0.5 CoNLL-train 43.34 11.81 28.25 Lang-8 66.15 15.11 39.48 CoNLL-2014 top 1 39.71 30.10 37.33 Table 9: MT systems trained in this work. vision and system flexibility (see Table 4). 4.2.1 Supervision Supervision in the form of annotated learner data plays an important role in developing an error correction system but is expensive. Native data, in contrast, is cheap and available in large quantities. Therefore, the fact that, unlike with MT, it is possible to build a classifier system without any annotated data, is a clear advantage of classifiers. Training without supervision is possible in the classification framework, as follows. For a given mistake type, e.g. preposition, a classifier is trained on native data that is assumed to be correct; the classifier uses context words around each preposition as features. The resulting model is then applied to learner prepositions and will predict the most likely preposition in a given context. If the preposition predicted by the classifier is different from what the author used in text, this preposition is flagged as a mistake. We refer the reader to Rozovskaya and Roth (2010b) and Rozovskaya and Roth (2011) for a description of training classifiers with and without supervision for error correction tasks. 
Below, we address two questions related to the use of supervision: • Training with supervision: When training using learner data, how does a classifier-based system compare against an MT system? • Training without supervision: How well can we do by building a classifier system with native data only, compared to MT and classifier-based systems that use supervision? Our classifier system is based on the implementation framework of the second CoNLL-2014 system (Rozovskaya et al., 2014) and consists of classifiers for 7 most common grammatical errors in CoNLL-train: article; preposition; noun number; verb agreement; verb form; verb tense; word form. All modules take as input the corpus documents pre-processed with a POS tagger3 (EvenZohar and Roth, 2001), a shallow parser4 (Pun3http://cogcomp.cs.illinois.edu/page/ software view/POS 4http://cogcomp.cs.illinois.edu/page/ software view/Chunker System Performance P R F0.5 Classifiers (learner) 32.15 17.96 27.76 Classifiers (native) 38.41 23.05 33.89 MT 43.34 11.81 28.25 CoNLL-2014 top 1 39.71 30.10 37.33 CoNLL-2014 top 2 41.78 24.88 36.79 CoNLL-2014 top 3 41.62 21.40 35.01 Table 10: Classifier systems trained with and without supervision. Learner data refers to CoNLL-train. Native data refers to Web1T. The MT system uses CoNLL-train for parallel data. yakanok and Roth, 2001), a syntactic parser (Klein and Manning, 2003) and a dependency converter (Marneffe et al., 2006). Classifiers are trained either on learner data (CoNLL-train) or native data (Web1T). Classifiers built on CoNLL-train are trained discriminatively with the Averaged Perceptron algorithm (Rizzolo and Roth, 2010) and use rich POS and syntactic features tailored to specific error types that are standard for these tasks (Lee and Seneff, 2008; Han et al., 2006; Tetreault et al., 2010; Rozovskaya et al., 2011); Na¨ıve Bayes classifiers are trained on Web1T with word n-gram features. A detailed description of the classifiers and the features used can be found in Rozovskaya and Roth (2014). We also add several novel ideas that are described below. Table 10 shows the performance of two classifier systems, trained with supervision (on CoNLLtrain) and without supervision on native data (Web1T), and compares these to an MT approach trained on CoNLL-train. The first classifier system performs comparably to the MT system (27.76 vs. 28.25), however, the native-trained classifier system outperforms both, and does not use any annotated data. The native-trained classifier system would place fourth in CoNLL-2014. 4.2.2 Flexibility We now explore another advantage of the classifier-based approach, that of allowing for a flexible architecture where we can tailor knowledge sources for individual phenomena. In Section 4.2.1, we already took advantage of the fact that in the classifier framework it is easy to incorporate features suited to individual error types. We now show that by adding supervision in a way tailored toward specific errors we can further improve the classifier-based approach. Adding Supervision in a Tailored Way There is a trade-off between training on native and learner 2211 Training Performance data P R F0.5 (1) Learner 32.15 17.96 27.76 (2) Native 38.41 23.05 33.89 (3) Tailored 57.07 14.74 36.26 Table 11: Classifiers: supervision in a tailored way. Trained on (1) learner data (CoNLL-train); (2) native data (Web1T); (3) data sources tailored per error type. data. The advantage of training on native data is clearly the size, which is important for estimating context parameters. 
Learner data provides additional information, such as learner error patterns and the manner of non-native writing. Instead of choosing to train on one data type, the classifier framework allows one to combine the two data sources in various ways: voting (Rozovskaya et al., 2014), alternating structure optimization (Dahlmeier and Ng, 2011), training a meta-classifier (Gamon, 2010), and extracting error patterns (Rozovskaya and Roth, 2011). We compare two approaches of adding supervision: (1) Learner error patterns: Error patterns are extracted from learner data and “injected” into models trained on native data (Rozovskaya and Roth, 2011). Learner data is used to estimate mistake parameters; contextual cues are based on native data. (2) Learner error patterns+native predictions: Classifiers are trained on native data. Classifier predictions are used as features in models trained on learner data. Learner data thus contributes both the specific manner of learner writing and the mistake parameters. The native data contributes contextual information. We found that (2) is superior to (1) for article, agreement, and preposition errors; (1) works better on verb form and word form errors; and noun number errors perform best when a classifier is trained on native data. (Learner error patterns were found not to be beneficial for correcting noun number errors (Rozovskaya and Roth, 2014)). Tailored supervision yields an improvement of almost 3 points over the system trained on native data and almost 9 points over the system trained on learner data (Table 11). Adding Mechanical Errors Finally, we add components for mechanical errors: punctuation, spelling, and capitalization. These are distinguished from the grammatical mistakes, as they are not specific to GEC and can be handled with existing resources or simple methods. For capitalization and missing commas, we System/training data Performance P R F0.5 Native 38.41 23.05 33.89 Native+mechanical 42.72 27.69 38.54 Tailored 57.07 14.74 36.26 Best (tailored+mechanical) 60.79 19.93 43.11 CoNLL-2014 top system 39.71 30.10 37.33 Susanto et al. (2014) 53.55 19.14 39.39 Miz. & Mats. (2016) 45.80 26.60 40.00 Table 12: Classifier systems in this work. Comparison to existing state-of-the-art. System Performance P R F0.5 MT is trained on CoNLL-train MT 43.34 11.81 28.25 Spelling+MT 49.86 16.36 35.37 Article+MT 45.11 13.99 31.22 Verb agr.+MT 46.36 14.63 32.33 Art.+Verb agr.+Spell+MT 52.07 20.89 40.10 MT is trained on Lang-8 MT 66.15 15.11 39.48 Spelling+MT 65.87 16.94 41.75 Article+MT 63.81 17.70 41.95 Verb. agr.+MT 66.09 18.01 43.08 Art.+Verb agr.+Spell+MT 64.13 22.15 46.51 Table 13: Pipelines: select classifiers and MT. compile a list of patterns using CoNLL training data. We also use an off-the-shelf speller (Flor, 2012; Flor and Futagi, 2012). Results are shown in Table 12. Performance improves by almost 5 and 7 points for the native-trained system and for the best configuration of classifiers with supervision. Both systems also outperform the top CoNLL system, by 1 and 6 points, respectively. The result of 43.11 by the best classifier configuration substantially outperforms the existing state-of-the-art: a combination of two MT systems and two classifier systems, and MT with re-ranking (Susanto et al., 2014; Mizumoto and Matsumoto, 2016). 
5 Combining MT and Classifier Systems Since MT and classifiers differ with respect to the types of errors they can better handle, we combine these systems in a pipeline architecture where the MT is applied to the output of classifiers. Classifiers are applied first, since MT is better at handling complex phenomena. First, we add the speller and those classifier components that perform substantially better than MT (articles and verb agreement), due to the ability of classifiers to generalize beyond lexical information. The added classifiers are part of the best system in Table 12. Results are shown in Table 13. Adding classifiers improves the performance, thereby demon2212 System Performance P R F0.5 MT (CoNLL-train) 43.34 11.81 28.25 MT (Lang-8) 66.15 15.11 39.48 Best classifier (Table 12) 60.79 19.93 43.11 Best class.+MT (CoNLL-train) 51.92 25.08 42.77 Best class.+MT (Lang-8) 60.17 25.64 47.40 Table 14: Pipelines: the best classifier system and MT systems. System Performance P R F0.5 Best classifier (Table 12) 60.79 19.93 43.11 Art.+Verb agr.+Spell+MT 64.13 22.15 46.51 Best classifier+MT 60.17 25.64 47.40 CoNLL-2014 top system 39.71 30.10 37.33 Susanto et al. (2014) 53.55 19.14 39.39 Miz. & Mats. (2016) 45.80 26.60 40.00 Table 15: Best systems in this work. Comparison to existing state-of-the-art. strating that the classifiers address a complementary set of mistakes. Adding all three modules improves the results from 28.25 to 40.10 and from 39.48 to 46.51 for the MT systems trained on CoNLL-train and Lang-8, respectively. Notably, the CoNLL-train MT system especially benefits, which shows that when the parallel data is small, it is particularly worthwhile to add classifiers. It should be stressed that even with a smaller parallel corpus, when the three modules are added, the resulting system is very competitive with previous state-of-the-art that uses a lot more supervision: Susanto et al. (2014) and Mizumoto and Matsumoto (2016) use Lang-8. These results show that when one has an MT system, it is possible to improve by investing effort into building select classifiers for phenomena that are most challenging for MT. Finally, Table 14 demonstrates that combining MT with the best classifier system improves the result further when the MT system is trained on Lang-8, but not when the MT system is trained on CoNLL-train. We also note that the CoNLL-train MT system also has a much lower precision than the other systems. We conclude that when only a limited amount of data is available, the classifier approach on its own performs better. As a summary, Table 15 lists the best systems developed in this work – a classifier system, a pipeline of select classifiers and MT, and a pipeline consisting of the best classifier and the MT systems – and compares to existing state-ofthe-art. Our classifier system is a 3-point improvement over the existing state-of-the-art, while the best pipeline is a 7.4-point improvement (20% relative improvement). 6 Discussion and Conclusions A recent surge in GEC research has produced two leading state-of-the-art approaches – machine learning classification and machine translation. Based on the analysis of the methods and an error analysis on the outputs of state-of-the-art systems that adopt these approaches, we explained the differences and the key advantages of each. With respect to error phenomena, we showed that while MT is better at handling complex mistakes, classifiers are better at correcting mistakes that require abstracting beyond lexical context. 
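A minimal sketch of the pipeline evaluated in Section 5 may help fix ideas: targeted classifier modules are applied first and the MT system then post-edits their output. The module and decoder interfaces, the toy agreement rule, and the identity stand-in for the SMT decoder are all assumptions made for illustration.

```python
def correct_pipeline(sentence, classifier_modules, mt_decode):
    """Pipeline from Section 5: targeted classifier modules first, then MT.

    classifier_modules: callables mapping a token list to a corrected token list
                        (e.g. speller, article, verb-agreement modules).
    mt_decode:          callable wrapping an SMT system trained on parallel
                        learner data (treated as a black box here).
    """
    tokens = sentence.split()
    for module in classifier_modules:
        tokens = module(tokens)          # each module fixes one error type
    return mt_decode(" ".join(tokens))   # MT handles remaining, more complex errors

# Toy usage with stand-in modules (illustrative only):
fix_agreement = lambda toks: ["has" if t == "have" and i > 0 and toks[i - 1] == "she"
                              else t for i, t in enumerate(toks)]
identity_mt = lambda s: s                # placeholder for the SMT decoder
print(correct_pipeline("she have a books", [fix_agreement], identity_mt))
```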
We further showed that the key strengths of the classification framework are its flexibility and the ability to train without supervision. We built several systems that draw on the strengths of each approach individually and in a pipeline. The best classifier system and the pipelines outperform reported best results on the task, often by a large margin. The purpose of this work is to gain a better understanding of the advantages offered by each learning method in order to make further progress on the GEC task. We showed that the values provided by each method can be exploited within each approach and in combination, depending on the resources available, such as annotated learner data (MT), and additional linguistic resources (classifiers). As a result, we built robust systems and showed substantial improvement over existing state-of-the-art. For future work, we intend to study the problem in the context of other languages. However, it is important to realize that the problem is far from being solved even in English, and the current work makes very significant progress on it. Acknowledgments The authors thank Michael Flor for his help with running the spelling system on the data and John Wieting for sharing the English Wikipedia corpus. The authors are also grateful to Mark Sammons, Peter Chew, and the anonymous reviewers for the insightful comments on the paper. The work of Dan Roth on this project was supported by DARPA under agreement number FA8750-13-2-0008. Any opinions, findings, conclusions or recommendations are those of the authors and do not necessarily reflect the view of the agencies. References M. Banko and E. Brill. 2001. Scaling to very very large corpora for natural language disambiguation. In Proceedings 2213 of 39th Annual Meeting of the Association for Computational Linguistics, pages 26–33, Toulouse, France, July. T. Brants and A. Franz. 2006. Web 1T 5-gram Version 1. Linguistic Data Consortium. A. Carlson and I. Fette. 2007. Memory-based contextsensitive spelling correction at web scale. In Proceedings of the IEEE International Conference on Machine Learning and Applications (ICMLA). A. J. Carlson, J. Rosen, and D. Roth. 2001. Scaling up context sensitive text correction. In IAAI. D. Dahlmeier and H. T. Ng. 2011. Grammatical error correction with alternating structure optimization. In Proceedings of ACL. D. Dahlmeier and H.T Ng. 2012. A beam-search decoder for grammatical error correction. In Proceedings of EMNLPCoNLL. D. Dahlmeier, H.T. Ng, and S.M. Wu. 2013. Building a large annotated corpus of learner English: The NUS corpus of learner English. In Proceedings of the NAACL Workshop on Innovative Use of NLP for Building Educational Applications. R. Dale and A. Kilgarriff. 2011. Helping Our Own: The HOO 2011 pilot shared task. In Proceedings of the 13th European Workshop on Natural Language Generation. R. Dale, I. Anisimoff, and G. Narroway. 2012. A report on the preposition and determiner error correction shared task. In Proceedings of the NAACL Workshop on Innovative Use of NLP for Building Educational Applications. H. Daume and J. Jagarlamudi. 2011. Domain adaptation for machine translation by mining unseen words. In ACL. Y. Even-Zohar and D. Roth. 2001. A sequential model for multi class classification. In Proceedings of EMNLP. R. De Felice and S. Pulman. 2008. A classifier-based approach to preposition and determiner error correction in L2 English. 
In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 169–176, Manchester, UK, August. M. Felice, Z. Yuan, Ø. Andersen, H. Yannakoudakis, and E. Kochmar. 2014. Grammatical error correction using hybrid systems and type filtering. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task. M. Flor and Y. Futagi. 2012. On using context for automatic correction of non-word misspellings in student essays. In Proceedings of the 7th Workshop on Innovative Use of NLP for Building Educational Applications. Association for Computational Linguistics. M. Flor. 2012. Four types of context for automatic spelling correction. Traitement Automatique des Langues (TAL). (Special Issue: Managing noise in the signal: error handling in natural language processing), 3(53):61–99. M. Gamon, J. Gao, C. Brockett, A. Klementiev, W. Dolan, D. Belenko, and L. Vanderwende. 2008. Using contextual speller techniques and language modeling for ESL error correction. In Proceedings of IJCNLP. M. Gamon. 2010. Using mostly native data to correct errors in learners’ writing. In Proceedings of NAACL. A. R. Golding and D. Roth. 1996. Applying Winnow to context-sensitive spelling correction. In ICML. A. R. Golding and D. Roth. 1999. A Winnow based approach to context-sensitive spelling correction. Machine Learning. N. Han, M. Chodorow, and C. Leacock. 2006. Detecting errors in English article usage by non-native speakers. Journal of Natural Language Engineering, 12(2):115–129. N. Han, J. Tetreault, S. Lee, and J. Ha. 2010. Using an errorannotated learner corpus to develop and ESL/EFL error correction system. In Proceedings of LREC. K. Heafield, G. Hanneman, and A. Lavie. 2009. Machine translation system combination with flexible word ordering. In Proceedings of the Fourth Workshop on Statistical Machine Translation. Association for Computational Linguistics. K. Heafield, I. Pouzyrevsky, J. H. Clark, and P. Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In ACL. E. Izumi, K. Uchimoto, T. Saiga, T. Supnithi, and H. Isahara. 2003. Automatic error detection in the Japanese learners’ English spoken data. In Proceedings of ACL. M. Junczys-Dowmunt and R. Grundkiewicz. 2014. The AMU system in the CoNLL-2014 shared task: Grammatical error correction by data-intensive and feature-rich statistical machine translation. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task. D. Klein and C. D. Manning. 2003. Fast exact inference with a factored model for natural language parsing. In Proceedings of NIPS. P. Koehn, F. J. Och, and D. Marcu. 2003. Statistical phrasebased translation. In ACL. P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In ACL. J. Lee and S. Seneff. 2008. An analysis of grammatical errors in non-native speech in English. In Proceedings of the 2008 Spoken Language Technology Workshop. M. Marneffe, B. MacCartney, and Ch. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of LREC. T. Mizumoto and Y. Matsumoto. 2016. Discriminative reranking for grammatical error correction with statistical machine translation. In NAACL. To appear. T. Mizumoto, M. Komachi, M. Nagata, and Y. Matsumoto. 2011. 
Mining revision log of language learning SNS for automated japanese error correction of second language learners. In IJCNLP. H. T. Ng, S. M. Wu, Y. Wu, Ch. Hadiwinoto, and J. Tetreault. 2013. The CoNLL-2013 shared task on grammatical error correction. In Proceedings of CoNLL: Shared Task. 2214 H. T. Ng, S. M. Wu, T. Briscoe, C. Hadiwinoto, R. H. Susanto, and C. Bryant. 2014. The CoNLL-2014 shared task on grammatical error correction. In Proceedings of CoNLL: Shared Task. F. J. Och and H. Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In ACL. A. Park and R. Levy. 2011. Automated whole sentence grammar correction using a noisy channel model. In ACL, Portland, Oregon, USA, June. Association for Computational Linguistics. V. Punyakanok and D. Roth. 2001. The use of classifiers in sequential inference. In Proceedings of NIPS. N. Rizzolo and D. Roth. 2010. Learning Based Java for Rapid Development of NLP Systems. In Proceedings of LREC. A. Rozovskaya and D. Roth. 2010a. Generating confusion sets for context-sensitive error correction. In Proceedings of EMNLP. A. Rozovskaya and D. Roth. 2010b. Training paradigms for correcting errors in grammar and usage. In Proceedings of NAACL. A. Rozovskaya and D. Roth. 2011. Algorithm selection and model adaptation for ESL correction tasks. In Proceedings of ACL. A. Rozovskaya and D. Roth. 2013. Joint learning and inference for grammatical error correction. In Proceedings of EMNLP. A. Rozovskaya and D. Roth. 2014. Building a State-of-theArt Grammatical Error Correction System. In Transactions of ACL. A. Rozovskaya, M. Sammons, J. Gioja, and D. Roth. 2011. University of Illinois system in HOO text correction shared task. In Proceedings of the European Workshop on Natural Language Generation (ENLG). A. Rozovskaya, K.-W. Chang, M. Sammons, D. Roth, and N. Habash. 2014. The University of Illinois and Columbia system in the CoNLL-2014 shared task. In Proceedings of CoNLL Shared Task. R. H. Susanto, P. Phandi, and H. T. Ng. 2014. System combination for grammatical error correction. In EMNLP. J. Tetreault, J. Foster, and M. Chodorow. 2010. Using parse features for preposition selection and error detection. In Proceedings of ACL. 2215
2016
208
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2216–2225, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Recurrent neural network models for disease name recognition using domain invariant features Sunil Kumar Sahu and Ashish Anand Department of Computer Science and Engineering Indian Institute of Technology Guwahati Assam, India - 781039 {sunil.sahu, anand.ashish}@iitg.ernet.in Abstract Hand-crafted features based on linguistic and domain-knowledge play crucial role in determining the performance of disease name recognition systems. Such methods are further limited by the scope of these features or in other words, their ability to cover the contexts or word dependencies within a sentence. In this work, we focus on reducing such dependencies and propose a domain-invariant framework for the disease name recognition task. In particular, we propose various end-to-end recurrent neural network (RNN) models for the tasks of disease name recognition and their classification into four pre-defined categories. We also utilize convolution neural network (CNN) in cascade of RNN to get character-based embedded features and employ it with word-embedded features in our model. We compare our models with the state-of-the-art results for the two tasks on NCBI disease dataset. Our results for the disease mention recognition task indicate that state-of-the-art performance can be obtained without relying on feature engineering. Further the proposed models obtained improved performance on the classification task of disease names. 1 Introduction Automatic recognition of disease names in biomedical and clinical texts is of utmost importance for development of more sophisticated NLP systems such as information extraction, question answering, text summarization and so on (Rosario and Hearst, 2004). Complicate and inconsistent terminologies, ambiguities caused by use of abbreviations and acronyms, new disease names, multiple names (possibly of varying number of words) for the same disease, complicated syntactic structure referring to multiple related names or entities are some of the major reasons for making automatic identification of the task difficult and challenging (Leaman et al., 2009). State-ofthe-art disease name recognition systems (Mahbub Chowdhury and Lavelli, 2010; Do˘gan and Lu, 2012; Dogan et al., 2014) depends on user defined features which in turn try to capture context keeping in mind above mentioned challenges. Feature engineering not only requires linguistic as well as domain insight but also is time consuming and is corpus dependent. Recently window based neural network approach of (Collobert and Weston, 2008; Collobert et al., 2011) got lot of attention in different sequence tagging tasks in NLP. It gave state-of-art results in many sequence labeling problems without using many hand designed or manually engineered features. One major drawback of this approach is its inability to capture features from outside window. Consider a sentence “Given that the skin of these adult mice also exhibits signs of de novo hair-follicle morphogenesis, we wondered whether human pilomatricomas might originate from hair matrix cells and whether they might possess beta-catenin-stabilizing mutations” (taken verbatim from PMID: 10192393), words such as signs and originate appearing both sides of the word “pilomatricomas”, play important role in deciding it is a disease. 
Any model relying on features defined based on words occurring within a fixed window of neighboring words will fail to capture information of influential words occurring outside this window. Our motivation can be summarized in the following question: can we identify disease name and categorize them without relying on feature en2216 gineering, domain-knowledge or task specific resources? In other words, we can say this work is motivated towards mitigating the two issues: first, feature engineering relying on linguistic and domain-specific knowledge; and second, bring flexibility in capturing influential words affecting model decisions irrespective of their occurrence anywhere within the sentence. For the first, we used character-based embedding (likely to capture orthographic and morphological features) as well as word embedding (likely to capture lexicosemantic features) as features of the neural network models. For the second issue, we explore various recurrent neural network (RNN) architectures for their ability to capture long distance contexts. We experiment with bidirectional RNN (Bi-RNN), bidirectional long short term memory network (BiLSTM) and bidirectional gated recurrent unit (BiGRU). In each of these models we used sentence level log likelihood approach at the top layer of the neural architecture. The main contributions of the work can be summarized as follows • Domain invariant features with various RNN architectures for the disease name recognition and classification tasks, • Comparative study on the use of character based embedded features, word embedding features and combined features in the RNN models. • Failure analysis to check where exactly our models are failed in the considered tasks. Although there are some related works (discussed in sec 6), this is the first study, to the best of our knowledge, which comprehensively uses various RNN architectures without resorting to feature engineering for disease name recognition and classification tasks. Our results show near state-of-the-art performance can be achieved on the disease name recognition task. More significantly, the proposed models obtain significantly improved performance on the disease name classification task. 2 Methods We first give overview of the complete model used for the two tasks. Next we explained embedded features used in different neural network models. We provide short description of different RNN models in the section 2.3. Training and inference strategies are explained in the section 2.4. 2.1 Model Architectures Similar to any named entity recognition task, we formulate the disease mention recognition task as a token level sequence tagging problem. Each word has to be labeled with one of the defined tags. We choose BIO model of tagging, where B stands for beginning, I for intermediate and O for outsider or other. This way we have two possible tags for all entities of interest, i.e., for all disease mentions, and one tag for other entities. Generic neural architecture is shown in the figure 1. In the very first step, each word is mapped to its embedded features. Figure 1: Generic bidirectional recurrent neural network with sentence level log likelihood at the top-layer for sequence tagging task We call this layer as embedding layer. This layer acts as input to hidden layers of RNN model. We study the three different RNN models, and have described them briefly in the section 2.3. 
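For concreteness, the small sketch below shows how gold disease-mention spans can be converted into the BIO tag sequence that the taggers described here are trained to predict; the token example and the label name are invented for illustration.

```python
def spans_to_bio(tokens, mention_spans, label="Disease"):
    """Convert token-level mention spans to BIO tags.

    mention_spans: list of (start, end) token indices, end exclusive.
    """
    tags = ["O"] * len(tokens)
    for start, end in mention_spans:
        tags[start] = "B-" + label                 # first token of the mention
        for i in range(start + 1, end):
            tags[i] = "I-" + label                 # remaining tokens of the mention
    return tags

tokens = ["Mutations", "cause", "breast", "cancer", "in", "women"]
print(spans_to_bio(tokens, [(2, 4)]))
# ['O', 'O', 'B-Disease', 'I-Disease', 'O', 'O']
```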
Output of the hidden layers is then fed to the output layer to compute the scores for all tags of interest (Collobert et al., 2011; Huang et al., 2015). In output layer we are using sentence level log likelihood, to make inference. Table 1 briefly describes all notations used in the paper. 2.2 Features Distributed Word Representation (WE) Distributed word representation or word embedding or simply word vector (Bengio et al., 2003; Collobert and Weston, 2008) is the technique of learning vector representation of a word in a given 2217 Symbols Explanation V Vocabulary of words (v1, v2...v|V |) C Vocabulary of characters (c1, c2..c|C|) T Tag set (t1, t2...t|T|) dwe Dimension of word embedding dchr Dimension of character embedding dce Dimension of character level word embedding Mwe ∈RdweX|V | word embedding matrix, where every column Mwe i is a vector representation of corresponding word vi in V Mcw ∈RdchrX|C| character embedding matrix, where every column Mcw i is a vector representation of corresponding character ci in C. w(i) ∈Rdwe word embedding of vi y(i) ∈Rdce character level word embedding of vi x(i) ∈Rdwe+dce feature vector of word w(i). We get this after concatenating w(i) and y(i) z(i) ∈R|T| score for ith word in sentence at output layer of neural network. Here j th element will indicate the score for tth j tag. W ∗ ∗, U∗ ∗, V ∗ ∗ Parameters of different neural networks Table 1: Notation corpus. Word vectors are present in columns of matrix Mwe. We can get this vector by taking product of matrix Mwe and one hot vector of vi. w(i) = Mwe h(i) (1) Here h(i) is the one hot vector representation of ith word in V. We use pre-trained 50 dimensional word vectors learned using skipgram method on a biomedical corpus (Mikolov et al., 2013b; Mikolov et al., 2013a; TH et al., 2015). Character Level Word Embedding (CE) Word embedding preserve syntactic and semantic information well but fails to seize morphological and shape information. However, for the disease entity recognition task, such information can play an important role. For instance, letter -o- in the word gastroenteritis is used to combine various body parts gastro for stomach, enter for intestines, and itis indicates inflammation. Hence taken together it implies inflammation of stomach and intestines, where -itis play significant role in determining it is actually a disease name. Character level word embedding was first introduced by (dos Santos and Zadrozny, 2014) with the motivation to capture word shape and morphological features in word embedding. Character level word embedding also automatically mitigate the problem of out of vocabulary words as we can embed any word by its characters through character level embedding. In this case, a vector is initialized for every character in the corpus. Then we learn vector representation for any word by applying CNN on each vector of character sequence of that word as shown in figure 2. These character vectors will get update while training RNN in supervised manner only. Since number of characters in the dataset is not high we assume that every character vectors will get sufficient updation while training RNN itself. Figure 2: CNN with Max Pooling for Character Level Embedding (p1 and p2 are padding). Here, filter length is 3. Let {p1, c1, c2...cM, p2} is sequence of characters for a word with padding at beginning and ending of word and let {al, a1, a2...aM, ar} is its sequence of character vector, which we obtain by multiplying Mcw with one hot vector of corresponding character. 
To obtain character level word embedding we need to feed this in convolution neural network (CNN) with max pooling layer (dos Santos and Zadrozny, 2014). Let W c ∈ 2218 RdceX(dchrXk) is a filter and bc bias of CNN, then [y(i)]j = max 1<m<M[W cq(m) + bc]j (2) Here k is window size, q(m) is obtained by concatenating the vector of (k −1)/2 character left to (k−1)/2 character right of cm. Same filter will be used for all window of characters and max pooling operation is performed on results of all. We learn 100 dimensional character embedding for all characters in a given dataset (avoiding case sensitivity) and 25 dimensional character level word embedding from character sequence of words. 2.3 Recurrent Neural Network Models Recurrent Neural Network (RNN) is a class of artificial neural networks which utilizes sequential information and maintains history through its intermediate layers (Graves et al., 2009; Graves, 2013). We experiment with three different variants of RNN, which are briefly described in subsequent subsections. Bi-directional Recurrent Neural Network In Bi-RNN, context of the word is captured through past and future words. This is achieved by having two hidden components in the intermediate layer, as schematically shown in the fig 1. One component process the information in forward direction (left to right) and other in reverse direction. Subsequently outputs of these components then concatenated and fed to the output layer to get score for all tags of the considered word. Let x(t) is a feature vector of tth word in sentence (concatenation of corresponding embedding features wti and yti) and h(t−1) l is the computation of last hidden state at (t −1)th word, then computation of hidden and output layer values would be: h(t) l = tanh(Ulx(t) + W lh(t−1) l ) z(t) = V (h(t) l : h(t) r ) (3) Here Ul ∈RnH×nI and W l ∈RnH×nH, where nI is input vector of length dwe + dce, nH is hidden layer size and V ∈RnO×(nH+nH) is the output layer parameter. h(t) l and h(t) r correspond to left and right hidden layer components respectively and h(t) r is calculated similarly to h(t) l by reversing the words in the sentence. At the beginning h(0) l and h(0) r are initialized randomly. Bi-directional Long Short Term Memory Network Traditional RNN models suffer from both vanishing and exploding gradient (Pascanu et al., 2012; Bengio et al., 2013). Such models are likely to fail where we need longer contexts to do the job. These issues were the main motivation behind the LSTM model (Hochreiter and Schmidhuber, 1997). LSTM layer is just another way to compute a hidden state which introduces a new structure called a memory cell (ct) and three gates called as input (it), output (ot) and forget (ft) gates. These gates are composed of sigmoid activation function and responsible for regulating information in memory cell. The input gate by allowing incoming signal to alter the state of the memory cell, regulates proportion of history information memory cell will keep. On the other hand, the output gate regulates what proportion of stored information in the memory cell will influence other neurons. Finally, the forget gate can modulate the memory cells and allowing the cell to remember or forget its previous state. Computation of memory cell (c(t)) is done through previous memory cell and candidate hidden state (g(t)) which we compute through current input and the previous hidden state. The final output of hidden state would be calculated based on memory cell and forget gate. 
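Before turning to the gate equations below, it may help to see the character-level embedding of Section 2.2 written out. The NumPy sketch that follows implements the padded convolution-plus-max-pooling construction described above; the dimensions, the random toy parameters, and the lower-casing are illustrative assumptions (in the real model the character vectors and the filter are learned during supervised training of the tagger).

```python
import numpy as np

def char_cnn_embedding(word, char_vecs, W_c, b_c, k=3, pad="#"):
    """Character-level word embedding via convolution + max pooling.

    char_vecs: dict mapping a character to a d_chr-dimensional vector.
    W_c:       (d_ce, d_chr * k) filter matrix;  b_c: (d_ce,) bias.
    """
    half = (k - 1) // 2
    chars = [pad] * half + list(word.lower()) + [pad] * half
    windows = []
    for m in range(half, len(chars) - half):
        # q(m): concatenation of the k character vectors centred on position m
        q_m = np.concatenate([char_vecs[c] for c in chars[m - half:m + half + 1]])
        windows.append(W_c @ q_m + b_c)
    return np.max(np.stack(windows), axis=0)      # element-wise max over positions

# Toy dimensions: d_chr = 4, d_ce = 6, window k = 3
rng = np.random.default_rng(0)
alphabet = "abcdefghijklmnopqrstuvwxyz#"
char_vecs = {c: rng.normal(size=4) for c in alphabet}
W_c, b_c = rng.normal(size=(6, 12)), np.zeros(6)
print(char_cnn_embedding("gastroenteritis", char_vecs, W_c, b_c).shape)  # (6,)
```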
In our experiment we used model discussed in (Graves, 2013; Huang et al., 2015). Let x(t) is feature vector for tth word in a sentence and h(t−1) l is previous hidden state then computation of hidden (h(t) l ) and output layer (z(t)) of LSTM would be. i(t) l = σ(U(i) l x(t) + W (i) l h(t−1) l + bi l) f(t) l = σ(U(f) l x(t) + W (f) l h(t−1) l + bf l ) o(t) l = σ(U(o) l x(t) + W (o) l h(t−1) l + bo l ) g(t) l = tanh(U(g) l x(t) + W (g) l h(t−1) l + bg l ) c(t) l = c(t−1) l ∗fl + gl ∗il h(t) l = tanh(c(t) l ) ∗ol Where σ is sigmoid activation function, ∗is a element wise product, U(i) l , U(f) l , U(o) l , U(g) l ∈ RnH×nI and W (i) l , W (o) l , W (f) l , W (g) l ∈RnH×nH, where nI is input size (dwe + dce) and nH is hidden layer size. We compute h(t) r in similar manner as h(t) l by reversing the all words of sentence. Let V ∈RnO×(nH+nH) (nO size of output layer) is 2219 the parameter of output layer of LSTM then computation of output layer will be: z(t) = V (h(t) l : h(t) r ) (4) Bi-directional Gated Recurrent Unit Network A gated recurrent unit (GRU) was proposed by (Cho et al., 2014) to make each recurrent unit to adaptively capture dependencies of different time scales. Similar to the LSTM unit, the GRU has gating units reset r and update z gates that modulate the flow of information inside the unit, however, without having a separate memory cells. The resulting model is simpler than standard LSTM models. We follow (Chung et al., 2014) model of GRU to transform the extracted word embedding and character embedding features to score for all tags. Let x(t) embedding feature for tth word in sentence and h(t−1) l is computation of hidden state for (t−1)th word then computation of GRU would be: z(t) l = σ(U(z) l x(t) + W (z) l h(t−1) l + b(z) l ) r(t) l = σ(U(r) l x(t) + W (r) l h(t−1) l + b(r) l ) ˜h(t) l = tanh(U(h) l x(t) + W (h) l h(t−1) l ∗rl + b(h) l ) h(t) l = z(t) l ∗˜hl + (1 −z(t) l ) ∗h(t−1) l z(t) = V (h(t) l : h(t) r ) (5) Where ∗is pair wise multiplication, U(z) l , U(r) l , U(h) l , U(h) l ∈ RnH×nI and W (z) l , W (r) l W (h) l ∈ RnH×nH are parameters of GRU. V ∈ RnO×(nH+nH) is output layer parameter. Computation of h(t) r is done in similar manner as h(t) l by reversing the words of sentence. 2.4 Training and Inference Equations 3, 4 and 5 are the scores of all possible tags for tth word sentence. We follow sentencelevel log-likelihood (SLL) (Collobert et al., 2011) approach equivalent to linear-chain CRF to infer the scores of a particular tag sequence for the given word sequence. Let [w]|s| 1 is sentence and [t]|s| 1 is the tag sequence for which we want to find the joint score, then score for the whole sentence with the particular tag sequence would be: s([w]|s| 1 , [t]|s| 1 ) = X 1≤i≤|s| (W trans ti−1,ti + z(i) ti ), (6) where W trans is transition score matrix and W trans i,j is indicating the transition score moving from tag ti to tj; tj is tag for the jth word; z(i) ti is the output score from the neural network model for the tag ti of ith word. To train our model we used cross entropy loss function and adagrad (Duchi et al., 2010) approach to optimize the loss function. Entire neural network parameters, word embedding, character embedding and W trans (transition score matrix used in the SLL) was updated during training. Entire code has been implemented using theano (Bastien et al., 2012) library in python language. 
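To make the sentence-level scoring of Eq. (6) concrete, the sketch below computes the score of a given tag sequence and recovers the best-scoring sequence with standard Viterbi decoding. The handling of the start transition, the toy dimensions, and the random scores are assumptions; training additionally normalises these scores over all possible tag sequences, which is omitted here.

```python
import numpy as np

def sequence_score(z, W_trans, tag_seq, start_tag=0):
    """Score of one tag sequence under Eq. (6): network output scores z(i)
    plus tag-transition scores (the start handling is an assumption)."""
    score, prev = 0.0, start_tag
    for i, t in enumerate(tag_seq):
        score += W_trans[prev, t] + z[i, t]
        prev = t
    return score

def viterbi_decode(z, W_trans, start_tag=0):
    """Best tag sequence under the same scoring; standard dynamic program."""
    n, T = z.shape
    dp = np.full((n, T), -np.inf)
    back = np.zeros((n, T), dtype=int)
    dp[0] = W_trans[start_tag] + z[0]
    for i in range(1, n):
        for t in range(T):
            cand = dp[i - 1] + W_trans[:, t] + z[i, t]
            back[i, t] = int(np.argmax(cand))
            dp[i, t] = cand[back[i, t]]
    tags = [int(np.argmax(dp[-1]))]
    for i in range(n - 1, 0, -1):          # follow back-pointers
        tags.append(int(back[i, tags[-1]]))
    return tags[::-1]

# Toy example with 3 tags (O, B, I) and a 4-token sentence.
rng = np.random.default_rng(1)
z = rng.normal(size=(4, 3))               # output-layer scores z(i)
W_trans = rng.normal(size=(3, 3))         # transition score matrix
best = viterbi_decode(z, W_trans)
print(best, sequence_score(z, W_trans, best))
```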
3 Experiments 3.1 Dataset We used NCBI dataset (Do˘gan and Lu, 2012), the most comprehensive publicly available dataset annotated with disease mentions, in this work. NCBI dataset has been manually annotated by a group of medical practitioners for identifying diseases and their types in biomedical articles. All disease mentions were categorized into four different categories, namely, specific disease, disease class, composite disease and modifier. A word is annotated as specific disease, if it indicates a particular disease. Disease class category indicates a word describing a family of many specific diseases, such as autoimmune disorder. A string signifying two or more different disease mentions is annotated with composite mention. Modifier category indicates disease mention has been used as modifiers for other concepts. This dataset is a extension of the AZDC dataset (Leaman et al., 2009) which was annotated with disease mentions only and not with their categories. Statistics of the dataset is mentioned in the Table 2. Corpus Train set Dev set Test set sentences 5661 939 961 disease 5148 791 961 spe. dis. 2959 409 556 disease class 781 127 121 modifier 1292 218 264 comp. men. 116 37 20 Table 2: Dataset statistics. spe. dis. : specific disease and comp. men.: composite mention In our evaluation we used this dataset in two settings, A: disease mention recognition, where all 2220 Task Model Validation Set Test Set Precision Recall F1 Score Precision Recall F1 Score A NN+CE 76.98 75.80 76.39 78.51 72.75 75.52 Bi-RNN+CE 71.96 74.90 73.40 74.14 72.12 73.11 Bi-GRU+CE 76.28 74.14 75.19 76.03 69.81 72.79 Bi-LSTM+CE 81.52 72.86 76.94 76.98 75.80 76.39 B NN+CE 67.27 53.45 59.57 67.90 49.95 57.56 Bi-RNN+CE 61.34 56.32 58.72 60.32 57.28 58.76 Bi-GRU+CE 61.94 59.11 60.49 62.56 56.50 59.38 Bi-LSTM+CE 61.82 57.03 59.33 64.74 55.53 59.78 Table 3: Performance of various models using 25 dimensional CE features, A:Disease name recognition, B: Disease classification task disease types are flattened into a single category and, the B: disease class recognition, where we need to decide exact categories of disease mentions. It is noteworthy to mention that the Task B is more challenging as it requires model to capture semantic contexts to put disease mentions into appropriate categories. 4 Results and Discussion Evaluation of different models using CE We first evaluate the performance of different RNNs using only character embedding features. We compare the results of RNN models with window based neural network (Collobert et al., 2011) using sentence level log likelihood approach (NN + CE). For the window based neural network, we considered window size 5 (two words from both left and right, and one central word) and same settings of character embedding were used as features. The same set of parameters are used in all experiments unless we mention specifically otherwise. We used exact matching scheme to evaluate performance of all models. Table 3 shows the results obtained by different RNN models with only character level word embedding features. For the task A (Disease name recognition) Bi-LSTM and NN models gave competitive performance on the test set, while Bi-RNN and Bi-GRU did not perform so well. On the other hand for the task B, there is 2.08% −3.8% improved performance (F1-score) shown by RNN models over the NN model again on the test set. Bi-LSTM model obtained F1-score of 59.78% while NN model gave 57.56%. 
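Since all results above use exact matching, a short sketch of that evaluation may be useful. The span representation (sentence id, token offsets, label) is an assumption about how mentions are compared, and the toy spans are invented; partial overlaps count as errors.

```python
def exact_match_prf(gold_spans, pred_spans):
    """Exact-match precision/recall/F1 over mention spans.

    Each span is a (sentence_id, start, end, label) tuple; a prediction counts
    only if all four fields match a gold span exactly.
    """
    gold, pred = set(gold_spans), set(pred_spans)
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

gold = [(0, 2, 4, "SpecificDisease"), (1, 5, 6, "Modifier")]
pred = [(0, 2, 4, "SpecificDisease"), (1, 5, 7, "Modifier")]
print(exact_match_prf(gold, pred))   # (0.5, 0.5, 0.5)
```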
As discussed earlier, task B is difficult than task A as disease category is more likely to be influenced by the words falling outside the context window considered in window based methods. This could be reason for RNN models to perform well over the NN model. This hypothesis will be stronger if we observe similar pattern in our other experiments. Evaluation of different models with WE and WE+CE Next we investigated the results obtained by the various models using only 50 dim word embedding features. The first part of table 4 shows the results obtained by different RNNs and the window based neural network (NN). In this case RNN models are giving better results than the NN model for both the tasks. In particular performance of Bi-LSTM models are best than others in both the tasks. We observe that for the task A, RNN models obtained 1.2% to 3% improvement in F1-score than the baseline NN performance. Similarly 2.55% to 4% improvement in F1-score are observed for the task B, with Bi-LSTM model obtaining more than 4% improvement. In second part of this table we compare the results obtained by various models using the features set obtained by combining the two feature sets. If we look at performance of individual model using three different set of features, model using only word embedding features seems to give consistently best performance. Among all models, BiLSTM using word embedding features obtained best F1-scores of 79.13% and 63.16% for the tasks A and B respectively. Importance of tuning pre-trained word vectors We further empirically evaluate the importance of updating of word vectors while training. For this, we performed another set of experiments, where pre-trained word vectors are not updated while 2221 Task Model Validation Set Test Set Precision Recall F1 Score Precision Recall F1 Score A NN+WE 81.86 76.82 79.26 80.32 73.58 76.81 Bi-RNN+WE 84.14 77.46 80.67 82.49 73.58 77.78 Bi-GRU+WE 84.51 78.23 81.25 82.32 75.16 78.58 Bi-LSTM+WE 85.13 77.72 81.26 84.87 74.11 79.13 B NN+WE 65.33 56.43 60.55 64.23 57.14 60.48 Bi-RNN+WE 63.62 56.84 60.04 67.47 57.50 62.09 Bi-GRU+WE 66.42 57.41 61.59 68.25 58.58 63.05 Bi-LSTM+WE 67.48 58.01 62.39 68.97 58.25 63.16 A NN+WE+CE 76.37 78.62 77.48 74.92 75.16 75.04 Bi-RNN+WE+CE 76.10 75.03 75.56 77.01 72.33 74.59 Bi-GRU+WE+CE 77.73 76.44 77.08 78.04 73.38 75.63 Bi-LSTM+WE+CE 76.94 77.34 77.14 76.10 74.11 75.09 B NN+WE+CE 67.60 56.70 61.67 67.60 56.70 61.67 Bi-RNN+WE+CE 60.94 61.34 61.14 64.36 60.90 62.58 Bi-GRU+WE+CE 61.58 61.99 61.78 61.92 63.85 62.87 Bi-LSTM+WE+CE 64.92 58.61 61.60 61.14 60.54 60.84 Table 4: Performance of various models using 50 dimensional WE features. A:Disease name recognition, B: Disease classification task training. Results obtained on the validation dataset of the Task A are shown in the Table 5. One can observe that performance of all models have deteriorated. Next, instead of using pre-trained word vectors, we initialized each word with zero vector but kept updating them while training. Although performance (Table 6) deteriorated (compare to Table 4) but not as much as in table 5. This observation highlights the importance of tuning word vectors for a specific task during training. 
Model P R F NN+WE 74.02 67.86 70.81 Bi-RNN+WE 72.17 64.40 68.06 Bi-GRU+WE 77.06 70.55 73.66 Bi-LSTM+WE 77.32 73.75 75.49 Table 5: Performance of different models with 50 dim embedded vectors in Task A validation set when word vectors are not getting updated while training Comparison with State-of-art At the end we are comparing our results with stateof-the art results reported in (Do˘gan and Lu, 2012) on this dataset using BANNER (Leaman and Gonzalez, 2008) in table 7. BANNER is a CRF based bio entity recognition model, which uses general linguistic, orthographic, syntactic dependency feaModel P R F NN+RV 81.64 74.01 77.64 Bi-RNN+RV 82.32 72.73 77.2 Bi-GRU+RV 82.48 74.14 78.08 Bi-LSTM+RV 83.41 72.73 77.70 Table 6: Results of different models with 50 dim random vectors in Task A validation set tures. Although the result reported in (Do˘gan and Lu, 2012) (F1-score = 81.8) is better than that of our RNN models but it should be noted that competitive result (F1-score = 79.13%) is obtained by the proposed Bi-LSTM model which does not depend on any feature engineering or domainspecific resources and is using only word embedding features trained in unsupervised manner on a huge corpus. For the task B, we did not find any paper except (Li, 2012). Li (2012) used linear soft margin support vector (SVM) machine with a number of hand designed features including dictionary based features. The best performing proposed model shows more than 37% improvement in F1-score (benchmark: 46% vs Bi-LSTM+WE: 63.16%). 5 Failure Analysis To see where exactly our models failed to recognize diseases, we analyzed the results carefully. 2222 Task Model Validation Set Test Set P R F P R F A Bi-LSTM+WE 85.13 77.72 81.26 84.87 74.11 79.13 BANNER (Do˘gan and Lu, 2012) 81.9 81.8 B Bi-LSTM+WE 67.48 58.01 62.39 68.97 58.25 63.16 SM-SVM(Li, 2012) 66.1 35.2 46.0 Table 7: Comparisons of our best model results and state-of-art results. SM-SVM :Soft Margin Support Vector Machine We found that significant proportion of errors are coming due to use of acronyms of diseases and use of disease form which is rarely appearing in our corpus. Examples of few such cases are “CD”, “HNPCC”,“SCA1”. We observe that this error is occurring because we do not have exact word embedding for these words. Most of the acronyms in the disease corpus were mapped to rare-word embedding1. Another major proportion of errors in our results were due to difficulty in recognizing nested forms of disease names. For example, in all of the following cases: “hereditary forms of ’ovarian cancer’” , “inherited ‘breast cancer’”, “male and female ‘breast cancer’”, part of phrase such as ovarian cancer in hereditary forms of ovarian cancer, breast cancer in inherited breast cancer and male and female breast cancer are disease names and our models are detecting this very well. However, according to annotation scheme if any disease is part of nested disease name, annotators considered whole phrase as a single disease. So even our model is able to detect part of the disease accurately but due to the exact matching scheme, this will be false positive for us. 6 Related Research In biomedical domain, named entity recognition has attracted much attention for identification of entities such as genes and proteins (Settles, 2005; Leaman and Gonzalez, 2008; Leaman et al., 2009) but not as much for disease name recognition. Notable works, such as of Chowdhury and Lavelli (2010), are mainly conditional random field (CRF) based models using lots of manually designed template features. 
These include linguistic, orthographic, contextual and dictionary based features. However, they have evaluated their model on the AZDC dataset which is small compared to 1we obtained pre-trained word-embedding features from (TH et al., 2015) and in their pre-processing strategy, all words of frequency less than 50 were mapped to rare-word. the NCBI dataset, which we have considered in this study. Nikfarjam et al. (2015) have proposed a CRF based sequence tagging model, where cluster id of embedded word as an extra feature with manually engineered features is used for adverse drug reaction recognition in tweets. Recently deep neural network models with minimal dependency on feature engineering have been used in few studies in NLP including NER tasks (Collobert et al., 2011; Collobert and Weston, 2008). dos Santos et al. (2015) used deep neural network based model such as window based network to recognize named entity in Portuguese and Spanish texts. In this work, they exploit the power of CNN to get morphological and shape features of words in character level word embedding, and used it as feature with concatenation of word embedding. Their results indicate that CNN are able to preserve morphological and shape features through character level word embedding. Our models are quite similar to this model but we used different variety of RNN in place of window based neural network. Labeau et al. (2015) used Bi-RNN with character level word embedding only as a feature for PoS tagging in German text. Their results also show that with only character level word embedding we can get state-of-art results in PoS tagging in German text. Our model used word embedding as well as character level word embedding together as features and also we have tried more sophisticated RNN models such as LSTM and GRU in bi-directional structure. More recent work of Huang et al. (2015) used LSTM and CRF in variety of combination such as only LSTM, LSTM with CRF and Bi-LSTM with CRF for PoS tagging, chunking and NER tasks in general texts. Their results shows that Bi-LSTM with CRF gave best results in all these tasks. These two works have used either Bi-RNN with character embedding features or Bi-LSTM with word embedding 2223 features in general or news wire texts, whereas in this work we compare the performance of three different types of RNNs: Bi-RNN, Bi-GRU and Bi-LSTM with both word embedding and character embedding features in biomedical text for disease name recognition. 7 Conclusions In this work, we used three different variants of bidirectional RNN models with word embedding features for the first time for disease name and class recognition tasks. Bidirectional RNN models are used to capture both forward and backward long term dependencies among words within a sentence. We have shown that these models are able to obtain quite competitive results compared to the benchmark result on the disease name recognition task. Further our results have shown a significantly improved results on the relatively harder task of disease classification which has not been studied much. All our results were obtained without putting any effort on feature engineering or requiring domain-specific knowledge. Our results also indicate that RNN based models perform better than window based neural network model for the two tasks. This could be due to the implicit ability of RNN models to capture variable range dependencies of words compared to explicit dependency on context window size of window based neural network models. 
Acknowledgments We acknowledge the use of computing resources made available from the Board of Research in Nuclear Science (BRNS), Dept of Atomic Energy (DAE) Govt. of India sponsered project (No.2013/13/8-BRNS/10026) by Dr Aryabartta Sahu at Department of Computer Science and Engineering, IIT Guwahati. References Fr´ed´eric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. 2012. Theano: new features and speed improvements. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. J. Mach. Learn. Res., 3:1137–1155, March. Yoshua Bengio, Nicolas Boulanger-Lewandowski, and Razvan Pascanu. 2013. Advances in optimizing recurrent networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 8624–8628. IEEE. KyungHyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. CoRR, abs/1409.1259. Junyoung Chung, C¸ aglar G¨ulc¸ehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR, abs/1412.3555. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning, pages 160–167. ACM. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493–2537, November. Rezarta Islamaj Dogan, Robert Leaman, and Zhiyong Lu. 2014. NCBI disease corpus: A resource for disease name recognition and concept normalization. Journal of Biomedical Informatics, 47:1–10. Cicero Nogueira dos Santos and Bianca Zadrozny. 2014. Learning character-level representations for part-of-speech tagging. In International Conference on Machine Learning (ICML), volume 32, pages 1818–1826. JMLR W&CP. Cicero dos Santos, Victor Guimaraes, RJ Niter´oi, and Rio de Janeiro. 2015. Boosting named entity recognition with neural character embeddings. Proceedings of NEWS 2015 The Fifth Named Entities Workshop, page 9. Rezarta Islamaj Do˘gan and Zhiyong Lu. 2012. An improved corpus of disease mentions in pubmed citations. In Proceedings of the 2012 Workshop on Biomedical Natural Language Processing, BioNLP ’12, pages 91–99, Stroudsburg, PA, USA. Association for Computational Linguistics. John Duchi, Elad Hazan, and Yoram Singer. 2010. Adaptive subgradient methods for online learning and stochastic optimization. Technical Report UCB/EECS-2010-24, EECS Department, University of California, Berkeley, Mar. Alex Graves, Marcus Liwicki, Santiago Fern´andez, Roman Bertolami, Horst Bunke, and J¨urgen Schmidhuber. 2009. A novel connectionist system for unconstrained handwriting recognition. IEEE Trans. Pattern Anal. Mach. Intell., 31(5):855–868, May. Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850. 2224 Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735– 1780, November. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. CoRR, abs/1508.01991. Matthieu Labeau, Kevin Lser, and Alexandre Allauzen. 2015. Non-lexical neural architecture for fine-grained pos tagging. 
In Llus Mrquez, Chris Callison-Burch, Jian Su, Daniele Pighin, and Yuval Marton, editors, EMNLP, pages 232–237. The Association for Computational Linguistics. Robert Leaman and Graciela Gonzalez. 2008. Banner: An executable survey of advances in biomedical named entity recognition. In Russ B. Altman, A. Keith Dunker, Lawrence Hunter, Tiffany Murray, and Teri E. Klein, editors, Pacific Symposium on Biocomputing, pages 652–663. World Scientific. Robert Leaman, Christopher Miller, and G Gonzalez. 2009. Enabling recognition of diseases in biomedical text with machine learning: corpus and benchmark. Proceedings of the 2009 Symposium on Languages in Biology and Medicine, 82(9). Gang Li. 2012. Disease mention recognition using soft-margin svm. Training, 593:5–148. Md. Faisal Mahbub Chowdhury and Alberto Lavelli. 2010. Disease mention recognition with specific features. In Proceedings of the 2010 Workshop on Biomedical Natural Language Processing, BioNLP ’10, pages 83–90, Stroudsburg, PA, USA. Association for Computational Linguistics. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119. Azadeh Nikfarjam, Abeed Sarker, Karen OConnor, Rachel Ginn, and Graciela Gonzalez. 2015. Pharmacovigilance from social media: mining adverse drug reaction mentions using sequence labeling with word embedding cluster features. Journal of the American Medical Informatics Association, page ocu041. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2012. Understanding the exploding gradient problem. CoRR, abs/1211.5063. Barbara Rosario and Marti A. Hearst. 2004. Classifying semantic relations in bioscience texts. In Proceedings of the 42Nd Annual Meeting on Association for Computational Linguistics, ACL ’04, Stroudsburg, PA, USA. Association for Computational Linguistics. B. Settles. 2005. ABNER: An open source tool for automatically tagging genes, proteins, and other entity names in text. Bioinformatics, 21(14):3191–3192. MUNEEB TH, Sunil Sahu, and Ashish Anand. 2015. Evaluating distributed word representations for capturing semantics of biomedical concepts. In Proceedings of BioNLP 15, pages 158–163, Beijing, China, July. Association for Computational Linguistics. 2225
2016
209
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 216–225, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Metaphor Detection with Topic Transition, Emotion and Cognition in Context Hyeju Jang, Yohan Jo, Qinlan Shen, Michael Miller, Seungwhan Moon, Carolyn P. Ros´e Language Technologies Institute Carnegie Mellon University 5000 Forbes Ave, Pittsburgh, PA 15213, USA {hyejuj,yohanj,qinlans,millerm,seungwhm,cprose}@cs.cmu.edu Abstract Metaphor is a common linguistic tool in communication, making its detection in discourse a crucial task for natural language understanding. One popular approach to this challenge is to capture semantic incohesion between a metaphor and the dominant topic of the surrounding text. While these methods are effective, they tend to overclassify target words as metaphorical when they deviate in meaning from its context. We present a new approach that (1) distinguishes literal and non-literal use of target words by examining sentence-level topic transitions and (2) captures the motivation of speakers to express emotions and abstract concepts metaphorically. Experiments on an online breast cancer discussion forum dataset demonstrate a significant improvement in metaphor detection over the state-of-theart. These experimental results also reveal a tendency toward metaphor usage in personal topics and certain emotional contexts. 1 Introduction Figurative language is commonly used in human communication ranging from literature to everyday speech. One of the most common forms of non-literal language is metaphor, in which two dissimilar concepts are compared. In the utterance, “Time is money” (Lakoff and Johnson, 1980), for example, the concept of “time” is compared to “money” to emphasize that time is valuable. Bringing in information from another domain allows more effective ways of expressing thoughts, feelings, and ideas than only using literal language. Previous approaches to modeling metaphor have used either the semantic and syntactic information in just the sentence that contains a metaphor (Turney et al., 2011; Tsvetkov et al., 2014), or the context beyond a single sentence (Broadwell et al., 2013; Strzalkowski et al., 2013; Schulder and Hovy, 2014; Klebanov et al., 2015; Jang et al., 2015) to detect topical discrepancy between a candidate metaphor and the dominant theme (See Section 2 for more detailed literature review). Although previous approaches were effective at capturing some aspects of the governing context of a metaphor, the space of how to best use the contextual information is still wide open. Previous context-based models tend to overclassify literal words as metaphorical if they find semantic contrast with the governing context. These cases manifested in the work by Schulder and Hovy (2014) and Jang et al. (2015) as high recall but low precision for metaphorical instances. We present a new approach that uses lexical and topical context to resolve the problem of low precision on metaphor detection. To better capture the relevant context surrounding a metaphor, we approach the problem in two directions. First, we hypothesize that topic transition patterns between sentences containing metaphors and their contexts are different from that of literal sentences. To this end, we incorporate several indicators of sentence-level topic transitions as features, such as topic similarity between a sentence and its neighboring sentences, measured by Sentence LDA. 
Second, we observe that metaphor is often used to express speakers’ emotional experiences; we therefore model a speaker’s motivation in using metaphor by detecting emotion and cognition words in metaphorical and literal sentences and their contexts. To demonstrate the efficacy of our approach, we 216 evaluate our system on the metaphor detection task presented by Jang et al. (2015) using a breast cancer discussion forum dataset. This dataset is distinct in that it features metaphors occurring in conversational text, unlike news corpora or other formal texts typical in computational linguistics. Our contributions are three-fold: (1) We extend the previous approaches for contextually detecting metaphor by exploring topic transitions between a metaphor and its context rather than only detecting lexical discrepancies. In addition, (2) we propose to capture emotional and cognitive content to better uncover speakers’ motivation for using metaphors. Lastly, (3) through our empirical evaluation, we find that metaphor occurs more frequently around personal topics. 2 Relation to Prior Work Research in automatic metaphor detection has spanned from detecting metaphor in limited sets of syntactic constructions to studying the use of metaphor in discourse, with approaches ranging from rule-based methods using lexical resources to statistical machine learning models. Here, we focus in particular on approaches that use context wider than a sentence for metaphor detection. For a more thorough review of metaphor processing systems, refer to Shutova (2015). The main idea behind using context in metaphor detection is that metaphorically used words tend to violate lexical cohesion in text. Different methods, however, approach the problem of detecting semantic outliers in different ways. Li and Sporleder (2009; 2010) identify metaphorical idioms using the idea that non-literal expressions break lexical cohesion of a text. Li and Sporleder (2009) approached the problem by constructing a lexical cohesion graph. In the graph, content words in a text are represented as vertices, which are connected by edges representing semantic relatedness. The intuition behind their approach was that non-literal expressions would lower the average semantic relatedness of the graph. To classify a word as literal or metaphorical, Li and Sporleder (2010) use Gaussian Mixture Models with semantic similarity features, such as the relatedness between this target word and words in its context. Broadwell et al. (2013) and Strzalkowski et al. (2013) base their approach on the idea that metaphors are likely to be concrete words that are not semantically associated with the surrounding context. Broadwell et al. (2013) implemented this idea using topic chains, which consist of noun phrases that are connected by pronominal mention, repetition, synonym, or hyponym relations. Strzalkowski et al. (2013) build on this idea by taking nouns and adjectives around the target concept as candidate source relations. They filtered out candidate sources that were in the same topical chain as the target concept or were not linked to the word being classified by a direct dependency path. Schulder and Hovy (2014) also hypothesize that novel metaphors are marked by their unusualness in a given context. They use a domain-specific term relevance metric, which measures how typical a term is for the domain associated with the literal usage of a word, and common relevance, which measures how common a word is across domains. 
If a term is neither typical for a text’s domain nor common, it is taken as a metaphor candidate. A particular strength of this approach is its accommodation of common words without discriminative power, which often confuse contextbased models. Jang et al. (2015) model context by using both global context, the context of an entire post, and local context, the context within a sentence, in relationship to a word being classified as metaphorical or literal. They used word categories from FrameNet, topic distribution, and lexical chain information (similar in concept to the topic chain information in (Broadwell et al., 2013)) to model the contrast between a word and its global context. To model the contrast between a word and its local context, they used lexical concreteness, word categories and semantic relatedness features. Mohler et al. (2013) built a domain-aware semantic signature for a text to capture the context surrounding a metaphorical candidate. Unlike other approaches that try to discriminate metaphors from their context, their approach uses binary classifiers to compare the semantic signature for a text with that of known metaphors. The above approaches attempted to capture governing context in various ways and were effective when applied to the problem of metaphor detection. However, these methods tend to overclassify literal instances as metaphorical when semantic cohesion is violated within their governing contexts. Additionally, these methods could 217 fail to detect extended metaphors, which span over wider contexts. In this paper, we specifically focus on the problem of discriminating literal instances from metaphorical instances by expanding the scope of what is captured within a context. Like (Mohler et al., 2013), we share the intuition that there could be associations between specific metaphors and their contexts, but we relax the assumption that metaphors must be similar to known metaphors. 3 Our Approach To better capture the distinctions between metaphorical and literal usages of the same word (target word), we approach the task in two directions. First, we model how topics in context change for both metaphorical and literal instances of a target word (Section 3.1). Second, we consider the situational context for why individuals choose to use metaphor (Section 3.2). We use multi-level modeling to combine these two types of features with the specific target word to model interactions between the features and a particular metaphor (Section 3.3). 3.1 Topic Transition In writing, cohesion refers to the presence or absence of explicit cues in the text that allow the reader to make connections between ideas (Crossley and McNamara, 2010). For example, overlapping words and concepts between sentences indicate that the same ideas are being referred to across these sentences. Metaphorically used words tend to be semantically incohesive with the governing context. Therefore, determining semantic or topical cohesion is important for metaphor detection. However, even if a text is literal and cohesive, not all words within the text are semantically related. In example (1), a human could easily determine that “pillows”, “music”, “flickering candles”, and “a foot massage” share the theme of relaxation. But it is difficult to define their relatedness computationally – these terms are not synonyms, hypernyms, antonyms, or in any other well-defined lexical relation. 
Additionally, even if the whole sentence is correctly interpreted as ways of indulging oneself, it is still semantically contrasted with the surrounding sentences about medicine. In this example, the target word “candle” is used literally, but the contrast between the sentence containing the target word and its context makes it computationally difficult to determine that it is not metaphorical: (1) ... yet encouraged to hear you have a diagnosis and it’s being treated. Since you have to give up your scented stuff you’ll just have to figure out some very creative ways to indulge yourself. Soft pillows, relaxing music, flickering candles, maybe a foot massage. Let’s hope your new pain relief strategy works and the Neulasta shot is not so bad . I never had Taxotere, but have read it can be much easier than AC for many people. ... Example (2) also shows semantic inconsistency between the candidate metaphor “boat” and the surrounding sentences about medicine. However, in this example, “boat” is metaphorically used. Thus, it is difficult to determine whether a word is metaphorical or literal when there is semantic contrast because both example (1) and example (2) show semantic contrast. (2) When my brain mets were discovered last year, I had to see a neurosurgeon. He asked if I understood that my treatment was pallative care. Boy, did it rock my boat to hear that phrase! I agree with Fitz, pallative treatment is to help with pain and alleviate symptoms.....but definitely different than hospice care. The primary difference between these two examples is in the nature of the semantic contrast. In example (1), the topic of the sentence containing “candle” is relaxation, while the topic of the previous and following sentences is medicine. The transition between medicine and relaxation tends to be more literal, whereas the transition between the topic in the sentence containing “boat” and the surrounding medical topic sentences tends to be more metaphorical. We use these differences in the topic transition for metaphor detection. We consider topic transitions at the sentence level, rather than the word level, because people often represent an idea at or above the sentence level. Thus, topic is betterrepresented at the sentence level. 218 To model context at the sentence level, we first assign topics to each sentence using Sentence Latent Dirichlet Allocation (LDA) (Jo and Oh, 2011). Sentence LDA has two main advantages over standard LDA for our work. First, while standard LDA assumes that each word is assigned a topic derived from the topic distribution of a document, Sentence LDA makes the constraint that all words in the same sentence must be assigned the same topic. Due to this property, the generated topics are better aligned with the role or purpose of a sentence, compared to topics generated from LDA. Additionally, having each sentence assigned to one topic helps us avoid using heuristics for representing the topic of each sentence. 1 Using Sentence LDA, we modeled four features to capture how the topic changes around the sentence where a target word resides. We refer to this sentence as the target sentence. Target Sentence Topic (TargetTopic): We hypothesize that sentences containing a metaphor may prefer topics that are different from those of sentences where the same word is used literally. Hence, TargetTopic is a T-dimensional binary feature, where T is the number of topics, that indicates the topic assigned to the sentence containing the target word. 
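As a minimal, purely illustrative sketch (the function and variable names are ours, not part of the published system), the TargetTopic encoding amounts to a one-hot vector over the T Sentence LDA topics:

```python
# Sketch: turn the Sentence LDA topic assigned to the target sentence
# into the T-dimensional binary TargetTopic feature.

def target_topic_feature(target_sentence_topic, num_topics):
    """One-hot indicator of the topic assigned to the target sentence."""
    feature = [0] * num_topics
    feature[target_sentence_topic] = 1
    return feature

# Toy example: T = 10 topics, target sentence assigned to topic 7.
print(target_topic_feature(7, 10))  # [0, 0, 0, 0, 0, 0, 0, 1, 0, 0]
```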
Topic Difference (TopicDiff): We hypothesize that a metaphorical sentence is more likely to be different from its neighboring sentences, in terms of topic, than a literal sentence. Therefore, TopicDiff is a two-dimensional binary feature that indicates whether the topic assigned to the target sentence is different from that of the previous and next sentences. Topic Similarity (TopicSim): Under the same hypothesis as TopicDiff, TopicSim is a twodimensional feature that represents the similarity between the topic of the target sentence and its previous and next sentences. Unlike TopicDiff, which is binary, TopicSim has continuous values between 0 and 1, as we use the cosine similarity between each topic’s word distributions as topic similarity. Note that in Sentence LDA, all topics share the same vocabulary, but assign different probabilities to different words as in LDA although all tokens in a sentence are assigned to the 1We also tried standard LDA for assigning topics to sentences, by representing each sentence as a topic distribution over its words. However, this representation was not as informative as Sentence LDA in our task, so we leave out the LDA topics in further discussion. same topic in Sentence LDA. Topic Transition (TopicTrans): The topic of a metaphorical sentence may extend over multiple sentences, so a topic transition may occur a few sentences ahead or behind the target sentence. TopicTrans looks for the nearest sentences with a different topic before and after the current target sentence and encodes the topics of the different-topic sentences. Hence, TopicTrans is a 2T-dimensional feature, where T is the number of topics, that indicates the topics of the nearest sentences that have a different topic from the target sentence. Topic Transition Similarity (TopicTransSim): The topics before and after a transition, even in the extended case for TopicTrans, are still expected to be more different in metaphorical cases than in literal cases, as we assume for TopicSim. Therefore, TopicTransSim is a two-dimensional continuous feature that encodes the cosine similarity between the topic of the target sentence and the topics of the nearest sentences that have a different topic before and after the target sentence. 3.2 Emotion and Cognition Metaphors are often used to explain or describe abstract ideas, such as difficult concepts or emotions (Meier and Robinson, 2005). (Fainsilber and Ortony, 1987) showed that descriptions of feelings contain more metaphorical language than descriptions of behavior. In our domain, writers are searching for support through the emotionally tumultuous experience of breast cancer and often turn to metaphor to express this emotion. For example, the word “road” can be used as a metaphor to express the emotional experiences of waiting for or passing through steps in treatment. A similar phenomenon is that markers of cognition, such as “I think”, can occur to introduce the abstract source of the metaphor. In example (3), one breast cancer patient in our data describes her speculation about her condition metaphorically, writing, (3) i have such a long road i just wonder what to do with myself. To encode these emotional and cognitive elements as features, we use Linguistic Inquiry Word Count (LIWC) (Tausczik and Pennebaker, 2010). LIWC is a tool that counts word use in certain 219 psychologically relevant categories. Focusing on emotional and cognitive processes, we use the LIWC term lists for categories seen in Table 1. 
LIWC category Example Terms affect ache, like, sweet positive emotion passion, agree, giving negative emotion agony, annoy, miss anxiety embarrass, avoid anger assault, offend sadness despair, grim cognitive mechanisms if, could insight believe, aware cause make, pick discrep would, hope tentativeness anyone, suppose certainty never, true Table 1: Selected LIWC categories. We count the number of words that fall into each category within either an immediate or global context. For these LIWC features, we take the target sentence and its neighboring sentences as the immediate context and the entire post as the global context for a candidate metaphor instance. The counts for each category in either the immediate or global context are used as features encoded by what degree the immediate or global context expresses the emotional or cognitive category. We expect words indicative of emotion and cognition to appear more frequently in metaphorical cases. Our preliminary statistical analysis on the development set revealed that this holds true within the target sentence and shows a tendency in the surrounding sentences. 3.3 Multi-Level Modeling Our topical and emotion and cognition context features are general across target words. However, the specific features that are informative for metaphor identification may depend on the target word. To account for the specificity of target words, we use multi-level modeling (Daume III, 2007). The idea of multi-level modeling is to pair each of our features with every target word while keeping one set of features independent of the target words. There are then multiple copies of each topic transition and emotion/cognition feature, all paired with a different target word. Thus, if there are N target words, our feature space becomes N + 1 times larger. 4 Experiments Our main experimental task is metaphor detection or disambiguation – given a post containing a candidate metaphor word, we aim to determine whether the word is used literally or metaphorically in context. 4.1 Data We conducted experiments on a dataset of posts from a public breast cancer support group discussion forum, annotated by Jang et al. (2015). We chose to work on this dataset because it features metaphors occurring in naturalistic language. In this dataset, posts are restricted to those containing one of seven candidate metaphors that appear either metaphorically or literally: “boat”, “candle”, “light”, “ride”, “road”, “spice”, and “train”. We split the data randomly into a development set of 800 posts for preliminary analysis and a cross-validation set of 1,870 posts for classification as in (Jang et al., 2015). 4.2 Metrics We report five evaluation metrics for every model: kappa, F1 score, precision, recall, and accuracy. Kappa, which corrects for agreement by chance, was calculated between predicted results and actual results. Because the dataset is skewed towards metaphorical instances, we rely on the first four measures over accuracy for our evaluation. 4.3 Baselines We use the following two baselines: the feature set of (Jang et al., 2015) and a context unigram model. Jang et al. (2015): We use the best configuration of features from Jang et al. (2015), the stateof-the-art model on our dataset, as a baseline. This feature set consists of all of their local context features (word category, semantic relatedness, concreteness), all of their global context features except lexical chaining (word category, global topic distribution), and context unigrams. 
Context Unigram Model: All the words in a post, including the target word, are used as context features. 4.4 Settings We ran Sentence LDA, setting the number of topics to 10, 20, 30, 50, and 100. α and β determine the sparsity of the topic distribution of each document and the word distribution of each topic, 220 Model κ F1 P-L R-L P-M R-M A Unigram .435 .714 .701 .434 .845 .943 .824 Unigram + AllTopic + AllLIWC*** .533 .765 .728 .550 .872 .937 .847 Unigram + MM AllTopic + MM AllLIWC*** .543 .770 .754 .546 .872 .946 .852 J .575 .786 .758 .587 .882 .943 .859 J + AllTopic + AllLIWC* .609 .804 .772 .626 .892 .943 .869 J + MM AllTopic** .619 .809 .784 .630 .893 .947 .873 J + MM AllLIWC .575 .787 .757 .589 .882 .942 .859 J + MM AllTopic + MM AllLIWC*** .631 .815 .792 .642 .896 .948 .876 Table 2: Performance on metaphor identification task. (Models) J: Jang et al. (2015), MM - Multilevel Modeling (Metrics) κ: Cohen’s kappa, F1: average F1 score on M/L, P-L: precision on literals, R-L: recall on literals, P-M: precision on metaphors, R-M: recall on metaphors, A: accuracy, *: marginally statistically significant (p < 0.1), **: statistically significant (p < 0.05), ***: highly statistically significant (p < 0.01) improvement over corresponding baseline by Student’s t-test. respectively; the lower the sparser. Following convention, we set these parameters to 0.1 and 0.001, respectively, to enforce sparsity. We also removed the 37 most frequent words in the corpus, drawing the threshold at the point where content words and pronouns started to appear in the ranked list. The models with 10 topics performed the best on the development set, with performance degrading as the number of topics increased. We suspect that poorer performance on the models with more topics is due to feature sparsity. We used the support vector machine (SVM) classifier provided in the LightSIDE toolkit (Mayfield and Ros´e, 2010) with sequential minimal optimization (SMO) and a polynomial kernel of exponent 2. For each experiment, we performed 10-fold cross-validation. We also trained the baselines with the same SVM settings. 4.5 Results The results of our classification experiment are shown in Table 2. We tested our topical and emotion and cognition features in combination with lexical features from our baselines: unigram and Jang et al. (2015). Adding our topical and emotion/cognition features to the baselines improved performance in predicting metaphor detection. We see that our features combined with the unigram features improved over the Unigram baseline although they do not beat the Jang et al. (2015) baseline. However, when our features are combined with the features from Jang et al. (2015), we see large gains in performance. Additionally, our multi-level modeling significantly improved performance by takT0 T1 T2 T3 T4 T5 T6 T7 T8 T9 Topic Distribution of Target Sentences 0 1 Metaphorical Literal Figure 1: Proportions of topics assigned to target sentences, when target words were used metaphorically vs. literally. The proportions of metaphorical and literal cases are different with statistical significance of p < 0.01 by Pearson’s chi-square test. ing into account the effects of specific metaphors. The topical features added to the baseline led to a significant improvement in accuracy, while emotion and cognition features only slightly improved the accuracy without statistical significance. 
However, the combination of these emotion and cognition features with topical features (in the last row of Table 2) leads to improvement. We performed a Student’s t-test for calculating statistical significance. 5 Discussion Metaphorical instances tend to have personal topics. An author was more likely to use target words metaphorically when the target sentence relates more closely to their own experience of disease and treatment. Specifically, metaphors were relatively frequent when people shared their own disease experience (Topic 0, Topic 9) or sympa221 Topic Top Words Example Sentences 0 Disease/ Treatment get, chemo, if, they, as, out, can, like, now, she, feel, did, up, know, think, been, good, time, or, when I’m scared of chemo and ct scans because it makes cancer come back and you become more resistance to treatment with drugs like these later. 1 Food good, they, gt, can, like, eat, fat, or, if, some, one, as, them, get, up, fiber, think, more, what *Martha’s Way* Stuff a miniature marshmallow in the bottom of a sugar cone to prevent ice cream drips. 2 Emotions love, great, laura, good, hope, like, debbie, amy, up, happy, too, everyone, day, glad, look, fun, mary, what, kelly, how Too funny. / You’re so cute! / ene23...the photo in the locket idea sounds great! 3 Time chemo, week, go, last, then, next, weeks, taxol, good, done, treatment, first, start, one, more, rads, after, today, ’ll, now I am now 45, and just had my ONE year anniversary from finishing chemo last week!! 4 Greetings/ Thanks thanks, hugs, hi, here, carrie, thank, welcome, love, us, glad, know, greg, good, everyone, thread, ladies, there, how, sorry, mags Thank you so much for the story!! / Big Hugs! 5 People she, he, they, out, get, up, her, when, like, one, as, from, there, our, time, did, if, can, go, what She has three children and her twin sister has taken her and her 3 children in. 6 Support good, hope, well, happy, everyone, doing, glad, luck, hear, better, take, jen, care, great, liz, birthday, hugs, lol, watson, feeling YAY! / lol. / I wish you all good luck and peace. 7 Relation what, know, she, as, can, her, cancer, if, there, has, think, been, how, like, our, who, when, they, would, us She knows that she has BC but does not know that it has spread. / I just read your message and I wondered about you. 8 Religion god, love, lord, us, prayers, our, bless, dear, her, lu, may, day, patti, thank, know, comfort, amen, xoxo, he, pray Dear Lord, I come to you with a friend that is not doing well, Please bless her that her hands will reach for you threw the last part of her breast cancer fight. 9 Diagnosis diagnosed, when, chemo, she, breast, years, stage, cancer, dx, now, found, nodes, no, after, lump, they, age, then, year, mastectomy I was 64 when diagnosed wtth pure DCIS.....I had my ninght radiation treatment today. / I was diagnosed Nov 2007 at age 45. Table 3: Topics learned by Sentence LDA. T0 T1 T2 T3 T4 T5 T6 T7 T8 T9 Vs. Previous Sentence 0 1 Metaphorical Literal T0 T1 T2 T3 T4 T5 T6 T7 T8 T9 Vs. Next Sentence 0 1 Topic Distribution of the Sentences Nearest to the Target Sentence and with a Different Topic Figure 2: Proportions of the topics of the sentences that are nearest to the target sentence and have a different topic from the target sentence. The proportions of metaphorical and literal cases are different with statistical significance of p < 0.01 by Pearson’s chi-square test. 
thized with other people’s experiences (Topic 7), but were more infrequent when they simply talked about other people in Topic 5 (Figure 1). According to our closer examination of sample sentences, Topic 0 had many personal stories about disease and treatment, and Topic 7 was about learning and relating to other people’s experiences. Example metaphorical expressions include “There is light during chemo.” (Topic 0) and “Hi Dianne - I am glad I found your post as I am sort of in the same Metaphorical Literal 0 1 Vs. Previous Sentence Metaphorical Literal 0 1 Vs. Next Sentence Proportions of Target Sentences With A Different Topic from Context Figure 3: Proportions of target sentences whose topic is different from that of the previous/next sentence, when target words were used metaphorically vs. literally. The proportions of metaphorical and literal cases are different with statistical significance of p < 0.01 by Pearson’s chi-square test. boat.” (Topic 7). Analysis of our LIWC features also supports the reflective nature of metaphors: “insight” and “discrepancy” words such as “wish”, “seem”, and “feel” occur more frequently around metaphorical uses of target terms. The topics of the surrounding context (TopicTrans) were also informative for metaphor detection (Figure 2). However, the topics of the surrounding sentences followed an opposite pattern to the topics of the target sentence; talking 222 G G G G G G Metaphorical Literal 0 1 Vs. Previous Sentence G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G Metaphorical Literal 0 1 Vs. Next Sentence Topic Similarity Between Target Sentence and Context Figure 4: Cosine similarity between the topic of a target sentence and the topic of its previous/next sentence, when target words were used metaphorically vs. literally. The means of the metaphorical and literal cases are different with statistical significance of p < 0.01 by Welch’s t-test. Vs. Previous Sentence Metaphorical Literal 0 1 Vs. Next Sentence Metaphorical Literal 0 1 Topic Similarity Between Target Sentence and Nearest Transitioning Context Figure 5: Cosine similarity of the topic of a target sentence and the topic of the sentences that are nearest to the target sentence and have a different topic from the target sentence. The means of metaphorical and literal cases are different with statistical significance only for the next sentence, with p < 0.01 by Welch’s t-test. about other people (Topic 5) in the context of a target sentence led to more metaphorical usage of target words. Similarly, writers used target words more literally before or after they shared their personal stories (Topic 0). This pattern could be because the topic of the target sentence differs from the topics of the surrounding sentences in these instances, which would mean that the target sentence is a topic that is more likely to be literal. Topic 9, however, does not follow the same pattern. One possible reason is that Topic 9 and Topic 0 tend to frequently co-occur and be metaphorical. Thus, if a target word comes after or before Topic 9 and it is Topic 0, then this word may more likely be metaphorical. Topic transitions are effective indicators of metaphor. Metaphorical instances accompanied more drastic topic transitions than literal instances. This tendency, which matched our hypothesis, was shown in all our topic features. 
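The topic-similarity quantities behind these comparisons (TopicSim and TopicTransSim, reported in Figures 4 and 5) are cosine similarities between topic-word distributions. The sketch below, with toy distributions and variable names of our own rather than the authors' code, shows how such a value could be computed:

```python
import math

def cosine(p, q):
    """Cosine similarity between two topic-word probability vectors."""
    dot = sum(a * b for a, b in zip(p, q))
    return dot / (math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q)))

# Toy topic-word distributions over a five-word vocabulary (illustrative only).
topic_of_target_sentence = [0.50, 0.20, 0.15, 0.10, 0.05]
topic_of_neighbor_sentence = [0.05, 0.10, 0.15, 0.20, 0.50]

# A TopicSim-style feature value for this pair of sentences.
print(round(cosine(topic_of_target_sentence, topic_of_neighbor_sentence), 3))
```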
The immediately neighboring sentences of metaphorical instances were more likely to have a different topic from the target sentence than those of literal instances (Figure 3). Additionally, differences in topic between the target sentence and the neighboring sentences were greater for metaphorical instances (Figure 4). The nearest sentences with topics different from the target sentence (TopicTransSim) also showed this pattern (Figure 5). An interesting finding was that a topic transition after the target sentence was more indicative of metaphor than a transition before. Emotion and cognitive words are discriminative depending on the metaphor. Emotion and cognition in the surrounding contexts, which were captured by the LIWC features, helped identify metaphors when combined with topical features. This result supports the claim in (Fainsilber and Ortony, 1987) that descriptions of feelings contain more metaphorical language than descriptions of behavior. This effect, however, was limited to specific target words and emotions. For example, we saw a higher number of anxiety words in the immediate and global contexts of metaphors, but the trend was the opposite for anger words. This may be because our target words, “boat”, “candle”, “light”, “ride”, “road”, “spice” and “train”, relate more to anxiety in metaphors such as “bumpy road” and “rollercoaster ride”, than to anger. On the other hand, cognitive words had more consistency, as words marking insight and discrepancy were seen significantly higher around metaphorical uses of the target words. These patterns, nevertheless, could be limited to our domain. It would be interesting to explore other patterns in different domains. A multi-level model captures word-specific effects. Our features in context helped recognize metaphors in different ways for different target words, captured by the multi-level model. The paucity of general trends across metaphorical terms does not mean a limited applicability of our method, though, as our features do not suppose any specific trends. Rather, our method only assumes the existence of a correlation between metaphors and the theme of their context, and our multi-level model effectively identifies the interaction between metaphorical terms and their con223 texts as useful information. For all the figures in this section, most target words have a similar pattern. See our supplemental material for graphs by target word. 6 Conclusion We propose a new, effective method for metaphor detection using (1) sentence level topic transitions between target sentences and surrounding contexts and (2) emotion and cognition words. Both types of features showed significant improvement over the state-of-the-art. In particular, our system made significant gains in solving the problem of overclassification in metaphor detection. We also find that personal topics are markers of metaphor, as well as certain patterns in topic transition. Additionally, language expressing emotion and cognition relates to metaphor, but in ways specific to particular candidate words. For our breast cancer forum dataset, we find more words related to anxiety around metaphors. Our proposed features can be expanded to other domains. Though in other domains, the specific topic transition and emotion/cognition patterns would likely be different, these features would still be relevant to metaphor detection. Acknowledgments This research was supported in part by NSF Grant IIS-1302522. 
References George Aaron Broadwell, Umit Boz, Ignacio Cases, Tomek Strzalkowski, Laurie Feldman, Sarah Taylor, Samira Shaikh, Ting Liu, Kit Cho, and Nick Webb. 2013. Using imageability and topic chaining to locate metaphors in linguistic corpora. In Social Computing, Behavioral-Cultural Modeling and Prediction, pages 102–110. Springer. Scott A Crossley and Danielle S McNamara. 2010. Cohesion, coherence, and expert evaluations of writing proficiency. In Proceedings of the 32nd annual conference of the Cognitive Science Society, pages 984–989. Hal Daume III. 2007. Frustratingly easy domain adaptation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 256–263, Prague, Czech Republic, June. Association for Computational Linguistics. Lynn Fainsilber and Andrew Ortony. 1987. Metaphorical uses of language in the expression of emotions. Metaphor and Symbolic Activity, 2(4):239–250. Hyeju Jang, Seunghwan Moon, Yohan Jo, and Carolyn Penstein Ros´e. 2015. Metaphor detection in discourse. In 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, page 384. Yohan Jo and Alice Oh. 2011. Aspect and Sentiment Unification Model for Online Review Analysis. In Proceedings of the fourth ACM international conference on Web search and data mining, pages 815– 824. Beata Beigman Klebanov, Chee Wee Leong, and Michael Flor. 2015. Supervised word-level metaphor detection: Experiments with concreteness and reweighting of examples. NAACL HLT 2015 3rd Metaphor Workshop, page 11. George Lakoff and M. Johnson. 1980. Metaphors we live by. Chicago/London. Linlin Li and Caroline Sporleder. 2010. Using gaussian mixture models to detect figurative language in context. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10, pages 297–300, Stroudsburg, PA, USA. Association for Computational Linguistics. Elijah Mayfield and Carolyn Ros´e. 2010. An interactive tool for supporting error analysis for text mining. In Proceedings of the NAACL HLT 2010 Demonstration Session, pages 25–28. Association for Computational Linguistics. Brian P Meier and Michael D Robinson. 2005. The metaphorical representation of affect. Metaphor and symbol, 20(4):239–257. Michael Mohler, David Bracewell, David Hinote, and Marc Tomlinson. 2013. Semantic signatures for example-based linguistic metaphor detection. In Proceedings of the First Workshop on Metaphor in NLP, pages 27–35. Marc Schulder and Eduard Hovy. 2014. Metaphor detection through term relevance. ACL 2014, page 18. Ekaterina Shutova. 2015. Design and evaluation of metaphor processing systems. Computational Linguistics. Caroline Sporleder and Linlin Li. 2009. Unsupervised recognition of literal and non-literal use of idiomatic expressions. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pages 754–762. Association for Computational Linguistics. Tomek Strzalkowski, George Aaron Broadwell, Sarah Taylor, Laurie Feldman, Boris Yamrom, Samira Shaikh, Ting Liu, Kit Cho, Umit Boz, Ignacio Cases, et al. 2013. Robust extraction of metaphors from novel data. Meta4NLP 2013, page 67. 224 Yla R Tausczik and James W Pennebaker. 2010. The psychological meaning of words: Liwc and computerized text analysis methods. Journal of language and social psychology, 29(1):24–54. Yulia Tsvetkov, Leonid Boytsov, Anatole Gershman, Eric Nyberg, and Chris Dyer. 2014. 
Metaphor detection with cross-lingual model transfer. Peter D Turney, Yair Neuman, Dan Assaf, and Yohai Cohen. 2011. Literal and metaphorical sense identification through concrete and abstract context. In Proceedings of the 2011 Conference on the Empirical Methods in Natural Language Processing, pages 680–690.
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2226–2235, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Domain Adaptation for Authorship Attribution: Improved Structural Correspondence Learning Upendra Sapkota University of Alabama at Birmingham [email protected] Thamar Solorio University of Houston [email protected] Manuel Montes-y-G´omez Instituto Nacional de Astrof´ısica, Optica y Electr´onica [email protected] Steven Bethard University of Alabama at Birmingham [email protected] Abstract We present the first domain adaptation model for authorship attribution to leverage unlabeled data. The model includes extensions to structural correspondence learning needed to make it appropriate for the task. For example, we propose a median-based classification instead of the standard binary classification used in previous work. Our results show that punctuation-based character n-grams form excellent pivot features. We also show how singular value decomposition plays a critical role in achieving domain adaptation, and that replacing (instead of concatenating) non-pivot features with correspondence features yields better performance. 1 Introduction Authorship Attribution (AA) can be used for historical purposes, such as disentangling the different authors contributing to a literary work. It can also help in understanding language evolution and change at the individual level, revealing a writer’s changes in linguistic patterns over time (Hirst and Feng, 2012). Authorship attribution can also help to settle disputes over the original creators of a given piece of text. Or it can help build a prosecution case against an online abuser, an important application especially considering the rising trends in cyber-bullying and other electronic forms of teen violence1. The absorbing social media networks, together with the ever increasing use of electronic communications will require robust approaches to authorship attribution that can help to determine with certainty the author of a text, determine the provenance of a written sample, and in sum, help us determine the trustworthiness of electronic data. 1http://cyberbullying.org/ One of the scenarios that has received limited attention is cross-domain authorship attribution, when we need to identify the author of a text but all the text with known authors is from a different topic, genre, or modality. Here we propose to solve the problem of cross-domain authorship attribution by adapting the Structural Correspondence Learning (SCL) algorithm proposed by Blitzer et al. (2006). We make the following contributions: • We introduce the first domain adaptation model for authorship attribution that combines labeled data in a source domain with unlabeled data from a target domain to improve performance on the target domain. • We examine two sets of features that have previously been successful in cross-domain authorship attribution, explain how these can be used to select the “pivot” features required by SCL, and show that typed n-gram features (which differentiate between the the in their and the the in breathe) produce simpler models that are just as accurate. • We propose a new approach for defining SCL’s pivot feature classification task so that it is able to handle count-based features, and show that this median-based approach outperforms the standard SCL approach. 
• We examine the importance of the dimensionality reduction step in SCL, and show that the singular value decomposition increases robustness even beyond the robustness achieved by SCL’s learned feature transformations. • We propose an alternative approach to combining features within SCL, and show that excluding the non-pivot features from the final classifier generally improves performance. Our experimental results show that using standard SCL for this domain adaptation authorship attribution task improves prediction accuracy by 2226 only 1% over a model without any domain adaptation. In contrast, our proposed improvements to SCL reach an accuracy boost of more than 15% over the no domain adaptation model and of 14% over the standard SCL formulation. The extensions to SCL that we propose in this work are likely to yield performance improvements in other tasks where SCL has been successfully applied, such as part-of-speech tagging and sentiment analysis. We plan to investigate this further in the future. 2 Related Work Cross-Domain Authorship Attribution Almost all previous authorship attribution studies have tackled traditional (single-domain) authorship problems where the distribution of the test data is the same as that of the training data (Madigan et al., 2005; Stamatatos, 2006; Luyckx and Daelemans, 2008; Escalante et al., 2011). However, there are a handful of authorship attribution studies that explore cross-domain authorship attribution scenarios (Mikros and Argiri, 2007; Goldstein-Stewart et al., 2009; Schein et al., 2010; Stamatatos, 2013; Sapkota et al., 2014). Here, following prior work, cross-domain is a cover term for cross-topic, cross-genre, cross-modality, etc., though most work focuses on the cross-topic scenario. Mikros and Argiri (2007) illustrated that many stylometric variables are actually discriminating topic rather than author. Therefore, the authors suggest their use in authorship attribution should be done with care. However, the study did not attempt to construct authorship attribution models where the source and target domains differ. Goldstein-Stewart et al. (2009) performed a study on cross-topic authorship attribution by concatenating the texts of an author from different genres on the same topics. Such concatenation allows some cross-topic analysis, but as each test document contains a mix of genres it is not representative of real world authorship attribution problems. Stamatatos (2013) and Sapkota et al. (2014) explored a wide variety of features, including lexical, stopword, stylistic, and character n-gram, and demonstrated that character n-grams are the most effective features in cross-topic authorship attribution. Stamatatos (2013) concluded that avoiding rare features is effective in both intra-topic and cross-topic authorship attribution by training a SVM classifier on one fixed topic and testing on each of the remaining topics. Sapkota et al. (2014), rather than fixing a single training topic in advance, considered all possible training/testing topic combinations to investigate cross-topic authorship attribution. This showed that training on documents from multiple topics (thematic areas) improves performance in cross-topic authorship attribution (Sapkota et al., 2014), even when controlling the amount of training data. However, none of these studies exploited domain adaptation methods that combine labeled data in a source domain with unlabeled data from a target domain to improve performance on the target domain. 
Instead, they focused on identifying relevant features and simply evaluating them when trained on source-domain data and tested on target-domain data. To our knowledge, we are the first to leverage unlabeled data from the target domain to improve authorship attribution. Domain Adaptation Domain adaptation is the problem of modifying a model trained on data from a source domain to a different, possibly related, target domain. Given the effort and the cost involved in labeling data for a new target domain, there is a lot of interest in the design of domain adaptation techniques. In NLP related tasks, researchers have explored domain adaptation for part-of-speech tagging, parsing, semantic role labeling, word-sense disambiguation, and sentiment analysis (Li, 2012). Daum´e (2007) proposed a feature space transformation method for domain adaptation based on a simple idea of feature augmentation. The basic idea is to create three versions of each feature from the original problem: the general (domainindependent) version, the source specific version, and the target specific version. While generally successful, there are some limitations of this method. First, it requires labeled instances in the target domain. Second, since this method simply duplicates each feature in the source domain as domainindependent and domain-specific versions, it is unable to extract the potential correlations when the features in the two domains are different, but have some hidden correspondences. In contrast, structural correspondence learning (SCL) is a feature space transformation method that requires no labeled instances from the target domain, and can capture the hidden correlations among different domain-independent features. SCL’s basic idea is to use unlabeled data from both the source and target domains to obtain a common feature representation that is meaningful across 2227 domains (Blitzer et al., 2006). Although the distributions of source and target domain differ, the assumption is that there will still be some general features that share similar characteristics in both domains. SCL has been applied to tasks such as sentiment analysis, dependency parsing, and partof-speech tagging, but has not yet been explored for the problem of authorship attribution. The common feature representation in SCL is created by learning a projection to “pivot” features from all other features. These pivot features are a critical component of the successful use of SCL, and their selection is something that has to be done carefully and specifically to the task at the hand. Tan and Cheng (2009) studied sentiment analysis, using frequently occurring sentiment words as pivot features. Similarly, Zhang et al. (2010) proposed a simple and efficient method for selecting pivot features in domain adaptive sentiment analysis: choose the frequently occurring words or wordbigrams among domains computed after applying some selection criterion. In dependency parsing, Shimizu and Nakagawa (2007) chose the presence of a preposition, a determiner, or a helping verb between two tokens as the pivot features. For partof-speech tagging, Blitzer et al. (2006) used words that occur more than 50 times in both domains as the pivot features, resulting in mostly function words. In cross-lingual adaptation using SCL, semantically related pairs of words from source and target domains were used as pivot features (Prettenhofer and Stein, 2011). 
For authorship attribution, we propose two ways of selecting pivot and nonpivot features based on character n-grams. Another important aspect of the SCL algorithm is associating a binary classification problem with each pivot feature. The original SCL algorithm assumes that pivot features are binary-valued, so creating a binary classification problem for each pivot feature is trivial: is the value 0 or 1? Most previous work on part-of-speech tagging, sentiment analysis, and dependency parsing also had only binary-valued pivot features. However, for authorship attribution, all features are count-based, so translation from a pivot feature value to a binary classification problem is not trivial. We propose a median-based solution to this problem. 3 Methodology Structural Correspondence Learning (Blitzer et al., 2006) uses only unlabeled data to find a common feature representation for a source and a target domain. The idea is to first manually identify “pivot” features that are likely to have similar behavior across both domains. SCL then learns a transformation from the remaining non-pivot features into the pivot feature space. The result is a new set of features that are derived from all the non-pivot features, but should be domain independent like the pivot features. A classifier is then trained on the combination of the original and the new features. Table 1 gives the details of the SCL algorithm. First, for each pivot feature, we train a linear classifier to predict the value of that pivot feature using only the non-pivot features. The weight vectors learned for these linear classifiers, ˆwi, are then concatenated into a matrix, W, which represents a projection from non-pivot features to pivot features. Singular value decomposition is used to reduce the dimensionality of the projection matrix, yielding a reduced-dimensionality projection matrix θ. Finally, a classifier is trained on the combination of the original features and the features generated by applying the reduced-dimensionality projection matrix θ to the non-pivot features x[p:m]. 3.1 Standard SCL parameter definitions Standard SCL does not define how pivot features are selected; this must be done manually for each new task. However, SCL does provide standard definitions for the loss function (L), the conversion to binary values (Bi), the dimensionality of the new correspondence space (d), and the feature combination function (C). 
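Before turning to those standard definitions, a compact sketch of the pipeline in Table 1 may help fix the notation. This is our own simplified illustration (NumPy and scikit-learn, a squared-loss pivot predictor standing in for Huber's robust loss, and the standard greater-than-zero binarization); it is not the authors' implementation:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.svm import LinearSVC

def scl_train(X_unlabeled, X_source, y_source, p, d=25):
    """Sketch of steps 1-5 in Table 1.

    X_unlabeled: (docs, m) feature matrix from both domains (unlabeled).
    X_source, y_source: labeled source-domain features and author labels.
    p: number of pivot features (assumed to occupy columns 0..p-1).
    d: dimensionality of the correspondence space.
    """
    nonpivot = X_unlabeled[:, p:]
    # Steps 1-2: one linear predictor per pivot feature; its weights form a column of W.
    W = np.column_stack([
        Ridge(alpha=1.0).fit(nonpivot, (X_unlabeled[:, i] > 0).astype(int)).coef_
        for i in range(p)
    ])
    # Steps 3-4: SVD of W; keep the top-d left singular vectors as the projection theta.
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    theta = U[:, :d]
    # Step 5: train the final classifier on original plus correspondence features
    # (the pivot+nonpivot+new combination described below).
    new_feats = X_source[:, p:] @ theta
    clf = LinearSVC().fit(np.hstack([X_source, new_feats]), y_source)
    return theta, clf
```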
L is defined as Huber’s robust loss: L(a, b) = ( max(0, 1 −ab)2 if ab ≥−1 −4ab otherwise The conversion from pivot feature values to binary classification is defined as: Bi(y) = ( 1 if y > 0 0 otherwise A few different dimensionalities for the reduced feature space have been explored (Prettenhofer and Stein, 2011), but most implementations have followed the standard SCL description (Blitzer et al., 2006) with d defined as: d = 25 The feature combination function, C, is defined as simple concatenation, i.e., use all of the old pivot 2228 Input: • S = {x : x ∈Rm}, the labeled instances from source domain • U = {x : x ∈Rm}, the unlabeled instances from both domains • p and n such that x[0:p] are the p pivot features and x[p:m] are n = m −p non-pivot features • f : S →A, the source domain labels, where A is the set of authors • L : R × R →R, a loss function • Bi : R →{0, 1} for 0 ≤i < p, a conversion from a real-valued pivot feature i to binary classification • d, the size of the reduced-dimensionality correspondence space to learn • C : Rm × Rd →Rk, a function for combining the original and new features Output: • θ ∈Rn×d, a projection from non-pivot features to the correspondence space • h : Rm+d →A, the trained predictor Algorithm: 1. For each pivot feature i : 0 ≤i < p, learn prediction weights ˆwi = min w∈Rn X x∈U L(w⊤x[p:m], B(xi)) 2. Construct a matrix W ∈Rn×p using each ˆwi as a column 3. Apply singular value decomposition W = UΣV ⊤where U ∈Rn×n, Σ ∈Rn×p, V ⊤∈Rp×p 4. Select the reduced-dimensionality projection, θ = U[0:d,:] ⊤ 5. Train a classifier h from [C(x, x[p:m]θ), f(x)  : x ∈S Table 1: The structural correspondence learning (SCL) algorithm features, all the old non-pivot features, and all the new correspondence features: C(x, z) = [x; z] We call this the pivot+nonpivot+new setting of C. The following sections discuss alternative parameter choices for pivot features, Bi, d, and C. 3.2 Pivot Features for Authorship Attribution The SCL algorithm depends heavily on the pivot features being domain-independent features, and as discussed in Section 2, which features make sense as pivot features varies widely by task. No previous studies have explored structural correspondence learning for authorship attribution, so one of the outstanding questions we tackle here is how to identify pivot features. Research has shown that the most discriminative features in attribution and the most robust features across domains are character n-grams (Stamatatos, 2013; Sapkota et al., 2014). We thus consider two types of character n-grams used in authorship attribution that might make good pivot features. 3.2.1 Untyped Character N-grams Classical character n-grams are simply the sequences of characters in the text. For example, given the text: The structural correspondence character 3-gram features would look like: "The", "he ", "e s", " st", "str", "tru", "ruc", "uct", ... We propose to use as pivot features the p most frequent character n-grams. For non-pivot features, we use the remaining features from prior work (Sapkota et al., 2014). These include both the remaining (lower frequency) character n-grams, as well as stop-words and bag-of-words lexical features. We call this the untyped formulation of pivot features. 3.2.2 Typed Character N-grams Sapkota et al. (2015) showed that classical character n-grams lose some information in merging together instances of n-grams like the which could be a prefix (thesis), a suffix (breathe), or a standalone word (the). 
Therefore, untyped character n-grams were separated into ten distinct categories. Four of the ten categories are related to affixes: prefix, suffix, space-prefix, and space-suffix. Three are wordrelated: whole-word, mid-word, and multi-word. The final three are related to the use of punctuation: beg-punct, mid-punct, and end-punct. For example, the character n-grams from the last section would instead be replaced with: "whole-word:The", "space-suffix:he ", "multi-word:e s", "space-prefix: st", "prefix:str", "mid-word:tru", "mid-word:ruc", "mid-word:uct", ... 2229 Sapkota et al. (2015) demonstrated that n-grams starting with a punctuation character (the beg-punct category) and with a punctuation character in the middle (the mid-punct category) were the most effective character n-grams for cross-domain authorship attribution. We therefore propose to use as pivot features the p/2 most frequent character ngrams from each of the beg-punct and mid-punct categories, yielding in total p pivot features. For non-pivot features, we use all of the remaining features of Sapkota et al. (2015). These include both the remaining (lower frequency) beg-punct and mid-punct character n-grams, as well as all of the character n-grams from the remaining eight categories. We call this the typed formulation of pivot features.2 3.3 Pivot feature binarization parameters Authorship attribution typically relies on countbased features. However, the classic SCL algorithm assumes that all pivot features are binary, so that it can train binary classifiers to predict pivot feature values from non-pivot features. We propose a binarization function to produce a binary classification problem from a count-based pivot feature by testing whether the feature value is above or below the feature’s median value in the training data: Bi(y) = ( 1 if y > median({xi : x ∈S ∪U}) 0 otherwise The intuition is that for count-based features, “did this pivot feature appear at least once in the text” is not a very informative distinction, especially since the average document has hundreds of words, and pivot features are common. A more informative distinction is “was this pivot feature used more or less often than usual?” and that corresponds to the below-median vs. above-median classification. 3.4 Dimensionality reduction parameters The reduced dimensionality (d) of the low-rank representation varies depending on the task at hand, though lower dimensionality may be preferred as it will result in faster run times. We empirically compare different choices for d: 25, 50, and 100. We also consider the question, how critical is dimensionality reduction? For example, if there 2Because the untyped and typed feature sets are designed to directly replicate Sapkota et al. (2014) and Sapkota et al. (2015), respectively, both include character n-grams, but only untyped includes stop-words and lexical features. Topics 4 Authors 13 Documents/author/topic 10 Average sentences/document 53 Average words/document 1034 Table 2: Statistics of the Guardian dataset. are only p = 100 pivot features, is there any need to run singular-value decomposition? The goal here is to determine if SCL is increasing the robustness across domains primarily through transforming non-pivot features into pivot-like features, or if the reduced dimensionality from the singular-value decomposition contributes something beyond that. 3.5 Feature combination parameters It’s not really clear why the standard formulation of SCL uses the non-pivot features when training the final classifier. 
All of the non-pivot features are projected into the pivot feature space in the form of the new correspondence features, and the pivot feature space is, by design, the most domain independent part of the feature space. Thus, it seems reasonable to completely replace the nonpivot features with the new pivot-like features. We therefore consider a pivot+new setting of C: pivot+new: C(x, z) = [x[0:p]; z] We also consider other settings of C, primarily for understanding how the different pieces of the SCL feature space contribute to the overall model. pivot: C(x, z) = x[0:p] nonpivot: C(x, z) = x[p:m] new: C(x, z) = z pivot+nonpivot: C(x, z) = x Note that the pivot+nonpivot setting corresponds to a model that does not apply SCL at all. 4 Dataset To explore cross-domain settings of authorship attribution, we need datasets containing documents from a number of authors from different domains (different topics, different genres). We use a corpus that consists of texts published in The Guardian daily newspaper that is actively used by the authorship attribution community in cross-domain studies (Stamatatos, 2013; Sapkota et al., 2014; 2230 Sapkota et al., 2015). The Guardian corpus contains opinion articles written by 13 authors in four different topics: World, U.K., Society, and Politics. Following prior work, to make the collection balanced across authors, we choose at most ten documents per author for each of the four topics. Table 2 presents some statistics about the datasets. 5 Experimental Settings We trained support vector machine (SVM) classifiers using the Weka implementation (Witten and Frank, 2005) with default parameters. For the untyped features, we used character 3-grams appearing at least 5 times in the training data, a list of 643 predefined stop-words, and the 3,500 most frequent non-stopword words as the lexical features. For the typed features, we used the top 3,500 most frequent 3-grams occurring at least five times in the training data for each of the 10 character n-gram categories. In both cases, we selected p = 100 pivot features as described in Section 3.2. We measured performance in terms of accuracy across all possible topic pairings. That is, we paired each of the 4 topics in the Guardian corpus with each of the 3 remaining topics: train on Politics, test on Society; train on Politics, test on UK; train on Politics, test on World; etc. For each such model, we allowed SCL to learn feature correspondences from the labeled data of the 1 training topic and the unlabeled data of the 1 test topic. This resulted in 12 pairings of training/testing topics. We report both accuracy on the individual pairings and an overall average of the 12 accuracies. We compare performance against two state-ofthe-art baselines: Sapkota et al. (2014) and Sapkota et al. (2015), as described in Section 3.2, and whose features are denoted as untyped and typed, respectively. We replicate these models by using the pivot+nonpivot setting of C, i.e., not including any of the new SCL-based features. 6 Results The following sections explore the results of our innovations in different areas: pivot features, feature binarizations, dimensionality reduction, and feature combination. For each section, we hold the other parameters constant and vary only the one parameter of interest. 
Thus, where not otherwise specified, we set parameters to the best values we observed in our experiments: we set the feature set to typed, the binarization Bi(y) to the median, Dataset untyped typed Politics-Society 61.29 67.74 Politics-UK 66.67 63.33 Politics-World 58.97 64.10 Society-Politics 62.96 62.96 Society-UK 72.50 72.50 Society-World 56.62 48.08 UK-Politics 68.75 60.71 UK-Society 66.13 67.74 UK-World 57.27 58.97 World-Politics 62.50 59.82 World-Society 61.29 62.90 World-UK 46.67 54.44 Average 61.80 61.94 Table 3: Accuracy of untyped and typed feature sets. The difference between the averages is not statistically significant (p=0.927). the reduced dimensionality d to 50, and the feature combination C(x, z) to pivot+new (i.e., we use the old pivot features alongside the new correspondence features). All reports of statistical significance are based on paired, two-tailed t-tests over the 12 different topic pairings. 6.1 Untyped vs. Typed features Table 3 compares the untyped feature set to the typed feature set. Both feature sets perform reasonably well, and substantially better than a model without SCL, where the performance of untyped is 56.43 and typed is 53.62 (see the pivot+nonpivot columns of Table 6 and Table 7, discussed in Section 6.4). Recall that the typed formulation includes only character n-gram features, while the untyped formulation includes stopwords and lexical features as well. Thus, given their very similar performance in Table 3, typed being slightly better, we select the simpler typed feature formulation for the remaining experiments. 6.2 Greater-than-zero vs. Median Binarization Table 4 compares choices for Bi(y), the function for converting a pivot feature value into a binary classification problem. In every single train/test scenario, and for both untyped and typed feature sets, our proposed median-based binarization function yielded performance greater than or equal to that of the traditional SCL greater-than-zero binarization function. This confirms our hypothesis that count-based features were inadequately modeled 2231 Dataset untyped typed >0 >med >0 >med Politics-Society 58.06 61.29 61.29 67.74 Politics-UK 66.67 66.67 63.33 63.33 Politics-World 55.56 58.97 63.81 64.10 Society-Politics 61.81 62.96 62.67 62.96 Society-UK 72.50 72.50 71.00 72.50 Society-World 51.92 56.62 46.00 48.08 UK-Politics 59.82 68.75 60.00 60.71 UK-Society 59.68 66.13 64.52 67.74 UK-World 47.86 57.27 57.27 58.97 World-Politics 56.25 62.50 56.50 59.82 World-Society 50.00 61.29 61.52 62.90 World-UK 42.22 46.67 50.00 54.44 Average 56.11 61.80 59.83 61.94 Table 4: Accuracy of greater-than-zero and median formulations of the Bi(y) binarization function. Median is significantly better than greaterthan-zero in both untyped (p=0.0007) and typed (p=0.003). Dataset d=25 d=50 d=100 no SVD Politics-Society 66.13 67.74 72.58 50.00 Politics-UK 62.22 63.33 66.67 48.89 Politics-World 63.25 64.10 64.10 47.01 Society-Politics 64.81 62.96 55.56 57.41 Society-UK 67.50 72.5 67.5 70.00 Society-World 48.08 48.08 44.23 46.15 UK-Politics 60.71 60.71 58.93 51.79 UK-Society 64.52 67.74 56.45 59.68 UK-World 60.68 58.97 58.12 49.57 World-Politics 62.50 59.82 51.79 55.36 World-Society 59.68 62.90 67.74 62.90 World-UK 54.44 54.44 55.56 51.11 Average 61.21 61.94 59.94 54.16 Table 5: Accuracy of different choices for dimensionality reduction with typed features. The pattern is similar for untyped. 
d = 50 is significantly better than no SVD (p=0.0009), but not significantly different from d = 25 (p=0.291) or d = 100 (p=0.211). in standard SCL and that the median-based binarization function improves the modeling of such features. 6.3 Dimensionality Reduction Choices Table 5 compares different choices for the dimensionality reduction parameter d, as well as the possibility of not performing any dimensionality reduction at all (“No-SVD”). While each value of d yields the best performance on some of the train/test scenarios, d = 50 achieves the highest average accuracy (61.94). Removing the SVD entirely generally performs worse, and though on a small number of train/test scenarios it outperforms d = 25 and d = 100, it is always worse than d = 50. This shows that SCL’s feature correspondences alone are not sufficient to achieve domain adaptation. Without the SVD, performance is barely above a model without SCL: 54.16 vs. 53.62 (see Section 6.4). Much of the benefit appears to be coming from the SVD’s basis-shift, since d = 100 outperforms no-SVD by more than 5 points3, while d = 50 only outperforms d = 100 by 2 points. These results are consistent with SCL’s origins in alternating structural optimization (Ando and Zhang, 2005), where SVD is derived as a necessary step for identifying a shared low-dimensional subspace. 6.4 Replacing vs. Concatenating Features Table 6 and Table 7 compare the performance of different choices for the feature combination function C(x, z) on untyped and typed features, respectively. Our proposed pivot+new combination function, which replaces the non-pivot features with the new correspondence features, performs better on average than the two state-of-the-art baselines with no domain adaptation (pivot+nonpivot) and than the two state-of-the-art baselines augmented with classic SCL (pivot+nonpivot+new): 61.80 vs. 56.43 and 56.93 for untyped, and 61.94 vs. 53.62 and 54.23 for typed). These 5-8 point performance gains confirm the utility of our proposed pivot+new combination function, which replaces the old nonpivot features with the new correspondence features. These gains are consistent with (Blitzer et al., 2006), who included both pivot and non-pivot features, but found that they had to give pivot features a weight “five times that of the [non-pivot] features” to see improved performance. While our approach is better on average, in some individual scenarios, it performs worse than classic SCL or no domain adaptation. For example, on Politics-Society, Politics-UK, and World-UK, using typed features, pivot+new performs worse than no domain adaptation (pivot+nonpivot). Our results suggest a rule for predicting when this degradation will happen: pivot+new will outperform 3Recall that p = 100, so d = 100 means the full matrix. 
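For reference, here is a rough sketch of the dimensionality-reduction step under the standard SCL recipe (our reading, not the authors' exact code): the weight vectors of the p pivot predictors, assumed here to be trained on the non-pivot features, are stacked into a matrix, and its top d left singular vectors give the projection that produces the new correspondence features z.

```python
import numpy as np

def correspondence_features(W, X_nonpivot, d=50):
    """Project documents onto the shared low-dimensional subspace.

    W          : (n_nonpivot, p) matrix whose columns are the weights of the
                 p pivot predictors (assumed trained on the non-pivot features)
    X_nonpivot : (n_docs, n_nonpivot) non-pivot part of the document vectors
    d          : reduced dimensionality; the "no SVD" setting instead uses W
                 itself as the projection, i.e. z = X_nonpivot @ W
    """
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    theta = U[:, :d]              # top-d left singular vectors as the basis
    return X_nonpivot @ theta     # (n_docs, d) correspondence features z
```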
2232 Dataset pivot nonpivot new pivot+nonpivot pivot+nonpivot+new pivot+new Politics-Society 54.84 75.81 62.9 75.81 77.42 61.29 Politics-UK 63.33 68.89 58.89 70.00 71.11 66.67 Politics-World 58.12 63.25 53.85 64.96 65.41 58.97 Society-Politics 61.11 46.30 48.15 46.30 46.30 62.96 Society-UK 67.5 45.00 60.00 47.50 47.50 72.50 Society-World 50.00 42.31 53.85 46.15 46.15 56.62 UK-Politics 62.50 42.86 59.82 42.86 44.64 68.75 UK-Society 59.68 43.55 55.83 45.16 45.16 66.13 UK-World 45.30 38.46 48.72 39.32 39.32 57.27 World-Politics 55.36 69.64 56.25 68.75 69.64 62.5 World-Society 46.77 67.74 53.23 69.35 69.35 61.29 World-UK 43.33 61.11 50.00 61.11 61.11 46.67 Average 55.65 55.41 55.12 56.43 56.93 61.80 Table 6: Accuracy of different untyped feature combinations. The best performance for each dataset is in bold. The performance of pivot+new is not significantly different from pivot+nonpivot (p=0.258) or pivot+nonpivot+new (p=0.305). Dataset pivot nonpivot new pivot+nonpivot pivot+nonpivot+new pivot+new Politics-Society 48.39 70.97 59.68 72.58 72.58 67.74 Politics-UK 52.22 68.89 66.67 71.11 72.22 63.33 Politics-World 46.15 61.54 61.54 63.25 64.10 64.10 Society-Politics 55.56 48.15 61.11 48.15 50.00 62.96 Society-UK 65.00 45.00 65.00 45.00 45.00 72.50 Society-World 38.46 46.15 53.85 44.23 46.15 48.08 UK-Politics 48.21 44.64 55.36 45.54 45.54 60.71 UK-Society 51.61 41.94 66.13 41.94 41.94 67.74 UK-World 44.44 33.33 45.30 35.90 35.90 58.97 World-Politics 50.89 51.79 61.39 57.14 57.14 59.82 World-Society 54.84 59.68 43.55 59.68 61.29 62.9 World-UK 44.44 56.67 50.00 58.89 58.89 54.44 Average 50.02 52.40 57.47 53.62 54.23 61.94 Table 7: Accuracy of different typed feature combinations. The best performance for each dataset is in bold. The performance of pivot+new is significantly better than pivot+nonpivot (p=0.041) but not significantly different from pivot+nonpivot+new (p=0.059). both pivot+nonpivot and pivot+nonpivot+new iff the new features alone outperform the nonpivot features alone. This rule holds in all 12 of 12 train/test scenarios for untyped features and 11 of 12 scenarios for typed features (failing on only World-Society). Intuitively, if the new correspondence features that result from SCL aren’t better than the features they were meant to replace, then it is unlikely that they will result in performance gains. This might happen if the pivot features are not strong enough predictors, either because they have been selected poorly or because there are too few of them. 7 Discussion To the best of our knowledge, we are the first to introduce a domain adaption model for authorship attribution that combines labeled data in a source domain with unlabeled data from a target domain to improve performance on the target domain. We proposed several extensions to the popular structural correspondence learning (SCL) algorithm for domain adaptation to make it more amenable to tasks like authorship attribution. The SCL algorithm requires the manual identification of domain independent pivot features for each task, so we proposed two feature formulations using charac2233 ter n-grams as the pivot features, and showed that both yielded state-of-the-art performance. We also showed that for the binary classification task that is used by SCL to learn the feature correspondences, replacing the traditional greater-than-zero classification task with a median-based classification task allowed the model to better handle our count-based features. 
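The significance values quoted in the captions of Tables 3 through 7 correspond to paired, two-tailed t-tests over the 12 topic pairings; a minimal sketch using SciPy's paired t-test (two-tailed by default), with the accuracy lists as hypothetical inputs:

```python
from scipy.stats import ttest_rel

def compare_settings(accuracies_a, accuracies_b):
    """Paired, two-tailed t-test over the 12 train/test topic pairings;
    each argument is the list of 12 accuracies for one setting."""
    t_statistic, p_value = ttest_rel(accuracies_a, accuracies_b)
    return t_statistic, p_value

# e.g. the Table 7 comparison of pivot+new against pivot+nonpivot
# (the two accuracy lists stand for the 12 per-pairing values):
# _, p = compare_settings(acc_pivot_new, acc_pivot_nonpivot)
```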
We explored the dimensionality reduction step of SCL and showed that singular value decomposition (SVD) over the feature correspondence matrix is critical to achieving high performance. Finally, we introduced a new approach to combining the original features with the learned correspondence features, and showed that replacing (rather than concatenating) the non-pivot features with the correspondence features generally yields better performance. In the future, we would like to extend this work in several ways. First, though our median-based approach was successful in converting pivot feature values to binary classification problems, learning a regression model might be an even better approach for count-based features. Second, since the SVD basis-shift seems to be the source of much of the gains, we would like to explore replacing the SVD with other algorithms, such as independent component analysis. Finally, we would like to explore further our finding that the performance of the overall model seems to be predicted by the difference in performance between the non-pivot features and the new correspondence features, especially to see if this can be predicted at training time rather than as a post-hoc analysis. 8 Acknowledgments This work was supported in part by CONACYT Project number 247870, and by the National Science Foundation award number 1462141. References Rie Kubota Ando and Tong Zhang. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. J. Mach. Learn. Res., 6:1817–1853, December. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, EMNLP 2006, pages 120–128, Stroudsburg, PA, USA. Hal Daum´e. 2007. Frustratingly easy domain adaptation. In Annual meeting-association for computational linguistics, volume 45, pages 256–256, 2007. H. J. Escalante, T. Solorio, and M. Montes-y Gomez. 2011. Local histograms of character n-grams for authorship attribution. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 288–298, Portland, Oregon, USA, June. Association for Computational Linguistics. Jade Goldstein-Stewart, Ransom Winder, and Roberta Evans Sabin. 2009. Person identification from text and speech genre samples. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, EACL ’09, pages 336–344, Stroudsburg, PA, USA. Association for Computational Linguistics. Graeme Hirst and Vanessa Wei Feng. 2012. Changes in style in authors with alzheimer’s disease. English Studies, 93(3):357–370. Qi Li. 2012. Literature survey: Domain adaptation algorithms for natural language processing. Technical report, February. Kim Luyckx and Walter Daelemans. 2008. Authorship attribution and verification with many authors and limited data. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 513–520, Manchester, UK, August. D. Madigan, A. Genkin, S. Argamon, D. Fradkin, and L. Ye. 2005. Author identification on the large scale. In Proceedings of CSNA/Interface 05. George K. Mikros and Eleni K. Argiri. 2007. Investigating topic influence in authorship attribution. In Proceedings of the International Workshop on Plagiarism Analysis, Authorship Identification, and Near-Duplicate Detection, pages 29–35. Peter Prettenhofer and Benno Stein. 2011. 
Crosslingual adaptation using structural correspondence learning. ACM Trans. Intell. Syst. Technol., 3(1):13:1–13:22, October. Upendra Sapkota, Thamar Solorio, Manuel Montes, Steven Bethard, and Paolo Rosso. 2014. Crosstopic authorship attribution: Will out-of-topic data help? In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1228–1237, Dublin, Ireland, August. Dublin City University and Association for Computational Linguistics. Upendra Sapkota, Steven Bethard, Manuel Montes, and Thamar Solorio. 2015. Not all character ngrams are created equal: A study in authorship attribution. In Proceedings of the 2015 Conference of the North American Chapter of the Association for 2234 Computational Linguistics: Human Language Technologies, pages 93–102, Denver, Colorado, May– June. Association for Computational Linguistics. Andrew I. Schein, Johnnie F. Caver, Randale J. Honaker, and Craig H. Martell. 2010. Author attribution evaluation with novel topic cross-validation. In KDIR ’10, pages 206–215. Nobuyuki Shimizu and Hiroshi Nakagawa. 2007. Structural correspondence learning for dependency parsing. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pages 1166–1169, Prague, Czech Republic, June. Association for Computational Linguistics. E. Stamatatos. 2006. Authorship attribution based on feature set subspacing ensembles. International Journal on Artificial Intelligence tools, 15(5):823– 838. Efstathios Stamatatos. 2013. On the robustness of authorship attribution based on character n-gram features. Journal of Law & Policy, 21(2):421 – 439. Songbo Tan and Xueqi Cheng. 2009. Improving SCL model for sentiment-transfer learning. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers, pages 181–184, Boulder, Colorado, June. Association for Computational Linguistics. I. H. Witten and E. Frank. 2005. Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kauffmann, 2nd edition. Yanbo Zhang, Youli Qu, and Junsan Zhang. 2010. A new method of selecting pivot features for structural correspondence learning in domain adaptive sentiment analysis. In Database Technology and Applications (DBTA), 2010 2nd International Workshop on, pages 1–3. 2235
2016
210
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2236–2244, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics A Corpus-Based Analysis of Canonical Word Order of Japanese Double Object Constructions Ryohei Sasano Manabu Okumura Tokyo Institute of Technology {sasano,oku}@pi.titech.ac.jp Abstract The canonical word order of Japanese double object constructions has attracted considerable attention among linguists and has been a topic of many studies. However, most of these studies require either manual analyses or measurements of human characteristics such as brain activities or reading times for each example. Thus, while these analyses are reliable for the examples they focus on, they cannot be generalized to other examples. On the other hand, the trend of actual usage can be collected automatically from a large corpus. Thus, in this paper, we assume that there is a relationship between the canonical word order and the proportion of each word order in a large corpus and present a corpusbased analysis of canonical word order of Japanese double object constructions. 1 Introduction Japanese has a much freer word order than English. For example, a Japanese double object construction has six possible word orders as follows: (1) a: Ken-ga Aya-ni camera-wo miseta. Ken-NOM Aya-DAT camera-ACC showed b: Ken-ga camera-wo Aya-ni miseta. Ken-NOM camera-ACC Aya-DAT showed c: Aya-ni Ken-ga camera-wo miseta. Aya-DAT Ken-NOM camera-ACC showed d: Aya-ni camera-wo Ken-ga miseta. Aya-DAT camera-ACC Ken-NOM showed e: Camera-wo Ken-ga Aya-ni miseta. camera-ACC Ken-NOM Aya-DAT showed f: Camera-wo Aya-ni Ken-ga miseta. camera-ACC Aya-DAT Ken-NOM showed In these examples, the position of the verb miseta (showed) is fixed but the positions of its nominative (NOM), dative (DAT), and accusative (ACC) arguments are scrambled. Note that, although the word orders are different, they have essentially the same meaning “Ken showed a camera to Aya.” In the field of linguistics, each language is assumed to have a basic word order that is fundamental to its sentence structure and in most cases there is a generally accepted theory on the word order for each structure. That is, even if there are several possible word orders for essentially same sentences consisting of the same elements, only one of them is regarded as the canonical word order and the others are considered to be generated by scrambling it. However, in the case of Japanese double object constructions, there are several claims on the canonical argument order. There have been a number of studies on the canonical word order of Japanese double object constructions ranging from theoretical studies (Hoji, 1985; Miyagawa and Tsujioka, 2004) to empirical ones based on psychological experiments (Koizumi and Tamaoka, 2004; Nakamoto et al., 2006; Shigenaga, 2014) and brain science (Koso et al., 2004; Inubushi et al., 2009). However, most of them required either manual analyses or measurements of human characteristics such as brain activities or reading times for each example. Thus, while these analyses are reliable for the example they focus on, they cannot be easily generalized to other examples1. That is, another manual analysis or measurement is required to consider the canonical word order of another example. On the other hand, the trend of actual usage can be collected from a large corpus. 
While it is difficult to say whether a word order is canonical or 1Note that in this work, we assume that there could be different canonical word orders for different double-object sentences as will be explained in Section 2.2. 2236 not from one specific example, we can consider that a word order would be canonical if it is overwhelmingly dominant in a large corpus. For example, since the DAT-ACC order2 is overwhelmingly dominant in the case that the verb is kanjiru (feel), its dative argument is kotoba (word), and its accusative argument is aijˆo (affection) as shown in Example (2), we can consider that the DAT-ACC order would be canonical in this case. Note that, the numbers in parentheses represent the proportion of each word order in Examples (2) and (3); φX denotes the omitted noun or pronoun X in this paper. (2) DAT-ACC: Kotoba-ni aijˆo-wo kanjiru. (97.5%) word-DAT affection-ACC feel ACC-DAT: Aijˆo-wo kotoba-ni kanjiru. (2.5%) affection-ACC word-DAT feel (φI feel the affection in φyour words.) On the contrary, since the ACC-DAT order is overwhelmingly dominant in the case that the verb is sasou (ask), its dative argument is dˆeto (date), and its accusative argument is josei (woman) as shown in Example (3), the ACC-DAT order is considered to be canonical in this case. (3) DAT-ACC: Dˆeto-ni josei-wo sasou. (0.4%) date-DAT woman-ACC ask ACC-DAT: Josei-wo dˆeto-ni sasou. (99.6%) woman-ACC date-DAT ask (φI ask a woman out on a date.) Therefore, in this paper, we assume that there is a relationship between the canonical word order and the proportion of each word order in a large corpus and attempt to evaluate several claims on the canonical word order of Japanese double object constructions on the basis of a large corpus. Since we extract examples of double object constructions only from reliable parts of parses of a very large corpus, consisting of more than 10 billion unique sentences, we can reliably leverage a large amount of examples. To the best of our knowledge, this is the first attempt to analyze the canonical word order of Japanese double object constructions on the basis of such a large corpus. 2Since Japanese word order is basically subject-objectverb (SOV) and thus the canonical position of nominative argument is considered to be the first position, we simply call the nominative, dative, accusative order as the DAT-ACC order, and the nominative, accusative, dative order as the ACC-DAT order in this paper. 2 Japanese double object constructions 2.1 Relevant Japanese grammar We briefly describe the relevant Japanese grammar. Japanese word order is basically subjectobject-verb (SOV) order, but the word order is often scrambled and does not mark syntactic relations. Instead, postpositional case particles function as case markers. For example, nominative, dative, and accusative cases are represented by case particles ga, ni, and wo, respectively. In a double object construction, the subject, indirect object, and direct object are typically marked with the case particles ga (nominative), ni (dative), and wo (accusative), respectively, as shown in Example (4)-a. (4) a: Watashi-ga kare-ni camera-wo miseta. I-NOM him-DAT camera-ACC showed b: Watashi-wa kare-ni camera-wo miseta. I-TOP him-DAT camera-ACC showed c: φI kare-ni camera-wo miseta. φI-NOM him-DAT camera-ACC showed (I showed him a camera.) However, when an argument represents the topic of the sentence (TOP), the topic marker wa is used as a postpositional particle, and case particles ga and wo do not appear explicitly. 
For example, since watashi (I) in Example (4)-b represents the topic of the sentence, the nominative case particle ga is replaced by the topic marker wa. Similarly, an argument modified by its predicate does not accompany a postpositional case particle that represents the syntactic relation between the predicate and argument. For example, since camera in Example (5) is modified by the predicate miseta (showed), the accusative case particle wo does not appear explicitly. (5) Watashi-ga kare-ni miseta camera. I-NOM him-DAT showed camera (A camera that I showed him.) In addition, arguments are often omitted in Japanese when we can easily guess what the omitted argument is or we do not suppose a specific object. For example, the nominative argument is omitted in Example (4)-c, since we can easily guess the subject is the first person. These characteristics make it difficult to automatically extract examples of word orders in double object construction from a corpus. 2237 2.2 Canonical argument order There are three major claims as to the canonical argument order of Japanese double object constructions (Koizumi and Tamaoka, 2004). One is the traditional analysis by Hoji (1985), which argues that only the nominative, dative, accusative (DAT-ACC) order like in Example (1)-a is canonical for all cases. The second claim, made by Matsuoka (2003), argues that Japanese double object constructions have two canonical word orders, the DAT-ACC order and the ACC-DAT order, depending on the verb types. The third claim, by Miyagawa (1997), asserts that both the DAT-ACC order and ACC-DAT order are canonical for all cases. Note that, the definition of the term canonical word order varies from study to study. Some studies presume that there is only one canonical word order for one construction (Hoji, 1985), while others presume that a canonical word order can be different for each verb or each tuple of a verb and its arguments (Matsuoka, 2003). In addition, some studies presume that there can be multiple canonical word orders for one sentence (Miyagawa, 1997). In this paper, we basically adopt the position that there is only one canonical word order for one tuple of a verb and its arguments but the canonical word orders can be different for different tuples of a verb and its arguments. 2.3 Other features related to word order There are a number of known features that affect word order. For example, it is often said that long arguments tend to be placed far from the verb, whereas short arguments tend to be placed near the verb. The From-Old-to-New Principle (Kuno, 2006) is also well known; it argues that the unmarked word order of constituents is old, predictable information first; and new, unpredictable information last. Note that these types of features are not specific to argument orders in Japanese double object constituents. For example, Bresnan et al. (2007) reported the similar features were also observed in the English dative alternation and useful for predicting the dative alternation. However, since we are interested in the canonical word order, we do not want to take these features into account. In this work, we assume that these features can be ignored by using a very large corpus and analyzing the word order on the basis of statistical information acquired from the corpus. 3 Claims on the canonical word order of Japanese double object constructions In this paper, we will address the following five claims on the canonical word order of Japanese double object constructions. 
Claim A: The DAT-ACC order is canonical. Claim B: There are two canonical word orders, the DAT-ACC and the ACC-DAT order, depending on the verb types. Claim C: An argument whose grammatical case is infrequently omitted with a given verb tends to be placed near the verb. Claim D: The canonical word order varies depending on the semantic role and animacy of the dative argument. Claim E: An argument that frequently co-occurs with the verb tends to be placed near the verb. Claim A (Hoji, 1985) presumes that there is only one canonical word order for Japanese double object constructions regardless of the verb type. On the other hand, Claims B and C argue that the canonical word order varies depending on verb, but they still do not take into account the lexical information of the arguments. Thus, these claims can be verified by investigating the distribution of word orders for each verb. With regard to Claim B, Matsuoka (2003) classified causative-inchoative alternating verbs into two types: show-type and pass-type, and claimed the DAT-ACC order is the canonical order for show-type verbs, whereas the ACC-DAT order is the canonical order for pass-type verbs. The definitions of each verb type are as follows. In the case of show-type verbs, the dative argument of a causative sentence is the subject of its corresponding inchoative sentence as shown in Example (6). On the other hand, in the case of pass-type verbs, the accusative argument is the subject of its corresponding inchoative sentence as shown in Example (7). (6) Causative: Kare-ni camera-wo miseta. him-DAT camera-ACC showed (φI showed him a camera.) Inchoative: Kare-ga mita. he-NOM saw (He saw φsomething.) 2238 (7) Causative: Camera-wo kare-ni watashita. camera-ACC him-DAT passed (φI passed him a camera.) Inchoative: Camera-ga watatta. camera-NOM passed (A camera passed to φsomeone.) Claim C is based on our observation. It is based on the assumption that if an argument of a verb is important for interpreting the meaning of the verb, it tends to be placed near the verb and does not tend to be omitted. Claims D and E take into account the lexical information of arguments and assume that the canonical word order of Japanese double object constructions is affected by the characteristics of the dative and/or accusative arguments. With regard to Claim D, Matsuoka (2003) asserted that the canonical order varies depending on the semantic role of the dative argument. Specifically, the DAT-ACC order is more preferred when the semantic role of dative argument is animate Possessor than when the semantic role is inanimate Goal. Claim E is based on our observation again, which argues that if the dative or accusative argument frequently co-occurs with the verb, it has a strong relationship with the verb, and thus is placed nearby. A typical example that satisfies this claim is idiomatic expressions as will be discussed in Section 5.4. 4 Example collection A corpus-based analysis of canonical word order can leverage a much larger number of examples than approaches based on theoretical analysis, psychological experiments, or brain science can. However, automatically collected examples sometimes include inappropriate ones. For example, if we extract all sequences of a verb and its preceding argument candidates, the sequence “Kagi-wo kare-ni iwareta” (φI am told the key by him) is mistakenly extracted from Example (8), although kagi-wo is not actually an argument of iwareta but an argument of oita. (8) Kagi-wo kare-ni iwareta basho-ni oita. 
key-ACC him-DAT told place-DAT put (φI put the key on the place where he told φme.) As predicted, we can alleviate this problem by using a dependency parser. However, the accuracy of the state-of-the-art Japanese dependency parser is not very high, specifically about 92% for news paper articles (Yoshinaga and Kitsuregawa, 2014), and thus, inappropriate examples would be extracted even if we used one. Therefore, in this work, we decided to extract examples only from reliable parts of dependency parses. Specifically, we used a corpus consisting of more than 10 billion unique sentences extracted from the Web, selected parse trees that have no syntactic ambiguity, and then extracted examples only from the selected parse trees. This strategy basically follows Kawahara and Kurohashi (2002)’s strategy for automatic case frame construction. The detailed procedure of example collection is as follows: 1. Extract Japanese Web pages using linguistic information, split the Web pages into sentences using periods and HTML tags, and merge sentences that are the exactly same into one sentence to avoid collecting the same example several times, which might be extracted from a mirror site. 2. Employ the Japanese morphological analyzer JUMAN3 and the syntactic analyzer KNP4, and extract examples of verbs and their arguments from parse trees that have no syntactic ambiguity5. 3. Collect the examples if the verb satisfies all the following conditions: (a) The verb has an entry in the JUMAN dictionary and appears in the active voice. (b) The verb has more than 500 different examples of dative and accusative argument pairs. (c) The proportion of examples that include both the dative and accusative arguments out of all examples that include the target verb is larger than 5%. We employ the syntactic analyzer KNP with options “-dpnd-fast -tab -check.” KNP with these 3http://nlp.ist.i.kyoto-u.ac.jp/EN/index.php?JUMAN 4http://nlp.ist.i.kyoto-u.ac.jp/EN/index.php?KNP 5Murawaki and Kurohashi (2012) reported that 20.7% of the dependency relations were extracted from a newspaper corpus and the accuracy was 98.3% when they adopted Kawahara and Kurohashi (2002)’s strategy. 2239 options outputs all head candidates for each bunsetsu6 on the basis of heuristic rules. We then extract the example of a verb and its argument if the argument candidate have only one head candidate. For example, since Japanese is a head-final language and only the verb bunsetsu can be the head of the most noun bunsetsu in Japanese, basho-ni in Example (8) has only one head candidate oita (put), whereas kagi-wo and kare-ni have two head candidates iwareta (told) and oita (put). Thus, we extract only the example “basho-ni oita” from Example (8). In addition, when an argument consists of a compound noun, we only extract the head noun and its postpositional particle as the argument to avoid data sparsity. Condition 3-(c) is set in order to extract only ditransitive verbs, which take both dative and accusative arguments. Although the threshold of 5% seems small at first glance, most verbs that satisfy it are actually ditransitive. This is because arguments are often omitted in Japanese, and thus, only some of the examples explicitly include both dative and accusative arguments even in the case of ditransitive verb. Out of a corpus consisting of more than 10 billion unique sentences, 648 verbs satisfied these conditions. Hereafter, we will focus on these 648 verbs. 
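A minimal sketch of the verb filter in step 3 of the collection procedure; the per-verb statistics mapping and its field names are ours, purely for illustration.

```python
def keep_verb(stats):
    """Apply conditions 3(a)-(c) to the statistics collected for one verb.

    Assumed (illustrative) fields of stats:
      in_dictionary   - has a JUMAN dictionary entry (active-voice occurrences)
      n_dat_acc_pairs - number of distinct dative/accusative argument pairs
      n_with_both     - examples containing both dative and accusative arguments
      n_total         - all examples containing the verb
    """
    if not stats["in_dictionary"]:                        # condition (a)
        return False
    if stats["n_dat_acc_pairs"] <= 500:                   # condition (b)
        return False
    if stats["n_with_both"] / stats["n_total"] <= 0.05:   # condition (c)
        return False
    return True
```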
The average number of occurrences of each verb was about 350 thousand and the average number of extracted examples that include both dative and accusative arguments was about 59 thousand. 5 Corpus-based analysis of canonical word order Here, we present a corpus-based analysis of the canonical word order of Japanese double object constructions. We will address Claims A and C in Section 5.1, Claim B in Section 5.2, Claim D in Section 5.3, and Claim E in Section 5.4. 5.1 Word order for each verb Let us examine the relation between the proportion of the DAT only example RDAT-only and the proportion of the ACC-DAT order RACC-DAT for each of the 648 verbs to inspect Claims A and C. 6In Japanese, bunsetsu is a basic unit of dependency, consisting of one or more content words and the following zero or more function words. In this paper, we segment each example sentence into a sequence of bunsetsu. RDAT-only is calculated as follows: RDAT-only = NDAT-only NDAT-only + NACC-only , where NDAT/ACC-only is the number of example types that only include the corresponding argument out of the dative and accusative arguments. For example, we count the number of example types like Example (9) that include an accusative argument but do not include a dative argument to get the value of NACC-only. Accordingly, the large RDAT-only value indicates that the dative argument is less frequently omitted than the accusative argument. (9) Gakuchˆo-ga gakui-wo juyo-shita. president-NOM degree-ACC conferred (The president conferred a degree on φsomeone.) However, if we use all extracted examples that include only one of the dative and accusative arguments for calculating RDAT-only, the value is likely to suffer from a bias that the larger RACC-DAT is, the larger RDAT-only becomes. This is because the arguments that tend to be placed near the verb have relatively few syntactic ambiguity. Since we extract the examples from the reliable parts of parses that have no syntactic ambiguity, these arguments tend to be included in the extracted examples more frequently than the other arguments. To avoid this bias, we use only these examples in which the nominative case is also extracted for calculating RDAT-only. This is based on the assumption that since Japanese word order is basically subject-object-verb order, if the nominative argument is collected but one of the dative and accusative arguments is not collected, the argument is actually omitted. Through a preliminary investigation on Kyoto University Text Corpus7, we confirmed the effect of this constraint to avoid the bias. On the other hand, RACC-DAT is calculated as follows: RACC-DAT = NACC-DAT NDAT-ACC + NACC-DAT , where NDAT-ACC/ACC-DAT is the number of example types that include both the dative and accusative arguments in the corresponding order. Figure 1 shows the results. The left figure shows the relation between the proportion of the DAT 7Kyoto University Text Corpus 4.0: http://nlp.ist.i.kyotou.ac.jp/EN/index.php?Kyoto University Text Corpus 2240                            Figure 1: The left figure shows the relation between the proportion of the DAT only example RDAT-only (x-axis) and the proportion of the ACC-DAT order RACC-DAT (y-axis). The right figure shows the number of verbs in the corresponding range of RACC-DAT. only example RDAT-only and the proportion of the ACC-DAT order RACC-DAT for each of the 648 verbs. The x-axis denotes RDAT-only, the y-axis denotes RACC-DAT, and each point in the figure represents one of the 648 verbs. 
The dashed line is a linear regression line. The right figure shows the number of verbs in the corresponding range of RACC-DAT. Pearson’s correlation coefficient between RDAT-only and RACC-DAT is 0.391, which weakly supports Claim C: an argument whose grammatical case is infrequently omitted with a given verb tends to be placed near the verb. The proportion of the ACC-DAT order for all 648 verbs is 0.328. Thus, if we presume that there is only one canonical word order for Japanese double object constructions, this result suggests that the DAT-ACC order is the canonical one, as claimed by Hoji (Claim A). However, the right figure shows that the proportions of the ACC-DAT order differ from verb to verb. Moreover, the values of RACC-DAT for 435 out of 648 verbs are between 0.2 and 0.8. From these observations, we can say the preferred word order cannot be determined even if the verb is given in most cases. 5.2 Word order and verb type To inspect Matsuoka (2003)’s claim that the DAT-ACC order is canonical for show-type verbs, whereas the ACC-DAT order is canonical for passtype verbs, we investigated the proportions of the ACC-DAT order for several pass-type and showtype verbs. In this investigation, we used 11 passtype verbs and 22 show-type verbs that were used by Koizumi and Tamaoka (2004) in their psychological experiments8. Table 1 shows the results. Although we can see that the macro average of RACC-DAT of pass-type verbs is larger than that of show-type verbs, the difference is not significant9. Moreover, even in the case of pass-type verbs, the DAT-ACC order is dominant, which suggests Matsuoka (2003)’s claim is not true. Note that this conclusion is consistent with the experimental results reported by both Miyamoto and Takahashi (2002) and Koizumi and Tamaoka (2004). 5.3 Relation between word order and semantic role of the dative argument Next, let us examine the relation between the category of the dative argument and the word order to verify the effect of the semantic role of the dative argument. We selected eight categories in the JUMAN dictionary10 that appear more than 1 million times as dative arguments. Table 2 shows the results. We can see that there are differences in the 8We excluded a show-type verb hakaseru (dress), since it is divided into two morphemes by JUMAN. Instead, we added two show-type verbs shiraseru (notify) and kotodukeru (leave a message). 9The two-tailed p-value of permutation test is about 0.177. 10In JUMAN dictionary, 22 categories are defined and tagged to common nouns. 2241 Show-type Pass-type verb RACC-DAT verb RACC-DAT verb RACC-DAT shiraseru (notify) 0.522 modosu (put back) 0.771 otosu (drop) 0.351 azukeru (deposit) 0.399 tomeru (lodge) 0.748 morasu (leak) 0.332 kotodukeru (leave a message) 0.386 tsutsumu (wrap) 0.603 ukaberu (float) 0.255 satosu (admonish) 0.325 tsutaeru (inform) 0.522 mukeru (direct) 0.251 miseru (show) 0.301 noseru (place on) 0.496 nokosu (leave) 0.238 kabuseru (cover) 0.256 todokeru (deliver) 0.491 umeru (bury) 0.223 osieru (teach) 0.235 naraberu (range) 0.481 mazeru (blend) 0.200 sazukeru (give) 0.186 kaesu (give back) 0.448 ateru (hit) 0.185 abiseru (shower) 0.177 butsukeru (knock) 0.436 kakeru (hang) 0.108 kasu (lend) 0.118 tsukeru (attach) 0.368 kasaneru (pile) 0.084 kiseru (dress) 0.113 watasu (pass) 0.362 tateru (build) 0.069 Macro average 0.274 Macro average 0.365 Table 1: Proportions of the ACC-DAT order for each pass-type verb and show-type verb. 
Category # of examples RACC-DAT Typical examples PLACE-FUNCTION 1376990 0.499 shita (bottom), yoko (side), soto (outside), hˆokˆo (direction), . . . ANIMAL-PART 1483885 0.441 te (hand), mi (body), atama (head), hada (skin), mune (chest), . . . PERSON 5511281 0.387 tomodachi (friend), hito (human), shichˆo (mayor), watashi (I), . . . ARTIFACT-OTHER 2751008 0.372 pasokon (PC), fairu (file), furo (bath), hon (book), . . . PLACE-INSTITUTION 1618690 0.342 heya (room), mise (shop), tokoro (location), gakkˆo (school), . . . PLACE-OTHER 2439188 0.341 basho (place), sekai (world), ichi (position), zenmen (front), . . . QUANTITY 1100222 0.308 zu (figure), hyˆo (table), hanbun (half), atai (value), . . . ABSTRACT 10219318 0.307 blog (blog), kokoro (mind), list (list), shiya (sight), . . . Total 26500582 0.353 Table 2: Proportions of the ACC-DAT order for each category of dative argument. proportions of the ACC-DAT order. In particular, when the dative argument’s category is PLACEFUNCTION such as shita (bottom) and yoko (side) or ANIMAL-PART such as te (hand) and mi (body), the ACC-DAT order is more preferred than otherwise. As mentioned in Section 3, Matsuoka (2003) claimed the DAT-ACC order is more preferred when the semantic role of the dative argument is animate Possessor than when the semantic role is inanimate Goal. Thus, we thought the DAT-ACC order would be preferred when the dative argument’s category is PERSON, but we could not find such a trend. We think, however, this is due to that dative arguments of the PERSON category do not always have the semantic role of an animate Possessor. Thus, we conducted a further investigation in an attempt to verify Matsuoka (2003)’s claim. First, we collected examples that satisfied the following two conditions: the accusative argument belongs to ARTIFACT-OTHER category, and the dative argument belongs to either PLACEINSTITUTION or PERSON category. We call the former Type-A11, and the latter Type-B hereafter, and consider that the semantic role of the dative argument is inanimate Goal in most cases 11That is, the categories of the accusative and dative arguments of a Type-A example are ARTIFACT-OTHER and PLACE-INSTITUTION, respectively. of Type-A, whereas it is animate Possessor in most cases of Type-B. Example (10) shows typical examples of Type-A and Type-B. Here, the categories of hon (book), gakkˆo (school), and sensei (teacher) are ARTIFACT-OTHER, PLACEINSTITUTION, and PERSON, respectively, and the semantic roles of dative arguments are considered to be Goal in (10)-a and Possessor in (10)-b. (10) a: Hon-wo gakkˆo-ni kaeshita. book-ACC school-DAT returned (φsomeone returned the book to school.) b: Sensei-ni hon-wo kaeshita. teacher-DAT book-ACC returned (φsomeone returned the book to the teacher.) Next, we extracted verbs that had at least 100 examples of both types, calculated the proportion of the ACC-DAT order RACC-DAT for each verb and type, and counted the number of verbs for which the values of RACC-DAT were significantly different between Type-A and Type-B12. Out of 126 verbs that have at least 100 examples for both types, 64 verbs show the trend that Type-A prefers the ACC-DAT order more than Type-B does, and only 30 verbs have the opposite trend. This fact supports Matsuoka (2003)’s claim. 12We conducted a two-proportion z-test with a significance level of 0.05. 
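To make the comparison in this subsection concrete, a small sketch of the per-condition proportion and the two-proportion z-test mentioned in the footnote; the function names are ours.

```python
import math

def r_acc_dat(n_dat_acc, n_acc_dat):
    """Proportion of the ACC-DAT order among examples containing both arguments."""
    return n_acc_dat / (n_dat_acc + n_acc_dat)

def two_proportion_z(k1, n1, k2, n2):
    """Two-proportion z statistic, e.g. k1 = ACC-DAT examples out of n1 Type-A
    examples of a verb, and k2 out of n2 for Type-B of the same verb."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# |z| > 1.96 would correspond to a two-sided test at the 0.05 level.
```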
2242                      NPMI(, )NPMI(, )  Figure 2: The left figure shows the relation between the difference of NPMI(nDAT, v) from NPMI(nACC, v) (x-axis) and the proportion of the ACC-DAT order RACC-DAT (y-axis). The tuples whose verb and accusative/dative argument are used as an idiom are represented by +/×. The right figure shows the number of tuples of a verb and its dative and accusative arguments in the corresponding range of RACC-DAT. 5.4 Relation between word order and degree of co-occurrence of verb and arguments Now let us turn to the relation between the proportion of the ACC-DAT order RACC-DAT and the degree of co-occurrence of a verb and its argument to verify Claim E. Here, we leverage the normalized pointwise mutual information (NPMI) for measuring the degree of co-occurrence between a verb and its argument. NPMI is a normalized version of PMI. The value ranges between [-1,+1] and takes 1 for never occurring together, 0 for independence, +1 for complete co-occurrence. The NPMI of a verb v and its argument nc (c ∈{DAT,ACC}) is calculated as NPMI(nc, v) = PMI(nc, v) −log(p(nc, v)), where PMI(nc, v) = log p(nc, v) p(nc)p(v). We investigate the relation between the proportion of the ACC-DAT order RACC-DAT and the difference of NPMI(nDAT, v) from NPMI(nACC, v), i.e., NPMI(nDAT, v) −NPMI(nACC, v). If Claim E is true, when the dative argument co-occurs with the verb frequently, the dative argument tends to be placed near the verb and thus the proportion of the ACC-DAT order would take a large value. We investigated 2302 tuples of a verb and its dative and accusative arguments that appear more than 500 times in the corpus. The average number of occurrences of each tuple was 1532. Figure 2 shows the results. The left figure shows the relation between the difference of NPMI(nDAT, v) from NPMI(nACC, v) and the proportion of the ACC-DAT order RACC-DAT. Each point in the figure represents one of the 2302 tuples. The dashed line is a linear regression line. The right figure shows the number of tuples in the corresponding range of RACC-DAT. Pearson’s correlation coefficient between the difference of NPMI and RACC-DAT is 0.567, which supports Claim E: an argument that frequently cooccurs with the verb tends to be placed near the verb. Moreover, the values of RACC-DAT are larger than 0.9 or smaller than 0.1 for 1631 out of 2302 tuples. This result indicates that if a tuple of a verb and its dative and accusative arguments is given, the preferred word order is determined. This is contrastive to the conclusion that the preferred word order cannot be determined even if the verb is given as discussed in Section 5.1. One of the typical examples that satisfy Claim E is an idiomatic expression. Indeed, a verb and its argument that are used as an idiom co-occur frequently and are usually placed adjacent to each other. In addition, it is well known that if the argument order is scrambled, the idiomatic meaning disappears (Miyagawa and Tsujioka, 2004). Thus, we investigated to what extent idiomatic expres2243 sions affected the findings discussed above. For all 2302 tuples, we manually judged whether the verb and the adjacent argument are used as an idiom in most cases. As a result, the verbs and their accusative arguments are judged as idiomatic for 404; the verbs and their dative arguments are judged as idiomatic for 84 out of 2302 tuples. We show these tuples by + and × in Figure 2, respectively. 
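For reference, the NPMI quantity plotted on the x-axis of Figure 2, written out as code with the standard normalized-PMI definition; the probabilities would be relative frequencies estimated from the corpus, and the names are ours.

```python
import math

def npmi(p_joint, p_arg, p_verb):
    """Normalized pointwise mutual information of a verb v and an argument n:
        PMI(n, v)  = log( p(n, v) / (p(n) * p(v)) )
        NPMI(n, v) = PMI(n, v) / (-log p(n, v))
    which lies in [-1, +1]: -1 for never co-occurring, 0 for independence,
    and +1 for complete co-occurrence."""
    pmi = math.log(p_joint / (p_arg * p_verb))
    return pmi / (-math.log(p_joint))

# x-axis of Figure 2, for a tuple of verb v with dative n_dat and accusative n_acc:
# npmi(p(n_dat, v), p(n_dat), p(v)) - npmi(p(n_acc, v), p(n_acc), p(v))
```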
As predicted, the values of RACC-DAT are smaller than 0.1 for all of the former examples, and larger than 0.9 for all of the latter examples. However, even if we ignore these idiomatic examples, Pearson’s correlation coefficient between the difference of NPMI and RACC-DAT is 0.513, which is usually considered as moderate correlation. 6 Conclusion This paper presented a corpus-based analysis of canonical word order of Japanese double object constructions. Our analysis suggests 1) the canonical word order of such constructions varies from verb to verb, 2) there is only a weak relation between the canonical word order and the verb type: show-type and pass-type, 3) an argument whose grammatical case is infrequently omitted with a given verb tends to be placed near the verb, 4) the canonical word order varies depending on the semantic role of the dative argument, and 5) an argument that frequently co-occurs with the verb tends to be placed near the verb. Acknowledgments We would like to thank Sadao Kurohashi and Daisuke Kawahara for helping us to collect examples from the Web. This work was supported by JSPS KAKENHI Grant Number 25730131 and 16K16110. References Joan Bresnan, Anna Cueni, Tatiana Nikitina, and Harald Baayen. 2007. Predicting the dative alternation. In Gerlof Bouma, Irene Kr¨amer, and Joost Zwarts, editors, Cognitive foundations of interpretation, pages 69–94. Amsterdam: Royal Netherlands Academy of Science. Hajime Hoji. 1985. Logical Form Constraints and Configurational Structures in Japanese. Ph.D. thesis, University of Washington. Tomoo Inubushi, Kazuki Iijima, Masatoshi Koizumi, and Kuniyoshi L. Sakai. 2009. The effect of canonical word orders on the neural processing of double object sentences: An MEG study. In Proceedings of the 32nd Annual Meeting of the Japan Neuroscience Society. Daisuke Kawahara and Sadao Kurohashi. 2002. Fertilization of case frame dictionary for robust Japanese case analysis. In Proceedings of the 19th International Conference on Computational Linguistics (COLING 2002), pages 425–431. Masatoshi Koizumi and Katsuo Tamaoka. 2004. Cognitive processing of Japanese sentences with ditransitive verbs. Gengo Kenkyu, 125:173–190. Ayumi Koso, Hiroko Hagiwara, and Takahiro Soshi. 2004. What a multi-channel EEG system reveals about the processing of Japanese double object constructions (in Japanese). In Technical report of IEICE. Thought and language 104(170), pages 31–36. Susumu Kuno. 2006. Empathy and direct discourse perspectives. In Laurence Horn and Gergory Ward, editors, The Handbook of Pragmatics, Blackwell Handbooks in Linguistics, pages 315–343. Wiley. Mikinari Matsuoka. 2003. Two types of ditransitive consturctions in Japanese. Journal of East Asian Linguistics, 12:171–203. Shigeru Miyagawa and Takae Tsujioka. 2004. Argument structure and ditransitive verbs in Japanese. Journal of East Asian linguistics, 13:1–38. Shigeru Miyagawa. 1997. Against optional scrambling. Linguistic Inquiry, 28:1–26. Edson T. Miyamoto and Shoichi Takahashi. 2002. Sources of difficulty in processing scrambling in Japanese. In Mineharu Nakayama, editor, Sentence processing in East Asian languages. Stanford, Calif, pages 167–188. CSLI Publications. Yugo Murawaki and Sadao Kurohashi. 2012. Semisupervised noun compound analysis with edge and span features. In Proceedings of the 24th International Conference on Computational Linguistics (COLING 2012), pages 1915–1932. Keiko Nakamoto, Jae-Ho Lee, and Kow Kuroda. 2006. 
Cognitive mechanisms for sentence comprehension preferred word orders correlate with ”sentential” meanings that cannot be reduced to verb meanings: A new perspective on ”construction effects” in Japanese (in Japanese). Cognitive Studies, 13(3):334–352. Yasumasa Shigenaga. 2014. Canonical word order of Japanese ditransitive sentences: A preliminary investigation through a grammaticality judgment survey. Advances in Language and Literary Studies, 5(2):35–45. Naoki Yoshinaga and Masaru Kitsuregawa. 2014. A self-adaptive classifier for efficient text-stream processing. In Proceedings of the 24th International Conference on Computational Linguistics (COLING 2014), pages 1091–1102. 2244
2016
211
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2245–2254, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Knowledge-Based Semantic Embedding for Machine Translation Chen Shi†∗ Shujie Liu‡ Shuo Ren‡ Shi Feng§ Mu Li‡ Ming Zhou‡ Xu Sun† Houfeng Wang†¶ †MOE Key Lab of Computational Linguistics, Peking University ‡Microsoft Research Asia §Shanghai Jiao Tong University ¶Collaborative Innovation Center for Language Ability {shichen, xusun, wanghf}@pku.edu.cn [email protected] {shujliu, v-shuren, muli, mingzhou}@microsoft.com Abstract In this paper, with the help of knowledge base, we build and formulate a semantic space to connect the source and target languages, and apply it to the sequence-to-sequence framework to propose a Knowledge-Based Semantic Embedding (KBSE) method. In our KBSE method, the source sentence is firstly mapped into a knowledge based semantic space, and the target sentence is generated using a recurrent neural network with the internal meaning preserved. Experiments are conducted on two translation tasks, the electric business data and movie data, and the results show that our proposed method can achieve outstanding performance, compared with both the traditional SMT methods and the existing encoder-decoder models. 1 Introduction Deep neural network based machine translation, such as sequence-to-sequence (S2S) model (Cho et al., 2014; Sutskever et al., 2014), try to learn translation relation in a continuous vector space. As shown in Figure 1, the S2S framework contains two parts: an encoder and a decoder. To compress a variable-length source sentence into a fixed-size vector, with a recurrent neural network (RNN), an encoder reads words one by one and generates a sequence of hidden vectors. By reading all the source words, the final hidden vector should contain the information of source sentence, and it is called the context vector. Based on the context vector, another RNN-based neural network is used to generate the target sentence. ∗This work was done while the first author was visiting Microsoft Research. yT’ y2 y1 X1 X2 XT Decoder Encoder c 给我推荐个 4G 手机吧,最 好白的,屏幕 要大。 I want a white 4G cellphone with a big screen. Figure 1: An illustration of the RNN-based neural network model for Chinese-to-English machine translation The context vector plays a key role in the connection of source and target language spaces, and it should contain all the internal meaning extracted from source sentence, based on which, the decoder can generate the target sentence keeping the meaning unchanged. To extract the internal meaning and generate the target sentence, S2S framework usually needs large number of parameters, and a big bilingual corpus is acquired to train them. In many cases, the internal meaning is not easy to learn, especially when the language is informal. For the same intention, there are various expressions with very different surface string, which aggravates the difficulty of internal meaning extraction. As shown in Table 1, there are three different expressions for a same intention, a customer wants a white 4G cellphone with a big screen. The first and second expressions (Source1 and Source2) are wordy and contain lots of verbiage. To extract the internal meaning, the encoder should ignore these verbiage and focus on key information. This is hard for the encoder-decoder mechanism, since it is not defined or formulated that what kind of information is key information. 
The meaning s2245 X1 Source Grounding 给我推荐个4G 手机吧,最好白 的,屏幕要大。 I want a white 4G cellphone with a big screen. X2 XT y1 Target Generation y2 yT Semantic Space Category.cellphone Appearance.color.white Appearance.size.big_screen Network.4G_network Function.ability.smart Price.$NUM People.my_father Carrier.China_Unicom Brand.iPhone OS.Android People.students …… Source Sentence Target Sentence Figure 2: An illustration of Knowledge-Based Semantic Embedding (KBSE). Source1 啊,那个有大屏幕的4G 手机吗?要 白色的。 Source2 给我推荐个4G 手机吧,最好白的, 屏幕要大。 Source3 我想买个白色的大屏幕的4G 手机。 Intention I want a white 4G cellphone with a big screen. Enc-Dec I need a 4G cellphone with a big screen. Table 1: An example of various expressions for a same intention. pace of the context vector is only a vector space of continuous numbers, and users cannot add external knowledge to constrain the internal meaning space. Therefore, the encoder-decoder system (Enc-Dec) does not generate the translation of “白 色的”/“white”, and fails to preserve the correct meaning of Source1, shown in Table 1. No matter how different between the surface strings, the key information is the same (want, white, 4G, big screen, cellphone). This phenomenon motivates a translation process as: we firstly extract key information (such as entities and their relations) from the source sentence; then based on that, we generate target sentence, in which entities are translated with unchanged predication relations. To achieve this, background knowledge (such as, phone/computer, black/white, 3G/4G) should be considered. In this paper, we propose a Knowledge-Based Semantic Embedding (KBSE) method for machine translation, as shown in Figure 2. Our KBSE contains two parts: a Source Grounding part to extract semantic information in source sentence, and a Target Generation part to generate target sentence. In KBSE, source monolingual data and a knowledge base is leveraged to learn an explicit semantic vector, in which the grounding space is defined by the given knowledge base, then the same knowledge base and a target monolingual data are used to learn a natural language generator, which produce the target sentence based on the learned explicit semantic vector. Different from S2S models using large bilingual corpus, our KBSE only needs monolingual data and corresponding knowledge base. Also the context/semantic vector in our KBSE is no longer implicit continuous number vector, but explicit semantic vector. The semantic space is defined by knowledge base, thus key information can be extracted and grounded from source sentence. In such a way, users can easily add external knowledge to guide the model to generate correct translation results. We conduct experiments to evaluate our KBSE on two Chinese-to-English translation tasks, one in electric business domain, and the other in movie domain. Our method is compared with phrasal SMT method and the encoder-decoder method, and achieves significant improvement in both BLEU and human evaluation. KBSE is also combined with encoder-decoder method to get further improvement. In the following, we first introduce our framework of KBSE in section 2, in which the details of Source Grounding and Target Generation are illustrated. Experiments is conducted in Section 3. Discussion and related work are detailed in Section 4, followed by conclusion and future work. 
2246 2 KBSE: Knowledge-Based Semantic Embedding Our proposed KBSE contains two parts: Source Grounding part (in Section 2.1) embeds the source sentence into a knowledge semantic space, in which the grounded semantic information can be represented by semantic tuples; and Target Generation part (in Section 2.2) generates the target sentence based on these semantic tuples. 2.1 Source Grounding Source 啊,那个有大屏幕的4G 手机吗?要 白色的。 Category.cellphone Tuples Appearance.color.white Appearance.size.big screen Network.4G network Table 2: Source sentence and the grounding result. Grounding result is organized as several tuples. As shown in Table 2, given the source sentence, Source Grounding part tries to extract the semantic information, and map it to the tuples of knowledge base. It is worth noticing that the tuples are language-irrelevant, while the name of the entities inside can be in different languages. To get the semantic tuples, we first use RNN to encode the source sentence into a real space to get the sentence embedding, based on which, corresponding semantic tuples are generated with a neuralnetwork-based hierarchical classifier. Since the knowledge base is organized in a tree structure, the tuples can be seen as several paths in the tree. For Root Category Network … Appearance Computer Cellphone 4G 3G Size Shape Color Laptop Desktop … white red … … small … big_screen Figure 3: Illustration of the tuple tree for Table 2. Each tuple extracted from source sentence can be represented as a single path (solid line) in tuple tree. There are 4 solid line paths representing 4 tuples of Table 2. The path circled in dashed lines stands for the tuple Appearance.color.white. input layer embedding layer f hidden layer g ht ht-1 H We tuple tree xt dot rt LR classifier Figure 4: Illustration of Source Grounding. The input sentence x is transformed through an embedding layer f and a hidden layer g. Once we get the sentence embedding H, we calculate the inner product of H and the weight We for the specific edge e, and use a logistic regression as the classifier to decide whether this edge should be chosen. tuples in Table 2, Figure 3 shows the corresponding paths (in solid lines). 2.1.1 Sentence Embedding Sentence embedding is used to compress the variable-length source sentence into a fixed-size context vector. Given the input sentence x = (x1 ... xT ), we feed each word one by one into an RNN, and the final hidden vector is used as the sentence embedding. In detail, as shown in Figure 4, at time-stamp t, an input word xt is fed into the neural network. With the embedding layer f, the word is mapped into a real vector rt = f(xt). Then the word embedding rt is fed into an RNN g to get the hidden vector ht = g(rt, ht−1). We input the words one by one at time 1, 2, ..., T, and get the hidden vectors h1, h2, ..., hT . The last hidden state hT should contain all the information of the input sentence, and it is used as the sentence embedding H. To model the long dependency and memorize the information of words far from the end, Gated Recurrent Unit(GRU) (Cho et al., 2014) is leveraged as the recurrent function g. 2.1.2 Tuple Generation In our system, we need a tuple tree for tuple generation. For those knowledge base who is naturally organized as tree structure, such as Freebase, we use its own stucture. Otherwise, we manually build the tuple tree as the representation of the introduced knowledge base. 
Given a knowl2247 edge base for a specific domain, we divide the intention of this domain into several classes, while each class has subclasses. All the classes above can be organized as a tree structure, which is the tuple tree we used in our system, as shown in Figure 3. It is worth noticing that the knowledge base captures different intentions separately in different tree structures. Following the hierarchical log-bilinear model (HLBL) (Mnih and Hinton, 2009; Mikolov et al., 2013), based on the sentence embedding H, we build our neural-network-based hierarchical classifier as follows: Each edge e of tuple tree has a weight vector we, which is randomly initialized, and learned with training data. We go through the tuple tree top-down to find the available paths. For each current node, we have a classifier to decide which children can be chosen. Since several children can be chosen at the same time independently, we use logistic regression as the classifier for each single edge, rather than a softmax classifier to choose one best child node. For the source sentence and corresponding tuples in table 2, in the first layer, we should choose three children nodes: Category, Appearance and Network, and in the second layer with the parent node Appearance, two children nodes color and size should be selected recursively. As shown in Figure 4, the probability to choose an edge e with its connected child is computed as follows: p(1|e, H) = 1 1 + e−we·H (1) where the operator · is the dot product function. The probability of the tuples conditioned on the source sentence p(S|x1 ... xT ) is the product of all the edges probabilities, calculated as follows: p(S|x1 ... xT ) = p(S|H) = Y e∈C p(1|e, H) Y e′ /∈C p(0|e′, H) where p(1|e, H) is the probability for an edge e belonging to the tuple set S, and p(0|e′, H) is the probability for an edge e′ not in the tuple set S. 2.2 Target Generation With the semantic tuples grounded from source sentence, in this section, we illustrate how to generate target sentence. The generation of the target sentence is another RNN, which predicts the next word yt+1 conditioned on the semantic vector C and all the previously predicted words y1, ..., yt. Given current word yt, previous hidden vector ht−1, and the semantic vector C, the probability of next target word yt+1 is calculated as: ht = g(ht−1, yt, C) (2) p(yt+1|y1...yt, C) = es(yt+1,ht) P y′ es(y′,ht) (3) where equation (2) is used to generate the next hidden vector ht, and equation (3) is the softmax function to compute the probability of the next word yt+1. For the recurrent function g in equation (2), in order to generate target sentence preserving the semantic meaning stored in C , we modified GRU (Cho et al., 2014) following (Wen et al., 2015; Feng et al., 2016): rt = σ(W ryt + Urht−1 + V rct) h ′ t = tanh(Wyt + U(rt ⊙ht−1) + V ct) zt = σ(W zyt + Uzht−1 + V zct) dt = σ(W dyt + Udht−1 + V dct) ct = dt ⊙ct−1 ht = (1 −zt) ⊙h ′ t + zt ⊙ht−1 + tanh(V hct) in which, ct is the semantic embedding at time t, which is initialized with C, and changed with a extraction gate dt. The introduced extraction gate dt retrieve and remove information from the semantic vector C to generate the corresponding target word. To force our model to generate the target sentence keeping information contained in C unchanged, two additional terms are introduced into the cost function: X t log(p(yt|C)) + ∥cT ∥2 + 1 T T X j=1 ∥dt −dt−1∥2 where the first term is log-likelihood cost, the same as in the encoder-decoder. 
And the other two terms are introduced penalty terms. ∥cT ∥2 is for forcing the decoding neural network to extract as much information as possible from the semantic vector C, thus the generated target sentence keeps the same meaning with the source sentence. The third term is to restrict the extract gate from extracting too much information in semantic vector C at each time-stamp. For the semantic tuples in Table 2, our modified RNN generates the target sentence word by word, until meets the end symbol character: “I want a white 4G cellphone with a big screen.”. 2248 2.3 Combination The two components of KBSE (Source Grounding and Target Generation) are separately trained, and can be used in three ways: • Source Grounding can be used to do semantic grounding for a given sentence and get the key information as a form of tuples; • Target Generation can generate a natural language sentence based on the existing semantic tuples; • Combining them, KBSE can be used to translation a source sentence into another language with a semantic space defined by a given knowledge base. 3 Experiments To evaluate our proposed KBSE model, in this section, we conduct experiments on two Chineseto-English translation tasks. One is from electric business domain, and the other is from movie domain. 3.1 Baseline and Comparison Systems We select two baseline systems. The first one is an in-house implementation of hierarchical phrasebased SMT (Koehn et al., 2003; Chiang, 2007) with traditional features, which achieves a similar performance to the state-of-the-art phrase-based decoder in Moses 1 (Koehn et al., 2007). The 4gram language model is trained with target sentences from training set plus the Gigaword corpus 2. Our phrase-based system is trained with MERT (Och, 2003). The other system is the encoderdecoder system (van Merri¨enboer et al., 2015) 3, based on which our KBSE is implemented. We also combine KBSE with encoder-decoder system, by adding the knowledge-based semantic embedding to be another context vector. Hence, for the decoder there are two context vectors, one from the encoder and the other is generated by the Semantic Grounding part. We call this model Enc-Dec+KBSE. For our proposed KBSE, the number of hidden units in both parts are 300. Embedding size of both source and target are 200. Adadelta (Zeiler, 2012) 1http://www.statmt.org/moses/ 2https://catalog.ldc.upenn.edu/ LDC2011T07 3The implementation is from https://github. com/mila-udem/blocks-examples Source Sentence Semantic Tuples Category.cellphone 我要iPhone 移版的 Carrier.China Mobile Brand.iPhone 黑客帝国是一部由 Name.The Matrix 沃卓斯基兄弟执导 Genre.science fiction 的科幻电影,影片 Director.Wachowski bro 语言为英语。 Language.English Semantic Tuples Target Sentence Category.cellphone Appearance.color.white I want a white 4G phone Appearance.size.big screen with a big screen . Network.4G network Name.Pirates of Caribbean The Pirates of the Released.2003 Caribbean is a 2003 Country.America American film, starring Starring.Johnny Depp Johnny Depp . Table 3: Illustration of dataset structure in this paper. We show one example for both corpus in both part, respectively. is leveraged as the optimizer for neural network training. The batch size is set to 128, and learning rate is initialized as 0.5. The model weights are randomly initialized from uniform distribution between [-0.005, 0.005]. 
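As a concrete reference for the Target Generation objective described above (token log-likelihood plus the ||c_T||² and gate-smoothness penalties), here is a minimal NumPy sketch over placeholder arrays. This is not the authors' code: the real system computes these quantities from the modified GRU decoder and minimises the resulting scalar with Adadelta under the settings of Section 3.1, and the unweighted sum of the three terms follows the formula as printed; any additional weighting of the penalties would be an assumption.

```python
import numpy as np

def kbse_target_loss(step_log_probs, c_final, gates):
    """step_log_probs: shape (T,)   log p(y_t | y_<t, C) of the gold target words
    c_final:        shape (dim,)  what is left of the semantic vector C after step T
    gates:          shape (T, dim) extraction gate d_t at every time step"""
    # The paper writes the first term as a log-likelihood; a minimiser uses its negative.
    nll = -np.sum(step_log_probs)
    leftover = np.sum(c_final ** 2)                                  # ||c_T||^2
    # (1/T) sum_t ||d_t - d_{t-1}||^2, taken here as the mean over the T-1 differences.
    smooth = np.mean(np.sum(np.diff(gates, axis=0) ** 2, axis=1))
    return nll + leftover + smooth

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, dim = 10, 300
    demo = kbse_target_loss(np.log(rng.uniform(0.1, 1.0, T)),   # stand-in log-probs
                            rng.normal(size=dim),
                            rng.uniform(size=(T, dim)))
    print(demo)
```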
3.2 Dataset Details To train our KBSE system, we only need two kinds of pairs: the pair of source sentence and semantic tuples to train our Source Grounding, the pair of semantic tuples and target sentence to train our Target Generation. Examples of our training data in the electric business and movie domains are shown in Table 3. To control the training process of KBSE, we randomly split 1000 instances from both corpus for validation set and another 1000 instances for test set. Our corpus of electric business domain consists of bilingual sentence pairs labeled with KB tuples manually 4, which is a collection of source-KB-target triplets. For the Movie domain, all the data are mined from web, thus we only have small part of source-KB-target triplets. In order to show the advantage of our proposed KBSE, we also mined source-KB pairs and KB-target pairs separately. It should be noted that, similar as the encoder-decoder method, bilingual data is needed for Enc-Dec+KBSE, thus with the added knowledge tuples, Enc-Dec+KBSE are trained with source-KB-target triplets. 4Due to the coverage problem, knowledge bases of common domain (such as Freebase) are not used in this paper. 2249 Electric Business Movie Model BLEU HumanEval Tuple F-score BLEU HumanEval Tuple F-score SMT 54.30 78.6 42.08 51.4 Enc-Dec 60.31 90.8 44.27 65.8 KBSE 62.19 97.1 92.6 47.83 72.4 80.5 Enc-Dec + KBSE 64.52 97.9 46.35 74.6 KBSE upperbound 63.28 98.2 100 49.68 77.1 100 Table 4: The BLEU scores, human evaluation accuracy, tuple F-score for the proposed KBSE model and other benchmark models. Our electric business corpus contains 50,169 source-KB-target triplets. For this data, we divide the intention of electric business into 11 classes, which are Category, Function, Network, People, Price, Appearance, Carrier, Others, Performance, OS and Brand. Each class above also has subclasses, for example Category class has subclass computer and cellphone, and computer class can be divided into laptop, tablet PC, desktop and AIO. Our movie corpus contains 44,826 source-KBtarget triplets, together with 76,134 source-KB pairs and 85,923 KB-target pairs. The data is crawling from English Wikipedia 5 and the parallel web page in Chinese Wikipedia 6. Simple rule method is used to extract sentences and KB pairs by matching the information in the infobox and the sentences in the page content. Since not all the entities from Chinese wikipedia has english name, we have an extra entity translator to translate them. For a fair comparison, this entity translator are also used in other systems. Due to the whole process is semi-automatic, there may be a few irregular results within. We divided the intention of movie data into 14 classes, which are BasedOn, Budget, Country, Director, Distributor, Genre, Language, Name, Producer, Released, Starring, Studio, Theme and Writer. 3.3 Evaluation We use BLEU (Papineni et al., 2002) as the automatical evaluation matrix, significant testing is carried out using bootstrap re-sampling method (Koehn, 2004) with a 95% confidence level. As an addition, we also do human evaluation for all the comparison systems. Since the first part Source Grounding of our KBSE is separately trained, the F-score of KB tuples is also evaluated. Table 4 5https://en.wikipedia.org 6https://zh.wikipedia.org lists evaluation results for the electric business and movie data sets. 
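Since Table 4 also reports a tuple F-score for the Source Grounding step, the sketch below shows one straightforward way such a set-level score can be computed for a single sentence, treating predicted and gold tuples as string sets. Whether the reported number is micro- or macro-averaged over the corpus is not specified in the paper, so this is purely illustrative.

```python
def tuple_f1(predicted, gold):
    """Set-level precision/recall/F1 between predicted and gold KB tuples."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# e.g. a system that recovers 3 of the 4 gold tuples of Table 2 and adds one
# spurious tuple gets precision = recall = 0.75, hence F1 = 0.75.
```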
3.3.1 BLEU Evaluation From Table 4, we can find that our proposed method can achieve much higher BLEU than SMT system, and we can also achieve 1.9 and 3.6 BLEU points improvement compared with the raw encoder-decoder system on both eletric business and movies data. For the Enc-Dec+KBSE method, with the same training data on electric business domain, introducing knowledge semantic information can achieve about 4 BLEU points compared with the encoder-decoder and more than 2 BLEU points compared with our KBSE. Compared with encoder-decoder, Enc-Dec+KBSE method leverages the constrained semantic space, so that key semantic information can be extracted. Compared with KBSE, which relies on the knowledge base, Enc-Dec+KBSE method can reserve the information which is not formulated in the knowledge base, and also may fix errors generated in the source grounding part. Since Enc-Dec+KBSE can only be trained with source-KB-target triplets, for the movie dataset, the performance is not as good as our KBSE, but still achieves a gain of more than 2 BLEU point compared with the raw Enc-Dec system. On movie data, our KBSE can achieve significant improvement compared with the models (SMT, EncDec, Enc-Dec+KBSE ) only using bilingual data. This shows the advantage of our proposed method, which is our model can leverage monolingual data to learn Source Grounding and Target Generation separately. We also separately evaluate the Source Grounding and Target Generation parts. We evaluate the F-score of generated KB tuples 2250 0 0.2 0.4 0.6 0.8 1 I want a white 4G cellphone with a big screen . tuple feature values cellphone 4G_network white big_screen Figure 5: An example showing how the KB tuples control the tuple features flowing into the network via its learned semantic gates. compared with the golden KB tuples. The result shows that our semantic grounding performance is quite high (92.6%), which means the first part can extract the semantic information in high coverage and accuracy. We evaluate the translation result by feeding the Target Generation network with human labeled KB tuples. The translation result (shown as KBSE upperbound in Table 4) with golden KB tuples can achieve about 1.1 and 1.8 BLEU scores improvement compared with KBSE with generated KB tuples in both dataset. 3.3.2 Human Evaluation For the human evaluation, we do not need the whole sentence to be totally right. We focus on the key information, and if a translation is right by main information and grammar correction, we label it as correct translation, no matter how different of the translation compared with the reference on surface strings. Examples of correct and incorrect translations are shown in Table 5. As shown in Table 4, the human evaluation result shares the same trend as in BLEU evaluation. Our proposed method achieves the best results compared with SMT and raw encoder-decoder. In our method, important information are extracted and normalized by encoding the source sentence into the semantic space, and the correct translation of important information is key for human evaluation, thus our method can generate better translation. 3.4 Qualitative Analysis In this section, we compare the translation result with baseline systems. Generally, since KB is introduced, our model is good at memorizing the key information of the source sentence. Also thanks to the strong learning ability of GRU, our model rarely make grammar mistakes. In many translations generated by traditional SMT, key informaTarget I want a black Dell desktop. 
Correct I want a Dell black desktop. Could you please recommend me a black Dell desktop? I want a white Dell desktop. Incorrect I want a black Dell laptop. I want a black Dell desktop desktop. Table 5: Some examples of which kind of sentence can be seen as a correct sentence and which will be seen as incorrect in the part of human evaluation. tion is lost. Encoder-Decoder system does much better, but some key information is also lost or even repetitively generated. Even for a long source sentence with a plenty of intentions, our model can generate the correct translation. To show the process of Target Generation, Figure 5 illustrates how the KB-tuples control the target sentence generation. Taking the semantic tuple Appearance.color.white as an example, the GRU keeps the feature value almost unchanged until the target word “white” is generated. Almost all the feature values drop from 1 to 0, when the corresponding words generated, except the tuple Appearance.size.big screen. To express the meaning of this tuple, the decoding neural network should generate two words, “big” and “screen”. When the sentence finished, all the feature values should be 0, with the constraint loss we introduced in Section 2.2. Table 6 lists several translation example generated by our system, SMT system and the EncoderDecoder system. The traditional SMT model sometimes generate same words or phrases several times, or some information is not translated. But our model rarely repeats or lose information. Besides, SMT often generate sentences unreadable, since some functional words are lost. But for KB2251 Source 啊,那个有大屏幕的4G 手机吗?要白色的。 Reference I want a 4G network cellphone with China Telecom supported. KBSE I need a white 4G cellphone with China Telecom supported. Enc-Dec I want a 3G cellphone with China Telecom. SMT Ah, that has a big screen, 4G network cellphone? give white. Source 黑客帝国是一部2003 年由沃卓斯基兄弟执导的电影,里维斯主演,影片语言为英语。 Reference The Matrix is a 2003 English film directed by Wachowski Brothers, starring Keanu Reeves. KBSE The Matrix is a 2003 English movie starring Keanu Reeves, directed by Wachowski Brothers. Enc-Dec The Matrix is a 2013 English movie directed by Wachowski, starring Johnny Depp. SMT The Matrix is directed by the Wachowski brothers film, and starring film language English. Table 6: Examples of some translation results for our proposed KBSE system and the baseline systems. SE, the target sentence is much easier to read. The Encoder-Decoder model learns the representation of the source sentence to a hidden vector, which is implicit and hard to tell whether the key information is kept. However KBSE learns the representation of the source sentence to a explicit tuple embedding, which contains domain specific information. So sometimes when encoder-decoder cannot memorize intention precisely, KBSE can do better. 3.5 Error Analysis Our proposed KBSE relies on the knowledge base. To get the semantic vector of source sentence, our semantic space should be able to represent any necessary information in the sentence. For example, since our designed knowledge base do not have tuples for number of objects, some results of our KBSE generate the entities in wrong plurality form. Since our KBSE consists of two separate parts, the Source Grounding part and the Target Generation part, the errors generated in the first part cannot be corrected in the following process. 
As we mentioned in Section 3.3.1, combining KBSE with encoder-decoder can alleviate these two problems, by preserving information not captured and correct the errors generated in source grounding part. 4 Related Work Unlike previous works using neural network to learn features for traditional log-linear model (Liu et al., 2013; Liu et al., 2014), Sutskever et al. (2014) introduced a general end-to-end approach based on an encoder-decoder framework. In order to compress the variable-sized source sentence into a fixed-length semantic vector, an encoder RNN reads the words in source sentence and generate a hidden state, based on which another decoder RNN is used to generate target sentence. Different from our work using a semantic space defined by knowledge base, the hidden state connecting the source and target RNNs is a vector of implicit and inexplicable real numbers. Learning the semantic information from a sentence, which is also called semantic grounding, is widely used for question answering tasks (Liang et al., 2011; Berant et al., 2013; Bao et al., 2014; Berant and Liang, 2014). In (Yih et al., 2015), with a deep convolutional neural network (CNN), the question sentence is mapped into a query graph, based on which the answer is searched in knowledge base. In our paper, we use RNN to encode the sentence to do fair comparison with the encoderdecoder framework. We can try using CNN to replace RNN as the encoder in the future. To generate a sentence from a semantic vector, Wen et al. (2015) proposed a LSTM-based natural language generator controlled by a semantic vector. The semantic vector memorizes what information should be generated for LSTM, and it varies along with the sentence generated. Our Target Generation part is similar with (Wen et al., 2015), while the semantic vector is not predefined, but generated by the Source Grounding part. 5 Conclusion and Future Work In this paper, we propose a Knowledge Based Semantic Embedding method for machine translation, in which Source Grounding maps the source sentence into a semantic space, based on which Target Generation is used to generate the translation. Unlike the encoder-decoder neural network, in which the semantic space is implicit, the semantic space of KBSE is defined by a given knowledge base. Semantic vector generated by KBSE can extract and ground the key information, with the help of knowledge base, which is preserved in the translation sentence. Experiments are conducted on a electronic business and movie data sets, 2252 and the results show that our proposed method can achieve significant improvement, compared with conventional phrase SMT system and the state-ofthe-art encoder-decoder system. In the future, we will conduct experiments on large corpus in different domains. We also want to introduce the attention method to leverage all the hidden states of the source sentence generated by recurrent neural network of Source Grounding. Acknowledgement We thank Dongdong Zhang, Junwei Bao, Zhirui Zhang, Shuangzhi Wu and Tao Ge for helpful discussions. This research was partly supported by National Natural Science Foundation of China (No.61333018 No.61370117) and Major National Social Science Fund of China (No.12&ZD227). References Junwei Bao, Nan Duan, Ming Zhou, and Tiejun Zhao. 2014. Knowledge-based question answering as machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 967– 976, Baltimore, Maryland, June. 
Association for Computational Linguistics. Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, Volume 1: Long Papers, pages 1415–1425. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544, Seattle, Washington, USA, October. Association for Computational Linguistics. David Chiang. 2007. Hierarchical phrase-based translation. computational linguistics, 33(2):201–228. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734, Doha, Qatar, October. Association for Computational Linguistics. Shi Feng, Shujie Liu, Mu Li, and Ming Zhou. 2016. Implicit distortion and fertility models for attentionbased encoder-decoder NMT model. CoRR, abs/1601.03317. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language TechnologyVolume 1, pages 48–54. Association for Computational Linguistics. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions, pages 177–180. Association for Computational Linguistics. Philipp Koehn, 2004. Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, chapter Statistical Significance Tests for Machine Translation Evaluation. Percy Liang, Michael I. Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In ACL, pages 590–599. Lemao Liu, Taro Watanabe, Eiichiro Sumita, and Tiejun Zhao. 2013. Additive neural networks for statistical machine translation. In ACL (1), pages 791–801. Shujie Liu, Nan Yang, Mu Li, and Ming Zhou. 2014. A recursive recurrent neural network for statistical machine translation. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Andriy Mnih and Geoffrey E Hinton. 2009. A scalable hierarchical distributed language model. In Advances in neural information processing systems, pages 1081–1088. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160–167, Sapporo, Japan, July. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA, July. Association for Computational Linguistics. 
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112. Bart van Merriënboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio. 2015. Blocks and fuel: Frameworks for deep learning. arXiv preprint arXiv:1506.00619. Tsung-Hsien Wen, Milica Gasic, Nikola Mrkšić, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711–1721, Lisbon, Portugal, September. Association for Computational Linguistics. Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1321–1331, Beijing, China, July. Association for Computational Linguistics. Matthew D. Zeiler. 2012. ADADELTA: An adaptive learning rate method. CoRR, abs/1212.5701.
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2255–2264, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics One for All: Towards Language Independent Named Entity Linking Avirup Sil and Radu Florian IBM T. J. Watson Research Center 1101 Kitchawan Road Yorktown Heights, NY 10598 [email protected], [email protected] Abstract Entity linking (EL) is the task of disambiguating mentions in text by associating them with entries in a predefined database of mentions (persons, organizations, etc). Most previous EL research has focused mainly on one language, English, with less attention being paid to other languages, such as Spanish or Chinese. In this paper, we introduce LIEL, a Language Independent Entity Linking system, which provides an EL framework which, once trained on one language, works remarkably well on a number of different languages without change. LIEL makes a joint global prediction over the entire document, employing a discriminative reranking framework with many domain and language-independent feature functions. Experiments on numerous benchmark datasets, show that the proposed system, once trained on one language, English, outperforms several state-of-the-art systems in English (by 4 points) and the trained model also works very well on Spanish (14 points better than a competitor system), demonstrating the viability of the approach. 1 Introduction We live in a golden age of information, where we have access to vast amount of data in various forms: text, video and audio. Being able to analyze this data automatically, usually involves filling a relational database, which, in turn, requires the processing system to be able to identify actors across documents by assigning unique identifiers to them. Entity Linking (EL) is the task of mapping specific textual mentions of entities in a text document to an entry in a large catalog of entities, often called a knowledge base or KB, and is one of the major tasks in the Knowledge-Base Population track at the Text Analysis Conference (TAC) (Ji et al., 2014). The task also involves grouping together (clustering) NIL entities which do not have any target referents in the KB. Previous work, pioneered by (Bunescu and Pasca, 2006; Cucerzan, 2007; Sil et al., 2012; Ratinov et al., 2011; Guo et al., 2013), have used Wikipedia as this target catalog of entities because of its wide coverage and its frequent updates made by the community. As with many NLP approaches, most of the previous EL research have focused on English, mainly because it has many NLP resources available, it is the most prevalent language on the web, and the fact that the English Wikipedia is the largest among all the Wikipedia datasets. However, there are plenty of web documents in other languages, such as Spanish (Fahrni et al., 2013; Ji et al., 2014), and Chinese (Cao et al., 2014; Shi et al., 2014), with a large number of speakers, and there is a need to be able to develop EL systems for these languages (and others!) quickly and inexpensively. In this paper, we investigate the hypothesis that we can train an EL model that is entirely unlexicalized, by only allowing features that compute similarity between the text in the input document and the text/information in the KB. For this purpose, we propose a novel approach to entity linking, which we call Language Independent Entity Linking (henceforth LIEL). 
We test this hypothesis by applying the English-trained system on Spanish and Chinese datasets, with great success. This paper has three novel contributions: 1) extending a powerful inference algorithm for global entity linking, built using similarity measures, corpus statistics, along with knowledge base statis2255 tics, 2) integrates many language-agnostic and domain independent features in an exponential framework, and 3) provide empirical evidence on a large variety of popular benchmark datasets that the resulting model outperforms or matches the best published results, and, most importantly, the trained model transfers well across languages, outperforming the state-of-the-art (SOTA) in Spanish and matching it in Chinese. We organize the paper as follows: the next section motivates the problem and discusses the language-independent model along with the features. Section 3 describes our experiments and comparison with the state-of-the-art. Section 4 illustrates the related previous work and Section 5 concludes. 2 Problem Formulation 2.1 Motivation for Language Independence Our strategy builds an un-lexicalized EL system by training it on labeled data, which consists of pairs of mentions in text and entries in a database extracted from a Wikipedia collection in English. Unlike traditional EL, however, the purpose here is to be able to perform entity linking with respect to any Wikipedia collection. Thus the strategy must take care to build a model that can transfer its learned model to a new Wikipedia collection, without change. At a first glance, the problem seems very challenging learning how to discriminate Lincoln, Nebraska and Abraham Lincoln 1, the former US President, seemingly bears little resemblance to disambiguating between different Spanish person entities named “Ali Quimico”. The crux of the problem lies in the fact that Wikipedia-driven features are language-specific: for instance, counting how many times the category 2010 Deaths appears in the context of an entity is highly useful in the English EL task, but not directly useful for Spanish EL. Also, learning vocabulary-specific information like the list of “deaths”, “presidents”, etc. is very useful for disambiguating person entities like “Lincoln” in English, but the same model, most likely, will not work for mentions like “李娜” in a Chinese document which might either refer to the famous athlete 李娜(网球运动 员) or the singer 李娜(歌手). 1Teletype font denotes Wikipedia titles and categories. Practically we assume the existence of a knowledge base that defines the space of entities we want to disambiguate against, where each entry contains a document with the entity; Wikipedia is a standard example for this2. If there are other properties associated with the entries, such as categories, in-links, out-links, redirects, etc., the system can make use of them, but they are theoretically not required. The task is defined as: given a mention m in a document d, find the entry e in the knowledge base that m maps to. We expand on the architecture described in (Sil and Yates, 2013) (henceforth NEREL), because of the flexibility provided by the feature-based exponential framework which results in an English SOTA EL system. However, we design all our features in such a way that they measure the similarity between the context where the mention m appears in d and the entries in the knowledge base. 
For example, instead of counting how often the category 2010 Deaths 3 appears in the context around an entity mention, we create a feature function such as CATEGORY FREQUENCY(m, e), which counts how often any category of entity referent e appears in the context of mention m. For entities like Lincoln, Nebraska in the English EL, CATEGORY FREQUENCY will add together counts for appearances of categories like Cities in Lancaster County, Nebraska and Lincoln metropolitan area, among other categories. At the same time, in the Spanish EL domain, CATEGORY FREQUENCY will add together counts for Pol´ıticos de Irak and Militares de Irak for the KB id corresponding to “Ali Quimico”. This feature is well-defined in both domains, and larger values of the feature indicate a better match between m and e. As mentioned earlier, it is our hypothesis, that the parameters trained for such features on one language (English, in our case) can be successfully used, without retraining, on other languages, namely Spanish and Chinese. While training, the system will take as input a knowledge base in source language S, KBS (extracted from Wikipedia) and a set of training examples (mi, ei, gi), where instances mi are mentions in a document of language S, ei are entity links, ei ∈KBS, and gi are Boolean val2We will assume, without loss of generality, that the knowledge base is derived from Wikipedia. 3Or a specific Freebase type. 2256 ues indicating the gold-standard match / mismatch between mi and ei. During decoding, given language T 4, the system must classify examples (mj, ej) drawn from a target language T and knowledge-base KBT . 2.2 LIEL: Training and Inference Our language-independent system consists of two components: 1. extracting mentions of namedentities from documents and 2. linking the detected mentions to a knowledge base, which in our case is Wikipedia (focus of this paper). We run the IBM Statistical Information and Relation Extraction (SIRE) 5 system which is a toolkit that performs mention detection, relation extraction, coreference resolution, etc. We use the system to extract mentions and perform coreference resolution: in particular, we use the CRF model of IBM SIRE for mention detection and a maximum entropy clustering algorithm for coreference resolution. The system identifies a set of 53 entity types. To improve the mention detection and resolution, case restoration is performed on the input data. Case restoration is helpful to improve the mention detection system’s performance, especially for discussion forum data. Obviously, this processing step is language-dependent, as the information extraction system is - but we want to emphasize that the entity linking system is language independent. In the EL step, we perform a full document entity disambiguation inference, described as follows. Given a document d, and a selected mention m ∈d, our goal is to identify its label ˆe that maximizes ˆe = P (e|m, d) (1) = arg max e:m X k,m∈mk 1 ,ek 1 P  mk 1|m, d  P  ek 1|mk 1, d  where mk 1 are mentions found in document d, and ek 1 are some label assignment. In effect, we are looking for the best mention labeling of the entire document mk 1 (that contains m) and a label to these mentions that would maximize the information extracted from the entire document. 
Since direct inference on Equation 1 is hard, if not intractable, we are going to select the most likely 4Language prediction can be done relatively accurately, given a document; however, in this paper, we focus on the EL task, so we assume we know the identity of the target language T. 5The IBM SIRE system can be currently accessed at : http://www.ibm.com/smarterplanet/us/en/ibmwatson/ developercloud/relationship-extraction.html mention assignment instead (as found by an information extraction system): we will only consider the detected mentions (m1, . . . , mk), and other optional information that can be extracted from the document, such as links l, categories r, etc. The goal becomes identifying the set of labels (e1, . . . , ek) that maximize P  ek 1|mk 1, d  (2) Since searching over all possible sets of (mention, entity)-pairs for a document is still intractable for reasonable large values of k, typical approaches to EL make simplifying assumption on how to compute the probability in Equation 2. Several full-document EL approaches have investigated generating up to N global tuples of entity ids (e1, . . . , ek), and then build a model to rank these tuples of entity ids (Bunescu and Pasca, 2006; Cucerzan, 2007). However, Ratinov et al. (Ratinov et al., 2011) argue that this type of global model provides a relatively small improvement over the purely-local approach (where P ek 1|mk 1, d  = Q i P (ei|mi, d)). In this paper, we follow an approach which combines both of these strategies. Following the recent success of (Sil and Yates, 2013), we partition the full set of extracted mentions, (mi)i= ¯ 1,n of the input document d into smaller subsets of mentions which appear near one another: we consider two mentions that are closer then 4 words to be in the same connected component, then we take the transitive closure of this relation to partition the mention set. We refer to these sets as the connected components of d, or CC(d). We perform classification over the set of entity-mention tuples T (C) = n ei1, . . . , einC |mi1, . . . , minC  |eij ∈KB, ∀j o 6 that are formed using candidate entities within the same connected component C ∈CC(d). Consider this small snippet of text: “...Home Depot CEO Nardelli quits ...” In this example text, the phrase “Home Depot CEO Nardelli” would constitute a connected component. Two of the entity-mention tuples for this connected component would be: (Home Depot, Robert Nardelli |”Home Depot”, “Nardelli”) and (Home Depot, Steve Nardelli | ”Home Depot”,“Nardelli”). 6For simplicity, we denote by (e|m) the tuple (e, m), written like that to capture the fact that m is fixed, while e is predicted. 2257 2.2.1 Collective Classification Model To estimate P(t|d, C), the probability of an entitymention tuple t for a given connected component C ∈CC(d), LIEL uses a maximum-entropy model: P(t|d, C) = exp (w · f(t, d, C)) P t′∈T(C) exp (w · f(t′, d, C)) (3) where f(t, d, C) is a feature vector associated with t, d, and C, and w is a weight vector. For training, we use L2-regularized conditional log likelihood (CLL) as the objective CLL(G, w) = X (t,d,C)∈G log P(t|d, C, w)−σ∥w∥2 2 (4) where G is the gold-standard training data, consisting of pairs (t, d, C), where t is the correct tuple of entities and mentions for connected component C in document d, and σ is a regularization parameter. Given that the function 4 is convex, we use LBFGS (Liu and Nocedal, 1989) to find the globally optimal parameter settings over the training data. 
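A compact sketch of the scoring in Equations 3–4: each candidate entity-mention tuple of a connected component is scored with w·f(t, d, C) and the scores are softmax-normalised. The `feature_vector` callable stands in for the feature functions of Section 2.4, and the demo reuses the Home Depot / Nardelli example with random feature values; none of this is the authors' implementation.

```python
import numpy as np

def tuple_distribution(candidate_tuples, w, feature_vector):
    """P(t | d, C) of Equation 3 over the candidate tuples of one component."""
    scores = np.array([w @ feature_vector(t) for t in candidate_tuples])
    scores -= scores.max()          # subtract the max for numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()

# During training, w is fit by maximising the L2-regularised conditional
# log-likelihood of Equation 4 (the paper uses L-BFGS); at decoding time LIEL
# returns the argmax tuple of this distribution for each connected component.

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=16)
    candidates = ["(Home_Depot, Robert_Nardelli)", "(Home_Depot, Steve_Nardelli)"]
    feats = {t: rng.normal(size=16) for t in candidates}   # random stand-in features
    dist = tuple_distribution(candidates, w, feats.__getitem__)
    print(dict(zip(candidates, dist.round(3))))
```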
2.3 Extracting potential target entities From the dump of our Wikipedia data, we extract all the mentions that can refer to Wikipedia titles, and construct a set of disambiguation candidates for each mention (which are basically the hyperlinks in Wikipedia). This is, hence, an anchor-title index that maps each distinct hyperlink anchortext to its corresponding Wikipedia titles and also stores their relative popularity score. For example, the anchor text (or mention) “Titanic” is used in Wikipedia to refer both to the ship or to the movie. To retrieve the disambiguation candidates ei for a given mention mi, we query the anchor-title index that we constructed and use lexical sub-word matching. ei is taken to be the set of titles (or entities, in the case of EL) most frequently linked to with anchor text mi in Wikipedia. We use only the top 40 most frequent Wikipedia candidates for the anchor text for computational efficiency purposes for most of our experiments. We call this step “Fast Search” since it produces a bunch of candidate links by just looking up an index. 2.3.1 Decoding At decoding time, given a document d, we identify its connected components CC (d) and run inference on each component C containing the desired input mention m. To further reduce the run time, for each mention mj ∈C, we obtain the set of potential labels ej using the algorithm described in Section 2.3, and then exhaustively find the pair that maximizes equation 3. For each candidate link, we also add a NIL candidate to fast match to let the system link mentions to ids not in a KB. 2.4 Language-Independent Feature Functions LIEL makes use of new as well as well-established features in the EL literature. However, we make sure to use only non-lexical features. The local and global feature functions computed from this extracted information are described below. Generically, we have two types of basic features: one that takes as input a KB entry e, the mention m and its document and a second type that scores two KB entries, e1 and e2. When computing the probability in Equation 3, where we consider a set of KB entries t7, we either sum or apply a boolean AND operator (in case of boolean features) among all entities e ∈t, while the entityentity functions are summed/and’ed for consecutive entities in t. We describe the features in these terms, for simplicity. 2.4.1 Mention-Entity Pair Features Text-based Features: We assume the existence of a document with most entries in the KB, and the system uses similarity between the input document and these KB documents. The basic intuition behind these features, inspired by Ratinov et al.(2011), is that a mention m ∈d is more likely to refer to entity e if its KB page, W(e), has high textual similarity to input document d. Let Text (W (e)) be the vector space model associated with W (e), Top (W (e)) be the vector of the top most frequently occurring words (excluding stop-words) from W (e), and Context(W(e)) be the vector space of the 100 word window around the first occurrence of m in W(e). Similarly, we create vector space models Text(m) and Context(m). We then use cosine similarity over these vector space models as features: i. cosine(Text (W (e)) , Text (m)), ii. cosine(Text (W (e)) , Context (m)), iii. cosine(Context (W (e)) , Text (m)), iv. cosine(Context (W (e)) , Context (m)), v. cosine(Top (W (e)) , Text (m)), 7Recall that the probability is computed for all the entity assignments for mentions in a clique. 2258 vi. cosine (Top (W (e)) , Context (m)). 
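A small sketch of the cosine features i–vi above: bag-of-words vector space models compared by cosine similarity. Tokenisation and stop-word handling are deliberately simplified here; the real models are built from W(e), its most frequent words, and 100-word context windows.

```python
from collections import Counter
from math import sqrt

def bow(text):
    """Very rough bag-of-words vector space model (no stop-word removal)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

if __name__ == "__main__":
    doc_context = bow("Home Depot CEO Nardelli quits after six years")
    kb_text = bow("Robert Nardelli served as CEO of Home Depot and later Chrysler")
    # One of the six mention-entity similarity features, e.g. cosine(Text(W(e)), Context(m)):
    print(round(cosine(doc_context, kb_text), 3))
```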
KB Link Properties: LIEL can make use of existing relations in the KB, such as inlinks, outlinks, redirects, and categories. Practically, for each such relation l, a KB entry e has an associated set of strings I(l, e)8; given a mention-side set M (either Text(m) or Context(m)), LIEL computes FREQUENCY feature functions for the names of the Categories, Inlinks, Outlinks and Redirects, we compute f(e, m, d) = |I(l, e) ∩M| Title Features: LIEL also contains a number of features that make use of the Wikipedia title of the entity links in t (remember t = entity mention tuples and not a Wikipedia title) : • NIL FREQUENCY: Computes the frequency of entities that link to NIL • EXACT MATCH FREQUENCY: returns 1 if the surface form of m is a redirect for e; • MATCH ALL: returns true if m matches exactly the title of e; • MATCH ACRONYM: returns true if m is an acronym for a redirect of e; • LINK PRIOR: the prior link probability P(e|m), computed from anchor-title pairs in KB (described in Section 2.3). 2.4.2 Entity-Entity Pair Features Coherence Features: To better model consecutive entity assignments, LIEL computes a coherence feature function called OUTLINK OVERLAP. For every consecutive pair of entities (e1, e2) that belongs to mentions in t, the feature computes Jaccard(Out(e1), Out(e2)), where Out(e) denotes the Outlinks of e. Similarly, we also compute INLINK OVERLAP. LIEL also uses categories in Wikipedia which exist in all languages. The first feature ENTITY CATEGORY PMI, inspired by Sil and Yates (2013), make use of Wikipedia’s category information system to find patterns of entities that commonly appear next to one another. Let C(e) be the set of Wikipedia categories for entity e. We manually inspect and remove a handful of common Wikipedia categories based on threshold frequency on our training data, which are associated with almost every entity in text, like Living 8For instance, redirect strings for “Obama” are “Barack Obama”, “Barack Obama Jr.” and “Barack Hussein Obama”. People etc., since they have lower discriminating power. These are analogous to all WP languages. From the training data, the system first computes point-wise mutual information (PMI) (Turney, 2002) scores for the Wikipedia categories of pairs of entities, (e1, e2): PMI(C(e1), C(e2)) = ntC −1 X j=1 1[C(e1) = C(eij) ∧C(e2) = C(eij+1)] X j 1[C(e1) = C(eij)] × X j 1[C(e2) = C(eij)] • ENTITY CATEGORY PMI adds these PMI scores up for every consecutive (e1, e2) pair in t. • CATEGORICAL RELATION FREQUENCY We would like to boost consecutive entity assignments that have been seen in the training data. For instance, for the text “England captain Broad fined for..”, we wish to encourage the tuple that links “England” to the entity id of the team name England cricket team, and “Broad” to the entity id of the person Stuart Broad. Wikipedia contains a relation displayed by the category called English cricketers that indicates that Stuart Broad is a team member of England cricket team, and counts the number of such relations between every consecutive pair of entities in (e, e′) ∈ t. • TITLE CO-OCCURRENCE FREQUENCY feature computes for every pair of consecutive entities (e, e′) ∈t, the number of times that e′ appears as a link in the Wikipedia page for e, and vice versa (similar to (Cucerzan, 2007). It adds these counts up to get a single number for t. 
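Two of the entity-entity features above, sketched under the assumption that each KB entry carries `outlinks` and `categories` sets and that category-pair PMI scores have been pre-computed from consecutive entity pairs in the training data. How the paper aggregates PMI when an entity has several categories is not fully spelled out; summing over all category pairs, as done here, is one plausible reading.

```python
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def outlink_overlap(tuple_entities, kb):
    """OUTLINK OVERLAP: summed Jaccard of outlink sets of consecutive entities in t."""
    return sum(jaccard(kb[e1]["outlinks"], kb[e2]["outlinks"])
               for e1, e2 in zip(tuple_entities, tuple_entities[1:]))

def entity_category_pmi(tuple_entities, kb, pair_pmi):
    """ENTITY CATEGORY PMI: summed PMI of category pairs of consecutive entities;
    `pair_pmi` maps a (category, category) pair to a PMI score estimated from
    consecutive entity pairs in the training data."""
    score = 0.0
    for e1, e2 in zip(tuple_entities, tuple_entities[1:]):
        for c1 in kb[e1]["categories"]:
            for c2 in kb[e2]["categories"]:
                score += pair_pmi.get((c1, c2), 0.0)
    return score
```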
3 Experiments We evaluate LIEL’s capability by testing against several state-of-the-art EL systems on English, then apply the English-trained system to Spanish and Chinese EL tasks to test its language transcendability. 3.1 Datasets English: The 3 benchmark datasets for the English EL task are: i) ACE (Ratinov et al., 2011), ii) MSNBC (Cucerzan, 2007) and iii) TAC 2014 (Ji et 2259 Name |M| In KB Not in KB ACE 257 100% 0 MSNBC 747 90% 10% TAC En14 5234 54% 46% TAC Es13 2117 62% 38% TAC Es14 2057 72% 28% TAC Zh13 2155 57% 43% WikiTrain 158715 100% 0% Table 1: Data statistics: number of mention queries, % of mention queries that have their referents present in the Wikipedia/KB, and % of mention queries that have no referents in Wikipedia/KB as per our datasets. En=English, Es=Spanish and Zh=Chinese for the evaluation data for TAC for the years 2013 and 2014. al., 2014)9, which contain data from diverse genre like discussion forum, blogs and news. Table 1 provides key statistics on these datasets. In the TAC10 evaluation setting, EL systems are given as input a document and a query mention with its offsets in the input document. As the output, systems need to predict the KB id of the input query mention if it exists in the KB or NIL if it does not. Further, they need to cluster the mentions which contain the same NIL ids across queries. The training dataset, WikiTrain, consists of 10,000 random Wikipedia pages, where all of the phrases that link to other Wikipedia articles are treated as mentions, and the target Wikipedia page is the label. The dataset was made available by Ratinov et al. and (Sil and Yates, 2013), added Freebase to Wikipedia mappings resulting in 158,715 labeled mentions with an average of 12.62 candidates per mention. The total number of unique mentions in the data set is 77,230 with a total of 974,381 candidate entities and 643,810 unique candidate entities. The Wikipedia dump that we used as our knowledge-base for English, Spanish and Chinese is the April 2014 dump. The TAC dataset involves the TAC KB which is a dump of May 2008 of English Wikipedia. LIEL links entities to the Wikipedia 2014 dump and uses the redirect information to link back to the TAC KB. Spanish: We evaluate LIEL on both the 2013 9This is the traditional Entity Linking (EL) task and not Entity Discovery and Linking (EDL), since we are comparing the linking capability in this paper. 10For more details on TAC see http://nlp.cs.rpi.edu/kbp/2014/index.html and 2014 benchmark datasets of the TAC Spanish evaluation. Chinese: We test LIEL on the TAC 2013 Chinese dataset. 3.2 Evaluation Metric We follow standard measures used in the literature for the entity linking task. To evaluate EL accuracy on ACE and MSNBC, we report on a Bag-of-Titles (BOT) F1 evaluation as introduced by (Milne and Witten, 2008; Ratinov et al., 2011). In BOT-F1, we compare the set of Wikipedia titles output for a document with the gold set of titles for that document (ignoring duplicates), and compute standard precision, recall, and F1 measures. On the TAC dataset, we use standard metrics B3+ variant of precision, recall and F1. On these datasets, the B3 + F1 metric includes the clustering score for the NIL entities, and hence systems that only perform binary NIL prediction would be heavily penalized11. 3.3 Comparison with the State-of-the-art To follow the guidelines for the TAC NIST evaluation, we anonymize participant system names as System 1 through 9. 
Interested readers may look at their system description and scores in (Ji et al., 2014; Fahrni et al., 2013; Miao et al., 2013; Mayfield, 2013; Merhav et al., 2013). Out of these systems, System 1 and System 7 obtained the top score in Spanish and Chinese EL evaluation at TAC 2013 and hence can be treated as the current state-of-the-art for the respective EL tasks. We also compare LIEL with some traditional “wikifiers” like MW08 (Milne and Witten, 2008) and UIUC (Cheng and Roth, 2013) and also NEREL (Sil and Yates, 2013) which is the system which LIEL resembles the most. 3.4 Parameter Settings LIEL has two tuning parameters: σ, the regularization weight; and the number of candidate links per mention we select from the Wikipedia dump. We set the value of σ by trying five possible values in the range [0.1, 10] on held-out data (the TAC 2009 data). We found σ = 0.5 to work best for our experiments. We chose to select a maximum of 40 candidate entities from Wikipedia for each candidate mention (or fewer if the dump had fewer than 40 links with nonzero probability). 11For more details on the scoring metric used for TAC EL see: http://nlp.cs.rpi.edu/kbp/2014/scoring.html 2260 0.728 0.853 0.859 0.862 0.685 0.812 0.846 0.850 0.500 0.550 0.600 0.650 0.700 0.750 0.800 0.850 0.900 MW08 UIUC NEREL LIEL BOT-F1 BOT-F1: LIEL vs. The State-of-the-art ACE MSNBC Figure 1: LIEL outperforms all its competitors on both ACE and MSNBC. 3.5 Results English: Figure 1 compares LIEL with previously reported results by MW08, UIUC and NEREL on the ACE and MSNBC datasets in (Cheng and Roth, 2013; Sil and Yates, 2013). LIEL achieves an F1 score of 86.2 on ACE and 85.0 on MSNBC, clearly outperforming the others e.g. 3.8% absolute value higher than UIUC on MSNBC. We believe that LIEL’s strong model comprising relational information (coherence features from large corpus statistics), textual and title lets it outperform UIUC and MW08 where the former uses relational information and the latter a naive version of LIEL’s coherence features. Comparison with NEREL is slightly unfair (though we outperform them marginally) since they use both Freebase and Wikipedia as their KB whereas we are comparing with systems which only use Wikipedia as their KB. To test the robustness of LIEL on a diverse genre of data, we also compare it with some of the other state-of-the-art systems on the latest benchmark TAC 2014 dataset. Figure 2 shows our results when compared with the top systems in the evaluation. Encouragingly, LIEL’s performance is tied with the top performer, System 6, and outperforms all the other top participants from this challenging annual evaluation. Note that LIEL obtains 0.13 points more than System 1, the only other multi-lingual EL system and, in that sense, LIEL’s major competitor. Several other factors are evident from the results: System 1 and 2 are statistically tied and so are System 3, 4 and 5. We also show the bootstrapped percentile confidence intervals (Singh and Xie, 2008) for LIEL which are [0.813, 0.841]: (we do not have access to the other competing systems). 0.69 0.70 0.76 0.76 0.77 0.82 0.82 0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 Axis Title English EL Test: LIEL vs. other TAC EL systems Figure 2: Comparison of several state-of-the-art English EL systems along with LIEL on the latest TAC 2014 dataset and LIEL obtains the best score. * indicates systems that perform multilingual EL. 
0.55 0.65 0.71 0.74 0.66 0.80 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 System9 System7 System1 LIEL asdasda Spanish EL Test: LIEL vs. other TAC EL Systems 2013 2014 Figure 3: System performance on the TAC 2013 and 2014 Spanish datasets are shown. LIEL outperforms all the systems in terms of overall F1 score. 3.5.1 Foreign Language Experiments Note that LIEL was trained only on the English Wikitrain dataset (Section 3.1), and then applied, unchanged, to all the evaluation datasets across languages and domains described in Section 3.1. Hence, it is the same instance of the model for all languages. As we will observe, this one system consistently outperforms the state of the art, even though it is using exactly the same trained model across the datasets. We consider this to be the take-away message of this paper. Spanish: LIEL obtains a B3 + F1 score of 0.736 on the TAC 2013 dataset and clearly outperforms the SOTA, System 1, which obtains 0.709 as shown in Figure 3 and considerably higher than the other participating systems. We could only obtain the results for Systems 9 and 7 on 2013. On the 2014 evaluation dataset, LIEL obtains a higher gain of 0.136 points (precision of 0.814 and recall of 0.787) over its major competitor System 1, showing the power of its language-independent model. 2261 0.60 0.60 0.63 0.63 0.5 0.52 0.54 0.56 0.58 0.6 0.62 0.64 System 1 LIEL System 8 System 7 Axis Title Chinese EL Test: LIEL vs. other TAC EL Systems Figure 4: LIEL achieves competitive performance in Chinese EL further proving its robustness to multilingual data. Chinese: Figure 4 shows the results of LIEL’s performance on the Chinese benchmark dataset compared to the state-of-the-art. Systems 7 and 8 obtains almost similar scores. We observe that LIEL is tied with System 1 and achieves competitive performance compared to Systems 7 and 8 (note that LIEL has a confidence interval of [0.597, 0.632]) which requires labeled Chinese TAC data to be trained on and the same model does not work for other languages. Emphasizing again: LIEL is trained only once, on English, and tested on Chinese unchanged. 3.5.2 Error Analysis While we see LIEL’s strong multi-lingual empirical results, it is important to note some of the areas which confuses the system. Firstly, a major source of error which affects LIEL’s performance is due to coreference resolution e.g. from the text “Beltran Leyva, also known as “The Bearded One,” is ...”, TAC’s mention query asks the systems to provide the disambiguation for The Bearded One. LIEL predicts that the The Bearded One refers to the entity Richard Branson, which is the most common entity in Wikipedia that refers to that nickname (based on our dump), while, clearly, the correct entity should have been Beltran Levya. We believe that this type of an error can be handled by performing joint EL and coreference resolution, which is a promising future research area for LIEL. Contextual information can also hurt system performance e.g. from the text, “.. dijo Alex S´anchez , analista..”, LIEL predicts the Wikipedia title Alex S´anchez (outfielder) for the mention Alex S´anchez since the document talks about sports and player names. The query mention was actually referring to a journalist, not in the KB, and hence a NIL. Handling sparse entities, similar to this, are also an important future direction. 4 Related Work Entity linking has been introduced and actively developed under the NIST-organized Text Analysis Conference, specifically the Knowledge Base Population track. 
The top performing English EL system in the TAC evaluation has been the MS MLI system (Cucerzan and Sil, 2013), which has obtained the top score in TAC evaluation in the past 4 years (2011 through 2014): the system links all mentions in a document simultaneously, with the constraint that their resolved links should be globally consistent on the category level as much as possible. Since global disambiguation can be expensive, (Milne and Witten, 2008) uses the set of unambiguous mentions in the text surrounding a mention to define the mention’s context, and uses the Normalized Google Distance (Cilibrasi and Vitanyi, 2007) to compute the similarity between this context and the candidate Wikipedia entry. The UIUC system, (Cheng and Roth, 2013), another state-of-the-art EL system, which is an extension of (Ratinov et al., 2011), adds relational inference for wikification. NEREL (Sil and Yates, 2013) is a powerful joint entity extraction and linking system. However, by construction their model is not language-independent due to the heavy reliance on type systems of structured knowledgebases like Freebase. It also makes use of lexical features from Wikipedia as their model performs joint entity extraction and disambiguation. Some of the other systems which use a graph based algorithm such as partitioning are LCC, NYU (Ji et al., 2014) and HITS (Fahrni et al., 2013) which obtained competitive score in the TAC evaluations. Among all these systems, only the HITS system has ventured beyond English and has obtained the top score in Spanish EL evaluation at TAC 2013. It is the only multilingual EL system in the literature which performs reliably well across a series of languages and benchmark datasets. Recently, (Wang et al., 2015) show a new domain and language-independent EL system but they make use of translation tables for non-English (Chinese) EL; thereby not making the system entirely language-independent. Empirically their performance comes close to System 1 which LIEL outperforms. The BASIS system (Merhav et al., 2262 2013), is the state-of-the-art for Chinese EL as it obtained the top score in TAC 2013. The FUJITSU system (Miao et al., 2013) obtained similar scores. It is worth noting that these systems, unlike LIEL, are heavily language dependent, e.g. performing lexicon specific information extraction, using inter-language links to map between the languages or training using labeled Chinese data. In more specialized domains, Dai et al. (2011) employed a Markov logic network for building an EL system with good results in a bio-medical domain; it would be interesting to find out how their techniques might extended to other languages/corpora. Phan et al. (2008) utilize topic models derived from Wikipedia to help classify short text segment, while Guo et al. (2013) investigate methods for disambiguating entities in tweets. Neither of these methods do show how to transfer the EL system developed for short texts to different languages, if at all. The large majority of entity linking research outside of TAC involves a closely related task wikification (Bunescu and Pasca, 2006; Cucerzan, 2007; Ratinov et al., 2011; Guo et al., 2013), and has been mainly performed on English datasets, for obvious reasons (data, tools availability). These systems usually achieve high accuracy on the language they are trained on. Multilingual studies, e.g. 
(McNamee et al., 2011), use a large number of pipelines and complex statistical machine translation tools to first translate the original document contexts into English equivalents and transform the cross-lingual EL task into a monolingual EL one. The performance of the entity linking system is highly dependent on the existence and potential of the statistical machine translation system in the given pair of languages. 5 Conclusion In this paper we discussed a new strategy for multilingual entity linking that, once trained on one language source with accompanying knowledge base, performs without adaptation in multiple target languages. Our proposed system, LIEL is trained on the English Wikipedia corpus, after building its own knowledge-base by exploiting the rich information present in Wikipedia. One of the main characteristics of the system is that it makes effective use of features that are built exclusively around computing similarity between the text/context of the mention and the document text of the candidate entity, allowing it to transcend language and perform inference on a completely new language or domain, without change or adaptation. The system displays a robust and strong empirical evidence by not only outperforming all stateof-the-art English EL systems, but also achieving very good performance on multiple Spanish and Chinese entity linking benchmark datasets, and it does so without the need to switch, retrain, or even translate, a major differentiating factor from the existing multi-lingual EL systems out there. Acknowledgments We would like to thank the anonymous reviewers for their suggestions. We also thank Salim Roukos, Georgiana Dinu and Vittorio Castelli for their helpful comments. This work was funded under DARPA HR0011-12-C-0015 (BOLT). The views and findings in this paper are those of the authors and are not endorsed by DARPA. References R. Bunescu and M. Pasca. 2006. Using encyclopedic knowledge for named entity disambiguation. In EACL. Ziqiang Cao, Sujian Li, and Heng Ji. 2014. Joint learning of chinese words, terms and keywords. In EMNLP. X. Cheng and D. Roth. 2013. Relational inference for wikification. In Proc. of the Conference on Empirical Methods in Natural Language Processing (EMNLP). R.L. Cilibrasi and P.M.B. Vitanyi. 2007. The google similarity distance. IEEE Transactions on Knowledge and Data Engineering, 19(3):370–383. Silviu Cucerzan and Avirup Sil. 2013. The MSR Systems for Entity Linking and Temporal Slot Filling at TAC 2013. In Text Analysis Conference. Silviu Cucerzan. 2007. Large-scale named entity disambiguation based on wikipedia data. In EMNLPCoNLL, pages 708–716. Hong-Jie Dai, Richard Tzong-Han Tsai, Wen-Lian Hsu, et al. 2011. Entity disambiguation using a markov-logic network. In IJCNLP. Angela Fahrni, Benjamin Heinzerling, Thierry G¨ockel, and Michael Strube. 2013. Hits monolingual and cross-lingual entity linking system at TAC 2013. In Text Analysis Conference. 2263 Stephen Guo, Ming-Wei Chang, and Emre Kıcıman. 2013. To link or not to link? a study on end-to-end tweet entity linking. In NAACL. Heng Ji, HT Dang, J Nothman, and B Hachey. 2014. Overview of tac-kbp2014 entity discovery and linking tasks. In Proc. Text Analysis Conference (TAC2014). D.C. Liu and J. Nocedal. 1989. On the limited memory method for large scale optimization. Mathematical Programming B, 45(3):503–528. James Mayfield. 2013. Overview of the kbp 2013 entity linking track. 
Paul McNamee, James Mayfield, Douglas W Oard, Tan Xu, Ke Wu, Veselin Stoyanov, and David Doermann. 2011. Cross-language entity linking in maryland during a hurricane. In Text Analysis Conference. Yuval Merhav, Joel Barry, James Clarke, David Murgatroyd, and One Alewife Center. 2013. Basis technology at tac 2013 entity linking. Qingliang Miao, Ruiyu Fang, Yao Meng, and Shu Zhang. 2013. Frdc’s cross-lingual entity linking system at tac 2013. David Milne and Ian H. Witten. 2008. Learning to link with wikipedia. In CIKM. Xuan-Hieu Phan, Le-Minh Nguyen, and Susumu Horiguchi. 2008. Learning to classify short and sparse text & web with hidden topics from largescale data collections. In Proceedings of the 17th international conference on World Wide Web. L. Ratinov, D. Roth, D. Downey, and M. Anderson. 2011. Local and global algorithms for disambiguation to wikipedia. In Proc. of the Annual Meeting of the Association of Computational Linguistics (ACL). Xing Shi, Kevin Knight, and Heng Ji. 2014. How to speak a language without knowing it. In ACL. Avirup Sil and Alexander Yates. 2013. Re-ranking for Joint Named-Entity Recognition and Linking. In CIKM. Avirup Sil, Ernest Cronin, Penghai Nie, Yinfei Yang, Ana-Maria Popescu, and Alexander Yates. 2012. Linking Named Entities to Any Database. In EMNLP-CoNLL. Kesar Singh and Minge Xie. 2008. Bootstrap: a statistical method. P. D. Turney. 2002. Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews. In Procs. of ACL, pages 417–424. Han Wang, Jin Guang Zheng, Xiaogang Ma, Peter Fox, and Heng Ji. 2015. Language and domain independent entity linking with quantified collective validation. In Proc. Conference on Empirical Methods in Natural Language Processing (EMNLP2015). 2264
2016
213
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2265–2275, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics On Approximately Searching for Similar Word Embeddings Kohei Sugawara Hayato Kobayashi Masajiro Iwasaki Yahoo Japan Corporation 1-3 Kioicho, Chiyoda-ku, Tokyo 102-8282, Japan {ksugawar, hakobaya, miwasaki}@yahoo-corp.jp Abstract We discuss an approximate similarity search for word embeddings, which is an operation to approximately find embeddings close to a given vector. We compared several metric-based search algorithms with hash-, tree-, and graphbased indexing from different aspects. Our experimental results showed that a graph-based indexing exhibits robust performance and additionally provided useful information, e.g., vector normalization achieves an efficient search with cosine similarity. 1 Introduction An embedding or distributed representation of a word is a real-valued vector that represents its “meaning” on the basis of distributional semantics, where the meaning of a word is determined by its context or surrounding words. For a given meaning space, searching for similar embeddings is one of the most basic operations in natural language processing and can be applied to various applications, e.g., extracting synonyms, inferring the meanings of polysemous words, aligning words in two sentences in different languages, solving analogical reasoning questions, and searching for documents related to a query. In this paper, we address how to quickly and accurately find similar embeddings in a continuous space for such applications. This is important from a practical standpoint, e.g., when we want to develop a real-time query expansion system on a search engine on the basis of an embedding similarity. A key difference from the existing work is that embeddings are not high-dimensional sparse (traditional count) vectors, but (relatively) low-dimensional dense vectors. We therefore need to use approximate search methods instead of inverted-index-based methods (Zobel and Moffat, 2006). Three types of indexing are generally used in approximate similarity search: hash-, tree-, and graph-based indexing. Hash-based indexing is the most common in natural language processing due to its simplicity, while tree/graph-based indexing is preferred in image processing because of its performance. We compare several algorithms with these three indexing types and clarify which algorithm is most effective for similarity search for word embeddings from different aspects. To the best of our knowledge, no other study has compared approximate similarity search methods focusing on neural word embeddings. 
Although one study has compared similarity search methods for (count-based) vectors on the basis of distributional semantics (Gorman and Curran, 2006), our study advances this topic and makes the following contributions: (a) we focus on neural word embeddings learned by a recently developed skip-gram model (Mikolov, 2013), (b) show that a graph-based search method clearly performs better than the best one reported in the Gorman and Curran study from different aspects, and (c) report the useful facts that normalizing vectors can achieve an effective search with cosine similarity, the search performance is more strongly related to a learning model of embeddings than its training data, the distribution shape of embeddings is a key factor relating to the search performance, and the final performance of a target application can be far different from the search performance. We believe that our timely results can lead to the practical use of embeddings, especially for real-time applications in the real world. The rest of the paper is organized as follows. In Section 2, we briefly survey hash-, tree-, and graph-based indexing methods for achieving similarity search in a metric space. In Section 3, we 2265 compare several similarity search algorithms from different aspects and discuss the results. Finally, Section 4 concludes the paper. 2 Similarity Search We briefly survey similarity search algorithms for real-valued vectors, where we focus on approximate algorithms that can deal with large scale data. In fact, word embeddings are usually trained on a very large corpus. For example, well known pretrained word embeddings (Mikolov, 2013) were trained on the Google News dataset and consist of about 1,000 billion words with 300-dimensional real-valued vectors. Search tasks on large-scale real-valued vectors have been more actively studied in the image processing field than in the natural language processing field, since such tasks naturally correspond to searching for similar images with their feature vectors. Many similarity search algorithms have been developed and are classified roughly into three indexing types: hash-, tree-, and graph-based. In natural language processing, hash-based indexing seems to be preferred because of its simplicity and ease of treating both sparse and dense vectors, while in image processing, tree- and graphbased indexing are preferred because of their performance and flexibility in adjusting parameters. We explain these three indexing types in more detail below. 2.1 Hash-based Indexing Hash-based indexing is a method to reduce the dimensionality of high-dimensional spaces by using some hash functions so that we can efficiently search in the reduced space. Locality-sensitive hashing (LSH) (Gionis et al., 1999) is a widely used hash-based indexing algorithm, which maps similar vectors to the same hash values with high probability by using multiple hash functions. There are many hash-based indexing algorithms that extend LSH for different metric spaces. Datar et al. (2004) applied the LSH scheme to Lp spaces, or Lebesgue spaces, and experimentally showed that it outperformed the existing methods for the case of p = 2. Weiss et al. (2009) showed that the problem of finding the best hash function is closely related to the problem of graph partitioning and proposed an efficient approximate algorithm by reducing the problem to calculating thresholded eigenvectors of the graph Laplacian. 
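To make the hash-based indexing idea concrete, the sketch below implements sign-random-projection hashing for cosine similarity in plain NumPy: similar vectors tend to receive the same bit pattern, and candidates collected from several hash tables are re-ranked exactly. This is one common LSH family chosen for readability; it is not meant to reproduce the E2LSH package used later in this paper, and the bit counts, table counts, and re-ranking step are assumptions.

```python
# Minimal sketch of sign-random-projection LSH for cosine similarity (NumPy only).
import numpy as np
from collections import defaultdict

class SignLSH:
    def __init__(self, dim, n_bits=16, n_tables=8, seed=0):
        rng = np.random.RandomState(seed)
        # One random hyperplane per bit, one projection matrix per table.
        self.planes = [rng.randn(n_bits, dim) for _ in range(n_tables)]
        self.tables = [defaultdict(list) for _ in range(n_tables)]

    def _keys(self, x):
        # The hash key is the pattern of signs of the projections.
        return [tuple((P @ x > 0).astype(np.int8)) for P in self.planes]

    def index(self, vectors):
        self.vectors = vectors
        for i, x in enumerate(vectors):
            for table, key in zip(self.tables, self._keys(x)):
                table[key].append(i)

    def query(self, q, k=10):
        # Collect candidates from all tables, then re-rank them exactly by cosine.
        cand = set()
        for table, key in zip(self.tables, self._keys(q)):
            cand.update(table.get(key, []))
        cand = np.array(sorted(cand))
        if cand.size == 0:
            return cand
        sims = self.vectors[cand] @ q / (
            np.linalg.norm(self.vectors[cand], axis=1) * np.linalg.norm(q))
        return cand[np.argsort(-sims)[:k]]

# Usage: index 10,000 random 200-dimensional vectors and query one of them.
X = np.random.RandomState(1).randn(10000, 200)
lsh = SignLSH(dim=200)
lsh.index(X)
print(lsh.query(X[0], k=5))
```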
In this paper, we focus on approximation of knearest neighbors and are not concerned about the hash-based indexing algorithms, since they are basically designed for finding (not k-nearest) neighbors within a fixed radius of a given point, i.e., a so-called radius search. 2.2 Tree-based Indexing Tree-based indexing is used to recursively divide the entire search space into hierarchical subspaces, where the subspaces are not necessarily disjointed, so that the search space forms a tree structure. Given a search query, we can efficiently find the subspaces including the query by descending from the root note to the leaf nodes in the tree structure and then obtain its search results by scanning only neighbors belonging to the subspaces. Note that in contrast to the hash-based indexing, we can easily extend the size of search results or the number of nearest neighbors by ascending to the parent subspaces. Arya et al. (1998) proposed the balanced boxdecomposition tree (BBD-tree) as a variant of the kd-tree (Bentley, 1975) for approximately searching for similar vectors on the basis of Minkowski metrics, i.e., in Lp spaces when p ≥1. Fast library for approximate nearest neighbors (FLANN) (Muja and Lowe, 2008) is an open-source library for approximate similarity search. FLANN automatically determines the optimal one from three indices: a randomized kd-tree where multiple kd-trees are searched in parallel (Silpa-Anan and Hartley, 2008), a k-means tree that is constructed by hierarchical k-means partitioning (Nister and Stewenius, 2006), and a mix of both kdtree and k-means tree. Spatial approximation sample hierarchy (SASH) (Houle and Sakuma, 2005) achieves approximate search with multiple hierarchical structures created by random sampling. According to the results in the previous study (Gorman and Curran, 2006), SASH performed the best for vectors on the basis of distributional semantics, and its performance surpassed that of LSH. 2.3 Graph-based Indexing Graph-based indexing is a method to approximately find nearest neighbors by using a neighborhood graph, where each node is connected to its nearest neighbors calculated on the basis of a certain metric. A simple search procedure for a given query is achieved as follows. An arbitrary node in the graph is selected as a candidate for the 2266 true nearest neighbor. In the process of checking the nearest neighbor of the candidate, if the query is closer to the neighbor than the candidate, the candidate is replaced by the neighbor. Otherwise, the search procedure terminates by returning the current candidate as the nearest neighbor of the query. This procedure can be regarded as a bestfirst search, and the result is an approximation of that of an exact search. Sebastian and Kimia (2002) first used a knearest neighbor graph (KNNG) as a search index, and Hajebi et al. (2011) improved the search performance by performing hill-climbing starting from a randomly sampled node of a KNNG. Their experimental results with image features, i.e., scale invariant feature transform (SIFT), showed that a similarity search based on a KNNG outperforms randomized kd-trees and LSH. Although the brute force construction cost of a KNNG drastically increases as the number of nodes increases because the construction procedure needs to calculate the nearest neighbors for each node, we can efficiently approximate a KNNG (so-called ANNG) by incrementally constructing an ANNG with approximate k-nearest neighbors calculated on a partially constructed ANNG. 
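The search procedure described above (start from an arbitrary node, repeatedly move to the neighbor closest to the query, and stop when no neighbor improves) can be written down directly. The sketch below builds an exact k-nearest-neighbor graph by brute force, whereas real systems such as NGT construct an approximate graph (ANNG) incrementally, and then runs the greedy walk to return a single approximate nearest neighbor. Function names and data sizes are illustrative only.

```python
import numpy as np

def build_knn_graph(X, k=10):
    # Squared Euclidean distances via the ||x||^2 + ||y||^2 - 2 x.y identity.
    sq = (X ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    np.fill_diagonal(d2, np.inf)
    return np.argsort(d2, axis=1)[:, :k]  # row i holds node i's k nearest neighbors

def greedy_search(X, graph, query, rng):
    current = int(rng.integers(len(X)))
    best = np.linalg.norm(X[current] - query)
    while True:
        neigh = graph[current]
        dists = np.linalg.norm(X[neigh] - query, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < best:
            # The closest neighbor improves on the candidate: move to it.
            current, best = int(neigh[j]), float(dists[j])
        else:
            # No neighbor is closer: return the approximate nearest neighbor.
            return current, best

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 50))
graph = build_knn_graph(X, k=10)
print(greedy_search(X, graph, rng.standard_normal(50), rng))
```

Because the candidate distance strictly decreases at every move, the walk always terminates; the result is an approximation of the exact nearest neighbor, as noted above.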
Neighborhood graph and tree for indexing (NGT) (Iwasaki, 2015) is a library released from Yahoo! JAPAN that achieves a similarity search on an ANNG; it has already been applied to several services. 3 Experiments In this paper, we focused on the pure similarity search task of word embeddings rather than complex application tasks for avoiding extraneous factors, since many practical tasks can be formulated as k-nearest neighbor search. For example, assuming search engines, we can formalize query expansion, term deletion, and misspelling correction as finding frequent similar words, infrequent similar words, and similar words with different spellings, respectively. We chose FLANN from the tree-based methods and NGT from the graph-based methods since they are expected to be suitable for practical use. FLANN and NGT are compared with SASH, which was the best method reported in a previous study (Gorman and Curran, 2006). In addition, we consider LSH only for confirmation, since it is widely used in natural language processing, although several studies have reported that LSH performed worse than SASH and FLANN. We used the E2LSH package (Andoni, 2004), which includes an implementation of a practical LSH algorithm. 3.1 Problem Definition The purpose of an approximate similarity search is to quickly and accurately find vectors close to a given vector. We formulate this task as a problem to find k-nearest neighbors as follows. Let (X, d) be a metric space. We denote by Nk(x, d) the set of k-nearest neighbors of a vector x ∈X with respect to a metric d. Formally, the following condition holds: ∀y ∈Nk(x, d), ∀z ∈X \ Nk(x, d), d(x, y) ≤d(x, z). Our goal with this problem is to approximate Nk(x, d) for a given vector x. We calculate the precision of an approximate search method A using the so-called precision at k or P@k, which is a widely used evaluation measure in information retrieval. The precision at k of A is defined as |Nk(x, d) ∩˜Nk(x, A)|/k, where ˜Nk(x, A) is the set of approximate knearest neighbors of a vector x calculated by A. Since we use the same size k for an exact set Nk(x, d) and its approximate set ˜Nk(x, A), there is no trade-off between precision and recall. 3.2 Basic Settings This section describes the basic settings in our experiments, where we changed a specific setting (e.g., number of dimensions) in order to evaluate the performance in each experiment. All the experiments were conducted on machines with two Xeon L5630 2.13-GHz processors and 24 GB of main memory running Linux operating systems. We prepared 200-dimensional word embeddings learned from English Wikipedia in February 2015, which contains about 3 billion sentences spanning about 2 million words and 35 billion tokens, after preprocessing with the widely used script (Mahoney, 2011), which was also used for the word2vec demo (Mikolov, 2013). We used the skip-gram learning model with hierarchical softmax training in the word2vec tool, where the window size is 5, and the down-sampling parameter is 0.001. We constructed and evaluated the index by dividing the learned embeddings into 2 million embeddings for training and 1,000 embeddings for testing by random sampling, after normalizing them so that the norm of each embedding was one. We built the search index of each search method 2267 for the training set on the basis of the Euclidean distance. The Euclidean distance of normalized vectors is closely related to the cosine similarity, as described later. 
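The evaluation protocol defined in Section 3.1 and used throughout these experiments can be sketched as follows: compute exact top-k neighbors by brute force for a small set of normalized test embeddings, then score an approximate method with precision at k. The sizes below are toy values rather than the 2M/1,000 split used in the paper, and `approx_result` stands in for the output of an index such as SASH, FLANN, or NGT.

```python
import numpy as np

def exact_topk(train, queries, k=10):
    # With L2-normalized vectors, ranking by Euclidean distance is equivalent
    # to ranking by cosine similarity, so a dot product is enough.
    sims = queries @ train.T
    return np.argsort(-sims, axis=1)[:, :k]

def mean_precision_at_k(gold, approx, k=10):
    scores = [len(set(g[:k]) & set(a[:k])) / k for g, a in zip(gold, approx)]
    return float(np.mean(scores))

rng = np.random.default_rng(0)
train = rng.standard_normal((50000, 200))
train /= np.linalg.norm(train, axis=1, keepdims=True)   # normalize to unit norm
test = rng.standard_normal((100, 200))
test /= np.linalg.norm(test, axis=1, keepdims=True)

gold = exact_topk(train, test, k=10)
approx_result = gold.copy()  # a perfect "approximate" result, for illustration
print(mean_precision_at_k(gold, approx_result, k=10))   # -> 1.0
```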
We prepared the top-10 (exact) nearest neighbors in the training set corresponding to each embedding in the testing set and plotted the average precision at 10 over the test set versus its computation time (log-scale), by changing the parameter for precision of each method as described below. Note that it is difficult to compare different algorithms in terms of either precision or computation time, since there is a trade-off between precision and computation time in approximate search. We set the parameters of the three search methods SASH, FLANN, and NGT as follows. We determined stable parameters for indexing using grid search and changed an appropriate parameter that affected the accuracy when evaluating each method. For confirmation, we added LSH in the first experiment but did not use it in the other experiments since it clearly performs worse than the other methods. SASH We set the maximum number (p) of parents per node to 6 for indexing and changed the scale factor for searching1. FLANN We set the target precision to 0.8, the build weight to 0, and the sample fraction to 0.01 for indexing, and we changed the number of features to be checked in the search2. The k-means index was always selected as the optimal index in our experiments. NGT We set the edge number (E) to 10 for indexing and changed the search range (e) for searching. LSH We set the success probability (1 −δ) to 0.9 and changed the radius (R) for indexing. Note that there are no parameters for searching since LSH was developed to reduce dimensionality, and we need to construct multiple indices for adjusting its accuracy. 1The scale factor is implemented as “scaleFactor” in the source code (Houle, 2005), although there is no description in the original paper (Houle and Sakuma, 2005). 2Since FLANN is a library integrating several algorithms, the parameters can be described only by variables in the source code (Muja and Lowe, 2008). The target precision, build weight, and sample fraction for auto-tuned indexing are implemented as “target precision”, “build weight”, and “sample fraction” in the structure “AutotunedIndexParams”, respectively. The number of features is implemented as “checks” in the structure “SearchParams”. 3.3 Results In this section we report the results of the performance comparison of SASH, FLANN, and NGT from the following different aspects: the distance function for indexing, the number of dimensions of embeddings, the number of neighbors to be evaluated, the size of a training set for indexing, the learning model/data used for embeddings, and the target task to be solved. 3.3.1 Distance Function for Indexing We evaluated the performance by changing the distance function for indexing. In natural language processing, cosine similarity cos(x, y) = x·y ∥x∥∥y∥ of two vectors x and y is widely used from a practical perspective, and cosine distance dcos(x, y) = 1 −cos(x, y) as its complement seems to be appropriate for the distance function for indexing. Unfortunately, however, the cosine distance is not strictly metric but semimetric since the triangle inequality is not satisfied. Thus, we cannot directly use the cosine distance because the triangle inequality is a key element for efficient indexing in a metric space. In this paper, we use two alternatives: normalized and angular distances. The former is the Euclidean distance after normalizing vectors, i.e., dnorm(x, y) = deuc( x ∥x∥, y ∥y∥), where deuc(x, y) = ∥x −y∥. 
The set of k-nearest neighbors by dnorm is theoretically the same as that by dcos, i.e., Nk(x, dnorm) = Nk(x, dcos), since dnorm(x, y)² = ∥x∥²/∥x∥² + ∥y∥²/∥y∥² − 2 (x·y)/(∥x∥∥y∥) = 2 − 2 cos(x, y) = 2 dcos(x, y). The latter is the angle between two vectors, i.e., darc(x, y) = arccos(cos(x, y)). The set of k-nearest neighbors by darc is also the same as that by dcos, i.e., Nk(x, darc) = Nk(x, dcos), since arccos is a monotone decreasing function. Note that darc is not strictly metric, but it satisfies the triangle inequality, i.e., it is a pseudometric.

Figure 1 plots the performance of SASH, FLANN, and NGT using the normalized, angular, and ordinary Euclidean distances. Higher precision at the same computation time (a curve toward the upper left) indicates a better result. The graphs show that NGT performed the best for the normalized distance (a), while SASH performed the best for the angular distance (b). This large difference is caused by the long computation time of darc. Because we only want the maximum performance in graphs (a) and (b) for each method, we used only the normalized distance in the later experiments, since the performance of SASH in graph (a) is almost the same as that in (b).

Figure 1: Precision versus computation time of SASH, FLANN, and NGT using the normalized (a), angular (b), and Euclidean (c) distances.

Figure 2: 2D visualization of normalized (a) and un-normalized (b) embeddings by multi-dimensional scaling.

Table 1: Indexing time (min) of SASH, FLANN, NGT, and LSH using the normalized, angular, and Euclidean distance functions.
Normalized: SASH 74.6, FLANN 56.5, NGT 33.9, LSH 44.6
Angular: SASH 252.4, FLANN 654.9, NGT 155.4
Euclidean: SASH 58.1, FLANN 20.2, NGT 83.0

For confirmation, we added the result of LSH in graph (a) only. The graph clearly indicates that the performance of LSH is very low even for neural word embeddings, which supports the results of the previous study (Gorman and Curran, 2006), and therefore we did not use LSH in the later experiments. Graph (c) shows that the performance using the Euclidean distance has a similar tendency to that using the normalized distance, but its computation time is much worse. The reason for this is that it is essentially difficult to search for distant vectors in a metric-based index, and normalization can reduce the number of distant embeddings by aligning them on a hypersphere. In fact, we can confirm that the number of distant embeddings was reduced after normalization according to Figure 2, which visualizes 1,000 embeddings before/after normalization in a two-dimensional space by multi-dimensional scaling (MDS) (Borg and Groenen, 2005), where the radius of each circle represents the search time of the corresponding embedding calculated by NGT. MDS is a dimensionality reduction method that places each point in a low-dimensional space such that the distances between any two points are preserved as much as possible. Note that the scale of graph (b) is about five times larger than that of graph (a).
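As a quick numerical sanity check of the identity used above, dnorm(x, y)² = 2 dcos(x, y), the following few lines verify it on random vectors. Pure NumPy; the dimensionality and seed are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.standard_normal(200), rng.standard_normal(200)

xn, yn = x / np.linalg.norm(x), y / np.linalg.norm(y)
d_norm_sq = np.sum((xn - yn) ** 2)   # squared Euclidean distance of normalized vectors
d_cos = 1.0 - xn @ yn                # cosine distance

assert np.isclose(d_norm_sq, 2.0 * d_cos)
print(d_norm_sq, 2.0 * d_cos)
```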
This also suggests that the normalized distance should be preferred even when it has almost the same precision as the Euclidean distance. Table 1 lists the indexing times of SASH, FLANN, and NGT on the basis of the normalized, angular, and Euclidean distances, where LSH is also added only in the result of the normalized distance. The table indicates that NGT performed the best for the normalized and angular distances, while FLANN performed the best for the Euclidean distance. However, all methods seem to be suitable for practical use in terms of indexing because we can create an index of English Wikipedia embeddings in several hours (only once). The large indexing time with the angular distance also supports our suggestion that the normalized distance should be used. 3.3.2 Number of Dimensions of Embeddings We also evaluated the performances by changing the number of dimensions of embeddings. Since the optimal number of dimensions should depend on the tasks, we wanted to see how the search 2269 100 101 102 103 Time [msec] 0.5 0.6 0.7 0.8 0.9 1.0 Precision SASH (100 dim) FLANN (100 dim) NGT (100 dim) (a) 100 dimensions 100 101 102 103 Time [msec] 0.5 0.6 0.7 0.8 0.9 1.0 Precision SASH (200 dim) FLANN (200 dim) NGT (200 dim) (b) 200 dimensions 100 101 102 103 Time [msec] 0.5 0.6 0.7 0.8 0.9 1.0 Precision SASH (300 dim) FLANN (300 dim) NGT (300 dim) (c) 300 dimensions Figure 3: Precision versus computation time of SASH, FLANN, and NGT using 100-, 200-, and 300dimensional embeddings. 100 101 102 103 Time [msec] 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Precision SASH (top 10) FLANN (top 10) NGT (top 10) (a) P@10 100 101 102 103 Time [msec] 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Precision SASH (top 100) FLANN (top 100) NGT (top 100) (b) P@100 100 101 102 103 Time [msec] 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Precision SASH (top 200) FLANN (top 200) NGT (top 200) (c) P@200 Figure 4: Precision versus computation time of SASH, FLANN, and NGT using precision at 10, 100, and 200. methods performed when the number of dimensions varied, while the number of dimensions of image features is usually fixed. For example, SIFT features (Lowe, 1999) are represented as 128dimensional vectors. Figure 3 plots the performances of SASH, FLANN, and NGT using 100-, 200-, and 300dimensional embeddings. The graphs indicate that NGT always performed the best. SASH is expected to perform well when the number of dimensions is large, since FLANN and NGT perform worse as the number of dimensions becomes larger. However, NGT would be a better choice since most existing pre-trained embeddings (Turian et al., 2010; Mikolov, 2013; Pennington et al., 2014a) have a few hundred dimensions. 3.3.3 Number of Neighbors to Be Evaluated We also conducted performance evaluations by changing the number k of neighbors, i.e., the size of the set of k-nearest neighbors, to calculate the precision at k. We need to change the number k on demand from target applications. For example, we may use small numbers for extracting synonyms and large numbers for selecting candidates for news recommendations, where they will be reduced via another sophisticated selection process. The performances of SASH, FLANN, and NGT using 10-, 100-, and 200-nearest neighbors are shown in Figure 4. The graphs indicate that NGT performed the best in this measure also. With 200-nearest neighbors, the performance of SASH dropped sharply, which means that SASH is not robust for the indexing parameter. 
One possible reason is that searching for relatively distant neighbors is difficult for a tree-based index, where the divided subspaces are not appropriate. 3.3.4 Size of Training Set for Indexing We conducted further performance evaluations by changing the size of a training set, i.e., the number of embeddings used for indexing. We wanted to know how the search methods performed with different sized search indices since a large search index will bring about extra operational costs in a practical sense, and a small search index is preferred for a small application system. Figure 5 plots the performances of SASH, FLANN, and NGT using 100K, 1M, and 2M training sets, which were randomly sampled so that each training set can be virtually regarded as embeddings with a vocabulary of its training set size. The graphs indicate that NGT always performed the best for all search index sizes. Moreover, we can see that all results for each method have a similar tendency. This fact implies that a distribution of embeddings is related to the search per2270 100 101 102 103 Time [msec] 0.5 0.6 0.7 0.8 0.9 1.0 Precision SASH (size 100K) FLANN (size 100K) NGT (size 100K) (a) 100K 100 101 102 103 Time [msec] 0.5 0.6 0.7 0.8 0.9 1.0 Precision SASH (size 1M) FLANN (size 1M) NGT (size 1M) (b) 1M 100 101 102 103 Time [msec] 0.5 0.6 0.7 0.8 0.9 1.0 Precision SASH (size 2M) FLANN (size 2M) NGT (size 2M) (c) 2M Figure 5: Precision versus computation time of SASH, FLANN, and NGT using 100K, 1M, and 2M training sets. 100 101 102 103 Time [msec] 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Precision SASH (GN) FLANN (GN) NGT (GN) (a) GN 100 101 102 103 Time [msec] 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Precision for search SASH (CW) FLANN (CW) NGT (CW) (b) CW 100 101 102 103 Time [msec] 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Precision SASH (GV) FLANN (GV) NGT (GV) (c) GV Figure 6: Precision versus computation time of SASH, FLANN, and NGT using GN, CW, and GV embeddings. 0 50 100 150 200 250 300 Dimension −0.6 −0.4 −0.2 0.0 0.2 0.4 0.6 Kurtosis (a) GN 0 50 100 150 200 250 300 Dimension −2 0 2 4 6 8 10 Kurtosis (b) GV Figure 7: Kurtosis of each dimension of GN and GV embeddings. formance, and the next section will actually confirm the same property on another dataset used for learning embeddings. 3.3.5 Model and Data Used for Embeddings We also conducted performance evaluations by changing the learning models and training data for embeddings. We used the following three pretrained embeddings to investigate the performance when changing the data distributions used for indexing. GN 300-dimensional embeddings (Mikolov, 2013) learned by the skip-gram model with negative sampling (Mikolov et al., 2013a) using part of the Google News dataset, which contains about 3 million words and phrases and 100 billion tokens. CW 200-dimensional embeddings (Turian et al., 2010) learned by deep neural networks (Collobert and Weston, 2008) using the RCV1 corpus, which contains about 269 thousand words and 63 million tokens. GV 300-dimensional embeddings (Pennington et al., 2014a) learned by the global vectors for word representation (GloVe) model (Pennington et al., 2014b) using Common Crawl corpora, which contain about 2 million words and 42 billion tokens. The performances of SASH, FLANN, and NGT using GN, CW, and GV embeddings are plotted in Figure 6. The graphs indicate that NGT consistently performed the best over different learning models. 
A comparison of the results using GN embeddings and the previous results using Wikipedia embeddings reveals that they have almost the same tendency. This fact is acceptable if we assume the empirical rule that a corpus follows a power law, or Zipf's law. On the other hand, graphs (a), (b), and (c) have quite different tendencies. Specifically, all search methods compete with each other for CW embeddings, while none of them perform well for GV embeddings. This implies that the performance of a search method can be affected by the learning model rather than the training set used for the embeddings.

Figure 8: Precision versus computation time of SASH, FLANN, and NGT using the semantic analogy, syntactic analogy, and similarity search tasks.

Figure 9: Precision of the semantic and syntactic analogy tasks versus that of the similarity search task.

We further investigated why GV embeddings degrade the search performance. Table 2 lists the variance and kurtosis of Wikipedia, GN, CW, and GV embeddings to clarify the variation or dispersion of these distributions. Kurtosis K(X) is a measure of the "tailedness" of the probability distribution of a random variable X, defined by K(X) = µ4 / µ2² − 3, where µn represents the n-th central moment, i.e., E[(X − E[X])^n]. The constant "3" in the above definition sets the kurtosis of a normal distribution to 0. The table clearly indicates that GV has a heavy-tailed distribution according to the kurtosis values, although all the variances have almost the same value. In fact, GV has several high kurtosis peaks, while GN has only small values, according to Figure 7, which visualizes the kurtosis of each dimension. Note that the y-axis scale of graph (b) is about 20 times larger than that of graph (a). Because distant points in a metric space tend to deteriorate the performance of a search process, we need to pay attention to the distribution shape of embeddings as well as their quality, so as to efficiently search for similar embeddings.

Table 2: Variance and kurtosis of English Wikipedia (EW), GN, CW, and GV embeddings.
Variance: EW 0.0033, GN 0.0033, CW 0.0050, GV 0.0033
Kurtosis: EW 0.034, GN -0.026, CW -0.075, GV 0.57

3.3.6 Target Task to Be Solved

We finally evaluated the performance by changing the target task to be solved by using the embeddings. We wanted to know how the search methods performed with different task settings, since even if the precision of the search task is not good, it might be sufficient for another task to be solved on the basis of similarity search. In this section, we address well-known analogy tasks (Mikolov et al., 2013a), where semantic and syntactic analogy questions are considered, e.g., "Which word corresponds to Japan when Paris corresponds to France?", the answer being "Tokyo".
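For reference, the per-dimension variance and kurtosis reported in Table 2 and Figure 7 above can be computed directly from an embedding matrix with the definition K(X) = µ4/µ2² − 3. The sketch below uses a random Gaussian matrix as a stand-in for a real embedding set; the 0.057 scale is chosen only so that the toy variance lands near the values in Table 2, and real embeddings would of course give different kurtosis profiles.

```python
import numpy as np

def kurtosis_per_dim(E):
    # E has shape (vocabulary size, embedding dimension).
    centered = E - E.mean(axis=0)
    mu2 = (centered ** 2).mean(axis=0)  # 2nd central moment (variance)
    mu4 = (centered ** 4).mean(axis=0)  # 4th central moment
    return mu4 / mu2 ** 2 - 3.0         # excess kurtosis; ~0 for a Gaussian

E = np.random.default_rng(0).standard_normal((50000, 300)) * 0.057
k = kurtosis_per_dim(E)
print(E.var(axis=0).mean(), k.mean(), k.max())  # variance ~0.0033, kurtosis near 0
```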
These questions can be solved by searching for the nearest neighbors of analogical vectors generated via arithmetic operations., i.e., vec(“Paris”) −vec(“France”) + vec(“Japan”), where vec(w) represents an embedding of word w. Figure 8 plots the performances of SASH, FLANN, and NGT using the semantic and syntactic analogy tasks as well as that using the similarity search task (in Figure 1), which is added for comparison. The graphs indicate that NGT clearly performed the best even in the analogy tasks. Comparing the curves of NGT, we can see that those in graphs (a) and (b) are quite different from that in (c), and the analogy precisions can maintain their quality, even when the search precision is about 0.9. For further analysis, we aligned the precisions of the search task with those of the analogy tasks in Figure 9, where each point represents the results calculated with the same parameters. The dotted line without markers in each graph is a line from the origin (0, 0) to the point where the analogy precision is maximum when the search precision 2272 100 101 102 103 Time [msec] 0.0 0.2 0.4 0.6 0.8 1.0 Precision for analogy SASH (analogy GN) FLANN (analogy GN) NGT (analogy GN) (a) Analogy by GN 100 101 102 103 Time [msec] 0.0 0.2 0.4 0.6 0.8 1.0 Precision for analogy SASH (analogy CW) FLANN (analogy CW) NGT (analogy CW) (b) Analogy by CW 100 101 102 103 Time [msec] 0.0 0.2 0.4 0.6 0.8 1.0 Precision for analogy SASH (analogy GV) FLANN (analogy GV) NGT (analogy GV) (c) Analogy by GV Figure 10: Precision versus computation time of SASH, FLANN, and NGT for the analogy task (including both semantic and syntactic questions) using GN, CW, and GV embeddings. is 1.0, and thus it naively estimates a deterioration rate of the analogy precision on the basis of the search precision. The graphs indicate that the search precision can be far different from the estimated precision of another task. In fact, when the search precision by NGT is 0.8 in Figure 9 (a), the analogy precision 0.75 is unexpectedly high, since the naive estimation is 0.64 calculated by the maximum analogy precision 0.8 times the search precision 0.8. This suggests that it is a good idea to check the final performance of a target application, although the search performance is valuable from a standpoint of general versatility. Finally, we conducted performance evaluations for the analogy task instead of the search task by changing the learning models and training data for embeddings as in Section 3.3.5, in order to support the robustness of NGT even for an operation more sophisticated than just finding similar words. Figure 10 plots the performances of SASH, FLANN, and NGT for the analogy task including both semantic and syntactic questions using GN, CW, and GV embeddings. The graphs indicate that NGT performed the best over different learning models even for the analogy task. Although the precisions of CW embeddings in graph (b) are very low, the result seems to be acceptable according to the previous work (Mikolov et al., 2013b), which reported that the precisions of a syntactic analogy task using CW embeddings in similar settings were at most 5 % (0.05). The results of GN and GV embeddings in graphs (a) and (c) show a similar tendency to those of Wikipedia embeddings in Figure 8. However, the overall performance for the analogy task using GV embeddings is unexpectedly high, contrary to the results for the search task in Figure 6 (c). 
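A minimal sketch of the procedure just described, which forms vec("Paris") − vec("France") + vec("Japan") and returns its nearest neighbors by cosine similarity while excluding the three query words, might look as follows. The tiny random vocabulary is purely illustrative; with real embeddings the expected answer would be "tokyo", whereas here the output is arbitrary.

```python
import numpy as np

def solve_analogy(emb, a, b, c, k=1):
    # "a is to b as c is to ?"  ->  query vector = vec(b) - vec(a) + vec(c)
    q = emb[b] - emb[a] + emb[c]
    q /= np.linalg.norm(q)
    words = [w for w in emb if w not in {a, b, c}]
    mat = np.stack([emb[w] / np.linalg.norm(emb[w]) for w in words])
    order = np.argsort(-(mat @ q))   # rank remaining words by cosine similarity
    return [words[i] for i in order[:k]]

rng = np.random.default_rng(0)
emb = {w: rng.standard_normal(200)
       for w in ["france", "paris", "japan", "tokyo", "london", "berlin"]}
print(solve_analogy(emb, "france", "paris", "japan", k=3))
```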
One of the reasons is that arithmetic operations for solving analogy questions can reduce kurtosis peaks, although we omitted the kurtosis results due to space limitation. This fact also supports our finding that distant points in a metric space tend to deteriorate the performance in a search process. 4 Conclusion We investigated approximate similarity search for word embeddings. We compared three methods: a graph-based method (NGT), a tree-based method (FLANN), the SASH method, which was reported to have the best performance in a previous study (Gorman and Curran, 2006). The results of experiments we conducted from various aspects indicated that NGT generally performed the best and that the distribution shape of embeddings is a key factor relating to the search performance. Our future research includes improving the search performance for embeddings with heavy-tailed distributions and creating embeddings that can keep both task quality and search performance high. We will release the source code used for our comparative experiments from the NGT page (Iwasaki, 2015). Since we need to implement additional glue codes for running FLANN and SASH, our code would be useful for researchers who want to compare their results with ours. Acknowledgments We would like to thank the anonymous reviewers for giving us helpful comments. References Alexandr Andoni. 2004. LSH Algorithm and Implementation (E2LSH). http://web.mit.edu/ andoni/www/LSH/. Sunil Arya, David M. Mount, Nathan S. Netanyahu, Ruth Silverman, and Angela Y. Wu. 1998. An Optimal Algorithm for Approximate Nearest Neighbor 2273 Searching Fixed Dimensions. Journal of the ACM (JACM), 45(6):891–923. Jon Louis Bentley. 1975. Multidimensional Binary Search Trees Used for Associative Searching. Communication of the ACM, 18(9):509–517. Ingwer Borg and Patrick J. F. Groenen. 2005. Modern Multidimensional Scaling. Springer Series in Statistics. Springer-Verlag New York. Ronan Collobert and Jason Weston. 2008. A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning. In Proceedings of the 25th International Conference on Machine Learning (ICML 2008), pages 160–167. ACM. Mayur Datar, Nicole Immorlica, Piotr Indyk, and Vahab S. Mirrokni. 2004. Locality-sensitive Hashing Scheme Based on P-stable Distributions. In Proceedings of the 20th Annual Symposium on Computational Geometry (SCG 2004), pages 253–262. ACM. Aristides Gionis, Piotr Indyk, and Rajeev Motwani. 1999. Similarity Search in High Dimensions via Hashing. In Proceedings of the 25th International Conference on Very Large Data Bases (VLDB 2009), pages 518–529. Morgan Kaufmann Publishers Inc. James Gorman and James R. Curran. 2006. Scaling Distributional Similarity to Large Corpora. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics (COLING-ACL 2006), pages 361–368. Association for Computational Linguistics. Kiana Hajebi, Yasin Abbasi-Yadkori, Hossein Shahbazi, and Hong Zhang. 2011. Fast Approximate Nearest-neighbor Search with K-nearest Neighbor Graph. In Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI 2011), pages 1312–1317. AAAI Press. Michael E. Houle and Jun Sakuma. 2005. Fast Approximate Similarity Search in Extremely HighDimensional Data Sets. In Proceedings of the 21st International Conference on Data Engineering (ICDE 2005), pages 619–630. IEEE Computer Society. Michael E. Houle. 2005. The SASH Page. 
http://research.nii.ac.jp/%7Emeh/ sash/sashpage.html. Masajiro Iwasaki. 2015. NGT : Neighborhood Graph and Tree for Indexing. http://research-lab.yahoo.co.jp/ software/ngt/. David G. Lowe. 1999. Object Recognition from Local Scale-Invariant Features. In Proceedings of the International Conference on Computer Vision (ICCV 1999), pages 1150–1157. IEEE Computer Society. Matt Mahoney. 2011. About the Test Data. http: //mattmahoney.net/dc/textdata.html. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013a. Distributed Representations of Words and Phrases and their Compositionality. In Advances in Neural Information Processing Systems 26 (NIPS 2013), pages 3111–3119. Curran Associates, Inc. Tomas Mikolov, Wen tau Yih, and Geoffrey Zweig. 2013b. Linguistic Regularities in Continuous Space Word Representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2013), pages 746–751. Association for Computational Linguistics. Tomas Mikolov. 2013. word2vec: Tool for computing continuous distributed representations of words. https://code.google.com/p/ word2vec/. Marius Muja and David G. Lowe. 2008. FLANN — Fast Library for Approximate Nearest Neighbors. http://www.cs.ubc.ca/research/ flann/. David Nister and Henrik Stewenius. 2006. Scalable Recognition with a Vocabulary Tree. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2006), pages 2161–2168. IEEE Computer Society. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014a. GloVe: Global Vectors for Word Representation. http://nlp. stanford.edu/projects/glove/. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014b. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pages 1532– 1543. Thomas B. Sebastian and Benjamin B. Kimia. 2002. Metric-Based Shape Retrieval in Large Databases. In Proceedings of the 16th International Conference on Pattern Recognition (ICPR 2002), pages 291– 296. Chanop Silpa-Anan and Richard Hartley. 2008. Optimised KD-trees for fast image descriptor matching. In Proceedings of the 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2008), pages 1–8. IEEE Computer Society. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. CCG: RTE Annotation Data for ACL 2010 publication. http://cogcomp.cs.illinois.edu/ Data/ACL2010_NER_Experiments.php. 2274 Yair Weiss, Antonio Torralba, and Robert Fergus. 2009. Spectral Hashing. In Advances in Neural Information Processing Systems 21 (NIPS 2008), pages 1753–1760. Curran Associates, Inc. Justin Zobel and Alistair Moffat. 2006. Inverted Files for Text Search Engines. ACM Computing Surveys, 38(2). 2275
2016
214
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2276–2286, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Composing Distributed Representations of Relational Patterns Sho Takase Naoaki Okazaki Kentaro Inui Graduate School of Information Sciences, Tohoku University {takase, okazaki, inui}@ecei.tohoku.ac.jp Abstract Learning distributed representations for relation instances is a central technique in downstream NLP applications. In order to address semantic modeling of relational patterns, this paper constructs a new dataset that provides multiple similarity ratings for every pair of relational patterns on the existing dataset (Zeichner et al., 2012). In addition, we conduct a comparative study of different encoders including additive composition, RNN, LSTM, and GRU for composing distributed representations of relational patterns. We also present Gated Additive Composition, which is an enhancement of additive composition with the gating mechanism. Experiments show that the new dataset does not only enable detailed analyses of the different encoders, but also provides a gauge to predict successes of distributed representations of relational patterns in the relation classification task. 1 Introduction Knowledge about entities and their relations (relation instances) are crucial for a wide spectrum of NLP applications, e.g., information retrieval, question answering, and recognizing textual entailment. Learning distributed representations for relation instances is a central technique in downstream applications as a number of recent studies demonstrated the usefulness of distributed representations for words (Mikolov et al., 2013; Pennington et al., 2014) and sentences (Sutskever et al., 2014; Cho et al., 2014; Kiros et al., 2015). In particular, semantic modeling of relations and their textual realizations (relational patterns hereafter) is extremely important because a relaZeichner+ (2012) (increase the risk of, cause): (be open from, close at): ......... 5 6 6 6 7 1 1 2 2 2 .......... Similarity ratings of pattern pairs increase: risk: open: increase the risk of: cause: Corpus (ukWaC) ... evidence that passive smoking increases the risk of lung cancer by 10%. The research also ... ...... Cigarette smoking causes breathing problems ... Usually, pubs closes at midnight, and people increase the risk of passive smoking lung cancer cause cigarette smoking heart attack close at pub midnight be open from department store 10am Relational pattern X Y ... ... ... Encoder for relational patterns (§3) Additive composition, RNN, LSTM, GRU, GAC be open from: Annotating human judgments using crowd sourcing (§2) Embeddings of relational patterns Word embeddings SemEval 2010 Task 8 (word2vec) Training Open IE (Reverb) Relational pattern similarity Evaluation (§4.1) Relation classification Evaluation (§4.2) Figure 1: Overview of this study. tion (e.g., causality) can be mentioned by various expressions (e.g., “X cause Y”, “X lead to Y”, “Y is associated with X”). To make matters worse, relational patterns are highly productive: we can produce a emphasized causality pattern “X increase the severe risk of Y” from “X increase the risk of Y” by inserting severe to the pattern. To model the meanings of relational patterns, the previous studies built a co-occurrence matrix between relational patterns (e.g., “X increase the risk of Y”) and entity pairs (e.g., “X: smoking, Y: cancer”) (Lin and Pantel, 2001; Nakashole et al., 2012). 
Based on the distributional hypothesis (Harris, 1954), we can compute a semantic vector of a relational pattern from the co-occurrence matrix, and measure the similarity of two relational patterns as the cosine similarity of the vectors. Nowadays, several studies adopt distributed representations computed by neural networks for semantic modeling of relational patterns (Yih et al., 2014; Takase et al., 2016). Notwithstanding, the previous studies paid little attention to explicitly evaluate semantic modeling of relational patterns. In this paper, we construct a new dataset that contains a pair of relational patterns with five similarity ratings judged by human annotators. The new dataset shows a 2276 high inter-annotator agreement, following the annotation guideline of Mitchell and Lapata (2010). The dataset is publicly available on the Web site1. In addition, we conduct a comparative study of different encoders for composing distributed representations of relational patterns. During the comparative study, we present Gated Additive Composition, which is an enhancement of additive composition with the gating mechanism. We utilize the Skip-gram objective for training the parameters of the encoders on a large unlabeled corpus. Experiments show that the new dataset does not only enable detailed analyses of the different encoders, but also provides a gauge to predict successes of distributed representations of relational patterns in another task (relation classification). Figure 1 illustrates the overview of this study. 2 Data Construction 2.1 Target relation instances We build a new dataset upon the work of Zeichner et al. (2012), which consists of relational patterns with semantic inference labels annotated. The dataset includes 5,555 pairs2 extracted by Reverb (Fader et al., 2011), 2,447 pairs with inference relation and 3,108 pairs (the rest) without one. Initially, we considered using this high-quality dataset as it is for semantic modeling of relational patterns. However, we found that inference relations exhibit quite different properties from those of semantic similarity. Take a relational pattern pair “X be the part of Y” and “X be an essential part of Y” filled with “X = the small intestine, Y = the digestive system” as an instance. The pattern “X be the part of Y” does not entail “X be an essential part of Y” because the meaning of the former does not include ‘essential’. Nevertheless, both statements are similar, representing the same relation (PART-OF). Another uncomfortable pair is “X fall down Y” and “X go up Y” filled with “X = the dude, Y = the stairs”. The dataset indicates that the former entails the latter probably because falling down from the stairs requires going up there, but they present the opposite meaning. For this reason, we decided to re-annotate semantic similarity 1http://github.com/takase/relPatSim 2More precisely, the dataset includes 1,012 meaningless pairs in addition to 5,555 pairs. A pair of relational patterns was annotated as meaningless if the annotators were unable to understand the meaning of the patterns easily. We ignore the meaningless pairs in this study. judgments on every pair of relational patterns on the dataset. 2.2 Annotation guideline We use instance-based judgment in a similar manner to that of Zeichner et al. (2012) to secure a high inter-annotator agreement. In instancebased judgment, an annotator judges a pair of relational patterns whose variable slots are filled with the same entity pair. 
In other words, he or she does not make a judgment for a pair of relational patterns with variables, “X prevent Y” and “X reduce the risk of Y”, but two instantiated statements “Cephalexin prevent the bacteria” and “Cephalexin reduce the risk of the bacteria” (“X = Cephalexin, Y = the bacteria”). We use the entity pairs provided in Zeichner et al. (2012). We asked annotators to make a judgment for a pair of relation instances by choosing a rating from 1 (dissimilar) to 7 (very similar). We provided the following instructions for judgment, which is compatible with Mitchell and Lapata (2010): (1) rate 6 or 7 if the meanings of two statements are the same or mostly the same (e.g., “Palmer team with Jack Nicklaus” and “Palmer join with Jack Nicklaus”); (2) rate 1 or 2 if two statements are dissimilar or unrelated (e.g., “the kids grow up with him” and “the kids forget about him”); (3) rate 3, 4, or 5 if two statements have some relationships (e.g., “Many of you know about the site” and “Many of you get more information about the site”, where the two statements differ but also reasonably resemble to some extent). 2.3 Annotation procedure We use a crowdsourcing service CrowdFlower3 to collect similarity judgments from the crowds. CrowdFlower has the mechanism to assess the reliability of annotators using Gold Standard Data (Gold, hereafter), which consists of pairs of relational patterns with similarity scores assigned. Gold examples are regularly inserted throughout the judgment job to enable measurement of the performance of each worker4. Two authors of this paper annotated 100 pairs extracted randomly from 5,555 pairs, and prepared 80 Gold examples showing high agreement. Ratings of the Gold examples were used merely for quality assessment of the workers. In other words, we discarded the 3http://www.crowdflower.com/ 4We allow ±1 differences in rating when we measure the performance of the workers. 2277 Figure 2: Number of judgments for each similarity rating. The total number of judgments is 27, 775 (5, 555 pairs × 5 workers). similarity ratings of the Gold examples, and used those judged by the workers. To build a high quality dataset, we use judgments from workers whose confidence values (reliability scores) computed by CrowdFlower are greater than 75%. Additionally, we force every pair to have at least five judgments from the workers. Consequently, 60 workers participated in this job. In the final version of this dataset, each pair has five similarity ratings judged by the five most reliable workers who were involved in the pair. Figure 2 presents the number of judgments for each similarity rating. Workers seldom rated 7 for a pair of relational patterns, probably because most pairs have at least one difference in content words. The mean of the standard deviations of similarity ratings of all pairs is 1.16. Moreover, we computed Spearman’s ρ between similarity judgments from each worker and the mean of five judgments in the dataset. The mean of Spearman’s ρ of workers involved in the dataset is 0.728. These statistics show a high inter-annotator agreement of the dataset. 3 Encoder for Relational Patterns The new dataset built in the previous section raises two new questions — What is the reasonable method (encoder) for computing the distributed representations of relational patterns? Is this dataset useful to predict successes of distributed representations of relational patterns in real applications? 
In order to answer these questions, this section explores various methods for learning distributed representations of relational patterns. 3.1 Baseline methods without supervision A na¨ıve approach would be to regard a relational pattern as a single unit (word) and to train word/pattern embeddings as usual. In fact, Mikolov et al. (2013) implemented this approach as a preprocessing step, mining phrasal expressions with strong collocations from a training corpus. However, this approach might be affected by data sparseness, which lowers the quality of distributed representations. Another simple but effective approach is additive composition (Mitchell and Lapata, 2010), where the distributed representation of a relational pattern is computed by the mean of embeddings of constituent words. Presuming that a relational pattern consists of a sequence of T words w1, ..., wT , then we let xt ∈Rd the embedding of the word wt. This approach computes 1 T ∑T t=1 xt as the embedding of the relational pattern. Muraoka et al. (2014) reported that the additive composition is a strong baseline among various methods. 3.2 Recurrent Neural Network Recently, a number of studies model semantic compositions of phrases and sentences by using (a variant of) Recurrent Neural Network (RNN) (Sutskever et al., 2014; Tang et al., 2015). For a given embedding xt at position t, the vanilla RNN (Elman, 1990) computes the hidden state ht ∈Rd by the following recursive equation5, ht = g(Wxxt + Whht−1). (1) Here, Wx and Wh are d×d matrices (parameters), g(.) is the elementwise activation function (tanh). We set h0 = 0 at t = 1. In essence, RNN computes the hidden state ht based on the one at the previous position (ht−1) and the word embedding xt. Applying Equation 1 from t = 1 to T, we use hT as the distributed representation of the relational pattern. 3.3 RNN variants We also employ Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and Gated Recurrent Unit (GRU) (Cho et al., 2014) as an encoder for relational patterns. LSTM has been applied successfully to various NLP tasks including word segmentation (Chen et al., 2015), dependency parsing (Dyer et al., 2015), machine translation (Sutskever et al., 2014), and sentiment analysis (Tai et al., 2015). GRU is also successful in machine translation (Cho et al., 2014) and various 5We do not use a bias term in this study. We set the number of dimensions of hidden states identical to that of word embeddings (d) so that we can adapt the objective function of the Skip-gram model for training (Section 3.5). 2278 passive smoking increases the risk of lung cancer xs xs+1 xs+2 xs+L-1 xs+L xs+L+1 xs-2 xs-1 (3) (4) (5) hs hs+1 hs+2 hs+L-1 (3) ws ws+1 ws+2 ws+L-1 (3) ws+L ws+L+1 (4) (5) ws-2 ws-1 f s f s+1 f s+2 i s i s+1 i s+2 i s+3 ~ ~ ~ ~ Parameter update by Skip-gram model Parameter update by Skip-gram model Pattern vector T = L = 4 δ = 2 δ = 2 Context window Context window Relation pattern (word vectors) (context vectors) (context vectors) (hidden vectors) Gated Additive Composition (GAC) Figure 3: Overview of GAC trained with Skipgram model. GAC computes the distributed representation of a relational pattern using the input gate and forget gate, and learns parameters by predicting surrounding words (Skip-gram model). tasks including sentence similarity, paraphrase detection, and sentiment analysis (Kiros et al., 2015). 
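The two simplest encoders described above can be sketched in a few lines of NumPy: additive composition takes the mean of the constituent word vectors, and the vanilla RNN of Equation 1 applies h_t = g(W_x x_t + W_h h_{t-1}) with h_0 = 0 and g = tanh, returning h_T as the pattern vector. The random embeddings and weight matrices below are placeholders for trained parameters, not the paper's actual model.

```python
import numpy as np

def additive_composition(word_vecs):
    # Mean of the constituent word embeddings (Mitchell and Lapata style).
    return np.mean(word_vecs, axis=0)

def rnn_encode(word_vecs, Wx, Wh):
    # Vanilla RNN of Equation 1; the final hidden state h_T is the pattern vector.
    h = np.zeros(Wh.shape[0])
    for x in word_vecs:
        h = np.tanh(Wx @ x + Wh @ h)
    return h

d = 200
rng = np.random.default_rng(0)
emb = {w: rng.standard_normal(d) * 0.1 for w in ["increase", "the", "risk", "of"]}
pattern = [emb[w] for w in ["increase", "the", "risk", "of"]]
Wx = rng.standard_normal((d, d)) * 0.01
Wh = rng.standard_normal((d, d)) * 0.01
print(additive_composition(pattern).shape, rnn_encode(pattern, Wx, Wh).shape)
```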
3.3 RNN variants
We also employ Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and the Gated Recurrent Unit (GRU) (Cho et al., 2014) as encoders for relational patterns. LSTM has been applied successfully to various NLP tasks including word segmentation (Chen et al., 2015), dependency parsing (Dyer et al., 2015), machine translation (Sutskever et al., 2014), and sentiment analysis (Tai et al., 2015). GRU has also been successful in machine translation (Cho et al., 2014) and various tasks including sentence similarity, paraphrase detection, and sentiment analysis (Kiros et al., 2015). LSTM and GRU are similar in that both architectures have gates (input, forget, and output for LSTM; reset and update for GRU) to remedy the vanishing or exploding gradient problem in training RNNs. Although some researchers have reported that GRU is superior to LSTM (Chung et al., 2014), there is no consensus about this superiority. Besides, we are not sure whether LSTM or GRU is really necessary for relational patterns, which usually consist of only a few words. Thus, we compare RNN, LSTM, and GRU empirically with the same training data and the same training procedure. Similarly to the RNN, we use the hidden state h_T of LSTM6 or GRU as the distributed representation of a relational pattern.

6 We omitted peephole connections and bias terms.

3.4 Gated Additive Composition (GAC)
Beyond remedying the gradient problem, LSTM and GRU may be suitable for relational patterns because they adaptively control gates for input words and hidden states. Consider the relational pattern "X have access to Y", whose meaning is mostly identical to that of "X access Y". Because 'have' in the pattern is a light verb, it may be harmful to incorporate the semantic vector of 'have' into the distributed representation of the pattern. The same may be true for the functional word 'to' in the pattern. However, neither additive composition nor the RNN has a mechanism to ignore the semantic vectors of these words. It is therefore interesting to explore a method somewhere between additive composition and LSTM/GRU: additive composition with a gating mechanism. For this reason, we present another variant of the RNN in this study. Inspired by the input and forget gates in LSTM, we compute the input gate i_t ∈ R^d and forget gate f_t ∈ R^d at position t. We use them to control how much of the current word x_t and the previous state h_{t-1} is propagated to the hidden state h_t.

i_t = \sigma(W_{ix} x_t + W_{ih} h_{t-1})   (2)
f_t = \sigma(W_{fx} x_t + W_{fh} h_{t-1})   (3)
h_t = g(f_t \odot h_{t-1} + i_t \odot x_t)   (4)

Here, W_{ix}, W_{ih}, W_{fx}, and W_{fh} are d × d matrices. Equation 4 is interpreted as a weighted additive composition of the vector of the current word x_t and the vector of the previous hidden state h_{t-1}. The elementwise weights are controlled by the input gate i_t and forget gate f_t; we expect that input gates are closed (close to zero) and forget gates are opened (close to one) when the current word is a control verb or function word. We name this architecture gated additive composition (GAC).
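The GAC recursion of Equations 2-4 is small enough to spell out directly. The sketch below is a hedged illustration (NumPy, with placeholder parameters); in the paper the parameters are trained with the Skip-gram objective described next, and the words are actually scanned in backward order (Section 4.1.1).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gac_encode(X, Wix, Wih, Wfx, Wfh):
    """Gated additive composition over a (T, d) sequence of word
    embeddings X, with h_0 = 0 and all weight matrices of shape (d, d)."""
    h = np.zeros(X.shape[1])
    for x_t in X:
        i_t = sigmoid(Wix @ x_t + Wih @ h)   # input gate,  Equation 2
        f_t = sigmoid(Wfx @ x_t + Wfh @ h)   # forget gate, Equation 3
        h = np.tanh(f_t * h + i_t * x_t)     # gated additive step, Equation 4
    return h
```

When the gates saturate at i_t = 1 and f_t = 1, each step reduces to a squashed running sum of the previous state and the current word vector, which is why GAC can be read as additive composition with a learned, per-dimension weighting.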
3.5 Parameter estimation: Skip-gram model
To train the parameters of the encoders (RNN, LSTM, GRU, and GAC) on an unlabeled text corpus, we adapt the Skip-gram model (Mikolov et al., 2013). Formally, we designate an occurrence of a relational pattern p as a subsequence of L words w_s, ..., w_{s+L-1} in a corpus. We define the δ words appearing before and after pattern p as its context words, and let C_p = (s-δ, ..., s-1, s+L, ..., s+L+δ-1) denote the indices of the context words. We define the log-likelihood l_p of the relational pattern, following the objective function of Skip-gram with negative sampling (SGNS) (Levy and Goldberg, 2014):

l_p = \sum_{\tau \in C_p} \Big( \log \sigma(h_p^\top \tilde{x}_\tau) + \sum_{k=1}^{K} \log \sigma(-h_p^\top \tilde{x}_{\breve{\tau}}) \Big)   (5)

In this formula: K denotes the number of negative samples; h_p ∈ R^d is the vector for the relational pattern p computed by an encoder such as the RNN; \tilde{x}_\tau ∈ R^d is the context vector for the word w_\tau 7; and \tilde{x}_{\breve{\tau}} ∈ R^d is the context vector for a word sampled from the unigram distribution8 at every iteration of the inner sum over k.

At every occurrence of a relational pattern in the corpus, we use Stochastic Gradient Descent (SGD) and backpropagation through time (BPTT) to train the parameters (matrices) in the encoders. More specifically, we initialize the word vectors x_t and context vectors \tilde{x}_t with pre-trained values, and compute gradients of Equation 5 to update the parameters in the encoders. In this way, each encoder is trained to compose a vector of a relational pattern so that it can predict the surrounding context words. An advantage of this parameter estimation is that the distributed representations of words and relational patterns stay in the same vector space. Figure 3 visualizes the training process for GAC.

Figure 3: Overview of GAC trained with the Skip-gram model. GAC computes the distributed representation of a relational pattern using the input gate and forget gate, and learns parameters by predicting surrounding words (Skip-gram model).

7 The Skip-gram model has two kinds of vectors, x_t and \tilde{x}_t, assigned to a word w_t. Equation 2 of the original paper (Mikolov et al., 2013) denotes x_t (word vector) as v (input vector) and \tilde{x}_t (context vector) as v' (output vector). The word2vec implementation writes only word (input) vectors, not context (output) vectors, to a model file. Therefore, we modified the source code to save context vectors, and use them in Equation 5. This modification ensures the consistency of the entire model.
8 We use the probability distribution of words raised to the 3/4 power (Mikolov et al., 2013).

4 Experiments
In Section 4.1, we investigate the performance of the distributed representations computed by the different encoders on the pattern similarity task. Section 4.2 examines the contribution of the distributed representations on SemEval 2010 Task 8, and discusses the usefulness of the new dataset for predicting success on the relation classification task.

4.1 Relational pattern similarity
For every pair in the dataset built in Section 2, we compose the vectors of the two relational patterns using an encoder described in Section 3, and compute the cosine similarity of the two vectors. Repeating this process for all pairs in the dataset, we measure Spearman's ρ between the similarity values computed by the encoder and the similarity ratings assigned by humans.

4.1.1 Training procedure
We used ukWaC9 as the training corpus for the encoders. This corpus includes about 2 billion words of text from Web pages crawled in the .uk domain. Part-of-speech tags and lemmas are annotated by TreeTagger10. We used lowercased lemmas throughout the experiments. We applied word2vec to this corpus to pre-train the word vectors x_t and context vectors \tilde{x}_t. All encoders use the word vectors x_t to compose vectors of relational patterns, and the Skip-gram model uses the context vectors \tilde{x}_t to compute the objective function and gradients. We fix the vectors x_t and \tilde{x}_t to their pre-trained values during training.

We applied Reverb (Fader et al., 2011) to the ukWaC corpus to extract relational pattern candidates. To remove uninformative relational patterns, we applied filtering rules compatible with those used in the publicly available extraction result11. Additionally, we discarded relational patterns appearing in the evaluation dataset throughout the experiments, so as to assess performance when an encoder composes vectors of unseen relational patterns. This preprocessing yielded 127,677 relational patterns. All encoders were implemented with Chainer12, a flexible framework for neural networks.

9 http://wacky.sslmit.unibo.it
10 http://www.cis.uni-muenchen.de/~schmid/tools/TreeTagger/
11 http://reverb.cs.washington.edu/
12 http://chainer.org/
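The following sketch spells out the per-occurrence objective of Equation 5 with K negative samples drawn from the unigram distribution raised to the 3/4 power. It is a simplified illustration, assuming the context vectors are already available (footnote 7) and leaving out the SGD/BPTT updates of the encoder parameters; the helper names are not from the authors' code.

```python
import numpy as np

def log_sigmoid(z):
    # Numerically stable log(sigmoid(z)).
    return -np.logaddexp(0.0, -z)

def make_negative_sampler(word_counts, power=0.75, seed=0):
    """Sampler over word ids drawn from counts raised to the 3/4 power."""
    probs = np.asarray(word_counts, dtype=float) ** power
    probs /= probs.sum()
    rng = np.random.default_rng(seed)
    return lambda: int(rng.choice(len(probs), p=probs))

def pattern_log_likelihood(h_p, context_word_ids, context_vectors,
                           sample_negative, K=5):
    """Equation 5: for each context position, one positive term for the
    observed word plus K negative terms for sampled words. h_p is the
    encoder output for the pattern; context_vectors is the (V, d) matrix
    of fixed context vectors (the modified word2vec output)."""
    ll = 0.0
    for w in context_word_ids:
        ll += log_sigmoid(h_p @ context_vectors[w])
        for _ in range(K):
            ll += log_sigmoid(-h_p @ context_vectors[sample_negative()])
    return ll
```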
The hyperparameters of the Skip-gram model are identical to those in Mikolov et al. (2013): the width of the context window δ = 5, the number of negative samples K = 5, and a subsampling threshold of 10^-5. For each encoder that requires training, we tried 0.025, 0.0025, and 0.00025 as the initial learning rate, and selected the best value for the encoder. In contrast to the presentation in Section 3, we compose a pattern vector in backward order (from the last word to the first) because preliminary experiments showed a slight improvement with this treatment.

4.1.2 Results and discussions
Figure 4 shows Spearman's rank correlations of the different encoders when the number of dimensions of the vectors is 100–500. The figure shows that GAC achieves the best performance on all dimensions.

Figure 4: Performance of each method on the relational pattern similarity task with variation in the number of dimensions.

Figure 4 also includes the performance of the naïve approach, "NoComp", which regards a relational pattern as a single unit (word). In this approach, we allocated a vector h_p for each relational pattern p in Equation 5 instead of composing the vector, and trained the vectors of relational patterns using the Skip-gram model. The performance was poor for two reasons: we were unable to compute similarity values for 1,744 pairs because the relational patterns in these pairs do not appear in ukWaC; and the relational patterns could not obtain sufficient statistics because of data sparseness.

Table 1 reports Spearman's rank correlations computed for each pattern length. Here, the length of a relational-pattern pair is defined as the maximum of the lengths of the two patterns in the pair.

Table 1: Spearman's rank correlations on different pattern lengths (number of dimensions d = 500).
Length | #     | NoComp | Add   | LSTM  | GRU   | RNN   | GAC
1      | 636   | 0.324  | 0.324 | 0.324 | 0.324 | 0.324 | 0.324
2      | 1,018 | 0.215  | 0.319 | 0.257 | 0.274 | 0.285 | 0.321
3      | 2,272 | 0.234  | 0.386 | 0.344 | 0.370 | 0.387 | 0.404
4      | 1,206 | 0.208  | 0.306 | 0.314 | 0.329 | 0.319 | 0.323
>= 5   | 423   | 0.278  | 0.315 | 0.369 | 0.384 | 0.394 | 0.357
All    | 5,555 | 0.215  | 0.340 | 0.336 | 0.356 | 0.362 | 0.370

For patterns of length 1, all methods achieve the same correlation score because they use the same word vector x_t. The table shows that additive composition (Add) performs well for shorter relational patterns (lengths of 2 and 3) but poorly for longer ones (lengths of 4 and 5+). GAC exhibits a similar tendency to Add, but it outperforms Add for shorter patterns (lengths of 2 and 3), probably because of the adaptive control of the input and forget gates. In contrast, the RNN and its variants (GRU and LSTM) have the advantage on longer patterns (lengths of 4 and 5+).

To examine the roles of the input and forget gates of GAC, we visualize the moments when the input/forget gates are wide open or closed. More precisely, we extract the current word and the scanned words when |i_t|^2 or |f_t|^2 is small (close to zero) or large (close to one) on the relational-pattern dataset. We restate that we compose a pattern vector in backward order (from the last word to the first): GAC scans 'of', 'author', and 'be' in this order when composing the vector of the relational pattern 'be author of'. Table 2 displays the top three examples identified using this procedure.

Table 2: Prominent moments for input/forget gates (word sequences w_t w_{t+1} w_{t+2} ...).
large i_t (input open):    reimburse for; payable in; liable to
small i_t (input close):   a charter member of; a valuable member of; be an avid reader of
large f_t (forget open):   be eligible to participate in; be require to submit; be request to submit
small f_t (forget close):  coauthor of; capital of; center of

The table shows two groups of tendencies. Input gates open and forget gates close when the scanned words are only a preposition and the current word is a content word. In these situations, GAC tries to read the semantic vector of the content word and to ignore the semantic vector of the preposition. In contrast, input gates close and forget gates open when the current word is 'be' or 'a' and the scanned words form a noun phrase (e.g., "charter member of"), a complement (e.g., "eligible to participate in"), or a passive construction (e.g., "require(d) to submit"). This behavior is also reasonable because GAC emphasizes informative words more than functional words.
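For reference, the evaluation behind Figure 4 and Table 1 reduces to a few lines once an encoder is fixed: compute the cosine similarity of the two composed pattern vectors for every pair and correlate these scores with the human ratings using Spearman's ρ. The data-access conventions in this sketch are assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate_encoder(encode, pairs, human_scores):
    """encode: maps a relational pattern to its vector (any encoder above);
    pairs: list of (pattern1, pattern2); human_scores: the mean of the five
    crowd ratings per pair. Returns Spearman's rank correlation."""
    model_scores = [cosine(encode(p1), encode(p2)) for p1, p2 in pairs]
    rho, _ = spearmanr(model_scores, human_scores)
    return rho
```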
4.2 Relation classification

4.2.1 Experimental settings
To examine the usefulness of the dataset and the distributed representations for a different application, we address the task of relation classification on the SemEval 2010 Task 8 dataset (Hendrickx et al., 2010). In other words, we explore whether high-quality distributed representations of relational patterns are effective for identifying the relation type of an entity pair. The dataset consists of 10,717 relation instances (8,000 training and 2,717 test instances) with their relation types annotated. The dataset defines 9 directed relations (e.g., CAUSE-EFFECT) and 1 undirected relation, OTHER. Given a pair of entity mentions, the task is to identify the relation type among 19 candidate labels (2 × 9 directed + 1 undirected relations). For example, given the pair of entity mentions e1 = 'burst' and e2 = 'pressure' in the sentence "The burst has been caused by water hammer pressure", a system is expected to predict CAUSE-EFFECT(e2, e1).

We used Support Vector Machines (SVM) with a Radial Basis Function (RBF) kernel implemented in libsvm13. The basic features are: part-of-speech tags (predicted by TreeTagger), surface forms, lemmas of the words appearing between the entity pair, and lemmas of the words in the entity pair. Additionally, we incorporate the distributed representations of the relational pattern, the entities, and the words immediately before and after the entity pair (number of dimensions d = 500). In this task, we regard the words appearing between an entity pair as a relational pattern. We compare the vector representations of relational patterns computed by the five encoders presented in Section 4.1: additive composition, RNN, GRU, LSTM, and GAC. Hyperparameters related to the SVM were tuned by 5-fold cross validation on the training data.

13 https://www.csie.ntu.edu.tw/~cjlin/libsvm/
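Before turning to the results, the sketch below shows one plausible way to assemble the feature vector just described: sparse lexical/POS indicators concatenated with the 500-dimensional embeddings of the relational pattern, the two entities, and the surrounding words, fed to an RBF-kernel SVM (here via scikit-learn's SVC, which wraps libsvm). The instance layout and helper names are hypothetical, not the authors' implementation.

```python
import numpy as np
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import SVC

def sparse_features(inst):
    """Indicator features from POS tags, surface forms, and lemmas.
    inst is assumed to carry token-level annotations for the span
    between the two entities and for the entities themselves."""
    feats = {}
    feats.update({f"pos={p}": 1.0 for p in inst["between_pos"]})
    feats.update({f"form={w}": 1.0 for w in inst["between_forms"]})
    feats.update({f"lemma={l}": 1.0 for l in inst["between_lemmas"]})
    feats[f"e1={inst['e1_lemma']}"] = 1.0
    feats[f"e2={inst['e2_lemma']}"] = 1.0
    return feats

def dense_features(inst, encode):
    """Embeddings of the pattern, the entities, and the adjacent words."""
    return np.concatenate([
        encode(inst["between_lemmas"]),   # relational pattern (d = 500)
        encode([inst["e1_lemma"]]), encode([inst["e2_lemma"]]),
        encode([inst["left_word"]]), encode([inst["right_word"]]),
    ])

def train_classifier(instances, labels, encode):
    vectorizer = DictVectorizer()
    X_sparse = vectorizer.fit_transform([sparse_features(i) for i in instances])
    X_dense = np.vstack([dense_features(i, encode) for i in instances])
    X = np.hstack([X_sparse.toarray(), X_dense])
    return SVC(kernel="rbf").fit(X, labels), vectorizer
```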
4.2.2 Results and discussions
Table 3 presents the macro-averaged F1 scores on the SemEval 2010 Task 8 dataset.

Table 3: F1 scores on the SemEval 2010 dataset.
Method                                      | Feature set                                    | F1
SVM                                         | BoW, POS                                       | 77.3
SVM + NoComp                                | embeddings, BoW, POS                           | 79.9
SVM + LSTM                                  | embeddings, BoW, POS                           | 81.1
SVM + Add                                   | embeddings, BoW, POS                           | 81.1
SVM + GRU                                   | embeddings, BoW, POS                           | 81.4
SVM + RNN                                   | embeddings, BoW, POS                           | 81.7
SVM + GAC                                   | embeddings, BoW, POS                           | 82.0
                                            | + dependency, WordNet, NE                      | 83.7
Ranking loss + GAC w/ fine-tuning           | embeddings, BoW, POS + dependency, WordNet, NE | 84.2
SVM (Rink and Harabagiu, 2010)              | BoW, POS, dependency, Google n-gram, etc.      | 82.2
MV-RNN (Socher et al., 2012)                | embeddings, parse trees                        | 79.1
                                            | + WordNet, POS, NE                             | 82.4
FCM (Gormley et al., 2015) w/o fine-tuning  | embeddings, dependency                         | 79.4
                                            | + WordNet                                      | 82.0
FCM w/ fine-tuning                          | embeddings, dependency                         | 82.2
                                            | + NE                                           | 83.4
RelEmb (Hashimoto et al., 2015)             | embeddings                                     | 82.8
                                            | + dependency, WordNet, NE                      | 83.5
CR-CNN (dos Santos et al., 2015) w/ Other   | embeddings, word position embeddings           | 82.7
CR-CNN w/o Other                            | embeddings, word position embeddings           | 84.1
depLCNN (Xu et al., 2015)                   | embeddings, dependency                         | 81.9
                                            | + WordNet                                      | 83.7
depLCNN + NS                                | embeddings, dependency                         | 84.0
                                            | + WordNet                                      | 85.6

The first group of the table shows the basic features and their enhancements with the distributed representations. We can observe a significant improvement even from the distributed representation of NoComp (77.3 to 79.9). Moreover, the distributed representations that exhibited high performance on the pattern similarity task were also successful on this task; GAC, which yielded the highest performance on the pattern similarity task, also achieved the best performance (82.0) of all encoders on this task. It is noteworthy that the improvements brought by the different encoders on this task roughly correspond to their performance on the pattern similarity task. This fact has two potential implications. First, the distributed representations of relational patterns are useful and easily transferable to other tasks such as knowledge base population. Second, the pattern similarity dataset provides a gauge for predicting the success of distributed representations on other tasks.

We could further improve the performance of SVM + GAC by incorporating external resources, in a similar manner to the previous studies. Concretely, SVM + GAC achieved an F1 score of 83.7 by adding features for WordNet, named entities (NE), and dependency paths, as explained in Hashimoto et al. (2015). Moreover, we obtained an F1 score of 84.2 by using the ranking-based loss function (dos Santos et al., 2015) and fine-tuning the distributed representations initially trained by GAC. Currently, this is the second best score among the performance values reported in previous studies on this task (the second group of Table 3). If we could use the negative sampling technique proposed by Xu et al. (2015), we might improve the performance further14.

14 In fact, we made substantial efforts to introduce the negative sampling technique. However, Xu et al. (2015) omits the details of the technique, probably because of the severe page limit for short papers. For this reason, we could not reproduce their method in this study.

5 Related Work
Mitchell and Lapata (2010) was a pioneering work on the semantic modeling of short phrases. They constructed a dataset that contains two-word phrase pairs with semantic similarity judged by human annotators. Korkontzelos et al. (2013) provided a semantic similarity dataset with pairs consisting of a two-word phrase and a single word. Wieting et al. (2015) annotated a part of PPDB (Ganitkevitch et al., 2013) to evaluate the semantic modeling of paraphrases. Although the target unit of semantic modeling is different from those of these previous studies, we follow the annotation guidelines and instructions of Mitchell and Lapata (2010) to build the new dataset. The task addressed in this paper is also related to the Semantic Textual Similarity (STS) task (Agirre et al., 2012). STS is the task of measuring the degree of semantic similarity between two sentences. Even though a relational pattern appears as a part of a sentence, it may be difficult to transfer findings from one task to the other: for example, the encoders of the RNN and its variants explored in this study may exhibit different characteristics, influenced by the length and complexity of the input text expressions.
In addition to data construction, this paper addresses semantic modeling of relational patterns. Nakashole et al. (2012) approached the similar task by constructing a taxonomy of relational patterns. They represented a vector of a relational pattern as the distribution of entity pairs co-occurring with the relational pattern. Grycner et al. (2015) extended Nakashole et al. (2012) to generalize dimensions of the vector space (entity pairs) by incorporating hyponymy relation between entities. They also used external resources to recognize the transitivity of pattern pairs and applied transitivities to find patterns in entailment relation. These studies did not consider semantic composition of relational patterns. Thus, they might suffer from the data sparseness problem, as shown by NoComp in Figure 4. Numerous studies have been aimed at encoding distributed representations of phrases and sentences from word embeddings by using: Recursive Neural Network (Socher et al., 2011), Matrix Vector Recursive Neural Network (Socher et al., 2012), Recursive Neural Network with different weight matrices corresponding to syntactic categories (Socher et al., 2013) or word types (Takase et al., 2016), RNN (Sutskever et al., 2011), LSTM (Sutskever et al., 2014), GRU (Cho et al., 2014), PAS-CLBLM (Hashimoto et al., 2014), etc. As described in Section 3, we applied RNN, GRU, and LSTM to compute distributed representations of relational patterns because recent papers have demonstrated their superiority in semantic composition (Sutskever et al., 2014; Tang et al., 2015). In this paper, we presented a comparative study of different encoders for semantic modeling of relational patterns. To investigate usefulness of the distributed representations and the new dataset, we adopted the relation classification task (SemEval 2010 Task 8) as a real application. On the SemEval 2010 Task 8, several studies considered semantic composition. Gormley et al. (2015) proposed Feature-rich Compositional Embedding Model (FCM) that can combine binary features (e.g., positional indicators) with word embeddings via outer products. dos Santos et al. (2015) addressed the task using Convolutional Neural Network (CNN). Xu et al. (2015) achieved a higher performance than dos 2283 Santos et al. (2015) by application of CNN on dependency paths. In addition to the relation classification task, we briefly describe other applications. To populate a knowledge base, Riedel et al. (2013) jointly learned latent feature vectors of entities, relational patterns, and relation types in the knowledge base. Toutanova et al. (2015) adapted CNN to capture the compositional structure of a relational pattern during the joint learning. For open domain question answering, Yih et al. (2014) proposed the method to map an interrogative sentence on an entity and a relation type contained in a knowledge base by using CNN. Although these reports described good performance on the respective tasks, we are unsure of the generality of distributed representations trained for a specific task such as the relation classification. In contrast, this paper demonstrated the contribution of distributed representations trained in a generic manner (with the Skip-gram objective) to the task of relation classification. 6 Conclusion In this paper, we addressed the semantic modeling of relational patterns. We introduced the new dataset in which humans rated multiple similarity scores for every pair of relational patterns on the dataset of semantic inference (Zeichner et al., 2012). 
Additionally, we explored different encoders for composing distributed representations of relational patterns. The experimental results shows that Gated Additive Composition (GAC), which is a combination of additive composition and the gating mechanism, is effective to compose distributed representations of relational patterns. Furthermore, we demonstrated that the presented dataset is useful to predict successes of the distributed representations in the relation classification task. We expect that several further studies will use the new dataset not only for distributed representations of relational patterns but also for other NLP tasks (e.g., paraphrasing). Analyzing the internal mechanism of LSTM, GRU, and GAC, we plan to explore an alternative architecture of neural networks that is optimal for relational patterns. Acknowledgments We thank the reviewers and Jun Suzuki for valuable comments. This work was partially supported by Grant-in-Aid for JSPS Fellows Grant no. 26.5820, JSPS KAKENHI Grant number 15H05318, and JST, CREST. References Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A pilot on semantic textual similarity. In The First Joint Conference on Lexical and Computational Semantics (*SEM 2012), pages 385–393. Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Pengfei Liu, and Xuanjing Huang. 2015. Long short-term memory neural networks for chinese word segmentation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015), pages 1197–1206. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pages 1724–1734. Junyoung Chung, C¸ aglar G¨ulc¸ehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACLIJCNLP 2015), pages 626–634. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACLIJCNLP 2015), pages 334–343. Jeffrey L Elman. 1990. Finding structure in time. Cognitive science, 14(2):179–211. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP 2011), pages 1535–1545. Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. Ppdb: The paraphrase database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2013), pages 758–764. 2284 Matthew R. Gormley, Mo Yu, and Mark Dredze. 2015. Improved relation extraction with feature-rich compositional embedding models. 
In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015), pages 1774– 1784. Adam Grycner, Gerhard Weikum, Jay Pujara, James Foulds, and Lise Getoor. 2015. Relly: Inferring hypernym relationships between relational phrases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015), pages 971–981. Zellig Harris. 1954. Distributional structure. Word, 10(23):146–162. Kazuma Hashimoto, Pontus Stenetorp, Makoto Miwa, and Yoshimasa Tsuruoka. 2014. Jointly learning word representations and composition functions using predicate-argument structures. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pages 1544–1555. Kazuma Hashimoto, Pontus Stenetorp, Makoto Miwa, and Yoshimasa Tsuruoka. 2015. Task-oriented learning of word embeddings for semantic relation classification. In Proceedings of the 19th Conference on Computational Natural Language Learning (CoNLL 2015), pages 268–278. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. Semeval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 33–38. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Systems 28 (NIPS 2015), pages 3276–3284. Ioannis Korkontzelos, Torsten Zesch, Fabio Massimo Zanzotto, and Chris Biemann. 2013. Semeval-2013 task 5: Evaluating phrasal semantics. In Second Joint Conference on Lexical and Computational Semantics (*SEM 2013), pages 39–47. Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In Advances in Neural Information Processing Systems 27 (NIPS 2014), pages 2177–2185. Dekang Lin and Patrick Pantel. 2001. Dirt – discovery of inference rules from text. In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 01), pages 323–328. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26 (NIPS 2013), pages 3111–3119. Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Science, 34(8):1388–1439. Masayasu Muraoka, Sonse Shimaoka, Kazeto Yamamoto, Yotaro Watanabe, Naoaki Okazaki, and Kentaro Inui. 2014. Finding the best model among representative compositional models. In Proceedings of the 28th Pacific Asia Conference on Language, Information, and Computation (PACLIC 28), pages 65–74. Ndapandula Nakashole, Gerhard Weikum, and Fabian Suchanek. 2012. Patty: A taxonomy of relational patterns with semantic types. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2012), pages 1135–1145. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pages 1532–1543. 
Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2013), pages 74–84. Bryan Rink and Sanda Harabagiu. 2010. Utd: Classifying semantic relations by combining lexical and semantic resources. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 256–259. Richard Socher, Cliff Chiung-Yu Lin, Andrew Y. Ng, and Christopher D. Manning. 2011. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 28th International Conference on Machine learning (ICML 2011), pages 129–136. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2012), pages 1201–1211. Richard Socher, John Bauer, Christopher D. Manning, and Ng Andrew Y. 2013. Parsing with compositional vector grammars. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL 2013), pages 455–465. 2285 Ilya Sutskever, James Martens, and Geoffrey Hinton. 2011. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML 2011), pages 1017–1024. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27 (NIPS 2014), pages 3104–3112. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2015), pages 1556–1566. Sho Takase, Naoaki Okazaki, and Kentaro Inui. 2016. Modeling semantic compositionality of relational patterns. Engineering Applications of Artificial Intelligence, 50(C):256–264. Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classification. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015), pages 1422– 1432. Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015), pages 1499– 1509. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. From paraphrase database to compositional paraphrase model and back. Transactions of the Association for Computational Linguistics (TACL 2015), 3:345–358. Kun Xu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2015. Semantic relation classification via convolutional neural networks with simple negative sampling. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015), pages 536–540. Wen-tau Yih, Xiaodong He, and Christopher Meek. 2014. Semantic parsing for single-relation question answering. 
In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL 2014), pages 643–648. Naomi Zeichner, Jonathan Berant, and Ido Dagan. 2012. Crowdsourcing inference-rule evaluation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL 2012), pages 156–160.
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2287–2296, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics The More Antecedents, the Merrier: Resolving Multi-Antecedent Anaphors Hardik Vala1, Andrew Piper2, Derek Ruths1 1School of Computer Science, 2Dept. of Languages, Literartures, & Cultures McGill University Montreal, Canada [email protected], {andrew.piper, derek.ruths}@mcgill.ca Abstract Anaphor resolution is an important task in NLP with many applications. Despite much research effort, it remains an open problem. The difficulty of the problem varies substantially across different sub-problems. One sub-problem, in particular, has been largely untouched by prior work despite occurring frequently throughout corpora: the anaphor that has multiple antecedents, which here we call multi-antecedent anaphors or manaphors. Current coreference resolvers restrict anaphors to at most a single antecedent. As we show in this paper, relaxing this constraint poses serious problems in coreference chain-building, where each chain is intended to refer to a single entity. This work provides a formalization of the new task with preliminary insights into multi-antecedent noun-phrase anaphors, and offers a method for resolving such cases that outperforms a number of baseline methods by a significant margin. Our system uses local agglomerative clustering on candidate antecedents and an existing coreference system to score clusters to determine which cluster of mentions is antecedent for a given anaphor. When we augment an existing coreference system with our proposed method, we observe a substantial increase in performance (0.6 absolute CoNLL F1) on an annotated corpus. 1 Introduction Anaphor resolution is a very difficult task in Natural Language Understanding, involving the complex interaction of discourse cues, syntactic rules, and semantic phenomena. It is closely related to the task of coreference resolution (Van Deemter and Kibble, 2000), for which a myriad of solutions have been proposed (Clark and Manning, 2015; Peng et al., 2015; Wiseman et al., 2015; Bj¨orkelund and Farkas, 2012; Lee et al., 2011; Stoyanov et al., 2010; Ng, 2008; Bergsma and Lin, 2006; Soon et al., 2001). However, given the complexity of the problem, a comprehensive approach remains elusive. The difficulty varies drastically across different cases (proper nouns, pronouns, gerunds, etc.), each of which involves different assumptions about and models of various linguistic phenomena (e.g., vocabulary, syntax, and semantics). As a result, state-of-theart systems yield varying performance across subproblems (Mitkov, 2014; Kummerfeld and Klein, 2013; Bj¨orkelund and Nugues, 2011; Recasens and Hovy, 2009; Stoyanov et al., 2009; Bengtson and Roth, 2008; Van Deemter and Kibble, 2000; Ng and Cardie, 2002b; Kameyama, 1997). To avoid the complexity of the overarching resolution task, many current systems — whether learning-based (Clark and Manning, 2015; Peng et al., 2015; Wiseman et al., 2015; Durrett and Klein, 2013; Bj¨orkelund and Farkas, 2012) or rule-based (Lee et al., 2011) — focus on a restricted version of the problem, where candidate anaphors are linked to at most one antecedent, from which coreference chains are built by propagating the induced equivalence relation, with each chain corresponding to an entity (Van Deemter and Kibble, 2000). 
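To make the chain-building step concrete: under the single-antecedent restriction each anaphor contributes at most one link, and propagating the induced equivalence relation amounts to a union-find pass over those links, as in the hedged sketch below (the mention and link representations are illustrative, not any particular system's data structures).

```python
def build_chains(mentions, antecedent_of):
    """mentions: iterable of mention ids; antecedent_of: dict mapping each
    anaphoric mention to its single predicted antecedent. Returns the
    coreference chains induced by propagating the links."""
    parent = {m: m for m in mentions}

    def find(m):
        while parent[m] != m:
            parent[m] = parent[parent[m]]  # path compression
            m = parent[m]
        return m

    for anaphor, antecedent in antecedent_of.items():
        parent[find(anaphor)] = find(antecedent)

    chains = {}
    for m in mentions:
        chains.setdefault(find(m), []).append(m)
    return list(chains.values())
```

Allowing an anaphor to link to several antecedents breaks this construction: a single m-anaphor whose antecedents belong to different entities would merge their chains, which is exactly the violation of chain separation discussed below.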
While this single-antecedent inference task does resolve a very large number of anaphors in any given text, it leaves one quite common subproblem virtually untouched: anaphors that link to multiple antecedents. These have sometimes been called split-antecedent anaphors; here we use the term multi-antecedent anaphors or m-anaphors in 2287 order to emphasize the existence of more than one (possibly more than two) antecedents for a given anaphor. Consider the following examples: (1) [Elizabeth]1 met [Mary]2 at the park and [they]1,2 began their stroll to the river. (2) Mrs. Dashwood, having moved to another country, saw her [mother]1 and [sister-inlaw]2 demoted to occasional visitors. As such, however, her old [kin]1,2 were treated by her new family with quiet civility. Such cases present a challenge to state-of-theart methods: certain features well-suited for the single-antecedent case do not apply (e.g. gender and pluarity) (Recasens and Hovy, 2009; Stoyanov et al., 2009; Bergsma and Lin, 2006), and strong long-distance effects cannot be ignored (Ingria and Stallard, 1989). Moreover, the presence of multiple antecedents for a single anaphor violates the separation between coreference chains. In this paper, we address the multi-antecedent case of noun-phrase (NP) anaphor resolution in English, the most widely understood and studied form of coreference resolution (Ng, 2010; Ng, 2008). While we frame the general question of multi-antecedent inference, we restrict our analyses to one particular sub-problem: resolving the antecedents of the pronouns they and them. These pronouns best isolate the characteristics of manaphors (see Section 2 for more on the motivation of this choice). We propose a system for resolving they and them that models grouping compatibility of mentions through a maximum entropy pairwise model, independently from coreference of groupings, which is handled through an existing coreference resolution system leveraging corpus knowledge. This paper makes four core contributions. First, it provides a generalization of the anaphor resolution problem to permit linking to multiple antecedents. Second, we characterize core properties of m-anaphors and their linguistic environments in a large, annotated corpus. Third, we provide a entity-centric system for specifically resolving multi-antecedent cases that outperforms a number of baselines. And, finally, we show how to pair our system with an existing coreference system and show a gain of 0.6 points (CoNLL F1) on the complete coreference resolution task (resolving all anaphors, single- and multi-antecedent). The rest of the paper is organized as follows: We introduce the terminology and problem statement for split-antecedent resolution in Section 2. A summary of the data is given in Section 3 and the behaviour of split-antecedent anaphors is analyzed in Section 4. Our approach to antecedent prediction is presented in Section 5 and the results and analysis are reported in Section 6. Finally, we review related work in Section 7 and conclude and discuss future work in Section 8. 2 Problem This section establishes the terminology used throughout the paper and reformulates the anaphor resolution problem to incorporate linking to multiple antecedents. 2.1 Terminology We introduce the term m-anaphor for convenience as a special case of anaphor that has to multiple antecedents. For example, they and kin in Examples (1) and (2), respectively, from the Introduction are m-anaphors. 
By extension, 1-anaphors are anaphors that have only one antecedent. Similarly, we define an m-antecedent as one of multiple antecedents of an m-anaphor and we refer to m-antecedents with the same m-anaphor as siblings. In Example (1) from the Introduction, Elizabeth and Mary are sibling m-antecedents of they, and in Example (2), mother and sister-in-law are sibling m-antecedents of kin. Finally, we refer to anaphors with two, three, and four m-antecedents as 2-anaphors, 3anaphors, and 4-anaphors, respectively. We provide two more examples: (3) [Mr. Holmes]1 stared off into the distance. [Watson]2 simply walked off. [Both]1,2 were troubled by the news. (4) Virginia found herself alone with her [brother]1, and then the thought of her [sister]2 came to mind. [She]3 remembered the camping trip [they]1,2,3 embarked on a few summers ago. The anaphor in Example (3) is a 2-anaphor and the anaphor in Example (4) is a 3-anaphor. 2.2 Definition We define the NP anaphor resolution problem similar to Wiseman et al. (2015), Durrett and Klein 2288 Pronoun # m-anaphors they 278 them 165 we 140 you 43 everybody 12 Table 1: Counts of the most frequent m-anaphoric pronouns in P&P. (2013), and Hirschman (1997): Let M denote the set of all identified mentions in a document and let M(x) ⊆M denote all mentions preceding a mention x ∈M. The objective of the task is, for each x ∈M, to find C ⊆M(x) such that all mentions in C are antecedent to x. If C = ∅, then x is nonanaphoric and if |C| ≥1, then x is 1-anaphoric, and if |C| > 1, then x is m-anaphoric. Hence, this formulation generalizes the problem to account for multi-antecedent anaphors. To constrain the scope of the study, we perform all our analyses on gold mentions, leaving the effect of imperfect mention detection as a problem for future work (this has been studied for the single-antecedent case in Stoyanov et al. (2009)). Moreover, we only consider mentions of they and them that are known to be m-anaphoric for three reasons. First, non-pronomial m-anaphors, i.e. proper and common nouns, are much more susceptible to long-distance effects and may require external knowledge to resolve. Second, by focusing on this case, we circumvent a host of very involved aspects of the complete m-anaphor resolution problem, i.e. determining whether a mention is m-anaphoric, 1-anaphoric, or not anaphoric at all. For example, you may refer to one person or multiple, who can be used as an interrogative (non-anaphoric) or reflexive pronoun (anaphoric)), pronouns such as anyone and everyone introduce many scoping difficulties, and pleonastic pronouns must be removed from the inference task entirely. Third, they and them are the most prevalent of all pronouns in our dataset (refer to Table 1). 3 Data Our dataset comprises of the Pride and Prejudice novel (P&P) (121440 words) and 36 short stories from the Scribner Anthology of Contemporary Short Fiction (Martone et al., 1999) (Scribner) (total of 216901 words), representing an eclectic collection of stories from the modern era. For P&P, they them Total # % # % # % P&P 278 32.10 165 19.05 443 51.15 Scribner 243 12.96 79 4.21 322 17.17 Total 521 19.01 244 8.90 765 27.91 Table 2: Number of m-anaphoric they and them mentions and % of all they and them mentions that are m-anaphors. all mentions of character have been fully resolved to their antecedents, including mentions referencing multiple characters. 
For Scribner, all mentions of they and them are resolved (m-anaphoric, 1anaphoric, and singleton), including those of nonperson entities. These stories were annotated by three annotators according to a slightly modified version of the ACE coreference resolution task formulation (Doddington et al., 2004) to allow multiple antecedents. Annotations were conducted through the brat1 annotation tool (Stenetorp et al., 2012)) and the inter-annotator agreement on the shared texts (3 stories from Scribner + 7 chapters from P&P) was 86.5%. Overall, in P&P, 1289 m-anaphors were discovered, of which 34 (2.6%) were proper nouns, 536 (41.6%) were common nouns, and 719 (55.8%) were pronouns. Table 2 shows the number of gold m-anaphoric they and them mentions and the percentage of all they and them mentions that are manaphoric. Literary works were chosen over other textual modalities, e.g. news articles, because they showed a higher density of m-anaphors (a preliminary annotation exercise showed that literary works contained 37% more m-anaphors per word). The dataset is partitioned according to a roughly, 60/20/20 split into training, validation, and testing sets, where the split is applied to the text of P&P (e.g. the first 60% of story text is used for training), and the collection of Scribner stories (e.g. 60% of the stories were used for training). 4 Behaviour of m-anaphors m-anaphors present a novel class of anaphor for which very little knowledge exists. To better understand the linguistic behaviour of m-anaphors, we perform the following analyses. First, we examine first and second order statistics of our 1http://brat.nlplab.org 2289 First Second Avg. distance (# words) 17.08 33.50 Std. distance (# words) 23.80 40.66 Avg. distance (# sent.) 1.19 2.28 Std. distance (# sent.) 3.18 5.10 Avg. # intermediates 1.44 4.21 Std. # intermediates 2.33 4.44 Table 3: Average and standard deviations of the word distance, sentence distance, and number of intermediate mentions between the first and second most recent mentions to an m-anaphor. dataset to gain insight into the distribution of manaphors across a number of dimensions. Second, we fit a maximum entropy model over common coreference features for distinguishing manaphoric and anaphoric mentions to evaluate the importance of various features in determining manaphoricity versus anaphoricity of mentions. 4.1 m-anaphor Statistics The distribution of m-anaphors according to the number of referenced m−antecedents is as follows: 79.3% are 2-anaphors, 13.2% are 3anaphors, 3.7% are 4-anaphors, and the remaining 3.8% refer to larger numbers of antecedents. Despite the bias towards 2-anaphors, the simple approach to m-anaphor resolution of taking the previous two mentions as m-antecedent siblings will fail according to Table 3. The usual presence of intermediate mentions between manaphors and their m-antecedents makes the resolution task non-trivial. Moreover, the large distances between m-anaphors and their antecedents attenuates any signal for coreference, introducing greater noise to the problem. 4.2 m-anaphoricity Features The statistics discussed above shed light on the complexity of this problem. Here, we examine whether certain surface-level features of anaphoric phenomena from prior work exhibit any differences for m-anaphoric mentions over anaphoric ones. We construct a maximum entropy model from the training data over the combination of syntactic and semantic features in Table 4, inspired by Wiseman et al. 
(2015), Durrett and Klein (2013), and Recasens et al. (2013b). The binary classification decision is between m-anaphoric and 1anaphoric mentions, coded as ‘1’ and ‘0’, respectively. Therefore, the estimated coefficients that Feature Coefficient p-value Sentence position = first 0.16 0.13 Sentence position = last -0.18 0.006 Dependency = subject 0.27 0.05 Dependency = object 0.08 0.24 Dependency = preposition -0.22 0.07 Coordinated = true 0.29 0.08 Presence of negation 0.06 0.31 Presence of modality 0.04 0.21 Table 4: Features for m-anaphoricity versus 1anaphoricity with coefficients estimated from a maximum entropy model, and associated p-values. are positive favor m-anaphoricity and those that are negative favor 1-anaphoricity. Except for the feature testing on the last sentence position, none of the results in Table 4 were able to reach statistical significance, suggesting at a surface level, m-anaphoricity and 1-anaphoricity behave very similarly and operate in similar linguistic environments. One possibility is that a deeper set of features is required for distinguishing m-anaphors from 1-anaphors. We identify this as an important topic for future work in this area. 5 m-anaphor Resolution Our approach to m-anaphor resolution draws inspiration from mention pair models for coreference that make independent binary classification decisions (Ng, 2010). In our method, we employ a maximum entropy model that makes binary decisions on mention pairs as well, but the decision corresponds to “group compatibility” of mentions, i.e. to what degree can a given set of mentions be the sibling m-antecedents to the same manaphor. This model is embedded in an agglomerative clustering process, after which a coreference decision is made between clusters and the given m-anaphor. Thus, our model treats the grouping of candidate mentions into sibling sets independently from antecedent-anaphor linking. 5.1 Architecture Given an m-anaphor g in document D, the steps of our approach are as follows: 1. Mentions preceding g within a k-sentence window are extracted as candidate mantecedents to g. 2. Perform an agglomerative clustering of the candidate mentions using similarity metric 2290 SIM1 and average-linkage criteria. Let C represent the clustering. 3. Each non-singleton cluster C ∈C is scored according to the probability of coreference of the m-anaphor to the cluster. This is done by appealing to an external corpus comprising of sentences containing either they or them. The grouping of sentences in the document containing all of the mentions in C (and sentences in-between) are compared to each they or them sentence in the external corpus (depending on the identity of g) using similarity metric SIM2. The sentence yielding the maximum similarity is selected. The probability of coreference is then calculated by replacing the sentence grouping with the extracted sentence and applying an existing coreference system COREF between g and its counterpart (they or them) in the extracted sentence. 4. The cluster Cmax producing the highest probability of coreference is predicted as the group of m-antecedents for g. Again, inspired by mention-pair models for coreference resolution (Clark and Manning, 2015; Bj¨orkelund and Farkas, 2012; Ng and Cardie, 2002a), the SIM1 similarity metric is defined as σ(w⊤x), where w is a weight vector and x is a feature vector defined for a pair of mentions. 
The parameter vector w is learned using the standard cross-entropy loss function in a maximum entropy model, where the target variable is a decision on whether the mentions pairs are siblings or not. The learning is conducted over the training set with L2regularization. For SIM2, which is responsible for selecting replacement sentences, we experiment with two different similarity metrics: (1) longest common subsequence normalized by sentence length (LCS) and (2) a subset tree kernel (Collins and Duffy, 2002) with a bag-of-words extension as described in Moschitti (2006), which also describes a simple adaptation to forests (for multiple sentences). The named entity (NE) mentions in sentences are replaced by corresponding NE type placeholders (PERSON, LOCATION, etc. as described in Finkel et al. (2005)) before comparison. In the experiments to follow, we adopt the classification mention-pair model, a component of the statistical coreference resolution system available in the Stanford CoreNLP suite2 system, described in Clark and Manning (2015), as COREF for scoring coreference. The external corpus was built from texts comparable to our dataset. 651,108 sentences containing one of they or them were mined from a larger corpus of 798 literary texts spanning the nineteenth and twentieth centuries (including novels such as To The Lighthouse, by Virginia Woolf). Lastly, the candidate m-antecedents are extracted from a 5-sentence pre-window of the given m-anaphor (k = 5) and the regularization parameter in learning is set to 0.20. 5.2 Clustering Features Table 5 depicts the features we chose to use in the pairwise similarity metric (SIM1) for agglomerative clustering of candidate m-antecedents. All are common to many coreference resolver systems (Durrett and Klein, 2013; Recasens et al., 2013b; Stoyanov et al., 2010). We distinguish between mention features (Columns 1 & 2), which are defined for each candidate m-antecedent in a pair, and pairwise features (Columns 3-5), which are defined over a pair of candidate m-antecedents. Three features, in particular, deserve further discussion. Under morphosyntax (Column 3), [Type Conjunctions] is a placeholder for a number of conjunctive boolean features derived from the noun type (pronoun/proper/common) of each antecedent in a pairing: e.g., pronoun-pronoun, pronoun-proper, proper-pronoun. Similarly, [Dependency Conjunctions] is a placeholder for conjunctive boolean features derived from the grammatical dependency of each antecedent in a pairing: e.g., subject-subject, subject-object, objectsubject. The [# Dependency Pairings] is an ordinal version of the Dependency Conjunctions feature set - a count of the number of occurrences rather than an indicator variable. The ‘Governor = except’ feature triggers if one of the mentions in the mention pair is governed by except or exclude. It represents a form of negation of group membership (e.g. Everyone except for Mary visited Castlebary). Features were extracted using the Stanford CoreNLP system (Manning et al., 2014) and animacy information was specifically obtained through the Stanford deterministic coreference resolution module (Lee et al., 2011). 2http://stanfordnlp.github.io/CoreNLP/ coref.html 2291 Morphosyntax (Mention) Grammatical (Mention) Morphosyntax (Pairwise) Grammatical (Pairwise) Semantic (Pairwise) Type = pronoun Sentence position = first Head match Word distance (max. 
30) Governor = except Type = proper noun Sentence position = last [Type Conjunctions] Sentence distance # Conjunctive pairings Animacy = animate Dependency = subject Coordination = and [# Dependency Pairings] Animacy = unknown Dependency = object [Dependency Conjunctions] Person = first Dependency = preposition Person = third Singular = true Quantified = true # Modifiers Table 5: Features used in the clustering similarity metric, separated by category. The features [Type Conjunctions], [Dependency Conjunctions], and [# Dependency Pairings] are all placeholders for feature sets. See the text for details. 6 Experiments In order to assess the performance of our method, we conduct two experiments. In the first, we assess performance of our system on the specific they-them m-anaphor resolution sub-task. Our system, and its variants, are compared against a number of baseline methods based on performance on the test set. In the second experiment, we consider how our system improves the performance of a coreference resolution system when all anaphors (both 1-anaphors and m-anaphors) are considered. 6.1 Evaluation Accuracy is measured in terms of the number of mention pairs correctly grouped as mantecedents for a given m-anaphor — similar to previous works in anaphor resolution (Peng et al., 2015). We use the standard classification metrics for precision, recall, and F1-score. If n1, n2, . . . , nN represent the number of gold m-antecedents for m-anaphors g1, g2, . . . , gN in a document, and m1, m2, . . . , mN are predicted, of which k1, k2, . . . , kN are correct, then precision is defined as P i ki/ P i mi and recall as P i ki/ P i ni, where i ranges from 1 to N. In order to align ourselves with the gold labels, we adjust the predicted mention corresponding to an entity to the closest one preceding the given m-anaphor. Because a given entity may appear multiple times in a candidate mention window, the most recent one, relative to the m-anaphor, is not always the one carrying the strongest signal and hence is not always predicted as an antecedent. For the purposes of evaluation, such cases are considered correct. Automatic handling would involve a separate, single-antecedent coreference resolver, but given the thesis of this work is the multi-antecedent case, this choice is justified. 6.2 System Comparison We first describe the various baselines and variants of our method we assess and then analyze the performance results. Systems • The “most-recent-k” baselines (denoted RECENT-k), which predict the most recent k mentions, relative to the m-anaphor, as the m-antecedents for k = 2, 3, 4. • The random selection baseline (denoted RANDOM), which randomly predicts mentions in a 5-sentence pre-window as the antecedents according to a binomial with probability 0.5 (imposing the constraint that at least two must be predicted). • A simple rule-based method (denoted RULE) which proceeds as follows: – If the m-anaphor occupies a subject or prepositional position, then predict the most recent mentions in subject positions if they are coordinated, otherwise take them from previous, distinct sentences. If no such mentions can be found take the most recent mentions in subject and object positions governed by the same verb. – If the m-anaphor occupies in object position, take the previous mentions in object or prepositional positions if they are coordinated, otherwise take them from previous, distinct sentences. 
If no such mentions can be found, take the most recent mentions in subject and object positions governed by the same verb. – Otherwise, take the two most recent mentions (usually arrive here if there is an error in the dependency parsing). 2292 Precision Recall F1 RECENT-2 21.46 17.68 19.39 RECENT-3 23.73 30.10 26.54 RECENT-4 21.43 38.82 27.62 RANDOM 30.02 29.11 29.56 RULE 39.23 17.45 24.16 LEE 46.78 9.91 16.36 M-LCS 41.35 37.81 39.50 M-TREE 41.94 44.88 43.36 Table 6: Test set performance of each system on the m-anaphor resolution task. m-anaphor class Precison Recall F1 2-anaphor 48.14 52.90 50.41 3-anaphor 35.92 34.77 35.34 4-anaphor 36.74 12.87 19.06 Table 7: Performance results of the M-TREE system on the different classes of m-anaphors. • The system described in Lee et al. (2011) (denoted LEE), which performs some light m-anaphor resolution (solely for conjunctive cases). • The two variants of the developed method, one using the LCS similarity metric (denoted M-LCS) and the other using the subset tree kernel (M-TREE). Results and Discussion Accuracy results on the test set for each of the systems are given in Table 6. Both the proposed systems, M-LCS and M-TREE, outperform all other methods by a substantial margin. The Stanford system achieves the highest precision, which is not surprising because it targets conjunctive mentions, which often serve as m-antecedents. Based on the analysis of Section 4, the poor performance of RECENT-2, RECENT-3, and RECENT-4 is expected. The results for the best-performing system, MTREE, on the different classes of m-anaphors is given in Table 7. M-TREE outperforms all other systems but exhibits a bias towards 2-anaphors, recent mentions, and mentions coordinated by conjunction. This is not surprising given such cases are the easiest to resolve. 6.3 Full Coreference Resolution For the complete coreference resolution task, the M-TREE system can be integrated with an existMUC B3 CEAFe Avg. CLARK 42.3 39.5 32.4 38.1 CLARK+M-TREE 43.4 40.0 31.9 38.7 Table 8: CoNLL metric scores for coreference resolution on the test portion of P&P for the Clark and Manning (2015) system, with (CLARK+MTREE) and without (CLARK) the pairing with MTREE. ing coreference system. For this experiment, we pair the full coreference resolution system of Clark and Manning (2015) with M-TREE, and we raise the prediction threshold of our model to 0.89, at which point precision on the validation set is 78.9. Moreover, we restrict ourselves to the P&P portion of the test set, given the Scribner stories only have gold labels for instances of they and them. The Clark and Manning (2015) system is first run over the test set, producing coreference chains which are then filtered for character entities using the approach of Vala et al. (2015). Our adjusted M-TREE system is then applied over all they and them mentions. Each such mention predicted as m-anaphoric is added to the coreference chains of the entities corresponding to the m-antecedent mentions. To evaluate the accuracy against the gold mention clusters, each m-anaphoric they and them is added to each cluster containing a gold mantecedent. The CoNLL metric scores (Bagga and Baldwin, 1998) of the coreference predictions are shown in Table 8, with the integrated system outperforming the Clark and Manning (2015) system by 0.6 average score (pairing the Clark and Manning (2015) system instead with an oracle manaphor resolver yields an average score of 44.8, an increase of 6.7 points). 
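As a reference for the pairwise evaluation of Section 6.1, the sketch below micro-averages the per-anaphor counts into the precision, recall, and F1 defined there; representing the gold and predicted m-antecedents as sets of (entity-adjusted) mention ids is an assumption of this illustration.

```python
def m_anaphor_scores(gold, predicted):
    """gold and predicted map each m-anaphor g_i to its set of antecedent
    mentions (n_i gold, m_i predicted, k_i correct). Precision is
    sum_i k_i / sum_i m_i and recall is sum_i k_i / sum_i n_i."""
    correct = sum(len(gold[g] & predicted.get(g, set())) for g in gold)
    n_predicted = sum(len(predicted.get(g, set())) for g in gold)
    n_gold = sum(len(gold[g]) for g in gold)
    precision = correct / n_predicted if n_predicted else 0.0
    recall = correct / n_gold if n_gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```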
7 Related Work The formal problem statement for the noun phrase anaphor resolution we propose is an extension of the standard ACE (Doddington et al., 2004), MUC (Hirschman, 1997), and Ontonotes (Hovy et al., 2006) formulations, as well as the problem settings outlined in Wiseman et al. (2015) and Durrett and Klein (2013), to allow anaphors to link to multiple antecedents. Most previous works impose the constraint that anaphors can be assigned at most one antecedent. Some works cast the coreference resolution problem in an Integer Linear Programming framework, with an explicit constraint for 2293 assigning at most one antecedent to an anaphor (Peng et al., 2015; Denis et al., 2007). The early work of Ingria and Stallard (1989) proposes the resolution of pronouns without the restriction they be linked to at most one antecedent. The method uses an indexing scheme for parse trees, similar to Hobb’s algorithm (Hobbs, 1978), that eliminates candidates antecedents as more information is acquired. Those pronouns with multiple candidates remaining after treetraversal are predicted as m-anaphors. The method considers each parse tree in isolation, and hence does not permit inter-sentential linking, a severe limitation in corpora such as the one offered in this work. Other researchers have evaluated noun phrase coreference resolvers along a number of dimensions, including different classes of anaphors (Mitkov, 2014; Kummerfeld and Klein, 2013; Bj¨orkelund and Nugues, 2011; Recasens and Hovy, 2009; Stoyanov et al., 2009; Bengtson and Roth, 2008; Van Deemter and Kibble, 2000; Ng and Cardie, 2002b; Kameyama, 1997). This work explores a new class of anaphor, previously unstudied, and evaluates its impact on the coreference resolution problem. Many state-of-the-art systems for coreference resolution, especially supervised, are constrained to the single-antecedent case (Clark and Manning, 2015; Peng et al., 2015; Wiseman et al., 2015; Bj¨orkelund and Farkas, 2012; Ng, 2010; Stoyanov et al., 2010; Ng, 2008; Soon et al., 2001). The most well-known, benchmark datasets for coreference resolution (e.g. Ontonotes and ACE-2005), do not offer gold annotations for multi-antecendet anaphors. Our work presents the first dataset for tackling this problem. The Lee et al. (2011) is a deterministic system that attempts to resolve the “easy” multiantecedent cases, namely those in which mentions are joined by some conjunction. Our system goes beyond and attempts to predict more difficult cases as well. Many of the individual features we employ in our model appear in a variety of other coreference systems, especially those involving mentionpair models (Durrett and Klein, 2013; Recasens et al., 2013b; Stoyanov et al., 2010). Recasens et al. (2013a) attempts to perform coreference resolution under conditions where many standard features for coreference are not suited. Peng et al. (2015) resort to corpus counts of predicates as features, much in the same way we obtain counts of mention pairings according to simple predicates on dependency structures. The system of Clark and Manning (2015) also makes uses of agglomerative clustering, although it’s employed in merging coreference chains, rather than candidate antecedent groupings. Last, resorting to an external corpus for sentence structures is common practice in the Natural Language Generation literature for producing phrases that are coherent and consistent(Krishnamoorthy et al., 2013; Bangalore and Rambow, 2000; Langkilde and Knight, 1998). 
8 Conclusion We introduced a new class of anaphors to the anaphor resolution problem, m-anaphors, and extended the problem formulation to incorporate them. We offered insights into the linguistic behaviour of m-anaphors, finding that surface-level syntactic and semantic features do not carry enough discriminative power in distinguishing them from 1-anaphors. Furthermore, we developed a system combining a mention-pair model, an existing coreference resolver, and corpus knowledge to resolve m-anaphors that scores higher than a number of baseline methods. Finally, we paired this system with a coreference resolver to solve the general coreference resolution task, showing that m-anaphor prediction can help boost performance. An important component of the m-anaphor resolution problem that falls outside the scope of this study, but is important for practical application, is the detection of m-anaphoric mentions. Section 4 gives some insight into the problem but a much deeper investigation is necessary to devise a detection method. Moreover, for simplicity, this study focused solely on m-anaphoric they and them mentions, but as explained earlier, m-anaphoric mentions can take many forms, each introducing their own particular complexities that warrant special attention. Regarding the system developed for m-anaphor resolution, resorting to an external corpus to obtain well-formed sentences proved to be very computationally expensive. In future work, we look to incorporate methods that incur less cost, possibly tolerating some error in the formation of 2294 sentences without significantly degrading performance. Also, negation of group membership is a complex linguistic phenomenon that was handled in a crude manner in our system. We look to devote future work to handling such cases. To promote further research into m-anaphors, we make all our data and software freely available at http://www.github.com/ networkdynamics/manaphor-acl2016. References Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In The first international conference on language resources and evaluation workshop on linguistics coreference, volume 1, pages 563–566. Citeseer. Srinivas Bangalore and Owen Rambow. 2000. Exploiting a probabilistic hierarchical model for generation. In Proceedings of the 18th conference on Computational linguistics-Volume 1, pages 42–48. Association for Computational Linguistics. Eric Bengtson and Dan Roth. 2008. Understanding the value of features for coreference resolution. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 294–303. Association for Computational Linguistics. Shane Bergsma and Dekang Lin. 2006. Bootstrapping path-based pronoun resolution. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 33–40. Association for Computational Linguistics. Anders Bj¨orkelund and Rich´ard Farkas. 2012. Datadriven multilingual coreference resolution using resolver stacking. In Joint Conference on EMNLP and CoNLL-Shared Task, pages 49–55. Association for Computational Linguistics. Anders Bj¨orkelund and Pierre Nugues. 2011. Exploring lexicalized features for coreference resolution. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 45–50. Association for Computational Linguistics. Kevin Clark and Christopher D Manning. 2015. 
Entity-centric coreference resolution with model stacking. In Association of Computational Linguistics (ACL). Michael Collins and Nigel Duffy. 2002. New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 263–270. Association for Computational Linguistics. Pascal Denis, Jason Baldridge, et al. 2007. Joint determination of anaphoricity and coreference resolution using integer programming. In HLT-NAACL, pages 236–243. Citeseer. George R Doddington, Alexis Mitchell, Mark A Przybocki, Lance A Ramshaw, Stephanie Strassel, and Ralph M Weischedel. 2004. The automatic content extraction (ace) program-tasks, data, and evaluation. In LREC, volume 2, page 1. Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In EMNLP, pages 1971–1982. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 363–370. Association for Computational Linguistics. Lynette Hirschman. 1997. {MUC-7 Coreference Task Definition}. Jerry R Hobbs. 1978. Resolving pronoun references. Lingua, 44(4):311–338. Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: the 90% solution. In Proceedings of the human language technology conference of the NAACL, Companion Volume: Short Papers, pages 57–60. Association for Computational Linguistics. Robert JP Ingria and David Stallard. 1989. A computational mechanism for pronominal reference. In Proceedings of the 27th annual meeting on Association for Computational Linguistics, pages 262–271. Association for Computational Linguistics. Megumi Kameyama. 1997. Recognizing referential links: An information extraction perspective. In Proceedings of a Workshop on Operational Factors in Practical, Robust Anaphora Resolution for Unrestricted Texts, pages 46–53. Association for Computational Linguistics. Niveda Krishnamoorthy, Girish Malkarnenkar, Raymond J Mooney, Kate Saenko, and Sergio Guadarrama. 2013. Generating natural-language video descriptions using text-mined knowledge. In AAAI, volume 1, page 2. Jonathan K Kummerfeld and Dan Klein. 2013. Errordriven analysis of challenges in coreference resolution. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 265–277. Irene Langkilde and Kevin Knight. 1998. Generation that exploits corpus-based statistical knowledge. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Compu2295 tational Linguistics-Volume 1, pages 704–710. Association for Computational Linguistics. Heeyoung Lee, Yves Peirsman, Angel Chang, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2011. Stanford’s multi-pass sieve coreference resolution system at the conll-2011 shared task. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 28–34. Association for Computational Linguistics. Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In ACL (System Demonstrations), pages 55–60. Michael Martone, Lex Williford, and Rosellen Brown. 1999. 
The Scribner Anthology of Contemporary Short Fiction: Fifty North American Stories Since 1970. Touchstone. Ruslan Mitkov. 2014. Anaphora resolution. Routledge. Alessandro Moschitti. 2006. Making tree kernels practical for natural language learning. In EACL, volume 113, page 24. Vincent Ng and Claire Cardie. 2002a. Identifying anaphoric and non-anaphoric noun phrases to improve coreference resolution. In Proceedings of the 19th international conference on Computational linguistics-Volume 1, pages 1–7. Association for Computational Linguistics. Vincent Ng and Claire Cardie. 2002b. Improving machine learning approaches to coreference resolution. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 104– 111. Association for Computational Linguistics. Vincent Ng. 2008. Unsupervised models for coreference resolution. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 640–649. Association for Computational Linguistics. Vincent Ng. 2010. Supervised noun phrase coreference research: The first fifteen years. In Proceedings of the 48th annual meeting of the association for computational linguistics, pages 1396–1411. Association for Computational Linguistics. Haoruo Peng, Daniel Khashabi, and Dan Roth. 2015. Solving hard coreference problems. Urbana, 51:61801. Marta Recasens and Eduard Hovy. 2009. A deeper look into features for coreference resolution. In Anaphora Processing and Applications, pages 29– 42. Springer. Marta Recasens, Matthew Can, and Daniel Jurafsky. 2013a. Same referent, different words: Unsupervised mining of opaque coreferent mentions. In HLT-NAACL, pages 897–906. Marta Recasens, Marie-Catherine de Marneffe, and Christopher Potts. 2013b. The life and death of discourse entities: Identifying singleton mentions. In HLT-NAACL, pages 627–633. Wee Meng Soon, Hwee Tou Ng, and Daniel Chung Yong Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational linguistics, 27(4):521–544. Pontus Stenetorp, Sampo Pyysalo, Goran Topi´c, Tomoko Ohta, Sophia Ananiadou, and Jun’ichi Tsujii. 2012. brat: a web-based tool for NLP-assisted text annotation. In Proceedings of the Demonstrations Session at EACL 2012, Avignon, France, April. Association for Computational Linguistics. Veselin Stoyanov, Nathan Gilbert, Claire Cardie, and Ellen Riloff. 2009. Conundrums in noun phrase coreference resolution: Making sense of the stateof-the-art. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 656–664. Association for Computational Linguistics. Veselin Stoyanov, Claire Cardie, Nathan Gilbert, Ellen Riloff, David Buttler, and David Hysom. 2010. Reconcile: A coreference resolution research platform. Hardik Vala, David Jurgens, Andrew Piper, and Derek Ruths. 2015. Mr. bennet, his coachman, and the archbishop walk into a bar but only one of them gets recognized: On the difficulty of detecting characters in literary texts. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 769–774. Kees Van Deemter and Rodger Kibble. 2000. On coreferring: Coreference in muc and related annotation schemes. Computational linguistics, 26(4):629– 637. Sam Wiseman, Alexander M Rush, Stuart M Shieber, Jason Weston, Heather Pon-Barry, Stuart M Shieber, Nicholas Longenbaugh, Sam Wiseman, Stuart M Shieber, Elif Yamangil, et al. 2015. 
Learning anaphoricity and antecedent ranking features for coreference resolution. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, volume 1, pages 92–100. Association for Computational Linguistics.
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2297–2305, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Automatic Labeling of Topic Models Using Text Summaries Xiaojun Wan and Tianming Wang Institute of Computer Science and Technology, The MOE Key Laboratory of Computational Linguistics, Peking University, Beijing 100871, China {wanxiaojun, wangtm}@pku.edu.cn Abstract Labeling topics learned by topic models is a challenging problem. Previous studies have used words, phrases and images to label topics. In this paper, we propose to use text summaries for topic labeling. Several sentences are extracted from the most related documents to form the summary for each topic. In order to obtain summaries with both high relevance, coverage and discrimination for all the topics, we propose an algorithm based on submodular optimization. Both automatic and manual analysis have been conducted on two real document collections, and we find 1) the summaries extracted by our proposed algorithm are superior over the summaries extracted by existing popular summarization methods; 2) the use of summaries as labels has obvious advantages over the use of words and phrases. 1 Introduction Statistical topic modelling plays very important roles in many research areas, such as text mining, natural language processing and information retrieval. Popular topic modeling techniques include Latent Dirichlet Allocation (LDA) (Blei et al., 2003) and Probabilistic Latent Semantic Analysis (pLSA) (Hofmann, 1999). These techniques can automatically discover the abstract “topics” that occur in a collection of documents. They model the documents as a mixture of topics, and each topic is modeled as a probability distribution over words. Although the discovered topics’ word distributions are sometimes intuitively meaningful, a major challenge shared by all such topic models is to accurately interpret the meaning of each topic (Mei et al., 2007). The interpretation of each topic is very important when people want to browse, understand and leverage the topic. However, it is usually very hard for a user to understand the discovered topics based only on the multinomial distribution of words. For example, here are the top terms for a discovered topic: {fire miles area north southern people coast homes south damage northern river state friday central water rain high california weather}. It is not easy for a user to fully understand this topic if the user is not very familiar with the document collection. The situation may become worse when the user faces with a number of discovered topics and the sets of top terms of the topics are often overlapping with each other on many practical document collections. In order to address the above challenge, a few previous studies have proposed to use phrases, concepts and even images for labeling the discovered topics (Mei et al., 2007; Lau et al., 2011; Hulpus et al., 2013; Aletras and Stevenson, 2013). For example, we may automatically extract the phrase “southern california” to represent the example topic mentioned earlier. These topic labels can help the user to understand the topics to some extent. However, the use of phrases or concepts as topic labels are not very satisfactory in practice, because the phrases or concepts are still very short, and the information expressed in these short labels is not adequate for user’s understanding. 
The case will become worse when some ambiguous phrase is used or multiple discrete phrases with poor coherence are used for a topic. To address the drawbacks of the above short labels, we need to provide more contextual information and consider using long text descriptions to represent the topics. The long text descriptions can be used independently or used as beneficial complement to the short labels. For example, below is part of the summary label produced by our proposed method and it provides much more contextual information for understanding the topic. Showers and thunderstorms developed in parched areas of the southeast , from western north carolina into south central alabama , north central and northeast texas and the central and southern gulf coast . … The quake was felt over a 2297 large area , extending from santa rosa , about 60 miles north of san francisco , to the santa cruz area 70 miles to the south …. Fourteen homes were destroyed in baldwin park 20 miles northeast of downtown los angeles and five were damaged along with five commercial buildings when 75 mph gusts snapped power lines , igniting a fire at allan paper co. , fire officials said . … The contributions of this paper are summarized as follows: 1) We are the first to invesitage using text summaries for topic labeling; 2) We propose a summarization algorithm based on submodular optimization to extract summaries with both high relevance, coverage and discrimination for all topics. 3) Automatic and manual analysis reveals the usefulness and advantages of the summaries produced by our algorithm. 2 Related Work 2.1 Topic Labeling After topics are discovered by topic modeling techniques, these topics are conventionally represented by their top N words or terms (Blei et al., 2003; Griffiths and Steyvers, 2004). The words or terms in a topic are ranked based on the conditional probability p(𝑤𝑖|𝑡𝑗) in that topic. It is sometimes not easy for users to understand each topic based on the terms. Sometimes topics are presented with manual labeling for exploring research publications (Wang and McCallum, 2006; Mei et al., 2006), and the labeling process is time consuming. In order to make the topic representations more interpretable and make the topics easier to understand, there are a few studies proposing to automatically find phrases, concepts or even images for topic labeling. Mei et al. (2007) proposed to use phrases (chunks or ngrams) for topic labeling and cast the labeling problem as an optimization problem involving minimizing Kullback-Leibler (KL) divergence between word distributions and maximizing mutual information between a label and a topic model. Lau et al. (2011) also used phrases as topic labels and they proposed to use supervised learning techniques for ranking candidate labels. In their work, candidate labels include the top-5 topic terms and a few noun chunks extracted from related Wikipedia articles. Mao et al. (2012) proposed two effective algorithms that automatically assign concise labels to each topic in a hierarchy by exploiting sibling and parent-child relations among topics. Kou et al. (2015) proposed to map topics and candidate labels (phrases) to word vectors and letter trigram vectors in order to find which candidate label is more semantically related to that topic. Hulpus et al. (2013) took a new approach based on graph centrality measures to topic labelling by making use of structured data exposed by DBpedia. 
Different from the above works, Aletras and Stevenson (2013) proposed to use images for representing topics, where candidate images for each topic are retrieved from the web and the most suitable image is selected by using a graph-based algorithm. In a very recent study (Aletras et al., 2015), 3 different topic representations (lists of terms, textual phrase labels and images labels) are compared in a document retrieval task, and results show that textual phrase labels are easier for users to interpret than term lists and image labels. The phrase-based labels in the above works are still very short and are sometimes not adequate for interpreting the topics. Unfortunately, none of previous works has investigated using textual summaries for representing topics yet. 2.2 Document Summarization The task of document summarization aims to produce a summary with a length limit for a given document or document set. The task has been extensively investigated in the natural language processing and information retrieval fields, and most previous works focus on directly extracting sentences from a news document or collection to form the summary. The summary can be used for helping users quickly browse and understand a document or document collection. Typical multi-document summarization methods include the centroid-based method (Radev et al., 2004), integer linear programming (ILP) (Gillick et al., 2008), sentence-based LDA (Chang and Chien, 2009), submodular function maximization (Lin and Bilmes, 2010; Lin and Bilmes, 2011), graph based methods (Erkan and Radev, 2004; Wan et al., 2007; Wan and Yang, 2008), and supervised learning based methods (Ouyang et al., 2007; Shen et al., 2007). Though different summarization methods have been proposed in recent years, the submodular function maximization method is still one of the state-of-the-art summarization methods. Moreover, the method is easy to follow and its framework is very flexible. One can design specific submodular functions for addressing special summarization tasks, without altering the overall greedy selection framework. 2298 Though various summarization methods have been proposed, none of existing works has investigated or tried to adapt document summarization techniques for the task of automatic labeling of topic models. 3 Problem Formulation Given a set of latent topics extracted from a text collection and each topic is represented by a multinomial distribution over words, our goal is to produce understandable text summaries as labels for interpreting all the topics. We now give two useful definitions for later use. Topic: Each topic 𝜃 is a probability distribution of words {𝑝𝜃(𝑤)}𝑤∈𝑉, where V is the vocabulary set, and we have ∑ 𝑝𝜃(𝑤) = 1 𝑤∈𝑉 . Topic Summary: In this study, a summary for each topic 𝜃 is a set of sentences extracted from the document collection and it can be used as a label to represent the latent meaning of 𝜃. Typically, the length of the summary is limited to 250 words, as defined in recent DUC and TAC conferences. Like the criteria for the topic labels in (Mei et al., 2007), the topic summary for each topic needs to meet the following two criteria: High Relevance: The summary needs to be semantically relevant to the topic, i.e., the summary needs to be closely relevant to all representative documents of the topic. The higher the relevance is, the better the summary is. This criterion is intuitive because we do not expect to obtain a summary unrelated to the topic. 
High Coverage: The summary needs to cover as much semantic information of the topic as possible. The summary usually consists of several sentences, and we do not expect all the sentences to focus on the same piece of semantic information. A summary with high coverage will certainly not contain redundant information. This criterion is very similar to the diversity requirement of multi-document summarization. Since we usually produce a set of summaries for all the topics discovered in a document collection. In order to facilitate users to understand all the topics, the summaries need to meet the following additional criterion: High Discrimination: The summaries for different topics need to have inter-topic discrimination. If the summaries for two or more topics are very similar with each other, users can hardly understand each topic appropriately. The higher the inter-topic discrimination is, the better the summaries are. 4 Our Method Our proposed method is based on submodular optimization, and it can extract summaries with both high relevance, coverage and discrimination for all topics. We choose the framework of submodular optimization because the framework is very flexible and different objectives can be easily incorporated into the framework. The overall framework of our method consists of two phases: candidate sentence selection, and topic summary extraction. The two phrases are described in the next two subsections, respectively. 4.1 Candidate Sentence Selection There are usually many thousands of sentences in a document collection for topic modelling, and all the sentences are more or less correlated with each topic. If we use all the sentences for summary extraction, the summarization efficiency will be very low. Moreover, many sentences are not suitable for summarization because of their low relevance with the topic. Therefore, we filter out the large number of unrelated sentences and treat the remaining sentences as candidates for summary extraction. For each topic 𝜃, we compute the KullbackLeibler (KL) divergence between the word distributions of the topic and each sentence s in the whole document collection as follows: 𝐾𝐿(𝜃, 𝑠) = ∑ 𝑝𝜃(𝑤) ∗𝑙𝑜𝑔 𝑝𝜃(𝑤) 𝑡𝑓(𝑤, 𝑠) 𝑙𝑒𝑛(𝑠) ⁄ 𝑤∈𝑇𝑊∪𝑆𝑊 where 𝑝𝜃(𝑤) is the probability of word w in topic 𝜃. TW denotes the set of top 500 words in topic 𝜃 according to the probability distribution. SW denotes the set of words in sentence s after removing stop words. 𝑡𝑓(𝑤, 𝑠) denotes the frequency of word w in sentence s, and 𝑙𝑒𝑛(𝑠) denotes the length of sentence s after removing stop words. For a word w which does not appear in SW, we set 𝑡𝑓(𝑤, 𝑠) 𝑙𝑒𝑛(𝑠) ⁄ to a very small value (0.00001 in this study). Then we rank the sentences by an increasing order of the divergence scores and keep the top 500 sentences which are most related to the topic. These 500 sentences are treated as candidate sentences for the subsequent summarization step for each topic. Note that different topics have different candidate sentence sets. 4.2 Topic Summary Extraction Our method for topic summary extraction is based on submodular optimization. For each topic 𝜃 associated with the candidate sentence set V, our 2299 method aims to find an optimal summary 𝐸̃ from all possible summaries by maximizing a score function under budget constraint: 𝐸̃ = 𝑎𝑟𝑔𝑚𝑎𝑥𝐸⊆𝑉{𝑓(𝐸)} s.t. 𝑙𝑒𝑛(𝐸) ≤𝐿 where 𝑙𝑒𝑛(𝐸) denotes the length of summary E. Here E is also used to denote the set of sentences in the summary. L is a predefined length limit, i.e. 250 words in this study. 
𝑓(𝐸) is the score function to evaluate the overall quality of summary E. Usually, 𝑓(𝐸) is required to be a submodular function, so that we can use a simple greedy algorithm to find the near-optimal summary with theoretical guarantee. Formally, for any 𝐴⊆𝐵⊆𝑉\𝑣, we have 𝑓(𝐴+ 𝑣) −𝑓(𝐴) ≥𝑓(𝐵+ 𝑣) −𝑓(𝐵) which means that the incremental “value” of v decreases as the context in which v is considered grows from A to B. In this study, the score function 𝑓(𝐸) is decomposed into three parts and each part evaluates one aspect of the summary: 𝑓(𝐸) = 𝑅𝐸𝐿(𝐸) + 𝐶𝑂𝑉(𝐸) + 𝐷𝐼𝑆(𝐸) where 𝑅𝐸𝐿(𝐸) , 𝐶𝑂𝑉(𝐸) and 𝐷𝐼𝑆(𝐸) evaluate the relevance, coverage and discrimination of summary E respectively. We will describe them in details respectively. 4.2.1 Relevance Function Instead of intuitively measuring relevance between the summary and the topic via the KL divergence between the word distributions of them, we consider to measure the relevance of summary E for topic 𝜃 by the relevance of the sentences in the summary to all the candidate sentences for the topic as follows: 𝑅𝐸𝐿(𝐸) = ∑min⁡{∑𝑠𝑖𝑚(𝑠′, 𝑠), 𝛼∑𝑠𝑖𝑚(𝑠′, 𝑠) 𝑠∈𝑉 𝑠∈𝐸 } 𝑠′∈𝑉 where V represents the candidate sentence set for topic 𝜃, and E is used to represent the sentence set of the summary. 𝑠𝑖𝑚(𝑠′, 𝑠) is the standard cosine similarity between sentences 𝑠′⁡and s. 𝛼∈ [0,1] is a threshold co-efficient. The above function is a monotone submodular function because 𝑓(𝑥) = 𝑚𝑖𝑛⁡(𝑥, 𝑎) where 𝑎≥0 is a concave non-decreasing function. ∑ 𝑠𝑖𝑚(𝑠′, 𝑠) 𝑠∈𝐸 measures how similar E is to sentence 𝑠′ and then ∑ 𝑠𝑖𝑚(𝑠′, 𝑠) 𝑠∈𝑉 is the largest value that ∑ 𝑠𝑖𝑚(𝑠′, 𝑠) 𝑠∈𝐸 can achieve. Therefore, 𝑠′ is saturated by E when ∑ 𝑠𝑖𝑚(𝑠′, 𝑠) ≥ 𝑠∈𝐸 𝛼∑ 𝑠𝑖𝑚(𝑠′, 𝑠) 𝑠∈𝑉 . When 𝑠′is already saturated by E in this way, any new sentence very similar to 𝑠′ cannot further improve the overall relevance of E, and this sentence is less possible to be added to the summary. 4.2.2 Coverage Function We want the summary to cover as many topic words as possible and contain as many different sentences as possible. The coverage function is thus defined as follows: 𝐶𝑂𝑉(𝐸) = 𝛽∗∑{𝑝𝜃(𝑤) ∗√∑𝑡𝑓(𝑤, 𝑠) 𝑠∈𝐸 } 𝑤∈𝑇𝑊 where 𝛽≥0 is a combination co-efficient. The above function is a monotone submodular function and it encourages the summary E to contain many different words, rather than a small set of words. Because 𝑓(𝑥) = √𝑥 where 𝑥≥0 is a concave non-decreasing function, we have 𝑓(𝑥+ 𝑦) ≤𝑓(𝑥) + 𝑓(𝑦). The value of the function will be larger when we use x and y to represent two frequency values of two different words respectively than that when we use (𝑥+ 𝑦) to represent the frequency value of a single word. Therefore, the use of this function encourages the coverage of more different words in the summary. In other words, the diversity of the summary is enhanced. 4.2.3 Discrimination Function The function for measuring the discrimination between the summary E of topic 𝜃 and all other topics {𝜃′} is defined as follows: 𝐷𝐼𝑆(𝐸) = −𝛾∑∑∑𝑝𝜃′(𝑤) ∗𝑡𝑓(𝑤, 𝑠) ⁡ 𝑤∈𝑇𝑊 𝑠∈𝐸 𝜃′ where 𝛾≥0 is a combination co-efficient. The above function is still a monotone submodular function. The negative sign indicates that the summary E of topic 𝜃 needs to be as irrelevant with any other topic as possible, and thus making different topic summaries have much differences. 4.2.4 Greedy Selection Since 𝑅𝐸𝐿(𝐸), 𝐶𝑂𝑉(𝐸) and 𝐷𝐼𝑆(𝐸) are all submodular functions, 𝑓(𝐸) is also a submodular function. 
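Before turning to the greedy procedure, the three component functions and the combined objective can be written down directly from the definitions above. The sketch below is an illustration rather than the authors' code: sim and tf are assumed callables for cosine similarity and term frequency, topic_words maps the top topic words to their probabilities p_θ(w), reading TW in the discrimination term as the top words of each competing topic θ′ is an assumption, and the default coefficients are the values reported later in Section 5.1.

```python
import math

def rel(E, V, sim, alpha=0.05):
    """Relevance (Sec. 4.2.1): saturated similarity of summary E to the candidate set V."""
    return sum(min(sum(sim(sp, s) for s in E),
                   alpha * sum(sim(sp, s) for s in V)) for sp in V)

def cov(E, topic_words, tf, beta=250.0):
    """Coverage (Sec. 4.2.2): the square root rewards spreading counts over many topic words."""
    return beta * sum(p * math.sqrt(sum(tf(w, s) for s in E))
                      for w, p in topic_words.items())

def dis(E, other_topics, tf, gamma=300.0):
    """Discrimination (Sec. 4.2.3): penalise words that are prominent in other topics."""
    return -gamma * sum(p * tf(w, s)
                        for other in other_topics
                        for s in E
                        for w, p in other.items())

def objective(E, V, topic_words, other_topics, sim, tf):
    """f(E) = REL(E) + COV(E) + DIS(E), maximised under the 250-word length budget."""
    return rel(E, V, sim) + cov(E, topic_words, tf) + dis(E, other_topics, tf)
```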
In order to find a good approximation to the optimal summary, we use a greedy algorithm similar to (Lin and Bilmes, 2010) to select sentence one by one and produce the final summary, as shown in Algorithm 1. 2300 Algorithm 1 Greedy algorithm for summary extraction 1: 𝐸←∅ 2: 𝑈←𝑉 3: while 𝑈≠∅ do 4: 𝑠̂ ←𝑎𝑟𝑔𝑚𝑎𝑥𝑠∈𝑈 𝑓(𝐸∪{𝑠})−𝑓(𝐸) 𝑙𝑒𝑛(𝑠)𝜀 5: 𝐸←𝐸∪{𝑠̂} if ∑ 𝑙𝑒𝑛(𝑠) + 𝑙𝑒𝑛(𝑠̂) ≤𝐿 𝑠∈𝐸 and 𝑓(𝐸∪{𝑠}) −𝑓(𝐸) ≥0 6: 𝑈←𝑈∖{𝑠̂} 7: end while 8: return 𝐸 In the algorithm, 𝑙𝑒𝑛(𝑠) denotes the length of sentence s and 𝜀> 0 is the scaling factor. At each iteration, the sentence with the largest ratio of objective function gain to scaled cost is found in step 4, and if adding the sentence can increase the objective function value while not violating the length constraint, it is then selected into the summary and otherwise bypassed. 5 Evaluation and Results 5.1 Evaluation Setup We used two document collections as evaluation datasets, as in (Mei et al. 2007): AP news and SIGMOD proceedings. The AP news dataset contains a set of 2250 AP news articles, which are provided by TREC. There is a total of 43803 sentences in the AP news dataset and the vocabulary size is 37547 (after removing stop words). The SIGMOD proceeding dataset contains a set of 2128 abstracts of SIGMOD proceedings between the year 1976 and 2015, downloaded from the ACM digital library. There is a total of 15211sentences in the SIGMOD proceeding dataset and the vocabulary size is 13688. For topic modeling, we adopted the most popular LDA to discover topics in the two datasets, respectively. Particularly, we used the LDA module implemented in the MALLET toolkit1. Without loss of generality, we extracted 25 topics from the AP news dataset and 25 topics from the SIGMOD proceeding dataset. The parameter values of our proposed summarization method is either directly borrowed from previous works or empirically set as follows: 𝛼= 0.05, 𝛽= 250, 𝛾= 300 and 𝜀= 0.15. 1 http://mallet.cs.umass.edu/ We have two goals in the evaluation: comparison of different summarization methods for topic labeling, and comparison of different kinds of labels (summaries, words, and phrases). In particular, we compare our proposed summarization method (denoted as Our Method) with the following typical summarization methods and all of them extract summaries from the same candidate sentence set for each topic: MEAD: It uses a heuristic way to obtain each sentence’s score by summing the scores based on different features (Radev et al., 2004): centroidbased weight, position and similarity with first sentence. LexRank: It constructs a graph based on the sentences and their similarity relationships and then applies the PageRank algorithm for sentence ranking (Erkan and Radev, 2004). TopicLexRank: It is an improved version of LexRank by considering the probability distribution of top 500 words in a topic as a prior vector, and then applies the topic-sensitive PageRank algorithm for sentence ranking, similar to (Wan 2008). Submodular(REL): It is based on submodular function maximization but only the relevance function is considered. Submodular(REL+COV): It is based on submodular function maximization and combines two functions: the relevance function and the coverage function. We also compare the following three different kinds of labels: Word label: It shows ten topic words as labels for each topic, which is the most intuitive interpretation of the topic. 
Phrase label: It uses three phrases as labels for each topic, and the phrase labels are extracted by using the method proposed in (Mei et al., 2007), which is very closely related to our work and considered a strong baseline in this study. Summary Label: It uses a topic summary with a length of 250 words to label each topic and the summary is produced by our proposed method. 5.2 Evaluation Results 5.2.1 Automatic Comparison of Summarization Methods In this section, we compare different summarization methods with the following automatic measures: 2301 KL divergence between word distributions of summary and topic: For each summarization method, we compute the KL divergence between the word distributions of each topic and the summary for the topic, then average the KL divergence across all topics. Table 1 shows the results. We can see that our method and Submodular(REL+COV) have the lowest KL divergence with the topic, which means our method can produce summaries relevant to the topic representation. Topic word coverage: For each summarization method, we compute the ratio of the words covered by the summary out of top 20 words for each topic, and then average the ratio across all topics. We use top 20 words instead of 500 words because we want to focus on the most important words. The results are shown in Table 2. We can see that our method has almost the best coverage ratio and the produced summary can cover most important words in a topic. AP SIGMOD MEAD 0.832503 1.470307 LexRank 0.420137 1.153163 TopicLexRank 0.377587 1.112623 Submodular(REL) 0.43264 1.002964 Submodular(REL+COV) 0.349807 0.991071 Our Method 0.360306 0.907193 Table 1. Comparison of KL divergence between word distributions of summary and topic AP SIGMOD MEAD 0.422246 0.611355 LexRank 0.651217 0.681728 TopicLexRank 0.678515 0.692066 Submodular(REL) 0.62815 0.713159 Submodular(REL+COV) 0.683998 0.723228 Our Method 0.673585 0.74572 Table 2. Comparison of the ratio of the covered words out of top 20 topic words AP SIGMOD average max average max MEAD 0.026961 0.546618 0.078826 0.580055 LexRank 0.019466 0.252074 0.05635 0.357491 TopicLexRank 0.022548 0.283742 0.062034 0.536886 Submodular(REL) 0.028035 0.47012 0.07522 0.52629 Submodular (REL+COV) 0.023206 0.362795 0.048872 0.524863 Our Method 0.010304 0.093017 0.024551 0.116905 Table 3. Comparison of the average and max similarity between different topic summaries Similarity between topic summaries: For each summarization method, we compute the cosine similarity between the summaries of any two topics, and then obtain the average similarity and the maximum similarity. Seen from Table 3, the topic summaries produced by our method has the lowest average and maximum similarity with each other, and thus the summaries for different topics have much difference. 5.2.2 Manual Comparison of Summarization Methods In this section, we compare our summarization method with three typical summarization methods (MEAD, TopicLexRank and Submodular(REL)) manually. We employed three human judges to read and rank the four summaries produced for each topic by the four methods in three aspects: relevance between the summary and the topic with the corresponding sentence set, the content coverage (or diversity) in the summary and the discrimination between different summaries. The human judges were encouraged to read a few closely related documents for better understanding each topic. 
Note that the judges did not know which summary was generated by our method and which summaries were generated by the baseline methods. The rank k for each summary ranges from 1 to 4 (1 means the best, and 4 means the worst; we allow equal ranks), and the score is thus (4-k). We average the scores across all summaries and all judges and the results on the two datasets are shown in Tables 4 and 5, respectively. In the table, the higher the score is, the better the corresponding summaries are. We can see that our proposed method outperforms all the three baselines over almost all metrics. relevance coverage discrimination MEAD 1.03 0.8 1.13 TopicLexRank 1.9 1.6 1.83 Submodular(REL) 2.23 2 2.07 Our Method 2.33 2.4 2.33 Table 4. Manual comparison of different summarization methods on AP news dataset relevance coverage discrimination MEAD 1.6 1.4 1.83 TopicLexRank 1.77 2.1 2.1 Submodular(REL) 2.07 2.1 2.03 Our Method 2.43 2.17 2.1 Table 5. Manual comparison of different summarization methods on SIGMOD proceeding dataset 5.2.3 Manual Comparison of Different Kinds of Labels In this section, we manually compare the three kinds of labels: words, phrases and summary, as 2302 mentioned in Section 5.1. Similarly, the three human judges were asked to read and rank the three kinds of labels in the same three aspects: relevance between the label and the topic with the corresponding sentence set, the content coverage (or diversity) in the label and the discrimination between different labels. The rank k for each kind of labels ranges from 1 to 3 (1 means the best, and 3 means the worst; we allow equal ranks), and the score is thus (3-k). We average the scores across all labels and all judges and the results on the two datasets are shown in Tables 6 and 7, respectively. It is clear that the summary labels produced by our proposed method have obvious advantages over the conventional word labels and phrase labels. The summary labels have better evaluation results on relevance, coverage and discrimination. relevance coverage discrimination Word label 0.67 0.67 1.11 Phrase label 1 0.87 1.4 Summary label 1.83 1.87 1.9 Table 6. Manual comparison of different kinds of labels on AP news dataset relevance coverage discrimination Word label 0.87 0.877 1.27 Phrase label 1.4 1.53 1.43 Summary label 1.8 1.97 1.9 Table 7. Manual comparison of different kinds of labels on AP news dataset 5.2.4 Example Analysis In this section, we demonstrate some running examples on the SIGMOD proceeding dataset. Two topics and the three kinds of labels are shown below. For brevity, we only show the first 100 words of the summaries to users unless they want to see more. We can see that the word labels are very confusing, and the phrase labels for the two topics are totally overlapping with each other and have no discrimination. Therefore, it is hard to understand the two topics by looking at the word or phrase labels. Fortunately, by carefully reading the topic summaries, we can understand what the two topics are really about. In this example, the first topic is about data analysis and data integration, while the second topic is about data privacy. Though the summary labels are much longer than the word labels or phrase labels, users can obtain more reliable information after reading the summary labels and the summaries can help users to better understand each topic and also know the difference between different topics. 
In practice, the different kinds of labels can be used together to allow users to browse topic models in a level-wise matter, as described in next section. Topic 1 on SIGMOD proceeding dataset: word label: data analysis scientific set process analyze tool insight interest scenario phrase label: data analysis ; data integration ; data set summary label: The field of data analysis seek to extract value from data for either business or scientific benefit . … Nowadays data analytic application are accessing more and more data from distributed data store , creating a large amount of data traffic on the network . …these service will access data from different data source type and potentially need to aggregate data from different data source type with different data format ….Various data model will be discussed , including relational data , xml data , graphstructured data , data stream , and workflow …. Topic 2 on SIGMOD proceeding dataset: word label: user information attribute model privacy quality record result individual provide phrase label: data set ; data analysis ; data integration summary label: An essential element for privacy metric is the measure of how much adversaries can know about an individual ' sensitive attribute ( sa ) if they know the individual ' quasi-identifier ( qi) ….We present an automated solution that elicit user preference on attribute and value , employing different disambiguation technique ranging from simple keyword matching , to more sophisticated probabilistic model ….Privgene need significantly less perturbation than previous method , and it achieve higher overall result quality , even for model fitting task where ga is not the first choice without privacy consideration …. 5.2.5 Discussion of Practical Use Although the summary labels produced by our method have higher relevance, coverage and discrimination than the word labels and the phrase labels, the summary labels have one obvious shortcoming of consuming more reading time of users, because the summaries are much longer than the words and phrases. The feedback from the human judges also reveals the above problem and all the three human judges said they need to take more than five times longer to read the summaries. Therefore, we want to find a better way to make use of the summary label in practice. In order to consider both the shorter reading time of the phrase labels and the better quality of 2303 the summary labels, we can use both of the two kinds of labels in the following hierarchical way: For each topic, we first present only the phrase label to users, and if they can easily know about the topic after they read the phrase label, the summary label will not be shown to them. Whereas, if users cannot know well about the topic based on the phrase label, or they need more information about the topic, they may choose to read the summary label for better understanding the topic. Only the first 100 words of the summary label are shown to users, and the rest words will be shown upon request. In this way, the summary label is used as an important complement to the phrase label, and the burden of reading the longer summary label can be greatly alleviated. 6 Conclusions and Future Work In this study, we addressed the problem of topic labeling by using text summaries. We propose a summarization algorithm based on submodular optimization to extract representative summaries for all the topics. 
Evaluation results demonstrate that the summaries produced by our proposed algorithm have high relevance, coverage and discrimination, and the use of summaries as labels has obvious advantages over the use of words and phrases. In future work, we will explore to make use of all the three kinds of labels together to improve the users’ experience when they want to browse, understand and leverage the topics. In this study, we do not consider the coherence of the topic summaries because it is really very challenging to get a coherent summary by extracting different sentences from a large set of different documents. In future work, we will try to make the summary label more coherent by considering the discourse structure of the summary and leveraging sentence ordering techniques. Acknowledgments The work was supported by National Natural Science Foundation of China (61331011), National Hi-Tech Research and Development Program (863 Program) of China (2015AA015403) and IBM Global Faculty Award Program. We thank the anonymous reviewers and mentor for their helpful comments. References Nikolaos Aletras, and Mark Stevenson. 2013. Representing topics using images. HLT-NAACL. Nikolaos Aletras, Timothy Baldwin, Jey Han Lau, and Mark Stevenson. 2015. Evaluating topic representations for exploring document collections. Journal of the Association for Information Science and Technology (2015). David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. Journal of machine Learning research 3: 993-1022. Ying-Lang Chang and Jen-Tzung Chien. 2009. Latent Dirichlet learning for document summarization. Proccedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP2009). Güneş Erkan and Dragomir R. Radev. 2004. LexPageRank: Prestige in multi-document text summarization. In Proceedings of EMNLP. Dan Gillick, Benoit Favre, and Dilek Hakkani-Tur. 2008. The ICSI summarization system at TAC 2008. In Proceedings of the Text Understanding Conference. Thomas Hofmann. 1999. Probabilistic latent semantic indexing. Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval. ACM. Ioana Hulpus, Conor Hayes, Marcel Karnstedt, and Derek Greene. 2013. Unsupervised graph-based topic labelling using dbpedia. Proceedings of the sixth ACM international conference on Web search and data mining. ACM. Wanqiu Kou, Fang Li, and Timothy Baldwin. 2015. Automatic labelling of topic models using word vectors and letter trigram vectors. Information Retrieval Technology. Springer International Publishing, 253-264. Jey Han Lau, Karl Grieser, David Newman, and Timothy Baldwin. 2011. Automatic labelling of topic models. Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language TechnologiesVolume 1. Association for Computational Linguistics. Hui Lin and Jeff Bilmes. 2010. Multi-document summarization via budgeted maximization of submodular functions. Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics. Hui Lin and Jeff Bilmes. 2011. A class of submodular functions for document summarization. Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. Association for Computational Linguistics. Qiaozhu Mei, Chao Liu, Hang Su, and ChengXiang Zhai. 2006. 
A probabilistic approach to spatiotemporal theme pattern mining on weblogs. In Proceedings of the 15th international conference on World Wide Web, pp. 533-542. ACM. 2304 Qiaozhu Mei, Xuehua Shen, and ChengXiang Zhai. 2007. Automatic labeling of multinomial topic models. Proceedings of the 13th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM. Xian-Ling Mao, Zhao-Yan Ming, Zheng-Jun Zha, TatSeng Chua, Hongfei Yan, and Xiaoming Li. 2012. Automatic labeling hierarchical topics. In Proceedings of the 21st ACM international conference on Information and knowledge management, pp. 2383-2386. ACM. You Ouyang, Sujian Li, and Wenjie Li. 2007. Developing learning strategies for topic-based summarization. Proceedings of the sixteenth ACM conference on Conference on information and knowledge management. ACM. Dragomir R. Radev, Hongyan Jing, Małgorzata Styś, and Daniel Tam. 2004. Centroid-based summarization of multiple documents. Information Processing & Management 40, no. 6: 919-938. Dou Shen, Jian-Tao Sun, Hua Li, Qiang Yang, and Zheng Chen. 2007. Document summarization using Conditional Random Fields. In IJCAI, vol. 7, pp. 2862-2867. Thomas L. Griffiths and Mark Steyvers. 2004. Finding scientific topics. Proceedings of the National Academy of Sciences 101.suppl 1: 5228-5235. Xiaojun Wan. 2008. Using only cross-document relationships for both generic and topic-focused multi-document summarizations. Information Retrieval 11.1: 25-49. Xiaojun Wan, Jianwu Yang, and Jianguo Xiao. 2007. Manifold-ranking based topic-focused multidocument summarization. In IJCAI, vol. 7, pp. 2903-2908. Xiaojun Wan and Jianwu Yang. 2008. Multi-document summarization using cluster-based link analysis. Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval. ACM. Xuerui Wang and Andrew McCallum. 2006. Topics over time: a non-Markov continuous-time model of topical trends. Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM. 2305
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2306–2315, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Graph-based Dependency Parsing with Bidirectional LSTM Wenhui Wang Baobao Chang Key Laboratory of Computational Linguistics, Ministry of Education. School of Electronics Engineering and Computer Science, Peking University, No.5 Yiheyuan Road, Haidian District, Beijing, 100871, China Collaborative Innovation Center for Language Ability, Xuzhou, 221009, China. {wangwenhui,chbb}@pku.edu.cn Abstract In this paper, we propose a neural network model for graph-based dependency parsing which utilizes Bidirectional LSTM (BLSTM) to capture richer contextual information instead of using high-order factorization, and enable our model to use much fewer features than previous work. In addition, we propose an effective way to learn sentence segment embedding on sentence-level based on an extra forward LSTM network. Although our model uses only first-order factorization, experiments on English Peen Treebank and Chinese Penn Treebank show that our model could be competitive with previous higher-order graph-based dependency parsing models and state-of-the-art models. 1 Introduction Dependency parsing is a fundamental task for language processing which has been investigated for decades. It has been applied in a wide range of applications such as information extraction and machine translation. Among a variety of dependency parsing models, graph-based models are attractive for their ability of scoring the parsing decisions on a whole-tree basis. Typical graph-based models factor the dependency tree into subgraphs, including single arcs (McDonald et al., 2005), sibling or grandparent arcs (McDonald and Pereira, 2006; Carreras, 2007) or higher-order substructures (Koo and Collins, 2010; Ma and Zhao, 2012) and then score the whole tree by summing scores of the subgraphs. In these models, subgraphs are usually represented as high-dimensional feature vectors which are then fed into a linear model to learn the feature weights. However, conventional graph-based models heavily rely on feature engineering and their performance is restricted by the design of features. In addition, standard decoding algorithm (Eisner, 2000) only works for the first-order model which limits the scope of feature selection. To incorporate high-order features, Eisner algorithm must be somehow extended or modified, which is usually done at high cost in terms of efficiency. The fourth-order graph-based model (Ma and Zhao, 2012), which seems the highest-order model so far to our knowledge, requires O(n5) time and O(n4) space. Due to the high computational cost, highorder models are normally restricted to producing only unlabeled parses to avoid extra cost introduced by inclusion of arc-labels into the parse trees. To alleviate the burden of feature engineering, Pei et al. (2015) presented an effective neural network model for graph-based dependency parsing. They only use atomic features such as word unigrams and POS tag unigrams and leave the model to automatically learn the feature combinations. However, their model requires many atomic features and still relies on high-order factorization strategy to further improve the accuracy. 
Different from previous work, we propose an LSTM-based dependency parsing model in this paper and aim to use LSTM network to capture richer contextual information to support parsing decisions, instead of adopting a high-order factorization. The main advantages of our model are as follows: • By introducing Bidirectional LSTM, our model shows strong ability to capture potential long range contextual information and exhibits improved accuracy in recovering long distance dependencies. It is different to previous work in which a similar effect is usually achieved by high-order factorization. More2306 over, our model also eliminates the need for setting feature selection windows and reduces the number of features to a minimum level. • We propose an LSTM-based sentence segment embedding method named LSTMMinus, in which distributed representation of sentence segment is learned by using subtraction between LSTM hidden vectors. Experiment shows this further enhances our model’s ability to access to sentence-level information. • Last but important, our model is a first-order model using standard Eisner algorithm for decoding, the computational cost remains at the lowest level among graph-based models. Our model does not trade-off efficiency for accuracy. We evaluate our model on the English Penn Treebank and Chinese Penn Treebank, experiments show that our model achieves competitive parsing accuracy compared with conventional high-order models, however, with a much lower computational cost. 2 Graph-based dependency parsing In dependency parsing, syntactic relationships are represented as directed arcs between head words and their modifier words. Each word in a sentence modifies exactly one head, but can have any number of modifiers itself. The whole sentence is rooted at a designated special symbol ROOT, thus the dependency graph for a sentence is constrained to be a rooted, directed tree. For a sentence x, graph-based dependency parsing model searches for the highest-scoring tree of x: y∗(x) = arg max ˆy∈Y (x) Score(x, ˆy; θ) (1) Here y∗(x) is the tree with the highest score, Y (x) is the set of all valid dependency trees for x and Score(x, ˆy; θ) measures how likely the tree ˆy is the correct analysis of the sentence x, θ are the model parameters. However, the size of Y (x) grows exponentially with respect to the length of the sentence, directly solving equation (1) is impractical. The common strategy adopted in the graphbased model is to factor the dependency tree ˆy into Figure 1: First-order, Second-order and Thirdorder factorization strategy. Here h stands for head word, m stands for modifier word, s and t stand for the sibling of m. g stands for the grandparent of m. a set of subgraph c which can be scored in isolation, and score the whole tree ˆy by summing score of each subgraph: Score(x, ˆy; θ) = X c∈ˆy ScoreC(x, c; θ) (2) Figure 1 shows several factorization strategies. The order of the factorization is defined according to the number of dependencies that subgraph contains. The simplest first-order factorization (McDonald et al., 2005) decomposes a dependency tree into single dependency arcs. Based on the first-order factorization, second-order factorization (McDonald and Pereira, 2006; Carreras, 2007) brings sibling and grandparent information into their model. Third-order factorization (Koo and Collins, 2010) further incorporates richer contextual information by utilizing grand-sibling and tri-sibling parts. 
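Because the model keeps the first-order factorization of Eq. (2), exact search over projective trees remains O(n³) with the standard Eisner algorithm referred to in the introduction. The following NumPy sketch is a generic implementation of that decoder, not the authors' code: it assumes a precomputed matrix of arc scores (for a labeled parser, the per-arc maxima over labels) and does not enforce a single attachment to ROOT.

```python
import numpy as np

def eisner_decode(arc_scores):
    """First-order projective decoding (Eisner, 2000).

    arc_scores[h, m] is the score of the arc h -> m; index 0 is the artificial ROOT.
    Returns heads, where heads[m] is the predicted head of token m (heads[0] = -1).
    Runs in O(n^3) time and O(n^2) space."""
    n = arc_scores.shape[0]                     # number of tokens including ROOT
    # chart[i, j, d, c]: best score of span (i, j); d = 1 means the head is i,
    # d = 0 means the head is j; c = 1 for complete spans, c = 0 for incomplete spans.
    chart = np.full((n, n, 2, 2), -np.inf)
    back = np.zeros((n, n, 2, 2), dtype=int)
    chart[np.arange(n), np.arange(n), :, :] = 0.0
    for length in range(1, n):
        for i in range(n - length):
            j = i + length
            # incomplete spans: introduce the arc between i and j
            for k in range(i, j):
                base = chart[i, k, 1, 1] + chart[k + 1, j, 0, 1]
                if base + arc_scores[j, i] > chart[i, j, 0, 0]:
                    chart[i, j, 0, 0] = base + arc_scores[j, i]   # arc j -> i
                    back[i, j, 0, 0] = k
                if base + arc_scores[i, j] > chart[i, j, 1, 0]:
                    chart[i, j, 1, 0] = base + arc_scores[i, j]   # arc i -> j
                    back[i, j, 1, 0] = k
            # complete spans: merge a complete span with an adjacent incomplete one
            for k in range(i, j):
                if chart[i, k, 0, 1] + chart[k, j, 0, 0] > chart[i, j, 0, 1]:
                    chart[i, j, 0, 1] = chart[i, k, 0, 1] + chart[k, j, 0, 0]
                    back[i, j, 0, 1] = k
            for k in range(i + 1, j + 1):
                if chart[i, k, 1, 0] + chart[k, j, 1, 1] > chart[i, j, 1, 1]:
                    chart[i, j, 1, 1] = chart[i, k, 1, 0] + chart[k, j, 1, 1]
                    back[i, j, 1, 1] = k
    heads = np.full(n, -1, dtype=int)
    _backtrack(back, 0, n - 1, 1, 1, heads)
    return heads

def _backtrack(back, i, j, d, c, heads):
    if i == j:
        return
    k = back[i, j, d, c]
    if c == 1 and d == 1:        # right complete = right incomplete + right complete
        _backtrack(back, i, k, 1, 0, heads)
        _backtrack(back, k, j, 1, 1, heads)
    elif c == 1 and d == 0:      # left complete = left complete + left incomplete
        _backtrack(back, i, k, 0, 1, heads)
        _backtrack(back, k, j, 0, 0, heads)
    else:                        # incomplete span carries the arc between i and j
        heads[j if d == 1 else i] = i if d == 1 else j
        _backtrack(back, i, k, 1, 1, heads)
        _backtrack(back, k + 1, j, 0, 1, heads)
```

In use, the score matrix would be filled by an arc scorer such as the neural model of Section 3, after which heads = eisner_decode(scores) recovers the highest-scoring projective tree.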
Conventional graph-based models (McDonald et al., 2005; McDonald and Pereira, 2006; Carreras, 2007; Koo and Collins, 2010; Ma and Zhao, 2012) score subgraph by a linear model, which heavily depends on feature engineering. The neural network model proposed by Pei et al. (2015) alleviates the dependence on feature engineering to a large extent, but not completely. We follow Pei et al. (2015) to score dependency arcs using neural network model. However, different from their work, we introduce a Bidirectional LSTM to capture long range contextual information and an extra forward LSTM to better represent segments of the sentence separated by the head and modifier. These make our model more accurate in recovering long-distance dependencies and further decrease the number of atomic features. 2307 Figure 2: Architecture of the Neural Network. x1 to x5 stand for the input token of Bidirectional LSTM. a1 to a5 stand for the feature embeddings used in our model. 3 Neural Network Model In this section, we describe the architecture of our neural network model in detail, which is summarized in Figure 2. 3.1 Input layer In our neural network model, the words, POS tags are mapped into distributed embeddings. We represent each input token xi which is the input of Bidirectional LSTM by concatenating POS tag embedding epi ∈Rde and word embedding ewi ∈Rde, de is the the dimensionality of embedding, then a linear transformation we is performed and passed though an element-wise activation function g: xi = g(we[ewi; epi] + be) (3) where xi ∈Rde, we ∈Rde×2de is weight matrix, be ∈Rde is bias term. the dimensionality of input token xi is equal to the dimensionality of word and POS tag embeddings in our experiment, ReLU is used as our activation function g. 3.2 Bidirectional LSTM Given an input sequence x = (x1, . . . , xn), where n stands for the number of words in a sentence, a standard LSTM recurrent network computes the hidden vector sequence h = (h1, . . . , hn) in one direction. Bidirectional LSTM processes the data in both directions with two separate hidden layers, which are then fed to the same output layer. It computes the forward hidden sequence −→h , the backward hidden sequence ←−h and the output sequence v by iterating the forward layer from t = 1 to n, the backward layer from t = n to 1 and then updating the output layer: vt = −→h t + ←−h t (4) where vt ∈Rdl is the output vector of Bidirectional LSTM for input xt, −→h t ∈Rdl, ←−h t ∈Rdl, dl is the dimensionality of LSTM hidden vector. We simply add the forward hidden vector −→h t and the backward hidden vector ←−h t together, which gets similar experiment result as concatenating them together with a faster speed. The output vectors of Bidirectional LSTM are used as word feature embeddings. In addition, they are also fed into a forward LSTM network to learn segment embedding. 3.3 Segment Embedding Contextual information of word pairs1 has been widely utilized in previous work (McDonald et al., 2005; McDonald and Pereira, 2006; Pei et al., 2015). For a dependency pair (h, m), previous work divides a sentence into three parts (prefix, infix and suffix) by head word h and modifier word m. These parts which we call segments in our work make up the context of the dependency pair (h, m). Due to the problem of data sparseness, conventional graph-based models can only capture contextual information of word pairs by using bigrams or tri-grams features. Unlike conventional models, Pei et al. 
(2015) use distributed representations obtained by averaging word embeddings in segments to represent contextual information of the word pair, which could capture richer syntactic and semantic information. However, their method is restricted to segment-level since their segment embedding only consider the word information within the segment. Besides, averaging operation simply treats all the words in segment equally. However, some words might carry more 1A word pair is limited to the dependency pair (h, m) in our work since we use only first-order factorization. In previous work, word pair could be any pair with particular relation (e.g., sibling pair (s, m) in Figure 1). 2308 Figure 3: Illustration for learning segment embeddings based on an extra forward LSTM network, vh, vm and v1 to v7 indicate the output vectors of Bidirectional LSTM for head word h, modifier word m and other words in sentence, hh, hm and h1 to h7 indicate the hidden vectors of the forward LSTM corresponding to vh, vm and v1 to v7. salient syntactic or semantic information and they are expected to be given more attention. A useful property of forward LSTM is that it could keep previous useful information in their memory cell by exploiting input, output and forget gates to decide how to utilize and update the memory of previous information. Given an input sequence v = (v1, . . . , vn), previous work (Sutskever et al., 2014; Vinyals et al., 2014) often uses the last hidden vector hn of the forward LSTM to represent the whole sequence. Each hidden vector ht (1 ≤t ≤n) can capture useful information before and including vt. Inspired by this, we propose a method named LSTM-Minus to learn segment embedding. We utilize subtraction between LSTM hidden vectors to represent segment’s information. As illustrated in Figure 3, the segment infix can be described as hm −h2, hm and h2 are hidden vector of the forward LSTM network. The segment embedding of suffix can also be obtained by subtraction between the last LSTM hidden vector of the sequence (h7) and the last LSTM hidden vector in infix (hm). For prefix, we directly use the last LSTM hidden vector in prefix to represent it, which equals to subtract a zero embedding. When no prefix or suffix exists, the corresponding embedding is set to zero. Specifically, we place an extra forward LSTM layer on top of the Bidirectional LSTM layer and learn segment embeddings using LSTM-Minus based on this forward LSTM. LSTM-minus enables our model to learn segment embeddings from information both outside and inside the segments and thus enhances our model’s ability to access to sentence-level information. 3.4 Hidden layer and output layer As illustrated in Figure 2, we map all the feature embeddings to a hidden layer. Following Pei et al. (2015), we use direction-specific transformation to model edge direction and tanh-cube as our activation function: h = g  X i W d hiai + bd h  (5) where ai ∈Rdai is the feature embedding, dai indicates the dimensionality of feature embedding ai, W d hi ∈Rdh×dai is weight matrices which corresponding to ai, dh indicates the dimensionality of hidden layer vector, bd h ∈Rdh is bias term. W d hi and bd h are bound with index d ∈{0, 1} which indicates the direction between head and modifier. A output layer is finally added on the top of the hidden layer for scoring dependency arcs: ScoreC(x, c) = W d o h + bd o (6) Where W d o ∈RL×dh is weight matrices, bd o ∈RL is bias term, ScoreC(x, c) ∈RL is the output vector, L is the number of dependency types. 
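The pieces of Sections 3.3 and 3.4 can be sketched together as follows. This is a hypothetical NumPy rendering rather than the authors' implementation: it assumes the forward-LSTM hidden states have already been computed on top of the BLSTM outputs, the exact prefix/infix/suffix boundaries follow one plausible reading of Figure 3, and the tanh-cube activation is written as tanh(x^3 + x) after Pei et al. (2015).

```python
import numpy as np

def cube_tanh(x):
    # tanh-cube nonlinearity, tanh(x**3 + x), following Pei et al. (2015)
    return np.tanh(x ** 3 + x)

def lstm_minus(h, i, j):
    """Embedding of the segment covering positions i..j (0-based, inclusive)
    as a difference of forward-LSTM hidden states: h[j] - h[i-1]
    (Section 3.3). h is an (n, d) array; an empty segment maps to zeros."""
    d = h.shape[1]
    if j < i:
        return np.zeros(d)
    return h[j] - (h[i - 1] if i > 0 else np.zeros(d))

def segments(h, head, mod):
    """Prefix / infix / suffix embeddings for a pair (head, mod), head < mod
    assumed; whether the head and modifier tokens themselves fall inside the
    infix is an illustrative choice here, not fixed by the paper."""
    n = h.shape[0]
    return (lstm_minus(h, 0, head - 1),      # prefix: words before the head
            lstm_minus(h, head, mod),        # infix : head .. modifier
            lstm_minus(h, mod + 1, n - 1))   # suffix: words after the modifier

def score_arc(features, W_h, b_h, W_o, b_o):
    """Eqs. (5)-(6): map the feature embeddings a_i through a direction-
    specific hidden layer with tanh-cube activation, then a linear output
    layer; the caller picks (W_h, b_h, W_o, b_o) according to the arc
    direction d."""
    hidden = cube_tanh(sum(W @ a for W, a in zip(W_h, features)) + b_h)
    return W_o @ hidden + b_o  # one score per dependency label
```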
Each dimension of the output vector is the score for each kind of dependency type of head-modifier pair. 3.5 Features in our model Previous neural network models (Pei et al., 2015; Pei et al., 2014; Zheng et al., 2013) normally set context window around a word and extract atomic features within the window to represent the contextual information. However, context window limits their ability in detecting long-distance information. Simply increasing the context window size to get more contextual information puts their model in the risk of overfitting and heavily slows down the speed. Unlike previous work, we apply Bidirectional LSTM to capture long range contextual information and eliminate the need for context windows, avoiding the limit of the window-based feature selection approach. Compared with Pei et al. (2015), the cancellation of the context window allows our model to use much fewer features. Moreover, by combining a word’s atomic features (word form and POS tag) together, our model further decreases the number of features. 2309 Pei et al. (2015) h−2.w, h−1.w, h.w, h1.w, h2.w h−2.p, h−1.p, h.p, h1.p, h2.p m−2.w, m−1.w, m.w, m1.w, m2.w m−2.p, m−1.p, m.p, m1.p, m2.p dis(h, m) Our basic model vh, vm dis(h, m) Table 1: Atomic features in our basic model and Pei’s 1st-order atomic model. w is short for word and p for POS tag. h indicates head and m indicates modifier. The subscript represents the relative position to the center word. dis(h, m) is the distance between head and modifier. vh and vm indicate the outputs of Bidirectional LSTM for head word and modifier word. Table 1 lists the atomic features used in 1storder atomic model of Pei et al. (2015) and atomic features used in our basic model. Our basic model only uses the outputs of Bidirectional LSTM for head word and modifier word, and the distance between them as features. Distance features are encoded as randomly initialized embeddings. As we can see, our basic model reduces the number of atomic features to a minimum level, making our model run with a faster speed. Based on our basic model, we incorporate additional segment information (prefix, infix and suffix), which further improves the effect of our model. 4 Neural Training In this section, we provide details about training the neural network. 4.1 Max-Margin Training We use the Max-Margin criterion to train our model. Given a training instance (x(i), y(i)), we use Y (x(i)) to denote the set of all possible dependency trees and y(i) is the correct dependency tree for sentence x(i). The goal of Max Margin training is to find parameters θ such that the difference in score of the correct tree y(i) from an incorrect tree ˆy ∈Y (x(i)) is at least △(y(i), ˆy). Score(x(i),y(i); θ)≥Score(x(i),ˆy; θ)+△(y(i),ˆy) The structured margin loss △(y(i), ˆy) is defined as: △(y(i), ˆy) = n X j κ1{h(y(i), x(i) j ) ̸= h(ˆy, x(i) j )} where n is the length of sentence x, h(y(i), x(i) j ) is the head (with type) for the j-th word of x(i) in tree y(i) and κ is a discount parameter. The loss is proportional to the number of word with an incorrect head and edge type in the proposed tree. Given a training set with size m, The regularized objective function is the loss function J(θ) including a l2-norm term: J(θ) = 1 m m X i=1 li(θ) + λ 2 ||θ||2 li(θ) = max ˆy∈Y (x(i))(Score(x(i),ˆy; θ)+△(y(i),ˆy)) −Score(x(i),y(i); θ) (7) By minimizing this objective, the score of the correct tree is increased and score of the highest scoring incorrect tree is decreased. 
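The training criterion can be sketched compactly, assuming loss-augmented decoding is realised by adding the margin to the arc scores before running the Eisner algorithm (a standard implementation of Eq. (7), though the paper does not spell out this detail); the value of κ below is a placeholder, not taken from the paper.

```python
def structured_margin(gold_heads, gold_labels, pred_heads, pred_labels, kappa=0.5):
    """Delta(y, y_hat): kappa times the number of words whose labeled head
    differs from the gold tree (kappa = 0.5 is an illustrative value)."""
    wrong = sum((gh, gl) != (ph, pl)
                for gh, gl, ph, pl in zip(gold_heads, gold_labels,
                                          pred_heads, pred_labels))
    return kappa * wrong

def sentence_loss(score_gold, best_score, best_margin):
    """Per-sentence term l_i(theta) of Eq. (7): Score(x, y_hat) + Delta(y, y_hat)
    - Score(x, y) for the highest loss-augmented competitor y_hat. It is never
    negative, since the gold tree itself is among the candidates."""
    return max(0.0, best_score + best_margin - score_gold)
```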
4.2 Optimization Algorithm Parameter optimization is performed with the diagonal variant of AdaGrad (Duchi et al., 2011) with minibatchs (batch size = 20) . The parameter update for the i-th parameter θt,i at time step t is as follows: θt,i = θt−1,i − α qPt τ=1 g2 τ,i gt,i (8) where α is the initial learning rate (α = 0.2 in our experiment) and gτ ∈R|θi| is the subgradient at time step τ for parameter θi. To mitigate overfitting, dropout (Hinton et al., 2012) is used to regularize our model. we apply dropout on the hidden layer with 0.2 rate. 4.3 Model Initialization&Hyperparameters The following hyper-parameters are used in all experiments: word embedding size = 100, POS tag embedding size = 100, hidden layer size = 200, LSTM hidden vector size = 100, Bidirectional LSTM layers = 2, regularization parameter λ = 10−4. We initialized the parameters using pretrained word embeddings. Following Dyer et al. (2015), we use a variant of the skip n-gram model introduced by Ling et al. (2015) on Gigaword corpus (Graff et al., 2003). We also experimented with randomly initialized embeddings, where embeddings are uniformly sampled from range [−0.3, 0.3]. All other parameters are uniformly sampled from range [−0.05, 0.05]. 2310 Models UAS LAS Speed(sent/s) First-order MSTParser 91.60 90.39 20 1st-order atomic (Pei et al., 2015) 92.14 90.92 55 1st-order phrase (Pei et al., 2015) 92.59 91.37 26 Our basic model 93.09 92.03 61 Our basic model + segment 93.51 92.45 26 Second-order MSTParser 92.30 91.06 14 2nd-order phrase (Pei et al., 2015) 93.29 92.13 10 Third-order (Koo and Collins, 2010) 93.04 N/A N/A Fourth-order (Ma and Zhao, 2012) 93.4 N/A N/A Unlimited-order (Zhang and McDonald, 2012) 93.06 91.86 N/A (Zhang et al., 2013) 93.50 92.41 N/A (Zhang and McDonald, 2014) 93.57 92.48 N/A Table 2: Comparison with previous graph-based models on Penn-YM. 5 Experiments In this section, we present our experimental setup and the main result of our work. 5.1 Experiments Setup We conduct our experiments on the English Penn Treebank (PTB) and the Chinese Penn Treebank (CTB) datasets. For English, we follow the standard splits of PTB3. Using section 2-21 for training, section 22 as development set and 23 as test set. We conduct experiments on two different constituency-todependency-converted Penn Treebank data sets. The first one, Penn-YM, was created by the Penn2Malt tool2 based on Yamada and Matsumoto (2003) head rules. The second one, Penn-SD, use Stanford Basic Dependencies (Marneffe et al., 2006) and was converted by version 3.3.03 of the Stanford parser. The Stanford POS Tagger (Toutanova et al., 2003) with ten-way jackknifing of the training data is used for assigning POS tags (accuracy ≈97.2%). For Chinese, we adopt the same split of CTB5 as described in (Zhang and Clark, 2008). Following (Zhang and Clark, 2008; Dyer et al., 2015; Chen and Manning, 2014), we use gold segmentation and POS tags for the input. 5.2 Experiments Results We first make comparisons with previous graphbased models of different orders as shown in Ta2http://stp.lingfil.uu.se/nivre/ research/Penn2Malt.html 3http://nlp.stanford.edu/software/ lex-parser.shtml ble 2. We use MSTParser 4 for conventional firstorder model (McDonald et al., 2005) and secondorder model (McDonald and Pereira, 2006). We also include the results of conventional high-order models (Koo and Collins, 2010; Ma and Zhao, 2012; Zhang and McDonald, 2012; Zhang et al., 2013; Zhang and McDonald, 2014) and the neural network model of Pei et al. (2015). 
Different from typical high-order models (Koo and Collins, 2010; Ma and Zhao, 2012), which need to extend their decoding algorithm to score new types of higher-order dependencies. Zhang and McDonald (2012) generalized the Eisner algorithm to handle arbitrary features over higher-order dependencies and controlled complexity via approximate decoding with cube pruning. They further improve their work by using perceptron update strategies for inexact hypergraph search (Zhang et al., 2013) and forcing inference to maintain both label and structural ambiguity through a secondary beam (Zhang and McDonald, 2014). Following previous work, UAS (unlabeled attachment scores) and LAS (labeled attachment scores) are calculated by excluding punctuation5. The parsing speeds are measured on a workstation with Intel Xeon 3.4GHz CPU and 32GB RAM which is same to Pei et al. (2015). We measure the parsing speeds of Pei et al. (2015) according to their codes6 and parameters. On accuracy, as shown in table 2, our 4http://sourceforge.net/projects/ mstparser 5Following previous work, a token is a punctuation if its POS tag is {“ ” : , .} 6https://github.com/Williammed/ DeepParser 2311 Method Penn-YM Penn-SD CTB5 UAS LAS UAS LAS UAS LAS (Zhang and Nivre, 2011) 92.9 91.8 86.0 84.4 (Bernd Bohnet, 2012) 93.39 92.38 87.5 85.9 (Zhang and McDonald, 2014) 93.57 92.48 93.01 90.64 87.96 86.34 (Dyer et al., 2015) 93.1 90.9 87.2 85.7 (Weiss et al., 2015) 93.99 92.05 Our basic model + segment 93.51 92.45 94.08 91.82 87.55 86.23 Table 3: Comparison with previous state-of-the-art models on Penn-YM, Penn-SD and CTB5. basic model outperforms previous first-order graph-based models by a substantial margin, even outperforms Zhang and McDonald (2012)’s unlimited-order model. Moreover, incorporating segment information further improves our model’s accuracy, which shows that segment embeddings do capture richer contextual information. By using segment embeddings, our improved model could be comparable to high-order graph-based models7. With regard to parsing speed, our model also shows advantage of efficiency. Our model uses only first-order factorization and requires O(n3) time to decode. Third-order model requires O(n4) time and fourth-order model requires O(n5) time. By using approximate decoding, the unlimitedorder model of Zhang and McDonald (2012) requires O(k·log(k)·n3) time, where k is the beam size. The computational cost of our model is the lowest among graph-based models. Moreover, although using LSTM requires much computational cost. However, compared with Pei’s 1st-order model, our model decreases the number of atomic features from 21 to 3, this allows our model to require a much smaller matrix computation in the scoring model, which cancels out the extra computation cost introduced by the LSTM computation. Our basic model is the fastest among first-order and second-order models. Incorporating segment information slows down the parsing speed while it is still slightly faster than conventional first-order model. To compare with conventional high-order models on practical parsing speed, we can make an indirect comparison according to Zhang and McDonald (2012). Conventional first-order model is about 10 times faster than Zhang and McDon7Note that our model can’t be strictly comparable with third-order model (Koo and Collins, 2010) and fourthorder model (Ma and Zhao, 2012) since they are unlabeled model. 
However, our model is comparable with all the three unlimited-order models presented in (Zhang and McDonald, 2012), (Zhang et al., 2013) and (Zhang and McDonald, 2014), since they all are labeled models as ours. Method Peen-YM Peen-SD CTB5 Average 93.23 93.83 87.24 LSTM-Minus 93.51 94.08 87.55 Table 4: Model performance of different way to learn segment embeddings. ald (2012)’s unlimited-order model and about 40 times faster than conventional third-order model, while our model is faster than conventional firstorder model. Our model should be much faster than conventional high-order models. We further compare our model with previous state-of-the-art systems for English and Chinese. Table 3 lists the performances of our model as well as previous state-of-the-art systems on on PennYM, Penn-SD and CTB5. We compare to conventional state-of-the-art graph-based model (Zhang and McDonald, 2014), conventional state-of-theart transition-based model using beam search (Zhang and Nivre, 2011), transition-based model combining graph-based approach (Bernd Bohnet, 2012) , transition-based neural network model using stack LSTM (Dyer et al., 2015) and transitionbased neural network model using beam search (Weiss et al., 2015). Overall, our model achieves competitive accuracy on all three datasets. Although our model is slightly lower in accuarcy than unlimited-order double beam model (Zhang and McDonald, 2014) on Penn-YM and CTB5, our model outperforms their model on Penn-SD. It seems that our model performs better on data sets with larger label sets, given the number of labels used in Penn-SD data set is almost four times more than Penn-YM and CTB5 data sets. To show the effectiveness of our segment embedding method LSTM-Minus, we compare with averaging method proposed by Pei et al. (2015). We get segment embeddings by averaging the output vectors of Bidirectional LSTM in segments. 2312 Figure 4: Error rates of different distance between head and modifier on Peen-YM. To make comparison as fair as possible, we let two models have almost the same number parameters. Table 4 lists the UAS of two methods on test set. As we can see, LSTM-Minus shows better performance because our method further incorporates more sentence-level information into our model. 5.3 Impact of Network Structure In this part, we investigate the impact of the components of our approach. LSTM Recurrent Network To evaluate the impact of LSTM, we make error analysis on Penn-YM. We compare our model with Pei et al. (2015) on error rates of different distance between head and modifier. As we can see, the five models do not show much difference for short dependencies whose distance less than three. For long dependencies, both our two models show better performance compared with the 1st-order model of Pei et al. (2015), which proves that LSTM can effectively capture long-distance dependencies. Moreover, our models and Pei’s 2nd-order phrase model both improve accuracy on long dependencies compared with Pei’s 1st-order model, which is in line with our expectations. Using LSTM shows the same effect as high-order factorization strategy. Compared with 2nd-order phrase model of Pei et al. (2015), our basic model occasionally performs worse in recovering long distant dependencies. However, this should not be a surprise since higher order models are also motivated to recover longdistance dependencies. Nevertheless, with the introduction of LSTM-minus segment embeddings, our model consistently outperforms the 2nd-order phrase model of Pei et al. 
(2015) in accuracies of all long dependencies. We carried out significance test on the difference between our and Pei’s models. Our basic model performs significantly better than all 1st-order models of Pei et al. (2015) (ttest with p<0.001) and our basic+segment model (still a 1st-order model) performs significantly better than their 2nd-order phrase model (t-test with p<0.001) in recovering long-distance dependencies. Initialization of pre-trained word embeddings We further analyze the influence of using pretrained word embeddings for initialization. without using pretrained word embeddings, our improved model achieves 92.94% UAS / 91.83% LAS on Penn-YM, 93.46% UAS / 91.19% LAS on Penn-SD and 86.5% UAS / 85.0% LAS on CTB5. Using pre-trained word embeddings can obtain around 0.5%∼1.0% improvement. 6 Related work Dependency parsing has gained widespread interest in the computational linguistics community. There are a lot of approaches to solve it. Among them, we will mainly focus on graph-based dependency parsing model here. Dependency tree factorization and decoding algorithm are necessary for graph-based models. McDonald et al. (2005) proposed the first-order model which decomposes a dependency tree into its individual edges and use a effective dynamic programming algorithm (Eisner, 2000) to decode. Based on firstorder model, higher-order models(McDonald and Pereira, 2006; Carreras, 2007; Koo and Collins, 2010; Ma and Zhao, 2012) factor a dependency tree into a set of high-order dependencies which bring interactions between head, modifier, siblings and (or) grandparent into their model. However, for above models, scoring new types of higherorder dependencies requires extensions of the underlying decoding algorithm, which also requires higher computational cost. Unlike above models, unlimited-order models (Zhang and McDonald, 2012; Zhang et al., 2013; Zhang and McDonald, 2014) could handle arbitrary features over higherorder dependencies by generalizing the Eisner algorithm. In contrast to conventional methods, neural network model shows their ability to reduce the effort in feature engineering. Pei et al. (2015) proposed a model to automatically learn high-order feature 2313 combinations via a novel activation function, allowing their model to use a set of atomic features instead of millions of hand-crafted features. Different from previous work, which is sensitive to local state and accesses to larger context by higher-order factorization. Our model makes parsing decisions on a global perspective with firstorder factorization, avoiding the expensive computational cost introduced by high-order factorization. LSTM network is heavily utilized in our model. LSTM network has already been explored in transition-based dependency parsing. Dyer et al. (2015) presented stack LSTMs with push and pop operations and used them to implement a state-of-the-art transition-based dependency parser. Ballesteros et al. (2015) replaced lookup-based word representations with characterbased representations obtained by Bidirectional LSTM in the continuous-state parser of Dyer et al. (2015), which was proved experimentally to be useful for morphologically rich languages. 7 Conclusion In this paper, we propose an LSTM-based neural network model for graph-based dependency parsing. Utilizing Bidirectional LSTM and segment embeddings learned by LSTM-Minus allows our model access to sentence-level information, making our model more accurate in recovering longdistance dependencies with only first-order factorization. 
Experiments on PTB and CTB show that our model could be competitive with conventional high-order models with a faster speed. Acknowledgments This work is supported by National Key Basic Research Program of China under Grant No.2014CB340504 and National Natural Science Foundation of China under Grant No.61273318. The Corresponding author of this paper is Baobao Chang. References Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2015. Improved transition-based parsing by modeling characters instead of words with lstms. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 349–359. Jonas Kuhn Bernd Bohnet. 2012. The best of both worlds: a graph-based completion model for transition-based parsers. Conference of the European Chapter of the Association for Computational Linguistics. Xavier Carreras. 2007. Experiments with a higherorder projective dependency parser. In EMNLPCoNLL, pages 957–961. Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 740–750. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, pages 2121–2159. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pages 334–343. Jason Eisner. 2000. Bilexical Grammars and their Cubic-Time Parsing Algorithms. Springer Netherlands. David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2003. English gigaword. Linguistic Data Consortium, Philadelphia. Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing coadaptation of feature detectors. arXiv preprint arXiv:1207.0580. Terry Koo and Michael Collins. 2010. Efficient thirdorder dependency parsers. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1–11. Association for Computational Linguistics. Wang Ling, Chris Dyer, Alan Black, and Isabel Trancoso. 2015. Two/too simple adaptations of word2vec for syntax problems. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Xuezhe Ma and Hai Zhao. 2012. Fourth-order dependency parsing. In COLING 2012, 24th International Conference on Computational Linguistics, Proceedings of the Conference: Posters, 8-15 December 2012, Mumbai, India, pages 785–796. Marie Catherine De Marneffe, Bill Maccartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. Lrec, pages 449–454. 2314 Ryan T McDonald and Fernando CN Pereira. 2006. Online learning of approximate dependency parsing algorithms. In EACL. Citeseer. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of the 43rd annual meeting on association for computational linguistics, pages 91–98. Association for Computational Linguistics. Wenzhe Pei, Tao Ge, and Baobao Chang. 2014. Maxmargin tensor neural network for chinese word segmentation. 
In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 293–303. Wenzhe Pei, Tao Ge, and Baobao Chang. 2015. An effective neural network model for graph-based dependency parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pages 313–322. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing System, pages 3104–3112. Kristina Toutanova, Dan Klein, Christopher D Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language TechnologyVolume 1, pages 173–180. Association for Computational Linguistics. Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey E. Hinton. 2014. Grammar as a foreign language. CoRR, abs/1412.7449. David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics. Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proceedings of IWPT, volume 3, pages 195–206. Yue Zhang and Stephen Clark. 2008. A tale of two parsers: Investigating and combining graph-based and transition-based dependency parsing. In 2008 Conference on Empirical Methods in Natural Language Processing, pages 562–571. Hao Zhang and Ryan T. McDonald. 2012. Generalized higher-order dependency parsing with cube pruning. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 320–331. Hao Zhang and Ryan T. McDonald. 2014. Enforcing structural diversity in cube-pruned dependency parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 656–661. Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 188–193. Hao Zhang, Liang Huang Kai Zhao, and Ryan Mcdonald. 2013. Online learning for inexact hypergraph search. Proceedings of Emnlp. Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep learning for Chinese word segmentation and POS tagging. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 647–657, October. 2315
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2316–2325, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics TransG : A Generative Model for Knowledge Graph Embedding Han Xiao, Minlie Huang∗, Xiaoyan Zhu State Key Lab. of Intelligent Technology and Systems National Lab. for Information Science and Technology Dept. of Computer Science and Technology Tsinghua University, Beijing 100084, PR China {aihuang, zxy-dcs}@tsinghua.edu.cn Abstract Recently, knowledge graph embedding, which projects symbolic entities and relations into continuous vector space, has become a new, hot topic in artificial intelligence. This paper proposes a novel generative model (TransG) to address the issue of multiple relation semantics that a relation may have multiple meanings revealed by the entity pairs associated with the corresponding triples. The new model can discover latent semantics for a relation and leverage a mixture of relationspecific component vectors to embed a fact triple. To the best of our knowledge, this is the first generative model for knowledge graph embedding, and at the first time, the issue of multiple relation semantics is formally discussed. Extensive experiments show that the proposed model achieves substantial improvements against the state-of-the-art baselines. 1 Introduction Abstract or real-world knowledge is always a major topic in Artificial Intelligence. Knowledge bases such as Wordnet (Miller, 1995) and Freebase (Bollacker et al., 2008) have been shown very useful to AI tasks including question answering, knowledge inference, and so on. However, traditional knowledge bases are symbolic and logic, thus numerical machine learning methods cannot be leveraged to support the computation over the knowledge bases. To this end, knowledge graph embedding has been proposed to project entities and relations into continuous vector spaces. Among various embedding models, there is a line ∗Correspondence author of translation-based models such as TransE (Bordes et al., 2013), TransH (Wang et al., 2014), TransR (Lin et al., 2015b), and other related models (He et al., 2015) (Lin et al., 2015a). Figure 1: Visualization of TransE embedding vectors with PCA dimension reduction. Four relations (a ∼d) are chosen from Freebase and Wordnet. A dot denotes a triple and its position is decided by the difference vector between tail and head entity (t −h). Since TransE adopts the principle of t −h ≈r, there is supposed to be only one cluster whose centre is the relation vector r. However, results show that there exist multiple clusters, which justifies our multiple relation semantics assumption. A fact of knowledge base can usually be represented by a triple (h, r, t) where h, r, t indicate a head entity, a relation, and a tail entity, respectively. All translation-based models almost follow the same principle hr + r ≈tr where hr, r, tr in2316 dicate the embedding vectors of triple (h, r, t), with the head and tail entity vector projected with respect to the relation space. In spite of the success of these models, none of the previous models has formally discussed the issue of multiple relation semantics that a relation may have multiple meanings revealed by the entity pairs associated with the corresponding triples. As can be seen from Fig. 
1, visualization results on embedding vectors obtained from TransE (Bordes et al., 2013) show that, there are different clusters for a specific relation, and different clusters indicate different latent semantics. For example, the relation HasPart has at least two latent semantics: composition-related as (Table, HasPart, Leg) and location-related as (Atlantics, HasPart, NewYorkBay). As one more example, in Freebase, (Jon Snow, birth place, Winter Fall) and (George R. R. Martin, birth place, U.S.) are mapped to schema /fictional universe/fictional character/place of birth and /people/person/place of birth respectively, indicating that birth place has different meanings. This phenomenon is quite common in knowledge bases for two reasons: artificial simplification and nature of knowledge. On one hand, knowledge base curators could not involve too many similar relations, so abstracting multiple similar relations into one specific relation is a common trick. On the other hand, both language and knowledge representations often involve ambiguous information. The ambiguity of knowledge means a semantic mixture. For example, when we mention “Expert”, we may refer to scientist, businessman or writer, so the concept “Expert” may be ambiguous in a specific situation, or generally a semantic mixture of these cases. However, since previous translation-based models adopt hr + r ≈tr, they assign only one translation vector for one relation, and these models are not able to deal with the issue of multiple relation semantics. To illustrate more clearly, as showed in Fig.2, there is only one unique representation for relation HasPart in traditional models, thus the models made more errors when embedding the triples of the relation. Instead, in our proposed model, we leverage a Bayesian non-parametric infinite mixture model to handle multiple relation semantics by generating multiple translation components for a relation. Thus, different semantics are characterized by different components in our embedding model. For example, we can distinguish the two clusters HasPart.1 or HasPart.2, where the relation semantics are automatically clustered to represent the meaning of associated entity pairs. To summarize, our contributions are as follows: • We propose a new issue in knowledge graph embedding, multiple relation semantics that a relation in knowledge graph may have different meanings revealed by the associated entity pairs, which has never been studied previously. • To address the above issue, we propose a novel Bayesian non-parametric infinite mixture embedding model, TransG. The model can automatically discover semantic clusters of a relation, and leverage a mixture of multiple relation components for translating an entity pair. Moreover, we present new insights from the generative perspective. • Extensive experiments show that our proposed model obtains substantial improvements against the state-of-the-art baselines. 2 Related Work Translation-Based Embedding. Existing translation-based embedding methods share the same translation principle h + r ≈t and the score function is designed as: fr(h, t) = ||hr + r −tr||2 2 where hr, tr are entity embedding vectors projected in the relation-specific space. TransE (Bordes et al., 2013), lays the entities in the original entity space: hr = h, tr = t. TransH (Wang et al., 2014) projects entities into a hyperplane for addressing the issue of complex relation embedding: hr = h −w⊤ r hwr, tr = t −w⊤ r twr. 
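For concreteness, the score functions of these two baselines can be written in a few lines of NumPy; the vectors are assumed to be toy, untrained embeddings, and lower scores indicate more plausible triples.

```python
import numpy as np

def transe_score(h, r, t):
    # TransE: f_r(h, t) = ||h + r - t||_2^2, entities kept in the original space
    return float(np.sum((h + r - t) ** 2))

def transh_score(h, r, t, w_r):
    # TransH: project h and t onto the relation-specific hyperplane with unit
    # normal w_r, then translate: h_r = h - (w_r . h) w_r, t_r = t - (w_r . t) w_r
    h_r = h - np.dot(w_r, h) * w_r
    t_r = t - np.dot(w_r, t) * w_r
    return float(np.sum((h_r + r - t_r) ** 2))
```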
To address the same issue, TransR (Lin et al., 2015b), transforms the entity embeddings by the same relationspecific matrix: hr = Mrh, tr = Mrt. TransR also proposes an ad-hoc clustering-based method, CTransR, where the entity pairs for a relation are clustered into different groups, and the pairs in the same group share the same relation vector. In comparison, our model is more elegant to address such an issue theoretically, and does not require a pre-process of clustering. Furthermore, our model has much better performance than CTransR, as expected. TransM (Fan et al., 2317 Figure 2: Visualization of multiple relation semantics. The data are selected from Wordnet. The dots are correct triples that belong to HasPart relation, while the circles are incorrect ones. The point coordinate is the difference vector between tail and head entity, which should be near to the centre. (a) The correct triples are hard to be distinguished from the incorrect ones. (b) By applying multiple semantic components, our proposed model could discriminate the correct triples from the wrong ones. 2014) leverages the structure of the knowledge graph via pre-calculating the distinct weight for each training triple to enhance embedding. KG2E (He et al., 2015) is a probabilistic embedding method for modeling the uncertainty in knowledge graph. There are many works to improve translationbased methods by considering other information. For instance, (Guo et al., 2015) aims at discovering the geometric structure of the embedding space to make it semantically smooth. (Wang et al., 2014) focuses on bridging the gap between knowledge and texts, with a loss function for jointly modeling knowledge graph and text resources. (Wang et al., 2015) incorporates the rules that are related with relation types such as 1-N and N-1. PTransE (Lin et al., 2015a) takes into account path information in knowledge graph. Since the previous models are point-wise modeling methods, ManifoldE (Xiao et al., 2016) proposes a novel manifold-based approach for knowledge graph embedding. In aid of kernel tricks, manifold-based methods can improve embedding performance substantially. Structured & Unstructured Embedding. The structured embedding model (Bordes et al., 2011) transforms the entity space with the head-specific and tail-specific matrices. The score function is defined as fr(h, t) = ||Mh,rh −Mt,rt||. According to (Socher et al., 2013), this model cannot capture the relationship between entities. Semantic Matching Energy (SME) (Bordes et al., 2012) (Bordes et al., 2014) can handle the correlations between entities and relations by matrix product and Hadamard product. The unstructured model (Bordes et al., 2012) may be a simplified version of TransE without considering any relation-related information. The score function is directly defined as fr(h, t) = ||h −t||2 2. Neural Network based Embedding. Single Layer Model (SLM) (Socher et al., 2013) applies neural network to knowledge graph embedding. The score function is defined as fr(h, t) = u⊤ r g(Mr,1h + Mr,2t) where Mr,1, Mr,2 are relation-specific weight matrices. Neural Tensor Network (NTN) (Socher et al., 2013) defines a very expressive score function by applying tensor: fr(h, t) = u⊤ r g(h⊤W··rt + Mr,1h + Mr,2t + br), where ur is a relation-specific linear layer, g(·) is the tanh function, W ∈Rd×d×k is a 3-way tensor. Factor Models. 
The latent factor models (Jenatton et al., 2012) (Sutskever et al., 2009) attempt to capturing the second-order correlations between entities by a quadratic form. The score function is defined as fr(h, t) = h⊤Wrt. RESCAL is a collective matrix factorization model which is also a common method in knowledge base embedding (Nickel et al., 2011) (Nickel et al., 2012). 3 Methods 3.1 TransG: A Generative Model for Embedding As just mentioned, only one single translation vector for a relation may be insufficient to model multiple relation semantics. In this paper, we propose to use Bayesian non-parametric infinite mix2318 ture embedding model (Griffiths and Ghahramani, 2011). The generative process of the model is as follows: 1. For an entity e ∈E: (a) Draw each entity embedding mean vector from a standard normal distribution as a prior: ue ∽N(0, 1). 2. For a triple (h, r, t) ∈∆: (a) Draw a semantic component from Chinese Restaurant Process for this relation: πr,m ∼CRP(β). (b) Draw a head entity embedding vector from a normal distribution: h ∽ N(uh, σ2 hE). (c) Draw a tail entity embedding vector from a normal distribution: t ∽ N(ut, σ2 t E). (d) Draw a relation embedding vector for this semantics: ur,m = t −h ∽ N(ut −uh, (σ2 h + σ2 t )E). where uh and ut indicate the mean embedding vector for head and tail respectively, σh and σt indicate the variance of corresponding entity distribution respectively, and ur,m is the m-th component translation vector of relation r. Chinese Restaurant Process (CRP) is a Dirichlet Process and it can automatically detect semantic components. In this setting, we obtain the score function as below: P{(h, r, t)} ∝ Mr X m=1 πr,mP(ur,m|h, t) = Mr X m=1 πr,me − ||uh+ur,m−ut||2 2 σ2 h+σ2 t (1) where πr,m is the mixing factor, indicating the weight of i-th component and Mr is the number of semantic components for the relation r, which is learned from the data automatically by the CRP. Inspired by Fig.1, TransG leverages a mixture of relation component vectors for a specific relation. Each component represents a specific latent meaning. By this way, TransG could distinguish multiple relation semantics. Notably, the CRP could generate multiple semantic components when it is necessary and the relation semantic component number Mr is learned adaptively from the data. Table 1: Statistics of datasets Data WN18 FB15K WN11 FB13 #Rel 18 1,345 11 13 #Ent 40,943 14,951 38,696 75,043 #Train 141,442 483,142 112,581 316,232 #Valid 5,000 50,000 2,609 5,908 #Test 5,000 59,071 10,544 23,733 3.2 Explanation from the Geometry Perspective Similar to previous studies, TransG has geometric explanations. In the previous methods, when the relation r of triple (h, r, t) is given, the geometric representations are fixed, as h + r ≈t. However, TransG generalizes this geometric principle to: m∗ (h,r,t) = arg max m=1...Mr πr,me − ||uh+ur,m−ut||2 2 σ2 h+σ2 t ! h + ur,m∗ (h,r,t) ≈t (2) where m∗ (h,r,t) is the index of primary component. Though all the components contribute to the model, the primary one contributes the most due to the exponential effect (exp(·)). When a triple (h, r, t) is given, TransG works out the index of primary component then translates the head entity to the tail one with the primary translation vector. For most triples, there should be only one component that have significant non-zero value as πr,me − ||uh+ur,m−ut||2 2 σ2 h+σ2 t ! and the others would be small enough, due to the exponential decay. 
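The mixture score of Eq. (1) and the primary-component selection of Eq. (2) are straightforward to write down. The sketch below uses assumed per-entity variances and unnormalised CRP mixing weights, so it illustrates the scoring rule only, not the full training procedure.

```python
import numpy as np

def transg_score(u_h, u_t, u_r, pi_r, sigma_h, sigma_t):
    """Eq. (1): unnormalised plausibility of (h, r, t) as a weighted sum over
    the M_r semantic components of relation r.

    u_h, u_t : mean embedding vectors of head and tail entities, shape (d,)
    u_r      : (M_r, d) array of component translation vectors u_{r,m}
    pi_r     : (M_r,) mixing factors of the relation's components
    """
    var = sigma_h ** 2 + sigma_t ** 2
    comp = pi_r * np.exp(-np.sum((u_h + u_r - u_t) ** 2, axis=1) / var)
    return float(comp.sum())

def primary_component(u_h, u_t, u_r, pi_r, sigma_h, sigma_t):
    # Eq. (2): the component whose weighted term dominates the mixture;
    # the triple is then translated with u_{r, m*}.
    var = sigma_h ** 2 + sigma_t ** 2
    comp = pi_r * np.exp(-np.sum((u_h + u_r - u_t) ** 2, axis=1) / var)
    return int(np.argmax(comp))
```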
This property reduces the noise from the other semantic components to better characterize multiple relation semantics. In detail, (t −h) is almost around only one translation vector ur,m∗ (h,r,t) in TransG. Under the condition m ̸= m∗ (h,r,t),  ||uh+ur,m−ut||2 2 σ2 h+σ2 t  is very large so that the exponential function value is very small. This is why the primary component could represent the corresponding semantics. To summarize, previous studies make translation identically for all the triples of the same relation, but TransG automatically selects the best translation vector according to the specific semantics of a triple. Therefore, TransG could focus on the specific semantic embedding to avoid much noise from the other unrelated semantic components and result in promising improvements than existing methods. Note that, all the components in 2319 Table 2: Evaluation results on link prediction Datasets WN18 FB15K Metric Mean Rank HITS@10(%) Mean Rank HITS@10(%) Raw Filter Raw Filter Raw Filter Raw Filter Unstructured (Bordes et al., 2011) 315 304 35.3 38.2 1,074 979 4.5 6.3 RESCAL (Nickel et al., 2012) 1,180 1,163 37.2 52.8 828 683 28.4 44.1 SE(Bordes et al., 2011) 1,011 985 68.5 80.5 273 162 28.8 39.8 SME(bilinear) (Bordes et al., 2012) 526 509 54.7 61.3 284 158 31.3 41.3 LFM (Jenatton et al., 2012) 469 456 71.4 81.6 283 164 26.0 33.1 TransE (Bordes et al., 2013) 263 251 75.4 89.2 243 125 34.9 47.1 TransH (Wang et al., 2014) 401 388 73.0 82.3 212 87 45.7 64.4 TransR (Lin et al., 2015b) 238 225 79.8 92.0 198 77 48.2 68.7 CTransR (Lin et al., 2015b) 231 218 79.4 92.3 199 75 48.4 70.2 PTransE (Lin et al., 2015a) N/A N/A N/A N/A 207 58 51.4 84.6 KG2E (He et al., 2015) 362 348 80.5 93.2 183 69 47.5 71.5 TransG (this paper) 357 345 84.5 94.9 152 50 55.9 88.2 TransG have their own contributions, but the primary one makes the most. 3.3 Training Algorithm The maximum data likelihood principle is applied for training. As to the non-parametric part, πr,m is generated from the CRP with Gibbs Sampling, similar to (He et al., 2015) and (Griffiths and Ghahramani, 2011). A new component is sampled for a triple (h,r,t) with the below probability: P(mr,new) = βe − ||h−t||2 2 σ2 h+σ2 t +2 βe − ||h−t||2 2 σ2 h+σ2 t +2 + P{(h, r, t)} (3) where P{(h, r, t)} is the current posterior probability. As to other parts, in order to better distinguish the true triples from the false ones, we maximize the ratio of likelihood of the true triples to that of the false ones. Notably, the embedding vectors are initialized by (Glorot and Bengio, 2010). Putting all the other constraints together, the final objective function is obtained, as follows: min uh,ur,m,ut L L = − X (h,r,t)∈∆ ln Mr X m=1 πr,me − ||uh+ur,m−ut||2 2 σ2 h+σ2 t ! + X (h′,r′,t′)∈∆′ ln   Mr X m=1 πr′,me − ||uh′ +ur′,m−ut′ ||2 2 σ2 h′ +σ2 t′   +C X r∈R Mr X m=1 ||ur,m||2 2 + X e∈E ||ue||2 2 ! (4) where ∆is the set of golden triples and ∆′ is the set of false triples. C controls the scaling degree. E is the set of entities and R is the set of relations. Noted that the mixing factors π and the variances σ are also learned jointly in the optimization. SGD is applied to solve this optimization problem. In addition, we apply a trick to control the parameter updating process during training. For those very impossible triples, the update process is skipped. 
Hence, we introduce a similar condition as TransE (Bordes et al., 2013) adopts: the training algorithm will update the embedding vectors only if the below condition is satisfied: P{(h, r, t)} P{(h′, r′, t′)} = PMr m=1 πr,me − ||uh+ur,m−ut||2 2 σ2 h+σ2 t PMr′ m=1 πr′,me − ||uh′ +ur′,m−ut′ ||2 2 σ2 h′ +σ2 t′ ≤ Mreγ (5) where (h, r, t) ∈∆and (h′, r′, t′) ∈∆′. γ controls the updating condition. As to the efficiency, in theory, the time complexity of TransG is bounded by a small constant M compared to TransE, that is O(TransG) = O(M × O(TransE)) where M is the number of semantic components in the model. Note that TransE is the fastest method among translationbased methods. The experiment of Link Prediction shows that TransG and TransE would converge at around 500 epochs, meaning there is also no significant difference in convergence speed. In experiment, TransG takes 4.8s for one iteration on FB15K while TransR costs 136.8s and PTransE 2320 Table 3: Evaluation results on FB15K by mapping properties of relations(%) Tasks Predicting Head(HITS@10) Predicting Tail(HITS@10) Relation Category 1-1 1-N N-1 N-N 1-1 1-N N-1 N-N Unstructured (Bordes et al., 2011) 34.5 2.5 6.1 6.6 34.3 4.2 1.9 6.6 SE(Bordes et al., 2011) 35.6 62.6 17.2 37.5 34.9 14.6 68.3 41.3 SME(bilinear) (Bordes et al., 2012) 30.9 69.6 19.9 38.6 28.2 13.1 76.0 41.8 TransE (Bordes et al., 2013) 43.7 65.7 18.2 47.2 43.7 19.7 66.7 50.0 TransH (Wang et al., 2014) 66.8 87.6 28.7 64.5 65.5 39.8 83.3 67.2 TransR (Lin et al., 2015b) 78.8 89.2 34.1 69.2 79.2 37.4 90.4 72.1 CTransR (Lin et al., 2015b) 81.5 89.0 34.7 71.2 80.8 38.6 90.1 73.8 PTransE (Lin et al., 2015a) 90.1 92.0 58.7 86.1 90.1 70.7 87.5 88.7 KG2E (He et al., 2015) 92.3 93.7 66.0 69.6 92.6 67.9 94.4 73.4 TransG (this paper) 93.0 96.0 62.5 86.8 92.8 68.1 94.5 88.8 costs 1200.0s on the same computer for the same dataset. 4 Experiments Our experiments are conducted on four public benchmark datasets that are the subsets of Wordnet and Freebase, respectively. The statistics of these datasets are listed in Tab.1. Experiments are conducted on two tasks : Link Prediction and Triple Classification. To further demonstrate how the proposed model approaches multiple relation semantics, we present semantic component analysis at the end of this section. 4.1 Link Prediction Link prediction concerns knowledge graph completion: when given an entity and a relation, the embedding models predict the other missing entity. More specifically, in this task, we predict t given (h, r, ∗), or predict h given (∗, r, t). The WN18 and FB15K are two benchmark datasets for this task. Note that many AI tasks could be enhanced by Link Prediction such as relation extraction (Hoffmann et al., 2011). Evaluation Protocol. We adopt the same protocol used in previous studies. For each testing triple (h, r, t), we corrupt it by replacing the tail t (or the head h) with every entity e in the knowledge graph and calculate a probabilistic score of this corrupted triple (h, r, e) (or (e, r, t)) with the score function fr(h, e). After ranking these scores in descending order, we obtain the rank of the original triple. There are two metrics for evaluation: the averaged rank (Mean Rank) and the proportion of testing triple whose rank is not larger than 10 (HITS@10). This is called “Raw” setting. When we filter out the corrupted triples that exist in the training, validation, or test datasets, this is the“Filter” setting. If a corrupted triple exists in the knowledge graph, ranking it ahead the original triple is also acceptable. 
To eliminate this case, the “Filter” setting is preferred. In both settings, a lower Mean Rank and a higher HITS@10 mean better performance. Implementation. As the datasets are the same, we directly report the experimental results of several baselines from the literature, as in (Bordes et al., 2013), (Wang et al., 2014) and (Lin et al., 2015b). We have attempted several settings on the validation dataset to get the best configuration. For example, we have tried the dimensions of 100, 200, 300, 400. Under the “bern.” sampling strategy, the optimal configurations are: learning rate α = 0.001, the embedding dimension k = 100, γ = 2.5, β = 0.05 on WN18; α = 0.0015, k = 400, γ = 3.0, β = 0.1 on FB15K. Note that all the symbols are introduced in “Methods”. We train the model until it converges. Results. Evaluation results on WN18 and FB15K are reported in Tab.2 and Tab.31. We observe that: 1. TransG outperforms all the baselines obviously. Compared to TransR, TransG makes improvements by 2.9% on WN18 and 26.0% on FB15K, and the averaged semantic component number on WN18 is 5.67 and that on FB15K is 8.77. This result demonstrates capturing multiple relation semantics would benefit embedding. 1Note that correctly regularized TransE can produce much better performance than what were reported in the ogirinal paper, see (Garc´ıa-Dur´an et al., 2015). 2321 Table 4: Different clusters in WN11 and FB13 relations. Relation Cluster Triples (Head, Tail) PartOf Location (Capital of Utah, Beehive State), (Hindustan, Bharat) ... Composition (Monitor, Television), (Bush, Adult Body), (Cell Organ, Cell)... Religion Catholicism (Cimabue, Catholicism), (St.Catald, Catholicism) ... Others (Michal Czajkowsk, Islam), (Honinbo Sansa, Buddhism) ... DomainRegion Abstract (Computer Science, Security System), (Computer Science, PL).. Specific (Computer Science, Router), (Computer Science, Disk File) ... Profession Scientist (Michael Woodruf, Surgeon), (El Lissitzky, Architect)... Businessman (Enoch Pratt, Entrepreneur), (Charles Tennant, Magnate)... Writer (Vlad. Gardin, Screen Writer), (John Huston, Screen Writer) ... 2. The model has a bad Mean Rank score on the WN18 dataset. Further analysis shows that there are 24 testing triples (0.5% of the testing set) whose ranks are more than 30,000, and these few cases would lead to about 150 mean rank loss. Among these triples, there are 23 triples whose tail or head entities have never been co-occurring with the corresponding relations in the training set. In one word, there is no sufficient training data for those relations and entities. 3. Compared to CTransR, TransG solves the multiple relation semantics problem much better for two reasons. Firstly, CTransR clusters the entity pairs for a specific relation and then performs embedding for each cluster, but TransG deals with embedding and multiple relation semantics simultaneously, where the two processes can be enhanced by each other. Secondly, CTransR models a triple by only one cluster, but TransG applies a mixture to refine the embedding. Our model is almost insensitive to the dimension if that is sufficient. For the dimensions of 100, 200, 300, 400, the HITS@10 of TransG on FB15 are 81.8%, 84.0%, 85.8%, 88.2%, while those of TransE are 47.1%, 48.5%, 51.3%, 49.2%. 4.2 Triple Classification In order to testify the discriminative capability between true and false facts, triple classification is conducted. 
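A minimal sketch of this ranking protocol is given below; it assumes the per-candidate scores have already been computed with Eq. (1), and only shows the tail-replacement direction (the head side is symmetric).

```python
def gold_rank(candidate_scores, gold_entity, known_entities=frozenset()):
    """Rank of the gold entity when every entity is substituted into the
    corrupted position and scored; higher scores rank first, matching the
    descending order of the protocol. Passing the entities that already form
    true triples in train/valid/test as `known_entities` gives the "Filter"
    setting; an empty set gives the "Raw" setting."""
    gold_score = candidate_scores[gold_entity]
    better = sum(1 for e, s in candidate_scores.items()
                 if e != gold_entity and e not in known_entities and s > gold_score)
    return better + 1

def mean_rank_and_hits10(ranks):
    # the two reported metrics: averaged rank, and proportion of ranks <= 10
    return sum(ranks) / len(ranks), sum(r <= 10 for r in ranks) / len(ranks)
```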
This is a classical task in knowledge base embedding, which aims at predicting whether a given triple (h, r, t) is correct or not. WN11 and FB13 are the benchmark datasets for this task. Note that evaluation of classification needs negative samples, and the datasets have already provided negative triples. Figure 3: Accuracies of each relations in WN11 for triple classification. The right y-axis is the number of semantic components, corresponding to the lines. Evaluation Protocol. The decision process is very simple as follows: for a triple (h, r, t), if fr(h, t) is below a threshold σr, then positive; otherwise negative. The thresholds {σr} are determined on the validation dataset. Table 5: Triple classification: accuracy(%) for different embedding methods. Methods WN11 FB13 AVG. LFM 73.8 84.3 79.0 NTN 70.4 87.1 78.8 TransE 75.9 81.5 78.7 TransH 78.8 83.3 81.1 TransR 85.9 82.5 84.2 CTransR 85.7 N/A N/A KG2E 85.4 85.3 85.4 TransG 87.4 87.3 87.4 Implementation. As all methods use the same datasets, we directly re-use the results of different methods from the literature. We have attempted several settings on the validation dataset to find 2322 Figure 4: Semantic component number on WN18 (left) and FB13 (right). the best configuration. The optimal configurations of TransG are as follows: “bern” sampling, learning rate α = 0.001, k = 50, γ = 6.0, β = 0.1 on WN11, and “bern” sampling, α = 0.002, k = 400, γ = 3.0, β = 0.1 on FB13. Results. Accuracies are reported in Tab.5 and Fig.3. The following are our observations: 1. TransG outperforms all the baselines remarkably. Compared to TransR, TransG improves by 1.7% on WN11 and 5.8% on FB13, and the averaged semantic component number on WN11 is 2.63 and that on FB13 is 4.53. This result shows the benefit of capturing multiple relation semantics for a relation. 2. The relations, such as “Synset Domain” and “Type Of”, which hold more semantic components, are improved much more. In comparison, the relation “Similar” holds only one semantic component and is almost not promoted. This further demonstrates that capturing multiple relation semantics can benefit embedding. 4.3 Semantic Component Analysis In this subsection, we analyse the number of semantic components for different relations and list the component number on the dataset WN18 and FB13 in Fig.4. Results. As Fig. 4 and Tab. 4 show, we have the following observations: 1. Multiple semantic components are indeed necessary for most relations. Except for relations such as “Also See”, “Synset Usage” and “Gender”, all other relations have more than one semantic component. 2. Different components indeed correspond to different semantics, justifying the theoretical analysis and effectiveness of TransG. For example, “Profession” has at least three semantics: scientistrelated as (ElLissitzky, Architect), businessman-related as (EnochPratt, Entrepreneur) and writerrelated as (Vlad.Gardin, ScreenWriter). 3. WN11 and WN18 are different subsets of Wordnet. As we know, the semantic component number is decided on the triples in the dataset. Therefore, It’s reasonable that similar relations, such as “Synset Domain” and “Synset Usage” may hold different semantic numbers for WN11 and WN18. 5 Conclusion In this paper, we propose a generative Bayesian non-parametric infinite mixture embedding model, TransG, to address a new issue, multiple relation semantics, which can be commonly seen in knowledge graph. 
TransG can discover the latent semantics of a relation automatically and leverage a mixture of relation components for embedding. Extensive experiments show our method achieves substantial improvements against the state-of-theart baselines. 6 Acknowledgements This work was partly supported by the National Basic Research Program (973 Program) under grant No. 2012CB316301/2013CB329403, the National Science Foundation of China under grant 2323 No. 61272227/61332007, and the Beijing Higher Education Young Elite Teacher Project. References Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247–1250. ACM. Antoine Bordes, Jason Weston, Ronan Collobert, Yoshua Bengio, et al. 2011. Learning structured embeddings of knowledge bases. In Proceedings of the Twenty-fifth AAAI Conference on Artificial Intelligence. Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. 2012. Joint learning of words and meaning representations for open-text semantic parsing. In International Conference on Artificial Intelligence and Statistics, pages 127–135. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in Neural Information Processing Systems, pages 2787–2795. Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. 2014. A semantic matching energy function for learning with multi-relational data. Machine Learning, 94(2):233–259. Miao Fan, Qiang Zhou, Emily Chang, and Thomas Fang Zheng. 2014. Transition-based knowledge graph embedding with relational mapping properties. In Proceedings of the 28th Pacific Asia Conference on Language, Information, and Computation, pages 328–337. Alberto Garc´ıa-Dur´an, Antoine Bordes, Nicolas Usunier, and Yves Grandvalet. 2015. Combining two and three-way embeddings models for link prediction in knowledge bases. CoRR, abs/1506.00999. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In International conference on artificial intelligence and statistics, pages 249–256. Thomas L Griffiths and Zoubin Ghahramani. 2011. The indian buffet process: An introduction and review. The Journal of Machine Learning Research, 12:1185–1224. Shu Guo, Quan Wang, Bin Wang, Lihong Wang, and Li Guo. 2015. Semantically smooth knowledge graph embedding. In Proceedings of ACL. Shizhu He, Kang Liu, Guoliang Ji, and Jun Zhao. 2015. Learning to represent knowledge graphs with gaussian embedding. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pages 623–632. ACM. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledgebased weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language TechnologiesVolume 1, pages 541–550. Association for Computational Linguistics. Rodolphe Jenatton, Nicolas L Roux, Antoine Bordes, and Guillaume R Obozinski. 2012. A latent factor model for highly multi-relational data. In Advances in Neural Information Processing Systems, pages 3167–3175. Yankai Lin, Zhiyuan Liu, and Maosong Sun. 2015a. Modeling relation paths for representation learning of knowledge bases. 
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics. Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015b. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39–41. Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th international conference on machine learning (ICML-11), pages 809–816. Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2012. Factorizing yago: scalable machine learning for linked data. In Proceedings of the 21st international conference on World Wide Web, pages 271–280. ACM. Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems, pages 926–934. Ilya Sutskever, Joshua B Tenenbaum, and Ruslan Salakhutdinov. 2009. Modelling relational data using bayesian clustered tensor factorization. In Advances in neural information processing systems, pages 1821–1828. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the TwentyEighth AAAI Conference on Artificial Intelligence, pages 1112–1119. 2324 Quan Wang, Bin Wang, and Li Guo. 2015. Knowledge base completion using embeddings and rules. In Proceedings of the 24th International Joint Conference on Artificial Intelligence. Han Xiao, Minlie Huang, and Xiaoyan Zhu. 2016. From one point to a manifold: Knowledge graph embedding for precise link prediction. In IJCAI. 2325
2016
219
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 226–235, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Compressing Neural Language Models by Sparse Word Representations Yunchuan Chen,1,2 Lili Mou,1,3 Yan Xu,1,3 Ge Li,1,3 Zhi Jin1,3,∗ 1Key Laboratory of High Confidence Software Technologies (Peking University), MoE, China 2University of Chinese Academy of Sciences, [email protected] 3Institute of Software, Peking University, [email protected], {xuyan14,lige,zhijin}@pku.edu.cn ∗Corresponding author Abstract Neural networks are among the state-ofthe-art techniques for language modeling. Existing neural language models typically map discrete words to distributed, dense vector representations. After information processing of the preceding context words by hidden layers, an output layer estimates the probability of the next word. Such approaches are time- and memory-intensive because of the large numbers of parameters for word embeddings and the output layer. In this paper, we propose to compress neural language models by sparse word representations. In the experiments, the number of parameters in our model increases very slowly with the growth of the vocabulary size, which is almost imperceptible. Moreover, our approach not only reduces the parameter space to a large extent, but also improves the performance in terms of the perplexity measure.1 1 Introduction Language models (LMs) play an important role in a variety of applications in natural language processing (NLP), including speech recognition and document recognition. In recent years, neural network-based LMs have achieved significant breakthroughs: they can model language more precisely than traditional n-gram statistics (Mikolov et al., 2011); it is even possible to generate new sentences from a neural LM, benefiting various downstream tasks like machine translation, summarization, and dialogue systems (Devlin et al., 2014; Rush et al., 2015; Sordoni et al., 2015; Mou et al., 2015b). 1Code released on https://github.com/chenych11/lm Existing neural LMs typically map a discrete word to a distributed, real-valued vector representation (called embedding) and use a neural model to predict the probability of each word in a sentence. Such approaches necessitate a large number of parameters to represent the embeddings and the output layer’s weights, which is unfavorable in many scenarios. First, with a wider application of neural networks in resourcerestricted systems (Hinton et al., 2015), such approach is too memory-consuming and may fail to be deployed in mobile phones or embedded systems. Second, as each word is assigned with a dense vector—which is tuned by gradient-based methods—neural LMs are unlikely to learn meaningful representations for infrequent words. The reason is that infrequent words’ gradient is only occasionally computed during training; thus their vector representations can hardly been tuned adequately. In this paper, we propose a compressed neural language model where we can reduce the number of parameters to a large extent. To accomplish this, we first represent infrequent words’ embeddings with frequent words’ by sparse linear combinations. This is inspired by the observation that, in a dictionary, an unfamiliar word is typically defined by common words. We therefore propose an optimization objective to compute the sparse codes of infrequent words. 
The property of sparseness (only 4–8 values for each word) ensures the efficiency of our model. Based on the pre-computed sparse codes, we design our compressed language model as follows. A dense embedding is assigned to each common word; an infrequent word, on the other hand, computes its vector representation by a sparse combination of common words’ embeddings. We use the long short term memory (LSTM)-based recurrent neural network (RNN) as the hidden layer of 226 our model. The weights of the output layer are also compressed in a same way as embeddings. Consequently, the number of trainable neural parameters is a constant regardless of the vocabulary size if we ignore the biases of words. Even considering sparse codes (which are very small), we find the memory consumption grows imperceptibly with respect to the vocabulary. We evaluate our LM on the Wikipedia corpus containing up to 1.6 billion words. During training, we adopt noise-contrastive estimation (NCE) (Gutmann and Hyv¨arinen, 2012) to estimate the parameters of our neural LMs. However, different from Mnih and Teh (2012), we tailor the NCE method by adding a regression layer (called ZRegressoion) to predict the normalization factor, which stabilizes the training process. Experimental results show that, our compressed LM not only reduces the memory consumption, but also improves the performance in terms of the perplexity measure. To sum up, the main contributions of this paper are three-fold. (1) We propose an approach to represent uncommon words’ embeddings by a sparse linear combination of common ones’. (2) We propose a compressed neural language model based on the pre-computed sparse codes. The memory increases very slowly with the vocabulary size (4– 8 values for each word). (3) We further introduce a ZRegression mechanism to stabilize the NCE algorithm, which is potentially applicable to other LMs in general. 2 Background 2.1 Standard Neural LMs Language modeling aims to minimize the joint probability of a corpus (Jurafsky and Martin, 2014). Traditional n-gram models impose a Markov assumption that a word is only dependent on previous n −1 words and independent of its position. When estimating the parameters, researchers have proposed various smoothing techniques including back-off models to alleviate the problem of data sparsity. Bengio et al. (2003) propose to use a feedforward neural network (FFNN) to replace the multinomial parameter estimation in n-gram models. Recurrent neural networks (RNNs) can also be used for language modeling; they are especially capable of capturing long range dependencies in sentences (Mikolov et al., 2010; Sundermeyer et Figure 1: The architecture of a neural networkbased language model. al., 2015). In the above models, we can view that a neural LM is composed of three main parts, namely the Embedding, Encoding, and Prediction subnets, as shown in Figure 1. The Embedding subnet maps a word to a dense vector, representing some abstract features of the word (Mikolov et al., 2013). Note that this subnet usually accepts a list of words (known as history or context words) and outputs a sequence of word embeddings. The Encoding subnet encodes the history of a target word into a dense vector (known as context or history representation). We may either leverage FFNNs (Bengio et al., 2003) or RNNs (Mikolov et al., 2010) as the Encoding subnet, but RNNs typically yield a better performance (Sundermeyer et al., 2015). 
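As a rough illustration of this three-part view, the following skeleton wires an Embedding subnet, an LSTM-based Encoding subnet and a Prediction subnet (detailed next) together. PyTorch is used purely for exposition and is not assumed to be the paper's framework; the dimensions follow the notation E, C and V used below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralLM(nn.Module):
    """Illustrative skeleton of the Embedding / Encoding / Prediction subnets."""
    def __init__(self, vocab_size, embed_dim, context_dim):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)   # Embedding subnet
        self.encoder = nn.LSTM(embed_dim, context_dim,
                               batch_first=True)                # Encoding subnet (RNN)
        self.output = nn.Linear(context_dim, vocab_size)        # Prediction subnet: W, b

    def forward(self, history):                  # history: (batch, seq_len) word ids
        embedded = self.embedding(history)       # (batch, seq_len, E)
        states, _ = self.encoder(embedded)       # (batch, seq_len, C)
        h = states[:, -1, :]                     # context vector for the next word
        return F.log_softmax(self.output(h), dim=-1)  # log p(w | h)
```

The embedding table and the output layer are the two components whose size grows with the vocabulary, and they are exactly the parts this paper sets out to compress.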
The Prediction subnet outputs a distribution of target words as p(w = wi|h) = exp(s(h, wi)) P j exp(s(h, wj)), (1) s(h, wi) =W ⊤ i h + bi, (2) where h is the vector representation of context/history h, obtained by the Encoding subnet. W = (W1, W2, . . . , WV ) ∈RC×V is the output weights of Prediction; b = (b1, b2, . . . , bV ) ∈ RC is the bias (the prior). s(h, wi) is a scoring function indicating the degree to which the context h matches a target word wi. (V is the size of vocabulary V; C is the dimension of context/history, given by the Encoding subnet.) 2.2 Complexity Concerns of Neural LMs Neural network-based LMs can capture more precise semantics of natural language than n-gram models because the regularity of the Embedding subnet extracts meaningful semantics of a word 227 and the high capacity of Encoding subnet enables complicated information processing. Despite these, neural LMs also suffer from several disadvantages mainly out of complexity concerns. Time complexity. Training neural LMs is typically time-consuming especially when the vocabulary size is large. The normalization factor in Equation (1) contributes most to time complexity. Morin and Bengio (2005) propose hierarchical softmax by using a Bayesian network so that the probability is self-normalized. Sampling techniques—for example, importance sampling (Bengio and Sen´ecal, 2003), noise-contrastive estimation (Gutmann and Hyv¨arinen, 2012), and target sampling (Jean et al., 2014)—are applied to avoid computation over the entire vocabulary. Infrequent normalization maximizes the unnormalized likelihood with a penalty term that favors normalized predictions (Andreas and Klein, 2014). Memory complexity and model complexity. The number of parameters in the Embedding and Prediction subnets in neural LMs increases linearly with respect to the vocabulary size, which is large (Table 1). As said in Section 1, this is sometimes unfavorable in memory-restricted systems. Even with sufficient hardware resources, it is problematic because we are unlikely to fully tune these parameters. Chen et al. (2015) propose the differentiated softmax model by assigning fewer parameters to rare words than to frequent words. However, their approach only handles the output weights, i.e., W in Equation (2); the input embeddings remain uncompressed in their approach. In this work, we mainly focus on memory and model complexity, i.e., we propose a novel method to compress the Embedding and Prediction subnets in neural language models. 2.3 Related Work Existing work on model compression for neural networks. Buciluˇa et al. (2006) and Hinton et al. (2015) use a well-trained large network to guide the training of a small network for model compression. Jaderberg et al. (2014) compress neural models by matrix factorization, Gong et al. (2014) by quantization. In NLP, Mou et al. (2015a) learn an embedding subspace by supervised training. Our work resembles little, if any, to the above methods as we compress embeddings and output weights using sparse word representations. Existing model Sub-nets RNN-LSTM FFNN Embedding V E V E Encoding 4(CE + C2 + C) nCE + C Prediction V (C + 1) V (C + 1) TOTAL† O((C + E)V ) O((E + C)V ) Table 1: Number of parameters in different neural network-based LMs. E: embedding dimension; C: context dimension; V : vocabulary size. †Note that V ≫C (or E). compression typically works with a compromise of performance. On the contrary, our model improves the perplexity measure after compression. Sparse word representations. 
We leverage sparse codes of words to compress neural LMs. Faruqui et al. (2015) propose a sparse coding method to represent each word with a sparse vector. They solve an optimization problem to obtain the sparse vectors of words as well as a dictionary matrix simultaneously. By contrast, we do not estimate any dictionary matrix when learning sparse codes, which results in a simple and easyto-optimize model. 3 Our Proposed Model In this section, we describe our compressed language model in detail. Subsection 3.1 formalizes the sparse representation of words, serving as the premise of our model. On such a basis, we compress the Embedding and Prediction subnets in Subsections 3.2 and 3.3, respectively. Finally, Subsection 3.4 introduces NCE for parameter estimation where we further propose the ZRegression mechanism to stabilize our model. 3.1 Sparse Representations of Words We split the vocabulary V into two disjoint subsets (B and C). The first subset B is a base set, containing a fixed number of common words (8k in our experiments). C = V\B is a set of uncommon words. We would like to use B’s word embeddings to encode C’s. Our intuition is that oftentimes a word can be defined by a few other words, and that rare words should be defined by common ones. Therefore, it is reasonable to use a few common words’ embeddings to represent that of a rare word. Following most work in the literature (Lee et al., 2006; Yang et al., 2011), we represent each uncommon word with a sparse, linear combination of com228 mon ones’ embeddings. The sparse coefficients are called a sparse code for a given word. We first train a word representation model like SkipGram (Mikolov et al., 2013) to obtain a set of embeddings for each word in the vocabulary, including both common words and rare words. Suppose U = (U1, U2, . . . , UB) ∈RE×B is the (learned) embedding matrix of common words, i.e., Ui is the embedding of i-th word in B. (Here, B = |B|.) Each word in B has a natural sparse code (denoted as x): it is a one-hot vector with B elements, the i-th dimension being on for the i-th word in B. For a word w ∈C, we shall learn a sparse vector x = (x1, x2, . . . , xB) as the sparse code of the word. Provided that x has been learned (which will be introduced shortly), the embedding of w is ˆw = B X j=1 xjUj = Ux, (3) To learn the sparse representation of a certain word w, we propose the following optimization objective min x ∥Ux −w∥2 2 + α∥x∥1 + β|1⊤x −1| + γ1⊤max{0, −x}, (4) where max denotes the component-wise maximum; w is the embedding for a rare word w ∈C. The first term (called fitting loss afterwards) evaluates the closeness between a word’s coded vector representation and its “true” representation w, which is the general goal of sparse coding. The second term is an ℓ1 regularizer, which encourages a sparse solution. The last two regularization terms favor a solution that sums to 1 and that is nonnegative, respectively. The nonnegative regularizer is applied as in He et al. (2012) due to psychological interpretation concerns. It is difficult to determine the hyperparameters α, β, and γ. Therefore we perform several tricks. First, we drop the last term in the problem (4), but clip each element in x so that all the sparse codes are nonnegative during each update of training. Second, we re-parametrize α and β by balancing the fitting loss and regularization terms dynamically during training. 
Concretely, we solve the following optimization problem, which is slightly different but closely related to the conceptual objective (4): min x L(x) + αtR1(x) + βtR2(x), (5) where L(x) = ∥Ux −w∥2 2, R1(x) = ∥x∥1, and R2(x) = |1⊤x−1|. αt and βt are adaptive parameters that are resolved during training time. Suppose xt is the value we obtain after the update of the t-th step, we expect the importance of fitness and regularization remain unchanged during training. This is equivalent to αtR1(xt) L(xt) = wα ≡const, (6) βtR2(xt) L(xt) = wβ ≡const. (7) or αt = L(xt) R1(xt)wα and βt = L(xt) R2(xt)wβ, where wα and wβ are the ratios between the regularization loss and the fitting loss. They are much easier to specify than α or β in the problem (4). We have two remarks as follows. • To learn the sparse codes, we first train the “true” embeddings by word2vec2 for both common words and rare words. However, these true embeddings are slacked during our language modeling. • As the codes are pre-computed and remain unchanged during language modeling, they are not tunable parameters of our neural model. Considering the learned sparse codes, we need only 4–8 values for each word on average, as the codes contain 0.05–0.1% nonzero values, which are almost negligible. 3.2 Parameter Compression for the Embedding Subnet One main source of LM parameters is the Embedding subnet, which takes a list of words (history/context) as input, and outputs dense, lowdimensional vector representations of the words. We leverage the sparse representation of words mentioned above to construct a compressed Embedding subnet, where the number of parameters is independent of the vocabulary size. By solving the optimization problem (5) for each word, we obtain a non-negative sparse code x ∈RB for each word, indicating the degree to which the word is related to common words in B. Then the embedding of a word is given by ˆw = Ux. 2https://code.google.com/archive/p/word2vec 229 We would like to point out that the embedding of a word ˆw is not sparse because U is a dense matrix, which serves as a shared parameter of learning all words’ vector representations. 3.3 Parameter Compression for the Prediction Subnet Another main source of parameters is the Prediction subnet. As Table 1 shows, the output layer contains V target-word weight vectors and biases; the number increases with the vocabulary size. To compress this part of a neural LM, we propose a weight-sharing method that uses words’ sparse representations again. Similar to the compression of word embeddings, we define a base set of weight vectors, and use them to represent the rest weights by sparse linear combinations. Without loss of generality, we let D = W:,1:B be the output weights of B base target words, and c = b1:B be bias of the B target words.3 The goal is to use D and c to represent W and b. However, as the values of W and b are unknown before the training of LM, we cannot obtain their sparse codes in advance. We claim that it is reasonable to share the same set of sparse codes to represent word vectors in Embedding and the output weights in the Prediction subnet. In a given corpus, an occurrence of a word is always companied by its context. The co-occurrence statistics about a word or corresponding context are the same. As both word embedding and context vectors capture these co-occurrence statistics (Levy and Goldberg, 2014), we can expect that context vectors share the same internal structure as embeddings. 
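As an aside, the sparse-code learning of problem (5) with the adaptive weights (6)–(7) can be sketched as below. This is a simplified illustration only: a plain (sub)gradient loop stands in for the actual optimizer, the initialisation is arbitrary, and non-negativity is enforced by clipping after each update as described above.

```python
import numpy as np

def learn_sparse_code(U, w, w_alpha=1.0, w_beta=0.1, lr=0.01, steps=2000):
    """Sketch of solving problem (5): find x that reconstructs the rare-word
    vector w from the common-word embedding matrix U (E x B), with adaptive
    alpha_t, beta_t as in (6)-(7)."""
    B = U.shape[1]
    x = np.full(B, 1.0 / B)                       # simple uniform initialisation
    for _ in range(steps):
        residual = U @ x - w
        fit = residual @ residual                 # L(x)  = ||Ux - w||^2
        r1 = np.abs(x).sum()                      # R1(x) = ||x||_1
        r2 = abs(x.sum() - 1.0)                   # R2(x) = |1^T x - 1|
        alpha_t = w_alpha * fit / max(r1, 1e-8)   # keeps alpha_t * R1 ~ w_alpha * L
        beta_t = w_beta * fit / max(r2, 1e-8)     # keeps beta_t  * R2 ~ w_beta  * L
        grad = (2.0 * U.T @ residual
                + alpha_t * np.sign(x)
                + beta_t * np.sign(x.sum() - 1.0))
        x = np.clip(x - lr * grad, 0.0, None)     # non-negativity via clipping
    return x
```

After convergence, near-zero coefficients can be pruned, which is what leaves only a handful of nonzero entries (the 4–8 values per word noted above).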
Moreover, for a fine-trained network, given any word w and its context h, the output layer’s weight vector corresponding to w should specify a large inner-product score for the context h; thus these context vectors should approximate the weight vector of w. Therefore, word embeddings and the output weight vectors should share the same internal structures and it is plausible to use a same set of sparse representations for both words and target-word weight vectors. As we shall show in Section 4, our treatment of compressing the Prediction subnet does make sense and achieves high performance. Formally, the i-th output weight vector is estimated by ˆ Wi = Dxi, (8) 3W:,1:B is the first B columns of W . Figure 2: Compressing the output of neural LM. We apply NCE to estimate the parameters of the Prediction sub-network (dashed round rectangle). The SpUnnrmProb layer outputs a sparse, unnormalized probability of the next word. By “sparsity,” we mean that, in NCE, the probability is computed for only the “true” next word (red) and a few generated negative samples. The biases can also be compressed as ˆbi = cxi. (9) where xi is the sparse representation of the i-th word. (It is shared in the compression of weights and biases.) In the above model, we have managed to compressed a language model whose number of parameters is irrelevant to the vocabulary size. To better estimate a “prior” distribution of words, we may alternatively assign an independent bias to each word, i.e., b is not compressed. In this variant, the number of model parameters grows very slowly and is also negligible because each word needs only one extra parameter. Experimental results show that by not compressing the bias vector, we can even improve the performance while compressing LMs. 3.4 Noise-Contrastive Estimation with ZRegression We adopt the noise-contrastive estimation (NCE) method to train our model. Compared with the maximum likelihood estimation of softmax, NCE reduces computational complexity to a large degree. We further propose the ZRegression mechanism to stablize training. NCE generates a few negative samples for each positive data sample. During training, we only 230 need to compute the unnormalized probability of these positive and negative samples. Interested readers are referred to (Gutmann and Hyv¨arinen, 2012) for more information. Formally, the estimated probability of the word wi with history/context h is P(w|h; θ) = 1 Zh P 0(wi|h; θ) = 1 Zh exp(s(wi, h; θ)), (10) where θ is the parameters and Zh is a contextdependent normalization factor. P 0(wi|h; θ) is the unnormalized probability of the w (given by the SpUnnrmProb layer in Figure 2). The NCE algorithm suggests to take Zh as parameters to optimize along with θ, but it is intractable for context with variable lengths or large sizes in language modeling. Following Mnih and Teh (2012), we set Zh = 1 for all h in the base model (without ZRegression). The objective for each occurrence of context/history h is J(θ|h) = log P(wi|h; θ) P(wi|h; θ) + kPn(wi)+ k X j=1 log kPn(wj) P(wj|h; θ) + kPn(wj), where Pn(w) is the probability of drawing a negative sample w; k is the number of negative samples that we draw for each positive sample. The overall objective of NCE is J(θ) = Eh[J(θ|h)] ≈1 M M X i=1 J(θ|hi), where hi is an occurrence of the context and M is the total number of context occurrences. Although setting Zh to 1 generally works well in our experiment, we find that in certain scenarios, the model is unstable. 
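To show how Eqs. (8)–(9) interact with NCE, here is a hedged sketch of computing the unnormalised probabilities needed for one training step, i.e. for the true next word and its k noise samples only, with Zh fixed to 1 as in the base model. The sparse codes are kept as a dense matrix X purely for readability; in practice only their few nonzero entries would be stored.

```python
import numpy as np

def nce_unnormalised_probs(h, word_ids, X, D, c):
    """Sketch: unnormalised P0(w|h) for a handful of words under NCE (Z_h = 1).
    X holds the pre-computed sparse codes (B x V), D the base output weights
    (C x B) and c the base biases (length B), following Eqs. (8)-(9)."""
    probs = []
    for i in word_ids:                 # the true word plus k noise samples
        x_i = X[:, i]                  # sparse code of word i
        W_i = D @ x_i                  # compressed output weight, Eq. (8)
        b_i = c @ x_i                  # compressed bias, Eq. (9)
        s = W_i @ h + b_i              # score s(h, w_i)
        probs.append(np.exp(s))        # P0(w_i | h) with Z_h = 1
    return np.array(probs)
```

In the variant that keeps an independent bias per word, b_i would simply be read from a length-V vector instead of being reconstructed from c.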
Experiments show that when the true normalization factor is far away from 1, the cost function may vibrate. To comply with NCE in general, we therefore propose a ZRegression layer to predict the normalization constant Zh dependent on h, instead of treating it as a constant. The regression layer is computed by Z−1 h = exp(W ⊤ Z h + bZ), Partitions Running words Train (n-gram) 1.6 B Train (neural LMs) 100 M Dev 100 K Test 5 M Table 2: Statistics of our corpus. where WZ ∈RC and bZ ∈R are weights and bias for ZRegression. Hence, the estimated probability by NCE with ZRegression is given by P(w|h) = exp(s(h, w)) · exp(W ⊤ Z h + bZ). Note that the ZRegression layer does not guarantee normalized probabilities. During validation and testing, we explicitly normalize the probabilities by Equation (1). 4 Evaluation In this part, we first describe our dataset in Subsection 4.1. We evaluate our learned sparse codes of rare words in Subsection 4.2 and the compressed language model in Subsection 4.3. Subsection 4.4 provides in-depth analysis of the ZRegression mechanism. 4.1 Dataset We used the freely available Wikipedia4 dump (2014) as our dataset. We extracted plain sentences from the dump and removed all markups. We further performed several steps of preprocessing such as text normalization, sentence splitting, and tokenization. Sentences were randomly shuffled, so that no information across sentences could be used, i.e., we did not consider cached language models. The resulting corpus contains about 1.6 billion running words. The corpus was split into three parts for training, validation, and testing. As it is typically timeconsuming to train neural networks, we sampled a subset of 100 million running words to train neural LMs, but the full training set was used to train the backoff n-gram models. We chose hyperparameters by the validation set and reported model performance on the test set. Table 2 presents some statistics of our dataset. 4.2 Qualitative Analysis of Sparse Codes To obtain words’ sparse codes, we chose 8k common words as the “dictionary,” i.e., B = 8000. 4http://en.wikipedia.org 231 Figure 3: The sparse representations of selected words. The x-axis is the dictionary of 8k common words; the y-axis is the coefficient of sparse coding. Note that algorithm, secret, and debate are common words, each being coded by itself with a coefficient of 1. We had 2k–42k uncommon words in different settings. We first pretrained word embeddings of both rare and common words, and obtained 200d vectors U and w in Equation (5). The dimension was specified in advance and not tuned. As there is no analytic solution to the objective, we optimized it by Adam (Kingma and Ba, 2014), which is a gradient-based method. To filter out small coefficients around zero, we simply set a value to 0 if it is less than 0.015 · max{v ∈x}. wα in Equation (6) was set to 1 because we deemed fitting loss and sparsity penalty are equally important. We set wβ in Equation (7) to 0.1, and this hyperparameter is insensitive. Figure 3 plots the sparse codes of a few selected words. As we see, algorithm, secret, and debate are common words, and each is (sparsely) coded by itself with a coefficient of 1. We further notice that a rare word like algorithms has a sparse representation with only a few non-zero coefficient. Moreover, the coefficient in the code of algorithms—corresponding to the base word algorithm—is large (∼0.6), showing that the words algorithm and algorithms are similar. 
Such phenomena are also observed with secret and debate. The qualitative analysis demonstrates that our approach can indeed learn a sparse code of a word, and that the codes are meaningful. 4.3 Quantitative Analysis of Compressed Language Models We then used the pre-computed sparse codes to compress neural LMs, which provides quantitative analysis of the learned sparse representations of words. We take perplexity as the performance measurement of a language model, which is defined by PPL = 2−1 N PN i=1 log2 p(wi|hi) where N is the number of running words in the test corpus. 4.3.1 Settings We leveraged LSTM-RNN as the Encoding subnet, which is a prevailing class of neural networks for language modeling (Sundermeyer et al., 2015; Karpathy et al., 2015). The hidden layer was 200d. We used the Adam algorithm to train our neural models. The learning rate was chosen by validation from {0.001, 0.002, 0.004, 0.006, 0.008}. Parameters were updated with a mini-batch size of 256 words. We trained neural LMs by NCE, where we generated 50 negative samples for each positive data sample in the corpus. All our model variants and baselines were trained with the same pre-defined hyperparameters or tuned over a same candidate set; thus our comparison is fair. We list our compressed LMs and competing methods as follows. • KN3. We adopted the modified Kneser-Ney smoothing technique to train a 3-gram LM; we used the SRILM toolkit (Stolcke and others, 2002) in out experiment. • LBL5. A Log-BiLinear model introduced in Mnih and Hinton (2007). We used 5 preceding words as context. • LSTM-s. A standard LSTM-RNN language model which is applied in Sundermeyer et al. (2015) and Karpathy et al. (2015). We implemented the LM ourselves based on Theano (Theano Development Team, 2016) and also used NCE for training. • LSTM-z. An LSTM-RNN enhanced with the ZRegression mechanism described in Section 3.4. • LSTM-z,wb. Based on LSTM-z, we compressed word embeddings in Embedding and the output weights and biases in Prediction. • LSTM-z,w. In this variant, we did not compress the bias term in the output layer. For each word in C, we assigned an independent bias parameter. 4.3.2 Performance Tables 3 shows the perplexity of our compressed model and baselines. As we see, LSTM-based LMs significantly outperform the log-bilinear 232 Vocabulary 10k 22k 36k 50k KN3† 90. 4 125.3 146.4 159.9 LBL5 116. 6 167.0 199.5 220.3 LSTM-s 107. 3 159.5 189.4 222.1 LSTM-z 75. 1 104.4 119.6 130.6 LSTM-z,wb 73. 7 103.4 122.9 138.2 LSTM-z,w 72. 9 101.9 119.3 129.2 Table 3: Perplexity of our compressed language models and baselines. †Trained with the full corpus of 1.6 billion running words. Vocabulary 10k 22k 36k 50k LSTM-z,w 17.76 59.28 73.42 79.75 LSTM-z,wb 17.80 59.44 73.61 79.95 Table 4: Memory reduction (%) by our proposed methods in comparison with the uncompressed model LSTM-z. The memory of sparse codes are included. Figure 4: Fine-grained plot of performance (perplexity) and memory consumption (including sparse codes) versus the vocabulary size. model as well as the backoff 3-gram LM, even if the 3-gram LM is trained on a much larger corpus with 1.6 billion words. The ZRegression mechanism improves the performance of LSTM to a large extent, which is unexpected. Subsection 4.4 will provide more in-depth analysis. Regarding the compression method proposed in this paper, we notice that LSTM-z,wb and LSTM-z,w yield similar performance to LSTM-z. In particular, LSTM-z,w outperforms LSTM-z in all scenarios of different vocabulary sizes. 
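For reference, the perplexity figures in Table 3 follow the definition given above; a minimal sketch:

```python
import math

def perplexity(log2_probs):
    """PPL = 2 ** (-(1/N) * sum_i log2 p(w_i | h_i)) over the test corpus."""
    return 2.0 ** (-sum(log2_probs) / len(log2_probs))

# Sanity check: a model assigning probability 0.1 to every word has PPL 10.
assert abs(perplexity([math.log2(0.1)] * 100) - 10.0) < 1e-9
```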
Moreover, both LSTM-z,wb and LSTM-z,w can reduce the memory consumption by up to 80% (Table 4). We further plot in Figure 4 the model performance (lines) and memory consumption (bars) in a fine-grained granularity of vocabulary sizes. We see such a tendency that compressed LMs (LSTMz,wb and LSTM-z,w, yellow and red lines) are generally better than LSTM-z (black line) when we have a small vocabulary. However, LSTMz,wb is slightly worse than LSTM-z if the vocabulary size is greater than, say, 20k. The LSTM-z,w remains comparable to LSTM-z as the vocabulary grows. To explain this phenomenon, we may imagine that the compression using sparse codes has two effects: it loses information, but it also enables more accurate estimation of parameters especially for rare words. When the second factor dominates, we can reasonably expect a high performance of the compressed LM. From the bars in Figure 4, we observe that traditional LMs have a parameter space growing linearly with the vocabulary size. But the number of parameters in our compressed models does not increase—or strictly speaking, increases at an extremely small rate—with vocabulary. These experiments show that our method can largely reduce the parameter space with even performance improvement. The results also verify that the sparse codes induced by our model indeed capture meaningful semantics and are potentially useful for other downstream tasks. 4.4 Effect of ZRegression We next analyze the effect of ZRegression for NCE training. As shown in Figure 5a, the training process becomes unstable after processing 70% of the dataset: the training loss vibrates significantly, whereas the test loss increases. We find a strong correlation between unstableness and the Zh factor in Equation (10), i.e., the sum of unnormalized probability (Figure 5b). Theoretical analysis shows that the Zh factor tends to be self-normalized even though it is not forced to (Gutmann and Hyv¨arinen, 2012). However, problems would occur, should it fail. In traditional methods, NCE jointly estimates normalization factor Z and model parameters (Gutmann and Hyv¨arinen, 2012). For language modeling, Zh dependents on context h. Mnih and Teh (2012) propose to estimate a separate Zh based on two history words (analogous to 3-gram), but their approach hardly scales to RNNs because of the exponential number of different combinations of history words. We propose the ZRegression mechanism in Section 3.4, which can estimate the Zh factor well (Figure 5d) based on the history vector h. In this way, we manage to stabilize the training process (Figure 5c) and improve the performance by 233 (a) Training/test loss vs. training time w/o ZRegression. (b) The validation perplexity and normalization factor Zh w/o ZRegression. (c) Training loss vs. training time w/ ZRegression of different runs. (d) The validation perplexity and normalization factor Zh w/ ZRegression. Figure 5: Analysis of ZRegression. a large margin, as has shown in Table 3. It should be mentioned that ZRegression is not specific to model compression and is generally applicable to other neural LMs trained by NCE. 5 Conclusion In this paper, we proposed an approach to represent rare words by sparse linear combinations of common ones. Based on such combinations, we managed to compress an LSTM language model (LM), where memory does not increase with the vocabulary size except a bias and a sparse code for each word. 
Our experimental results also show that the compressed LM has yielded a better performance than the uncompressed base LM. Acknowledgments This research is supported by the National Basic Research Program of China (the 973 Program) under Grant No. 2015CB352201, the National Natural Science Foundation of China under Grant Nos. 61232015, 91318301, 61421091 and 61502014, and the China Post-Doctoral Foundation under Grant No. 2015M580927. References Jacob Andreas and Dan Klein. 2014. When and why are log-linear models self-normalizing. In Proceedings of the Annual Meeting of the North American Chapter of the Association for Computational Linguistics, pages 244–249. Yoshua Bengio and Jean-S´ebastien Sen´ecal. 2003. Quick training of probabilistic neural nets by importance sampling. In Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. The Journal of Machine Learning Research, 3:1137–1155. Cristian Buciluˇa, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 535–541. Welin Chen, David Grangier, and Michael Auli. 2015. Strategies for training large vocabulary neural language models. arXiv preprint arXiv:1512.04906. Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard M Schwartz, and John Makhoul. 2014. Fast and robust neural network joint models for statistical machine translation. In Proceedings of the 52rd Annual Meeting of the Association for Computational Linguistics, pages 1370–1380. 234 Manaal Faruqui, Yulia Tsvetkov, Dani Yogatama, Chris Dyer, and Noah A. Smith. 2015. Sparse overcomplete word vector representations. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pages 1491–1500. Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. 2014. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115. Michael Gutmann and Aapo Hyv¨arinen. 2012. Noisecontrastive estimation of unnormalized statistical models, with applications to natural image statistics. The Journal of Machine Learning Research, 13(1):307–361. Zhanying He, Chun Chen, Jiajun Bu, Can Wang, Lijun Zhang, Deng Cai, and Xiaofei He. 2012. Document summarization based on data reconstruction. In Proceedings of the 26th AAAI Conference on Artificial Intelligence, pages 620–626. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. 2014. Speeding up convolutional neural networks with low rank expansions. In Proceedings of the British Machine Vision Conference. S´ebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2014. On using very large target vocabulary for neural machine translation. arXiv preprint arXiv:1412.2007. Dan Jurafsky and James H. Martin. 2014. Speech and Language Processing. Pearson. Andrej Karpathy, Justin Johnson, and Fei-Fei Li. 2015. Visualizing and understanding recurrent networks. arXiv preprint arXiv:1506.02078. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Honglak Lee, Alexis Battle, Rajat Raina, and Andrew Y Ng. 2006. Efficient sparse coding algorithms. In Advances in Neural Information Processing Systems, pages 801–808. 
Omer Levy and Yoav Goldberg. 2014. Linguistic regularities in sparse and explicit word representations. In Proceedings of the Eighteenth Conference on Natural Language Learning, pages 171–180. Tomas Mikolov, Martin Karafi´at, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH, pages 1045–1048. Tomas Mikolov, Anoop Deoras, Daniel Povey, Lukas Burget, and Jan Cernock´y. 2011. Strategies for training large scale neural network language models. In Proceedings of the IEEE Workshop on Automatic Speech Recognition and Understanding, pages 196– 201. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Andriy Mnih and Geoffrey Hinton. 2007. Three new graphical models for statistical language modelling. In Proceedings of the 24th International Conference on Machine learning, pages 641–648. Andriy Mnih and Yee-Whye Teh. 2012. A fast and simple algorithm for training neural probabilistic language models. arXiv preprint arXiv:1206.6426. Fr´ederic Morin and Yoshua Bengio. 2005. Hierarchical probabilistic neural network language model. In Proceedings of the International Workshop on Artificial Intelligence and Statistics, pages 246–252. Lili Mou, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. 2015a. Distilling word embeddings: An encoding approach. arXiv preprint arXiv:1506.04488. Lili Mou, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. 2015b. Backward and forward language modeling for constrained natural language generation. arXiv preprint arXiv:1512.06612. Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 196–205. Andreas Stolcke et al. 2002. SRILM—An extensible language modeling toolkit. In INTERSPEECH, pages 901–904. Martin Sundermeyer, Hermann Ney, and Ralf Schl¨uter. 2015. From feedforward to recurrent LSTM neural networks for language modeling. IEEE/ACM Transactions on Audio, Speech and Language Processing, 23(3):517–529. Theano Development Team. 2016. Theano: A Python framework for fast computation of mathematical expressions. arXiv preprint arXiv:1605.02688. Meng Yang, Lei Zhang, Jian Yang, and David Zhang. 2011. Robust sparse coding for face recognition. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, pages 625– 632. 235
2016
22
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2326–2336, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Question Answering on Freebase via Relation Extraction and Textual Evidence Kun Xu1, Siva Reddy2, Yansong Feng1,∗, Songfang Huang3 and Dongyan Zhao1 1Institute of Computer Science & Technology, Peking University, Beijing, China 2School of Informatics, University of Edinburgh, UK 3IBM China Research Lab, Beijing, China {xukun, fengyansong, zhaody}@pku.edu.cn [email protected] [email protected] Abstract Existing knowledge-based question answering systems often rely on small annotated training data. While shallow methods like relation extraction are robust to data scarcity, they are less expressive than the deep meaning representation methods like semantic parsing, thereby failing at answering questions involving multiple constraints. Here we alleviate this problem by empowering a relation extraction method with additional evidence from Wikipedia. We first present a neural network based relation extractor to retrieve the candidate answers from Freebase, and then infer over Wikipedia to validate these answers. Experiments on the WebQuestions question answering dataset show that our method achieves an F1 of 53.3%, a substantial improvement over the state-of-the-art. 1 Introduction Since the advent of large structured knowledge bases (KBs) like Freebase (Bollacker et al., 2008), YAGO (Suchanek et al., 2007) and DBpedia (Auer et al., 2007), answering natural language questions using those structured KBs, also known as KBbased question answering (or KB-QA), is attracting increasing research efforts from both natural language processing and information retrieval communities. The state-of-the-art methods for this task can be roughly categorized into two streams. The first is based on semantic parsing (Berant et al., 2013; Kwiatkowski et al., 2013), which typically learns a grammar that can parse natural language to a sophisticated meaning representation language. But such sophistication requires a lot of annotated training examples that contains compositional structures, a practically impossible solution for large KBs such as Freebase. Furthermore, mismatches between grammar predicted structures and KB structure is also a common problem (Kwiatkowski et al., 2013; Berant and Liang, 2014; Reddy et al., 2014). On the other hand, instead of building a formal meaning representation, information extraction methods retrieve a set of candidate answers from KB using relation extraction (Yao and Van Durme, 2014; Yih et al., 2014; Yao, 2015; Bast and Haussmann, 2015) or distributed representations (Bordes et al., 2014; Dong et al., 2015). Designing large training datasets for these methods is relatively easy (Yao and Van Durme, 2014; Bordes et al., 2015; Serban et al., 2016). These methods are often good at producing an answer irrespective of their correctness. However, handling compositional questions that involve multiple entities and relations, still remains a challenge. Consider the question what mountain is the highest in north america. Relation extraction methods typically answer with all the mountains in North America because of the lack of sophisticated representation for the mathematical function highest. To select the correct answer, one has to retrieve all the heights of the mountains, and sort them in descending order, and then pick the first entry. 
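To make the missing step concrete, the following toy sketch performs exactly that post-processing on illustrative candidates; the elevations here are stand-in values and are not the output of any of the systems discussed.

```python
# Illustrative only: candidate mountains a relation extractor might return for
# "what mountain is the highest in north america", paired with elevations (m)
# that would have to be fetched from the KB in a separate step.
candidates = {
    "Denali": 6190,
    "Mount Logan": 5959,
    "Pico de Orizaba": 5636,
}

# The implicit "highest" function: sort by elevation, descending, take the first.
ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
answer = ranked[0][0]
print(answer)  # Denali
```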
We propose a method based on textual evidence which can answer such questions without solving the mathematic functions implicitly. Knowledge bases like Freebase capture real world facts, and Web resources like Wikipedia provide a large repository of sentences that validate or support these facts. For example, a sentence in Wikipedia says, Denali (also known as Mount McKinley, its former official name) is the highest mountain peak in North America, with a summit elevation of 20,310 feet (6,190 m) above sea level. To answer our example question against a KB using a relation extractor, we can use this sentence 2326 as external evidence, filter out wrong answers and pick the correct one. Using textual evidence not only mitigates representational issues in relation extraction, but also alleviates the data scarcity problem to some extent. Consider the question, who was queen isabella’s mother. Answering this question involves predicting two constraints hidden in the word mother. One constraint is that the answer should be the parent of Isabella, and the other is that the answer’s gender is female. Such words with multiple latent constraints have been a pain-in-the-neck for both semantic parsing and relation extraction, and requires larger training data (this phenomenon is coined as sub-lexical compositionality by Wang et al. (2015)). Most systems are good at triggering the parent constraint, but fail on the other, i.e., the answer entity should be female. Whereas the textual evidence from Wikipedia, ...her mother was Isabella of Barcelos ..., can act as a further constraint to answer the question correctly. We present a novel method for question answering which infers on both structured and unstructured resources. Our method consists of two main steps as outlined in §2. In the first step we extract answers for a given question using a structured KB (here Freebase) by jointly performing entity linking and relation extraction (§3). In the next step we validate these answers using an unstructured resource (here Wikipedia) to prune out the wrong answers and select the correct ones (§4). Our evaluation results on a benchmark dataset WebQuestions show that our method outperforms existing state-ofthe-art models. Details of our experimental setup and results are presented in §5. Our code, data and results can be downloaded from https://github. com/syxu828/QuestionAnsweringOverFB. 2 Our Method Figure 1 gives an overview of our method for the question “who did shaq first play for”. We have two main steps: (1) inference on Freebase (KB-QA box); and (2) further inference on Wikipedia (Answer Refinement box). Let us take a close look into step 1. Here we perform entity linking to identify a topic entity in the question and its possible Freebase entities. We employ a relation extractor to predict the potential Freebase relations that could exist between the entities in the question and the answer entities. 
Later we perform a joint inference step over the entity linking and relation extraction who did shaq first play for KB-QA Entity Linking Relation Extraction Joint Inference shaq: m.012xdf shaq: m.05n7bp shaq: m.06_ttvh sports.pro_athlete.teams..sports.sports_team_roster.team basketball.player.statistics..basketball.player_stats.team …… Answer Refinement m.012xdf sports.pro_athlete.teams..sports.sports_team_roster.team Los Angeles Lakers, Boston Celtics, Orlando Magic, Miami Heat Freebase Shaquille O'Neal O'Neal signed as a free agent with the Los Angeles Lakers Shaquille O'Neal O'Neal played for the Boston Celtics in the 2010-11 season before retiring Shaquille O'Neal O'Neal was drafted in the 1992 NBA draft by the Orlando Magic with the first overall pick Los Angeles Lakers Boston Celtics Orlando Magic O’Neal was drafted by the Orlando Magic with the first overall pick in the 1992 NBA draft O’Neal played for the Boston Celtics in the 2010-11 season before retiring O’Neal signed as a free agent with the Los Angeles Lakers Refinement Model + Orlando Magic Wikipedia Dump (with CoreNLP annotations) Figure 1: An illustration of our method to find answers for the given question who did shaq first play for. results to find the best entity-relation configuration which will produce a list of candidate answer entities. In the step 2, we refine these candidate answers by applying an answer refinement model which takes the Wikipedia page of the topic entity into consideration to filter out the wrong answers and pick the correct ones. While the overview in Figure 1 works for questions containing single Freebase relation, it also works for questions involving multiple Freebase relations. Consider the question who plays anakin skywalker in star wars 1. The actors who are the answers to this question should satisfy the following constraints: (1) the actor played anakin skywalker; and (2) the actor played in star wars 1. Inspired by Bao et al. (2014), we design a dependency treebased method to handle such multi-relational questions. We first decompose the original question into a set of sub-questions using syntactic patterns which are listed in Appendix. The final answer set of the original question is obtained by intersecting the answer sets of all its sub-questions. For the 2327 example question, the sub-questions are who plays anakin skywalker and who plays in star wars 1. These sub-questions are answered separately over Freebase and Wikipedia, and the intersection of their answers to these sub-questions is treated as the final answer. 3 Inference on Freebase Given a sub-question, we assume the question word1 that represents the answer has a distinct KB relation r with an entity e found in the question, and predict a single KB triple (e, r, ?) for each subquestion (here ? stands for the answer entities). The QA problem is thus formulated as an information extraction problem that involves two sub-tasks, i.e., entity linking and relation extraction. We first introduce these two components, and then present a joint inference procedure which further boosts the overall performance. 3.1 Entity Linking For each question, we use hand-built sequences of part-of-speech categories to identify all possible named entity mention spans, e.g., the sequence NN (shaq) may indicate an entity. For each mention span, we use the entity linking tool S-MART2 (Yang and Chang, 2015) to retrieve the top 5 entities from Freebase. 
These entities are treated as candidate entities that will eventually be disambiguated in the joint inference step. For a given mention span, S-MART first retrieves all possible entities of Freebase by surface matching, and then ranks them using a statistical model, which is trained on the frequency counts with which the surface form occurs with the entity. 3.2 Relation Extraction We now proceed to identify the relation between the answer and the entity in the question. Inspired by the recent success of neural network models in KB question-answering (Yih et al., 2015; Dong et al., 2015), and the success of syntactic dependencies for relation extraction (Liu et al., 2015; Xu et al., 2015), we propose a Multi-Channel Convolutional Neural Network (MCCNN) which could exploit both syntactic and sentential information for relation extraction. 1who, when, what, where, how, which, why, whom, whose. 2S-MART demo can be accessed at http://msre2edemo.azurewebsites.net/ [Who] did [shaq] first play for play did first play for Word Representation Feature Extraction max( ). Convolution Feature Vector Output Softmax dobj nsubj dobj aux nsubj QPRG KB relations We W1 W2 W3 Figure 2: Overview of the multi-channel convolutional neural network for relation extraction. We is the word embedding matrix, W1 is the convolution matrix, W2 is the activation matrix and W3 is the classification matrix. 3.2.1 MCCNNs for Relation Classification In MCCNN, we use two channels, one for syntactic information and the other for sentential information. The network structure is illustrated in Figure 2. Convolution layer tackles an input of varying length returning a fixed length vector (we use max pooling) for each channel. These fixed length vectors are concatenated and then fed into a softmax classifier, the output dimension of which is equal to the number of predefined relation types. The value of each dimension indicates the confidence score of the corresponding relation. Syntactic Features We use the shortest path between an entity mention and the question word in the dependency tree3 as input to the first channel. Similar to Xu et al. (2015), we treat the path as a concatenation of vectors of words, dependency edge directions and dependency labels, and feed it to the convolution layer. Note that, the entity mention and the question word are excluded from the dependency path so as to learn a more general relation representation in syntactic level. As shown in Figure 2, the dependency path between who and shaq is ←dobj – play – nsubj →. 3We use Stanford CoreNLP dependency parser (Manning et al., 2014). 2328 Sentential Features This channel takes the words in the sentence as input excluding the question word and the entity mention. As illustrated in Figure 2, the vectors for did, first, play and for are fed into this channel. 3.2.2 Objective Function and Learning The model is learned using pairs of question and its corresponding gold relation from the training data. Given an input question x with an annotated entity mention, the network outputs a vector o(x), where the entry ok(x) is the probability that there exists the k-th relation between the entity and the expected answer. We denote t(x) ∈RK×1 as the target distribution vector, in which the value for the gold relation is set to 1, and others to 0. 
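Before turning to the sentential channel, here is a small sketch of how the syntactic channel's input could be assembled for the running example. The embedding lookup is a placeholder, and the token sequence is taken directly from the path shown above, with the question word and the entity mention excluded as described.

```python
# Shortest dependency path between "who" and "shaq" in
# "who did shaq first play for":  who <-dobj- play -nsubj-> shaq
# Excluding the question word and the entity mention, the channel sees the
# concatenation of edge directions, labels and words along the path:
path_tokens = ["<-", "dobj", "play", "nsubj", "->"]

def syntactic_channel_input(tokens, embed):
    """Look up a vector for each path token (word, direction or label) and
    return the sequence fed to the convolution layer; `embed` is a placeholder
    for the shared lookup table."""
    return [embed(tok) for tok in tokens]
```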
We compute the cross entropy error between t(x) and o(x), and further define the objective function over the training data as: J(θ) = − X x K X k=1 tk(x) log ok(x) + λ||θ||2 2 where θ represents the weights, and λ the L2 regularization parameters. The weights θ can be efficiently computed via back-propagation through network structures. To minimize J(θ), we apply stochastic gradient descent (SGD) with AdaGrad (Duchi et al., 2011). 3.3 Joint Entity Linking & Relation Extraction A pipeline of entity linking and relation extraction may suffer from error propagations. As we know, entities and relations have strong selectional preferences that certain entities do not appear with certain relations and vice versa. Locally optimized models could not exploit these implicit bi-directional preferences. Therefore, we use a joint model to find a globally optimal entity-relation assignment from local predictions. The key idea behind is to leverage various clues from the two local models and the KB to rank a correct entity-relation assignment higher than other combinations. We describe the learning procedure and the features below. 3.3.1 Learning Suppose the pair (egold, rgold) represents the gold entity/relation pair for a question q. We take all our entity and relation predictions for q, create a list of entity and relation pairs {(e0, r0), (e1, r1), ..., (en, rn)} from q and rank them using an SVM rank classifier (Joachims, 2006) which is trained to predict a rank for each pair. Ideally higher rank indicates the prediction is closer to the gold prediction. For training, SVM rank classifier requires a ranked or scored list of entityrelation pairs as input. We create the training data containing ranked input pairs as follows: if both epred = egold and rpred = rgold, we assign it with a score of 3. If only the entity or relation equals to the gold one (i.e., epred = egold, rpred ̸= rgold or epred ̸= egold, rpred = rgold), we assign a score of 2 (encouraging partial overlap). When both entity and relation assignments are wrong, we assign a score of 1. 3.3.2 Features For a given entity-relation pair, we extract the following features which are passed as an input vector to the SVM ranker above: Entity Clues. We use the score of the predicted entity returned by the entity linking system as a feature. The number of word overlaps between the entity mention and entity’s Freebase name is also included as a feature. In Freebase, most entities have a relation fb:description which describes the entity. For instance, in the running example, shaq is linked to three potential entities m.06 ttvh (Shaq Vs. Television Show), m.05n7bp (Shaq Fu Video Game) and m.012xdf (Shaquille O’Neal). Interestingly, the word play only appears in the description of Shaquille O’Neal and it occurs three times. We count the content word overlap between the given question and the entity’s description, and include it as a feature. Relation Clues. The score of relation returned by the MCCNNs is used as a feature. Furthermore, we view each relation as a document which consists of the training questions that this relation is expressed in. For a given question, we use the sum of the tf-idf scores of its words with respect to the relation as a feature. A Freebase relation r is a concatenation of a series of fragments r = r1.r2.r3. For instance, the three fragments of people.person.parents are people, person and parents. 
The first two fragments indicate the Freebase type of the subject of this relation, and the third fragment indicates the object type, in our case the answer type. We use an indicator feature to denote if the surface form of the third fragment (here parents) appears in the question. Answer Clues. The above two feature classes indicate local features. From the entity-relation (e, r) 2329 pair, we create the query triple (e, r, ?) to retrieve the answers, and further extract features from the answers. These features are non-local since we require both e and r to retrieve the answer. One such feature is using the co-occurrence of the answer type and the question word based on the intuition that question words often indicate the answer type, e.g., the question word when usually indicates the answer type type.datetime. Another feature is the number of answer entities retrieved. 4 Inference on Wikipedia We use the best ranked entity-relation pair from the above step to retrieve candidate answers from Freebase. In this step, we validate these answers using Wikipedia as our unstructured knowledge resource where most statements in it are verified for factuality by multiple people. Our refinement model is inspired by the intuition of how people refine their answers. If you ask someone: who did shaq first play for, and give them four candidate answers (Los Angeles Lakers, Boston Celtics, Orlando Magic and Miami Heat), as well as access to Wikipedia, that person might first determine that the question is about Shaquille O’Neal, then go to O’Neal’s Wikipedia page, and search for the sentences that contain the candidate answers as evidence. By analyzing these sentences, one can figure out whether a candidate answer is correct or not. 4.1 Finding Evidence from Wikipedia As mentioned above, we should first find the Wikipedia page corresponding to the topic entity in the given question. We use Freebase API to convert Freebase entity to Wikipedia page. We extract the content from the Wikipedia page and process it with Wikifier (Cheng and Roth, 2013) which recognizes Wikipedia entities, which can further be linked to Freebase entities using Freebase API. Additionally we use Stanford CoreNLP (Manning et al., 2014) for tokenization and entity co-reference resolution. We search for the sentences containing the candidate answer entities retrieved from Freebase. For example, the Wikipedia page of O’Neal contains a sentence “O’Neal was drafted by the Orlando Magic with the first overall pick in the 1992 NBA draft”, which is taken into account by the refinement model (our inference model on Wikipedia) to discriminate whether Orlando Magic is the answer for the given question. 4.2 Refinement Model We treat the refinement process as a binary classification task over the candidate answers, i.e., correct (positive) and incorrect (negative) answer. We prepare the training data for the refinement model as follows. On the training dataset, we first infer on Freebase to retrieve the candidate answers. Then we use the annotated gold answers of these questions and Wikipedia to create the training data. Specifically, we treat the sentences that contain correct/incorrect answers as positive/negative examples for the refinement model. We use LIBSVM (Chang and Lin, 2011) to learn the weights for classification. Note that, in the Wikipedia page of the topic entity, we may collect more than one sentence that contain a candidate answer. 
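As a concrete illustration of how the refinement classifier's training data could be assembled, the sketch below labels page sentences by whether the candidate they mention is a gold answer. It substitutes naive sentence splitting and surface matching for the Freebase API lookup, Wikifier linking, and coreference resolution described above, and the function name is ours, so it is a simplified sketch rather than the actual pipeline.

```python
import re

def refinement_training_instances(page_text, candidate_answers, gold_answers):
    # Naive sentence splitting; the full system uses Stanford CoreNLP instead.
    sentences = re.split(r'(?<=[.!?])\s+', page_text)
    gold = {g.lower() for g in gold_answers}
    instances = []
    for sent in sentences:
        lowered = sent.lower()
        for cand in candidate_answers:
            if cand.lower() in lowered:
                # Positive if the mentioned candidate is a gold answer, else negative.
                label = 1 if cand.lower() in gold else -1
                instances.append((sent, cand, label))
    return instances

page = ("O'Neal was drafted by the Orlando Magic with the first overall pick "
        "in the 1992 NBA draft. He signed with the Los Angeles Lakers in 1996.")
cands = ["Orlando Magic", "Los Angeles Lakers", "Boston Celtics"]
print(refinement_training_instances(page, cands, gold_answers=["Orlando Magic"]))
```

At test time, the same matching step collects every page sentence that mentions a candidate answer.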
However, not all sentences are relevant, therefore we consider the candidate answer as correct if at least there is one positive evidence. On the other hand, sometimes, we may not find any evidence for the candidate answer. In these cases, we fall back to the results of the KB-based approach. 4.3 Lexical Features Regarding the features used in LIBSVM, we use the following lexical features extracted from the question and a Wikipedia sentence. Formally, given a question q = <q1, ... qn> and an evidence sentence s = <s1, ... sm>, we denote the tokens of q and s by qi and sj, respectively. For each pair (q, s), we identify a set of all possible token pairs (qi, sj), the occurrences of which are used as features. As learning proceeds, we hope to learn a higher weight for a feature like (first, drafted) and a lower weight for (first, played). 5 Experiments In this section we introduce the experimental setup, the main results and detailed analysis of our system. 5.1 Training and Evaluation Data We use the WebQuestions (Berant et al., 2013) dataset, which contains 5,810 questions crawled via Google Suggest service, with answers annotated on Amazon Mechanical Turk. The questions are split into training and test sets, which contain 3,778 questions (65%) and 2,032 questions (35%), respectively. We further split the training questions into 80%/20% for development. 2330 To train the MCCNNs and the joint inference model, we need the gold standard relations of the questions. Since this dataset contains only questionanswer pairs and annotated topic entities, instead of relying on gold relations we rely on surrogate gold relations which produce answers that have the highest overlap with gold answers. Specifically, for a given question, we first locate the topic entity e in the Freebase graph, then select 1-hop and 2-hop relations connected to the topic entity as relation candidates. The 2-hop relations refer to the n-ary relations of Freebase, i.e., first hop from the subject to a mediator node, and the second from the mediator to the object node. For each relation candidate r, we issue the query (e, r, ?) to the KB, and label the relation that produces the answer with minimal F1-loss against the gold answer, as the surrogate gold relation. From the training set, we collect 461 relations to train the MCCNN, and the target prediction during testing time is over these relations. 5.2 Experimental Settings We have 6 dependency tree patterns based on Bao et al. (2014) to decompose the question into subquestions (See Appendix). We initialize the word embeddings with Turian et al. (2010)’s word representations with dimensions set to 50. The hyper parameters in our model are tuned using the development set. The window size of MCCNN is set to 3. The sizes of the hidden layer 1 and the hidden layer 2 of the two MCCNN channels are set to 200 and 100, respectively. We use the Freebase version of Berant et al. (2013), containing 4M entities and 5,323 relations. 5.3 Results and Discussion We use the average question-wise F1 as our evaluation metric.4 To give an idea of the impact of different configurations of our method, we compare the following with existing methods. Structured. This method involves inference on Freebase only. First the entity linking (EL) system is run to predict the topic entity. Then we run the relation extraction (RE) system and select the best relation that can occur with the topic entity. We choose this entity-relation pair to predict the answer. 
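For reference, the average question-wise F1 used throughout the result tables can be sketched as follows: F1 is computed between the predicted and gold answer sets of each question and averaged over questions. The reported numbers come from the official evaluation script (see the footnote), so this is only an illustration of the computation.

```python
def question_f1(predicted, gold):
    predicted, gold = set(predicted), set(gold)
    if not predicted or not gold:
        return 0.0
    overlap = len(predicted & gold)
    if overlap == 0:
        return 0.0
    precision = overlap / len(predicted)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

def average_f1(predictions, golds):
    # Mean of per-question F1 scores over the whole test set.
    return sum(question_f1(p, g) for p, g in zip(predictions, golds)) / len(golds)

# e.g., one question answered exactly and one answered with a spurious extra answer:
print(average_f1([["Russia"], ["1996", "1993"]], [["Russia"], ["1996"]]))
```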
4We use the evaluation script available at http:// www-nlp.stanford.edu/software/sempre. Method average F1 Berant et al. (2013) 35.7 Yao and Van Durme (2014) 33.0 Xu et al. (2014) 39.1 Berant and Liang (2014) 39.9 Bao et al. (2014) 37.5 Bordes et al. (2014) 39.2 Dong et al. (2015) 40.8 Yao (2015) 44.3 Bast and Haussmann (2015) 49.4 Berant and Liang (2015) 49.7 Reddy et al. (2016) 50.3 Yih et al. (2015) 52.5 This work Structured 44.1 Structured + Joint 47.1 Structured + Unstructured 47.0 Structured + Joint + Unstructured 53.3 Table 1: Results on the test set. Structured + Joint. In this method instead of the above pipeline, we perform joint EL and RE as described in §3.3. Structured+Unstructured. We use the pipelined EL and RE along with inference on Wikipedia as described in §4. Structured + Joint + Unstructured. This is our main model. We perform inference on Freebase using joint EL and RE, and then inference on Wikipedia to validate the results. Specifically, we treat the top two predictions of the joint inference model as the candidate subject and relation pairs, and extract the corresponding answers from each pair, take the union, and filter the answer set using Wikipedia. Table 1 summarizes the results on the test data along with the results from the literature.5 We can see that joint EL and RE performs better than the default pipelined approach, and outperforms most semantic parsing based models, except (Berant and Liang, 2015) which searches partial logical forms in strategic order by combining imitation learning and agenda-based parsing. In addition, inference on unstructured data helps the default model. The joint EL and RE combined with inference on unstructured data further improves the default pipelined model by 9.2% (from 44.1% to 53.3%), and achieves a new state-of-the-art result beating the previous reported best result of Yih et al. (2015) (with one-tailed t-test significance of p < 0.05). 5We use development data for all our ablation experiments. Similar trends are observed on both development and test results. 2331 Entity Linking Relation Extraction Accuracy Accuracy Isolated Model 79.8 45.9 Joint Inference 83.2 55.3 Table 2: Impact of the joint inference on the development set Method average F1 Structured (syntactic) 38.1 Structured (sentential) 38.7 Structured (syntactic + sentential) 40.1 Structured + Joint (syntactic) 43.6 Structured + Joint (sentential) 44.1 Structured + Joint (syntactic + sentential) 45.8 Table 3: Impact of different MCCNN channels on the development set. 5.3.1 Impact of Joint EL & RE From Table 1, we can see that the joint EL & RE gives a performance boost of 3% (from 44.1 to 47.1). We also analyze the impact of joint inference on the individual components of EL & RE. We first evaluate the EL component using the gold entity annotations on the development set. As shown in Table 2, for 79.8% questions, our entity linker can correctly find the gold standard topic entities. The joint inference improves this result to 83.2%, a 3.4% improvement. Next we use the surrogate gold relations to evaluate the performance of the RE component on the development set. As shown in Table 2, the relation prediction accuracy increases by 9.4% (from 45.9% to 55.3%) when using the joint inference. 5.3.2 Impact of the Syntactic and the Sentential Channels Table 3 presents the results on the impact of individual and joint channels on the end QA performance. When using a single-channel network, we tune the parameters of only one channel while switching off the other channel. 
As seen, the sentential features are found to be more important than syntactic features. We attribute this to the short and noisy nature of WebQuestions questions due to which syntactic parser wrongly parses or the shortest dependency path does not contain sufficient information to predict a relation. By using both the channels, we see further improvements than using any one of the channels. Question & Answers 1. what is the largest nation in europe Before: Kazakhstan, Turkey, Russia, ... After: Russia 2. which country in europe has the largest land area Before: Georgia, France, Russia, ... After: Russian Empire, Russia 3. what year did ray allen join the nba Before: 2007, 2003, 1996, 1993, 2012 After: 1996 4. who is emma stone father Before: Jeff Stone, Krista Stone After: Jeff Stone 5. where did john steinbeck go to college Before: Salinas High School, Stanford University After: Stanford University Table 4: Example questions and corresponding predicted answers before and after using unstructured inference. Before uses (Structured + Joint) model, and After uses Structured + Joint + Unstructured model for prediction. The colors blue and red indicate correct and wrong answers respectively. 5.3.3 Impact of the Inference on Unstructured Data As shown in Table 1, when structured inference is augmented with the unstructured inference, we see an improvement of 2.9% (from 44.1% to 47.0%). And when Structured + Joint uses unstructured inference, the performance boosts by 6.2% (from 47.1% to 53.3%) achieving a new state-of-the-art result. For the latter, we manually analyzed the cases in which unstructured inference helps. Table 4 lists some of these questions and the corresponding answers before and after the unstructured inference. We observed the unstructured inference mainly helps for two classes of questions: (1) questions involving aggregation operations (Questions 1-3); (2) questions involving sub-lexical compositionally (Questions 4-5). Questions 1 and 2 contain the predicate largest an aggregation operator. A semantic parsing method should explicitly handle this predicate to trigger max(.) operator. For Question 3, structured inference predicts the Freebase relation fb:teams..from retrieving all the years in which Ray Allen has played basketball. Note that Ray Allen has joined Connecticut University’s team in 1993 and NBA from 1996. To answer this question a semantic parsing system would require a min(·) operator along with an additional constraint that the year corresponds to the NBA’s term. Interestingly, without having to explicitly model these complex predicates, the unstructured inference helps in answering these questions more accurately. Questions 4-5 involve sub-lexical com2332 positionally (Wang et al., 2015) predicates father and college. For example in Question 5, the user queries for the colleges that John Steinbeck attended. However, Freebase defines the relation fb:education..institution to describe a person’s educational information without discriminating the specific periods such as high school or college. Inference using unstructured data helps in alleviating these representational issues. 5.3.4 Error analysis We analyze the errors of Structured + Joint + Unstructured model. Around 15% of the errors are caused by incorrect entity linking, and around 50% of the errors are due to incorrect relation predictions. 
The errors in relation extraction are due to (i) insufficient context, e.g., in what is duncan bannatyne, neither the dependency path nor sentential context provides enough evidence for the MCCNN model; (ii) unbalanced distribution of relations (3022 training examples for 461 relations) heavily influences the performance of MCCNN model towards frequently seen relations. The remaining errors are the failure of unstructured inference due to insufficient evidence in Wikipedia or misclassification. Entity Linking. In the entity linking component, we had handcrafted POS tag patterns to identify entity mentions, e.g., DT-JJ-NN (noun phrase), NNIN-NN (prepositional phrase). These patterns are designed to have high recall. Around 80% of entity linking errors are due to incorrect entity prediction even when the correct mention span was found. Question Decomposition. Around 136 questions (15%) of dev data contains compositional questions, leading to 292 sub-questions (around 2.1 subquestions for a compositional question). Since our question decomposition component is based on manual rules, one question of interest is how these rules perform on other datasets. By human evaluation, we found these rules achieves 95% on a more general but complex QA dataset QALD-56. 5.3.5 Limitations While our unstructured inference alleviates representational issues to some extent, we still fail at modeling compositional questions such as who is the mother of the father of prince william involving 6http://qald.sebastianwalter.org/index. php?q=5 multi-hop relations and the inter alia. Our current assumption that unstructured data could provide evidence for questions may work only for frequently typed queries or for popular domains like movies, politics and geography. We note these limitations and hope our result will foster further research in this area. 6 Related Work Over time, the QA task has evolved into two main streams – QA on unstructured data, and QA on structured data. TREC QA evaluations (Voorhees and Tice, 1999) were a major boost to unstructured QA leading to richer datasets and sophisticated methods (Wang et al., 2007; Heilman and Smith, 2010; Yao et al., 2013; Yih et al., 2013; Yu et al., 2014; Yang et al., 2015; Hermann et al., 2015). While initial progress on structured QA started with small toy domains like GeoQuery (Zelle and Mooney, 1996), recent focus has shifted to large scale structured KBs like Freebase, DBPedia (Unger et al., 2012; Cai and Yates, 2013; Berant et al., 2013; Kwiatkowski et al., 2013; Xu et al., 2014), and on noisy KBs (Banko et al., 2007; Carlson et al., 2010; Krishnamurthy and Mitchell, 2012; Fader et al., 2013; Parikh et al., 2015). An exciting development in structured QA is to exploit multiple KBs (with different schemas) at the same time to answer questions jointly (Yahya et al., 2012; Fader et al., 2014; Zhang et al., 2016). QALD tasks and linked data initiatives are contributing to this trend. Our model combines the best of both worlds by inferring over structured and unstructured data. Though earlier methods exploited unstructured data for KB-QA (Krishnamurthy and Mitchell, 2012; Berant et al., 2013; Yao and Van Durme, 2014; Reddy et al., 2014; Yih et al., 2015), these methods do not rely on unstructured data at test time. Our work is closely related to Joshi et al. (2014) who aim to answer noisy telegraphic queries using both structured and unstructured data. Their work is limited in answering single relation queries. Our work also has similarities to Sun et al. 
(2015) who does question answering on unstructured data but enrich it with Freebase, a reversal of our pipeline. Other line of very recent related work include Yahya et al. (2016) and Savenkov and Agichtein (2016). Our work also intersects with relation extraction methods. While these methods aim to predict a relation between two entities in order to pop2333 ulate KBs (Mintz et al., 2009; Hoffmann et al., 2011; Riedel et al., 2013), we work with sentence level relation extraction for question answering. Krishnamurthy and Mitchell (2012) and Fader et al. (2014) adopt open relation extraction methods for QA but they require hand-coded grammar for parsing queries. Closest to our extraction method is Yao and Van Durme (2014) and Yao (2015) who also uses sentence level relation extraction for QA. Unlike them, we can predict multiple relations per question, and our MCCNN architecture is more robust to unseen contexts compared to their logistic regression models. Dong et al. (2015) were the first to use MCCNN for question answering. Yet our approach is very different in spirit to theirs. Dong et al. aim to maximize the similarity between the distributed representation of a question and its answer entities, whereas our network aims to predict Freebase relations. Our search space is several times smaller than theirs since we do not require potential answer entities beforehand (the number of relations is much smaller than the number of entities in Freebase). In addition, our method can explicitly handle compositional questions involving multiple relations, whereas Dong et al. learn latent representation of relation joins which is difficult to comprehend. Moreover, we outperform their method by 7 points even without unstructured inference. 7 Conclusion and Future Work We have presented a method that could infer both on structured and unstructured data to answer natural language questions. Our experiments reveal that unstructured inference helps in mitigating representational issues in structured inference. We have also introduced a relation extraction method using MCCNN which is capable of exploiting syntax in addition to sentential features. Our main model which uses joint entity linking and relation extraction along with unstructured inference achieves the state-of-the-art results on WebQuestions dataset. A potential application of our method is to improve KB-question answering using the documents retrieved by a search engine. Since we pipeline structured inference first and then unstructured inference, our method is limited by the coverage of Freebase. Our future work involves exploring other alternatives such as treating structured and unstructured data as two independent resources in order to overcome the knowledge gaps in either of the two resources. Acknowledgments We would like to thank Weiwei Sun, Liwei Chen, and the anonymous reviewers for their helpful feedback. This work is supported by National High Technology R&D Program of China (Grant No. 2015AA015403, 2014AA015102), Natural Science Foundation of China (Grant No. 61202233, 61272344, 61370055) and the joint project with IBM Research. For any correspondence, please contact Yansong Feng. Appendix The syntax-based patterns for question decomposition are shown in Figure 3. The first four patterns are designed to extract sub-questions from simple questions, while the latter two are designed for complex questions involving clauses. 
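To give a flavour of how such a pattern might operate, the toy sketch below splits a question whose dependency parse contains a coordination (a conj edge) into one sub-question per conjunct. The token list, the (head, dependent, label) triples, and the function are illustrative stand-ins; the actual inventory is the six patterns of Figure 3 applied to Stanford CoreNLP parses, and this code is not the authors' implementation.

```python
def split_on_conjunction(tokens, deps):
    # deps: list of (head_index, dependent_index, label) triples from a dependency parse.
    conjuncts = [(h, d) for h, d, lab in deps if lab == "conj"]
    if not conjuncts:
        return [" ".join(tokens)]
    sub_questions = []
    for first, second in conjuncts:
        # Keep one conjunct per sub-question and drop the coordinator.
        keep_first = [t for i, t in enumerate(tokens) if i != second and t != "and"]
        keep_second = [t for i, t in enumerate(tokens) if i != first and t != "and"]
        sub_questions += [" ".join(keep_first), " ".join(keep_second)]
    return sub_questions

tokens = ["who", "directed", "and", "produced", "titanic"]
deps = [(1, 0, "nsubj"), (1, 3, "conj"), (1, 4, "dobj")]
print(split_on_conjunction(tokens, deps))
# ['who directed titanic', 'who produced titanic']
```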
verb subj obj1 prep obj2 verb subj obj1 prep* obj2 (a) (b) and verb subj prep* objk (c) prep* obj1 … … verb subj prep* obj2 (d) prep* obj1 and verb WDT subj (e) verb obj1 WDT subj (f) verb prep* obj1 Figure 3: Syntax-based patterns for question decomposition. References Sren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary G. Ives. 2007. Dbpedia: A nucleus for a web of open data. In ISWC/ASWC. Michele Banko, Michael J Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction for the web. In IJCAI. Junwei Bao, Nan Duan, Ming Zhou, and Tiejun Zhao. 2014. Knowledge-based question answering as machine translation. In ACL. 2334 Hannah Bast and Elmar Haussmann. 2015. More accurate question answering on freebase. In CIKM. Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In ACL. Jonathan Berant and Percy Liang. 2015. Imitation learning of agenda-based semantic parsers. Transactions of the Association for Computational Linguistics, 3:545–558. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In EMNLP. Kurt D. Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In SIGMOD. Antoine Bordes, Sumit Chopra, and Jason Weston. 2014. Question answering with subgraph embeddings. In EMNLP. Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. CoRR, abs/1506.02075. Qingqing Cai and Alexander Yates. 2013. Large-scale semantic parsing via schema matching and lexicon extension. In ACL. Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R Hruschka Jr, and Tom M Mitchell. 2010. Toward an architecture for neverending language learning. In AAAI. Chih-Chung Chang and Chih-Jen Lin. 2011. LIBSVM: A library for support vector machines. ACM TIST, 2(3):27. Xiao Cheng and Dan Roth. 2013. Relational inference for wikification. In ACL. Li Dong, Furu Wei, Ming Zhou, and Ke Xu. 2015. Question answering over freebase with multicolumn convolutional neural networks. In ACLIJCNLP. John C. Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159. Anthony Fader, Luke S. Zettlemoyer, and Oren Etzioni. 2013. Paraphrase-driven learning for open question answering. In ACL. Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In SIGKDD. Michael Heilman and Noah A Smith. 2010. Tree edit models for recognizing textual entailments, paraphrases, and answers to questions. In NAACL. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledgebased weak supervision for information extraction of overlapping relations. In ACL. Thorsten Joachims. 2006. Training linear svms in linear time. In SIGKDD. Mandar Joshi, Uma Sawant, and Soumen Chakrabarti. 2014. Knowledge graph and corpus driven segmentation and answer inference for telegraphic entityseeking queries. In EMNLP. Jayant Krishnamurthy and Tom M Mitchell. 2012. 
Weakly supervised training of semantic parsers. In EMNLP-CoNLL. Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke S. Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In EMNLP. Yang Liu, Furu Wei, Sujian Li, Heng Ji, Ming Zhou, and Houfeng WANG. 2015. A dependency-based neural network for relation classification. In ACL. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In ACL System Demonstrations. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In ACL. Ankur P. Parikh, Hoifung Poon, and Kristina Toutanova. 2015. Grounded semantic parsing for complex knowledge extraction. In NAACL. Siva Reddy, Mirella Lapata, and Mark Steedman. 2014. Large-scale semantic parsing without questionanswer pairs. Transactions of the Association of Computational Linguistics, pages 377–392. Siva Reddy, Oscar T¨ackstr¨om, Michael Collins, Tom Kwiatkowski, Dipanjan Das, Mark Steedman, and Mirella Lapata. 2016. Transforming Dependency Structures to Logical Forms for Semantic Parsing. Transactions of the Association for Computational Linguistics, 4. Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In NAACL. Denis Savenkov and Eugene Agichtein. 2016. When a knowledge base is not enough: Question answering over knowledge bases with external text data. In SIGIR. 2335 Iulian Vlad Serban, Alberto Garc´ıa-Dur´an, C¸ aglar G¨ulc¸ehre, Sungjin Ahn, Sarath Chandar, Aaron C. Courville, and Yoshua Bengio. 2016. Generating factoid questions with recurrent neural networks: The 30m factoid question-answer corpus. In ACL. Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In WWW. Huan Sun, Hao Ma, Wen-tau Yih, Chen-Tse Tsai, Jingjing Liu, and Ming-Wei Chang. 2015. Open domain question answering via semantic enrichment. In WWW. Joseph P. Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In ACL 2010, Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, July 1116, 2010, Uppsala, Sweden, pages 384–394. Christina Unger, Lorenz B¨uhmann, Jens Lehmann, Axel-Cyrille Ngonga Ngomo, Daniel Gerber, and Philipp Cimiano. 2012. Template-based question answering over rdf data. In WWW. Ellen M Voorhees and Dawn M. Tice. 1999. The trec-8 question answering track report. In TREC. Mengqiu Wang, Noah A Smith, and Teruko Mitamura. 2007. What is the jeopardy model? a quasisynchronous grammar for qa. In EMNLP-CoNLL. Yushi Wang, Jonathan Berant, and Percy Liang. 2015. Building a semantic parser overnight. In ACL. Kun Xu, Sheng Zhang, Yansong Feng, and Dongyan Zhao. 2014. Answering natural language questions via phrasal semantic parsing. In Natural Language Processing and Chinese Computing - Third CCF Conference, NLPCC 2014, Shenzhen, China, December 5-9, 2014. Proceedings, pages 333–344. Kun Xu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2015. Semantic relation classification via convolutional neural networks with simple negative sampling. In EMNLP. Mohamed Yahya, Klaus Berberich, Shady Elbassuoni, Maya Ramanath, Volker Tresp, and Gerhard Weikum. 2012. Natural language questions for the web of data. In EMNLP. 
Mohamed Yahya, Denilson Barbosa, Klaus Berberich, Qiuyue Wang, and Gerhard Weikum. 2016. Relationship queries on extended knowledge graphs. In WSDM. Yi Yang and Ming-Wei Chang. 2015. S-mart: Novel tree-based structured learning algorithms applied to tweet entity linking. In ACL-IJNLP. Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. Wikiqa: A challenge dataset for open-domain question answering. In EMNLP. Xuchen Yao and Benjamin Van Durme. 2014. Information extraction over structured data: Question answering with freebase. In ACL. Xuchen Yao, Benjamin Van Durme, and Peter Clark. 2013. Answer extraction as sequence tagging with tree edit distance. In NAACL. Xuchen Yao. 2015. Lean question answering over freebase from scratch. In NAACL. Wen-tau Yih, Ming-Wei Chang, Christopher Meek, and Andrzej Pastusiak. 2013. Question answering using enhanced lexical semantic models. In ACL. Wen-tau Yih, Xiaodong He, and Christopher Meek. 2014. Semantic parsing for single-relation question answering. In ACL. Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In ACL-IJCNLP. Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. Deep learning for answer sentence selection. arXiv preprint arXiv:1412.1632. John M Zelle and Raymond J Mooney. 1996. Learning to parse database queries using inductive logic programming. In AAAI. Yuanzhe Zhang, Shizhu He, Kang Liu, and Jun Zhao. 2016. A joint model for question answering over multiple knowledge bases. In AAAI. 2336
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2337–2346, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Vector-space topic models for detecting Alzheimer’s disease Maria Yancheva Department of Computer Science, University of Toronto Toronto, Ontario, Canada [email protected] Frank Rudzicz Toronto Rehabilitation Institute; and Department of Computer Science, University of Toronto Toronto, Ontario, Canada [email protected] Abstract Semantic deficit is a symptom of language impairment in Alzheimer’s disease (AD). We present a generalizable method for automatic generation of information content units (ICUs) for a picture used in a standard clinical task, achieving high recall, 96.8%, of human-supplied ICUs. We use the automatically generated topic model to extract semantic features, and train a random forest classifier to achieve an F-score of 0.74 in binary classification of controls versus people with AD using a set of only 12 features. This is comparable to results (0.72 F-score) with a set of 85 manual features. Adding semantic information to a set of standard lexicosyntactic and acoustic features improves F-score to 0.80. While control and dementia subjects discuss the same topics in the same contexts, controls are more informative per second of speech. 1 Introduction Alzheimer’s disease (AD) is the most common cause of neurodegenerative dementia, and affects more than 24.3 million people worldwide (Ballard et al., 2011). Importantly, early detection enables some therapeutic intervention and diseasemodifying treatment (Sperling et al., 2011). Longitudinal studies of people with autopsyconfirmed AD indicate that linguistic changes are detectable in the prodromal stages of the disease; these include a decline in grammatical complexity, word-finding difficulties, and semantic content deficiencies, such as low idea density (i.e., the ratio of semantic units to the total number of words in a speech sample), and low efficiency (i.e., the rate of semantic units over the duration of the speech sample) (Bayles and Kaszniak, 1987; Snowdon et al., 1996; Le et al., 2011; Ahmed et al., 2013b). In the present study, we investigate methods of automatically assessing the semantic content of speech, and use it to distinguish people with AD from healthy older adults. A standard clinical task for eliciting spontaneous speech, with high sensitivity to language in early AD, is picture description. In it, a participant is asked to provide a free-form verbal description of a visual stimulus (Goodglass and Kaplan, 1983; Bayles and Kaszniak, 1987). The picture is associated with a set of human-supplied information content units (hsICUs) representing components of the image, such as subjects, objects, locations, and actions (Croisile et al., 1996). The semantic content of the elicited speech can then be scored by counting the hsICUs present in the description. Previous studies found that, even in the earliest stages, descriptions by those with AD are less informative compared to those of healthy older adults, producing fewer information units out of a pre-defined list of units, and having less relevant content and lower efficiency (Hier et al., 1985; Croisile et al., 1996; Giles et al., 1996; Ahmed et al., 2013a). 
Using a pre-defined list of annotated hsICUs is subject to several limitations: (i) it is subjective — different authors use a different number of hsICUs for the same picture (e.g., from 7 to 25 for Cookie Theft in the Boston Diagnostic Aphasia Examination (BDAE)) (Hier et al., 1985; Croisile et al., 1996; Forbes-McKay and Venneri, 2005; Lai et al., 2009); (ii) it may not be optimal for detecting linguistic impairment — the manually-annotated hsICUs are neither exhaustive of all details present in the picture, nor necessarily reflective of the content units which differ most across groups; (iii) it is not generalizable — hsICUs are specific to a particular picture, and new visual stimuli (e.g., 2337 required for longitudinal assessments) need to be annotated manually. In addition to requiring time and effort, this may result in inconsistencies, since the methodology for identifying hsICUs was never clearly defined in previous work. Automatic scoring of semantic content in speech to detect cognitive impairment has so far required manual hsICUs. Hakkani-T¨ur et al. (2010) used unigram recall among hsICUs in the Western Aphasia Battery’s Picnic picture (Kertesz, 1982) and obtained a correlation of 0.93 with manual hsICU counts. Pakhomov et al. (2010) counted N-grams (N = 1, 2, 3, 4) extracted from a list of hsICUs for the Cookie Theft picture to assess semantic content in the speech of patients with frontotemporal lobar degeneration. Fraser et al. (2016) counted instances of lexical tokens extracted from a list of hsICUs, using dependency parses of Cookie Theft picture descriptions, and combined them with other lexicosyntactic and acoustic features to obtain classification accuracy of 81.9% in identifying people with AD from controls. While those automated methods for scoring the information content in speech used manual hsICUs, we have found none that attempted to produce ICUs automatically. In this paper, we present a generalizable method for automatically generating information content units for any given picture (or spontaneous speech task), using reference speech. Since clinical data can be sparse, we present a method for building word vector representations using a large general corpus, then augment it with local context windows from a smaller clinical corpus. We evaluate the generated ICUs by computing recall of hsICUs and use the constructed topic models to compare the speech of participants with and without dementia, and compute topic alignment. Second, we automatically score new picture descriptions by learning semantic features extracted from these generated ICU models, using a random forest classifier; we assess performance with recall, precision, and F-score. Third, we propose a set of clinically-relevant features for identifying AD based on differences in topic, topic context, idea density and idea efficiency. 2 Methodology 2.1 Data DementiaBank is one of the largest public, longitudinal datasets of spontaneous speech from individuals with and without dementia. It was collected at the University of Pittsburgh (Becker et al., 1994) and contains verbal descriptions of the standard Cookie Theft picture (Goodglass and Kaplan, 1983), along with manual transcriptions. In our study, we use 255 speech samples from participants diagnosed with probable or possible AD (collectively referred to as the ‘AD’ class), and 241 samples from healthy controls (collectively referred to as the ‘CT’ class), see Table 1. 
We remove all CHAT-format annotations (MacWhinney, 2015), filled pauses (e.g., ‘ah’ and ‘um’), phonological fragments (e.g., ‘b b boy’ becomes ‘boy’), repairs (e.g., ‘in the in the kitchen’ becomes ‘in the kitchen’), non-standard forms (e.g., ‘gonna’ becomes ‘going to’), and punctuation (e.g., commas are removed). These corrections are all provided in the database. We ignore transcripts of the investigator’s speech, as irrelevant. Subject data were randomly partitioned into training, validation, and test sets using a 60-20-20 split. Table 1: Distribution of dataset transcriptions. Class Subjects Samples Tokens AD 168 255 24,753 CT 98 241 26,654 Total 266 496 51,407 2.2 Human-supplied ICUs (hsICUs) We combine all hsICUs in previous work for the Cookie Theft picture (Hier et al., 1985; Croisile et al., 1996; Forbes-McKay and Venneri, 2005; Lai et al., 2009) with hsICUs obtained from a speech language pathologist (SLP) at the Toronto Rehabilitation Institute (TRI). The annotations of the SLP overlap completely with previously identified hsICUs, except for one (apron). The first three columns of Table 2 summarize these manuallyproduced hsICUs. 2.3 Automatic generation of ICUs Our novel method of identifying ICUs is based on simple topic modelling using clusters of global word-vector representations from picture descriptions. First, we train a word-vector model on a large normative general-purpose corpus, allowing us to avoid sparsity in the clinical data’s wordword co-occurrence matrix. Then, we extract the vector representations of words in the Dementia2338 Table 2: Information units above the double line are human-supplied ICUs (hsICUs) found in previous work, except those marked with † which were annotated by an SLP for this study; those below are additionally analyzed. Over 1,000 clustering configurations based on word vectors extracted from Control and Dementia reference transcriptions, µ is the mean of the scaled distance (Eq. 1) of each hsICU to its closest cluster centroid, σ is the standard deviation, and δ = (µdementia −µcontrol). Statistical significance of δ was tested using an independent two-sample, two-tailed t-test; *** = p < .001, ** = p < .01, * = p < .05, ns = not significant. 
Control Dementia Type ID hsICU µ σ µ σ δ p Subject S1 boy -0.510 0.102 -0.860 0.204 -0.350 *** Subject S2 girl -0.357 0.203 -0.545 0.284 -0.187 *** Subject S3 woman 0.171 0.468 0.140 0.433 -0.031 ns Subject S4 mother -0.533 0.206 -0.187 0.300 0.345 *** Place P1 kitchen 0.667 0.650 0.901 0.710 0.234 *** Place P2 exterior 1.985 0.601 1.947 0.530 -0.039 ns Object O1 cookie -1.057 0.221 -0.943 0.230 0.114 *** Object O2 jar 0.243 0.486 0.146 0.453 -0.097 *** Object O3 stool -0.034 0.674 -0.162 0.623 -0.128 *** Object O4 sink -0.839 0.433 -0.600 0.631 0.239 *** Object O5 plate 0.564 0.593 0.639 0.608 0.076 ** Object O6 dishcloth 4.509 1.432 3.989 1.154 -0.521 *** Object O7 water -0.418 0.582 -0.567 0.530 -0.149 *** Object O8 cupboard 0.368 0.613 0.453 0.637 0.085 ** Object O9 window -0.809 0.425 -0.298 0.452 0.511 *** Object O10 cabinet 2.118 0.556 2.154 0.496 0.036 ns Object O11 dishes 0.037 0.503 -0.083 0.406 -0.120 *** Object O12 curtains -0.596 0.594 0.121 0.707 0.717 *** Object O13 faucet 1.147 0.567 1.016 0.547 -0.131 *** Object O14 floor -0.466 0.384 -0.932 0.451 -0.466 *** Object O15 counter 0.202 0.427 0.449 0.323 0.247 *** Object O16 apron† -0.140 0.433 0.181 0.688 0.321 *** Action A1 boy stealing cookies 1.219 0.373 0.746 0.462 -0.473 *** Action A2 boy/stool falling over -0.064 0.465 -0.304 0.409 -0.240 *** Action A3 woman washing dishes -0.058 0.539 0.009 0.611 0.068 ** Action A4 woman drying dishes -0.453 0.469 -0.385 0.541 0.068 ** Action A5 water overflowing in sink 0.147 0.804 0.282 0.791 0.135 *** Action A6 girl’s actions towards boy, girl asking for a cookie 0.800 0.555 0.620 0.861 -0.179 *** Action A7 woman daydreaming, unaware or unconcerned about overflow 0.049 0.774 0.092 0.561 0.043 ns Action A8 dishes already washed sitting on worktop -0.224 0.535 -0.597 0.426 -0.373 *** Action A9 woman being indifferent to the children 0.781 0.795 0.881 0.585 0.100 ** Relation brother 2.297 0.510 1.916 0.344 -0.380 *** Relation sister 0.862 0.273 0.737 0.349 -0.125 *** Relation son 2.140 0.443 1.818 0.312 -0.322 *** Relation daughter 0.916 0.356 0.904 0.421 -0.012 ns 2339 Bank corpus, and optionally augment them with local context windows from the clinical dataset. We use GloVe v1.2 (Pennington et al., 2014) to obtain embedded word representations and train on a combined corpus of Wikipedia 20141 + Gigaword 52. The trained model consists of 400,000 word vectors, in 50 dimensions. Transcriptions in DementiaBank are lowercased and tokenized using NLTK v3.1, and each word token is converted to its vector space representation using the trained GloVe model. There are a total of 26,654 word vectors (1,087 unique vectors) in the control data, and 24,753 (1,131 unique) in the dementia data. Since we aim to construct a model of semantic content, only nouns and verbs are retained prior to clustering. The resulting dataset consists of 9,330 word vectors (801 unique vectors) in the control data, and 8,021 (843 unique) in the dementia data. We use k-means clustering with whitening, initialization with the Forgy method, and a distortion threshold of 10−5 as the stopping condition, where distortion is defined as the sum of the distances between each vector and its corresponding centroid. We train a control cluster model on the control training set (see Fig. 1 for a 2D projection of cluster vectors using principal component analysis), and a dementia cluster model on the dementia training set. 
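A minimal sketch of this construction is given below: it loads the pre-trained 50-dimensional GloVe vectors from their text file, keeps the noun and verb tokens of one group's transcripts, and clusters the whitened vectors with k-means. The file name and the use of scipy and NLTK's default tagger are assumptions for illustration; the 1e-5 threshold mirrors the distortion-based stopping condition, and scipy's random choice of initial observations plays the role of Forgy initialization.

```python
import numpy as np
import nltk                                    # assumes punkt and tagger models are downloaded
from scipy.cluster.vq import whiten, kmeans, vq

def load_glove(path="glove.6B.50d.txt"):       # illustrative file name
    vectors = {}
    with open(path, encoding="utf8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.array(parts[1:], dtype=float)
    return vectors

def cluster_model(transcripts, glove, k=10):
    vecs = []
    for text in transcripts:
        tagged = nltk.pos_tag(nltk.word_tokenize(text.lower()))
        for word, tag in tagged:
            if tag.startswith(("NN", "VB")) and word in glove:   # retain nouns and verbs only
                vecs.append(glove[word])
    obs = whiten(np.array(vecs))                                 # whitening before k-means
    centroids, distortion = kmeans(obs, k, thresh=1e-5)          # stop when distortion change < 1e-5
    labels, _ = vq(obs, centroids)
    return centroids, labels, distortion
```

Running this once on the control training transcripts and once on the dementia training transcripts yields the two cluster models compared in the experiments below.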
Clusters represent topics, or groups of semantically related word vectors, discussed by the respective group of subjects. While prior work is based on hsICUs that are expected to be discussed by healthy speakers, we construct a separate cluster model for the control and dementia groups since it is unclear whether the topics discussed by both groups overlap. We vary k (= 1, 5, 10, 15, 20, 30, 40, 50), completing 1,000 runs for each value, and use the Elbow method to select the optimal number of clusters on the respective validation set. The optimal setting, k = 10, optimizes the tradeoff between the percentage of variance explained by the clusters, and their total number. The resulting clusters represent topics that can be compared against hsICUs. 3 Experiments 3.1 Recall of hsICUs In order to assess (i) how well the automatically generated clusters match clinical hsICUs for this 1http://dumps.wikimedia.org/enwiki/20140102/ 2https://catalog.ldc.upenn.edu/LDC2011T07 Figure 1: Control cluster model. The word vectors belonging to a given cluster are shown in the same colour. The most frequent words in each cluster are displayed. image, and (ii) how much the two generated topic models differ, we analyze the vector space distance between each hsICU and its closest cluster centroid (dEuclidean) in each of the control and dementia models. Since some clusters are more dispersed than others, we need to scale the distance appropriately. To do so, for each cluster in each model, we compute the mean distortion, µcl, of the vectors in the cluster, and the associated standard deviation σcl. For each hsICU vector, we compute the scaled distance between the vector and its closest cluster centroid in each generated model as follows: dscaled = (dEuclidean −µcl) σcl (1) The scaled distance is equivalent to the number of standard deviations above the mean — a value below zero indicates hsICUs which are very close to an automatically generated cluster centroid, while a large positive value indicates hsICUs that are far from a cluster centroid. To account for the fact that k-means is a stochastic algorithm, we perform clustering multiple times and average the results. Table 2 shows the mean, µ, and standard deviation, σ, of dscaled, for each hsICU, over 1,000 cluster configurations for each model. To quantify the recall of hsICUs using each generated cluster model, we consider hsICUs with µ ≤3.0 to be recalled (i.e., the distance to the assigned cluster centroid is not greater than those of 99.7% of the datapoints in the cluster, given a Gaussian distribution of distortion). The recall of 2340 hsICUs, for both the control and dementia models, is 96.8%. Since the optimal number of generated clusters is k = 10, while the number of hsICUs is 31, multiple hsICUs can be grouped in related themes (e.g., one automatically generated cluster corresponds to the description of animate subjects in the picture, capturing four hsICUs: S1– S4). Both the control and dementia models do not recall hsICU O6, dishcloth, which suggests that it is a topic that neither study group discusses. All remaining hsICUs are recalled by both the control and dementia models, indicating that the hsICU topics are discussed by both groups. However, to assess whether they are discussed to the same extent, i.e. to evaluate whether the two topic models differ, we conducted an independent two-sample two-tailed t-test to compare the mean scaled distance, µ, of each hsICU to its closest cluster centroid, in each cluster model (see δ in Table 2). 
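Concretely, the quantity averaged in Table 2 can be computed per clustering run as sketched below, where mu_cl and sigma_cl hold each cluster's mean and standard deviation of distortion; the helper names and array layout are ours.

```python
import numpy as np

def scaled_distance(vec, centroids, mu_cl, sigma_cl):
    d = np.linalg.norm(centroids - vec, axis=1)   # Euclidean distance to each centroid
    c = int(np.argmin(d))                         # closest cluster
    return (d[c] - mu_cl[c]) / sigma_cl[c], c     # Eq. 1, in standard deviations

def is_recalled(hsicu_vec, centroids, mu_cl, sigma_cl, threshold=3.0):
    d_scaled, _ = scaled_distance(hsicu_vec, centroids, mu_cl, sigma_cl)
    return d_scaled <= threshold
```

Averaging the scaled distance over the 1,000 clustering runs gives the µ values whose between-model differences δ are tested next.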
As anticipated, since they involve inference of attention, the control model is better at accounting for the topics of the overflowing sink and the mother’s indifference: overflowing (t(1998) = −3.78, p < .001); sink (t(1998) = −9.85, p < .001); indifferent (t(1998) = −3.20, p < .01). While there is no significant difference in the term woman between the two groups, the control model predicts the term mother better than the dementia model (t(1998) = −30.05, p < .001). To investigate whether healthy participants are more likely to identify relations between the subjects than participants with cognitive impairment, we repeated the recall experiment with the following new hsICUs: brother, sister, son, daughter. Interestingly, the dementia cluster model contains a cluster which aligns significantly more closely, than any in the control model, with all four of these relation words: brother (t(1998) = 19.53, p < .001); sister (t(1998) = 8.93, p < .001); son (t(1998) = 18.78, p < .001). While the control participants mention relation words as often as the participants with dementia3, the generated cluster models show that the ratio of relation words to non-relation words is higher for the dementia group4. 3An independent two-sample two-tailed t-test of the effect of group on the number of occurrences of each relation word shows no statistical significance: son (t(494) = 0.65, p > .05), daughter (t(494) = 0.63, p > .05), brother (t(494) = 0.97, p > .05), sister (t(494) = 1.65, p > .05). 4An independent two-sample two-tailed t-test of the effect of group on this ratio shows a significant difference in the ratio of sister to mother, with the control group having a The new hsICU, apron, which was not identified in previous literature but was labelled by an SLP for this study, is significantly more likely to be discussed by the control population (t(1998) = −12.46, p < .001), suggesting at the importance of details for distinguishing cognitively impaired individuals. In a similar vein, control participants are significantly more likely to identify objects in the background of the scene, such as the window (t(1998) = −26.04, p < .001), curtains (t(1998) = −24.54, p < .001), cupboard (t(1998) = −3.03, p < .01), or counter (t(1998) = −14.59, p < .001). 3.2 Cluster model alignment While prior work counted the frequency with which fixed topics are mentioned, our data-driven cluster models allow greater exploration of differences between the set of topics discussed by each subject group, and the alignment between them. Since prior work has found that subjects with cognitive impairment produce more irrelevant content, we quantify the amount of dispersion within each cluster through the standard deviation of its distortion and its type-to-token ratio (TTR), as shown in Table 3. Further, we compute directional alignment between pairs of clusters in each model. For each cluster in one model, alignment is determined by computing the closest cluster in the other model for each vector, and taking the majority assignment label (see a in Table 3). To quantify the alignment, the Euclidean distance of each vector to the assigned cluster in the other model is computed, scaled by the mean and standard deviation of the cluster distortion; the mean of the scaled distance, µa, is reported in Table 3. To quantify the alignment of clusters in each model, we consider clusters to be recalled if their distance to the closest cluster in the other model is µa ≤3. 
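The directional alignment reported in Table 3 can be sketched as follows: each source-cluster vector is matched to its closest cluster in the other model, the majority label gives the aligned cluster a, and the mean of the per-vector scaled distances gives µa. Reading "the assigned cluster" as each vector's own closest cluster is our interpretation, and the variable names are illustrative.

```python
import numpy as np
from collections import Counter

def align_cluster(source_vectors, other_centroids, other_mu, other_sigma):
    labels, scaled = [], []
    for v in source_vectors:
        d = np.linalg.norm(other_centroids - v, axis=1)
        c = int(np.argmin(d))                     # closest cluster in the other model
        labels.append(c)
        scaled.append((d[c] - other_mu[c]) / other_sigma[c])
    a = Counter(labels).most_common(1)[0][0]      # majority assignment label
    mu_a = float(np.mean(scaled))                 # cluster considered recalled if mu_a <= 3
    return a, mu_a
```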
Notably, all control clusters (C0-C9) are recalled by the dementia model, while one dementia cluster, D7, is not recalled by the control model. This exemplifies the fact that while the dementia group mentions all topics discussed by controls, they also mention a sufficient number of extraneous terms which constitute a new heterogeneous topic cluster, having the highest TTR. lower ratio (t(494) = −4.10, p < .001). 2341 Table 3: Cluster statistics for control (C*) and dementia (D*) models, with computed cluster alignment. Cluster words are the 5 most frequently occurring words. fvec is the fraction of all vectors which belong to the given cluster. µcl and σcl are the mean and standard deviation of the cluster distortion. fn is the fraction of nouns among cluster vectors; (1 −fn) is the fraction of verbs. TTR is the type-to-token ratio. a is the ID of the aligned cluster, and µa is the mean scaled distance to the aligned cluster centroid. ID Cluster words fvec µcl σcl fn TTR a µa Control C0 window, floor, curtains, plate, kitchen 0.14 5.42 1.18 0.94 0.14 D4 0.69 C1 dishes, dish 0.04 1.62 1.11 1.00 0.01 D1 0.01 C2 running, standing, action, hand, counter 0.18 4.97 1.25 0.57 0.22 D8 0.16 C3 water, sink, drying, overflowing, washing 0.17 5.18 1.13 0.66 0.09 D6 0.04 C4 stool, legged 0.03 0.53 1.26 0.96 0.01 D4 -0.28 C5 mother, boy, girl, sister, children 0.11 3.49 1.08 1.00 0.04 D2 -0.08 C6 cookie, cookies, sakes, cream 0.06 2.00 1.15 1.00 0.01 D0 -0.08 C7 jar, cups, lid, dried, bowl 0.04 3.88 2.30 0.97 0.04 D5 0.63 C8 see, going, getting, looks, know 0.18 3.84 1.16 0.38 0.13 D3 0.18 C9 reaching, falling, fall, summer, growing 0.05 4.18 1.41 0.38 0.16 D8 0.21 Dementia D0 cookie, cookies, cake, baking, apples 0.07 2.18 0.74 1.00 0.02 C6 0.09 D1 dishes, dish, eating, bowls, dinner 0.05 1.42 1.72 0.98 0.03 C1 0.05 D2 boy, girl, mother, sister, lady 0.11 3.63 1.25 0.99 0.05 C5 0.20 D3 going, see, getting, get, know 0.24 3.67 1.06 0.38 0.11 C8 -0.11 D4 stool, floor, window, chair, curtains 0.10 5.10 1.00 0.97 0.13 C0 0.08 D5 jar, cups, jars, dried, honey 0.04 2.00 2.26 0.98 0.03 C7 -0.44 D6 sink, drying, washing, spilling, overflowing 0.14 5.36 1.20 0.52 0.19 C3 0.36 D7 mama, huh, alright, johnny, ai 0.01 6.24 1.34 0.95 0.55 C8 4.13 D8 running, fall, falling, reaching, hand 0.18 4.97 1.29 0.47 0.25 C2 0.15 D9 water, dry, food 0.05 0.39 1.13 1.00 0.01 C3 -0.59 3.3 Local context weighted vectors Since there is significant overlap in the topics discussed between the control and dementia groups, we proceed by investigating whether the overlapping topics are discussed in the same contexts. To this end, we augment the word vector representations with local context windows from DementiaBank. Each word vector is constructed using a linear combination of its global vector from the trained GloVe model, and the vectors of the ±N surrounding context words, where each context word is weighted inversely to its distance from the central word: φw = vw + −1 X i=−N αi × vi + N X i=1 αi × vi (2) Here, φw is the local-context-weighted vector for word w, vw is the GloVe vector for word w, vi is the GloVe vector for word i within the context of w, and αi is the weighting of word i, inversely and linearly proportional to the distance between context and central word. Following previous work (Fraser and Hirst, 2016), we use a context window of size N = 3. We extract local-context-weighted vectors for all control and dementia transcripts, and construct two topic models as before. 
To quantify whether the dementia contexts differ significantly from the control contexts for the same word, we extract all word usages as localcontext-weighted vectors, and find the centroid of the control usages, along with the mean and standard deviation of the control vectors from their centroids. Then, we compute the average scaled Euclidean distance, dscaled, of the dementia vectors from the control centroid, as in Eq. 1. Words with dscaled > 3 (i.e., where the dementia context vectors are further from the control centroid than the majority of control context vectors) are considered to have different context usage across the control and dementia groups. Interestingly, all of the control cluster words are used in the same contexts by both healthy participants and those with dementia. However, the average number of times these words are used per transcript is significantly higher in the control group (1.07, s.d. = 0.12) than in the dementia group (0.77, s.d. = 0.14; t(18) = 1.87, p < .05). While the two groups discuss the same topics generally and use the same words in the same contexts, not all participants in the dementia group identify all of the control topics or discuss them with the same frequency. A contextual analysis reveals that certain words are discussed in a distinct number of limited contexts, while others are discussed in more varied contexts. For in2342 Figure 2: All usages of the word cookie in DementiaBank. Control usages are represented with blue circles; dementia with red crosses. stance, while we identified a control cluster associated with the topic of the cookie in Section 3.2, there are two clearly distinct contexts in which this word is used, by both groups, as illustrated in Fig. 2. The two clusters in context space correspond to: (i) the usage of cookie in the compound noun phrase cookie jar, and (ii) referring to a single cookie, e.g. reaching for a cookie, hand her a cookie, getting a cookie. 3.4 Classification To classify speakers as having AD or not, we extract the following types of features from our automatically-generated cluster models: (i) distance-based metrics for each of the control model clusters, C0–C9, (ii) distance-based metrics for each of the dementia model clusters, D0–D9, (iii) idea density, and (iv) idea efficiency. Given the vectors associated with a transcript’s nouns and verbs, feature Ci (and equivalently, Di) is computed by finding the average scaled distance, dscaled (Eq. 1), of all vectors assigned to cluster Ci. A feature value below zero indicates that the transcript words assigned to the cluster are very well predicted by it (i.e., their distance from the cluster centroid is less than the average cluster distortion). Conversely, clusters which represent topics not discussed in the transcript have large positive feature values. We chose these distancebased metrics to evaluate topic recall in the transcript since a continuous measure is more appropriate for modelling the non-discrete nature of language and semantic similarity. We compute idea density as the number of expected topics mentioned5 divided by the total number of words in the transcript, and idea efficiency as the number of expected topics mentioned divided by the total duration of the recording (in seconds). The expected topics used for computation of idea density and idea efficiency are the ICUs from the automatically-produced cluster models. 
We perform classification using a random forest, whose parameters are optimized on the validation set, and performance reported on the test set. We vary the following experimental settings: cluster model (control; dementia; combined), feature set (distance-based; distance-based + idea density + idea efficiency), and context (no context; context with N = 3). A three-way ANOVA is conducted to examine the effects of these settings on average test F-score. There is a significant interaction between feature set and context, F(1, 110) = 9.07, p < 0.01. Simple main effect analysis shows that when using the extended feature set, vectors constructed without local context windows from the clinical dataset yield significantly better results than those with context (p < 0.001), but there is no effect when using only distance-based features (p = 0.87). There is no main effect of cluster model on test performance, F(2, 117) = 2.30, p = 0.11, which is expected since cluster alignment revealed significant overlap between the topics discussed by the control and dementia groups (Section 3.2). Notably, there is a significant effect of feature set on test performance, whereby adding the idea density and idea efficiency features results in significantly higher F-scores, both when using local context for vector construction (p < 0.05), and otherwise (p < 0.001). As a baseline, we use a list of hsICUs extracted by Fraser et al. (2016) in a state-of-the-art automated method for separating AD and control speakers in DementiaBank. These features consist of (i) counts of lexical tokens representing hsICUs (e.g., boy, son, and brother are used to identify whether hsICU S1 (Table 2) was discussed, and (ii) Boolean values which indicate whether each hsICU was mentioned or not. Overall, this constitutes 85 features. Additionally, Fraser et al. (2016) identified a list of lexicosyntactic and acoustic (LS&A) features which are indicative of cognitive impairment. We compute the performance of each set of features independently, and then com5I.e., the number of word vectors in the transcript whose scaled distance is within 3 s.d.’s from the mean cluster distortion of at least one cluster. 2343 Table 4: Binary classification (AD:CT) using a random forest classifier, with 10-fold cross-validation. All cluster models are trained on vectors with no local context. LS&A are lexicosyntactic and acoustic features as described by Fraser et al. (2016). The reported precision, recall, and F-score are a weighted average over the two classes. Model Features Accuracy Precision Recall F-score Baseline hsICUs 0.73 0.74 0.73 0.72 Baseline LS&A 0.76 0.77 0.76 0.76 Baseline hsICUs + LS&A 0.80 0.80 0.80 0.80 control distance-based 0.68 0.69 0.68 0.68 dementia distance-based 0.66 0.67 0.66 0.66 combined distance-based 0.68 0.69 0.68 0.68 control distance-based + idea density + idea efficiency 0.74 0.76 0.74 0.74 dementia distance-based + idea density + idea efficiency 0.74 0.75 0.74 0.74 combined distance-based + idea density + idea efficiency 0.74 0.75 0.74 0.74 control distance-based + idea density + idea efficiency + LS&A 0.79 0.79 0.79 0.79 dementia distance-based + idea density + idea efficiency + LS&A 0.77 0.78 0.77 0.77 combined distance-based + idea density + idea efficiency + LS&A 0.80 0.80 0.80 0.80 bine them. Table 4 summarizes the results; the first column indicates the cluster model (e.g., control indicates a cluster model trained on the control transcriptions), and the second column specifies the feature set. 
Our 12 automatically generated features (i.e., the combined set of distance-based measures, idea density, and idea efficiency) result in higher F-scores (0.74) than using 85 manually generated hsICUs (0.72); a two-sample paired t-test shows no difference (using control cluster model: t(9) = 1.10, p = 0.30; using dementia cluster model: t(9) = 0.74, p = 0.48) indicating the similarity of our method to the manual gold standard. Furthermore, we match state-of-the-art results (F-score of 0.80) when we augment the set of LS&A features with our automatically generated semantic features. 4 Discussion We demonstrated a method for generating topic models automatically within the context of clinical assessment, and confirmed that low idea density and low idea efficiency are salient indicators of cognitive impairment. In our data, we also found that speakers with and without Alzheimer’s disease generally discuss the same topics and in the same contexts, although those with AD give more spurious descriptions, as exemplified by the irrelevant topic cluster D7 (Table 3). Using a fully automated topic generation and feature extraction pipeline, we found a small set of features which perform as well as a large set of manually constructed hsICUs in binary classification experiments, achieving an F-score of 0.80 in 10-fold cross-validation on DementiaBank. The features which correlate most highly with class include: idea efficiency (Pearson’s r = −0.41), which means that healthy individuals discuss more topics per unit time; distance from cluster C4 (r = 0.34), which indicates that speakers with AD focus less on the topic of the three-legged stool; and idea density (r = −0.26), which shows that healthy speakers need fewer words to express the same number of topics. While we anticipated that combining a large normative corpus with local context windows from a clinical corpus would produce optimal vectors, using the former exclusively actually performs better. This phenomenon is being investigated. This implies that word-vector representations do not need to be adapted with context windows in specific clinical data in order to be effective. A limitation of the current work is its requirement of high-quality transcriptions of speech, since high word-error rates (WERs) could compromise semantic information. We are therefore generating automatic transcriptions of the DementiaBank audio using the Kaldi speech recognition toolkit6. So far, a triphone model with the standard insertion penalty (0) and language model scale (20) on DementiaBank gives the best average WER of 36.7±3.6% with 10-fold cross-validation. Continued optimization is the subject of ongoing research but preliminary experiments with these transcriptions indicate significantly lower perfor6http://kaldi.sourceforge.net/ 2344 mance of the baseline model (0.68 F-score; t(9) = 3.52, p < 0.01). While the eventual aim is a completely automatic system, our methodology overcomes several major challenges in the manual semantic annotation of clinical images for cognitive assessment, even with manual transcriptions. Specifically, our methodology is fully objective, sensitive to differences between groups, and generalizable to new stimuli which is especially important if longitudinal analysis is to avoid the socalled ‘practice effect’ by using multiple stimuli. Across many domains, to extract useful semantic features (such as idea density and idea efficiency), one needs to first identify information content units in speech or text. 
Our method can be applied to any picture or contentful stimuli, given a sufficient amount of normative data, with no modification. Although we apply this generalizable method to a single (albeit important) image used in clinical practice in this work, we note that we obtain better accuracies with this completely automated method than a completely manual alternative. Acknowledgments The authors would like to thank Selvana Morcos, a speech language pathologist at the Toronto Rehabilitation Institute, for her generous help with providing professional annotations of information content units for the BDAE Cookie Theft picture. References S. Ahmed, C. A. de Jager, A. F. Haigh, and P. Garrard. 2013a. Semantic processing in connected speech at a uniformly early stage of autopsy-confirmed Alzheimer’s disease. Neuropsychology, 27(1):79– 85. S. Ahmed, A. F. Haigh, C. A. de Jager, and P. Garrard. 2013b. Connected speech as a marker of disease progression in autopsy-proven Alzheimer’s disease. Brain, 136(12):3727–3737. C. Ballard, S. Gauthier, A. Corbett, C. Brayne, D. Aarsland, and E. Jones. 2011. Alzheimer’s disease. The Lancet, 377(9770):1019–1031. K. A. Bayles and A. W. Kaszniak. 1987. Communication and cognition in normal aging and dementia. Little, Brown, Boston. J. T. Becker, F. Boller, O. L. Lopez, J. Saxton, and K. L. McGonigle. 1994. The natural history of Alzheimer’s disease. Archives of Neurology, 51:585–594. B. Croisile, B. Ska, M. J. Brabant, A. Duchene, Y. Lepage, G. Aimard, and M. Trillet. 1996. Comparative study of oral and written picture description in patients with Alzheimer’s disease. Brain and Language, 53(1):1–19. K. E. Forbes-McKay and A. Venneri. 2005. Detecting subtle spontaneous language decline in early Alzheimer’s disease with a picture description task. Neurological Sciences, 26(4):243–254. K. C. Fraser and G. Hirst. 2016. Detecting semantic changes in Alzheimer’s disease with vector space models. In Dimitrios Kokkinakis, editor, Proceedings of LREC 2016 Workshop: Resources and Processing of Linguistic and Extra-Linguistic Data from People with Various Forms of Cognitive/Psychiatric Impairments (RaPID-2016), pages 1–8, Portoroˇz, Slovenia. Link¨oping University Electronic Press. K. C. Fraser, J. A. Meltzer, and F. Rudzicz. 2016. Linguistic features identify Alzheimer’s disease in narrative speech. Journal of Alzheimer’s Disease, 49(2):407–422. E. Giles, K. Patterson, and J. R. Hodges. 1996. Performance on the Boston Cookie Theft picture description task in patients with early dementia of the Alzheimer’s type: missing information. Aphasiology, 10(4):395–408. H. Goodglass and E. Kaplan. 1983. The assessment of aphasia and related disorders. Lea and Febiger, Philadelphia. D. Hakkani-T¨ur, D. Vergyri, and G. Tur. 2010. Speech-based automated cognitive status assessment. In 11th Annual Conference of the International Speech Communication Association, pages 258–261. D. B. Hier, K. Hagenlocker, and A. G. Shindler. 1985. Language disintegration in dementia: effects of etiology and severity. Brain and Language, 25(1):117– 133. A. Kertesz. 1982. The Western aphasia battery. Grune and Stratton, New York. Y. H. Lai, H. H. Pai, and Y. T. Lin. 2009. To be semantically-impaired or to be syntacticallyimpaired: linguistic patterns in Chinese-speaking persons with or without dementia. Journal of Neurolinguistics, 22(5):465–475. X. Le, I. Lancashire, G. Hirst, and R. Jokel. 2011. 
Longitudinal detection of dementia through lexical and syntactic changes in writing: a case study of three British novelists. Literary and Linguistic Computing, 26(4):435–461, may. B. MacWhinney. 2015. The CHILDES Project: Tools for analyzing talk. Lawrence Erlbaum Associates, Mahwah, NJ, 3rd edition. 2345 S. V. S. Pakhomov, G. E. Smith, D. Chacon, Y. Feliciano, N. Graff-Radford, R. Caselli, and D. S. Knopman. 2010. Computerized analysis of speech and language to identify psycholinguistic correlates of frontotemporal lobar degeneration. Cognitive and Behavioral Neurology, 23(3):165–177. J. Pennington, R. Socher, and C. D. Manning. 2014. GloVe: Global vectors for word representation. In Conference on Empirical Methods on Natural Language Processing (EMNLP), pages 1532–1543. D. A. Snowdon, S. J. Kemper, J. A. Mortimer, L. H. Greiner, D. R. Wekstein, and W. R. Markesbery. 1996. Linguistic ability in early life and cognitive function and Alzheimer’s disease in late life. Findings from the Nun Study. JAMA: the Journal of the American Medical Association, 275(7):528–532. R. A. Sperling, P. S. Aisen, L. A. Beckett, D. A. Bennett, S. Craft, A. M. Fagan, T. Iwatsubo, C. R. Jack, J. Kaye, T. J. Montine, D. C. Park, E. M. Reiman, C. C. Rowe, E. Siemers, Y. Stern, K. Yaffe, M. C. Carrillo, B. Thies, M. Morrison-Bogorad, M. V. Wagster, and C. H. Phelps. 2011. Toward defining the preclinical stages of Alzheimer’s disease: recommendations from the National Institute on AgingAlzheimer’s Association workgroups on diagnostic guidelines for Alzheimer’s disease. Alzheimer’s & Dementia: the Journal of the Alzheimer’s Association, 7(3):280–292. 2346
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2347–2357, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Chinese Couplet Generation with Neural Network Structures Rui Yan1,2, Cheng-Te Li3, Xiaohua Hu4, and Ming Zhang5 1Institute of Computer Science and Technology, Peking University, Beijing 100871, China 2Natural Language Processing Department, Baidu Inc., Beijing 100193, China 3Academia Sinica, Taipei 11529, Taiwan 4College of Computing and Informatics, Drexel University, Philadelphia, PA 19104, USA 5Department of Computer Science, Peking University, Beijing 100871, China [email protected], [email protected] [email protected], mzhang [email protected] Abstract Part of the unique cultural heritage of China is the Chinese couplet. Given a sentence (namely an antecedent clause), people reply with another sentence (namely a subsequent clause) equal in length. Moreover, a special phenomenon is that corresponding characters from the same position in the two clauses match each other by following certain constraints on semantic and/or syntactic relatedness. Automatic couplet generation by computer is viewed as a difficult problem and has not been fully explored. In this paper, we formulate the task as a natural language generation problem using neural network structures. Given the issued antecedent clause, the system generates the subsequent clause via sequential language modeling. To satisfy special characteristics of couplets, we incorporate the attention mechanism and polishing schema into the encoding-decoding process. The couplet is generated incrementally and iteratively. A comprehensive evaluation, using perplexity and BLEU measurements as well as human judgments, has demonstrated the effectiveness of our proposed approach. 1 Introduction Chinese antithetical couplets, (namely “对联”), form a special type of poetry composed of two clauses (i.e., sentences). The popularity of the game of Chinese couplet challenge manifests itself in many aspects of people’s life, e.g., as a means of expressing personal emotion, political views, or communicating messages at festive occasions. Hence, Chinese couplets are considered an important cultural heritage. A couplet is often written in calligraphy on red banners during special occasions such as wedding ceremonies and the Chinese New Year. People also use couplets to celebrate birthdays, mark the openings of a business, and commemorate historical events. We illustrate a real couplet for Chinese New Year celebration in Figure 1, and translate the couplet into English character-by-character. Usually in the couplet generation game, one person challenges the other person with a sentence (namely an antecedent clause). The other person then replies with another sentence (namely a subsequent clause) equal in length and term segmentation, in a way that corresponding characters from the same position in the two clauses match each other by obeying certain constraints on semantic and/or syntactic relatedness. We also illustrate the special phenomenon of Chinese couplet in Figure 1: “one” is paired with “two”, “term” is associated with “character”, “hundred” is mapped into “thousand”, and “happiness” is coupled with “treasures”. As opposed to free languages, couplets have unique poetic elegance, e.g., aestheticism and conciseness etc. Filling in the couplet is considered as a challenging task with a set of structural and semantic requirements. 
Only few best scholars are able to master the skill to manipulate and to organize terms. The Chinese couplet generation given the antecedent clause can be viewed as a big challenge in the joint area of Artificial Intelligence and Natural Language Processing. With the fast development of computing techniques, we realize that computers might play an important role in helping people to create couplets: 1) it is rather convenient for computers to sort out appropriate term combinations from a large corpus, and 2) computer programs can take great advantages to recognize, to learn, and even to remember patterns or rules given the corpus. Although computers are no sub2347 Figure 1: An example of a Chinese couplet for Chinese New Year. We mark the character-wise translation under each Chinese character of the couplet so as to illustrate that each character from the same position of the two clauses has the constraint of certain relatedness. Overall, the couplet can be translated as: the term of “peaceful and lucky” (i.e., 和顺) indicates countless happiness; the two characters “safe and sound” (a.k.a., 平and 安) worth innumerable treasures. stitute for human creativity, they can process very large text repositories of couplets. Furthermore, it is relatively straightforward for the machine to check whether a generated couplet conforms to constraint requirements. The above observations motivate automatic couplet generation using computational intelligence. Beyond the long-term goal of building an autonomous intelligent system capable of creating meaningful couplets eventually, there are potential short-term applications for augmented human expertise/experience to create couplets for entertainment or educational purposes. To design the automatic couplet generator, we first need to empirically study the generation criteria. We discuss some of the general generation standards here. For example, the couplet generally have rigid formats with the same length for both clauses. Such a syntactic constraint is strict: both clauses have exactly the same length while the length is measured in Chinese characters. Each character from the same position of the two clauses have certain constraints. This constraint is less strict. Since Chinese language is flexible sometimes, synonyms and antonyms both indicate semantic relatedness. Also, semantic coherence is a critical feature in couplets. A well-written couplet is supposed to be semantically coherent among both clauses. In this paper we are concerned with automatic couplet generation. We propose a neural couplet machine (NCM) based on neural network structures. Given a large collection of texts, we learn representations of individual characters, and their combinations within clauses as well as how they mutually reinforce and constrain each other. Given any specified antecedent clause, the system could generate a subsequent clause via sequential language modeling using encoding and decoding. To satisfy special characteristics of couplets, we incorporate the attention mechanism and polishing schema into the generation process. The couplet is generated incrementally and iteratively to refine wordings. Unlike the single-pass generation process, the hidden representations of the draft subsequent clause will be fed into the neural network structure to polish the next version of clause in our proposed system. In contrast to previous approaches, our generator makes utilizations of neighboring characters within the clause through an iterative polishing schema, which is novel. 
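The hard surface constraints just described are indeed straightforward to verify automatically, as the following sketch illustrates; the relatedness predicate is a placeholder (for instance, a synonym/antonym lexicon lookup) rather than a resource used in this work.

def satisfies_surface_constraints(antecedent, subsequent, related):
    # Strict constraint: both clauses have exactly the same character length.
    if len(antecedent) != len(subsequent):
        return False
    # Softer constraint: each aligned character pair should show some
    # semantic or syntactic relatedness (synonymy, antonymy, same category).
    return all(related(a, b) for a, b in zip(antecedent, subsequent))

Semantic coherence between the two clauses, by contrast, is not reducible to such a surface check; it is what the neural models introduced below are designed to capture.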
To sum up, our contributions are as follows. For the first time, we propose a series of neural network-based couplet generation models. We formulate a new system framework to take in the antecedent clauses and to output the subsequent clauses in the couplet pairs. We tackle the special characteristics of couplets, such as corresponding characters paired in the two clauses, by incorporating the attention mechanism into the generation process. For the 1st time, we propose a novel polishing schema to iteratively refine the generated couplet using local pattern of neighboring characters. The draft subsequent clause from the last iteration will be used as additional information to generate a revised version of the subsequent clause. The rest of the paper is organized as follows. In Section 2, we briefly summarize related work of couplet generation. Then Sections 3 and 4 show the overview of our approach paradigm and then detail the neural models. The experimental results and evaluation are reported in Section 5 and we draw conclusions Section 6. 2 Related Work There are very few studies focused on Chinese couplet generation, based on templates (Zhang and Sun, 2009) or statistic translations (Jiang and Zhou, 2008). The Chinese couplet generation task can be viewed as a reduced form of 2-sentence poem generation (Jiang and Zhou, 2008). Given the first line of the poem, the generator ought to generate the second line accordingly, which is a similar process as couplet generation. We consider automatic Chinese poetry generation to be a closely re2348 (a). Sequential couplet generation. (b). Couplet generation with attention.(c). Couplet generation with polishing schema. Figure 2: Three neural models for couplet generation. More details will be introduced in Section 4. lated research area. Note that there are still some differences between couplet generation and poetry generation. The task of generating the subsequent clause to match the given antecedent clause is more well-defined than generating all sentences of a poem. Moreover, not all of the sentences in the poems need to follow couplet constraints. There are some formal researches into the area of computer-assisted poetry generation. Scientists from different countries have studied the automatic poem composition in their own languages through different ways: 1) Genetic Algorithms. Manurung et al. (2004; 2011) propose to create poetic texts in English based on state search; 2) Statistical Machine Translation (SMT). Greene et al. (2010) propose a translation model to generation cross-lingual poetry, from Italian to English; 3) Rule-based Templates. Oliveira (2009; 2012) has proposed a system of poem generation platform based on semantic and grammar templates in Spanish. An interactive system has been proposed to reproduce the traditional Japanese poem named Haiku based on rule-based phrase search related to user queries (Tosa et al., 2008; Wu et al., 2009). Netzer et al. (2009) propose another way of Haiku generation using word association rules. As to computer-assisted Chinese poetry generation. There are now several Chinese poetry generators available. The system named Daoxiang1 basically relies on manual pattern selection. The system maintains a list of manually created terms related to pre-defined keywords, and inserts terms randomly into the selected template as a poem. The system is simple but random term selection leads to unnatural sentences. 1http://www.poeming.com/web/index.htm Zhou et al. 
(2010) use a genetic algorithm for Chinese poetry generation by tonal codings and state search. He et al. (2012) extend the couplet machine translation paradigm (Jiang and Zhou, 2008) from a 2-line couplet to a 4-line poem by giving previous sentences sequentially, considering structural templates. Yan et al. (2013; 2016) proposed a summarization framework to generate poems. Recently, along with the prosperity of neural networks, a recurrent neural network based language generation is proposed (Zhang and Lapata, 2014): the generation is more or less a translation process. Given previous sentences, the system generates the next sentence of the poem. We also briefly introduce deep neural networks, which contribute great improvements in NLP. A series of neural models are proposed, such as convolutional neural networks (CNN) (Kalchbrenner et al., 2014) and recurrent neural networks (RNN) (Mikolov et al., 2010) with or without gated recurrent units (GRU) (Cho et al., 2014) and longshort term memory (LSTM) units (Hochreiter and Schmidhuber, 1997). We conduct a pilot study to design neural network structures for couplet generation problems. For the first time, we propose a polishing schema for the couplet generation process, and combine it with the attention mechanism to satisfy the couplet constraints, which is novel. 3 Overview The basic idea of the Chinese couplet generation is to build a hidden representation of the antecedent clause, and then generate the subsequent clause accordingly, shown in Figure 2. In this way, our system works in an encoding-decoding manner. The units of couplet generation are characters. Problem formulation. We define the following 2349 formulations: • Input. Given the antecedent clause A = {x1, x2, . . . , xm}, xi ∈V, where xi is a character and V is the vocabulary, we then learn an abstractive representation of the antecedent clause A. • Output. We generate a subsequent clause S = {y1, y2, . . . , ym} according to A, which indicates semantic coherence. We have yi ∈V. To be more specific, each character yi in S is coordinated with the corresponding character xi in A, which is determined by the couplet constraint. As mentioned, we encode the input clause as a hidden vector, and then decode the vector into an output clause so that the two clauses are actually a pair of couplets. Since we have special characteristics for couplet generation, we propose different neural models for different concerns. The proposed models are extended incrementally so that the final model would be able to tackle complicated issues for couplet generation. We first introduce these neural models from a high level description, and then elaborate them in more details. Sequential Couplet Generation. The model accepts the input clause. We use a recurrent neural network (RNN) over characters to capture the meaning of the clause. Thus we obtain a single vector which represents the antecedent clause. We then use another RNN to decode the input vector into the subsequent clause by the character-wise generation. Basically, the process is a sequenceto-sequence generation via encoding and decoding, which is based on the global level of the clause. We show the diagram of sequential couplet generation in Figure 2(a). Couplet Generation with Attention. There is a special phenomenon within a pair of couplets: the characters from the same position in the antecedent clause and subsequent clause, i.e., xi and yi, generally have some sort of relationships such as “coupling” or “pairing”. 
Hence we ought to model such one-to-one correlation between xi and yi in the neural model for couplet generation. Recently, the attention mechanism is proposed to allow the decoder to dynamically select and linearly combine different parts of the input sequence with different weights. Basically, the attention mechanism models the alignment between positions between inputs and outputs, so it can be viewed as a local matching model. Moreover, the tonal coding issue can also be addressed by the pairwise attention mechanism. The extension of attention mechFigure 3: Couplet generation via sequential language modeling: plain neural couplet machine. anism to the sequential couplet generation model is shown in Figure 2(b). Polishing Schema for Generation. Couplet generation is a form of art, and art usually requires polishing. Unlike the traditional single-pass generation in previous neural models, our proposed couplet generator will be able to polish the generated couplets for one or more iterations to refine the wordings. The model is essentially the same as the sequential generation with attention except that the information representation of the previous generated clause draft will be again utilized as an input, serving as additional information for semantic coherence. The principle is illustrated in Figure 2(c): the generated draft from the previous iteration will be incorporated into the hidden state which generates the polished couplet pair in the next iteration. To sum up, we introduce three neural models for Chinese couplet generation. Each revised model targets at tackling an issue for couplet generation so that the system could try to imitate a human couplet generator. We further elaborate these neural models incrementally in details in Section 4. 4 Neural Generation Models 4.1 Sequential Couplet Generation The sequential couplet generation model is basically a sequence-to-sequence generation fashion (Sutskever et al., 2014) using encoding and decoding shown in Figure 3. We use a recurrent neural network (RNN) to iteratively pick up information over the character sequence x1, x2, . . . , xm of the input antecedent clause A. All characters are vectorized using their embeddings (Mikolov et al., 2013). For each character, the RNN allocates a hidden state si, which is dependent on the current character’s embedding xi and the previous state si−1. Since usually each clause in the couplet pair would not be quite long, it is sufficient to use a vanilla RNN with basic interactions. 2350 Figure 4: Couplet generation with attention mechanism, namely attention neural couplet machine. Attention signal is generated by both encoder and decoder, and then fed into the coupling vector. Calculation details are elaborated in Section 4.2. The equation for encoding is as follows: si = f(Whsi−1 + Wxxi + b) (1) x is the vector representation (i.e., embedding) of the character. W and b are parameters for weights and bias. f(·) is the non-linear activation function and we use ReLU (Nair and Hinton, 2010) in this paper. As for the hidden state hi in the decoding RNN, we have: hi = f(Wxxi−1 + Whhi−1) (2) 4.2 Couplet Generation with Attention As mentioned, there is special phenomenon in the couplet pair that the characters from the same position in the antecedent clause and the subsequent clause comply with certain relatedness, so that two clauses may, to some extent, look “symmetric”. Hence we introduce the attention mechanism into the couplet generation model. 
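Before the attention details, the two recurrences in Eqs. (1) and (2) can be sketched directly in NumPy; the dimensions, the initialization, and the hand-off of the encoder's final state to the decoder are assumptions, as they are not spelled out in this excerpt.

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def encode(char_embeddings, Wh, Wx, b):
    # Eq. (1): s_i = f(Wh s_{i-1} + Wx x_i + b), over the antecedent clause.
    s = np.zeros(Wh.shape[0])
    states = []
    for x in char_embeddings:          # x_i: embedding of the i-th character
        s = relu(Wh @ s + Wx @ x + b)
        states.append(s)
    return states                      # the final state summarizes the clause

def decode_step(x_prev, h_prev, Wx_d, Wh_d):
    # Eq. (2): h_i = f(Wx x_{i-1} + Wh h_{i-1}); the attention variant in
    # Eq. (3) below adds a coupling-vector term Wc c_i to the same sum.
    return relu(Wx_d @ x_prev + Wh_d @ h_prev)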
The attention mechanism coordinates, either statically or dynamically, different positions of the input sequence (Shang et al., 2015). To this end, we introduce a hidden coupling vector ci = ∑m j=1 αijsj. The coupling vectors linearly combine all parts from the antecedent clause, and determine which part should be utilized to generate the characters in the subsequent clause. The attention signal αij can be calculated as αij = σatt(sj, hi−1) after a softmax function. The score is based on how well the inputs from position j and the output at position i match. σatt(·) is parametrized as a neural network which is jointly trained with all the other components (Bahdanau et al., 2015; Hermann et al., 2015). This mechanism enjoys the advantage Figure 5: Couplet generation with the polishing schema, i.e., full neural couplet machine. Note that for conciseness, we only show the gist of this schema across polishing iterations. The shaded circles are the hidden vectors to generate characters in the subsequent clause. We omit the duplicated sequential and attention dependencies within each iteration as we have shown in Figures 3 & 4. of adaptively focusing on the corresponding characters of the input text according to the generated characters in the subsequent clause. The mechanism is pictorially shown in Figure 4. With the coupling vectors generated, we have the following equation for the decoding process with attention mechanism: hi = f(Wxxi−1 + Whhi−1 + Wcci) (3) 4.3 Polishing Schema for Generation Inspired by the observation that a human couplet generator might recompose the clause for several times, we propose a polishing schema for the couplet generation. Specifically, after a singlepass generation, the couplet generator itself shall be aware of the generated clause as a draft, so that polishing each and every character of the clause becomes possible. We hereby propose a convolutionary neural network (CNN) based polishing schema shown in Figure 5. The intuition for convolutionary structure is that this polishing schema guarantees better coherence: with the batch of neighboring characters, the couplet generator knows which character to generate during the revision process. A convolutional neural network applies a fixedsize window to extract local (neighboring) patterns of successive characters. Suppose the window is of size t, the detected features at a certain position 2351 xi, · · · , xi+t−1 is given by o(n) i = f(W[h(n) i ; · · · ; h(n) i+t−1] + b) (4) Here h(n) with the superscript n is the hidden vector representation from the n-th iteration. W and b are parameters for convolution. Semicolons refer to column vector concatenation. Also, f(·) is the non-linear activation function and we use ReLU (Nair and Hinton, 2010) as well. Note that we pad zero at the end of the term if a character does not have enough following characters to fill the slots in the convolution window. In this way, we obtain a set of detected features. Then a maxpooling layer aggregates information over different characters into a fixed-size vector. Now the couplet generation with both attention mechanism and polishing schema becomes: h(n+1) i = f(Wxxi−1 + Whh(n+1) i−1 + Wcc(n+1) i + Woo(n) i ) (5) Note that in this way ,we feed the information from the n-th generation iteration into the (n+1)th polishing iteration. For the iterations, we have the stopping criteria as follows. 
• After each iteration process, we have the subsequent clause generated; we encode the clause as h using the RNN encoder using the calculation shown in Equation (1). We stop the algorithm iteration when the cosine similarity between the two h(n+1) and h(n) from two successive iterations exceeds a threshold ∆(∆= 0.5 in this study). • Ideally, we shall let the algorithm converge by itself. There will always be some long-tail cases. To be practical, it is necessary to apply a termination schedule when the generator polishes for many times. We stop the couplet generator after a fixed number of recomposition. Here we empirically set the threshold as 5 times of polishing, which means 6 iterations in all. 5 Experiments and Evaluations 5.1 Experimental Setups Datasets. A large Chinese couplet corpus is necessary to learn the model for couplet generation. There is, however, no large-sized pure couplet collection available (Jiang and Zhou, 2008). As mentioned, generally people regard Chinese couplets as a reduced form of Chinese poetry and there are several large Chinese poem datasets publicly Table 1: Detailed information of the datasets. Each pair of couplets consist of two clauses. #Pairs #Character TANG Poem 26,833 6,358 SONG Poem 11,324 3,629 Couplet Forum 46,959 8,826 available, such as Poems of Tang Dynasty (i.e., Tang Poem) and Poems of Song Dynasty (i.e., Song Poem). It becomes a widely acceptable approximation to mine couplets out of existing poems, even though poems are not specifically intended for couplets2 (Jiang and Zhou, 2008; Yan et al., 2013; He et al., 2012). We are able to mine such sentence pairs out of the poems and filtering those do not conform to couplet constraints, which is a similar process mentioned in (Jiang and Zhou, 2008). Moreover, we also crawl couplets from couplet forums where couplet fans discuss, practice and show couplet works. We performed standard Chinese segmentation into characters. In all, we collect 85,116 couplets. We randomly choose 2,000 couplets for validation and 1,000 couplets for testing, other non-overlap ones for training. The details are shown in Table 1. Hyperparameters and Setups. Word embeddings (Mikolov et al., 2013) are a standard apparatus in neural network-based text processing. A word is mapped to a low dimensional, real-valued vector. This process, known as vectorization, captures some underlying meanings. Given enough data, usage, and context, word embeddings can make highly accurate guesses about the meaning of a particular word. Embeddings can equivalently be viewed that a word is first represented as a one-hot vector and multiplied by a look-up table (Mikolov et al., 2013). In our model, we first vectorize all words using their embeddings. Here we used 128-dimensional word embeddings through vectorization, and they were initialized randomly and learned during training. We set the width of convolution filters as 3. The above parameters were chosen empirically. Training. The objective for training is the cross entropy errors of the predicted character distribution and the actual character distribution in our 2For instance, in the 4-sentence poetry (namely quatrain, i.e., 绝句in Chinese), the 3rd and 4th sentences are usually paired; in the 8-sentence poetry (namely regulated verse, i.e., 律诗in Chinese), the 3rd-4th and 5th-6th sentences are generally form pairs which satisfy couplet constraints. 2352 corpus. An ℓ2 regularization term is also added to the objective. 
The model is trained with back propagation through time with the length being the time step. The objective is minimized by stochastic gradient descent with shuffled mini-batches (with a mini-batch size of 100) for optimization. During training, the cross entropy error of the output is back-propagated through all hidden layers. Initial learning rate was set to 0.8, and a multiplicative learning rate decay was applied. We used the validation set for early stopping. In practice, the training converges after a few epochs. 5.2 Evaluation Metrics It is generally difficult to judge the effect of couplets generated by computers. We propose to evaluate results from 3 different evaluation metrics. Perplexity. For most of the language generation research, language perplexity is a sanity check. Our first set of experiments involved intrinsic evaluation of the “perplexity” evaluation for the generated couplets. Perplexity is actually an entropy based evaluation. In this sense, the lower perplexity for the couplets generated, the better performance in purity for the generations, and the couplets are likely to be good. m denotes the length. pow [ 2, −1 m m ∑ i=1 log p(yi) ] BLEU. The Bilingual Evaluation Understudy (BLEU) score-based evaluation is usually used for machine translation (Papineni et al., 2002): given the reference translation(s), the algorithm evaluates the quality of text which has been machinetranslated from the reference translation as ground truth. We adapt the BLEU evaluation under the couplet generation scenario. Take a couplet from the dataset, we generate the computer authored subsequent clause given the antecedent clause, and compare it with the original subsequent clause written by humans. There is a concern for such an evaluation metric is that BLEU score can only reflect the partial capability of the models; there is (for most cases) only one ground truth for the generated couplet but actually there are more than one appropriate ways to generate a well-written couplet. The merit of BLEU evaluation is to examine how likely to approximate the computer generated couplet towards human authored ones. Human Evaluation. We also include human judgments from 13 evaluators who are graduate students majoring in Chinese literature. Evaluators are requested to express an opinion over the automatically generated couplets. A clear criterion is necessary for human evaluation. We use the evaluation standards discussed in (Wang, 2002; Jiang and Zhou, 2008; He et al., 2012; Yan et al., 2013; Zhang and Lapata, 2014): “syntactic” and “semantic” satisfaction. For the syntactic side, evaluators consider whether the subsequent clauses conform the length restriction and word pairing between the two clauses. For a higher level of semantic side, evaluators then consider whether the two clauses are semantically meaningful and coherent. Evaluators assign 0-1 scores for both syntactic and semantic criteria (‘0’-no, ‘1’- yes). The evaluation process is conducted as a blind-review3 5.3 Algorithms for Comparisons We implemented several generation methods as baselines. For fairness, we conduct the same pregeneration process to all algorithms. Standard SMT. We adapt the standard phrasebased statistical machine translation method (Koehn et al., 2003) for the couplet task, which regards the antecedent clause as the source language and the subsequent clause as the target language. Couplet SMT. 
Based on SMT techniques, a phrase-based SMT system for Chinese couplet generation is proposed in (Jiang and Zhou, 2008), which incorporates extensive coupletspecific character filtering and re-rankings. LSTM-RNN. We also include a sequence-tosequence LSTM-RNN (Sutskever et al., 2014). LSTM-RNN is basically a RNN using the LSTM units, which consists of memory cells in order to store information for extended periods of time. For generation, we first use an LSTM-RNN to encode the given antecedent sequence to a vector space, and then use another LSTM-RNN to decode the vector into the output sequence. Since Chinese couplet generation can be viewed as a reduced form of Chinese poetry generation, we also include some approaches designed for poetry generation as baselines. iPoet. Given the antecedent clause, the iPoet method first retrieves relevant couplets from the 3We understand that acceptability is a gradable concept, especially for the less subjective tasks. Here from our experience, to grade the ”yes”-”no” acceptability is more feasible for the human evaluators to judge (with good agreement). As to couplet evaluation, it might be more difficult for the evaluators to say ”very acceptable” or ”less acceptable”. We will try to make scale-based evaluation as the future work. 2353 Table 2: Overall performance comparison against baselines. Algorithm Perplexity BLEU Human Evaluation Syntactic Semantic Overall Standard SMT (Koehn et al., 2003) 128 21.68 0.563 0.248 0.811 Couplet SMT (Jiang and Zhou, 2008) 97 28.71 0.916 0.503 1.419 LSTM-RNN (Sutskever et al., 2014) 85 24.23 0.648 0.233 0.881 iPoet (Yan et al., 2013) 143 13.77 0.228 0.435 0.663 Poetry SMT (He et al., 2012) 121 23.11 0.802 0.516 1.318 RNNPG (Zhang and Lapata, 2014) 99 25.83 0.853 0.600 1.453 Neural Couplet Machine (NCM) 68 32.62 0.925 0.631 1.556 corpus, and then summarizes the retrieved couplets into a single clause based on a generative summarization framework (Yan et al., 2013). Poetry SMT. He et al. (2012) extend the couplet SMT method into a poetry-oriented SMT approach, with different focus and different filtering for different applications from Couplet SMT. RNNPG. The RNN-based poem generator (RNNPG) is proposed to generate a poem (Zhang and Lapata, 2014), and we adapt it into the couplet generation scenario. Given the antecedent clause, the subsequent clause is generated through the standard RNN process with contextual convolutions of the given antecedent clause. Neural Couplet Machine (NCM). We propose the neural generation model particularly for couplets. Basically we have the RNN based encodingdecoding process with attention mechanism and polishing schema. We demonstrate with the best performance of all NCM variants proposed here. 5.4 Performance In Table 2 we show the overall performance of our proposed NCM system compared with strong competing methods as described above. We see that, for perplexity, BLEU and human judgments, our system outperforms other baseline models. The standard SMT method manipulates characters according to the dataset by standard translation but ignores all couplet characteristics in the model. The Couplet SMT especially established for couplet generation performs much better than the general SMT method since it incorporates several filtering with couplet constraints. As a strongly competitive baseline of the neural model LSTM-RNN, the perplexity performance gets boosted in the generation process, which indicates that neural models show strong ability for language generation. 
However, there is a major drawback that LSTM-RNN does not explicitly model the couplet constraints such as length restrictions and so on for couplet pairs. LSTM-RNN is not really a couplet-driven generation method and might not capture the corresponding patterns between the antecedent clause and subsequent clause well enough to get a high BLEU score. For the group of algorithms originally proposed for poetry generation, we have summarizationbased poetry method iPoet, translation-based poetry method Poetry SMT and a neural network based method RNNPG. In general, the summarization based poetry method iPoet does not perform well in either perplexity or BLEU evaluation: summarization is not an intuitive way to model and capture the pairwise relationship between the antecedent and subsequent clause within the couplet pair. Poetry SMT performs better, indicating the translation-based solution makes more sense for couplet generation than summarization methods. RNNPG is a strong baseline which applies both neural network structures, while the insufficiency lies in the lack of couplet-oriented constraints during the generation process. Note that all poetryoriented methods show worse performance than the couplet SMT method, indicating that couplet constraints should be specially addressed. We hence introduce the neural couplet machine based on neural network structures specially designed for couplet generation. We incorporate attention mechanism and polishing schema into the generation process. The attention mechanism strengthens the coupling characteristics between the antecedent and subsequent clause and the polishing schema enables the system to revise and refine the generated couplets, which leads to better performance in experimental evaluations. For evaluations, the perplexity scores and BLEU scores show some consistency. Besides, we 2354 Figure 6: Performance comparison of all variants in the neural couplet machine family. observe that the BLEU scores are quite low for almost all methods. It is not surprising that these methods are not likely to generate the exactly same couplets as the ground truth, since that is not how the objective function works. BLEU can only partially calibrate the capability of couplet generation because there are many ways to create couplets which do not look like the ground truth but also make sense to people. Although quite subjective, the human evaluations in Table 2 can to some extent show the potentials of all couplet generators. 5.5 Analysis and Discussions There are two special strategies in the proposed neural model for couplet generation: 1) attention mechanism and 2) polishing schema. We hence analyze the separate contributions of the two components in all the neural couplet machine variants. We have the NCM-Plain model with no attention or polishing strategy. We incrementally add the attention mechanism as NCM-Attention, and then add the polishing schema as NCM-Full. The three NCM variants correspond to the three models proposed in this paper. Besides, for a complete comparison, we also include the plain NCM integrated with polishing schema but without attention mechanism, namely NCM-Polishing. The results are shown in Figure 6. We can see that NCM-Plain shows the weakest performance, with no strategy tailored for couplet generation. An interesting phenomenon is that NCMAttention has better performance in BLEU score while NCM-Polishing performs better in terms of perplexity. 
We conclude that attention mechanism captures the pairing patterns between the two clauses, and the polishing schema enables better wordings of semantic coherence in the couplet afFigure 7: The distribution of stopping iteration counts for all test data. Note that 6 iterations of generation means 5 times of polishing. ter several revisions. The two strategies address different concerns for couplet generation, hence NCM-Full performs best. We also take a closer look at the polishing schema proposed in this paper, which enables a multi-pass generation. The couplet generator can generate a subsequent clause utilizing additional information from the generated subsequent clause from the last iteration. It is a novel insight against previous methods. The effect and benefits of the polishing schema is demonstrated in Figure 6. We also examine the stopping criteria, shown in Figure 7. In general, most of the polishing process stops after 2-3 iterations. 6 Conclusions The Chinese couplet generation is a difficult task in the field of natural language generation. We propose a novel neural couplet machine to tackle this problem based on neural network structures. Given an antecedent clause, we generate a subsequent clause to create a couplet pair using a sequential generation process. The two innovative insights are that 1) we adapt the attention mechanism for the couplet coupling constraint, and 2) we propose a novel polishing schema to refine the generated couplets using additional information. We compare our approach with several baselines. We apply perplexity and BLEU to evaluate the performance of couplet generation as well as human judgments. We demonstrate that the neural couplet machine can generate rather good couplets and outperform baselines. Besides, both attention mechanism and polishing schema contribute to the better performance of the proposed approach. 2355 Acknowledgments We thank all the anonymous reviewers for their valuable and constructive comments. This paper is partially supported by the National Natural Science Foundation of China (NSFC Grant Numbers 61272343, 61472006), the Doctoral Program of Higher Education of China (Grant No. 20130001110032) as well as the National Basic Research Program (973 Program No. 2014CB340405). References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. International Conference on Learning Representations. Kyunghyun Cho, Bart van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259. Erica Greene, Tugba Bodrumlu, and Kevin Knight. 2010. Automatic analysis of rhythmic poetry with applications to generation and translation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP’10, pages 524–533. Jing He, Ming Zhou, and Long Jiang. 2012. Generating chinese classical poems with statistical machine translation models. In Twenty-Sixth AAAI Conference on Artificial Intelligence, pages 1650–1656. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1684– 1692. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Long Jiang and Ming Zhou. 2008. 
Generating chinese couplets using a statistical mt approach. In Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1, COLING ’08, pages 377–384. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. arXiv preprint arXiv:1404.2188. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language TechnologyVolume 1, pages 48–54. Association for Computational Linguistics. R. Manurung, G. Ritchie, and H. Thompson. 2011. Using genetic algorithms to create meaningful poetic text. Journal of Experimental & Theoretical Artificial Intelligence, 24(1):43–64. H. Manurung. 2004. An evolutionary algorithm approach to poetry generation. University of Edinburgh. College of Science and Engineering. School of Informatics. Tomas Mikolov, Martin Karafi´at, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH, volume 2, page 3. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Vinod Nair and Geoffrey E Hinton. 2010. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807–814. Yael Netzer, David Gabay, Yoav Goldberg, and Michael Elhadad. 2009. Gaiku: generating haiku with word associations norms. In Proceedings of the Workshop on Computational Approaches to Linguistic Creativity, CALC ’09, pages 32–39. H. Oliveira. 2009. Automatic generation of poetry: an overview. Universidade de Coimbra. H.G. Oliveira. 2012. Poetryme: a versatile platform for poetry generation. Computational Creativity, Concept Invention, and General Intelligence, 1:21. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Interational Joint Conference on Natural Language Processing, ACL-IJCNLP’15, pages 1577–1586. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. N. Tosa, H. Obara, and M. Minoh. 2008. Hitch haiku: An interactive supporting system for composing haiku poem. Entertainment Computing-ICEC 2008, pages 209–216. 2356 Li Wang. 2002. A summary of rhyming constraints of chinese poems. Beijing Press. X. Wu, N. Tosa, and R. Nakatsu. 2009. New hitch haiku: An interactive renku poem composition supporting tool applied for sightseeing navigation system. Entertainment Computing–ICEC 2009, pages 191–196. Rui Yan, Han Jiang, Mirella Lapata, Shou-De Lin, Xueqiang Lv, and Xiaoming Li. 2013. i, poet: Automatic chinese poetry composition through a generative summarization framework under constrained optimization. In Proceedings of the 23rd International Joint Conference on Artificial Intelligence, IJCAI’13, pages 2197–2203. Rui Yan. 2016. 
i, poet: Automatic poetry composition through recurrent neural networks with iterative polishing schema. In Proceedings of the 25th International Joint Conference on Artificial Intelligence, IJCAI’16. Xingxing Zhang and Mirella Lapata. 2014. Chinese poetry generation with recurrent neural networks. In Proceedings of Conference on Empirical Methods in Natural Language Processing, pages 670–680. Kai-Xu Zhang and Mao-Song Sun. 2009. An chinese couplet generation model based on statistics and rules. Journal of Chinese Information Processing, 1:017. Cheng-Le Zhou, Wei You, and Xiaojun Ding. 2010. Genetic algorithm and its implementation of automatic generation of chinese songci. Journal of Software, 21(3):427–437. 2357
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2358–2367, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task Danqi Chen and Jason Bolton and Christopher D. Manning Computer Science Stanford University Stanford, CA 94305-9020, USA {danqi,jebolton,manning}@cs.stanford.edu Abstract Enabling a computer to understand a document so that it can answer comprehension questions is a central, yet unsolved goal of NLP. A key factor impeding its solution by machine learned systems is the limited availability of human-annotated data. Hermann et al. (2015) seek to solve this problem by creating over a million training examples by pairing CNN and Daily Mail news articles with their summarized bullet points, and show that a neural network can then be trained to give good performance on this task. In this paper, we conduct a thorough examination of this new reading comprehension task. Our primary aim is to understand what depth of language understanding is required to do well on this task. We approach this from one side by doing a careful hand-analysis of a small subset of the problems and from the other by showing that simple, carefully designed systems can obtain accuracies of 72.4% and 75.8% on these two datasets, exceeding current state-of-the-art results by over 5% and approaching what we believe is the ceiling for performance on this task.1 1 Introduction Reading comprehension (RC) is the ability to read text, process it, and understand its meaning.2 How to endow computers with this capacity has been an elusive challenge and a long-standing goal of Artificial Intelligence (e.g., (Norvig, 1978)). Genuine reading comprehension involves interpretation of 1Our code is available at https://github.com/danqi/ rc-cnn-dailymail. 2https://en.wikipedia.org/wiki/Reading_ comprehension the text and making complex inferences. Human reading comprehension is often tested by asking questions that require interpretive understanding of a passage, and the same approach has been suggested for testing computers (Burges, 2013). In recent years, there have been several strands of work which attempt to collect human-labeled data for this task – in the form of document, question and answer triples – and to learn machine learning models directly from it (Richardson et al., 2013; Berant et al., 2014; Wang et al., 2015). However, these datasets consist of only hundreds of documents, as the labeled examples usually require considerable expertise and neat design, making the annotation process quite expensive. The subsequent scarcity of labeled examples prevents us from training powerful statistical models, such as deep learning models, and would seem to prevent a system from learning complex textual reasoning capacities. Recently, researchers at DeepMind (Hermann et al., 2015) had the appealing, original idea of exploiting the fact that the abundant news articles of CNN and Daily Mail are accompanied by bullet point summaries in order to heuristically create large-scale supervised training data for the reading comprehension task. Figure 1 gives an example. Their idea is that a bullet point usually summarizes one or several aspects of the article. If the computer understands the content of the article, it should be able to infer the missing entity in the bullet point. 
This is a clever way of creating supervised data cheaply and holds promise for making progress on training RC models; however, it is unclear what level of reading comprehension is actually needed to solve this somewhat artificial task and, indeed, what statistical models that do reasonably well on this task have actually learned. In this paper, our aim is to provide an in-depth and thoughtful analysis of this dataset and what level of natural language understanding is needed to 2358 ( @entity4 ) if you feel a ripple in the force today , it may be the news that the official @entity6 is getting its first gay character . according to the sci-fi website @entity9 , the upcoming novel " @entity11 " will feature a capable but flawed @entity13 official named @entity14 who " also happens to be a lesbian . " the character is the first gay figure in the official @entity6 -- the movies , television shows , comics and books approved by @entity6 franchise owner @entity22 -- according to @entity24 , editor of " @entity6 " books at @entity28 imprint @entity26 . Passage Question characters in " @placeholder " movies have gradually become more diverse Answer @entity6 Figure 1: An example item from dataset CNN. do well on it. We demonstrate that simple, carefully designed systems can obtain high, state-of-the-art accuracies of 72.4% and 75.8% on CNN and Daily Mail respectively. We do a careful hand-analysis of a small subset of the problems to provide data on their difficulty and what kinds of language understanding are needed to be successful and we try to diagnose what is learned by the systems that we have built. We conclude that: (i) this dataset is easier than previously realized, (ii) straightforward, conventional NLP systems can do much better on it than previously suggested, (iii) the distributed representations of deep learning systems are very effective at recognizing paraphrases, (iv) partly because of the nature of the questions, current systems much more have the nature of single-sentence relation extraction systems than larger-discoursecontext text understanding systems, (v) the systems that we present here are close to the ceiling of performance for single-sentence and unambiguous cases of this dataset, and (vi) the prospects for getting the final 20% of questions correct appear poor, since most of them involve issues in the data preparation which undermine the chances of answering the question (coreference errors or anonymization of entities making understanding too difficult). 2 The Reading Comprehension Task The RC datasets introduced in (Hermann et al., 2015) are made from articles on the news websites CNN and Daily Mail, utilizing articles and their bullet point summaries.3 Figure 1 demonstrates 3The datasets are available at https://github.com/ deepmind/rc-data. an example4: it consists of a passage p, a question q and an answer a, where the passage is a news article, the question is a cloze-style task, in which one of the article’s bullet points has had one entity replaced by a placeholder, and the answer is this questioned entity. The goal is to infer the missing entity (answer a) from all the possible entities which appear in the passage. A news article is usually associated with a few (e.g., 3–5) bullet points and each of them highlights one aspect of its content. The text has been run through a Google NLP pipeline. It it tokenized, lowercased, and named entity recognition and coreference resolution have been run. 
For each coreference chain containing at least one named entity, all items in the chain are replaced by an @entityn marker, for a distinct index n. Hermann et al. (2015) argue convincingly that such a strategy is necessary to ensure that systems approach this task by understanding the passage in front of them, rather than by using world knowledge or a language model to answer questions without needing to understand the passage. However, this also gives the task a somewhat artificial character. On the one hand, systems are greatly helped by entity recognition and coreference having already been performed; on the other, they suffer when either of these modules fail, as they do (in Figure 1, “the character” should probably be coreferent with @entity14; clearer examples of failure appear later on in our data analysis). Moreover, this inability to use world knowledge also makes it much more difficult for a human to do this task – occasionally it is very difficult or impossible for a human to determine the correct answer when presented with an item anonymized in this way. The creation of the datasets benefits from the sheer volume of news articles available online, so they offer a large and realistic testing ground for statistical models. Table 1 provides some statistics on the two datasets: there are 380k and 879k training examples for CNN and Daily Mail respectively. The passages are around 30 sentences and 800 tokens on average, while each question contains around 12–14 tokens. In the following sections, we seek to more deeply understand the nature of this dataset. We first build some straightforward systems in order to get a better idea of a lower-bound for the performance of current NLP systems. Then we turn to data analysis of a 4The original article can be found at http: //www.cnn.com/2015/03/10/entertainment/ feat-star-wars-gay-character/. 2359 CNN Daily Mail # Train 380,298 879,450 # Dev 3,924 64,835 # Test 3,198 53,182 Passage: avg. tokens 761.8 813.1 Passage: avg. sentences 32.3 28.9 Question: avg. tokens 12.5 14.3 Avg. # entities 26.2 26.2 Table 1: Data statistics of the CNN and Daily Mail datasets. The avg. tokens and sentences in the passage, the avg. tokens in the query, and the number of entities are based on statistics from the training set, but they are similar on the development and test sets. sample of the items to examine their nature and an upper bound on performance. 3 Our Systems In this section, we describe two systems we implemented – a conventional entity-centric classifier and an end-to-end neural network. While Hermann et al. (2015) do provide several baselines for performance on the RC task, we suspect that their baselines are not that strong. They attempt to use a frame-semantic parser, and we feel that the poor coverage of that parser undermines the results, and is not representative of what a straightforward NLP system – based on standard approaches to factoid question answering and relation extraction developed over the last 15 years – can achieve. Indeed, their frame-semantic model is markedly inferior to another baseline they provide, a heuristic word distance model. At present just two papers are available presenting results on this RC task, both presenting neural network approaches: (Hermann et al., 2015) and (Hill et al., 2016). While the latter is wrapped in the language of end-to-end memory networks, it actually presents a fairly simple window-based neural network classifier running on the CNN data. 
Its success again raises questions about the true nature and complexity of the RC task provided by this dataset, which we seek to clarify by building a simple attention-based neural net classifier. Given the (passage, question, answer) triple (p, q, a), p = {p1, . . ., pm} and q = {q1, . . ., ql} are sequences of tokens for the passage and question sentence, with q containing exactly one “@placeholder” token. The goal is to infer the correct entity a ∈p ∩E that the placeholder corresponds to, where E is the set of all abstract entity markers. Note that the correct answer entity must appear in the passage p. 3.1 Entity-Centric Classifier We first build a conventional feature-based classifier, aiming to explore what features are effective for this task. This is similar in spirit to (Wang et al., 2015), which at present has very competitive performance on the MCTest RC dataset (Richardson et al., 2013). The setup of this system is to design a feature vector f p,q(e) for each candidate entity e, and to learn a weight vector θ such that the correct answer a is expected to rank higher than all other candidate entities: θ⊺f p,q(a) > θ⊺f p,q(e), ∀e ∈E ∩p \ {a} (1) We employ the following feature templates: 1. Whether entity e occurs in the passage. 2. Whether entity e occurs in the question. 3. The frequency of entity e in the passage. 4. The first position of occurence of entity e in the passage. 5. n-gram exact match: whether there is an exact match between the text surrounding the placeholder and the text surrounding entity e. We have features for all combinations of matching left and/or right one or two words. 6. Word distance: we align the placeholder with each occurrence of entity e, and compute the average minimum distance of each non-stop question word from the entity in the passage. 7. Sentence co-occurrence: whether entity e cooccurs with another entity or verb that appears in the question, in some sentence of the passage. 8. Dependency parse match: we dependency parse both the question and all the sentences in the passage, and extract an indicator feature of whether w r−→@placeholder and w r−→e are both found; similar features are constructed for @placeholder r−→w and e r−→w. 3.2 End-to-end Neural Network Our neural network system is based on the AttentiveReader model proposed by (Hermann et al., 2015). The framework can be described in the following three steps (see Figure 2): 2360 ( @entity4 ) if you feel a ripple in the force today , it may be the news that the official @entity6 is getting its first gay character . according to the sci-fi website @entity9 , the upcoming novel " @entity11 " will feature a capable but flawed @entity13 official named @entity14 who " also happens to be a lesbian . " the character is the first gay figure in the official @entity6 -- the movies , television shows , comics and books approved by @entity6 franchise owner @entity22 -- according to @entity24 , editor of " @entity6 " books at @entity28 imprint @entity26 . Passage Question characters in " @placeholder " movies have gradually become more diverse Answer @entity6 … … … characters in " @placeholder " movies have gradually become more diverse Passage Question entity6 Answer Figure 2: Our neural network architecture for the reading comprehension task. Encoding: First, all the words are mapped to ddimensional vectors via an embedding matrix E ∈Rd×|V |; therefore we have p: p1, . . ., pm ∈Rd and q : q1, . . ., ql ∈Rd. 
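As a minimal illustration of this lookup step (a hypothetical NumPy sketch; the dimensions are toy values, the token indices are arbitrary, and E is untrained):

```python
# Each token index in the passage and question selects a d-dimensional
# column of the embedding matrix E (d x |V|).
import numpy as np

d, V = 100, 50000                      # embedding size and vocabulary size
E = 0.01 * np.random.randn(d, V)       # embedding matrix E in R^{d x |V|}

passage_ids = np.array([17, 4, 903, 4])   # toy token indices p_1 .. p_m
question_ids = np.array([25, 4, 1])       # toy token indices q_1 .. q_l

P = E[:, passage_ids]    # d x m matrix of passage word vectors
Q = E[:, question_ids]   # d x l matrix of question word vectors
print(P.shape, Q.shape)  # (100, 4) (100, 3)
```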
Next we use a shallow bi-directional LSTM with hidden size ˜h to encode contextual embeddings ˜pi of each word in the passage, −→h i = LSTM(−→h i−1, pi), i = 1, . . ., m ←−h i = LSTM(←−h i+1, pi), i = m, . . ., 1 and ˜pi = concat(−→h i, ←−h i) ∈Rh, where h = 2˜h. Meanwhile, we use another bi-directional LSTM to map the question q1, . . ., ql to an embedding q ∈Rh. Attention: In this step, the goal is to compare the question embedding and all the contextual embeddings, and select the pieces of information that are relevant to the question. We compute a probability distribution α depending on the degree of relevance between word pi (in its context) and the question q and then produce an output vector o which is a weighted combination of all contextual embeddings {˜pi}: αi = softmaxi q⊺Ws ˜pi (2) o = X i αi ˜pi (3) Ws ∈Rh×h is used in a bilinear term, which allows us to compute a similarity between q and ˜pi more flexibly than with just a dot product. Prediction: Using the output vector o, the system outputs the most likely answer using: a = arg maxa∈p∩E W⊺ ao (4) Finally, the system adds a softmax function on top of W⊺ ao and adopts a negative loglikelihood objective for training. Differences from (Hermann et al., 2015). Our model basically follows the AttentiveReader. However, to our surprise, our experiments observed nearly 8–10% improvement over the original AttentiveReader results on CNN and Daily Mail datasets (discussed in Sec. 4). Concretely, our model has the following differences: • We use a bilinear term, instead of a tanh layer to compute the relevance (attention) between question and contextual embeddings. The effectiveness of the simple bilinear attention function has been shown previously for neural machine translation by (Luong et al., 2015). • After obtaining the weighted contextual embeddings o, we use o for direct prediction. In contrast, the original model in (Hermann et al., 2015) combined o and the question embedding q via another non-linear layer before making final predictions. We found that we could remove this layer without harming performance. We believe it is sufficient for the model to learn to return the entity to which it maximally gives attention. • The original model considers all the words from the vocabulary V in making predictions. We think this is unnecessary, and only predict among entities which appear in the passage. Of these changes, only the first seems important; the other two just aim at keeping the model simple. 2361 Window-based MemN2Ns (Hill et al., 2016). Another recent neural network approach proposed by (Hill et al., 2016) is based on a memory network architecture (Weston et al., 2015). We think it is highly similar in spirit. The biggest difference is their way of encoding passages: they demonstrate that it is most effective to only use a 5-word context window when evaluating a candidate entity and they use a positional unigram approach to encode the contextual embeddings: if a window consists of 5 words x1, . . ., x5, then it is encoded as P5 i=1 Ei(xi), resulting in 5 separate embedding matrices to learn. They encode the 5-word window surrounding the placeholder in a similar way and all other words in the question text are ignored. In addition, they simply use a dot product to compute the “relevance” between the question and a contextual embedding. This simple model nevertheless works well, showing the extent to which this RC task can be done by very local context matching. 
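A minimal NumPy sketch of the attention and prediction steps in equations (2)–(4) follows. The contextual embeddings and the question embedding are random placeholders standing in for the bi-LSTM outputs, and W_s, W_a are untrained, so the sketch only illustrates the computation rather than a full implementation.

```python
# Bilinear attention (Eq. 2-3) and entity prediction (Eq. 4) with random
# stand-ins for the bi-LSTM outputs; nothing here is trained.
import numpy as np

rng = np.random.default_rng(0)
h, m = 8, 5                          # hidden size, passage length
P_tilde = rng.normal(size=(m, h))    # contextual embeddings ~p_1 .. ~p_m
q = rng.normal(size=h)               # question embedding
W_s = rng.normal(size=(h, h))        # bilinear attention weights
W_a = rng.normal(size=(h, 3))        # output weights, one column per entity

scores = P_tilde @ W_s.T @ q                   # q^T W_s ~p_i for each position i
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                           # softmax over passage positions
o = alpha @ P_tilde                            # weighted sum of contextual embeddings

entities_in_passage = [0, 2]                   # restrict prediction to entities in p
entity_scores = W_a[:, entities_in_passage].T @ o
answer = entities_in_passage[int(np.argmax(entity_scores))]
print(alpha, answer)
```

The restriction to entities_in_passage reflects the third modeling difference listed above: candidates are limited to entities that actually occur in the passage rather than the whole vocabulary.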
4 Experiments 4.1 Training Details For training our conventional classifier, we use the implementation of LambdaMART (Wu et al., 2010) in the RankLib package.5 We use this ranking algorithm since our problem is naturally a ranking problem and forests of boosted decision trees have been very successful lately (as seen, e.g., in many recent Kaggle competitions). We do not use all the features of LambdaMART since we are only scoring 1/0 loss on the first ranked proposal, rather than using an IR-style metric to score ranked results. We use Stanford’s neural network dependency parser (Chen and Manning, 2014) to parse all our document and question text, and all other features can be extracted without additional tools. For training our neural networks, we only keep the most frequent |V| = 50k words (including entity and placeholder markers), and map all other words to an <unk> token. We choose word embedding size d = 100, and use the 100-dimensional pretrained GloVe word embeddings (Pennington et al., 2014) for initialization. The attention and output parameters are initialized from a uniform distribution between (−0.01, 0.01), and the LSTM weights are initialized from a Gaussian distribution N (0, 0.1). We use hidden size h = 128 for CNN and 256 for Daily Mail. Optimization is carried out using 5https://sourceforge.net/p/lemur/wiki/ RankLib/. vanilla stochastic gradient descent (SGD), with a fixed learning rate of 0.1. We sort all the examples by the length of its passage, and randomly sample a mini-batch of size 32 for each update. We also apply dropout with probability 0.2 to the embedding layer and gradient clipping when the norm of gradients exceeds 10. All of our models are run on a single GPU (GeForce GTX TITAN X), with roughly a runtime of 6 hours per epoch for CNN, and 15 hours per epoch for Daily Mail. We run all the models up to 30 epochs and select the model that achieves the best accuracy on the development set. 4.2 Main Results Table 2 presents our main results. The conventional feature-based classifier obtains 67.9% accuracy on the CNN test set. Not only does this significantly outperform any of the symbolic approaches reported in (Hermann et al., 2015), it also outperforms all the neural network systems from their paper and the best single-system result reported so far from (Hill et al., 2016). This suggests that the task might not be as difficult as suggested, and a simple feature set can cover many of the cases. Table 3 presents a feature ablation analysis of our entity-centric classifier on the development portion of the CNN dataset. It shows that n-gram match and frequency of entities are the two most important classes of features. More dramatically, our single-model neural network surpasses the previous results by a large margin (over 5%), pushing up the state-of-the-art accuracies to 72.4% and 75.8% respectively. Due to resource constraints, we have not had a chance to investigate ensembles of models, which generally can bring further gains, as demonstrated in (Hill et al., 2016) and many other papers. Concurrently with our paper, Kadlec et al. (2016) and Kobayashi et al. (2016) also experiment on these two datasets and report competitive results. However, our single model not only still outperforms theirs, but also appears to be structurally simpler. All these recent efforts converge to similar numbers, and we believe that they are approaching the ceiling performance of this task, as we will indicate in the next section. 
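As an illustration of the optimization details above, here is a hypothetical sketch of a single update with vanilla SGD and global-norm gradient clipping (one common form of clipping); the function name, parameter names, and shapes are placeholders, not part of the released code.

```python
# Hypothetical sketch: fixed-rate SGD with gradients rescaled when their
# global norm exceeds a threshold (here 10, as in the training setup above).
import numpy as np

def sgd_step(params, grads, lr=0.1, max_norm=10.0):
    """In-place SGD update with global-norm gradient clipping."""
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads.values()))
    scale = min(1.0, max_norm / (total_norm + 1e-8))
    for name, g in grads.items():
        params[name] -= lr * scale * g
    return total_norm

params = {"W_s": np.zeros((4, 4)), "W_a": np.zeros((4, 3))}
grads = {"W_s": np.ones((4, 4)) * 5.0, "W_a": np.ones((4, 3)) * 5.0}
print(sgd_step(params, grads))   # gradients are rescaled before the update
```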
5 Data Analysis So far, we have good results via either of our systems. In this section, we aim to conduct an in-depth analysis and answer the following questions: (i) Since the 2362 Model CNN Daily Mail Dev Test Dev Test Frame-semantic model † 36.3 40.2 35.5 35.5 Word distance model † 50.5 50.9 56.4 55.5 Deep LSTM Reader † 55.0 57.0 63.3 62.2 Attentive Reader † 61.6 63.0 70.5 69.0 Impatient Reader † 61.8 63.8 69.0 68.0 MemNNs (window memory) ‡ 58.0 60.6 N/A N/A MemNNs (window memory + self-sup.) ‡ 63.4 66.8 N/A N/A MemNNs (ensemble) ‡ 66.2∗ 69.4∗ N/A N/A Ours: Classifier 67.1 67.9 69.1 68.3 Ours: Neural net 72.4 72.4 76.9 75.8 Table 2: Accuracy of all models on the CNN and Daily Mail datasets. Results marked † are from (Hermann et al., 2015) and results marked ‡ are from (Hill et al., 2016). Classifier and Neural net denote our entity-centric classifier and neural network systems respectively. The numbers marked with ∗indicate that the results are from ensemble models. Features Accuracy Full model 67.1 −whether e is in the passage 67.1 −whether e is in the question 67.0 −frequency of e 63.7 −position of e 65.9 −n-gram match 60.5 −word distance 65.4 −sentence co-occurrence 66.0 −dependency parse match 65.6 Table 3: Feature ablation analysis of our entitycentric classifier on the development portion of the CNN dataset. The numbers denote the accuracy after we exclude each feature from the full system, so a low number indicates an important feature. dataset was created in an automatic and heuristic way, how many of the questions are trivial to answer, and how many are noisy and not answerable? (ii) What have these models learned? What are the prospects for further improving them? To study this, we randomly sampled 100 examples from the dev portion of the CNN dataset for analysis (see more details in Appendix A). 5.1 Breakdown of the Examples After carefully analyzing these 100 examples, we roughly classify them into the following categories (if an example satisfies more than one category, we classify it into the earliest one): Exact match The nearest words around the placeholder are also found in the passage surrounding an entity marker; the answer is self-evident. Sentence-level paraphrasing The question text is entailed/rephrased by exactly one sentence in the passage, so the answer can definitely be identified from that sentence. Partial clue In many cases, even though we cannot find a complete semantic match between the question text and some sentence, we are still able to infer the answer through partial clues, such as some word/concept overlap. Multiple sentences It requires processing multiple sentences to infer the correct answer. Coreference errors It is unavoidable that there are many coreference errors in the dataset. This category includes those examples with critical coreference errors for the answer entity or key entities appearing in the question. Basically we treat this category as “not answerable”. Ambiguous or very hard This category includes examples for which we think humans are not able to obtain the correct answer (confidently). Table 5 provides our estimate of the percentage for each category, and Table 4 presents one representative example from each category. To our surprise, 2363 Category Question Passage Exact Match it ’s clear @entity0 is leaning toward @placeholder , says an expert who monitors @entity0 . . . 
@entity116 , who follows @entity0 ’s operations and propaganda closely , recently told @entity3 , it ’s clear @entity0 is leaning toward @entity60 in terms of doctrine , ideology and an emphasis on holding territory after operations . . . . Paraphrase @placeholder says he understands why @entity0 wo n’t play at his tournament . . . @entity0 called me personally to let me know that he would n’t be playing here at @entity23 , " @entity3 said on his @entity21 event ’s website . . . . Partial clue a tv movie based on @entity2 ’s book @placeholder casts a @entity76 actor as @entity5 . . . to @entity12 @entity2 professed that his @entity11 is not a religious book . . . . Multiple sent. he ’s doing a his - and - her duet all by himself , @entity6 said of @placeholder . . . we got some groundbreaking performances , here too , tonight , @entity6 said . we got @entity17 , who will be doing some musical performances . he ’s doing a his - and - her duet all by himself . . . . Coref. Error rapper @placeholder " disgusted , " cancels upcoming show for @entity280 . . . with hip - hop star @entity246 saying on @entity247 that he was canceling an upcoming show for the @entity249 . . . . (but @entity249 = @entity280 = SAEs) Hard pilot error and snow were reasons stated for @placeholder plane crash . . . a small aircraft carrying @entity5 , @entity6 and @entity7 the @entity12 @entity3 crashed a few miles from @entity9 , near @entity10 , @entity11 . . . . Table 4: Some representative examples from each category. No. Category (%) 1 Exact match 13 2 Paraphrasing 41 3 Partial clue 19 4 Multiple sentences 2 5 Coreference errors 8 6 Ambiguous / hard 17 Table 5: An estimate of the breakdown of the dataset into classes, based on the analysis of our sampled 100 examples from the CNN dataset. “coreference errors” and “ambiguous/hard” cases account for 25% of this sample set, based on our manual analysis, and this certainly will be a barrier for training models with an accuracy much above 75% (although, of course, a model can sometimes make a lucky guess). Additionally, only 2 examples require multiple sentences for inference – this is a lower rate than we expected and Hermann et al. (2015) suggest. Therefore, we hypothesize that in most of the “answerable” cases, the goal is to Category Classifier Neural net Exact match 13 (100.0%) 13 (100.0%) Paraphrasing 32 (78.1%) 39 (95.1%) Partial clue 14 (73.7%) 17 (89.5%) Multiple sentences 1 (50.0%) 1 (50.0%) Coreference errors 4 (50.0%) 3 (37.5%) Ambiguous / hard 2 (11.8%) 1 (5.9%) All 66 (66.0%) 74 (74.0%) Table 6: The per-category performance of our two systems. identify the most relevant (single) sentence, and then to infer the answer based upon it. 5.2 Per-category Performance Now, we further analyze the predictions of our two systems, based on the above categorization. As seen in Table 6, we have the following observations: (i) The exact-match cases are quite simple and both systems get 100% correct. (ii) For the ambiguous/hard and entity-linking-error cases, 2364 meeting our expectations, both of the systems perform poorly. (iii) The two systems mainly differ in paraphrasing cases, and some of the “partial clue” cases. This clearly shows how neural networks are better capable of learning semantic matches involving paraphrasing or lexical variation between the two sentences. (iv) We believe that the neural-net system already achieves near-optimal performance on all the single-sentence and unambiguous cases. 
There does not seem to be much useful headroom for exploring more sophisticated natural language understanding approaches on this dataset. 6 Related Tasks We briefly survey other tasks related to reading comprehension. MCTest (Richardson et al., 2013) is an opendomain reading comprehension task, in the form of fictional short stories, accompanied by multiplechoice questions. It was carefully created using crowd sourcing, and aims at a 7-year-old reading comprehension level. On the one hand, this dataset has a high demand on various reasoning capacities: over 50% of the questions require multiple sentences to answer and also the questions come in assorted categories (what, why, how, whose, which, etc). On the other hand, the full dataset has only 660 paragraphs in total (each paragraph is associated with 4 questions), which renders training statistical models (especially complex ones) very difficult. Up to now, the best solutions (Sachan et al., 2015; Wang et al., 2015) are still heavily relying on manually curated syntactic/semantic features, with the aid of additional knowledge (e.g., word embeddings, lexical/paragraph databases). Children Book Test (Hill et al., 2016) was developed in a similar spirit to the CNN/Daily Mail datasets. It takes any consecutive 21 sentences from a children’s book – the first 20 sentences are used as the passage, and the goal is to infer a missing word in the 21st sentence (question and answer). The questions are also categorized by the type of the missing word: named entity, common noun, preposition or verb. According to the first study on this dataset (Hill et al., 2016), a language model (an n-gram model or a recurrent neural network) with local context is sufficient for predicting verbs or prepositions; however, for named entities or common nouns, it improves performance to scan through the whole paragraph to make predictions. So far, the best published results are reported by window-based memory networks. bAbI (Weston et al., 2016) is a collection of artificial datasets, consisting of 20 different reasoning types. It encourages the development of models with the ability to chain reasoning, induction/deduction, etc., so that they can answer a question like “The football is in the playground” after reading a sequence of sentences “John is in the playground; Bob is in the office; John picked up the football; Bob went to the kitchen.” Various types of memory networks (Sukhbaatar et al., 2015; Kumar et al., 2016) have been shown effective on these tasks, and Lee et al. (2016) show that vector space models based on extensive problem analysis can obtain near-perfect accuracies on all the categories. Despite these promising results, this dataset is limited to a small vocabulary (only 100–200 words) and simple language variations, so there is still a huge gap from real-world datasets that we need to fill in. 7 Conclusion In this paper, we carefully examined the recent CNN/Daily Mail reading comprehension task. Our systems demonstrated state-of-the-art results, but more importantly, we performed a careful analysis of the dataset by hand. Overall, we think the CNN/Daily Mail datasets are valuable datasets, which provide a promising avenue for training effective statistical models for reading comprehension tasks. 
Nevertheless, we argue that: (i) this dataset is still quite noisy due to its method of data creation and coreference errors; (ii) current neural networks have almost reached a performance ceiling on this dataset; and (iii) the required reasoning and inference level of this dataset is still quite simple. As future work, we need to consider how we can utilize these datasets (and the models trained upon them) to help solve more complex RC reasoning tasks (with less annotated data). Acknowledgments We thank the anonymous reviewers for their thoughtful feedback. Stanford University gratefully acknowledges the support of the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040. Any opinions, findings, 2365 and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the DARPA, AFRL, or the US government. References Jonathan Berant, Vivek Srikumar, Pei-Chun Chen, Abby Vander Linden, Brittany Harding, Brad Huang, Peter Clark, and Christopher D. Manning. 2014. Modeling biological processes for reading comprehension. In Empirical Methods in Natural Language Processing (EMNLP), pages 1499–1510. Christopher J.C. Burges. 2013. Towards the machine comprehension of text: An essay. Technical report, Microsoft Research Technical Report MSR-TR-2013125. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Empirical Methods in Natural Language Processing (EMNLP), pages 740–750. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems (NIPS), pages 1684–1692. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. The goldilocks principle: Reading children’s books with explicit memory representations. In International Conference on Learning Representations (ICLR). Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. 2016. Text understanding with the attention sum reader network. In Association for Computational Linguistics (ACL). Sosuke Kobayashi, Ran Tian, Naoaki Okazaki, and Kentaro Inui. 2016. Dynamic entity representation with max-pooling improves machine reading. In North American Association for Computational Linguistics (NAACL). Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 2016. Ask me anything: Dynamic memory networks for natural language processing. In International Conference on Machine Learning (ICML). Moontae Lee, Xiaodong He, Wen-tau Yih, Jianfeng Gao, Li Deng, and Paul Smolensky. 2016. Reasoning in vector space: An exploratory study of question answering. In International Conference on Learning Representations (ICLR). Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1412–1421. Peter Norvig. 1978. A Unified Theory of Inference for Text Understanding. Ph.D. thesis, University of California, Berkeley. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Matthew Richardson, Christopher J.C. 
Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Empirical Methods in Natural Language Processing (EMNLP), pages 193–203. Mrinmaya Sachan, Kumar Dubey, Eric Xing, and Matthew Richardson. 2015. Learning answerentailing structures for machine comprehension. In Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL/IJCNLP), pages 239–249. Sainbayar Sukhbaatar, arthur szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In Advances in Neural Information Processing Systems (NIPS), pages 2431–2439. Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. 2015. Machine comprehension with syntax, frames, and semantics. In Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL/IJCNLP), pages 700–706. Jason Weston, Sumit Chopra, and Antoine Bordes. 2015. Memory networks. In International Conference on Learning Representations (ICLR). Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2016. Towards AI-complete question answering: A set of prerequisite toy tasks. In International Conference on Learning Representations (ICLR). Qiang Wu, Christopher J. Burges, Krysta M. Svore, and Jianfeng Gao. 2010. Adapting boosting for information retrieval measures. Information Retrieval, pages 254–270. A Samples and Labeled Categories from the CNN Dataset For the analysis in Section 5, we uniformly sampled 100 examples from the development set of the CNN dataset. Table 8 provides a full index list of our samples and Table 7 presents our labeled categories. 2366 Category Sample IDs Exact match (13) 8, 11, 23, 27, 28, 32, 43, 57, 63, 72, 86, 87, 99 Sentence-level paraphrasing (41) 0, 2, 7, 9, 12, 14, 16, 18, 19, 20, 29, 30, 31, 34, 36, 37, 39, 41, 42, 44, 47, 48, 52, 54, 58, 64, 65, 66, 69, 73, 74, 78, 80, 81, 82, 84, 85, 90, 92, 95, 96 Partial clues (19) 4, 17, 21, 24, 35, 38, 45, 53, 55, 56, 61, 62, 75, 83, 88, 89, 91, 97, 98 Multiple sentences (2) 5, 76 Coreference errors (8) 6, 22, 40, 46, 51, 60, 68, 94 Ambiguous or very hard (17) 1, 3, 10, 13, 15, 25, 26, 33, 49, 50, 59, 67, 70, 71, 77, 79, 93 Table 7: Our labeled categories of the 100 samples. 
ID Filename ID Filename 0 ddb1e746f88a22fee654ecde8f018e7586595045.question 1 2bef8ec21b10a3294b1496d9a86f29f0592d2300.question 2 38c702812a874f983e9890c32ba832841a327351.question 3 636857045cf266dd69b67b1e53617bed5253dc33.question 4 417cbffd5e6275b3c42cb88be222a9f6c7d415f1.question 5 ef96409c707a699e4055a1d0684eecdb6e115c16.question 6 b4e157a6a34bf11a03e0b5cd55065c0f39ac8d60.question 7 1d75e7c59978c7c06f3aecaf52bc35b8919eee17.question 8 223c8e3aeddc3f65fee1964df17bb72f89b723e4.question 9 13d33b8c86375b0f5fdc856116e91a7355c6fc5a.question 10 378fd418b8ec18dff406be07ec225e6bf53659f5.question 11 d8253b7f22662911c19ec4468f81b9db29df1746.question 12 80529c792d3a368861b404c1ce4d7ad3c12e552a.question 13 728e7b365e941d814676168c78c9c4f38892a550.question 14 3cf6fb2c0d09927a12add82b4a3f248da740d0de.question 15 04b827f84e60659258e19806afe9f8d10b764db1.question 16 f0abf359d71f7896abd09ff7b3319c70f2ded81e.question 17 b6696e0f2166a75fcefbe4f28d0ad06e420eef23.question 18 881ab3139c34e9d9f29eb11601321a234d096272.question 19 66f5208d62b543ee41accb7a560d63ff40413bac.question 20 f83a70d469fa667f0952959346b496fbf3cdb35c.question 21 1853813a80f83a1661dd3f6695559674c749525e.question 22 02664d5e3af321afbaf4ee351ba1f24643746451.question 23 20417b5efb836530846ddf677d1bd0bbc831643c.question 24 42c25a01801228a863c508f9d9e95399ea5f37a4.question 25 70a3ba822770abcaf64dd131c85ec964d172c312.question 26 b6636e525ad58ffdc9a7c18187fb3412660d2cdd.question 27 6147c9f2b3d1cc6fbc57c2137f0356513f49bf46.question 28 262b855e2f24e1b2e4e0ba01ace81a1f214d729e.question 29 d7211f4d21f40461bb59954e53360eeb4bb6c664.question 30 be813e58ae9387a9fdaf771656c8e1122794e515.question 31 ad39c5217042f36e4c1458e9397b4a588bbf8cf9.question 32 9534c3907f1cd917d24a9e4f2afc5b38b82d9fca.question 33 3fbe4bfb721a6e1aa60502089c46240d5c332c05.question 34 6efa2d6bad587bde65ca22d10eca83cf0176d84f.question 35 436aa25e28d3a026c4fcd658a852b6a24fc6935e.question 36 0c44d6ef109d33543cfbd26c95c9c3f6fe33a995.question 37 8472b859c5a8d18454644d9acdb5edd1db175eb5.question 38 fb4dd20e0f464423b6407fd0d21cc4384905cf26.question 39 a192ddbcecf2b00260ae4c7c3c20df4d5ce47a85.question 40 f7133f844967483519dbf632e2f3fb90c5625a4c.question 41 29b274958eb057e8f1688f02ef8dbc1c6d06c954.question 42 8ea6ad57c1c5eb1950f50ea47231a5b3f32dd639.question 43 1e43f2349b17dac6d1b3143f8c5556e2257be92c.question 44 7f11f0b4f6bb9aaa3bdc74bffaed5c869b26be97.question 45 8e6d8d984e51adb5071aad22680419854185eaea.question 46 57fc2b7ffcfbd1068fbc33b95d5786e2bff24698.question 47 57b773478955811a8077c98840d85af03e1b4f05.question 48 d857700721b5835c3472ba73ef7abfad0c9c499f.question 49 f8eedded53c96e0cb98e2e95623714d2737f29da.question 50 4c488f41622ad48977a60c2283910f15a736417e.question 51 39680fd0bff53f2ca02f632eabbc024d698f979e.question 52 addd9cebe24c96b4a3c8e9a50cd2a57905b6defb.question 53 50317f7a626e23628e4bfd190e987ad5af7d283e.question 54 3f7ac912a75e4ef7a56987bff37440ffa14770c6.question 55 610012ef561027623f4b4e3b8310c1c41dc819cc.question 56 d9c2e9bfc71045be2ecd959676016599e4637ed1.question 57 848c068db210e0b255f83c4f8b01d2d421fb9c94.question 58 f5c2753703b66d26f43bafe7f157803dc96eedbc.question 59 4f76379f1c7b1d4acc5a4c82ced64af6313698dd.question 60 e5bb1c27d07f1591929bf0283075ad1bc1fc0b50.question 61 33b911f9074c80eb18a57f657ad01393582059be.question 62 58c4c046654af52a3cb8f6890411a41c0dd0063b.question 63 7b03f730fda1b247e9f124b692e3298859785ef3.question 64 ece6f4e047856d5a84811a67ac9780d48044e69a.question 65 35565dc6aecc0f1203842ef13aede0a14a8cf075.question 66 
ddf3f2b06353fe8a9b50043f926eb3ab318e91b2.question 67 e248e59739c9c013a2b1b7385d881e0f879b341d.question 68 e86d3fa2a74625620bcae0003dfbe13416ee29cf.question 69 176bf03c9c19951a8ae5197505a568454a6d4526.question 70 ee694cb968ae99aea36f910355bf73da417274c0.question 71 7a666f78590edbaf7c4d73c4ea641c545295a513.question 72 91e3cdd46a70d6dfbe917c6241eab907da4b1562.question 73 e54d9bdcb478ecc490608459d3405571979ef3f2.question 74 f3737e4de9864f083d6697293be650e02505768c.question 75 1fc7488755d24696a4ed1aabc0a21b8b9755d8c6.question 76 fb3eadd07b9f1df1f8a7a6b136ad6d06f4981442.question 77 1406bdad74b3f932342718d5d5d0946a906d73e2.question 78 54b6396669bdb2e30715085745d4f98d058269ef.question 79 0a53102673f2bebc36ce74bf71db1b42a0187052.question 80 d5eb4f98551d23810bfeb0e5b8a94037bcf58b0d.question 81 370de4ffe0f2f9691e4bd456ff344a6a337e0edf.question 82 12f32c770c86083ff21b25de7626505c06440018.question 83 9f6b5cff3ce146e21e323a1462c3eff8fca3d4a0.question 84 1c2a14f525fa3802b8da52aebaa9abd2091f9215.question 85 f2416e14d89d40562284ba2d15f7d5cc59c7e602.question 86 adcf5881856bcbaf1ad93d06a3c5431f6a0319ba.question 87 097d34b804c4c052591984d51444c4a97a3c41ac.question 88 773066c39bb3b593f676caf03f7e7370a8cd2a43.question 89 598cf5ff08ea75dcedda31ac1300e49cdf90893a.question 90 b66ebaaefb844f1216fd3d28eb160b08f42cde62.question 91 535a44842decdc23c11bae50d9393b923897187e.question 92 e27ca3104a596171940db8501c4868ed2fbc8cea.question 93 bb07799b4193cffa90792f92a8c14d591754a7f3.question 94 83ff109c6ccd512abdf317220337b98ef551d94a.question 95 5ede07a1e4ac56a0155d852df0f5bb6bde3cb507.question 96 7a2a9a7fbb44b0e51512c61502ce2292170400c1.question 97 9dcdc052682b041cdbf2fadc8e55f1bafc88fe61.question 98 0c2e28b7f373f29f3796d29047556766cc1dd709.question 99 2bdf1696bfd2579bb719402e9a6fa99cb8dbf587.question Table 8: A full index list of our samples. 2367
2016
223
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2368–2378, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Learning Language Games through Interaction Sida I. Wang Percy Liang Christopher D. Manning Computer Science Department Stanford University {sidaw,pliang,manning}@cs.stanford.edu Abstract We introduce a new language learning setting relevant to building adaptive natural language interfaces. It is inspired by Wittgenstein’s language games: a human wishes to accomplish some task (e.g., achieving a certain configuration of blocks), but can only communicate with a computer, who performs the actual actions (e.g., removing all red blocks). The computer initially knows nothing about language and therefore must learn it from scratch through interaction, while the human adapts to the computer’s capabilities. We created a game called SHRDLURN in a blocks world and collected interactions from 100 people playing it. First, we analyze the humans’ strategies, showing that using compositionality and avoiding synonyms correlates positively with task performance. Second, we compare computer strategies, showing that modeling pragmatics on a semantic parsing model accelerates learning for more strategic players. 1 Introduction Wittgenstein (1953) famously said that language derives its meaning from use, and introduced the concept of language games to illustrate the fluidity and purpose-orientedness of language. He described how a builder B and an assistant A can use a primitive language consisting of four words— ‘block’, ‘pillar’, ‘slab’, ‘beam’—to successfully communicate what block to pass from A to B. This is only one such language; many others would also work for accomplishing the cooperative goal. This paper operationalizes and explores the idea of language games in a learning setting, which we call interactive learning through language games Figure 1: The SHRDLURN game: the objective is to transform the start state into the goal state. The human types in an utterance, and the computer (which does not know the goal state) tries to interpret the utterance and perform the corresponding action. The computer initially knows nothing about the language, but through the human’s feedback, learns the human’s language while making progress towards the game goal. (ILLG). In the ILLG setting, the two parties do not initially speak a common language, but nonetheless need to collaboratively accomplish a goal. Specifically, we created a game called SHRDLURN,1 in homage to the seminal work of Winograd (1972). As shown in Figure 1, the objective is to transform a start state into a goal state, but the only action the human can take is entering an utterance. The computer parses the utterance and produces a ranked list of possible interpretations according to its current model. The human scrolls through the list and chooses the intended one, simultaneously advancing the state of the blocks and providing feedback to the computer. Both the human and the computer wish to reach the goal state 1Demo: http://shrdlurn.sidaw.xyz 2368 (only known to the human) with as little scrolling as possible. For the computer to be successful, it has to learn the human’s language quickly over the course of the game, so that the human can accomplish the goal more efficiently. Conversely, the human must also accommodate the computer, at least partially understanding what it can and cannot do. 
We model the computer in the ILLG as a semantic parser (Section 3), which maps natural language utterances (e.g., ‘remove red’) into logical forms (e.g., remove(with(red))). The semantic parser has no seed lexicon and no annotated logical forms, so it just generates many candidate logical forms. Based on the human’s feedback, it performs online gradient updates on the parameters corresponding to simple lexical features. During development, it became evident that while the computer was eventually able to learn the language, it was learning less quickly than one might hope. For example, after learning that ‘remove red’ maps to remove(with(red)), it would think that ‘remove cyan’ also mapped to remove(with(red)), whereas a human would likely use mutual exclusivity to rule out that hypothesis (Markman and Wachtel, 1988). We therefore introduce a pragmatics model in which the computer explicitly reasons about the human, in the spirit of previous work on pragmatics (Golland et al., 2010; Frank and Goodman, 2012; Smith et al., 2013). To make the model suitable for our ILLG setting, we introduce a new online learning algorithm. Empirically, we show that our pragmatic model improves the online accuracy by 8% compared to our best non-pragmatic model on the 10 most successful players (Section 5.3). What is special about the ILLG setting is the real-time nature of learning, in which the human also learns and adapts to the computer. While the human can teach the computer any language— English, Arabic, Polish, a custom programming language—a good human player will choose to use utterances that the computer is more likely to learn quickly. In the parlance of communication theory, the human accommodates the computer (Giles, 2008; Ireland et al., 2011). Using Amazon Mechanical Turk, we collected and analyzed around 10k utterances from 100 games of SHRDLURN. We show that successful players tend to use compositional utterances with a consistent vocabulary and syntax, which matches the inductive biases of the computer (Section 5.2). In addition, through this interaction, many players adapt to the computer by becoming more consistent, more precise, and more concise. On the practical side, natural language systems are often trained once and deployed, and users must live with their imperfections. We believe that studying the ILLG setting will be integral for creating adaptive and customizable systems, especially for resource-poor languages and new domains where starting from close to scratch is unavoidable. 2 Setting We now describe the interactive learning of language games (ILLG) setting formally. There are two players, the human and the computer. The game proceeds through a fixed number of levels. In each level, both players are presented with a starting state s ∈Y, but only the human sees the goal state t ∈Y. (e.g. in SHRDLURN, Y is the set of all configurations of blocks). The human transmits an utterance x (e.g., ‘remove red’) to the computer. The computer then constructs a ranked list of candidate actions Z = [z1, . . . , zK] ⊆Z (e.g., remove(with(red)), add(with(orange)), etc.), where Z is all possible actions. For each zi ∈Z, it computes yi = JziKs, the successor state from executing action zi on state s. The computer returns to the human the ordered list Y = [y1, . . . , yK] of successor states. The human then chooses yi from the list Y (we say the computer is correct if i = 1). The state then updates to s = yi. The level ends when s = t, and the players advance to the next level. 
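To make the setting concrete, the following is a schematic sketch of a single round of the game loop. The state is a toy list of block stacks, the two candidate actions are hand-written Python functions standing in for logical forms (not the actual action space defined below), and the untrained ranker scores every candidate equally.

```python
# One ILLG round: the computer ranks candidate actions, executes each on the
# current state, and the human scrolls to the intended successor state.
def remove_leftmost_red(stacks):
    out = [list(s) for s in stacks]
    for s in out:
        if s and s[-1] == "red":   # leftmost stack whose top block is red
            s.pop()
            break
    return out

def add_orange_everywhere(stacks):
    return [list(s) + ["orange"] for s in stacks]

def uniform_ranker(utterance, action):
    return 0.0                     # untrained model: all candidates tie

def play_round(state, utterance, candidate_actions, ranker):
    ranked = sorted(candidate_actions, key=lambda z: -ranker(utterance, z))
    successors = [z(state) for z in ranked]   # Y = [y_1, ..., y_K]
    return ranked, successors

state = [["red"], ["orange", "red"], ["cyan"]]
actions = [add_orange_everywhere, remove_leftmost_red]
ranked, successors = play_round(state, "remove red", actions, uniform_ranker)
chosen = successors.index([[], ["orange", "red"], ["cyan"]])  # human picks intended y
print("scrolls needed:", chosen)
```

The human's choice (the index `chosen`) is the only supervision the computer receives, and the number of scrolls needed to reach it is the quantity both players want to minimize.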
Since only the human knows the goal state t and only the computer can perform actions, the only way for the two to play the game successfully is for the human to somehow encode the desired action in the utterance x. However, we assume the two players do not have a shared language, so the human needs to pick a language and teach it to the computer. As an additional twist, the human does not know the exact set of actions Z (although they might have some preconception of the computer’s capabilities).2 Finally, the human only sees the outcomes of the computer’s actions, not the actual logical actions themselves. We expect the game to proceed as follows: In the beginning, the computer does not understand 2This is often the case when we try to interact with a new software system or service before reading the manual. 2369 what the human is saying and performs arbitrary actions. As the computer obtains feedback and learns, the two should become more proficient at communicating and thus playing the game. Herein lies our key design principle: language learning should be necessary for the players to achieve good game performance. SHRDLURN. Let us now describe the details of our specific game, SHRDLURN. Each state s ∈Y consists of stacks of colored blocks arranged in a line (Figure 1), where each stack is a vertical column of blocks. The actions Z are defined compositionally via the grammar in Table 1. Each action either adds to or removes from a set of stacks, and a set of stacks is computed via various set operations and selecting by color. For example, the action remove(leftmost(with(red))) removes the top block from the leftmost stack whose topmost block is red. The compositionality of the actions gives the computer non-trivial capabilities. Of course, the human must teach a language to harness those capabilities, while not quite knowing the exact extent of the capabilities. The actual game proceeds according to a curriculum, where the earlier levels only need simpler actions with fewer predicates. We designed SHRDLURN in this way for several reasons. First, visual block manipulations are intuitive and can be easily crowdsourced, and it can be fun as an actual game that people would play. Second, the action space is designed to be compositional, mirroring the structure of natural language. Third, many actions z lead to the same successor state y = JzKs; e.g., the ‘leftmost stack’ might coincide with the ‘stack with red blocks’ for some state s and therefore an action involving either one would result in the same outcome. Since the human only points out the correct y, the computer must grapple with this indirect supervision, a reflection of real language learning. 3 Semantic parsing model Following Zettlemoyer and Collins (2005) and most recent work on semantic parsing, we use a log-linear model over logical forms (actions) z ∈Z given an utterance x: pθ(z | x) ∝exp(θTφ(x, z)), (1) where φ(x, z) ∈Rd is a feature vector and θ ∈Rd is a parameter vector. The denotation y (successor state) is obtained by executing z on a state s; formally, y = JzKs. Features. Our features are n-grams (including skip-grams) conjoined with tree-grams on the logical form side. Specifically, on the utterance side (e.g., ‘stack red on orange’), we use unigrams (‘stack’, ∗, ∗), bigrams (‘red’, ‘on’, ∗), trigrams (‘red’, ‘on’, ‘orange’), and skip-trigrams (‘stack’, ∗, ‘on’). On the logical form side, features corresponds to the predicates in the logical forms and their arguments. 
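As a small illustration of the utterance-side templates just listed (the exact templates used by the system may differ in minor detail; the wildcard symbol and function name are illustrative):

```python
# Utterance-side features: unigrams, bigrams, trigrams, and skip-trigrams
# over the tokens of the utterance, with '*' as a wildcard position.
def utterance_features(tokens, wildcard="*"):
    feats = set()
    for i, w in enumerate(tokens):
        feats.add((w, wildcard, wildcard))                       # unigram
        if i + 1 < len(tokens):
            feats.add((w, tokens[i + 1], wildcard))              # bigram
        if i + 2 < len(tokens):
            feats.add((w, tokens[i + 1], tokens[i + 2]))         # trigram
            feats.add((w, wildcard, tokens[i + 2]))              # skip-trigram
    return feats

for f in sorted(utterance_features("stack red on orange".split())):
    print(f)
```

Each of these utterance-side features is then conjoined with the logical-form-side features, whose tree-gram form is defined next.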
For each predicate h, let h.i be the i-th argument of h. Then, we define tree-gram features ψ(h, d) for predicate h and depth d = 0, 1, 2, 3 recursively as follows: ψ(h, 0) = {h}, ψ(h, d) = {(h, i, ψ(h.i, d −1)) | i = 1, 2, 3}. The set of all features is just the cross product of utterance features and logical form features. For example, if x = ‘enlever tout’ and z = remove(all()), then features include: (‘enlever’, all) (‘tout’, all) (‘enlever’, remove) (‘tout’, remove) (‘enlever’, (remove, 1, all)) (‘tout’, (remove, 1, all)) Note that we do not model an explicit alignment or derivation compositionally connecting the utterance and the logical form, in contrast to most traditional work in semantic parsing (Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Liang et al., 2011; Kwiatkowski et al., 2010; Berant et al., 2013), instead following a looser model of semantics similar to (Pasupat and Liang, 2015). Modeling explicit alignments or derivations is only computationally feasible when we are learning from annotated logical forms or have a seed lexicon, since the number of derivations is much larger than the number of logical forms. In the ILLG setting, neither are available. Generation/parsing. We generate logical forms from smallest to largest using beam search. Specifically, for each size n = 1, . . . , 8, we construct a set of logical forms of size n (with exactly n predicates) by combining logical forms of smaller sizes according to the grammar rules in Table 1. For each n, we keep the 100 logical forms z with the highest score θTφ(x, z) according to the current model θ. Let Z be the set of logical forms on the final beam, which contains logical forms of all sizes n. During training, due to pruning at 2370 Rule Semantics Description Set all() all stacks Color cyan|brown|red|orange primitive color Color →Set with(c) stacks whose top block has color c Set →Set not(s) all stacks except those in s Set →Set leftmost|rightmost(s) leftmost/rightmost stack in s Set Color →Act add(s, c) add block with color c on each stack in s Set →Act remove(s) remove the topmost block of each stack in s Table 1: The formal grammar defining the compositional action space Z for SHRDLURN. We use c to denote a Color, and s to denote a Set. For example, one action that we have in SHRDLURN is: ‘add an orange block to all but the leftmost brown block’ 7→ add(not(leftmost(with(brown))),orange). intermediate sizes, Z is not guaranteed to contain the logical form that obtains the observed state y. To mitigate this effect, we use a curriculum so that only simple actions are needed in the initial levels, giving the human an opportunity to teach the computer about basic terms such as colors first before moving to larger composite actions. The system executes all of the logical forms on the final beam Z, and orders the resulting denotations y by the maximum probability of any logical form that produced it.3 Learning. When the human provides feedback in the form of a particular y, the system forms the following loss function: ℓ(θ, x, y) = −log pθ(y | x, s) + λ||θ||1, (2) pθ(y | x, s) = X z:JzKs=y pθ(z | x). (3) Then it makes a single gradient update using AdaGrad (Duchi et al., 2010), which maintains a perfeature step size. 4 Modeling pragmatics In our initial experience with the semantic parsing model described in Section 3, we found that it was able to learn reasonably well, but lacked a reasoning ability that one finds in human learners. 
To illustrate the point, consider the beginning of a game when θ = 0 in the log-linear model pθ(z | x). Suppose that human utters ‘remove red’ and then identifies 3 We tried ordering based on the sum of the probabilities (which corresponds to marginalizing out the logical form), but this had the degenerate effect of assigning too much probability mass to y being the set of empty stacks, which can result from many actions. zrm-red = remove(with(red)) as the correct logical form. The computer then performs a gradient update on the loss function (2), upweighting features such as (‘remove’, remove) and (‘remove’, red). Next, suppose the human utters ‘remove cyan’. Note that zrm-red will score higher than all other formulas since the (‘remove’, red) feature will fire again. While statistically justified, this behavior fails to meet our intuitive expectations for a smart language learner. Moreover, this behavior is not specific to our model, but applies to any statistical model that simply tries to fit the data without additional prior knowledge about the specific language. While we would not expect the computer to magically guess ‘remove cyan’ 7→ remove(with(cyan)), it should at least push down the probability of zrm-red because zrm-red intuitively is already well-explained by another utterance ‘remove red’. This phenomenon, mutual exclusivity, was studied by Markman and Wachtel (1988). They found that children, during their language acquisition process, reject a second label for an object and treat it instead as a label for a novel object. The pragmatic computer. To model mutual exclusivity formally, we turn to probabilistic models of pragmatics (Golland et al., 2010; Frank and Goodman, 2012; Smith et al., 2013; Goodman and Lassiter, 2015), which operationalize the ideas of Grice (1975). The central idea in these models is to treat language as a cooperative game between a speaker (human) and a listener (computer) as we are doing, but where the listener has an explicit model of the speaker’s strategy, which in turn models the listener. Formally, let S(x | z) be the speaker’s strategy and L(z | x) be the listener’s 2371 zrm-red zrm-cyan z3, z4, . . . pθ(z | x) ‘remove red’ 0.8 0.1 0.1 ‘remove cyan’ 0.6 0.2 0.2 S(x | z) ‘remove red’ 0.57 0.33 0.33 ‘remove cyan’ 0.43 0.67 0.67 L(z | x) ‘remove red’ 0.46 0.27 0.27 ‘remove cyan’ 0.24 0.38 0.38 Table 2: Suppose the computer saw one example of ‘remove red’ 7→zrm-red, and then the human utters ‘remove cyan’. top: the literal listener, pθ(z | x), mistakingly chooses zrm-red over zrm-cyan. middle: the pragmatic speaker, S(x | z), assigns a higher probability to to ‘remove cyan’ given zrm-cyan; bottom: the pragmatic listener, L(z | x) correctly assigns a lower probability to zrm-red where p(z) is uniform. strategy. The speaker takes into account the literal semantic parsing model pθ(z | x) as well as a prior over utterances p(x), while the listener considers the speaker S(x | z) and a prior p(z): S(x | z) ∝(pθ(z | x)p(x))β , (4) L(z | x) ∝S(x | z)p(z), (5) where β ≥1 is a hyperparameter that sharpens the distribution (Smith et al., 2013). The computer would then use L(z | x) to rank candidates rather than pθ. Note that our pragmatic model only affects the ranking of actions returned to the human and does not affect the gradient updates of the model pθ. Let us walk through a simple example to see the effect of modeling pragmatics. Table 2 shows that the literal listener pθ(z | x) assigns high probability to zrm-red for both ‘remove red’ and ‘remove cyan’. 
Assuming a uniform p(x) and β = 1, the pragmatic speaker S(x | z) corresponds to normalizing each column of pθ. Note that if the pragmatic speaker wanted to convey zrm-cyan, there is a decent chance that they would favor ‘remove cyan’. Next, assuming a uniform p(z), the pragmatic listener L(z | x) corresponds to normalizing each row of S(x | z). The result is that conditioned on ‘remove cyan’, zrm-cyan is now more likely than zrm-red, which is the desired effect. The pragmatic listener models the speaker as a cooperative agent who behaves in a way to maximize communicative success. Certain speaker behaviors such as avoiding synonyms (e.g., not ‘delete cardinal’) and using a consistent word ordering (e.g, not ‘red remove’) fall out of the game theory.4 For speakers that do not follow this strategy, our pragmatic model is incorrect, but as we get more data through game play, the literal listener pθ(z | x) will sharpen, so that the literal listener and the pragmatic listener will coincide in the limit. ∀z, C(z) ←0 ∀z, Q(z) ←ϵ repeat receive utterance x from human L(z | x) ∝P(z) Q(z)pθ(z | x)β send human a list Y ranked by L(z | x) receive y ∈Y from human θ ←θ −η∇θℓ(θ, x, y) Q(z) ←Q(z) + pθ(z | x)β C(z) ←C(z) + pθ(z | x, JzKs = y) P(z) ← C(z)+α P z′:C(z′)>0 C(z′)+α  until game ends Algorithm 1: Online learning algorithm that updates the parameters of the semantic parser θ as well as counts C, Q required to perform pragmatic reasoning. Online learning with pragmatics. To implement the pragmatic listener as defined in (5), we need to compute the speaker’s normalization constant P x pθ(z | x)p(x) in order to compute S(x | z) in (4). This requires parsing all utterances x based on pθ(z | x). To avoid this heavy computation in an online setting, we propose Algorithm 1, where some approximations are used for the sake of efficiency. First, to approximate the intractable sum over all utterances x, we only use the examples that are seen to compute the normalization constant P x pθ(z | x)p(x) ≈P i pθ(z | xi). Then, in order to avoid parsing all previous examples again using the current parameters for each new example, we store Q(z) = P i pθi(z | xi)β, where θi is the parameter after the model updates on the ith example xi. While θi is different from the current parameter θ, pθ(z | xi) ≈pθi(z | xi) for the relevant example xi, which is accounted for 4 Of course, synonyms and variable word order occur in real language. We would need a more complex game compared to SHRDLURN to capture this effect. 2372 by both θi and θ. In Algorithm 1, the pragmatic listener L(z | x) can be interpreted as an importance-weighted version of the sharpened literal listener pβ θ , where it is downweighted by Q(z), which reflects which z’s the literal listener prefers, and upweighted by P(z), which is just a smoothed estimate of the actual distribution over logical forms p(z). By construction, Algorithm 1 is the same as (4) except that it uses the normalization constant Q based on stale parameters θi after seeing example, and it uses samples to compute the sum over x. Following (5), we also need p(z), which is estimated by P(z) using add-α smoothing on the counts C(z). Note that Q(z) and C(z) are updated after the model parameters are updated for the current example. Lastly, there is a small complication due to only observing the denotation y and not the logical form z. 
We simply give each consistent logical form {z | JzKs = y} a pseudocount based on the model: C(z) ←C(z) + pθ(z | x, JzKs = y) where pθ(z | x, JzKs = y) ∝exp(θTφ(x, z)) for JzKs = y (0 otherwise). Compared to prior work where the setting is specifically designed to require pragmatic inference, pragmatics arises naturally in ILLG. We think that this form of pragmatics is the most important during learning, and becomes less important if we had more data. Indeed, if we have a lot of data and a small number of possible zs, then L(z|x) ≈pθ(z|x) as P x pθ(z|x)p(x) →p(z) when β = 1.5 However, for semantic parsing, we would not be in this regime even if we have a large amount of training data. In particular, we are nowhere near that regime in SHRDLURN, and most of our utterances / logical forms are seen only once, and the importance of modeling pragmatics remains. 5 Experiments 5.1 Setting Data. Using Amazon Mechanical Turk (AMT), we paid 100 workers 3 dollars each to play SHRDLURN. In total, we have 10223 utterances along with their starting states s. Of these, 8874 utterances are labeled with their denotations y; the rest are unlabeled, since the player can try any utterance without accepting an action. 100 players completed the entire game under identical settings. 5Technically, we also need pθ to be well-specified. We deliberately chose to start from scratch for every worker, so that we can study the diversity of strategies that different people used in a controlled setting. Each game consists of 50 blocks tasks divided into 5 levels of 10 tasks each, in increasing complexity. Each level aims to reach an end goal given a start state. Each game took on average 89 utterances to complete.6 It only took 6 hours to complete these 100 games on AMT and each game took around an hour on average according to AMT’s work time tracker (which does not account for multi-tasking players). The players were provided minimal instructions on the game controls. Importantly, we gave no example utterances in order to avoid biasing their language use. Around 20 players were confused and told us that the instructions were not clear and gave us mostly spam utterances. Fortunately, most players understood the setting and some even enjoyed SHRDLURN as reflected by their optional comments: • That was probably the most fun thing I have ever done on mTurk. • Wow this was one mind bending games [sic]. Metrics. We use the number of scrolls as a measure of game performance for each player. For each example, the number of scrolls is the position in the list Y of the action selected by the player. It was possible to complete this version of SHRDLURN by scrolling (all actions can be found in the first 125 of Y )—22 of the 100 players failed to teach an actual language, and instead finished the game mostly by scrolling. Let us call them spam players, who usually typed single letters, random words, digits, or random phrases (e.g. ‘how are you’). Overall, spam players had to scroll a lot: 21.6 scrolls per utterance versus only 7.4 for the non-spam players. 5.2 Human strategies Some example utterances can be found in Table 3. Most of the players used English, but vary in their adherence to conventions such as use of determiners, plurals, and proper word ordering. 5 players invented their own language, which are more precise, more consistent than general English. One player used Polish, and another used Polish notation (bottom of Table 3). 
6 This number is not 50 because some block tasks need multiple steps and players are also allowed to explore without reaching the goal. 2373 Most successful players (1st–20th) rem cy pos 1, stack or blk pos 4, rem blk pos 2 thru 5, rem blk pos 2 thru 4, stack bn blk pos 1 thru 2, fill bn blk, stack or blk pos 2 thru 6, rem cy blk pos 2 fill rd blk (3.01) remove the brown block, remove all orange blocks, put brown block on orange blocks, put orange blocks on all blocks, put blue block on leftmost blue block in top row (2.78) Remove the center block, Remove the red block, Remove all red blocks, Remove the first orange block, Put a brown block on the first brown block, Add blue block on first blue block (2.72) Average players (21th–50th) reinsert pink, take brown, put in pink, remove two pink from second layer, Add two red to second layer in odd intervals, Add five pink to second layer, Remove one blue and one brown from bottom layer (9.17) remove red, remove 1 red, remove 2 4 orange, add 2 red, add 1 2 3 4 blue, emove 1 3 5 orange, add 2 4 orange, add 2 orange, remove 2 3 brown, add 1 2 3 4 5 red, remove 2 3 4 5 6, remove 2, add 1 2 3 4 6 red (8.37) move second cube, double red with blue, double first red with red, triple second and fourth with orange, add red, remove orange on row two, add blue to column two, add brown on first and third (7.18) Least successful players (51th–) holdleftmost, holdbrown, holdleftmost, blueonblue, brownonblue1, blueonorange, holdblue, holdorange2, blueonred2 , holdends1, holdrightend, hold2, orangeonorangerightmost (14.15) ‘add red cubes on center left, center right, far left and far right’, ‘remove blue blocks on row two column two, row two column four’, remove red blocks in center left and center right on second row (12.6) laugh with me, red blocks with one aqua, aqua red alternate, brown red red orange aqua orange, red brown red brown red brown, space red orange red, second level red space red space red space (14.32) Spam players (∼85th–100) next, hello happy, how are you, move, gold, build goal blocks, 23,house, gabboli, x, run„xav, d, j, xcv, dulicate goal (21.7) Most interesting usu´n br ˛azowe klocki, postaw pomara´nczowy klocek na pierwszym klocku, postaw czerwone klocki na pomara´nczowych, usu´n pomara´nczowe klocki w górnym rz˛edzie rm scat + 1 c, + 1 c, rm sh, + 1 2 4 sh, + 1 c, - 4 o, rm 1 r, + 1 3 o, full fill c, rm o, full fill sh, - 1 3, full fill sh, rm sh, rm r, + 2 3 r, rm o, + 3 sh, + 2 3 sh, rm b, - 1 o, + 2 c, mBROWN,mBLUE,mORANGE RED+ORANGEˆORANGE, BROWN+BROWNm1+BROWNm3, ORANGE +BROWN +ORANGEˆm1+ ORANGEˆm3 + BROWNˆˆ2 + BROWNˆˆ4 Table 3: Example utterances, along with the average number of scrolls for that player in parentheses. Success is measured by the number of scrolls, where the more successful players need less scrolls. 1) The 20 most successful players tend to use consistent and concise language whose semantics is similar to our logical language. 2) Average players tend to be slightly more verbose and inconsistent (left and right), or significantly different from our logical langauge (middle). 3) Reasons for being unsuccessful vary. Left: no tokenization, middle: used a coordinate system and many conjunctions; right: confused in the beginning, and used a language very different from our logical language. Overall, we find that many players adapt in ILLG by becoming more consistent, less verbose, and more precise, even if they used standard English at the beginning. For example, some players became more consistent over time (e.g. 
from using both ‘remove’ and ‘discard’ to only using ‘remove’). In terms of verbosity, removing function words like determiners as the game progresses is a common adaptation. In each of the following examples from different players, we compare an utterance that appeared early in the game to a similar utterance that appeared later: ‘Remove the red ones’ became ‘Remove red.’; ‘add brown on top of red’ became ‘add orange on red’; ‘add red blocks to all red blocks’ became ‘add red to red’; ‘dark red’ became ‘red’; one player used ‘the’ in all of the first 20 utterances, and then never used ‘the’ in the last 75 utterances. Players also vary in precision, ranging from overspecified (e.g. ‘remove the orange cube at the left’, ‘remove red blocks from top row’) to underspecified or requiring context (e.g. ‘change colors’, ‘add one blue’, ‘Build more blocus’, ‘Move the blocks fool’,‘Add two red cubes’). We found that some players became more precise over time, as they gain a better understanding of ILLG. Most players use utterances that actually do not match our logical language in Table 1, even the successful players. In particular, numbers are often used. While some concepts always have the same effect in our blocks world (e.g. ‘first block’ means leftmost), most are different. More concretely, of the top 10 players, 7 used numbers of some form and only 3 players matched our logical language. Some players who did not match the logical language performed quite well neverthe2374 less. One possible explanation is because the action required is somewhat constrained by the logical language and some tokens can have unintended interpretations. For example, the computer can correctly interpret numerical positional references, as long as the player only refers to the leftmost and rightmost positions. So if the player says ‘rem blk pos 4’ and ‘rem blk pos 1’, the computer can interpret ‘pos’ as rightmost and interpret the bigram (‘pos’, ‘1’) as leftmost. On the other hand, players who deviated significantly by describing the desired state declaratively (e.g. ‘red orange red’, ‘246’) rather than using actions, or a coordinate system (e.g. ‘row two column two’) performed poorly. Although players do not have to match our logical language exactly to perform well, being similar is definitely helpful. Compositionality. As far as we can tell, all players used a compositional language; no one invented unrelated words for each action. Interestingly, 3 players did not put spaces between words. Since we assume monomorphemic words separated by spaces, they had to do a lot of scrolling as a result (e.g., 14.15 with utterances like ‘orangeonorangerightmost’). 5.3 Computer strategies We now present quantitative results on how quickly the computer can learn, where our goal is to achieve high accuracy on new utterances as we make just a single pass over the data. The number of scrolls used to evaluate player is sensitive to outliers and not as intuitive as accuracy. Instead, we consider online accuracy, described as follows. Formally, if a player produced T utterances x(j) and labeled them y(j), then online accuracy def = 1 T T X j=1 I h y(j) = Jz(j)Ks(j) i , where z(j) = arg maxz pθ(j−1)(z|x(j)) is the model prediction based on the previous parameter θ(j−1). Note that the online accuracy is defined with respect to the player-reported labels, which only corresponds to the actual accuracy if the player is precise and honest. This is not true for most spam players. Compositionality. 
To study the importance of compositionality, we consider two baselines. First, consider a non-compositional model (mem0.0 0.1 0.2 0.3 0.4 0.5 0.6 full model accuracy 0.0 0.1 0.2 0.3 0.4 0.5 0.6 full+pragmatics accuracy (a) 0.0 0.1 0.2 0.3 0.4 0.5 0.6 half model accuracy 0.0 0.1 0.2 0.3 0.4 0.5 0.6 half+pragmatics accuracy (b) Figure 2: Pragmatics improve online accuracy. In these plots, each marker is a player. red o: players who ranked 1–20 in terms of minimizing number of scrolls, green x: players 20–50; blue +: lower than 50 (includes spam players). Marker sizes correspond to player rank, where better players are depicted with larger markers. 2a: online accuracies with and without pragmatics on the full model; 2b: same for the half model. players ranked by # of scrolls Method top 10 top 20 top 50 all 100 memorize 25.4 24.5 22.5 17.6 half model 38.7 38.4 36.0 27.0 half + prag 43.7 42.7 39.7 29.4 full model 48.6 47.8 44.9 33.3 full + prag 52.8 49.8 45.8 33.8 Table 4: Average online accuracy under various settings. memorize: featurize entire utterance and logical form non-compositionally; half model: featurize the utterances with unigrams, bigrams, and skip-grams but conjoin with the entire logical form; full model: the model described in Section 3; +prag: the models above, with our online pragmatics algorithm described in Section 4. Both compositionality and pragmatics improve accuracy. orize) that just remembers pairs of complete utterance and logical forms. We implement this using indicator features on features (x, z), e.g., (‘remove all the red blocks’, zrm-red), and use a large learning rate. Second, we consider a model (half) that treats utterances compositionally with unigrams, bigrams, and skip-trigrams features, but the logical forms are regarded as non-compositional, so we have features such as (‘remove’, zrm-red), (‘red’, zrm-red), etc. Table 4 shows that the full model (Section 3) significantly outperforms both the memorize and half baselines. The learning rate η = 0.1 is selected via cross validation, and we used α = 1 and β = 3 following Smith et al. (2013). 2375 Pragmatics. Next, we study the effect of pragmatics on online accuracy. Figure 2 shows that modeling pragmatics helps successful players (e.g., top 10 by number of scrolls) who use precise and consistent languages. Interestingly, our pragmatics model did not help and can even hurt the less successful players who are less precise and consistent. This is expected behavior: the pragmatics model assumes that the human is cooperative and behaving rationally. For the bottom half of the players, this assumption is not true, in which case the pragmatics model is not useful. 6 Related Work and Discussion Our work connects with a broad body of work on grounded language, in which language is used in some environment as a means towards some goal. Examples include playing games (Branavan et al., 2009, 2010; Reckman et al., 2010) interacting with robotics (Tellex et al., 2011, 2014), and following instructions (Vogel and Jurafsky, 2010; Chen and Mooney, 2011; Artzi and Zettlemoyer, 2013) Semantic parsing utterances to logical forms, which we leverage, plays an important role in these settings (Kollar et al., 2010; Matuszek et al., 2012; Artzi and Zettlemoyer, 2013). What makes this work unique is our new interactive learning of language games (ILLG) setting, in which a model has to learn a language from scratch through interaction. 
While online gradient descent is frequently used, for example in semantic parsing (Zettlemoyer and Collins, 2007; Chen, 2012), we using it in a truly online setting, taking one pass over the data and measuring online accuracy (Cesa-Bianchi and Lugosi, 2006). To speed up learning, we leverage computational models of pragmatics (Jäger, 2008; Golland et al., 2010; Frank and Goodman, 2012; Smith et al., 2013; Vogel et al., 2013). The main difference is these previous works use pragmatics with a trained base model, whereas we learn the model online. Monroe and Potts (2015) uses learning to improve the pragmatics model. In contrast, we use pragmatics to speed up the learning process by capturing phenomena like mutual exclusivity (Markman and Wachtel, 1988). We also differ from prior work in several details. First, we model pragmatics in the online learning setting where we use an online update for the pragmatics model. Second, unlikely the reference games where pragmatic effects plays an important role by design, SHRDLURN is not specifically designed to require pragmatics. The improvement we get is mainly due to players trying to be consistent in their language use. Finaly, we treat both the utterance and the logical forms as featurized compositional objects. Smith et al. (2013) treats utterances (i.e. words) and logical forms (i.e. objects) as categories; Monroe and Potts (2015) used features, but also over flat categories. Looking forward, we believe that the ILLG setting is worth studying and has important implications for natural language interfaces. Today, these systems are trained once and deployed. If these systems could quickly adapt to user feedback in real-time as in this work, then we might be able to more readily create systems for resource-poor languages and new domains, that are customizable and improve through use. Acknowledgments DARPA Communicating with Computers (CwC) program under ARO prime contract no. W911NF15-1-0462. The first author is supported by a NSERC PGS-D fellowship. In addition, we thank Will Monroe, and Chris Potts for their insightful comments and discussions on pragmatics. Reproducibility All code, data, and experiments for this paper are available on the CodaLab platform: https://worksheets. codalab.org/worksheets/ 0x9fe4d080bac944e9a6bd58478cb05e5e The client side code is here: https://github.com/sidaw/shrdlurn/tree/ acl16-demo and a demo: http://shrdlurn.sidaw.xyz References Y. Artzi and L. Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics (TACL) 1:49–62. J. Berant, A. Chou, R. Frostig, and P. Liang. 2013. Semantic parsing on Freebase from questionanswer pairs. In Empirical Methods in Natural Language Processing (EMNLP). S. Branavan, H. Chen, L. S. Zettlemoyer, and R. Barzilay. 2009. Reinforcement learning for 2376 mapping instructions to actions. In Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP). pages 82–90. S. Branavan, L. Zettlemoyer, and R. Barzilay. 2010. Reading between the lines: Learning to map high-level instructions to commands. In Association for Computational Linguistics (ACL). pages 1268–1277. N. Cesa-Bianchi and G. Lugosi. 2006. Prediction, learning, and games. Cambridge University Press. D. L. Chen. 2012. Fast online lexicon learning for grounded language acquisition. In Association for Computational Linguistics (ACL). D. L. Chen and R. J. Mooney. 2011. 
Learning to interpret natural language navigation instructions from observations. In Association for the Advancement of Artificial Intelligence (AAAI). pages 859–865. J. Duchi, E. Hazan, and Y. Singer. 2010. Adaptive subgradient methods for online learning and stochastic optimization. In Conference on Learning Theory (COLT). M. Frank and N. D. Goodman. 2012. Predicting pragmatic reasoning in language games. Science 336:998–998. H. Giles. 2008. Communication accommodation theory. Sage Publications, Inc. D. Golland, P. Liang, and D. Klein. 2010. A gametheoretic approach to generating spatial descriptions. In Empirical Methods in Natural Language Processing (EMNLP). N. Goodman and D. Lassiter. 2015. Probabilistic Semantics and Pragmatics: Uncertainty in Language and Thought. The Handbook of Contemporary Semantic Theory, 2nd Edition WileyBlackwell. H. P. Grice. 1975. Logic and conversation. Syntax and semantics 3:41–58. M. E. Ireland, R. B. Slatcher, P. W. Eastwick, L. E. Scissors, E. J. Finkel, and J. W. Pennebaker. 2011. Language style matching predicts relationship initiation and stability. Psychological Science 22(1):39–44. G. Jäger. 2008. Game theory in semantics and pragmatics. Technical report, University of Tübingen. T. Kollar, S. Tellex, D. Roy, and N. Roy. 2010. Grounding verbs of motion in natural language commands to robots. In International Symposium on Experimental Robotics (ISER). T. Kwiatkowski, L. Zettlemoyer, S. Goldwater, and M. Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higherorder unification. In Empirical Methods in Natural Language Processing (EMNLP). pages 1223–1233. P. Liang, M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In Association for Computational Linguistics (ACL). pages 590–599. E. Markman and G. F. Wachtel. 1988. Children’s use of mutual exclusivity to constrain the meanings of words. Cognitive Psychology 20:125– 157. C. Matuszek, N. FitzGerald, L. Zettlemoyer, L. Bo, and D. Fox. 2012. A joint model of language and perception for grounded attribute learning. In International Conference on Machine Learning (ICML). pages 1671–1678. W. Monroe and C. Potts. 2015. Learning in the Rational Speech Acts model. In Proceedings of 20th Amsterdam Colloquium. P. Pasupat and P. Liang. 2015. Compositional semantic parsing on semi-structured tables. In Association for Computational Linguistics (ACL). H. Reckman, J. Orkin, and D. Roy. 2010. Learning meanings of words and constructions, grounded in a virtual game. In Conference on Natural Language Processing (KONVENS). N. J. Smith, N. D. Goodman, and M. C. Frank. 2013. Learning and using language via recursive pragmatic reasoning about other agents. In Advances in Neural Information Processing Systems (NIPS). S. Tellex, R. Knepper, A. Li, D. Rus, and N. Roy. 2014. Asking for help using inverse semantics. In Robotics: Science and Systems (RSS). S. Tellex, T. Kollar, S. Dickerson, M. R. Walter, A. G. Banerjee, S. J. Teller, and N. Roy. 2011. Understanding natural language commands for robotic navigation and mobile manipulation. In Association for the Advancement of Artificial Intelligence (AAAI). A. Vogel, M. Bodoia, C. Potts, and D. Jurafsky. 2013. Emergence of gricean maxims from 2377 multi-agent decision theory. In North American Association for Computational Linguistics (NAACL). pages 1072–1081. A. Vogel and D. Jurafsky. 2010. Learning to follow navigational directions. In Association for Computational Linguistics (ACL). pages 806– 814. T. Winograd. 1972. 
Understanding Natural Language. Academic Press. L. Wittgenstein. 1953. Philosophical Investigations. Blackwell, Oxford. Y. W. Wong and R. J. Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In Association for Computational Linguistics (ACL). pages 960–967. L. S. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Uncertainty in Artificial Intelligence (UAI). pages 658–666. L. S. Zettlemoyer and M. Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL). pages 678–687. 2378
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2379–2388, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Finding Non-Arbitrary Form-Meaning Systematicity Using String-Metric Learning for Kernel Regression E. Dar´ıo Guti´errez UC San Diego [email protected] Roger Levy MIT [email protected] Benjamin K. Bergen UC San Diego [email protected] Abstract Arbitrariness of the sign—the notion that the forms of words are unrelated to their meanings—is an underlying assumption of many linguistic theories. Two lines of research have recently challenged this assumption, but they produce differing characterizations of non-arbitrariness in language. Behavioral and corpus studies have confirmed the validity of localized form-meaning patterns manifested in limited subsets of the lexicon. Meanwhile, global (lexicon-wide) statistical analyses instead find diffuse form-meaning systematicity across the lexicon as a whole. We bridge the gap with an approach that can detect both local and global formmeaning systematicity in language. In the kernel regression formulation we introduce, form-meaning relationships can be used to predict words’ distributional semantic vectors from their forms. Furthermore, we introduce a novel metric learning algorithm that can learn weighted edit distances that minimize kernel regression error. Our results suggest that the English lexicon exhibits far more global form-meaning systematicity than previously discovered, and that much of this systematicity is focused in localized formmeaning patterns. 1 Introduction Arbitrariness of the sign refers to the notion that the phonetic/orthographic forms of words have no relationship to their meanings (de Saussure, 1916). It is a foundational assumption of many theories of language comprehension, production, acquisition, and evolution. For instance, Hockett's (1960) influential enumeration of the design features of human language ascribes a central role to arbitrariness in enabling the combination and recombination of phonemic units to create new words. Gasser (2004) uses simulations to show that for large vocabularies, arbitrary form-meaning mappings may provide an advantage in acquisition. Meanwhile, modular theories of language comprehension rely upon the duality of patterning to support the independence of the phonetic and semantic aspects of language comprehension (Levelt et al., 1999). Quantifying the extent to which the arbitrariness principle actually holds is important for understanding how language works. Language researchers have long noted exceptions to arbitrariness. Most of these are patterns that occur in some relatively localized subset of the lexicon. These patterns are sub-morphemic because, unlike conventional morphemes, they cannot combine reliably to produce new words. Phonaesthemes (1930) are one example. A phonaestheme is a phonetic cluster that recurs in many words that have related meanings. One notable phonaestheme is the onset gl-, which occurs at the beginning of at least 38 English words relating to vision: glow, glint, glaze, gleam, etc. (Bergen, 2004). At least 46 candidate phonaesthemes have been posited in the linguistics literature, according to a list compiled by Hutchins (1998). Iconicity is another violation of arbitrariness that can lead to non-arbitrary local regularities. Iconicity occurs when the form of a word is transparently motivated by some perceptual aspect of its referent. 
Consequently, when several referents share perceptual features, their associated word-forms would tend to be similar as well (to the extent that they are iconic). For instance, Ohala (1984) conjectures that vowels with high acoustic frequency tend to associate with smaller items while vowels with low acoustic frequency 2379 tend to associate with larger items, due to the experiential link between vocalizer size and frequency. Systematic iconicity is also manifested in sets of onomatopoeic words that echo similar sounds (e.g., clink, clank). Although these exceptions to non-arbitrariness differ, in each case, specific form-meaning relationships emerge in a subset of the lexicon. We will refer to all such specific localized form-meaning patterns as phonosemantic sets. In recent decades, behavioral and corpus studies have empirically confirmed the psychological reality and statistical reliability of many phonosemantic sets that had previously been identified by intuition and observation. Various candidate phonaesthemes have significant effects on reaction times during language processing tasks (Hutchins, 1998; Magnus, 1998; Bergen, 2004). Sagi and Otis (2008) test the statistical significance of the 46 candidates in Hutchins’s (1998) list, and find that 27 of them exhibit more within-category distributional semantic coherence than expected by chance. These results have been replicated using other corpora and distributional semantic models (Abramova et al., 2013). Klink (2000) shows that sound-symbolic attributes such as those proposed by Ohala (1984) are associated with human judgments about nonwords’ semantic attributes, such as smallness or beauty. Using a statistical corpus analysis and WordNet semantic features, Monaghan et al. (2014a) examine a similar hypothesis space of sound-symbolic phonological and semantic attributes, and reach similar conclusions. While these localized studies support the existence of some islands of non-arbitrariness in language, their results do not address how pervasive non-arbitrariness is at the global level—that is, in the lexicon of a language as a whole. After all, some seemingly non-arbitrary local patterns can be expected to emerge merely by chance. How can we measure whether local phonosemantic patterning translates into global phonosemantic systematicity–that is, strong, non-negligible lexiconwide non-arbitrariness? Shillcock et al. (2001) introduce the idea of measuring phonosemantic systematicity by analyzing the correlation between phonological edit distances and distributional semantic distances. In a lexicon of monomorphemic and monosyllabic English words, they find a small but statistically significant correlation between these two distance measures. Monaghan et al. (2014b) elaborate on this methodology, showing that the statistical effect is robust to different choices of form-distance and semantic-distance metrics. They also look at the effect of leaving out each word in the lexicon on the overall correlation measure; from this, they derive a phonosemantic systematicity measure for each word. Interestingly, they find that systematicity is diffusely distributed across the words in English in a pattern indistinguishable from random chance. 
Hence, they conclude that “systematicity in the vocabulary is not a consequence of small clusters of sound symbolism.” This line of work provides a proof-of-concept that it is possible to detect the phonosemantic systematicity of a language, and confirms that English exhibits significant phonosemantic systematicity. Broadly speaking, both the localized tests of individual phonosemantic sets and the global analyses of phonosemantic systematicity challenge the arbitrariness of the sign. However, they attribute responsibility for non-arbitrariness differently. The local methods reveal dozens of specific phonosemantic sets that have strong, measurable behavioral effects and statistical signatures in corpora. Meanwhile, the global methods find small and diffuse systematicity. How can we reconcile this discrepancy? Original Contributions. We attempt to bridge the gap with a new approach that builds off of previous lexicon-wide analyses, making two innovations. The first addresses the concern that the lexicon-wide methods currently in use may not be well suited to finding local regularities such as phonosemantic sets, because they make the assumption that systematicity exists only in the form of a global correlation between distances in formspace and distances in meaning-space. Instead, we model the problem using kernel regression, a nonparametric regression model. Crucially, in kernel regression the prediction for a point is based on the predictions of neighboring points; this enables us to conduct a global analysis while still capturing local, neighborhood effects. As in previous work, we represent word-forms by their orthographic strings, and word-meanings by their semantic vector representations as produced by a distributional semantic vector space model. The goal of the regression is then to learn a mapping from string-valued predictor variables to vectorvalued target variables that minimizes regression 2380 error in the vector space. Conveniently, our model allows us to produce predictions of the semantic vectors associated with both words and nonwords. Previous work may also underestimate systematicity in that it weights all edits (substitutions, insertions, and deletions) equally in determining edit distance. A priori, there is no reason to believe this is the case—indeed, the work on individual phonosemantic sets suggests that some orthographic/phonetic attributes are more important than others for non-arbitrariness. To address this, we introduce String-Metric Learning for Kernel Regression (SMLKR), a metric-learning algorithm that is able to learn a weighted edit distance metric that minimizes the prediction error in kernel regression. We find that SMLKR enables us to recover more systematicity from a lexicon of monomorphemic English words than reported in previous global analyses. Using SMLKR, we propose a new measure of per-word phonosemantic systematicity. Our analyses using this systematicity measure indicate that specific phonosemantic sets do contribute significantly to the global phonosemantic systematicity of English, in keeping with previous local-level analyses. Finally, we evaluate our systematicity measure against human judgments, and find that it accords with raters’ intuitions about what makes a word’s form well suited to its meaning. 2 Background & Related Work 2.1 Previous Approaches to Finding Lexicon-Wide Systematicity Measuring Form, Meaning, and Systematicity. 
To our knowledge, all previous lexicon-level analyses of phonosemantic systematicity have used variations of the method of Shillcock et al. (2001). The inputs for this method are form-meaning tuples (yi, si) for each word i in the lexicon, where yi is the vector representation of the word in a distributional semantic model, and si is the string representation of the word (phonological, phonemic, or orthographic). Semantic distances are measured as cosine distances between the vectors of each pair of words. Shillcock et al. (Shillcock et al., 2001) and Monaghan et al. (Monaghan et al., 2014b) measure form-distances in terms of edit distance between each pair of strings. In addition Monaghan et al. (2014b) and Tamariz (2006) study distance measures based on a selected set binary phonological features, with similar results. Phonosemantic systematicity is then measured as the correlation between all the pairwise semantic distances and all the pairwise string distances. Hypothesis Testing. In this line line of work, statistical significance of the results is assessed using the Mantel test, a permutation test of the correlation between two sets of pairwise distances (Mantel, 1967). The test involves randomly shuffling the assignments of semantic vectors to wordstrings in the lexicon. We can think of each formmeaning shuffle as a member of the set of all possible lexicons. Next, the correlation between the semantic distances and the string distances is computed under each reassignment. An empirical pvalue for the true lexicon is then derived by performing many shufflings, and comparing the correlation coefficients measured under the shuffles to the correlation coefficient measured in the true lexicon. Under the null hypothesis that form-meaning assignments are arbitrary, the probability of observing a form-meaning correlation of at least the magnitude actually observed in the true lexicon is asymptotically equal to the proportion of reassignments that produce greater correlations than the true lexicon. Previous Findings. Shillcock et al. (2001) find a statistically significant correlation between semantic and phonological edit distances in a lexicon of the 1733 most frequent monosyllabic monomorphemic words in the BNC. Tamariz (2008) extends these results to Spanish data, looking only at words with one of three consonantvowel (CV) structures (CVCV, CVCCV, and CVCVCV). (2001), Monaghan et al. (2014b) derive a list of 5138 monomorphemic monosyllabic words and a list of 5604 monomorphemic polysyllabic from the CELEX database (Baayen et al., 1996), and find significant form-meaning correlations in both. 2.2 Kernel Regression In contrast to previous studies, we study formmeaning systematicity using a kernel regression framework. Kernel regression is a nonparametric supervised learning technique that is able to learn highly nonlinear relationships between predictor variables and target variables. Rather than assuming any particular parametric relationship between the predictor and target variables, kernel regression assumes only that the value of the target vari2381 able is a smooth function of the value of the predictors. In other words, given a new point in predictor space, the value of the target at that point can reasonably be estimated by the value of the targets at points that are nearby in the predictor space. In this way, kernel regression is analogous to an exemplar model. We performed kernel regression on our lexicon using the NadarayaWatson estimator (Nadaraya, 1964). 
Given a data set D of vector-valued predictor variables {xi}N i=1, and targets {yi}N i=1, the Nadaraya-Watson estimator of the target for sample i is ˆyi = ˆy(xi) = P j̸=i kijyj P j̸=i kij , (1) where kij is the kernel between point i and point j. A commonly used kernel is the exponential kernel: kij = k(xi, xj) = exp(−d(xi, xj)/h), where d(·, ·) is a distance metric and h is a bandwidth that determines the radius of the effective neighborhood around each point that contributes to its estimate. For our purposes we use the Levenshtein string edit distance metric (Levenshtein, 1966). The Levenshtein edit distance between two strings is the minimum number of edits needed to transform one string into the other, where an edit is defined as the insertion, deletion, or substitution of a single character. Using this edit distance and semantic vectors derived from a distributional semantic model, the Nadaraya-Watson estimator can estimate the position in the semantic vector space for each word in the lexicon. The exponential edit distance kernel has been useful for modeling behavior in many tasks involving word similarity and neighborhood effects; see, for example the Generalized Context Model (Nosofsky, 1986), which has been applied to word identification, recognition, and categorization, to inflectional morphology, and to artificial grammar learning (Bailey and Hahn, 2001). 2.3 Metric Learning for Kernel Regression In kernel regression, the bandwidth h of the kernel function must be fine-tuned by testing out many different bandwidths. Moreover, for many tasks there is no reason to assume that all of the dimensions of a vector-valued predictor are equally important. This is problematic for conventional kernel regression, as the quality of its predictions is wholly reliant on the appropriateness of the given distance metric. Weinberger and Tesauro (2007) introduce metric learning for kernel regression (MLKR), an algorithm that can learn a task-specific Mahalanobis (i.e., weighted Euclidean) distance metric over a real-vector-valued predictor space, in which small distances between two vectors imply similar target values. They note that this metric induces a kernel function whose parameters are set entirely from the data. Specifically, MLKR can learn a weight matrix W for a Mahalanobis metric that optimizes the leave-one out mean squared error of kernel regression (MSE), defined as: L(D) = 1 N N X i=1 L(ˆyi, yi) = 1 N N X i=1 ∥ˆyi −yi∥2 2, where ˆyi is estimated using ˆyj for all i ̸= j, as in Eq. 1. In MLKR, the weighted distance metric is learned using stochastic gradient descent. As an added benefit, MLKR is implicitly able to learn an appropriate kernel bandwidth. 3 String-Metric Learning for Kernel Regression (SMLKR) Our novel contribution is an extension of MLKR to situations where the predictor variables are not real-valued vectors, but strings, and the distance metric we wish to learn is a weighted Levenshtein edit distance. Vector-valued representations of the strings themselves would only approximately preserve edit distance. Fortunately, it turns out that we do not need vector-valued representations of the strings at all. Define the minimum edit-distance path as the smallest-length sequence of edits that is needed to transform one string into another. 
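To make Eq. 1 concrete before formalizing the weighted distance, the following is a minimal leave-one-out sketch of the Nadaraya-Watson estimator with an exponential edit-distance kernel. It is an illustration rather than the released SMLKR implementation: uniform edit costs stand in for the learned per-operation weights w, and the bandwidth h is left as a free parameter.

```python
import numpy as np

def edit_distance(s, t, sub_cost=1.0, indel_cost=1.0):
    # Dynamic-programming Levenshtein distance with per-operation costs;
    # SMLKR replaces these uniform costs with a learned weight per edit type.
    m, n = len(s), len(t)
    d = np.zeros((m + 1, n + 1))
    d[:, 0] = np.arange(m + 1) * indel_cost
    d[0, :] = np.arange(n + 1) * indel_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i, j] = min(
                d[i - 1, j] + indel_cost,      # deletion
                d[i, j - 1] + indel_cost,      # insertion
                d[i - 1, j - 1] + (0.0 if s[i - 1] == t[j - 1] else sub_cost))  # substitution
    return d[m, n]

def nadaraya_watson_loo(strings, vectors, h=1.0):
    """Leave-one-out estimate of each word's semantic vector (Eq. 1)."""
    dist = np.array([[edit_distance(a, b) for b in strings] for a in strings])
    K = np.exp(-dist / h)          # k_ij = exp(-d(s_i, s_j) / h)
    np.fill_diagonal(K, 0.0)       # exclude the word itself (j != i)
    return K @ np.asarray(vectors) / K.sum(axis=1, keepdims=True)
```

Minimizing the squared gap between these leave-one-out estimates and the true semantic vectors, with per-edit weights in place of a single bandwidth, is the objective that SMLKR optimizes.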
Observe that the weighted edit distance between two strings si and sj can be represented as the weighted sum of all the edits that must take place to transform one string into the other along the minimum edit-distance path (Bellet et al., 2012). In turn, these edits can be represented by a vector νij constructed as in Fig 1, while the weights can be represented by a vector w = (w1, ..., wM)T : dWL(si, sj) = M X m=1 wmνijm = wT νij. 2382 Figure 1: Each element in νij (the vector at left) represents a type of edit. The entry νijm represents the number of edits of type m that occur as string si (boot) is transformed into string sj (bee). Each entry of νij corresponds to a particular type of edit operation (e.g., substitution of character a for character b). The value assigned to each entry is the count of the total number of times that the corresponding edit operation must be applied to achieve transformation of string i to string j along the minimum edit-distance path. We note that νij does not admit a unique representation, since there are multiple ways to transform one string to another in the same number of edits, using different edit operations. However, we adopt the convention that some class of edit operations always takes priority over another—e.g., that deletions always occur before substitutions. This then enables us to specify νij uniquely. We also adopt the convention that the weights for edit operations are symmetric—e.g., that the weight for substituting character a for character b is the same as the weight for substituting character b for character a, so we represent every such pair of edit operations by a single entry in νij. As in MLKR, our goal is to minimize the leaveone-out MSE,1 where kij = e−wT νij. The gradient of the regression error for MSE is ∂L(D) ∂w = 2 N N X i=1 (yi −ˆyi)∂ˆyi ∂w where ∂ˆyi ∂w = P j̸=i(yj −ˆyi)T kijνij P j̸=i kij . Using this exact gradient, we can find the edit weights that minimize the loss function. We wish to constrain the weights to be nonnegative, since weighted edit distance only 1We attained similar results minimizing mean cosine error. The gradient for mean cosine error is ∂L(D) ∂w = 1 N N X i=1 (∥ˆyi∥yi −L(yi, ˆyi)ˆyi) ∥ˆyi∥2 ∂ˆyi ∂w . makes sense with nonnegative weights. Thus, to minimize the loss we use the limitedmemory Broyden–Fletcher–Goldfarb–Shanno algorithm for box constraints (L-BFGS-B) (Byrd et al., 1995), a quasi-Newton method that allows bounded optimization. We made a Python implementation of SMLKR available at http://bit.ly/25Hidqg/. 4 Experimental Setup 4.1 Data Lexicon. A principal concern is the possibility that our models may detect morphemes rather than sub-morphemic units. To minimize this concern, we adopted an approach similar to that of Shillcock et al. (2001), of training our model only on monomorphemic words. Monomorphemic words were selected by cross-referencing the morphemic analyses contained in the CELEX lexical database (Baayen et al., 1996) with the morphemic analyses contained in the etymologies of the Oxford English Dictionary Online (http://www.oed.com). Then, we went through the filtered list and removed any remaining polymorphemic words as well as place names, demonyms, spelling variants, and proper nouns. Finally, words that were not among the 40,000 most frequent non-filler word types in the corpus were excluded. The final lexicon was composed of 4,949 word types. Corpus and Semantic Model. 
The corpus we used to train our semantic model is a concatenation of the UKWaC, BNC, and Wikipedia corpora (Ferraresi et al., 2008; BNC Consortium, 2007; Parker et al., 2011). We trained our vectorspace model on this corpus using the Word2Vec (Mikolov et al., 2013), as instantiated in the GENSIM package ( ˇReh˚uˇrek and Sojka, 2010) for Python using default parameters. We produced 100-dimensional word-embedding vectors using the SkipGram algorithm of Word2Vec and normalized the 100-dimensional vector for each word so that its Euclidean norm was equal to 1. 4.2 Training We trained SMLKR on the 100-dimensional Word2Vec embeddings using L-BGFS-B, and placing non-negativity constraints on the weights w. We let SMLKR run until convergence, as de2383 termined by the following criterion: |L(k−1) −L(k)| max(|L(k−1)|, |L(k)|) = ϵ where L(k) is the loss at the kth iteration of learning, and we set ϵ = 2 × 10−8. We randomly initialized the L-BGFS-B algorithm 10 times to avoid poor local minima, and kept the solution with the lowest loss. 5 Experiments 5.1 Model Analysis Weighted Edit Distance Reveals More NonArbitrariness. We first assessed whether the structure found by kernel regression could arise merely by arbitrary, random pairings of form and meaning (i.e., strings and semantic vectors). We adopt a Monte Carlo testing procedure similar to the Mantel test of §2.1. We first randomly shuffled the assignment of the semantic vectors of all the words in the lexicon. We then trained SMLKR on the shuffled lexicon just as we did on the true lexicon. We measured the mean squared error of the SMLKR prediction. Out of 1000 reassignments, none produced a prediction error as small as the prediction error in the true lexicon (i.e., empirical p-value of p < .001). For comparison, we analyzed our corpus using the correlation method of Monaghan et al. (2014b). In our implementation, we measured the correlation between the pairwise cosine distances produced by Word2Vec and pairwise orthographic edit distances for all pairs of words in our lexicon. The correlation between the Word2Vec semantic distances and the orthographic edit distances in our corpus was r = 0.0194, similar to the correlation reported by Monaghan et al. of r = 0.016 between the phoneme edit distances and the semantic distances in the monomorphemic English lexicon. We also looked at the correlation between the weighted edit distances produced by SMLKR and the Word2Vec semantic distances. The correlation between these distances was r = 0.0464; thus, the weighted edit distance captures more than 5.7 times as much variance as the unweighted edit distance. Further, using the estimated semantic vectors produced by the SMLKR model, we can actually produce new estimates of the semantic distances between the words. The correlation between these estimated semantic distances and the true semantic distances was r = 0.1028, revealing much more systematicity than revealed by the simple linear correlation method. The Mantel test with 1,000 permutations produced significant empirical p-values for all correlations (p < .001). Systematicity Not Evenly Distributed Across Lexicon. What could be accounting for the higher degree of systematicity detected with SMLKR? Applying a more expressive model could result in a better fit simply because incidental but inconsequential patterns are being captured. Conversely, SMLKR could be finding phonosemantic sets which the correlation method of Monaghan et al. (Monaghan et al., 2014b) is unable to detect. 
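For concreteness, the Monte Carlo procedure described above amounts to the following sketch. Here train_and_score is a hypothetical stand-in for fitting SMLKR on a lexicon and returning its leave-one-out mean squared error; it is not part of any released package, and the +1 smoothing is a common convention rather than the exact estimate reported above.

```python
import numpy as np

def shuffle_test(strings, vectors, train_and_score, n_shuffles=1000, seed=0):
    """Empirical p-value: how often an arbitrary form-meaning pairing
    fits at least as well as the true lexicon."""
    rng = np.random.default_rng(seed)
    vectors = np.asarray(vectors)
    true_error = train_and_score(strings, vectors)
    at_least_as_good = 0
    for _ in range(n_shuffles):
        # Break the true form-meaning pairing by reassigning semantic vectors.
        shuffled = vectors[rng.permutation(len(vectors))]
        if train_and_score(strings, shuffled) <= true_error:
            at_least_as_good += 1
    return (at_least_as_good + 1) / (n_shuffles + 1)
```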
We investigated further by determining what was driving the better fit produced by SMLKR. Monaghan et al. measure per-word systematicity as the change in the lexicon-wide form-meaning correlation that results from removing the word from the lexicon. The more the correlation decreases from removing the word, the more systematic the word is, according to this measure. They compared the distribution of this systematicity measure across the words in the lexicon to the distribution of systematicity in lexicons with randomly shuffled form-meaning assignments, and found that the null hypothesis that the distributions were identical could not be rejected. From this, they conclude that the observed systematicity of the lexicon is not a consequence only of small pockets of sound symbolism, but is rather a feature of the mappings from sound to meaning across the lexicon as a whole. However, it is possible that their methods may not be sensitive enough to find localized phonosemantic sets. We developed our own measure of per-word systematicity by measuring the per-word regression error of the SMLKR model. We presume words with lower regression errors to be more systematic. A list of the words with the lowest perword regression error in our corpus can be found in Table 1. Notably, many of these words, such as fluff, flutter, and flick, exhibit word beginnings or word endings that have been previously identified as phonaesthemes (Hutchins, 1998; Otis and Sagi, 2008). Others exhibit regular onomatopoeia, such as clang and croak. We decided to investigate the distribution of systematicity across two-letter word-beginnings in our lexicon using a permutation test. The goal of the permutation test is to estimate a p-value for the 2384 SMLKR Correlation Random gurgle emu tunic tingle nexus decay hoop asylum skirmish chink ethic scroll swirl odd silk ladle slime prom flick snare knob wobble scarlet havoc tangle deem irate knuckle balustrade veer glitter envoy wear twig scrape phone fluff essay surgeon rasp ambit hiccup quill echo bowel flutter onus sack whirl exam lens croak pirouette hovel squeal kohl challenge clang chandelier box Table 1: Left: Most systematic words according to SMLKR. Center: Most systematic words according to the leave-one-out correlation method proposed by Monaghan et al. (2014b). Right: Randomly generated list for comparison. likelihood that each set of words sharing a word beginning would exhibit the mean regression error it exhibits, if systematicity is randomly distributed across the lexicon. For each set Sω of words with word-beginning ω, we measured the mean SMLKR regression error of the words in Sω. To get an empirical p-value for each Sω with cardinality greater than 5 (i.e., more than 5 word tokens), we randomly chose 105 sets of words in the lexicon with the same cardinality, and measured the mean SMLKR regression error for each of these random sets. If r of the randomly assembled sets had a lower mean regression error than Sω did, we assign an empirical p-value of r 105 to Sω. A histogram of empirical p-values is in Fig. 2. From the figure, it seems clear that the p-values are not uniformly distributed; instead, an inordinate number of word-beginnings exhibit mean errors that are unlikely to occur if error is distributed arbitrarily across word-beginnings. We can confirm this observation statistically. 
On the assumption that systematicity is arbitrarily distributed across word-beginnings, the empirical p-values of the permutation test should approximately conform to a Unif(0, 1) distribution. We can test this hypothesis using a χ2 test on the negative logarithms of the p-values (Fisher, 1932). Using this test, we reject the hypothesis that the pvalues are uniformly distributed with p < .0001 (χ2 156 = 707.8). The particular word-beginnings Figure 2: Histogram showing distribution of systematicity across two-letter word-beginnings, as measured by permutation-test empirical p-value. Onset p-value fl< 1 × 10−4 sn< 1 × 10−4 sw< 1 × 10−4 tw< 1 × 10−4 gl1 × 10−3 sl1 × 10−3 bu1 × 10−3 mu2 × 10−3 wh2 × 10−3 sc-/sk3 × 10−3 Table 2: Word-beginnings with mean errors lower than predicted by random distribution of errors across lexicon. Bold are among the phonaesthemes identified by Hutchins (1998). Italics were identified by Otis and Sagi (2008). with statistically significant empirical p-values (p < .05 after Benjamini-Hochberg (1995) correction for multiple comparisons) are in Table 2. Eight of these ten features are among the 18 two-letter onsets posited to be phonaesthemes by Hutchins (1998). For comparison, Otis and Sagi (2008) identified eight of Hutchins’s 18 two-letter word-beginning candidate phonaesthemes (and 12 two-letter word-beginnings overall) as statistically significant, though they restricted their hypothesis space to only 50 pre-specified word-beginnings and word-endings. We are able to identify just as many candidate phonaesthemes, but with a much less restricted hypothesis space of candidates (225 rather than the 50 in Otis and Sagi’s analysis) and with a general model not specifically attuned to finding phonaesthemes in particular, but rather systematicity in general. 2385 5.2 Behavioral Evaluation of Systematicity Measure We empirically tested whether the systematicity measure based on SMLKR regression error accords with na¨ıve human judgments about how well-suited a word’s form is to its meaning (its “phonosemantic feeling”) (Stefanowitsch, 2002). We recruited 60 native English-speaking participants through Mechanical Turk, and asked them to judge the phonosemantic feeling of the 60 words in Table 1 on a sliding scale from 1 to 5.2 We used Cronbach’s α to measure inter-annotator reliability at α = 0.96, indicating a high degree of interannotator reliability (Cronbach, 1951; George, 2000). The results showed that the words in the SMLKR list were rated higher for phonosemantic feeling than the words in the Correlation and Random lists. We fit a parametric linear mixedeffects model to the phonosemantic feeling judgments (Baayen et al., 2008), as implemented in the lme4 library for R. As fixed effects, we entered the list identity (SMLKR, Correlation, Random), the word length, and the log frequency of the word in our corpus. Our random effects structure included a random intercept for word, and random subject slopes for all fixed effects, with all correlations allowed (a “maximal” randomeffects structure (Barr et al., 2013)). Including list identity in the maximal mixed-effects model significantly improved model fit (χ2 11 = 126.08, p < 10−6). Post-hoc analysis revealed that the SMLKR list elicited average suitability judgments that were 0.49 points higher than the Random list (p < 10−6) and 0.59 points higher than the Correlation list (p < 10−6). 
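The uniformity test above is Fisher's classical method for combining independent p-values; a generic sketch follows (it is not the authors' code, and empirical p-values of exactly zero would need to be smoothed, for example to 1/(N+1), before taking logarithms).

```python
import numpy as np
from scipy import stats

def fishers_method(p_values):
    """Under H0 (uniform p-values), -2 * sum(log p) ~ chi-squared with 2k d.f."""
    p = np.asarray(p_values, dtype=float)
    statistic = -2.0 * np.log(p).sum()
    dof = 2 * len(p)
    return statistic, dof, stats.chi2.sf(statistic, dof)

# scipy.stats.combine_pvalues(p_values, method="fisher") performs the same test.
```

With two degrees of freedom per combined p-value, the reported chi-squared statistic with 156 degrees of freedom would correspond to combining 78 word-beginning p-values.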
Post-hoc analysis did not find a significant difference in suitability judgments between the Random and Correlation lists (p > .16).3 6 Conclusion In this paper, we proposed SMLKR, a novel algorithm that can learn weighted string edit distances that minimize kernel regression error. We succeed 2Participants were given the following guidance: “Your job is to decide how well-suited each word is to what it means. This is known as the ‘phonosemantic feeling.’ Basically, most people feel like some of the words in their native language sound right, given what they mean.” Full instructions and experiment available at http://goo.gl/Z6Lzlp 3Post hoc analyses were produced by comparing the items in only two of the lists at a time, and fitting the same mixedeffects model as above. in applying this algorithm to the problem of finding form-meaning systematicity in the monomorphemic English lexicon. Our algorithm offers improved global predictions of word-meaning given word-form at the lexicon-wide level. We show that this improvement seems related to localized pockets of form-meaning systematicity such as those previously uncovered in behavioral and corpus analyses. Unlike previous lexicon-wide analyses, we find that form-meaning systematicity is not randomly distributed throughout the English lexicon. Moreover, the measure of systematicity that we compute using SMLKR accords significantly with human raters’ judgments about formmeaning correspondences in English. Future work may investigate to what extent the SMLKR model can predict human intuitions about form-meaning systematicity in language. We do not know, for instance, if our model can predict human semantic judgments of novel words that have never been encountered. This is a question that has received attention in the market research literature, where new brand names are tested for the emotions they elicit (Klink, 2000). We would also like to investigate the degree to which our statistical model predicts the behavioral effects of phonosemantic systematicity during human semantic processing that have been reported in the psycholinguistics literature. Our model makes precise quantitative predictions that should allow us to address these questions. While developing our model on preliminary versions of the monomorphemic lexicon, we noticed that the model detected high degrees of systematicity in words with suffixes such as -ate and -tet (e.g., quintet, quartet). We removed such words in the final analysis since they are polymorphemic, but this observation suggests that our algorithm may have applications in unsupervised morpheme discovery. Finally, we would like to test our model using other representations of word-form and wordmeaning. We chose to use orthographic rather than phonetic representations of words because of the variance in pronunciation present in the dialects of English that are manifested in our corpus. However, it would be interesting to verify our results in a phonological setting, perhaps using a monodialectal corpus. Moreover, previous locallevel analyses suggest that systematicity seems to be concentrated in word-beginnings and word2386 endings. Thus, it may be worthwhile to augment the representation of edit distance in our model by making it context-sensitive. Future work could also test whether a more interpretable meaningspace representation such as that provided by binary WordNet feature vectors reveals patterns of systematicity not found using a distributional semantic space. 
Acknowledgments This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. References Ekaterina Abramova, Raquel Fern´andez, and Federico Sangati. 2013. Automatic labeling of phonesthemic senses. In Proeedings of the 35th Annual Conference of the Cognitive Science Society, volume 35. Cognitive Science Society. R Harald Baayen, Richard Piepenbrock, and L´eon Gulikers. 1996. CELEX2 (CD-ROM). R. Harald Baayen, Douglas J. Davidson, and Douglas M. Bates. 2008. Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59(4):390–412. Todd M. Bailey and Ulrike Hahn. 2001. Determinants of wordlikeness: Phonotactics or lexical neighborhoods? Journal of Memory and Language, 44(4):568–591. Dale J. Barr, Roger Levy, Christoph Scheepers, and Harry J. Tily. 2013. Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3):255–278. Aur´elien Bellet, Amaury Habrard, and Marc Sebban. 2012. Good edit similarity learning by loss minimization. Machine Learning, pages 5–35. Yoav Benjamini and Yosef Hochberg. 1995. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society. Series B (Methodological), pages 289–300. Benjamin K. Bergen. 2004. The psychological reality of phonaesthemes. Language, pages 290–311. BNC Consortium. 2007. British National Corpus, Version 3 BNC XML edition. Richard H. Byrd, Peihuang Lu, Jorge Nocedal, and Ciyou Zhu. 1995. A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing, 16(5):1190–1208. Lee J. Cronbach. 1951. Coefficient alpha and the internal structure of tests. Psychometrika, 16(3):297– 334. Ferdinand de Saussure. 1916. Course in General Linguistics. McGraw-Hill, New York. Adriano Ferraresi, Eros Zanchetta, Marco Baroni, and Silvia Bernardini. 2008. Introducing and evaluating UKWaC, a very large web-derived corpus of English. In Proceedings of the 4th Web as Corpus Workshop (WAC-4) Can we beat Google, pages 47– 54. John R. Firth. 1930. Speech. Benn’s Sixpenny Library, London. R.A. Fisher. 1932. Statistical methods for research workers. Oliver and Boyd, London. Michael Gasser. 2004. The origins of arbitrariness in language. In Proceedings of the 26th Annual Conference of the Cognitive Science Society, volume 26, pages 4–7. Darren George. 2000. SPSS for Windows Step by Step: A Simple Guide and Reference, 11.0 Update (4th ed.). Allyn & Bacon, London. Charles F. Hockett. 1960. The origin of speech. Scientific American, 203:88–96. Sharon Suzanne Hutchins. 1998. The psychological reality, variability, and compositionality of English phonesthemes. Ph.D. thesis, Emory University, Atlanta. Richard R. Klink. 2000. Creating brand names with meaning: The use of sound symbolism. Marketing Letters, 11(1):5–20. Willem J.M. Levelt, Ardi Roelofs, and Antje S. Meyer. 1999. A theory of lexical access in speech production. Behavioral and Brain Sciences, 22(01):1–38. Vladimir I. Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. In Soviet Physics Doklady, volume 10, pages 707–710. Margaret Magnus. 1998. What’s in a Word? Evidence for Phonosemantics. Ph.D. thesis, University of Trondheim, Trondheim, Norway. Nathan Mantel. 1967. The detection of disease clustering and a generalized regression approach. Cancer Research, 27(2 Part 1):209–220. 
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, pages 3111–3119. Curran Associates, Inc. 2387 Padraic Monaghan, Gary Lupyan, and Morten H Christiansen. 2014a. The systematicity of the sign: Modeling activation of semantic attributes from nonwords. In P. Bello, M. Guarini, M. McShane, and B. Scassellati, editors, Proceedings of the 36th Annual Meeting of the Cognitive Science Society, pages 2741–2746, Austin, TX. Cognitive Science Society. Padraic Monaghan, Richard C. Shillcock, Morten H. Christiansen, and Simon Kirby. 2014b. How arbitrary is language? Philosophical Transactions of the Royal Society of London B: Biological Sciences, 369(1651). Elizbar A. Nadaraya. 1964. On estimating regression. Theory of Probability & Its Applications, 9(1):141– 142. Robert M. Nosofsky. 1986. Attention, similarity, and the identification–categorization relationship. Journal of Experimental Psychology: General, 115(1):39. John J. Ohala. 1984. An ethological perspective on common cross-language utilization of f0 of voice. Phonetica, 41(1):1–16. Katya Otis and Eyal Sagi. 2008. Phonaesthemes: A corpus-based analysis. In Proceedings of the 30th Annual Conference of the Cognitive Science Society, pages 65–70. Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2011. English Gigaword Fifth Edition. Radim ˇReh˚uˇrek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45–50, Valletta, Malta, May. ELRA. urlhttp://is.muni.cz/publication/884893/en. Eyal Sagi and Katya Otis. 2008. Semantic glimmers: Phonaesthemes facilitate access to sentence meaning. In 9th Conference on Conceptual Structure, Discourse, & Language (CSDL9). Richard Shillcock, Simon Kirby, Scott McDonald, and Chris Brew. 2001. Filled pauses and their status in the mental lexicon. In ISCA Tutorial and Research Workshop (ITRW) on Disfluency in Spontaneous Speech. Anatol Stefanowitsch. 2002. Sound symbolism in a usage-driven model. Unpublished manuscript, Rice University, Houston, Texas, USA. Monica Tamariz. 2006. Exploring the adaptive structure of the mental lexicon. Ph.D. thesis, University of Edinburgh, Edinburgh. Monica Tamariz. 2008. Exploring systematicity between phonological and context-cooccurrence representations of the mental lexicon. The Mental Lexicon, 3(2):259–278. Killian Q. Weinberger and Gerald Tesauro. 2007. Metric learning for kernel regression. In Eleventh International Conference on Artificial Intelligence and Statistics, pages 608–615. 2388
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2389–2398, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Improving Hypernymy Detection with an Integrated Path-based and Distributional Method Vered Shwartz Yoav Goldberg Ido Dagan Computer Science Department Bar-Ilan University Ramat-Gan, Israel [email protected] [email protected] [email protected] Abstract Detecting hypernymy relations is a key task in NLP, which is addressed in the literature using two complementary approaches. Distributional methods, whose supervised variants are the current best performers, and path-based methods, which received less research attention. We suggest an improved path-based algorithm, in which the dependency paths are encoded using a recurrent neural network, that achieves results comparable to distributional methods. We then extend the approach to integrate both pathbased and distributional signals, significantly improving upon the state-of-the-art on this task. 1 Introduction Hypernymy is an important lexical-semantic relation for NLP tasks. For instance, knowing that Tom Cruise is an actor can help a question answering system answer the question “which actors are involved in Scientology?”. While semantic taxonomies, like WordNet (Fellbaum, 1998), define hypernymy relations between word types, they are limited in scope and domain. Therefore, automated methods have been developed to determine, for a given term-pair (x, y), whether y is an hypernym of x, based on their occurrences in a large corpus. For a couple of decades, this task has been addressed by two types of approaches: distributional, and path-based. In distributional methods, the decision whether y is a hypernym of x is based on the distributional representations of these terms. Lately, with the popularity of word embeddings (Mikolov et al., 2013), most focus has shifted towards supervised distributional methods, in which each (x, y) term-pair is represented using some combination of the terms’ embedding vectors. In contrast to distributional methods, in which the decision is based on the separate contexts of x and y, path-based methods base the decision on the lexico-syntactic paths connecting the joint occurrences of x and y in a corpus. Hearst (1992) identified a small set of frequent paths that indicate hypernymy, e.g. Y such as X. Snow et al. (2004) represented each (x, y) term-pair as the multiset of dependency paths connecting their co-occurrences in a corpus, and trained a classifier to predict hypernymy, based on these features. Using individual paths as features results in a huge, sparse feature space. While some paths are rare, they often consist of certain unimportant components. For instance, “Spelt is a species of wheat” and “Fantasy is a genre of fiction” yield two different paths: X be species of Y and X be genre of Y, while both indicating that X is-a Y. A possible solution is to generalize paths by replacing words along the path with their part-of-speech tags or with wild cards, as done in the PATTY system (Nakashole et al., 2012). Overall, the state-of-the-art path-based methods perform worse than the distributional ones. This stems from a major limitation of path-based methods: they require that the terms of the pair occur together in the corpus, limiting the recall of these methods. 
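The path generalization mentioned above can be sketched as follows; the path encoding and the uniform replacement of a chosen edge subset are simplifying assumptions rather than the exact PATTY procedure:

```python
from itertools import combinations

def generalize(path):
    """Enumerate generalizations of a dependency path, PATTY-style.

    Each edge is a (lemma, POS) pair rendered as "lemma/POS"; every non-empty
    subset of edges may be replaced either by its POS tags alone or by a
    wildcard "*" (a simplification of the scheme of Nakashole et al., 2012).
    """
    variants = {tuple("%s/%s" % (lemma, pos) for lemma, pos in path)}
    n = len(path)
    for k in range(1, n + 1):
        for positions in combinations(range(n), k):
            for mode in ("POS", "*"):
                edges = []
                for i, (lemma, pos) in enumerate(path):
                    if i in positions:
                        edges.append(pos if mode == "POS" else "*")
                    else:
                        edges.append("%s/%s" % (lemma, pos))
                variants.add(tuple(edges))
    return variants

# "X be species of Y" and "X be genre of Y" differ only in one edge; replacing
# that edge by its POS tag yields the shared generalization "X be NOUN of Y".
path = [("be", "VERB"), ("species", "NOUN"), ("of", "ADP")]
for variant in sorted(generalize(path)):
    print(" ".join(variant))
```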
While distributional methods have no such requirement, they are usually less precise in detecting a specific semantic relation like hypernymy, and perform best on detecting broad semantic similarity between terms. Though these approaches seem complementary, there has been rather little work on integrating them (Mirkin et al., 2006; Kaji and Kitsuregawa, 2008). In this paper, we present HypeNET, an integrated path-based and distributional method for hypernymy detection. Inspired by recent progress 2389 in relation classification, we use a long shortterm memory (LSTM) network (Hochreiter and Schmidhuber, 1997) to encode dependency paths. In order to create enough training data for our network, we followed previous methodology of constructing a dataset based on knowledge resources. We first show that our path-based approach, on its own, substantially improves performance over prior path-based methods, yielding performance comparable to state-of-the-art distributional methods. Our analysis suggests that the neural path representation enables better generalizations. While coarse-grained generalizations, such as replacing a word by its POS tag, capture mostly syntactic similarities between paths, HypeNET captures also semantic similarities. We then show that we can easily integrate distributional signals in the network. The integration results confirm that the distributional and pathbased signals indeed provide complementary information, with the combined model yielding an improvement of up to 14 F1 points over each individual model.1 2 Background We introduce the two main approaches for hypernymy detection: distributional (Section 2.1), and path-based (Section 2.2). We then discuss the recent use of recurrent neural networks in the related task of relation classification (Section 2.3). 2.1 Distributional Methods Hypernymy detection is commonly addressed using distributional methods. In these methods, the decision whether y is a hypernym of x is based on the distributional representations of the two terms, i.e., the contexts with which each term occurs separately in the corpus. Earlier methods developed unsupervised measures for hypernymy, starting with symmetric similarity measures (Lin, 1998), and followed by directional measures based on the distributional inclusion hypothesis (Weeds and Weir, 2003; Kotlerman et al., 2010). This hypothesis states that the contexts of a hyponym are expected to be largely included in those of its hypernym. More recent work (Santus et al., 2014; Rimell, 2014) introduce new measures, based on the assumption that the 1Our code and data are available in: https://github.com/vered1986/HypeNET most typical linguistic contexts of a hypernym are less informative than those of its hyponyms. More recently, the focus of the distributional approach shifted to supervised methods. In these methods, the (x, y) term-pair is represented by a feature vector, and a classifier is trained on these vectors to predict hypernymy. Several methods are used to represent term-pairs as a combination of each term’s embeddings vector: concatenation ⃗x⊕⃗y (Baroni et al., 2012), difference ⃗y−⃗x (Roller et al., 2014; Weeds et al., 2014), and dot-product ⃗x · ⃗y. Using neural word embeddings (Mikolov et al., 2013; Pennington et al., 2014), these methods are easy to apply, and show good results (Baroni et al., 2012; Roller et al., 2014). 
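A minimal sketch of this supervised distributional setup, assuming toy embeddings and toy labeled pairs in place of pre-trained vectors and a real hypernymy dataset:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy 4-dimensional "embeddings"; in practice these would be pre-trained
# GloVe / word2vec vectors of dimension 50-300.
emb = {
    "dog":     np.array([0.9, 0.1, 0.0, 0.2]),
    "cat":     np.array([0.8, 0.2, 0.1, 0.1]),
    "animal":  np.array([0.7, 0.7, 0.1, 0.0]),
    "car":     np.array([0.0, 0.1, 0.9, 0.8]),
    "vehicle": np.array([0.1, 0.2, 0.8, 0.9]),
}

def pair_features(x, y, mode="concat"):
    """Represent a candidate (hyponym x, hypernym y) pair from its word vectors."""
    vx, vy = emb[x], emb[y]
    if mode == "concat":   # concatenation (Baroni et al., 2012)
        return np.concatenate([vx, vy])
    if mode == "diff":     # difference (Roller et al., 2014; Weeds et al., 2014)
        return vy - vx
    if mode == "dot":      # scalar similarity feature
        return np.array([vx @ vy])
    raise ValueError(mode)

# Toy training data: (x, y, is_hypernym) triples.
train = [("dog", "animal", 1), ("cat", "animal", 1),
         ("car", "vehicle", 1), ("animal", "dog", 0),
         ("dog", "vehicle", 0), ("cat", "car", 0)]

X = np.stack([pair_features(x, y) for x, y, _ in train])
labels = np.array([label for _, _, label in train])

clf = LogisticRegression().fit(X, labels)
print(clf.predict([pair_features("cat", "vehicle")]))  # hopefully [0]
```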
2.2 Path-based Methods A different approach to detecting hypernymy between a pair of terms (x, y) considers the lexicosyntactic paths that connect the joint occurrences of x and y in a large corpus. Automatic acquisition of hypernyms from free text, based on such paths, was first proposed by Hearst (1992), who identified a small set of lexico-syntactic paths that indicate hypernymy relations (e.g. Y such as X, X and other Y). In a later work, Snow et al. (2004) learned to detect hypernymy. Rather than searching for specific paths that indicate hypernymy, they represent each (x, y) term-pair as the multiset of all dependency paths that connect x and y in the corpus, and train a logistic regression classifier to predict whether y is a hypernym of x, based on these paths. Paths that indicate hypernymy are those that were assigned high weights by the classifier. The paths identified by this method were shown to subsume those found by Hearst (1992), yielding improved performance. Variations of Snow et al.’s (2004) method were later used in tasks such as taxonomy construction (Snow et al., 2006; Kozareva and Hovy, 2010; Carlson et al., 2010; Riedel et al., 2013), analogy identification (Turney, 2006), and definition extraction (Borg et al., 2009; Navigli and Velardi, 2010). A major limitation in relying on lexicosyntactic paths is the sparsity of the feature space. Since similar paths may somewhat vary at the lexical level, generalizing such variations into more abstract paths can increase recall. The PATTY algorithm (Nakashole et al., 2012) applied such generalizations for the purpose of acquiring a taxon2390 parrot is a bird NOUN VERB DET NOUN NSUBJ ATTR DET Figure 1: An example dependency tree of the sentence “parrot is a bird”, with x=parrot and y=bird, represented in our notation as X/NOUN/nsubj/< be/VERB/ROOT/Y/NOUN/attr/>. omy of term relations from free text. For each path, they added generalized versions in which a subset of words along the path were replaced by either their POS tags, their ontological types or wild-cards. This generalization increased recall while maintaining the same level of precision. 2.3 RNNs for Relation Classification Relation classification is a related task whose goal is to classify the relation that is expressed between two target terms in a given sentence to one of predefined relation classes. To illustrate, consider the following sentence, from the SemEval-2010 relation classification task dataset (Hendrickx et al., 2009): “The [apples]e1 are in the [basket]e2”. Here, the relation expressed between the target entities is Content −Container(e1, e2). The shortest dependency paths between the target entities were shown to be informative for this task (Fundel et al., 2007). Recently, deep learning techniques showed good performance in capturing the indicative information in such paths. In particular, several papers show improved performance using recurrent neural networks (RNN) that process a dependency path edge-by-edge. Xu et al. (2015; 2016) apply a separate long shortterm memory (LSTM) network to each sequence of words, POS tags, dependency labels and WordNet hypernyms along the path. A max-pooling layer on the LSTM outputs is used as the input of a network that predicts the classification. Other papers suggest incorporating additional network architectures to further improve performance (Nguyen and Grishman, 2015; Liu et al., 2015). 
While relation classification and hypernymy detection are both concerned with identifying semantic relations that hold for pairs of terms, they differ in a major respect. In relation classification the relation should be expressed in the given text, while in hypernymy detection, the goal is to recognize a generic lexical-semantic relation between terms that holds in many contexts. Accordingly, in relation classification a term-pair is represented by a single dependency path, while in hypernymy detection it is represented by the multiset of all dependency paths in which they co-occur in the corpus. 3 LSTM-based Hypernymy Detection We present HypeNET, an LSTM-based method for hypernymy detection. We first focus on improving path representation (Section 3.1), and then integrate distributional signals into our network, resulting in a combined method (Section 3.2). 3.1 Path-based Network Similarly to prior work, we represent each dependency path as a sequence of edges that leads from x to y in the dependency tree.2 Each edge contains the lemma and part-of-speech tag of the source node, the dependency label, and the edge direction between two subsequent nodes. We denote each edge as lemma/POS/dep/dir. See figure 1 for an illustration. Rather than treating an entire dependency path as a single feature, we encode the sequence of edges using a long short-term memory (LSTM) network. The vectors obtained for the different paths of a given (x, y) pair are pooled, and the resulting vector is used for classification. Figure 2 depicts the overall network structure, which is described below. Edge Representation We represent each edge by the concatenation of its components’ vectors: ⃗ve = [⃗vl, ⃗ vpos, ⃗ vdep, ⃗ vdir] where ⃗vl, ⃗ vpos, ⃗ vdep, ⃗ vdir represent the embedding vectors of the lemma, part-of-speech, dependency label and dependency direction (along the path from x to y), respectively. Path Representation For a path p composed of edges e1, ..., ek, the edge vectors ⃗ ve1, ..., ⃗ vek are fed in order to an LSTM encoder, resulting in a vector ⃗op representing the entire path p. The LSTM architecture is effective at capturing temporal patterns in sequences. We expect the training procedure to drive the LSTM encoder to focus on parts of the path that are more informative for the classification task while ignoring others. 2Like Snow et al. (2004), we added for each path, additional paths containing single daughters of x or y not already contained in the path, referred by Snow et al. (2004) as “satellite edges”. This enables including paths like Such Y as X, in which the word “such” is not in the path between x and y. 2391 X/NOUN/nsubj/> be/VERB/ROOT/Y/NOUN/attr/< X/NOUN/dobj/> define/VERB/ROOT/Y/NOUN/pobj/< as/ADP/prep/< Path LSTM Term-pair Classifier ⃗op average pooling ⃗ vwx (x, y) classification (softmax) ⃗ vwy ⃗vxy Embeddings: lemma POS dependency label direction Figure 2: An illustration of term-pair classification. Each term-pair is represented by several paths. Each path is a sequence of edges, and each edge consists of four components: lemma, POS, dependency label and dependency direction. Each edge vector is fed in sequence into the LSTM, resulting in a path embedding vector ⃗op. The averaged path vector becomes the term-pair’s feature vector, used for classification. The dashed ⃗ vwx, ⃗ vwy vectors refer to the integrated network described in Section 3.2. 
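The path-based network of Figure 2 can be sketched in PyTorch as follows (the original implementation used PyCNN; vocabulary sizes, embedding dimensions, and the toy paths below are illustrative assumptions):

```python
import torch
import torch.nn as nn

class PathLSTM(nn.Module):
    """Sketch of the path-based term-pair classifier of Section 3.1 / Figure 2."""
    def __init__(self, n_lemma, n_pos, n_dep, n_dir=2,
                 d_lemma=50, d_pos=4, d_dep=5, d_dir=1, d_hidden=60):
        super().__init__()
        self.lemma = nn.Embedding(n_lemma, d_lemma)
        self.pos = nn.Embedding(n_pos, d_pos)
        self.dep = nn.Embedding(n_dep, d_dep)
        self.dir = nn.Embedding(n_dir, d_dir)
        d_edge = d_lemma + d_pos + d_dep + d_dir
        self.lstm = nn.LSTM(d_edge, d_hidden, batch_first=True)
        self.out = nn.Linear(d_hidden, 2)   # binary hypernymy decision

    def encode_path(self, edges):
        # edges: LongTensor of shape (path_len, 4) holding ids for the
        # (lemma, POS, dependency label, direction) components of each edge.
        e = torch.cat([self.lemma(edges[:, 0]), self.pos(edges[:, 1]),
                       self.dep(edges[:, 2]), self.dir(edges[:, 3])], dim=-1)
        _, (h, _) = self.lstm(e.unsqueeze(0))   # run the edge sequence through the LSTM
        return h[-1].squeeze(0)                 # final hidden state = path vector o_p

    def forward(self, paths, counts):
        # Frequency-weighted average over all paths of one (x, y) pair,
        # followed by a softmax classifier, as depicted in Figure 2.
        vecs = torch.stack([self.encode_path(p) for p in paths])
        w = counts / counts.sum()
        v_xy = (w.unsqueeze(1) * vecs).sum(dim=0)
        return torch.softmax(self.out(v_xy), dim=-1)

def toy_path(length):
    # Random component ids within each embedding's vocabulary range.
    return torch.cat([torch.randint(0, 1000, (length, 1)),   # lemma
                      torch.randint(0, 20, (length, 1)),      # POS
                      torch.randint(0, 40, (length, 1)),      # dependency label
                      torch.randint(0, 2, (length, 1))], 1)   # direction

model = PathLSTM(n_lemma=1000, n_pos=20, n_dep=40)
paths = [toy_path(3), toy_path(2)]
counts = torch.tensor([3.0, 1.0])   # corpus frequencies of the two paths
print(model(paths, counts))         # class distribution over {negative, positive}
```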
Term-Pair Classification Each (x, y) term-pair is represented by the multiset of lexico-syntactic paths that connected x and y in the corpus, denoted as paths(x, y), while the supervision is given for the term pairs. We represent each (x, y) term-pair as the weighted-average of its path vectors, by applying average pooling on its path vectors, as follows: ⃗ vxy = ⃗vpaths(x,y) = P p∈paths(x,y) fp,(x,y)· ⃗op P p∈paths(x,y) fp,(x,y) (1) where fp,(x,y) is the frequency of p in paths(x, y). We then feed this path vector to a single-layer network that performs binary classification to decide whether y is a hypernym of x. c = softmax(W · ⃗ vxy) (2) c is a 2-dimensional vector whose components sum to 1, and we classify a pair as positive if c[1] > 0.5. Implementation Details To train the network, we used PyCNN.3 We minimize the cross entropy loss using gradient-based optimization, with mini-batches of size 10 and the Adam update rule (Kingma and Ba, 2014). Regularization is applied by a dropout on each of the components’ embeddings. We tuned the hyper-parameters (learning rate and dropout rate) on the validation set (see the appendix for the hyper-parameters values). We initialized the lemma embeddings with the pre-trained GloVe word embeddings (Pennington et al., 2014), trained on Wikipedia. We tried both 3https://github.com/clab/cnn the 50-dimensional and 100-dimensional embedding vectors and selected the ones that yield better performance on the validation set.4 The other embeddings, as well as out-of-vocabulary lemmas, are initialized randomly. We update all embedding vectors during training. 3.2 Integrated Network The network presented in Section 3.1 classifies each (x, y) term-pair based on the paths that connect x and y in the corpus. Our goal was to improve upon previous path-based methods for hypernymy detection, and we show in Section 6 that our network indeed outperforms them. Yet, as path-based and distributional methods are considered complementary, we present a simple way to integrate distributional features in the network, yielding improved performance. We extended the network to take into account distributional information on each term. Inspired by the supervised distributional concatenation method (Baroni et al., 2012), we simply concatenate x and y word embeddings to the (x, y) feature vector, redefining ⃗ vxy: ⃗ vxy = [ ⃗ vwx,⃗vpaths(x,y), ⃗ vwy] (3) where ⃗ vwx and ⃗ vwy are x and y’s word embeddings, respectively, and ⃗vpaths(x,y) is the averaged path vector defined in equation 1. This way, each (x, y) pair is represented using both the distributional features of x and y, and their path-based features. 4Higher-dimensional embeddings seem not to improve performance, while hurting the training runtime. 2392 resource relations WordNet instance hypernym, hypernym DBPedia type Wikidata subclass of, instance of Yago subclass of Table 1: Hypernymy relations in each resource. 4 Dataset 4.1 Creating Instances Neural networks typically require a large amount of training data, whereas the existing hypernymy datasets, like BLESS (Baroni and Lenci, 2011), are relatively small. Therefore, we followed the common methodology of creating a dataset using distant supervision from knowledge resources (Snow et al., 2004; Riedel et al., 2013). Following Snow et al. 
(2004), who constructed their dataset based on WordNet hypernymy, and aiming to create a larger dataset, we extract hypernymy relations from several resources: WordNet (Fellbaum, 1998), DBPedia (Auer et al., 2007), Wikidata (Vrandeˇci´c, 2012) and Yago (Suchanek et al., 2007). All instances in our dataset, both positive and negative, are pairs of terms that are directly related in at least one of the resources. These resources contain thousands of relations, some of which indicate hypernymy with varying degrees of certainty. To avoid including questionable relation types, we consider as denoting positive examples only indisputable hypernymy relations (Table 1), which we manually selected from the set of hypernymy indicating relations in Shwartz et al. (2015). Term-pairs related by other relations (including hyponymy), are considered as negative instances. Using related rather than random term-pairs as negative instances tests our method’s ability to distinguish between hypernymy and other kinds of semantic relatedness. We maintain a ratio of 1:4 positive to negative pairs in the dataset. Like Snow et al. (2004), we include only termpairs that have joint occurrences in the corpus, requiring at least two different dependency paths for each pair. 4.2 Random and Lexical Dataset Splits As our primary dataset, we perform standard random splitting, with 70% train, 25% test and 5% validation sets. As pointed out by Levy et al. (2015), supervised distributional lexical inference methods tend train test val all random split 49,475 17,670 3,534 70,679 lexical split 20,335 6,610 1,350 28,295 Table 2: The number of instances in each dataset. to perform “lexical memorization”, i.e., instead of learning a relation between the two terms, they mostly learn an independent property of a single term in the pair: whether it is a “prototypical hypernym” or not. For instance, if the training set contains term-pairs such as (dog, animal), (cat, animal), and (cow, animal), all annotated as positive examples, the algorithm may learn that animal is a prototypical hypernym, classifying any new (x, animal) pair as positive, regardless of the relation between x and animal. Levy et al. (2015) suggested to split the train and test sets such that each will contain a distinct vocabulary (“lexical split”), in order to prevent the model from overfitting by lexical memorization. To investigate such behaviors, we present results also for a lexical split of our dataset. In this case, we split the train, test and validation sets such that each contains a distinct vocabulary. We note that this differs from Levy et al. (2015), who split only the train and the test sets, and dedicated a subset of the train for validation. We chose to deviate from Levy et al. (2015) because we noticed that when the validation set contains terms from the train set, the model is rewarded for lexical memorization when tuning the hyper-parameters, consequently yielding suboptimal performance on the lexically-distinct test set. When each set has a distinct vocabulary, the hyper-parameters are tuned to avoid lexical memorization and are likely to perform better on the test set. We tried to keep roughly the same 70/25/5 ratio in our lexical split.5 The sizes of the two datasets are shown in Table 2. Indeed, training a model on a lexically split dataset may result in a more general model, that can better handle pairs consisting of two unseen terms during inference. 
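A small sketch of such a lexical split, assuming term-pairs are given as (x, y, label) triples; the partitioning heuristic and the toy pairs are illustrative, and pairs whose terms fall into different partitions are discarded, which is why the lexical split in Table 2 is considerably smaller than the random split:

```python
import random

def lexical_split(pairs, ratios=(0.7, 0.25, 0.05), seed=0):
    """Split (x, y, label) pairs so train/test/validation vocabularies are disjoint.

    Pairs whose two terms end up in different vocabulary partitions are dropped,
    mirroring the reduction from ~70k to ~28k instances reported in Table 2.
    """
    vocab = sorted({t for x, y, _ in pairs for t in (x, y)})
    random.Random(seed).shuffle(vocab)
    n = len(vocab)
    cut1 = int(ratios[0] * n)
    cut2 = cut1 + int(ratios[1] * n)
    part = {term: (0 if i < cut1 else (1 if i < cut2 else 2))
            for i, term in enumerate(vocab)}
    splits = ([], [], [])   # train, test, validation
    for x, y, label in pairs:
        if part[x] == part[y]:            # keep only within-partition pairs
            splits[part[x]].append((x, y, label))
    return splits

train, test, val = lexical_split([("dog", "animal", 1), ("cat", "animal", 1),
                                  ("car", "vehicle", 1), ("dog", "vehicle", 0)])
print(len(train), len(test), len(val))
```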
However, we argue that in the common applied scenario, the inference involves an unseen pair (x, y), in which x and/or y have already been observed separately. Models trained on a random split may introduce the model with a term’s “prior probability” of being a hypernym or a hyponym, and this information can be exploited beneficially at inference time. 5The lexical split discards many pairs consisting of crossset terms. 2393 path X/NOUN/dobj/> establish/VERB/ROOT/- as/ADP/prep/< Y/NOUN/pobj/< X/NOUN/dobj/> VERB as/ADP/prep/< Y/NOUN/pobj/< X/NOUN/dobj/> * as/ADP/prep/< Y/NOUN/pobj/< X/NOUN/dobj/> establish/VERB/ROOT/- ADP Y/NOUN/pobj/< X/NOUN/dobj/> establish/VERB/ROOT/- * Y/NOUN/pobj/< Table 3: Example generalizations of X was established as Y. 5 Baselines We compare HypeNET with several state-of-theart methods for hypernymy detection, as described in Section 2: path-based methods (Section 5.1), and distributional methods (Section 5.2). Due to different works using different datasets and corpora, we replicated the baselines rather than comparing to the reported results. We use the Wikipedia dump from May 2015 as the underlying corpus of all the methods, and parse it using spaCy.6 We perform model selection on the validation set to tune the hyper-parameters of each method.7 The best hyper-parameters are reported in the appendix. 5.1 Path-based Methods Snow We follow the original paper, and extract all shortest paths of four edges or less between terms in a dependency tree. Like Snow et al. (2004), we add paths with “satellite edges”, i.e., single words not already contained in the dependency path, which are connected to either X or Y, allowing paths like such Y as X. The number of distinct paths was 324,578. We apply χ2 feature selection to keep only the 100,000 most informative paths and train a logistic regression classifier. Generalization We also compare our method to a baseline that uses generalized dependency paths. Following PATTY’s approach to generalizing paths (Nakashole et al., 2012), we replace edges with their part-of-speech tags as well as with wild cards. We generate the powerset of all possible generalizations, including the original paths. See Table 3 for examples. The number of features after generalization went up to 2,093,220. Similarly to the first baseline, we apply feature selection, this time keeping the 1,000,000 most informative paths, and train a logistic regression classifier over the generalized paths.8 6https://spacy.io/ 7We applied grid search for a range of values, and picked the ones that yield the highest F1 score on the validation set. 8We also tried keeping the 100,000 most informative paths, but the performance was worse. 5.2 Distributional Methods Unsupervised SLQS (Santus et al., 2014) is an entropy-based measure for hypernymy detection, reported to outperform previous state-ofthe-art unsupervised methods (Weeds and Weir, 2003; Kotlerman et al., 2010). The original paper was evaluated on the BLESS dataset (Baroni and Lenci, 2011), which consists of mostly frequent words. Applying the vanilla settings of SLQS on our dataset, that contains also rare terms, resulted in low performance. Therefore, we received assistance from Enrico Santus, who kindly provided the results of SLQS on our dataset after tuning the system as follows. The validation set was used to tune the threshold for classifying a pair as positive, as well as the maximum number of each term’s most associated contexts (N). 
In contrast to the original paper, in which the number of each term’s contexts is fixed to N, in this adaptation it was set to the minimum between the number of contexts with LMI score above zero and N. In addition, the SLQS scores were not multiplied by the cosine similarity scores between terms, and terms were lemmatized prior to computing the SLQS scores, significantly improving recall. As our results suggest, while this method is state-of-the-art for unsupervised hypernymy detection, it is basically designed for classifying specificity level of related terms, rather than hypernymy in particular. Supervised To represent term-pairs with distributional features, we tried several state-of-the-art methods: concatenation ⃗x⊕⃗y (Baroni et al., 2012), difference ⃗y −⃗x (Roller et al., 2014; Weeds et al., 2014), and dot-product ⃗x · ⃗y. We downloaded several pre-trained embeddings (Mikolov et al., 2013; Pennington et al., 2014) of different sizes, and trained a number of classifiers: logistic regression, SVM, and SVM with RBF kernel, which was reported by Levy et al. (2015) to perform best in this setting. We perform model selection on the validation set to select the best vectors, method and regularization factor (see the appendix). 2394 random split lexical split method precision recall F1 precision recall F1 Path-based Snow 0.843 0.452 0.589 0.760 0.438 0.556 Snow + Gen 0.852 0.561 0.676 0.759 0.530 0.624 HypeNET Path-based 0.811 0.716 0.761 0.691 0.632 0.660 Distributional SLQS (Santus et al., 2014) 0.491 0.737 0.589 0.375 0.610 0.464 Best supervised (concatenation) 0.901 0.637 0.746 0.754 0.551 0.637 Combined HypeNET Integrated 0.913 0.890 0.901 0.809 0.617 0.700 Table 4: Performance scores of our method compared to the path-based baselines and the state-of-the-art distributional methods for hypernymy detection, on both variations of the dataset – with lexical and random split to train / test / validation. 6 Results Table 4 displays performance scores of HypeNET and the baselines. HypeNET Path-based is our path-based recurrent neural network model (Section 3.1) and HypeNET Integrated is our combined method (Section 3.2). Comparing the path-based methods shows that generalizing paths improves recall while maintaining similar levels of precision, reassessing the behavior found in Nakashole et al. (2012). HypeNET Path-based outperforms both path-based baselines by a significant improvement in recall and with slightly lower precision. The recall boost is due to better path generalization, as demonstrated in Section 7.1. Regarding distributional methods, the unsupervised SLQS baseline performed slightly worse on our dataset. The low precision stems from its inability to distinguish between hypernyms and meronyms, which are common in our dataset, causing many false positive pairs such as (zabrze, poland) and (kibbutz, israel). We sampled 50 false positive pairs of each dataset split, and found that 38% of the false positive pairs in the random split and 48% of those in the lexical split were holonym-meronym pairs. In accordance with previously reported results, the supervised embedding-based method is the best performing baseline on our dataset as well. HypeNET Path-based performs slightly better, achieving state-of-the-art results. Adding distributional features to our method shows that these two approaches are indeed complementary. 
On both dataset splits, the performance differences between HypeNET Integrated and HypeNET Pathbased, as well as the supervised distributional method, are substantial, and statistically significant with p-value of 1% (paired t-test). We also reassess that indeed supervised distributional methods perform worse on a lexical split (Levy et al., 2015). We further observe a similar reduction when using HypeNET, which is not a result of lexical memorization, but rather stems from over-generalization (Section 7.1). 7 Analysis 7.1 Qualitative Analysis of Learned Paths We analyze HypeNET’s ability to generalize over path structures, by comparing prominent indicative paths which were learned by each of the pathbased methods. We do so by finding high-scoring paths that contributed to the classification of truepositive pairs in the dataset. In the path-based baselines, these are the highest-weighted features as learned by the logistic regression classifier. In the LSTM-based method, it is less straightforward to identify the most indicative paths. We assess the contribution of a certain path p to classification by regarding it as the only path that appeared for the term-pair, and compute its TRUE label score from the class distribution: softmax(W · ⃗ vxy)[1], setting ⃗ vxy = [⃗0, ⃗op,⃗0]. A notable pattern is that Snow’s method learns specific paths, like X is Y from (e.g. Megadeth is an American thrash metal band from Los Angeles). While Snow’s method can only rely on verbatim paths, limiting its recall, the generalized version of Snow often makes coarse generalizations, such as X VERB Y from. Clearly, such a path is too general, and almost any verb assigned to it results in a non-indicative path (e.g. X take Y from). Efforts by the learning method to avoid such generalization, again, lower the recall. HypeNET provides a better midpoint, making finegrained generalizations by learning additional semantically similar paths such as X become Y from and X remain Y from. See table 5 for additional example paths which illustrate these behaviors. We also noticed that while on the random split our model learns a range of specific paths such as X is Y published (learned for e.g. Y=magazine) and X is Y produced (Y=film), in the lexical split it only learns the general X is Y path for these re2395 method path example text Snow X/NOUN/nsubj/> be/VERB/ROOT/- Y/NOUN/attr/< direct/VERB/acl/> Eyeball is a 1975 Italian-Spanish film directed by Umberto Lenzi X/NOUN/nsubj/> be/VERB/ROOT/- Y/NOUN/attr/< publish/VERB/acl/> Allure is a U.S. women’s beauty magazine published monthly Snow + Gen X/NOUN/compound/> NOUN∗be/VERB/ROOT/Y/NOUN/attr/< base/VERB/acl/> Calico Light Weapons Inc. (CLWS) is an American privately held manufacturing company based in Cornelius, Oregon X/NOUN/compound/> NOUN Y/NOUN/compound/< Weston Town Council HypeNET Integrated X/NOUN/nsubj/> be/VERB/ROOT/- Y/NOUN/attr/< (release|direct|produce|write)/VERB/acl/> Blinky is a 1923 American comedy film directed by Edward Sedgwick X/NOUN/compound/> (association|co.|company|corporation| foundation|group|inc.|international |limited|ltd.)/NOUN/nsubj/> be/VERB/ROOT/- Y/NOUN/attr/< ((create|found|headquarter |own|specialize)/VERB/acl/>)? Retalix Ltd. is a software company Table 5: Examples of indicative paths learned by each method, with corresponding true positive term-pairs from the random split test set. Hypernyms are marked red and hyponyms are marked blue. 
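The single-path scoring used in this analysis can be sketched as follows; W and o_p are random placeholders standing in for the trained classifier weights and a learned path vector:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def path_score(W, o_p, d_word=50):
    """TRUE-label probability of a single path, with the word-embedding slots zeroed.

    Corresponds to softmax(W . v_xy)[1] with v_xy = [0, o_p, 0].
    """
    v_xy = np.concatenate([np.zeros(d_word), o_p, np.zeros(d_word)])
    return softmax(W @ v_xy)[1]

rng = np.random.default_rng(0)
d_path = 60
W = rng.normal(size=(2, 50 + d_path + 50))   # placeholder classifier weights
o_p = rng.normal(size=d_path)                # placeholder path embedding
print(path_score(W, o_p))
```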
Relation % synonymy 21.37% hyponymy 29.45% holonymy / meronymy 9.36% hypernymy-like relations 21.03% other relations 18.77% Table 6: Distribution of relations holding between each pair of terms in the resources among false positive pairs. lations. We note that X is Y is a rather “noisy” path, which may occur in ad-hoc contexts without indicating generic hypernymy relations (e.g. chocolate is a big problem in the context of children’s health). While such a model may identify hypernymy relations between unseen terms, based on general paths, it is prone to over-generalization, hurting its performance, as seen in Table 4. As discussed in § 4.2, we suspect that this scenario, in which both terms are unseen, is usually not common enough to justify this limiting training setup. 7.2 Error Analysis False Positives We categorized the false positive pairs on the random split according to the relation holding between each pair of terms in the resources used to construct the dataset. We grouped several semantic relations from different resources to broad categories, e.g. synonym includes also alias and Wikipedia redirection. Table 6 displays the distribution of semantic relations among false positive pairs. More than 20% of the errors stem from confusing synonymy with hypernymy, which are known to be difficult to distinguish. An additional 30% of the term-pairs are reversed hypernym-hyponym pairs (y is a hyponym of x). Examining a sample of these pairs suggests that they are usually near-synonyms, i.e., it is not that clear whether one term is truely more general than the other or not. For instance, fiction is annotated in WordNet as a hypernym of story, while our method classified fiction as its hyponym. A possible future research direction might be to quite simply extend our network to classify term-pairs simultaneously to multiple semantic relations, as in Pavlick et al. (2015). Such a multiclass model can hopefully better distinguish between these similar semantic relations. Another notable category is hypernymy-like relations: these are other relations in the resources that could also be considered as hypernymy, but were annotated as negative due to our restrictive selection of only indisputable hypernymy relations from the resources (see Section 4.1). These include instances like (Goethe, occupation, novelist) and (Homo, subdivisionRanks, species). Lastly, other errors made by the model often correspond to term-pairs that co-occur very few times in the corpus, e.g. xebec, a studio producing Anime, was falsely classified as a hyponym of anime. False Negatives We sampled 50 term-pairs that were falsely annotated as negative, and analyzed the major (overlapping) types of errors (Table 7). Most of these pairs had only few co-occurrences in the corpus. This is often either due to infrequent terms (e.g. cbc.ca), or a rare sense of x in which the hypernymy relation holds (e.g. (night, 2396 Error Type % 1 low statistics 80% 2 infrequent term 36% 3 rare hyponym sense 16% 4 annotation error 8% Table 7: (Overlapping) categories of false negative pairs: (1) x and y co-occurred less than 25 times (average cooccurrences for true positive pairs is 99.7). (2) Either x or y is infrequent. (3) The hypernymy relation holds for a rare sense of x. (4) (x, y) was incorrectly annotated as positive. play) holding for “Night”, a dramatic sketch by Harold Pinter). Such a term-pair may have too few hypernymy-indicating paths, leading to classifying it as negative. 
8 Conclusion We presented HypeNET: a neural-networks-based method for hypernymy detection. First, we focused on improving path representation using LSTM, resulting in a path-based model that performs significantly better than prior path-based methods, and matches the performance of the previously superior distributional methods. In particular, we demonstrated that the increase in recall is a result of generalizing semantically-similar paths, in contrast to prior methods, which either make no generalizations or over-generalize paths. We then extended our network by integrating distributional signals, yielding an improvement of additional 14 F1 points, and demonstrating that the path-based and the distributional approaches are indeed complementary. Finally, our architecture seems straightforwardly applicable for multi-class classification, which, in future work, could be used to classify term-pairs to multiple semantic relations. Acknowledgments We would like to thank Omer Levy for his involvement and assistance in the early stage of this project and Enrico Santus for helping us by computing the results of SLQS (Santus et al., 2014) on our dataset. This work was partially supported by an Intel ICRI-CI grant, the Israel Science Foundation grant 880/12, and the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1). References S¨oren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. Springer. Marco Baroni and Alessandro Lenci. 2011. How we blessed distributional semantic evaluation. In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics, pages 1–10. Marco Baroni, Raffaella Bernardi, Ngoc-Quynh Do, and Chung-chieh Shan. 2012. Entailment above the word level in distributional semantics. In EACL, pages 23–32. Claudia Borg, Mike Rosner, and Gordon Pace. 2009. Evolutionary algorithms for definition extraction. In Proceedings of the 1st Workshop on Definition Extraction, pages 26–32. Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R Hruschka Jr, and Tom M Mitchell. 2010. Toward an architecture for neverending language learning. In AAAI. Christiane Fellbaum. 1998. WordNet. Wiley Online Library. Katrin Fundel, Robert K¨uffner, and Ralf Zimmer. 2007. Relexrelation extraction using dependency parse trees. Bioinformatics, pages 365–371. Marti A Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In ACL, pages 539– 545. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2009. Semeval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. In SemEval, pages 94–99. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, pages 1735–1780. Nobuhiro Kaji and Masaru Kitsuregawa. 2008. Using hidden markov random fields to combine distributional and pattern-based word clustering. In COLING, pages 401–408. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Lili Kotlerman, Ido Dagan, Idan Szpektor, and Maayan Zhitomirsky-Geffet. 2010. Directional distributional similarity for lexical inference. NLE, pages 359–389. Zornitsa Kozareva and Eduard Hovy. 2010. A semi-supervised method to learn and construct taxonomies using the web. In EMNLP, pages 1110– 1118. 
2397 Omer Levy, Steffen Remus, Chris Biemann, and Ido Dagan. 2015. Do supervised distributional methods really learn lexical inference relations. NAACL. Dekang Lin. 1998. An information-theoretic definition of similarity. In ICML, pages 296–304. Yang Liu, Furu Wei, Sujian Li, Heng Ji, Ming Zhou, and Houfeng Wang. 2015. A dependency-based neural network for relation classification. arXiv preprint arXiv:1507.04646. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In NIPS, pages 3111–3119. Shachar Mirkin, Ido Dagan, and Maayan Geffet. 2006. Integrating pattern-based and distributional similarity methods for lexical entailment acquisition. In COLING and ACL, pages 579–586. Ndapandula Nakashole, Gerhard Weikum, and Fabian Suchanek. 2012. Patty: a taxonomy of relational patterns with semantic types. In EMNLP and CoNLL, pages 1135–1145. Roberto Navigli and Paola Velardi. 2010. Learning word-class lattices for definition and hypernym extraction. In ACL, pages 1318–1327. Thien Huu Nguyen and Ralph Grishman. 2015. Combining neural networks and log-linear models to improve relation extraction. arXiv preprint arXiv:1511.05926. Ellie Pavlick, Johan Bos, Malvina Nissim, Charley Beller, Benjamin Van Durme, and Chris CallisonBurch. 2015. Adding semantics to data-driven paraphrasing. In ACL. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543. Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In NAACL. Laura Rimell. 2014. Distributional lexical entailment by topic coherence. In EACL, pages 511–519. Stephen Roller, Katrin Erk, and Gemma Boleda. 2014. Inclusive yet selective: Supervised distributional hypernymy detection. In COLING, pages 1025–1036. Enrico Santus, Alessandro Lenci, Qin Lu, and Sabine Schulte Im Walde. 2014. Chasing hypernyms in vector spaces with entropy. In EACL, pages 38–42. Vered Shwartz, Omer Levy, Ido Dagan, and Jacob Goldberger. 2015. Learning to exploit structured resources for lexical inference. In CoNLL, page 175. Rion Snow, Daniel Jurafsky, and Andrew Y Ng. 2004. Learning syntactic patterns for automatic hypernym discovery. In NIPS. Rion Snow, Daniel Jurafsky, and Andrew Y Ng. 2006. Semantic taxonomy induction from heterogenous evidence. In ACL, pages 801–808. Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In WWW, pages 697–706. Peter D Turney. 2006. Similarity of semantic relations. CL, pages 379–416. Denny Vrandeˇci´c. 2012. Wikidata: A new platform for collaborative data collection. In WWW, pages 1063–1064. Julie Weeds and David Weir. 2003. A general framework for distributional similarity. In EMLP, pages 81–88. Julie Weeds, Daoud Clarke, Jeremy Reffin, David Weir, and Bill Keller. 2014. Learning to distinguish hypernyms and co-hyponyms. In COLING, pages 2249–2259. Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015. Classifying relations via long short term memory networks along shortest dependency paths. In EMNLP. Yan Xu, Ran Jia, Lili Mou, Ge Li, Yunchuan Chen, Yangyang Lu, and Zhi Jin. 2016. Improved relation classification by deep recurrent neural networks with data augmentation. arXiv preprint arXiv:1601.03651. 
Appendix A Best Hyper-parameters

Table 8 displays the chosen hyper-parameters of each method, yielding the highest F1 score on the validation set.

random split:
  Snow             regularization: L2
  Snow + Gen       regularization: L1
  LSTM             embeddings: GloVe-100-Wiki, learning rate: α = 0.001, dropout: d = 0.5
  SLQS             N = 100, threshold = 0.000464
  Best Supervised  method: concatenation, classifier: SVM, embeddings: GloVe-300-Wiki
  LSTM Integrated  embeddings: GloVe-50-Wiki, learning rate: α = 0.001, word dropout: d = 0.3

lexical split:
  Snow             regularization: L2
  Snow + Gen       regularization: L2
  LSTM             embeddings: GloVe-50-Wiki, learning rate: α = 0.001, dropout: d = 0.5
  SLQS             N = 100, threshold = 0.007629
  Best Supervised  method: concatenation, classifier: SVM, embeddings: GloVe-100-Wikipedia
  LSTM Integrated  embeddings: GloVe-50-Wiki, learning rate: α = 0.001, word dropout: d = 0.3

Table 8: The best hyper-parameters in every model.
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2399–2409, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Multimodal Pivots for Image Caption Translation Julian Hitschler and Shigehiko Schamoni Computational Linguistics Heidelberg University 69120 Heidelberg, Germany {hitschler,schamoni}@cl.uni-heidelberg.de Stefan Riezler Computational Linguistics & IWR Heidelberg University 69120 Heidelberg, Germany [email protected] Abstract We present an approach to improve statistical machine translation of image descriptions by multimodal pivots defined in visual space. The key idea is to perform image retrieval over a database of images that are captioned in the target language, and use the captions of the most similar images for crosslingual reranking of translation outputs. Our approach does not depend on the availability of large amounts of in-domain parallel data, but only relies on available large datasets of monolingually captioned images, and on state-ofthe-art convolutional neural networks to compute image similarities. Our experimental evaluation shows improvements of 1 BLEU point over strong baselines. 1 Introduction Multimodal data consisting of images and natural language descriptions (henceforth called captions) are an abundant source of information that has led to a recent surge in research integrating language and vision. Recently, the aspect of multilinguality has been added to multimodal language processing in a shared task at the WMT16 conference.1 There is clearly also a practical demand for multilingual image captions, e.g., automatic translation of descriptions of art works would allow access to digitized art catalogues across language barriers and is thus of social and cultural interest; multilingual product descriptions are of high commercial interest since they would allow to widen e-commerce transactions automatically to international markets. However, while datasets of images and monolingual captions already include millions 1http://www.statmt.org/wmt16/ multimodal-task.html of tuples (Ferraro et al., 2015), the largest multilingual datasets of images and captions known to the authors contain 20,000 (Grubinger et al., 2006) or 30,0002 triples of images with German and English descriptions. In this paper, we want to address the problem of multilingual captioning from the perspective of statistical machine translation (SMT). In contrast to prior work on generating captions directly from images (Kulkarni et al. (2011), Karpathy and FeiFei (2015), Vinyals et al. (2015), inter alia), our goal is to integrate visual information into an SMT pipeline. Visual context provides orthogonal information that is free of the ambiguities of natural language, therefore it serves to disambiguate and to guide the translation process by grounding the translation of a source caption in the accompanying image. Since datasets consisting of source language captions, images, and target language captions are not available in large quantities, we would instead like to utilize large datasets of images and target-side monolingual captions to improve SMT models trained on modest amounts of parallel captions. Let the task of caption translation be defined as follows: For production of a target caption ei of an image i, a system may use as input an image caption for image i in the source language fi, as well as the image i itself. 
The system may safely assume that fi is relevant to i, i.e., the identification of relevant captions for i (Hodosh et al., 2013) is not itself part of the task of caption translation. In contrast to the inference problem of finding ˆe = argmaxe p(e|f) in text-based SMT, multimodal caption translation allows to take into consideration i as well as fi in finding ˆei: ˆei = argmax ei p(ei|fi, i) 2The dataset used at the WMT16 shared task is based on translations of Flickr30K captions (Rashtchian et al., 2010). 2399 In this paper, we approach caption translation by a general crosslingual reranking framework where for a given pair of source caption and image, monolingual captions in the target language are used to rerank the output of the SMT system. We present two approaches to retrieve target language captions for reranking by pivoting on images that are similar to the input image. One approach calculates image similarity based deep convolutional neural network (CNN) representations. Another approach calculates similarity in visual space by comparing manually annotated object categories. We compare the multimodal pivot approaches to reranking approaches that are based on text only, and to standard SMT baselines trained on parallel data. Compared to a strong baseline trained on 29,000 parallel caption data, we find improvements of over 1 BLEU point for reranking based on visual pivots. Notably, our reranking approach does not rely on large amounts of in-domain parallel data which are not available in practical scenarios such as e-commerce localization. However, in such scenarios, monolingual product descriptions are naturally given in large amounts, thus our work is a promising pilot study towards real-world caption translation. 2 Related Work Caption generation from images alone has only recently come into the scope of realistically solvable problems in image processing (Kulkarni et al. (2011), Karpathy and Fei-Fei (2015), Vinyals et al. (2015), inter alia). Recent approaches also employ reranking of image captions by measuring similarity between image and text using deep representations (Fang et al., 2015). The tool of choice in these works are neural networks whose deep representations have greatly increased the quality of feature representations of images, enabling robust and semantically salient analysis of image content. We rely on the CNN framework (Socher et al., 2014; Simonyan and Zisserman, 2015) to solve semantic classification and disambiguation tasks in NLP with the help of supervision signals from visual feedback. However, we consider image captioning as a different task than caption translation since it is not given the information of the source language string. Therefore we do not compare our work to caption generation models. In the area of SMT, W¨aschle and Riezler (2015) presented a framework for integrating a large, indomain, target-side monolingual corpus into machine translation by making use of techniques from crosslingual information retrieval. The intuition behind their approach is to generate one or several translation hypotheses using an SMT system, which act as queries to find matching, semantically similar sentences in the target side corpus. These can in turn be used as templates for refinement of the translation hypotheses, with the overall effect of improving translation quality. Our work can be seen as an extension of this method, with visual similarity feedback as additional constraint on the crosslingual retrieval model. Calixto et al. 
(2012) suggest using images as supplementary context information for statistical machine translation. They cite examples from the news domain where visual context could potentially be helpful in the disambiguation aspect of SMT and discuss possible features and distance metrics for context images, but do not report experiments involving a full SMT pipeline using visual context. In parallel to our work, Elliott et al. (2015) addressed the problem of caption translation from the perspective of neural machine translation.3 Their approach uses a model which is considerably more involved than ours and relies exclusively on the availability of parallel captions as training data. Both approaches crucially rely on neural networks, where they use a visually enriched neural encoder-decoder SMT approach, while we follow a retrieval paradigm for caption translation, using CNNs to compute similarity in visual space. Integration of multimodal information into NLP problems has been another active area of recent research. For example, Silberer and Lapata (2014) show that distributional word embeddings grounded in visual representations outperform competitive baselines on term similarity scoring and word categorization tasks. The orthogonality of visual feedback has previously been exploited in a multilingual setting by Kiela et al. (2015) (relying on previous work by Bergsma and Van Durme (2011)), who induce a bilingual lexicon using term-specific multimodal representations obtained by querying the Google image 3We replicated the results of Elliott et al. (2015) on the IAPR TC-12 data. However, we decided to not include their model as baseline in this paper since we found our hierarchical phrase-based baselines to yield considerably better results on IAPR TC-12 as well as on MS COCO. 2400 Image i Target Hyp. List Nfi Source Caption fi Reranker F(r, Mfi) Source Caption Final Target Caption ei Multimodal Pivot Documents Target Captions Multimodal Pivot Documents Target Captions Multimodal Pivot Documents Mfi Target Captions MT Decoder Target Hyp. List Rfi ^ Interpolated Model Multimodal Retrieval Model S(m, Nfi, i) Figure 1: Overview of model architecture. search engine.4 Funaki and Nakayama (2015) use visual similarity for crosslingual document retrieval in a multimodal and bilingual vector space obtained by generalized canonical correlation analysis, greatly reducing the need for parallel training data. The common element is that CNNbased visual similarity information is used as a “hub” (Funaki and Nakayama, 2015) or pivot connecting corpora in two natural languages which lack direct parallelism, a strategy which we apply to the problem of caption translation. 3 Models 3.1 Overview Following the basic approach set out by W¨aschle and Riezler (2015), we use a crosslingual retrieval model to find sentences in a target language document collection C, and use these to rerank target language translations e of a source caption f. The systems described in our work differ from that of W¨aschle and Riezler (2015) in a number of aspects. Instead of a two-step architecture of coarse-grained and fine-grained retrieval, our system uses relevance scoring functions for retrieval of matches in the document collection C, and for 4https://images.google.com/ reranking of translation candidates that are based on inverse document frequency of terms (Sp¨arck Jones, 1972) and represent variants of the popular TF-IDF relevance measure. A schematic overview of our approach is given in Figure 1. 
It consists of the following components: Input: Source caption fi, image i, target-side collection C of image-captions pairs Translation: Generate unique list Nfi of kn-best translations, generate unique list Rfi of krbest list of translations5 using MT decoder Multimodal retrieval: For list of translations Nfi, find set Mfi of km-most relevant pairs of images and captions in a target-side collection C, using a heuristic relevance scoring function S(m, Nfi, i), m ∈C Crosslingual reranking: Use list Mfi of imagecaption pairs to rerank list of translations Rfi, applying relevance scoring function F(r, Mfi) to all r ∈Rfi Output: Determine best translation hypothesis ˆei by interpolating decoder score dr for a hypothesis r ∈Rfi with its relevance score F(r, Mfi) with weight λ s.t. ˆei = argmax r∈Rfi dr + λ · F(r, Mfi) The central concept is the scoring function S(m, Nfi, i) which defines three variants of target-side retrieval (TSR), all of which make use of the procedure outlined above. In the baseline text-based reranking model (TSR-TXT), we use relevance scoring function STXT . This function is purely text-based and does not make use of multimodal context information (as such, it comes closest to the models used for target-side retrieval in W¨aschle and Riezler (2015)). In the retrieval model enhanced by visual information from a deep convolutional neural network (TSRCNN), the scoring function SCNN incorporates a textual relevance score with visual similarity information extracted from the neural network. Finally, we evaluate these models against a relevance score based on human object-category annotations (TSR-HCA), using the scoring function 5In practice, the first hypothesis list may be reused. We distinguish between the two hypothesis lists Nfi and Rfi for notational clarity since in general, the two hypothesis lists need not be of equal length. 2401 SHCA. This function makes use of the object annotations available for the MS COCO corpus (Lin et al., 2014) to give an indication of the effectiveness of our automatically extracted visual similarity metric. The three models are discussed in detail below. 3.2 Target Side Retrieval Models Text-Based Target Side Retrieval. In the TSRTXT retrieval scenario, a match candidate m ∈C is scored in the following way: STXT (m, Nfi) = Zm X n∈Nfi X wn∈tok(n) X wm∈typ(m) δ(wm, wn)idf(wm), where δ is the Kronecker δ-function, Nfi is the set of the kn-best translation hypotheses for a source caption fi of image i by decoder score, typ(a) is a function yielding the set of types (unique tokens) contained in a caption a,6 tok(a) is a function yielding the tokens of caption a, idf(w) is the inverse document frequency (Sp¨arck Jones, 1972) of term w, and Zm = 1 |typ(m)| is a normalization term introduced in order to avoid biasing the system towards long match candidates containing many low-frequency terms. Term frequencies were computed on monolingual data from Europarl (Koehn, 2005) and the News Commentary and News Discussions English datasets provided for the WMT15 workshop.7 Note that in this model, information from the image i is not used. Multimodal Target Side Retrieval using CNNs. In the TSR-CNN scenario, we supplement the textual target-side TSR model with visual similarity information from a deep convolutional neural network. We formalize this by introduction of the positive-semidefinite distance function v(ix, iy) →[0, ∞) for images ix, iy (smaller values indicating more similar images). 
The relevance scoring function SCNN used in this model 6The choice for per-type scoring of reference captions was primarily driven by performance considerations. Since captions rarely contain repetitions of low-frequency terms, this has very little effect in practice, other than to mitigate the influence of stopwords. 7http://www.statmt.org/wmt15/ translation-task.html takes the following form: SCNN(m, Nfi, i) = ( STXT (m, Nfi)e−bv(im,i), v(im, i) < d 0 otherwise, where im is the image to which the caption m refers and d is a cutoff maximum distance, above which match candidates are considered irrelevant, and b is a weight term which controls the impact of the visual distance score v(im, i) on the overall score.8 Our visual distance measure v was computed using the VGG16 deep convolutional model of Simonyan and Zisserman (2015), which was pretrained on ImageNet (Russakovsky et al., 2014). We extracted feature values for all input and reference images from the penultimate fully-connected layer (fc7) of the model and computed the Euclidean distance between feature vectors of images. If no neighboring images fell within distance d, the text-based retrieval procedure STXT was used as a fallback strategy, which occurred 47 out of 500 times on our test data. Target Side Retrieval by Human Category Annotations. For contrastive purposes, we evaluated a TSR-HCA retrieval model which makes use of the human object category annotations for MS COCO. Each image in the MS COCO corpus is annotated with object polygons classified into 91 categories of common objects. In this scenario, a match candidate m is scored in the following way: SHCA(m, Nfi, i) = δ(cat(im), cat(i))STXT (m, Nfi), where cat(i) returns the set of object categories with which image i is annotated. The amounts to enforcing a strict match between the category annotations of i and the reference image im, thus pre-filtering the STXT scoring to captions for images with strict category match.9 In cases where i was annotated with a unique set of object categories and thus no match candidates with nonzero scores were returned by SHCA, STXT was used as a fallback strategy, which occurred 77 out of 500 times on our test data. 8The value of b = 0.01 was found on development data and kept constant throughout the experiments. 9Attempts to relax this strict matching criterion led to strong performance degradation on the development test set. 2402 3.3 Translation Candidate Re-scoring The relevance score F(r, Mfi) used in the reranking model was computed in the following way for all three models: F(r, Mfi) = ZMfi X m∈Mfi X wm∈typ(m) X wr∈tok(r) δ(wm, wr)idf(wm) with normalization term ZMfi = ( X m∈Mfi |tok(m)|)−1, where r is a translation candidate and Mfi is a list of km-top target side retrieval matches. Because the model should return a score that is reflective of the relevance of r with respect to Mfi, irrespective of the length of Mfi, normalization with respect to the token count of Mfi is necessary. The term ZMfi serves this purpose. 4 Experiments 4.1 Bilingual Image-Caption Data We constructed a German-English parallel dataset based on the MS COCO image corpus (Lin et al., 2014). 1,000 images were selected at random from the 2014 training section10 and, in a second step, one of their five English captions was chosen randomly. This caption was then translated into German by a native German speaker. 
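A minimal sketch of the crosslingual reranking of Sections 3.1 and 3.3: translation hypotheses are scored against retrieved target-side captions with an idf-weighted token overlap and interpolated with the decoder score. The idf table, hypotheses, retrieved captions, and interpolation weight are placeholders:

```python
# Placeholder idf values; in the paper these are estimated on large monolingual
# English corpora (Europarl, News Crawl, News Discussions).
IDF = {"a": 0.1, "man": 2.0, "rides": 4.5, "horse": 5.0, "bicycle": 5.5,
       "on": 0.2, "the": 0.1, "beach": 4.0}

def relevance(hypothesis, retrieved_captions):
    """F(r, M): idf-weighted overlap of hypothesis tokens with the retrieved
    captions, normalized by the total token count of the retrieved set."""
    total_tokens = sum(len(c.split()) for c in retrieved_captions)
    score = 0.0
    for caption in retrieved_captions:
        caption_types = set(caption.split())
        for token in hypothesis.split():
            if token in caption_types:
                score += IDF.get(token, 1.0)
    return score / total_tokens if total_tokens else 0.0

def rerank(hypotheses, retrieved_captions, lam=1.0):
    """Pick the argmax of decoder score + lambda * relevance (final output step)."""
    return max(hypotheses,
               key=lambda h: h[1] + lam * relevance(h[0], retrieved_captions))

k_best = [("a man rides a bicycle on the beach", -4.2),   # (hypothesis, decoder score)
          ("a man rides a horse on the beach", -4.5)]
neighbours = ["a man rides a horse along the beach",      # captions of similar images
              "two people ride horses on a beach"]
# With these placeholder numbers the reranker prefers the "horse" hypothesis
# even though the decoder scored the "bicycle" hypothesis higher.
print(rerank(k_best, neighbours, lam=1.0))
```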
Note that our experiments were performed with German as the source and English as the target language, therefore, our reference data was not produced by a single speaker but reflects the heterogeneity of the MS COCO dataset at large. The data was split into a development set of 250 captions, a development test set of 250 captions for testing work in progress, and a test set of 500 captions. For our retrieval experiments, we used only the images and captions that were not included in the development, development test or test data, a total of 81,822 images with 5 English captions per image. All data was tokenized and converted to lower case using the cdec11 utilities tokenized-anything.pl and lowercase.pl. For the German data, we 10We constructed our parallel dataset using only the training rather than the validation section of MS COCO so as to keep the latter pristine for future work based on this research. 11https://github.com/redpony/cdec Section Images Captions Languages DEV 250 250 DE-EN DEVTEST 250 250 DE-EN TEST 500 500 DE-EN RETRIEVAL (C) 81,822 409,110 EN Table 1: Number of images and sentences in MS COCO image and caption data used in experiments. performed compound-splitting using the method described by Dyer (2009), as implemented by the cdec utility compound-split.pl. Table 1 gives an overview of the dataset. Our parallel development, development test and test data is publicly available.12 4.2 Translation Baselines We compare our approach to two baseline machine translation systems, one trained on out-ofdomain data exclusively and one domain-adapted system. Table 2 gives an overview of the training data for the machine translation systems. Out-of-Domain Baseline. Our baseline SMT framework is hierarchical phrase-based translation using synchronous context free grammars (Chiang, 2007), as implemented by the cdec decoder (Dyer et al., 2010). Data from the Europarl (Koehn, 2005), News Commentary and Common Crawl corpora (Smith et al., 2013) as provided for the WMT15 workshop was used to train the translation model, with German as source and English as target language. Like the retrieval dataset, training, development and test data was tokenized and converted to lower case, using the same cdec tools. Sentences with lengths over 80 words in either the source or the target language were discarded before training. Source text compound splitting was performed using compound-split.pl. Alignments were extracted bidirectionally using the fast-align utility of cdec and symmetrized with the atools utility (also part of cdec) using the grow-diag-final-and symmetrization heuristic. The alignments were then used by the cdec grammar extractor to extract a synchronous context free grammar from the parallel data. 12www.cl.uni-heidelberg.de/decoco/ 2403 Corpus Sentences Languages System Europarl 1,920,209 DE-EN O/I News Commentary 216,190 DE-EN O/I Common Crawl 2,399,123 DE-EN O/I Flickr30k WMT16 29,000 DE-EN I Europarl 2,218,201 EN O/I News Crawl 28,127,448 EN O/I News Discussions 57,803,684 EN O/I Flickr30k WMT16 29,000 EN I Table 2: Parallel and monolingual data used for training machine translation systems. Sentence counts are given for raw data without preprocessing. O/I: both out-of-domain and indomain system, I: in-domain system only. 
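As a small illustration of the data preparation in Section 4.1, the split could be reproduced along the following lines. This is a sketch only: loading the MS COCO caption annotations into the image_to_captions dictionary and the subsequent manual translation step are elided, and the function name is ours:

    import random

    def build_caption_split(image_to_captions, seed=0):
        # image_to_captions maps an MS COCO image id to its five English
        # captions. Sample 1,000 training images, keep one caption each
        # (these are the ones translated into German), split them
        # 250/250/500 into dev/devtest/test, and use all remaining
        # image-caption pairs as the target-side retrieval collection C.
        rng = random.Random(seed)
        sampled_ids = rng.sample(sorted(image_to_captions), 1000)
        sampled = [(i, rng.choice(image_to_captions[i])) for i in sampled_ids]
        dev, devtest, test = sampled[:250], sampled[250:500], sampled[500:]
        chosen = set(sampled_ids)
        collection = {i: caps for i, caps in image_to_captions.items()
                      if i not in chosen}
        return dev, devtest, test, collection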
The target language model was trained on monolingual data from Europarl, as well as the News Crawl and News Discussions English datasets provided for the WMT15 workshop (the same data as was used for estimating term frequencies for the retrieval models) with the KenLM toolkit (Heafield et al., 2013; Heafield, 2011).13 We optimized the parameters of the translation system for translation quality as measured by IBM BLEU (Papineni et al., 2002) using the Margin Infused Relaxed Algorithm (MIRA) (Crammer and Singer, 2003). For tuning the translation models used for extraction of the hypothesis lists for final evaluation, MIRA was run for 20 iterations on the development set, and the best run was chosen for final testing. In-Domain Baseline. We also compared our models to a domain-adapted machine translation system. The domain-adapted system was identical to the out-of-domain system, except that it was supplied with additional parallel training data from the image caption domain. For this purpose, we used 29,000 parallel German-English image captions as provided for the WMT16 shared task on multimodal machine translation. The English captions in this dataset belong to the Flickr30k corpus (Rashtchian et al., 2010) and are very similar to those of the MS COCO corpus. The German captions are expert translations. The English captions were also used as additional training data for the target-side language model. We generated kn- and kr-best lists of translation candidates using this in-domain baseline system. 13https://kheafield.com/code/kenlm/ Model kn km kr λ TSR-TXT 300 500 5 5 · 104 TSR-CNN 300 300 5 70 · 104 TSR-HCA 300 500 5 10 · 104 Table 3: Optimized hyperparameter values used in final evaluation. 4.3 Optimization of TSR Hyperparameters For each of our retrieval models, we performed a step-wise exhaustive search of the hyperparameter space over the four system hyperparameters for IBM BLEU on the development set: The length of the kn-best list the entries of which are used as queries for retrieval; the number of km-bestmatching captions retrieved; the length of the final kr-best list used in reranking; the interpolation weight λ of the relevance score F relative to the translation hypothesis log probability returned by the decoder. The parameter ranges to be explored were determined manually, by examining system output for prototypical examples. Table 3 gives an overview over the hyperparameter values obtained. For TSR-CNN, we initially set the cutoff distance d to 90.0, after manually inspecting sets of nearest neighbors returned for various maximum distance values. After optimization of retrieval parameters, we performed an exhaustive search from d = 80.0 to d = 100.0, with step size 1.0 on the development set, while keeping all other hyperparameters fixed, which confirmed out initial choice of d = 90.0 as the optimal value. Explored parameter spaces were identical for all models and each model was evaluated on the test set using its own optimal configuration of hyperparameters. 4.4 Significance Testing Significance tests on the differences in translation quality were performed using the approximate randomization technique for measuring performance differences of machine translation systems described in Riezler and Maxwell (2005) and implemented by Clark et al. (2011) as part of the Multeval toolkit.14 14https://github.com/jhclark/multeval 2404 System BLEU ↑ pc pt pd po cdec out-dom. 25.5 cdec in-dom. 
29.6 0.00 TSR-TXT 29.7 0.45 0.00 TSR-CNN 30.6 0.04 0.02 0.00 TSR-HCA 30.3 0.42 0.01 0.00 0.00 System METEOR ↑ pc pt pd po cdec out-dom. 31.7 cdec in-dom. 34.0 0.00 TSR-TXT 34.1 0.41 0.00 TSR-CNN 34.7 0.00 0.00 0.00 TSR-HCA 34.4 0.09 0.00 0.00 0.00 System TER ↓ pc pt pd po cdec out-dom. 49.3 cdec in-dom. 46.1 0.00 TSR-TXT 45.8 0.12 0.00 TSR-CNN 45.1 0.03 0.00 0.00 TSR-HCA 45.3 0.34 0.02 0.00 0.00 Table 4: Metric scores for all systems and their significance levels as reported by Multeval. povalues are relative to the cdec out-of-domain baseline, pd-values are relative to the cdec indomain baseline, pt-values are relative to TSRTXT and pc-values are relative to TSR-CNN. Best results are reported in bold face.15 4.5 Experimental Results Table 4 summarizes the results for all models on an unseen test set of 500 captions. Domain adaptation led to a considerable improvement of +4.1 BLEU and large improvements in terms of METEOR and Translation Edit Rate (TER). We found that the target-side retrieval model enhanced with multimodal pivots from a deep convolutional neural network, TSR-CNN and TSR-HCA, consistently outperformed both the domain-adapted cdec baseline, as well as the text-based target side retrieval model TSR-TXT. These models therefore achieve a performance gain which goes beyond the effect of generic domain-adaptation. The gain in performance for TSR-CNN and TSRHCA was significant at p < 0.05 for BLEU, METEOR, and TER. For all evaluation metrics, the difference between TSR-CNN and TSR-HCA was not significant, demonstrating that retrieval using our CNN-derived distance metric could match retrieval based the human object category annotations. 15A baseline for which a random hypothesis was chosen from the top-5 candidates of the in-domain system lies between the other two baseline systems: 27.5 / 33.3 / 47.7 (BLEU / METEOR / TER). a+,f+ a+,f− a−,f+ a−,f− 102 7 15 45 Figure 2: Results of the human pairwise preference ranking experiment, given as the joint distribution of both rankings: a+ denotes preference for TSR-CNN in terms of accuracy, f+ in terms of fluency; a−denotes preference for the in-domain baseline in terms of accuracy, f−in terms of fluency. The text-based retrieval baseline TSR-TXT never significantly outperformed the in-domain cdec baseline, but there were slight nominal improvements in terms of BLEU, METEOR and TER. This finding is actually consistent with W¨aschle and Riezler (2015) who report performance gains for text-based, target side retrieval models only on highly technical, narrow-domain corpora and even report performance degradation on medium-diversity corpora such as Europarl. Our experiments show that it is the addition of visual similarity information by incorporation of multimodal pivots into the image-enhanced models TSR-CNN and TSR-HCA which makes such techniques effective on MS COCO, thus upholding our hypothesis that visual information can be exploited for improvement of caption translation. 4.6 Human Evaluation The in-domain baseline and TSR-CNN differed in their output in 169 out of 500 cases on the test set. These 169 cases were presented to a human judge alongside the German source captions in a double-blinded pairwise preference ranking experiment. The order of presentation was randomized for the two systems. The judge was asked to rank fluency and accuracy of the translations independently. The results are given in Figure 2. Overall, there was a clear preference for the output of TSRCNN. 
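The p-values in Table 4 come from Multeval's approximate randomization procedure. One common formulation of that test, shown here only as an illustrative sketch (the corpus_metric argument stands in for BLEU, METEOR or TER and is not implemented here), is:

    import random

    def approximate_randomization(outputs_a, outputs_b, references,
                                  corpus_metric, trials=10000, seed=0):
        # Randomly swap the two systems' outputs sentence by sentence and
        # count how often the shuffled score difference is at least as
        # large as the observed one (Riezler and Maxwell, 2005).
        rng = random.Random(seed)
        observed = abs(corpus_metric(outputs_a, references)
                       - corpus_metric(outputs_b, references))
        hits = 0
        for _ in range(trials):
            shuffled_a, shuffled_b = [], []
            for a, b in zip(outputs_a, outputs_b):
                if rng.random() < 0.5:
                    a, b = b, a
                shuffled_a.append(a)
                shuffled_b.append(b)
            diff = abs(corpus_metric(shuffled_a, references)
                       - corpus_metric(shuffled_b, references))
            if diff >= observed:
                hits += 1
        return (hits + 1) / (trials + 1)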
2405 4.7 Examples Table 5 shows example translations produced by both cdec baselines, TSR-TXT, TSR-CNN, and TSR-HCA, together with source caption, image, and reference translation. The visual information induced by target side captions of pivot images allows a disambiguation of translation alternatives such as “skirt” versus “rock (music)” for the German “Rock”, “pole” versus “mast” for the German “Masten”, and is able to repair mistranslations such as “foot” instead of “mouth” for the German “Maul”. 5 Conclusions and Further Work We demonstrated that the incorporation of multimodal pivots into a target-side retrieval model improved SMT performance compared to a strong in-domain baseline in terms of BLEU, METEOR and TER on our parallel dataset derived from MS COCO. The gain in performance was comparable between a distance metric based on a deep convolutional network and one based on human object category annotations, demonstrating the effectiveness of the CNN-derived distance measure. Using our approach, SMT can, in certain cases, profit from multimodal context information. Crucially, this is possible without using large amounts of indomain parallel text data, but instead using large amounts of monolingual image captions that are more readily available. Learning semantically informative distance metrics using deep learning techniques is an area under active investigation (Wu et al., 2013; Wang et al., 2014; Wang et al., 2015). Despite the fact that our simple distance metric performed comparably to human object annotations, using such high-level semantic distance metrics for caption translation by multimodal pivots is a promising avenue for further research. The results were achieved on one language pair (German-English) and one corpus (MS COCO) only. As with all retrieval-based methods, generalized statements about the relative performance on corpora of various domains, sizes and qualities are difficult to substantiate. This problem is aggravated in the multimodal case, since the relevance of captions with respect to images varies greatly between different corpora (Hodosh et al., 2013). In future work, we plan to evaluate our approach in more naturalistic settings, such machine translation for captions in online multimedia repositories Image: Source: Eine Person in einem Anzug und Krawatte und einem Rock. cdec out-dom: a person in a suit and tie and a rock . cdec in-dom: a person in a suit and tie and a rock . TSR-TXT: a person in a suit and tie and a rock . TSR-CNN: a person in a suit and tie and a skirt . TSR-HCA: a person in a suit and tie and a rock . Reference: a person wearing a suit and tie and a skirt Image: Source: Ein Masten mit zwei Ampeln f¨ur Autofahrer. cdec out-dom: a mast with two lights for drivers . cdec in-dom: a mast with two lights for drivers . TSR-TXT: a mast with two lights for drivers . TSR-CNN: a pole with two lights for drivers . TSR-HCA: a pole with two lights for drivers . Reference: a pole has two street lights on it for drivers . Image: Source: Ein Hund auf einer Wiese mit einem Frisbee im Maul. cdec out-dom: a dog on a lawn with a frisbee in the foot . cdec in-dom: a dog with a frisbee in a grassy field . TSR-TXT: a dog with a frisbee in a grassy field . TSR-CNN: a dog in a grassy field with a frisbee in its mouth . TSR-HCA: a dog with a frisbee in a grassy field . Reference: a dog in a field with a frisbee in its mouth Table 5: Examples for improved caption translation by multimodal feedback. 
2406 such as Wikimedia Commons16 and digitized art catalogues, as well as e-commerce localization. A further avenue of future research is improving models such as that presented in Elliott et al. (2015) by crucial components of neural MT such as “attention mechanisms”. For example, the attention mechanism of Bahdanau et al. (2015) serves as a soft alignment that helps to guide the translation process by influencing the sequence in which source tokens are translated. A similar mechanism is used in Xu et al. (2015) to decide which part of the image should influence which part of the generated caption. Combining these two types of attention mechanisms in a neural caption translation model is a natural next step in caption translation. While this is beyond the scope of this work, our models should provide an informative baseline against which to evaluate such methods. Acknowledgments This research was supported in part by DFG grant RI-2221/2-1 “Grounding Statistical Machine Translation in Perception and Action”, and by an Amazon Academic Research Award (AARA) “Multimodal Pivots for Low Resource Machine Translation in E-Commerce Localization”. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, California, USA. Shane Bergsma and Benjamin Van Durme. 2011. Learning bilingual lexicons using the visual similarity of labeled web images. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Barcelona, Spain. Iacer Calixto, Te´ofilo de Compos, and Lucia Specia. 2012. Images as context in statistical machine translation. In Proceedings of the Workshop on Vision and Language (VL), Sheffield, England, United Kingdom. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. Jonathan Clark, Chris Dyer, Alon Lavie, and Noah Smith. 2011. Better hypothesis testing for statistical 16https://commons.wikimedia.org/wiki/ Main_Page machine translation: Controlling for optimizer instability. In Proceedings of the Association for Computational Lingustics (ACL), Portland, Oregon, USA. Koby Crammer and Yoram Singer. 2003. Ultraconservative online algorithms for multiclass problems. Journal of Machine Learning Research, 3:951–991. Chris Dyer, Adam Lopez, Juri Ganitkevitch, Johnathan Weese, Ferhan Ture, Phil Blunsom, Hendra Setiawan, Vladimir Eidelman, and Philip Resnik. 2010. cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models. In Proceedings of the Association for Computational Linguistics (ACL), Uppsala, Sweden. Chris Dyer. 2009. Using a maximum entropy model to build segmentation lattices for mt. In Proceedings of Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT), Boulder, Colorado, USA. Desmond Elliott, Stella Frank, and Eva Hasler. 2015. Multi-language image description with neural sequence models. CoRR, abs/1510.04709. Hao Fang, Li Deng, Margaret Mitchell, Saurabh Gupta, Piotr Dollar, John C. Platt, Forrest Iandola, Jianfeng Gao, C. Lawrence Zitnick, Rupesh K. Srivastava, Xiaodeng He, and Geoffrey Zweit. 2015. From captions to visual concepts and back. In In Proceedings of the 28th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA. 
Francis Ferraro, Nasrin Mostafazadeh, Ting-Hao (Kenneth) Huang, Lucy Vanderwende, Jacob Devlin, Michel Galley, and Margaret Mitchell. 2015. A survey of current datasets for vision and language research. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Lisbon, Portugal. Ruka Funaki and Hideki Nakayama. 2015. Imagemediated learning for zero-shot cross-lingual document retrieval. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Lisbon, Portugal. Michael Grubinger, Paul Clough, Henning M¨uller, and Thomas Deselaers. 2006. The IAPR TC-12 benchmark: A new evaluatioin resource for visual information systems. In In Proceedings of LREC, Genova, Italy. Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL), Sofia, Bulgaria. Kenneth Heafield. 2011. KenLM: faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation (WMT), Edinburgh, Scotland, United Kingdom. 2407 Micah Hodosh, Peter Young, and Julia Hockenmaier. 2013. Framing image description as a ranking task: Data, models and evaluation metrics. Journal of Artificial Intelligence Research, 47:853–899. Andrey Karpathy and Li Fei-Fei. 2015. Deep visualsemantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, Massachusetts, USA. Douwe Kiela, Ivan Vuli´c, and Stephen Clark. 2015. Visual bilingual lexicon induction with transferred convnet features. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Lisbon, Portugal. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of the Machine Translation Summit, Phuket, Thailand. Girish Kulkarni, Visruth Premraj, Sagnik Dhar, Siming Li, Yejin Choi, Alexander C Berg, and Tamara L Berg. 2011. Baby talk: Understanding and generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, Colorado, USA. Tsung-Yi Lin, Michael Maire, Serge Belongie, Lubomir D. Bourdev, Ross B. Girshick, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C. Lawrence Zitnick. 2014. Microsoft COCO: common objects in context. Computing Research Repository, abs/1405.0312. Kishore Papineni, Salim Roukos, Todd Ard, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), Philadelphia, Pennsylvania, USA. Cyrus Rashtchian, Peter Young, Micah Hodosh, and Julia Hockenmaier. 2010. Collecting image annotations using amazon’s mechanical turk. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk, Los Angeles, California, USA. Stefan Riezler and John Maxwell. 2005. On some pitfalls in automatic evaluation and significance testing for mt. In Proceedings of the Workshop on Intrinsic and Extrinsic Evaluation Methods for MT and Summarization (MTSE) at the 43rd Annual Meeting of the Association for Computational Linguistics, Ann Arbor, Michigan, USA. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. 
Bernstein, Alexander C. Berg, and Fei-Fei Li. 2014. Imagenet large scale visual recognition challenge. Computing Research Repository, abs/1409.0575. Carina Silberer and Mirella Lapata. 2014. Learning grounded meaning representations with autoencoders. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL), Baltimore, Maryland, USA. Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA. Jason Smith, Herve Saint-Amand, Magdalena Plamada, Philipp Koehn, Chris Callison-Burch, and Adam Lopez. 2013. Dirt cheap web-scale parallel text from the Common Crawl. In Proceedings of the Conference of the Association for Computational Linguistics (ACL), Sofia, Bulgaria. Richard Socher, Andrej Karpathy, Quoc V. Le, Christopher D. Manning, and Andrew Y. Ng. 2014. Grounded compositional semantics for finding and describing images with sentences. Transactions of the Association for Computational Linguistics, 2(1):207–218. Karen Sp¨arck Jones. 1972. A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 28:11–21. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston,Massachusetts, USA. Jiang Wang, Yang Song, Thomas Leung, Chuck Rosenberg, Jingbin Wang, James Philbin, Bo Chen, and Ying Wu. 2014. Learning fine-grained image similarity with deep ranking. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, Ohio, USA. Zhaowen Wang, Jianchao Yang, Zhe Lin, Jonathan Brandt, Shiyu Chang, and Thomas Huang. 2015. Scalable similarity learning using large margin neighborhood embedding. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision, Washington, DC, USA. Katharina W¨aschle and Stefan Riezler. 2015. Integrating a large, monolingual corpus as translation memory into statistical machine translation. In Proceedings of the 18th Annual Conference of the European Association for Machine Translation (EAMT), Antalya, Turkey. Pengcheng Wu, Steven C.H. Hoi, Hao Xia, Peilin Zhao, Dayong Wang, and Chunyan Miao. 2013. Online multimodal deep similarity learning with application to image retrieval. In Proceedings of the 21st ACM International Conference on Multimedia, Barcelona, Spain. 2408 Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the International Conference on Machine Learning (ICML), Lille, France. 2409
2016
227
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2410–2420, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Harnessing Deep Neural Networks with Logic Rules Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, Eric P. Xing School of Computer Science Carnegie Mellon University {zhitingh,xuezhem,liu,epxing}@cs.cmu.edu, [email protected] Abstract Combining deep neural networks with structured logic rules is desirable to harness flexibility and reduce uninterpretability of the neural models. We propose a general framework capable of enhancing various types of neural networks (e.g., CNNs and RNNs) with declarative first-order logic rules. Specifically, we develop an iterative distillation method that transfers the structured information of logic rules into the weights of neural networks. We deploy the framework on a CNN for sentiment analysis, and an RNN for named entity recognition. With a few highly intuitive rules, we obtain substantial improvements and achieve state-of-the-art or comparable results to previous best-performing systems. 1 Introduction Deep neural networks provide a powerful mechanism for learning patterns from massive data, achieving new levels of performance on image classification (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012), machine translation (Bahdanau et al., 2014), playing strategic board games (Silver et al., 2016), and so forth. Despite the impressive advances, the widelyused DNN methods still have limitations. The high predictive accuracy has heavily relied on large amounts of labeled data; and the purely data-driven learning can lead to uninterpretable and sometimes counter-intuitive results (Szegedy et al., 2014; Nguyen et al., 2015). It is also difficult to encode human intention to guide the models to capture desired patterns, without expensive direct supervision or ad-hoc initialization. On the other hand, the cognitive process of human beings have indicated that people learn not only from concrete examples (as DNNs do) but also from different forms of general knowledge and rich experiences (Minksy, 1980; Lake et al., 2015). Logic rules provide a flexible declarative language for communicating high-level cognition and expressing structured knowledge. It is therefore desirable to integrate logic rules into DNNs, to transfer human intention and domain knowledge to neural models, and regulate the learning process. In this paper, we present a framework capable of enhancing general types of neural networks, such as convolutional networks (CNNs) and recurrent networks (RNNs), on various tasks, with logic rule knowledge. Combining symbolic representations with neural methods have been considered in different contexts. Neural-symbolic systems (Garcez et al., 2012) construct a network from a given rule set to execute reasoning. To exploit a priori knowledge in general neural architectures, recent work augments each raw data instance with useful features (Collobert et al., 2011), while network training, however, is still limited to instance-label supervision and suffers from the same issues mentioned above. Besides, a large variety of structural knowledge cannot be naturally encoded in the featurelabel form. 
Our framework enables a neural network to learn simultaneously from labeled instances as well as logic rules, through an iterative rule knowledge distillation procedure that transfers the structured information encoded in the logic rules into the network parameters. Since the general logic rules are complementary to the specific data labels, a natural “side-product” of the integration is the support for semi-supervised learning where unlabeled data is used to better absorb the logical knowledge. Methodologically, our approach can be seen as a combination of the knowledge distillation (Hinton et al., 2015; Bucilu et al., 2006) and the posterior regularization (PR) method (Ganchev et al., 2010). 2410 In particular, at each iteration we adapt the posterior constraint principle from PR to construct a rule-regularized teacher, and train the student network of interest to imitate the predictions of the teacher network. We leverage soft logic to support flexible rule encoding. We apply the proposed framework on both CNN and RNN, and deploy on the task of sentiment analysis (SA) and named entity recognition (NER), respectively. With only a few (one or two) very intuitive rules, both the distilled networks and the joint teacher networks strongly improve over their basic forms (without rules), and achieve better or comparable performance to state-of-the-art models which typically have more parameters and complicated architectures. To the best of our knowledge, this is the first work to integrate logic rules with general workhorse types of deep neural networks in a principled framework. The encouraging results indicate our method can be potentially useful for incorporating richer types of human knowledge, and improving other application domains. 2 Related Work Combination of logic rules and neural networks has been considered in different contexts. Neuralsymbolic systems (Garcez et al., 2012), such as KBANN (Towell et al., 1990) and CILP++ (Franc¸a et al., 2014), construct network architectures from given rules to perform reasoning and knowledge acquisition. A related line of research, such as Markov logic networks (Richardson and Domingos, 2006), derives probabilistic graphical models (rather than neural networks) from the rule set. With the recent success of deep neural networks in a vast variety of application domains, it is increasingly desirable to incorporate structured logic knowledge into general types of networks to harness flexibility and reduce uninterpretability. Recent work that trains on extra features from domain knowledge (Collobert et al., 2011), while producing improved results, does not go beyond the data-label paradigm. Kulkarni et al. (2015) uses a specialized training procedure with careful ordering of training instances to obtain an interpretable neural layer of an image network. Karaletsos et al. (2016) develops a generative model jointly over data-labels and similarity knowledge expressed in triplet format to learn improved disentangled representations. Though there do exist general frameworks that allow encoding various structured constraints on latent variable models (Ganchev et al., 2010; Zhu et al., 2014; Liang et al., 2009), they either are not directly applicable to the NN case, or could yield inferior performance as in our empirical study. Liang et al. (2008) transfers predictive power of pre-trained structured models to unstructured ones in a pipelined fashion. 
Our proposed approach is distinct in that we use an iterative rule distillation process to effectively transfer rich structured knowledge, expressed in the declarative first-order logic language, into parameters of general neural networks. We show that the proposed approach strongly outperforms an extensive array of other either ad-hoc or general integration methods. 3 Method In this section we present our framework which encapsulates the logical structured knowledge into a neural network. This is achieved by forcing the network to emulate the predictions of a ruleregularized teacher, and evolving both models iteratively throughout training (section 3.2). The process is agnostic to the network architecture, and thus applicable to general types of neural models including CNNs and RNNs. We construct the teacher network in each iteration by adapting the posterior regularization principle in our logical constraint setting (section 3.3), where our formulation provides a closed-form solution. Figure 1 shows an overview of the proposed framework. loss labeled data logic rules 𝑞(𝑦|𝑥) 𝑝𝜃(𝑦|𝑥) projection unlabeled data teacher network construction rule knowledge distillation back propagation teacher 𝑞(𝑦|𝑥) student 𝑝𝜃(𝑦|𝑥) Figure 1: Framework Overview. At each iteration, the teacher network is obtained by projecting the student network to a rule-regularized subspace (red dashed arrow); and the student network is updated to balance between emulating the teacher’s output and predicting the true labels (black/blue solid arrows). 2411 3.1 Learning Resources: Instances and Rules Our approach allows neural networks to learn from both specific examples and general rules. Here we give the settings of these “learning resources”. Assume we have input variable x ∈X and target variable y ∈Y. For clarity, we focus on K-way classification, where Y = ∆K is the K-dimensional probability simplex and y ∈ {0, 1}K ⊂Y is a one-hot encoding of the class label. However, our method specification can straightforwardly be applied to other contexts such as regression and sequence learning (e.g., NER tagging, which is a sequence of classification decisions). The training data D = {(xn, yn)}N n=1 is a set of instantiations of (x, y). Further consider a set of first-order logic (FOL) rules with confidences, denoted as R = {(Rl, λl)}L l=1, where Rl is the lth rule over the input-target space (X, Y), and λl ∈[0, ∞] is the confidence level with λl = ∞indicating a hard rule, i.e., all groundings are required to be true (=1). Here a grounding is the logic expression with all variables being instantiated. Given a set of examples (X, Y ) ⊂(X, Y) (e.g., a minibatch from D), the set of groundings of Rl are denoted as {rlg(X, Y )}Gl g=1. In practice a rule grounding is typically relevant to only a single or subset of examples, though here we give the most general form on the entire set. We encode the FOL rules using soft logic (Bach et al., 2015) for flexible encoding and stable optimization. Specifically, soft logic allows continuous truth values from the interval [0, 1] instead of {0, 1}, and the Boolean logic operators are reformulated as: A&B = max{A + B −1, 0} A ∨B = min{A + B, 1} A1 ∧· · · ∧AN = X i Ai/N ¬A = 1 −A (1) Here & and ∧are two different approximations to logical conjunction (Foulds et al., 2015): & is useful as a selection operator (e.g., A&B = B when A = 1, and A&B = 0 when A = 0), while ∧is an averaging operator. 
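These operators translate directly into code; a minimal sketch follows. The function names are ours, and the reading of implication as (not A) or B is our assumption, although it reproduces the truth values given for the sentiment rule in Section 4.1:

    def soft_and(a, b):
        # Lukasiewicz conjunction '&' from Eq.(1): acts as a selector,
        # since a & b = b when a = 1 and a & b = 0 when a = 0.
        return max(a + b - 1.0, 0.0)

    def soft_or(a, b):
        # Soft disjunction: min{a + b, 1}.
        return min(a + b, 1.0)

    def soft_avg_and(*args):
        # Averaging conjunction (the wedge operator in Eq.(1)).
        return sum(args) / len(args)

    def soft_not(a):
        # Soft negation: 1 - a.
        return 1.0 - a

    def soft_implies(a, b):
        # A => B read as (not A) or B; an assumption on our part, consistent
        # with the rule truth values reported later in the paper.
        return soft_or(soft_not(a), b)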
3.2 Rule Knowledge Distillation A neural network defines a conditional probability pθ(y|x) by using a softmax output layer that produces a K-dimensional soft prediction vector denoted as σθ(x). The network is parameterized by weights θ. Standard neural network training has been to iteratively update θ to produce the correct labels of training instances. To integrate the information encoded in the rules, we propose to train the network to also imitate the outputs of a rule-regularized projection of pθ(y|x), denoted as q(y|x), which explicitly includes rule constraints as regularization terms. In each iteration q is constructed by projecting pθ into a subspace constrained by the rules, and thus has desirable properties. We present the construction in the next section. The prediction behavior of q reveals the information of the regularized subspace and structured rules. Emulating the q outputs serves to transfer this knowledge into pθ. The new objective is then formulated as a balancing between imitating the soft predictions of q and predicting the true hard labels: θ(t+1) = arg min θ∈Θ 1 N N X n=1 (1 −π)ℓ(yn, σθ(xn)) + πℓ(s(t) n , σθ(xn)), (2) where ℓdenotes the loss function selected according to specific applications (e.g., the cross entropy loss for classification); s(t) n is the soft prediction vector of q on xn at iteration t; and π is the imitation parameter calibrating the relative importance of the two objectives. A similar imitation procedure has been used in other settings such as model compression (Bucilu et al., 2006; Hinton et al., 2015) where the process is termed distillation. Following them we call pθ(y|x) the “student” and q(y|x) the “teacher”, which can be intuitively explained in analogous to human education where a teacher is aware of systematic general rules and she instructs students by providing her solutions to particular questions (i.e., the soft predictions). An important difference from previous distillation work, where the teacher is obtained beforehand and the student is trained thereafter, is that our teacher and student are learned simultaneously during training. Though it is possible to combine a neural network with rule constraints by projecting the network to the rule-regularized subspace after it is fully trained as before with only data-label instances, or by optimizing projected network directly, we found our iterative teacher-student distillation approach provides a much superior performance, as shown in the experiments. Moreover, since pθ distills the rule information into the 2412 weights θ instead of relying on explicit rule representations, we can use pθ for predicting new examples at test time when the rule assessment is expensive or even unavailable (i.e., the privileged information setting (Lopez-Paz et al., 2016)) while still enjoying the benefit of integration. Besides, the second loss term in Eq.(2) can be augmented with rich unlabeled data in addition to the labeled examples, which enables semi-supervised learning for better absorbing the rule knowledge. 3.3 Teacher Network Construction We now proceed to construct the teacher network q(y|x) at each iteration from pθ(y|x). The iteration index t is omitted for clarity. We adapt the posterior regularization principle in our logic constraint setting. Our formulation ensures a closedform solution for q and thus avoids any significant increases in computational overhead. Recall the set of FOL rules R = {(Rl, λl)}L l=1. 
Our goal is to find the optimal q that fits the rules while at the same time staying close to pθ. For the first property, we apply a commonly-used strategy that imposes the rule constraints on q through an expectation operator. That is, for each rule (indexed by l) and each of its groundings (indexed by g) on (X, Y ), we expect Eq(Y |X)[rlg(X, Y )] = 1, with confidence λl. The constraints define a ruleregularized space of all valid distributions. For the second property, we measure the closeness between q and pθ with KL-divergence, and wish to minimize it. Combining the two factors together and further allowing slackness for the constraints, we finally get the following optimization problem: min q,ξ≥0 KL(q(Y |X)∥pθ(Y |X)) + C X l,gl ξl,gl s.t. λl(1 −Eq[rl,gl(X, Y )]) ≤ξl,gl gl = 1, . . . , Gl, l = 1, . . . , L, (3) where ξl,gl ≥0 is the slack variable for respective logic constraint; and C is the regularization parameter. The problem can be seen as projecting pθ into the constrained subspace. The problem is convex and can be efficiently solved in its dual form with closed-form solutions. We provide the detailed derivation in the supplementary materials and directly give the solution here: q∗(Y |X) ∝pθ(Y |X) exp   − X l,gl Cλl(1 −rl,gl(X, Y ))    (4) Intuitively, a strong rule with large λl will lead to low probabilities of predictions that fail to meet the constraints. We discuss the computation of the normalization factor in section 3.4. Our framework is related to the posterior regularization (PR) method (Ganchev et al., 2010) which places constraints over model posterior in unsupervised setting. In classification, our optimization procedure is analogous to the modified EM algorithm for PR, by using cross-entropy loss in Eq.(2) and evaluating the second loss term on unlabeled data differing from D, so that Eq.(4) corresponds to the E-step and Eq.(2) is analogous to the M-step. This sheds light from another perspective on why our framework would work. However, we found in our experiments (section 5) that to produce strong performance it is crucial to use the same labeled data xn in the two losses of Eq.(2) so as to form a direct trade-off between imitating soft predictions and predicting correct hard labels. 3.4 Implementations The procedure of iterative distilling optimization of our framework is summarized in Algorithm 1. During training we need to compute the soft predictions of q at each iteration, which is straightforward through direct enumeration if the rule constraints in Eq.(4) are factored in the same way as the base neural model pθ (e.g., the “but”-rule of sentiment classification in section 4.1). If the constraints introduce additional dependencies, e.g., bigram dependency as the transition rule in the NER task (section 4.2), we can use dynamic programming for efficient computation. For higher-order constraints (e.g., the listing rule in NER), we approximate through Gibbs sampling that iteratively samples from q(yi|y−i, x) for each position i. If the constraints span multiple instances, we group the relevant instances in minibatches for joint inference (and randomly break some dependencies when a group is too large). Note that calculating the soft predictions is efficient since only one NN forward pass is required to compute the base distribution pθ(y|x) (and few more, if needed, for calculating the truth values of relevant rules). p v.s. q at Test Time At test time we can use either the distilled student network p, or the teacher network q after a final projection. 
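In the common case where every rule grounding factors over a single instance and its label (as with the sentiment rule below), the projection in Eq.(4) amounts to an element-wise reweighting of the student's softmax output followed by renormalization. A minimal sketch, with names of our choosing, is:

    import numpy as np

    def project_to_teacher(p_theta, rule_truths, lambdas, C):
        # Eq.(4) for a single instance of K-way classification.
        #   p_theta     : length-K softmax distribution of the student.
        #   rule_truths : list of length-K arrays; rule_truths[l][k] is the
        #                 soft truth value of rule l when the label is k.
        #   lambdas     : per-rule confidences; C is the regularization strength.
        p = np.asarray(p_theta, dtype=float)
        penalty = np.zeros_like(p)
        for lam, truth in zip(lambdas, rule_truths):
            penalty += C * lam * (1.0 - np.asarray(truth, dtype=float))
        q = p * np.exp(-penalty)
        return q / q.sum()

Rules that couple several output positions or several instances do not factor this way; as described above, those cases call for dynamic programming or Gibbs sampling over q.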
Our empirical results show that both models substantially improve over the base network that is trained with only datalabel instances. In general q performs better than p. Particularly, q is more suitable when the logic rules introduce additional dependencies (e.g., span2413 Algorithm 1 Harnessing NN with Rules Input: The training data D = {(xn, yn)}N n=1, The rule set R = {(Rl, λl)}L l=1, Parameters: π – imitation parameter C – regularization strength 1: Initialize neural network parameter θ 2: repeat 3: Sample a minibatch (X, Y ) ⊂D 4: Construct teacher network q with Eq.(4) 5: Transfer knowledge into pθ by updating θ with Eq.(2) 6: until convergence Output: Distill student network pθ and teacher network q ning over multiple examples), requiring joint inference. In contrast, as mentioned above, p is more lightweight and efficient, and useful when rule evaluation is expensive or impossible at prediction time. Our experiments compare the performance of p and q extensively. Imitation Strength π The imitation parameter π in Eq.(2) balances between emulating the teacher soft predictions and predicting the true hard labels. Since the teacher network is constructed from pθ, which, at the beginning of training, would produce low-quality predictions, we thus favor predicting the true labels more at initial stage. As training goes on, we gradually bias towards emulating the teacher predictions to effectively distill the structured knowledge. Specifically, we define π(t) = min{π0, 1 −αt} at iteration t ≥0, where α ≤1 specifies the speed of decay and π0 < 1 is a lower bound. 4 Applications We have presented our framework that is general enough to improve various types of neural networks with rules, and easy to use in that users are allowed to impose their knowledge and intentions through the declarative first-order logic. In this section we illustrate the versatility of our approach by applying it on two workhorse network architectures, i.e., convolutional network and recurrent network, on two representative applications, i.e., sentencelevel sentiment analysis which is a classification problem, and named entity recognition which is a sequence learning problem. For each task, we first briefly describe the base neural network. Since we are not focusing on tuning network architectures, we largely use the same or similar networks to previous successful neural models. We then design the linguisticallymotivated rules to be integrated. I like this book store a lot Padding Padding Word Embedding Convolution Max Pooling Sentence Representation Figure 2: The CNN architecture for sentence-level sentiment analysis. The sentence representation vector is followed by a fully-connected layer with softmax output activation, to output sentiment predictions. 4.1 Sentiment Classification Sentence-level sentiment analysis is to identify the sentiment (e.g., positive or negative) underlying an individual sentence. The task is crucial for many opinion mining applications. One challenging point of the task is to capture the contrastive sense (e.g., by conjunction “but”) within a sentence. Base Network We use the single-channel convolutional network proposed in (Kim, 2014). The simple model has achieved compelling performance on various sentiment classification benchmarks. The network contains a convolutional layer on top of word vectors of a given sentence, followed by a max-over-time pooling layer and then a fullyconnected layer with softmax output activation. A convolution operation is to apply a filter to word windows. 
Multiple filters with varying window sizes are used to obtain multiple features. Figure 2 shows the network architecture. Logic Rules One difficulty for the plain neural network is to identify contrastive sense in order to capture the dominant sentiment precisely. The conjunction word “but” is one of the strong indicators for such sentiment changes in a sentence, where the sentiment of clauses following “but” generally dominates. We thus consider sentences S with an “A-but-B” structure, and expect the sentiment of the whole sentence to be consistent with the sentiment of clause B. The logic rule is written as: has-‘A-but-B’-structure(S) ⇒ (1(y = +) ⇒σθ(B)+ ∧σθ(B)+ ⇒1(y = +)) , (5) 2414 where 1(·) is an indicator function that takes 1 when its argument is true, and 0 otherwise; class ‘+’ represents ‘positive’; and σθ(B)+ is the element of σθ(B) for class ’+’. By Eq.(1), when S has the ‘Abut-B’ structure, the truth value of the above logic rule equals to (1 + σθ(B)+)/2 when y = +, and (2 −σθ(B)+)/2 otherwise 1. Note that here we assume two-way classification (i.e., positive and negative), though it is straightforward to design rules for finer grained sentiment classification. 4.2 Named Entity Recognition NER is to locate and classify elements in text into entity categories such as “persons” and “organizations”. It is an essential first step for downstream language understanding applications. The task assigns to each word a named entity tag in an “X-Y” format where X is one of BIEOS (Beginning, Inside, End, Outside, and Singleton) and Y is the entity category. A valid tag sequence has to follow certain constraints by the definition of the tagging scheme. Besides, text with structures (e.g., lists) within or across sentences can usually expose some consistency patterns. Base Network The base network has a similar architecture with the bi-directional LSTM recurrent network (called BLSTM-CNN) proposed in (Chiu and Nichols, 2015) for NER which has outperformed most of previous neural models. The model uses a CNN and pre-trained word vectors to capture character- and word-level information, respectively. These features are then fed into a bi-directional RNN with LSTM units for sequence tagging. Compared to (Chiu and Nichols, 2015) we omit the character type and capitalization features, as well as the additive transition matrix in the output layer. Figure 3 shows the network architecture. Logic Rules The base network largely makes independent tagging decisions at each position, ignoring the constraints on successive labels for a valid tag sequence (e.g., I-ORG cannot follow B-PER). In contrast to recent work (Lample et al., 2016) which adds a conditional random field (CRF) to capture bi-gram dependencies between outputs, we instead apply logic rules which does not introduce extra parameters to learn. An example rule is: equal(yi−1, I-ORG) ⇒¬ equal(yi, B-PER) (6) 1Replacing ∧with & in Eq.(5) leads to a probably more intuitive rule which takes the value σθ(B)+ when y = +, and 1 −σθ(B)+ otherwise. Char+Word Representation Backward LSTM Forward LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM Output Representation NYC locates in USA Figure 3: The architecture of the bidirectional LSTM recurrent network for NER. The CNN for extracting character representation is omitted. The confidence levels are set to ∞to prevent any violation. We further leverage the list structures within and across sentences of the same documents. 
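Before spelling out the list rule, the "but"-rule of Eq.(5) and the balanced objective of Eq.(2) can be made concrete with a short sketch. The function names, the small constant guarding the logarithm, and the default schedule constants are ours; the rule truth values below plug into the projection of Eq.(4) to produce the teacher's soft predictions s_n:

    import numpy as np

    def but_rule_truths(sigma_b_pos):
        # Soft truth values of the A-but-B rule in Eq.(5), indexed by label
        # (0: negative, 1: positive), given the network's positive-class
        # probability for clause B. Only applied when the sentence actually
        # has an A-but-B structure.
        return np.array([(2.0 - sigma_b_pos) / 2.0,
                         (1.0 + sigma_b_pos) / 2.0])

    def distillation_loss(p_theta, teacher_soft, y_onehot, pi, eps=1e-12):
        # Eq.(2) with cross-entropy as the loss l: a (1 - pi) weight on the
        # true hard label and a pi weight on imitating the teacher.
        log_p = np.log(np.asarray(p_theta) + eps)
        ce_true = -np.sum(np.asarray(y_onehot) * log_p)
        ce_soft = -np.sum(np.asarray(teacher_soft) * log_p)
        return (1.0 - pi) * ce_true + pi * ce_soft

    def imitation_rate(t, pi0=1.0, alpha=0.9):
        # Schedule pi(t) = min{pi0, 1 - alpha**t}: favour the hard labels
        # early on and the teacher's predictions later. Defaults are
        # illustrative only.
        return min(pi0, 1.0 - alpha ** t)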
Specifically, named entities at corresponding positions in a list are likely to be in the same categories. For instance, in “1. Juventus, 2. Barcelona, 3. ...” we know “Barcelona” must be an organization rather than a location, since its counterpart entity “Juventus” is an organization. We describe our simple procedure for identifying lists and counterparts in the supplementary materials. The logic rule is encoded as: is-counterpart(X, A) ⇒1 −∥c(ey) −c(σθ(A))∥2, (7) where ey is the one-hot encoding of y (the class prediction of X); c(·) collapses the probability mass on the labels with the same categories into a single probability, yielding a vector with length equaling to the number of categories. We use ℓ2 distance as a measure for the closeness between predictions of X and its counterpart A. Note that the distance takes value in [0, 1] which is a proper soft truth value. The list rule can span multiple sentences (within the same document). We found the teacher network q that enables explicit joint inference provides much better performance over the distilled student network p (section 5). 5 Experiments We validate our framework by evaluating its applications of sentiment classification and named entity recognition on a variety of public benchmarks. By integrating the simple yet effective rules with 2415 Model SST2 MR CR 1 CNN (Kim, 2014) 87.2 81.3±0.1 84.3±0.2 2 CNN-Rule-p 88.8 81.6±0.1 85.0±0.3 3 CNN-Rule-q 89.3 81.7±0.1 85.3±0.3 4 MGNC-CNN (Zhang et al., 2016) 88.4 – – 5 MVCNN (Yin and Schutze, 2015) 89.4 – – 6 CNN-multichannel (Kim, 2014) 88.1 81.1 85.0 7 Paragraph-Vec (Le and Mikolov, 2014) 87.8 – – 8 CRF-PR (Yang and Cardie, 2014) – – 82.7 9 RNTN (Socher et al., 2013) 85.4 – – 10 G-Dropout (Wang and Manning, 2013) – 79.0 82.1 Table 1: Accuracy (%) of Sentiment Classification. Row 1, CNN (Kim, 2014) is the base network corresponding to the “CNN-non-static” model in (Kim, 2014). Rows 2-3 are the networks enhanced by our framework: CNN-Rule-p is the student network and CNN-Rule-q is the teacher network. For MR and CR, we report the average accuracy±one standard deviation using 10-fold cross validation. the base networks, we obtain substantial improvements on both tasks and achieve state-of-the-art or comparable results to previous best-performing systems. Comparison with a diverse set of other rule integration methods demonstrates the unique effectiveness of our framework. Our approach also shows promising potentials in the semi-supervised learning and sparse data context. Throughout the experiments we set the regularization parameter to C = 400. In sentiment classification we set the imitation parameter to π(t) = 1 −0.9t, while in NER π(t) = min{0.9, 1 −0.9t} to downplay the noisy listing rule. The confidence levels of rules are set to λl = 1, except for hard constraints whose confidence is ∞. For neural network configuration, we largely followed the reference work, as specified in the following respective sections. All experiments were performed on a Linux machine with eight 4.0GHz CPU cores, one Tesla K40c GPU, and 32GB RAM. We implemented neural networks using Theano 2, a popular deep learning platform. 5.1 Sentiment Classification 5.1.1 Setup We test our method on a number of commonly used benchmarks, including 1) SST2, Stanford Sentiment Treebank (Socher et al., 2013) which contains 2 classes (negative and positive), and 6920/872/1821 sentences in the train/dev/test sets respectively. 
Following (Kim, 2014) we train models on both sentences and phrases since all labels are provided. 2) MR (Pang and Lee, 2005), a set of 10,662 one-sentence movie reviews with negative 2http://deeplearning.net/software/theano or positive sentiment. 3) CR (Hu and Liu, 2004), customer reviews of various products, containing 2 classes and 3,775 instances. For MR and CR, we use 10-fold cross validation as in previous work. In each of the three datasets, around 15% sentences contains the word “but”. For the base neural network we use the “nonstatic” version in (Kim, 2014) with the exact same configurations. Specifically, word vectors are initialized using word2vec (Mikolov et al., 2013) and fine-tuned throughout training, and the neural parameters are trained using SGD with the Adadelta update rule (Zeiler, 2012). 5.1.2 Results Table 1 shows the sentiment classification performance. Rows 1-3 compare the base neural model with the models enhanced by our framework with the “but”-rule (Eq.(5)). We see that our method provides a strong boost on accuracy over all three datasets. The teacher network q further improves over the student network p, though the student network is more widely applicable in certain contexts as discussed in sections 3.2 and 3.4. Rows 4-10 show the accuracy of recent top-performing methods. On the MR and CR datasets, our model outperforms all the baselines. On SST2, MVCNN (Yin and Schutze, 2015) (Row 5) is the only system that shows a slightly better result than ours. Their neural network has combined diverse sets of pre-trained word embeddings (while we use only word2vec) and contained more neural layers and parameters than our model. To further investigate the effectiveness of our framework in integrating structured rule knowledge, we compare with an extensive array of other 2416 Model Accuracy (%) 1 CNN (Kim, 2014) 87.2 2 -but-clause 87.3 3 -ℓ2-reg 87.5 4 -project 87.9 5 -opt-project 88.3 6 -pipeline 87.9 7 -Rule-p 88.8 8 -Rule-q 89.3 Table 2: Performance of different rule integration methods on SST2. 1) CNN is the base network; 2) “-but-clause” takes the clause after “but” as input; 3) “-ℓ2-reg” imposes a regularization term γ∥σθ(S) − σθ(Y )∥2 to the CNN objective, with the strength γ selected on dev set; 4) “-project” projects the trained base CNN to the rule-regularized subspace with Eq.(3); 5) “-opt-project” directly optimizes the projected CNN; 6) “-pipeline” distills the pre-trained “-opt-project” to a plain CNN; 7-8) “-Rule-p” and “Rule-q” are our models with p being the distilled student network and q the teacher network. Note that “-but-clause” and “-ℓ2-reg” are ad-hoc methods applicable specifically to the “but”-rule. possible integration approaches. Table 2 lists these methods and their performance on the SST2 task. We see that: 1) Although all methods lead to different degrees of improvement, our framework outperforms all other competitors with a large margin. 2) In particular, compared to the pipelined method in Row 6 which is in analogous to the structure compilation work (Liang et al., 2008), our iterative distillation (section 3.2) provides better performance. Another advantage of our method is that we only train one set of neural parameters, as opposed to two separate sets as in the pipelined approach. 3) The distilled student network “-Rule-p” achieves much superior accuracy compared to the base CNN, as well as “-project” and “-opt-project” which explicitly project CNN to the rule-constrained subspace. 
This validates that our distillation procedure transfers the structured knowledge into the neural parameters effectively. The inferior accuracy of “-opt-project” can be partially attributed to the poor performance of its neural network part which achieves only 85.1% accuracy and leads to inaccurate evaluation of the “but”-rule in Eq.(5). We next explore the performance of our framework with varying numbers of labeled instances as well as the effect of exploiting unlabeled data. Intuitively, with less labeled examples we expect the Data size 5% 10% 30% 100% 1 CNN 79.9 81.6 83.6 87.2 2 -Rule-p 81.5 83.2 84.5 88.8 3 -Rule-q 82.5 83.9 85.6 89.3 4 -semi-PR 81.5 83.1 84.6 – 5 -semi-Rule-p 81.7 83.3 84.7 – 6 -semi-Rule-q 82.7 84.2 85.7 – Table 3: Accuracy (%) on SST2 with varying sizes of labeled data and semi-supervised learning. The header row is the percentage of labeled examples for training. Rows 1-3 use only the supervised data. Rows 4-6 use semi-supervised learning where the remaining training data are used as unlabeled examples. For “-semi-PR” we only report its projected solution (in analogous to q) which performs better than the non-projected one (in analogous to p). general rules would contribute more to the performance, and unlabeled data should help better learn from the rules. This can be a useful property especially when data are sparse and labels are expensive to obtain. Table 3 shows the results. The subsampling is conducted on the sentence level. That is, for instance, in “5%” we first selected 5% training sentences uniformly at random, then trained the models on these sentences as well as their phrases. The results verify our expectations. 1) Rows 1-3 give the accuracy of using only data-label subsets for training. In every setting our methods consistently outperform the base CNN. 2) “-Rule-q” provides higher improvement on 5% data (with margin 2.6%) than on larger data (e.g., 2.3% on 10% data, and 2.0% on 30% data), showing promising potential in the sparse data context. 3) By adding unlabeled instances for semi-supervised learning as in Rows 5-6, we get further improved accuracy. 4) Row 4, “-semi-PR” is the posterior regularization (Ganchev et al., 2010) which imposes the rule constraint through only unlabeled data during training. Our distillation framework consistently provides substantially better results. 5.2 Named Entity Recognition 5.2.1 Setup We evaluate on the well-established CoNLL-2003 NER benchmark (Tjong Kim Sang and De Meulder, 2003), which contains 14,987/3,466/3,684 sentences and 204,567/51,578/46,666 tokens in train/dev/test sets, respectively. The dataset includes 4 categories, i.e., person, location, organization, and misc. BIOES tagging scheme is used. 2417 Model F1 1 BLSTM 89.55 2 BLSTM-Rule-trans p: 89.80, q: 91.11 3 BLSTM-Rules p: 89.93, q: 91.18 4 NN-lex (Collobert et al., 2011) 89.59 5 S-LSTM (Lample et al., 2016) 90.33 6 BLSTM-lex (Chiu and Nichols, 2015) 90.77 7 BLSTM-CRF1 (Lample et al., 2016) 90.94 8 Joint-NER-EL (Luo et al., 2015) 91.20 9 BLSTM-CRF2 (Ma and Hovy, 2016) 91.21 Table 4: Performance of NER on CoNLL-2003. Row 2, BLSTM-Rule-trans imposes the transition rules (Eq.(6)) on the base BLSTM. Row 3, BLSTMRules further incorporates the list rule (Eq.(7)). We report the performance of both the student model p and the teacher model q. Around 1.7% named entities occur in lists. 
We use mostly the same configuration for the base BLSTM network as in (Chiu and Nichols, 2015), except that, besides the slight architecture difference (section 4.2), we apply Adadelta for parameter updating. GloVe (Pennington et al., 2014) word vectors are used to initialize word features. 5.2.2 Results Table 4 presents the performance on the NER task. By incorporating the bi-gram transition rules (Row 2), the joint teacher model q achieves a 1.56-point improvement in F1 score and outperforms most previous neural-based methods (Rows 4-7), including the BLSTM-CRF model (Lample et al., 2016), which applies a conditional random field (CRF) on top of a BLSTM in order to capture the transition patterns and encourage valid sequences. In contrast, our method implements the desired constraints in a more straightforward way through the declarative logic rule language, and at the same time does not introduce extra model parameters to learn. Further integration of the list rule (Row 3) provides a second boost in performance, achieving an F1 score very close to the best-performing systems, including Joint-NER-EL (Luo et al., 2015) (Row 8), a probabilistic graphical model that optimizes NER and entity linking jointly with massive external resources, and BLSTM-CRF (Ma and Hovy, 2016), a combination of BLSTM and CRF with more parameters than our rule-enhanced neural networks. From the table we see that the accuracy gap between the joint teacher model q and the distilled student p is larger than in the sentiment classification task (Table 1). This is because in the NER task we have used logic rules that introduce extra dependencies between adjacent tag positions as well as across multiple instances, making the explicit joint inference of q useful for fulfilling these structured constraints. 6 Discussion and Future Work We have developed a framework that combines deep neural networks with first-order logic rules, allowing human knowledge and intentions to be integrated into neural models. In particular, we proposed an iterative distillation procedure that transfers the structured information of logic rules into the weights of neural networks. The transfer is done via a teacher network constructed using the posterior regularization principle. Our framework is general and applicable to various types of neural architectures. With a few intuitive rules, our framework significantly improves base networks on sentiment analysis and named entity recognition, demonstrating the practical significance of our approach. Though we have focused on first-order logic rules, we leveraged a soft logic formulation that can easily be extended to general probabilistic models for expressing structured distributions and performing inference and reasoning (Lake et al., 2015). We plan to explore these diverse knowledge representations to guide DNN learning. The proposed iterative distillation procedure also reveals connections to recent neural autoencoders (Kingma and Welling, 2014; Rezende et al., 2014), where generative models encode probabilistic structures and neural recognition models distill the information through iterative optimization (Rezende et al., 2016; Johnson et al., 2016; Karaletsos et al., 2016). The encouraging empirical results indicate a strong potential of our approach for improving other application domains such as vision tasks, which we plan to explore in the future. Finally, we would also like to generalize our framework to automatically learn the confidence of different rules, and to derive new rules from data.
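The iterative distillation procedure summarized above is easy to misread without its equations, which are not reproduced in this excerpt, so the following is only a rough, non-authoritative sketch of one iteration under two assumptions: that the teacher q is obtained by reweighting the student's class probabilities with an exponential penalty on rule violations (the posterior-regularization projection), and that the student is then trained against a mixture of the hard labels and the teacher's soft predictions. The constant C, the mixing weight pi, and the rule truth values are illustrative placeholders, not the paper's notation.

    import numpy as np

    def teacher_distribution(student_probs, rule_truth, C=1.0):
        # assumed form: q(y) ~ p(y) * exp(-C * (1 - rule_truth(y))); labels that
        # violate the rule are down-weighted, C controls how hard the constraint is
        q = student_probs * np.exp(-C * (1.0 - rule_truth))
        return q / q.sum()

    def distillation_target(true_label, teacher_probs, n_labels, pi=0.5):
        # the student's training target mixes the ground-truth one-hot label with
        # the teacher's soft prediction; pi balances imitation vs. supervision
        hard = np.eye(n_labels)[true_label]
        return (1.0 - pi) * hard + pi * teacher_probs

    # toy usage with the "but"-rule intuition: the clause after "but" looks
    # positive, so the rule assigns a high truth value to the positive label
    p_student = np.array([0.6, 0.4])     # current P(negative), P(positive)
    rule_vals = np.array([0.1, 0.9])     # hypothetical soft truth values per label
    q_teacher = teacher_distribution(p_student, rule_vals, C=6.0)
    target = distillation_target(true_label=1, teacher_probs=q_teacher, n_labels=2)

Whether and how C and pi are annealed over iterations, and how the rule truth values are computed from the network's own prediction on the but-clause, are exactly the details the full paper specifies; the point of the sketch is only that the rule enters through a reweighting of p rather than through extra trainable parameters.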
Acknowledgments We thank the anonymous reviewers for their valuable comments. This work is supported by NSF IIS1218282, NSF IIS1447676, Air Force FA872105-C-0003, and FA8750-12-2-0342. 2418 References Stephen H Bach, Matthias Broecheler, Bert Huang, and Lise Getoor. 2015. Hinge-loss Markov random fields and probabilistic soft logic. arXiv preprint arXiv:1505.04406. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. Proc. of ICLR. Cristian Bucilu, Rich Caruana, and Alexandru NiculescuMizil. 2006. Model compression. In Proc. of KDD, pages 535–541. ACM. Jason PC Chiu and Eric Nichols. 2015. Named entity recognition with bidirectional LSTM-CNNs. arXiv preprint arXiv:1511.08308. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. JMLR, 12:2493– 2537. James Foulds, Shachi Kumar, and Lise Getoor. 2015. Latent topic networks: A versatile probabilistic programming framework for topic models. In Proc. of ICML, pages 777–786. Manoel VM Franc¸a, Gerson Zaverucha, and Artur S dAvila Garcez. 2014. Fast relational learning using bottom clause propositionalization with artificial neural networks. Machine learning, 94(1):81–104. Kuzman Ganchev, Joao Grac¸a, Jennifer Gillenwater, and Ben Taskar. 2010. Posterior regularization for structured latent variable models. JMLR, 11:2001–2049. Artur S d’Avila Garcez, Krysia Broda, and Dov M Gabbay. 2012. Neural-symbolic learning systems: foundations and applications. Springer Science & Business Media. Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdelrahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. 2012. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82–97. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proc. of KDD, pages 168–177. ACM. Matthew J Johnson, David Duvenaud, Alexander B Wiltschko, Sandeep R Datta, and Ryan P Adams. 2016. Structured VAEs: Composing probabilistic graphical models and variational autoencoders. arXiv preprint arXiv:1603.06277. Theofanis Karaletsos, Serge Belongie, Cornell Tech, and Gunnar R¨atsch. 2016. Bayesian representation learning with oracle constraints. In Proc. of ICLR. Yoon Kim. 2014. Convolutional neural networks for sentence classification. Proc. of EMNLP. Diederik P Kingma and Max Welling. 2014. Auto-encoding variational Bayes. In Proc. of ICLR. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Proc. of NIPS, pages 1097–1105. Tejas D Kulkarni, William F Whitney, Pushmeet Kohli, and Josh Tenenbaum. 2015. Deep convolutional inverse graphics network. In Proc. of NIPS, pages 2530–2538. Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. 2015. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332– 1338. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proc. of NAACL. Quoc V Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. Proc. of ICML. Percy Liang, Hal Daum´e III, and Dan Klein. 
2008. Structure compilation: trading structure for features. In Proc. of ICML, pages 592–599. ACM. Percy Liang, Michael I Jordan, and Dan Klein. 2009. Learning from measurements in exponential families. In Proc. of ICML, pages 641–648. ACM. David Lopez-Paz, L´eon Bottou, Bernhard Sch¨olkopf, and Vladimir Vapnik. 2016. Unifying distillation and privileged information. Prof. of ICLR. Gang Luo, Xiaojiang Huang, Chin-Yew Lin, and Zaiqing Nie. 2015. Joint named entity recognition and disambiguation. In Proc. of EMNLP. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In Proc. of ACL. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proc. of NIPS, pages 3111–3119. Marvin Minksy. 1980. Learning meaning. Technical Report AI Lab Memo. Project MAC. MIT. Anh Nguyen, Jason Yosinski, and Jeff Clune. 2015. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proc. of CVPR, pages 427–436. IEEE. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proc. of ACL, pages 115–124. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proc. of EMNLP, volume 14, pages 1532–1543. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. Proc. of ICML. Danilo Jimenez Rezende, Shakir Mohamed, Ivo Danihelka, Karol Gregor, and Daan Wierstra. 2016. One-shot generalization in deep generative models. arXiv preprint arXiv:1603.05106. Matthew Richardson and Pedro Domingos. 2006. Markov logic networks. Machine learning, 62(1-2):107–136. 2419 David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. 2016. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489. Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. of EMNLP, volume 1631, page 1642. Citeseer. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. Proc. of ICLR. Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proc. of CoNLL, pages 142– 147. Association for Computational Linguistics. Geoffrey G Towell, Jude W Shavlik, and Michiel O Noordewier. 1990. Refinement of approximate domain theories by knowledge-based neural networks. In Proceedings of the eighth National conference on Artificial intelligence, pages 861–866. Boston, MA. Sida Wang and Christopher Manning. 2013. Fast dropout training. In Proc. of ICML, pages 118–126. Bishan Yang and Claire Cardie. 2014. Context-aware learning for sentence-level sentiment analysis with posterior regularization. In Proc. of ACL, pages 325–335. Wenpeng Yin and Hinrich Schutze. 2015. Multichannel variable-size convolution for sentence classification. Proc. of CONLL. Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701. 
Ye Zhang, Stephen Roller, and Byron Wallace. 2016. MGNCCNN: A simple approach to exploiting multiple word embeddings for sentence classification. Proc. of NAACL. Jun Zhu, Ning Chen, and Eric P Xing. 2014. Bayesian inference with posterior regularization and applications to infinite latent SVMs. JMLR, 15(1):1799–1847. 2420
2016
228
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2421–2430, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Case and Cause in Icelandic: Reconstructing Causal Networks of Cascaded Language Changes Fermín Moscoso del Prado Martín and Christian Brendel University of California, Santa Barbara Department of Linguistics, South Hall 3521 Santa Barbara, CA 93106, USA [email protected] and [email protected] Abstract Linguistic drift is a process that produces slow irreversible changes in the grammar and function of a language’s constructions. Importantly, changes in a part of a language can have trickle down effects, triggering changes elsewhere in that language. Although such causally triggered chains of changes have long been hypothesized by historical linguists, no explicit demonstration of the actual causality has been provided. In this study, we use cooccurrence statistics and machine learning to demonstrate that the functions of morphological cases experience a slow, irreversible drift along history, even in a language as conservative as is Icelandic. Crucially, we then move on to demonstrate –using the notion of Granger-causality– that there are explicit causal connections between the changes in the functions of the different cases, which are consistent with documented processes in the history of Icelandic. Our technique provides a means for the quantitative reconstruction of connected networks of subtle linguistic changes. 1 Introduction Sapir (1921/2014, p. 123) noticed that “Language moves down on a current of its own making. It has a drift” (emphasis added). In Sapir’s view, the formation of different dialects requires that the small changes constantly being introduced by the speakers are not just plain white noise, but rather random walks in which minute changes accumulate over time. The very high dimensionality on which languages operate makes cumulative linguistic changes irreversible. Once a change has been effected there is very little chance that the language will ever return to its original state before the change, in the same way that a diffusion process in a very high dimensional space is never going to return to the exact same point in the space. Drift in language is in this respect reminiscent of random genetic drift from evolutionary biology (Wright, 1955). However, Sapir’s idea of drift goes further in that he viewed it as a directional process, more similar to Wright’s (1929) concept of a directional drift related to selectional pressures. In Sapir’s view, “language has a ‘slope”’; the small changes that accumulate in linguistic drift are not fully random, but rather they reflect the speakers’ unconscious cognitive tendency to increase the consistency within their languages. This idea is currently challenged by some researchers (Croft, 2000; Lupyan and Dale, 2015), who are of the opinion that purely random drift – of the same type as that found in genetics–, when coupled with adequate selection mechanisms, is sufficient to account for the diachronic changes observed in the world’s languages. Sapir motivated the need for directional change in what he saw as apparent causal chains in language change, which he illustrated with the progressive loss and functional shift of English oblique case markers, into an absolutive case-free system encoding animacy and position relative to the head noun. ‘Chain reactions’ along the history of a language are particularly well-studied in phonology. 
Chain shifts (Martinet, 1952) are processes by which the position in perceptual/articulatory space of a phoneme changes in response to the change in position of another phoneme (either moving away from the second phoneme, in a ‘push’ chain, or moving to occupy the space left void by the other, in a ‘pull’ chain). A famous example of a chain shift is the Great English Vowel Shift. In a similar fashion, one could think of functional chain shifts 2421 in morphology, by which a certain morphological category takes over some of the functions of another, triggering a chain of ‘push’ and/or ‘pull’ movements in other categories. Such cascaded changes have often been reported in diachronic linguistics (Biberauer and Roberts, 2008; Fisiak, 1984; Lightfoot, 2002; Wittmann, 1983). Icelandic is a famously conservative language. Compared to most other languages, its grammar has experienced remarkably little change since the high middle ages. For instance, Barðdal and Eythórsson (2003) argue that the changes it has experienced from its old phase (Old West Norse; mid XI century to mid XIV century) to its current phase are comparable to the slight changes occurring from early Modern English (late XV century into early XVIII century) into Modern English (from early XVIII century). In terms of inflectional morphology, change in Icelandic has been minimal. For instance, one finds that the nominal paradigms of Old West Norse, are mostly the same as those of Modern Icelandic. Notwithstanding the apparent formal stability of Icelandic cases, there is evidence that they are experiencing subtle changes in their functions (Barðdal, 2011; Barðdal and Eythórsson, 2003; Eythórsson, 2000). In particular, Barðdal argues that an accumulation of small syntatico-semantic shifts has finally resulted in a shift in the Icelandic dative’s functions (i.e., ‘dative sickness’), possibly triggered by earlier changes in nominatives and accusatives (e.g., ‘nominative sickness’). In this study, we investigate whether one can reliably detect a drift in the functions of Icelandic case and –crucially– whether there is evidence for causal chain shifts in these functions. In Section 2, we describe the processing of a diachronic corpus of Icelandic to obtain co-occurrence representations of the functions of case types and tokens. Section 3 uses machine learning on the cooccurrence vectors of tokens to demonstrate that the usage of Icelandic cases has been subject to a constant drift along history, a drift that is distinguishable from the overall changes experienced by the language in this period. We then go on –in Section 4– to demonstrate using Granger-causality (Granger, 1969) that there are causal relations between the changes in the different cases, and it is possible to reconstruct a directed network of chain shifts, which is consistent with the directions of causality hypothesized by Barðdal (2011). Finally, in Section 5, we discuss the theoretical implications of our results for theories of language change, as well as the possibilities offered by the technical innovations presented here. 2 Corpus Processing 2.1 Corpus We used the Icelandic Parsed Historical Corpus (Wallenberg et al., 2011), a sample of around one million word tokens of Icelandic texts that have been orthographically standardized, manually lemmatized, part-of-speech tagged, and parsed into context-free derivation trees. An example of the lemmatization and part-of-speech tagging for a sentence is shown in Fig. 1. 
The dating of the text samples ranges from 1,150 CE to 2,008 CE, covering most of the history of Icelandic (from its origins in Old West Norse, to the current official language of Iceland). The corpus is divided into 61 files of similar sizes (around 18,000 words per file), each file corresponding to a single document. The documents were chosen to cover the period in a roughly uniform manner, sampling from similar genres across the periods. 2.2 Preprocessing We collapsed into a single file all documents that were dated on the same year. This left us with 44 files containing texts from distinct years. From each of the files, we discarded all tokens that contained anything but valid Icelandic alphabetic characters or the dollar sign (used for marking enclitic breaks within a word, such as the clitic determiner in krossins from the example in Fig. 1). All remaining word tokens were lower-cased, and the ‘$’ character was removed from the stem elements of broken stem plus clitic pairs (e.g., kross$ was changed into kross). 2.3 First Order Co-occurrence Vectors Ideally, for constructing co-occurrence vectors, it is best to choose as features those words with highest overall informativity, which in fact tend to be those words with the highest occurrence frequencies (Bullinaria and Levy, 2007; Bullinaria and Levy, 2012; Lowe and McDonald, 2000). In our case, however, using plain token frequencies runs the risk of creating a representational space that is strongly uninformative about particular periods in the history of the language. Instead, we used document frequencies, as these still provide a measure 2422 En armar kross-ins merkja ást við Guð og menn en armur krosur hinn merkja ást við guð og maður CONJ NS-N N-G D-G VBP N-A P NPR-A CONJ NS-A Figure 1: Tagging and lemmatization of the Old West Norse sentence (number 819) from the Íslensk Hómilíubók (“Icelandic Book of Homilies”; late XII century): En armar krossins merkja ást við Guð og menn. (“But the arms of the cross mark the love of God and man.”) of word frequency (and therefore informativity), while at the same time ensuring that those words chosen as features are most representative across the history of the language. We selected as features all word types that occurred in at least 75% of the 44 by-year files, that is, all 529 distinct (unlemmatized) word forms that had a document frequency in the corpus of at least 33 documents. For each (unlemmatized) word type (w) occurring at least three times in the whole corpus (17,741 distinct word types), we computed its co-occurrence frequency with each of the feature words (t). In this way, we obtained a matrix of 17,741 × 529 word by feature co-occurrence frequencies (f[w, t]) within a symmetrical window including the two preceding and following words.1 The plain co-occurrence matrix was converted into a matrix of word-feature pointwise mutual informations M = (mi,j), such that, mi,j = log N · f[wi, tj] (W1 −1) · f[wi] · f[tj], where N = 899,763 tokens is the total number of tokens in the corpus, W1 = 5 is the total sliding window size considered, and f[wi] and f[tj], are the overall corpus frequencies of words wi and tj, respectively. In this manner, the row Mi,· = (mi,1, . . . , mi,529) represents the contexts in which the word type wi is found across the whole corpus. 2.4 Second Order Co-occurrence Vectors The co-occurrence vectors computed above provide representations for the average contexts in which a given word type is found. 
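Concretely, the first-order computation of Section 2.3 amounts to: pick features by document frequency, count co-occurrences in a symmetric two-word window, and convert the counts to pointwise mutual information with the formula above. The following is a minimal sketch of that recipe, assuming the corpus has already been preprocessed into per-year lists of token lists as in Section 2.2; it is an illustration of the published formula (including the add-one adjustment of footnote 1), not the authors' code.

    import math
    from collections import Counter, defaultdict

    def select_features(yearly_docs, min_doc_frac=0.75):
        # features = word types occurring in at least 75% of the by-year files
        df = Counter()
        for doc in yearly_docs:                     # doc = list of token lists
            df.update({w for sent in doc for w in sent})
        min_docs = math.ceil(min_doc_frac * len(yearly_docs))
        return sorted(w for w, c in df.items() if c >= min_docs)

    def first_order_pmi(sentences, features, window=2):
        # word-by-feature PMI matrix over a symmetric window (W1 = 2*window + 1)
        feat_index = {t: j for j, t in enumerate(features)}
        f_w, f_wt, N = Counter(), defaultdict(Counter), 0
        for sent in sentences:
            N += len(sent)
            for i, w in enumerate(sent):
                f_w[w] += 1
                for k in range(max(0, i - window), min(len(sent), i + window + 1)):
                    if k != i and sent[k] in feat_index:
                        f_wt[w][sent[k]] += 1
        W1 = 2 * window + 1
        pmi = {}
        for w, ctx in f_wt.items():   # (the paper also drops types seen < 3 times)
            row = [0.0] * len(features)
            for t, c in ctx.items():
                # all counts incremented by one, per footnote 1, to avoid log 0
                row[feat_index[t]] = math.log(
                    N * (c + 1) / ((W1 - 1) * (f_w[w] + 1) * (f_w[t] + 1)))
            pmi[w] = row
        return pmi

The per-year case-prototype matrices described below are built with the same recipe, only grouping tokens by grammatical case and shrinking the window to the immediately adjacent words (W2 = 3).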
In order to represent the specific context of each word token, we used second order co-occurrence vectors (Schütze and Pedersen, 1997). These provide important information about the aspects of a context that are relevant for inflectional morphology (Moscoso del 1To avoid log 0 values, all frequency counts in this paper were increased by one. Prado Martín, 2007). The second order vectors were computed by passing a symmetrical sliding window including, for each token, the immediately preceding and following word. The vector for each token was computed as the average between the first order vectors (of Subsection 2.3) of the preceding and following words. If no first order vector was available for either the preceding or the following word, the second order vector directly corresponded to the plain first order vector of the word for which there was a first order vector. We excluded those tokens for which we had first order vectors for neither the previous nor the following word type. We computed such second order vectors for all tokens in the corpus that had been tagged for grammatical case (a total of 419,910 vectors, on average 9,453 vectors per year, of which 38.14% were nominatives, 10.91% were genitives, 26.38% were accusatives, and 24.56% were datives). 2.5 Representation of the Case Prototypes In order to represent the prototypical usages each grammatical case (i.e., nominative, genitive, accusative, and dative) in a given year, we used the first order co-occurrence technique. For each of the 44 distinct years –using the same features identified in Subsection 2.3– we computed first order co-occurrence vectors collapsing all word tokens in each grammatical case, and using a reduced window size including just the preceding and following words (i.e., W2 = 3).2 For each year (y) we obtained a 4 × 529 element matrix of co-occurrence frequencies (fy[c, t]), indicating the number of times that each case (c) 2The optimal window sizes for the first order cooccurrence vectors for words and for case prototypes were different because they were chosen to optimize different tasks. The window size for first order vectors for words were chosen to optimize the machine learning algorithm for identifying case identities of second order vectors (Section 3), whereas the first order vectors for case prototypes were optimized for clustering cases across the years (Section 4). 2423 was found to co-occur (within the specified window) with feature (t). These matrices were transformed into case to feature pointwise mutual informations, resulting in a series of 44 matrices (M[y] = (m[y]i,j) such that, m[y]i,j = log N · fy[ci, tj] (W2 −1) · f[ci] · f[tj], where N is the total number of tokens in the corpus, f[ci] is the number of instances of case ci, and f[tj] denotes the number of instances of word tj in the corpus. In this way, the rows of the M[y] matrices provided a representation of the contexts in which each grammatical case was used in each year. (a) (b) 3 2 1 0 1 2 3 4 2.5 2.0 1.5 1.0 0.5 0.0 0.5 1.0 1.5 2.0 Nominative Genitive Accusative Dative 2.0 1.5 1.0 0.5 0.0 0.5 1.0 1.5 2.0 2.5 2.0 1.5 1.0 0.5 0.0 0.5 1.0 1.5 2.0 Nominative Genitive Accusative Dative Figure 2: (a) Representation of the raw case vectors in SVD-reduced space (i.e., SVD dimension 1 vs. dimension 2). (b) Representation of the case vectors after discounting the average vector for each year (i.e., SVD dimension 1 vs. dimension 2). Fig. 
2a plots the spatial organization of the resulting vectors (after reducing to a bidimensional projection using Singular Value Decomposition; SVD). Notice that the prototypes for each case very naturally cluster together across the years. The scatter is however asymmetric, hinting at a process of change along the years common for all four cases. If we compute a yearly overall prototype vector as the average vector for the cases in each year, and we substract it from the corresponding case prototypes, we find that the case idetities become clearly differentiated in space (see Fig. 2b), demonstrating that the case prototype vectors do indeed capture the contextual properties of all four cases, which are highly distinctive. 3 Functional Drift in Icelandic Cases As was discussed in the Introduction, the inflectional paradigms marking case and number have barely –if at all– changed along the history of Icelandic. On the basis of this fact alone, one could conclude that the grammatical case system is not actually experiencing any linguistic drift, but has rather remained basically static throughout the last millennium. There is, however, another possibility. Linguistic drift could have been affecting the functions of grammatical cases in Icelandic. If this were the case, one would expect to observe a slow –constant rate– diachronic change in the contexts in which each of the four cases is used. To investigate this latter possibility, for each of the 44 years documented in the corpus, we trained a basic logistic classifier in the task of assigning grammatical case to the second order cooccurrence vectors developed in Subsection 2.4. Once each of the classifiers had been trained, we tested the classifiers’ performances on the vectors obtained from each of other 43 years on which they were not trained. On the one hand, if the functions of the cases have indeed remained constant along the history of the language, one would expect that the performance of a classifier tested on the data from a given year, should remain approximately constant when tested on vectors from all other years. If, on the other hand, the functions of Icelandic grammatical cases have been subject to linguistic drift, the irreversible and cumulative nature of the drift (Sapir, 1921/2014) implies that the classifier error should grow –if only so slightly– with each year passed. The reason for this is that the contexts in which one would use each case should be slightly different from year to year. One should then predict that the error of the classifier should depend on the temporal distance between the year of the testing vectors and that of the training ones. Furthermore, the change in error should be of a linear nature, with a very slight slope. When tested on the same years in which they had been trained, the classifiers performed rather well in inferring the case to which each of the context vectors belonged (the distribution of errors across the 44 years was well approximated by a normal distribution with a mean error of 26.67%, a standard deviation of 1.99%, and best and worst classification errors of 22.17% and 30.39%, respectively).3 3Although we chose the best model among different learning algorithms, including multiple versions of Support Vector Machines, Classification Trees, and a Softmax Classifier, we have no doubt that the learning performance can be improved upon. For our purposes, however, it was sufficient 2424 500 0 500 Temporal Distance (years) 30 35 40 45 Classification Error (%) β + = 0. 002, p + < 0. 
001, β −= −0. 002, p −< 0. 001 Figure 3: Correlation between the classifier error and the temporal distance from the year from which the training vectors were obtained to the year when the testing vectors were obtained. The solid lines plots a linear regressions. We then tested the classifiers on the vectors obtained from different years. The results are plotted in Fig. 3. The scatter plots the difference in years (i.e., the difference values are positive when the classifier was tested on vectors obtained after those used for training, and negative when testing with vectors obtained before the training ones). When testing on data different from the training sets, there is a logical loss in performance (of about 8%) from the baseline of testing on the same training set. We fitted two linear regressions, one to the positive differences and another to the negative differences (plotted by the solid lines in Fig. 3). The first thing that stands out is that the performance of the classifier is remarkably good when tested on vectors obtained at considerable temporal distances from the time when the training vectors were obtained. While the error of the classifier is of about 34% when tested on vectors from the year after or before the training vectors, the error remains at 35% for vectors originating from texts that are five centuries apart. Once again, this speaks to the remarkable conservativeness of the Icelandic language. However, these small differences are in fact reliable: There are significant slopes in both regressions (positive differences: R+ = .161, p+ < .001; negative differences: R−= −.164, p−< .001). A second remarkable thing is that both regressions are substantially symmetrical, in fact their slopes are basically identical (|β+| = |β−| = .002). This indicates that the degree to which the usages of the cases at different to have a classifier with a decent performance, as our goal was showing that the error is time-dependent. 0 200 400 600 800 1000 Cross-Entropy (nats) 28 30 32 34 36 38 40 42 Classification Error (%) β = . 002, z = 5. 009, p < . 001 0 100 200 300 400 500 600 700 800 900 Distance in Years Classification Error (%) β = . 001, z = 4. 661, p < . 001 Figure 4: Independent effects of cross-entropy (left panel) and distance in years between the training and testing sets (right panel) as estimated by the linear mixed-effects regression model. time points have diverged depends on the amount of time that has intervened, irrespective of whether it was the training or the testing set that was collected before. One could argue that the slow drift observed may not be really due to changes in the functions of the grammatical cases themselves, but just to overall changes either in the overall language, or in the very topics that are addressed (e.g., one might guess that talk of swords, slaves, and longships was more frequent in XII century Norse than it is in Modern Icelandic). To investigate this possibility we used an information-theoretical measure of the prototypicality of a set of second order vectors for a particular year (based on that used in Moscoso del Prado Martín, 2007). From the vectors of each year, irrespective of their case, we fitted a 529-dimensional Gaussian distribution (by estimating the mean vector for that year, µy, and the covariance matrix, Σy). The average inadequacy of a given set of vectors {v1, . . . 
, vn} obtained in year z to the distribution fitted to the vectors obtained in year y is measured by the crossentropy, a Monte Carlo estimator of which is given by,4 H(z, y) ≈K + 1 2 log |Σy| + 1 2n n X i=1 (vi −µy)T Σ−1 y (vi −µy), where K is a constant.5 In addition, one should also take into account the fact that, for some years, the classifier might generalize better or worse than for others (due to irrelevant idiosyncrasies of one specific text used for training), which could lead to a distortion of the results. To investigate whether, after discounting for the inadequacy of the vectors to the overall distribution of those in which the classifier was trained, 4Assuming Σy is definite positive so that its inverse exists. 5K = 529 2 log 2π. 2425 there was still evidence for drift in the functions of the cases, while also accounting for the different generalization powers of the classifiers, we fitted a linear mixed-effects model to the classifier errors, including fixed-effect predictors of the cross-entropy described above, and the absolute value of the difference in years between the training set and testing set dates (as indicated above, the effects were equivalent for positive and negative values in years), and the dating of the testing set as a random effect. As expected, we found that the cross-entropies had a significant positive effect (β = .002, z = 5.009, p < .001; left panel if Fig. 4), indicating that the performances of the models were indeed worse for less adequate sets of testing vectors, irrespective of any aspect of grammatical case. However –crucially– even after considering the effect of cross-entropy, there remained a significant positive effect of the temporal distance (β = .001, z = 4.661, p < .001; right panel if Fig. 4).6 This result therefore supports the hypothesis that the function of grammatical case has been subject to a slight constant change during the history of the language: a functional drift. 4 Functional Chain Shifts in Case In the previous section we have demonstrated that the functions of Icelandic cases have been subject to slow linguistic drift. The question now arises of whether this drift is purely random, or rather it has some degree of directionality arising from endogenous linguistic factors. It is possible that changes in the functions of some cases caused changes in the functions of others. We investigate this possibility using the notion of Granger-causality 4.1 Granger-causality Granger-causality (Granger, 1969) is a powerful technique for assessing whether one time series can be said to be the cause of another one. The basic idea is that one time series x is said to Grangercause another series y if the past of series x predicts the future of series y over and above any 6The estimated covariance matrices were not definite positive for two of the years, which were excluded from the analyses. In addition, in 552 out of the remaining 1,849 estimates, the cross-entropy took unusually large values, orders of magnitude larger than the rest (likely reflecting inadequacy of the multidimensional Gaussian approximation for these cases), which distorted the effect estimates. The analyses reported exclude these 552 points. However, keeping these outlying values in the regression, both key effects remained significant, but the slope estimates were less trustworthy. predictive power that can be found on y’s own past. 
This idea has proven of great value to investigate the causal connections between economic variables, sequences of behavioral responses, neural spikes, or electroencephalographic potentials. Often, the technique is used to reconstruct directional networks of variables and processes that have causal connections. If x and y are stationary time sequences on time (τ), in order to test whether x Granger-causes y, one begins by fitting autoregressive models (AR) that predict the values of y from its own n values lagged into the past. This consists on finding values a1, a2, ..., an that minimize the error ε in the equation, y[τ] = a0 + past of y z }| { a1y[τ −1] + a2y[τ −2] + . . . + any[τ −n] +ε[τ]. One then augments the autoregression by including m lagged values of x, with additional parameters b1, ..., bm to be fitted, y[τ] =a0 + past of y z }| { a1y[τ −1] + a2y[τ −2] + . . . + any[τ −n] + + b1x[τ −1] + b2x[τ −2] + . . . + bmx[τ −m] | {z } past of x +ε[τ]. where the ε sequences are uncorrelated (white) gaussian noises, reflecting the fully random or chaotic part of the system, which cannot be predicted from its past (i.e., the error, that is termed by some the ‘creativity’ of the model). If the second regression is a significant improvement over the first, then it can be said that x Granger-causes y, indicating that past values of x significantly predict future values of y over and above any predictive power of y’s own past values. This is tested using an F-test, with the null hypothesis being that the second model does not improve on the first one. The selection of the autoregressive order parameters n and m is achieved by model comparisons using information criteria. When one is interested in reconstructing a network of causal relations between multiple variables, one can use a mutivariate generalization of the AR model, the vector autoregressive model (VAR). The VAR model consists of mutiple AR equations (one for each variable in the model). If we consider an autoregressive order of one (i.e., m = n = 1), when we are simultaneously considering p variables Y = {y1, . . . , yp}, the VAR[1] model to be fitted can be expressed in matrix no2426 tation as,    y1[τ] ... yp[τ]   =    a1 ... ap   +    A1,1 . . . A1,p ... ... ... Ap,1 . . . Ap,p       y1[τ −1] ... yp[τ −1]   +    ε1[τ] ... εp[τ]   . This model enables testing for Granger-causality between any pair of variables yi ∈Y and yj ∈ Y, after partialling out the possible confounding effects of {yt, t ̸= i, t ̸= j, 1 ≤t ≤p}. yj is said to Granger-cause yi if the model coefficient Ai,j is significantly different than zero, and the reverse holds if Aj,i significantly different than zero (i.e., yi Granger-causes yj). 4.2 Granger-causality in Case Drift To investigate whether the pattern of change in one case triggers (i.e., Granger-cause) the pattern of change in another, we made use of the prototype vectors for the cases in each of the years developed in Subsection 2.5. As a measure of the amount of contextual change for a given case in a given year, we computed the city-block distances between the case prototypes from each year to the next available time point, which are plotted in Fig. 5a. Notice that there is an overall pattern of change equally affecting all cases, and the changes are therefore strongly correlated. 
This reflects the overall pattern of historical changes affecting Icelandic as a whole, as well as changes in the topics that would be discussed in the different time periods, as was documented in Subsection 2.5 and Section 3. Considering the changes in each case as a component in a four-dimensional vector, the modulus of this vector (plotted by the dashed orange line in Fig. 5a) gives the overall magnitude of the changes that are unspecific to the cases themselves. To remove this component from the changes, we fitted a linear regression to the sequence of changes in each case, using the overall pattern of change as a predictor. Fig. 5b plots the resulting residuals, indicating the amount of change that was specific to each case, over and above the overall pattern.7 A precondition for testing for Granger-causality is that the time series under consideration are stationary. In our case, the series depicted in Fig. 5b are significantly non-stationary; they exhibit, for instance, significant temporal trends. In order to remove the non-stationarities, the series were differentiated (i.e., we considered the difference between each two consecutive points). The result of 7Negative values in this figure indicate changing less than the average, rather than ‘negative change’. this differentiation, plotted in Fig. 5c, removed the non-stationary trends from the original series. Table 1: Results of the Granger-causality analyses. Causality directions that remained significant after FDR correction are highlighted in bold. Direction F[1, 144] p p (FDR) Direction F[1, 144] p p (FDR) Nom. →Gen. 2.614 .108 .184 Gen. →Nom. 5.618 .019 .046 Nom. →Dat. 8.295 .005 .018 Dat. →Nom. 3.834 .052 .104 Gen. →Acc. .566 .453 .454 Acc. →Gen. 2.408 .123 .184 Acc. →Nom. 6.802 .010 .030 Nom. →Acc. .644 .424 .454 Acc. →Dat. 10.249 .002 .018 Dat. →Acc. .563 .454 .454 Dat. →Gen. 1.354 .246 .329 Gen. →Dat. 9.034 .003 .018 We fitted a VAR[n] model to the four differentiated time-series. The autoregressive order found to maximize Akaike’s Information Criterion (Akaike, 1974) was n = 1.8 The F statistics and significance values for the coefficients in the resulting VAR[1] model are given in Tab. 1. In order to reconstruct the causality network, we also need to consider that we started out with only very vague predictions on the possible directions of causality. As the model involved twelve separate p-value tests, the p-value estimates need to be corrected for multiple comparisons. This correction was done using the false discovery rate (FDR) method (Benjamini and Hochberg, 1995), resulting in the corrected p-value estimates listed in the last column of Tab. 1. The Granger-causality analysis leads us to reconstruct the causality network depicted in Fig. 6. It appears that the drift in the functions of Icelandic case is not plainly random. Instead, we find evidence that changes in the functions of the accusatives and genitives have had a domino effect, triggering further changes in the functions of nominatives. Finally, changes in all other three cases result in changes in the functions of the dative. In summary, the changes observed are consistent with the idea discussed in the Introduction of a functional chain shift affecting the morphological case system of Icelandic. 5 Discussion We have presented evidence for a steady drift –of the precise kind advocated by Sapir (1921/2014)– even in a language as remarkably conservative as is Icelandic. 
This supports the claim that human languages are in a state of ‘perpetual motion’ (Beckner et al., 2009; Dediu et al., 2013; Hawkins and Gell-Mann, 1992; Hopper, 1987; 8In fact n = 1 was also found to maximize both Akaike’s Final Prediction Error and Hannan-Quinn Criteria. 2427 (a) (b) (c) 1200 1300 1400 1500 1600 1700 1800 1900 2000 Year (CE) 100 200 300 400 500 600 700 800 900 1000 Change in Usage nominatives genitives accusatives datives Overall pattern 1200 1300 1400 1500 1600 1700 1800 1900 2000 Year (CE) 100 80 60 40 20 0 20 40 60 80 Residual Change in Usage nominatives genitives accusatives datives 1300 1400 1500 1600 1700 1800 1900 Year (CE) 80 60 40 20 0 20 40 60 80 100 Difference in Residual Change nominatives genitives accusatives datives Figure 5: (a) Overall value of the city-block distances between the prototypical case vectors for consecutive years. (b) Residual value of the distances specific to each case after residualizing the overall pattern of change. (c) Differentiated values of the residualized distances, removing non-stationarities. Accusative p=.030 ' p=.018 # Nominative p=.018 / Dative Genitive p=.046 7 p=.018 ; Figure 6: Reconstructed network of Grangercausal connections between diachronic changes in case functions. The p-values indicated on the causal arrows are FDR-corrected. Larsen-Freeman and Cameron, 2007; Niyogi and Berwick, 1997). Although we have found that functional change in Icelandic case has proceeded at a constant rate, we do not think, as argued by Nettle (1999a, 1999b), that this rate of change needs to be constant across languages. There are strong arguments suggesting that in other languages such rates might be different (Wichmann, 2008; Wichmann and Holman, 2009). The crucial innovation presented in this paper is the reconstruction of the causality network linking the changes in the four cases. Previous applications of the notion of Granger-causality to diachronic language change (Moscoso del Prado Martín, 2014) have focused on the macroscopic relation between sudden changes in syntax and morphology. Here, we have demonstrated that Granger-causality can also be used to reconstruct detailed networks of slow changes within the morphological system, at a more microscopic scale. The techniques developed offer a mechanism for investigating subtle changes in the functions of linguistic constructions, and the causal relations between them. Traditionally, historical linguists have focused on ‘narrative’ accounts of the the chains of change within a language. Although such type of accounts are extremely useful, the often very subtle changes in usage that can occur from one time-point to another cannot always be described with such clearcut patterns. Nevertheless, we have shown that those very small changes do accumulate in meaningful ways. An important question addressed by this study is the presence of endogenous causal chains in language change. Lupyan and Dale (2015) argue that languages are constrained by their ‘ecological niches’, the communities in which they are spoken, and the extralinguistic properties of those niches can trigger exogenous change in the morphology of the languages. Following on Lupyan and Dale’s ecosystem analogy, one should see that, as well as being part of ecosystems, languages are also ecosystems in themselves, in a nesting similar to that found in natural ecosystems (i.e., an animal is part of a particular ecosystem, and its body is an ecosystem in itself). 
Sounds, words and constructions have their own ecological niches within the language, and disturbances in the system can trigger cascaded changes, leading to readaptation (evolution) of the constructions. This contrasts with the view of changes in the function of Icelandic cases expressed by Eythórsson (2000). He showed that verbs whose arguments exhibit ‘nominative sickness’ and ‘accusative sickness’ tend to be clustered along certain syntactic and semantic lines. That it is in these particular niches that accusatives and datives ended up settling is not, however, the cause of the language changes. As we have shown, the case system was subject to a string of cascaded pressures. That the cases ended 2428 up settling in new syntactico-semantic niches was the result, rather than the cause of the changes. References Hirotogu Akaike. 1974. A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19:716–723. Jóhanna Barðdal and Thórhallur Eythórsson. 2003. The change that never happened: the story of oblique subjects. Journal of Linguistics, 39:439– 472. Jóhanna Barðdal. 2011. The rise of dative substitution in the history of Icelandic. Lingua, 121:60–79. Clay Beckner, Nick C. Ellis, Richard Blythe, John Holland, Joan Bybee, Jinyun Ke, Morten H. Christensen, Diane Larsen-Freeman, William Croft, and Tom Schoenemann. 2009. Language is a complex adaptive system: Position paper. Language Learning, 59:1–26. Yoav Benjamini and Yosef Hochberg. 1995. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B, 57:289–300. Theresa Biberauer and Ian Roberts. 2008. Cascading parameter changes: internally driven change in Middle and Early Modern English. In Thórhallur Eythórsson, editor, Grammatical Change and Linguistic Theory: The Rosendal Papers, pages 79– 114. John Benjamins, Philadelphia, PA. John A. Bullinaria and Joseph P. Levy. 2007. Extracting semantic representations from word cooccurrence statistics: A computational study. Behavior Research Methods, 39:510–526. John A. Bullinaria and Joseph P. Levy. 2012. Extracting semantic representations from word cooccurrence statistics: Stop-lists, stemming and SVD. Behavior Research Methods, 44:890–907. William Croft. 2000. Explaining Language Change: An Evolutionary Approach. Longman, London, England. Dan Dediu, Michael Cysouw, Stephen C. Levinson, Andrea Baronchelli, Morten H. Christensen, William Croft, Nicholas Evans, Simon Garrod, Rusell D. Gray, Anne Kandler, and Elena Lieven. 2013. Cultural evolution of language. In Peter J. Richerson and Morten H. Christensen, editors, Cultural Evolution: Society, Technology, Language, and Religion, pages 303–331. MIT Press, Cambridge, MA. Thórhallur Eythórsson. 2000. Dative vs. nominative: changes in quirky subjects in Icelandic. Leeds Working Papers in Linguistics, 8:27–44. Adam Fisiak, editor. 1984. Historical Syntax. de Gruyter, Berlin. Clive W. J. Granger. 1969. Investigating causal relations by econometric models and cross-spectral methods. Econometrica, 37:424–438. John Hawkins and Murray Gell-Mann, editors. 1992. The Evolution of Human Languages. Santa Fe Institute Studies in the Sciences of Complexity. Addison Wesley, Reading, MA. Paul J. Hopper. 1987. Emergent grammar. Proceedings of the Berkeley Linguistic Society, 13:139–157. Diane Larsen-Freeman and Lynne Cameron. 2007. Complex Systems and Applied Linguistics. Oxford University Press, Oxford, UK. David Lightfoot, editor. 2002. 
Syntactic Effects of Morphological Change. Oxford University Press, Oxford, UK. Will Lowe and Scott McDonald. 2000. The direct route: Mediated priming in semantic space. In Lila Gleitman and Aravind K. Joshi, editors, Proceedings of the XXII Annual Conference of the Cognitive Science Society, pages 806–811, Austin, TX. Cognitive Science Society. Gary Lupyan and Rick Dale. 2015. The role of adaptation in understanding linguistic diversity. In Rik De Busser and Randy J. LaPolla, editors, Language structure and environment: Social, cultural, and natural factors, pages 289–316. John Benjamins Publishing Company, Philadelphia, PA. André Martinet. 1952. Function, structure, and sound change. Word, 8:1–32. Fermín Moscoso del Prado Martín. 2007. Cooccurrence and the effect of inflectional paradigms. Lingue e Linguaggio, 6:247–263. Fermín Moscoso del Prado Martín. 2014. Grammatical change begins within the word: Causal modeling of the co-evolution of Icelandic morphology and syntax. In Paul Bello, Marcello Guarini, Marjorie McShane, and Brian Scassellati, editors, Proceedings of the XXXVII Annual Conference of the Cognitive Science Society, pages 2657–2662, Austin, TX. Cognitive Science Society. Daniel Nettle. 1999a. Using social impact theory to simulate language change. Lingua, 108:95–117. Daniel Nettle. 1999b. Is the rate of linguistic change constant? Lingua, 108:119–136. Partha Niyogi and Robert C. Berwick. 1997. A dynamical systems model for language change. Complex Systems, 11:161–204. Edward Sapir. 2014. Language: An Introduction to the Study of Speech. Dover Publications, Mineola, NY. (Original work published 1921). 2429 Hinrich Schütze and Jan O. Pedersen. 1997. A cooccurrence-based thesaurus and two applications to information retrieval. Information Processing & Management, 33:307–318. Joel C. Wallenberg, Anton Karl Ingason, Einar Freyr Sigurðsson, and Eiríkur Rögnvaldsson. 2011. Icelandic parsed historical corpus (IcePaHC – v. 0.9). Søren Wichmann and Eric W. Holman. 2009. Population size and rates of language change. Human Biology, 81:259–274. Søren Wichmann. 2008. The emerging field of language dynamics. Language & Linguistics Compass, 2:1294–1297. Henri Wittmann. 1983. Les réactions en chaîne en morphologie diachronique (“Chain reactions in diachronic morphology”). In Actes du colloque de la Societé Internationale de Linguistique Fonctionnelle, volume 10, pages 285–292, Québec, Canada. Université Laval. Sewall Wright. 1929. The evolution of dominance. The American Naturalist, 63:556–561. Sewall Wright. 1955. Classification of the factors of evolution. Cold Spring Harbor Symposia on Quantitative Biology, 20:16–24. 2430
2016
229
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 236–246, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Intrinsic Subspace Evaluation of Word Embedding Representations Yadollah Yaghoobzadeh and Hinrich Sch¨utze Center for Information and Language Processing University of Munich, Germany [email protected] Abstract We introduce a new methodology for intrinsic evaluation of word representations. Specifically, we identify four fundamental criteria based on the characteristics of natural language that pose difficulties to NLP systems; and develop tests that directly show whether or not representations contain the subspaces necessary to satisfy these criteria. Current intrinsic evaluations are mostly based on the overall similarity or full-space similarity of words and thus view vector representations as points. We show the limits of these point-based intrinsic evaluations. We apply our evaluation methodology to the comparison of a count vector model and several neural network models and demonstrate important properties of these models. 1 Introduction Distributional word representations or embeddings are currently an active area of research in natural language processing (NLP). The motivation for embeddings is that knowledge about words is helpful in NLP. Representing words as vocabulary indexes may be a good approach if large training sets allow us to learn everything we need to know about a word to solve a particular task; but in most cases it helps to have a representation that contains distributional information and allows inferences like: “above” and “below” have similar syntactic behavior or “engine” and “motor” have similar meaning. Several methods have been introduced to assess the quality of word embeddings. We distinguish two different types of evaluation in this paper: (i) extrinsic evaluation evaluates embeddings in an NLP application or task and (ii) intrinsic evaluation tests the quality of representations independent of a specific NLP task. Each single word is a combination of a large number of morphological, lexical, syntactic, semantic, discourse and other features. Its embedding should accurately and consistently represent these features, and ideally a good evaluation method must clarify this and give a way to analyze the results. The goal of this paper is to build such an evaluation. Extrinsic evaluation is a valid methodology, but it does not allow us to understand the properties of representations without further analysis; e.g., if an evaluation shows that embedding A works better than embedding B on a task, then that is not an analysis of the causes of the improvement. Therefore, extrinsic evaluations do not satisfy our goals. Intrinsic evaluation analyzes the generic quality of embeddings. Currently, this evaluation mostly is done by testing overall distance/similarity of words in the embedding space, i.e., it is based on viewing word representations as points and then computing full-space similarity. The assumption is that the high dimensional space is smooth and similar words are close to each other. Several datasets have been developed for this purpose, mostly the result of human judgement; see (Baroni et al., 2014) for an overview. We refer to these evaluations as point-based and as full-space because they consider embeddings as points in the space – sub-similarities in subspaces are generally ignored. 
Point-based intrinsic evaluation computes a score based on the full-space similarity of two words: a single number that generally does not say anything about the underlying reasons for a lower or higher value of full-space similarity. This makes it hard to interpret the results of point-based evaluation and may be the reason that contradictory results have been published; e.g., based on 236 point-based evaluation, some papers have claimed that count-based representations perform as well as learning-based representations (Levy and Goldberg, 2014a). Others have claimed the opposite (e.g., Mikolov et al. (2013), Pennington et al. (2014), Baroni et al. (2014)). Given the limits of current evaluations, we propose a new methodology for intrinsic evaluation of embeddings by identifying generic fundamental criteria for embedding models that are important for representing features of words accurately and consistently. We develop corpus-based tests using supervised classification that directly show whether the representations contain the information necessary to meet the criteria or not. The fine-grained corpus-based supervision makes the sub-similarities of words important by looking at the subspaces of word embeddings relevant to the criteria, and this enables us to give direct insights into properties of representation models. 2 Related Work Baroni et al. (2014) evaluate embeddings on different intrinsic tests: similarity, analogy, synonym detection, categorization and selectional preference. Schnabel et al. (2015) introduce tasks with more fine-grained datasets. These tasks are unsupervised and generally based on cosine similarity; this means that only the overall direction of vectors is considered or, equivalently, that words are modeled as points in a space and only their fullspace distance/closeness is considered. In contrast, we test embeddings in a classification setting and different subspaces of embeddings are analyzed. Tsvetkov et al. (2015) evaluate embeddings based on their correlations with WordNetbased linguistic embeddings. However, correlation does not directly evaluate how accurately and completely an application can extract a particular piece of information from an embedding. Extrinsic evaluations are also common (cf. (Li and Jurafsky, 2015; K¨ohn, 2015; Lai et al., 2015)). Li and Jurafsky (2015) conclude that embedding evaluation must go beyond human-judgement tasks like similarity and analogy. They suggest to evaluate on NLP tasks. K¨ohn (2015) gives similar suggestions and also recommends the use of supervised methods for evaluation. Lai et al. (2015) evaluate embeddings in different tasks with different setups and show the contradictory results of embedding models on different tasks. Idiosyncrasies of different downstream tasks can affect extrinsic evaluations and result in contradictions. 3 Criteria for word representations Each word is a combination of different properties. Depending on the language, these properties include lexical, syntactic, semantic, world knowledge and other features. We call these properties facets. The ultimate goal is to learn representations for words that accurately and consistently contain these facets. Take the facet gender (GEN) as an example. We call a representation 100% accurate for GEN if information it contains about GEN is always accurate; we call the representation 100% consistent for GEN if the representation of every word that has a GEN facet contains this information. 
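The GEN example also makes concrete why the full-space, point-based view can be misleading: two word vectors can agree perfectly on the few dimensions that happen to encode a facet such as GEN while being nearly orthogonal overall. The toy computation below is only an illustration of this gap, not part of the evaluation itself; which dimensions stand in for the "facet subspace" is an arbitrary assumption.

    import numpy as np

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    rng = np.random.default_rng(1)
    dim = 50
    facet_dims = np.arange(5)           # pretend these dimensions encode the GEN facet
    u = rng.normal(size=dim)
    v = rng.normal(size=dim)
    v[facet_dims] = u[facet_dims]       # the two words agree exactly on that facet

    print(cosine(u, v))                               # full-space similarity: low
    print(cosine(u[facet_dims], v[facet_dims]))       # facet-subspace similarity: 1.0

A cosine-based test would call u and v dissimilar, whereas a classifier that only needs the facet dimensions, which is the setting of the supervised tests introduced below, would recover the shared information without difficulty.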
We now introduce four important criteria that a representation must satisfy to represent facets accurately and consistently. These criteria are applied across different problems that NLP applications face in the effective use of embeddings. Nonconflation. A word embedding must keep the evidence from different local contexts separate – “do not conflate” – because each context can infer specific facets of the word. Embeddings for different word forms with the same stem, like plural and singular forms or different verb tenses, are examples vulnerable to conflation because they occur in similar contexts. Robustness against sparseness. One aspect of natural language that poses great difficulty for statistical modeling is sparseness. Rare words are common in natural language and embedding models must learn useful representations based on a small number of contexts. Robustness against ambiguity. Another central problem when processing words in NLP is lexical ambiguity (Cruse, 1986; Zhong and Ng, 2010). Polysemy and homonymy of words can make it difficult for a statistical approach to generalize and infer well. Embeddings should fully represent all senses of an ambiguous word. This criterion becomes more difficult to satisfy as distributions of senses become more skewed, but a robust model must be able to overcome this. Accurate and consistent representation of multifacetedness. This criterion addresses settings with large numbers of facets. It is based on the following linguistic phenomenon, a phenomenon that occurs frequently crosslinguistically 237 (Comrie, 1989). (i) Words have a large number of facets, including phonetic, morphological, syntactic, semantic and topical properties. (ii) Each facet by itself constitutes a small part of the overall information that a representation should capture about a word. 4 Experimental setup and results We now design experiments to directly evaluate embeddings on the four criteria. We proceed as follows. First, we design a probabilistic context free grammar (PCFG) that generates a corpus that is a manifestation of the underlying phenomenon. Then we train our embedding models on the corpus. The embeddings obtained are then evaluated in a classification setting, in which we apply a linear SVM (Fan et al., 2008) to classify embeddings. Finally, we compare the classification results for different embedding models and analyze and summarize them. Selecting embedding models. Since this paper is about developing a new evaluation methodology, the choice of models is not important as long as the models can serve to show that the proposed methodology reveals interesting differences with respect to the criteria. On the highest level, we can distinguish two types of distributional representations. Count vectors (Sahlgren, 2006; Baroni and Lenci, 2010; Turney and Pantel, 2010) live in a highdimensional vector space in which each dimension roughly corresponds to a (weighted) count of cooccurrence in a large corpus. Learned vectors are learned from large corpora using machine learning methods: unsupervised methods such as LSI (e.g., Deerwester et al. (1990), Levy and Goldberg (2014b)) and supervised methods such as neural networks (e.g., Mikolov et al. (2013)) and regression (e.g., Pennington et al. (2014)). Because of the recent popularity of learning-based methods, we consider one count-based and five learning-based distributional representation models. 
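A minimal sketch of that pipeline is given below, with gensim's skip-gram and scikit-learn's LinearSVC (a liblinear wrapper) standing in for the paper's own toolchain. The corpus argument is whatever list of tokenised sentences a given PCFG produces; samplers for the individual grammars are sketched in the following subsections.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.svm import LinearSVC

def probe(corpus, labels, train_words, test_words, dim=25):
    """corpus: list of token lists from a PCFG; labels: word -> +1/-1 for the criterion under test."""
    emb = Word2Vec(corpus, vector_size=dim, window=1, min_count=1, sg=1, epochs=20)
    X_tr = np.array([emb.wv[w] for w in train_words])
    y_tr = np.array([labels[w] for w in train_words])
    X_te = np.array([emb.wv[w] for w in test_words])
    y_te = np.array([labels[w] for w in test_words])
    clf = LinearSVC().fit(X_tr, y_tr)              # one linear SVM per embedding model
    return float((clf.predict(X_te) == y_te).mean())
```

Swapping the skip-gram model for any other embedding model (or for a PPMI count matrix) leaves the probe itself unchanged, which is what makes the comparison across models direct.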
The learning-based models are: (i) vLBL (henceforth: LBL) (vectorized log-bilinear language model) (Mnih and Kavukcuoglu, 2013), (ii) SkipGram (henceforth: SKIP) (skipgram bagof-word model), (iii) CBOW (continuous bag-ofword model (Mikolov et al., 2013), (iv) Structured SkipGram (henceforth SSKIP), (Ling et al., 2015) and CWindow (henceforth CWIN) (contin1 P(aV b|S) = 1/4 2 P(bV a|S) = 1/4 3 P(aWa|S) = 1/8 4 P(aWb|S) = 1/8 5 P(bWa|S) = 1/8 6 P(bWb|S) = 1/8 7 P(vi|V ) = 1/5 0 ≤i ≤4 8 P(wi|W) = 1/5 0 ≤i ≤4 Figure 1: Global conflation grammar. Words vi occur in a subset of the contexts of words wi, but the global count vector signatures are the same. uous window model) (Ling et al., 2015). These models learn word embeddings for input and target spaces using neural network models. For a given context, represented by the input space representations of the left and right neighbors ⃗vi−1 and ⃗vi+1, LBL, CBOW and CWIN predict the target space ⃗vi by combining the contexts. LBL combines ⃗vi−1 and ⃗vi+1 linearly with position dependent weights and CBOW (resp. CWIN) combines them by adding (resp. concatenation). SKIP and SSKIP predict the context words vi−1 or vi+1 given the input space ⃗vi. For SSKIP, context words are in different spaces depending on their position to the input word. In summary, CBOW and SKIP are learning embeddings using bag-of-word (BoW) models, but the other three, CWIN, SSKIP and LBL, are using position dependent models. We use word2vec1 for SKIP and CBOW, wang2vec2 for SSKIP and CWIN, and Lai et al. (2015)’s implementation3 for LBL. The count-based model is position-sensitive PPMI, Levy and Goldberg (2014a)’s explicit vector space representation model.4 For a vocabulary of size V , the representation ⃗w of w is a vector of size 4V , consisting of four parts corresponding to the relative positions r ∈{−2, −1, 1, 2} with respect to occurrences of w in the corpus. The entry for dimension word v in the part of ⃗w corresponding to relative position r is the PPMI (positive pointwise mutual information) weight of w and v for that relative position. The four parts of the vector are length normalized. In this paper, we use only two relative positions: r ∈{−1, 1}, so each ⃗w has two parts, corresponding to immediate left and right neighbors. 1code.google.com/archive/p/word2vec 2github.com/wlin12/wang2vec 3github.com/licstar/compare 4bitbucket.org/omerlevy/hyperwords 238 4.1 Nonconflation Grammar. The PCFG grammar shown in Figure 1 generates vi words that occur in two types of contexts: a-b (line 1) and b-a (line 2); and wi words that also occur in these two contexts (lines 4 and 5), but in addition occur in a-a (line 3) and b-b (line 6) contexts. As a result, the set of contexts in which vi and wi occur is different, but if we simply count the number of occurrences in the contexts, then vi and wi cannot be distinguished. Dataset. We generated a corpus of 100,000 sentences. Words that can occur in a-a and b-b contexts constitute the positive class, all other words the negative class. The words v3, v4, w3, w4 were assigned to the test set, all other words to the training set. Results. We learn representations of words by our six models and train one SVM per model; it takes a word representation as input and outputs +1 (word can occur in a-a/b-b) or -1 (it cannot). The SVMs trained on PPMI and CBOW representations assigned all four test set words to the negative class; in particular, w3 and w4 were incorrectly classified. 
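For concreteness, here is a sketch of a sampler for the Figure 1 grammar used to build this dataset (the token spellings a, b, v0-v4, w0-w4 are illustrative). Counting the contexts of one vi and one wi shows the conflation problem directly: the joint (left, right) context sets differ, but the per-position counts that an aggregation model records are essentially identical.

```python
import random
from collections import Counter

def sample_sentence():
    r = random.random()
    v = "v%d" % random.randrange(5)
    w = "w%d" % random.randrange(5)
    if r < 0.250:  return ["a", v, "b"]    # P(aVb|S) = 1/4
    if r < 0.500:  return ["b", v, "a"]    # P(bVa|S) = 1/4
    if r < 0.625:  return ["a", w, "a"]    # P(aWa|S) = 1/8
    if r < 0.750:  return ["a", w, "b"]    # P(aWb|S) = 1/8
    if r < 0.875:  return ["b", w, "a"]    # P(bWa|S) = 1/8
    return ["b", w, "b"]                   # P(bWb|S) = 1/8

corpus = [sample_sentence() for _ in range(100000)]

def profile(word):
    sents = [s for s in corpus if s[1] == word]
    joint = Counter((s[0], s[2]) for s in sents)   # full context pairs: differ for v3 vs w3
    left  = Counter(s[0] for s in sents)           # per-position marginals: nearly identical
    right = Counter(s[2] for s in sents)
    return joint, left, right

print(profile("v3"))   # joint contexts only (a,b) and (b,a)
print(profile("w3"))   # joint contexts also include (a,a) and (b,b); same left/right marginals
```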
Thus, the accuracy of classification for these models (50%) was not better than random. The SVMs trained on LBL, SSKIP, SSKIP and CWIN representations assigned all four test set words to the correct class: v3 and v4 were assigned to the negative class and w3 and w4 were assigned to the positive class. Discussion. The property of embedding models that is relevant here is that PPMI is an aggregation model, which means it calculates aggregate statistics for each word and then computes the final word embedding from these aggregate statistics. In contrast, all our learning-based models are iterative models: they iterate over the corpus and each local context of a word is used as a training instance for learning its embedding. For iterative models, it is common to use composition of words in the context, as in LBL, CBOW and CWIN. Non-compositional iterative models like SKIP and SSKIP are also popular. Aggregation models can also use composite features from context words, but these features are too sparse to be useful. The reason that the model of Agirre et al. (2009) is rarely used is precisely its inability to deal with sparseness. All widely used distributional models employ individual word occurrences as basic features. The bad PPMI results are explained by the fact 1 P(AV B|S) = 1/2 2 P(CWD|S) = 1/2 3 P(ai|A) = 1/10 0 ≤i ≤9 4 P(bi|B) = 1/10 0 ≤i ≤9 5 P(ci|C) = 1/10 0 ≤i ≤9 6 P(di|D) = 1/10 0 ≤i ≤9 7 P(vi|V ) = 1/10 0 ≤i ≤9 8 P(wi|W) = 1/10 0 ≤i ≤9 9 L′ = L(S) 10 ∪{aiuibi|0 ≤i ≤9} 11 ∪{cixidi|0 ≤i ≤9} Figure 2: In language L′, frequent vi and rare ui occur in a-b contexts; frequent wi and rare xi occur in c-d contexts. Word representations should encode possible contexts (a-b vs. c-d) for both frequent and rare words. that it is an aggregation model: the PPMI model cannot distinguish two words with the same global statistics – as is the case for, say, v3 and w3. The bad result of CBOW is probably connected to its weak (addition) composition of context, although it is an iterative compositional model. Simple representation of context words with iterative updating (through backpropagation in each training instance), can influence the embeddings in a way that SKIP and SSKIP get good results, although they are non-compositional. As an example of conflation occurring in the English Wikipedia, consider this simple example. We replace all single digits by “7” in tokenization. We learn PPMI embeddings for the tokens and see that among the one hundred nearest neighbors of “7” are the days of the week, e.g., “Friday”. As an example of a conflated feature consider the word “falls” occurring immediately to the right of the target word. The weekdays as well as single digits often have the immediate right neighbor “falls” in contexts like “Friday falls on a public holiday” and “2 out of 3 falls match” – tokenized as “7 out of 7 falls match” – in World Wrestling Entertainment (WWE). The left contexts of “Friday” and “7” are different in these contexts, but the PPMI model does not record this information in a way that would make the link to “falls” clear. 4.2 Robustness against sparseness Grammar. The grammar shown in Figure 2 generates frequent vi and rare ui in a-b contexts (lines 1 and 9); and frequent wi and rare xi in c-d contexts (lines 2 and 10). The language generated by the PCFG on lines 1–8 is merged on lines 9–11 with the ten contexts a0u0b0 ... a9u9b9 (line 9) 239 and the ten contexts c0x0d0 . . . 
c9x9d9 (line 10); that is, each of the ui and xi occurs exactly once in the merged language L′, thus modeling the phenomenon of sparseness. Dataset. We generated a corpus of 100,000 sentences using the PCFG (lines 1–8) and added the 20 rare sentences (lines 9–11). We label all words that can occur in c-d contexts as positive and all other words as negative. The singleton words ui and xi were assigned to the test set, all other words to the training set. Results. After learning embeddings with different models, the SVM trained on PPMI representations assigned all twenty test words to the negative class. This is the correct decision for the ten ui (since they cannot occur in a c-d context), but the incorrect decision for the xi (since they can occur in a c-d context). Thus, the accuracy of classification was 50% and not better than random. The SVMs trained on learning-based representations classified all twenty test words correctly. Discussion. Representations of rare words in the PPMI model are sparse. The PPMI representations of the ui and xi only contain two nonzero entries, one entry for an ai or ci (left context) and one entry for a bi or di (right context). Given this sparseness, it is not surprising that representations are not a good basis for generalization and PPMI accuracy is random. In contrast, learning-based models learn that the ai, bi, ci and di form four different distributional classes. The final embeddings of the ai after learning is completed are all close to each other and the same is true for the other three classes. Once the similarity of two words in the same distributional class (say, the similarity of a5 and a7) has been learned, the contexts for the ui (resp. xi) look essentially the same to embedding models as the contexts of the vi (resp. wi). Thus, the embeddings learned for the ui will be similar to those learned for the vi. This explains why learning-based representations achieve perfect classification accuracy. This sparseness experiment highlights an important difference between count vectors and learned vectors. Count vector models are less robust in the face of sparseness and noise because they base their representations on individual contexts; the overall corpus distribution is only weakly taken into account, by way of PPMI weighting. In contrast, learned vector models make much better use of the overall corpus distri1 P(AV1B|S) =10/20 2 P(CW1D|S)=9/20 3 P(CW2D|S)=β·1/20 4 P(AW2B|S) =(1 −β)·1/20 5 P(ai|A) =1/10 0 ≤i ≤9 6 P(bi|B) =1/10 0 ≤i ≤9 7 P(ci|C) =1/10 0 ≤i ≤9 8 P(di|D) =1/10 0 ≤i ≤9 9 P(vi|V1) =1/50 0 ≤i ≤49 10 P(wi|W1) =1/45 5 ≤i ≤49 11 P(wi|W2) =1/5 0 ≤i ≤4 Figure 3: Ambiguity grammar. vi and w5 . . . w49 occur in a-b and c-d contexts only, respectively. w0 . . . w4 are ambiguous and occur in both contexts. bution and they can leverage second-order effects for learning improved representations. In our example, the second order effect is that the model first learns representations for the ai, bi, ci and di and then uses these as a basis for inferring the similarity of ui to vi and of xi to wi. 4.3 Robustness against ambiguity Grammar. The grammar in Figure 3 generates two types of contexts that we interpret as two different meanings: a-b contexts (lines 1,4) and c-d contexts (lines 2, 3). vi occur only in a-b contexts (line 1), w5 . . . w49 occur only in c-d contexts (line 2); thus, they are unambiguous. w0 . . . w4 are ambiguous and occur with probability β in c-d contexts (line 3) and with probability (1 −β) in a-b contexts (lines 3, 4). 
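A sketch of a sampler for the Figure 3 grammar is shown below, with the skew parameter β exposed. The token spellings are illustrative, and the β values actually swept in the experiment are given in the next paragraph.

```python
import random

def sample_sentence(beta):
    a = "a%d" % random.randrange(10); b = "b%d" % random.randrange(10)
    c = "c%d" % random.randrange(10); d = "d%d" % random.randrange(10)
    r = random.random()
    if r < 10 / 20:                              # unambiguous v_i, a-b context (line 1)
        return [a, "v%d" % random.randrange(50), b]
    if r < 19 / 20:                              # unambiguous w_5..w_49, c-d context (line 2)
        return [c, "w%d" % random.randrange(5, 50), d]
    w = "w%d" % random.randrange(5)              # ambiguous w_0..w_4 (lines 3-4)
    return [c, w, d] if random.random() < beta else [a, w, b]

beta = 0.5                                       # balanced senses; smaller beta = more skew
corpus = [sample_sentence(beta) for _ in range(100000)]
```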
The parameter β controls the skewedness of the sense distribution; e.g., the two senses are equiprobable for β = 0.5 and the second sense (line 4) is three times as probable as the first sense (line 3) for β = 0.25. Dataset. The grammar specified in Figure 3 was used to generate a training corpus of 100,000 sentences. Label criterion: A word is labeled positive if it can occur in a c-d context, as negative otherwise. The test set consists of the five ambiguous words w0 . . . w4. All other words are assigned to the training set. Linear SVMs were trained for the binary classification task on the train set. 50 trials of this experiment were run for each of eleven values of β: β = 2−α where α ∈{1.0, 1.1, 1.2, . . . , 2.0}. Thus, for the smallest value of α, α = 1.0, the two senses have the same frequency; for the largest value of α, α = 2.0, the dominant sense is three times as frequent as the less frequent sense. Results. Figure 4 shows accuracy of the classi240 1.0 1.2 1.4 1.6 1.8 2.0 0.0 0.2 0.4 0.6 0.8 1.0 alpha accuracy pmi lbl cbow skip cwin sskip Figure 4: SVM classification results for the ambiguity dataset. X-axis: α = −log2 β. Y-axis: classification accuracy: fication on the test set: the proportion of correctly classified words out of a total of 250 (five words each in 50 trials). All models perform well for balanced sense frequencies; e.g., for α = 1.0, β = 0.5, the SVMs were all close to 100% accurate in predicting that the wi can occur in a c-d context. PPMI accuracy falls steeply when α is increased from 1.4 to 1.5. It has a 100% error rate for α ≥1.5. Learning-based models perform better in the order CBOW (least robust), LBL, SSKIP, SKIP, CWIN (most robust). Even for α = 2.0, CWIN and SKIP are still close to 100% accurate. Discussion. The evaluation criterion we have used here is a classification task. The classifier attempts to answer a question that may occur in an application – can this word be used in this context? Thus, the evaluation criterion is: does the word representation contain a specific type of information that is needed for the application. Another approach to ambiguity is to compute multiple representations for a word, one for each sense. We generally do not yet know what the sense of a word is when we want to use its word representation, so data-driven approaches like clustering have been used to create representations for different usage clusters of words that may capture some of its senses. For example, Reisinger and Mooney (2010) and Huang et al. (2012) cluster the contexts of each word and then learn a different representation for each cluster. The main motivation for this approach is the assumption that single-word distributional representations cannot represent all senses of a word well (Huang et al., 2012). However, Li and Jurafsky (2015) show that simply increasing the dimension1 P(NFn|S) =1/4 2 P(AFa|S) =1/4 3 P(NMn|S) =1/4 4 P(AMf|S) =1/4 5 P(ni|N) =1/5 0 ≤i ≤4 6 P(ai|A) =1/5 0 ≤i ≤4 7 P(xnf i Unf i |Fn) =1/5 0 ≤i ≤4 8 P(f|Unf i ) =1/2 9 P(µ(Unf i )|Unf i ) =1/2 10 P(xaf i Uaf i |Fa) =1/5 0 ≤i ≤4 11 P(f|Uaf i ) =1/2 12 P(µ(Uaf i )|Uaf i ) =1/2 13 P(xnm i Unm i |Mn) =1/5 0 ≤i ≤4 14 P(m|Unm i ) =1/2 15 P(µ(Unm i )|Unm i )=1/2 16 P(xam i Uam i |Mf) =1/5 0 ≤i ≤4 17 P(m|Uam i ) =1/2 18 P(µ(Uam i )|Uam i ) =1/2 Figure 5: This grammar generates nouns (xn. i ) and adjectives (xa. i ) with masculine (x.m i ) and feminine (x.f i ) gender as well as paradigm features ui. µ maps each U to one of {u0 . . . u4}. µ is randomly initialized and then kept fixed. 
ality of single-representation gets comparable results to using multiple-representation. Our results confirm that a single embedding can be robust against ambiguity, but also show the main challenge: skewness of sense distribution. 4.4 Accurate and consistent representation of multifacetedness Grammar. The grammar shown in Figure 5 models two syntactic categories, nouns and adjectives, whose left context is highly predictable: it is one of five left context words ni (resp. ai) for nouns, see lines 1, 3, 5 (resp. for adjectives, see lines 2, 4, 6). There are two grammatical genders: feminine (corresponding to the two symbols Fn and Fa) and masculine (corresponding to the two symbols Mn and Ma). The four combinations of syntactic category and gender are equally probable (lines 1–4). In addition to gender, nouns and adjectives are distinguished with respect to morphological paradigm. Line 7 generates one of five feminine nouns (xnf i ) and the corresponding paradigm marker U nf i . A noun has two equally probable right contexts: a context indicating its gender (line 8) and a context indicating its paradigm (line 9). µ is a function that maps each U to one of five morphological paradigms {u0 . . . u4}. µ is randomly initialized before a corpus is generated and kept fixed. The function µ models the assignment of 241 paradigms to nouns and adjectives. Nouns and adjectives can have different (or the same) paradigms, but for a given noun or adjective the paradigm is fixed and does not change. Lines 7– 9 generate gender and paradigm markers for feminine nouns, for which we use the symbols xnf i . Lines 10–18 cover the three other cases: masculine nouns (xnm i ), feminine adjectives (xaf i ) and masculine adjectives (xam i ). Dataset. We perform 10 trials. In each trial, µ is initialized randomly and a corpus of 100,000 sentences is generated. The train set consists of the feminine nouns (xnf i , line 7) and the masculine nouns (xnm i , line 13). The test set consists of the feminine (xaf i ) and masculine (xam i ) adjectives. Results. Embeddings have been learned, SVMs are trained on the binary classification task feminine vs. masculine and evaluated on test. There was not a single error: accuracy of classifications is 100% for all embedding models. Discussion. The facet gender is indicated directly by the distribution and easy to learn. For a noun or adjective x, we simply have to check whether f or m occurs to its right anywhere in the corpus. PPMI stores this information in two dimensions of the vectors and the SVM learns this fact perfectly. The encoding of “f or m occurs to the right” is less direct in the learning-based representation of x, but the experiment demonstrates that they also reliably encode it and the SVM reliably picks it up. It would be possible to encode the facet in just one bit in a manually designed representation. While all representations are less compact than a one-bit representation – PPMI uses two real dimensions, learning-based models use an activation pattern over several dimensions – it is still true that most of the capacity of the embeddings is used for encoding facets other than gender: syntactic categories and paradigms. Note that there are five different instances each of feminine/masculine adjectives, feminine/masculine nouns and ui words, but only two gender indicators: f and m. 
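A sketch of the gender probe for this setting: train the SVM on the noun vectors and test it on the adjective vectors. The embedding lookup `emb` and the token spellings (x_nf0 ... x_am4) are assumptions made for the sketch; any of the six models above can supply the vectors.

```python
import numpy as np
from sklearn.svm import LinearSVC

def gender_probe(emb):
    # train on nouns, test on adjectives; +1 = feminine, -1 = masculine
    train = [("x_nf%d" % i, +1) for i in range(5)] + [("x_nm%d" % i, -1) for i in range(5)]
    test  = [("x_af%d" % i, +1) for i in range(5)] + [("x_am%d" % i, -1) for i in range(5)]
    X_tr = np.array([emb[w] for w, _ in train]); y_tr = np.array([y for _, y in train])
    X_te = np.array([emb[w] for w, _ in test]);  y_te = np.array([y for _, y in test])
    clf = LinearSVC().fit(X_tr, y_tr)
    return float((clf.predict(X_te) == y_te).mean())
```

In the paper's runs this probe is solved perfectly by every model, even though only the two indicator tokens f and m carry the gender signal among the many facets encoded in the vectors.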
This is a typical scenario across languages: words are distinguished on a large number of morphological, grammatical, semantic and other dimensions and each of these dimensions corresponds to a small fraction of the overall knowledge we have about a given word. Point-based tests do not directly evaluate specific facets of words. In similarity datasets, there is no individual test on facets – only fullspace similarity is considered. There are test cases in analogy that hypothetically evaluate specific facets like gender of words, as in kingman+woman=queen. However, it does not consider the impact of other facets and assumes the only difference of “king” and “queen” is gender. A clear example that words usually differ on many facets, not just one, is the analogy: London:England ∼Ankara:Turkey. political-capitalof applies to both, cultural-capital-of only to London:England since Istanbul is the cultural capital of Turkey. To make our argument more clear, we designed an additional experiment that tries to evaluate gender in our dataset based on similarity and analogy methods. In the similarity evaluation, we search for the nearest neighbor of each word and accuracy is the proportion of nearest neighbors that have the same gender as the search word. In the analogy evaluation, we randomly select triples of the form <xc1g1 i ,xc1g2 j ,xc2g2 k > where (c1, c2) ∈{(noun, adjective), (adjective, noun)} and (g1, g2) ∈{(masculine, feminine), (feminine, masculine) }. We then compute ⃗s = ⃗xc1g1 i − ⃗xc1g2 j + ⃗xc2g2 k and identify the word whose vector is closest to ⃗s where the three vectors ⃗xc1g1 i , ⃗xc1g2 j , ⃗xc2g2 k are excluded. If the nearest neighbor of ⃗s is of type ⃗xc2g1 l , then the search is successful; e.g., for ⃗s = ⃗xnf i −⃗xnm j + ⃗xam k , the search is successful if the nearest neighbor is feminine. We did this evaluation on the same test set for PPMI and LBL embedding models. Error rates were 29% for PPMI and 25% for LBL (similarity) and 16% for PPMI and 14% for LBL (analogy). This high error, compared to 0% error for SVM classification, indicates it is not possible to determine the presence of a low entropy facet accurately and consistently when full-space similarity and analogy are used as test criteria. 5 Analysis In this section, we first summarize and analyze the lessons we learned through experiments in Section 4. After that, we show how these lessons are supported by a real natural-language corpus. 5.1 Learned lessons (i) Two words with clearly different context distributions should receive different representations. Aggregation models fail to do so by calculating 242 all entities head entities tail entities MLP 1NN MLP 1NN MLP 1NN PPMI 61.6 44.0 69.2 63.8 43.0 28.5 LBL 63.5 51.7 72.7 66.4 44.1 32.8 CBOW 63.0 53.5 71.7 69.4 39.1 29.9 CWIN 66.1 53.0 73.5 68.6 46.8 31.4 SKIP 64.5 57.1 69.9 71.5 49.8 34.0 SSKIP 66.2 52.8 73.9 68.5 45.5 31.4 Table 1: Entity typing results using embeddings learned with different models. global statistics. (ii) Embedding learning can have different effectiveness for sparse vs. non-sparse events. Thus, models of representations should be evaluated with respect to their ability to deal with sparseness; evaluation data sets should include rare as well as frequent words. (iii) Our results in Section 4.3 suggest that single-representation approaches can indeed represent different senses of a word. We did a classification task that roughly corresponds to the question: does this word have a particular meaning? 
A representation can fail on similarity judgement computations because less frequent senses occupy a small part of the capacity of the representation and therefore have little impact on full-space similarity values. Such a failure does not necessarily mean that a particular sense is not present in the representation and it does not necessarily mean that single-representation approaches perform poor on real-world tasks. However, we saw that even though single-representations do well on balanced senses, they can pose a challenge for ambiguous words with skewed senses. (iv) Lexical information is complex and multifaceted. In point-based tests, all dimensions are considered together and their ability to evaluate specific facets or properties of a word is limited. The full-space similarity of a word may be highest to a word that has a different value on a lowentropy facet. Any good or bad result on these tasks is not sufficient to conclude that the representation is weak. The valid criterion of quality is whether information about the facet is consistently and accurately stored. 5.2 Extrinsic evaluation: entity typing To support the case for sub-space evaluation and also to introduce a new extrinsic task that uses the embeddings directly in supervised classification, we address a fine-grained entity typing task. Learning taxonomic properties or types of words has been used as an evaluation method for word embeddings (Rubinstein et al., 2015). Since available word typing datasets are quite small (cf. Baroni et al. (2014), Rubinstein et al. (2015)), entity typing can be a promising alternative, which enables to do supervised classification instead of unsupervised clustering. Entities, like other words, have many properties and therefore belong to several semantic types, e.g., “Barack Obama” is a POLITICIAN, AUTHOR and AWARD WINNER. We perform entity typing by learning types of knowledge base entities from their embeddings; this requires looking at subspaces because each entity can belong to multiple types. We adopt the setup of Yaghoobzadeh and Sch¨utze (2015) who present a dataset of Freebase entities;5 there are 102 types (e.g., POLITICIAN FOOD, LOCATION-CEMETERY) and most entities have several. More specifically, we use a multilayer-perceptron (MLP) with one hidden layer to classify entity embeddings to 102 FIGER types. To show the limit of point-based evaluation, we also experimentally test an entity typing model based on cosine similarity of entity embeddings. To each test entity, we assign all types of the entity closest to it in the train set. We call this approach 1NN (kNN for k = 1).6 We take part of ClueWeb, which is annotated with Freebase entities using automatic annotation of FACC17 (Gabrilovich et al., 2013), as our corpus. We then replace all mentions of entities with their Freebase identifier and learn embeddings of words and entities in the same space. Our corpus has around 6 million sentences with at least one annotated entity. We calculate embeddings using our different models. Our hyperparameters: for learning-based models: dim=100, neg=10, iterations=20, window=1, sub=10−3; for PPMI: SVD-dim=100, neg=1, window=1, cds=0.75, sub=10−3, eig=0.5. See (Levy et al., 2015) for more information about the meaning of hyperparameters. Table 1 gives results on test for all (about 60,000 entities), head (freq > 100; about 12,200 entities) and tail (freq < 5; about 10,000 entities). 
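As a reference point, here is a sketch of the 1NN baseline just described: each test entity inherits every type of its cosine-nearest training entity. E_train/E_test are entity-embedding matrices and types_train a list of type sets; the micro-F1 scorer is an illustrative metric choice, not necessarily the one behind Table 1.

```python
import numpy as np

def one_nn_types(E_train, types_train, E_test):
    A = E_train / np.linalg.norm(E_train, axis=1, keepdims=True)
    B = E_test / np.linalg.norm(E_test, axis=1, keepdims=True)
    nearest = (B @ A.T).argmax(axis=1)          # cosine similarity via normalised dot products
    return [types_train[i] for i in nearest]    # copy all types of the nearest train entity

def micro_f1(pred, gold):
    tp = sum(len(p & g) for p, g in zip(pred, gold))
    fp = sum(len(p - g) for p, g in zip(pred, gold))
    fn = sum(len(g - p) for p, g in zip(pred, gold))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```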
The MLP models consistently outperform 1NN on 5cistern.cis.lmu.de/figment 6We tried other values of k, but results were not better. 7lemurproject.org/clueweb12/FACC1 243 all and tail entities. This supports our hypothesis that only part of the information about types that is present in the vectors can be determined by similarity-based methods that use the overall direction of vectors, i.e., full-space similarity. There is little correlation between results of MLP and 1NN in all and head entities, and the correlation between their results in tail entities is high.8 For example, for all entities, using 1NN, SKIP is 4.3% (4.1%) better, and using MLP is 1.7% (1.6%) worse than SSKIP (CWIN). The good performance of SKIP on 1NN using cosine similarity can be related to its objective function, which maximizes the cosine similarity of cooccuring token embeddings. The important question is not similarity, but whether the information about a specific type exists in the entity embeddings or not. Our results confirm our previous observation that a classification by looking at subspaces is needed to answer this question. In contrast, based on full-space similarity, one can infer little about the quality of embeddings. Based on our results, SSKIP and CWIN embeddings contain more accurate and consistent information because MLP classifier gives better results for them. However, if we considered 1NN for comparison, SKIP and CBOW would be superior. 6 Conclusion and future work We have introduced a new way of evaluating distributional representation models. As an alternative to the common evaluation tasks, we proposed to identify generic criteria that are important for an embedding model to represent properties of words accurately and consistently. We suggested four criteria based on fundamental characteristics of natural language and designed tests that evaluate models on the criteria. We developed this evaluation methodology using PCFG-generated corpora and applied it on a case study to compare different models of learning distributional representations. While we showed important differences of the embedding models, the goal was not to do a comprehensive comparison of them. We proposed an innovative way of doing intrinsic evaluation of embeddings. Our evaluation method gave direct insight about the quality of embeddings. Additionally, while most intrinsic evaluations consider 8The spearman correlation between MLP and 1NN for all=0.31, head=0.03, tail=0.75. word vectors as points, we used classifiers that identify different small subspaces of the full space. This is an important desideratum when designing evaluation methods because of the multifacetedness of natural language words: they have a large number of properties, each of which only occupies a small proportion of the full-space capacity of the embedding. Based on this paper, there are serveral lines of investigation we plan to conduct in the future. (i) We will attempt to support our results on artificially generated corpora by conducting experiments on real natural language data. (ii) We will study the coverage of our four criteria in evaluating word representations. (iii) We modeled the four criteria using separate PCFGs, but they could also be modeled by one single unified PCFG. One question that arises is then to what extent the four criteria are orthogonal and to what extent interdependent. 
A single unified grammar may make it harder to interpret the results, but may give additional and more fine-grained insights as to how the performance of embedding models is influenced by different fundamental properties of natural language and their interactions. Finally, we have made the simplifying assumption in this paper that the best conceptual framework for thinking about embeddings is that the embedding space can be decomposed into subspaces: either into completely orthogonal subspaces or – less radically – into partially “overlapping” subspaces. Furthermore, we have made the assumption that the smoothness and robustness properties that are the main reasons why embeddings are used in NLP can be reduced to similarities in subspaces. See Rothe et al. (2016) and Rothe and Sch¨utze (2016) for work that makes similar assumptions. The fundamental assumptions here are decomposability and linearity. The smoothness properties could be much more complicated. However even if this was the case, then much of the general framework of what we have presented in this paper would still apply; e.g., the criterion that a particular facet be fully and correctly represented is as important as before. But the validity of the assumption that embedding spaces can be decomposed into “linear” subspaces should be investigated in the future. Acknowledgments. This work was supported by DFG (SCHU 2246/8-2). 244 References Eneko Agirre, Enrique Alfonseca, Keith B. Hall, Jana Kravalova, Marius Pasca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and wordnet-based approaches. In Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings, May 31 - June 5, 2009, Boulder, Colorado, USA, pages 19–27. Marco Baroni and Alessandro Lenci. 2010. Distributional memory: A general framework for corpus-based semantics. Computational Linguistics, 36(4):673–721. Marco Baroni, Georgiana Dinu, and Germ´an Kruszewski. 2014. Don’t count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, pages 238–247. Bernard Comrie. 1989. Language universals and linguistic typology: Syntax and morphology. Blackwell, 2nd edition. D. A. Cruse. 1986. Lexical Semantics. Cambridge University Press, Cambridge, MA. Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391–407. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. Liblinear: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874. Evgeniy Gabrilovich, Michael Ringgaard, and Amarnag Subramanya. 2013. Facc1: Freebase annotation of clueweb corpora. Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In The 50th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, July 8-14, 2012, Jeju Island, Korea - Volume 1: Long Papers, pages 873–882. Arne K¨ohn. 2015. What?s in an embedding? analyzing word embeddings through multilingual evaluation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2067–2073, Lisbon, Portugal, September. 
Siwei Lai, Kang Liu, Liheng Xu, and Jun Zhao. 2015. How to generate a good word embedding? CoRR, abs/1507.05523. Omer Levy and Yoav Goldberg. 2014a. Linguistic regularities in sparse and explicit word representations. In CoNLL. Omer Levy and Yoav Goldberg. 2014b. Neural word embedding as implicit matrix factorization. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 2177–2185. Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. TACL, 3:211–225. Jiwei Li and Dan Jurafsky. 2015. Do multi-sense embeddings improve natural language understanding? In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1722–1732, Lisbon, Portugal, September. Wang Ling, Chris Dyer, Alan W. Black, and Isabel Trancoso. 2015. Two/too simple adaptations of word2vec for syntax problems. In NAACL HLT 2015, The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, Colorado, USA, May 31 - June 5, 2015, pages 1299– 1304. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In Proceedings of ICLR. Andriy Mnih and Koray Kavukcuoglu. 2013. Learning word embeddings efficiently with noise-contrastive estimation. In NIPS, pages 2265–2273. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543. Joseph Reisinger and Raymond J Mooney. 2010. Multi-prototype vector-space models of word meaning. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 109–117. Association for Computational Linguistics. Sascha Rothe and Hinrich Sch¨utze. 2016. Word embedding calculus in meaningful ultradense subspaces. In ACL. Sascha Rothe, Sebastian Ebert, and Hinrich Sch¨utze. 2016. Ultradense embeddings by orthogonal transformation. In NAACL. Dana Rubinstein, EffiLevi, Roy Schwartz, and Ari Rappoport. 2015. How well do distributional models capture different types of semantic knowledge? In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 2: Short Papers, pages 726– 730. 245 Magnus Sahlgren. 2006. The Word-Space Model. Ph.D. thesis, Stockholm University. Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. 2015. Evaluation methods for unsupervised word embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 298–307, Lisbon, Portugal, September. Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Guillaume Lample, and Chris Dyer. 2015. Evaluation of word vector representations by subspace alignment. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2049–2054, Lisbon, Portugal, September. Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. J. Artif. Intell. Res. (JAIR), 37:141–188. Yadollah Yaghoobzadeh and Hinrich Sch¨utze. 2015. Corpus-level fine-grained entity typing using contextual information. 
In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 715–725, Lisbon, Portugal, September. Zhi Zhong and Hwee Tou Ng. 2010. It makes sense: A wide-coverage word sense disambiguation system for free text. In ACL 2010, Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, July 11-16, 2010, Uppsala, Sweden, System Demonstrations, pages 78–83. 246
2016
23
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2431–2441, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics On-line Active Reward Learning for Policy Optimisation in Spoken Dialogue Systems Pei-Hao Su, Milica Gaˇsi´c, Nikola Mrkˇsi´c, Lina Rojas-Barahona, Stefan Ultes, David Vandyke, Tsung-Hsien Wen and Steve Young Department of Engineering, University of Cambridge, Cambridge, UK {phs26, mg436, nm480, lmr46, su259, djv27, thw28, sjy}@cam.ac.uk Abstract The ability to compute an accurate reward function is essential for optimising a dialogue policy via reinforcement learning. In real-world applications, using explicit user feedback as the reward signal is often unreliable and costly to collect. This problem can be mitigated if the user’s intent is known in advance or data is available to pre-train a task success predictor off-line. In practice neither of these apply for most real world applications. Here we propose an on-line learning framework whereby the dialogue policy is jointly trained alongside the reward model via active learning with a Gaussian process model. This Gaussian process operates on a continuous space dialogue representation generated in an unsupervised fashion using a recurrent neural network encoder-decoder. The experimental results demonstrate that the proposed framework is able to significantly reduce data annotation costs and mitigate noisy user feedback in dialogue policy learning. 1 Introduction Spoken Dialogue Systems (SDS) allow humancomputer interaction using natural speech. They can be broadly divided into two categories: chatoriented systems which aim to converse with users and provide reasonable contextually relevant responses (Vinyals and Le, 2015; Serban et al., 2015), and task-oriented systems designed to assist users to achieve specific goals (e.g. find hotels, movies or bus schedules) (Daubigney et al., 2014; Young et al., 2013). The latter are typically designed according to a structured ontology (or a database schema), which defines the domain Figure 1: An example of a task-oriented dialogue with a pre-defined task and the evaluation results. that the system can talk about. Teaching a system how to respond appropriately in a task-oriented SDS is non-trivial. This dialogue management task is often formulated as a manually defined dialogue flow that directly determines the quality of interaction. More recently, dialogue management has been formulated as a reinforcement learning (RL) problem which can be automatically optimised (Levin and Pieraccini, 1997; Roy et al., 2000; Williams and Young, 2007; Young et al., 2013). In this framework, the system learns by a trial and error process governed by a potentially delayed learning objective defined by a reward function. A typical approach to defining the reward function in a task-oriented dialogue system is to apply a small per-turn penalty to encourage short dialogues and to give a large positive reward at the end of each successful interaction. Figure 1 is an example of a dialogue task which is typically set for users who are being paid to converse with the system. When users are primed with a specific task to complete, dialogue success can be determined from subjective user ratings (Subj), or 2431 an objective measure (Obj) based on whether or not the pre-specified task was completed (Walker et al., 1997; Gaˇsi´c et al., 2013). 
However, prior knowledge of the user’s goal is not normally available in real situations, making the objective reward estimation approach impractical. Furthermore, objective ratings are inflexible and often fail as can be seen from Figure 1, if the user does not strictly follow the task. This results in a mismatch between the Obj and Subj ratings. However, relying on subjective ratings alone is also problematic since crowd-sourced subjects frequently give inaccurate responses and real users are often unwilling to extend the interaction in order to give feedback, resulting in unstable learning (Zhao et al., 2011; Gaˇsi´c et al., 2011). In order to filter out incorrect user feedback, Gaˇsi´c et al. (2013) used only dialogues for which Obj = Subj. Nonetheless, this is inefficient and not feasible anyway in most real-world tasks where the user’s goal is generally unknown and difficult to infer. In light of the above, Su et al. (2015a) proposed learning a neural network-based Obj estimator from off-line simulated dialogue data. This removes the need for the Obj check during online policy learning and the resulting policy is as effective as one trained with dialogues using the Obj = Subj check. However, a user simulator will only provide a rough approximation of real user statistics and developing a user simulator is a costly process (Schatzmann et al., 2006). To deal with the above issues, this paper describes an on-line active learning method in which users are asked to provide feedback on whether the dialogue was successful or not. However, active learning is used to limit requests for feedback to only those cases where the feedback would be useful, and also a noise model is introduced to compensate for cases where the user feedback is inaccurate. A Gaussian process classification (GPC) model is utilised to robustly model the uncertainty presented by the noisy user feedback. Since GPC operates on a fixed-length observation space and dialogues are of variable-length, a recurrent neural network (RNN)-based embedding function is used to provide fixed-length dialogue representations. In essence, the proposed method learns a dialogue policy and a reward estimator on-line from scratch, and is directly applicable to real-world applications. The rest of the paper is organised as follows. The next section gives an overview of related work. The proposed framework is then described in §3. This consists of the policy learning algorithm, the creation of the dialogue embedding function and the active reward model trained from real user ratings. In §4, the proposed approach is evaluated in the context of an application providing restaurant information in Cambridge, UK. We first give an in-depth analysis of the dialogue embedding space. The results of the active reward model when it is trained together with a dialogue policy on-line with real users are then presented. Finally, our conclusions are presented in §5. 2 Related Work Dialogue evaluation has been an active research area since late 90s. Walker et al. (1997) proposed the PARADISE framework, where a linear function of task completion and various dialogue features such as dialogue duration were used to infer user satisfaction. This measure was later used as a reward function for learning a dialogue policy (Rieser and Lemon, 2011). However, as noted, task completion is rarely available when the system is interacting with real users and also concerns have been raised regarding the theoretical validity of the model (Larsen, 2003). 
Several approaches have been adopted for learning a dialogue reward model given a corpus of annotated dialogues. Yang et al. (2012) used collaborative filtering to infer user preferences. The use of reward shaping has also been investigated in (El Asri et al., 2014; Su et al., 2015b) to enrich the reward function in order to speed up dialogue policy learning. Also, Ultes and Minker (2015) demonstrated that there is a strong correlation between expert’s user satisfaction ratings and dialogue success. However, all these methods assume the availability of reliable dialogue annotations such as expert ratings, which in practice are hard to obtain. One effective way to mitigate the effects of annotator error is to obtain multiple ratings for the same data and several methods have been developed to guide the annotation process with uncertainty models (Dai et al., 2013; Lin et al., 2014). Active learning is particularly useful for determining when an annotation is needed (Settles, 2010; Zhang and Chaudhuri, 2015). It is often utilised using Bayesian optimisation approaches (Brochu et al., 2010). Based on this, Daniel et al. (2014) 2432 exploited a pool-based active learning method for a robotics application. They queried the user for feedback on the most informative sample collected so far and showed the effectiveness of this method. Rather than explicitly defining a reward function, inverse RL (IRL) aims to recover the underlying reward from demonstrations of good behaviour and then learn a policy which maximises the recovered reward (Russell, 1998). IRL was first introduced to SDS in (Paek and Pieraccini, 2008), where the reward was inferred from human-human dialogues to mimic the behaviour observed in a corpus. IRL has also been studied in a Wizard-of-Oz (WoZ) setting (Boularias et al., 2010; Rojas Barahona and Cerisara, 2014), where typically a human expert served as the dialogue manager to select each system reply based on the speech understanding output at different noise levels. However, this approach is costly and there is no reason to suppose that a human wizard is acting optimally, especially at high noise levels. Since humans are better at giving relative judgements than absolute scores, another related line of research has focused on preference-based approaches to RL (Cheng et al., 2011). In (Sugiyama et al., 2012), users were asked to provide rankings between pairs of dialogues. However, this is also costly and does not scale well in real applications. 3 Proposed Framework The proposed system framework is depicted in Figure 2. It is divided into three main parts: a dialogue policy, a dialogue embedding function, and an active reward model of user feedback. When each dialogue ends, a set of turn-level features ft is extracted and fed into an embedding function σ to obtain a fixed-dimension dialogue representation d that serves as the input space of the reward model R. This reward is modelled as a Gaussian process which for every input point provides an estimate of task success along with a measure of the estimate uncertainty. Based on this uncertainty, R decides whether to query the user for feedback or not. It then returns a reinforcement signal to update the dialogue policy π, which is trained using the GP-SARSA algorithm (Gaˇsi´c and Young, 2014). GP-SARSA also deploys Gaussian process estimation to provide an on-line sample-efficient reinforcement learning algorithm capable of bootstrapping estimates of sparse value functions from minimal numbers of samples (dialogues). 
The quality of each dialogue is defined by its cumulative reward, where each dialogue turn incurs a small negative reward (-1) and the final reward of either 0 or 20 depending on the estimate of task success are provided by the reward model. Note that the key contribution here is to learn the noise robust reward model and the dialogue policy simultaneously on-line, using the user as a ‘supervisor’. Active learning is not an essential component of the framework but highly desirable in practice to minimise the impact of the supervision burden on users. The use of a pre-trained embedding function is a sub-component of the proposed approach and is trained off-line on corpus data rather than manually designed here. 3.1 Unsupervised Dialogue Embeddings In order to model user feedback over dialogues of varying length, an embedding function is used to map each dialogue into a fixed-dimensional continuous-space. The use of embedding functions has recently gained attention especially for word representations, and has boosted performance on several natural language processing tasks (Mikolov et al., 2013; Turian et al., 2010; Levy and Goldberg, 2014). Embedding has also been successfully applied to machine translation (MT) where it enables varying-length phrases to be mapped to fixed-length vectors using an RNN Encoder-Decoder (Cho et al., 2014). Similar to MT, dialogue embedding enables variable length sequences of utterances to be mapped into an appropriate fixed-length vector. Although embedding is used here to create a fixed-dimension input space for the GPC-based task success classifier, it should be noted that it potentially facilitates a variety of other downstream tasks which depend on classification or clustering. The model structure of the embedding function is described on the left of Figure 2, where the episodic turn-level features ft are extracted from a dialogue and serve as the input features to the encoder. In our proposed model, the encoder is a Bi-directional Long Short-Term Memory network (BLSTM) (Hochreiter and Schmidhuber, 1997; Graves et al., 2013). The LSTM is a Recurrent Neural Network (RNN) with gated recurrent units introduced to alleviate the vanishing gradient problem. The BLSTM encoder takes into account the sequential information from both directions of the input data, computing the forward 2433 Figure 2: Schematic of the system framework. The three main system components dialogue policy, dialogue embedding creation, and reward modelling based on user feedback, are described in §3. hidden sequences −→ h 1:T and the backward hidden sequences ←− h T:1 while iterating over all input features ft, t = 1, ..., T: −→ ht = LSTM(ft, −→ h t−1) ←− ht = LSTM(ft, ←− h t+1) where LSTM denotes the activation function. The dialogue representation d is then calculated as the average over all hidden sequences: d = 1 T T X t=1 ht (1) where ht = [−→ ht; ←− ht] is the concatenation of the two directional hidden sequences. Given the dialogue representation d output by the encoder, the decoder is a forward LSTM that takes d as its input for each turn t to produce the sequence of features f′1:T . The training objective of the encoder-decoder minimises the mean-square-error (MSE) between the prediction f′1:T and the output f1:T (which is also the input): MSE = 1 N N X i=1 T X t=1 ||ft −f′ t||2 (2) where N is the number of training dialogues and || · ||2 denotes the l2-norm. 
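A minimal PyTorch sketch of this encoder-decoder follows (the paper's implementation used Theano, and the toy batch below assumes dialogues padded to a common number of turns).

```python
import torch
import torch.nn as nn

class DialogueAutoencoder(nn.Module):
    def __init__(self, feat_dim=74, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.decoder = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, feat_dim)

    def embed(self, f):                  # f: (batch, T, feat_dim) turn-level features
        h, _ = self.encoder(f)           # (batch, T, 2*hidden): [forward; backward] states
        return h.mean(dim=1)             # d: average over turns (Eqn. 1)

    def forward(self, f):
        d = self.embed(f)
        dec_in = d.unsqueeze(1).repeat(1, f.size(1), 1)   # feed d to the decoder at every turn
        y, _ = self.decoder(dec_in)
        return self.out(y), d

model = DialogueAutoencoder()
f = torch.randn(8, 12, 74)                        # a fake batch: 8 dialogues x 12 turns
recon, d = model(f)
loss = nn.functional.mse_loss(recon, f)           # reconstruction objective (Eqn. 2)
loss.backward()
```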
Since all the functions used in the encoder and decoder are differentiable, stochastic gradient decent (SGD) can be used to train the model. The dialogue representations generated by this LSTM-based unsupervised embedding function are then used as the observations for the reward model described in the next section 3.2. 3.2 Active Reward Learning A Gaussian process is a Bayesian non-parametric model that can be used for regression or classification (Rasmussen and Williams, 2006). It is particularly appealing since it can learn from a small number of observations by exploiting the correlations defined by a kernel function and it provides a measure of uncertainty of its estimates. In the context of spoken dialogue systems it has been successfully used for RL policy optimisation (Gaˇsi´c and Young, 2014; Casanueva et al., 2015) and IRL reward function regression (Kim et al., 2014). Here we propose modelling dialogue success as a Gaussian process (GP). This involves estimating the probability p(y|d, D) that the task was successful given the current dialogue representation d and the pool D containing previously classified dialogues. We pose this as a classification problem where the rating is a binary observation y ∈{−1, 1} that defines failure or success. The observations y are considered to be drawn from a Bernoulli distribution with a success probability p(y = 1|d, D). The probability is related to a latent function f(d|D) : Rdim(d) →R that is mapped to a unit interval by a probit function p(y = 1|d, D) = φ(f(d|D)), where φ denotes the cumulative density function of the standard Gaus2434 sian distribution. The latent function is given a GP prior: f(d) ∼ GP(m(d), k(d, d′)), where m(·) is the mean function and k(·, ·) the covariance function (kernel). Here the stationary squared exponential kernel kSE is used. It is also combined with a white noise kernel kWN in order to account for the “noise” in users’ ratings: k(d, d′) = p2 exp(−||d −d′||2 2l2 ) + σ2 n (3) where the first term denotes kSE and the second term kWN. The hyper-parameters p, l, σn can be adequately optimised by maximising the marginal likelihood using a gradient-based method (Chen et al., 2015). Since φ(·) is not Gaussian, the resulting posterior probability p(y = 1|d, D) is analytically intractable. So instead an approximation method, expectation propagation (EP), was used (Nickisch and Rasmussen, 2008). Querying the user for feedback is costly and may impact negatively on the user experience. This impact can be reduced by using active learning informed by the uncertainty estimate of the GP model (Kapoor et al., 2007). This ensures that user feedback is only sought when the model is uncertain about its current prediction. For the current application, an on-line (stream-based) version of active learning is required. An illustration of a 1-dimensional example is shown in Figure 3. Given the labelled data D, the predictive posterior mean µ∗and posterior variance σ2 ∗of the latent value f(d∗) for the current dialogue representation d∗can be calculated. Then a threshold interval [1 −λ, λ] is set on the predictive success probability p(y∗= 1|d∗, D) = φ(µ∗/ p 1 + σ2∗) to decide whether this dialogue should be labelled or not. The decision boundary implicitly considers both the posterior mean as well as the variance. When deploying this reward model in the proposed framework, a GP with a zero-mean prior for f is initialised and D = {}. 
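The sketch below illustrates the GP success model and the [1 - λ, λ] query rule described above. scikit-learn's GaussianProcessClassifier is used as a stand-in for the paper's GPy implementation (it uses a Laplace approximation rather than EP), the RBF plus white-noise kernel mirrors Eqn. 3 only in form, and the cold-start handling is an assumption of the sketch rather than part of the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

class GPRewardModel:
    def __init__(self, lam=0.85):
        self.lam, self.D_x, self.D_y = lam, [], []
        self.gp = GaussianProcessClassifier(kernel=RBF(length_scale=1.0) + WhiteKernel())

    def success_prob(self, d):
        if len(set(self.D_y)) < 2:               # cold start: no usable model yet
            return 0.5
        return float(self.gp.predict_proba(d.reshape(1, -1))[0, 1])

    def should_query(self, d):
        p = self.success_prob(d)
        return (1 - self.lam) < p < self.lam     # uncertain region -> ask the user

    def add_label(self, d, y):                   # y in {0, 1}: user-reported failure/success
        self.D_x.append(d); self.D_y.append(int(y))
        if len(set(self.D_y)) == 2:
            self.gp.fit(np.array(self.D_x), np.array(self.D_y))
```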
After the dialogue policy π completes each episode with the user, the generated dialogue turns are transformed into the dialogue representation d = σ(f1:T ) using the dialogue embedding function σ. Given d, the predictive mean and variance of f(d|D) are determined, and the reward model decides whether or not it should seek user feedback based on the threshold λ on φ(f(d|D)). If the model is uncertain, the Figure 3: 1-dimensional example of the proposed GP active reward learning model. user’s feedback on the current episode d is used to update the GP model and to generate the reinforcement signal for training the policy π; otherwise the predictive success rating from the reward model is used directly to update the policy. This process takes place after each dialogue. 4 Experimental results The target application is a live telephone-based spoken dialogue system providing restaurant information for the Cambridge (UK) area. The domain consists of approximately 150 venues each having 6 slots (attributes) of which 3 can be used by the system to constrain the search (food-type, area and price-range) and the remaining 3 are informable properties (phone-number, address and postcode) available once a required database entity has been found. The shared core components of the SDS common to all experiments comprise a HMM-based recogniser, a confusion network (CNet) semantic input decoder (Henderson et al., 2012), the BUDS belief state tracker (Thomson and Young, 2010) that factorises the dialogue state using a dynamic Bayesian network, and a template based natural language generator to map system semantic actions into natural language responses to the user. All policies were trained using the GP-SARSA algorithm and the summary action space of the RL policy contains 20 actions. The reward given to each dialogue was set to 20 × 1success −N, where N is the dialogue turn 2435 number and 1 is the indicator function for dialogue success, which is determined by different methods as described in the following section. These rewards constitute the reinforcement signal used for policy learning. 4.1 Dialogue representations The LSTM Encoder-Decoder model described in §3.1 was used to generate an embedding d for each dialogue. For each dialogue turn that contains a user’s utterance and a system’s response, a feature vector f of size 74 was extracted (Vandyke et al., 2015). This vector consists of the concatenation of the most likely user intention determined by the semantic decoder, the distribution over each concept of interest defined in the ontology, a onehot encoding of the system’s reply action, and the turn number normalised by the maximum number of turns (here 30). This feature vector was used as the input and the target for the LSTM EncoderDecoder model, where the training objective was to minimise the MSE of the reconstruction loss. The model was implemented using the Theano library (Bergstra et al., 2010; Bastien et al., 2012). A corpus consisting of 8565, 1199 and 650 real user dialogues in the Cambridge restaurant domain was used for training, validation and testing respectively. This corpus was collected via the Amazon Mechanical Turk (AMT) service, where paid subjects interacted with the dialogue system. The sizes of −→ ht and ←− ht in the encoder and the hidden layer in the decoder were all 32, resulting in dim(ht) = dim(d) = 64. SGD per dialogue was used during backpropagation to train each model. In order to prevent over-fitting, early stopping was applied based on the held-out validation set. 
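Pulling the pieces of §3 together with the reward just defined, here is a sketch of the per-dialogue loop. Every argument is a placeholder: run_dialogue, embed, reward_model, ask_user and policy stand in for the real user interaction, the BLSTM embedder, the GP reward model of §3.2 (following the interface sketched above) and the GP-SARSA learner.

```python
def train_online(policy, run_dialogue, embed, reward_model, ask_user, n_dialogues=500):
    for _ in range(n_dialogues):
        turn_feats = run_dialogue(policy)             # one episode f_1:T with a real user
        d = embed(turn_feats)                         # fixed-length dialogue representation
        if reward_model.should_query(d):              # predictive probability inside [1-lambda, lambda]
            success = ask_user()                      # subjective success label from the user
            reward_model.add_label(d, success)
        else:
            success = reward_model.success_prob(d) >= 0.5
        reward = 20 * int(success) - len(turn_feats)  # 20 x 1_success - N
        policy.update(turn_feats, reward)             # GP-SARSA update (stub)
```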
In order to visualise the impact of the embeddings, the dialogue representations of all the 650 test dialogues were transformed by the embedding function in Figure 4 and reduced to two dimensions using t-SNE (Van der Maaten and Hinton, 2008). For each dialogue sample, the shape indicates whether or not the dialogue was successful, and the colour indicates the length of the dialogue (maximum 30 turns). From the figure we can clearly see the colour gradient from the top left (shorter dialogues) to the bottom right (longer dialogues) for the positive Subj labels. This shows that dialogue length was one of the prominent features in the dialogue representation d. It can also be seen that the longer Figure 4: t-SNE visualisation on the unsupervised dialogue representation of the real user data in the Cambridge restaurant domain. Labels are the subjective ratings from the users. failed dialogues (more than 15 turns) are located close to each other, mostly at the bottom right. On the other hand, there are other failed dialogues which are spread throughout the cluster. We can also see that the successful dialogues were on average shorter than 10 turns, which is consistent with the claim that users do not engage in longer dialogues with well-trained task-oriented systems. This visualisation shows the potential of the unsupervised dialogue embedding since the transformed dialogue representations appear to be correlated with dialogue success in the majority of cases. For the purpose of GP reward modelling, this LSTM Encoder-Decoder embedding function appears therefore to be suitable for extracting an adequate fixed-dimension dialogue representation. 4.2 Dialogue Policy Learning Given the well-trained dialogue embedding function, the proposed GP reward model operates on this input space. The system was implemented using the GPy library (Hensman et al., 2012). Given the predictive success probability of each newly seen dialogue, the threshold λ for the uncertainty region was initially set to 1 to encourage label querying and annealed to 0.85 for the first 50 collected dialogues and then set to 0.85 thereafter. Initially, as each new dialogue was added to the training set, the hyper-parameters that defined the structure of the kernels mentioned in Eqn. 3 were optimised to minimise the negative log marginal likelihood using conjugate gradient as2436 Figure 5: Learning curves showing subjective success as a function of the number of training dialogues used during on-line policy optimisation. The on-line GP, Subj, off-line RNN and Obj=Subj systems are shown as black, yellow, blue, and red lines. The light-coloured areas are one standard error intervals. cent (Rasmussen and Williams, 2006). To prevent overfitting, after the first 40 dialogues, these hyper-parameters were only re-optimised after every batch of 20 dialogues. To investigate the performance of the proposed on-line GP policy learning, three other contrasting systems were also tested. Note that the handcrafted system is not compared since it does not scale to larger domains and is sensitive to speech recognition errors. In each case, the only difference was the method used to compute the reward: • the Obj=Subj system which uses prior knowledge of the task to only use training dialogues for which the user’s subjective assessment of success is consistent with the objective assessment of success as in (Gaˇsi´c et al., 2013). • the Subj system which directly optimises the policy using only the user assessment of success whether accurate or not. 
• the off-line RNN system that uses 1K simulated data and the corresponding Obj labels to train an RNN success estimator as in (Su et al., 2015a). For the Subj system rating, in order to focus solely on the performance of the policy rather than other aspects of the system such as the fluency of the reply sentence, users were asked to rate dialogue success by answering the following question: Did you find all the information you were looking for? Figure 6: The number of times each system queries the user for feedback during on-line policy optimisation as a function of the number of training dialogues. The orange line represents both the Obj=Subj and Subj systems, and the black line represents the on-line GP system. All four of the above systems were trained with a total of 500 dialogues on-line by users recruited via the AMT service. Figure 5 shows the online learning curve of the subjective success rating when during training. For each system, the moving average was calculated using a window of 150 dialogues. In each case, three distinct policies were trained and the results were averaged to reduce noise. As can be seen, all four systems perform better than 80 % subjective success rate after approximately 500 training dialogues. The Obj=Subj system is relatively poor compared to the others. This might be because users often report success even though the objective evaluation indicates failure. In such cases, the dialogue is discarded and not used for training. As a consequence, the Obj=Subj system required approximately 700 dialogues in order to obtain 500 which were useful, whereas all other systems made use of every dialogue. To investigate learning behaviour over longer spans, training for the on-line GP and the Subj systems was extended to 850 dialogues. As can be seen, performance in both cases is broadly flat. Similar to the conclusions drawn in (Gaˇsi´c et al., 2011), the Subj system suffers from unreliable user feedback. Firstly, as in the Obj=Subj system, users forget the full requirements of the task and in particular, forget to ask for all required information. Secondly, users give inconsistent feedback due to a lack of proper care and attention. From Figure 5 it can be clearly seen that the online GP system consistently performed better than 2437 Subj system, presumably, because its noise model mitigates the effect of inconsistency in user feedback. Of course, unlike crowd-sourced subjects, real users might provide more consistent feedback, but nevertheless, some inconsistency is inevitable and the noise model offers the needed robustness. The advantage of the on-line GP system in reducing the number of times that the system requests user feedback (i.e. the label cost) can be seen in Figure 6. The black curve shows the number of active learning queries triggered in the online GP system averaged across the three policies. This system required only 150 user feedback requests to train a robust reward model. On the other hand, the Obj=Subj and Subj systems require user feedback for every training dialogue as shown by the dashed orange line. Of course, the off-line RNN system required no user feedback at all when training the system online since it had the benefit of prior access to a user simulator. Its performance during training after the first 300 dialogues was, however, inferior to the on-line GP system. 
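The overall per-dialogue procedure of the on-line GP system can be summarised by the sketch below, which reuses the success_probability and should_query_user helpers from the earlier kernel sketch. The policy, user, embedder and reward_model objects are assumed interfaces introduced only for illustration; the reward follows the definition given earlier, 20 × 1success − N.

```python
def run_episode(policy, user, embedder, reward_model, lam=0.85, max_turns=30):
    """One on-line learning episode: run the dialogue, embed it, then either
    query the user for a success label or trust the GP reward model.
    All four objects are assumed interfaces, not released code."""
    turn_feats, n_turns = [], 0
    state = user.start_dialogue()
    while not state.finished and n_turns < max_turns:
        action = policy.act(state)                   # GP-SARSA dialogue policy
        state = user.respond(action)
        turn_feats.append(state.feature_vector())    # 74-dim turn features
        n_turns += 1

    d = embedder.embed(turn_feats)                   # fixed-length representation
    mu, var = reward_model.predict(d)                # latent posterior mean/variance

    if should_query_user(mu, var, lam):              # uncertain: ask for feedback
        success = user.ask_success()                 # binary subjective rating
        reward_model.update(d, success)
    else:                                            # confident: use the prediction
        success = success_probability(mu, var) >= 0.5

    reward = 20 * int(success) - n_turns             # 20 x 1_success - N
    policy.update(reward)                            # reinforcement signal
    return reward
```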
4.3 Dialogue Policy Evaluation In order to compare performance, the averaged results obtained between 400-500 training dialogues are shown in the first section of Table 1 along with one standard error. For the 400-500 interval, the Subj, off-line RNN and on-line GP systems achieved comparable results without statistical differences. The results of continuing training on the Subj and on-line GP systems from 500 to 850 training dialogues are also shown. As can be seen, the on-line GP system was significantly better presumably because it is more robust to erroneous user feedback compared to the Subj system. 4.4 Reward Model Evaluation The above results verify the effectiveness of the proposed reward model for policy learning. Here we investigate further the accuracy of the model in predicting the subjective success rate. An evaluation of the on-line GP reward model between 1 and 850 training dialogues is presented in Table 2. Since three reward models were learnt each with 850 dialogues, there were a total of 2550 training dialogues. Of these, the models queried the user for feedback a total of 454 times, leaving 2096 dialogues for which learning relied on the reward model’s prediction. The results shown in the table are thus the average over 2096 dialogues. Table 1: Subjective evaluation of the Obj=Subj, off-line RNN, Subj and on-line GP system during different stages of on-line policy learning. Subjective: user binary rating on dialogue success. Statistical significance was calculated using a twotailed Students t-test with p-value of 0.05. Dialogues Reward Model Subjective (%) 400-500 Obj=Subj 85.0 ± 2.1 off-line RNN 89.0 ± 1.8 Subj 90.7 ± 1.7 on-line GP 91.7 ± 1.6 500-850 Subj 87.1 ± 1.0 on-line GP 90.9 ± 0.9* * p < 0.05 As can be seen, there was a significant imbalance between success and fail labels since the policy was improving along with the training dialogues. This lowered the recall on failed dialogue prediction as the model was biased to data with positive labels. Nevertheless, its precision scores well. On the other hand, the successful dialogues were accurately predicted by the proposed model. Table 2: Statistical evaluation of the prediction of the on-line GP systems with respect to Subj rating. Subj Prec. Recall F-measure Number Fail 1.00 0.52 0.68 204 Suc. 0.95 1.00 0.97 1892 Total 0.96 0.95 0.95 2096 4.5 Example Dialogues The key benefits of the on-line GP reward model compared to other models are its robustness to noise and efficient use of user supervision. Since the four systems compared above differ only in the design of reward model (learning objective), their on-line behaviours were broadly similar. Two example dialogues between users and the on-line GP system are listed in Table 3 to illustrate how the system operates under different noise conditions. The user’s subjective rating and the rating determined by the on-line GP reward model are also shown. The labels ‘n-th ASR’ and ‘nth SEM’ indicate the n-th most likely hypotheses from speech recogniser and semantic decoder respectively. 2438 Table 3: Example dialogues between on-line users and the proposed on-line GP system. -Low noise condition- (User Subjective Evaluation: Success, on-line GP Evaluation: Success) Machine : Hello, welcome to the Cambridge dialogue system, What kind of food would you like? Human : [Top ASR] Yeah I want to find a restaurant that serves European food. [Top SEM] inform(food=european,type=restaurant) Machine : Hotel du Vin and Bistro is a nice place. It serves European food. 
Human : [Top ASR] Uh what is the address and phone number? [Top SEM] request(phone,addr) Machine : Hotel du Vin and Bistro is a nice place. Their address is 15-19 Trumpington Street and their phone is 01223 227330. Human : [Top ASR] Thank you goodbye. [Top SEM] bye() Machine : Thank you for using this system. -Higher noise condition- (User Subjective Evaluation: Success, on-line GP Evaluation: Success) Machine : Hello, welcome to the Cambridge dialogue system, What kind of food would you like? Human : [Top ASR] And I want to find a expensive restaurant it should be in the south part of town. [Top SEM] inform(area=south,pricerange=expensive,type=restaurant) Machine : Gourmet burger kitchen is a nice place it is in the expensive price range. Human : [Top ASR] And do you have something else? [Top SEM] null() [2nd SEM] reqalts() Machine : Did you say you are looking for a venue in the area of south ? Human : [Top ASR] Yes. [Top SEM] affirm() Machine : Frankie and Benny’s is a nice place, it is in the south part of town and it is in the expensive price range. Human : [Top ASR] Okay can I have the uh address phone number called for any. [Top SEM] request(addr) [2nd SEM] request(phone,addr) Machine : Frankie and Benny’s is a nice place. Their address is Clifton Way and their phone number is 01223 412430. Human : [Top ASR] What is the pub called for free you any? ... [5th ASR] What is the post code for free you any? [Top SEM] request(postcode) Machine : Frankie and Benny’s is a nice place. Their postcode is C.B 1, 7 D.Y. Human : [Top ASR] Thank you goodbye. [Top SEM] bye() Machine : Thank you for using this system. 5 Conclusion In this paper we have proposed an active reward learning model using Gaussian process classification and an unsupervised neural network-based dialogue embedding to enable truly on-line policy learning in spoken dialogue systems. The system enables stable policy optimisation by robustly modelling the inherent noise in real user feedback and uses active learning to minimise the number of feedback requests to the user. We found that the proposed model achieved efficient policy learning and better performance compared to other stateof-the-art methods in the Cambridge restaurant domain. A key advantage of this Bayesian model is that its uncertainty estimate allows active learning and noise handling in a natural way. The unsupervised dialogue embedding function required no labelled data to train whilst providing a compact and useful input to the reward predictor. Overall, the techniques developed in this paper enable for the first time a viable approach to on-line learning in deployed real-world dialogue systems which does not need a large corpus of manually annotated data or the construction of a user simulator. Consistent with all of our previous work, the reward function studied here is focused primarily on task success. This may be too simplistic for many commercial applications and further work will be needed in conjunction with human interaction experts to identify and incorporate the extra dimensions of dialogue quality that will be needed to achieve the highest levels of user satisfaction. Acknowledgments Pei-Hao Su is supported by Cambridge Trust and the Ministry of Education, Taiwan. This research was partly funded by the EPSRC grant EP/M018946/1 Open Domain Statistical Spoken Dialogue Systems. The data used in the experiments is available at www.repository.cam.ac.uk/handle/1810/256020. 
2439 References [Bastien et al.2012] Fr´ed´eric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. 2012. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS Workshop. [Bergstra et al.2010] James Bergstra, Olivier Breuleux, Fr´ed´eric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David WardeFarley, and Yoshua Bengio. 2010. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference. [Boularias et al.2010] Abdeslam Boularias, Hamid R Chinaei, and Brahim Chaib-draa. 2010. Learning the reward model of dialogue pomdps from data. In NIPS Workshop on Machine Learning for Assistive Techniques. [Brochu et al.2010] Eric Brochu, Vlad M Cora, and Nando De Freitas. 2010. A tutorial on bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599. [Casanueva et al.2015] I˜nigo Casanueva, Thomas Hain, Heidi Christensen, Ricard Marxer, and Phil Green. 2015. Knowledge transfer between speakers for personalised dialogue management. In Proc of SigDial. [Chen et al.2015] Lu Chen, Pei-Hao Su, and Milica Gaˇsic. 2015. Hyper-parameter optimisation of gaussian process reinforcement learning for statistical dialogue management. In Proc of SigDial. [Cheng et al.2011] Weiwei Cheng, Johannes F¨urnkranz, Eyke H¨ullermeier, and Sang-Hyeun Park. 2011. Preference-based policy iteration: Leveraging preference learning for reinforcement learning. In Machine learning and knowledge discovery in databases. Springer. [Cho et al.2014] Kyunghyun Cho, Bart van Merri¨enboer Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. [Dai et al.2013] Peng Dai, Christopher H Lin, Daniel S Weld, et al. 2013. Pomdp-based control of workflows for crowdsourcing. Artificial Intelligence, 202. [Daniel et al.2014] Christian Daniel, Malte Viering, Jan Metz, Oliver Kroemer, and Jan Peters. 2014. Active reward learning. In Proc of RSS. [Daubigney et al.2014] Lucie Daubigney, Matthieu Geist, Senthilkumar Chandramohan, and Olivier Pietquin. 2014. A comprehensive reinforcement learning framework for dialogue management optimisation. Journal of Selected Topics in Signal Processing, 6(8). [El Asri et al.2014] Layla El Asri, Romain Laroche, and Olivier Pietquin. 2014. Task completion transfer learning for reward inference. In Proc of MLIS. [Gaˇsi´c and Young2014] Milica Gaˇsi´c and Steve Young. 2014. Gaussian processes for pomdp-based dialogue manager optimization. TASLP, 22(1):28–40. [Gaˇsi´c et al.2011] Milica Gaˇsi´c, Filip Jurcicek, Blaise. Thomson, Kai Yu, and Steve Young. 2011. Online policy optimisation of spoken dialogue systems via live interaction with human subjects. In IEEE ASRU. [Gaˇsi´c et al.2013] Milica Gaˇsi´c, Catherine Breslin, Matthew Henderson, Dongho Kim, Martin Szummer, Blaise Thomson, Pirros Tsiakoulis, and Steve J. Young. 2013. On-line policy optimisation of bayesian spoken dialogue systems via human interaction. In Proc of ICASSP. [Graves et al.2013] Alax Graves, Navdeep Jaitly, and Abdel-rahman Mohamed. 2013. Hybrid speech recognition with deep bidirectional lstm. In IEEE ASRU. 
[Henderson et al.2012] Matthew Henderson, Milica Gaˇsi´c, Blaise Thomson, Pirros Tsiakoulis, Kai Yu, and Steve Young. 2012. Discriminative spoken language understanding using word confusion networks. In IEEE SLT. [Hensman et al.2012] James Hensman, Nicolo Fusi, Ricardo Andrade, Nicolas Durrande, Alan Saul, Max Zwiessele, and Neil D. Lawrence. 2012. GPy: A gaussian process framework in python. http: //github.com/SheffieldML/GPy. [Hochreiter and Schmidhuber1997] Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8). [Kapoor et al.2007] Ashish Kapoor, Kristen Grauman, Raquel Urtasun, and Trevor Darrell. 2007. Active learning with gaussian processes for object categorization. In Proc of ICCV. [Kim et al.2014] Dongho Kim, Catherine Breslin, Pirros Tsiakoulis, Matthew Henderson, and Steve J Young. 2014. Inverse reinforcement learning for micro-turn management. In IEEE SLT. [Larsen2003] L.B. Larsen. 2003. Issues in the evaluation of spoken dialogue systems using objective and subjective measures. In IEEE ASRU. [Levin and Pieraccini1997] Esther Levin and Roberto Pieraccini. 1997. A stochastic model of computerhuman interaction for learning dialogue strategies. Eurospeech. [Levy and Goldberg2014] Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In NIPS. 2440 [Lin et al.2014] Christopher H Lin, Daniel S Weld, et al. 2014. To re (label), or not to re (label). In Second AAAI Conference on Human Computation and Crowdsourcing. [Mikolov et al.2013] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In NIPS. [Nickisch and Rasmussen2008] Hannes Nickisch and Carl Edward Rasmussen. 2008. Approximations for binary gaussian process classification. JMLR, 9(10). [Paek and Pieraccini2008] Tim Paek and Roberto Pieraccini. 2008. Automating spoken dialogue management design using machine learning: An industry perspective. Speech communication, 50. [Rasmussen and Williams2006] Carl Edward Rasmussen and Chris Williams. 2006. Gaussian processes for machine learning. [Rieser and Lemon2011] Verena Rieser and Oliver Lemon. 2011. Learning and evaluation of dialogue strategies for new applications: Empirical methods for optimization from small data sets. Computational Linguistics, 37(1). [Rojas Barahona and Cerisara2014] Lina Maria Rojas Barahona and Christophe Cerisara. 2014. Bayesian Inverse Reinforcement Learning for Modeling Conversational Agents in a Virtual Environment. In Conference on Intelligent Text Processing and Computational Linguistics. [Roy et al.2000] Nicholas Roy, Joelle Pineau, and Sebastian Thrun. 2000. Spoken dialogue management using probabilistic reasoning. In Proc of SigDial. [Russell1998] Stuart Russell. 1998. Learning agents for uncertain environments. In Proc of COLT. [Schatzmann et al.2006] Jost Schatzmann, Karl Weilhammer, Matt Stuttle, and Steve Young. 2006. A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies. The knowledge engineering review, 21(02):97–126. [Serban et al.2015] Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2015. Hierarchical neural network generative models for movie dialogues. arXiv preprint arXiv:1507.04808. [Settles2010] Burr Settles. 2010. Active learning literature survey. Computer Sciences Technical Report 1648. 
[Su et al.2015a] Pei-Hao Su, David Vandyke, Milica Gaˇsi´c, Dongho Kim, Nikola Mrkˇsi´c, Tsung-Hsien Wen, and Steve Young. 2015a. Learning from real users: Rating dialogue success with neural networks for reinforcement learning in spoken dialogue systems. In Proc of Interspeech. [Su et al.2015b] Pei-Hao Su, David Vandyke, Milica Gaˇsi´c, Nikola Mrkˇsi´c, Tsung-Hsien Wen, and Steve Young. 2015b. Reward shaping with recurrent neural networks for speeding up on-line policy learning in spoken dialogue systems. In Proc of SigDial. [Sugiyama et al.2012] Hiroaki Sugiyama, Toyomi Meguro, and Yasuhiro Minami. 2012. Preferencelearning based inverse reinforcement learning for dialog control. In Proc of Interspeech. [Thomson and Young2010] Blaise Thomson and Steve Young. 2010. Bayesian update of dialogue state: A pomdp framework for spoken dialogue systems. Computer Speech and Language, 24:562–588. [Turian et al.2010] Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proc of ACL. [Ultes and Minker2015] Stefan Ultes and Wolfgang Minker. 2015. Quality-adaptive spoken dialogue initiative selection and implications on reward modelling. In Proc of SigDial. [Van der Maaten and Hinton2008] Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. JMLR, 9:85. [Vandyke et al.2015] David Vandyke, Pei-Hao Su, Milica Gaˇsi´c, Nikola Mrkˇsi´c, Tsung-Hsien Wen, and Steve Young. 2015. Multi-domain dialogue success classifiers for policy training. In IEEE ASRU. [Vinyals and Le2015] Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869. [Walker et al.1997] Marilyn A. Walker, Diane J. Litman, Candace A. Kamm, and Alicia Abella. 1997. PARADISE: A framework for evaluating spoken dialogue agents. In Proc of EACL. [Williams and Young2007] Jason D. Williams and Steve Young. 2007. Partially observable Markov decision processes for spoken dialog systems. Computer Speech and Language, 21(2):393–422. [Yang et al.2012] Zhaojun Yang, G Levow, and Helen Meng. 2012. Predicting user satisfaction in spoken dialog system evaluation with collaborative filtering. IEEE Journal of Selected Topics in Signal Processing, 6(99):971–981. [Young et al.2013] Steve Young, Milica Gaˇsic, Blaise Thomson, and Jason Williams. 2013. Pomdp-based statistical spoken dialogue systems: a review. In Proc of IEEE, volume 99, pages 1–20. [Zhang and Chaudhuri2015] Chicheng Zhang and Kamalika Chaudhuri. 2015. Active learning from weak and strong labelers. CoRR, abs/1510.02847. [Zhao et al.2011] Liyue Zhao, Gita Sukthankar, and Rahul Sukthankar. 2011. Incremental relabeling for active learning with noisy crowdsourced annotations. In Proc of PASSAT and Proc of SocialCom. 2441
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2442–2452, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Globally Normalized Transition-Based Neural Networks Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov and Michael Collins∗ Google Inc New York, NY {andor,chrisalberti,djweiss,severyn,apresta,kuzman,slav,mjcollins}@google.com Abstract We introduce a globally normalized transition-based neural network model that achieves state-of-the-art part-ofspeech tagging, dependency parsing and sentence compression results. Our model is a simple feed-forward neural network that operates on a task-specific transition system, yet achieves comparable or better accuracies than recurrent models. We discuss the importance of global as opposed to local normalization: a key insight is that the label bias problem implies that globally normalized models can be strictly more expressive than locally normalized models. 1 Introduction Neural network approaches have taken the field of natural language processing (NLP) by storm. In particular, variants of long short-term memory (LSTM) networks (Hochreiter and Schmidhuber, 1997) have produced impressive results on some of the classic NLP tasks such as part-ofspeech tagging (Ling et al., 2015), syntactic parsing (Vinyals et al., 2015) and semantic role labeling (Zhou and Xu, 2015). One might speculate that it is the recurrent nature of these models that enables these results. In this work we demonstrate that simple feedforward networks without any recurrence can achieve comparable or better accuracies than LSTMs, as long as they are globally normalized. Our model, described in detail in Section 2, uses a transition system (Nivre, 2006) and feature embeddings as introduced by Chen and Manning (2014). We do not use any recurrence, but perform beam search for maintaining multiple hy∗On leave from Columbia University. potheses and introduce global normalization with a conditional random field (CRF) objective (Bottou et al., 1997; Le Cun et al., 1998; Lafferty et al., 2001; Collobert et al., 2011) to overcome the label bias problem that locally normalized models suffer from. Since we use beam inference, we approximate the partition function by summing over the elements in the beam, and use early updates (Collins and Roark, 2004; Zhou et al., 2015). We compute gradients based on this approximate global normalization and perform full backpropagation training of all neural network parameters based on the CRF loss. In Section 3 we revisit the label bias problem and the implication that globally normalized models are strictly more expressive than locally normalized models. Lookahead features can partially mitigate this discrepancy, but cannot fully compensate for it—a point to which we return later. To empirically demonstrate the effectiveness of global normalization, we evaluate our model on part-of-speech tagging, syntactic dependency parsing and sentence compression (Section 4). Our model achieves state-of-the-art accuracy on all of these tasks, matching or outperforming LSTMs while being significantly faster. In particular for dependency parsing on the Wall Street Journal we achieve the best-ever published unlabeled attachment score of 94.61%. As discussed in more detail in Section 5, we also outperform previous structured training approaches used for neural network transition-based parsing. 
Our ablation experiments show that we outperform Weiss et al. (2015) and Alberti et al. (2015) because we do global backpropagation training of all model parameters, while they fix the neural network parameters when training the global part of their model. We also outperform Zhou et al. (2015) despite using a smaller beam. To shed additional light on the label bias problem 2442 in practice, we provide a sentence compression example where the local model completely fails. We then demonstrate that a globally normalized parsing model without any lookahead features is almost as accurate as our best model, while a locally normalized model loses more than 10% absolute in accuracy because it cannot effectively incorporate evidence as it becomes available. Finally, we provide an open-source implementation of our method, called SyntaxNet,1 which we have integrated into the popular TensorFlow2 framework. We also provide a pre-trained, state-of-the art English dependency parser called “Parsey McParseface,” which we tuned for a balance of speed, simplicity, and accuracy. 2 Model At its core, our model is an incremental transitionbased parser (Nivre, 2006). To apply it to different tasks we only need to adjust the transition system and the input features. 2.1 Transition System Given an input x, most often a sentence, we define: • A set of states S(x). • A special start state s† ∈S(x). • A set of allowed decisions A(s, x) for all s ∈ S(x). • A transition function t(s, d, x) returning a new state s′ for any decision d ∈A(s, x). We will use a function ρ(s, d, x; θ) to compute the score of decision d in state s for input x. The vector θ contains the model parameters and we assume that ρ(s, d, x; θ) is differentiable with respect to θ. In this section, for brevity, we will drop the dependence of x in the functions given above, simply writing S, A(s), t(s, d), and ρ(s, d; θ). Throughout this work we will use transition systems in which all complete structures for the same input x have the same number of decisions n(x) (or n for brevity). In dependency parsing for example, this is true for both the arc-standard and arc-eager transition systems (Nivre, 2006), where for a sentence x of length m, the number of decisions for any complete parse is n(x) = 2 × m.3 1http://github.com/tensorflow/models/tree/master/syntaxnet 2http://www.tensorflow.org 3Note that this is not true for the swap transition system defined in Nivre (2009). A complete structure is then a sequence of decision/state pairs (s1, d1) . . . (sn, dn) such that s1 = s†, di ∈S(si) for i = 1 . . . n, and si+1 = t(si, di). We use the notation d1:j to refer to a decision sequence d1 . . . dj. We assume that there is a one-to-one mapping between decision sequences d1:j−1 and states sj: that is, we essentially assume that a state encodes the entire history of decisions. Thus, each state can be reached by a unique decision sequence from s†.4 We will use decision sequences d1:j−1 and states interchangeably: in a slight abuse of notation, we define ρ(d1:j−1, d; θ) to be equal to ρ(s, d; θ) where s is the state reached by the decision sequence d1:j−1. The scoring function ρ(s, d; θ) can be defined in a number of ways. In this work, following Chen and Manning (2014), Weiss et al. (2015), and Zhou et al. (2015), we define it via a feedforward neural network as ρ(s, d; θ) = φ(s; θ(l)) · θ(d). Here θ(l) are the parameters of the neural network, excluding the parameters at the final layer. θ(d) are the final layer parameters for decision d. 
φ(s; θ(l)) is the representation for state s computed by the neural network under parameters θ(l). Note that the score is linear in the parameters θ(d). We next describe how softmax-style normalization can be performed at the local or global level. 2.2 Global vs. Local Normalization In the Chen and Manning (2014) style of greedy neural network parsing, the conditional probability distribution over decisions dj given context d1:j−1 is defined as p(dj|d1:j−1; θ) = exp ρ(d1:j−1, dj; θ) ZL(d1:j−1; θ) , (1) where ZL(d1:j−1; θ) = X d′∈A(d1:j−1) exp ρ(d1:j−1, d′; θ). 4It is straightforward to extend the approach to make use of dynamic programming in the case where the same state can be reached by multiple decision sequences. 2443 Each ZL(d1:j−1; θ) is a local normalization term. The probability of a sequence of decisions d1:n is pL(d1:n) = n Y j=1 p(dj|d1:j−1; θ) = exp Pn j=1 ρ(d1:j−1, dj; θ) Qn j=1 ZL(d1:j−1; θ) . (2) Beam search can be used to attempt to find the maximum of Eq. (2) with respect to d1:n. The additive scores used in beam search are the logsoftmax of each decision, ln p(dj|d1:j−1; θ), not the raw scores ρ(d1:j−1, dj; θ). In contrast, a Conditional Random Field (CRF) defines a distribution pG(d1:n) as follows: pG(d1:n) = exp Pn j=1 ρ(d1:j−1, dj; θ) ZG(θ) , (3) where ZG(θ) = X d′ 1:n∈Dn exp n X j=1 ρ(d′ 1:j−1, d′ j; θ) and Dn is the set of all valid sequences of decisions of length n. ZG(θ) is a global normalization term. The inference problem is now to find argmax d1:n∈Dn pG(d1:n) = argmax d1:n∈Dn n X j=1 ρ(d1:j−1, dj; θ). Beam search can again be used to approximately find the argmax. 2.3 Training Training data consists of inputs x paired with gold decision sequences d∗ 1:n. We use stochastic gradient descent on the negative log-likelihood of the data under the model. Under a locally normalized model, the negative log-likelihood is Llocal(d∗ 1:n; θ) = −ln pL(d∗ 1:n; θ) = (4) − n X j=1 ρ(d∗ 1:j−1, d∗ j; θ) + n X j=1 ln ZL(d∗ 1:j−1; θ), whereas under a globally normalized model it is Lglobal(d∗ 1:n; θ) = −ln pG(d∗ 1:n; θ) = − n X j=1 ρ(d∗ 1:j−1, d∗ j; θ) + ln ZG(θ). (5) A significant practical advantange of the locally normalized cost Eq. (4) is that the local partition function ZL and its derivative can usually be computed efficiently. In contrast, the ZG term in Eq. (5) contains a sum over d′ 1:n ∈Dn that is in many cases intractable. To make learning tractable with the globally normalized model, we use beam search and early updates (Collins and Roark, 2004; Zhou et al., 2015). As the training sequence is being decoded, we keep track of the location of the gold path in the beam. If the gold path falls out of the beam at step j, a stochastic gradient step is taken on the following objective: Lglobal−beam(d∗ 1:j; θ) = − j X i=1 ρ(d∗ 1:i−1, d∗ i ; θ) + ln X d′ 1:j∈Bj exp j X i=1 ρ(d′ 1:i−1, d′ i; θ). (6) Here the set Bj contains all paths in the beam at step j, together with the gold path prefix d∗ 1:j. It is straightforward to derive gradients of the loss in Eq. (6) and to back-propagate gradients to all levels of a neural network defining the score ρ(s, d; θ). If the gold path remains in the beam throughout decoding, a gradient step is performed using Bn, the beam at the end of decoding. 3 The Label Bias Problem Intuitively, we would like the model to be able to revise an earlier decision made during search, when later evidence becomes available that rules out the earlier decision as incorrect. 
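To make the contrast between the locally normalized objective in Eq. (4) and the early-update objective in Eq. (6) concrete, the following numpy sketch computes both losses from raw decision scores ρ. The beam contents and the gold path are assumed to be supplied by the transition-based decoder; in the full system the resulting gradients are back-propagated through the network that produced the scores.

```python
import numpy as np
from scipy.special import logsumexp

def local_nll(step_scores, gold):
    """Eq. (4): sum over steps of -rho(gold decision) + log Z_L, i.e. a
    per-step log-softmax along the gold path. step_scores[j] holds the
    scores rho(d*_{1:j-1}, d) for all allowed decisions d at step j."""
    loss = 0.0
    for scores, d_star in zip(step_scores, gold):
        loss += -scores[d_star] + logsumexp(scores)
    return loss

def global_beam_nll(gold_prefix_score, beam_prefix_scores):
    """Eq. (6): early-update CRF loss at the step where the gold path falls
    out of the beam. beam_prefix_scores are the summed scores of the prefixes
    in the beam B_j; the gold prefix is added before normalisation."""
    all_scores = np.append(beam_prefix_scores, gold_prefix_score)
    return -gold_prefix_score + logsumexp(all_scores)

# Toy example: 3 decisions per step and a 2-step gold prefix.
step_scores = [np.array([1.0, 0.2, -0.5]), np.array([0.1, 2.0, 0.3])]
gold = [0, 1]
print(local_nll(step_scores, gold))
print(global_beam_nll(gold_prefix_score=3.0,
                      beam_prefix_scores=np.array([3.5, 2.8, 2.1, 1.0])))
```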
At first glance, it might appear that a locally normalized model used in conjunction with beam search or exact search is able to revise earlier decisions. However the label bias problem (see Bottou (1991), Collins (1999) pages 222-226, Lafferty et al. (2001), Bottou and LeCun (2005), Smith and Johnson (2007)) means that locally normalized models often have a very weak ability to revise earlier decisions. This section gives a formal perspective on the label bias problem, through a proof that globally normalized models are strictly more expressive than locally normalized models. The theorem was originally proved5 by Smith and Johnson (2007). 5More precisely Smith and Johnson (2007) prove the theorem for models with potential functions of the form ρ(di−1, di, xi); the generalization to potential functions of the form ρ(d1:i−1, di, x1:i) is straightforward. 2444 The example underlying the proof gives a clear illustration of the label bias problem.6 Global Models can be Strictly More Expressive than Local Models Consider a tagging problem where the task is to map an input sequence x1:n to a decision sequence d1:n. First, consider a locally normalized model where we restrict the scoring function to access only the first i input symbols x1:i when scoring decision di. We will return to this restriction soon. The scoring function ρ can be an otherwise arbitrary function of the tuple ⟨d1:i−1, di, x1:i⟩: pL(d1:n|x1:n) = n Y i=1 pL(di|d1:i−1, x1:i) = exp Pn i=1 ρ(d1:i−1, di, x1:i) Qn i=1 ZL(d1:i−1, x1:i) . Second, consider a globally normalized model pG(d1:n|x1:n) = exp Pn i=1 ρ(d1:i−1, di, x1:i) ZG(x1:n) . This model again makes use of a scoring function ρ(d1:i−1, di, x1:i) restricted to the first i input symbols when scoring decision di. Define PL to be the set of all possible distributions pL(d1:n|x1:n) under the local model obtained as the scores ρ vary. Similarly, define PG to be the set of all possible distributions pG(d1:n|x1:n) under the global model. Here a “distribution” is a function from a pair (x1:n, d1:n) to a probability p(d1:n|x1:n). Our main result is the following: Theorem 3.1 See also Smith and Johnson (2007). PL is a strict subset of PG, that is PL ⊊PG. To prove this we will first prove that PL ⊆PG. This step is straightforward. We then show that PG ⊈PL; that is, there are distributions in PG that are not in PL. The proof that PG ⊈PL gives a clear illustration of the label bias problem. Proof that PL ⊆PG: We need to show that for any locally normalized distribution pL, we can construct a globally normalized model pG such 6Smith and Johnson (2007) cite Michael Collins as the source of the example underlying the proof. Note that the theorem refers to conditional models of the form p(d1:n|x1:n) with global or local normalization. Equivalence (or non-equivalence) results for joint models of the form p(d1:n, x1:n) are quite different: for example results from Chi (1999) and Abney et al. (1999) imply that weighted context-free grammars (a globally normalized joint model) and probabilistic context-free grammars (a locally normalized joint model) are equally expressive. that pG = pL. Consider a locally normalized model with scores ρ(d1:i−1, di, x1:i). Define a global model pG with scores ρ′(d1:i−1, di, x1:i) = log pL(di|d1:i−1, x1:i). Then it is easily verified that pG(d1:n|x1:n) = pL(d1:n|x1:n) for all x1:n, d1:n. 
□ In proving PG ⊈PL we will use a simple problem where every example seen in training or test data is one of the following two tagged sentences: x1x2x3 = a b c, d1d2d3 = A B C x1x2x3 = a b e, d1d2d3 = A D E (7) Note that the input x2 = b is ambiguous: it can take tags B or D. This ambiguity is resolved when the next input symbol, c or e, is observed. Now consider a globally normalized model, where the scores ρ(d1:i−1, di, x1:i) are defined as follows. Define T as the set {(A, B), (B, C), (A, D), (D, E)} of bigram tag transitions seen in the data. Similarly, define E as the set {(a, A), (b, B), (c, C), (b, D), (e, E)} of (word, tag) pairs seen in the data. We define ρ(d1:i−1, di, x1:i) (8) = α × J(di−1, di) ∈T K + α × J(xi, di) ∈EK where α is the single scalar parameter of the model, and JπK = 1 if π is true, 0 otherwise. Proof that PG ⊈PL: We will construct a globally normalized model pG such that there is no locally normalized model such that pL = pG. Under the definition in Eq. (8), it is straightforward to show that lim α→∞pG(A B C|a b c) = lim α→∞pG(A D E|a b e) = 1. In contrast, under any definition for ρ(d1:i−1, di, x1:i), we must have pL(A B C|a b c) + pL(A D E|a b e) ≤1 (9) This follows because pL(A B C|a b c) = pL(A|a) × pL(B|A, a b) × pL(C|A B, a b c) and pL(A D E|a b e) = pL(A|a) × pL(D|A, a b) × pL(E|A D, a b e). The inequality pL(B|A, a b) + pL(D|A, a b) ≤1 then immediately implies Eq. (9). 2445 En En-Union CoNLL ’09 Avg Method WSJ News Web QTB Ca Ch Cz En Ge Ja Sp Linear CRF 97.17 97.60 94.58 96.04 98.81 94.45 98.90 97.50 97.14 97.90 98.79 97.17 Ling et al. (2015) 97.78 97.44 94.03 96.18 98.77 94.38 99.00 97.60 97.84 97.06 98.71 97.16 Our Local (B=1) 97.44 97.66 94.46 96.59 98.91 94.56 98.96 97.36 97.35 98.02 98.88 97.29 Our Local (B=8) 97.45 97.69 94.46 96.64 98.88 94.56 98.96 97.40 97.35 98.02 98.89 97.30 Our Global (B=8) 97.44 97.77 94.80 96.86 99.03 94.72 99.02 97.65 97.52 98.37 98.97 97.47 Parsey McParseface 97.52 94.24 96.45 Table 1: Final POS tagging test set results on English WSJ and Treebank Union as well as CoNLL’09. We also show the performance of our pre-trained open source model, “Parsey McParseface.” It follows that for sufficiently large values of α, we have pG(A B C|a b c) + pG(A D E|a b e) > 1, and given Eq. (9) it is impossible to define a locally normalized model with pL(A B C|a b c) = pG(A B C|a b c) and pL(A D E|a b e) = pG(A D E|a b e). □ Under the restriction that scores ρ(d1:i−1, di, x1:i) depend only on the first i input symbols, the globally normalized model is still able to model the data in Eq. (7), while the locally normalized model fails (see Eq. 9). The ambiguity at input symbol b is naturally resolved when the next symbol (c or e) is observed, but the locally normalized model is not able to revise its prediction. It is easy to fix the locally normalized model for the example in Eq. (7) by allowing scores ρ(d1:i−1, di, x1:i+1) that take into account the input symbol xi+1. More generally we can have a model of the form ρ(d1:i−1, di, x1:i+k) where the integer k specifies the amount of lookahead in the model. Such lookahead is common in practice, but insufficient in general. For every amount of lookahead k, we can construct examples that cannot be modeled with a locally normalized model by duplicating the middle input b in (7) k + 1 times. 
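The construction above can be checked numerically. The sketch below enumerates all tag sequences for the two sentences in Eq. (7) under the globally normalized model with the scores of Eq. (8), and shows that pG(A B C | a b c) + pG(A D E | a b e) exceeds 1 as α grows, which by Eq. (9) no locally normalized model restricted to the prefix x1:i can reproduce.

```python
import itertools
import numpy as np

TAGS = ["A", "B", "C", "D", "E"]
T = {("A", "B"), ("B", "C"), ("A", "D"), ("D", "E")}              # tag bigrams seen
E = {("a", "A"), ("b", "B"), ("c", "C"), ("b", "D"), ("e", "E")}  # (word, tag) pairs seen

def score(words, tags, alpha):
    """Sum over i of rho(d_{1:i-1}, d_i, x_{1:i}) as defined in Eq. (8)."""
    s, prev = 0.0, None
    for w, t in zip(words, tags):
        if prev is not None and (prev, t) in T:
            s += alpha
        if (w, t) in E:
            s += alpha
        prev = t
    return s

def p_global(words, tags, alpha):
    """Globally normalized probability p_G(d_{1:n} | x_{1:n}), Eq. (3)."""
    num = np.exp(score(words, tags, alpha))
    Z = sum(np.exp(score(words, cand, alpha))
            for cand in itertools.product(TAGS, repeat=len(words)))
    return num / Z

for alpha in [1.0, 5.0, 10.0]:
    p1 = p_global("abc", ("A", "B", "C"), alpha)
    p2 = p_global("abe", ("A", "D", "E"), alpha)
    print(alpha, round(p1, 4), round(p2, 4), round(p1 + p2, 4))
# As alpha increases, p1 + p2 tends to 2, which is impossible for any locally
# normalized model whose scores see only the prefix x_{1:i} (Eq. 9).
```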
Only a local model with scores ρ(d1:i−1, di, x1:n) that considers the entire input can capture any distribution p(d1:n|x1:n): in this case the decomposition pL(d1:n|x1:n) = Qn i=1 pL(di|d1:i−1, x1:n) makes no independence assumptions. However, increasing the amount of context used as input comes at a cost, requiring more powerful learning algorithms, and potentially more training data. For a detailed analysis of the tradeoffs between structural features in CRFs and more powerful local classifiers without structural constraints, see Liang et al. (2008); in these experiments local classifiers are unable to reach the performance of CRFs on problems such as parsing and named entity recognition where structural constraints are important. Note that there is nothing to preclude an approach that makes use of both global normalization and more powerful scoring functions ρ(d1:i−1, di, x1:n), obtaining the best of both worlds. The experiments that follow make use of both. 4 Experiments To demonstrate the flexibility and modeling power of our approach, we provide experimental results on a diverse set of structured prediction tasks. We apply our approach to POS tagging, syntactic dependency parsing, and sentence compression. While directly optimizing the global model defined by Eq. (5) works well, we found that training the model in two steps achieves the same precision much faster: we first pretrain the network using the local objective given in Eq. (4), and then perform additional training steps using the global objective given in Eq. (6). We pretrain all layers except the softmax layer in this way. We purposefully abstain from complicated hand engineering of input features, which might improve performance further (Durrett and Klein, 2015). We use the training recipe from Weiss et al. (2015) for each training stage of our model. Specifically, we use averaged stochastic gradient descent with momentum, and we tune the learning rate, learning rate schedule, momentum, and early stopping time using a separate held-out corpus for each task. We tune again with a different set of hyperparameters for training with the global objective. 4.1 Part of Speech Tagging Part of speech (POS) tagging is a classic NLP task, where modeling the structure of the output is important for achieving state-of-the-art performance. 2446 Data & Evaluation. We conducted experiments on a number of different datasets: (1) the English Wall Street Journal (WSJ) part of the Penn Treebank (Marcus et al., 1993) with standard POS tagging splits; (2) the English “Treebank Union” multi-domain corpus containing data from the OntoNotes corpus version 5 (Hovy et al., 2006), the English Web Treebank (Petrov and McDonald, 2012), and the updated and corrected Question Treebank (Judge et al., 2006) with identical setup to Weiss et al. (2015); and (3) the CoNLL ’09 multi-lingual shared task (Hajiˇc et al., 2009). Model Configuration. Inspired by the integrated POS tagging and parsing transition system of Bohnet and Nivre (2012), we employ a simple transition system that uses only a SHIFT action and predicts the POS tag of the current word on the buffer as it gets shifted to the stack. We extract the following features on a window ±3 tokens centered at the current focus token: word, cluster, character n-gram up to length 3. We also extract the tag predicted for the previous 4 tokens. The network in these experiments has a single hidden layer with 256 units on WSJ and Treebank Union and 64 on CoNLL’09. Results. 
In Table 1 we compare our model to a linear CRF and to the compositional characterto-word LSTM model of Ling et al. (2015). The CRF is a first-order linear model with exact inference and the same emission features as our model. It additionally also has transition features of the word, cluster and character n-gram up to length 3 on both endpoints of the transition. The results for Ling et al. (2015) were solicited from the authors. Our local model already compares favorably against these methods on average. Using beam search with a locally normalized model does not help, but with global normalization it leads to a 7% reduction in relative error, empirically demonstrating the effect of label bias. The set of character ngrams feature is very important, increasing average accuracy on the CoNLL’09 datasets by about 0.5% absolute. This shows that characterlevel modeling can also be done with a simple feed-forward network without recurrence. 4.2 Dependency Parsing In dependency parsing the goal is to produce a directed tree representing the syntactic structure of the input sentence. Data & Evaluation. We use the same corpora as in our POS tagging experiments, except that we use the standard parsing splits of the WSJ. To avoid over-fitting to the development set (Sec. 22), we use Sec. 24 for tuning the hyperparameters of our models. We convert the English constituency trees to Stanford style dependencies (De Marneffe et al., 2006) using version 3.3.0 of the converter. For English, we use predicted POS tags (the same POS tags are used for all models) and exclude punctuation from the evaluation, as is standard. For the CoNLL ’09 datasets we follow standard practice and include all punctuation in the evaluation. We follow Alberti et al. (2015) and use our own predicted POS tags so that we can include a k-best tag feature (see below) but use the supplied predicted morphological features. We report unlabeled and labeled attachment scores (UAS/LAS). Model Configuration. Our model configuration is basically the same as the one originally proposed by Chen and Manning (2014) and then refined by Weiss et al. (2015). In particular, we use the arc-standard transition system and extract the same set of features as prior work: words, part of speech tags, and dependency arcs and labels in the surrounding context of the state, as well as k-best tags as proposed by Alberti et al. (2015). We use two hidden layers of 1,024 dimensions each. Results. Tables 2 and 3 show our final parsing results and a comparison to the best systems from the literature. We obtain the best ever published results on almost all datasets, including the WSJ. Our main results use the same pre-trained word embeddings as Weiss et al. (2015) and Alberti et al. (2015), but no tri-training. When we artificially restrict ourselves to not use pre-trained word embeddings, we observe only a modest drop of ∼0.5% UAS; for example, training only on the WSJ yields 94.08% UAS and 92.15% LAS for our global model with a beam of size 32. Even though we do not use tri-training, our model compares favorably to the 94.26% LAS and 92.41% UAS reported by Weiss et al. (2015) with tri-training. As we show in Sec. 5, these gains can be attributed to the full backpropagation training that differentiates our approach from that of Weiss et al. (2015) and Alberti et al. (2015). Our results also significantly outperform the LSTM-based approaches of Dyer et al. (2015) and Ballesteros et al. (2015). 
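For completeness, a minimal unlabeled version of the arc-standard transition system used in these parsing experiments is sketched below; the full model additionally predicts arc labels and scores every decision with the feed-forward network of Section 2. The sketch is for illustration only and is not taken from the released SyntaxNet code.

```python
class ArcStandardState:
    """Minimal unlabeled arc-standard parser state: a stack, a buffer and
    the set of dependency arcs (head, modifier) built so far."""
    def __init__(self, n_words):
        self.stack = []
        self.buffer = list(range(n_words))    # token indices, left to right
        self.arcs = []

    def allowed(self):
        acts = []
        if self.buffer:
            acts.append("SHIFT")
        if len(self.stack) >= 2:
            acts.extend(["LEFT-ARC", "RIGHT-ARC"])
        return acts

    def apply(self, action):
        if action == "SHIFT":
            self.stack.append(self.buffer.pop(0))
        elif action == "LEFT-ARC":            # top of stack becomes head of the item below it
            s0, s1 = self.stack[-1], self.stack[-2]
            self.arcs.append((s0, s1))
            del self.stack[-2]
        elif action == "RIGHT-ARC":           # item below the top becomes head of the top
            s0, s1 = self.stack[-1], self.stack[-2]
            self.arcs.append((s1, s0))
            self.stack.pop()
        return self

# Toy run on a 3-word sentence (attachment to an artificial ROOT omitted for brevity).
state = ArcStandardState(3)
for a in ["SHIFT", "SHIFT", "LEFT-ARC", "SHIFT", "RIGHT-ARC"]:
    assert a in state.allowed()
    state.apply(a)
print(state.arcs)   # [(1, 0), (1, 2)]
```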
2447 WSJ Union-News Union-Web Union-QTB Method UAS LAS UAS LAS UAS LAS UAS LAS Martins et al. (2013)⋆ 92.89 90.55 93.10 91.13 88.23 85.04 94.21 91.54 Zhang and McDonald (2014)⋆ 93.22 91.02 93.32 91.48 88.65 85.59 93.37 90.69 Weiss et al. (2015) 93.99 92.05 93.91 92.25 89.29 86.44 94.17 92.06 Alberti et al. (2015) 94.23 92.36 94.10 92.55 89.55 86.85 94.74 93.04 Our Local (B=1) 92.95 91.02 93.11 91.46 88.42 85.58 92.49 90.38 Our Local (B=32) 93.59 91.70 93.65 92.03 88.96 86.17 93.22 91.17 Our Global (B=32) 94.61 92.79 94.44 92.93 90.17 87.54 95.40 93.64 Parsey McParseface (B=8) 94.15 92.51 89.08 86.29 94.77 93.17 Table 2: Final English dependency parsing test set results. We note that training our system using only the WSJ corpus (i.e. no pre-trained embeddings or other external resources) yields 94.08% UAS and 92.15% LAS for our global model with beam 32. Catalan Chinese Czech English German Japanese Spanish Method UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS Best Shared Task Result 87.86 79.17 80.38 89.88 87.48 92.57 87.64 Ballesteros et al. (2015) 90.22 86.42 80.64 76.52 79.87 73.62 90.56 88.01 88.83 86.10 93.47 92.55 90.38 86.59 Zhang and McDonald (2014) 91.41 87.91 82.87 78.57 86.62 80.59 92.69 90.01 89.88 87.38 92.82 91.87 90.82 87.34 Lei et al. (2014) 91.33 87.22 81.67 76.71 88.76 81.77 92.75 90.00 90.81 87.81 94.04 91.84 91.16 87.38 Bohnet and Nivre (2012) 92.44 89.60 82.52 78.51 88.82 83.73 92.87 90.60 91.37 89.38 93.67 92.63 92.24 89.60 Alberti et al. (2015) 92.31 89.17 83.57 79.90 88.45 83.57 92.70 90.56 90.58 88.20 93.99 93.10 92.26 89.33 Our Local (B=1) 91.24 88.21 81.29 77.29 85.78 80.63 91.44 89.29 89.12 86.95 93.71 92.85 91.01 88.14 Our Local (B=16) 91.91 88.93 82.22 78.26 86.25 81.28 92.16 90.05 89.53 87.4 93.61 92.74 91.64 88.88 Our Global (B=16) 92.67 89.83 84.72 80.85 88.94 84.56 93.22 91.23 90.91 89.15 93.65 92.84 92.62 89.95 Table 3: Final CoNLL ’09 dependency parsing test set results. 4.3 Sentence Compression Our final structured prediction task is extractive sentence compression. Data & Evaluation. We follow Filippova et al. (2015), where a large news collection is used to heuristically generate compression instances. Our final corpus contains about 2.3M compression instances: we use 2M examples for training, 130k for development and 160k for the final test. We report per-token F1 score and per-sentence accuracy (A), i.e. percentage of instances that fully match the golden compressions. Following Filippova et al. (2015) we also run a human evaluation on 200 sentences where we ask the raters to score compressions for readability (read) and informativeness (info) on a scale from 0 to 5. Model Configuration. The transition system for sentence compression is similar to POS tagging: we scan sentences from left-to-right and label each token as keep or drop. We extract features from words, POS tags, and dependency labels from a window of tokens centered on the input, as well as features from the history of predictions. We use a single hidden layer of size 400. Generated corpus Human eval Method A F1 read info Filippova et al. (2015) 35.36 82.83 4.66 4.03 Automatic 4.31 3.77 Our Local (B=1) 30.51 78.72 4.58 4.03 Our Local (B=8) 31.19 75.69 Our Global (B=8) 35.16 81.41 4.67 4.07 Table 4: Sentence compression results on News data. Automatic refers to application of the same automatic extraction rules used to generate the News training corpus. Results. Table 4 shows our sentence compression results. 
Our globally normalized model again significantly outperforms the local model. Beam search with a locally normalized model suffers from severe label bias issues that we discuss on a concrete example in Section 5. We also compare to the sentence compression system from Filippova et al. (2015), a 3-layer stacked LSTM which uses dependency label information. The LSTM and our global model perform on par on both the automatic evaluation as well as the human ratings, but our model is roughly 100× faster. All compressions kept approximately 42% of the tokens on average and all the models are significantly better than the automatic extractions (p < 0.05). 2448 5 Discussion We derived a proof for the label bias problem and the advantages of global models. We then emprirically verified this theoretical superiority by demonstrating state-of-the-art performance on three different tasks. In this section we situate and compare our model to previous work and provide two examples of the label bias problem in practice. 5.1 Related Neural CRF Work Neural network models have been been combined with conditional random fields and globally normalized models before. Bottou et al. (1997) and Le Cun et al. (1998) describe global training of neural network models for structured prediction problems. Peng et al. (2009) add a non-linear neural network layer to a linear-chain CRF and Do and Artires (2010) apply a similar approach to more general Markov network structures. Yao et al. (2014) and Zheng et al. (2015) introduce recurrence into the model and Huang et al. (2015) finally combine CRFs and LSTMs. These neural CRF models are limited to sequence labeling tasks where exact inference is possible, while our model works well when exact inference is intractable. 5.2 Related Transition-Based Parsing Work For early work on neural-networks for transitionbased parsing, see Henderson (2003; 2004). Our work is closest to the work of Weiss et al. (2015), Zhou et al. (2015) and Watanabe and Sumita (2015); in these approaches global normalization is added to the local model of Chen and Manning (2014). Empirically, Weiss et al. (2015) achieves the best performance, even though their model keeps the parameters of the locally normalized neural network fixed and only trains a perceptron that uses the activations as features. Their model is therefore limited in its ability to revise the predictions of the locally normalized model. In Table 5 we show that full backpropagation training all the way to the word embeddings is very important and significantly contributes to the performance of our model. We also compared training under the CRF objective with a Perceptron-like hinge loss between the gold and best elements of the beam. When we limited the backpropagation depth to training only the top layer θ(d), we found negligible differences in accuracy: 93.20% and 93.28% for the CRF objective and hinge loss respectively. However, when training with full backMethod UAS LAS Local (B=1) 92.85 90.59 Local (B=16) 93.32 91.09 Global (B=16) {θ(d)} 93.45 91.21 Global (B=16) {W2, θ(d)} 94.01 91.77 Global (B=16) {W1, W2, θ(d)} 94.09 91.81 Global (B=16) (full) 94.38 92.17 Table 5: WSJ dev set scores for successively deeper levels of backpropagation. The full parameter set corresponds to backpropagation all the way to the embeddings. Wi: hidden layer i weights. propagation the CRF accuracy is 0.2% higher and training converged more than 4× faster. Zhou et al. 
(2015) perform full backpropagation training like us, but even with a much larger beam, their performance is significantly lower than ours. We also apply our model to two additional tasks, while they experiment only with dependency parsing. Finally, Watanabe and Sumita (2015) introduce recurrent components and additional techniques like max-violation updates for a corresponding constituency parsing model. In contrast, our model does not require any recurrence or specialized training. 5.3 Label Bias in Practice We observed several instances of severe label bias in the sentence compression task. Although using beam search with the local model outperforms greedy inference on average, beam search leads the local model to occasionally produce empty compressions (Table 6). It is important to note that these are not search errors: the empty compression has higher probability under pL than the prediction from greedy inference. However, the more expressive globally normalized model does not suffer from this limitation, and correctly gives the empty compression almost zero probability. We also present some empirical evidence that the label bias problem is severe in parsing. We trained models where the scoring functions in parsing at position i in the sentence are limited to considering only tokens x1:i; hence unlike the full parsing model, there is no ability to look ahead in the sentence when making a decision.7 The result for a greedy model under this constraint 7This setting may be important in some applications, where for example parse structures for sentence prefixes are required, or where the input is received one word at a time and online processing is beneficial. 2449 Method Predicted compression pL pG Local (B=1) In Pakistan, former leader Pervez Musharraf has appeared in court for the first time, on treason charges. 0.13 0.05 Local (B=8) In Pakistan, former leader Pervez Musharraf has appeared in court for the first time, on treason charges. 0.16 <10−4 Global (B=8) In Pakistan, former leader Pervez Musharraf has appeared in court for the first time, on treason charges. 0.06 0.07 Table 6: Example sentence compressions where the label bias of the locally normalized model leads to a breakdown during beam search. The probability of each compression under the local (pL) and global (pG) models shows that only the global model can properly represent zero probability for the empty compression. is 76.96% UAS; for a locally normalized model with beam search is 81.35%; and for a globally normalized model is 93.60%. Thus the globally normalized model gets very close to the performance of a model with full lookahead, while the locally normalized model with a beam gives dramatically lower performance. In our final experiments with full lookahead, the globally normalized model achieves 94.01% accuracy, compared to 93.07% accuracy for a local model with beam search. Thus adding lookahead allows the local model to close the gap in performance to the global model; however there is still a significant difference in accuracy, which may in large part be due to the label bias problem. A number of authors have considered modified training procedures for greedy models, or for locally normalized models. Daum´e III et al. (2009) introduce Searn, an algorithm that allows a classifier making greedy decisions to become more robust to errors made in previous decisions. 
Goldberg and Nivre (2013) describe improvements to a greedy parsing approach that makes use of methods from imitation learning (Ross et al., 2011) to augment the training set. Note that these methods are focused on greedy models: they are unlikely to solve the label bias problem when used in conjunction with beam search, given that the problem is one of expressivity of the underlying model. More recent work (Yazdani and Henderson, 2015; Vaswani and Sagae, 2016) has augmented locally normalized models with correctness probabilities or error states, effectively adding a step after every decision where the probability of correctness of the resulting structure is evaluated. This gives considerable gains over a locally normalized model, although performance is lower than our full globally normalized approach. 6 Conclusions We presented a simple and yet powerful model architecture that produces state-of-the-art results for POS tagging, dependency parsing and sentence compression. Our model combines the flexibility of transition-based algorithms and the modeling power of neural networks. Our results demonstrate that feed-forward network without recurrence can outperform recurrent models such as LSTMs when they are trained with global normalization. We further support our empirical findings with a proof showing that global normalization helps the model overcome the label bias problem from which locally normalized models suffer. Acknowledgements We would like to thank Ling Wang for training his C2W part-of-speech tagger on our setup, and Emily Pitler, Ryan McDonald, Greg Coppola and Fernando Pereira for tremendously helpful discussions. Finally, we are grateful to all members of the Google Parsing Team. References Steven Abney, David McAllester, and Fernando Pereira. 1999. Relating probabilistic grammars and automata. Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 131–160. Chris Alberti, David Weiss, Greg Coppola, and Slav Petrov. 2015. Improved transition-based parsing and tagging with neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1354–1359. Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2015. Improved transition-based parsing by modeling characters instead of words with LSTMs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 349–359. Bernd Bohnet and Joakim Nivre. 2012. A transitionbased system for joint part-of-speech tagging and labeled non-projective dependency parsing. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1455–1465. 2450 L´eon Bottou and Yann LeCun. 2005. Graph transformer networks for image recognition. Bulletin of the International Statistical Institute (ISI). L´eon Bottou, Yann Le Cun, and Yoshua Bengio. 1997. Global training of document processing systems using graph transformer networks. In Proceedings of Computer Vision and Pattern Recognition (CVPR), pages 489–493. L´eon Bottou. 1991. Une approche th´eorique de lapprentissage connexionniste: Applications `a la reconnaissance de la parole. Ph.D. thesis, Doctoral dissertation, Universite de Paris XI. Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 740–750. Zhiyi Chi. 1999. 
Statistical properties of probabilistic context-free grammars. Computational Linguistics, pages 131–160. Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL’04), pages 111–118. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537. Hal Daum´e III, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine Learning Journal (MLJ), 75(3):297–325. Marie-Catherine De Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of Fifth International Conference on Language Resources and Evaluation, pages 449– 454. Trinh Minh Tri Do and Thierry Artires. 2010. Neural conditional random fields. In International Conference on Artificial Intelligence and Statistics, volume 9, pages 177–184. Greg Durrett and Dan Klein. 2015. Neural crf parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 302–312. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pages 334–343. Katja Filippova, Enrique Alfonseca, Carlos A. Colmenares, Łukasz Kaiser, and Oriol Vinyals. 2015. Sentence compression by deletion with lstms. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 360–368. Yoav Goldberg and Joakim Nivre. 2013. Training deterministic parsers with non-deterministic oracles. Transactions of the Association for Computational Linguistics, 1:403–414. Jan Hajiˇc, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, Maria Ant`onia Mart´ı, Llu´ıs M`arquez, Adam Meyers, Joakim Nivre, Sebastian Pad´o, Jan ˇStˇep´anek, Pavel Straˇn´ak, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2009. The conll-2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning: Shared Task, pages 1–18. James Henderson. 2003. Inducing history representations for broad coverage statistical parsing. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 24–31. James Henderson. 2004. Discriminative training of a neural network statistical parser. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL’04), pages 95–102. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: The 90% solution. In Proceedings of the Human Language Technology Conference of the NAACL, Short Papers, pages 57–60. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991. John Judge, Aoife Cahill, and Josef van Genabith. 2006. 
Questionbank: Creating a corpus of parseannotated questions. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 497–504. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 282–289. 2451 Yann Le Cun, L´eon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient based learning applied to document recognition. Proceedings of IEEE, 86(11):2278–2324. Tao Lei, Yu Xin, Yuan Zhang, Regina Barzilay, and Tommi Jaakkola. 2014. Low-rank tensors for scoring dependency structures. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1381–1391. Percy Liang, Hal Daum´e, III, and Dan Klein. 2008. Structure compilation: Trading structure for features. In Proceedings of the 25th International Conference on Machine Learning, pages 592–599. Wang Ling, Chris Dyer, Alan W Black, Isabel Trancoso, Ramon Fermandez, Silvio Amir, Luis Marujo, and Tiago Luis. 2015. Finding function in form: Compositional character models for open vocabulary word representation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1520–1530. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Andre Martins, Miguel Almeida, and Noah A. Smith. 2013. Turning on the turbo: Fast third-order nonprojective turbo parsers. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 617–622. Joakim Nivre. 2006. Inductive Dependency Parsing. Springer-Verlag New York, Inc. Joakim Nivre. 2009. Non-projective dependency parsing in expected linear time. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 351–359. Jian Peng, Liefeng Bo, and Jinbo Xu. 2009. Conditional neural fields. In Advances in Neural Information Processing Systems 22, pages 1419–1427. Slav Petrov and Ryan McDonald. 2012. Overview of the 2012 shared task on parsing the web. Notes of the First Workshop on Syntactic Analysis of NonCanonical Language (SANCL). St´ephane Ross, Geoffrey J. Gordon, and J. Andrew Bagnell. 2011. No-regret reductions for imitation learning and structured prediction. AISTATS. Noah Smith and Mark Johnson. 2007. Weighted and probabilistic context-free grammars are equally expressive. Computational Linguistics, pages 477– 491. Ashish Vaswani and Kenji Sagae. 2016. Efficient structured inference for transition-based parsing with neural networks and error states. Transactions of the Association for Computational Linguistics, 4:183–196. Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Advances in Neural Information Processing Systems 28, pages 2755– 2763. Taro Watanabe and Eiichiro Sumita. 2015. Transitionbased neural constituent parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1169–1179. David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. 
Structured training for neural network transition-based parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pages 323–333. Kaisheng Yao, Baolin Peng, Geoffrey Zweig, Dong Yu, Xiaolong Li, and Feng Gao. 2014. Recurrent conditional random field for language understanding. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP ’14). Majid Yazdani and James Henderson. 2015. Incremental recurrent neural network dependency parser with search-based discriminative training. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning, pages 142–152. Hao Zhang and Ryan McDonald. 2014. Enforcing structural diversity in cube-pruned dependency parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 656–661. Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, and Philip H. S. Torr. 2015. Conditional random fields as recurrent neural networks. In The IEEE International Conference on Computer Vision (ICCV), pages 1529–1537. Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1127–1137. Hao Zhou, Yue Zhang, and Jiajun Chen. 2015. A neural probabilistic structured-prediction model for transition-based dependency parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pages 1213–1222. 2452
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 247–257, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics On the Role of Seed Lexicons in Learning Bilingual Word Embeddings Ivan Vuli´c and Anna Korhonen Language Technology Lab DTAL, University of Cambridge {iv250, alk23}@cam.ac.uk Abstract A shared bilingual word embedding space (SBWES) is an indispensable resource in a variety of cross-language NLP and IR tasks. A common approach to the SBWES induction is to learn a mapping function between monolingual semantic spaces, where the mapping critically relies on a seed word lexicon used in the learning process. In this work, we analyze the importance and properties of seed lexicons for the SBWES induction across different dimensions (i.e., lexicon source, lexicon size, translation method, translation pair reliability). On the basis of our analysis, we propose a simple but effective hybrid bilingual word embedding (BWE) model. This model (HYBWE) learns the mapping between two monolingual embedding spaces using only highly reliable symmetric translation pairs from a seed document-level embedding space. We perform bilingual lexicon learning (BLL) with 3 language pairs and show that by carefully selecting reliable translation pairs our new HYBWE model outperforms benchmarking BWE learning models, all of which use more expensive bilingual signals. Effectively, we demonstrate that a SBWES may be induced by leveraging only a very weak bilingual signal (document alignments) along with monolingual data. 1 Introduction Dense real-valued vector representations of words or word embeddings (WEs) have recently gained increasing popularity in natural language processing (NLP), serving as invaluable features in a broad Monolingual vs Bilingual Figure 1: A toy example of a 3-dimensional monolingual vs shared bilingual word embedding space (further SBWES) from Gouws et al. (2015). range of NLP tasks, e.g., (Turian et al., 2010; Collobert et al., 2011; Chen and Manning, 2014). Several studies have showcased a direct link and comparable performance to “more traditional” distributional models (Turney and Pantel, 2010). Yet the widely used skip-gram model with negative sampling (SGNS) (Mikolov et al., 2013b) is considered as the state-of-the-art word representation model, due to its simplicity, fast training, as well as its solid and robust performance across a wide variety of semantic tasks (Baroni et al., 2014; Levy and Goldberg, 2014b; Levy et al., 2015). Research interest has recently extended to bilingual word embeddings (BWEs). BWE learning models focus on the induction of a shared bilingual word embedding space (SBWES) where words from both languages are represented in a uniform language-independent manner such that similar words (regardless of the actual language) have similar representations (see Fig. 1). A variety of BWE learning models have been proposed, differing in the essential requirement of a bilingual signal necessary to construct such a SBWES (discussed later in Sect. 2). 
SBWES may be used to support many tasks, e.g., computing cross-lingual/multilingual semantic word similarity (Faruqui and Dyer, 2014), learning bilingual word lexicons (Mikolov et al., 2013a; Gouws et al., 2015; Vuli´c et al., 2016), cross-lingual entity linking (Tsai and Roth, 2016), 247 parsing (Guo et al., 2015; Johannsen et al., 2015), machine translation (Zou et al., 2013), or crosslingual information retrieval (Vuli´c and Moens, 2015; Mitra et al., 2016). BWE models should have two desirable properties: (P1) leverage (large) monolingual training sets tied together through a bilingual signal, (P2) use as inexpensive bilingual signal as possible in order to learn a SBWES in a scalable and widely applicable manner across languages and domains. While we provide a classification of related work, that is, different BWE models according to these properties in Sect. 2.1, the focus of this work is on a popular class of models labeled Post-Hoc Mapping with Seed Lexicons. These models operate as follows (Mikolov et al., 2013a; Dinu et al., 2015; Lazaridou et al., 2015; Ammar et al., 2016): (1) two separate non-aligned monolingual embedding spaces are induced using any monolingual WE learning model (SGNS is the typical choice), (2) given a seed lexicon of word translation pairs as the bilingual signal for training, a mapping function is learned which ties the two monolingual spaces together into a SBWES. All existing work on this class of models assumes that high-quality training seed lexicons are readily available. In reality, little is understood regarding what constitutes a high quality seed lexicon, even with “traditional” distributional models (Gaussier et al., 2004; Holmlund et al., 2005; Vuli´c and Moens, 2013). Therefore, in this work we ask whether BWE learning could be improved by making more intelligent choices when deciding over seed lexicon entries. In order to do this we delve deeper into the cross-lingual mapping problem by analyzing a spectrum of seed lexicons with respect to controllable parameters such as lexicon source, its size, translation method, and translation pair reliability. The contributions of this paper are as follows: (C1) We present a systematic study on the importance of seed lexicons for learning mapping functions between monolingual WE spaces. (C2) Given the insights gained, we propose a simple yet effective hybrid BWE model HYBWE that removes the need for readily available seed lexicons, and satisfies properties P1 and P2. HYBWE relies on an inexpensive seed lexicon of highly reliable word translation pairs obtained by a documentlevel BWE model (Vuli´c and Moens, 2016) from document-aligned comparable data. (C3) Using a careful pair selection process when constructing a seed lexicon, we show that in the BLL task HYBWE outperforms a BWE model of Mikolov et al. (2013a) which relies on readily available seed lexicons. HYBWE also outperforms state-of-the-art models of (Hermann and Blunsom, 2014b; Gouws et al., 2015) which require sentencealigned parallel data. 2 Learning SBWES using Seed Lexicons Given source and target language vocabularies V S and V T , all BWE models learn a representation of each word w ∈V S ⊔V T in a SBWES as a realvalued vector: w = [f1, . . . , fd], where fk ∈R denotes the value for the k-th cross-lingual feature for w within a d-dimensional SBWES. Semantic similarity sim(w, v) between two words w, v ∈V S ⊔V T is then computed by applying a similarity function (SF), e.g. 
cosine (cos) on their representations in the SBWES: sim(w, v) = SF(w, v) = cos(w, v). 2.1 Related Work: BWE Models and Bilingual Signals BWE models may be clustered into four different types according to bilingual signals used in training, and properties P1 and P2 (see Sect. 1). Upadhyay et al. (2016) provide a similar overview of recent bilingual embedding learning architectures regarding different bilingual signals required for the embedding induction. (Type 1) Parallel-Only: This group of BWE models relies on sentence-aligned and/or word-aligned parallel data as the only data source (Zou et al., 2013; Hermann and Blunsom, 2014a; Koˇciský et al., 2014; Hermann and Blunsom, 2014b; Chandar et al., 2014). In addition to an expensive bilingual signal (colliding with P2), these models do not leverage larger monolingual datasets for training (not satisfying P1). (Type 2) Joint Bilingual Training: These models jointly optimize two monolingual objectives, with the cross-lingual objective acting as a cross-lingual regularizer during training (Klementiev et al., 2012; Gouws et al., 2015; Soyer et al., 2015; Shi et al., 2015; Coulmance et al., 2015). The idea may be summarized by the simplified formulation (Luong et al., 2015): γ(MonoS+MonoT )+δBi. The monolingual objectives MonoS and MonoT ensure that similar words in each language are assigned similar 248 embeddings and aim to capture the semantic structure of each language, whereas the cross-lingual objective Bi ensures that similar words across languages are assigned similar embeddings. It ties the two monolingual spaces together into a SBWES (thus satisfying P1). Parameters γ and δ govern the influence of the monolingual and bilingual components.1 The main disadvantage of Type 2 models is the costly parallel data needed for the bilingual signal (thus colliding with P2). (Type 3) Pseudo-Bilingual Training: This set of models requires document alignments as bilingual signal to induce a SBWES. Vuli´c and Moens (2016) create a collection of pseudo-bilingual documents by merging every pair of aligned documents in training data, in a way that preserves important local information: words that appeared next to other words within the same language and those that appeared in the same region of the document across different languages. This collection is then used to train word embeddings with monolingual SGNS from word2vec. With pseudo-bilingual documents, the “context” of a word is redefined as a mixture of neighbouring words (in the original language) and words that appeared in the same region of the document (in the ”foreign” language). The bilingual contexts for each word in each document steer the final model towards constructing a SBWES. The advantage over other BWE model types lies in exploiting weaker document-level bilingual signals (satisfying P2), but these models are unable to exploit monolingual corpora during training (unlike Type 2 or Type 4; thus colliding with P1). (Type 4) Post-Hoc Mapping with Seed Lexicons: These models learn post-hoc mapping functions between monolingual WE spaces induced separately for two different languages (e.g., by SGNS). All Type 4 models (Mikolov et al., 2013a; Faruqui and Dyer, 2014; Dinu et al., 2015; Lazaridou et al., 2015) rely on readily available seed lexicons of highly frequent words obtained by e.g. Google Translate (GT) to learn the mapping (again colliding with P2), but they are able to satisfy P1. 
1Type 1 models may be considered a special case of Type 2 models: Setting γ = 0 reduces Type 2 models to Type 1 models trained solely on parallel data, e.g., (Hermann and Blunsom, 2014b; Chandar et al., 2014). γ = 1 results in the models from (Klementiev et al., 2012; Gouws et al., 2015; Soyer et al., 2015; Coulmance et al., 2015). 2.2 Post-Hoc Mapping with Seed Lexicons: Methodology and Lexicons Key Intuition One may infer that a type-hybrid procedure which would retain only highly reliable translation pairs obtained by a Type 3 model as a seed lexicon for Type 4 models effectively satisfies both requirements: (P1) unlike Type 1 and Type 3, it can learn from monolingual data and tie two monolingual spaces using the highly reliable translation pairs, (P2) unlike Type 1 and Type 2, it does not require parallel data; unlike Type 4, it does not require external lexicons and translation systems. The only bilingual signal required are document alignments. Therefore, our focus is on novel less expensive Type 4 models. Overview The standard learning setup we use is as follows: First, two monolingual embedding spaces, RdS and RdT , are induced separately in each of the two languages using a standard monolingual WE model such as CBOW or SGNS. dS and dT denote the dimensionality of monolingual WE spaces. The bilingual signal is a seed lexicon, i.e., a list of word translation pairs (xi, yi), where xi ∈V S, yi ∈V T , and xi ∈RdS, yi ∈RdT . Learning Objectives Training is cast as a multivariate regression problem: it implies learning a function that maps the source language vectors from the training data to their corresponding target language vectors. A standard approach (Mikolov et al., 2013a; Dinu et al., 2015) is to assume a linear map W ∈RdS×dT , where a L2-regularized least-squares error objective (i.e., ridge regression) is used to learn the map W. The map is learned by solving the following optimization problem (typically by stochastic gradient descent (SGD)): min W∈RdS×dT ||XW −Y||2 F + λ||W||2 F (1) X and Y are matrices obtained through the respective concatenation of source language and target language vectors from training pairs. Once the linear map W is estimated, any previously unseen source language word vector xu may be straightforwardly mapped into the target language embedding space RdT as Wxu. After mapping all vectors x, x ∈V S, the target embedding space RdT in fact serves as SBWES.2 2Another possible objective (found in the zero-shot learning literature) is a margin-based ranking loss (Weston et al., 2011; Lazaridou et al., 2015). We omit the results with this objective for brevity, and due to the fact that similar trends are observed as with (more standard) linear maps. 249 Seed Lexicon Source and Translation Method Prior work on post-hoc mapping with seed lexicons used a translation system (i.e., GT) to translate highly frequent English words to other languages such as Czech, Spanish (Mikolov et al., 2013a; Gouws et al., 2015) or Italian (Dinu et al., 2015; Lazaridou et al., 2015). This method presupposes the availability and high quality of such an external translation system. To simulate this setup, we take as a starting point the BNC word frequency list from Kilgarriff (1997) containing 6, 318 most frequent English lemmas. The list is then translated to other languages via GT. We call the BNC-based lexicons obtained by employing Google Translate BNC+GT. 
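Equation (1) is a standard ridge regression problem; for a concrete picture, the following sketch learns the map W in closed form and uses it for cross-lingual nearest-neighbour retrieval (the basis of the BLL prediction). This is an illustration under our own assumptions, with random vectors standing in for the monolingual SGNS spaces and function names of our choosing; it is not the authors' implementation, which optimizes Eq. (1) with SGD.

```python
import numpy as np

def learn_map(X, Y, lam=1.0):
    """Closed-form ridge regression solution to Eq. (1):
    W = argmin ||XW - Y||_F^2 + lam * ||W||_F^2.
    X: (n_pairs, d_S) source vectors, Y: (n_pairs, d_T) target vectors."""
    d_s = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d_s), X.T @ Y)

def translate(x, W, target_matrix, target_words, k=5):
    """Map a source vector into the target space and return the k
    cosine-nearest target words (the top one is the BLL prediction)."""
    mapped = x @ W
    t = target_matrix / np.linalg.norm(target_matrix, axis=1, keepdims=True)
    sims = t @ (mapped / np.linalg.norm(mapped))
    top = np.argsort(-sims)[:k]
    return [(target_words[i], float(sims[i])) for i in top]

# Toy usage: random vectors stand in for monolingual SGNS embeddings,
# so the retrieved neighbours here carry no linguistic meaning.
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(5000, 300)), rng.normal(size=(5000, 300))
W = learn_map(X, Y, lam=1.0)
vocab = [f"word_{i}" for i in range(1000)]
print(translate(rng.normal(size=300), W, rng.normal(size=(1000, 300)), vocab))
```

Because Eq. (1) is convex with a unique minimiser for λ > 0, the closed-form solution and SGD converge to the same map; the closed form is used above only for brevity.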
In this paper, we propose another option: first, we learn the ”first” SBWES (i.e., SBWES-1) using another BWE model (see Sect. 2.1), and then translate the BNC list through SBWES-1 by retaining the nearest cross-lingual neighbor yi ∈V T for each xi in the BNC list which is represented in SBWES-1. The pairs (xi, yi) constitute the seed lexicon needed for learning the mapping between monolingual spaces, that is, to induce the final SBWES-2. Although in theory any BWE induction model may be used to induce SBWES-1, we rely on a document-level Type 3 BWE induction model from (Vuli´c and Moens, 2016), since it requires only document alignments as (weak) bilingual signal. The resulting hybrid BWE induction model (HYBWE) combines the output of a Type 3 model (SBWES-1) and a Type 4 model (SBWES-2). This seed lexicon and BWE learning variant is called BNC+HYB. Our new hybrid model allows us to also use source language words occurring in SBWES-1 sorted by frequency as seed lexicon source, again leaning on the intuition that higher frequency phenomena are more reliably translated using statistical models. Their translations can also be found through SBWES-1 to obtain seed lexicon pairs (xi, yi). This variant is called HFQ+HYB. Another possibility, recently introduced by Kiros et al. (2015) for vocabulary expansion in monolingual settings, relies on all words shared between two vocabularies to learn the mapping. In this work, we test the ability and limits of such orthographic evidence in cross-lingual settings: seed lexicon pairs are (xi, xi), where xi ∈V S and xi ∈V T . This seed lexicon variant is called ORTHO. Seed Lexicon Size While all prior reported only results with restricted seed lexicon sizes only (i.e., 1K, 2K and 5K lexicon pairs are used as standard), in this work we provide a full-fledged analysis of the influence of seed lexicon size on the SBWES performance in cross-lingual tasks. More extreme settings are also investigated, in the attempt to answer two important questions: (1) Can a Type 4 SBWES be induced in a limited setting with only a few hundred lexicon pairs available (e.g., 100500)? (2) Can the Type 4 models profit from the inclusion of more seed lexicon pairs (e.g., more than 5K, even up to 40K-50K lexicon pairs)? Translation Pair Reliability When building seed lexicons through SBWES-1 (i.e., BNC+HYB and HFQ+HYB methods), it is possible to control for the reliability of translation pairs to be included in the final lexicon, with the idea that the use of only highly reliable pairs can potentially lead to an improved SBWES-2. A simple yet effective reliability reliability feature for translation pairs is the symmetry constraint (Peirsman and Padó, 2010; Vuli´c and Moens, 2013) : two words xi ∈V S and yi ∈V S are used as seed lexicon pairs only if they are mutual nearest neighbours given their representations in SBWES-1. The two variants of seed lexicons with only symmetric pairs are BNC+HYB+SYM and HFREQ+HYB+SYM. We also test the variants without the symmetry constraint (i.e., BNC+HYB+ASYM and HFQ+HYB+ASYM). Even more conservative reliability measures may be applied by exploiting the scores in the lists of translation candidates ranked by their similarity to the cue word xi. 
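Before turning to the thresholded variant discussed next, the plain symmetry constraint can be made concrete with a short sketch. It assumes the SBWES-1 vectors for both vocabularies are available as row-normalised matrices; the variable names and toy data are ours, not part of the original models.

```python
import numpy as np

def symmetric_pairs(src_vecs, tgt_vecs, src_words, tgt_words):
    """Keep (x_i, y_i) only if they are mutual nearest neighbours in SBWES-1.
    src_vecs: (|V_S|, d), tgt_vecs: (|V_T|, d); rows are assumed L2-normalised
    so that the dot product equals cosine similarity."""
    sims = src_vecs @ tgt_vecs.T                 # (|V_S|, |V_T|) cosine matrix
    best_tgt_for_src = sims.argmax(axis=1)       # nearest target for each source
    best_src_for_tgt = sims.argmax(axis=0)       # nearest source for each target
    pairs = []
    for i, j in enumerate(best_tgt_for_src):
        if best_src_for_tgt[j] == i:             # mutual nearest neighbours
            pairs.append((src_words[i], tgt_words[j]))
    return pairs

# Toy usage with random unit vectors standing in for SBWES-1 embeddings.
rng = np.random.default_rng(1)
S = rng.normal(size=(200, 50)); S /= np.linalg.norm(S, axis=1, keepdims=True)
T = rng.normal(size=(200, 50)); T /= np.linalg.norm(T, axis=1, keepdims=True)
lex = symmetric_pairs(S, T, [f"es_{i}" for i in range(200)],
                      [f"en_{j}" for j in range(200)])
print(len(lex), lex[:3])
```

The thresholded variant introduced in the next paragraph can be layered on top of this filter by additionally requiring the margin between the best and second-best candidate in each row and column to exceed THR.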
We investigate a symmetry constraint with a threshold: two words xi ∈V S and yi ∈V S are included as seed lexicon pair (xi, yi) iff they are mutual nearest neighbours in SBWES-1 and it holds: sim(xi, yi) −sim(xi, zi) > THR (2) sim(yi, xi) −sim(yi, wi) > THR (3) where zi ∈V T is the second best translation candidate for xi, and wi ∈V S for yi. THR is a parameter which specifies the margin between the two best translation candidates. The intuition is that highly unambiguous and monosemous translation pairs (which is reflected in higher score margins) are also highly reliable.3 3Other (more elaborate) reliability measures exist in the 250 3 Experimental Setup Task: Bilingual Lexicon Learning (BLL) After the final SBWES is induced, given a list of n source language words xu1, . . . , xun, the task is to find a target language word t for each xu in the list using the SBWES. t is the target language word closest to the source language word xu in the induced SBWES, also known as the cross-lingual nearest neighbor. The set of learned n (xu, t) pairs is then run against a gold standard BLL test set. Following the standard practice (Mikolov et al., 2013a; Dinu et al., 2015), for all Type 4 models, all pairs containing any of the test words xu1, . . . , xun are removed from training seed lexicons. Test Sets For each language pair, we evaluate on standard 1,000 ground truth one-to-one translation pairs built for three language pairs: Spanish (ES)-, Dutch (NL)-, Italian (IT)-English (EN) by Vuli´c and Moens (2013). The dataset is generally considered a benchmarking test set for BLL models that learn from non-parallel data, and is available online.4 We have also experimented with two other benchmarking BLL test sets (Bergsma and Durme, 2011; Leviant and Reichart, 2015) observing a very similar relative performance of all the models in our comparison. Evaluation Metrics We measure the BLL performance using the standard Top 1 accuracy (Acc1) metric (Gaussier et al., 2004; Mikolov et al., 2013a; Gouws et al., 2015).5 Baseline Models To induce SBWES-1, we resort to document-level embeddings of Vuli´c and Moens (2016) (Type 3). We also compare to results obtained directly by their model (BWESG) to measure the performance gains with HYBWE. To compare with a representative Type 2 model, we opt for the BilBOWA model of Gouws et al. (2015) due to its solid performance and robustness in the BLL task when trained on general-domain corpora such as Wikipedia (Luong et al., 2015), its reduced complexity reflected in fast computations on massive datasets, as well as its public availabilliterature (Smith and Eisner, 2007; Tu and Honavar, 2012; Vuli´c and Moens, 2013), but we do not observe any significant gains when resorting to the more complex reliability estimates. 4http://people.cs.kuleuven.be/~ivan.vulic/ 5Similar trends are observed within a more lenient setting with Acc5 and Acc10 scores, but we omit these results for clarity and the fact that the actual BLL performance is best reflected in Acc1 scores (i.e., best translation only). ity.6 In short, BilBOWA combines the adapted SGNS for monolingual objectives together with a cross-lingual objective that minimizes the L2-loss between the bag-of-word vectors of parallel sentences. BilBOWA uses the same training setup as HYBWE (monolingual datasets plus a bilingual signal), but relies on a stronger bilingual signal (sentence alignments as opposed to HYBWE’s document alignments). 
We also compare with a benchmarking Type 1 model from sentence-aligned parallel data called BiCVM (Hermann and Blunsom, 2014b). Finally, a SGNS-based BWE model with the BNC+GT seed lexicon is taken as a baseline Type 4 model (Mikolov et al., 2013a).7 Training Data and Setup We use standard training data and suggested settings to obtain BWEs for all models involved in comparison. We retain the 100K most frequent words in each language for all models. To induce monolingual WE spaces, two monolingual SGNS models were trained on the cleaned and tokenized Wikipedias from the Polyglot website (Al-Rfou et al., 2013) using SGD with a global learning rate of 0.025. For BilBOWA, as in the original work (Gouws et al., 2015), the bilingual signal for the cross-lingual regularization is provided by the first 500K sentences from Europarl.v7 (Tiedemann, 2012). We use SGD with a global rate of 0.15.8 The window size is varied from 2 to 16 in steps of 2, and the best scoring model is always reported in all comparisons. BWESG was trained on the cleaned and tokenized document-aligned Wikipedias available online9, SGD on pseudo-bilingual documents with a global rate 0.025. For BiCVM, we use the tool released by its authors10 and train on the whole Europarl.v7 for each language pair: we train an additive model, with hinge loss margin set to d (i.e., dimensionality) as in the original paper, batch size of 50, and noise parameter of 10. All BiCVM models are trained with 200 iterations. For all models, we obtain BWEs with d = 40, 64, 300, 500, but we report only results with 300-dimensional BWEs as similar trends were observed with other d-s. Other parameters are: 15 epochs, 15 negatives, subsampling rate 1e −4. 6https://github.com/gouwsmeister/bilbowa 7For details concerning all baseline models, the reader is encouraged to check the relevant literature. 8Suggested by the authors (personal correspondence). 9http://linguatools.org/tools/corpora/ 10https://github.com/karlmoritz/bicvm 251 BNC+GT BNC+HYB+ASYM BNC+HYB+SYM HFQ+HYB+ASYM HFQ+HYB+SYM ORTHO casamiento casamiento casamiento casamiento casamiento casamiento marriage marry marriage marriage marriage maría marry marriage marry marry marry señor marrying marrying marrying betrothal betrothal doña betrothal wed wedding marrying marrying juana wedding wedding betrothal wedding wedding noche wed betrothal wed daughter wed amor elopement remarry marriages betrothed elopement guerra Table 1: Nearest EN neighbours of the Spanish word casamiento (marriage) with different seed lexicons. Model ES-EN NL-EN IT-EN BICVM (TYPE 1) 0.532 0.583 0.569 BILBOWA (TYPE 2) 0.632 0.636 0.647 BWESG (TYPE 3) 0.676 0.626 0.643 BNC+GT (Type 4) 0.677 0.641 0.646 ORTHO 0.233 0.506 0.224 BNC+HYB+ASYM 0.673 0.626 0.644 BNC+HYB+SYM 0.681 0.658* 0.663* (3388; 2738; 3145) HFQ+HYB+ASYM 0.673 0.596 0.635 HFQ+HYB+SYM 0.695* 0.657* 0.667* Table 2: Acc1 scores in a standard BLL setup (for Type 4 models): all seed lexicons contain 5K translation pairs, except for BNC+HYB+SYM (its sizes provided in parentheses). * denotes a statistically significant improvement over baselines and BNC+GT using McNemar’s statistical significance test with the Bonferroni correction, p < 0.05. 4 Results and Discussion Exp. 
I: Standard BLL Setting First, we replicate the previous BLL setups with Type 4 models from (Mikolov et al., 2013a; Dinu et al., 2015) by relying on seed lexicons of exactly 5K word pairs (except for BNC+HYB+SYM which exhausts all possible pairs before the 5K limit) sorted by frequency of the source language word. Results with different lexicons for the three language pairs are summarized in Table 2, while Table 1 shows examples of nearest neighbour words for a Spanish word not present in any of the training lexicons. Table 1 provides evidence for our first insight: Type 4 models do not necessarily require external lexicons (such as the BNC+GT model) to learn a semantically plausible SBWES (i.e., the lists of nearest neighbours are similar for all lexicons excluding ORTHO). Table 1 also suggests that the choice of seed lexicon pairs may strongly influence the properties of the resulting SBWES. Due to its design, ORTHO finds a mapping which naturally brings foreign words appearing in the English vocabulary closer in the induced SBWES. This first batch of quantitative results already shows that Type 4 models with inexpensive automatically induced lexicons (i.e., HYBWE) are on a par with or even better than Type 4 models relying on external resources or translation systems. In addition, the best reported scores using the more constrained symmetric BNC/HFQ+HYB+SYM lexicon variants are higher than those for three baseline models (of Type 1, Type 2, and Type 3) that previously held highest scores on the BLL test sets (Vuli´c and Moens, 2016). These improvements over the baseline models and BNC+GT are statistically significant (using McNemar’s statistical significance test, p < 0.05). Table 2 also suggests that a careful selection of reliable pairs can lead to peak performances even with a lower number of pairs, i.e., see the results of BNC+HYB+SYM. Exp. II: Lexicon Size BLL results for ES-EN and NL-EN obtained by varying the seed lexicon sizes are displayed in Fig. 2(a) and 2(b). Results for IT-EN closely follow the patterns observed with ESEN. BNC+HYB+SYM and HFQ+HYB+ASYM – the two models that do not blindly use all potential training pairs, but rely on sets of symmetric pairs (i.e., they include the simple measure of translation pair reliability) – display the best performance across all lexicon sizes. The finding confirms the intuition that a more intelligent pair selection strategy is essential for Type 4 BWE models. HFQ+HYB+SYM – a simple hybrid BWE model (HYBWE) combining a document-level Type 3 model with a Type 4 model and translation reliability detection – is the strongest BWE model overall (see also Table 2 again). HYBWE-based models which do not perform any pair selection (i.e., BNC/HFQ+HYB+ASYM) closely follow the behaviour of the GT-based model. This demonstrates that an external lexicon or translation system may be safely replaced 252 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.1k 0.2k 0.5k 1k 2k 5k 10k 20k 50k Acc1 scores Lexicon size BNC+GT BNC+HYB+ASYM BNC+HYB+SYM HFQ+HYB+ASYM HFQ+HYB+SYM ORTHO (a) Spanish-English 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.1k 0.2k 0.5k 1k 2k 5k 10k 20k 50k Lexicon size BNC+GT BNC+HYB+ASYM BNC+HYB+SYM HFQ+HYB+ASYM HFQ+HYB+SYM ORTHO (b) Dutch-English Figure 2: BLL results (Acc1) across different seed lexicon sizes for all lexicons. x axes are in log scale. by a document-level embedding model without any significant performance loss in the BLL task. The ORTHO-based model falls short of its competitors. 
However, we observe that even this model with the learning setting relying on the cheapest bilingual signal may lead to reasonable BLL scores, especially for the more related NL-EN pair. The two models with the symmetry constraint display a particularly strong performance with settings relying on scarce resources (i.e., only a small portion of training pairs is available). For instance, HFQ+HYB+SYM scores 0.129 for ES-EN with only 200 training pairs (vs 0.002 with BNC+GT), and 0.529 with 500 pairs (vs 0.145 with BNC+GT). On the other hand, adding more pairs does not lead to an improved BLL performance. In fact, we observe a slow and steady decrease in performance with lexicons containing 10, 000 and more training pairs for all HYBWE variants. The phenomenon may be attributed to the fact that highly frequent words receive more accurate representations in SBWES-1, and adding less frequent and, consequently, less accurate training pairs to the SBWES-2 learning process brings in additional noise. In plain language, when it comes to seed lexicons Type 4 models prefer quality over quantity. Exp. III: Translation Pair Reliability In the next experiment, we vary the threshold value THR (see sect. 2.2) in the HFQ+HYB+SYM variant with the following values in comparison: 0.0 (None), 0.01, 0.025, 0.05, 0.075, 0.1. We investigate whether retaining only highly unambiguous pairs would lead to even better BLL performance. The results for all three language pairs are summarized in Fig. 3(a)-3(c). The results for all variant models again decrease when employing larger lexicons (due to the usage of less frequent word pairs in training). We observe that a slightly stricter selection criterion (i.e., THR = 0.01, 0.025) also leads to slightly improved peak BLL scores for ES-EN and IT-EN around the 5K region. The improvements, however, are not statistically significant. On the other hand, a too conservative pair selection criterion with higher threshold values significantly deteriorates the overall performance of HYBWE with HFQ+HYB+SYM. The conservative criteria discard plenty of potentially useful training pairs. Therefore, as one line of future research, we plan to investigate more sophisticated models for the selection of reliable seed lexicon pairs that will lead to a better trade-off between the lexicon size and reliability of the pairs. Exp. IV: Another Task - Suggesting Word Translations in Context (SWTC) In the final experiment, we test whether the findings originating from the BLL task generalize to another crosslingual semantic task: suggesting word translations in context (SWTC) recently proposed by Vuli´c and Moens (2014). Given an occurrence of a polysemous word w ∈V S, the SWTC task is to choose the correct translation in the target language of that particular occurrence of w from the given set T C(w) = {t1, . . . , ttq}, T C(w) ⊆V T , of its tq possible translations/meanings. Whereas in the BLL task the candidate search is performed over the entire vocabulary V T , the set TC(w) typically comprises only a few pre-selected words/senses. One may refer to T C(w) as an inventory of translation candidates for w. The best scoring translation candidate in the ranked list is then the correct translation for that particular occurrence of w observing its local context Con(w). 
SWTC is an extended 253 0.6 0.62 0.64 0.66 0.68 0.7 1k 2k 4k 5k 10k 20k 40k Acc1 scores Lexicon size THR=None THR=0.01 THR=0.025 THR=0.05 THR=0.075 THR=0.1 (a) Spanish-English 0.54 0.56 0.58 0.6 0.62 0.64 0.66 1k 2k 4k 5k 10k 20k 40k Lexicon size THR=None THR=0.01 THR=0.025 THR=0.05 THR=0.075 THR=0.1 (b) Dutch-English 0.56 0.58 0.6 0.62 0.64 0.66 0.68 1k 2k 4k 5k 10k 20k 40k Lexicon size THR=None THR=0.01 THR=0.025 THR=0.05 THR=0.075 THR=0.1 (c) Italian-English Figure 3: BLL results across different threshold (THR) values with the HFQ+HYB+SYM seed lexicons. Higher thresholds imply less ambiguous word translation pairs. Thicker horizontal lines denote the best score from any of the baseline models. x axes are in log scale. Model ES-EN NL-EN IT-EN NO CONTEXT 0.406 0.433 0.408 BEST SYSTEM 0.703 0.712 0.789 (Vuli´c and Moens, 2014) BICVM (TYPE 1) 0.506 0.586 0.522 BILBOWA (TYPE 2) 0.586 0.656 0.589 BWESG (TYPE 3) 0.783 0.858 0.792 BNC+GT (TYPE 4) 0.794 0.858 0.783 ORTHO 0.647 0.794 0.678 BNC+HYB+ASYM 0.806* 0.872 0.778 BNC+HYB+SYM 0.808* 0.875* 0.814* (3839; 3117; 3693) HFQ+HYB+ASYM 0.789 0.864 0.781 HFQ+HYB+SYM (THR = None) 0.792 0.869 0.786 HFQ+HYB+SYM (THR=0.01) 0.792 0.858 0.789 HFQ+HYB+SYM (THR=0.025) 0.800 0.853 0.792 Table 3: Acc1 scores in the SWTC task. All seed lexicons contain 6K translation pairs, except for BNC+HYB+SYM (its sizes provided in parentheses). * denotes a statistically significant improvement over baselines and BNC+GT using McNemar’s statistical significance test with the Bonferroni correction, p < 0.05. cross-lingual variant of the task proposed by Huang et al. (2012) which evaluates monolingual contextsensitive semantic similarity of words in sentential context, and it is also very related to cross-lingual lexical substitution (Mihalcea et al., 2010). To isolate the performance of each BWE induction model from the details of the SWTC setup, we use the same approach with all models: we opt for the SWTC framework proven to yield excellent results with BWEs in the SWTC task (Vuli´c and Moens, 2016). In short, the context bag Con(w) = {cw1, . . . , cwr} is obtained by harvesting all r words that occur with w in the sentence. The vector representation of Con(w) is the ddimensional embedding computed by aggregating over all word embeddings for each cwj ∈Con(w) using standard addition as the compositional operator (Mitchell and Lapata, 2008) which was proven a robust choice (Milajevs et al., 2014): Con(w) = cw1 + cw2 + . . . + cwr (4) where cwj is the embedding of the j-th context word, and Con(w) is the resulting embedding of the context bag Con(w). Finally, for each tj ∈T C(w), the context-sensitive similarity with w is computed as: sim(w, tj, Con(w)) = cos(Con(w), tj), where Con(w) and tj are representations of the (sentential) context bag and the candidate translation tj in the same SBWES.11 The evaluation set consists of 360 sentences for 15 polysemous nouns (24 sentences for each noun) in each of the three languages: Spanish, Dutch, Italian, along with the single gold standard single word English translation given the sentential context.12 Table 3 summarizes the results (Acc1 scores) in the SWTC task. NO-CONTEXT refers to the contextinsensitive majority baseline obtained by BNC+GT (i.e., it always chooses the most semantically similar translation candidate at the word type level). We also report the results of the best SWTC model from Vuli´c and Moens (2014). The results largely support the claims established with the BLL evaluation. 
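To make the scoring used in this setup concrete, the additive context representation of Eq. (4) and the cosine ranking over TC(w) can be sketched as follows. The sketch assumes a dictionary from words of both languages to vectors in the induced SBWES; the words and random vectors below are toy stand-ins of our own, so the printed ranking itself carries no meaning.

```python
import numpy as np

def rank_candidates(context_words, candidates, embeddings):
    """Rank translation candidates t_j in TC(w) by cos(Con(w), t_j),
    where Con(w) is the sum of the context word embeddings (Eq. 4).
    embeddings: dict mapping words of both languages to SBWES vectors."""
    context = sum(embeddings[c] for c in context_words if c in embeddings)
    context = context / np.linalg.norm(context)
    scored = []
    for t in candidates:
        v = embeddings[t] / np.linalg.norm(embeddings[t])
        scored.append((t, float(context @ v)))
    return sorted(scored, key=lambda p: -p[1])

# Toy usage: Spanish-like context words and two English candidate senses;
# with real BWEs the top-ranked candidate is the predicted translation.
rng = np.random.default_rng(2)
emb = {w: rng.normal(size=300)
       for w in ["dinero", "cuenta", "bank", "bench"]}
print(rank_candidates(["dinero", "cuenta"], ["bank", "bench"], emb))
```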
An exter11The same ranking of different models (with lower absolute scores) is observed when adapting the monolingual lexical substitution framework of Melamud et al. (2015) to the SWTC task as done by Vuli´c and Moens (2016). 12The SWTC evaluation set is available online at: http://aclweb.org/anthology/attachments/D/D14/D141040.Attachment.zip 254 nal seed lexicon of BNC+GT may be safely replaced by an automatically induced inexpensive seed lexicon (as in HYBWE with BNC+HYB+SYM/ASYM). The best performing models are again BNC+HYB+SYM and HFQ+HYB+SYM. The comparison of ASYM and SYM lexicon variants further suggests that filtering translation pairs using the symmetry constraint again leads to consistent improvements, but stricter selection criteria with higher thresholds do not lead to significant performance boosts, and may even hurt the performance (see the results for NL-EN). Various HYBWE variants significantly improve over baseline BWE models (Types 1-4), also outperforming previous best SWTC results. 5 Conclusions and Future Work We presented a detailed analysis of the importance and properties of seed bilingual lexicons in learning bilingual word embeddings (BWEs) which are valuable for many cross-lingual/multilingual NLP tasks. On the basis of the analysis, we proposed a simple yet effective hybrid bilingual word embedding model called HYBWE. It learns the mapping between two monolingual embedding spaces using only highly reliable symmetric translation pairs from an inexpensive seed document-level embedding space. The results in the tasks of (1) bilingual lexicon learning and (2) suggesting word translations in context demonstrate that – due to its careful selection of reliable translation pairs for seed lexicons – HYBWE outperforms benchmarking BWE induction models, all of which use more expensive bilingual signals for training. In future work, we plan to investigate other methods for seed pairs selection, settings with scarce resources (Agi´c et al., 2015; Zhang et al., 2016), other context types inspired by recent work in the monolingual settings (Levy and Goldberg, 2014a; Melamud et al., 2016), as well as model adaptations that can work with multi-word expressions. Encouraged by the excellent results, we also plan to test the portability of the approach to more language pairs, and other tasks and applications. Acknowledgments This work is supported by ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909). The authors are grateful to Roi Reichart and the anonymous reviewers for their helpful comments and suggestions. References Željko Agi´c, Dirk Hovy, and Anders Søgaard. 2015. If all you have is a bit of the Bible: Learning POS taggers for truly low-resource languages. In ACL, pages 268–272. Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual NLP. In CoNLL, pages 183–192. Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A. Smith. 2016. Massively multilingual word embeddings. CoRR, abs/1602.01925. Marco Baroni, Georgiana Dinu, and Germán Kruszewski. 2014. Don’t count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In ACL, pages 238–247. Shane Bergsma and Benjamin Van Durme. 2011. Learning bilingual lexicons using the visual similarity of labeled web images. In IJCAI, pages 1764– 1769. Sarath A.P. Chandar, Stanislas Lauly, Hugo Larochelle, Mitesh M. Khapra, Balaraman Ravindran, Vikas C. Raykar, and Amrita Saha. 
2014. An autoencoder approach to learning bilingual word representations. In NIPS, pages 1853–1861. Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural networks. In EMNLP, pages 740–750. Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537. Jocelyn Coulmance, Jean-Marc Marty, Guillaume Wenzek, and Amine Benhalloum. 2015. Trans-gram, fast cross-lingual word embeddings. In EMNLP, pages 1109–1113. Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. 2015. Improving zero-shot learning by mitigating the hubness problem. In ICLR Workshop Papers. Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In EACL, pages 462–471. Éric Gaussier, Jean-Michel Renders, Irina Matveeva, Cyril Goutte, and Hervé Déjean. 2004. A geometric view on bilingual lexicon extraction from comparable corpora. In ACL, pages 526–533. Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. BilBOWA: Fast bilingual distributed representations without word alignments. In ICML, pages 748–756. 255 Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2015. Cross-lingual dependency parsing based on distributed representations. In ACL, pages 1234–1244. Karl Moritz Hermann and Phil Blunsom. 2014a. Multilingual distributed representations without word alignment. In ICLR. Karl Moritz Hermann and Phil Blunsom. 2014b. Multilingual models for compositional distributed semantics. In ACL, pages 58–68. Jon Holmlund, Magnus Sahlgren, and Jussi Karlgren. 2005. Creating bilingual lexica using reference wordlists for alignment of monolingual semantic vector spaces. In NODALIDA, pages 71–77. Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In ACL, pages 873–882. Anders Johannsen, Héctor Martínez Alonso, and Anders Søgaard. 2015. Any-language frame-semantic parsing. In EMNLP, pages 2062–2066. Adam Kilgarriff. 1997. Putting frequencies in the dictionary. International Journal of Lexicography, 10(2):135–155. Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2015. Skip-thought vectors. In NIPS. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. In COLING, pages 1459–1474. Tomáš Koˇciský, Karl Moritz Hermann, and Phil Blunsom. 2014. Learning bilingual word representations by marginalizing alignments. In ACL, pages 224– 229. Angeliki Lazaridou, Georgiana Dinu, and Marco Baroni. 2015. Hubness and pollution: Delving into cross-space mapping for zero-shot learning. In ACL, pages 270–280. Ira Leviant and Roi Reichart. 2015. Judgment language matters: Multilingual vector space models for judgment language aware lexical semantics. CoRR, abs/1508.00106. Omer Levy and Yoav Goldberg. 2014a. Dependencybased word embeddings. In ACL, pages 302–308. Omer Levy and Yoav Goldberg. 2014b. Neural word embedding as implicit matrix factorization. In NIPS, pages 2177–2185. Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the ACL, 3:211–225. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Bilingual word representations with monolingual quality in mind. 
In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 151–159. Oren Melamud, Omer Levy, and Ido Dagan. 2015. A simple word embedding model for lexical substitution. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 1–7. Oren Melamud, David McClosky, Siddharth Patwardhan, and Mohit Bansal. 2016. The role of context types and dimensionality in learning word embeddings. In NAACL-HLT. Rada Mihalcea, Ravi Sinha, and Diana McCarthy. 2010. SemEval-2010 task 2: Cross-lingual lexical substitution. In SEMEVAL, pages 9–14. Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. CoRR, abs/1309.4168. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In NIPS, pages 3111–3119. Dmitrijs Milajevs, Dimitri Kartsaklis, Mehrnoosh Sadrzadeh, and Matthew Purver. 2014. Evaluating neural word representations in tensor-based compositional settings. In EMNLP, pages 708–719. Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In ACL, pages 236–244. Bhaskar Mitra, Eric T. Nalisnick, Nick Craswell, and Rich Caruana. 2016. A dual embedding space model for document ranking. CoRR, abs/1602.01137. Yves Peirsman and Sebastian Padó. 2010. Crosslingual induction of selectional preferences with bilingual vector spaces. In NAACL, pages 921–929. Tianze Shi, Zhiyuan Liu, Yang Liu, and Maosong Sun. 2015. Learning cross-lingual word embeddings via matrix co-factorization. In ACL, pages 567–572. David A. Smith and Jason Eisner. 2007. Bootstrapping feature-rich dependency parsers with entropic priors. In EMNLP-CoNLL, pages 667–677. Hubert Soyer, Pontus Stenetorp, and Akiko Aizawa. 2015. Leveraging monolingual data for crosslingual compositional word representations. In ICLR. Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In LREC, pages 2214–2218. Chen-Tse Tsai and Dan Roth. 2016. Cross-lingual wikification using multilingual embeddings. In NAACL-HLT. 256 Kewei Tu and Vasant Honavar. 2012. Unambiguity regularization for unsupervised learning of probabilistic grammars. In EMNLP-CoNLL, pages 1324– 1334. Joseph P. Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In ACL, pages 384–394. Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: vector space models of semantics. Journal of Artifical Intelligence Research, 37(1):141–188. Shyam Upadhyay, Manaal Faruqui, Chris Dyer, and Dan Roth. 2016. Cross-lingual models of word embeddings: An empirical comparison. In ACL. Ivan Vuli´c and Marie-Francine Moens. 2013. A study on bootstrapping bilingual vector spaces from nonparallel data (and nothing else). In EMNLP, pages 1613–1624. Ivan Vuli´c and Marie-Francine Moens. 2014. Probabilistic models of cross-lingual semantic similarity in context based on latent cross-lingual concepts induced from comparable data. In EMNLP, pages 349–362. Ivan Vuli´c and Marie-Francine Moens. 2015. Monolingual and cross-lingual information retrieval models based on (bilingual) word embeddings. In SIGIR, pages 363–372. Ivan Vuli´c and Marie-Francine Moens. 2016. Bilingual distributed word representations from document-aligned comparable data. Journal of Artificial Intelligence Research, 55:953–994. 
Ivan Vuli´c, Douwe Kiela, Stephen Clark, and MarieFrancine Moens. 2016. Multi-modal representations for improved bilingual lexicon learning. In ACL. Jason Weston, Samy Bengio, and Nicolas Usunier. 2011. WSABIE: scaling up to large vocabulary image annotation. In IJCAI, pages 2764–2770. Yuan Zhang, David Gaddy, Regina Barzilay, and Tommi Jaakkola. 2016. Ten pairs to tag - Multilingual POS tagging via coarse mapping between embeddings. In NAACL-HLT. Will Y. Zou, Richard Socher, Daniel Cer, and Christopher D. Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In EMNLP, pages 1393–1398. 257
2016
24
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 258–268, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Liberal Event Extraction and Event Schema Induction Lifu Huang1, Taylor Cassidy2, Xiaocheng Feng3, Heng Ji1, Clare R. Voss2, Jiawei Han4, Avirup Sil5 1 Rensselaer Polytechnic Institute, 2 US Army Research Lab, 3 Harbin Institute of Technology, 4 Univerisity of Illinois at Urbana-Champaign, 5 IBM T.J. Watson Research Center 1{huangl7,jih}@rpi.edu, 2{taylor.cassidy.civ,clare.r.voss.civ}@rpi.edu, [email protected], [email protected], [email protected] Abstract We propose a brand new “Liberal” Event Extraction paradigm to extract events and discover event schemas from any input corpus simultaneously. We incorporate symbolic (e.g., Abstract Meaning Representation) and distributional semantics to detect and represent event structures and adopt a joint typing framework to simultaneously extract event types and argument roles and discover an event schema. Experiments on general and specific domains demonstrate that this framework can construct high-quality schemas with many event and argument role types, covering a high proportion of event types and argument roles in manually defined schemas. We show that extraction performance using discovered schemas is comparable to supervised models trained from a large amount of data labeled according to predefined event types. The extraction quality of new event types is also promising. 1 Introduction Event extraction aims at identifying and typing trigger words and participants (arguments). It remains a challenging and costly task. The first question is what to extract? The TIPSTER (Onyshkevych et al., 1993), MUC (Grishman and Sundheim, 1996), CoNLL (Tjong et al., 2003; Pradhan et al., 2011), ACE 1 and TACKBP (Ji and Grishman, 2011) programs found that it was feasible to manually define an event schema based on the needs of potential users. An ACE event schema example is shown in Figure 1. This process is very expensive because consumers and 1http://www.itl.nist.gov/iad/mig/tests/ace/ expert linguists need to examine a lot of data before specifying the types of events and argument roles and writing detailed annotation guidelines for each type in the schema. Manually-defined event schemas often provide low coverage and fail to generalize to new domains. For example, none of the aforementioned programs include “donation” and “evacuation” in their schema in spite of their potential relevance to users. In this paper we propose Liberal Event Extraction, a new paradigm to take humans out of the loop and enable systems to extract events in a more liberal fashion. It automatically discovers a complete event schema, customized for a specific input corpus. Figure 1 compares the ACE event extraction paradigm and our proposed Liberal event extraction paradigm. We use the following examples to explain and motivate our approach, where event triggers are in bold and arguments are in italics and underlined: E1. Two Soldiers were killed and one injured in the close-quarters fighting in Kut. E2. Bill Bennet’s glam gambling loss changed my opinion. E3. Gen. Vincent Brooks announced the capture of Barzan Ibrahim Hasan al-Tikriti, telling reporters he was an adviser to Saddam. E4. This was the Italian ship that was captured by Palestinian terrorists back in 1985. E5. Ayman Sabawi Ibrahim was arrested in Tikrit and was sentenced to life in prison. 
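To make the target of Liberal Event Extraction concrete before describing the model, the following sketch shows one possible structured representation of the events in E1 as plain Python records. The record layout is our own, and the type and role labels are illustrative only, since the schema itself is discovered automatically rather than predefined.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class EventRecord:
    trigger: str                    # trigger word as it appears in the sentence
    event_type: str                 # discovered cluster name, e.g. "Die"
    arguments: Dict[str, List[str]] = field(default_factory=dict)  # role -> argument strings

# Hand-written records for E1; the labels are illustrative, not gold annotations.
e1_events = [
    EventRecord("killed",   "Die",    {"Victim": ["Two Soldiers"], "Place": ["Kut"]}),
    EventRecord("injured",  "Injure", {"Victim": ["one"],          "Place": ["Kut"]}),
    EventRecord("fighting", "Attack", {"Place":  ["Kut"]}),
]
for ev in e1_events:
    print(ev.trigger, "->", ev.event_type, ev.arguments)
```

Each sentence can yield several such records, one per trigger, and the same argument (here "Kut") may fill roles in more than one of them.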
We seek to cluster the event triggers and event arguments so that each cluster represents a type. We rely on distributional similarity for our clustering distance metric. The distributional hypothesis (Harris, 1954) states that words often occurring in similar contexts tend to have similar meanings. We formulate the following distributional 258 Traditional Event Extraction Conflict Life Attack Marry Die … Injure … Type: Subtype: Argument: Demonstrate Entity Time Place … … Agent Victim Time … Guidelines Documents Sen 1: The Indian army stated that 4 Islamic militants were killed in 2 separate gun battles 20021228. Sen 2: The embassy stated the British government is opposed to the death penalty in all circumstances. Event : killed, Type: Die, Arguments: 4 Islamic militants (Victim) Null Linguistic Resource Documents Sen 1: The Indian army stated that 4 Islamic militants were killed in 2 separate gun battles 20021228. Sen 2: The embassy stated the British government is opposed to the death penalty in all circumstances. Event 2: killed, Type: Kill, Arguments: 4 Islamic militants (Victim) Event 1: stated, Type: State, Arguments: embassy (Agent), opposed (Topic) Event 1: stated, Type: State, Arguments: Indian army (Agent), killed (Topic) Event 3: battles, Type: Battle, Arguments: 4 Islamic militants (Agent), 20021228 (Time) Event 2: opposed, Type: Oppose, Arguments: British government (Patient), death penalty (Theme) Attack Imprison Battle … Demand State Type: Trigger Cluster: Arguments: Agent Time Place … … Agent Patient Topic Time Place … Oppose attackstrike hit bombs … imprison prisoners sentence … … demand urge pressured … … … anti opposed … Manner Liberal Event Extraction Figure 1: Comparison between ACE Event Extraction and Liberal Event Extraction. hypotheses specifically for event extraction, and develop our approach accordingly. Hypothesis 1: Event triggers that occur in similar contexts and share the same sense tend to have similar types. Following the distributional hypothesis, when we simply learn general word embeddings from a large corpus for each word, we obtain similar words like those shown in Table 1. We can see similar words, such as those centered around “injure” and “fight”, are converging to similar types. However, for words with multiple senses such as “fire” (shooting or employment termination), similar words may indicate multiple event types. Thus, we propose to apply Word Sense Disambiguation (WSD) and learn a distinct embedding for each sense (Section 2.3). injure Score fight Score fire Score injures 0.602 fighting 0.792 fires 0.686 hurt 0.593 fights 0.762 aim 0.683 harm 0.592 battle 0.702 enemy 0.601 maim 0.571 fought 0.636 grenades 0.597 injuring 0.561 Fight 0.610 bombs 0.585 endanger 0.543 battles 0.590 blast 0.566 dislocate 0.529 Fighting 0.588 burning 0.562 kill 0.527 bout 0.570 smoke 0.558 Table 1: Top-8 Most Similar Words (in 3 Clusters) Hypothesis 2: Beyond the lexical semantics of a particular event trigger, its type is also dependent on its arguments and their roles, as well as other words contextually connected to the trigger. For example, in E4, the fact that the patient role is a vehicle (“Italian ship”), and not a person (as in E3 and E5), suggests that the event trigger “captured” has type “Transfer-Ownership” as opposed to “Arrest”. In E2, we know the “loss” event occurs in a gambling scenario, so we can determine its type as loss of money, not loss of life. 
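The neighbour lists in Table 1, which motivate Hypothesis 1, can be approximated with a few lines of code once word vectors are available. The sketch below uses invented toy vectors and only illustrates the cosine-similarity lookup; it is not the authors' setup, and any pre-trained embeddings could be substituted.

```python
import numpy as np

def top_k_neighbours(query, vocab, vectors, k=8):
    """Return the k words whose embeddings are most cosine-similar to the query."""
    idx = vocab.index(query)
    q = vectors[idx]
    sims = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q) + 1e-12)
    sims[idx] = -np.inf                      # exclude the query itself
    best = np.argsort(-sims)[:k]
    return [(vocab[i], float(sims[i])) for i in best]

# hypothetical toy data: five words with three-dimensional vectors
vocab = ["injure", "hurt", "fight", "fire", "smoke"]
vectors = np.array([[0.9, 0.1, 0.0],
                    [0.8, 0.2, 0.1],
                    [0.1, 0.9, 0.2],
                    [0.2, 0.6, 0.7],
                    [0.1, 0.3, 0.9]])
print(top_k_neighbours("injure", vocab, vectors, k=3))
```

For an ambiguous word such as "fire", such a list mixes shooting-related and employment-related neighbours, which is exactly why Section 2.3 learns a separate embedding per word sense.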
We therefore propose to enrich each trigger’s representation by incorporating the distributional representations of various words in the trigger’s context. Not all context words are relevant to event trigger type prediction, while those that are vary in their predictive value. We propose to use semantic relations, derived from a meaning representation for the text, to carefully select arguments and other words in an event trigger’s context. These words are then incorporated into a “global” event structure for a trigger mention. We rely on semantic relations to (1) specify how the distributional semantics of relevant context words contribute to the overall event structure representation; (2) determine the order in which distributional semantics of relevant context words are incorporated into the event structure (Section 2.4). 2 Approach 2.1 Overview Input Documents FrameNet Lexical Units Candidate Trigger & Argument Identification Event Schema & Event Extraction Results AMR Parsing Event Structure Semantic Composition & Representation Unlabeled Corpus Word Sense Disambiguation Distributional Semantic Representation Word Sense based Trigger and Argument Representation Joint Trigger and Argument Clustering Event Type Naming Argument Role Naming AMR/PropBank/FrameNet/ VerbNet/OntoNotes Role Descriptions Figure 2: Liberal Event Extraction Overview. Figure 2 illustrates the overall framework of 259 Liberal Event Extraction. Given a set of input documents, we first extract semantic relations, apply WSD and learn word sense embeddings. Next, we identify candidate triggers and arguments. For each event trigger, we apply a series of compositional functions to generate that trigger’s event structure representation. Each function is specific to a semantic relation, and operates over vectors in the embedding space. Argument representations are generated as a by-product. Trigger and argument representations are then passed to a joint constraint clustering framework. Finally, we name each cluster of triggers, and name each trigger’s arguments using mappings between the meaning representation and semantic role descriptions in FrameNet, VerbNet (Kipper et al., 2008) and Propbank (Palmer et al., 2005). We compare settings in which semantic relations connecting triggers to context words are derived from three meaning representations: Abstract Meaning Representation (AMR) (Banarescu et al., 2013), Stanford Typed Dependencies (Marie-Catherine et al., 2006), and FrameNet (Baker and Sato, 2003). We derive semantic relations automatically for these three representations using CAMR (Wang et al., 2015a), Stanford’s dependency parser (Manning, 2003), and SEMAFOR (Das et al., 2014), respectively. 2.2 Candidate Trigger and Argument Identification Given a sentence, we consider all noun and verb concepts that are assigned an OntoNotes (Hovy et al., 2006) sense by WSD as candidate event triggers. Any remaining concepts that match both a verbal and a nominal lexical unit in the FrameNet corpus are considered candidate event triggers as well. This mainly helps to identify more nominal triggers like “pickpocket” and “sin”.2 For each candidate event trigger, we consider as candidate arguments all concepts for which one of a manually-selected set of semantic relations holds between it and the event trigger. For the setting in which AMR serves as our meaning representation, we selected a subset of all AMR relations that specify event arguments, as shown in Table 2. 
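As a concrete illustration of this selection step, the sketch below filters (head, relation, dependent) triples from a parsed graph against a whitelist of event-related relations. The relation names follow Table 2, but the data structures and the toy triples are invented for the example and do not come from an actual AMR parser.

```python
# Relations treated as argument-bearing, following Table 2 (illustrative subset).
EVENT_RELATIONS = {
    "ARG0", "ARG1", "ARG2", "ARG3", "ARG4",            # core roles
    "mod", "location", "poss", "manner", "topic",
    "medium", "instrument", "duration",                 # non-core roles
    "year", "decade", "weekday", "time",                # temporal
    "destination", "path",                              # spatial
}

def candidate_arguments(trigger, triples):
    """Collect concepts linked to the trigger by a whitelisted relation.

    `triples` is a list of (head, relation, dependent) tuples,
    e.g. ("kill-01", ":ARG1", "soldier")."""
    args = []
    for head, rel, dep in triples:
        rel = rel.lstrip(":")                # treat ":ARG1" and "ARG1" alike
        if head == trigger and (rel in EVENT_RELATIONS or rel.startswith("prep-")):
            args.append((rel, dep))
    return args

# toy parse fragment of E1 (hand-written, not parser output)
triples = [("kill-01", ":ARG1", "soldier"),
           ("kill-01", ":instrument", "missile"),
           ("fight-01", ":location", "Kut")]
print(candidate_arguments("kill-01", triples))   # [('ARG1', 'soldier'), ('instrument', 'missile')]
```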
Note that some AMR relations generally do not specify event arguments, e.g. “mode”, which can indicate sentence illocutionary force, or “snt” 2For consistency, we use the same trigger identification procedure regardless of which meaning representation is used to derive semantic relations. which is used to combine multiple sentences into one AMR graph.3 When FrameNet is the meaning representation we allow all frame relations to identify arguments. For dependencies, we manually mapped dependency relations to AMR relations and use Table 2. Categories Relations Core roles ARG0, ARG1, ARG2, ARG3, ARG4 Non-core roles mod, location, poss, manner, topic, medium, instrument, duration, prep-X Temporal year, duration, decade, weekday, time Spatial destination, path, location Table 2: Event-Related AMR Relations. In E1, for example, “killed”, “injured” and “fighting” are identified as candidate triggers, and three concept sets are identified as candidate arguments using AMR relations: “{Two Soldiers, very large missile}”, “{one, Kut}” and “{Two Soldiers, Kut}”, as shown in Figure 3. 2.3 Trigger Sense and Argument Representation Based on Hypothesis 1, we learn sense-based embeddings from a large data set, using the Continuous Skip-gram model (Mikolov et al., 2013). Specifically, we first apply WSD to link each word to its sense in WordNet using a state-of-the-art tool (Zhong and Ng, 2010), and map WordNet sense output to OntoNotes senses. 4 We map each trigger candidate to its OntoNotes sense and learn a distinct embedding for each sense. We use general lexical embeddings for arguments. 2.4 Event Structure Composition and Representation Based on Hypothesis 2, we aim to exploit linguistic knowledge to incorporate inter-dependencies between event and argument role types into our event structure representation. Many meaning representations could provide such information to some degree. We illustrate our method for building event structures using semantic relations from meaning representations using AMR. In Section 3.4 we compare results using Stanford Typed Dependencies and FrameNet in place of AMR. Let’s take E2 as an example. Based on AMR annotation and Table 2, we extract semantically re3For relation details, see https://github.com/amrisi/amrguidelines/blob/master/amr.md 4WordNet-OntoNotes mapping from https://catalog.ldc.upenn.edu/LDC2011T03 260 E1: Two Soldiers were killed by a very large missile and one injured in the close-quarters fighting in Kut. [Event:Die] [Event:Injure] [Event:Attack] Place Place Place Victim Victim Attacker Instrument kill-01 :ARG1 :instrument injure-01 fight-01 :location :ARG0 Figure 3: Event Trigger and Argument Annotations and AMR Parsing Results of E1. man(Agent), Austrialia(Destination),heroin(Theme) identify the 2 convicts hanged in Zahedan but stated nd guilty of transporting 5.25 kilograms of heroin. rting, Arguments: they(Agent), heroin(Theme) of the facility started in 790000, but stopped after lapse when Tajikistan slid into a 5 year civil war conomy. Event:construction, guments: facility(Product), 790000(Time) -era military facility was fou-nded in 570000 and all information gathered from Russia's military spy Event: founded, ts: Soviet-era facility(Product), 570000(Time) Event Type: Build S2: A newspaper report on January 1, 2008 that Iran hanged two convicted drug traffickers in the south-eastern city of Zahedan. S1: Colombian Government was alarmed because uranium is the primary basis for generating weapons of mass destruction. 
Event:alarmed, Arguments:Columbian Government(Experiencer) Event Type: Threaten S2: Cluster bomblets have been criticized by human rights groups because they kill indiscriminately and because unexploded ordinance poses a threat to civilians similar to that of land mines. Event:threat, Arguments:ordinance(Cause), civilian(Experiencer) Event: hanged, Arguments: Iran(Agent), drug traf- fickers(Theme), southeastern city of Zahedan(Place) lose gamble glam Bill Bennet :op1 :op2 :mod :mod :poss Z1=fmod(Wmod,Xga,Yl)=XTgaWmodYl+b Reconstruct: (X’ga,Y’l)=Z1W’mod+b’ Z2=fmod(Wmod,Xgl,Z1) Z4=fposs(Wposs,Z3,Z2) Reconstruct: (Z’3,Z’2)=Z4W’poss+b’ Z3=Avarage(VBill, VBennet) Z1 Z2 Z4 X’gamble Y’lose Z’3 Z’2 Reconstruct: (X’gl,Z’1)=Z2W’mod+b’ X’glam Z’1 AMR annotation Event Structure Representation Event Structure for “lose” :instance :mod :mod :poss :op1 :op2 Bill Bennet glamgamble lose :ARG0 (x8 / lose-1 :poss (x3 / person :name (n1 / name :op1 "Bill" :op2 "Bennet")) :mod (x6 / glam) :mod (x7 / gamble-01)) Figure 4: Partial AMR and Event Structure for E2. lated words for the event trigger with sense “lose1” and construct the event structure for the whole event, as shown in Figure 4. We design a Tensor based Recursive AutoEncoder (TRAE) (Socher et al., 2011) framework to utilize a tensor based composition function for each of a subset of the AMR semantic relations and compose the event structure representation based on multiple functional applications. This subset was manually selected by the authors as the set of relations that link a trigger to concepts that help to determine its type. Similarly, we selected a subset of dependency and FrameNet relations using the same criteria for experiments using those meaning representations. Figure 4 shows an instance of a TRAE applied to an event structure to generate its representation. For each semantic relation type r, such as “:mod”, we define the output of a tensor product Z via the following vectorized notation: Z = fmod(X, Y, W [1:d] r , b) = [X; Y ]T W [1:d] r [X; Y ] + b where Wmod ∈R2d·2d·d is a 3-order tensor, and X, Y ∈Rd are two input word vectors. b ∈Rd is the bias term. [X; Y ] denotes the concatenation of two vectors X and Y . Each slice of the tensor acts as a coefficient matrix for one entry Zi in Z: Zi = fmod(X, Y, W [i] r , b) = [X; Y ]T W [i] r [X; Y ] + bi We use the statistical mean to compose the words connected by “:op” relations (e.g. “Bill” and “Bennet” in Figure 4). After composing the vectors of X and Y , we apply an element-wise sigmoid activation function to the composed vector and generate the hidden layer representations Z. One way to optimize Z is to try to reconstruct the vectors X and Y by generating X ′ and Y ′ from Z, and minimizing the reconstruction errors between the input VI = [X, Y ] and output layers VO = [X ′, Y ′]. The error is computed based on Euclidean distance function: E(VI, VO) = 1 2||VI −VO||2 For each pair of words X and Y , the reconstruction error back-propagates from its output layer to input layer through parameters Θr = (W ′ r, b ′ r, Wr, br). Let δO be the residual error of the output layer, and δH be the error of the hidden layer: δO = −(VI −VO) · f ′ sigmoid(V O H ) δH = ( d X k=1 δk O · (W ′k r + (W ′k r )T ) · V O H ) · f ′ sigmoid(V I H) where V I H and V O H denote the input and output of the hidden layer, and V O H = Z. W ′k r is the kth slice of tensor W ′ r. 
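A minimal numpy sketch of the slice-wise tensor composition and reconstruction just described is given below. The dimensionality, random initialisation and variable names are our own choices for illustration; only the shapes and the placement of the sigmoid follow the equations above.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

d = 4                                      # embedding dimensionality (toy value)
rng = np.random.default_rng(0)

# parameters for one relation type r, e.g. ":mod"
W_r  = rng.normal(scale=0.1, size=(d, 2 * d, 2 * d))   # d slices of size 2d x 2d
b_r  = np.zeros(d)
Wp_r = rng.normal(scale=0.1, size=(d, 2 * d))          # reconstruction weights W'_r
bp_r = np.zeros(2 * d)                                  # reconstruction bias b'_r

def compose(x, y):
    """Z_i = [x; y]^T W_r^[i] [x; y] + b_i, followed by an element-wise sigmoid."""
    xy = np.concatenate([x, y])                          # shape (2d,)
    z = np.array([xy @ W_r[i] @ xy + b_r[i] for i in range(d)])
    return sigmoid(z)

def reconstruct(z):
    """Recover approximations [x'; y'] of the two children from Z."""
    return z @ Wp_r + bp_r                               # shape (2d,)

x, y = rng.normal(size=d), rng.normal(size=d)            # e.g. vectors for "glam" and "gamble"
z = compose(x, y)
err = 0.5 * np.sum((np.concatenate([x, y]) - reconstruct(z)) ** 2)
print(z, err)
```

On real data the parameters would of course be trained with the gradient updates given next, rather than left at their random initial values.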
To minimize the reconstruction errors, we utilize gradient descent to iteratively update parameters Θr: ∂E(Θr) ∂W ′k r = δk O · (V O H )T · V O H ∂E(Θr) ∂b ′ r = −(VI −VO) · f ′ sigmoid(V O H ) ∂E(Θr) ∂W k r = δk H · (VI)T · VI ∂E(Θr) ∂br = ( d X k=1 δk O · (W ′k r + (W ′k r )T ) · V O H ) · f ′ sigmoid(V I H) After computing the composition vector of Z1 based on X and Y , for the next layer, it composes Z1 and another new word vector such as 261 Xgl. For each type of relation r, we randomly sample 2,000 pairs to train optimized parameters Θr. For each event structure tree, we iteratively repeat the same steps for each layer. For multiple arguments at each layer, we compose them in the order of their distance to the trigger: the closest argument is composed first. 2.5 Joint Trigger and Argument Clustering Based on the representation vectors generated above, we compute the similarity between each pair of triggers and arguments, and cluster them into types. Recall that a trigger’s arguments are identified as in section 2.2. We observe that, for two triggers t1 and t2, if their arguments have the same type and role, then they are more likely to belong to the same type, and vice versa. Therefore we introduce a constraint function f, to enforce inter-dependent triggers and arguments to have coherent types: f(P1, P2) = log(1 + |L1 ∩L2| |L1 ∪L2|) where P1 and P2 are triggers. Elements of Li are pairs of the form (r, id(a)), where id(a) is the cluster ID for argument a that stands in relation r to Pi. For example, let P1 and P2 be triggers “capture” and “arrested” (c.f. Figure 5). If Barzan Ibrahim Hasan al-Tikriti and Ayman Sabawi Ibrahim share the same cluster ID, the pair (arg1, id(Barzan Ibrahim Hasan al-Tikriti)) will be a member of L1 ∩L2. This argument overlap is evidence that “capture” and “arrested” have the same type. We define f where Pi are arguments, and elements Li are defined analogously to above. capture captured arrested sentenced Barzan Ibrahim Hasan al-Tikriti Tikrit Ayman Sabawi Ibrahim Palestinian terrorists prison Italian ship :arg1 :arg1 :location :arg1 :arg0 :arg1 :location Figure 5: Joint Constraint Clustering for E3,4,5. Given a trigger set T and their corresponding argument set A, we compute the similarity between two triggers t1 and t2 and two arguments a1 and a2 by: sim(t1, t2) = λ · simcos(Et1 g , Et2 g )+ (1 −λ) · Σr∈Rt1 ∩Rt2 simcos(Et1 r , Et2 r ) |Rt1 ∩Rt2| + f(t1, t2) sim(a1, a2) = simcos(Ea1 g , Ea2 g ) + f(a1, a2) where Et g represents the trigger sense vector and Ea g is the argument vector. Rt is the AMR relation set in the event structure of t, and Et r denotes the vector resulting from the last application of the compositional function corresponding to the semantic relation r for trigger t. λ is a regularization parameter that controls the trade-off between these two types of representations. In our experiment λ = 0.6. We design a joint constraint clustering approach, which iteratively produces new clustering results based on the above constraints. 
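The sketch below shows one straightforward way to realise the constraint function f and the trigger similarity, and to run a single spectral-clustering pass over the resulting affinity matrix with scikit-learn. The iterative joint procedure is spelled out in Algorithm 1 below; here the embeddings, argument-cluster links and λ are illustrative placeholders, and argument clustering would be handled analogously with the general lexical vectors.

```python
import math
import numpy as np
from sklearn.cluster import SpectralClustering

def constraint(links1, links2):
    """f(P1, P2) = log(1 + |L1 ∩ L2| / |L1 ∪ L2|) over (relation, arg_cluster_id) pairs."""
    l1, l2 = set(links1), set(links2)
    if not l1 and not l2:
        return 0.0
    return math.log(1.0 + len(l1 & l2) / len(l1 | l2))

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def trigger_similarity(t1, t2, lam=0.6):
    """t1, t2 are dicts with a sense vector 'g', per-relation structure vectors 'r',
    and argument links 'links' (placeholder fields for this sketch)."""
    sim = lam * cosine(t1["g"], t2["g"])
    shared = set(t1["r"]) & set(t2["r"])
    if shared:
        sim += (1 - lam) * sum(cosine(t1["r"][r], t2["r"][r]) for r in shared) / len(shared)
    return sim + constraint(t1["links"], t2["links"])

def cluster_triggers(triggers, k):
    n = len(triggers)
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            S[i, j] = trigger_similarity(triggers[i], triggers[j]) if i != j else 1.0
    # spectral clustering expects a symmetric, non-negative affinity matrix
    S = np.clip((S + S.T) / 2.0, 0.0, None)
    return SpectralClustering(n_clusters=k, affinity="precomputed",
                              random_state=0).fit_predict(S)
```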
To find a global optimum, which corresponds to an approximately optimal partition of the trigger set into K clusters CT = {CT 1 , CT 2 , ..., CT K}, and a partition of the argument set into M clusters CA = {CA 1 , CA 2 , ..., CA M}, we minimize the agreement across clusters and the disagreement within clusters: arg min KT ,KA,λ O = (DT inter + DT intra) + (DA inter + DA intra) DP inter = K X i̸=j=1 X u∈CP i ,v∈CP j sim(Pu, Pv) DP intra = K X i=1 X u,v∈CP i (1 −sim(Pu, Pv)) We incorporate the Spectral Clustering algorithm (Luxburg, 2007) into joint constraint clustering process to get the final optimized clustering results. The detailed algorithm is summarized in Algorithm 1. 2.6 Event Type and Argument Role Naming For each trigger cluster, we utilize the trigger which is nearest to the centroid of the cluster as the event type name. For a given event trigger, we assign a role name to each of its arguments (identified as in section 2.2). This process depends on which meaning representation was used to select the arguments. For AMR, we first map the event trigger’s OntoNotes sense to PropBank, VerbNet, and FrameNet. We assign each argument a role name as follows. We map AMR core roles (e.g. “:ARG0”, “ARG1”) to FrameNet if possible, otherwise to VerbNet if possible, and finally to PropBank roles if a mapping to VerbNet is not available.5. Nearly 5% of AMR core roles can 5OntoNotes 5.0 provides a mapping; https://catalog.ldc.upenn.edu/LDC2013T19 262 Algorithm 1 Joint Constraint Clustering Algorithm Input: Trigger set T, argument set A, their lexical embedding ET g , EA g , event structure representation ET R, and the minimal (Kmin T , Kmin A ) and maximal (Kmax T , Kmax A ) number of clusters for triggers and arguments; Output: The optimal clustering results: CT and CA; • Omin = ∞, CT = ∅, CA = ∅ • For KT = Kmin T to KT = Kmax T , KA = Kmin A to KA = Kmax A – Clustering with Spectral Clustering Algorithm: – CT curr = spectral(T, ET g , ET R, KT ) – CA curr = spectral(A, EA g , KA) – Ocurr = O(CT curr, CA curr) – if Ocurr < Omin ∗Omin = Ocurr, CT = CT curr, CA = CA curr – while iterate time ≤10 ∗CT curr = spectral(T, ET g , ET R, KT , CA curr) ∗CA curr = spectral(A, EA g , KA, CT curr) ∗Ocurr = O(CT curr, CA curr) ∗if Ocurr < Omin · Omin=Ocurr, CT = CT curr, CA = CA curr • return Omin, CT , CA; be mapped to FrameNet roles and 55% can be mapped to VerbNet roles, and the remaining can be mapped to PropBank. Table 3 shows some mapping examples. We map non-core roles from AMR to FrameNet, as shown in Table 4. When Stanford Typed Dependencies are used for meaning representation we construct a manual mapping AMR relations and use the above procedure. When FrameNet is used for meaning representation we simply keep the FrameNet role name for argument role naming. Concept AMR Core Role FrameNet Role VerbNet Role PropBank Description fire.1 ARG0 Agent Agent Shooter fire.1 ARG1 Projectile Theme Gun/projectile extrude.1 ARG0 Agent Extruder, agent extrude.1 ARG1 Theme Entity extruded extrude.1 ARG2 Source Extruded from blood.1 ARG0 Agent blood.1 ARG1 Theme, one bled Table 3: Core Role Mapping Examples. 3 Evaluation 3.1 Data We used the August 11, 2014 English Wikipedia dump to learn trigger sense and argument embeddings. 
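For that embedding step, one simple realisation is to run a standard skip-gram implementation over a corpus in which every token covered by the WSD system has been rewritten as a word–sense pair, so that each sense receives its own vector. The gensim call below (argument names as in gensim 4) and the sense-tagging convention are our own illustration, not the authors' pipeline.

```python
from gensim.models import Word2Vec

# Each sentence is a list of tokens; tokens covered by WSD are rewritten as
# "lemma#sense_id" so that each sense gets a distinct vector.
# The two toy sentences below are invented for illustration.
sense_tagged_corpus = [
    ["soldiers", "fire#1", "their", "weapons", "at", "the", "enemy"],
    ["the", "company", "decided", "to", "fire#2", "three", "employees"],
]

model = Word2Vec(
    sentences=sense_tagged_corpus,
    vector_size=300,     # embedding dimensionality
    window=5,
    sg=1,                # skip-gram, as in the Continuous Skip-gram model
    min_count=1,         # keep rare toy tokens; a real corpus would use a higher cutoff
    workers=4,
    seed=0,
)

# sense-level neighbours; on real data "fire#1" and "fire#2" drift apart
print(model.wv.most_similar("fire#1", topn=5))
```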
For evaluation we choose a subset of ERE (Entity Relation Event) corpus (50 documents) which has perfect AMR annotations so we can AMR None-Core Role FrameNet Role topic Topic instrument Instrument manner Manner poss Possessor prep-for, prep-to, prep-on-behalf Purpose time, decade, year, weekday, duration Time mod, cause, prep-as Explanation prep-by, medium, path Means location, destination, prep-in Place Table 4: None-Core Role Mapping. compare the impact of perfect AMR and system generated AMR. To compare with state-of-the-art event extraction on Automatic Content Extraction (ACE2005) data, we follow the same evaluation setting in previous work (Ji and Grishman, 2008; Liao and Grishman, 2010; Hong et al., 2011) and use 40 newswire documents as our test set. 3.2 Schema Discovery Figure 6 shows some examples as part of the event schema discovered from the ERE data set. Each cluster denotes an event type, with a set of event mentions and sentences. Each event mention is also associated with some arguments and their roles. The event and argument role annotations for sample sentences may serve as an examplebased corpus-customized “annotation guideline” for event extraction. Table 5 compares the coverage of event schema discovered by our approach, using AMR as meaning representation, with the predefined ACE and ERE event schemas. Besides the types defined in ACE and ERE, this approach discovers many new event types such as Build and Threaten as displayed in Figure 6. Our approach can also discover new argument roles for a given event type. For example, for Attack events, besides five types of existing arguments (Attacker, Target, Instrument, Time, and Place) defined in ACE, we also discover a new type of argument Purpose. For example, in “The Dutch government, facing strong public anti-war pressure, said it would not commit fighting forces to the war against Iraq but added it supported the military campaign to disarm Saddam.”, “disarm Saddam” is identified as the Purpose for the Attack event triggered by “campaign”. Note that while FrameNet specifies Purpose as an argument role for the Attack, such information specific to Attack is not part of AMR. 263 S1: The court official stated that on 18 March 2008 Luong stated to judges that she was hired by an unidentified man to ship the heroin to Australia in exchange for 15000 U.S. dollars. Event: ship, Arguments: man(Agent), Austrialia(Destination),heroin(Theme) S2: State media didn’t identify the 2 convicts hanged in Zahedan but stated that they had been found guilty of transporting 5.25 kilograms of heroin. Event: transporting, Arguments: they(Agent), heroin(Theme) Event Type: Transport S1: The construction of the facility started in 790000, but stopped after the 910000 Soviet collapse when Tajikistan slid into a 5 year civil war that undermined its economy. Event:construction, Arguments: facility(Product), 790000(Time) S2: The closed Soviet-era military facility was fou-nded in 570000 and collects and analyzes all information gathered from Russia's military spy satellites. Event: founded, Arguments: Soviet-era facility(Product), 570000(Time) Event Type: Build Event Type: Die S1: Police in the strict communist country discovered his methamphetamine manufacturing plant disguised as a soap factory and sentenced him to death in 1997. Event: death, Arguments: him(Theme), 1997(Time) S2: A newspaper report on January 1, 2008 that Iran hanged two convicted drug traffickers in the south-eastern city of Zahedan. 
S1: Colombian Government was alarmed because uranium is the primary basis for generating weapons of mass destruction. Event:alarmed, Arguments:Columbian Government(Experiencer) Event Type: Threaten S2: Cluster bomblets have been criticized by human rights groups because they kill indiscriminately and because unexploded ordinance poses a threat to civilians similar to that of land mines. Event:threat, Arguments:ordinance(Cause), civilian(Experiencer) Event: hanged, Arguments: Iran(Agent), drug traf- fickers(Theme), southeastern city of Zahedan(Place) S1: Ras acts as a molecular switch that is activated upon GTP loading and deactivated upon hydrolysis of GTP to GDP. Event: hydrolysis Arguments: GTP (Patient), GDP (Result) Event Type: Dissociate S2: Activation requires dissociation of protein-bound GDP , an intrinsica- lly slow process that is accelerated by guanine nucleotide exchange factors. Event: dissociation Arguments: GDP (Patient) S3: His - ubiquitinated proteins were purified by Co2+ metal affinity chromatography in 8M urea denaturing conditions. Event: denaturing Arguments: proteins(Patient) Figure 6: Example Output of the Event Schema. Data ACE ERE Human SystemAMR Overlap Human PerfectAMR Overlap SystemAMR Overlap # of Events 440 2,395 331 580 3,765 517 2,498 477 # of Event Types 33 134 N/A 26 137 N/A 120 N/A # of Arguments 883 4,361 587 1,231 6,195 919 4,288 801 Table 5: Schema Coverage Comparison on ACE and ERE. 3.3 Event Extraction for All Types To evaluate the performance of the whole event schema, we randomly sample 100 sentences from ERE data set and ask two linguistic experts to fully annotate the events and arguments. As a starting point, annotators were given output from our Schema Discovery using gold standard AMR. For each sentence, they saw event triggers and corresponding arguments. Their job was to correct this output by marking incorrectly identified events and arguments, and adding missing events and arguments. The inter-annotator agreement is 83% for triggers and 79% for arguments. To evaluate trigger and argument identification, we automatically compare this gold standard with system output (see Table 6). To evaluate trigger and argument typing, annotators manually checked system output and assessed whether the type name was reasonable (see Table 6). Note that automatic comparison between system and gold standard output is not appropriate for typing; for a given cluster, there is no definitive “best” name. We found that most event triggers not recovered by our system are multi-word expressions such as “took office” or adverbs such as “previously” and “formerly”. For argument identification, our approach fails to identify some arguments that require world knowledge to extract. For example, in “Anti-corruption judge Saul Pena stated Montesinos has admitted to the abuse of authority charge”, “Saul Pena” is not identified as a Adjudicator argument of event “charge” because it has no direct semantic relations with the event trigger. 3.4 Impact of Semantic Information and Meaning Representations Table 7 assesses the impact of various types of semantic information, and also compares the effectiveness of each type of meaning representation for the typing task only. We note that F-measure drops 14.4 points if only WSD based embeddings are not used. In addition, AMR relations specifying both core and non-core roles are informative for learning distinct compositional operators. 
To compare typing results across meaning representations, we use triggers identified by both the AMR and FrameNet parsers. Using Stanford Typed Dependencies, relations are likely too coarse-grained or lack sufficient semantic information. Thus, our approach cannot leverage the inter-dependency between event trigger type and argument role to achieve pure trigger clusters. Compared with dependency relations, the fine-grained AMR semantic relations such as :location, :manner, :topic, :instrument appear to be more informative to infer the argument roles. For example, in sentence “Approximately 25 kilometers southwest of Sringar 2 militants were killed in a second gun battle.”, “gun” is identified as an Instrument for “battle” event based on the AMR relation :instrument. In contrast, dependency parsing identifies “gun” as a 264 Method Trigger Identification (%) Trigger Typing (%) Arg Identification (%) Arg Typing (%) P R F1 P R F1 P R F1 P R F1 Perfect AMR 87.0 98.7 92.5 70.0 79.5 74.5 94.0 83.7 88.6 72.4 64.4 68.2 System AMR 93.0 67.2 78.0 69.8 50.5 58.6 95.7 59.6 73.4 68.9 42.9 52.9 Table 6: Overall Performance of Liberal Event Extraction on ERE data for All Event Types. Method Trigger F1 (%) Arg F1 (%) P R F1 P R F1 Perfect AMR 70 79.5 74.5 72.4 64.4 68.2 w/o Structure Representation 52.8 59.4 55.9 52.1 48.0 50.0 w/o WSD based embeddings 62.8 57.4 60.1 61.9 50.3 55.5 w/o None-Core Roles 61.5 72.2 66.5 61.3 58.0 59.6 w/o Core Roles 57.3 49.7 53.2 63.6 49.5 55.7 System AMR 69.8 50.5 58.6 68.9 42.9 52.9 Replace AMR with Dependency Parsing 45.9 61.9 52.7 63.9 18.2 28.4 Replace AMR with FrameNet Parsing 43.1 57.1 49.2 78.1 7.1 13.0 Table 7: Impact of semantic information and representations on typing for ERE data. compound modifier of “battle”. Note that we used a static mapping to map dependency relations to AMR relations (see section 2.6), whereas ideally this mapping would be context-dependent. Creating a context-dependent mapping would constitute significant steps toward building an AMR parser. Using FrameNet results in low recall for argument typing. SEMAFOR’s output often does not identify all the arguments identified by our annotators. Many triggers are associated with zero or one argument, thus there is not enough data to learn the event structure representation. In addition, most of the arguments from identified by SEMAFOR are long phrases. Because no internal structure is assigned, we simply average all single token’s vectors to represent the phrase. However, the high precision may be due to the fact that FrameNet relations are designed to specify semantic roles. 3.5 Event Extraction for ACE/ERE Types We manually select the event triggers in the ACE and ERE evaluation sets discovered by our AMRbased approaches that are ACE/ERE events based on their annotation guidelines. If a trigger doesn’t already have a gold standard ACE/ERE annotation we provide one. For each such event we use core roles and Instrument/Possessor/Time/Place relations to detect arguments. Each trigger and argument role type is assessed manually if an ACE/ERE annotation does not exist. We evaluate our approach for trigger and argument typing by comparing system output to manual annotation, considering synonymous labels to be equivalent (e.g., our approach’s kill type ACE’s die). 
We compare our approach with the following state-ofthe-art supervised methods which are trained from 529 ACE documents or 336 ERE documents: • DMCNN: A dynamic multi-pooling convolutional neural network based on distributed word representations (Chen et al., 2015). • Joint: A structured perceptron model based on symbolic semantic features (Li et al., 2013). • LSTM: A long short-term memory neural network (Hochreiter and Schmidhuber, 1997) based on distributed semantic features. Table 8 shows the results. On ACE events, both DMCNN and Joint methods outperform our approach for trigger and argument extraction. However, when moving to ERE event schema, although re-trained based on ERE labeled data, their performance still degrades significantly. These previous methods heavily rely on the quality and quantity of the training data. When the training data is not adequate (the ERE training documents contain 1,068 events and 2,448 arguments, while ACE training documents contain more than 4,700 events and 9,700 arguments), the performance is low. In contrast, our approach is unsupervised and can automatically identify events, arguments and assign types/roles, and is not tied to one event schema. 3.6 Event Extraction for Biomedical Domain To demonstrate the portability of our approach to a new domain, we conduct our experiment on 14 biomedical articles (755 sentences) with perfect AMR annotations (Garg et al., 2016). We utilize a word2vec model6 trained from all paper abstracts from PubMed7 and full-text documents from the PubMed Central Open Access subset. To evaluate the performance, we randomly sample 100 sentences and ask a biomedical scientist to assess the correctness of each event and argument role. Our approach achieves 83.1% precision on trigger labeling (619 events in total) and 78.4% precision on argument labeling (1,124 arguments in total). 6http://bio.nlplab.org/ 7http://www.ncbi.nlm.nih.gov/pubmed 265 Method ERE: Trigger F1 (%) ERE: Arg F1(%) ACE: Trigger F1 (%) ACE: Arg F1 (%) P R F1 P R F1 P R F1 P R F1 LSTM 41.5 46.8 44.1 9.9 11.6 10.7 66.0 60 62.8 29.3 32.6 30.8 Joint 42.3 41.7 42.0 61.8 23.2 33.7 73.7 62.3 67.5 64.7 44.4 52.7 DMCNN 75.6 63.6 69.1 68.8 46.9 53.5 LiberalP erfectAMR 79.8 50.5 61.8 48.9 32.9 39.3 LiberalSystemAMR 88.5 42.6 57.5 47.6 30.0 36.8 80.7 50.1 61.8 51.9 39.4 44.8 Table 8: Performance on ERE and ACE events. It demonstrates that our approach can be rapidly adapted to a new domain and discover domain-rich event schema. An example schema for an event type “Dissociate” is shown in Figure 7. : man(Agent), Austrialia(Destination),heroin(Theme) ’t identify the 2 convicts hanged in Zahedan but stated und guilty of transporting 5.25 kilograms of heroin. orting, Arguments: they(Agent), heroin(Theme) n of the facility started in 790000, but stopped after ollapse when Tajikistan slid into a 5 year civil war economy. Event:construction, rguments: facility(Product), 790000(Time) et-era military facility was fou-nded in 570000 and s all information gathered from Russia's military spy Event: founded, nts: Soviet-era facility(Product), 570000(Time) Event Type: Build vent: death, guments: him( heme), 997( ime) S2: A newspaper report on January 1, 2008 that Iran hanged two convicted drug traffickers in the south-eastern city of Zahedan. S1: Colombian Government was alarmed because uranium is the primary basis for generating weapons of mass destruction. 
Event:alarmed, Arguments:Columbian Government(Experiencer) Event Type: Threaten S2: Cluster bomblets have been criticized by human rights groups because they kill indiscriminately and because unexploded ordinance poses a threat to civilians similar to that of land mines. Event:threat, Arguments:ordinance(Cause), civilian(Experiencer) Event: hanged, Arguments: Iran(Agent), drug traf- fickers(Theme), southeastern city of Zahedan(Place) S1: Ras acts as a molecular switch that is activated upon GTP loading and deactivated upon hydrolysis of GTP to GDP. Event: hydrolysis Arguments:GTP (Patient), (GDP) (Result) Event Type: Dissociate S2: Activation requires dissociation of protein-bound GDP , an intrinsically slow process that is accelerated by guanine nucleotide exchange factors. Event: dissociation Arguments: GDP (Patient) S3: His - ubiquitinated proteins were purified by Co2+ metal affinity chromatography in 8M urea denaturing conditions. Event: denaturing Arguments: proteins(Patient) Figure 7: Example Output of the Discovered Biomedical Event Schema. 4 Related Work Most of previous event extraction work focused on learning supervised models based on symbolic features (Ji and Grishman, 2008; Miwa et al., 2009; Liao and Grishman, 2010; Liu et al., 2010; Hong et al., 2011; McClosky et al., 2011; Sebastian and Andrew, 2011; Chen and Ng, 2012; Li et al., 2013) or distributional features through deep learning (Chen et al., 2015; Nguyen and Grishman, 2015). They usually rely on a predefined event schema and a large amount of training data. Compared with other paradigms such as Open Information Extraction (Etzioni et al., 2005; Banko et al., 2007; Banko et al., 2008; Etzioni et al., 2011; Ritter et al., 2012), Preemptive IE (Shinyama and Sekine, 2006), Ondemand IE (Sekine, 2006) and semantic frame based event discovery (Kim et al., 2013), our approach can explicitly name each event type and argument role. Some recent work focused on universal schema discovery (Chambers and Jurafsky, 2011; Pantel et al., 2012; Yao et al., 2012; Yao et al., 2013; Chambers, 2013; Nguyen et al., 2015). However, the schemas discovered from these methods are rather static and they are not customized for any specific input corpus. Our work is also related to efforts at composing word embeddings using syntactic structures (Hermann and Blunsom, 2013; Socher et al., 2013a; Socher et al., 2013b; Bowman et al., 2014; Zhao et al., 2015). Our trigger sense representation is similar to Word Sense Induction (Navigli, 2009; Bordag, 2006; Pinto et al., 2007; Brody and Lapata, 2009; Manandhar et al., 2010; Navigli and Lapata, 2010; Van de Cruys and Apidianaki, 2011; Wang et al., 2015b). Besides word sense, we exploit related concepts to enrich trigger representation. 5 Conclusions and Future Work We proposed a novel Liberal event extraction framework which combines the merits of symbolic semantics and distributed semantics. Experiments on news and biomedical domain demonstrate that this framework can discover explicitly defined rich event schemas which cover not only most types in existing manually defined schemas, but also new event types and argument roles. The granularity of event types is also customized for specific input corpus. And it can produce high-quality event annotations simultaneously without using annotated training data. In the future, we will extend this framework to other Information Extraction tasks. Acknowledgements We would like to thank Kevin Knight and Jonathan May (ISI) for sharing biomedical AMR annotations. 
This work was supported by the U.S. ARL NS-CTA No. W911NF-09-2-0053 and DARPA DEFT No. FA8750-13-2-0041, and in part by NSF IIS-1523198, IIS-1017362, IIS-1320617 and IIS1354329, and NIH BD2K grant 1U54GM114838. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on. 266 References C. F. Baker and H. Sato. 2003. The framenet data and software. In Proc. ACL2003. L. Banarescu, C. Bonial, S. Cai, M. Georgescu, K. Griffitt, U. Hermjakob, K. Knight, P. Koehn, M. Palmer, and N. Schneider. 2013. Abstract meaning representation for sembanking. In Proc. ACL2013 Workshop on Linguistic Annotation and Interoperability with Discourse. M. Banko, M. Cafarella, S. Soderland, M. Broadhead, and O. Etzioni. 2007. Open information extraction for the web. In Proc. IJCAI2007. M. Banko, O. Etzioni, and T. Center. 2008. The tradeoffs between open and traditional relation extraction. In Proc. ACL-HLT2008. S. Bordag. 2006. Word sense induction: Tripletbased clustering and automatic evaluation. In Proc. EACL2006. S. Bowman, C. Potts, and C. Manning. 2014. Recursive neural networks for learning logical semantics. CoRR, abs/1406.1827. S. Brody and M. Lapata. 2009. Bayesian word sense induction. In Proc. EACL2009. N. Chambers and D. Jurafsky. 2011. Template-based information extraction without the templates. In Proc. ACL-HLT2011. N. Chambers. 2013. Event schema induction with a probabilistic entity-driven model. In EMNLP, volume 13, pages 1797–1807. C. Chen and V. Ng. 2012. Joint modeling for chinese event extraction with rich linguistic features. In In COLING. Citeseer. Y. Chen, L. Xu, K. Liu, D. Zeng, and J. Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In Proc. ACL2015. Dipanjan Das, Desai Chen, Andr´e FT Martins, Nathan Schneider, and Noah A Smith. 2014. Framesemantic parsing. Computational Linguistics, 40(1):9–56. O. Etzioni, M. Cafarella, D. Downey, A. Popescu, T. Shaked, S. Soderland, D. Weld, and A. Yates. 2005. Unsupervised named-entity extraction from the web: An experimental study. Artificial Intelligence, 165:91–134. O. Etzioni, A. Fader, J. Christensen, S. Soderland, and M. Mausam. 2011. Open information extraction: The second generation. In Proc. IJCAI2011, volume 11, pages 3–10. S. Garg, A. Galstyan, U. Hermjakob, and D. Marcu. 2016. Extracting biomolecular interactions using semantic parsing of biomedical text. In Proc. AAAI. R. Grishman and B. Sundheim. 1996. Message understanding conference-6: A brief history. In Proc. COLING1996. Z. Harris. 1954. Distributional structure. Word, 10(23):146–162. K. Hermann and P. Blunsom. 2013. The role of syntax in vector space models of compositional semantics. In Proc. ACL2013. S. Hochreiter and J. Schmidhuber. 1997. Long shortterm memory. Neural computation, 9(8):1735– 1780. Y. Hong, J. Zhang, B. Ma, J. Yao, G. Zhou, and Q. Zhu. 2011. Using cross-entity inference to improve event extraction. In Proc. ACL, pages 1127–1136. Association for Computational Linguistics. E. Hovy, M. Marcus, M. Palmer, L. Ramshaw, and R. Weischedel. 2006. Ontonotes: the 90% solution. In Proc. NAACL2006. H. Ji and R. Grishman. 2008. Refining event extraction through cross-document inference. In ACL. H. Ji and R. Grishman. 2011. 
Knowledge base population: Successful approaches and challenges. In Proc. ACL2011. H. Kim, X. Ren, Y. Sun, C. Wang, and J. Han. 2013. Semantic frame-based document representation for comparable corpora. In ICDM. K. Kipper, A. Korhonen, N. Ryant, and M. Palmer. 2008. A large-scale classification of english verbs. Language Resources and Evaluation Journal, 42(1):21–40. Q. Li, H. Ji, and L. Huang. 2013. Joint event extraction via structured prediction with global features. In Proc. ACL, pages 73–82. S. Liao and R. Grishman. 2010. Using document level cross-event inference to improve event extraction. In Proc. ACL. B. Liu, L. Qian, H. Wang, and G. Zhou. 2010. Dependency-driven feature-based learning for extracting protein-protein interactions from biomedical text. In Proc. COLING. U. Luxburg. 2007. A tutorial on spectral clustering. Statistics and computing, 17(4):395–416. S. Manandhar, I. Klapaftis, D. Dligach, and S. Pradhan. 2010. Semeval-2010 task 14: Word sense induction & disambiguation. In Proc. ACL2010 international workshop on semantic evaluation. Dan Klein Christopher D Manning. 2003. Natural language parsing. In Advances in Neural Information Processing Systems 15: Proceedings of the 2002 Conference, volume 15, page 3. MIT Press. 267 D. M. Marie-Catherine, Bill M., and Christopher D. M. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings LREC, pages 449,454. D. McClosky, M. Surdeanu, and C. D. Manning. 2011. Event extraction as dependency parsing. In ACL, pages 1626–1635. T. Mikolov, K. Chen, G. Corrado, and J. Dean. 2013. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781. M. Miwa, R. Stre, Y. Miyao, and J. Tsujii. 2009. A rich feature vector for protein-protein interaction extraction from multiple corpora. In Proc. EMNLP. R. Navigli and M. Lapata. 2010. An experimental study of graph connectivity for unsupervised word sense disambiguation. Pattern Analysis and Machine Intelligence, 32(4):678–692. R. Navigli. 2009. Word sense disambiguation: A survey. ACM Computing Surveys (CSUR), 41(2):10. T. Nguyen and R. Grishman. 2015. Event detection and domain adaptation with convolutional neural networks. Volume 2: Short Papers, page 365. K. Nguyen, X. Tannier, O. Ferret, and R. Besanc¸on. 2015. Generative event schema induction with entity disambiguation. In Proc. ACL. B. Onyshkevych, M. E. Okurowski, and L. Carlson. 1993. Tasks, domains, and languages for information extraction. In TIPSTER. M. Palmer, D. Gildea, and P. Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71–106. P. Pantel, T. Lin, and M. Gamon. 2012. Mining entity types from query logs via user intent modeling. In Proc. ACL2012. D. Pinto, P. Rosso, and H. Jimenez-Salazar. 2007. Upv-si: Word sense induction using self term expansion. In Proc. ACL2007 International Workshop on Semantic Evaluations. S. Pradhan, L. Ramshaw, M. Marcus, M. Palmer, R. Weischedel, and N. Xue. 2011. Conll-2011 shared task: Modeling unrestricted coreference in ontonotes. In Proc. CONLL2011. A. Ritter, O. Etzioni, and S. Clark. 2012. Open domain event extraction from twitter. In Proc. SIGKDD2012, pages 1104–1112. ACM. R. Sebastian and M. Andrew. 2011. Fast and robust joint models for biomedical event extraction. In EMNLP. S. Sekine. 2006. On-demand information extraction. In Proc. COLING-ACL2006. Y. Shinyama and S. Sekine. 2006. Preemptive information extraction using unrestricted relation discovery. In Proc. HLT-NAACL2006. R. 
Socher, J. Pennington, E. H. Huang, A. Y. Ng, and C. D. Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proc. EMNLP, pages 151–161. R. Socher, A. Karpathy, Q. V. Le, C. Manning, and A. Y. Ng. 2013a. Grounded compositional semantics for finding and describing images with sentences. TACL2013. R. Socher, A. Perelygin, J. Wu, J. Chuang, C. Manning, A. Ng, and C. Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. EMNLP2013. E. Tjong, K. Sang, and F. Meulder. 2003. Introduction to the conll-2003 shared task: language-independent named entity recognition. In Proc. CONLL2003. T. Van de Cruys and M. Apidianaki. 2011. Latent semantic word sense induction and disambiguation. In Proc. ACL-HLT2011. C. Wang, N. Xue, and S. Pradhan. 2015a. Boosting transition-based amr parsing with refined actions and auxiliary analyzers. In Proc. ACL2015. J. Wang, M. Bansal, K. Gimpel, B. Ziebart, and T. Clement. 2015b. A sense-topic model for word sense induction with unsupervised data enrichment. TACL, 3:59–71. L. Yao, S. Riedel, and A. McCallum. 2012. Probabilistic databases of universal schema. In Proc. NIPS2012 Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction. L. Yao, S. Riedel, and A. McCallum. 2013. Universal schema for entity type prediction. In Proc. NIPS2013 Workshop on Automated Knowledge Base Construction. Y. Zhao, Z. Liu, and M. Sun. 2015. Phrase type sensitive tensor indexing model for semantic composition. In Proc. AAAI2015. Z. Zhong and H. T. Ng. 2010. It makes sense: A wide-coverage word sense disambiguation system for free text. In Proceedings of the ACL 2010 System Demonstrations, pages 78–83. 268
2016
25
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 269–278, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Jointly Event Extraction and Visualization on Twitter via Probabilistic Modelling Deyu Zhou†‡ Tianmeng Gao† Yulan He§ † School of Computer Science and Engineering, Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University, China ‡ State Key Laboratory for Novel Software Technology, Nanjing University, China § School of Engineering and Applied Science, Aston University, UK [email protected], [email protected], [email protected] Abstract Event extraction from texts aims to detect structured information such as what has happened, to whom, where and when. Event extraction and visualization are typically considered as two different tasks. In this paper, we propose a novel approach based on probabilistic modelling to jointly extract and visualize events from tweets where both tasks benefit from each other. We model each event as a joint distribution over named entities, a date, a location and event-related keywords. Moreover, both tweets and event instances are associated with coordinates in the visualization space. The manifold assumption that the intrinsic geometry of tweets is a low-rank, non-linear manifold within the high-dimensional space is incorporated into the learning framework using a regularization. Experimental results show that the proposed approach can effectively deal with both event extraction and visualization and performs remarkably better than both the state-of-the-art event extraction method and a pipeline approach for event extraction and visualization. 1 Introduction Event extraction, one of the important and challenging tasks in information extraction, aims to detect structured information such as what has happened, to whom, where and when. The outputs of event extraction could be beneficial for downstream applications such as summarization and personalized news systems. Data visualization, an important exploratory data analysis task, provides a simple way to reveal the relationships among data (Nakaji and Yanai, 2012). Although event extraction and visualization are two different tasks and typically studied separately in the literature, these two tasks are highly related. Documents which are close to each other in the low-dimensional visualization space are likely to describe the same event. Events in nearby locations in the visualization space are likely to share similar event elements. Therefore, jointly learning the two tasks could potentially bring benefits to each other. However, it is not straightforward to learn event extraction and visualization jointly since event extraction usually relies on semantic parsing results (McClosky et al., 2011) while visualization is accomplished by dimensionality reduction (Iwata et al., 2007; L´opez-Rubio et al., 2002). In this paper, we propose a novel probabilistic model, called Latent Event Extraction & Visualization (LEEV) model, for joint event extraction and visualization on Twitter. It is partly inspired by the Latent Event Model (LEM) (Zhou et al., 2015) where each tweet is assigned to one event instance and each event is modeled as a joint distribution over named entities, a date/time, a location and the event-related keywords. 
Going beyond LEM, we assume that each event is not only modeled as the joint distribution over event elements as in (Zhou et al., 2015), but also associate with coordinates in the visualization space. The Euclidean distance between a tweet and each events determines which event the tweet should be assigned to. Furthermore, the manifold assumption that the intrinsic geometry of tweets is a low-rank, non-linear manifold within the highdimensional space, is incorporated in the learning framework using a regularization. Experimental results show that the proposed approach can effectively deal with both event extraction and visualization tasks and performs remarkably better than both the state-of-the-art event extraction method 269 and a pipeline approach for event extraction and visualization. 2 Related Work Our proposed work is related to two lines of research, event extraction and joint topic modeling and visualization. 2.1 Event Extraction Research on event extraction of tweets can be categorized into domain-specific and open domain approaches. Domain-specific approaches usually have target events in mind and aim to extract events from a particular location or for emergency response during natural disasters. Anantharam et al. (2015) focused on extracting city events by solving a sequence labeling problem. Evaluation was carried out on a real-world dataset consisting of event reports and tweets collected over four months from San Francisco Bay Area. TSum4act (Nguyen et al., 2015) was designed for emergency response during disasters and was evaluated on a dataset containing 230,535 tweets. Most of open domain approaches focused on extracting a summary of events discussed in social media. For example Benson et al. (2011) proposed a structured graphical model which simultaneously analyzed individual messages, clustered, and induced a canonical value for each event. Capdevila et al. (2015) proposed a model named Tweet-SCAN based on the hierarchical Dirichlet process to detect events from geo-located tweets. To extract more information, a system called SEEFT (Wang et al., 2015) used links in tweets and combined tweets and linked articles to identify events. Zhou et al. (2014; 2015) proposed an unsupervised Bayesian model called latent event model (LEM) for event extraction from Twitter by assuming that each tweet message is assigned to one event instance and each event is modeled as a joint distribution over named entities, a date/time, a location and the event-related keywords. Our proposed method is partly inspired by (Zhou et al., 2015). However, different from previous methods, our approach not only extracts the structured representation of events, but also learns the coordinates of events and tweets simultaneously. 2.2 Joint Topic Modeling and Visualization Since our proposed approach can be considered as a variant of topic model, we also review the related work of joint topic modeling and visualization here. Traditionally, topic modeling and visualization are considered as two disjoint tasks and can be combined for pipeline processing. For example, probabilistic latent semantic analysis (Hofmann, 1999) can be first performed followed by parametric embedding (Iwata et al., 2007). Another pipeline approach (Millar et al., 2009) is based on latent Dirichlet allocation followed by selforganizing maps (L´opez-Rubio et al., 2002). Jointly modeling topics and visualization is a new problem explored in very few works. The state-of-the-art is a joint approach proposed in (Iwata et al., 2008). 
In this model, both documents and topics are assumed to have latent coordinates in a visualization space. The topic proportions of a document are determined by the distances between the document and the topics in the visualization space, and each word is drawn from one of the topics according to the document’s topic proportions. A visualization was obtained by fitting the model to a given set of documents using the EM algorithm. Following the same line, by considering the local consistency in terms of the intrinsic geometric structure of the document manifold, an unsupervised probabilistic model, called SEMAFORE, was proposed in (Le and Lauw, 2014a) by preserving the manifold in the lower dimensional space. In (Le and Lauw, 2014b), a semantic visualization model is learned by associating each document a coordinate in the visualization space, a multinomial distribution in the topic space, and a directional vector in a highdimensional unit hypersphere in the word space. Our work is partly inspired by (Le and Lauw, 2014a). However, our proposed approach differs from (Le and Lauw, 2014a) in that events, instead of topics, are modelled as the joint distribution over event elements. Both tweets and events are associate with coordinates in the visualization space. 3 Methodology We follow the same pre-processing steps described in (Zhou et al., 2015) to filter out nonevent-related tweets and extract dates, locations, and named entities by temporal resolution, partof-speech (POS) tagging and named entity recognition. The pre-processed tweets are then fed into our proposed model for event extraction and visu270 Table 1: Definition of Notations. Notation Definition e event index, e ∈{1..E} W = {wm} tweets, m ∈{1..M} Z = {zm} event labels for tweets Nmy number of named entities in wm Nmd number of dates in wm Nml number of locations in wm Nmk number of keywords in wm θey probability of named entity y in event e φed probability of date d in event e ψel probability of location l in event e ωek probability of keyword k in event e β, γ, η, λ Dirichlet hyperparameters χ, δ Normal hyperparameters G dimension of visualization space alization. We describe our model in more details below. 3.1 Latent Event Extraction & Visualization (LEEV) Model We propose an unsupervised latent variable model called the Latent Event Extraction & Visualization (LEEV) model which simultaneously extracts events from tweets and generates a visualization of the events. Table 1 lists notations used in this paper. In LEEV, each tweet message wm, m ∈ {1...M} is associated with a latent coordinate xm in the visualization space. Each event e ∈{1...E} is also associated with a coordinate ϕe. Assuming that each tweet message wm, m ∈{1...M} is assigned to one event instance zm = e and e is modeled as a joint distribution over named entities y, the date d when e happened, the location l and the event-related keywords k, the generative process of the model is described as follows: • For each event e ∈{1..E}, draw multinomial distributions θe ∼Dirichlet(β), φe ∼ Dirichlet(γ), ψe ∼ Dirichlet(η), ωe ∼ Dirichlet(λ), draw event coordinate ϕe ∼ Normal(0, χ−1I); • For each tweet wm, m ∈{1..M} * Choose tweet coordinate: xm ∼ Normal(0, δ−1I); * Choose an event zm = e ∼ Multinomial({P(e|xm, Φ)E e=1}); * For each named entity in the tweet wm, choose a named entity y ∼ Multinomial(θe); M N E x e l d y k E θ φ ψ ω β γ η λ f Figure 1: Latent Event Extraction & Visualization (LEEV) Model. 
  * For each date in the tweet wm, choose a date d ∼ Multinomial(φe);
  * For each location in the tweet wm, choose a location l ∼ Multinomial(ψe);
  * For other words in the tweet wm, choose a word k ∼ Multinomial(ωe).

Here, β, γ, η, λ, χ, δ are priors, I is an identity matrix, and P(e|xm, Φ) is the probability of the tweet wm with coordinate xm belonging to the event e. It is defined as

P(e \mid x_m, \Phi) = \frac{\exp\left(-\frac{1}{2}\lVert x_m - \phi_e \rVert^2\right)}{\sum_{e'=1}^{E} \exp\left(-\frac{1}{2}\lVert x_m - \phi_{e'} \rVert^2\right)}.   (1)

That is, the probability is obtained by normalizing over the Euclidean distances between the tweet and all the events, so that when the Euclidean distance between a tweet wm and an event e is small, the probability that tweet wm belongs to event e becomes large.

The graphical model of LEEV is shown in Figure 1.

Figure 1: Latent Event Extraction & Visualization (LEEV) Model.

The parameters to be learned are the multinomial distributions Θ = {θe, φe, ψe, ωe} for e = 1..E, the tweets' coordinates X = {xm} for m = 1..M, and the events' coordinates Φ = {ϕe} for e = 1..E, which are collectively denoted as B = ⟨Θ, X, Φ⟩. Let

H(w_m, e) = \prod_{n=1}^{N_{my}} P(y_n \mid \theta_e) \prod_{n=1}^{N_{md}} P(d_n \mid \varphi_e) \prod_{n=1}^{N_{ml}} P(l_n \mid \psi_e) \prod_{n=1}^{N_{mk}} P(k_n \mid \omega_e).

The log likelihood of B given the tweets W is

L(B \mid W) = \sum_{m=1}^{M} \log\left\{ \sum_{e=1}^{E} P(e \mid x_m, \Phi) \, H(w_m, e) \right\} + \sum_{m=1}^{M} \log P(x_m) + \sum_{e=1}^{E} \log P(\phi_e) + \sum_{e=1}^{E} \log\left[ P(\theta_e) P(\varphi_e) P(\psi_e) P(\omega_e) \right].   (2)

For the events' coordinates ϕe and the tweets' coordinates xm, we use a Gaussian prior with zero mean and spherical covariance:

p(\phi_e) = \left(\tfrac{\chi}{2\pi}\right)^{G/2} \exp\left(-\tfrac{\chi}{2}\lVert \phi_e \rVert^2\right), \qquad p(x_m) = \left(\tfrac{\delta}{2\pi}\right)^{G/2} \exp\left(-\tfrac{\delta}{2}\lVert x_m \rVert^2\right).

3.2 LEEV with Manifold Regularization

Recent studies suggest that the intrinsic geometry of textual data is a low-rank, non-linear manifold lying in a high-dimensional space (Cai et al., 2008; Zhang et al., 2005). We therefore assume that when two tweets wi and wj are close in the intrinsic geometry of the manifold Υ, their low-rank representations should be close as well. To capture this assumption, we consider Laplacian Eigenmaps (LE) (Belkin and Niyogi, 2003), which has been commonly used in manifold learning algorithms (Le and Lauw, 2014a). LE constructs a k-nearest neighbors graph to represent data residing on a low-dimensional manifold embedded in a higher-dimensional space. In this paper, we use LE to incorporate neighborhood information of tweets. We construct a manifold graph with edges connecting two data points wi and wj, setting the edge weight υij = 1 if wj is one of the k-nearest neighbors of wi, and υij = 0 otherwise. We represent each tweet as a word-count vector, i.e., each element of the vector is weighted by its corresponding term frequency, and use the cosine similarity metric to measure the distance between tweets when constructing the manifold graph. We also tried vectors with the TF-IDF weighting strategy to represent tweets and found that word-count vectors give better results.

We apply a regularization framework to incorporate the manifold structure into the learning model. The new regularized log-likelihood function is

L(B \mid W, \Upsilon) = L(B \mid W) - \frac{\xi}{2} R(B \mid \Upsilon),   (3)

where ξ is the regularization parameter; LEEV is the special case obtained when ξ = 0. The second component R is a regularization function which consists of two parts:

R(B \mid \Upsilon) = R^{+}(B \mid \Upsilon) + R^{-}(B \mid \Upsilon),   (4)

R^{+}(B \mid \Upsilon) = \sum_{i,j=1;\, i \neq j}^{M} \upsilon_{ij} \, F(w_i, w_j),   (5)

R^{-}(B \mid \Upsilon) = \sum_{i,j=1;\, i \neq j}^{M} \frac{1 - \upsilon_{ij}}{F(w_i, w_j) + 1},   (6)

where F is a distance function that operates on the low-rank space. We define F as the squared Euclidean distance of the coordinates in the visualization space:

F(w_i, w_j) = \lVert x_i - x_j \rVert^2.   (7)

Minimizing R⁺ minimizes the distance between neighbors, and minimizing R⁻ maximizes the distance between non-neighbors. By enforcing manifold learning, we capture the spirit of keeping neighbors close and keeping non-neighbors apart.
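To make the geometry concrete, the following is a minimal NumPy sketch of the assignment probability in Eq. (1) and the manifold regularizer in Eqs. (4)–(7). It is an illustrative re-implementation rather than the authors' code; the function and variable names (assignment_probs, manifold_regularizer, U for the υij adjacency matrix) are our own.

```python
import numpy as np

def assignment_probs(x_m, Phi):
    """Eq. (1): P(e | x_m, Phi) as a softmax over negative halved squared
    Euclidean distances between a tweet coordinate x_m (shape (G,)) and
    the event coordinates Phi (shape (E, G))."""
    sq_dists = np.sum((Phi - x_m) ** 2, axis=1)      # ||x_m - phi_e||^2 for each event
    logits = -0.5 * sq_dists
    logits -= logits.max()                           # for numerical stability
    w = np.exp(logits)
    return w / w.sum()

def manifold_regularizer(X, U):
    """Eqs. (4)-(7): R = R+ + R-, with F(w_i, w_j) = ||x_i - x_j||^2 computed on
    the visualization coordinates X (shape (M, G)); U (shape (M, M)) is the
    binary k-NN adjacency matrix holding the edge weights upsilon_ij."""
    M = X.shape[0]
    diff = X[:, None, :] - X[None, :, :]
    F = np.sum(diff ** 2, axis=-1)                   # pairwise squared distances
    off_diag = ~np.eye(M, dtype=bool)                # exclude i == j terms
    r_plus = np.sum(U[off_diag] * F[off_diag])               # pull neighbors together
    r_minus = np.sum((1.0 - U[off_diag]) / (F[off_diag] + 1.0))  # push non-neighbors apart
    return r_plus + r_minus
```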
3.3 Parameter Estimation

In Equation (2), the presence of the sum over e prevents the logarithm from acting directly on the joint distribution. If the corresponding latent event zm of each tweet wm were known, {W, Z} would constitute the complete data, and maximizing the complete-data log likelihood log P(W, Z|B) could be done easily. However, in practice we do not observe the latent variables Z and only have the incomplete data W. Therefore, the expectation-maximization (EM) algorithm is employed to handle the incomplete data. EM is an efficient iterative procedure for computing maximum likelihood estimates of probabilistic models with unobserved latent variables.

The class posterior probability of the mth tweet under the current parameter values ˆB, P(zm = e|m, ˆB), is given by

P(z_m = e \mid m, \hat{B}) = \frac{P(z_m = e \mid \hat{x}_m, \hat{\Phi}, \hat{B}) \, H(w_m, e)}{\sum_{e'=1}^{E} P(z_m = e' \mid \hat{x}_m, \hat{\Phi}, \hat{B}) \, H(w_m, e')},   (8)

which corresponds to the E-step of the EM algorithm.

In the M-step, the model parameters B are updated by maximizing the regularized conditional expectation of the complete-data log likelihood with the priors:

Q(B \mid \hat{B}) = \sum_{m=1}^{M} \sum_{e=1}^{E} P(z_m = e \mid m, \hat{B}) \log\left[ P(e \mid x_m, \Phi) \, H(w_m, e) \right] + \sum_{m=1}^{M} \log P(x_m) + \sum_{e=1}^{E} \log P(\phi_e) + \sum_{e=1}^{E} \log\left[ P(\theta_e) P(\varphi_e) P(\psi_e) P(\omega_e) \right] - \frac{\xi}{2} R(B \mid \Upsilon),

where P(zm = e|m, ˆB) is calculated in the E-step. Maximizing Q(B| ˆB) with respect to θey, φed, ψel and ωek gives the next estimates in closed form:

\theta_{ey} = \frac{\sum_{m=1}^{M} \sum_{n=1}^{N_{my}} I(y_{mn} = y) \, P(z_m = e \mid m, \hat{B}) + \beta}{\sum_{y=1}^{Y} \sum_{m=1}^{M} \sum_{n=1}^{N_{my}} I(y_{mn} = y) \, P(z_m = e \mid m, \hat{B}) + Y\beta},

\varphi_{ed} = \frac{\sum_{m=1}^{M} \sum_{n=1}^{N_{md}} I(d_{mn} = d) \, P(z_m = e \mid m, \hat{B}) + \gamma}{\sum_{d=1}^{D} \sum_{m=1}^{M} \sum_{n=1}^{N_{md}} I(d_{mn} = d) \, P(z_m = e \mid m, \hat{B}) + D\gamma},

\psi_{el} = \frac{\sum_{m=1}^{M} \sum_{n=1}^{N_{ml}} I(l_{mn} = l) \, P(z_m = e \mid m, \hat{B}) + \eta}{\sum_{l=1}^{L} \sum_{m=1}^{M} \sum_{n=1}^{N_{ml}} I(l_{mn} = l) \, P(z_m = e \mid m, \hat{B}) + L\eta},

\omega_{ek} = \frac{\sum_{m=1}^{M} \sum_{n=1}^{N_{mk}} I(k_{mn} = k) \, P(z_m = e \mid m, \hat{B}) + \lambda}{\sum_{k=1}^{K} \sum_{m=1}^{M} \sum_{n=1}^{N_{mk}} I(k_{mn} = k) \, P(z_m = e \mid m, \hat{B}) + K\lambda},

where Y, D, L, K are the total numbers of distinct named entities, dates, locations, and words appearing in the whole Twitter corpus, respectively.

The coordinates ϕe and xm cannot be solved for in closed form, and are estimated by maximizing Q(B| ˆB) with a quasi-Newton method. The gradients of Q(B| ˆB) with respect to ϕe and xm are

\frac{\partial Q}{\partial \phi_e} = \sum_{m=1}^{M} P(e \mid m, \hat{B}) \left( P(e \mid x_m, \Phi) - 1 \right) (\phi_e - x_m) - \chi \phi_e,

\frac{\partial Q}{\partial x_m} = \sum_{e=1}^{E} P(e \mid m, \hat{B}) \left( P(e \mid x_m, \Phi) - 1 \right) (x_m - \phi_e) - \delta x_m - \frac{\xi}{2} \frac{\partial R(B \mid \Upsilon)}{\partial x_m},

where the gradient of R(B|Υ) with respect to xm is

\frac{\partial R(B \mid \Upsilon)}{\partial x_m} = \sum_{j=1,\, j \neq m}^{M} 2\, \upsilon_{mj} (x_m - x_j) - \sum_{j=1,\, j \neq m}^{M} \frac{2\, (x_m - x_j)\, (1 - \upsilon_{mj})}{\left( F(x_m, x_j) + 1 \right)^2}.

We set the parameters χ = 0.00005, δ = 0.05, β = γ = η = λ = 0.1 and run the EM algorithm for 50 iterations. Finally, we select an entity y, a date d, a location l and two keywords k with the highest probabilities to form a tuple ⟨y, d, l, k⟩ to represent each potential event.
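The E-step and the closed-form part of the M-step are straightforward to express in code. The snippet below is a minimal NumPy sketch, not the authors' implementation: it reuses assignment_probs from the earlier sketch, assumes H(wm, e) has been precomputed as a matrix, and shows only the θ update (the φ, ψ and ω updates are analogous with their own counts and hyperparameters); the quasi-Newton updates of the coordinates are omitted.

```python
import numpy as np

def e_step(X, Phi, H):
    """Eq. (8): responsibilities P(z_m = e | m, B_hat).
    X: (M, G) tweet coordinates; Phi: (E, G) event coordinates;
    H: (M, E) matrix with H[m, e] = H(w_m, e)."""
    P = np.stack([assignment_probs(x, Phi) for x in X])   # (M, E), Eq. (1)
    unnorm = P * H
    return unnorm / unnorm.sum(axis=1, keepdims=True)

def m_step_theta(resp, entity_counts, beta):
    """Closed-form M-step update for theta_{ey}.
    resp: (M, E) responsibilities from the E-step;
    entity_counts: (M, Y) count of each named entity in each tweet;
    beta: Dirichlet hyperparameter."""
    # expected[e, y] = sum_m sum_n I(y_mn = y) * P(z_m = e | m, B_hat)
    expected = resp.T @ entity_counts
    Y = entity_counts.shape[1]
    return (expected + beta) / (expected.sum(axis=1, keepdims=True) + Y * beta)
```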
3.4 Post-processing

In order to filter out spurious events, we calculate a correlation coefficient for each event element. We remove an event element if its correlation coefficient is less than a threshold Ce, and we remove an event if the sum of the correlation coefficients of all its four event elements is less than Ct. For an event element A, its correlation coefficient is calculated as

C_A = \log \frac{\sum_{B \in \Omega,\, B \neq A} \#(A, B)}{\#(A)},   (9)

where Ω is the set of the four event elements ⟨y, d, l, k⟩, #(A) is the number of times A appears in the whole corpus, and #(A, B) is the number of times A and B appear together. We empirically set Ce to 0.4 and Ct to 4.
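The filtering step can be sketched as follows. This is only an illustrative reading of the rule above, not the authors' code: how the corpus-level counts and co-occurrence counts are collected, and the tie-handling for zero counts, are our own assumptions.

```python
import math

def filter_events(events, counts, pair_counts, c_e=0.4, c_t=4.0):
    """Post-processing sketch: drop weakly correlated elements and spurious events.
    events: list of 4-tuples <y, d, l, k>;
    counts[x]: number of times element x occurs in the whole corpus;
    pair_counts[(a, b)]: number of times a and b occur together."""
    kept = []
    for event in events:
        elements = list(event)
        corr = {}
        for a in elements:
            # C_A = log( sum_{B != A} #(A, B) / #(A) ), Eq. (9)
            co = sum(pair_counts.get((a, b), 0) + pair_counts.get((b, a), 0)
                     for b in elements if b != a)
            corr[a] = math.log(co / counts[a]) if co > 0 and counts[a] > 0 else float("-inf")
        # Keep only elements above the element threshold C_e ...
        strong = [a for a in elements if corr[a] >= c_e]
        # ... and keep the event only if the summed correlation exceeds C_t.
        if sum(corr[a] for a in elements) >= c_t:
            kept.append(tuple(strong))
    return kept
```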
4 Experiments

In this section, we first describe the datasets used in our experiments and then present the experimental results.

4.1 Setup

We choose two datasets for model evaluation. The first one is the First Story Detection (FSD) dataset (Petrovic et al., 2013) (Dataset I), which contains 2,499 tweets published between 7th July and 12th September 2011. These tweets have been manually annotated with 27 events, covering a wide range of topics from accidents to science discoveries and from disasters to celebrity news. We filter out events mentioned in fewer than 15 tweets, since events mentioned in very few tweets are less likely to be significant. The final dataset contains 2,453 tweets annotated with 20 events. This dataset has been previously used for evaluating event extraction models, and the state-of-the-art results on it have been achieved using LEM (Zhou et al., 2015). We also create another dataset, called Dataset II, by manually annotating 1,000 tweets published in December 2010. A total of 20 events are annotated.

We compare our model with LEM (Zhou et al., 2015), which also extracts events as 4-tuples ⟨y, d, l, k⟩. The main difference between LEM and our model is that LEM directly estimates the event distribution from the sampled latent event labels, while we derive the distribution from the coordinates of tweets and events, xm and ϕe. We re-implemented the system described in (Zhou et al., 2015) and used the same evaluation metrics, namely precision, recall and F-measure. Precision is defined as the proportion of correctly identified events out of the events returned by the system. Recall is defined as the proportion of true events that are correctly identified. For calculating the precision of a 4-tuple ⟨y, d, l, k⟩, we use the following criteria:

• Do the entity y, location l, date d and keyword k that we have extracted refer to the same event?
• If the extracted representation contains keywords, are they informative enough to tell us what happened?

As mentioned in Section 2, PE (Iwata et al., 2007) is a nonlinear visualization method which takes a set of class posterior vectors as input and embeds samples in a low-dimensional Euclidean space. By minimizing a sum of Kullback–Leibler divergences, PE tries to preserve the posterior structure in the embedding space. In order to evaluate the visualization results, we compare our proposed method with a pipeline approach, event extraction using LEM (Zhou et al., 2015) followed by event visualization using PE (Iwata et al., 2007), referred to as LEM+PE.

4.2 Event Extraction Results

Table 2 shows the event extraction results on the two datasets. LEEV+R is LEEV with manifold regularization incorporated, with model parameters estimated by the EM algorithm described in Section 3.3. For LEEV and LEEV+R, the number of events E is set to 50 for both datasets. For LEEV+R, the neighborhood size k is set to 10 and the regularization parameter ξ is set to 1. For LEM, E is set to 25 for both datasets, following the suggestion in (Zhou et al., 2015). We ran our experiments on a server equipped with a 3.40 GHz Intel Core i7 CPU and 8 GB of memory. The average running time for one iteration of LEEV is 2328.1 seconds on Dataset I and 940.7 seconds on Dataset II; for LEEV+R it is 2612.7 seconds on Dataset I and 1296.4 seconds on Dataset II.

Table 2: Comparison of the event extraction results on the two datasets.

Dataset I
Method    Prec. (%)   Rec. (%)   F-measure (%)
LEM       84.00       76.19      80.35
LEEV      92.10       80.00      85.62
LEEV+R    91.91       88.50      89.88

Dataset II
Method    Prec. (%)   Rec. (%)   F-measure (%)
LEM       80.00       90.00      84.70
LEEV      83.33       95.00      88.78
LEEV+R    86.18       92.50      89.19

It can be observed that both LEEV and LEEV+R outperform the state-of-the-art results achieved by LEM on Dataset I. In particular, LEEV improves upon LEM by over 5% in F-measure and, with regularization, LEEV+R further improves upon LEEV by over 4%. A similar trend is observed on Dataset II, where both LEEV and LEEV+R outperform LEM and the best performance is given by LEEV+R. This shows the effectiveness of using regularization in LEEV; we will further demonstrate its importance in the visualization results. Overall, we see superior performance of LEEV+R over the other two models, with an F-measure of over 89% achieved on both datasets.

As described in Section 3.1, the coordinates of tweets and events are randomly initialized. Therefore, we would like to see whether the performance of event extraction is heavily influenced by random initialization. We repeat the experiments on the two datasets 10 times using LEEV+R. The experimental results are shown in Figure 2.

Figure 2: Experimental results of LEEV+R in 10 different runs.

It can be observed that the performance of LEEV+R is quite stable on both datasets. The standard deviation of the F-measure on both Dataset I and Dataset II is 0.036, which shows that random initialization does not have a significant impact on the final performance of the model.

4.3 Impact of Number of Events E

We need to pre-set the number of events E in the proposed approach. Figure 3 shows the performance of event extraction based on LEEV+R versus different values of E on the two datasets.

Figure 3: The performance of LEEV+R with different number of events E.

It can be observed that the performance of the proposed approach improves as E increases, and when E goes beyond 50 we notice more balanced precision/recall values and a relatively stable F-measure. This shows that the proposed approach is not very sensitive to the number of events E, so long as E is set to a relatively large value.

4.4 Impact of Neighborhood Size

As described in Section 3.2, the neighborhood information of tweets is incorporated into the learning framework. A manifold graph with edges connecting two tweets (or data points) wi and wj is constructed by setting the edge weight υij = 1 if wj is among the k-nearest neighbors of wi and υij = 0 otherwise. Therefore, it is important to see whether the performance of LEEV+R heavily depends on the setting of k. Figure 4 shows the performance of our proposed approach with different neighborhood sizes k.

Figure 4: The performance of LEEV+R with different neighborhood size k.

It can be observed that the performance of LEEV+R is quite stable and largely independent of the value of k.

4.5 Visualization Results

We show the visualization results produced by the different approaches on the two datasets in Figures 5 and 6, respectively. We compare LEEV and LEEV+R with the pipeline approach LEM+PE. In the figures, each point represents a tweet, and different shapes and colors represent the different events they are associated with. Each red cross represents an extracted event with coordinate ϕz.
For Dataset I, it can be observed from Figure 5(a) that the visualization result generated by LEM+PE is not informative. Tweets from different events are mixed together and events are evenly distributed across the whole visualization space. Thus, this visualization does not provide any sensible information about the relationships between tweets and events. The result generated by LEEV, without the manifold regularization term R, seems better than that from LEM+PE, as shown in Figure 5(b). However, a large number of tweets are crowded together at the center, which makes it difficult to reveal the relations between tweets and events. The best visualization result is given by LEEV+R, as shown in Figure 5(c), where different events are well separated and related events are located nearby. For example, the three events enclosed by the red circle represent “people died in terrorist attacks in Delhi, Oslo and Norway”, respectively, while the three events in the blue circle represent “riots in Ealing, Tottenham and Croydon”, respectively, and the two events in the black circle represent “American credit rating” and “House debt bill”, respectively. This shows that LEEV+R, with manifold learning incorporated, improves significantly upon LEEV without regularization and gives better visualization results. The relationships between events are directly reflected in the distances between their coordinates in the visualization space.

Similar visualization results are obtained on Dataset II. Figures 6(a) and 6(b) fail to convey the semantic relations between different events. LEEV+R in Figure 6(c) is good at separating tweets from different events. The events in the red circle are government activities of the United States. The events in the blue circle are categorized as traffic accidents; they are “Transport chaos caused by heavy snow”, “Train to Paris crushed” and “Demonstrators attacked car carrying Prince Charles”. Compared to LEM+PE and LEEV, LEEV+R gives much more informative visualization results.

To analyze the visualization results in more detail, the four representative events and their corresponding tweets in the red circle of Figure 6(c) are visualized in Figure 7. These four events are “Senate vote on repealing gay ban”, “US state governor plan to visit North Korea”, “Send letter to President Obama to stop tax cut deal” and “Congress passed the Child Nutrition Bill”. Their corresponding tweets are denoted as green ‘△’, blue ‘□’, green ‘□’ and blue ‘+’, respectively, in Figure 7. It can be observed that these four events are all about government activities of the United States, and they are located close to each other in the low-dimensional visualization space. Moreover, the tweets describing the same event are located close to each other and center around their corresponding events, while the tweets describing different events are far away from each other.

Figure 7: Four representative events and their corresponding tweets in the red circle of Figure 6(c).

5 Conclusions

In this paper, we have proposed an unsupervised Bayesian model, called the Latent Event Extraction & Visualization (LEEV) model, to extract structured representations of events from social media and simultaneously visualize them in a two-dimensional Euclidean space. The proposed approach has been evaluated on two datasets.
Experimental results show that the proposed approach outperforms the previously reported best result on Dataset I by nearly 10% in F-measure. Visualization results show that the proposed approach with manifold regularization can significantly improve the quality of event visualization. These results show that, by jointly learning event extraction and visualization, our proposed approach is able to give better results on both tasks. In future work, we will investigate scalable and parallel model learning to explore the performance of our model for large-scale real-time event extraction and visualization.

Acknowledgments

We would like to thank the anonymous reviewers for their valuable comments and suggestions. This work was funded by the National Natural Science Foundation of China (61528302), Innovate UK under grant number 101779 and the Collaborative Innovation Center of Wireless Communications Technology.

References

Pramod Anantharam, Payam Barnaghi, Krishnaprasad Thirunarayan, and Amit Sheth. 2015. Extracting city traffic events from social streams. ACM Transactions on Intelligent Systems and Technology, 6(4).

Mikhail Belkin and Partha Niyogi. 2003. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, June.

Edward Benson, Aria Haghighi, and Regina Barzilay. 2011. Event discovery in social media feeds. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pages 389–398, Stroudsburg, PA, USA. Association for Computational Linguistics.

Deng Cai, Qiaozhu Mei, Jiawei Han, and Chengxiang Zhai. 2008. Modeling hidden topics on document manifold. In Proceedings of the 17th ACM Conference on Information and Knowledge Management, CIKM '08, pages 911–920, New York, NY, USA. ACM.

Joan Capdevila, Jesús Cerquides, Jordi Nin, and Jordi Torres. 2015. Tweet-SCAN: An event discovery technique for geo-located tweets. In Artificial Intelligence Research and Development: Proceedings of the 18th International Conference of the Catalan Association for Artificial Intelligence, volume 277, pages 110–119.

Thomas Hofmann. 1999. Probabilistic latent semantic indexing. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '99, pages 50–57, New York, NY, USA. ACM.

Tomoharu Iwata, Kazumi Saito, Naonori Ueda, Sean Stromsten, Thomas L. Griffiths, and Joshua B. Tenenbaum. 2007. Parametric embedding for class visualization.
Neural Computation, 19(9):2536–56. Tomoharu Iwata, Takeshi Yamada, and Naonori Ueda. 2008. Probabilistic latent semantic visualization: Topic model for visualizing documents. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’08, pages 363–371, New York, NY, USA. ACM. 277 Tuan M. V. Le and Hady W. Lauw. 2014a. Manifold learning for jointly modeling topic and visualization. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, AAAI’14, pages 1960–1967. AAAI Press. Tuan M.V. Le and Hady W. Lauw. 2014b. Semantic visualization for spherical representation. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14, pages 1007–1016, New York, NY, USA. ACM. Ezequiel L´opez-Rubio, Jos´e Mu˜noz-P´erez, and Jos´e Antonio G´omez-Ruiz. 2002. Self-organizing dynamic graphs. Neural Processing Letters, 16(2):93–109(17). David McClosky, Mihai Surdeanu, and Christopher D. Manning. 2011. Event extraction as dependency parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT ’11, pages 1626–1635, Stroudsburg, PA, USA. Association for Computational Linguistics. Jeremy Millar, Gilbert Peterson, and Michael Mendenhall. 2009. Document clustering and visualization with latent dirichlet allocation and self-organizing maps. In Proceedings of Florida Artificial Intelligence Research Society Conference. Yusuke Nakaji and Keiji Yanai. 2012. Visualization of real-world events with geotagged tweet photos. In Proceedings of the 2012 IEEE International Conference on Multimedia and Expo Workshops, ICMEW ’12, pages 272–277, Washington, DC, USA. IEEE Computer Society. Minh-Tien Nguyen, Asanobu Kitamoto, and Tri-Thanh Nguyen. 2015. Tsum4act: A framework for retrieving and summarizing actionable tweets during a disaster for reaction. In Advances in Knowledge Discovery and Data Mining, pages 64–75. Springer. Saˇsa Petrovic, Miles Osborne, Richard McCreadie, Craig Macdonald, Iadh Ounis, and Luke Shrimpton. 2013. Can twitter replace newswire for breaking news? In Proceedings of the 7th International AAAI Conference on Weblogs and Social Media. Yu Wang, David Fink, and Eugene Agichtein. 2015. Seeft: Planned social event discovery and attribute extraction by fusing twitter and web content. In Proceedings of the Ninth International AAAI Conference on Web and Social Media, pages 483–492. Dell Zhang, Xi Chen, and Wee Sun Lee. 2005. Text classification with kernels on the multinomial manifold. In Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’05, pages 266–273, New York, NY, USA. ACM. Deyu Zhou, Liangyu Chen, and Yulan He. 2014. A simple bayesian modelling approach to event extraction from twitter. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL), pages 700–705. ACL. Deyu Zhou, Liangyu Chen, and Yulan He. 2015. An unsupervised framework of exploring events on twitter: Filtering, extraction and categorization. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI’15, pages 2468– 2474. AAAI Press. 278
2016
26
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 279–289, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Using Sentence-Level LSTM Language Models for Script Inference Karl Pichotta Department of Computer Science The University of Texas at Austin [email protected] Raymond J. Mooney Department of Computer Science The University of Texas at Austin [email protected] Abstract There is a small but growing body of research on statistical scripts, models of event sequences that allow probabilistic inference of implicit events from documents. These systems operate on structured verb-argument events produced by an NLP pipeline. We compare these systems with recent Recurrent Neural Net models that directly operate on raw tokens to predict sentences, finding the latter to be roughly comparable to the former in terms of predicting missing events in documents. 1 Introduction Statistical scripts are probabilistic models of event sequences (Chambers and Jurafsky, 2008). A learned script model is capable of processing a document and inferring events that are probable but not explicitly stated. These models operate on automatically extracted structured events (for example, verbs with entity arguments), which are derived from standard NLP tools such as dependency parsers and coreference resolution engines. Recent work has demonstrated that standard sequence models applied to such extracted event sequences, e.g. discriminative language models (Rudinger et al., 2015) and Long Short Term Memory (LSTM) recurrent neural nets (Pichotta and Mooney, 2016), are able to infer held-out events more accurately than previous approaches. These results call into question the extent to which statistical event inference systems require linguistic preprocessing and syntactic structure. In an attempt to shed light on this issue, we compare existing script models to LSTMs trained as sentencelevel language models which try to predict the sequence of words in the next sentence from a learned representation of the previous sentences using no linguistic preprocessing. Some prior statistical script learning systems are focused on knowledge induction. These systems are primarily designed to induce collections of co-occurring event types involving the same entities, and their ability to infer held-out events is not their primary intended purpose (Chambers and Jurafsky, 2008; Ferraro and Van Durme, 2016, inter alia). In the present work, we instead investigate the behavior of systems trained to directly optimize performance on the task of predicting subsequent events; in other words, we are investigating statistical models of events in discourse. Much prior research on statistical script learning has also evaluated on inferring missing events from documents. However, the exact form that this task takes depends on the adopted definition of what constitutes an event: in previous work, events are defined in different ways, with differing degrees of structure. We consider simply using raw text, which requires no explicit syntactic annotation, as our mediating representation, and evaluate how raw text models compare to models of more structured events. Kiros et al. (2015) introduced skip-thought vector models, in which an RNN is trained to encode a sentence within a document into a lowdimensional vector that supports predicting the neighboring sentences in the document. 
Though the objective function used to train networks maximizes performance on the task of predicting sentences from their neighbors, Kiros et al. (2015) do not evaluate directly on the ability of networks to predict text; they instead demonstrate that the intermediate low-dimensional vector embeddings are useful for other tasks. We directly evaluate the text predictions produced by such sentence-level RNN encoder-decoder models, and measure their utility for the task of predicting subsequent events. 279 We find that, on the task of predicting the text of held-out sentences, the systems we train to operate on the level of raw text generally outperform the systems we train to predict text mediated by automatically extracted event structures. On the other hand, if we run an NLP pipeline on the automatically generated text and extract structured events from these predictions, we achieve prediction performance roughly comparable to that of systems trained to predict events directly. The difference between word-level and event-level models on the task of event prediction is marginal, indicating that the task of predicting the next event, particularly in an encoder-decoder setup, may not necessarily need to be mediated by explicit event structures. To our knowledge, this is the first effort to evaluate sentence-level RNN language models directly on the task of predicting document text. Our results show that such models are useful for predicting missing information in text; and the fact that they require no linguistic preprocessing makes them more applicable to languages where quality parsing and co-reference tools are not available. 2 Background 2.1 Statistical Script Learning Scripts, structured models of stereotypical sequences of events, date back to AI research from the 1970s, in particular the seminal work of Schank and Abelson (1977). In this conception, scripts are modeled as temporally ordered sequences of symbolic structured events. These models are nonprobabilistic and brittle, and pose serious problems for automated learning. In recent years, there has been a growing body of research into statistical script learning systems, which enable statistical inference of implicit events from text. Chambers and Jurafsky (2008; 2009) describe a number of simple event co-occurrence based systems which infer (verb, dependency) pairs related to a particular discourse entity. For example, given the text: Andrew Wiles won the 2016 Abel prize for proving Fermat’s last theorem, such a system will ideally be able to infer novel facts like (accept, subject) or (publish, subject) for the entity Andrew Wiles, and facts like (accept, object) for the entity Abel prize. A number of other systems inferring the same types of pair events have been shown to provide superior performance in modeling events in documents (Jans et al., 2012; Rudinger et al., 2015). Pichotta and Mooney (2014) give a cooccurrence based script system that models and infers more complex multi-argument events from text. For example, in the above example, their model would ideally be able to infer a single event like accept(Wiles, prize), as opposed to the two simpler pairs from which it is composed. They provide evidence that modeling and inferring more complex multi-argument events also yields superior performance on the task of inferring simpler (verb, dependency) pair events. These events are constructed using only coreference information; that is, the learned event co-occurrence models do not directly incorporate noun information. 
More recently, Pichotta and Mooney (2016) presented an LSTM-based script inference model which models and infers multi-argument events, improving on previous systems on the task of inferring verbs with arguments. This system can incorporate both noun and coreference information about event arguments. We will use this multiargument event formulation (formalized below) and compare LSTM models using this event formulation to LSTM models using raw text. 2.2 Recurrent Neural Networks Recurrent Neural Networks (RNNs) are neural nets whose computation graphs have cycles. In particular, RNN sequence models are RNNs which map a sequence of inputs x1, . . . , xT to a sequence of outputs y1, . . . , yT via a learned latent vector whose value at timestep t is a function of its value at the previous timestep t −1. The most basic RNN sequence models, socalled “vanilla RNNs” (Elman, 1990), are described by the following equations: zt = f(Wi,zxt + Wz,zzt−1) ot = g(Wz,ozt) where xt is the vector describing the input at time t; zt is the vector giving the hidden state at time t; ot is the vector giving the predicted output at time t; f and g are element-wise nonlinear functions (typically sigmoids, hyperbolic tangent, or rectified linear units); and Wi,z, Wz,z, and Wz,o are learned matrices describing linear transformations. The recurrency in the computation graph arises from the fact that zt is a function of zt−1. The more complex Long Short-Term Memory (LSTM) RNNs (Hochreiter and Schmidhuber, 280 zt ot ft it gt zt-1 xt mt Figure 1: Long Short-Term Memory unit at timestep t. The four nonlinearity nodes (it, gt, ft, and ot) all have, as inputs, xt and zt−1. Small circles with dots are elementwise vector multiplications. 1997) have been shown to perform well on a wide variety of NLP tasks (Sutskever et al., 2014; Hermann et al., 2015; Vinyals et al., 2015, inter alia). The LSTM we use is described by: it = σ (Wx,ixt + Wz,izt−1 + bi) ft = σ (Wx,fxt + Wz,fzt−1 + bf) ot = σ (Wx,oxt + Wh,izt−1 + bo) gt = tanh (Wx,mxt + Wz,mzt−1 + bg) mt = ft ◦mt−1 + it ◦gt zt = ot ◦tanh mt. The model is depicted graphically in Figure 1. The memory vector mt is a function of both its previous value mt−1 and the input xt; the vector zt is output both to any layers above the unit (which are trained to predict the output values yt), and is additionally given as input to the LSTM unit at the next timestep t + 1. The W∗,∗matrices and b∗vectors are learned model parameters, and u ◦v signifies element-wise multiplication. 2.3 Sentence-Level RNN Language Models RNN sequence models have recently been shown to be extremely effective for word-level and character-level language models (Mikolov et al., 2011; Jozefowicz et al., 2016). At each timestep, these models take a word or character as input, update a hidden state vector, and predict the next timestep’s word or character. There is also a growing body of work on training RNN encoderdecoder models for NLP problems. These systems first encode the entire input into the network’s hidden state vector and then, in a second step, decode the entire output from this vector (Sutskever et al., 2014; Vinyals et al., 2015; Serban et al., 2016). Sentence-level RNN language models, for example the skip-thought vector system of Kiros et al. (2015), conceptually bridge these two approaches. Whereas standard language models are trained to predict the next token in the sequence of tokens, these systems are explicitly trained to predict the next sentence in the sequence of sentences. Kiros et al. 
(2015) train an encoder-decoder model to encode a sentence into a fixed-length vector and subsequently decode both the following and preceding sentence, using Gated Recurrent Units (Chung et al., 2014). In the present work, we train an LSTM model to predict a sentence’s successor, which is essentially the forward component of the skip-thought system. Kiros et al. (2015) use the skip-thought system as a means of projecting sentences into low-dimensional vector embeddings, demonstrating the utility of these embeddings on a number of other tasks; in contrast, we will use our trained sentence-level RNN language model directly on the task its objective function optimizes: predicting a sentence’s successor. 3 Methodology 3.1 Narrative Cloze Evaluation The evaluation of inference-focused statistical script systems is not straightforward. Chambers and Jurafsky (2008) introduced the Narrative Cloze evaluation, in which a single event is held out from a document and systems are judged by the ability to infer this held-out event given the remaining events. This evaluation has been used by a number of published script systems (Chambers and Jurafsky, 2009; Jans et al., 2012; Pichotta and Mooney, 2014; Rudinger et al., 2015). This automated evaluation measures systems’ ability to model and predict events as they co-occur in text. The exact definition of the Narrative Cloze evaluation depends on the formulation of events used in a script system. For example, Chambers and Jurafsky (2008), Jans et al. (2012), and Rudinger et al. (2015) evaluate inference of heldout (verb, dependency) pairs from documents; Pichotta and Mooney (2014) evaluate inference of 281 verbs with coreference information about multiple arguments; and Pichotta and Mooney (2016) evaluate inference of verbs with noun information about multiple arguments. In order to gather human judgments of inference quality, the latter also learn an encoder-decoder LSTM network for transforming verbs and noun arguments into English text to present to annotators for evaluation. We evaluate instead on the task of directly inferring sequences of words. That is, instead of defining the Narrative Cloze to be the evaluation of predictions of held-out events, we define the task to be the evaluation of predictions of held-out text; in this setup, predictions need not be mediated by noisy, automatically-extracted events. To evaluate inferred text against gold standard text, we argue that the BLEU metric (Papineni et al., 2002), commonly used to evaluate Statistical Machine Translation systems, is a natural evaluation metric. It is an n-gram-level analog to the eventlevel Narrative Cloze evaluation: whereas the Narrative Cloze evaluates a system on its ability to reconstruct events as they occur in documents, BLEU evaluates a system on how well it reconstructs the n-grams. This evaluation takes some inspiration from the evaluation of neural encoder-decoder translation models (Sutskever et al., 2014; Bahdanau et al., 2015), which use similar architectures for the task of Machine Translation. That is, the task we present can be thought of as “translating” a sentence into its successor. While we do not claim that BLEU is necessarily the optimal way of evaluating text-level inferences, but we do claim that it is a natural ngram-level analog to the Narrative Cloze task on events. 
If a model infers text, we may also evaluate it on the task of inferring events by automatically extracting structured events from its output text (in the same way as events are extracted from natural text). This allows us to compare directly to previous event-based models on the task they are optimized for, namely, predicting structured events. 3.2 Models Statistical script systems take a sequence of events from a document and infer additional events that are statistically probable. Exactly what constitutes an event varies: it may be a (verb, dependency) pair inferred as relating to a particular discourse entity (Chambers and Jurafsky, 2008; Rudinger et al., 2015), a simplex verb (Chambers and Jurafsky, 2009; Orr et al., 2014), or a verb with multiple arguments (Pichotta and Mooney, 2014). In the present work, we adopt a representation of events as verbs with multiple arguments (Balasubramanian et al., 2013; Pichotta and Mooney, 2014; Modi and Titov, 2014). Formally, we define an event to be a variadic tuple (v, s, o, p∗), where v is a verb, s is a noun standing in subject relation to v, o is a noun standing as a direct object to v, and p∗denotes an arbitrary number of (pobj, prep) pairs, with prep a preposition and pobj a noun related to the verb v via the preposition prep.1 Any argument except v may be null, indicating no noun fills that slot. For example, the text Napoleon sent the letter to Josephine would be represented by the event (sent, Napoleon, letter, (Josephine, to)). We represent arguments by their grammatical head words. We evaluate on a number of different neural models which differ in their input and output. All models are LSTM-based encoder-decoder models. These models encode a sentence (either its events or text) into a learned hidden vector state and then, subsequently, decode that vector into its successor sentence (either its events or its text). Our general system architecture is as follows. At each timestep t, the input token is represented as a learned 100-dimensional embedding vector (learned jointly with the other parameters of the model), such that predictively similar words should get similar embeddings. This embedding is fed as input to the LSTM unit (that is, it will be the vector xt in Section 2.2, the input to the LSTM). The output of the LSTM unit (called zt in Section 2.2) is then fed to a softmax layer via a learned linear transformation. During the encoding phase the network is not trained to produce any output. During the decoding phase the output is a one-hot representation of the subsequent timestep’s input token (that is, with a V -word vocabulary, the output will be a V -dimensional vector with one 1 and V −1 zeros). In this way, the network is trained to consume an entire input sequence and, as a second step, iteratively output the subsequent timestep’s 1This is essentially the event representation of Pichotta and Mooney (2016), but whereas they limited events to having a single prepositional phrase, we allow an arbitrary number, and we do not lemmatize words. 282 Hello ∅ </S> ∅ <S> ∅ LSTM <GEN> <S> <S> Goodbye Goodbye </S> </S> ∅ Input Hidden (zt) Output (yt) Encoding Decoding LSTM LSTM LSTM LSTM LSTM LSTM Embedding (xt) Figure 2: Encoder-Decoder setup predicting the text “Goodbye” from “Hello” input, which allows the prediction of full output sequences. 
This setup is pictured diagrammatically in Figure 2, which gives an example of input and output sequence for a token-level encoderdecoder model, encoding the sentence “Hello .” and decoding the successor sentence “Goodbye .” Note that we add beginning-of-sequence and end-of-sequence pseudo-tokens to sentences. This formulation allows a system to be trained which can encode a sentence and then infer a successor sentence by iteratively outputting next-input predictions until the </S> end-of-sentence pseudotoken is predicted. We use different LSTMs for encoding and decoding, as the dynamics of the two stages need not be identical. We notate the different systems as follows. Let s1 be the input sentence and s2 its successor sentence. Let t1 denote the sequence of raw tokens in s1, and t2 the tokens of s2. Further, let e1 and e2 be the sequence of structured events occurring in s1 and s2, respectively (described in more detail in Section 4.1), and let e2[0] denote the first event of e2. The different systems we compare are named systematically as follows: • The system t1  t2 is trained to encode a sentence’s tokens and decode its successor’s tokens. • The system e1  e2 is trained to encode a sentence’s events and decode its successor’s events. • The system e1  e2  t2 is trained to encode a sentence’s events, decode its successor’s events, and then encode the latter and subsequently decode the successor’s text. We will not explicitly enumerate all systems, but other systems are defined analogously, with the schema X  Y describing a system which is trained to encode X and subsequently decode Y , and X  Y  Z indicating a system which is trained to encode X, decode Y , and subsequently encode Y and decode Z. Note that in a system X  Y  Z, only X is provided as input. We also present results for systems of the form X a Y , which signifies that the system is trained to decode Y from X with the addition of an attention mechanism. We use the attention mechanism of Vinyals et al. (2015). In short, these models have additional parameters which can learn soft alignments between positions of encoded inputs and positions in decoded outputs. Attention mechanisms have recently been shown to be quite empirically valuable in many complex sequence prediction tasks. For more details on the model, see Vinyals et al. (2015). Figure 3 gives a diagrammatic representation of the different system setups. Text systems infer successor text and, optionally, parse that text and extract events from it; event sequences infer successor events and, optionally, expand inferred events into text. Note that the system t1  t2, in which both the encoding and decoding steps operate on raw text, is essentially a one-directional version of the skip-thought system of Kiros et al. (2015).2 Further, the system e1  e2  t2, which is trained to take a sentence’s event sequence as input, predict its successor’s events, and then predict its successor’s words, is comparable to the event inference system of Pichotta and Mooney (2016). They use an LSTM sequence model of events in sequence 2The system of Kiros et al. (2015), in addition to being trained to predict the next sentence, also contains a backwarddirectional RNN trained to predict a sentence’s predecessor; we condition only on previous text. Kiros et al. (2015) also use Gated Recurrent Units instead of LSTM. 
283 t1 encode/decode “The dog chased the cat.” “The cat ran away.” ran_away(cat) t2 e2 parse e1 encode/decode e2 t2 encode/decode chased(dog, cat) ran_away(cat) “The cat ran away.” Text representation Event representation Figure 3: Different system setups for modeling the two-sentence sequence “The dog chased the cat.” followed by “The cat ran away.” The gray components inside dotted boxes are only present in some systems. for event inference, and optionally transform inferred events to text using another LSTM; we, on the other hand, use an encoder/decoder setup to infer text directly. 4 Evaluation 4.1 Experimental Details We train a number of LSTM encoder-decoder networks which vary in their input and output. Models are trained on English Language Wikipedia, with 1% of the documents held out as a validation set. Our test set consists of 10,000 unseen sentences (from articles in neither the training nor validation set). We train models with batch stochastic gradient descent with momentum, minimizing the cross-entropy error of output predictions. All models are implemented in TensorFlow (Abadi et al., 2015). We use a vocabulary of the 50,000 most frequent tokens, replacing all other tokens with an out-of-vocabulary pseudo-token. Learned word embeddings are 100-dimensional, and the latent LSTM vector is 500-dimensional. To extract events from text, we use the Stanford Dependency Parser (De Marneffe et al., 2006; Socher et al., 2013). We use the Moses toolkit (Koehn et al., 2007) to calculate BLEU.3 We evaluate the task of predicting held-out text with three metrics. The first metric is BLEU, which is standard BLEU (the geometric mean of modified 1-, 2-, 3-, and 4-gram precision against a gold standard, multiplied by a brevity penalty which penalizes short candidates). The second metric we present, BLEU-BP, is BLEU without the brevity 3Via the script multi-bleu.pl. penalty: in the task of predicting successor sentences, depending on predictions’ end use, ontopic brevity is not necessarily undesirable. Evaluations are over top system inferences (that is, decoding is done by taking the argmax). Finally, we also present values for unigram precision (1G P), one of the components of BLEU. We also evaluate on the task of predicting heldout verb-argument events, either directly or via inferred text. We use two evaluation metrics for this task. First, the Accuracy metric measures the percentage of a system’s most confident guesses that are totally correct. That is, for each held-out event, a system makes its single most confident guess for that event, and we calculate the total percentage of such guesses which are totally correct. Some authors (e.g. Jans et al. (2012), Pichotta and Mooney (2016)) present results on the “Recall at k” metric, judging gold-standard recall against a list of top k event inferences; this metric is equivalent to “Recall at 1.” This is quite a stringent metric, as an inference is only counted correct if the verb and all arguments are correct. To relax this requirement, we also present results on what we call the Partial Credit metric, which is the percentage of held-out event components identical to the respective components in a system’s top inference.4 4.2 Experimental Evaluation Table 1 gives the results of evaluating predicted successor sentence text against the gold standard using BLEU. The baseline system t1  t1 sim4This metric was used in Pichotta and Mooney (2014), but there it was called Accuracy. In the present work, we use “accuracy” only to mean Recall at 1. 
284 System BLEU BLEU-BP 1G P t1  t1 1.88 1.88 22.6 e1  e2  t2 0.34 0.66 19.9 e1 a e2  t2 0.30 0.39 15.8 t1  t2 5.20 7.84 30.9 t1 a t2 4.68 8.09 32.2 Table 1: Successor text predictions evaluated with BLEU. ply reproduces the input sentence as its own successor.5 Below this are systems which make predictions from event information, with systems which make predictions from raw text underneath. Transformations written X a Y are, recall, encoder-decoder LSTMs with attention. Note, first, that the text-level models outperform other models on BLEU. In particular, the two-step model e1  e2  t2 (and comparable model with attention) which first predicts successor events and then, as a separate step, expands these events into text, performs quite poorly. This is perhaps due to the fact that the translation from text to events is lossy, so reconstructing raw sentence tokens is not straightforward. The BLEU-BP scores, which are BLEU without the brevity penalty, are noticeably higher in the text-level models than the raw BLEU scores. This is in part because these models seem to produce shorter sentences, as illustrated below in section 4.4. The attention mechanism does not obviously benefit either text or event level prediction encoder-decoder models. This could be because there is not an obvious alignment structure between contiguous spans of raw text (or events) in natural documents. These results provide evidence that, if the Narrative Cloze task is defined to evaluate prediction of held-out text from a document, then sentencelevel RNN language models provide superior performance to RNN models operating at the event level. In other words, linguistic pre-processing does not obviously benefit encoder-decoder models trained to predict succeeding text. Table 2 gives results on the task of predicting the next verb with its nominal arguments; that is, whereas Table 1 gave results on a text analog to the Narrative Cloze evaluation (BLEU), Table 2 gives 5“t1  t1” is minor abuse of notation, as the system is not an encoder/decoder but a simple identity function. System Accuracy Partial Credit Most common 0.2 26.5 e1  e2[0] 2.3 26.7 e1 a e2[0] 2.2 25.6 t1  t2  e2[0] 2.0 30.3 t1 a t2  e2[0] 2.0 27.7 Table 2: Next event prediction accuracy (numbers are percentages: maximum value is 100). results on the verb-with-arguments prediction version. In the t1  t2  e2[0] system (and the comparable system with attention), events are extracted from automatically generated text by parsing output text and applying the same event extractor to this parse used to extract events from raw text.6 The row labeled Most common in Table 2 gives performance for the baseline system which always guesses the most common event in the training set. The LSTM models trained to directly predict events are roughly comparable to systems which operate on raw text, performing slightly worse on accuracy and slightly better when taking partial credit into account. As with the previous comparisons with BLEU, the attention mechanism does not provide an obvious improvement when decoding inferences, perhaps, again, because the event inference problem lacks a clear alignment structure. These systems infer their most probable guesses of e2[0], the first event in the succeeding sentence. In order for a system prediction to be counted as correct, it must have the correct strings for grammatical head words of all components of the correct event. 
Note also that we judge only against a system’s single most confident prediction (as opposed to some prior work (Jans et al., 2012; Pichotta and Mooney, 2014) which takes the top k predictions—the numbers presented here are therefore noticeably lower). We do this mainly for computational reasons: namely, a beam search over a full sentence’s text would be quite computationally expensive. 4.3 Adding Additional Context The results given above are for systems which encode information about one sentence and decode 6This is also a minor abuse of notation, as the second transformation uses a statistical parser rather than an encoder/decoder. 285 information about its successor. This is within the spirit of the skip-gram system of Kiros et al. (2015), but we may wish to condition on more of the document. To investigate this, we perform an experiment varying the number of previous sentences input during the encoding step of t1  t2 text-level models without attention. We train three different models, which take either one, three, or five sentences as input, respectively, and are trained to output the successor sentence. Num Prev Sents BLEU BLEU-BP 1G P 1 5.80 8.59 29.4 3 5.82 9.35 31.2 5 6.83 6.83 21.4 Table 3: Varying the amount of context in textlevel models. “Num Prev Sents” is the number of previous sentences supplied during encoding. Table 3 gives the results of running these models on 10,000 sentences from the validation set. As can be seen, in the training setup we investigate, more additional context sentences have a mixed effect, depending on the metric. This is perhaps due in part to the fact that we kept hyperparameters fixed between experiments, and a different hyperparameter regime would benefit predictions from longer input sequences. More investigation could prove fruitful. 4.4 Qualitative Analysis Figure 4 gives some example automatic nextsentence text predictions, along with the input sentence and the gold-standard next sentence. Note that gold-standard successor sentences frequently introduce new details not obviously inferrable from previous text. Top system predictions, on the other hand, are frequently fairly short. This is likely due part to the fact that the cross-entropy loss does not directly penalize short sentences and part to the fact that many details in gold-standard successor text are inherently difficult to predict. 4.5 Discussion The general low magnitude of the BLEU scores presented in Table 1, especially in comparison to the scores typically reported in Machine Translation results, indicates the difficulty of the task. In open-domain text, a sentence is typically not straightforwardly predictable from preceding text; if it were, it would likely not be stated. On the task of verb-argument prediction in Table 2, the difference between t1  t2 and e1  e2[0] is fairly marginal. This raises the general question of how much explicit syntactic analysis is required for the task of event inference, particularly in the encoder/decoder setup. These results provide evidence that a sentence-level RNN language model which operates on raw tokens can predict what comes next in a document as well or nearly as well as an event-mediated script model. 5 Future Work There are a number of further extensions to this work. First, in this work (and, more generally, Neural Machine Translation research), though generated text is evaluated using BLEU, systems are optimized for per-token cross-entropy error, which is a different objective (Luong et al. 
(2016) give an example of a system which improves cross-entropy error but reduces BLEU score in the Neural Machine Translation context). Finding differentiable objective functions that more directly target more complex evaluation metrics like BLEU is an interesting future research direction. Relatedly, though we argue that BLEU is a natural token-sequence-level analog to the verbargument formulation of the Narrative Cloze task, it is not obviously the best metric for evaluating inferences of text, and comparing these automated metrics with human judgments is an important direction of future work. Pichotta and Mooney (2016) present results on crowdsourced human evaluation of script inferences that could be repeated for our RNN models. Though we focus here on forward-direction models predicting successor sentences, bidirectional encoder-decoder models, which predict sentences from both previous and subsequent text, are another interesting future research direction. 6 Related Work The use of scripts in AI dates back to the 1970s (Minsky, 1974; Schank and Abelson, 1977); in this conception, scripts were composed of complex events with no probabilistic semantics, which were difficult to learn automatically. In recent years, a growing body of research has investigated learning probabilistic co-occurrence models with simpler events. Chambers and Jurafsky (2008) propose a model of co-occurrence of (verb, dependency) pairs, which can be used to infer such 286 Input: As of October 1 , 2008 , ⟨OOV⟩changed its company name to Panasonic Corporation. Gold: ⟨OOV⟩products that were branded “National” in Japan are currently marketed under the “Panasonic” brand. Predicted: The company’s name is now ⟨OOV⟩. Input: White died two days after Curly Bill shot him. Gold: Before dying, White testified that he thought the pistol had accidentally discharged and that he did not believe that Curly Bill shot him on purpose. Predicted: He was buried at ⟨OOV⟩Cemetery. Input: The foundation stone was laid in 1867. Gold: The members of the predominantly Irish working class parish managed to save £700 towards construction, a large sum at the time. Predicted: The ⟨OOV⟩was founded in the early 20th century. Input: Soldiers arrive to tell him that ⟨OOV⟩has been seen in camp and they call for his capture and death. Gold: ⟨OOV⟩agrees . Predicted: ⟨OOV⟩is killed by the ⟨OOV⟩. Figure 4: Sample next-sentence text predictions. ⟨OOV⟩is the out-of-vocabulary pseudo-token, which frequently replaces proper names. pairs from documents; Jans et al. (2012) give a superior model in the same general framework. Chambers and Jurafsky (2009) give a method of generalizing from single sequences of pair events to collections of such sequences. Rudinger et al. (2015) apply a discriminative language model to the (verb, dependency) sequence modeling task, raising the question of to what extent event inference can be performed with standard language models applied to event sequences. Pichotta and Mooney (2014) describe a method of learning a co-occurrence based model of verbs with multiple coreference-based entity arguments. There is a body of related work focused on learning models of co-occurring events to automatically induce templates of complex events comprising multiple verbs and arguments, aimed ultimately at maximizing coherency of templates (Chambers, 2013; Cheung et al., 2013; Balasubramanian et al., 2013). 
Ferraro and Van Durme (2016) give a model integrating various levels of event information of increasing abstraction, evaluating both on coherence of induced templates and log-likelihood of predictions of held-out events. McIntyre and Lapata (2010) describe a system that learns a model of co-occurring events and uses this model to automatically generate stories via a Genetic Algorithm. There have been a number of recent published neural models for various event- and discourserelated tasks. Pichotta and Mooney (2016) show that an LSTM event sequence model outperforms previous co-occurrence methods for predicting verbs with arguments. Granroth-Wilding and Clark (2016) describe a feedforward neural network which composes verbs and arguments into low-dimensional vectors, evaluating on a multiple-choice version of the Narrative Cloze task. Modi and Titov (2014) describe a feedforward network which is trained to predict event orderings. Kiros et al. (2015) give a method of embedding sentences in low-dimensional space such that embeddings are predictive of neighboring sentences. Li et al. (2014) and Ji and Eisenstein (2015), use RNNs for discourse parsing; Liu et al. (2016) use a Convolutional Neural Network for implicit discourse relation classification. 7 Conclusion We have given what we believe to be the first systematic evaluation of sentence-level RNN language models on the task of predicting held-out document text. We have found that models operating on raw text perform roughly comparably to identical models operating on predicate-argument event structures when predicting the latter, and that text models provide superior predictions of raw text. This provides evidence that, for the task of held-out event prediction, encoder/decoder models mediated by automatically extracted events may not be learning appreciably more structure than systems trained on raw tokens alone. Acknowledgments Thanks to Stephen Roller, Amelia Harrison, and the UT NLP group for their help and feedback. Thanks also to the anonymous reviewers for their very helpful suggestions. This research was supported in part by the DARPA DEFT program under AFRL grant FA8750-13-2-0026. 287 References Mart´ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Man´e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi´egas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org. Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 2015 International Conference on Learning Representations (ICLR 2015). Niranjan Balasubramanian, Stephen Soderland, Mausam, and Oren Etzioni. 2013. Generating coherent event schemas at scale. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP-2013). Nathanael Chambers and Daniel Jurafsky. 2008. Unsupervised learning of narrative event chains. 
In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL-08), pages 789–797. Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised learning of narrative schemas and their participants. In Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics (ACL-09), pages 602–610. Nathanael Chambers. 2013. Event schema induction with a probabilistic entity-driven model. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP2013). Jackie Chi Kit Cheung, Hoifung Poon, and Lucy Vanderwende. 2013. Probabilistic frame induction. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-13). Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS Deep Learning Workshop. Marie-Catherine De Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the 5th International Conference on Language Resources & Evaluation (LREC-2006), volume 6, pages 449–454. Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science, 14:179–211. Francis Ferraro and Benjamin Van Durme. 2016. A unified Bayesian model of scripts, frames and language. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI-16). Mark Granroth-Wilding and Stephen Clark. 2016. What happens next? Event prediction using a compositional neural network model. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI-16). Karl Moritz Hermann, Tom´aˇs Koˇcisk`y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Proceedings of the 29th Annual Conference on Neural Information Processing Systems (NIPS-15). Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Bram Jans, Steven Bethard, Ivan Vuli´c, and Marie Francine Moens. 2012. Skip n-grams and ranking functions for predicting script events. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL-12), pages 336–344. Yangfeng Ji and Jacob Eisenstein. 2015. One vector is not enough: Entity-augmented distributional semantics for discourse relations. Transactions of the Association for Computational Linguistics (TACL). Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410. Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2015. Skip-thought vectors. In Proceedings of the 29th Annual Conference on Neural Information Processing Systems (NIPS-15). Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL-07) Companion Volume: Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Jiwei Li, Rumeng Li, and Eduard Hovy. 2014. Recursive deep models for discourse parsing. 
In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2061–2069, October. 288 Yang Liu, Sujian Li, Xiaodong Zhang, and Zhifang Sui. 2016. Implicit discourse relation classification via multi-task neural networks. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI-16). Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task sequence to sequence learning. In Proceedings of the 4th International Conference on Learning Representations (ICLR-16). Neil McIntyre and Mirella Lapata. 2010. Plot induction and evolutionary search for story generation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL-10), pages 1562–1572. Tomas Mikolov, Anoop Deoras, Stefan Kombrink, Lukas Burget, and Jan Cernock`y. 2011. Empirical evaluation and combination of advanced language modeling techniques. In Proceedings of the 12th Annual Conference of the International Speech Communication Association 2011 (INTERSPEECH 2011), pages 605–608. Marvin Minsky. 1974. A framework for representing knowledge. Technical report, MIT-AI Laboratory. Ashutosh Modi and Ivan Titov. 2014. Inducing neural models of script knowledge. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning (CoNLL-2014), Baltimore, MD, USA. J Walker Orr, Prasad Tadepalli, Janardhan Rao Doppa, Xiaoli Fern, and Thomas G Dietterich. 2014. Learning scripts as Hidden Markov Models. In Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI-14). Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL-02), pages 311–318. Karl Pichotta and Raymond J. Mooney. 2014. Statistical script learning with multi-argument events. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2014), pages 220–229. Karl Pichotta and Raymond J. Mooney. 2016. Learning statistical scripts with LSTM recurrent neural networks. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI-16). Rachel Rudinger, Pushpendre Rastogi, Francis Ferraro, and Benjamin Van Durme. 2015. Script induction as language modeling. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP-15). Roger C. Schank and Robert P. Abelson. 1977. Scripts, Plans, Goals and Understanding: An Inquiry into Human Knowledge Structures. Lawrence Erlbaum and Associates. Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI-16). Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. 2013. Parsing with compositional vector grammars. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL-13). Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 28th Annual Conference on Neural Information Processing Systems (NIPS-14), pages 3104–3112. Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. 
In Proceedings of the 29th Annual Conference on Neural Information Processing Systems (NIPS-15), pages 2755–2763. A Supplemental Material Our Wikipedia dump from which the training, development, and test sets are constructed is from Jan 2, 2014. We parse text using version 3.3.1 of the Stanford CoreNLP system. We use a vocab consisting of the 50,000 most common tokens, replacing all others with an Out-of-vocabulary pseudotoken. We train using batch stochastic gradient descent with momentum with a batch size of 10 sequences, using an initial learning rate of 0.1, damping the learning rate by 0.99 any time the previous hundred updates’ average test error is greater than any of the average losses in the previous ten groups of hundred updates. Our momentum parameter is 0.95. Our embedding vectors are 100-dimensional, and our LSTM hidden state is 500-dimensional. We train all models for 300k batch updates (with the exception of the models compared in §4.3, all of which we train for 150k batch updates, as training is appreciably slower with longer input sequences). Training takes approximately 36 hours on an NVIDIA Titan Black GPU. 289
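The supplemental details above fully specify the training configuration. As a concrete illustration, the following is a minimal PyTorch sketch of an encoder/decoder with those hyperparameters (100-dimensional embeddings, 500-dimensional LSTM state, SGD with momentum 0.95 and an initial learning rate of 0.1, batch size 10). The original system was implemented in TensorFlow; the module layout, the token ids, and the omitted learning-rate damping schedule here are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch (not the authors' implementation): a sequence-to-sequence LSTM
# with the hyperparameters reported in the supplemental material.
import torch
import torch.nn as nn

VOCAB_SIZE = 50_000 + 2   # 50k most frequent tokens plus <OOV> and <EOS> (assumed ids)
EMBED_DIM = 100           # "embedding vectors are 100-dimensional"
HIDDEN_DIM = 500          # "LSTM hidden state is 500-dimensional"

class Seq2SeqLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.encoder = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.decoder = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.out = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)

    def forward(self, src, tgt_in):
        # Encode the input sentence(s); the final encoder state conditions the decoder.
        _, state = self.encoder(self.embed(src))
        dec_out, _ = self.decoder(self.embed(tgt_in), state)
        return self.out(dec_out)  # per-token logits, trained with cross-entropy

model = Seq2SeqLM()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.95)

def train_step(src, tgt_in, tgt_out):
    # One batch update; the paper's damping of the learning rate by 0.99 based on
    # recent validation error would wrap around this loop and is omitted here.
    optimizer.zero_grad()
    logits = model(src, tgt_in)
    loss = criterion(logits.reshape(-1, VOCAB_SIZE), tgt_out.reshape(-1))
    loss.backward()
    optimizer.step()
    return loss.item()
```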
2016
27
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 290–300, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Two Discourse Driven Language Models for Semantics Haoruo Peng and Dan Roth University of Illinois, Urbana-Champaign Urbana, IL, 61801 {hpeng7,danr}@illinois.edu Abstract Natural language understanding often requires deep semantic knowledge. Expanding on previous proposals, we suggest that some important aspects of semantic knowledge can be modeled as a language model if done at an appropriate level of abstraction. We develop two distinct models that capture semantic frame chains and discourse information while abstracting over the specific mentions of predicates and entities. For each model, we investigate four implementations: a “standard” N-gram language model and three discriminatively trained “neural” language models that generate embeddings for semantic frames. The quality of the semantic language models (SemLM) is evaluated both intrinsically, using perplexity and a narrative cloze test and extrinsically – we show that our SemLM helps improve performance on semantic natural language processing tasks such as co-reference resolution and discourse parsing. 1 Introduction Natural language understanding often necessitates deep semantic knowledge. This knowledge needs to be captured at multiple levels, from words to phrases, to sentences, to larger units of discourse. At each level, capturing meaning frequently requires context sensitive abstraction and disambiguation, as shown in the following example (Winograd, 1972): Ex.1 [Kevin] was robbed by [Robert]. [He] was arrested by the police. Ex.2 [Kevin] was robbed by [Robert]. [He] was rescued by the police. In both cases, one needs to resolve the pronoun “he” to either “Robert” or “Kevin”. To make the correct decisions, one needs to know that the subject of “rob” is more likely than the object of “rob” to be the object of “arrest” while the object of “rob” is more likely to be the object of “rescue”. Thus, beyond understanding individual predicates (e.g., at the semantic role labeling level), there is a need to place them and their arguments in a global context. However, just modeling semantic frames is not sufficient; consider a variation of Ex.1: Ex.3 Kevin was robbed by Robert, but the police mistakenly arrested him. In this case, “him” should refer to “Kevin” as the discourse marker “but” reverses the meaning, illustrating that it is necessary to take discourse markers into account when modeling semantics. In this paper we propose that these aspects of semantic knowledge can be modeled as a Semantic Language Model (SemLM). Just like the “standard” syntactic language models (LM), we define a basic vocabulary, a finite representation language, and a prediction task, which allows us to model the distribution over the occurrence of elements in the vocabulary as a function of their (well-defined) context. In difference from syntactic LMs, we represent natural language at a higher level of semantic abstraction, thus facilitating modeling deep semantic knowledge. We propose two distinct discourse driven language models to capture semantics. In our first semantic language model, the Frame-Chain SemLM, we model all semantic frames and discourse markers in the text. Each document is viewed as a single chain of semantic frames and discourse markers. 
Moreover, while the vocabulary of discourse markers is rather small, the number of different surface form semantic frames that could appear in the text is very large. To achieve a better level of abstraction, we disambiguate semantic frames and map them to their PropBank/FrameNet represen290 tation. Thus, in Ex.3, the resulting frame chain is “rob.01 — but — arrest.01” (“01” indicates the predicate sense). Our second semantic language model is called Entity-Centered SemLM. Here, we model a sequence of semantic frames and discourse markers involved in a specific co-reference chain. For each co-reference chain in a document, we first extract semantic frames corresponding to each co-referent mention, disambiguate them as before, and then determine the discourse markers between these frames. Thus, each unique frame contains both the disambiguated predicate and the argument label of the mention. In Ex.3, the resulting sequence is “rob.01#obj — but — arrest.01#obj” (here “obj” indicates the argument label for “Kevin” and “him” respectively). While these two models capture somewhat different semantic knowledge, we argue later in the paper that both models can be induced at high quality, and that they are suitable for different NLP tasks. For both models of SemLM, we study four language model implementations: N-gram, skipgram (Mikolov et al., 2013b), continuous bagof-words (Mikolov et al., 2013a) and log-bilinear language model (Mnih and Hinton, 2007). Each model defines its own prediction task. In total, we produce eight different SemLMs. Except for Ngram model, others yield embeddings for semantic frames as they are neural language models. In our empirical study, we evaluate both the quality of all SemLMs and their application to coreference resolution and shallow discourse parsing tasks. Following the traditional evaluation standard of language models, we first use perplexity as our metric. We also follow the script learning literature (Chambers and Jurafsky, 2008b; Chambers and Jurafsky, 2009; Rudinger et al., 2015) and evaluate on the narrative cloze test, i.e. randomly removing a token from a sequence and test the system’s ability to recover it. We conduct both evaluations on two test sets: a hold-out dataset from the New York Times Corpus and gold sequence data (for frame-chain SemLMs, we use PropBank (Kingsbury and Palmer, 2002); for entitycentered SemLMs, we use Ontonotes (Hovy et al., 2006) ). By comparing the results on these test sets, we show that we do not incur noticeable degradation when building SemLMs using preprocessing tools. Moreover, we show that SemLMs improves the performance of co-reference resolution, as well as that of predicting the sense of discourse connectives for both explicit and implicit ones. The main contributions of our work can be summarized as follows: 1) The design of two novel discourse driven Semantic Language models, building on text abstraction and neural embeddings; 2) The implementation of high quality SemLMs that are shown to improve state-of-theart NLP systems. 2 Related Work Our work is related to script learning. Early works (Schank and Abelson, 1977; Mooney and DeJong, 1985) tried to construct knowledge bases from documents to learn scripts. Recent work focused on utilizing statistical models to extract high-quality scripts from large amounts of data (Chambers and Jurafsky, 2008a; Bejan, 2008; Jans et al., 2012; Pichotta and Mooney, 2014; Granroth-Wilding et al., 2015; Pichotta and Mooney, 2016). 
Other works aimed at learning a collection of structured events (Chambers, 2013; Cheung et al., 2013; Cheung et al., 2013; Balasubramanian et al., 2013; Bamman and Smith, 2014; Nguyen et al., 2015), and several works have employed neural embeddings (Modi and Titov, 2014b; Modi and Titov, 2014a; Frermann et al., 2014; Titov and Khoddam, 2015). In our work, the semantic sequences in the entity-centered SemLMs are similar to narrative schemas (Chambers and Jurafsky, 2009). However, we differ from them in the following aspects: 1) script learning does not generate a probabilistic model on semantic frames1; 2) script learning models semantic frame sequences incompletely as they do not consider discourse information; 3) works in script learning rarely show applications to real NLP tasks. Some prior works have used scripts-related ideas to help improve NLP tasks (Irwin et al., 2011; Rahman and Ng, 2011; Peng et al., 2015b). However, since they use explicit script schemas either as features or constraints, these works suffer from data sparsity problems. In our work, the SemLM abstract vocabulary ensures a good coverage of frame semantics. 1Some works may utilize a certain probabilistic framework, but they mainly focus on generating high-quality frames by filtering. 291 Table 1: Comparison of vocabularies between frame-chain (FC) and entity-centered (EC) SemLMs. “F-Sen” stands for frames with predicate sense information while “F-Arg” stands for frames with argument role label information; “Conn” means discourse marker and “Per” means period. “Seq/Doc” represents the number of sequence per document. F-Sen F-Arg Conn Per Seq/Doc FC YES NO YES YES Single EC YES YES YES NO Multiple 3 Two Models for SemLM In this section, we describe how we capture sequential semantic information consisted of semantic frames and discourse markers as semantic units (i.e. the vocabulary). 3.1 Semantic Frames and Discourse Markers Semantic Frames A semantic frame is composed of a predicate and its corresponding argument participants. Here we require the predicate to be disambiguated to a specific sense, and we need a certain level of abstraction of arguments so that we can assign abstract labels. The design of PropBank frames (Kingsbury and Palmer, 2002) and FrameNet frames (Baker et al., 1998) perfectly fits our needs. They both have a limited set of frames (in the scale of thousands) and each frame can be uniquely represented by its predicate sense. These frames provide a good level of generalization as each frame can be instantiated into various surface forms in natural texts. We use these frames as part of our vocabulary for SemLMs. Formally, we use the notation f to represent a frame. Also, we denote fa ≜f#Arg when referring to an argument role label (Arg) inside a frame (f). Discourse Markers We use discourse markers (connectives) to model discourse relationships between frames. There is only a limited number of unique discourse markers, such as and, but, however, etc. We get the full list from the Penn Discourse Treebank (Prasad et al., 2008) and include them as part of our vocabulary for SemLMs. Formally, we use dis to denote the discourse marker. Note that discourse relationships can exist without an explicit discourse marker, which is also a challenge for discourse parsing. Since we cannot reliably identify implicit discourse relationships, we only consider explicit ones here. 
More importantly, discourse markers are associated with arguments (Wellner and Pustejovsky, 2007) in text (usually two sentences/clauses, sometimes one). We only add a discourse marker in the semantic sequence when its corresponding arguments contain semantic frames which belong to the same semantic sequence. We call them frame-related discourse markers. Details on generating semantic frames and discourse markers to form semantic sequences are discussed in Sec. 5. 3.2 Frame-Chain SemLM For frame-chain SemLM, we model all semantic frames and discourse markers in a document. We form the semantic sequence by first including all semantic frames in the order they appear in the text: [f1, f2, f3, . . .]. Then we add framerelated discourse markers into the sequence by placing them in their order of appearance. Thus we get a sequence like [f1, dis1, f2, f3, dis2, . . .]. Note that discourse markers do not necessarily exist between all semantic frames. Additionally, we treat the period symbol as a special discourse marker, denoted by “o”. As some sentences contain more than one semantic frame (situations like clauses), we get the final semantic sequence like this: [f1, dis1, f2, o, f3, o, dis2, . . . , o] 3.3 Entity-Centered SemLM We generate semantic sequences according to co-reference chains for entity-centered SemLM. From co-reference resolution, we can get a sequence like [m1, m2, m3, . . .], where mentions appear in the order they occur in the text. Each mention can be matched to an argument inside a semantic frame. Thus, we replace each mention with its argument label inside a semantic frame, and get [fa1, fa2, fa3, . . .]. We then add discourse markers exactly in they way we do for frame-chain SemLM, and get the following sequence: [fa1, dis1, fa2, fa3, dis2, . . .] The comparison of vocabularies between frame-chain and entity-centered SemLMs is summarized in Table 1. 4 Implementations of SemLM In this work, we experiment with four language model implementations: N-gram (NG), SkipGram (SG), Continuous Bag-of-Words (CBOW) and Log-bilinear (LB) language model. For ease 292 of explanation, we assume that a semantic unit sequence is s = [w1, w2, w3, . . . , wk]. 4.1 N-gram Model For an n-gram model, we predict each token based on its n−1 previous tokens, i.e. we directly model the following conditional probability (in practice, we choose n = 3, Tri-gram (TRI) ): p(wt+2|wt, wt+1). Then, the probability of the sequence is p(s) = p(w1)p(w2|w1) k−2 Y t=1 p(wt+2|wt, wt+1). To compute p(w2|w1) and p(w1), we need to back off from Tri-gram to Bi-gram and Uni-gram. 4.2 Skip-Gram Model The SG model was proposed in Mikolov et al. (2013b). It uses a token to predict its context, i.e. we model the following conditional probability: p(c ∈c(wt)|wt, θ). Here, c(wt) is the context for wt and θ denotes the learned parameters which include neural network states and embeddings. Then the probability of the sequence is computed as k Y t=1 Y c∈c(wt) p(c|wt, θ). 4.3 Continuous Bag-of-Words Model In contrast to skip-gram, CBOW (Mikolov et al., 2013a) uses context to predict each token, i.e. we model the following conditional probability: p(wt|c(wt), θ). In this case, the probability of the sequence is k Y t=1 p(wt|c(wt), θ). 4.4 Log-bilinear Model LB was introduced in Mnih and Hinton (2007). Similar to CBOW, it also uses context to predict each token. However, LB associates a token with three components instead of just one vector: a target vector v(w), a context vector v’(w) and a bias b(w). 
So, the conditional probability becomes: p(wt|c(wt)) = exp(v(wt)⊺u(c(wt)) + b(wt)) P w∈V exp(v(w)⊺u(c(wt)) + b(w)). Here, V denotes the vocabulary and we define u(c(wt)) = P ci∈c(wt) qi ⊙v′(ci). Note that ⊙ represents element-wise multiplication and qi is a vector that depends only on the position of a token in the context, which is a also a model parameter. So, the overall sequence probability is k Y t=1 p(wt|c(wt)). 5 Building SemLMs from Scratch In this section, we explain how we build SemLMs from un-annotated plain text. 5.1 Dataset and Preprocessing Dataset We use the New York Times Corpus2 (from year 1987 to 2007) for training. It contains a bit more than 1.8M documents in total. Preprocessing We pre-process all documents with semantic role labeling (Punyakanok et al., 2004) and part-of-speech tagger (Roth and Zelenko, 1998). We also implement the explicit discourse connective identification module in shallow discourse parsing (Song et al., 2015). Additionally, we utilize within document entity coreference (Peng et al., 2015a) to produce coreference chains. To obtain all annotations, we employ the Illinois NLP tools3. 5.2 Semantic Unit Generation FrameNet Mapping We first directly derive semantic frames from semantic role labeling annotations. As the Illinois SRL package is built upon PropBank frames, we do a mapping to FrameNet frames via VerbNet senses (Schuler, 2005), thus achieving a higher level of abstraction. The mapping file4 defines deterministic mappings. However, the mapping is not complete and there are remaining PropBank frames. Thus, the generated vocabulary for SemLMs contains both PropBank and FrameNet frames. For example, “place” and 2https://catalog.ldc.upenn.edu/LDC2008T19 3http://cogcomp.cs.illinois.edu/page/software/ 4http://verbs.colorado.edu/verb-index/fn/vn-fn.xml 293 “put” with the VerbNet sense id “9.1-2” are converted to the same FrameNet frame “Placing”. Augmenting to Verb Phrases We apply three heuristic modifications to augment semantic frames defined in Sec. 3.1: 1) if a preposition immediately follows a predicate, we append the preposition to the predicate e.g. “take over”; 2) if we encounter the semantic role label AM-PRD which indicates a secondary predicate, we also append this secondary predicate to the main predicate e.g. “be happy”; 3) if we see the semantic role label AM-NEG which indicates negation, we append “not” to the predicate e.g. “not like”. These three augmentations can co-exist and they allow us to model more fine-grained semantic frames. Verb Compounds We have observed that if two predicates appear very close to each other, e.g. “eat and drink”, “decide to buy”, they actually represent a unified semantic meaning. Thus, we construct compound verbs to connect them together. We apply the rule that if the gap between two predicates is less than two tokens, we treat them as a unified semantic frame defined by the conjunction of the two (augmented) semantic frames, e.g. “eat.01-drink.01” and “decide.01-buy.01”. Argument Labels for Co-referent Mentions To get the argument role label information for coreferent mentions, we need to match each mention to its corresponding semantic role labeling argument. If a mention head is inside an argument, we regard it as a match. We do not consider singleton mentions. Vocabulary Construction After generating all semantic units for (augmented and compounded) semantic frames and discourse markers, we merge them together as a tentative vocabulary. 
In order to generate a sensible SemLM, we filter out rare tokens which appear less than 20 times in the data. We add the Unknown token (UNK) and End-ofSequence token (EOS) to the eventual vocabulary. Statistics on the eventual SemLM vocabularies and semantic sequences are shown in Table 2. We also compare frame-chain and entity-centered SemLMs to the usual syntactic language model setting. The statistics in Table 2 shows that they are comparable both in vocabulary size and in the total number of tokens for training. Moreover, entity-centered SemLMs have shorter sequences then frame-chain SemLMs. We also provide several examples of high-frequency augmented compound semantic frames in our generated SemLM Table 2: Statistics on SemLM vocabularies and sequences. “F-s” stands for single frame while “F-c” stands for compound frame; “Conn” means discourse marker. “#seq” is the number of sequences, and “#token” is the total number of tokens (semantic units). We also compute the average token in a sequence i.e. “#t/s”. We compare frame-chain (FC) and entity-centered (EC) SemLMs to the usual syntactic language model setting i.e. “LM”. Vocabulary Size Sequence Size F-s F-c Conn #seq #token #t/s FC 14857 7269 44 1.2M 25.4M 21 EC 8758 2896 44 3.4M 18.6M 5 LM ∼20k ∼3M ∼38M 10-15 vocabularies. All are very intuitive: want.01-know.01, agree.01-pay.01, try.01-get.01, decline.02-comment.01, wait.01-see.01, make.02-feel.01, want.01(not)-give.08(up) 5.3 Language Model Training NG We implement the N-gram model using the SRILM toolkit (Stolcke, 2002). We also employ the well-known KneserNey Smoothing (Kneser and Ney, 1995) technique. SG & CBOW We utilize the word2vec package to implement both SG and CBOW. In practice, we set the context window size to be 10 for SG while set the number as 5 for CBOW (both are usual settings for syntactic language models). We generate 300dimension embeddings for both models. LB We use the OxLM toolkit (Paul et al., 2014) with Noise-Constrastive Estimation (Gutmann and Hyvarinen, 2010) for the LB model. We set the context window size to 5 and produce 150dimension embeddings. 6 Evaluation In this section, we first evaluate the quality of SemLMs through perplexity and a narrative cloze test. More importantly, we show that the proposed SemLMs can help improve the performance of coreference resolution and shallow discourse parsing. This further proves that we successfully capture semantic sequence information which can potentially benefit a wide range of semantic related NLP tasks. We have designed two models for SemLM: frame-chain (FC) and entity-centered (EC). By training on both types of sequences respectively, we implement four different language models: 294 TRI, SG, CBOW, LB. We focus the evaluation efforts on these eight SemLMs. 6.1 Quality Evaluation of SemLMs Datasets We use three datasets. We first randomly sample 10% of the New York Times Corpus documents (roughly two years of data), denoted the NYT Hold-out Data. All our SemLMs are trained on the remaining NYT data and tested on this hold-out data. We generate semantic sequences for the training and test data using the methodology described in Sec. 5. We use PropBank data with gold frame annotations as another test set. In this case, we only generate frame-chain SemLM sequences by applying semantic unit generation techniques on gold frames, as described in Sec 5.2. When we test on Gold PropBank Data with Frame Chains, we use frame-chain SemLMs trained from all NYT data. 
Similarly, we use Ontonotes data (Hovy et al., 2006) with gold frame and co-reference annotations as the third test set, Gold Ontonotes Data with Coref Chains. We only generate entitycentered SemLMs by applying semantic unit generation techniques on gold frames and gold coreference chains, as described in Sec 5.2. Baselines We use Uni-gram (UNI) and Bi-gram (BG) as two language model baselines. In addition, we use the point-wise mutual information (PMI) for token prediction. Essentially, PMI scores each pair of tokens according to their cooccurrences. It predicts a token in the sequence by choosing the one with the highest total PMI with all other tokens in the sequence. We use the ordered PMI (OP) as our baseline, which is a variation of PMI by considering asymmetric counting (Jans et al., 2012). 6.1.1 Perplexity As SemLMs are language models, it is natural to evaluate the perplexity, which is a measurement of how well a language model can predict sequences. Results for SemLM perplexities are presented in Table 3. They are computed without considering end token (EOS). We apply tri-gram KneserNey Smoothing to CBOW, SG and LB. LB consistently shows the lowest perplexities for both frame-chain and entity-centered SemLMs across all test sets. Similar to syntactic language models, perplexities are fast decreasing from UNI, BI to TRI. Also, CBOW and SG have very close perplexity results which indicate that their language Table 3: Perplexities for SemLMs. UNI, BG, TRI, CBOW, SG, LB are different language model implementations while “FC” and “EC” stand for the two SemLM models studied, respectively. “FC-FM” and “EC-FM” indicate that we removed the “FrameNet Mapping” step (Sec. 5.2). LB consistently produces the lowest perplexities for both frame-chain and entity-centered SemLMs. Baselines SemLMs UNI BG TRI CBOW SG LB NYT Hold-out Data FC 952.1 178.3 119.2 115.4 114.1 108.5 EC 914.7 154.4 114.9 111.8 113.8 109.7 Gold PropBank Data with Frame Chains FC-FM 992.9 213.7 139.1 135.6 128.4 121.8 FC 970.0 191.2 132.7 126.4 123.5 115.4 Gold Ontonotes Data with Coref Chains EC-FM 956.4 187.7 121.1 115.6 117.2 113.7 EC 923.8 163.2 120.5 113.7 115.0 109.3 modeling abilities are at the same level. We can compare the results of our frame-chain SemLM on NYT Hold-out Data and Gold PropBank Data with Frame Chains, and our entitycentered SemLM on NYT Hold-out Data and Gold Ontonotes Data with Coref Chains. While we see differences in the results, the gap is narrow and the relative ranking of different SemLMs does not change. This indicates that the automatic SRL and Co-reference annotations added some noise but, more importantly, that the resulting SemLMs are robust to this noise as we still retain the language modeling ability for all methods. Additionally, our ablation study removes the “FrameNet Mapping” step in Sec. 5.2 (“FC-FM” and “EC-FM” rows), resulting in only using PropBank frames in the vocabulary. The increase in perplexities shows that “FrameNet Mapping” does produce a higher level of abstraction, which is useful for language modeling. 6.1.2 Narrative Cloze Test We follow the Narrative Cloze Test idea used in script learning (Chambers and Jurafsky, 2008b; Chambers and Jurafsky, 2009). As Rudinger et al. (2015) points out, the narrative cloze test can be regarded as a language modeling evaluation. In the narrative cloze test, we randomly choose and remove one token from each semantic sequence in the test set. We then use language models to predict the missing token and evaluate the correctness. 
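Concretely, the cloze evaluation over a set of test sequences can be sketched as follows. The score function is a placeholder for whichever SemLM scoring method is used and is not part of the paper's code; the two metrics computed are those reported later in this section (MRR and Recall@30).

```python
# Minimal sketch of the narrative cloze evaluation: one token is held out from each
# semantic sequence and the language model ranks candidate fillers over the vocabulary.
# `score(candidate, context)` stands in for any of the SemLM conditional probabilities.
import random

def narrative_cloze(sequences, vocabulary, score, k=30):
    reciprocal_ranks, hits_at_k = [], 0
    for seq in sequences:
        pos = random.randrange(len(seq))            # remove one token at random
        gold, context = seq[pos], seq[:pos] + seq[pos + 1:]
        # Rank the vocabulary by model score given the remaining context.
        ranked = sorted(vocabulary, key=lambda w: score(w, context), reverse=True)
        rank = ranked.index(gold) + 1
        reciprocal_ranks.append(1.0 / rank)
        hits_at_k += int(rank <= k)
    mrr = sum(reciprocal_ranks) / len(sequences)
    recall_at_k = hits_at_k / len(sequences)
    return mrr, recall_at_k
```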
For all SemLMs, we use the conditional probabilities defined in Sec. 4 to get token predictions. We also use ordered PMI as an additional baseline. The narrative cloze test is conducted on 295 Table 4: Narrative cloze test results for SemLMs. UNI, BG, TRI, CBOW, SG, LB are different language model implementations while “FC” and “EC” stand for our two SemLM models, respectively. “FC-FM” and “EC-FM” mean that we remove the FrameNet mappings. “w/o DIS” indicates the removal of discourse makers in SemLMs. “Rel-Impr” indicates the relative improvement of the best performing SemLM over the strongest baseline. We evaluate on two metrics: mean reciprocal rank (MRR)/recall at 30 (Recall@30). LB outperforms other methods for both frame-chain and entity-centered SemLMs. Baselines SemLMs Rel-Impr OP UNI BG TRI CBOW SG LB MRR NYT Hold-out Data FC 0.121 0.236 0.225 0.249 0.242 0.247 0.276 8.5% EC 0.126 0.235 0.210 0.242 0.249 0.249 0.261 5.9% EC w/o DIS 0.092 0.191 0.188 0.212 0.215 0.216 0.227 18.8% Rudinger et al. (2015)∗ 0.083 0.186 0.181 —– —– —– 0.223 19.9% Gold PropBank Data with Frame Chains FC 0.106 0.215 0.212 0.232 0.228 0.229 0.254 18.1% FC-FM 0.098 0.201 0.204 0.223 0.218 0.220 0.243 ——— Gold Ontonotes Data with Coref Chains EC 0.122 0.228 0.213 0.239 0.247 0.246 0.257 12.7% EC-FM 0.109 0.215 0.208 0.230 0.237 0.239 0.254 ——— Recall@30 NYT Hold-out Data FC 33.2 46.8 45.3 47.3 46.6 47.5 55.4 18.4% EC 29.4 43.7 41.6 44.8 46.5 46.6 52.0 19.0% Gold PropBank Data with Frame Chains FC 26.3 39.5 38.1 45.5 43.6 43.8 53.9 36.5% FC-FM 24.4 37.3 37.3 42.8 41.9 42.1 48.2 ——— Gold Ontonotes Data with Coref Chains EC 30.6 42.1 39.7 46.4 48.3 48.1 51.5 22.3% EC-FM 26.6 39.9 37.6 45.4 46.7 46.2 49.8 ——— the same test sets as the perplexity evaluation. We use mean reciprocal rank (MRR) and recall at 30 (Recall@30) to evaluate. Results are provided in Table 4. Consistent with the results in the perplexity evaluation, LB outperforms other methods for both frame-chain and entity-centered SemLMs across all test sets. It is interesting to see that UNI performs better than BG in this prediction task. This finding is also reflected in the results reported in Rudinger et al. (2015). Though CBOW and SG have similar perplexity results, SG appears to be stronger in the narrative cloze test. With respect to the strongest baseline (UNI), LB achieves close to 20% relative improvement for Recall@30 metric on NYT hold-out data. On gold data, the frame-chain SemLMs get a relative improvement of 36.5% for Recall@30 while entity-centered SemLMs get 22.3%. For MRR metric, the relative improvement is around half that of the Recall@30 metric. In the narrative cloze test, we also carry out an ablation study to remove the “FrameNet Mapping” step in Sec. 5.2 (“FC-FM” and “EC-FM” rows). The decrease in MRR and Recall@30 metrics further strengthens the argument that “FrameNet Mapping” is important for language modeling as it improves the generalization on frames. We cannot directly compare with other related works (Rudinger et al., 2015; Pichotta and Mooney, 2016) because of the differences in data and evaluation metrics. Rudinger et al. (2015) also use the NYT portion of the Gigaword corpus, but with Concrete annotations; Pichotta and Mooney (2016) use the English Wikipedia as their data, and Stanford NLP tools for pre-processing while we use the Illinois NLP tools. Consequently, the eventual chain statistics are different, which leads to different test instances.5 We counter this difficulty 5Rudinger et al. 
(2015) is similar to our entity-centered SemLM without discourse information. So, in Table 4, we 296 Table 5: Co-reference resolution results with entity-centered SemLM features. “EC” stands for the entity-centered SemLM. “TRI” is the trigram model while “LB” is the log-bilinear model. “pc” means conditional probability features and “em” represents frame embedding features. “w/o DIS” indicates the ablation study by removing all discourse makers for SemLMs. We conduct the experiments by adding SemLM features into the base system. We outperform the state-of-art system (Wiseman et al., 2015), which reports the best results on CoNLL12 dataset. The improvement achieved by “EC LB (pc+em)” over the base system is statistically significant. ACE04 CoNLL12 Wiseman et al. (2015) —– 63.39 Base (Peng et al., 2015a) 71.20 63.03 Base+EC-TRI (pc) 71.31 63.14 Base+EC-TRI w/o DIS 71.08 62.99 Base+EC-LB (pc) 71.71 63.42 Base+EC-LB (pc + em) 71.79 63.46 Base+EC-LB w/o DIS 71.12 63.00 by reporting results on “Gold PropBank Data” and “Gold Ontonotes Data”. We hope that these two gold annotation datasets can become standard test sets. Rudinger et al. (2015) does share a common evaluation metric with us: MRR. If we ignore the data difference and make a rough comparison, we find that the absolute values of our results are better while Rudinger et al. (2015) have higher relative improvement (“Rel-Impr” in Table 4). This means that 1) the discourse information is very likely to help better model semantics 2) the discourse information may boost the baseline (UNI) more than it does for the LB model. 6.2 Evaluation of SemLM Applications 6.2.1 Co-reference Resolution Co-reference resolution is the task of identifying mentions that refer to the same entity. To help improve its performance, we incorporate SemLM information as features into an existing co-reference resolution system. We choose the state-of-art Illinois Co-reference Resolution system (Peng et al., 2015a) as our base system. It employs a supervised joint mention detection and co-reference framework. We add additional features into the mention-pair feature set. Given a pair of mentions (m1, m2) where m1 make a rough comparison between them. appears before m2, we first extract the corresponding semantic frame and the argument role label of each mention. We do this by following the procedures in Sec. 5. Thus, we can get a pair of semantic frames with argument information (fa1, fa2). We may also get an additional discourse marker between these two frames, e.g. (fa1, dis, fa2). Now, we add the following conditional probability as the feature from SemLMs: pc = p(fa2|fa1, dis). We also add p2 c, √pc and 1/pc as features. To get the value of pc, we follow the definitions in Sec. 4, and we only use the entity-centered SemLM here as its vocabulary covers frames with argument labels. For the neural language model implementations (CBOW, SG and LB), we also include frame embeddings as additional features. We evaluate the effect of the added SemLM features on two co-reference benchmark datasets: ACE04 (NIST, 2004) and CoNLL12 (Pradhan et al., 2012). We use the standard split of 268 training documents, 68 development documents, and 106 testing documents for ACE04 data (Culotta et al., 2007; Bengtson and Roth, 2008). For CoNLL12 data, we follow the train and test document split from CoNLL-2012 Shared Task. We report CoNLL AVG for results (average of MUC, B3, and CEAFe metrics), using the v7.0 scorer provided by the CoNLL-2012 Shared Task. 
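As a sketch of the SemLM-derived mention-pair features just described, the following shows how the conditional probability and its transformations, plus optional frame embeddings, could be assembled for a pair of frames. The semlm_prob and frame_embedding callables are placeholders for the trained entity-centered SemLM of Sec. 4, and the feature names are illustrative, not taken from the system.

```python
# Sketch of SemLM features for a mention pair (m1, m2) mapped to frames (fa1, fa2)
# with an optional discourse marker `dis` between them.
import math

def semlm_pair_features(fa1, fa2, dis, semlm_prob, frame_embedding=None):
    # Conditional probability of the later frame given the earlier frame and the
    # (possibly absent) discourse marker, as defined by the SemLM.
    pc = semlm_prob(fa2, context=(fa1, dis))
    features = {
        "pc": pc,
        "pc_squared": pc ** 2,
        "pc_sqrt": math.sqrt(pc),
        "pc_inverse": 1.0 / pc if pc > 0 else 0.0,
    }
    if frame_embedding is not None:
        # Neural SemLMs (CBOW, SG, LB) also contribute frame embeddings as features.
        for name, frame in (("fa1", fa1), ("fa2", fa2)):
            for i, value in enumerate(frame_embedding(frame)):
                features[f"{name}_emb_{i}"] = value
    return features
```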
Co-reference resolution results with entitycentered SemLM features are shown in Table 5. Tri-grams with conditional probability features improve the performance by a small margin, while the log-bilinear model achieves a 0.4-0.5 F1 points improvement. By employing log-bilinear model embeddings, we further improve the numbers and we outperform the best reported results on the CoNLL12 dataset (Wiseman et al., 2015). In addition, we carry out ablation studies to remove all discourse makers during the language modeling process. We re-train our models and study their effects on the generated features. Table 5 (“w/o DIS” rows) shows that without discourse information, the SemLM features would hurt the overall performance, thus proving the necessity of considering discourse for semantic language models. 6.2.2 Shallow Discourse Parsing Shallow discourse parsing is the task of identifying explicit and implicit discourse connectives, 297 Table 6: Shallow discourse parsing results with frame-chain SemLM features. “FC” stands for the frame-chain SemLM. “TRI” is the tri-gram model while “LB” is the log-bilinear model. “pc”, “em” are conditional probability and frame embedding features, resp. “w/o DIS” indicates the case where we remove all discourse makers for SemLMs. We do the experiments by adding SemLM features to the base system. The improvement achieved by “FC-LB (pc + em)” over the baseline is statistically significant. CoNLL16 Test CoNLL16 Blind Explicit Implicit Overall Explicit Implicit Overall Base (Song et al., 2015) 89.8 35.6 60.4 75.8 31.9 52.3 Base + FC-TRI (qc) 90.3 35.8 60.7 76.4 32.5 52.9 Base + FC-TRI w/o DIS 89.2 35.3 60.0 75.5 31.6 52.0 Base + FC-LB (qc) 90.9 36.2 61.3 76.8 32.9 53.4 Base + FC-LB (qc + em) 91.1 36.3 61.4 77.3 33.2 53.8 Base + FC-LB w/o DIS 90.1 35.7 60.6 76.9 33.0 53.5 determine their senses and their discourse arguments. In order to show that SemLM can help improve shallow discourse parsing, we evaluate on identifying the correct sense of discourse connectives (both explicit and implicit ones). We choose Song et al. (2015), which uses a supervised pipeline approach, as our base system. The system extracts context features for potential discourse connectives and applies the discourse connective sense classifier. Consider an explicit connective “dis”; we extract the semantic frames that are closest to it (left and right), resulting in the sequence [f1, dis, f2] by following the procedures described in Sec. 5. We then add the following conditional probabilities as features. Compute qc = p(dis|f1, f2). and, similar to what we do for co-reference resolution, we add qc, q2 c, √qc, 1/qc as conditional probability features, which can be computed following the definitions in Sec. 4. We also include frame embeddings as additional features. We only use frame-chain SemLMs here. We evaluate on CoNLL16 (Xue et al., 2015) test and blind sets, following the train and development document split from the Shared Task, and report F1 using the official shared task scorer. Table 6 shows the results for shallow discourse parsing with SemLM features. Tri-gram with conditional probability features improve the performance for both explicit and implicit connective sense classifiers. Log-bilinear model with conditional probability features achieves even better results, and frame embeddings further improve the numbers. SemLMs improve relatively more on explicit connectives than on implicit ones. We also show an ablation study in the same setting as we did for co-reference, i.e. 
removing discourse information (“w/o DIS” rows). While our LB model can still exhibit improvement over the base system, its performance is lower than the proposed discourse driven version, which means that discourse information improves the expressiveness of semantic language models. 7 Conclusion The paper builds two types of discourse driven semantic language models with four different language model implementations that make use of neural embeddings for semantic frames. We use perplexity and a narrative cloze test to prove that the proposed SemLMs have a good level of abstraction and are of high quality, and then apply them successfully to the two challenging tasks of co-reference resolution and shallow discourse parsing, exhibiting improvements over state-ofthe-art systems. In future work, we plan to apply SemLMs to other semantic related NLP tasks e.g. machine translation and question answering. Acknowledgments The authors would like to thank Christos Christodoulopoulos and Eric Horn for comments that helped to improve this work. This work is supported by Contract HR0011-15-2-0025 with the US Defense Advanced Research Projects Agency (DARPA). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. This material is also based upon work supported by the U.S. Department of Homeland Security under Award Number 2009ST-061-CCI002-07. 298 References C. F. Baker, C. J. Fillmore, and J. B. Lowe. 1998. The berkeley framenet project. In COLING/ACL, pages 86–90. N. Balasubramanian, S. Soderland, Mausam, and O. Etzioni. 2013. Generating coherent event schemas at scale. In EMNLP, pages 1721–1731. D. Bamman and N. A. Smith. 2014. Unsupervised discovery of biographical structure from text. TACL, 2:363–376. C. A. Bejan. 2008. Unsupervised discovery of event scenarios from texts. In FLAIRS Conference, pages 124–129. E. Bengtson and D. Roth. 2008. Understanding the value of features for coreference resolution. In EMNLP. N. Chambers and D. Jurafsky. 2008a. Jointly combining implicit constraints improves temporal ordering. In EMNLP. N. Chambers and D. Jurafsky. 2008b. Unsupervised learning of narrative event chains. In ACL, volume 94305, pages 789–797. N. Chambers and D. Jurafsky. 2009. Unsupervised learning of narrative schemas and their participants. In ACL, volume 2, pages 602–610. N. Chambers. 2013. Event schema induction with a probabilistic entity-driven model. In EMNLP, volume 13, pages 1797–1807. J. C. K. Cheung, H. Poon, and L. Vanderwende. 2013. Probabilistic frame induction. arXiv:1302.4813. A. Culotta, M. Wick, R. Hall, and A. McCallum. 2007. First-order probabilistic models for coreference resolution. In NAACL. L. Frermann, I. Titov, and Pinkal. M. 2014. A hierarchical bayesian model for unsupervised induction of script knowledge. In EACL. M. Granroth-Wilding, S. Clark, M. T. Llano, R. Hepworth, S. Colton, J. Gow, J. Charnley, N. Lavraˇc, M. ˇZnidarˇsiˇc, and M. Perovˇsek. 2015. What happens next? event prediction using a compositional neural network model. M. Gutmann and A. Hyvarinen. 2010. Noisecontrastive estimation: A new estimation principle for unnormalized statistical models. In AISTATS. E. Hovy, M. Marcus, M. Palmer, L. Ramshaw, and R. Weischedel. 2006. Ontonotes: The 90% solution. In Proceedings of HLT/NAACL. J. Irwin, M. Komachi, and Y. Matsumoto. 2011. Narrative schema as world knowledge for coreference resolution. 
In CoNLL Shared Task, pages 86–92. B. Jans, S. Bethard, I. Vuli´c, and M. F. Moens. 2012. Skip n-grams and ranking functions for predicting script events. In EACL, pages 336–344. P. Kingsbury and M. Palmer. 2002. From Treebank to PropBank. In Proceedings of LREC-2002. R. Kneser and H. Ney. 1995. Improved backing-off for m-gram language modeling. In ICASSP. T. Mikolov, K. Chen, G. Corrado, and J. Dean. 2013a. Efficient estimation of word representations in vector space. arXiv:1301.3781. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In NAACL. A. Mnih and G. Hinton. 2007. Three new graphical models for statistical language modelling. In ICML, pages 641–648. A. Modi and I. Titov. 2014a. Inducing neural models of script knowledge. In CoNLL. A. Modi and I. Titov. 2014b. Learning semantic script knowledge with event embeddings. In ICLR Workshop. R. Mooney and G. DeJong. 1985. Learning schemata for natural language processing. K.-H. Nguyen, X. Tannier, O. Ferret, and R. Besanc¸on. 2015. Generative event schema induction with entity disambiguation. In ACL. US NIST. 2004. The ace evaluation plan. US National Institute for Standards and Technology (NIST). B. Paul, B. Phil, and H. Hieu. 2014. Oxlm: A neural language modelling framework for machine translation. The Prague Bulletin of Mathematical Linguistics, 102(1):81–92. H. Peng, K. Chang, and D. Roth. 2015a. A joint framework for coreference resolution and mention head detection. In CoNLL. H. Peng, D. Khashabi, and D. Roth. 2015b. Solving hard coreference problems. In NAACL. K. Pichotta and R. J. Mooney. 2014. Statistical script learning with multi-argument events. In EACL, volume 14, pages 220–229. K. Pichotta and R. J. Mooney. 2016. Learning statistical scripts with lstm recurrent neural networks. In AAAI. S. Pradhan, A. Moschitti, N. Xue, O. Uryupina, and Y. Zhang. 2012. CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In CoNLL. 299 Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The penn discourse treebank 2.0. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008). V. Punyakanok, D. Roth, W. Yih, and D. Zimak. 2004. Semantic role labeling via integer linear programming inference. In COLING. A. Rahman and V. Ng. 2011. Coreference resolution with world knowledge. In ACL. D. Roth and D. Zelenko. 1998. Part of speech tagging using a network of linear separators. In COLINGACL. R. Rudinger, P. Rastogi, F. Ferraro, and B. Van Durme. 2015. Script induction as language modeling. In EMNLP. R. C. Schank and R. P. Abelson. 1977. Scripts, plans, goals, and understanding: An inquiry into human knowledge structures. In JMZ. K. K. Schuler. 2005. Verbnet: A broad-coverage, comprehensive verb lexicon. Y. Song, H. Peng, P. Kordjamshidi, M. Sammons, and D. Roth. 2015. Improving a pipeline architecture for shallow discourse parsing. In CoNLL Shared Task. A. Stolcke. 2002. Srilm-an extensible language modeling toolkit. In INTERSPEECH, volume 2002, page 2002. I. Titov and E. Khoddam. 2015. Unsupervised induction of semantic roles within a reconstruction-error minimization framework. In NAACL. Ben Wellner and James Pustejovsky. 2007. Automatically identifying the arguments of discourse connectives. In Proceedings of the 2007 Joint Conference of EMNLP-CoNLL. T. Winograd. 1972. Understanding natural language. Cognitive psychology, 3(1):1–191. S. 
Wiseman, A. M. Rush, S. M. Shieber, and J. Weston. 2015. Learning anaphoricity and antecedent ranking features for coreference resolution. In ACL. N. Xue, H. T. Ng, S. Pradhan, R. P. C. Bryant, and A. T. Rutherford. 2015. The conll-2015 shared task on shallow discourse parsing. In CoNLL. 300
2016
28
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 301–310, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Sentiment Domain Adaptation with Multiple Sources Fangzhao Wu and Yongfeng Huang∗ Tsinghua National Laboratory for Information Science and Technology Department of Electronic Engineering Tsinghua University, Beijing, China [email protected], [email protected] Abstract Domain adaptation is an important research topic in sentiment analysis area. Existing domain adaptation methods usually transfer sentiment knowledge from only one source domain to target domain. In this paper, we propose a new domain adaptation approach which can exploit sentiment knowledge from multiple source domains. We first extract both global and domain-specific sentiment knowledge from the data of multiple source domains using multi-task learning. Then we transfer them to target domain with the help of words’ sentiment polarity relations extracted from the unlabeled target domain data. The similarities between target domain and different source domains are also incorporated into the adaptation process. Experimental results on benchmark dataset show the effectiveness of our approach in improving cross-domain sentiment classification performance. 1 Introduction Sentiment classification is a hot research topic in natural language processing field, and has many applications in both academic and industrial areas (Pang and Lee, 2008; Liu, 2012; Wu et al., 2015; Wu and Huang, 2016). Sentiment classification is widely known as a domain-dependent task (Blitzer et al., 2007; Glorot et al., 2011). The sentiment classifier trained in one domain may not perform well in another domain. This is because sentiment expressions used in different domains are usually different. For example, “boring” ∗Corresponding author. and “lengthy” are frequently used to express negative sentiments in Book domain. However, they rarely appear in Electronics domain (Bollegala et al., 2011). Thus a sentiment classifier trained in Electronics domain cannot accurately predict their sentiments in Book domain. In addition, the same word may convey different sentiments in different domains. For example, in Electronics domain “easy” is usually used in positive reviews, e.g., “this digital camera is easy to use.” However, it is frequently used as a negative word in Movie domain. For instance, “the ending of this movie is easy to guess.” Thus, the sentiment classifier trained in one domain usually cannot be applied to another domain directly (Pang and Lee, 2008). In order to tackle this problem, sentiment domain adaptation has been widely studied (Liu, 2012). For example, Blitzer et al. (2007) proposed to compute the correspondence among features from different domains using their associations with pivot features based on structural correspondence learning (SCL). Pan et al. (2010) proposed a spectral feature alignment (SFA) algorithm to align the domain-specific words from different domains in order to reduce the gap between source and target domains. However, all of these methods transfer sentiment information from only one source domain. When the source and target domains have significant difference in feature distributions, the adaptation performance will heavily decline. In some cases, the performance of sentiment domain adaptation is even worse than that without adaptation, which is usually known as negative transfer (Pan and Yang, 2010). 
In this paper we propose a new domain adaptation approach for cross-domain sentiment classification. Our approach can exploit the sentiment information in multiple source domains to reduce the risk of negative transfer effectively. Our approach consists of two steps, i.e., training and 301 adaptation. At the training stage, we extract two kinds of sentiment models, i.e., the global model and the domain-specific models, from the data of multiple source domains using multi-task learning. The global sentiment model can capture the common sentiment knowledge shared by various domains, and has better generalization performance than the sentiment model trained in a single source domain. The domain-specific sentiment model can capture the specific sentiment knowledge in each source domain. At the adaptation stage, we transfer both kinds of sentiment knowledge to target domain with the help of the words’ sentiment graph of target domain. The sentiment graph contains words’ domain-specific sentiment polarity relations extracted from the syntactic parsing results of the unlabeled data in target domain. Since sentiment transfer between similar domains is more effective than dissimilar domains, we incorporate the similarities between target domain and different source domains into the adaptation process. In order to estimate the similarity between two domains, we propose a novel domain similarity measure based on their sentiment graphs. Extensive experiments were conducted on the benchmark Amazon product review dataset. The experimental results show that our approach can improve the performance of cross-domain sentiment classification effectively. 2 Related work Sentiment classification is widely known as a domain-dependent task, since different expressions are used to express sentiments in different domains (Blitzer et al., 2007). The sentiment classifier trained in one domain may not perform well in another domain. Since there are massive domains, it is impractical to annotate enough data for each new domain. Thus, domain adaptation, or so called cross-domain sentiment classification, which transfers the sentiment knowledge from domains with sufficient labeled data (i.e., source domain) to a new domain with no or scarce labeled data (i.e., target domain), has been widely studied. Existing domain adaptation methods mainly transfer sentiment information from only one source domain. For example, Blitzer et al. (2007) proposed a domain adaptation method based on structural correspondence learning (SCL). In their method, a set of pivot features are first selected according to their associations with source domain labels. Then the correspondence among features from source and target domains is computed using their associations with pivot features. In order to reduce the gap between source and target domains, Pan et al. (2010) proposed a spectral feature alignment (SFA) algorithm to align the domainspecific sentiment words from different domains into clusters. He et al. (2011) proposed to extract polarity-bearing topics using joint sentiment-topic (JST) model to expand the feature representations of texts from both source and target domains. Li et al. (2009) proposed to transfer sentiment knowledge from source domain to target domain using nonnegative matrix factorization. A common shortcoming of above methods is that if the source and target domains have significantly different distributions of sentiment expressions, then the domain adaptation performance will heavily decline (Li et al., 2013). 
Using multiple source domains in cross-domain sentiment classification has also been explored. Glorot et al. (2011) proposed a sentiment domain adaptation method based on a deep learning technique, i.e., Stacked Denoising Auto-encoders. The core idea of their method is to learn a high-level representation that can capture generic concepts using the unlabeled data from multiple domains. Yoshida et al. (2011) proposed a probabilistic generative model for cross-domain sentiment classification with multiple source and target domains. In their method, each word is assigned three attributes, i.e., the domain label, the domain dependence/independence label, and sentiment polarity. Bollegala et al. (2011) proposed to construct a sentiment sensitive thesaurus for cross-domain sentiment classification using data from multiple source domains. This thesaurus is used to expand the feature vectors for both training and classification. However, the similarities between target domain and different source domains are not considered in these methods. In addition, although unlabeled data is utilized in these methods, the useful word-level sentiment knowledge in the unlabeled target domain data is not exploited. General-purpose multiple source domain adaptation methods have also been studied. For example, Mansour et al. (2009) proposed a distribution weighted hypothesis combination approach, and gave theoretical guarantees for it. However, this method is based on the assumption that target distribution is some mixture of source distri302 butions, which may not hold in sentiment domain adaptation scenario. Duan et al. (2009) proposed a Domain Adaptation Machine (DAM) method to learn a Least-Squares SVM classifier for target domain by leveraging the classifiers independently trained in multiple source domains. Chattopadhyay et al. (2011) explored to assign psuedo labels to unlabeled samples in the target domain using the classifiers from multiple source domains. Then target domain classifier is trained on these psuedo labeled samples. Compared with these generalpurpose domain adaptation methods with multiple source domains, our approach is more suitable for sentiment domain adaptation because our approach exploits more sentiment-related characteristics and knowledge, such as the general sentiment knowledge shared by different domains and the word-level sentiment polarity relations, which is validated by experiments. 3 Sentiment Graph Extraction and Domain Similarity Measure In this section we introduce two important components used in our sentiment domain adaptation approach, i.e., the words’ sentiment graph and domain similarity. 3.1 Sentiment Graph Extraction Compared with labeled data, unlabeled data is usually much easier and cheaper to collect on a large scale. Although unlabeled samples are not associated with sentiment labels, they can still provide a lot of useful sentiment information for domain adaptation. For example, if “great” and “quick” are frequently used to describe the same target in the same review of Kitchen domain, then they probably convey the same sentiment polarity in this domain. Since “great” is a general positive word in both Book and Kitchen domains, we can infer that “quick” is also a positive word in Kitchen domain when transferring from Book domain to Kitchen domain. Motivated by above observations, in this paper we propose to extract sentiment polarity relations among words from massive unlabeled data for sentiment domain adaptation. 
Two kinds of polarity relations are explored, i.e., sentiment coherent relation and sentiment opposite relation. The former means that two words convey the same sentiment polarity while the latter indicates opposite sentiment polarities. These polarity relations are exdurable small battery The camera of this not conj_and nsubj neg det nmod_of case det small durable opposite nsubj Figure 1: An illustrative example of extracting sentiment polarity relations from syntactic parsing results. tracted from the syntactic parsing results according to manually selected rules. Two rules are used to extract sentiment coherent relations. The first one is that two words are connected by coordinating conjunctions such as “and” and “as well as”. For example, a review in Kitchen domain may be “it is so high-quality and professional.” Since “high-quality” and “professional” are connected by the coordinating conjunction “and”, we infer that they probably convey the same sentiment polarity. The second rule is that two words are not directly connected but are used to describe the same target in the same sentence. For example, a review in Electronics domain may be “It is a beautiful, durable, easy-to-use camera.” Since “beautiful”, “durable”, and “easy-to-use” are all used to describe the same camera in the same review, they tend to convey the same sentiment polarity. We also propose two rules for extracting sentiment opposite relations. The first rule is that two words are connected by adversative conjunctions such as “but” and “however”. The second rule is that two words are connected by coordinating conjunctions but there is a negation symbol before one of them. For example, a review may be “The battery of this camera is small and not durable.” We can infer that “small” and “durable” may convey opposite sentiments when they are used to describe camera battery. An illustrative example of extracting sentiment polarity relations from syntactic parsing results is shown in Fig. 1. Based on the sentiment polarity relations among words extracted from the unlabeled data, we can build a words’ sentiment graph for each domain. The nodes of the sentiment graph represent words and the edges stand for sentiment polarity relations. We denote R ∈RD×D as the words’ sentiment graph of a specific domain. Ri,j represents the sentiment polarity relation score between words i 303 and j. In this paper we define Ri,j as nC i,j−nO i,j nC i,j+nO i,j , where nC i,j and nO i,j represent the frequencies of words i and j sharing coherent and opposite sentiment polarity relations respectively in all the unlabeled samples. Thus, Ri,j ∈[−1, 1]. If Ri,j > 0, then words i and j tend to convey the same sentiment polarity. Similarly, if Ri,j < 0, then these two words are more likely to convey opposite sentiments. The absolute value of Ri,j represents the confidence of this sentiment polarity relation. 3.2 Domain Similarity Different pairs of domains have different sentiment relatedness (Remus, 2012; Wu and Huang, 2015). Researchers have found that sentiment domain adaptation between similar domains, such as Kitchen and Electronics, is much more effective than that between dissimilar domains, such as Kitchen and Book (Blitzer et al., 2007; Pan et al., 2010). Thus, it is beneficial if we take the similarity between source and target domains into consideration when transferring sentiment knowledge. In this paper we explore two methods to measure domain similarity. The first one is based on term distribution. 
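Before turning to the similarity measures, the relation-score definition above can be made concrete with a short sketch. This is an illustration only, not the authors' implementation: it assumes the coherent and opposite word pairs have already been extracted from parsed unlabeled reviews, and every function and variable name here is hypothetical.

```python
from collections import defaultdict

def build_sentiment_graph(coherent_pairs, opposite_pairs):
    """Build R[(wi, wj)] = ((nC - nO) / (nC + nO), nC + nO) from extracted pairs."""
    n_coherent = defaultdict(int)
    n_opposite = defaultdict(int)

    def key(w1, w2):
        return tuple(sorted((w1, w2)))  # the polarity relation is symmetric

    for w1, w2 in coherent_pairs:
        n_coherent[key(w1, w2)] += 1
    for w1, w2 in opposite_pairs:
        n_opposite[key(w1, w2)] += 1

    graph = {}  # (wi, wj) -> (relation score in [-1, 1], total pair frequency)
    for pair in set(n_coherent) | set(n_opposite):
        nc, no = n_coherent[pair], n_opposite[pair]
        graph[pair] = ((nc - no) / (nc + no), nc + no)
    return graph

# Toy usage: "beautiful, durable, easy-to-use camera" yields coherent pairs,
# "small and not durable" yields an opposite pair.
graph = build_sentiment_graph(
    coherent_pairs=[("beautiful", "durable"), ("beautiful", "easy-to-use"),
                    ("durable", "easy-to-use")],
    opposite_pairs=[("small", "durable")])
print(graph[("durable", "small")])  # (-1.0, 1): opposite relation, frequency 1
```

Keeping the total pair frequency alongside the score is convenient for the sentiment-graph-based domain similarity defined in Section 3.2.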
The assumption behind this method is that similar domains usually share more common terms than dissimilar domains. For example, Smart Phone and Digital Camera domains share many common terms such as “screen”, “battery”, “light”, and “durable”, while the term distributions of Digital Camera and Book domains may have significant difference. Term distribution based domain similarity measures, such as A-distance, have been explored in previous works (Blitzer et al., 2007). Inspired by (Remus, 2012), here we apply Jensen-Shannon divergence to measure domain similarity based on term distributions, which is more easy to compute than A-distance. Denote tm ∈RD×1 as the term distribution of domain m, where tm w is the probability of term w appearing in domain m. Then the similarity between domains m and n is formulated as: TermSim(m, n) = 1 −DJS(tm, tn) = 1 −1 2(DKL(tm, t) + DKL(tn, t)), (1) where t = 1 2(tm + tn) is the average distribution, and DKL(·, ·) is the Kullback-Leibler divergence: DKL(p, q) = D X i=1 pi log2 pi qi  . (2) We can verify that DJS(tm, tn) ∈[0, 1]. Thus, the range of TermSim(m, n) is also [0, 1]. The term distribution based domain similarity can measure whether similar words are used in two domains. However, sharing similar terms does not necessarily mean that sentiment expressions are used similarly in these domains. For example, CPU and Battery are both related to electronics. The word “fast” is positive when used to describe CPU. However, it is frequently used as a negative word in Battery domain. For example, “this battery runs out fast.” Thus, it is more useful to measure domain similarity based on sentiment word distributions. However, although we can infer the sentiment word distributions of source domains according to labeled samples, it is difficult to compute the sentiment word distribution of target domain, since the labeled data does not exist or is very scarce in target domain. In order to tackle this problem, in this paper we propose to estimate the similarity between two domains based on their sentiment graphs. Similar domains usually share more common sentiment words and sentiment word pairs than dissimilar domains. In addition, the polarity relation scores of a pair of words in the sentiment graphs of similar domains are also more similar. In other words, they tend to be both positive or negative in these two domains. Motivated by above observations, the domain similarity based on sentiment graph is formulated as follows: SentiSim(m, n) = D P w=1 P v̸=w |Rm w,v + Rn w,v| · N m∩n w,v D P w=1 P v̸=w (|Rm w,v| · N m w,v + |Rnw,v| · N n w,v) , (3) where Rm w,v is the sentiment polarity relation score between words w and v in domain m, and Nm w,v is its frequency in this domain. Nm∩n w,v = min{Nm w,v, Nn w,v}. We can verify that SentiSim(m, n) ∈[0, 1]. If two domains have more common sentiment word pairs and the polarity relation scores of these word pairs are more similar, then these two domains share higher domain similarity according to Eq. (3). 4 Sentiment Domain Adaptation with Multiple Sources In this section we introduce our sentiment domain adaptation approach in detail. First we introduce several notations that will be used in following discussions. Assume there are M source domains. 304 Denote {Xm ∈RNm×D, ym ∈RNm×1} as the labeled data in source domain m, where Nm is the number of labeled samples and D is the size of feature vector. xm i ∈RD×1 is the feature vector of the ith sample in domain m, and its sentiment label is ym i . 
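For reference, here is a minimal sketch of the two similarity measures just defined (Eqs. 1-3). It assumes term distributions are given as word-to-probability dictionaries and sentiment graphs as the (score, frequency) maps from the sketch above; the helper names are illustrative, not taken from the authors' code.

```python
import math

def term_similarity(tm, tn):
    """TermSim(m, n) = 1 - Jensen-Shannon divergence of the term distributions.
    tm, tn: dicts mapping term -> probability (each should sum to 1)."""
    vocab = set(tm) | set(tn)
    avg = {w: 0.5 * (tm.get(w, 0.0) + tn.get(w, 0.0)) for w in vocab}

    def kl(p):  # D_KL(p || avg), skipping zero-probability terms
        return sum(p[w] * math.log2(p[w] / avg[w]) for w in p if p[w] > 0.0)

    return 1.0 - 0.5 * (kl(tm) + kl(tn))

def sentiment_graph_similarity(gm, gn):
    """SentiSim(m, n) for two sentiment graphs, each mapping a word pair to
    (relation score R in [-1, 1], pair frequency N), as built above."""
    num = sum(abs(gm[p][0] + gn[p][0]) * min(gm[p][1], gn[p][1])
              for p in set(gm) & set(gn))
    den = (sum(abs(r) * n for r, n in gm.values()) +
           sum(abs(r) * n for r, n in gn.values()))
    return num / den if den > 0 else 0.0
```

Only word pairs present in both graphs contribute to the numerator of SentiSim, which reflects the minimum-frequency weighting N^{m∩n} in Eq. (3).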
In this paper we focus on sentiment polarity classification, and ym i ∈{+1, −1}. Denote w ∈RD×1 as the global sentiment model extracted from multiple source domains and wm ∈RD×1 as the domain-specific sentiment model of source domain m. Denote wt ∈RD×1 as the domain-specific sentiment model of target domain. Denote f(x, y, w) as the loss of classifying sample x into label y under linear classification model w. Our approach is flexible to the selection of loss function f, which can be square loss, logistic loss, and hinge loss. Denote Rm ∈RD×D as the sentiment graph knowledge of domain m, and Sm,t ∈[0, 1] as the similarity between source domain m and target domain. Our sentiment domain adaptation with multiple sources approach (SDAMS) consists of two steps, i.e., training and adaptation. At the training stage, the global and domain-specific sentiment knowledge are extracted from the data of multiple source domains. And at the adaptation stage, these two kinds of sentiment knowledge are transferred to target domain by incorporating the sentiment graph knowledge of target domain and the similarities between target and source domains. 4.1 Training Given the labeled data and the sentiment graph knowledge of multiple source domains, at the training stage, our goal is to train a robust global sentiment model to capture the general sentiment knowledge shared by various domains and a domain-specific sentiment model for each source domain. The model of the training process is motivated by multi-task learning (Evgeniou and Pontil, 2004; Liu et al., 2009) and is formulated as: arg min w,wm L(w, wm) = M X m=1 1 Nm Nm X i=1 f(xm i , ym i , w + wm) + α M X m=1 D X i=1 X j̸=i Rm i,j|(wi + wm,i) −(wj + wm,j)| + λ1∥w∥2 2 + λ1 M X m=1 ∥wm∥2 2 + λ2∥w∥1 + λ2 M X m=1 ∥wm∥1, (4) where α, λ1, and λ2 are nonnegative regularization coefficients. The sentiment classification model of each source domain is decomposed into two components, i.e., a global one and a domain-specific one. The global sentiment model is shared by all source domains and is trained in these domains simultaneously. It is used to capture the general sentiment knowledge, such as the general sentiment words “great”, “worst”, “perfect” and so on. The domain-specific sentiment model is trained on the labeled data within one source domain and is used to capture the specific sentiment knowledge of this domain. For example, the domain-specific sentiment word “easy” is a positive word in Electronics domain but is used as a negative word in Movie domain. In Eq. (4), the first term means minimizing the empirical classification loss on the labeled data of multiple source domains. In this way we incorporate the sentiment information in labeled samples into sentiment classifier learning. In the second term we incorporate the sentiment graph knowledge of each source domain. It is motivated by graph-guided fused Lasso (Chen et al., 2012). If two words have strong coherent (or opposite) sentiment polarity relations, then we constrain that their sentiment scores are more similar (or dissimilar) with each other in the final classification model. The L1-norm regularization terms are motivated by Lasso (Tibshirani, 1996). It can set many minor sentiment scores in the models to exact zeros. Since not all the words convey sentiments, these terms can help conduct sentiment word selection. 
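To make Eq. (4) concrete, the following sketch simply evaluates the training objective with a hinge loss. It is not the authors' implementation: the optimization itself is omitted, dense D x D graph matrices are used only for readability, and all names are illustrative.

```python
import numpy as np

def hinge(X, y, w):
    """Average hinge loss of linear model w on examples X (N x D), labels y in {+1, -1}."""
    return float(np.mean(np.maximum(0.0, 1.0 - y * (X @ w))))

def graph_term(R, v):
    """sum_{i != j} R[i, j] * |v_i - v_j|: coherent pairs (R > 0) pull weights
    together, opposite pairs (R < 0) push them apart."""
    diff = np.abs(v[:, None] - v[None, :])   # the diagonal is zero by construction
    return float(np.sum(R * diff))

def training_objective(X, y, R, w, W, alpha, lam1, lam2):
    """Value of Eq. (4). X[m], y[m], R[m]: labeled data and sentiment graph of
    source domain m; w: global model; W[m]: domain-specific model of domain m."""
    obj = 0.0
    for m in range(len(X)):
        wm = w + W[m]
        obj += hinge(X[m], y[m], wm)          # per-domain empirical loss
        obj += alpha * graph_term(R[m], wm)   # graph-guided fused-lasso term
    obj += lam1 * (np.sum(w ** 2) + sum(np.sum(v ** 2) for v in W))        # L2
    obj += lam2 * (np.sum(np.abs(w)) + sum(np.sum(np.abs(v)) for v in W))  # L1
    return obj
```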
We also incorporate the L2-norm regularization terms in order to improve model stability in high-dimensional problems, which is inspired by elastic net regularization (Zou and Hastie, 2003). 4.2 Adaptation At the adaptation stage, we incorporate the global sentiment knowledge, the domain-specific sentiment knowledge of multiple source domains, the sentiment graph knowledge of target domain, and the domain similarities between target and source domains into a unified framework to learn an accurate sentiment classifier for target domain. The model of our adaptation framework is formulated as follows: arg min wt L(wt) = M X m=1 Sm,t∥wm −wt∥2 2 + λ1∥wt∥2 2 + λ2∥wt∥1 + β D X i=1 X j̸=i Rt i,j|(wi + wt,i) −(wj + wt,j)|, (5) 305 where β, λ1, and λ2 are nonnegative regularization coefficients. The final sentiment classifier of the target domain is a linear combination of w and wt, i.e., w+wt, where w is the global sentiment model extracted from multiple source domains at the training stage, and wt is the domain-specific sentiment model of target domain learned at the adaptation stage. In the first term of Eq. (5), we transfer the knowledge in domain-specific sentiment models from multiple source domains to wt. Since the sentiment knowledge transfer between similar domains is more effective, the transfer of domainspecific sentiment knowledge is weighted by the similarities between target domain and different source domains. If target domain is more similar with source domain m than source domain n (i.e., Sm,t > Sn,t ), then more domain-specific sentiment knowledge will be transferred to wt from wm than wn. Through the last term we incorporate the sentiment graph knowledge extracted from massive unlabeled data of target domain into the adaptation process. If two words share strong coherent (or opposite) sentiment polarity relations in the target domain, then we constrain that their sentiment scores in the sentiment classification model of target domain are more similar (or dissimilar). This term can help transfer the sentiment knowledge from source domains to target domain more effectively. For example, if we know that “great” is a positive word in the global sentiment model and there is a strong coherent polarity relation between “easy” and “great” in Electronics domain, then we can infer that “easy” is also a positive word in this domain. 5 Experiments 5.1 Dataset and Experimental Settings The dataset used in our experiments is the famous Amazon product review dataset collected by Blitzer et al. (2007). It is widely used as a benchmark dataset for cross-domain sentiment classification. Four domains, i.e., Book, DVD, Electronics and Kitchen, are included in this dataset. Each domain contains 1,000 positive and 1,000 negative reviews. Besides, a large number of unlabeled reviews are provided. The detailed statistics of this dataset are shown in Table 1. In our experiments, each domain was selected in turn as target domain, and remaining domains as source domains. In each experiment, we randomly selected N labeled samples from the Table 1: The statistics of the dataset. Domain Book DVD Electronics Kitchen positive 1,000 1,000 1,000 1,000 negative 1,000 1,000 1,000 1,000 unlabeled 973,194 122,438 21,009 17,856 source domains to train sentiment models at the training stage. These samples were balanced among different source domains. In order to perform fair comparisons with baseline methods, following (Bollegala et al., 2011), we limited the total number of training samples, i.e., N, to 1,600. 
The target domain sentiment classifier was tested on all the labeled samples of target domain. Following (Blitzer et al., 2007), unigrams and bigrams were used as features. The sentiment polarity relations of bigrams were extracted by expanding the polarity relations between unigrams using modifying relations. For example, from the review “this phone is very beautiful and not expensive,” we extract not only sentiment polarity relation between “beautiful” and “expensive”, but also polarity relation between “beautiful” and “not expensive” (coherent sentiment), and that between “very beautiful” and “expensive” (opposite sentiment), since “very” and “not” are used to modify “beautiful” and “expensive” respectively. Classification accuracy was selected as the evaluation metric. We manually set β in Eq. (5) to 0.01. The values of α, λ1, and λ2 in Eq. (4) were selected using cross validation. The optimization problems in Eq. (4) and Eq. (5) were solved using alternating direction method of multipliers (ADMM) (Boyd et al., 2011). Each experiment was repeated 10 times independently and average results were reported. 5.2 Comparison of Domain Similarity Measures In this section, we conducted experiments to compare the effectiveness of the two kinds of domain similarity measures introduced in Section 3.2 in sentiment domain adaptation task. The experimental results are summarized in Fig. 2. The classification loss function used in our approach is hinge loss. The results of other loss functions show similar patterns. From Fig. 2 we can see that the domain similarity measure based on sentiment graph performs consistently better than that based on term distribution in our approach. This result validates our 306 Book DVD Electronics Kitchen 0.7 0.72 0.74 0.76 0.78 0.8 0.82 0.84 0.86 Accuracy TermSim SentiSim Figure 2: The performance of our approach with different kinds of domain similarity measure. assumption in Section 3.2 that the sentiment graph based domain similarity can better model the sentiment relatedness between different domains than that based on term distribution in sentiment domain adaptation task. In all the following experiments, the sentiment graph based domain similarities were used in our approach. 5.3 Performance Evaluation In this section we conducted experiments to evaluate the performance of our approach by comparing it with a series of baseline methods. 
The methods to be compared are: 1) SCL, domain adaptation based on structural correspondence learning (Blitzer et al., 2007); 2) SFA, domain adaptation based on spectral feature alignment (Pan et al., 2010); 3) SCL-com and SFA-com, adapting SCL and SFA to multiple source domain scenario by first training a cross-domain sentiment classifier in each source domain and then combining their classification results using majority voting; 4) SST, cross-domain sentiment classification by using multiple source domains to construct a sentiment sensitive thesaurus for feature expansion (Bollegala et al., 2011); 5) IDDIWP, multiple-domain sentiment analysis by identifying domain dependent/independent word polarity (Yoshida et al., 2011); 6) DWHC, DAM and CPMDA, three general-purpose multiple source domain adaptation methods proposed in (Mansour et al., 2009), (Duan et al., 2009) and (Chattopadhyay et al., 2011) respectively; 7) SDAMS-LS, SDAMSSVM, and SDAMS-Log, our proposed sentiment domain adaptation approaches with square loss, hinge loss, and logistic loss respectively; 8) AllTraining, all the domains were involved in the training phase of our approach and there is no adaptation phase. This method is introduced to provide an upper bound for the performance of our approach. The experimental results of these methods are summarized in Table 2. Table 2: The performance of different methods. Book DVD Electronics Kitchen SCL 0.7457 0.7630 0.7893 0.8207 SFA 0.7598 0.7848 0.7808 0.8210 SCL-com 0.7523 0.7675 0.7918 0.8247 SFA-com 0.7629 0.7869 0.7864 0.8258 SST 0.7632 0.7877 0.8363 0.8518 IDDIWP 0.7524 0.7732 0.8167 0.8383 DWHC 0.7611 0.7821 0.8312 0.8478 DAM 0.7563 0.7756 0.8284 0.8419 CP-MDA 0.7597 0.7792 0.8331 0.8465 SDAMS-LS 0.7795 0.7880 0.8398 0.8596 SDAMS-SVM 0.7786 0.7902 0.8418 0.8578 SDAMS-Log 0.7829 0.7913 0.8406 0.8629 All-Training 0.7983 0.8104 0.8463 0.8683 From Table 2 we can see that our approach achieves the best performance among all the methods compared here. SCL and SFA are famous cross-domain sentiment classification methods. In these methods, the sentiment knowledge is transferred from one source domain to target domain. According to Table 2, our approach performs significantly better than them. This result indicates that the sentiment knowledge extracted from one source domain may contain heavy domain-specific bias and may be inappropriate for the target domain. Our approach can tackle this problem by extracting the global sentiment model from multiple source domains. This global model can capture the general sentiment knowledge shared by various domains and has better generalization performance. It can reduce the risk of negative transfer effectively. Our approach also outperforms SCLcom and SFA-com. In SCL-com and SFA-com, the sentiment information in different source domains is combined at the classification stage, while in our approach it is combined at the learning stage. The superior performance of our approach compared with SCL-com and SFA-com shows that our approach is a more appropriate way to exploit the sentiment knowledge in different source domains. SST and IDDIWP also utilize data from multiple source domains as our approach. But our approach can still outperform them. This is because in these methods, the similarities between target domain and different source domains are not considered. 
Since different domains usually 307 easy good excellent hard simple quick worth expensive pricey lack reliable durable 1.0 0.88 1.0 1.0 1.0 1.0 1.0 1.0 1.0 -1.0 -1.0 1.0 -1.0 -1.0 1.0 cheap 0.56 Figure 3: An illustrative example of the sentiment graph of Electronics domain. The value on the line represents the sentiment polarity relation score. have different sentiment relatedness, our approach can exploit the sentiment information in multiple source domains more accurately by incorporating the similarities between target domain and each source domain into the adaptation process. Our approach also outperforms the state-of-theart general-purpose multiple source domain adaptation methods, such as DWHC, DAM, and CPMDA. This is because our approach can exploit more sentiment-related characteristics and knowledge for sentiment domain adaptation, such as the general sentiment knowledge shared by various domains, the sentiment graph based domain similarities, and the word-level sentiment polarity relations. Thus, our approach is more suitable for sentiment domain adaptation than these generalpurpose multiple source domain adaptation methods. Another observation from Table 2 is that the performance of our approach is quite close to the upper bound, i.e., All-Training, especially in Electronics and Kitchen domains. This result validates the effectiveness of our approach in sentiment domain adaptation. 5.4 Case Study In this section we conducted several case studies to further explore how our sentiment domain adaptation approach works. As an illustrative example, we selected Electronics domain as the target domain and remaining domains as source domains. The top sentiment words in the global and domainspecific sentiment models learned from the data of multiple source domains are shown in Table 3. A subgraph of the sentiment graph extracted from the unlabeled data of target domain (Electronics) is shown in Fig. 3. The top words in the final domain-specific sentiment model of target domain returned by our approach are shown in Table 3. From Table 3 we have following observations. First, the global sentiment model extracted from multiple source domains can capture the general sentiment knowledge quite well. It contains many general sentiment words, such as “excellent”, “great”, “waste” and so on. These general sentiment words convey strong sentiment orientations. In addition, their sentiment polarities are consistent in different domains. Thus, the global sentiment model extracted from multiple source domains has good generalization ability and is more suitable for domain adaptation than the sentiment model trained in a single source domain, which may contain heavy domain-specific sentiment bias. Second, the domain-specific sentiment models can capture rich specific sentiment expressions in each source domain. For example, “easy” is a positive word in Kitchen domain while “return” is a negative word in this domain. Third, different domains have different domain-specific sentiment expressions. For example, “read” is frequently used as a positive word in Book domain, while it is a negative word in DVD domain. Thus, it is important to separate the global and the domain-specific sentiment knowledge. In addition, although different sentiment expressions are used in different domains, similar domains may share many common domain-specific sentiment expressions. For example, “easy” and “works” are positive words in both Electronics and Kitchen domains, and “return” and “broken” are both negative words in them. 
Thus, transferring the domainspecific sentiment models from similar source domains to target domain is helpful. From Fig. 3 we can see that the sentiment polarity relations in the sentiment graph extracted from massive unlabeled data are reasonable. Words with positive relation scores tend to convey similar sentiments, and words with negative relation scores usually convey opposite sentiments. In addition, this sentiment graph contains rich domain-specific sentiment information in target domain, which is useful to transfer the sentiment knowledge from multiple source domains to target domain. For example, “excellent”, “easy”, “simple”, and “quick” share the same sentiment polarity in Electronics domain according to Fig. 3. We can infer that “easy” is positive in this domain using the sentiment of “excellent” in the global sentiment model and the sentiment relation between “easy” and “excellent”. Then we can further infer the sentiments of the domain-specific sentiment words 308 Table 3: The top words in the global and domain-specific sentiment models. Global Positive excellent, great, best, perfect, love, wonderful, the best, loved, well, fantastic, enjoy, favorite Negative bad, waste, boring, disappointed, worst, poor, disappointing, disappointment, terrible, poorly Book Positive excellent, wonderful, easy, loved, enjoyable, life, fun, favorite, a must, read, important, novel Negative no, boring, disappointing, bad, instead, waste, little, writing, poorly, pages, unfortunately DVD Positive enjoy, hope, loved, season, better than, best, a must, first, superman, classic, times, back Negative worst, boring, bad, the worst, terrible, waste, awful, book, horrible, dull, lame, read, hard Kitchen Positive easy, great, perfect, love, works, easy to, best, little, well, good, nice, long, durable, clean Negative disappointed, back, poor, broken, too, return, off, returned, broke, waste, tried, times, doesn’t Electronics Positive excellent, great, perfect, best, love, easy to, easy, little, the best, works, good, nice, wonderful Negative disappointed, poor, waste, too, bad, worst, back, broken, return, horrible, off, tried, poorly “simple” and “quick” in target domain using the polarity of “easy” and their sentiment relations with it, even if they may be covered by no source domain. 6 Conclusion This paper presents a sentiment domain adaptation approach which transfers the sentiment knowledge from multiple source domains to target domain. Our approach consists of two steps. First, we extract both global and domain-specific sentiment knowledge from the data of multiple source domains. Second, we transfer these two kinds of sentiment knowledge to target domain with the help of the words’ sentiment graph. We proposed to build words’ sentiment graph for target domain by extracting their sentiment polarity relations from massive unlabeled data. Besides, we proposed a novel domain similarity measure based on sentiment graphs, and incorporated the domain similarities between target and different source domains into the domain adaptation process. The experimental results on a benchmark dataset show that our approach can effectively improve the performance of cross-domain sentiment classification. Acknowledgements This research is supported by the Key Program of National Natural Science Foundation of China (Grant nos. U1536201 and U1405254), the National Natural Science Foundation of China (Grant no. 61472092), the National High Technology Research and Development Program of China (863 Program) (Grant no. 
2015AA020101), the National Science and Technology Support Program of China (Grant no. 2014BAH41B00), and the Initiative Scientific Research Program of Tsinghua University. References John Blitzer, Mark Dredze, Fernando Pereira, et al. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL, volume 7, pages 440–447. Danushka Bollegala, David Weir, and John Carroll. 2011. Using multiple sources to construct a sentiment sensitive thesaurus for cross-domain sentiment classification. In ACL:HLT, pages 132–141. Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. 2011. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122. Rita Chattopadhyay, Jieping Ye, Sethuraman Panchanathan, Wei Fan, and Ian Davidson. 2011. Multi-source domain adaptation and its application to early detection of fatigue. In KDD, pages 717– 725. ACM. Xi Chen, Qihang Lin, Seyoung Kim, Jaime G Carbonell, Eric P Xing, et al. 2012. Smoothing proximal gradient method for general structured sparse regression. The Annals of Applied Statistics, 6(2):719–752. Lixin Duan, Ivor W Tsang, Dong Xu, and Tat-Seng Chua. 2009. Domain adaptation from multiple sources via auxiliary classifiers. In ICML, pages 289–296. ACM. Theodoros Evgeniou and Massimiliano Pontil. 2004. Regularized multi–task learning. In KDD, pages 109–117. ACM. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In ICML, pages 513–520. Yulan He, Chenghua Lin, and Harith Alani. 2011. Automatically extracting polarity-bearing topics for cross-domain sentiment classification. In ACL:HLT, pages 123–131. Tao Li, Vikas Sindhwani, Chris Ding, and Yi Zhang. 2009. Knowledge transformation for cross-domain sentiment classification. In SIGIR, pages 716–717. ACM. 309 Shoushan Li, Yunxia Xue, Zhongqing Wang, and Guodong Zhou. 2013. Active learning for crossdomain sentiment classification. In IJCAI, pages 2127–2133. Jun Liu, Shuiwang Ji, and Jieping Ye. 2009. Multitask feature learning via efficient l2,1-norm minimization. In UAI, pages 339–348. Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis Lectures on Human Language Technologies, 5(1):1–167. Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. 2009. Domain adaptation with multiple sources. In NIPS, pages 1041–1048. Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. TKDE, 22(10):1345–1359. Sinno Jialin Pan, Xiaochuan Ni, Jian-Tao Sun, Qiang Yang, and Zheng Chen. 2010. Cross-domain sentiment classification via spectral feature alignment. In WWW, pages 751–760. ACM. Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and trends in information retrieval, 2(1-2):1–135. Robert Remus. 2012. Domain adaptation using domain similarity-and domain complexity-based instance selection for cross-domain sentiment analysis. In 2012 IEEE 12th International Conference on Data Mining Workshops, pages 717–723. IEEE. Robert Tibshirani. 1996. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), pages 267–288. Fangzhao Wu and Yongfeng Huang. 2015. Collaborative multi-domain sentiment classification. In ICDM, pages 459–468. IEEE. Fangzhao Wu and Yongfeng Huang. 2016. Personalized microblog sentiment classification via multitask learning. 
In AAAI, pages 3059–3065. Fangzhao Wu, Yangqiu Song, and Yongfeng Huang. 2015. Microblog sentiment classification with contextual knowledge regularization. In AAAI, pages 2332–2338. Yasuhisa Yoshida, Tsutomu Hirao, Tomoharu Iwata, Masaaki Nagata, and Yuji Matsumoto. 2011. Transfer learning for multiple-domain sentiment analysis - identifying domain dependent/independent word polarity. In AAAI, pages 1286–1291. Hui Zou and Trevor Hastie. 2003. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301–320. 310
2016
29
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 23–32, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Inferring Logical Forms From Denotations Panupong Pasupat Computer Science Department Stanford University [email protected] Percy Liang Computer Science Department Stanford University [email protected] Abstract A core problem in learning semantic parsers from denotations is picking out consistent logical forms—those that yield the correct denotation—from a combinatorially large space. To control the search space, previous work relied on restricted set of rules, which limits expressivity. In this paper, we consider a much more expressive class of logical forms, and show how to use dynamic programming to efficiently represent the complete set of consistent logical forms. Expressivity also introduces many more spurious logical forms which are consistent with the correct denotation but do not represent the meaning of the utterance. To address this, we generate fictitious worlds and use crowdsourced denotations on these worlds to filter out spurious logical forms. On the WIKITABLEQUESTIONS dataset, we increase the coverage of answerable questions from 53.5% to 76%, and the additional crowdsourced supervision lets us rule out 92.1% of spurious logical forms. 1 Introduction Consider the task of learning to answer complex natural language questions (e.g., “Where did the last 1st place finish occur?”) using only question-answer pairs as supervision (Clarke et al., 2010; Liang et al., 2011; Berant et al., 2013; Artzi and Zettlemoyer, 2013). Semantic parsers map the question into a logical form (e.g., R[Venue].argmax(Position.1st, Index)) that can be executed on a knowledge source to obtain the answer (denotation). Logical forms are very expressive since they can be recursively composed, but this very expressivity makes it more difficult to search over the space of logical forms. Previous work sidesteps this obstacle by restricting the set of possible logical form compositions, but this is limiting. For instance, for the system in Pasupat and Liang (2015), in only 53.5% of the examples was the correct logical form even in the set of generated logical forms. The goal of this paper is to solve two main challenges that prevent us from generating more expressive logical forms. The first challenge is computational: the number of logical forms grows exponentially as their size increases. Directly enumerating over all logical forms becomes infeasible, and pruning techniques such as beam search can inadvertently prune out correct logical forms. The second challenge is the large increase in spurious logical forms—those that do not reflect the semantics of the question but coincidentally execute to the correct denotation. For example, while logical forms z1, . . . , z5 in Figure 1 are all consistent (they execute to the correct answer y), the logical forms z4 and z5 are spurious and would give incorrect answers if the table were to change. We address these two challenges by solving two interconnected tasks. The first task, which addresses the computational challenge, is to enumerate the set Z of all consistent logical forms given a question x, a knowledge source w (“world”), and the target denotation y (Section 4). Observing that the space of possible denotations grows much more slowly than the space of logical forms, we perform dynamic programming on denotations (DPD) to make search feasible. 
Our method is guaranteed to find all consistent logical forms up to some bounded size. Given the set Z of consistent logical forms, the second task is to filter out spurious logical forms from Z (Section 5). Using the property that spurious logical forms ultimately give a wrong answer when the data in the world w changes, we create 23 Year Venue Position Event Time 2001 Hungary 2nd 400m 47.12 2003 Finland 1st 400m 46.69 2005 Germany 11th 400m 46.62 2007 Thailand 1st relay 182.05 2008 China 7th relay 180.32 x: “Where did the last 1st place finish occur?” y: Thailand Consistent Correct z1: R[Venue].argmax(Position.1st, Index) Among rows with Position = 1st, pick the one with maximum index, then return the Venue of that row. z2: R[Venue].Index.max(R[Index].Position.1st) Find the maximum index of rows with Position = 1st, then return the Venue of the row with that index. z3: R[Venue].argmax(Position.Number.1, R[λx.R[Date].R[Year].x]) Among rows with Position number 1, pick one with latest date in the Year column and return the Venue. Spurious z4: R[Venue].argmax(Position.Number.1, R[λx.R[Number].R[Time].x]) Among rows with Position number 1, pick the one with maximum Time number. Return the Venue. z5: R[Venue].Year.Number.( R[Number].R[Year].argmax(Type.Row, Index)−1) Subtract 1 from the Year in the last row, then return the Venue of the row with that Year. Inconsistent ˜z: R[Venue].argmin(Position.1st, Index) Among rows with Position = 1st, pick the one with minimum index, then return the Venue. (= Finland) Figure 1: Six logical forms generated from the question x. The first five are consistent: they execute to the correct answer y. Of those, correct logical forms z1, z2, and z3 are different ways to represent the semantics of x, while spurious logical forms z4 and z5 get the right answer y for the wrong reasons. fictitious worlds to test the denotations of the logical forms in Z. We use crowdsourcing to annotate the correct denotations on a subset of the generated worlds. To reduce the amount of annotation needed, we choose the subset that maximizes the expected information gain. The pruned set of logical forms would provide a stronger supervision signal for training a semantic parser. We test our methods on the WIKITABLEQUESTIONS dataset of complex questions on Wikipedia tables. We define a simple, general set of deduction rules (Section 3), and use DPD to confirm that the rules generate a correct logical form in ... r1 · · · 1 Finland 1st r2 · · · 2 Germany 11th r3 · · · 3 Thailand 1st ... 1 11 1 Next Next Next Next Index Index Index Venue Position Venue Position Venue Position Number Number Number z1 = R[Venue]. argmax( Position. 1st , Index) Figure 2: The table in Figure 1 is converted into a graph. The recursive execution of logical form z1 is shown via the different colors and styles. 76% of the examples, up from the 53.5% in Pasupat and Liang (2015). Moreover, unlike beam search, DPD is guaranteed to find all consistent logical forms up to a bounded size. Finally, by using annotated data on fictitious worlds, we are able to prune out 92.1% of the spurious logical forms. 2 Setup The overarching motivation of this work is allowing people to ask questions involving computation on semi-structured knowledge sources such as tables from the Web. This section introduces how the knowledge source is represented, how the computation is carried out using logical forms, and our task of inferring correct logical forms. Worlds. 
We use the term world to refer to a collection of entities and relations between entities. One way to represent a world w is as a directed graph with nodes for entities and directed edges for relations. (For example, a world about geography would contain a node Europe with an edge Contains to another node Germany.) In this paper, we use data tables from the Web as knowledge sources, such as the one in Figure 1. We follow the construction in Pasupat and Liang (2015) for converting a table into a directed graph (see Figure 2). Rows and cells become nodes (e.g., r0 = first row and Finland) while columns become labeled directed edges between them (e.g., Venue maps r1 to Finland). The graph is augmented with additional edges Next (from each 24 row to the next) and Index (from each row to its index number). In addition, we add normalization edges to cell nodes, including Number (from the cell to the first number in the cell), Num2 (the second number), Date (interpretation as a date), and Part (each list item if the cell represents a list). For example, a cell with content “3-4” has a Number edge to the integer 3, a Num2 edge to 4, and a Date edge to XX-03-04. Logical forms. We can perform computation on a world w using a logical form z, a small program that can be executed on the world, resulting in a denotation JzKw. We use lambda DCS (Liang, 2013) as the language of logical forms. As a demonstration, we will use z1 in Figure 2 as an example. The smallest units of lambda DCS are entities (e.g., 1st) and relations (e.g., Position). Larger logical forms can be constructed using logical operations, and the denotation of the new logical form can be computed from denotations of its constituents. For example, applying the join operation on Position and 1st gives Position.1st, whose denotation is the set of entities with relation Position pointing to 1st. With the world in Figure 2, the denotation is JPosition.1stKw = {r1, r3}, which corresponds to the 2nd and 4th rows in the table. The partial logical form Position.1st is then used to construct argmax(Position.1st, Index), the denotation of which can be computed by mapping the entities in JPosition.1stKw = {r1, r3} using the relation Index ({r0 : 0, r1 : 1, . . . }), and then picking the one with the largest mapped value (r3, which is mapped to 3). The resulting logical form is finally combined with R[Venue] with another join operation. The relation R[Venue] is the reverse of Venue, which corresponds to traversing Venue edges in the reverse direction. Semantic parsing. A semantic parser maps a natural language utterance x (e.g., “Where did the last 1st place finish occur?”) into a logical form z. With denotations as supervision, a semantic parser is trained to put high probability on z’s that are consistent—logical forms that execute to the correct denotation y (e.g., Thailand). When the space of logical forms is large, searching for consistent logical forms z can become a challenge. As illustrated in Figure 1, consistent logical forms can be divided into two groups: correct logical forms represent valid ways for computing the answer, while spurious logical forms accidentally get the right answer for the wrong reasons (e.g., z4 picks the row with the maximum time but gets the correct answer anyway). Tasks. Denote by Z and Zc the sets of all consistent and correct logical forms, respectively. The first task is to efficiently compute Z given an utterance x, a world w, and the correct denotation y (Section 4). 
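As a concrete illustration of the execution just described, the toy sketch below evaluates z1 = R[Venue].argmax(Position.1st, Index) on a hand-built fragment of the graph in Figure 2. It is a simplification for exposition, not the lambda DCS executor used in the system, and the edge encoding and function names are hypothetical.

```python
# Edges of a fragment of the graph from Figure 2: relation -> list of (source, target).
EDGES = {
    "Position": [("r0", "2nd"), ("r1", "1st"), ("r2", "11th"), ("r3", "1st"), ("r4", "7th")],
    "Venue":    [("r0", "Hungary"), ("r1", "Finland"), ("r2", "Germany"),
                 ("r3", "Thailand"), ("r4", "China")],
    "Index":    [("r0", 0), ("r1", 1), ("r2", 2), ("r3", 3), ("r4", 4)],
}

def join(relation, values):
    """Denotation of `relation.values`: sources whose edge points into `values`."""
    return {s for s, t in EDGES[relation] if t in values}

def reverse_join(relation, values):
    """Denotation of `R[relation].values`: targets reachable from `values`."""
    return {t for s, t in EDGES[relation] if s in values}

def argmax(values, relation):
    """Entities in `values` whose mapped value under `relation` is maximal."""
    mapped = {s: t for s, t in EDGES[relation] if s in values}
    best = max(mapped.values())
    return {s for s, t in mapped.items() if t == best}

# z1 = R[Venue].argmax(Position.1st, Index)
rows_1st = join("Position", {"1st"})    # {'r1', 'r3'}
last_1st = argmax(rows_1st, "Index")    # {'r3'}
print(reverse_join("Venue", last_1st))  # {'Thailand'}
```

Each operation maps denotations to denotations, which is exactly the property the dynamic programming of Section 4 exploits.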
With the set Z, the second task is to infer Zc by pruning spurious logical forms from Z (Section 5). 3 Deduction rules The space of logical forms given an utterance x and a world w is defined recursively by a set of deduction rules (Table 1). In this setting, each constructed logical form belongs to a category (Set, Rel, or Map). These categories are used for type checking in a similar fashion to categories in syntactic parsing. Each deduction rule specifies the categories of the arguments, category of the resulting logical form, and how the logical form is constructed from the arguments. Deduction rules are divided into base rules and compositional rules. A base rule follows one of the following templates: TokenSpan[span] →c [f(span)] (1) ∅→c [f()] (2) A rule of Template 1 is triggered by a span of tokens from x (e.g., to construct z1 in Figure 2 from x in Figure 1, Rule B1 from Table 1 constructs 1st of category Set from the phrase “1st”). Meanwhile, a rule of Template 2 generates a logical form without any trigger (e.g., Rule B5 generates Position of category Rel from the graph edge Position without a specific trigger in x). Compositional rules then construct larger logical forms from smaller ones: c1 [z1] + c2 [z2] →c [g(z1, z2)] (3) c1 [z1] →c [g(z1)] (4) A rule of Template 3 combines partial logical forms z1 and z2 of categories c1 and c2 into g(z1, z2) of category c (e.g., Rule C1 uses 1st of category Set and Position of category Rel to construct Position.1st of category Set). Template 4 works similarly. Most rules construct logical forms without requiring a trigger from the utterance x. This is 25 Rule Semantics Base Rules B1 TokenSpan →Set fuzzymatch(span) (entity fuzzily matching the text: “chinese” →China) B2 TokenSpan →Set val(span) (interpreted value: “march 2015” →2015-03-XX) B3 ∅→Set Type.Row (the set of all rows) B4 ∅→Set c ∈ClosedClass (any entity from a column with few unique entities) (e.g., 400m or relay from the Event column) B5 ∅→Rel r ∈GraphEdges (any relation in the graph: Venue, Next, Num2, . . . ) B6 ∅→Rel != | < | <= | > | >= Compositional Rules C1 Set + Rel →Set z2.z1 | R[z2].z1 (R[z] is the reverse of z; i.e., flip the arrow direction) C2 Set →Set a(z1) (a ∈{count, max, min, sum, avg}) C3 Set + Set →Set z1 ⊓z2 | z1 ⊔z2 | z1 −z2 (subtraction is only allowed on numbers) Compositional Rules with Maps Initialization M1 Set →Map (z1, x) (identity map) Operations on Map M2 Map + Rel →Map (u1, z2.b1) | (u1, R[z2].b1) M3 Map →Map (u1, a(b1)) (a ∈{count, max, min, sum, avg}) M4 Map + Set →Map (u1, b1 ⊓z2) | . . . M5 Map + Map →Map (u1, b1 ⊓b2) | . . . (Allowed only when u1 = u2) (Rules M4 and M5 are repeated for ⊔and −) Finalization M6 Map →Set argmin(u1, R[λx.b1]) | argmax(u1, R[λx.b1]) Table 1: Deduction rules define the space of logical forms by specifying how partial logical forms are constructed. The logical form of the i-th argument is denoted by zi (or (ui, bi) if the argument is a Map). The set of final logical forms contains any logical form with category Set. crucial for generating implicit relations (e.g., generating Year from “what’s the venue in 2000?” without a trigger “year”), and generating operations without a lexicon (e.g., generating argmax from “where’s the longest competition”). However, the downside is that the space of possible logical forms becomes very large. The Map category. The technique in this paper requires execution of partial logical forms. 
This poses a challenge for argmin and argmax operations, which take a set and a binary relation as arguments. The binary could be a complex function (e.g., in z3 from Figure 1). While it is possible to build the binary independently from the set, executing a complex binary is sometimes impossible (e.g., the denotation of λx.count(x) is impossible to write explicitly without knowledge of x). We address this challenge with the Map category. A Map is a pair (u, b) of a finite set u (unary) and a binary relation b. The denotation of (u, b) is (JuKw, JbK′ w) where the binary JbK′ w is JbKw with the domain restricted to the set JuKw. For example, consider the construction of argmax(Position.1st, Index). After constructing Position.1st with denotation {r1, r3}, Rule M1 initializes (Position.1st, x) with denotation ({r1, r3}, {r1 : {r1}, r3 : {r3}}). Rule M2 is then applied to generate (Position.1st, R[Index].x) with denotation ({r1, r3}, {r1 : {1}, r3 : {3}}). Finally, Rule M6 converts the Map into the desired argmax logical form with denotation {r3}. Generality of deduction rules. Using domain knowledge, previous work restricted the space of logical forms by manually defining the categories c or the semantic functions f and g to fit the domain. For example, the category Set might be divided into Records, Values, and Atomic when the knowledge source is a table (Pasupat and Liang, 2015). Another example is when a compositional rule g (e.g., sum(z1)) must be triggered by some phrase in a lexicon (e.g., words like “total” that align to sum in the training data). Such restrictions make search more tractable but greatly limit the scope of questions that can be answered. Here, we have increased the coverage of logical forms by making the deduction rules simple and general, essentially following the syntax of lambda DCS. The base rules only generates entities that approximately match the utterance, but all possible relations, and all possible further combinations. Beam search. Given the deduction rules, an utterance x and a world w, we would like to generate all derived logical forms Z. We first present the floating parser (Pasupat and Liang, 2015), which uses beam search to generate Zb ⊆Z, a usually incomplete subset. Intuitively, the algorithm first constructs base logical forms based on spans of the utterance, and then builds larger logical forms of increasing size in a “floating” fashion—without requiring a trigger from the utterance. Formally, partial logical forms with category c and size s are stored in a cell (c, s). The algorithm first generates base logical forms from base deduction rules and store them in cells (c, 0) (e.g., the cell (Set, 0) contains 1st, Type.Row, and so on). Then for each size s = 1, . . . , smax, we populate 26 · · · · · · · · · · · · · · · · · · · · · (Set, 7, {Thailand}) (Set, 7, {Finland}) Figure 3: The first pass of DPD constructs cells (c, s, d) (square nodes) using denotationally invariant semantic functions (circle nodes). The second pass enumerates all logical forms along paths that lead to the correct denotation y (solid lines). the cells (c, s) by applying compositional rules on partial logical forms with size less than s. For instance, when s = 2, we can apply Rule C1 on logical forms Number.1 from cell (Set, s1 = 1) and Position from cell (Rel, s2 = 0) to create Position.Number.1 in cell (Set, s0+s1+1 = 2). 
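The Map bookkeeping can be illustrated with a small sketch that rebuilds the argmax(Position.1st, Index) example from above. This is a toy illustration with hypothetical helper names, not the system's implementation, and it assumes the Index relation is given as a plain dictionary.

```python
# A Map is a pair (unary, binary): the unary is a set of entities, the binary maps
# each of those entities to the set of values it is paired with so far.
INDEX = {"r0": 0, "r1": 1, "r2": 2, "r3": 3, "r4": 4}  # Index edges of the toy graph

def init_map(unary):
    """Rule M1: (z1, x) -- start with the identity binary restricted to the unary."""
    return (set(unary), {e: {e} for e in unary})

def apply_reverse_relation(m, relation):
    """Rule M2 with R[relation]: follow the relation from every value in the binary."""
    unary, binary = m
    return (unary, {e: {relation[v] for v in vals} for e, vals in binary.items()})

def finalize_argmax(m):
    """Rule M6: keep the unary entities whose mapped value is maximal."""
    unary, binary = m
    best = max(max(vals) for vals in binary.values())
    return {e for e, vals in binary.items() if max(vals) == best}

m = init_map({"r1", "r3"})            # ({r1, r3}, {r1: {r1}, r3: {r3}})
m = apply_reverse_relation(m, INDEX)  # ({r1, r3}, {r1: {1},  r3: {3}})
print(finalize_argmax(m))             # {'r3'}
```

In the parser, such Maps are built and combined cell by cell alongside the other categories during the search described above.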
After populating each cell (c, s), the list of logical forms in the cell is pruned based on the model scores to a fixed beam size in order to control the search space. Finally, the set Zb is formed by collecting logical forms from all cells (Set, s) for s = 1, . . . , smax. Due to the generality of our deduction rules, the number of logical forms grows quickly as the size s increases. As such, partial logical forms that are essential for building the desired logical forms might fall off the beam early on. In the next section, we present a new search method that compresses the search space using denotations. 4 Dynamic programming on denotations Our first step toward finding all correct logical forms is to represent all consistent logical forms (those that execute to the correct denotation). Formally, given x, w, and y, we wish to generate the set Z of all logical forms z such that JzKw = y. As mentioned in the previous section, beam search does not recover the full set Z due to pruning. Our key observation is that while the number of logical forms explodes, the number of distinct denotations of those logical forms is much more controlled, as multiple logical forms can share the same denotation. So instead of directly enumerating logical forms, we use dynamic programming on denotations (DPD), which is inspired by similar methods from program induction (Lau et al., 2003; Liang et al., 2010; Gulwani, 2011). The main idea of DPD is to collapse logical forms with the same denotation together. Instead of using cells (c, s) as in beam search, we perform dynamic programming using cells (c, s, d) where d is a denotation. For instance, the logical form Position.Number.1 will now be stored in cell (Set, 2, {r1, r3}). For DPD to work, each deduction rule must have a denotationally invariant semantic function g, meaning that the denotation of the resulting logical form g(z1, z2) only depends on the denotations of z1 and z2: Jz1Kw = Jz′ 1Kw ∧Jz2Kw = Jz′ 2Kw ⇒Jg(z1, z2)Kw = Jg(z′ 1, z′ 2)Kw All of our deduction rules in Table 1 are denotationally invariant, but a rule that, for instance, returns the argument with the larger logical form size would not be. Applying a denotationally invariant deduction rule on any pair of logical forms from (c1, s1, d1) and (c2, s2, d2) always results in a logical form with the same denotation d in the same cell (c, s1 + s2 + 1, d).1 (For example, the cell (Set, 4, {r3}) contains z1 := argmax(Position.1st, Index) and z′ 1 := argmin(Event.Relay, Index). Combining each of these with Venue using Rule C1 gives R[Venue].z1 and R[Venue].z′ 1, which belong to the same cell (Set, 5, {Thailand})). Algorithm. DPD proceeds in two forward passes. The first pass finds the possible combinations of cells (c, s, d) that lead to the correct denotation y, while the second pass enumerates the logical forms in the cells found in the first pass. Figure 3 illustrates the DPD algorithm. In the first pass, we are only concerned about finding relevant cell combinations and not the actual logical forms. Therefore, any logical form that belongs to a cell could be used as an argument of a deduction rule to generate further logical forms. Thus, we keep at most one logical form per cell; subsequent logical forms that are generated for that cell are discarded. 
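A simplified sketch of this first pass is given below. It assumes binary compositional rules only, hashable denotations, and a combine function that returns the new category together with the new logical form; the names are illustrative, and the real system additionally handles unary rules, Maps, and the second enumeration pass.

```python
from itertools import product

def dpd_first_pass(base_items, rules, execute, target, max_size):
    """Simplified first pass of dynamic programming on denotations.

    base_items: list of (category, logical_form) built by the base rules
    rules: list of (cat1, cat2, combine); combine(z1, z2) returns
           (new_category, new_logical_form) or None if not applicable
    execute: logical_form -> denotation (must be hashable)
    target: the correct denotation y
    """
    cells = {}     # (category, size, denotation) -> one representative logical form
    backptrs = {}  # cell -> every (cell1, cell2, rule index) combination producing it
    for cat, z in base_items:
        cells.setdefault((cat, 0, execute(z)), z)

    for size in range(1, max_size + 1):
        for s1 in range(size):
            s2 = size - 1 - s1  # argument sizes: s1 + s2 + 1 = size
            lefts = [c for c in cells if c[1] == s1]
            rights = [c for c in cells if c[1] == s2]
            for c1, c2 in product(lefts, rights):
                for ri, (cat1, cat2, combine) in enumerate(rules):
                    if c1[0] != cat1 or c2[0] != cat2:
                        continue
                    result = combine(cells[c1], cells[c2])
                    if result is None:
                        continue
                    cat, z = result
                    cell = (cat, size, execute(z))  # denotational invariance:
                    cells.setdefault(cell, z)       # one representative suffices
                    backptrs.setdefault(cell, set()).add((c1, c2, ri))

    finals = [c for c in cells if c[0] == "Set" and c[2] == target]
    return finals, backptrs
```

The second pass would then follow the recorded rule combinations backward from the final cells and enumerate the actual logical forms.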
After populating all cells up to size smax, we list all cells (Set, s, y) with the correct denotation y, and then note all possible rule combinations (cell1, rule) or (cell1, cell2, rule) that lead to those 1Semantic functions f with one argument work similarly. 27 final cells, including the combinations that yielded discarded logical forms. The second pass retrieves the actual logical forms that yield the correct denotation. To do this, we simply populate the cells (c, s, d) with all logical forms, using only rule combinations that lead to final cells. This elimination of irrelevant rule combinations effectively reduces the search space. (In Section 6.2, we empirically show that the number of cells considered is reduced by 98.7%.) The parsing chart is represented as a hypergraph as in Figure 3. After eliminating unused rule combinations, each of the remaining hyperpaths from base predicates to the target denotation corresponds to a single logical form. making the remaining parsing chart a compact implicit representation of all consistent logical forms. This representation is guaranteed to cover all possible logical forms under the size limit smax that can be constructed by the deduction rules. In our experiments, we apply DPD on the deduction rules in Table 1 and explicitly enumerate the logical forms produced by the second pass. For efficiency, we prune logical forms that are clearly redundant (e.g., applying max on a set of size 1). We also restrict a few rules that might otherwise create too many denotations. For example, we restricted the union operation (⊔) except unions of two entities (e.g., we allow Germany ⊔Finland but not Venue.Hungary ⊔. . . ), subtraction when building a Map, and count on a set of size 1.2 5 Fictitious worlds After finding the set Z of all consistent logical forms, we want to filter out spurious logical forms. To do so, we observe that semantically correct logical forms should also give the correct denotation in worlds w′ other than than w. In contrast, spurious logical forms will fail to produce the correct denotation on some other world. Generating fictitious worlds. With the observation above, we generate fictitious worlds w1, w2, . . . , where each world wi is a slight alteration of w. As we will be executing logical forms z ∈Z on wi, we should ensure that all entities and relations in z ∈Z appear in the fictitious world wi (e.g., z1 in Figure 1 would be meaningless if the entity 1st does not appear in wi). To this end, we 2While we technically can apply count on sets of size 1, the number of spurious logical forms explodes as there are too many sets of size 1 generated. Year Venue Position Event Time 2001 Finland 7th relay 46.62 2003 Germany 1st 400m 180.32 2005 China 1st relay 47.12 2007 Hungary 7th relay 182.05 Figure 4: From the example in Figure 1, we generate a table for the fictitious world w1. w w1 w2 · · · z1 Thailand China Finland · · · ) q1 z2 Thailand China Finland · · · z3 Thailand China Finland · · · z4 Thailand Germany China · · · } q2 z5 Thailand China China · · · o q3 z6 Thailand China China · · · ... ... ... ... Figure 5: We execute consistent logical forms zi ∈Z on fictitious worlds to get denotation tuples. Logical forms with the same denotation tuple are grouped into the same equivalence class qj. impose that all predicates present in the original world w should also be present in wi as well. 
In our case where the world w comes from a data table t, we construct wi from a new table ti as follows: we go through each column of t and resample the cells in that column. The cells are sampled using random draws without replacement if the original cells are all distinct, and with replacement otherwise. Sorted columns are kept sorted. To ensure that predicates in w exist in wi, we use the same set of table columns and enforce that any entity fuzzily matching a span in the question x must be present in ti (e.g., for the example in Figure 1, the generated ti must contain “1st”). Figure 4 shows an example fictitious table generated from the table in Figure 1. Fictitious worlds are similar to test suites for computer programs. However, unlike manually designed test suites, we do not yet know the correct answer for each fictitious world or whether a world is helpful for filtering out spurious logical forms. The next subsections introduce our method for choosing a subset of useful fictitious worlds to be annotated. Equivalence classes. Let W = (w1, . . . , wk) be the list of all possible fictitious worlds. For each z ∈Z, we define the denotation tuple JzKW = (JzKw1, . . . , JzKwk). We observe that some logical forms produce the same denotation across all 28 fictitious worlds. This may be due to an algebraic equivalence in logical forms (e.g., z1 and z2 in Figure 1) or due to the constraints in the construction of fictitious worlds (e.g., z1 and z3 in Figure 1 are equivalent as long as the Year column is sorted). We group logical forms into equivalence classes based on their denotation tuples, as illustrated in Figure 5. When the question is unambiguous, we expect at most one equivalence class to contain correct logical forms. Annotation. To pin down the correct equivalence class, we acquire the correct answers to the question x on some subset W ′ = (w′ 1, . . . , w′ ℓ) ⊆ W of ℓfictitious worlds, as it is impractical to obtain annotations on all fictitious worlds in W. We compile equivalence classes that agree with the annotations into a set Zc of correct logical forms. We want to choose W ′ that gives us the most information about the correct equivalence class as possible. This is analogous to standard practices in active learning (Settles, 2010).3 Let Q be the set of all equivalence classes q, and let JqKW ′ be the denotation tuple computed by executing an arbitrary z ∈q on W ′. The subset W ′ divides Q into partitions Ft = {q ∈Q : JqKW ′ = t} based on the denotation tuples t (e.g., from Figure 5, if W ′ contains just w2, then q2 and q3 will be in the same partition F(China)). The annotation t∗, which is also a denotation tuple, will mark one of these partitions Ft∗as correct. Thus, to prune out many spurious equivalence classes, the partitions should be as numerous and as small as possible. More formally, we choose a subset W ′ that maximizes the expected information gain (or equivalently, the reduction in entropy) about the correct equivalence class given the annotation. With random variables Q ∈Q representing the correct equivalence class and T ∗ W ′ for the annotation on worlds W ′, we seek to find arg minW ′ H(Q | T ∗ W ′). Assuming a uniform prior on Q (p(q) = 1/|Q|) and accurate annotation (p(t∗| q) = I[q ∈Ft∗]): H(Q | T ∗ W ′) = X q,t p(q, t) log p(t) p(q, t) = 1 |Q| X t |Ft| log |Ft|. (*) 3The difference is that we are obtaining partial information about an individual example rather than partial information about the parameters. We exhaustively search for W ′ that minimizes (*). 
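The subset selection itself is simple to sketch. The snippet below assumes each equivalence class is represented by its denotation tuple over the k generated worlds (a hypothetical input format) and exhaustively scores every size-ℓ subset with the Σt |Ft| log |Ft| objective from (*).

import math
from collections import Counter
from itertools import combinations

def choose_worlds(class_tuples, k, num_annotated):
    # class_tuples: one denotation tuple of length k per equivalence class,
    # i.e. the denotations of that class on the fictitious worlds w1..wk.
    best_subset, best_value = None, float("inf")
    for subset in combinations(range(k), num_annotated):
        # partition the classes by their denotations restricted to the subset
        partition_sizes = Counter(tuple(t[i] for i in subset) for t in class_tuples)
        value = sum(n * math.log(n) for n in partition_sizes.values())
        if value < best_value:                   # proportional to H(Q | T*_W')
            best_subset, best_value = subset, value
    return best_subset

# For instance, choose_worlds([("A","B","C"), ("A","B","D"), ("A","E","D")], k=3, num_annotated=1)
# returns (1,): world w2 splits the three classes into partitions of size 2 and 1,
# whereas w1 leaves them in a single partition of size 3.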
The objective value follows our intuition since P t |Ft| log |Ft| is small when the terms |Ft| are small and numerous. In our experiments, we approximate the full set W of fictitious worlds by generating k = 30 worlds to compute equivalence classes. We choose a subset of ℓ= 5 worlds to be annotated. 6 Experiments For the experiments, we use the training portion of the WIKITABLEQUESTIONS dataset (Pasupat and Liang, 2015), which consists of 14,152 questions on 1,679 Wikipedia tables gathered by crowd workers. Answering these complex questions requires different types of operations. The same operation can be phrased in different ways (e.g., “best”, “top ranking”, or “lowest ranking number”) and the interpretation of some phrases depend on the context (e.g., “number of” could be a table lookup or a count operation). The lexical content of the questions is also quite diverse: even excluding numbers and symbols, the 14,152 training examples contain 9,671 unique words, only 10% of which appear more than 10 times. We attempted to manually annotate the first 300 examples with lambda DCS logical forms. We successfully constructed correct logical forms for 84% of these examples, which is a good number considering the questions were created by humans who could use the table however they wanted. The remaining 16% reflect limitations in our setup— for example, non-canonical table layouts, answers appearing in running text or images, and common sense reasoning (e.g., knowing that “Quarterfinal” is better than “Round of 16”). 6.1 Generality of deduction rules We compare our set of deduction rules with the one given in Pasupat and Liang (2015) (henceforth PL15). PL15 reported generating the annotated logical form in 53.5% of the first 200 examples. With our more general deduction rules, we use DPD to verify that the rules are able to generate the annotated logical form in 76% of the first 300 examples, within the logical form size limit smax of 7. This is 90.5% of the examples that were successfully annotated. Figure 6 shows some examples of logical forms we cover that PL15 could not. Since DPD is guaranteed to find all consistent logical forms, we can be sure that the logical 29 “which opponent has the most wins” z = argmax(R[Opponent].Type.Row, R[λx.count(Opponent.x ⊓Result.Lost]) “how long did ian armstrong serve?” z = R[Num2].R[Term].Member.IanArmstrong −R[Number].R[Term].Member.IanArmstrong “which players came in a place before lukas bauer?” z = R[Name].Index.<.R[Index].Name.LukasBauer “which players played the same position as ardo kreek?” z = R[Player].Position.R[Position].Player.Ardo ⊓!=.Ardo Figure 6: Several example logical forms our system can generated that are not covered by the deduction rules from the previous work PL15. forms not covered are due to limitations of the deduction rules. Indeed, the remaining examples either have logical forms with size larger than 7 or require other operations such as addition, union of arbitrary sets, etc. 6.2 Dynamic programming on denotations Search space. To demonstrate the savings gained by collapsing logical forms with the same denotation, we track the growth of the number of unique logical forms and denotations as the logical form size increases. The plot in Figure 7 shows that the space of logical forms explodes much more quickly than the space of denotations. The use of denotations also saves us from considering a significant amount of irrelevant partial logical forms. 
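This statistic can be read directly off a populated chart. As a small sketch, assuming a dict chart mapping each (category, size, denotation) cell to its enumerated logical forms:

from collections import defaultdict

def growth_by_size(chart):
    # chart: (category, size, denotation) -> list of logical forms in that cell
    num_lfs, denotations = defaultdict(int), defaultdict(set)
    for (cat, size, denot), forms in chart.items():
        num_lfs[size] += len(forms)         # unique logical forms of this size
        denotations[size].add(denot)        # distinct denotations of this size
    return {s: (num_lfs[s], len(denotations[s])) for s in sorted(num_lfs)}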
On average over 14,152 training examples, DPD generates approximately 25,000 consistent logical forms. The first pass of DPD generates ≈153,000 cells (c, s, d), while the second pass generates only ≈2,000 cells resulting from ≈8,000 rule combinations, resulting in a 98.7% reduction in the number of cells that have to be considered. Comparison with beam search. We compare DPD to beam search on the ability to generate (but not rank) the annotated logical forms. We consider two settings: when the beam search parameters are uninitialized (i.e., the beams are pruned randomly), and when the parameters are trained using the system from PL15 (i.e., the beams are pruned based on model scores). The plot in Figure 8 shows that DPD generates more annotated logical forms (76%) compared to beam search (53.7%), even when beam search is guided heuristically by learned parameters. Note that DPD is an exact algorithm and does not require a heuristic. 0 1 2 3 4 5 6 7 logical form size 0 0.2K 0.4K 0.6K 0.8K 1.0K count logical forms denotations Figure 7: The median of the number of logical forms (dashed) and denotations (solid) as the formula size increases. The space of logical forms grows much faster than the space of denotations. 0 5000 10000 15000 20000 25000 number of final LFs produced 0.0 0.2 0.4 0.6 0.8 annotated LFs coverage ⋆ Figure 8: The number of annotated logical forms that can be generated by beam search, both uninitialized (dashed) and initialized (solid), increases with the number of candidates generated (controlled by beam size), but lacks behind DPD (star). 6.3 Fictitious worlds We now explore how fictitious worlds divide the set of logical forms into equivalence classes, and how the annotated denotations on the chosen worlds help us prune spurious logical forms. Equivalence classes. Using 30 fictitious worlds per example, we produce an average of 1,237 equivalence classes. One possible concern with using a limited number of fictitious worlds is that we may fail to distinguish some pairs of nonequivalent logical forms. We verify the equivalence classes against the ones computed using 300 fictitious worlds. We found that only 5% of the logical forms are split from the original equivalence classes. Ideal Annotation. After computing equivalence classes, we choose a subset W ′ of 5 fictitious worlds to be annotated based on the informationtheoretic objective. For each of the 252 examples with an annotated logical form z∗, we use the denotation tuple t∗= Jz∗KW ′ as the annotated answers on the chosen fictitious worlds. We are able to rule out 98.7% of the spurious equivalence classes and 98.3% of spurious logical forms. Furthermore, we are able to filter down to just one equivalence class in 32.7% of the examples, and 30 at most three equivalence classes in 51.3% of the examples. If we choose 5 fictitious worlds randomly instead of maximizing information gain, then the above statistics are 22.6% and 36.5%, respectively. When more than one equivalence classes remain, usually only one class is a dominant class with many equivalent logical forms, while other classes are small and contain logical forms with unusual patterns (e.g., z5 in Figure 1). The average size of the correct equivalence class is ≈3,000 with the standard deviation of ≈8,000. Because we have an expressive logical language, there are fundamentally many equivalent ways of computing the same quantity. Crowdsourced Annotation. Data from crowdsourcing is more susceptible to errors. 
From the 252 annotated examples, we use 177 examples where at least two crowd workers agree on the answer of the original world w. When the crowdsourced data is used to rule out spurious logical forms, the entire set Z of consistent logical forms is pruned out in 11.3% of the examples, and the correct equivalent class is removed in 9% of the examples. These issues are due to annotation errors, inconsistent data (e.g., having date of death before birth date), and different interpretations of the question on the fictitious worlds. For the remaining examples, we are able to prune out 92.1% of spurious logical forms (or 92.6% of spurious equivalence classes). To prevent the entire Z from being pruned, we can relax our assumption and keep logical forms z that disagree with the annotation in at most 1 fictitious world. The number of times Z is pruned out is reduced to 3%, but the number of spurious logical forms pruned also decreases to 78%. 7 Related Work and Discussion This work evolved from a long tradition of learning executable semantic parsers, initially from annotated logical forms (Zelle and Mooney, 1996; Kate et al., 2005; Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007; Kwiatkowski et al., 2010), but more recently from denotations (Clarke et al., 2010; Liang et al., 2011; Berant et al., 2013; Kwiatkowski et al., 2013; Pasupat and Liang, 2015). A central challenge in learning from denotations is finding consistent logical forms (those that execute to a given denotation). As Kwiatkowski et al. (2013) and Berant and Liang (2014) both noted, a chief difficulty with executable semantic parsing is the “schema mismatch”—words in the utterance do not map cleanly onto the predicates in the logical form. This mismatch is especially pronounced in the WIKITABLEQUESTIONS of Pasupat and Liang (2015). In the second example of Figure 6, “how long” is realized by a logical form that computes a difference between two dates. The ramification of this mismatch is that finding consistent logical forms cannot solely proceed from the language side. This paper is about using annotated denotations to drive the search over logical forms. This takes us into the realm of program induction, where the goal is to infer a program (logical form) from input-output pairs (for us, world-denotation pairs). Here, previous work has also leveraged the idea of dynamic programming on denotations (Lau et al., 2003; Liang et al., 2010; Gulwani, 2011), though for more constrained spaces of programs. Continuing the program analogy, generating fictitious worlds is similar in spirit to fuzz testing for generating new test cases (Miller et al., 1990), but the goal there is coverage in a single program rather than identifying the correct (equivalence class of) programs. This connection can potentially improve the flow of ideas between the two fields. Finally, the effectiveness of dynamic programming on denotations relies on having a manageable set of denotations. For more complex logical forms and larger knowledge graphs, there are many possible angles worth exploring: performing abstract interpretation to collapse denotations into equivalence classes (Cousot and Cousot, 1977), relaxing the notion of getting the correct denotation (Steinhardt and Liang, 2015), or working in a continuous space and relying on gradient descent (Guu et al., 2015; Neelakantan et al., 2016; Yin et al., 2016; Reed and de Freitas, 2016). This paper, by virtue of exact dynamic programming, sets the standard. Acknowledgments. 
We gratefully acknowledge the support of the Google Natural Language Understanding Focused Program. In addition, we would like to thank anonymous reviewers for their helpful comments. Reproducibility. Code and experiments for this paper are available on the CodaLab platform at https://worksheets.codalab.org/worksheets/ 0x47cc64d9c8ba4a878807c7c35bb22a42/. 31 References Y. Artzi and L. Zettlemoyer. 2013. UW SPF: The University of Washington semantic parsing framework. arXiv preprint arXiv:1311.3011. J. Berant and P. Liang. 2014. Semantic parsing via paraphrasing. In Association for Computational Linguistics (ACL). J. Berant, A. Chou, R. Frostig, and P. Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP). J. Clarke, D. Goldwasser, M. Chang, and D. Roth. 2010. Driving semantic parsing from the world’s response. In Computational Natural Language Learning (CoNLL), pages 18–27. P. Cousot and R. Cousot. 1977. Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. In Principles of Programming Languages (POPL), pages 238–252. S. Gulwani. 2011. Automating string processing in spreadsheets using input-output examples. ACM SIGPLAN Notices, 46(1):317–330. K. Guu, J. Miller, and P. Liang. 2015. Traversing knowledge graphs in vector space. In Empirical Methods in Natural Language Processing (EMNLP). R. J. Kate, Y. W. Wong, and R. J. Mooney. 2005. Learning to transform natural to formal languages. In Association for the Advancement of Artificial Intelligence (AAAI), pages 1062–1068. T. Kwiatkowski, L. Zettlemoyer, S. Goldwater, and M. Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification. In Empirical Methods in Natural Language Processing (EMNLP), pages 1223–1233. T. Kwiatkowski, E. Choi, Y. Artzi, and L. Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Empirical Methods in Natural Language Processing (EMNLP). T. Lau, S. Wolfman, P. Domingos, and D. S. Weld. 2003. Programming by demonstration using version space algebra. Machine Learning, 53:111–156. P. Liang, M. I. Jordan, and D. Klein. 2010. Learning programs: A hierarchical Bayesian approach. In International Conference on Machine Learning (ICML), pages 639–646. P. Liang, M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In Association for Computational Linguistics (ACL), pages 590–599. P. Liang. 2013. Lambda dependency-based compositional semantics. arXiv. B. P. Miller, L. Fredriksen, and B. So. 1990. An empirical study of the reliability of UNIX utilities. Communications of the ACM, 33(12):32–44. A. Neelakantan, Q. V. Le, and I. Sutskever. 2016. Neural programmer: Inducing latent programs with gradient descent. In International Conference on Learning Representations (ICLR). P. Pasupat and P. Liang. 2015. Compositional semantic parsing on semi-structured tables. In Association for Computational Linguistics (ACL). S. Reed and N. de Freitas. 2016. Neural programmerinterpreters. In International Conference on Learning Representations (ICLR). B. Settles. 2010. Active learning literature survey. Technical report, University of Wisconsin, Madison. J. Steinhardt and P. Liang. 2015. Learning with relaxed supervision. In Advances in Neural Information Processing Systems (NIPS). P. Yin, Z. Lu, H. Li, and B. Kao. 2016. Neural enquirer: Learning to query tables with natural language. arXiv. M. 
Zelle and R. J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Association for the Advancement of Artificial Intelligence (AAAI), pages 1050–1055. L. S. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Uncertainty in Artificial Intelligence (UAI), pages 658–666. L. S. Zettlemoyer and M. Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL), pages 678–687.
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 311–321, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Connotation Frames: A Data-Driven Investigation Hannah Rashkin Sameer Singh Yejin Choi Computer Science & Engineering University of Washington {hrashkin, sameer, yejin}@cs.washington.edu Abstract Through a particular choice of a predicate (e.g., “x violated y”), a writer can subtly connote a range of implied sentiment and presupposed facts about the entities x and y: (1) writer’s perspective: projecting x as an “antagonist” and y as a “victim”, (2) entities’ perspective: y probably dislikes x, (3) effect: something bad happened to y, (4) value: y is something valuable, and (5) mental state: y is distressed by the event. We introduce connotation frames as a representation formalism to organize these rich dimensions of connotation using typed relations. First, we investigate the feasibility of obtaining connotative labels through crowdsourcing experiments. We then present models for predicting the connotation frames of verb predicates based on their distributional word representations and the interplay between different types of connotative relations. Empirical results confirm that connotation frames can be induced from various data sources that reflect how language is used in context. We conclude with analytical results that show the potential use of connotation frames for analyzing subtle biases in online news media. 1 Introduction People commonly express their opinions through subtle and nuanced language (Thomas et al., 2006; Somasundaran and Wiebe, 2010). Often, through seemingly objective statements, the writer can influence the readers’ judgments toward an event and their participants. Even by choosing a particular predicate, the writer can indicate rich connotative information about the entities that interact through the predicate. More specifically, through a simple Writer: “Agent violates theme.” Writer + = + Reader = = Agent Theme P(w ! agent) P(w ! theme) P(agent ! theme) E(agent) E(theme) V(theme) V(agent) E(theme) S(agent) S(agent) E(agent) Perspective: the writer is sympathetic towards the theme Perspective: the writer portrays the agent as being antagonistic Value: the theme must be valuable + Effect: the agent is not really affected by the violation State: the theme will be unhappy State: the agent feels indifferent Effect: the theme has been hurt Value: not clear if agent is valuable Figure 1: An example connotation frame of “violate” as a set of typed relations: perspective P(x →y), effect E(x), value V(x), and mental state S(x). statement such as “x violated y”, the writer can convey: (1) writer’s perspective: the writer is projecting x as an “antagonist” and y as a “victim”, eliciting negative perspective from readers toward x (i.e., blaming x) and positive perspective toward y (i.e., sympathetic or supportive toward y). (2) entities’ perspective: y most likely feels negatively toward x as a result of being violated. (3) effect: something bad happened to y. (4) value: y is something valuable, since it does not make sense to violate something worthless. In other words, the writer is presupposing a positive value of y as a fact. 
(5) mental state: y is most likely unhappy about the outcome.1 1To be more precise, y is most likely in a negative state 311 Verb Subset of Typed Relations Example Sentences L/R suffer P(w →agent) = + P(w →theme) = − P(agent →theme) = − E(agent) = − V(agent) = + S(agent) = − The story begins in Illinois in 1987, when a 17year-old girl suffered a botched abortion. R guard P(w →agent) = + P(w →theme) = + P(agent →theme) = + E(theme) = + V(theme) = + S(theme) = + In August, marshals guarded 25 clinics in 18 cities. L uphold P(w →theme) = + P(agent →theme) = + E(theme) = + V(theme) = + A hearing is scheduled to make a decision on whether to uphold the clinic’s suspension. R Table 1: Example typed relations (perspective P(x →y), effect E(x), value V(x), and mental state S(x)). Not all typed relations are shown due to space constraints. The example sentences demonstrate the usage of the predicates in left [L] or right [R] leaning news sources. Even though the writer might not explicitly state any of the interpretation [1-5] above, the readers will be able interpret these intentions as a part of their comprehension. In this paper, we present an empirical study of how to represent and induce the connotative interpretations that can be drawn from a verb predicate, as illustrated above. We introduce connotation frames as a representation framework to organize the rich dimensions of the implied sentiment and presupposed facts. Figure 1 shows an example of a connotation frame for the predicate violate. We define four different typed relations: P(x →y) for perspective of x towards y, E(x) for effect on x, V(x) for value of x, and S(x) for mental state of x. These relationships can all be either positive (+), neutral (=), or negative (-). Our work is the first study to investigate frames as a representation formalism for connotative meanings. This contrasts with previous computational studies and resource development for frame semantics, where the primary focus was almost exclusively on denotational meanings of language (Baker et al., 1998; Palmer et al., 2005). Our formalism draws inspirations from the earlier work of frame semantics, however, in that we investigate the connection between a word and the related world knowledge associated with the word (Fillmore, 1976), which is essential for the readers to interpret many layers of the implied sentiment and presupposed value judgments. We also build upon the extensive amount of literature in sentiment analysis (Pang and Lee, 2008; Liu and Zhang, 2012), especially the recent emerging efforts on implied sentiment analysis (Feng et al., 2013; Greene and Resnik, 2009), entityentity sentiment inference (Wiebe and Deng, 2014), assuming it is an entity that can have a mental state. opinion role induction (Wiegand and Ruppenhofer, 2015) and effect analysis (Choi and Wiebe, 2014). However, our work is the first to organize various aspects of the connotative information into coherent frames. More concretely, our contributions are threefold: (1) a new formalism, model, and annotated dataset for studying connotation frames from large-scale natural language data and statistics, (2) new datadriven insights into the dynamics among different typed relations within each frame, and (3) an analytic study showing the potential use of connotation frames for analyzing subtle biases in journalism. The rest of the paper is organized as follows: in §2, we provide the definitions and data-driven insights for connotation frames. 
In §3, we introduce models for inducing the connotation frames, followed by empirical results, annotation studies, and analysis on news media in §4. We discuss related work in §5 and conclude in §6. 2 Connotation Frame Given a predicate v, we define a connotation frame F(v) as a collection of typed relations and their polarity assignments: (i) perspective Pv(ai →aj): a directed sentiment from the entity ai to the entity aj, (ii) value Vv(ai): whether ai is presupposed to be valuable, (iii) effect Ev(ai): whether the event denoted by the predicate v is good or bad for the entity ai, and (iv) mental state Sv(ai): the likely mental state of the entity ai as a result of the event. We assume that each typed relation can have one of the three connotative polarities ∈{+, −, =}, i.e., positive, negative, or neutral. Our goal in this paper is to focus on the general connotation of the predicate considered out of context. We leave contextual interpretation of connotation as future work. Table 1 shows examples of connotation frame 312 Verb x’s role P(w →·) Left-leaning Sources Right-leaning Sources accuse agent Putin, Progressives, Limbaugh, Gingrich activist, U.S., protestor, Chavez theme + official, rival, administration, leader Romney, Iran, Gingrich, regime attack agent McCain, Trump, Limbaugh Obama, campaign, Biden, Israel theme + Gingrich, Obama, policy citizen, Zimmerman criticize agent Ugandans, rival, Romney, Tyson Britain, passage, Obama, Maddow theme + Obama, Allen, Cameron, Congress Pelosi, Romey, GOP, Republicans Table 2: Media Bias in Connotation Frames: Obama, for example, is portrayed as someone who attacks or criticizes others by the right-leaning sources, whereas the left-leaning sources portray Obama as the victim of harsh acts like “attack” and “criticize”. relations for the verbs suffer, guard, and uphold, along with example sentences. For instance, for the verb suffer, the writer is likely to have a positive perspective towards the agent (e.g., being supportive or sympathetic toward the “17-year-old girl” in the example shown on the right) and a negative perspective towards the theme (e.g., being negative towards ‘botched abortion”). 2.1 Data-driven Motivation Since the meaning of language is ultimately contextual, the exact connotation will vary depending on the context of each utterance. Nonetheless, there still are common shifts or biases in the connotative polarities, as we found from two data-driven analyses. First, we looked at words from the Subjectivity Lexicon (Wilson et al., 2005) that are used in the argument positions of a small selection of predicates in Google Syntactic N-grams (Goldberg and Orwant, 2013). For this analysis, we assumed that the word in the subject position is the agent while the object is the theme. We found 64% of the words in the agent position of suffer are positive, and 94% of the words in the theme position are negative, which is consistent with the polarities of the writer’s perspective towards these arguments, as shown in Table 1. For guard, 57% of the subjects and 76% of the objects are positive, and in the case of uphold, 56% of the subjects and 72% of the objects are positive. We also investigated how media bias can potentially be analyzed through connotation frames. 
From the Stream Corpus 2014 dataset (KBA, 2014), we selected all articles from news outlets with known political biases,2 and compared how they 2The articles come from 30 news sources indicated by others as exhibiting liberal or conservative leanings (Mitchell et al., 2014; Center for Media and Democracy, 2013; Center for Media and Democracy, 2012; HWC Library, 2011) use polarised words such as “accuse”, “attack”, and “criticize” differently in light of P(w →agent) and P(w →theme) relations of the connotation frames. Table 2 shows interesting contrasts. Obama, for example, is portrayed as someone who attacks or criticizes others according to the rightleaning sources, whereas the left-leaning sources portray Obama as the victim of harsh acts like “attack” or “criticize”.3 Furthermore, by knowing the perspective relationships P(w →ai) associated with a predicate, we can make predictions about how the left-leaning and right-leaning sources feel about specific people or issues. For example, because left-leaning sources frequently use McCain, Trump, and Limbaugh in the subject position of attack, we might predict that these sources have a negative sentiment towards these entities. 2.2 Dynamics between Typed Relations Given a predicate, the polarity assignments of typed relations are interdependent. For example, if the writer feels positively towards the agent but negatively towards the theme, then it is likely that the agent and the theme do not feel positively towards each other. This insight is related to that of Wiebe and Deng (2014), but differs in that the polarities are predicate-specific and do not rely on knowledge of prior sentiment towards the arguments. This and other possible interdependencies are summarized in Table 3. These interdependencies serve as general guidelines of what properties we expect to depend on one another, especially in the case where the polarities are non-neutral. We will promote these internal consistencies in our factor graph model (§3) as soft constraints. There also exist other interdependencies that we will use to simplify our task. First, the directed 3That is, even if someone truly deserves criticism from Obama, left-learning sources would choose slightly different wordings to avoid a potentially harsh cast on Obama. 313 Perspective Triad: If A is positive towards B, and B is positive towards C, then we expect A is also positive towards C. Similar dynamics hold for the negative case. Pw→a1 = ¬ (Pw→a2 ⊕Pa1→a2) Perspective – Effect: If a predicate has a positive effect on the Subject, then we expect that the interaction between the Subject and Object was positive. Similar dynamics hold for the negative case and for other perspective relations. Ea1 = Pa2→a1 Perspective – Value: If A is presupposed as valuable, then we expect that the writer also views A positively. Similar dynamics hold for the negative case. Va1 = Pw→a1 Effect – Mental State: If the predicate has a positive effect on A, then we expect that A will gain a positive mental state. Similar dynamics hold for the negative case. Sa1 = Ea1 Table 3: Potential Dynamics among Typed Relations: we propose models that parameterize these dynamics using log-linear models (frame-level model in §3). sentiments between the agent and the theme are likely to be reciprocal, or at least do not directly conflict with + and −simultaneously. Therefore, we assume that P(a1 →a2) = P(a2 →a1) = P(a1 ↔a2), and we only measure for these binary relationships going in one direction. 
In addition, we assume the predicted4 perspective from the reader r to an argument P(r →a) is likely to be the same as the implied perspective from the writer w to the same argument P(w →a). So, we only try to learn the perspective of the writer. Lifting these assumptions will be future work. For simplicity, our model only explores the polarities involving the agent and the theme roles. We will assume that these roles are correlated to the subject and object positions, and henceforth refer to them as the “Subject” and “Object” of the event. 3 Modeling Connotation Frames Our task is essentially that of lexicon induction (Akkaya et al., 2009; Feng et al., 2013) in that we want to induce the connotation frames of previously unseen verbs. For each predicate, we infer a connotation frame composed of 9 relationship aspects that represent: perspective {P(w →o), P(w →s), P(s →o)}, effect {E(o), E(s)}, value {V(o), V(s)}, and mental state {S(o), S(s)} polarities. We propose two models: an aspect-level model that makes the prediction for each typed relation independently based on the distributional representation of the context in which the predicate appears (§3.1), and a frame-level model that makes the pre4Surely different readers can and will form varying opinions after reading the same text. Here we concern with the most likely perspective of the general audience, as a result of reading the text. Node Meaning Perspective of Writer towards Subject Effect on Subject Value of Subject Mental State of Subject Figure 2: A factor graph for predicting the polarities of the typed relations that define a connotation frame for a given verb predicate. The factor graph also includes unary factors (ψemb), which we left out for brevity. diction over the connotation frame collectively in consideration the dynamics between typed relations (§3.2). 3.1 Aspect-Level Our aspect-level model predicts labels for each of these typed relations separately. As input, we use the 300-dimensional dependency-based word embeddings from Levy and Goldberg (2014). For each aspect, there is a separate MaxEnt (maximum entropy) classifier used to predict the label of that aspect on a given word-embedding, which is treated as a 300 dimensional input vector to the classifier. The MaxEnt classifiers learn their weights using LBFGS on the training data examples with re-weighting of samples to maximize for the best average F1 score. 314 3.2 Frame-Level Next we present a factor graph model (Figure 2) of the connotation frames that parameterize the dynamics between typed relations. Specifically, for each verb predicate,5 the factor graph contains 9 nodes representing the different aspects of the connotation frame. All these variables take polarity values from the set {−, =, +}. We define Yi := {Pwo, Pws, Pso, Eo, Es, Vo, Vs, So, Ss} as the set of relational aspects for the ith verb. The factor graph for Yi, is illustrated in Figure 2, and we will describe the factor potentials in more detail in the rest of this section. The probability of an assignment of polarities to the nodes in Yi is: P(Yi) ∝ψPV(Pws, Vs) ψPV(Pwo, Vo) ψPE(Pso, Es) ψPE(Pso, Eo) ψES(Es, Ss) ψES(Eo, So) ψPT(Pwo, Pws, Pso) Y y∈Yi ψemb(y) Embedding Factors We include unary factors on all nodes to represent the results of the aspect-level classifier. Incorporating this knowledge as factors, as opposed to fixing the variables as observed, affords us the flexibility of representing noise in the labels as soft evidence. 
The potential function ψemb is a log-linear function of a feature vector f, which is a one-hot feature vector representing the polarity of a node (+,−,or =). For example, with the node representing the value of the object (Vo): ψemb(Vo) = ewVo·f(Vo) The potential ψemb is defined similarly for the other 8 remaining nodes. All weights were learned using stochastic gradient descent (SGD) over training data. Interdependency Factors We include interdependency factors to promote the properties defined by the dynamics between relations (§2.2). The potentials for Perspective Triad, Perspective-Value, Perspective-Effect, and Effect-State Relationships (ψPT, ψPV, ψPE, ψES respectively) are all defined using log-linear functions of one-hot feature vectors that encode the combination of polarities of the neighboring nodes. The potential for ψPT is therefore: ψPT(Pwo, Pws, Pso) = ewP T ·f(Pwo,Pws,Pso) 5We consider only verb predicates here. And we define the potentials for ψPV, ψPE, and ψES for subject nodes as: ψPV(Pws, Vs) = ewP V,s·f(Pws,Vs) ψPE(Pso, Es) = ewP E,s·f(Pso,Es) ψES(Es, Ss) = ewES,s·f(Es,Ss) and we define the potentials for the object nodes similarly. As with the unary seed factors, weights were learned using SGD over training data. Belief Propagation We use belief propagation to induce the connotation frames of previously unseen verbs. In the belief propagation algorithm, messages are iteratively passed between the nodes to their neighboring factors and vice versa. Each message µ, containing a scalar for each value x ∈{−, 0, +}, is defined from each node v to a neighboring factor a as follows: µv→a(x) ∝ Y a∗∈N(v) a µa∗→v(x) and from each factor a to a neighboring node v as: µa→v ∝ X x′,x′v=x ψ(x′) Y v∗∈N(a) v µv∗→a(x′ v∗) At the conclusion of message passing, the probability of a specific polarity associated with node v being equal to x is proportional to Q a∈N(v) µa→v(x). Our factor graph does not contain any loops, so we are able to perform exact inference. 4 Experiments We first describe crowd-sourced annotations (§4.1), then present the empirical results of predicting connotation frames (§4.2), and conclude with qualitative analysis of a large corpus (§4.3). 4.1 Data and Crowdsourcing In order to understand how humans interpret connotation frames, we designed an Amazon Mechanical Turk (AMT) annotation study. We gathered a set of transitive verbs commonly used in the New York Times corpus (Sandhaus, 2008), selecting the 2400 verbs that are used more than 200 times in the corpus. Of these, AMT workers annotated the 1000 most frequently used verbs. Annotation Design In a pilot annotation experiment, we found that annotators have difficulty thinking about subtle connotative polarities when shown predicates without any context. Therefore, 315 we designed the AMT task to provide a generic context as follows. We first split each verb predicate into 5 separate tasks that each gave workers a different generic sentence using the verb. To create generic sentences, we used Google Syntactic N-grams (Goldberg and Orwant, 2013) to come up with a frequently seen Subject-Verb-Object tuple which served as a simple three-word sentence with generic arguments. For each of the 5 sentences, we asked 3 annotators to answer questions like “How do you think the Subject feels about the event described in this sentence?” In total, each verb has 15 annotations aggregated over 5 different generic sentences containing the verb. 
In order to help the annotators, some of the questions also allowed annotators to choose sentiment using additional classes for “positive or neutral” or “negative or neutral” for when they were less confident but still felt like a sentiment might exist. When taking inter-annotator agreement, we count “positive or neutral” as agreeing with either “positive” or “neutral” classes. Annotator agreement Table 4 shows agreements and data statistics. The non-conflicting (NC) agreement only counts opposite polarities as disagreement.6 From this study, we can see that non-expert annotators are able to see these sort of relationships based on their understanding of how language is used. From the NC agreement, we see that annotators do not frequently choose completely opposite polarities, indicating that even when they disagree, their disagreements are based on the degree of connotations rather than the polarity itself. The average Krippendorff alpha for all of the questions posed to the workers is 0.25, indicating stronger than random agreement. Considering the subtlety of the implicit sentiments that we are asking them to annotate, it is reasonable that some annotators will pick up on more nuances than others. Overall, the percent agreement is encouraging that the connotative relationships are visible to human annotators. Aggregating Annotations We aggregated over crowdsourced labels (fifteen annotations per verb) to create a polarity label for each aspect of a verb.7 Final distributions of the aggregated labels are 6Annotators were asked yes/no questions related to Value, so this does not have a corresponding NC agreement score. 7 We take the average to obtain scalar value between [−1., 1.] for each aspect of a verb’s connotation frame. For simplicity, we cutoff the ranges of negative, neutral and positive polarities as [−1, −0.25), [−0.25, 0.25] and (0.25, 1], respectively. Aspect % Agreement Distribution Strict NC % + % P(w →o) 75.6 95.6 36.6 4.6 P(w →s) 76.1 95.5 47.1 7.9 P(s →o) 70.4 91.9 45.8 5.0 E(o) 52.3 94.6 50.3 20.24 E(s) 53.5 96.5 45.1 4.7 V(o) 65.2 78.64 2.7 V(s) 71.9 90.32 1.4 S(o) 79.9 98.0 12.8 14.5 S(s) 70.4 92.5 50.72 8.6 Table 4: Label Statistics: % Agreement refers to pairwise inter-annotator agreement. The strict agreement counts agreement over 3 classes (“positive or neutral” was counted as agreeing with either + or neutral), while non-conflicting (NC) agreement also allows agreements between neutral and -/+ (no direct conflicts). Distribution shows the final class distribution of -/+ labels created by averaging annotations. included in the right-hand columns of Table 4. Notably, the distributions are skewed toward positive and neutral labels. The most skewed connotation frame aspect is the value V(x) which tends to be positive, especially for the subject argument. This makes some intuitive sense since, as the subject actively causes the predicate event to occur, they most likely have some intrinsic potential to be valuable. An example of a verb where the subject was labelled as not valuable is “contaminate”. In the most generic case, the writer is using contaminate to frame the subject as being worthless (and even harmful) with regards to the other event participants. For example, in the sentence “his touch contaminated the food,” it is clear that the writer considers “his touch” to be of negative value in the context of how it impacts the rest of the event. 
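As a concrete illustration of this aggregation step, the sketch below averages the per-annotation scores for one aspect of one verb and applies the ±0.25 cutoffs from the footnote; mapping the "positive or neutral" and "negative or neutral" answers to ±0.5 is an assumed encoding for illustration, not necessarily the one actually used.

def aggregate_annotations(scores, cutoff=0.25):
    # scores: the fifteen numeric annotations (3 workers x 5 generic sentences)
    # for one aspect of one verb, e.g. with the assumed encoding
    # {-1.0: negative, -0.5: negative-or-neutral, 0.0: neutral,
    #  0.5: positive-or-neutral, 1.0: positive}.
    mean = sum(scores) / len(scores)
    if mean > cutoff:
        return "+"
    if mean < -cutoff:
        return "-"
    return "="

# aggregate_annotations([1.0, 0.5, 0.0] * 5) -> "+"   (mean 0.5)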
4.2 Connotation Frame Prediction Using our crowdsourced labels, we randomly divided the annotated verbs into training, dev, and held-out test sets of equal size (300 verbs each). For evaluation we measured average accuracy and F1 score over the 9 different Connotation Frame relationship types for which we have annotations: P(w →o), P(w →s), P(s →o), V(o), V(s), E(o), E(s), S(o), and S(s). Baselines To show the non-trivial challenge of learning Connotation Frames, we include a simple majority-class baselines. The MAJORITY classifier assigns each of the 9 relationships the label of the majority of that relationship type found in the training data. Some of these relationships (in particular, the Value of subject/object) have skewed 316 distributions, so we expect this classifier to achieve a much higher accuracy than random but a much lower overall F1 score. Additionally, we add a GRAPH PROP baseline that is comparable to algorithms like graph propagation or label propagation which are often used for (sentiment) lexicon induction. We use a factor graph with nodes representing the polarity of each typed relation for each verb. Binary factors connect nodes representing a particular type of relation for two similar verbs (e.g. P(w →o) for verbs persuade and convince). These binary factors have hand-tuned potentials that are proportional to the cosine similarity of the verbs’ embeddings, encouraging similar verbs to have the same polarity for the various relational aspects. We use words in the training data as the seed set and use loopy belief propagation to propagate polarities from known nodes to the unknown relationships. Finally, we use a 3-NEAREST NEIGHBOR baseline that labels relationships for a verb based on the predicate’s 300-dimensional word embedding representation, using the same embeddings as in our aspect-level. 3-NEAREST NEIGHBOR labels each verb using the polarities of the three closest verbs found in the training set. The most similar verbs are determined using the cosine similarity between word embeddings. Results As shown in Table 5, aspect-level and frame-level models consistently outperform all three baselines — MAJORITY, 3-NN, GRAPH PROP in the development set across the different types of relationships. In particular, the improved F1 scores show that these models are able to perform better across all three classes of labels even in the most skewed cases. The frame-level model also frequently improves the F1 scores of the labels from what they were in the aspect-level model. The summarized comparison of the classifiers’ performance test set is shown in Table 6. As with the development set, aspect-level and frame-level are both able to outperform the baselines. Furthermore, the frame-level formulation is able to make improvement over the results of the aspectlevel classification, indicating that the modelling of inter-dependencies between relationships did help correct some of the mistakes made. One point of interest about the frame-level results is whether the learned weights over the consistency factors match our initial intuitions about interdependencies between relationships. The weights (a) wemb for P(s →o) (b) P(w →o): (c) P(w →o): = (d) P(w →o): + Figure 3: Learned weights of embedding factor for the perspective of subject to object and the weights the perspective triad (PT) factor. Red is for weights that are more positive, whereas blue are more negative. 
learned in our algorithm do tell us something interesting about the degree to which these interdependencies are actually found in our data. We show the heat maps for some of the learned weights in Figure 3. In 3a, we show the weights of one of the embedding factors, and how the polarities are more strongly weighted when they match the relation-level output. In the rest of the figure, we show the weights for the other perspective relationships when P(w →o) is negative (3b), neutral (3c), and positive (3d), respectively. Based on the expected interdependencies, when P(w →o) : −, the model should favor P(w →s) ̸= P(s →o) and when P(w →o) : +, the model should favor P(w →s) = P(s →o). Our model does, in fact, learn a similar trend, with slightly higher weights along these two diagonals in the maps 3b and 3d. Interestingly, when P(w →o) is neutral, weights slightly prefer for the other two perspectives to resemble one another, but with highest weights being when other perspectives are also neutral. 4.3 Analysis of a Large News Corpus Using the connotation frame, we present measured implied sentiment in online journalism. Data From the Stream Corpus (KBA, 2014), we select 70 million news articles. We extract subject-verb-object relations for this subset using the direct dependencies between noun phrases 317 Aspect Algorithm Acc. Avg F1 P(w →o) Majority 56.52 24.07 Graph Prop 59.53 50.20 3-nn 62.88 47.93 Aspect-Level 67.56 56.18 Frame-Level 67.56 56.18 P(w →s) Majority 49.83 22.17 Graph Prop 52.84 42.93 3-nn 55.18 45.88 Aspect-Level 60.54 60.72 Frame-Level 61.87 63.07 P(s →o) Majority 49.83 22.17 Graph Prop 52.17 46.57 3-nn 56.52 52.94 Aspect-Level 63.21 61.70 Frame-Level 63.88 62.56 E(o) Majority 48.83 21.87 Graph Prop 54.85 51.40 3-nn 55.18 51.53 Aspect-Level 64.21 63.63 Frame-Level 65.22 64.67 E(s) Majority 49.83 22.17 Graph Prop 52.17 35.56 3-nn 54.85 42.63 Aspect-Level 62.54 53.82 Frame-Level 63.88 56.81 V(o) Majority 79.60 29.55 Graph Prop 71.91 35.10 3-nn 76.25 39.09 Aspect-Level 75.92 45.45 Frame-Level 76.25 48.13 V(s) Majority 89.30 31.45 Graph Prop 84.62 38.82 3-nn 85.62 38.45 Aspect-Level 87.96 48.06 Frame-Level 87.96 48.06 S(o) Majority 71.91 27.89 Graph Prop 69.90 55.57 3-nn 72.91 59.26 Aspect-Level 81.61 72.85 Frame-Level 81.61 72.85 S(s) Majority 50.84 22.47 Graph Prop 48.83 35.40 3-nn 54.85 45.51 Aspect-Level 61.54 53.88 Frame-Level 61.54 53.88 Table 5: Detailed breakdown of results on the development set using accuracy and average F1 over the three class labels (+,-,=). Algorithm Acc. Avg F1 Graph Prop 58.81 41.46 3-nn 63.71 47.30 Aspect-Level 67.93 53.17 Frame-Level 68.26 53.50 Table 6: Performance on the test set. Results are averaged over the different aspects. 1.0 0.5 0.0 0.5 1.0 Democrat 1.0 0.5 0.0 0.5 1.0 Republican lawsuits funding budget deal tax proposal abortion elephant mccain nancy pelosi delaying tactics backlash bias mitt romney nra the proposal judicial nominees state department bill clinton their principles aid big business obamacare market renomination gay marriage tradition health care bill business the pipeline tax cuts principles small businesses veto threat boehner the dream act george w. 
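A sketch of how such entity-level scores can be aggregated from the extracted tuples is shown below; it assumes a lexicon dict mapping each verb to a P(s → o) score in [-1, 1] and a simple substring test for the subject entity, both illustrative simplifications rather than the exact procedure.

from collections import defaultdict

def entity_sentiment(tuples, perspective_lexicon, subject_entity):
    # tuples: iterable of (subject, verb, object, count) relations from the corpus.
    # perspective_lexicon: verb -> P(s -> o) polarity score in [-1, 1].
    totals, weights = defaultdict(float), defaultdict(float)
    for subj, verb, obj, count in tuples:
        if subject_entity in subj.lower() and verb in perspective_lexicon:
            totals[obj] += count * perspective_lexicon[verb]   # weight by tuple frequency
            weights[obj] += count
    return {obj: totals[obj] / weights[obj] for obj in totals}

# Scores from entity_sentiment(tuples, lexicon, "democrat") and
# entity_sentiment(tuples, lexicon, "republican") give the two axes of a plot like Figure 4.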
bush idea businesses budget proposal tax increases propositions palin barack obama the allegations environment constitution kerry tax deal jobs bills medicare gop leadership health floor vote unions budget cuts gun control Figure 4: Average sentiment of Democrats and Republicans (as subjects) to selected nouns (as their objects), aggregated over a large corpus using the learned lexicon (§4.2). The line indicates identical sentiments, i.e. Republicans are more positive towards the nouns that are above the line. and verbs as identified by the BBN Serif system, obtaining 1.2 billion unique tuples of the form (url,subject,verb,object,count).We also extracted subject-verb-object tuples from news articles found in the Annotated English Gigaword Corpus (Napoles et al., 2012), which contains nearly 10 million articles. From the Gigaword corpus we extracted a further 120 million unique tuples. Estimating Entity Polarities Using connotation frames, we can also measure entity-to-entity sentiment at a large scale. Figure 4, for example, presents the polarity of entities “Democrats” and “Republicans” towards a selected set of nouns, by computing the average estimated polarity (using our lexicon) over triples where one of these entities appears as part of the subject (e.g. “Democrats” or “Republican party”). Apart from nouns that both entities are positive (“business”, “constitution”) or negative (“the allegations”,“veto threat”) towards, we can also see interesting examples in which Democrats feel more positively (below the line: “nancy pelosi”, “unions”, “gun control”, etc.) and ones where Republicans are more positive (“the pipeline”, “gop leaders”, “budget cuts”, etc.) Also, both entities are neutral towards “idea” and “the proposal”, which probably owes to the fact that ideas or proposals can be good or bad for either entity depending on the context. 318 5 Related Work Most prior work on sentiment lexicons focused on the overall polarity of words without taking into account their semantic arguments (Wilson et al., 2005; Baccianella et al., 2010; Wiebe et al., 2005; Velikovich et al., 2010; Kaji and Kitsuregawa, 2007; Kamps et al., 2004; Takamura et al., 2005; Adreevskaia and Bergler, 2006). Several recent studies began exploring more specific and nuanced aspects of sentiment such as connotation (Feng et al., 2013), good and bad effects (Choi and Wiebe, 2014), and evoked sentiment (Mohammad and Turney, 2010). Drawing inspirations from them, we present connotation frames as a unifying representation framework to encode the rich dimensions of implied sentiment, presupposed value judgements, and effect evaluation, and propose a factor graph formulation that captures the interplay among different types of connotation relations. Goyal et al. (2010a; 2010b) investigated how characters (protagonists, villains, victims) in children’s stories are affected by certain predicates, which is related to the effect relations studied in this work. While Klenner et al. (2014) similarly investigated the relation between the polarity of the verbs and arguments, our work introduces new perspective types and proposes a unified representation and inference model. Wiegand and Ruppenhofer (2015) also looked at perspective-based relationships induced by verb predicates with a focus on opinion roles. Building on this concept, our framework also incorporates information about the perspectives’ polarities as well as information about other typed relations. 
There have been growing interests for modeling framing (Greene and Resnik, 2009; Hasan and Ng, 2013), biased language (Recasens et al., 2013) and ideology detection (Yano et al., 2010). All these tasks are relatively less studied, and we hope our connotation frame lexicon will be useful for them. Sentiment inference rules have been explored by the recent work of Wiebe and Deng (2014) and Deng and Wiebe (2014). In contrast, we make a novel conceptual connection between inferred sentiments and frame semantics, organized as connotation frames, and present a unified model that integrates different aspects of the connotation frames. Finally, in a broader sense, what we study as connotation frames draws a connection to schema and script theory (Schank and Abelson, 1975). Unlike most prior work that focused on directly observable actions (Chambers and Jurafsky, 2009; Frermann et al., 2014; Bethard et al., 2008), we focus on implied sentiments that are framed by predicate verbs. 6 Conclusion In this paper, we presented a novel system of connotative frames that define a set of implied sentiment and presupposed facts for a predicate. Our work also empirically explores different methods of inducing and modelling these connotation frames, incorporating the interplay between relations within frames. Our work suggests new research avenues on learning connotation frames, and their applications to deeper understanding of social and political discourse. All the learned connotation frames and annotations will be shared at http://homes.cs.washington. edu/˜hrashkin/connframe.html. Acknowledgements We thank the anonymous reviewers for many insightful comments. We also thank members of UW NLP for discussions and support. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1256082. The work is also supported in part by NSF grants IIS1408287, IIS-1524371 and gifts by Google and Facebook. References Alina Adreevskaia and Sabine Bergler. 2006. Mining wordnet for fuzzy sentiment: Sentiment tag extraction from wordnet glosses. In 11th Conference of the European Chapter of the Association for Computational Linguistics, pages 209–216. Cem Akkaya, Janyce Wiebe, and Rada Mihalcea. 2009. Subjectivity word sense disambiguation. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, volume 2, pages 190–199. Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2010. Sentiwordnet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC’10). Collin F Baker, Charles J Fillmore, and John B Lowe. 1998. The berkeley framenet project. In Proceedings of the 17th international conference on Computational linguistics, volume 1, pages 86–90. 319 Steven Bethard, William J Corvey, Sara Klingenstein, and James H Martin. 2008. Building a corpus of temporal-causal structure. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC’08). Center for Media and Democracy. 2012. Sourcewatch: Conservative news outlets. http://www.sourcewatch.org/index. php/Conservative_news_outlets. Center for Media and Democracy. 2013. Sourcewatch: Liberal news outlets. http: //www.sourcewatch.org/index.php/ Liberal_news_outlets. Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised learning of narrative schemas and their participants. 
In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, volume 2 of ACL ’09, pages 602–610. Yoonjung Choi and Janyce Wiebe. 2014. +/effectwordnet: Sense-level lexicon acquisition for opinion inference. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1181–1191. Association for Computational Linguistics, October. Lingjia Deng and Janyce Wiebe. 2014. Sentiment propagation via implicature constraints. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics (EACL). Song Feng, Jun Seok Kang, Polina Kuznetsova, and Yejin Choi. 2013. Connotation lexicon: A dash of sentiment beneath the surface meaning. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL), volume 1, pages 1774–1784. Association for Computational Linguistics. Charles J. Fillmore. 1976. Frame semantics and the nature of language. In In Annals of the New York Academy of Sciences: Conference on the Origin and Development of Language and Speech, volume 280, pages 2032. Lea Frermann, Ivan Titov, and Manfred Pinkal. 2014. A hierarchical bayesian model for unsupervised induction of script knowledge. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics. Yoav Goldberg and Jon Orwant. 2013. A dataset of syntactic-ngrams over time from a very large corpus of english books. In Second Joint Conference on Lexical and Computational Semantics (*SEM), volume 1, pages 241–247, June. Amit Goyal, Ellen Riloff, and Hal Daum´e, III. 2010a. Automatically producing plot unit representations for narrative text. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 77–86. Amit Goyal, Ellen Riloff, Hal Daum´e III, and Nathan Gilbert. 2010b. Toward plot units: Automatic affect state analysis. In Proceedings of HLT/NAACL Workshop on Computational Approaches to Analysis and Generation of Emotion in Text (CAET). Stephan Greene and Philip Resnik. 2009. More than words: Syntactic packaging and implicit sentiment. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 503–511. Kazi Saidul Hasan and Vincent Ng. 2013. Frame semantics for stance classification. Proceedings of the Seventeenth Conference on Computational Natural Language Learning (CONLL), pages 124–132. HWC Library. 2011. Consider the Source: A Resource Guide to Liberal, Conservative, and Nonpartisan Periodicals. www.ccc.edu/colleges/ washington/departments/Documents/ PeriodicalsPov.pdf. Compiled by HWC Librarians in January 2011. Nobuhiro Kaji and Masaru Kitsuregawa. 2007. Building lexicon for sentiment analysis from massive collection of html documents. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 1075–1083. Jaap Kamps, Maarten Marx, Robert J Mokken, and Maarten De Rijke. 2004. Using wordnet to measure semantic orientations of adjectives. In Proceedings of the Fourth International Conference on Language Resources and Evaluation(LREC’04), volume 4, pages 1115–1118. TREC KBA. 2014. Knowledge Base Acceleration Stream Corpus. 
http://trec-kba.org/ kba-stream-corpus-2014.shtml. Manfred Klenner, Michael Amsler, and Nora Hollenstein. 2014. Verb polarity frames: a new resource and its application in target-specific polarity classification. In Proceedings of KONVENS 2014, pages 106–115. Omer Levy and Yoav Goldberg. 2014. Dependencybased word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL), pages 302–308. Bing Liu and Lei Zhang. 2012. A survey of opinion mining and sentiment analysis. In Mining text data, pages 415–463. Springer. 320 Amy Mitchell, Jeffrey Gottfried, Jocelyn Kiley, and Katerina Eva Matsa. 2014. Political Polarization & Media Habits. www.journalism.org/2014/10/21/ political-polarization-media-habits/. Produced by Pew Research Center in October, 2014. Saif M Mohammad and Peter D Turney. 2010. Emotions evoked by common words and phrases: Using mechanical turk to create an emotion lexicon. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 26–34. Association for Computational Linguistics. Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated gigaword. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction, pages 95–100. Association for Computational Linguistics. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational linguistics, 31(1):71–106. Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and trends in information retrieval, 2(1-2):1–135. Marta Recasens, Cristian Danescu-Niculescu-Mizil, and Dan Jurafsky. 2013. Linguistic models for analyzing and detecting biased language. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL), pages 1650– 1659. Evan Sandhaus. 2008. The new york times annotated corpus. Linguistic Data Consortium, Philadelphia, 6(12):e26752. Roger C Schank and Robert P Abelson. 1975. Scripts, plans, and knowledge. Yale University. Swapna Somasundaran and Janyce Wiebe. 2010. Recognizing stances in ideological on-line debates. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 116–124. Association for Computational Linguistics. Hiroya Takamura, Takashi Inui, and Manabu Okumura. 2005. Extracting semantic orientations of words using spin model. In Proceedings of 43rd Annual Meeting of the Association for Computational Linguistics (ACL). Matt Thomas, Bo Pang, and Lillian Lee. 2006. Get out the vote: Determining support or opposition from Congressional floor-debate transcripts. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 327–335. Leonid Velikovich, Sasha Blair-Goldensohn, Kerry Hannan, and Ryan McDonald. 2010. The viability of web-derived polarity lexicons. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10, pages 777– 785. Janyce Wiebe and Lingjia Deng. 2014. An account of opinion implicatures. CoRR, abs/1404.6491. Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language resources and evaluation, 39(2-3):165–210. Michael Wiegand and Josef Ruppenhofer. 2015. Opinion holder and target extraction based on the induction of verbal categories. 
Proceedings of the 2015 Conference on Computational Natural Language Learning (CoNLL), page 215. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phraselevel sentiment analysis. In Proceedings of the conference on human language technology and empirical methods in natural language processing, pages 347–354. Tae Yano, Philip Resnik, and Noah A. Smith. 2010. Shedding (a thousand points of) light on biased language. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk, CSLDAMT ’10, pages 152–158. 321
2016
30
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 322–332, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Bi-Transferring Deep Neural Networks for Domain Adaptation Guangyou Zhou1, Zhiwen Xie1, Jimmy Xiangji Huang2, and Tingting He1 1 School of Computer, Central China Normal University, Wuhan 430079, China 2 School of Information Technology, York University, Toronto, Canada {gyzhou,xiezhiwen,tthe}@mail.ccnu.edu.cn [email protected] Abstract Sentiment classification aims to automatically predict sentiment polarity (e.g., positive or negative) of user generated sentiment data (e.g., reviews, blogs). Due to the mismatch among different domains, a sentiment classifier trained in one domain may not work well when directly applied to other domains. Thus, domain adaptation for sentiment classification algorithms are highly desirable to reduce the domain discrepancy and manual labeling costs. To address the above challenge, we propose a novel domain adaptation method, called Bi-Transferring Deep Neural Networks (BTDNNs). The proposed BTDNNs attempts to transfer the source domain examples to the target domain, and also transfer the target domain examples to the source domain. The linear transformation of BTDNNs ensures the feasibility of transferring between domains, and the distribution consistency between the transferred domain and the desirable domain is constrained with a linear data reconstruction manner. As a result, the transferred source domain is supervised and follows similar distribution as the target domain. Therefore, any supervised method can be used on the transferred source domain to train a classifier for sentiment classification in a target domain. We conduct experiments on a benchmark composed of reviews of 4 types of Amazon products. Experimental results show that our proposed approach significantly outperforms the several baseline methods, and achieves an accuracy which is competitive with the state-of-the-art method for domain adaptation. 1 Introduction With the rise of social media (e.g., blogs and social networks etc.), more and more user generated sentiment data have been shared on the Web (Pang et al., 2002; Pang and Lee, 2008; Liu, 2012; Zhou et al., 2011). They exist in the form of user reviews on shopping or opinion sites, in posts of blogs/questions or customer feedbacks. This has created a surge of research in sentiment classification (or sentiment analysis), which aims to automatically determine the sentiment polarity (e.g., positive or negative) of user generated sentiment data (e.g., reviews, blogs, questions). Machine learning algorithms have been proved promising and widely used for sentiment classification (Pang et al., 2002; Pang and Lee, 2008; Liu, 2012). However, the performance of these models relies on manually labeled training data. In many practical cases, we may have plentiful labeled data in the source domain, but very few or no labeled data in the target domain with a different data distribution. For example, we may have many labeled books reviews, but we are interested in detecting the polarity of electronics reviews. Reviews for different products might have different vocabularies, thus classifiers trained on one domain often fail to produce satisfactory results when transferring to another domain. 
This has motivated much research on cross-domain (domain adaptation) sentiment classification which transfers the knowledge from the source domain to the target domain (Thomas et al., 2006; Snyder and Barzilay, 2007; Blitzer et al., 2006; Blitzer et al., 2007; Daume III, 2007; Li and Zong, 2008; Li et al., 2009; Pan et al., 2010; Kumar et al., 2010; Glorot et al., 2011; Chen et al., 2011a; Chen et al., 2012; Li et al., 2012; Xia et al., 2013a; Li et al., 2013; Zhou et al., 2015a; Zhuang et al., 2015). Depending on whether the labeled data are available for the target domain, cross-domain sen322 timent classification can be divided into two categories: supervised domain adaptation and unsupervised domain adaptation. In scenario of supervised domain adaptation, labeled data is available in the target domain but the number is usually too small to train a good sentiment classifier, while in unsupervised domain adaptation only unlabeled data is available in the target domain, which is more challenging. This work focuses on the unsupervised domain adaptation problem of which the essence is how to employ the unlabeled data of target domain to guide the model learning from the labeled source domain. The fundamental challenge of cross-domain sentiment classification lies in that the source domain and the target domain have different data distribution. Recent work has investigated several techniques for alleviating the domain discrepancy: instance-weight adaptation (Huang et al., 2007; Jiang and Zhai, 2007; Li and Zong, 2008; Mansour et al., 2009; Dredze et al., 2010; Chen et al., 2011b; Chen et al., 2011a; Chen et al., 2012; Li et al., 2013; Xia et al., 2013a) and feature representation adaptation (Thomas et al., 2006; Snyder and Barzilay, 2007; Blitzer et al., 2006; Blitzer et al., 2007; Li et al., 2009; Pan et al., 2010; Zhou et al., 2015a; Zhuang et al., 2015). The first kind of methods assume that some training data in the source domain are very useful for the target domain and these data can be used to train models for the target domain after re-weighting. In contrast, feature representation approaches attempt to develop an adaptive feature representation that is effective in reducing the difference between domains. Recently, some efforts have been initiated on learning robust feature representations with deep neural networks (DNNs) in the context of crossdomain sentiment classification (Glorot et al., 2011; Chen et al., 2012). Glorot et al. (2011) proposed to learn robust feature representations with stacked denoising auto-encoders (SDAs) (Vincent et al., 2008). Denoising auto-encoders are onelayer neural networks that are optimized to reconstruct input data from partial and random corruption. These denoisers can be stacked into deep learning architectures. The outputs of their intermediate layers are then used as input features for SVMs (Fan et al., 2008). Chen et al. (2012) proposed a marginalized SDA (mSDA) that addressed the two crucial limitations of SDAs: high … … … … …   Labeled Source Domain   Unlabeled Target Domain   Transferred Source Domain Figure 1: The framework of Bi-transferring Deep Neural Networks (BTDNNs). Through BTDNNs, a source domain example can be transferred to the target domain where it can be reconstructed by the target domain examples, and vice versa. computational cost and lack of scalability to highdimensional features. 
However, these methods learn the unified domain-invariable feature representations by combining the source domain data and that of the target domain data together, which cannot well characterize the domain-specific features as well as the commonality of domains. To this end, we propose a Bi-Transferring Deep Neural Networks (BTDNNs) which can transfer the source domain examples to the target domain and also transfer the target domain examples to the source domain, as shown in Figure 1. In BTDNNs, the linear transformation makes the feasibility of transferring between domains, and the linear data reconstruction manner ensures the distribution consistency between the transferred domain and the desirable domain. Specifically, our BTDNNs has one common encoder fc, two decoders gs and gt which can map an example to the source domain and the target domain respectively. As a result, the source domain can be transferred to the target domain along with its sentiment label, and any supervised method can be used on the transferred source domain to train a classifier for sentiment classification in the target domain, as the transferred source domain data share the similar distribution as the target domain. Experimental results show that the proposed approach significantly outperforms several baselines, and achieves an accuracy which is competitive with the state-of323 the-art method for cross-domain sentiment classification. The remainder of this paper is organized as follows. Section 2 introduces the related work. Section 3 describes our proposed bi-transferring deep neural networks (BTDNNs). Section 4 presents the experimental results. In Section 5, we conclude with ideas for future research. 2 Related Work Domain adaptation aims to generalize a classifier that is trained on a source domain, for which typically plenty of training data is available, to a target domain, for which data is scarce. Cross-domain generalization is important in many real applications, the key challenge is that data in the source and the target domain are often distributed differently. Recent work has investigated several techniques for alleviating the difference in the context of cross-domain sentiment classification task. Blitzer et al. (2007) proposed a structural correspondence learning (SCL) algorithm to train a crossdomain sentiment classifier. SCL is motivated by a multi-task learning algorithm, alternating structural optimization (ASO), proposed by Ando and Zhang (2005). Given labeled data from a source domain and unlabeled data from both source and target domains, SCL attempts to model the relationship between “pivot features” and “non-pivot features”. Pan et al. (2010) proposed a spectral feature alignment (SFA) algorithm to align the domain-specific words from the source and target domains into meaningful clusters, with the help of domain-independent words as a bridge. In the way, the cluster can be used to reduce the gap between domain-specific words of two domains. Dredze et al. (2010) combined classifier weights using confidence-weighted learning, which represented the covariance of the weight vectors. Xia et al. (2013a) proposed an instance selection and instance weighting method for cross-domain sentiment classification. After that, Xia et al. (2013b) proposed a feature ensemble plus sample selection method to further improve the sentiment classification adaptation. Zhou et al. (Zhou et al., 2015b) proposed to bridge the domain gap with the help of topical correspondence. Li et al. 
(2009) proposed to transfer common lexical knowledge across domains via matrix factorization techniques. Zhou et al. (2015a) further improved the matrix factorization techniques via a regularization term on the pivots and domain-specific words, ensuring that the pivots capture only correspondence aspects and the domain-specific words capture only individual aspects. Li and Zong (2008) proposed the multi-label consensus training approach which combined several base classifiers trained with SCL. Chen et al. (2012) proposed a domain adaptation algorithm based on sample and feature selection. Li et al. (2013) proposed an active learning algorithm for cross-domain sentiment classification. Xiao and Guo (2013) investigated the online active domain adaptation problem in a novel but practical setting where the labels can be acquired with a lower cost in the source domain than in the target domain. There has also been research in exploring careful structuring of features or prior knowledge for domain adaptation. Daum´e III (2007) proposed a kernel-mapping function which maps both source and target domains data to a high-dimensional feature space so that data points from the same domain are twice as similar as those form different domains. Dai et al. (2008) proposed translated learning which used a language model to link the class labels to the features in the source domain, which in turn is translated to the features in the target domain. Xia et al. (2010) proposed a POSbased ensemble model for cross-domain sentiment classification. Xiao et al. (2013) proposed a supervised representation learning method to tackle domain adaptation by inducing predictive latent features based on supervised word clustering. He et al. (2011) employed a joint sentiment-topic model for cross-domain sentiment classification; Bollegala et al. (2011) used a sentiment sensitive thesaurus to perform cross-domain sentiment classification. Xiao and Guo (2015) proposed to learn distributed state representations for cross-domain sequence predictions. Recently, some efforts have been initiated on learning robust feature representations with deep neural networks (DNNs) for cross-domain natural language processing. Glorot et al. (2011) and Chen et al. (2012) proposed to use deep learning for cross-domain sentiment classification. Most recently, Yang and Eisenstein (2014) proposed an unsupervised domain adaptation method with marginalized structured dropout. Furthermore, Yang and Eisenstein (2015) proposed to use feature embeddings with metadata domain attributes for multi-domain adaptation. In this paper, 324 our proposed approach BTDNNs tackles the domain discrepancy with a linear data construction manner, which can effectively model the domainspecific features as well as the commonality of domains. Deep learning techniques have also been proposed to heterogeneous transfer learning (Socher et al., 2013; Zhou et al., 2014; Kan et al., 2015; Long et al., 2015), where knowledge is transferred from one modality to another based on the correspondences at hand. Our proposed framework can be considered as a more general case, where the bias of the correspondences between the source and target domains is constrained with a linear data reconstruction manner. Besides, other researchers also explore the DNNs for sentiment analysis (Socher et al., 2011; Tang et al., 2014; Tang et al., 2015; Zhai and Zhang, 2016; Chandar et al., 2014). However, all these methods focus on the sentiment analysis without considering the domain discrepancy. 
In this paper, we focus on domain adaptation for sentiment classification with a different model formulation and task definition. 3 Bi-Transferring Deep Neural Networks 3.1 Problem Definition Given two domains Xs and Xt, where Xs and Xt are referred to a source domain and a target domain, respectively. Suppose we have a set of labeled sentiment examples as well as some unlabeled examples in the source domain Xs with size ns, containing terms from a vocabulary V with size m. The examples in the source domain Xs can be represented as a term-document matrix Xs = {xs 1, · · · , xs ns} ∈Rm×ns, with their sentiment labels ys = {ys 1, · · · , ys ns}, where xs i ∈Rm is the feature representation of the i-th source domain example with a tf-idf weight of the corresponding term and ys i ∈{+1, −1} is its sentiment label.1 Similarly, suppose we have a set of unlabeled examples in the target domain Xt with size nt, containing terms from a vocabulary V with size m. The examples in target domain Xt can also be represented as a term-document matrix Xt = {x(t) 1 , · · · , x(t) nt } ∈Rm×nt, where each example denotes a tf-idf weight of the corresponding term. The task of cross-domain sentiment classification is to learn a robust classifier to predict the polarity 1We use upper case and lower case characters represent the matrices and vectors respectively throughout the paper. of unseen examples from Xt. Note that we only consider one source domain and one target domain in this paper. However, our proposed algorithm is a general framework and can be easily adapted to multi-domain problems. 3.2 Basic Auto-Encoder An auto-encoder is an unsupervised neural network which is trained to reconstruct a given input vector from its latent representation (Bengio et al., 2007). It can be seen as a special neural network with three layers: the input layer, the latent layer, and the reconstruction layer. An autoencoder contains two parts: encoder and decoder. The encoder, denoted as f, attempts to map an input vector x ∈Rm×1 to the latent representation z ∈Rk×1, in which k is the number of neurons in the latent layer. Usually, f is a nonlinear function as follows: z = f(x) = se(Wx + b) (1) where se is the activation function of the encoder, whose input is called the activation function, which is usually non-linear, such as sigmoid function or tanh function is a linear transform parameter, and b ∈Rk×1 is the basis. The decoder, denoted as g, tries to map the latent representation z back to a reconstruction: g(z) = sd(W′z + b′) (2) Similarly, sd is the activation function of the decoder with parameters {W′, b′}. The training objective is the determination of parameters {W, b} and {W′, b′} that minimize the average reconstruction errors: L = min W,b,W′,b′ N X i=1 xi −g(f(xi)) 2 2 (3) where xi represents the i-th one of N training examples. Parameters {W, b} and {W′, b′} can be optimized by stochastic or mini-batch gradient descent. By minimizing the reconstruction error, we require the latent features should be able to reconstruct the original input as much as possible. 3.3 Bi-Transferring Deep Neural Networks The traditional auto-encoder in subsection 3.2 attempts to reconstruct the input itself, which is usually used for feature representation learning. Nevertheless, our proposed bi-transferring deep neural networks (BTDNNs) attempts to transfer 325 examples between domains to deal with the domain discrepancy, with the inspiration of DNNs in computer vision (Kan et al., 2015). 
Motivated by the successful application in computer vision (Kan et al., 2015), we construct the architecture of BTDNNs with one encoder fe, and two decoders, gs and gt shown in Figure 1, which can transform an input example to the source domain and the target domain respectively.2 Specifically, the encoder fc tries to map an input example x into the latent feature representation z, which is common to both the source and target domains as follows: z = fc(x) = se(Wcx + bc) (4) The decoder gs attempts to map the latent representation to the source domain, and the decoder gt attempts to map the latent representation to the target domain as follows: gs(x) = sd(Wsz + bs) (5) gt(x) = sd(Wtz + bt) (6) where se(·) and sd(·) are the element-wise nonlinear activation function, e.g., sigmoid or tanh function, Wc and bc are the parameters for encoder fc, Ws and bs are the parameters for decoder gs, Wt and bt are the parameters for decoder gt. Following the literature (Kan et al., 2015), we attempt to map the source domain examples Xs to the source domain (e.g., Xs itself) with an encoder fc and a decoder gs. Similarly, given an encoder fc and a decoder gt, we aim to map the source domain examples Xs to the target domain. Although it is unknown what the mapped examples look like, they are expected to follow the similar distribution as the target domain. This kind of distribution consistency between two domains can be characterized from the perspective of a linear data reconstruction manner. The two domains Xs and Xt can be generally reconstructed from each other, and their distances can be used to measure the domain discrepancy. Following the literature (He et al., 2012), BTDNNs attempt to represent a transferred source domain gt(fc(xs i)) with a linear reconstruction function from the target domain: ∥gt(fc(xs i)) −Xtβt i)∥2 2 (7) 2In the implementation, we use the stacked denoising auto-encoders (SDA) (Vincent et al., 2008) to model the source and the target domain data. where βt i is the coefficients for the reconstruction of transferred source domain examples. Equation (7) enforces that each example of transferred domain is consistent with that of target domain, which ensures that the transferred source domain follows the similar distribution as the target domain. The overall objective for the examples of source domain Xs can be formulated as below: min fc,gs,gt,βs i ∥Xs −gs(fc(Xs))∥2 2 + ∥gt(fc(Xs)) −XtBt)∥2 2 s.t. ∥βt i∥2 2 < τ, Bt = [βt 1, βt 2, · · · , βt ns]T ∈Rns×nt where gs(fc(Xs) = [gs(fc(xs 1)), · · · , gs(fc(xs nt))] and gt(fc(Xs) = [gt(fc(xt 1)), gt(fc(xt ns))]. The same simplifications are used hereinafter if without misunderstanding. Similarly, for the examples of target domain Xt, with encoder fc and decoder gt they should be mapped on the target domain. Also, with encoder fc and decoder gs they should be mapped to the source domain, where they can be reconstructed by the source domain examples from the point of view of a linear data reconstruction manner (He et al., 2012), so as to ensure a similar distribution between the source domain and the transferred target domain. The overall objective for the examples of target domain Xt can be written as: min fc,gs,gt,βt i ∥Xt −gt(fc(Xt))∥2 2 + ∥gs(fc(Xt)) −XsBs)∥2 2 s.t. 
∥βs j ∥2 2 < τ, Bs = [βs 1, βs 2, · · · , βs nt]T ∈Rnt×ns Combining the above equations, the overall objective of BTDNNs can be formulated as follows: min fc,gs,gt,Bs,Bt ∥Xs −gs(fc(Xs))∥2 2 + ∥gt(fc(Xs)) −XtBt)∥2 2 + ∥Xt −gt(fc(Xt))∥2 2 + ∥gs(fc(Xt)) −XsBs)∥2 2 (8) + γ ns X i=1 ∥βt i∥2 2 + nt X j=1 ∥βs j ∥2 2  where γ is a regularization parameter controlling the amount of shrinkage. With the optimization of equation (8), our proposed approach BTDNNs can map any input examples to the source and target domains respectively. Especially, the source domain examples Xs can transferred to the target domain along with their sentiment labels. The transferred source domain data gt(fs(Xs)) share the similar distribution as the target domain, so any supervised method can be used to learn a classifier for sentiment classification in the target domain. In this paper, a linear support vector machine (SVM) (Fan et al., 2008) is employed for building sentiment classification models. 326 3.4 Learning Algorithm Note that the optimization problem in equation (8) is not convex in variables {fc, gs, gt, Bs, Bt} together. However, when considering one variable at a time, the cost function turns out to be convex. For example, given {gs, gt, Bs, Bt}, the cost function is a convex function w.r.t. fc. Therefore, although we cannot expect to get a global minimum of the above problem, we shall develop a simple and efficient optimization algorithm via alternative iterations. 3.4.1 Optimize {fc, gs, gt} given {Bs, Bt} When Bs and Bt are fixed, the objective function in equation (8) can be formulated as: min fc,gs,gt ∥Xs −gs(fc(Xs))∥2 2 + ∥gt(fc(Xs)) −¯Xt)∥2 2 + ∥Xt −gt(fc(Xt))∥2 2 + ∥gs(fc(Xt)) −¯Xs)∥2 2 (9) where ¯Xs = XsBs and ¯Xt = XtBt. Equation (9) can easily optimized by gradient descent as the basic auto-encoder (Bengio et al., 2007). 3.4.2 Optimize {Bs, Bt} given {fc, gs, gt} When {fc, gs, gt} are fixed, the objective function in equation (8) can be written as: min Bs,Bt ∥Gt −XtBt)∥2 2 + ∥Gs −XsBs)∥2 2 + γ ns X i=1 ∥βt i∥2 2 + nt X j=1 ∥βs j ∥2 2  where gs(fc(Xt)) = Gs = [gs 1, · · · , gs nt] and gt(fc(Xs)) = Gt = [gt 1, · · · , gt ns]. Since Gs and Gt are independent with each other, so they can be optimized independently. The optimization of Gs with other variables fixed is a least squares problem with ℓ2-regularization. It can also be decomposed into nt optimization problems, with each corresponding to one βs j and can be solved in parallel: min βs j ∥gs j −Xsβs j ∥2 2 + γ∥βs j ∥2 2 (10) for j = 1, 2, · · · , nt. It is a standard ℓ2-regularized least squares problem and the solution is: βs j = XT s Xs + γI −1XT s gs j (11) where I is an identity matrix with all entries equal to 1. Similarly, The optimization of Gt can also be decomposed into ns ℓ2-regularized least squares problems and the solution of each one is: βt i = XT t Xt + γI −1XT t gt i (12) for i = 1, 2, · · · , ns. We repeat the above equations until fc, gs, gt, Bs and Bt converge or a maximum number of iterations is exceeded. 3.5 Algorithm Complexity In this section, we analyze the computational complexity of the learning algorithm described in equations (9), (11) and (12). Besides expressing the complexity of the algorithm using big O notation, we also count the number of arithmetic operations to provide more details about the run time. Computational complexity of learning matrix Gs is O(m × ns × k) per iteration. Similarly, for each iteration, learning matrices Gt takes O(m × nt × k). 
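Before completing the complexity accounting, the closed-form step of equations (11)–(12) — the part that dominates the run time, as noted just below — can be made concrete. The sketch assumes dense NumPy matrices and toy dimensions, and stacks the coefficient vectors as columns (the transpose of the B layout above); it is not the authors' implementation.

```python
import numpy as np

def ridge_coefficients(X, G, gamma):
    """Closed-form solution of min_beta ||g - X beta||^2 + gamma * ||beta||^2
    for every column g of G at once: B = (X^T X + gamma I)^{-1} X^T G."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + gamma * np.eye(n), X.T @ G)

rng = np.random.RandomState(0)
Xt = rng.rand(200, 50)   # toy target-domain term-document matrix (m=200, nt=50)
Gt = rng.rand(200, 30)   # toy transferred source examples g_t(f_c(X_s)), ns=30
Bt = ridge_coefficients(Xt, Gt, gamma=0.1)
print(Bt.shape)          # (50, 30): one reconstruction-coefficient vector per example
```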
Learning matrices Bs and Bt takes O(m2 × ns) and O(m2 × nt) operations per iteration. In real applications, we have k ≪m. Therefore, the overall complexity of the algorithm, dominated by computation of matrices Bs and Bt, is O(m2 × n) where n = max(ns, nt). 4 Experiments 4.1 Data Set Domain adaptation for sentiment classification has been widely studied in the NLP community. A large majority experiments are performed on the benchmark made of reviews of Amazon products gathered by Blitzer et al. (2006). This data set contains 4 different domains: Book (B), DVDs (D), Electronics (E) and Kitchen (K). For simplicity and comparability, we follow the convention of (Blitzer et al., 2006; Pan et al., 2010; Glorot et al., 2011; Xiao and Guo, 2013) and only consider the binary classification problem whether a review is positive (higher than 3 stars) or negative (3 stars or lower). There are 1000 positive and 1000 negative reviews for each domain, as well as approximately 4,000 unlabeled reviews (varying slightly between domains). The positive and negative reviews are also exactly balanced. Following the literature (Pan et al., 2010), we can construct 12 cross-domain sentiment classification tasks: D →B, E →B, K →B, K →E, D → E, B →E, B →D, K →D, E →D, B →K, D → K, E →K, where the word before an arrow corresponds with the source domain and the word after an arrow corresponds with the target domain. To be fair to other algorithms that we compare to, we use the raw bag-of-words unigram/bigram features as their input and pre-process with tf-idf (Blitzer et al., 2006). Table 1 presents the statistics of the data set. 327 B->D E->D K->D 71 72 73 74 75 76 77 78 79 80 81 82 Accuracy (%) baseline SCL MCT SFA PJNMF SDA mSDA TLDA BTDNNs D->B E->B K->B 71 72 73 74 75 76 77 78 79 80 81 Accuracy (%) baseline SCL MCT SFA PJNMF SDA mSDA TLDA BTDNNs B->E D->E K->E 72 74 76 78 80 82 84 86 Accuracy (%) baseline SCL MCT SFA PJNMF SDA mSDA TLDA BTDNNs B->K D->K E->K 74 76 78 80 82 84 86 88 Accuracy (%) baseline SCL MCT SFA PJNMF SDA mSDA TLDA BTDNNs Figure 2: Average results for cross-domain sentiment classification on the Amazon product benchmark of 4 domains. Domain #Train #Test #Unlab. % Neg. Books 1600 400 4465 50% DVDs 1600 400 5945 50% Electronics 1600 400 5681 50% Kitchen 1600 400 3586 50% Table 1: Amazon review statistics. This table depicts the number of training, testing and unlabeled reviews for each domain, as well as the portion of negative training reviews of the data set. 4.2 Compared Methods As a baseline method, we train a linear SVM (Fan et al., 2008) on the raw bag-of-words representation of the labeled source domain and test it on the target domain. In the original paper regarding the benchmark data set, Blitzer et al. (2006) adapted Structural Correspondence Learning (SCL) for sentiment analysis. Li and Zong (2008) proposed the Multi-label Consensus Training (MCT) approach which combined several base classifiers trained with SCL. Pan et al. (2010) first used a Spectral Feature Alignment (SFA) algorithm to align words from the source and target domains to help bridge the gap between them. Zhou et al. (2015a) proposed a method called PJNMF, which linked heterogeneous input features with pivots via joint non-negative matrix factorization. Recently, some efforts have been initiated on learning robust feature representations with DNNs for cross-domain sentiment classification. Glorot et al. 
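A rough reproduction of this setup — unigram/bigram bag-of-words features with tf-idf weighting, and one task per ordered (source, target) domain pair — is sketched below. The scikit-learn vectorizer and the toy review strings are assumptions of the sketch; the actual benchmark uses the 2,000 labeled and roughly 4,000 unlabeled reviews per domain listed in Table 1.

```python
from itertools import permutations
from sklearn.feature_extraction.text import TfidfVectorizer

domains = {
    "B": ["a wonderful book", "dull and overlong"],      # toy stand-ins for the
    "D": ["great dvd transfer", "terrible acting"],      # Amazon reviews of each
    "E": ["battery died fast", "crisp screen"],          # of the four domains
    "K": ["sturdy pan", "handle broke quickly"],
}

# One unigram/bigram tf-idf space per source->target task, fit on both domains'
# text so that source and target examples share a vocabulary.
tasks = {}
for src, tgt in permutations(domains, 2):                # the 12 ordered pairs
    vec = TfidfVectorizer(ngram_range=(1, 2))
    vec.fit(domains[src] + domains[tgt])
    tasks[f"{src}->{tgt}"] = (vec.transform(domains[src]),
                              vec.transform(domains[tgt]))

print(sorted(tasks))   # ['B->D', 'B->E', ..., 'K->E']: 12 cross-domain tasks
```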
(2011) first employed stacked Denoising Auto-encoders (SDA) to extract meaningful representation for domain adaptation. Chen et al. (2012) proposed marginalized SDA (mSDA) that addressed the high computational cost and lack of scalability to high-dimensional features. Zhuang et al. (2015) proposed a state-of-the-art method called transfer learning with deep autoencoders (TLDA). For SCL, PJNMF, SDA, mSDA and TLDA, we use the source codes provided by the authors. For SFA and MCT, we re-implement them based on the original papers. The above methods serve as comparisons in our empirical evaluation. For fair comparison, all hyper-parameters are set by 5-fold cross validation on the training set from the source domain.3 For our proposed BTDNNs, the number 3We keep the default value of some of the parameters in SCL and SFA, e.g., the number of stop-words removed and stemming parameters −as they were already tuned for this 328 of hidden neurons is set as 1000, the regularization parameter γ is tuned via 5-fold cross-validation. For SDA, mSDA, TLDA and BTDNNs, we can construct the classifiers for the target domain in two ways. The first way is directly to use the stacking SVM on top of the output of the hidden layer. The second way is to apply the standard SVM to train a classifier for source domain in the embedding space. Then the classifiers is applied to predict sentiment labels for target domain data. For fair comparison with the shallow models, we choose the second way in this paper. Figure 2 shows the accuracy of classification results for all methods and for all source-target domain pairs. We can check that all compared methods achieve the similar performance with the results reported in the original papers. From Figure 2, we can see that our proposed approach BTDNNs outperforms all other eight comparison methods in general. The baseline performs poorly on all the 12 tasks, while the other seven domain adaptation methods, SCL, MCT, SFA, PJNMF, SDA, mSDA and TLDA, consistently outperform the baseline method across all the 12 tasks, which demonstrates that the transferred knowledge from the source domain to the target domain is useful for sentiment classification. Nevertheless, the improvements achieved by these seven methods over the baseline are much smaller than the proposed approach BTDNNs. Surprisingly, we note that the deep learning based methods (SDA, mSDA and TLDA) perform worse than our approach, the reason may be that SDA, mSDA and TLDA learn the unified domaininvariable feature representations by combining the source domain data and that of the target domain data together, which cannot well characterize the domain-specific features as well as the commonality of domains. On the contrary, our proposed BTDNNs ensures the feasibility of transferring between domains, and the distribution consistency between the transferred domain and the desirable domain is constrained with a linear data reconstruction manner. We also conduct significance tests for our proposed approach BTDNNs and the state-of-the-art method (TLDA) using a McNemar paired test for labeling disagreements (Gillick and Cox, 1989). In general, the average result on the 12 sourcetarget domain pairs indicates that the difference benchmark set by the authors. 1.6 1.65 1.7 1.75 1.8 1.85 1.9 1.95 2 1.6 1.65 1.7 1.75 1.8 1.85 1.9 1.95 2 Proxy A-distance on raw input Proxy A-distance on BTDNNs EK BD DE DK BK BE Figure 3: Proxy A-distance between domains of the Amazon benchmark for the 6 different pairs. 
between BTDNNs and TLDA is mildly significant with p < 0.08. Furthermore, we also conduct the experiments on a much larger industrial-strength data set of 22 domains (Glorot et al., 2011). The preliminary results show that BTDNNs significantly outperforms TLDA (p < 0.05). Therefore, we will report our detailed results and discussions in our future work. 4.3 Domain Divergence In this subsection, we look into how similar two domains are to each other. Ben-David et al. (2006) showed that the A-distance as a measure of how different between the two domains. They hypothesized that it should be difficult to discriminate between the source and target domains in order to have a good transfer between them. In practice, computing the exact A-distance is impossible and one has to compute a proxy. Similar to (Glorot et al., 2011), the proxy for the A-distance is then defined as 2(1 −2ϵ), where ϵ is the generalization error of a linear SVM classifier trained on the binary classification problem to distinguish inputs between the two domains. Figure 3 presents the results for each pair of domains. Surprisingly, the distance is increased with the help of new feature representations, e.g., distinguishing between domains becomes easier with the BTDNNs features. We explain this effect through the fact that BTDNNs can ensure the feasibility of transferring between domains, and the distribution consistency between the transferred domain and the desirable domain is constrained with a linear data reconstruction manner, which can learn a generally better representations for the input data. This helps both tasks, distinguishing between domains and sentiment classification 329 (e.g., in the book domain BTDNNs might interpolate the feature “exciting” from “boring”, both are not particularly relevant for sentiment classification but might help distinguish the review from the Electronic domain.). 5 Conclusions and Future Work In this paper, we propose a novel Bi-Transferring Deep Neural Networks (BTDNNs) for crossdomain sentiment classification. The proposed BTDNNs attempts to transfer the source domain examples to the target domain, and also transfer the target domain examples to the source domain. The linear transformation of BTDNNs ensures the feasibility of transferring between domains, and the distribution consistency between the transferred domain and the desirable domain is constrained with a linear data reconstruction manner. Experimental results show that BTDNNs significantly outperforms the several baselines, and achieves an accuracy which is competitive with the state-of-the-art method for sentiment classification adaptation. There are some ways in which this research could be continued. First, since deep learning may obtain better generalization on large-scale data sets (Bengio, 2009), a straightforward path of the future research is to apply the proposed BTDNNs for domain adaptation on a much larger industrial-strength data set of 22 domains (Glorot et al., 2011). Second, we will try to investigate the use of the proposed approach for other kinds of data set, such as 20 newsgroups and Reuters21578 (Li et al., 2012; Zhuang et al., 2013). Acknowledgments This work was supported by the National Natural Science Foundation of China (No. 61303180 and No. 61573163), the Fundamental Research Funds for the Central Universities (No. CCNU15ZD003 and No. CCNU16A02024), and also supported by a Discovery grant from the Natural Sciences and Engineering Research Council (NSERC) of Canada and an NSERC CREATE award. 
We thank the anonymous reviewers for their insightful comments. References Rie Kubota Ando and Tong Zhang. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. J. Mach. Learn. Res., 6:1817–1853. Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. 2006. Analysis of representations for domain adaptation. In NIPS, pages 137–144. Yoshua Bengio, Pascal Lamblin, Dan Popovici, Hugo Larochelle, Universite De Montreal, and Montreal Quebec. 2007. Greedy layer-wise training of deep networks. In NIPS, pages 153–160. Yoshua Bengio. 2009. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In EMNLP, pages 120–128. John Blitzer, M. Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: domain adaptation for sentiment classification. In EMNLP, pages 120–128. Danushka Bollegala, David Weir, and John Carroll. 2011. Using multiple sources to construct a sentiment sensitive thesaurus for cross-domain sentiment classification. In ACL, pages 132–141. Sarath Chandar, Stanislas Lauly, Hugo Larochelle, Mitesh M Khapra, Balaraman Ravindran, Vikas Raykar, and Amrita Saha. 2014. An autoencoder approach to learning bilingual word representations. In NIPS, pages 1–9. Minmin Chen, John Blitzer, and Kilian Weinberger. 2011a. Co-training for domain adaptation. In NIPS, pages 1–9. Minmin Chen, Kilian Weinberger, and Yixin Chen. 2011b. Automatic feature decomposition for single view co-training. In ICML, pages 953–960. Minmin Chen, Zhixiang Xu, Kilian Weinberger, and Fei Sha. 2012. Marginalized denoising autoencoders for domain adaptation. In ICML, pages 767– 774. W. Dai, Y. Chen, G. Xue, Q. Yang, and Y. Yu. 2008. Translated learning: transfer learning across different feature spaces. In NIPS, pages 353–360. Hal Daume III. 2007. Frustratingly easy domain adaptation. In ACL, pages 256–263. Mark Dredze, Alex Kulesza, and Koby Crammer. 2010. Multi-domain learning by confidenceweighted parameter combination. Machine Learning, 79(12):123–149. R. Fan, Chang K., Hsieh C., Wang X., and Lin C. 2008. Liblinear: A library for large linear classification. J. Mach. Learn. Res., 9:1871–1874. 330 L. Gillick and S. Cox. 1989. Some statistical issues in the comparison of speech recoginition algorithms. In ICASSP, pages 532–535. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In ICML, pages 513–520. Yulan He, Chenghua Lin, and Harith Alani. 2011. Automatically extracting polarity-bearing topics for cross-domain sentiment classification. In ACL, pages 123–131. Zhanying He, Chun Chen, Bu, Jiajun, Can Wang, Lijun Zhang, Deng Cai, and Xiaofei He. 2012. Document summarization based on data reconstruction. In AAAI, pages 620–626. J. Huang, A. Smola, A. Gretton, K. Bordwardt, and B. Scholkopf. 2007. Correcting samples selection bias by unlabeled data. In NIPS, pages 601–608. J. Jiang and C. Zhai. 2007. Instance weighting for domain adaptation in nlp. In ACL, pages 264–271. Meina Kan, Shiguang Shan, and Xilin Chen. 2015. Bi-shifting auto-encoder for unsupervised domain adaptation. In ICCV, pages 3846–3854. Abhishek Kumar, Avishek Saha, and Hal Daum´e III. 2010. A co-regularization based semi-supervised domain adaptation. In NIPS, pages 478–486. Shoushan Li and Chengqing Zong. 2008. 
Multidomain adaption for sentiment classification: Using multiple classifier combining classification. In NLPKE. Tao Li, Vikas Sindhwani, Chris H. Q. Ding, and Yi Zhang 0005. 2009. Knowledge transformation for cross-domain sentiment classification. In SIGIR, pages 716–717. Lianghao Li, Xiaoming Jin, and Mingsheng Long. 2012. Topic correlation analysis for cross-domain text classification. In AAAI, pages 998–1004. Shoushan Li, Yunxia Xue, Zhongqing Wang, and Guodong Zhou. 2013. Active learning for crossdomain sentiment classification. In IJCAI, pages 2127–2133. B. Liu. 2012. Sentiment analysis and opinion mining. Morgan & Claypool Publishers. Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I. Jordan. 2015. Learning transferable features with deep adaptation networks. In ICML, pages 97–105. T. Mansour, M. Mohri, and A. Rostamizadeh. 2009. Domain adatation with multiple sources. In NIPS, pages 264–271. Sinno Jialin Pan, Xiaochuan Ni, Jian-Tao Sun, Qiang Yang, and Zheng Chen. 2010. Cross-domain sentiment classification via spectral feature alignment. In WWW, pages 751–760. Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Found. Trends Inf. Retr., 2(12):1–135. B. Pang, L. Lee, and S. Vaithyanathan. 2002. Thumbs up? sentiment classification using machine learning techniques. In EMNLP, pages 79–86. Benjamin Snyder and Regina Barzilay. 2007. Multiple aspect ranking using the good grief algorithm. In NAACL, pages 300–307. Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In EMNLP, pages 151– 161. Richard Socher, Milind Ganjoo, Christopher D. Manning, and Andrew Y. Ng. 2013. Zero-shot learning through cross-modal transfer. In NIPS, pages 935– 943. Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentimentspecific word embedding for twitter sentiment classification. In ACL, pages 1555–1565. Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classification. In EMNLP, pages 1422– 1432. Matt Thomas, Bo Pang, and Lillian Lee. 2006. Get out the vote: Determining support or opposition from congressional floor-debate transcripts. In EMNLP, pages 327–335. Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoencoders. In ICML, pages 1096–1103. Rui Xia, , and Chengqing Zong. 2010. A pos-based ensemble model for cross-domain sentiment classification. In IJCNLP, pages 614–622. Rui Xia, Xuelei Hu, Jianfeng Lu, and Chengqing Zong. 2013a. Instance selection and instance weighting for cross-domain sentiment classification via pu learning. In IJCAI, pages 2276–2182. Rui Xia, Chengqing Zong, Xuelei Hu, and Cambria Erik. 2013b. Feature ensemble plus sample selection: Domain adaptation for sentiment classification. IEEE Intelligent Systems, 28(3):10–18. Min Xiao and Yuhong Guo. 2013. Online active learning for cost-sensitive domain adaptation. In CoNLL, pages 1–9. 331 Min Xiao and Yuhong Guo. 2015. Learning hidden markov models with distributed state representations for domain adaptation. In ACL, pages 524–529. Min Xiao, Feipeng Zhao, and Yuhong Guo. 2013. Learning latent word representations for domain adaptation using supervised word clustering. In EMNLP, pages 152–162. Yi Yang and Jacob Eisenstein. 2014. Fast easy unsupervised domain adaptation with marginalized structured dropout. 
In ACL, pages 538–544. Yi Yang and Jacob Eisenstein. 2015. Unsupervised multi-domain adaptation with feature embeddings. In NAACL, pages 672–682. Shuangfei Zhai and Zhongfei (Mark) Zhang. 2016. Semi-supervised autoencoder for sentiment analysis. In AAAI, pages 1394–1400. Guangyou Zhou, Li Cai, Jun Zhao, and Kang Liu. 2011. Phrase-based translation model for question retrieval in community question answer archives. In ACL, pages 653–662. Joey Tianyi Zhou, Sinno Jialin Pan, IvorW. Tsang, and Yan Yan. 2014. Hybrid heterogeneous transfer learning through deep learning. In AAAI, pages 2213–2219. Guangyou Zhou, Tingting He, Wensheng Wu, and Xiaohua Hu. 2015a. Linking heterogeneous input features with pivots for domain adaptation. In IJCAI, pages 1419–1425. Guangyou Zhou, Yin Zhou, Xiyue Guo, Xinhui Tu, and Tingting He. 2015b. Cross-domain sentiment classification via topical correspondence transfer. Neurocomputing, 159:298–305. Fuzhen Zhuang, Ping Luo, Peifeng Yin, Qing He, and Zhongzhi Shi. 2013. Concept learning for crossdomain text classification: A general probabilistic framework. In IJCAI, pages 1960–1966. Fuzhen Zhuang, Xiaohu Cheng, Ping Luo, Sinno Jialin Pan, and Qing He. 2015. Supervised representation learning: Transfer learning with deep autoencoders. In IJCAI, pages 4119–4125. 332
2016
31
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 333–343, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Document-level Sentiment Inference with Social, Faction, and Discourse Context Eunsol Choi Hannah Rashkin Luke Zettlemoyer Yejin Choi Computer Science & Engineering University of Washington {eunsol,hrashkin,lsz,yejin}@cs.washington.edu Abstract We present a new approach for documentlevel sentiment inference, where the goal is to predict directed opinions (who feels positively or negatively towards whom) for all entities mentioned in a text. To encourage more complete and consistent predictions, we introduce an ILP that jointly models (1) sentence- and discourse-level sentiment cues, (2) factual evidence about entity factions, and (3) global constraints based on social science theories such as homophily, social balance, and reciprocity. Together, these cues allow for rich inference across groups of entities, including for example that CEOs and the companies they lead are likely to have similar sentiment towards others. We evaluate performance on new, densely labeled data that provides supervision for all pairs, complementing previous work that only labeled pairs mentioned in the same sentence. Experiments demonstrate that the global model outperforms sentence-level baselines, by providing more coherent predictions across sets of related entities. 1 Introduction Documents often present a complex web of facts and opinions that hold among the entities they describe. Consider the international relations story in Figure 1. Representatives from three countries form factions and create a network of sentiment. While some opinions are relatively directly stated (e.g., Russia criticizes Belarus), many others must be inferred based on the factual ties among entities (e.g., Moscow, Gryzlov, and Russia probably share the same sentiment towards other entities) and known social context (e.g., Russia probably Russia criticized Belarus for permitting Georgian President Mikheil Saakhashvili to appear on Belorussian television. “The appearance was an unfriendly step towards Russia,” the speaker of Russian parliament Boris Gryzlov said. . . . Saakhashvili announced Thursday that he did not understand Russia’s claims. Moscow refused to have any business with Georgia’s president after the armed conflict in 2008 . . . Figure 1: Example text excerpt paired with the documentlevel sentiment graph we aim to recover. The graph includes edges with direct textual support (e.g., from Russian to Belarus given the verb “criticized”) as well as ones that must be inferred at the whole-document level (e.g., from Gryzlov to Saakhashvili given the web of relationships and opinions between them, Georgia, Russian, and Belarus). dislikes Saakhashvili since Russia criticized Belarus for supporting him). In this paper, we show that jointly reasoning about all of these factors can provide more complete and consistent documentlevel sentiment predictions. More concretely, we present a global model for document-level entity-to-entity sentiment, i.e., who feels positively (or negatively) towards whom. Our goal is to make exhaustive predictions over all entity pairs, including those that require crosssentence inference. We present a Integer Linear Programming (ILP) model that combines three complementary types of evidence: entity-pair sentiment classification, template-based faction extraction, and sentiment dynamics in social groups. 
Together, they allow for recovering more complete predictions of both the explicitly stated and im333 Figure 2: Entity subgraphs for the example in Figure 1: (a) shows explicitly stated sentiment, (b) shows faction relationships and (c) shows all edges for Georgia and its representative Saakhashvili. Through Saakhasvili’s relationship with Belarus, Georgia forms an alliance with Belarus, providing evidence for an inferred negative stance towards Russia. Green dotted edges represent positive sentiment, red are negative, and blue dashed lines show faction relationship. plicit sentiment, while preserving consistency. The sentiment dynamics in social groups, motivated by social science theories, are encoded as soft ILP constraints. They include a notion of homophily, that entities in the same group tend to have similar opinions (Lazarsfeld and Merton, 1954). For example, Figure 2b shows directed faction edges, where one entity is likely to agree with the other’s opinions. They also encode dyadic social constraints (i.e., the likely reciprocity of opinions (Gouldner, 1960)) and triadic social dynamics following social balance theory (Heider, 1946). For example, from Russia’s criticism on Belarus and Belarus’ positive attitude towards Saakhashvilli (in Figure 2a), we can infer that Russia is negative towards Saakhashvilli (in Figure 2c). When considered in aggregate, these constraints can greatly improve the consistency over the overall document-level predictions. Our work stands in contrast to previous approaches in three aspects. First, we apply social dynamics motivated by social science theories to entity-entity sentiment analysis in unstructured text. In contrast, most previous studies focused on social media or dialogue data with overt social network structure when integrating social dynamics (Tan et al., 2011; Hu et al., 2013; West et al., 2014). Second, we aim to recover sentiment that can be inferred through partial evidence that spans multiple sentences. This complements prior efforts for accessing implied sentiment where the key evidence is, by and large, at the sentence level (Zhang and Liu, 2011; Yang and Cardie, 2013; Deng and Wiebe, 2015a). Finally, we present the first approach to model the relationship between factual and subjective relations. We evaluate the approach on a newly gathered corpus with dense document-level sentiment labels in news articles.1 This data includes comprehensively annotated sentiment between all entity pairs, including those that do not appear together in any single sentence. Experiments demonstrate that the global model significantly improves performance over a pairwise classifier and other strong baselines. We also perform a detailed ablation and error analysis, showing cases where the global constraints contribute and pointing towards important areas for future work. 2 A Document-level Sentiment Model Given a news document d, and named entities e1,...,en in d, where each entity ei has mentions mi1 · · · mik, the task is to decide directed sentiment between all pairs of entities. We predict the directed sentiment from ei to ej at the document level, i.e., sent(ei→ej) ∈{positive, unbiased, negative}, for all ei, ej ∈d where i ̸= j, assuming that sentiment is consistent within the document. We introduce a document-level ILP that includes base models and soft social constraints. ILP has been used successfully for a wide range of NLP tasks (Roth and Yih, 2004), perhaps because they easily support incorporating different types of global constraints. 
We use two base models: (1) a learned pairwise sentiment classifier (Sec 3.1) that combines sentence- and discourse-level features to make predictions for each entity pair and (2) a pattern-based faction extractor (Sec 3.2) that detects alliances among a subset of the entities. The ILP is solved by maximizing: F =ψsocial + ψfact + n X i=1 n X j=1 ψij where F combines soft constraints (ψsocial, ψfact defined in detail in this section) with pairwise potentials ψij defined as: 1All data will be made publicly available. You can browse it at http://homes.cs.washington. edu/˜eunsol/project_page/acl16, and download it from the author’s webpage. 334 Sentence i j Canadian Prime Minister Harper. . . Canada Harper . . . Reid, the Democratic leader. . . Reid Democratic Goldman spokesman DuVally Goldman DuVally . . . Djibouti, a key U.S. ally. Djibouti U.S. (a) Detection examples (b) Visual representation of common inference patterns. Figure 3: An example sentiment inference from faction relationships. Pairs in factions are encouraged to share opinions, and to be positive towards other tied entities. On the right, sentiment edges can be both positive or both negative. ψij =φposij · posij + φnegij · negij + φneuij · neuij Each potential ψij includes the sentiment classifier scores (φpos, φneg, φneu) with binary variables posij, neuij and negij where, for example, negij=1 indicates that ei is negative towards ej. Decision variables posij and neuij are defined analogously for positive and neutral opinion. Finally, we introduce a hard constraint: ∀i, j posij + negij + neuij = 1 to ensure a single prediction is made per pair. 2.1 Inference with factions Our first soft ILP constraint ψfact models that fact that entities in supportive social relations tend to share similar sentiment toward others (Lazarsfeld and Merton, 1954), and are often positive towards each other. For now, we assume access to a base extractor to provide such faction relations (Sec. 3.2 provides details of our pattern-based extractor). Figure 3a illustrates sample detections. We introduce a binary variable tieij, where tieij=1 denotes an extracted faction relationship. These variables are tied to the variables regarding sentiment via the variables tie sameijk = tieij ∧posik ∧posjk + tieij ∧negik ∧negjk tie diffijk = tieij ∧posik ∧negjk + tieij ∧negik ∧posjk itselfij = tieij ∧posij −tieij ∧negij which are used in the following objective term: ψfact = n X i=1 n X j=1 (αitself · itselfij + n X k=1 (αfact· (tie sameijk −tie diffijk))) This formulation enables the model to predict implicit sentiment by jointly considering factual and Figure 4: Balance Theory Constraints. When i is positive towards j, sharing same sentiment towards k define a balanced state. When i is negative towards j, differing opinions towards k define a balanced state. sentiment relations among other entity pairs, essentially drawing a connection between sentiment analysis and information extraction. Figure 3 visualizes this inference pattern. 2.2 Inference with sentiment relations We also include constraints ψsocial in the objective that model social balance and reciprocity. Balance theory constraints: Social balance theory (Heider, 1946) models the sentiment dynamics in an interpersonal network. In particular, in balanced states, entities on positive terms have similar opinions towards other entities and those on negative terms have opposing opinions. 
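As a concrete illustration of the program defined so far (Secs. 2 and 2.1), the sketch below builds the ILP with PuLP as a stand-in solver (the paper's experiments use CPLEX; see Sec. 5). It encodes the pairwise potentials, the one-label-per-pair hard constraint, and a simplified ψfact in which the extracted ties are treated as fixed inputs and the conjunctions are linearised with auxiliary variables; the α weights and classifier scores are placeholders rather than the tuned values from the paper, and the balance and reciprocity terms formalised below enter the objective in the same way.

```python
# pip install pulp   (stand-in for the CPLEX solver used in the paper)
import pulp

def build_ilp(entities, score, ties, alpha_fact=0.5, alpha_itself=0.4):
    """Core document-level ILP, simplified.

    score[(i, j)] : classifier potentials {"pos": ..., "neg": ..., "neu": ...}
    ties          : set of ordered pairs found by the faction detector,
                    treated here as fixed inputs rather than ILP variables.
    alpha_*       : placeholder weights, not the tuned values from the paper.
    """
    prob = pulp.LpProblem("doc_sentiment", pulp.LpMaximize)
    pairs = [(i, j) for i in entities for j in entities if i != j]
    labels = ("pos", "neg", "neu")

    # One binary decision variable per (ordered pair, label).
    v = {(i, j, l): pulp.LpVariable(f"{l}_{i}_{j}", cat="Binary")
         for (i, j) in pairs for l in labels}

    # Pairwise potentials: sum over pairs of phi_l(i, j) * l_ij.
    objective = pulp.lpSum(score[(i, j)][l] * v[(i, j, l)]
                           for (i, j) in pairs for l in labels)

    # Hard constraint: exactly one label per ordered pair.
    for (i, j) in pairs:
        prob += pulp.lpSum(v[(i, j, l)] for l in labels) == 1

    # Linearised AND of two binary variables (z = x AND y).
    def conj(x, y, name):
        z = pulp.LpVariable(name, cat="Binary")
        prob += z <= x
        prob += z <= y
        prob += z >= x + y - 1
        return z

    # Simplified psi_fact: tied pairs are rewarded for agreeing about every
    # third entity k, penalised for disagreeing, and nudged towards being
    # positive about each other.
    for (i, j) in ties:
        objective += alpha_itself * (v[(i, j, "pos")] - v[(i, j, "neg")])
        for k in entities:
            if k in (i, j):
                continue
            same = (conj(v[(i, k, "pos")], v[(j, k, "pos")], f"spp_{i}_{j}_{k}")
                    + conj(v[(i, k, "neg")], v[(j, k, "neg")], f"snn_{i}_{j}_{k}"))
            diff = (conj(v[(i, k, "pos")], v[(j, k, "neg")], f"spn_{i}_{j}_{k}")
                    + conj(v[(i, k, "neg")], v[(j, k, "pos")], f"snp_{i}_{j}_{k}"))
            objective += alpha_fact * (same - diff)

    prob += objective          # adding an expression fixes the objective
    return prob, v
```

Solving the returned problem (e.g., with prob.solve()) and reading off the variables set to 1 gives the document-level prediction; the full model additionally linearises the three-way conjunctions involving tie variables and adds ψbl and ψr in the same fashion.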
We introduce a set of variables to capture this insight: for example, the case where ei is positive towards ej is shown below (analogous when negative). pos sameijk = posij ∧posik ∧posjk + posij ∧negik ∧negjk pos diffijk = posij ∧negik ∧posik + posij ∧posik ∧negik and add the term ψbl to ψsocial. ψbl = n X i=1 n X j=1 n X k=1 (αbl · (pos sameijk + neg diffijk) + αbadbl · (pos diffijk + neg sameijk)) A visualization of these constraints is in Figure 4. 335 Faction Balance Reciprocity POS 57% 64% 73% NEG 60% 61% 78% Table 1: Percentage of labels where each constraint holds. For example, positive on reciprocity means when pos(ei, ej) is true, 73% of times pos(ej, ei) is also true. Reciprocity constraint: Reciprocity of sentiment has been recognized as a key aspect of social stability (Johnston, 1916; Gouldner, 1960). To model reciprocity among the real world entities, we introduce variables: r sameij = posij ∧posji + negij ∧negji r diffij = posij ∧negji + negij ∧posji ψr = n X i=1 n X j=1 αr(r sameij) + αbadr(r diffij) and add the term ψr to the ψsocial. 2.3 Discussion While many studies exist on homophily, social balance, and reciprocity, no prior work has reported quantitative analysis on the sentiment dynamics among the real world entities that appear in unstructured text. Thus we report the data statistics based on the development set in Table 1. We find that the global constraints hold commonly but are not universal, motivating the use of soft constraints (see Sec. 6). 3 Pairwise Base Models The global model in Sec. 2 uses two base models, one for pairwise sentiment classification and the other for detecting faction relationships. 3.1 Sentiment Classifier The entity-pair classifier considers a holder entity ei, its mentions mi1 · · · mip, a target entity ej, its mentions mj1 · · · mjq, and document d. It predicts sent(ei→ej) ∈{positive, unbiased, negative}. The input is plain text and no gold labels are assumed; entity detection, dependency parse and co-reference resolution are automatic, and include common nouns and pronoun mentions (details in Sec. 4.1). We trained separate classifiers for pairs that co-occur in a sentence and those that do not, using a linear class-weighted SVM classifier with crowd-sourced data described in Sec. 4.2. In what follows, we describe three different types of features we developed: dependency features, document features, and quotation features. Many of the features test the overall sentiment of a set of words (e.g., the complete document, a dependency path, or a quotation). In each case, we define the sentiment label for the text to be positive if it contains more words that appear in the positive sentiment lexicon than that appear in the negative one (and similarly for the negative label). We used MPQA sentiment lexicon (Wilson et al., 2005) for our study, which contains 2,718 positive and 4,912 negative lexicons. Dependency Features We consider all dependency paths between the head word of ei and ej in each sentence, and aggregate over all co-occurring sentences. The features compute: (1) The sentiment label of the path containing dobj and nsubj rev, up to length three if the path contains sentiment lexicon words (e.g., Olympic hero Skah accuses Norway over custody battle.) 
(2) The sentiment label of the path ei ↑nsubj ↓ccomp ↓nsubj ↓ej, when it exists (e.g., McCully said any action against Henry is a matter entirely for TVNZ) (3) The sentiment label of path when the path does not contain any named entity (e.g., Nobel winner , Shirin Ebadi) (4) An indicator for the link nmod:against. Document Features Previous work has shown that notions related to salience (e.g., proximity to sentiment words) can help to detect sentiment targets (Ben-Ami et al., 2014). In our data, we found that an entity’s occurrence pattern is highly indicative of being involved in sentiment, for example the most frequently mentioned entity is 3.4 times more likely to be polarized and an entity in the headline is two times more likely to be polarized. Pairwise features include the NER type of ei and ej and the percentage of sentences they cooccur in. We also use features indicating whether ei and ej (1) are mentioned in the headline and (2) appear only once in the document. When they are the two most frequent entities, we add the document sentiment label as a feature. For entity pairs that do not appear together in any sentence, we also include the rank of holder and target in terms of overall number of mentions in the document. Quotation Features Quotations often involve subjective opinions towards prominent entities in news articles. Thus we include document-level 336 features encoding this intuition. For example, the sentence “We’re pleased to put this behind us,” said Michael DuVally implies positive sentiment from DuVally. We extract direct quotations using regular expressions. We include the sentiment label of the direct quotation from the speaker to the entities in it, excluding entities that appear less than three times in the document. We add the sentiment label of the quotation as a feature to (speaker, the most frequent entity) pair as well. To extract indirect quotations, we follow studies (Bethard et al., 2004; Lu, 2010) and use a list of 20 verbs indicating speech events (e.g., say, speak, and announce) to detect direct quotations and their opinion holders. We then add the sentiment label of words connected to ej via a dependency path of length up to two that also includes the subject of quotation verb to ej (e.g. Hassanal said that cooperation between Brunei and China were fruitful). We also include an indicator feature for whether ei is the subject of the quotation verb. 3.2 Faction Detector We use a simple pattern-based detector that extracts a faction relationship between a pair of entities if the dependency path between them either: 1. contains only one link of modifier or compound label (nmod, nmod : poss, amod, nn, or compound). 2. or contains less than three links and has a possessive or appositive label (poss or appos). Example extractions for this approach, which we adopted for its simplicity and the fact that it works reasonably well in practice, are shown in Figure 3a. On average we detect 1.7 ties per document on a small development set with roughly 30% recall and 60% precision. Improving performance and adding more relation types is an important area for future work.2 4 Data We collected new datasets that densely label sentiment among entities in news articles, including: 208 documents, 2,226 sentences, and 15,185 entity pair labels. 
It complements existing datasets such as MPQA which provides rich annotations at the sentence-level (Deng and Wiebe, 2015b) and the recent KBP challenge which provides sparse 2We experimented with using relations from an external knowledge base (Freebase), but KB sparsity and entity linking errors posed major challenges. KBP MPQA Crowdsourced Document count 154 54 914 Avg. sentence count 10.0 12.7 14.8 Avg. entity count 7.9 10.6 8.8 Avg. mentions / entity 3.6 2.7 3.5 Table 2: Corpus Statistics annotations at the corpus-level (Ellis et al., 2014), by providing document-level annotations for all entity pairs (see Sec. 7 for discussion). 4.1 Document Preprocessing All-pair annotation can be expensive, as there are N2 pairs to annotate for each document with N entities. We determined that it would be more cost efficient to cover a large number of short documents than a small number of very long documents. We therefore selected articles with less than eleven entities from KBP and less than fifteen from MPQA and took the first 15 sentences for annotation. We used Stanford CoreNLP (Manning et al., 2014) for sentence splitting, part-of-speech tagging, named entity recognition, co-reference resolution and dependency parsing. We discarded entities of type date, duration, money, time and number and merged named entities using several heuristics, such as merging acronyms, merging named entity of person type with the same last name (e.g., Tiger Woods to Woods). We merged names listed as alias in when there is an exact match from Freebase. We included all mentions in a co-reference chain with the named entity, discarding chains with more than one entity. The corpus statistics are shown in Table 2. 4.2 Sentiment Data Collection We annotated data using two methods: freelancers ($7.6 per article on average) covering all entity pairs and crowd-sourcing ($1.6 per article on average) covering a subset of entity pairs. Evaluation Dataset We provide exhaustive annotations covering all pairs for the evaluation set. We hired freelancers from UpWork,3 after examining performance on five documents. They labeled entity pairs with one of the following classes. POS: positive towards the target. NOTNEG: positive or unbiased towards the target. 3https://www.upwork.com 337 Label KBP MPQA POS 3.93 3.52 NOT NEG 5.73 8.06 UNBIASED 44.64 91.04 NOT POS 2.73 6.70 NEG 2.27 2.94 Table 3: Sentiment Label Statistics. Each count represents the average number per document. UNB: unbiased towards the target NOTPOS: negative or unbiased towards the target. NEG: negative towards the target. Here, we introduced the NOTPOS and NOTNEG classes to mark more subjective cases where we expect agreement might be lower. For example, one assigned NOTPOS to sentiment(Goldman, FINRA), The FINRA said Goldman lacked adequate procedures to . . . and another assigned NOTNEG to sentiment(Macalintal, Arroyo) in the next example. . . . Arroyo’s election lawyer, Romulo Macalintal. Arguments could be made for NEG or POS, respectively, but the decision is inherently subjective and requires careful reading.4 We also asked annotators to mark the label as inferred when not explicitly stated but implied from the context or world knowledge. Allowing for inferred labels and finer-grained labels encouraged annotators to capture implicit sentiment. For each judgement, we acquired two labels. Interannotator agreement, in Table 4, is high for the relaxed metrics, confirming our intuitions about the ambiguity of the NOTNEG and NOTPOS labels. 
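To make the three agreement criteria of Table 4 below (exact, strict, and relaxed matching over the five fine-grained labels) concrete, the sketch below implements the matching rules together with a raw agreement rate. The paper itself reports Cohen's kappa rather than raw agreement, and the toy label sequences here are invented for illustration.

```python
def matches(a, b, mode="exact"):
    """Do two annotators' labels agree under the given criterion?

    'exact'   - identical labels only
    'strict'  - additionally lets NOTNEG match POS and NOTPOS match NEG
    'relaxed' - additionally lets NOTNEG match POS or UNB
                (and NOTPOS match NEG or UNB)
    """
    if a == b:
        return True
    pair = {a, b}
    if mode in ("strict", "relaxed"):
        if pair == {"NOTNEG", "POS"} or pair == {"NOTPOS", "NEG"}:
            return True
    if mode == "relaxed":
        if pair == {"NOTNEG", "UNB"} or pair == {"NOTPOS", "UNB"}:
            return True
    return False

def raw_agreement(labels_a, labels_b, mode="exact"):
    hits = sum(matches(a, b, mode) for a, b in zip(labels_a, labels_b))
    return hits / len(labels_a)

# Toy check over four entity pairs labeled by two annotators.
ann1 = ["POS", "NOTNEG", "UNB", "NEG"]
ann2 = ["POS", "POS", "NOTNEG", "NOTPOS"]
for mode in ("exact", "strict", "relaxed"):
    print(mode, raw_agreement(ann1, ann2, mode))
```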
For experiments, we combine the fine grained labels as follows: POS or NEG is assigned when both marked it as such. When only one of the annotators marked it, we assigned the weaker sentiment (POS to NOTNEG, NEG to NOTPOS). NOTNEG and NOTPOS are assigned when either annotator marked it without ‘Inferred’ label. When the labels contradict in polarity or the labels are inferred weaker sentiment, UNB was assigned. Crowdsourced Dataset We also randomly selected news articles from the Gigaword corpus,5 and collected labels to train the base sentiment 4In the construction of MPQA3.0 dataset, entityentity/event sentiment corpus, even with iterative expert annotation, 31% of disagreements are caused by negligence. 5LDC2014E13:TAC2014KBP English Corpus Exact Strict Relaxed Positive 0.35 0.54 0.67 Negative 0.50 0.64 0.74 Table 4: Inter-annotator Agreement. Cohen’s kappa score: Exact counts only exact matches, Strict counts allows NOT NEG labels to match POS, and Relaxed allows NOT NEG to match POS or UNBIASED (analogously for negative). POS NOT NEG NOT POS NEG KBP 25% 29% 30% 28% MPQA 35% 49% 46% 50% Table 5: Percentage of entity pairs that do not cooccur in a sentence. POS NOTNEG NOTPOS NEG KBP 70% 94% 88% 58% MPQA 68% 74% 83% 66% Table 6: Percentage of labels marked as inferred. classifier (Sec. 3.1). We designed a pipelined approach, with three steps: 1. Document selection: Is there sentiment among entities in this document? 2. Entity selection: (1) Select all entities holding sentiment towards any other entities., and (2) Select all entities which are the target of sentiment by any other entity. 3. Sentiment label collection: Choose the sentiment A has towards B, from {Positive, No Sentiment, Negative} We used CrowdFlower,6 where annotators were randomly presented test questions for quality control. We collected labels from three annotators for each entity pair, and considered labels when at least two agreed. The resulting annotation contains total 2,995 labels on 914 documents, 682 positive, 836 negative and 474 without sentiment, which we discarded. 4.3 Insights Into Data This data supports the study of sentiment-laden entity pairs across sentence boundaries and inferred labels among entities, as we show here. Sentiment Beyond Sentence Boundary Approximately 25% of polarized sentiment labels are between entities that do not co-occur7 in a sentence (see Table 5). For example, in the article 6http://www.crowdflower.com 7This is an estimate due to co-reference resolution errors. 338 with headline ‘Russia heat, smog trigger health problems’, . . . “We never care to work with a future perspective in mind,” Alexei Skripkov of the Federal Medical and Biological Agency said. “It’s a big systemic mistake.” Skripkov never appears together with Russia in any sentence, but he manifests negative sentiment towards it. When a document revolves around a theme (in this example Russia), sentiment is often directed to it without being explicitly mentioned. Inferred sentiment Annotators marked labels as inferred frequently, especially on less polarized sentiment (see Table 6). Various clues led to sentiment inference. For example, in the following document, we can read Sam Lake’s positive attitude towards Paul Auster from his ‘citing’ action: Ask most video-game designers about their inspirations . . . Sam Lake cites Paul Auster’s “The Book of Illusions” Sentiment can also be inferred through reasoning over another entity. The U.N. 
imposed an embargo against Eritrea for helping insurgents opposed to the Somali government. By considering relations with Eritrea, we can infer U.N. would be positive towards Somalia. 5 Experimental Setup Data and Metrics We randomly split the densely labeled KBP document set, using half as a test data and half as a development data. One half of the development set was used to tune hyper parameters,8 and the other for error analysis and ablations. After development, we ran on the test sets composed of KBP documents and MPQA documents. For MPQA we did not create a separate development set and reserved all of the relatively modest amount of data for a more reliable test set. For the pairwise classifier, we report development results using five-fold cross validation on the training data. We report macro-averaged precision, recall, and F-measure for both sentiment labels. Comparison Systems We compare performance to two simple baselines and two adaptations of existing sentiment classifiers. The baselines include our base pairwise classifier 8We used the following values (αr, αbadr, αitself, αfaction, αbl, αbadbl) = (0.7, -0.8, 0.4, 0.5, 0.1, -0.5). (Pair) and randomly assigning labels according to their empirical distribution (Random). The first existing method adaptation (Sentence) uses the publicly released sentence-level RNN sentiment model from Socher et al (2013). For each entity pair, we collect sentiment labels from sentences they co-occur in and assign a positive label if a positive-labeled sentence exists, negative if there exists more than one sentence with a negative label and no positives.9 We also report a proxy for doing similar aggregation over a state-of-the-art entity-entity sentiment classifier. Here, because we added our new labels to the original KBP and MPQA3.0 annotations, we can simply predict the union of the original gold annotations using mention string overlap to align the entities (KM Gold). This provides a reasonable upper bound on the performance of any extractor trained on this data.10 Implementation Details We use CPLEX411 to solve the ILP described in Sec. 2. For computational efficiency and to avoid erroneous propagation, soft constraints associated with reciprocity and balance theory are introduced only on pairs for which a high-precision classifier assigned polarity. For the pairwise classifier, we use a classweighted linear SVM.12 We include annotated pairs, and randomly sample negative examples from pairs without a label in the crowd-sourced training dataset. We made two versions of pairwise classifiers by tuning weight on polarized classes and negative sampling ratio by grid search. One is tuned for high precision to be used as a base classifier for ILP (ILP base), and the other is tuned for the best F1 (Pairwise).13 6 Results Table 7 shows results on the evaluation datasets. The global model achieves the best F1 on both labels. All systems do significantly better than the random baseline but, overall, we see that entityentity sentiment detection is challenging, requir9Due to domain difference, the system predicted negative labels more (73% of sentences were classified as negative). 10We consider this gold evaluation a direct proxy for the recent work Deng and Wiebe (2015a), which is the most related recent entity-entity sentiment model trained on the gold data whose predictions we are evaluating against. 11http://tinyurl.com/joccfqy 12http://scikit-learn.org/ 13We use 10 as the weights for the polarized classes. 
Pairwise and base classifier for MPQA sampled 4%, base classifier for KBP sampled 10% of unlabeled pairs. 339 Development Set (KBP) KBP MPQA Positive Negative Positive Negative Positive Negative P R F1 P R F1 P R F1 P R F1 P R F1 P R F1 KM Gold 90.9 2.5 4.8 93.8 8.6 15.8 93.9 4.3 8.3 93.5 6.6 12.4 61.5 1.3 2.5 90.0 5.2 9.8 Random 16.6 13.1 14.7 4.9 4.0 4.4 13.3 12.7 13.0 10.1 6.9 8.2 10.9 15.4 12.8 8.9 6.7 7.7 Sentence 60.0 16.3 25.7 21.7 43.1 28.8 40.9 20.6 27.4 21.0 31.4 25.2 18.9 3.7 6.2 16.7 18.2 17.4 Pairwise 47.3 36.9 41.4 25.6 36.8 30.2 36.2 35.5 35.9 27.6 41.2 33.1 28.7 23.0 25.6 23.2 16.3 19.2 Global 58.2 37.9 45.9 37.2 35.1 36.1 45.5 32.7 38.1 34.6 36.8 35.7 25.2 29.3 27.1 17.6 24.4 20.4 Table 7: Performance on the evaluation datasets: including implicit and explicit sentiment. Positive Negative P R F1 P R F1 ILP base 56.7 25.2 34.9 36.9 27.6 31.6 + Reci. 53.5 30.0 38.4 33.9 33.9 33.9 + Balance 49.6 30.4 37.7 32.0 32.8 32.4 + Faction 58.9 30.2 39.9 37.6 33.9 35.6 Table 8: ILP constraints ablation study. Positive Negative P R F1 P R F1 All 34.5 39.7 36.9 35.7 37.6 36.6 - Depend. 32.9 32.1 32.5 31.7 38.5 34.8 - Doc. 32.6 41.0 35.8 39.4 23.8 28.0 - Quotation 33.6 39.5 36.3 34.5 34.6 34.6 Table 9: Pairwise classifier feature ablation study. ing identification of holders, targets, and sentiment jointly. While the numbers are not directly comparable, the best performing system for KBP 2014 sentiment task achieved F1 score of 25.7. The first row (KM Gold) shows the comparison against gold annotations from different datasets, highlighting the differences between the task definitions. Our annotations are much more dense, while KBP focuses on specific query entities and MPQA has a much broader focus with less emphasis on covering all entity pairs. The high precision suggests that all of the approaches agree when considering the same entity pairs. The global model also improves performance over the pairwise classifier (Pairwise) for both datasets, but we see very different behavior due to the different sentiment label distributions (see Table 3). The KBP data has many fewer unbiased pairs and many mistakes are from choosing the wrong polarity. For the pairwise classifier 17% of all predictions were assigned the opposite polarity. After the global inference, it is reduced to 11%, contributing to the gain in overall precision. For MPQA the base classifier has a more challenging detection task, due to relatively large amount of the unbaised pairs. Here, the best base classifier misses many pairs and the global model helps to fill in some of these gaps in recall. In both cases, the document-level model often propagates correct labels by detecting easier, exSentiment expression detection error 21.0% Missing world knowledge 19.3% Named entity detection error 17.5% Co-reference failure 14.8% Propagation error 12.3% Missing faction 7.0% Table 10: Error Analysis on the development set. plicit expressions. For example, given the sentence Buphavanh said Laos creates favorable conditions for Vietnamese companies, the base classifier detected positive sentiment from Buphavanh to Vietnam, but not between Vietnam and Laos. By detecting the fact that Buphavanh is the prime minister of Laos, it infers the extra sentiment pairs. We also did ablation studies to measure the contributions of different components. Table 8 shows ablations of each soft constraint. The faction constraint is the most helpful, improving both precision and recall for both labels. 
The reciprocity and social balance constraints tend to improve recall at the cost of precision. Table 9 shows ablations of the base classifier features. All features are helpful, with dependency features most helpful for positive labels, and quotation and documentlevel features more with negatives. Error Analysis We manually analyzed errors on 20 articles from the development set (Table 10). Our system failed when there were sentiment words not in the lexicon, or negated sentiment words. Capturing subtle sentiment expressions beyond sentiment lexicon should improve the performance. Preprocessing, as a whole, was the largest source of error. It includes co-reference failure and named entity error. Co-reference mistakes happen as a result of not resolving pronouns, referring expressions, as well as named entities co-references (e.g., Financial Industry Regulatory Authority to FINRA), or erroneously merging them. Lengthy quotations or nested mentions triggered co-reference error, affecting mostly recall. Named entity errors includes incorrect named 340 entity detection (e.g., pro-Israel) and mention detection boundary errors. For example, we detected negative sentiment from Mexico to Pakistan from Mexico condemns Pakistan series suicide bomb attacks. While actual sentiment is positive. Finally, the ILP propagates sentiment labels erroneously at times. Our constraints often hold among entities of the same type, but are less predictive among entities of different types. For example, when a person supports a peace treaty, the treaty does not have sentiment towards him/her. For future work refining constraints based on entity type should help performance. 7 Related Work Sentiment Inference Our sentiment inference task is related to the recent KBP sentiment task,14 in that we aim to find opinion target and holder. While we study the complete document-level analysis over all entity pairs, the KBP task is formulated as query-focused retrieval of entity sentiment from a large pool of potentially relevant documents. Thus, their annotations focus only on query entities and relatively sparse compared to ours (see Sec. 6). Another recent dataset is MPQA 3.0 (Deng and Wiebe, 2015b), which captures various aspects of sentiment. Their sentiment pair annotations are only at the sentencelevel and are therefore much sparser than we provide (see Sec. 6) for entity-entity relation analysis. Several recent studies focused on various aspects of implied sentiment (Greene and Resnik, 2009; Mohammad and Turney, 2010; Zhang and Liu, 2011; Feng et al., 2013; Deng and Wiebe, 2014; Deng et al., 2014). Deng and Wiebe (2015a) in particular introduced sentiment implicature rules relevant for sentence-level entityentity sentiment. Our work contributes to these recent efforts by presenting a new model and dataset for document-level sentiment inference over all entity pairs. Document-level Analysis Stoyanov and Claire (2011) also studied document-level sentiment analysis based on fine-grained detection of directed sentiment. They aggregate sentence-level detections to make document-level predictions, while our we model global coherency among entities and can discover implied sentiment without direct sentence-level evidence. In the event 14http://www.nist.gov/tac/2014/KBP/ Sentiment extraction domain, previous research showed the effectiveness of jointly considering multiple sentences. Yang and Mitchell (2016) proposed joint extraction of entities and events with the document context, improving on the event extraction. 
Most work focuses on events, while we primarily study sentiment relations. Social Network Analysis While many previous studies considered the effect of social dynamics for social media analysis, most relied on an explicitly available social network structure or considered dialogues and speech acts for which opinion holders are given (Tan et al., 2011; Hu et al., 2013; Li et al., 2014; West et al., 2014; Krishnan and Eisenstein, 2015). Compared to the recent work that focused on relationships among fictional characters in movie summaries and stories (Chaturvedi et al., 2016; Srivastava et al., 2016; Iyyer et al., 2016), we consider a broader types of named entities on news domains. 8 Conclusion We presented an approach to interpreting sentiment among entities in news articles, with global constraints provided by social, faction and discourse context. Experiments demonstrated that the approach can infer implied sentiment and point toward potential directions for future work, including joint entity detection and incorporation of more varied types of factual relationships. Acknowledgments This research was supported in part by the NSF (IIS-1252835, IIS-1408287, IIS-1524371), DARPA under the DEFT program through the AFRL (FA8750-13-2-0019), an Allen Distinguished Investigator Award, and a gift from Google. This material is also based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1256082. The authors thank the members of UW NLP group for discussions and support. We also thank the anonymous reviewers for insightful comments. Finally, we thank the annotators from the CrowdFlower and UpWork. 341 References Zvi Ben-Ami, Ronen Feldman, and Binyamin Rosenfeld. 2014. Entities’ sentiment relevance. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Steven Bethard, Hong Yu, Ashley Thornton, Vasileios Hatzivassiloglou, and Dan Jurafsky. 2004. Automatic extraction of opinion propositions and their holders. In Proceedings of the AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications. Snigdha Chaturvedi, Shashank Srivastava, Hal Daum´e III, and Chris Dyer. 2016. Modeling evolving relationships between characters in literary novels. In Proceedings of the National Conference on Artificial Intelligence. Lingjia Deng and Janyce Wiebe. 2014. Sentiment propagation via implicature constraints. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics. Lingjia Deng and Janyce Wiebe. 2015a. Joint prediction for entity/event-level sentiment analysis using probabilistic soft logic models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Lingjia Deng and Janyce Wiebe. 2015b. Mpqa 3.0: An entity/event-level sentiment corpus. In Proceedings of Annual Meeting of the Association for Computational Linguistics. Lingjia Deng, Janyce Wiebe, and Yoonjung Choi. 2014. Joint inference and disambiguation of implicit sentiments via implicature constraints. In Proceedings of International Conference on Computational Linguistics. Joe Ellis, Jeremy Getman, and Stephanie M Strassel. 2014. Overview of linguistic resources for the tac kbp 2014 evaluations: Planning, execution, and results. In Proceedings of TAC KBP 2014 Workshop, National Institute of Standards and Technology, pages 17–18. Song Feng, Jun Seok Kang, Polina Kuznetsova, and Yejin Choi. 2013. 
Connotation lexicon: A dash of sentiment beneath the surface meaning. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, Sofia, Bulgaria, August. Association for Computational Linguistics. Alvin W. Gouldner. 1960. The norm of reciprocity: A preliminary statement. American Sociological Review, 25(2). Stephan Greene and Philip Resnik. 2009. More than words: Syntactic packaging and implicit sentiment. In Proceedings the North American Chapter of the Association for Computational Linguistics, Boulder, Colorado, June. Association for Computational Linguistics. Fritz Heider. 1946. Attitudes and cognitive organization. The Journal of psychology, 21(1). Xia Hu, Lei Tang, Jiliang Tang, and Huan Liu. 2013. Exploiting social relations for sentiment analysis in microblogging. In Proceedings of the ACM international conference on Web search and data mining. ACM. Mohit Iyyer, Anupam Guha, Snigdha Chaturvedi, Jordan Boyd-Graber, and Hal Daum´e III. 2016. Feuding families and former friends: Unsupervised learning for dynamic fictional relationships. In Proceedings of North American Association for Computational Linguistics. G. A. Johnston. 1916. International Journal of Ethics, 26(2). Vinodh Krishnan and Jacob Eisenstein. 2015. “You’re Mr. Lebowski, I’m The Dude”: Inducing address term formality in signed social networks. In Proceedings of the North American Chapter of the Association for Computational Linguistics. Paul F Lazarsfeld and Robert K Merton. 1954. Friendship as a social process: A substantive and methodological analysis. Freedom and control in modern society, 18:18–66. Jiwei Li, Alan Ritter, and Eduard Hovy. 2014. Weakly supervised user profile extraction from twitter. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Bin Lu. 2010. Identifying opinion holders and targets with dependency parser in chinese news texts. In Proceedings of the NAACL HLT Student Research Workshop. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60. Saif M. Mohammad and Peter D. Turney. 2010. Emotions evoked by common words and phrases: Using mechanical turk to create an emotion lexicon. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, CAAGET ’10. D. Roth and W. Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Proceedings of Conference on Natural Language Learning. Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Empirical Methods in Natural Language Processing. 342 Shashank Srivastava, Snigdha Chaturvedi, and Tom Mitchell. 2016. Inferring interpersonal relations in narrative summaries. In Proceedings of the National Conference on Artificial Intelligence. Veselin Stoyanov and Claire Cardie. 2011. Automatically Creating General-Purpose Opinion Summaries from Text. In Proceedings of Recent Advances in Natural Language Processing. Chenhao Tan, Lillian Lee, Jie Tang, Long Jiang, Ming Zhou, and Ping Li. 2011. User-level sentiment analysis incorporating social networks. In Proceedings of Knowldege Discovery and Data Mining. 
Robert West, Hristo S Paskov, Jure Leskovec, and Christopher Potts. 2014. Exploiting social network structure for person-to-person sentiment analysis. In the Proceedings of Transactions of the Association for Computational Linguistics. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of the Human Language Technologies Conference/Conference on Empirical Methods in Natural Language Processing. Bishan Yang and Claire Cardie. 2013. Joint inference for fine-grained opinion extraction. In Proceedings of Association for Computational Linguistics. Bishan Yang and Tom Mitchell. 2016. Joint extraction of events and entities within a document context. In North American Association for Computational Linguistics. Lei Zhang and Bing Liu. 2011. Identifying noun product features that imply opinions. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. 343
2016
32
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 344–354, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Active Learning for Dependency Parsing with Partial Annotation Zhenghua Li†, Min Zhang†∗, Yue Zhang†, Zhanyi Liu‡, Wenliang Chen†, Hua Wu‡, Haifeng Wang‡ † Soochow University, Suzhou, China {zhli13,minzhang,wlchen}@suda.edu.cn, [email protected] ‡ Baidu Inc., Beijing, China {liuzhanyi,wu hua,wanghaifeng}@baidu.com Abstract Different from traditional active learning based on sentence-wise full annotation (FA), this paper proposes active learning with dependency-wise partial annotation (PA) as a finer-grained unit for dependency parsing. At each iteration, we select a few most uncertain words from an unlabeled data pool, manually annotate their syntactic heads, and add the partial trees into labeled data for parser retraining. Compared with sentence-wise FA, dependency-wise PA gives us more flexibility in task selection and avoids wasting time on annotating trivial tasks in a sentence. Our work makes the following contributions. First, we are the first to apply a probabilistic model to active learning for dependency parsing, which can 1) provide tree probabilities and dependency marginal probabilities as principled uncertainty metrics, and 2) directly learn parameters from PA based on a forest-based training objective. Second, we propose and compare several uncertainty metrics through simulation experiments on both Chinese and English. Finally, we conduct human annotation experiments to compare FA and PA on real annotation time and quality. 1 Introduction During the past decade, supervised dependency parsing has gained extensive progress in boosting parsing performance on canonical texts, especially on texts from domains or genres similar to existing manually labeled treebanks (Koo and Collins, 2010; Zhang and Nivre, 2011). However, the ∗Correspondence author. $0 I1 saw2 Sarah3 with4 a5 telescope6 Figure 1: A partially annotated sentence, where only the heads of “saw” and “with” are decided. upsurge of web data (e.g., tweets, blogs, and product comments) imposes great challenges to existing parsing techniques. Meanwhile, previous research on out-of-domain dependency parsing gains little success (Dredze et al., 2007; Petrov and McDonald, 2012). A more feasible way for open-domain parsing is to manually annotate a certain amount of texts from the target domain or genre. Recently, several small-scale treebanks on web texts have been built for study and evaluation (Foster et al., 2011; Petrov and McDonald, 2012; Kong et al., 2014; Wang et al., 2014). Meanwhile, active learning (AL) aims to reduce annotation effort by choosing and manually annotating unlabeled instances that are most valuable for training statistical models (Olsson, 2009). Traditionally, AL utilizes full annotation (FA) for parsing (Tang et al., 2002; Hwa, 2004; Lynn et al., 2012), where a whole syntactic tree is annotated for a given sentence at a time. However, as commented by Mejer and Crammer (2012), the annotation process is complex, slow, and prone to mistakes when FA is required. Particularly, annotators waste a lot of effort on labeling trivial dependencies which can be well handled by current statistical models (Flannery and Mori, 2015). 
Recently, researchers report promising results with AL based on partial annotation (PA) for dependency parsing (Sassano and Kurohashi, 2010; Mirroshandel and Nasr, 2011; Majidi and Crane, 2013; Flannery and Mori, 2015). They find 344 that smaller units rather than sentences provide more flexibility in choosing potentially informative structures to annotate. Beyond previous work, this paper endeavors to more thoroughly study this issue, and has made substantial progress from the following perspectives. (1) This is the first work that applies a stateof-the-art probabilistic parsing model to AL for dependency parsing. The CRF-based dependency parser on the one hand allows us to use probabilities of trees or marginal probabilities of single dependencies for uncertainty measurement, and on the other hand can directly learn parameters from partially annotated trees. Using probabilistic models may be ubiquitous in AL for relatively simpler tasks like classification and sequence labeling, but is definitely novel for dependency parsing which is dominated by linear models with perceptron-like training. (2) Based on the CRF-based parser, we make systematic comparison among several uncertainty metrics for both FA and PA. Simulation experiments show that compared with using FA, AL with PA can greatly reduce annotation effort in terms of dependency number by 62.2% on Chinese and by 74.2% on English. (3) We build a visualized annotation platform and conduct human annotation experiments to compare FA and PA on real annotation time and quality, where we obtain several interesting observations and conclusions. All codes, along with the data from human annotation experiments, are released at http: //hlt.suda.edu.cn/˜zhli for future research study. 2 Probabilistic Dependency Parsing Given an input sentence x = w1...wn, the goal of dependency parsing is to build a directed dependency tree d = {h ↷m : 0 ≤h ≤n, 1 ≤ m ≤n}, where |d| = n and h ↷m represents a dependency from a head word h to a modifier word m. Figure 1 depicts a partial tree containing two dependencies.1 1In this work, we follow many previous works to focus on unlabeled dependency parsing (constructing the skeleton dependency structure). However, the proposed techniques In this work, we for the first time apply a probabilistic CRF-based parsing model to AL for dependency parsing. We adopt the second-order graphbased model of McDonald and Pereira (2006), which casts the problem as finding an optimal tree from a fully-connect directed graph and factors the score of a dependency tree into scores of pairs of sibling dependencies. d∗= arg maxd∈Y(x)Score(x, d; w) Score(x, d; w) = ∑ (h,s,m):h↷s∈d, h↷m∈d w · f(x, h, s, m) (1) where s and m are adjacent siblings both modifying h; f(x, h, s, m) are the corresponding feature vector; w is the feature weight vector; Y(x) is the set of all legal trees for x according to the dependency grammar in hand; d∗is the 1-best parse tree which can be gained efficiently via a dynamic programming algorithm (Eisner, 2000). We use the state-of-the-art feature set listed in Bohnet (2010). Under the log-linear CRF-based model, the probability of a dependency tree is: p(d|x; w) = eScore(x,d;w) ∑ d′∈Y(x) eScore(x,d′;w) (2) Ma and Zhao (2015) give a very detailed and thorough introduction to CRFs for dependency parsing. 2.1 Learning from FA Under the supervised learning scenario, a labeled training data D = {(xi, di)}N i=1 is provided to learn w. 
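As a hedged illustration of Eq. (2), the toy sketch below enumerates every valid projective tree of a three-word sentence by brute force and turns tree scores into a distribution with a softmax. To keep it short it scores trees with first-order arc scores rather than the paper's second-order sibling features, and the scores are arbitrary placeholders; a real implementation would compute the same quantities with Eisner's dynamic program instead of enumeration.

```python
import itertools, math

def is_tree(heads):
    """heads[m-1] is the head of word m (1-based); 0 is the artificial root.
    Valid iff every word reaches the root without revisiting a node."""
    for m in range(1, len(heads) + 1):
        seen, h = set(), m
        while h != 0:
            if h in seen:
                return False
            seen.add(h)
            h = heads[h - 1]
    return True

def is_projective(heads):
    """No two dependency arcs may cross."""
    arcs = [tuple(sorted((h, m + 1))) for m, h in enumerate(heads)]
    for (a, b), (c, d) in itertools.combinations(arcs, 2):
        if a < c < b < d or c < a < d < b:
            return False
    return True

def all_projective_trees(n):
    for heads in itertools.product(range(n + 1), repeat=n):
        if any(h == m + 1 for m, h in enumerate(heads)):   # no self-heads
            continue
        if is_tree(heads) and is_projective(heads):
            yield heads

def tree_distribution(arc_score, n):
    """p(d|x) = exp(Score(d)) / sum_d' exp(Score(d'))   (Eq. 2),
    with Score(d) simplified to a sum of first-order arc scores."""
    scores = {d: sum(arc_score[(h, m + 1)] for m, h in enumerate(d))
              for d in all_projective_trees(n)}
    log_z = math.log(sum(math.exp(s) for s in scores.values()))
    return {d: math.exp(s - log_z) for d, s in scores.items()}

# Toy 3-word sentence; placeholder scores that merely prefer short arcs.
n = 3
arc_score = {(h, m): 1.0 / (abs(h - m) + 1.0)
             for h in range(n + 1) for m in range(1, n + 1)}
probs = tree_distribution(arc_score, n)
d_best = max(probs, key=probs.get)
print(len(probs), "projective trees; 1-best heads:", d_best,
      "p =", round(probs[d_best], 3))
```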
The objective is to maximize the log likelihood of D: L(D; w) = ∑N i=1 log p(di|xi; w) (3) which can be solved by standard gradient descent algorithms. In this work, we adopt stochastic gradient descent (SGD) with L2-norm regularization for all CRF-based parsing models.2 explored in this paper can be easily extended to the case of labeled dependency parsing. 2We borrow the implementation of SGD in CRFsuite (http://www.chokkan.org/software/ crfsuite/), and use 100 sentences for a batch. 345 2.2 Marginal Probability of Dependencies Marcheggiani and Arti`eres (2014) shows that marginal probabilities of local labels can be used as an effective uncertain metric for AL for sequence labeling problems. In the case of dependency parsing, the marginal probability of a dependency is the sum of probabilities of all legal trees that contain the dependency. p(h ↷m|x; w) = ∑ d∈Y(x):h↷m∈d p(d|x; w) (4) Intuitively, marginal probability is a more principled metric for measuring reliability of a dependency since it considers all legal parses in the search space, compared to previous methods based on scores of local classifiers (Sassano and Kurohashi, 2010; Flannery and Mori, 2015) or votes of n-best parses (Mirroshandel and Nasr, 2011). Moreover, Li et al. (2014) find strong correlation between marginal probability and correctness of a dependency in cross-lingual syntax projection. 3 Active Learning for Dependency Parsing This work adopts the standard pool-based AL framework (Lewis and Gale, 1994; McCallum and Nigam, 1998). Initially, we have a small set of labeled seed data L, and a large-scale unlabeled data pool U. Then the procedure works as follows. (1) Train a new parser on the current L. (2) Parse all sentences in U, and select a set of the most informative tasks U′ (3) Manually annotate: U′ →L′ (4) Expand labeled data: L ∪L′ →L The above steps loop for many iterations until a predefined stopping criterion is met. The key challenge for AL is how to measure the informativeness of structures in concern. Following previous work on AL for dependency parsing, we make a simplifying assumption that if the current model is most uncertain about an output (sub)structure, the structure is most informative in terms of boosting model performance. 3.1 Sentence-wise FA Sentence-wise FA selects K most uncertain sentences in Step (2), and annotates their whole tree structures in Step (3). In the following, we describe several uncertainty metrics and investigate their practical effects through experiments. Given an unlabeled sentence x = w1...wn, we use d∗ to denote the 1-best parse tree produced by the current model as in Eq. (1). For brevity, we omit the feature weight vector w in the equations. Normalized tree score. Following previous works that use scores of local classifiers for uncertainty measurement (Sassano and Kurohashi, 2010; Flannery and Mori, 2015), we use Score(x, d∗) to measure the uncertainty of x, assuming that the model is more uncertain about x if d∗gets a smaller score. However, we find that directly using Score(x, d∗) always selects very short sentences due to the definition in Eq. (1). Thus we normalize the score with the sentence length n as follows.3 Confi(x) = Score(x, d∗) n1.5 (5) Normalized tree probability. The CRF-based parser allows us, for the first time in AL for dependency parsing, to directly use tree probabilities for uncertainty measurement. 
Unlike previous approximate methods based on k-best parses (Mirroshandel and Nasr, 2011), tree probabilities globally consider all parse trees in the search space, and thus are intuitively more consistent and proper for measuring the reliability of a tree. Our initial assumption is that the model is more uncertain about x if d∗gets a smaller probability. However, we find that directly using p(d∗|x) would select very long sentences because the solution space grows exponentially with sentence length. We find that the normalization strategy below works well.4 Confi(x) = n√ p(d∗|x) (6) Averaged marginal probability. As discussed in Section 2.2, the marginal probability of a dependency directly reflects its reliability, and thus can be regarded as another global measurement besides tree probabilities.In fact, we find that the effect of sentence length is naturally handled with the following metric.5 Confi(x) = ∑ h↷m∈d∗p(h ↷m|x) n (7) 3We have also tried replacing n1.5 with n (still prefer short sentences) and n2 (bias to long sentences). 4We have also tried p(d∗|x)×f(n), where f(n) = log n or f(n) = √n, but both work badly. 5We have also tried n√∏ h↷m∈d∗p(h ↷m|x), leading to slightly inferior results. 346 3.2 Single Dependency-wise PA AL with single dependency-wise PA selects M most uncertain words from U in Step (2), and annotates the heads of the selected words in Step (3). After annotation, the newly annotated sentences with partial trees L′ are added into L. Different from the case of sentence-wise FA, L′ are also put back to U, so that new tasks can be further chosen from them. Marcheggiani and Arti`eres (2014) make systematic comparison among a dozen uncertainty metrics for AL with PA for several sequence labeling tasks. We borrow three effective metrics according to their results. Marginal probability max. Suppose h0 = arg maxh p(h ↷i|x) is the most likely head for i. The intuition is that the lower p(h0 ↷i) is, the more uncertain the model is on deciding the head of the token i. Confi(x, i) = p(h0 ↷i|x) (8) Marginal probability gap. Suppose h1 = arg maxh̸=h0 p(h ↷i|x) is the second most likely head for i. The intuition is that the smaller the probability gap is, the more uncertain the model is about i. Confi(x, i) = p(h0 ↷i|x) −p(h1 ↷i|x) (9) Marginal probability entropy. This metric considers the entropy of all possible heads for i. The assumption is that the smaller the negative entropy is, the more uncertain the model is about i. Confi(x, i) = ∑ h p(h ↷i|x) log p(h ↷i|x) (10) 3.3 Batch Dependency-wise PA In the framework of single dependency-wise PA, we assume that the selection and annotation of dependencies in the same sentence are strictly independent. In other words, annotators may be asked to annotate the head of one selected word after reading and understanding a whole (sometimes partial) sentence, and may be asked to annotate another selected word in the same sentence in next AL iteration. Obviously, frequently switching sentences incurs great waste of cognitive effort, $0 I1 saw2 Sarah3 with4 a5 telescope6 Figure 2: An example parse forest converted from the partial tree in Figure 1. and annotating one dependency can certainly help decide another dependency in practice. Inspired by the work of Flannery and Mori (2015), we propose AL with batch dependencywise PA, which is a compromise between sentence-wise FA and single dependency-wise PA. 
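The word-level metrics of Section 3.2 and the sentence-level metric of Eq. (7) reduce to a few lines once the marginal distributions over heads are available (in the real parser they come from the Inside-Outside computation). The sketch below assumes such marginals are given as plain dictionaries with placeholder numbers; the selection function corresponds to single dependency-wise PA, and the batch strategies described next combine the same per-word scores with sentence-level selection.

```python
import math

def word_confidences(marginals):
    """marginals[i] is a dict {head h: p(h -> i | x)} for word i (Eq. 4).
    Returns the three word-level confidence scores of Section 3.2;
    smaller values mean the model is more uncertain about word i."""
    out = {}
    for i, dist in marginals.items():
        probs = sorted(dist.values(), reverse=True)
        conf_max = probs[0]                                           # Eq. (8)
        conf_gap = probs[0] - (probs[1] if len(probs) > 1 else 0.0)   # Eq. (9)
        conf_ent = sum(p * math.log(p) for p in probs if p > 0)       # Eq. (10)
        out[i] = {"max": conf_max, "gap": conf_gap, "entropy": conf_ent}
    return out

def sentence_confidence(marginals, best_tree):
    """Averaged marginal probability of the 1-best tree's arcs (Eq. 7)."""
    return sum(marginals[m][h] for m, h in best_tree.items()) / len(best_tree)

def select_uncertain_words(marginals, metric="gap", m_words=2):
    """Pick the words the model is least confident about
    (single dependency-wise PA)."""
    conf = word_confidences(marginals)
    return sorted(conf, key=lambda i: conf[i][metric])[:m_words]

# Toy 3-word sentence; the marginal head distributions are placeholders.
marginals = {
    1: {0: 0.70, 2: 0.25, 3: 0.05},
    2: {0: 0.45, 1: 0.40, 3: 0.15},   # nearly tied -> uncertain
    3: {0: 0.10, 1: 0.10, 2: 0.80},
}
best_tree = {1: 0, 2: 0, 3: 2}        # modifier -> predicted head in d*
print(select_uncertain_words(marginals, "gap", 1))            # [2]
print(round(sentence_confidence(marginals, best_tree), 3))    # 0.65
```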
In Step 2, AL with batch dependency-wise PA selects K most uncertain sentences from U, and also determines r% most uncertain words from each sentence at the same time. In Step 3, annotators are asked to label the heads of the selected words in the selected sentences. We propose and experiment with the following three strategies based on experimental results of sentence-wise FA and single dependency-wise PA. Averaged marginal probability & gap. First, select K sentences from U using averaged marginal probability. Second, select r% words using marginal probability gap for each selected sentence. Marginal probability gap. First, for each sentence in U, select r% most uncertain words according to marginal probability gap. Second, select K sentences from U using the averaged marginal probability gap of the selected r% words in a sentence as the uncertainty metric. Averaged marginal probability. This strategy is the same with the above strategy, except it measures the uncertainty of a word i according to the marginal probability of the dependency pointing to i in d∗, i.e., p(j ↷i|x), where j ↷ i ∈d∗. 3.4 Learning from PA A major challenge for AL with PA is how to learn from partially labeled sentences, as depicted in Figure 1. Li et al. (2014) show that a probabilistic CRF-based parser can naturally and effectively learn from PA. The basic idea is converting a partial tree into a forest as shown in Figure 2, 347 and using the forest as the gold-standard reference during training, also known as ambiguous labeling (Riezler et al., 2002; T¨ackstr¨om et al., 2013). For each remaining word without head, we add all dependencies linking to it as long as the new dependency does not violate the existing dependencies. We denote the resulting forest as Fj, whose probability is naturally the sum of probabilities of each tree d in F. p(F|x; w) = ∑ d∈F p(d|x; w) = ∑ d∈F eScore(x,d;w) ∑ d′∈Y(x) eScore(x,d′;w) (11) Suppose the partially labeled training data is D = {(xi, Fi)}N i=1. Then its log likelihood is: L(D; w) = ∑N i=1 log p(Fi|xi; w) (12) T¨ackstr¨om et al. (2013) show that the partial derivative of the L(D; w) with regard to w (a.k.a the gradient) in both Equation (3) and (12) can be efficiently solved with the classic Inside-Outside algorithm.6 4 Simulation Experiments We use Chinese Penn Treebank 5.1 (CTB) for Chinese and Penn Treebank (PTB) for English. For both datasets, we follow the standard data split, and convert original bracketed structures into dependency structures using Penn2Malt with its default head-finding rules. To be more realistic, we use automatic part-of-speech (POS) tags produced by a state-of-the-art CRF-based tagger (94.1% on CTB-test, and 97.2% on PTB-test, nfold jackknifing on training data), since POS tags encode much syntactic annotation. Because AL experiments need to train many parsing models, we throw out all training sentences longer than 50 to speed up our experiments. Table 1 shows the data statistics. Following previous practice on AL with PA (Sassano and Kurohashi, 2010; Flannery and Mori, 2015), we adopt the following AL settings for both Chinese and English . The first 500 training sentences are used as the seed labeled data L. In the case of FA, K = 500 new sentences 6This work focuses on projective dependency parsing. Please refer to Koo et al. (2007), McDonald and Satta (2007), and Smith and Smith (2007) for building a probabilistic nonprojective parser. 
Train Dev Test Chinese #Sentences 14,304 803 1,910 #Tokens 318,408 20,454 50,319 English #Sentences 39,115 1,700 2,416 #Tokens 908,154 40,117 56,684 Table 1: Data statistics. are selected and annotated at each iteration. In the case of single dependency-wise PA, we select and annotate M = 10, 000 dependencies, which roughly correspond to 500 sentences considering that the averaged sentence length is about 22.3 in CTB-train and 23.2 in PTB-train. In the case of batch dependency-wise PA, we set K = 500, and r = 20% for Chinese and r = 10% for English, considering that the parser trained on all data achieves about 80% and 90% accuracies. We measure parsing performance using the standard unlabeled attachment score (UAS) including punctuation marks. Please note that we always treat punctuation marks as ordinary words when selecting annotation tasks and calculating UAS, in order to make fair comparison between FA and PA.7 4.1 FA vs. Single Dependency-wise PA First, we make comparison on the performance of AL with FA and with single dependency-wise PA. Results on Chinese are shown in Figure 3. Following previous work, we use the number of annotated dependencies (x-axis) as the annotation cost in order to fairly compare FA and PA. We use FA with random selection as a baseline. We also draw the accuracy of the CRF-based parser trained on all training data, which can be regarded as the upper bound. For FA, the curve of the normalized tree score intertwines with that of random selection. Meanwhile, the performance of normalized tree probability is very close to that of averaged marginal probability, and both are clearly superior to the baseline with random selection. For PA, the difference among the three uncertainty metrics is small. The marginal probability gap clearly outperforms the other two metrics before 50, 000 annotated dependencies, and remains 7Alternatively, we can exclude punctuation marks for task selection in AL with PA. Then, to be fair, we have to discard all dependencies pointing to punctuation marks in the case of FA. This makes the experiment setting more complicated. 348 72 73 74 75 76 77 78 79 80 0 50000 100000 150000 200000 250000 300000 UAS (%) Number of Annotated Dependencies Parser trained on all data PA (single): marginal probability gap PA (single): marginal probability max PA (single): marginal probability entropy FA: averaged marginal probability FA: normalized tree probability FA: normalized tree score FA: random selection Figure 3: FA vs. PA on CTB-dev. 84 85 86 87 88 89 90 91 92 0 50000 100000 150000 200000 250000 300000 UAS (%) Number of Annotated Dependencies Parser trained on all data PA (single): marginal probability gap PA (single): marginal probability max PA (single): marginal probability entropy FA: averaged marginal probability FA: normalized tree probability FA: normalized tree score FA: random selection Figure 4: FA vs. PA on PTB-dev. very competitive at all other points. The marginal probability max achieves best peak UAS, and even outperforms the parser trained on all data, which can be explained by small disturbance during complex model training. The marginal probability entropy, although being the most complex metric among the three, seems inferior all the time. It is clear that using PA can greatly reduce annotation effort compared with using FA in terms of annotated dependencies. Results on English are shown in Figure 4. The overall findings are similar to those in Figure 3, except that the distinction among different methods is more clear. 
For FA, normalized tree score is consistently better than the random baseline. Normalized tree probability always outperforms normalized tree score. Averaged marginal probability performs best, except being slightly inferior to normalized tree probability in earlier stages. For PA, it is consistent that marginal probability gap is better than marginal probability max, and marginal probability entropy is the worst. In summary, based on the results on the de 72 73 74 75 76 77 78 79 80 10000 20000 30000 40000 50000 60000 70000 UAS (%) Number of Annotated Dependencies Parser trained on all data PA (single): marginal probability gap PA (batch 20%): marginal probability gap PA (batch 20%): averaged marginal probability PA (batch 20%): averaged marginal probability & gap Figure 5: Single vs. batch dependency-wise PA on CTB-dev. 84 85 86 87 88 89 90 91 92 93 10000 20000 30000 40000 50000 60000 70000 UAS (%) Number of Annotated Dependencies Parser trained on all data PA (single): marginal probability gap PA (batch 10%): marginal probability gap PA (batch 10%): averaged marginal probability PA (batch 10%): averaged marginal probability & gap PA (batch 20%): marginal probability gap Figure 6: Single vs. batch dependency-wise PA on PTB-dev. velopment data in Figure 3 and 4, the best AL method with PA only needs about 80,000 318,408 = 25% annotated dependencies on Chinese, and about 90,000 908,154 = 10% on English, to reach the same performance with parsers trained on all data. Moreover, the PA methods converges much faster than the FA ones, since for the same x-axis number, much more sentences (with partial trees) are used as training data for AL with PA than FA. 4.2 Single vs. Batch Dependency-wise PA Then we make comparison on AL with single dependency-wise PA and with the more practical batch dependency-wise PA. Results on Chinese are shown in Figure 5. We can see that the three strategies achieve very similar performance and are also very close to single dependency-wise PA. AL with batch dependencywise PA even achieves higher accuracy before 20, 000 annotated dependencies, which should be caused by the smaller active learning steps (about 349 2, 000 dependencies at each iteration, contrasting 10, 000 for single dependency-wise PA). When the training data runs out at about 7, 300 dependencies, AL with batch dependency-wise PA only lags behind with single dependency-wise PA by about 0.3%, which we suppose can be reduced if larger training data is available. Results on English are shown in Figure 6, and are very similar to those on Chinese. One tiny difference is that the marginal probability gap is slightly worse that the other two metrics. The three uncertainty metrics have very similar accuracy curves, which are also very close to the curve of single dependency-wise PA. In addition, we also try r = 20% and find that results are inferior to r = 10%, indicating that the extra 10% annotation tasks are less valuable and contributive. 4.3 Main Results on Test Data Table 2 shows the results on test data. We compare our CRF-based parser with ZPar v6.08, a state-ofthe-art transition-based dependency parser (Zhang and Nivre, 2011). We train ZPar with default parameter settings for 50 iterations, and choose the model that performs best on dev data. We can see that when trained on all data, our CRFbased parser outperforms ZPar on both Chinese and English. 
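Since every comparison in this section is reported in UAS, a minimal sketch of the metric as used here (punctuation scored like any other token) may be useful; the list-of-head-indices input format is an assumption made for illustration.

```python
def uas(gold_heads, pred_heads):
    """Unlabeled attachment score over a corpus: the percentage of tokens
    whose predicted head matches the gold head, with punctuation scored
    like any other token.

    gold_heads / pred_heads: list of sentences, each a list of head indices
    (0 = artificial root), aligned token by token."""
    correct = total = 0
    for gold, pred in zip(gold_heads, pred_heads):
        assert len(gold) == len(pred)
        correct += sum(g == p for g, p in zip(gold, pred))
        total += len(gold)
    return 100.0 * correct / total

# Toy check: two sentences, 5 of 7 heads correct.
gold = [[2, 0, 2], [0, 1, 2, 1]]
pred = [[2, 0, 1], [0, 1, 2, 3]]
print(round(uas(gold, pred), 2))   # 71.43
```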
To compare FA and PA, we report the number of annotated dependencies needed under each AL strategy to achieve an accuracy about 1% lower than that of the parser trained on all data.[9] FA (best) refers to FA with averaged marginal probability, and it needs (187,123 - 149,051)/187,123 = 20.3% fewer annotated dependencies than FA with random selection on Chinese, and (395,199 - 197,907)/395,199 = 50.0% fewer on English. PA (single) with marginal probability gap needs (149,051 - 50,958)/149,051 = 65.8% fewer annotated dependencies than FA (best) on Chinese, and (197,907 - 61,448)/197,907 = 69.0% fewer on English. PA (batch) with marginal probability gap needs slightly more annotation than PA (single) on Chinese but slightly less on English, and reduces the amount of annotated dependencies by (149,051 - 56,389)/149,051 = 62.2% over FA (best) on Chinese and by (197,907 - 51,016)/197,907 = 74.2% on English.

[8] http://people.sutd.edu.sg/~yue_zhang/doc/
[9] The 1% gap is chosen based on the curves on the development data (Figures 3 and 4) with two considerations: 1) a larger gap may give the wrong impression that AL is weak; 2) a smaller gap (e.g., 0.5%) cannot be reached by the worst AL method (FA: random).

                  Chinese               English
             #Dep labeled   UAS    #Dep labeled   UAS
ZPar              318,408  77.97        908,154  91.45
This parser       318,408  78.36        908,154  91.66
FA (random)       187,123  77.43        395,199  90.67
FA (best)         149,051  77.32        197,907  90.66
PA (single)        50,958  77.22         61,448  90.72
PA (batch)         56,389  77.38         51,016  90.70

Table 2: Results on test data.

5 Human Annotation Experiments

So far, we have measured annotation effort in terms of the number of annotated dependencies and assumed that it takes the same amount of time to annotate different words, which is obviously unrealistic. To understand whether active learning based on PA can really reduce annotation time over active learning based on FA in practice, we build a web-browser-based annotation system[10] and conduct human annotation experiments on Chinese. In this part, we use CTB 7.0, which is a newer and larger version covering more genres, and adopt the newly proposed Stanford dependencies (de Marneffe and Manning, 2008; Chang et al., 2009), which are more understandable for annotators.[11] Since manual syntactic annotation is very difficult and time-consuming, we only keep sentences of length [10, 20] in order to better measure annotation time by focusing on sentences of reasonable length, which leaves us 12,912 training sentences under the official data split. Then, we use a random half of the training sentences to train a CRF-based parser, and select the 20% most uncertain words with marginal probability gap for each sentence of the remaining half.

[10] http://hlt-service.suda.edu.cn/syn-dep-batch. Please try.
[11] We use Stanford Parser 3.4 (2014-06-16) for constituent-to-dependency structure conversion.

We employ 6 postgraduate students with different levels of familiarity with syntactic annotation as our annotators. Before annotation, the annotators are trained for about two hours by introducing the basic concepts, guidelines, and illustrative examples. Then, they are asked to practice on the annotation system for about another two hours. Finally, all annotators are required to
formally annotate the same 100 sentences. The system is programmed so that each sentence receives 3 FA submissions and 3 PA submissions. During formal annotation, the annotators are not allowed to discuss with each other or look up any guidelines or documents, which may incur unnecessary inaccuracy in timing. Instead, the annotators can only decide the syntactic structures based on basic knowledge of dependency grammar and their own understanding of the sentence structure. The annotation process lasts for about 5 hours. On average, each annotator completes 50 sentences with FA (763 dependencies) and 50 sentences with PA (178 dependencies).

Table 3 lists the results in descending order of an annotator's experience in syntactic annotation.

               Time: Sec/Dep     Annotation accuracy
               FA       PA       FA (on 20%)      PA (diff)
Annotator #1   4.0      7.9      84.65 (73.41)    75.28 (+1.87)
Annotator #2   7.5     16.0      78.90 (72.22)    62.18 (-10.04)
Annotator #3  10.0     22.2      69.75 (59.77)    56.91 (-2.86)
Annotator #4   5.1      8.7      66.75 (49.19)    61.77 (+12.58)
Annotator #5   7.0     17.3      65.47 (48.50)    48.39 (-0.11)
Annotator #6   7.0     10.6      58.05 (43.28)    48.37 (+5.09)
Overall        6.7     13.6      70.36 (57.28)    59.06 (+1.78)

Table 3: Statistics of human annotation.

The first two columns compare the time needed to annotate a dependency, in seconds. On average, annotating a dependency in PA takes about twice as much time as in FA, which is reasonable considering that the words to be annotated in PA may be more difficult for annotators, while some annotation tasks in FA may be very trivial and easy. Combined with the results in Table 2, we may infer that to achieve 77.3% accuracy on CTB-test, AL with FA requires 149,051 × 6.7 = 998,641.7 seconds of annotation, whereas AL with batch dependency-wise PA needs 56,389 × 13.6 = 766,890.4 seconds. Thus, we may roughly say that AL with PA can reduce annotation time over FA by (998,641.7 - 766,890.4)/998,641.7 = 23.2%.

We also report annotation accuracy according to the gold-standard Stanford dependencies converted from bracketed structures.[12] Overall, the accuracy of FA is 70.36 - 59.06 = 11.30% higher than that of PA, which should be due to the trivial tasks in FA. To be fairer, we compare the accuracies of FA and PA on the same 20% selected difficult words, and find that annotators exhibit different responses to the switch. Annotator #4 achieves 12.58% higher accuracy under PA than under FA. The reason may be that under PA, annotators can be more focused and therefore perform better on the few selected tasks. In contrast, some annotators may perform better under FA. For example, the annotation accuracy of annotator #2 increases by 10.04% when switching from PA to FA, which may be because FA allows annotators to spend more time on the same sentence and gain help from annotating easier tasks. Overall, we find that the accuracy of PA is 59.06 - 57.28 = 1.78% higher than that of FA, indicating that PA can actually improve annotation quality.

[12] An anonymous reviewer commented that the direct comparison between an annotator's performance on PA and FA based on accuracy may be misleading, since the FA and PA sentences for one annotator are mutually exclusive.

6 Related Work

Recently, AL with PA has attracted much attention in sentence-wise natural language processing such as sequence labeling and parsing.
For sequence labeling, Marcheggiani and Arti`eres (2014) systematically compare a dozen uncertainty metrics in token-wise AL with PA (without comparison with FA), whereas Settles and Craven (2008) investigate different uncertainty metrics in AL with FA. Li et al. (2012) propose to only annotate the most uncertain word boundaries in a sentence for Chinese word segmentation and show promising results on both simulation and human annotation experiments. All above works are based on CRFs and make extensive use of sequence probabilities and token marginal probability. In parsing community, Sassano and Kurohashi (2010) select bunsetsu (similar to phrases) pairs with smallest scores from a local classifier, and let annotators decide whether the pair composes a dependency. They convert partially annotated instances into local dependency/non-dependency classification instances to help a simple shiftreduce parser. Mirroshandel and Nasr (2011) select most uncertain words based on votes of nbest parsers, and convert partial trees into full trees by letting a baseline parser perform constrained decoding in order to preserve partial annotation. Under a different query-by-committee AL framework, Majidi and Crane (2013) select most uncertain words using a committee of diverse parsers, and convert partial trees into full trees by letting 351 the parsers of committee to decide the heads of remaining tokens. Based on a first-order (pointwise) Japanese parser, Flannery and Mori (2015) use scores of a local classifier for task selection, and treat PA as dependency/non-dependency instances (Flannery et al., 2011). Different from above works, this work adopts a state-of-the-art probabilistic dependency parser, uses more principled tree probabilities and dependency marginal probabilities for uncertainty measurement, and learns from PA based on a forest-based training objective which is more theoretically sound. Most previous works on AL with PA only conduct simulation experiments. Flannery and Mori (2015) perform human annotation to measure true annotation time. A single annotator is employed to annotate for two hours alternating FA and PA (33% batch) every fifteen minutes. Beyond their initial expectation, they find that the annotation time per dependency is nearly the same for FA and PA (different from our findings) and gives a few interesting explanations. Under a non-AL framework, Mejer and Crammer (2012) propose an interesting light feedback scheme for dependency parsing by letting annotators decide the better one from top-2 parse trees produced by the current parsing model. Hwa (1999) pioneers the idea of using PA to reduce manual labeling effort for constituent grammar induction. She uses a variant InsideOutside re-estimation algorithm (Pereira and Schabes, 1992) to induce a grammar from PA. Clark and Curran (2006) propose to train a Combinatorial Categorial Grammar parser using partially labeled data only containing predicate-argument dependencies. Tsuboi et al. (2008) extend CRFbased sequence labeling models to learn from incomplete annotations, which is the same with Marcheggiani and Arti`eres (2014). Li et al. (2014) propose a CRF-based dependency parser that can learn from partial tree projected from sourcelanguage structures in the cross-lingual parsing scenario. Mielens et al. (2015) propose to impute missing dependencies based on Gibbs sampling in order to enable traditional parsers to learn from partial trees. 
7 Conclusions This paper for the first time applies a state-ofthe-art probabilistic model to AL with PA for dependency parsing. It is shown that the CRFbased parser can on the one hand provide tree probabilities and dependency marginal probabilities as principled uncertainty metrics and on the other hand elegantly learn from partially annotated data. We have proposed and compared several uncertainty metrics through simulation experiments, and show that AL with PA can greatly reduce the amount of annotated dependencies by 62.2% on Chinese 74.2% on English. Finally, we conduct human annotation experiments on Chinese to compare PA and FA on real annotation time and quality. We find that annotating a dependency in PA takes about 2 times long as in FA. This suggests that AL with PA can reduce annotation time by 23.2% over with FA on Chinese. Moreover, the results also indicate that annotators tend to perform better under PA than FA. For future work, we would like to advance this study in the following directions. The first idea is to combine uncertainty and representativeness for measuring informativeness of annotation targets in concern. Intuitively, it would be more profitable to annotate instances that are both difficult for the current model and representative in capturing common language phenomena. Second, we so far assume that the selected tasks are equally difficult and take the same amount of effort for human annotators. However, it is more reasonable that human are good at resolving some ambiguities but bad at others. Our plan is to study which syntactic structures are more suitable for human annotation, and balance informativeness of a candidate task and its suitability for human annotation. Finally, one anonymous reviewer comments that we may use automatically projected trees (Rasooli and Collins, 2015; Guo et al., 2015; Ma and Xia, 2014) as the initial seed labeled data, which is cheap and interesting. Acknowledgments The authors would like to thank the anonymous reviewers for the helpful comments. We also thank Junhui Li and Chunyu Kit for reading our paper and giving many good suggestions. Particularly, Zhenghua is very grateful to many of his students: Fangli Lu, Qiuyi Yan, and Yue Zhang build the annotation system; Jiayuan Chao, Wei Chen, Ziwei Fan, Die Hu, Qingrong Xia, and Yue Zhang participate in data annotation. This work was supported by National Natural Science Foundation of China (Grant No. 61502325, 61525205, 61572338). 352 References Bernd Bohnet. 2010. Top accuracy and fast dependency parsing is not a contradiction. In Proceedings of COLING, pages 89–97. Pi-Chuan Chang, Huihsin Tseng, Dan Jurafsky, and Christopher D. Manning. 2009. Discriminative reordering with Chinese grammatical relations features. In Proceedings of the Third Workshop on Syntax and Structure in Statistical Translation (SSST-3) at NAACL HLT 2009, pages 51–59. Stephen Clark and James Curran. 2006. Partial training for a lexicalized-grammar parser. In Proceedings of the Human Language Technology Conference of the NAACL, pages 144–151. Marie-Catherine de Marneffe and Christopher D. Manning. 2008. The Stanford typed dependencies representation. In Coling 2008: Proceedings of the workshop on Cross-Framework and Cross-Domain Parser Evaluation, pages 1–8. Mark Dredze, John Blitzer, Partha Pratim Talukdar, Kuzman Ganchev, Jo˜ao Graca, and Fernando Pereira. 2007. Frustratingly hard domain adaptation for dependency parsing. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007. Jason Eisner. 
2000. Bilexical grammars and their cubic-time parsing algorithms. In Advances in Probabilistic and Other Parsing Technologies, pages 29–62. Daniel Flannery and Shinsuke Mori. 2015. Combining active learning and partial annotation for domain adaptation of a japanese dependency parser. In Proceedings of the 14th International Conference on Parsing Technologies, pages 11–19. Daniel Flannery, Yusuke Miayo, Graham Neubig, and Shinsuke Mori. 2011. Training dependency parsers from partially annotated corpora. In Proceedings of IJCNLP, pages 776–784. Jennifer Foster, Ozlem Cetinoglu, Joachim Wagner, Joseph Le Roux, Joakim Nivre, Deirdre Hogan, and Josef van Genabith. 2011. From news to comment: Resources and benchmarks for parsing the language of web 2.0. In Proceedings of IJCNLP, pages 893– 901. Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2015. Cross-lingual dependency parsing based on distributed representations. In Proceedings of ACL, pages 1234–1244. Rebecca Hwa. 1999. Supervised grammar induction using training data with limited constituent information. In Proceedings of ACL, pages 73–79. Rebecca Hwa. 2004. Sample selection for statistical parsing. Computional Linguistics, 30(3):253–276. Lingpeng Kong, Nathan Schneider, Swabha Swayamdipta, Archna Bhatia, Chris Dyer, and Noah A. Smith. 2014. A dependency parser for tweets. In Proceedings of EMNLP, pages 1001–1012. Terry Koo and Michael Collins. 2010. Efficient thirdorder dependency parsers. In ACL, pages 1–11. Terry Koo, Amir Globerson, Xavier Carreras, and Michael Collins. 2007. Structured prediction models via the matrix-tree theorem. In Proceedings of EMNLP-CoNLL, pages 141–150. David D. Lewis and William A. Gale. 1994. A sequential algorithm for training text classifiers. In Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 3–12. Shoushan Li, Guodong Zhou, and Chu-Ren Huang. 2012. Active learning for Chinese word segmentation. In Proceedings of COLING 2012: Posters, pages 683–692. Zhenghua Li, Min Zhang, and Wenliang Chen. 2014. Soft cross-lingual syntax projection for dependency parsing. In COLING, pages 783–793. Teresa Lynn, Jennifer Foster, Mark Dras, and Elaine U´1 Dhonnchadha. 2012. Active learning and the irish treebank. In Proceedings of ALTA. Xuezhe Ma and Fei Xia. 2014. Unsupervised dependency parsing with transferring distribution via parallel guidance and entropy regularization. In Proceedings of ACL, pages 1337–1348. Xuezhe Ma and Hai Zhao. 2015. Probabilistic models for high-order projective dependency parsing. Arxiv, abs/1502.04174. Saeed Majidi and Gregory Crane. 2013. Active learning for dependency parsing by a committee of parsers. In Proceedings of IWPT, pages 98–105. Diego Marcheggiani and Thierry Arti`eres. 2014. An experimental comparison of active learning strategies for partially labeled sequences. In Proceedings of EMNLP, pages 898–906. Andrew McCallum and Kamal Nigam. 1998. Employing EM and pool-based active learning for text classification. In Proceedings of ICML, pages 350–358. Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of EACL, pages 81–88. Ryan McDonald and Giorgio Satta. 2007. On the complexity of non-projective data-driven dependency parsing. In Proceedings of the Tenth International Conference on Parsing Technologies, pages 121– 132. 353 Avihai Mejer and Koby Crammer. 2012. Training dependency parser using light feedback. 
In Proceedings of NAACL. Jason Mielens, Liang Sun, and Jason Baldridge. 2015. Parse imputation for dependency annotations. In Proceedings of ACL-IJCNLP, pages 1385–1394. Seyed Abolghasem Mirroshandel and Alexis Nasr. 2011. Active learning for dependency parsing using partially annotated sentences. In Proceedings of the 12th International Conference on Parsing Technologies, pages 140–149. Fredrik Olsson. 2009. A literature survey of active machine learning in the context of natural language processing. Technical report, Swedish Institute of Computer Science. Fernando Pereira and Yves Schabes. 1992. Insideoutside reestimation from partially bracketed corpora. In Proceedings of the Workshop on Speech and Natural Language (HLT), pages 122–127. Slav Petrov and Ryan McDonald. 2012. Overview of the 2012 shared task on parsing the web. In Notes of the First Workshop on Syntactic Analysis of NonCanonical Language (SANCL). Mohammad Sadegh Rasooli and Michael Collins. 2015. Density-driven cross-lingual transfer of dependency parsers. In Proceedings of EMNLP, pages 328–338. Stefan Riezler, Tracy H. King, Ronald M. Kaplan, Richard Crouch, John T. III Maxwell, and Mark Johnson. 2002. Parsing the wall street journal using a lexical-functional grammar and discriminative estimation techniques. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 271–278. Manabu Sassano and Sadao Kurohashi. 2010. Using smaller constituents rather than sentences in active learning for japanese dependency parsing. In Proceedings of ACL, pages 356–365. Burr Settles and Mark Craven. 2008. An analysis of active learning strategies for sequence labeling tasks. In Proceedings of EMNLP, pages 1070–1079. David A. Smith and Noah A. Smith. 2007. Probabilistic models of nonprojective dependency trees. In Proceedings of EMNLP-CoNLL, pages 132–140. Oscar T¨ackstr¨om, Ryan McDonald, and Joakim Nivre. 2013. Target language adaptation of discriminative transfer parsers. In Proceedings of NAACL, pages 1061–1071. Min Tang, Xiaoqiang Luo, and Salim Roukos. 2002. Active learning for statistical natural language parsing. In Proceedings of ACL, pages 120–127. Yuta Tsuboi, Hisashi Kashima, Hiroki Oda, Shinsuke Mori, and Yuji Matsumoto. 2008. Training conditional random fields using incomplete annotations. In Proceedings of COLING, pages 897–904. William Yang Wang, Lingpeng Kong, Kathryn Mazaitis, and William W Cohen. 2014. Dependency parsing for weibo: An efficient probabilistic logic programming approach. In Proceedings of EMNLP, pages 1152–1158. Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of ACL, pages 188–193. 354
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 355–366, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Dependency Parsing with Bounded Block Degree and Well-nestedness via Lagrangian Relaxation and Branch-and-Bound Caio Corro Joseph Le Roux Mathieu Lacroix Antoine Rozenknop Roberto Wolfler Calvo Laboratoire d’Informatique de Paris Nord, Universit´e Paris 13 – SPC, CNRS UMR 7030, F-93430, Villetaneuse, France {corro,leroux,lacroix,rozenknop,wolfler}@lipn.fr Abstract We present a novel dependency parsing method which enforces two structural properties on dependency trees: bounded block degree and well-nestedness. These properties are useful to better represent the set of admissible dependency structures in treebanks and connect dependency parsing to context-sensitive grammatical formalisms. We cast this problem as an Integer Linear Program that we solve with Lagrangian Relaxation from which we derive a heuristic and an exact method based on a Branch-and-Bound search. Experimentally, we see that these methods are efficient and competitive compared to a baseline unconstrained parser, while enforcing structural properties in all cases. 1 Introduction We address the problem of enforcing two structural properties on dependency trees, namely bounded block degree and well-nestedness, without sacrificing algorithmic efficiency. Intuitively, bounded block degree constraints force each subtree to have a yield decomposable into a limited number of blocks of contiguous words, while well-nestedness asserts that every two distinct subtrees must not interleave: either the yield of one subtree is entirely inside some gap of the other or they are completely separated. These two types of constraints generalize the notion of projectivity: projective trees actually have a block degree bounded to one and are well-nested. Our first motivation is the fact that most dependency trees in NLP treebanks are well-nested and have a low block degree which depends on the language and the linguistic representation, as shown in (Pitler et al., 2012). Unfortunately, although polynomial algorithms exist for this class of trees (G´omez-Rodr´ıguez et al., 2009), they are not efficient enough to be of practical use in applications requiring syntactic structures. In addition, if either property is dropped, but not the other, then the underlying decision problem becomes harder. That is why practical parsing algorithms are either completely unconstrained (McDonald et al., 2005) or enforce strict projectivity (Koo and Collins, 2010). This work is, to the best of our knowledge, the first attempt to build a discriminative dependency parser that enforces well-nestedness and/or bounded block degree and to use it on treebank data. We base our method on the following observation: a non-projective dependency parser, thus not requiring neither well-nestedness nor bounded block degree, returns dependency trees satisfying these constraints in the vast majority of sentences. This would tend to indicate that the heavy machinery involved to parse with these constraints is only needed in very few cases. We consider arc-factored dependency parsing with well-nestedness and bounded block degree constraints. We formulate this problem as an Integer Linear Program (ILP) and apply Lagrangian Relaxation where the dualized constraints are those associated with bounded block degree and well-nestedness. 
The Lagrangian dual objective then reduces to a maximum spanning arborescence and can be solved very efficiently. This provides an efficient heuristic for our problem. An exact method can be derived by embedding this Lagrangian Relaxation in a Branch-and-Bound procedure to solve the problem with an optimality certificate. Despite the exponential worst-time complexity of the Branch-and-Bound procedure, it is tractable in practice. Our formulation can enforce both types of constraints or only one of them without changing the resolution method. 355 As stated in (Bodirsky et al., 2009), well-nested dependency trees with 2-bounded block degree are structurally equivalent to derivations in Lexicalized Tree Adjoining Grammars (LTAGs) (Joshi and Schabes, 1997).12 While LTAGs can be parsed in polynomial time, developing an efficient parser for these grammars remains an open problem (Eisner and Satta, 2000) and we believe that this work could be a useful step in that direction. Related work is reviewed in Section 2. We define arc-factored dependency parsing with block degree and well-nestedness constraints in Section 3. We derive an ILP formulation of this problem in Section 4 and then present our method based on Lagrangian Relaxation in Section 5 and Branch-and-Bound in Section 6. Section 7 contains experimental results on several languages. 2 Related Work A dynamic programming algorithm has been proposed for parsing well-nested and k-bounded block degree dependency trees in (G´omezRodr´ıguez et al., 2009; G´omez-Rodr´ıguez et al., 2011). Unfortunately, it has a prohibitive O(n3+2k) time complexity, equivalent to Lexicalized TAG parsing when k = 2. Variants of this algorithm have also been proposed for further restricted classes of dependency trees: 1-inherit (O(n6)) (Pitler et al., 2012), head-split (O(n6)) (Satta and Kuhlmann, 2014) and both 1-inherit and head-split (O(n5)) (Satta and Kuhlmann, 2014). Although those restricted classes have good empirical coverage, they do not cover the exact search space of Lexicalized TAG derivation and their time complexity is still prohibitive. Spinal TAGs, described as a dependency parsing task in (Carreras et al., 2008), weaken even more the empirical coverage in practice, restricted to projective trees, but still remain hardly tractable with a complexity of O(n4). On the contrary, the present work does not restrict the search space. Parsing mildly context-sensitive languages with dependencies has been explored in (Fern´andezGonz´alez and Martins, 2015) but the resulting parser cannot guarantee compliance with strict structural properties. On the other hand, the 1It is possible to express a wider class of dependencies with LTAG if we allow dependencies direction to be different from the derivation tree (Kallmeyer and Kuhlmann, 2012). 2In order to be fully compatible with LTAGs, we must ensure that the root has only one child. For algorithmic issues see (Fischetti and Toth, 1992) or (Gabow and Tarjan, 1984). present method enforces the well-nestedness and bounded block degree of solutions. The methods mentioned above all use the graph-based approach and rely on dynamic programming to achieve tractability. There is also a line of of work in transition-based parsing for various dependency classes. Systems have been proposed for projective dependency trees (Nivre, 2003), non-projective, or even unknown classes (Attardi, 2006). 
Pitler and McDonald (2015) propose a transition system for crossing interval trees, a more general class than well-nested trees with bounded block degree. In the case of spinal TAGs, we can mention the work of Ballesteros and Carreras (2015) and Shen and Joshi (2007). Transition-based algorithms offer low space and time complexity, typically linear in the length of sentences usually by relying on local predictors and beam strategies and thus do not provide any optimality guarantee on the produced structures. The present work follows the graph-based approach, but replaces dynamic programming with a greedy algorithm and Lagrangian Relaxation. The use of Lagrangian Relaxation to elaborate sophisticated parsing models based on plain maximum spanning arborescence solutions originated in (Koo et al., 2010) where this method was used to parse with higher-order features. This technique has been explored to parse CCG dependencies in (Du et al., 2015) without a precise definition of the class of trees. We can also draw connections between our problem reduction procedure and the use of Lagrangian Relaxation to speed up dynamic programming and beam search with exact pruning in (Rush et al., 2013). In this work we rely on Non-Delayed Relaxand-Cut for lazy constraint generation (Lucena, 2006). This can be linked to (Riedel, 2009) which uses a cutting plane algorithm to solve MAP inference in Markov Logic and (Riedel et al., 2012) which uses column and row generation for higherorder dependency parsing. In NLP, the Branch-and-Bound framework (Land and Doig, 1960) has previously been used for dependency parsing with high order features in (Qian and Liu, 2013), and Das et al. (2012) combined Branch-and-Bound to Lagrangian Relaxation in order to retrieve integral solutions for shallow semantic parsing. 356 3 Dependency Parsing We model the dependency parsing problem using a graph-based approach. Given a sentence s = ⟨s0, . . . , sn⟩where s0 is a dummy root symbol, we consider the directed graph D = (V, A) with V = {0, . . . n} and A ⊆V × V . Vertex i ∈V corresponds to word si and arc (i, j) ∈A models a dependency from word si to word sj. In the rest of the paper, we denote V \ {0} by V +. An arborescence is a set of arcs T inducing a connected graph with no circuit such that every vertex has at most one entering arc. The set of vertices incident with any arc of T is denoted by V [T]. If V [T] = V , then T is a spanning arborescence. Among the vertices of V [T], the one with no entering arc is called the root of T. A vertex t is reachable from a vertex s with respect to T if there exists a path from s to t using only arcs of T. The yield of a vertex v ∈V corresponds to the set of vertices reachable from v with respect to T. It is well-known that there is a bijection between dependency trees for s and spanning arborescences with root 0 (McDonald et al., 2005). In what follows, the term dependency tree will refer to both the dependency tree of s and its associated spanning arborescence of D with root 0. In the dependency parsing problem, one has to find a dependency tree with maximal score. Several scores can be associated with each dependency tree and different conditions can restrict the set of valid dependency trees. In this paper, we consider an arc-factored model: each arc (i, j) ∈A is assigned a score wij; the score of a dependency tree is defined as the sum of the scores of the arcs it contains. 
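As a minimal illustration of these definitions, the sketch below scores a tree under the arc-factored model and computes vertex yields. It represents a dependency tree by a head array, which is an assumption of this sketch; the paper itself works with incidence vectors over arcs.

```python
def tree_score(heads, w):
    """Score of a dependency tree under the arc-factored model.

    heads: heads[j] = i means arc (i, j) is in the tree, for j = 1..n
           (position 0 is the dummy root and has no head).
    w: dict mapping arcs (i, j) to their score w_ij.
    """
    return sum(w[(heads[j], j)] for j in range(1, len(heads)))

def yields(heads):
    """Yield of each vertex: the set of vertices reachable from it (assumes a valid tree)."""
    n = len(heads)
    reach = {v: {v} for v in range(n)}
    for j in range(1, n):
        v = j
        while v != 0:          # walk up to the root, adding j to every ancestor's yield
            v = heads[v]
            reach[v].add(j)
    return reach
```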
This model can be computed in O(n2) with Chu–Liu-Edmonds’ algorithm for Maximum Spanning Arborescence (MSA) (McDonald et al., 2005). Unfortunately, this algorithm forbids any modification of the score function, for example adding score contribution for combinations of arcs (i.e. grand-parent or sibling models). Moreover, adding score contribution for combinations of couple of arcs makes the problem NP-hard (McDonald and Pereira, 2006), although several methods have been developed to tackle this problem, for instance dual decomposition (Koo et al., 2010). Likewise, restrictions on the tree structure such as the well-nestedness and bounded block degree conditions are not permitted in the MSA algorithm. We first give a precise definition of these 0 1 2 3 4 s0 s1 s2 s3 s4 0 1 2 3 4 s0 s1 s2 s3 s4 Figure 1: (Left) A 2-BBD arborescence: the block degree of vertex 1 is 2 (its yield is {1, 4}) whereas the block degree of all other vertices is 1. (Right) A not well-nested arborescence: the yields of vertices 1 and 2 interleave. structural properties, equivalent to (Bodirsky et al., 2009), before we present a method to take them into account. From now on, we suppose that instances are equipped with a positive integer k and we call valid dependency trees those satisfying the k-bounded block degree and well-nestedness conditions. A graph-theoretic definition of these two conditions can be given as follows. Block degree The block degree of a vertex set W ⊆V is the number of vertices of W without a predecessor3 inside W. Given an arborescence T, the block degree of a vertex is the block degree of its yield and the block degree of T is the maximum block degree of its incident vertices. An arborescence satisfies the k-bounded block degree condition if its block degree is less than or equal to k. We then say it is k-BBD for short. Figure 1 (left) gives an example of a 2-BBD arborescence. Well-nestedness Two disjoint subsets I1, I2 ⊂ V + interleave if there exist i, j ∈I1 and k, l ∈ I2 such that i < k < j < l. An arborescence is well-nested if it is not incident to two vertices whose yields interleave. Figure 1 (right) shows an arborescence which is not well-nested. 4 ILP Formulation In this section we formulate the dependency parsing problem described in Section 3 as an ILP. We start with some notation and two theorems characterizing k-BBD and well-nested dependency trees. Given a subset W ⊆V , the set of arcs entering W is denoted by δin(W) and the set of arcs leaving W is denoted by δout(W). The set δ(W) = δin(W)∪δout(W) is called the cut of W. Given a positive integer l, let W≥l be the family of vertex subsets of V + with block degree greater than or equal to l. For instance, given any sentence with more than 6 words, {1, 3, 5, 6} ∈W≥3, 3The predecessor of a vertex v ∈V is v −1. 357 while {1, 2, 5, 6} ̸∈W≥3. We also denote by I the family of couples of disjoint interleaving vertex subsets of V +. For instance, ({1, 4}, {2, 3, 5}) belongs to I. Finally, given a vector x ∈RA and a subset B ⊆A, x(B) corresponds to P a∈B xa. Theorem 1. A dependency tree T is not k-BBD iff there exists a vertex subset W ∈W≥k+1 whose cut δ(W) contains a unique arc of T. Proof. By definition of block degree, a dependency tree is not k-BBD iff it is incident with a vertex whose yield W belongs to W≥k+1. It is equivalent to say that T contains a subarborescence T ′ such that V [T ′] equals W. This holds iff W has one entering arc (since 0 /∈W) and no leaving arc belonging to T. Theorem 2. 
A dependency tree T is not well-nested iff there exists (I_1, I_2) ∈ I such that δ(I_1) ∩ T and δ(I_2) ∩ T are singletons.

Proof. δ(I_1) and δ(I_2) both intersect T exactly once iff T contains two arborescences T_1 and T_2 such that V[T_1] = I_1 and V[T_2] = I_2. This means that T is incident with two vertices whose yields are I_1 and I_2, respectively. The result follows from the definition of I and of well-nested arborescences.

The dependency parsing problem can be formulated as follows. A dependency tree will be represented by its incidence vector. Hence, we use variables z ∈ R^A such that z_a = 1 if arc a belongs to the dependency tree and 0 otherwise.

\max_z \sum_{a \in A} w_a z_a                                              (1)
z(\delta^{in}(v)) = 1                    \forall v \in V^+                 (2)
z(\delta^{in}(W)) \geq 1                 \forall W \subseteq V^+           (3)
z(\delta(W)) \geq 2                      \forall W \in \mathcal{W}_{\geq k+1}   (4)
z(\delta(I_1)) + z(\delta(I_2)) \geq 3   \forall (I_1, I_2) \in \mathcal{I}     (5)
z \in \{0, 1\}^A                                                           (6)

The objective function (1) maximizes the score of the dependency tree. Inequalities (2) ensure that all vertices but the root have exactly one entering arc. Inequalities (3) force the set of arcs associated with z to induce a connected graph. Inequalities (2) and (3), together with z ≥ 0, give a linear description of the convex hull of the incidence vectors of the spanning arborescences with root 0; see e.g. (Schrijver, 2003). Inequalities (4) ensure that the dependency tree is k-BBD and inequalities (5) impose well-nestedness. The validity of (4) and (5) follows from Theorems 1 and 2, respectively. Remark that (3) could be replaced by a polynomial number of additional flow variables and constraints, see (Martins et al., 2009).[4]

[4] Based on this remark, we also developed a formulation of this problem with a polynomial number of variables and constraints. However, it requires adding many more variables than (Martins et al., 2009). This leads to a formulation which is not tractable, see Section 7.2. Moreover, it cannot be tackled by our Lagrangian Relaxation approach.

5 Lagrangian Relaxation

Solving this ILP using an off-the-shelf solver is ineffective due to the huge number of constraints. We tackle this problem with Lagrangian Relaxation, which has become popular in the NLP community; see for instance (Rush and Collins, 2012). Note that contrary to most previous work on Lagrangian Relaxation for NLP, we do not use it to derive a decomposition method. We note that optimizing objective (1) subject to constraints (2), (3) and (6) amounts to finding an MSA and can be solved combinatorially (McDonald et al., 2005). Thus, since formulation (1)-(6) is based only on arc variables, by relaxing constraints (4) and (5) one obtains a Lagrangian dual objective which is nothing but an MSA problem with reparameterized arc scores. Our Lagrangian approach relies on a subgradient descent where an MSA problem is solved at each iteration. We give more details in the rest of the section.

5.1 Dual Problem

Let Z be the set of the incidence vectors of dependency trees. Keeping the tree shape constraints (2), (3) and (6) while dualizing the k-bounded block degree constraints (4) and the well-nestedness constraints (5), we build the following Lagrangian (Lemaréchal, 2001):

L(z, u) = \sum_{a \in A} w_a z_a
        + \sum_{W \in \mathcal{W}_{\geq k+1}} u^1_W \, (z(\delta(W)) - 2)
        + \sum_{(I_1, I_2) \in \mathcal{I}} u^2_{I_1, I_2} \, (z(\delta(I_1)) + z(\delta(I_2)) - 3)     (7)
L∗is a non-differentiable convex piece-wise linear function and one can find its minimum via subgradient descent. For any vector u, we use the following subgradient. Denote Mz ≤b the set of constraints given by (4) and (5) and z∗= arg maxz L(z, u). Let g = b −Mz∗ be a subgradient at u, see (Lemar´echal, 2001) for more details. From this subgradient, we compute the descent direction following (Camerini et al., 1975), which aggregates information during the iteration of the subgradient descent algorithm. Unfortunately, optimizing the dual is expensive with so many relaxed constraints. We handle this problem in the next subsection. 5.2 Efficient Optimization with Many Constraints The Non Delayed Relax-and-Cut (NDRC) method (Lucena, 2005) tackles the problem of optimizing a Lagrangian dual problem with exponentially many relaxed constraints. In standard subgradient descent, at each iteration p of the descent, the Lagrangian update can be formulated as: up+1 = (up −sp × gp)+ (9) where sp > 0 is the stepsize5 and ()+ denotes the projection onto R+, which replaces each negative component by 0. If all Lagrangian multipliers are initialized to 0, the compononent corresponding to a constraint will not be changed until this constraint is violated for the first time. Indeed, by definition of g, we have [gp]i ≥0 if constraint i is satisfied at iteration p: the projection on R+ ensure that [up+1]i stays at 0.6 Thus we do not need to know constraints that have not been violated yet in order to correctly update the Lagrangian multipliers: this is the main intuition behind the NDRC 5As stated above, instead of the subgradient we follow an improved descent direction which aggregates information of previous iterations. However, this does not change the proposal of this subsection. 6[x]i denotes the ith component of vector x. method. However, sp may depend on the full subgradient information. A common step size (Fisher, 1981) is: sp = αp × L∗(up) −LBp ∥gp∥2 (10) with αp a scalar and LBp the best known lower bound. This is also the case with more recent approaches like AdaGrad (Duchi et al., 2011) and AdaDelta (Zeiler, 2012). As reported in (Beasley, 1993; Lucena, 2006), when dealing with many relaxed constraints, the ∥gp∥2 term can result in each Lagrangian update being almost equal to 0. Therefore, a good practice is to modify the subgradient such that if [gp]i > 0 and [up]i = 0, then we set [gp]i = 0: this has the same effect on the multipliers as the projection on R+ in (9), but it prevents the stepsize from becoming too small. Hence, instead of generating a full subgradient at each iteration, which is an expensive operation because we would need to consider all multipliers associated with constraints, we process only a subpart, namely the one associated with constraints that have been violated. Following (Lucena, 2005), at each iteration p of the subgradient descent we define two sets: Currently Violated Active Constraints (CAp) and Previously Violated Active Constraints (PAp). CAp and PAp are not necessarily disjoint. The subgradient is computed only for constraints in CAp ∪ PAp. At each iteration p, we update PAp = PAp−1 ∪CAp−1 and a violation detection step, similar to the separation step in a cutting plane algorithm, generates CAp. Two strategies are possible for the detection: (1) adding to CAp all the constraints violated by the current dual solution; (2) adding only a subset of them. 
The latter is justified by the fact that many constraints may overlap thus leading to exageration of modified scores on some arcs. We found that strategy (2) gives better convergence results. Detection for violated block degree constraints (4) can be done with the algorithm described in (M¨ohl, 2006) in O(n2). If no violated block degree constraint is found, we search for violated well-nestedness constraints (5) using the O(n2) algorithm described in (Havelka, 2007). 5.3 Lagrangian Heuristic We derive a heuristic from the Lagragian Relaxation. First, a dependency tree is computed with 359 the MSA algorithm. If it is valid, it then corresponds to the optimal solution. Otherwise, we proceed as follows. The computation of the step size in (10) in the subgradient descent needs a lower bound which can be given by the score of any valid dependency tree. In our experiments, we compute the best projective spanning arborescence (Eisner, 2000). Each iteration of the subgradient descent computes a spanning arborescence. Since violating (4) and (5) is penalized in the objective function, it tends to produce valid dependency trees. The heuristic returns the highest scoring one. 6 Branch and Bound Solving the Lagrangian dual problem may not always give an optimal solution to the original problem because of the potential duality gap. Still, we always obtain an upper bound on the optimal solution and if a dual solution satisfies constraints (4) and (5), its score with the original weights provides a lower bound.7 Moreover, the subgradient descent algorithm theoretically converges but we have no guarantee that this will happen in a realistic number of iterations. Therefore, in order to retrieve an optimal solution in all cases, we embed the Lagrangian Relaxation of the problem within a Branch-andBound procedure (Land and Doig, 1960). The search space is recursively split according to an arc variable, creating two subspaces, one where it is fixed to 1 and the other to 0 (branching step). The procedure returns a candidate solution when all arc variables are fixed and constraints are satisfied, and the optimal solution is the highestscoring candidate solution. For each subspace, we estimate an upper bound using the Lagrangian Relaxation (bounding step). The recursive exploration of a subspace stops (pruning step) if (1) we can prove that all candidate solutions it contains have a score lower than the best found so far, or (2) we detect an unsastifiable constraint. The branching strategy is built upon Lagrangian multipliers: we branch on the variable za with highest value θa −wa. Intuitively, if the branching step sets za = 1, it indicates that we add a hard constraint on an arc which has been strongly promoted by Lagrangian Relaxation. This strategy, compared to other variants, gave the best parsing 7Because relaxed constraints are inequalities, constraint satisfaction does not guarantee optimality (Beasley, 1993). time on development data. 6.1 Problem Reduction The efficiency of the Branch-and-Bound procedure crucially depends on the number of free variables. To prune the search space, we rely on problem reduction (Beasley, 1993), once again based on duality and Lagrangian Relaxation, which provides certificates on optimal variable settings. We fix a variable to 1 (resp. 0), and compute an upper bound on the optimal score with this new constraint. If it is lower than the score of the best solution found so far without this constraint, we can guarantee that this variable cannot (resp. 
must) be in the optimal solution and safely set it to 0 (resp. 1). Problem reduction is performed at each node of the Branch-and-Bound tree after computing the upper bound with subgradient descent. 6.2 Fixing Variables to 1 Since a node in V + must have exactly one parent, fixing zij = 1 for an arc a = (i, j) greatly reduces the problem size, as it will also fix zhj = 0, ∀h ̸= i. Among all arc variables that can be set to 1, promising candidates are the arcs in a solution of the unconstrained MSA and the arcs obtained in a solution after the subgradient descent. There are exactly n such arcs in each set of candidates, so we test fixation for less than 2n variables. In this case, we are ready to pay the price of a quadratic computation for each of these arcs. Hence, for each candidate arc we obtain an upper bound by seeking the (unconstrained) MSA on the graph where this arc is removed. If this upper bound is lower than the score of the best solution found so far, we can safely say that this arc is in the optimal solution. 6.3 Fixing Variables to 0 We could apply the same strategy for fixing variables to 0. However, this reduction is less rewarding and there are many more variables set to 0 than 1 in a MSA solution. Instead, we solve an easier problem, at the expense of a looser upper bound. For each arc a which is not in the MSA, we compute a maximum directed graph that contains this arc and where all nodes but the root have one parent. Remark that if this graph is connected then it corresponds to a dependency tree. Therefore, the score of this directed graph provides an upper bound on a solution containing arc a. If this upper 360 bound is lower than the score of the best solution found so far, we can fix the variable za to 0. Note that this whole fixing procedure can be done in O(n2). 7 Experiments We ran a series of experiments to test our method in the case of unlabelled dependency parsing. Our prototype has been developped in Python with some parts in Cython and C++. We use the MSA implementation available in the LEMON library.8 7.1 Datasets We ran experiments on 5 different corpora: English: Dependencies were extracted from the WSJ part of the Penn Treebank (PTB) with additional NP bracketings (Vadas and Curran, 2007) with the LTH converter9 (default options). Sections 02-21 are used for training, 22 for development and 23 for testing. POS tags were predicted by the Stanford tagger10 trained with 10jackkniffing.11 German: We used dependencies from the SPMRL dataset (Seddah et al., 2014), with predicted POS tags and the official split. We removed sentences of length greater than 100 in the test set. Dutch, Spanish and Portuguese: We used the Universal Dependency Treebank 1.2 (Van der Beek et al., 2002; McDonald et al., 2013; Afonso et al., 2002) with gold POS tags and the official split. We removed sentences of length greater than 100 in the test sets. Those datasets contain different structure distributions as shown in Table 1. Fortunately, our method allows us to easily change the bounded degree constraint or toggle the well-nestedness one. For each language, we decided to use the most constrained combination of bounded block degree constraints and well-nestedness which covers over 99% of the data. Therefore, we chose to enforce 2-BBD and well-nestedness for English and Spanish, 3-BBD and well-nestedness for Dutch and Portuguese and 3-BBD only for German. 7.2 Decoding In order to compare our methods with previous approaches, we tested five decoding strategies. 
8https://lemon.cs.elte.hu/trac/lemon 9http://nlp.cs.lth.se/software/treebank_converter/ 10http://nlp.stanford.edu/software/tagger.shtml 11Prediction precision: 97.40% (MSA) computes the best unconstrained dependency tree. (Eisner) computes the best projective tree. (LR) and (B&B) are the heuristic and the exact method presented in Sections 5.3 and 6 respectively.12 Finally (MSA/Eisner) consists in running the MSA algorithm and, if the solution is invalid, returning the (Eisner) solution instead. Our attempt to run the dynamic programming algorithm of (G´omez-Rodr´ıguez et al., 2009) was unsuccessful. Even with heavy pruning we were not able to run it on sentences above 20 words. We also tried to use CPLEX on a compact ILP formulation based on multi-commodity flows (see footnote 4). Parsing time was also prohibitive: a total of 3473 seconds on English data without the well-nestedness constraint, 7298 for German. We discuss the efficiency of our methods on data for English and German. Other languages give similar results. Optimality rate after the subgradient descent are reported in Figure 2. We see that Lagrangian Relaxation often returns optimal solutions but fails to give a certificate of their optimality. Table 2 shows parsing times. We see that (LR) and (B&B), while slower than (MSA), are fast in the majority of cases, below the third quartile. Inevitably, there are some rare cases where a large portion of the search space is explored, and thus their parsing time is high. Let us remark that these algorithms are run only when (MSA) returns an invalid structure, and so total time is very acceptable compared to the baseline. Finally, we stress the importance of problem reduction as a pre-processing step in B&B: after subgradient descent is performed, it removes an average of 83.97% (resp. 76.59%) of arc variables in the English test set (resp. German test set). 7.3 Training Feature weights are trained using the averaged structured perceptron (Collins, 2002) with 10 iterations where the best iteration is selected on the development set. We used the same feature set as in TurboParser (Martins et al., 2010), including features for lemma. For German, we additionally use morpho-syntactic features. The decoding algorithm used at training time is the MSA. We experimented with Branch-andBound and Lagrangian Relaxation decoding dur12In both methods, the subgradient descent is stopped after a fixed maximum number of iterations. We chose 100 for English and 200 for other languages after tuning on the development set. 361 English German Dutch Spanish Portuguese WN IL WN IL WN IL WN IL WN IL Block degree 1 92.26 67.60 69.13 93.95 81.56 0.05 Block degree 2 7.58 0.12 27.12 0.79 28.50 0.08 5.99 0.04 13.92 0.02 Block degree 3 0.12 0.01 3.86 0.30 2.24 0.01 0.02 3.76 Block degree 4 0.19 <0.01 0.04 0.54 Block degree > 4 0.11 <0.01 0.14 Table 1: Distribution of dependency tree characteristics in datasets. English (96 sentences) German (59 sentences) MSA LR B&B MSA LR B&B Mean 0.02 0.26 0.53 0.04 0.51 0.71 Std. 0.01 0.20 0.86 0.02 0.41 1.39 Med. 0.02 0.21 0.27 0.03 0.47 0.47 3rd 0.03 0.34 0.53 0.05 0.71 0.80 Total 1.81 25.09 50.52 2.18 30.19 42.20 Table 2: Timings for strategies (see Section 7.2) on test for solutions which do not satisfy constraints after running MSA. We give (in seconds) average time, standard deviation, median time, time to parse up to the 3rd quartile and total time. 
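The timings above apply only to sentences where the unconstrained MSA violates a constraint. As an illustration of that validity test, the sketch below checks k-bounded block degree and well-nestedness for a tree given as a head array, following the definitions of Section 3 directly. The quadratic detection algorithms of Möhl (2006) and Havelka (2007) used in the parser are far more efficient; this brute-force version is only illustrative, and the head-array representation is an assumption of the sketch.

```python
def _yields(heads):
    """Yield of each non-root vertex, for a tree given as a head array
    (heads[j] = head of word j; position 0 is the root and heads[0] is unused)."""
    out = {v: {v} for v in range(1, len(heads))}
    for j in range(1, len(heads)):
        v = heads[j]
        while v != 0:          # add j to the yield of every non-root ancestor
            out[v].add(j)
            v = heads[v]
    return out

def _block_degree(s):
    """Number of blocks of contiguous words covered by vertex set s."""
    return sum(1 for v in s if v - 1 not in s)

def _crossing(a, b):
    """True if there exist i, j in a and k, l in b with i < k < j < l."""
    for i in a:
        for j in a:
            if i < j and any(i < k < j for k in b) and any(l > j for l in b):
                return True
    return False

def is_valid(heads, k):
    """Check k-bounded block degree and well-nestedness of a dependency tree."""
    ys = list(_yields(heads).values())
    if any(_block_degree(y) > k for y in ys):
        return False
    return not any(not (a & b) and (_crossing(a, b) or _crossing(b, a))
                   for a in ys for b in ys if a is not b)
```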
50 100 150 200 0.96 0.97 0.98 0.99 1 Figure 2: Optimality rate (y-axis) vs number of subgradient iterations (x-axis) for English (thin blue) and German (thick red). Solid line is the optimal rate with certificate, dashed is without. ing training. It did not significantly improve accuracy and made training and decoding slower. 7.4 Parsing Results Table 3 shows attachment score (UAS), percentage of valid dependency trees and relative time to (MSA) for different systems for our five decoding strategies. We can see (B&B) is on a par with (LR) on some corpora and more accurate on the others. The former takes more time, and the improvement is correlated with time difference for all corpora but the PTB. Dividing the five corpora in three cases, we can see that: 1. For English and Spanish, where projective dependency trees represent more than 90% of the data, (Eisner) outperforms (MSA). Our methods lie between the two. Here it is better to search for projective trees and (LR) and (B&B) are not interesting in terms of UAS. This is confirmed by the results of (MSA/Eisner). 2. For German and Dutch, where projective dependency trees represent less than 70% of the data, (MSA) outperforms (Eisner). For German, where well-nestedness is not required, our methods are as accurate as (MSA)13, while for Dutch our methods seem to be useful, as (B&B) outperforms all sys13For German, we notice a small regression which we attribute to the representation of enumerations in the corpus: for enumerations of k elements, k-bounded block-degree subtrees must be generated. tems. Moreover, our two methods guarantee validity. 3. For Portuguese, where projective dependency trees represent around 80% of the data, (MSA) is as accurate as (Eisner). In this case we see that, while our heuristic is below, the exact method is more accurate. This seems to be an edge case where neither unconstrained nor projective dependency trees seem to adequately capture the solution space. We also see that it is harder for our methods to give solutions (longer computation times, which tend to indicate that LR cannot guarantee optimality). Our methods are best fitted for this case. In order to see how much well-nested and bounded block-degree structures are missed by a state-of-the-art parser, we compare our results with TurboParser.14 We run the parser with three different feature sets: arc-factored, standard (second-order features), and full (third-order features). The results are shown in Table 4. Our model, by enforcing strict compliance to structural rules (100% valid dependency trees), is closer to the empirical distribution than TurboParser in arc-factored mode on all languages but German. Higher-order scoring functions manage to get more similar to the treebank data than our strict thresholds for all languages but Portuguese, at the expense of a significative computational burden. 14We used 2.1.0 and all defaults but the feature set. 362 MSA Eisner LR B&B MSA/Eisner English UAS 89.45 89.82 89.54 89.53 89.60 2-BBD/WN 96.02 – – – – Relative Time 1 2.5 1.8 2.5 1.2 German UAS 87.79 86.97 87.78 87.78 87.46 3-BBD 98.81 – – – – Relative Time 1 2.1 1.5 1.7 1.3 Dutch UAS 77.30 76.62 76.96 77.40 76.79 3-BBD/WN 94.82 – – – – Relative Time 1 1.5 1.7 5 1.3 Spanish UAS 83.37 83.56 83.37 83.44 83.48 2-BBD/WN 92.62 – – – – Relative Time 1 2.8 2.7 3 1.5 Portuguese UAS 83.13 83.14 82.99 83.21 82.90 3-BBD/WN 87.84 – – – – Relative Time 1 2.7 5.7 19.7 1.7 Table 3: UAS, percentage of valid structure and decoding time for test data. 
Time is relative to MSA decoding. The percentage of valid structure is always 100% except for MSA decoding. English (99.84) German (99.27) Dutch (99.87) Spanish (99.94) Portuguese (99.24) Order UAS VDT RT UAS VDT RT UAS VDT RT UAS VDT RT UAS VDT RT 1st 89.29 94.87 1 87.97 98.74 1 76.10 93.26 1 83.11 93.43 1 83.53 94.79 1 2nd 92.04 99.75 16 89.83 99.28 16 79.05 97.93 18 86.61 98.54 10 87.35 98.96 15 3rd 92.37 99.75 34 90.35 99.24 36 79.68 97.41 37 87.31 99.64 18 88.09 98.98 32 Table 4: UAS, percentage of valid dependency trees (VDT) and relative time (RT) obtained by Turboparser for different score functions on test sets. For each language we give the percentage of valid dependency structures in the data, according to the constraints postulated in Section 7.1. We interpret this fact as an indication that adding higher order features into our system would make the relaxation method converge more often and faster. 8 Conclusion We presented a novel characterization of dependency trees complying with two structural rules: bounded block degree and well-nestedness from which we derived two methods for arc-factored dependency parsing. The first one is a heuristic which relies on Lagrangian Relaxation and the Chu-Liu-Edmonds efficient maximum spanning arborescence algorithm. The second one is an exact Branch-and-Bound procedure where bounds are provided by Lagrangian Relaxation. We showed experimentally that these methods give results comparable with state-of-the-art arcfactored parsers, while enforcing constraints in all cases. In this paper we focused on arc-factor models, but our method could be extended to higher order models, following the dual decomposition method presented in (Koo et al., 2010) in which the maximum-weight spanning arborescence component would be replaced by our constrained model. Our method opens new perspectives for LTAG parsing, in particular using decomposition techniques where dependencies and templates are predicted separately. Moreover, while well-nested dependencies with 2-bounded block degree can represent LTAG derivations, toggling the wellnestedness or setting the block degree bound allows to express the whole range of derivations in lexicalized LCFRS, whether well-nested or with a bounded fan-out. Our algorithm can exactly represent these settings with a comparable complexity. Acknowledgements We thank the anonymous reviewers for their insightful comments which let us significantly improve the submitted paper. This work is supported by a public grant overseen by the French National Research Agency (ANR) as part of the Investissements d’Avenir program (ANR-10-LABX-0083). References [Afonso et al.2002] Susana Afonso, Eckhard Bick, Renato Haber, and Diana Santos. 2002. Floresta sint´a (c) tica: A treebank for portuguese. In LREC. [Attardi2006] Giuseppe Attardi. 2006. Experiments with a multilanguage non-projective dependency parser. In Proceedings of the Tenth Conference on Computational Natural Language Learning, pages 166–170. Association for Computational Linguistics. [Ballesteros and Carreras2015] Miguel Ballesteros and Xavier Carreras. 2015. Transition-based spinal 363 parsing. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning, pages 289–299, Beijing, China, July. Association for Computational Linguistics. [Beasley1993] John E Beasley. 1993. Lagrangian relaxation. In Modern heuristic techniques for combinatorial problems, pages 243–303. John Wiley & Sons, Inc. [Bertsekas1999] Dimitri P Bertsekas. 1999. Nonlinear programming. 
Athena scientific. [Bodirsky et al.2009] Manuel Bodirsky, Marco Kuhlmann, and Mathias M¨ohl. 2009. Wellnested drawings as models of syntactic structure. In Tenth Conference on Formal Grammar and Ninth Meeting on Mathematics of Language, pages 195–203. [Camerini et al.1975] Paolo M Camerini, Luigi Fratta, and Francesco Maffioli. 1975. On improving relaxation methods by modified gradient techniques. In Nondifferentiable optimization, pages 26–34. Springer. [Carreras et al.2008] Xavier Carreras, Michael Collins, and Terry Koo. 2008. Tag, dynamic programming, and the perceptron for efficient, feature-rich parsing. In Proceedings of the Twelfth Conference on Computational Natural Language Learning, pages 9–16. Association for Computational Linguistics. [Collins2002] Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10, pages 1–8. Association for Computational Linguistics. [Das et al.2012] Dipanjan Das, Andr´e FT Martins, and Noah A Smith. 2012. An exact dual decomposition algorithm for shallow semantic parsing with constraints. In Proceedings of the First Joint Conference on Lexical and Computational SemanticsVolume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 209–217. Association for Computational Linguistics. [Du et al.2015] Yantao Du, Weiwei Sun, and Xiaojun Wan. 2015. A data-driven, factorization parser for CCG dependency structures. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference of Natural Language Processing of the Asian Federation of Natural Language Processing, pages 1545–1555. [Duchi et al.2011] John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159. [Eisner and Satta2000] Jason Eisner and Giorgio Satta. 2000. A faster parsing algorithm for lexicalized tree-adjoining grammars. In Proceedings of the 5th Workshop on Tree-Adjoining Grammars and Related Formalisms (TAG+ 5), pages 14–19. [Eisner2000] Jason Eisner. 2000. Bilexical grammars and their cubic-time parsing algorithms. In Advances in probabilistic and other parsing technologies, pages 29–61. Springer. [Fern´andez-Gonz´alez and Martins2015] Daniel Fern´andez-Gonz´alez and Andr´e F. T. Martins. 2015. Parsing as Reduction. In Annual Meeting of the Association for Computational Linguistics (ACL’15), Beijing, China, July. [Fischetti and Toth1992] Matteo Fischetti and Paolo Toth. 1992. An additive bounding procedure for the asymmetric travelling salesman problem. Mathematical Programming. [Fisher1981] Marshall L Fisher. 1981. The lagrangian relaxation method for solving integer programming problems. Management science, 27. [Gabow and Tarjan1984] Harold N. Gabow and Robert E. Tarjan. 1984. Efficient algorithms for a family of matroid intersection problems. Journal of Algorithms, 5(1):80–131. [G´omez-Rodr´ıguez et al.2009] Carlos G´omezRodr´ıguez, David Weir, and John Carroll. 2009. Parsing mildly non-projective dependency structures. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pages 291–299. Association for Computational Linguistics. 
[G´omez-Rodr´ıguez et al.2011] Carlos G´omezRodr´ıguez, John Carroll, and David Weir. 2011. Dependency parsing schemata and mildly nonprojective dependency parsing. Computational linguistics, 37(3):541–586. [Havelka2007] Jiˇr´ı Havelka. 2007. Relationship between non-projective edges, their level types, and well-nestedness. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers, pages 61–64. Association for Computational Linguistics. [Joshi and Schabes1997] Aravind K Joshi and Yves Schabes. 1997. Tree-adjoining grammars. In Handbook of formal languages, pages 69–123. Springer. [Kallmeyer and Kuhlmann2012] Laura Kallmeyer and Marco Kuhlmann. 2012. A formal model for plausible dependencies in lexicalized tree adjoining grammar. In Proceedings of TAG, volume 11, pages 108– 116. [Koo and Collins2010] Terry Koo and Michael Collins. 2010. Efficient third-order dependency parsers. In Proceedings of ACL. 364 [Koo et al.2010] Terry Koo, Alexander M Rush, Michael Collins, Tommi Jaakkola, and David Sontag. 2010. Dual decomposition for parsing with non-projective head automata. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1288–1298. Association for Computational Linguistics. [Land and Doig1960] Ailsa H. Land and Alison G. Doig. 1960. An automatic method of solving discrete programming problems. Econometrica: Journal of the Econometric Society, 28(3):497–520. [Lemar´echal2001] Claude Lemar´echal. 2001. Lagrangian relaxation. In Computational combinatorial optimization, pages 112–156. Springer. [Lucena2005] Abilio Lucena. 2005. Non delayed relax-and-cut algorithms. Annals of Operations Research, 140(1):375–410. [Lucena2006] Abilio Lucena. 2006. Lagrangian relaxand-cut algorithms. In Handbook of Optimization in Telecommunications, pages 129–145. Springer. [Martins et al.2009] Andr´e FT Martins, Noah A Smith, and Eric P Xing. 2009. Concise integer linear programming formulations for dependency parsing. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pages 342–350. Association for Computational Linguistics. [Martins et al.2010] Andr´e F. T. Martins, Noah A. Smith, Eric P. Xing, Pedro M. Q. Aguiar, and M´ario A. T. Figueiredo. 2010. Turbo parsers: Dependency parsing by approximate variational inference. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP 2010, 9-11 October 2010, MIT Stata Center, Massachusetts, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 34–44. [McDonald and Pereira2006] Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In EACL. [McDonald et al.2005] Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajiˇc. 2005. Nonprojective dependency parsing using spanning tree algorithms. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 523–530. Association for Computational Linguistics. [McDonald et al.2013] Ryan T McDonald, Joakim Nivre, Yvonne Quirmbach-Brundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith B Hall, Slav Petrov, Hao Zhang, Oscar T¨ackstr¨om, et al. 2013. Universal dependency annotation for multilingual parsing. In ACL (2), pages 92–97. Citeseer. [M¨ohl2006] Mathias M¨ohl. 2006. 
Drawings as models of syntactic structure: Theory and algorithms. Ph.D. thesis, Saarland University. [Nivre2003] Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing. In Proceedings of the 8th International Workshop on Parsing Technologies (IWPT. [Pitler and McDonald2015] Emily Pitler and Ryan McDonald. 2015. A linear-time transition system for crossing interval trees. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics, pages 662— -671. [Pitler et al.2012] Emily Pitler, Sampath Kannan, and Mitchell Marcus. 2012. Dynamic programming for higher order parsing of gap-minding trees. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 478– 488. Association for Computational Linguistics. [Qian and Liu2013] Xian Qian and Yang Liu. 2013. Branch and bound algorithm for dependency parsing with non-local features. Transactions of the Association for Computational Linguistics, 1:37–48. [Riedel et al.2012] Sebastian Riedel, David Smith, and Andrew McCallum. 2012. Parse, price and cut: delayed column and row generation for graph based parsers. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 732–743. Association for Computational Linguistics. [Riedel2009] Sebastian Riedel. 2009. Cutting plane map inference for markov logic. In SRL 2009. [Rush and Collins2012] Alexander M Rush and Michael Collins. 2012. A tutorial on dual decomposition and lagrangian relaxation for inference in natural language processing. Journal of Artificial Intelligence Research. [Rush et al.2013] Alexander M. Rush, Yin-Wen Chang, and Michael Collins. 2013. Optimal beam search for machine translation. In Proceedings of EMNLP. [Satta and Kuhlmann2014] Giorgio Satta and Marco Kuhlmann. 2014. Efficient parsing for head-split dependency trees. Transactions of the Association for Computational Linguistics, 1:267–278. [Schrijver2003] A. Schrijver. 2003. Combinatorial Optimization - Polyhedra and Efficiency. Springer. [Seddah et al.2014] Djam´e Seddah, Sandra K¨ubler, and Reut Tsarfaty. 2014. Introducing the spmrl 2014 shared task on parsing morphologically-rich languages. In Proceedings of the First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of Non-Canonical Languages, pages 103–109. [Shen and Joshi2007] Libin Shen and Aravind K Joshi. 2007. Bidirectional ltag dependency parsing. Technical report, Technical Report 07-02, IRCS, University of Pennsylvania. 365 [Vadas and Curran2007] David Vadas and James R. Curran. 2007. Adding noun phrase structure to the penn treebank. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL-07), pages 240–247. [Van der Beek et al.2002] Leonoor Van der Beek, Gosse Bouma, Rob Malouf, and Gertjan Van Noord. 2002. The alpino dependency treebank. Language and Computers, 45(1):8–22. [Zeiler2012] Matthew D Zeiler. 2012. Adadelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701. 366
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 367–377, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Query Expansion with Locally-Trained Word Embeddings Fernando Diaz Microsoft [email protected] Bhaskar Mitra Microsoft [email protected] Nick Craswell Microsoft [email protected] Abstract Continuous space word embeddings have received a great deal of attention in the natural language processing and machine learning communities for their ability to model term similarity and other relationships. We study the use of term relatedness in the context of query expansion for ad hoc information retrieval. We demonstrate that word embeddings such as word2vec and GloVe, when trained globally, underperform corpus and query specific embeddings for retrieval tasks. These results suggest that other tasks benefiting from global embeddings may also benefit from local embeddings. 1 Introduction Continuous space embeddings such as word2vec (Mikolov et al., 2013b) or GloVe (Pennington et al., 2014a) project terms in a vocabulary to a dense, lower dimensional space. Recent results in the natural language processing community demonstrate the effectiveness of these methods for analogy and word similarity tasks. In general, these approaches provide global representations of words; each word has a fixed representation, regardless of any discourse context. While a global representation provides some advantages, language use can vary dramatically by topic. For example, ambiguous terms can easily be disambiguated given local information in immediately surrounding words (Harris, 1954; Yarowsky, 1993). The window-based training of word2vec style algorithms exploits this distributional property. A global word embedding, even when trained using local windows, risks capturing only coarse representations of those topics dominant in the corpus. While a particular embedding may be appropriate for a specific word within a sentence-length context globally, it may be entirely inappropriate within a specific topic. Gale et al. refer to this as the ‘one sense per discourse’ property (Gale et al., 1992). Previous work by Yarowsky demonstrates that this property can be successfully combined with information from nearby terms for word sense disambiguation (Yarowsky, 1995). Our work extends this approach to word2vec-style training in the context word similarity. For many tasks that require topic-specific linguistic analysis, we argue that topic-specific representations should outperform global representations. Indeed, it is difficult to imagine a natural language processing task that would not benefit from an understanding of the local topical structure. Our work focuses on a query expansion, an information retrieval task where we can study different lexical similarity methods with an extrinsic evaluation metric (i.e. retrieval metrics). Recent work has demonstrated that similarity based on global word embeddings can be used to outperform classic pseudo-relevance feedback techniques (Sordoni et al., 2014; al Masri et al., 2016). We propose that embeddings be learned on topically-constrained corpora, instead of large topically-unconstrained corpora. In a retrieval scenario, this amounts to retraining an embedding on documents related to the topic of the query. We present local embeddings which capture the nuances of topic-specific language better than global embeddings. 
There is substantial evidence that global methods underperform local methods for information re367 trieval tasks such as query expansion (Xu and Croft, 1996), latent semantic analysis (Hull, 1994; Sch¨utze et al., 1995; Singhal et al., 1997), cluster-based retrieval (Tombros and van Rijsbergen, 2001; Tombros et al., 2002; Willett, 1985), and term clustering (Attar and Fraenkel, 1977). We demonstrate that the same holds true when using word embeddings for text retrieval. 2 Motivation For the purpose of motivating our approach, we will restrict ourselves to word2vec although other methods behave similarly (Levy and Goldberg, 2014). These algorithms involve discriminatively training a neural network to predict a word given small set of context words. More formally, given a target word w and observed context c, the instance loss is defined as, ℓ(w, c) = log σ(φ(w) · ψ(c)) + η · Ew∼θC [log σ(−φ(w) · ψ(w))] where φ : V →ℜk projects a term into a kdimensional embedding space, ψ : Vm →ℜk projects a set of m terms into a k-dimensional embedding space, and w is a randomly sampled ‘negative’ context. The parameter η controls the sampling of random negative terms. These matrices are estimated over a set of contexts sampled from a large corpus and minimize the expected loss, Lc = Ew,c∼pc [ℓ(w, c)] (1) where pc is the distribution of word-context pairs in the training corpus and can be estimated from corpus statistics. While using corpus statistics may make sense absent any other information, oftentimes we know that our analysis will be topically constrained. For example, we might be analyzing the ‘sports’ documents in a collection. The language in this domain is more specialized and the distribution over word-context pairs is unlikely to be similar to pc(w, c). In fact, prior work in information retrieval suggests that documents on subtopics in a collection have very different unigram distributions compared to the whole corpus (Cronen-Townsend et al., 2002). Let pt(w, c) be the probability log(weight) -1 0 1 2 3 4 5 0 50 100 150 Figure 1: Importance weights for terms occurring in documents related to ‘argentina pegging dollar’ relative to frequency in gigaword. of observing a word-context pair conditioned on the topic t. The expected loss under this distribution is (Shimodaira, 2000), Lt = Ew,c∼pc pt(w, c) pc(w, c)ℓ(w, c)  (2) In general, if our corpus consists of sufficiently diverse data (e.g. Wikipedia), the support of pt(w, c) is much smaller than and contained in that of pc(w, c). The loss, ℓ, of a context that occurs more frequently in the topic, will be amplified by the importance weight ω = pt(w,c) pc(w,c). Because topics require specialized language, this is likely to occur; at the same time, these contexts are likely to be underemphasized in training a model according to Equation 1. In order to quantify this, we took a topic from a TREC ad hoc retrieval collection (see Section 5 for details) and computed the importance weight for each term occurring in the set of on-topic documents. The histogram of weights ω is presented in Figure 1. While larger probabilities are expected since the size of a topic-constrained vocabulary is smaller, there are a non-trivial number of terms with much larger importance weights. If the loss, ℓ(w), of a word2vec embedding is worse for these words with low pc(w), then we expect these errors to be exacerbated for the topic. Of course, these highly weighted terms may have a low value for pt(w) but a very high value relative to the corpus. 
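The importance weights plotted in Figure 1 can be approximated from simple unigram statistics. The sketch below is an illustration only, assuming tokenized documents and a unigram approximation of p_t and p_c rather than the authors' exact procedure:

```python
from collections import Counter

def unigram_dist(docs):
    """Maximum-likelihood unigram distribution over a list of tokenized documents."""
    counts = Counter(tok for doc in docs for tok in doc)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def importance_weights(topic_docs, corpus_docs, floor=1e-9):
    """omega(w) = p_t(w) / p_c(w), a unigram approximation of p_t(w,c) / p_c(w,c)."""
    p_t = unigram_dist(topic_docs)
    p_c = unigram_dist(corpus_docs)
    return {w: p / max(p_c.get(w, 0.0), floor) for w, p in p_t.items()}

# Terms frequent in the on-topic documents but rare in the corpus receive
# large weights, which is the regime where the corpus-level objective of
# Equation 1 underweights the corresponding contexts.
topic = [["argentina", "peso", "dollar", "peg"], ["peso", "devaluation", "dollar"]]
corpus = topic + [["football", "match", "goal"], ["election", "vote", "senate"]] * 50
print(sorted(importance_weights(topic, corpus).items(), key=lambda kv: -kv[1])[:5])
```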
We can adjust the 368 KL 0.00 0.05 0.10 0.15 rank Figure 2: Pointwise Kullback-Leibler divergence for terms occurring in documents related to ‘argentina pegging dollar’ relative to frequency in gigaword. weights by considering the pointwise KullbackLeibler divergence for each word w, Dw(pt∥pc) = pt(w) log pt(w) pc(w) (3) Words which have a much higher value of pt(w) than pc(w) and have a high absolute value of pt(w) will have high pointwise KL divergence. Figure 2 shows the divergences for the top 100 most frequent terms in pt(w). The higher ranked terms (i.e. good query expansion candidates) tend to have much higher probabilities than found in pc(w). If the loss on those words is large, this may result in poor embeddings for the most important words for the topic. A dramatic change in distribution between the corpus and the topic has implications for performance precisely because of the objective used by word2vec (i.e. Equation 1). The training emphasizes word-context pairs occurring with high frequency in the corpus. We will demonstrate that, even with heuristic downsampling of frequent terms in word2vec, these techniques result in inferior performance for specific topics. Thus far, we have sketched out why using the corpus distribution for a specific topic may result in undesirable outcomes. However, it is even unclear that pt(w|c) = pc(w|c). In fact, we suspect that pt(w|c) ̸= pc(w|c) because of the ‘one sense per discourse’ claim (Gale et al., 1992). We can qualitatively observe the difference in pc(w|c) and pt(w|c) by training global local cutting tax squeeze deficit reduce vote slash budget reduction reduction spend house lower bill halve plan soften spend freeze billion Figure 3: Terms similar to ‘cut’ for a word2vec model trained on a general news corpus and another trained only on documents related to ‘gasoline tax’. two word2vec models: the first on the large, generic Gigaword corpus and the second on a topically-constrained subset of the gigaword. We present the most similar terms to ‘cut’ using both a global embedding and a topicspecific embedding in Figure 3. In this case, the topic is ‘gasoline tax’. As we can see, the ‘tax cut’ sense of ‘cut’ is emphasized in the topic-specific embedding. 3 Local Word Embeddings The previous section described several reasons why a global embedding may result in overgeneral word embeddings. In order to perform topic-specific training, we need a set of topicspecific documents. In information retrieval scenarios users rarely provide the system with examples of topic-specific documents, instead providing a small set of keywords. Fortunately, we can use information retrieval techniques to generate a query-specific set of topical documents. Specifically, we adopt a language modeling approach to do so (Croft and Lafferty, 2003). In this retrieval model, each document is represented as a maximum likelihood language model estimated from document term frequencies. Query language models are estimated similarly, using term frequency in the query. A document score then, is the Kullback-Leibler divergence between the query and document language 369 models, D(pq∥pd) = X w∈V pq(w) log pq(w) pd(w) (4) Documents whose language models are more similar to the query language model will have a lower KL divergence score. For consistency with prior work, we will refer to this as the query likelihood score of a document. 
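A minimal sketch of the query-likelihood scoring in Equation 4 follows; the linear smoothing of the document model against a background model is an assumption added here so that the logarithm stays defined, not a detail fixed by the paper at this point:

```python
import math
from collections import Counter

def lm(tokens):
    """Maximum-likelihood unigram language model."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def kl_score(query_tokens, doc_tokens, background, mu=0.8):
    """D(p_q || p_d) from Equation 4; lower values indicate more relevant documents.
    The document model is linearly smoothed with a background model."""
    p_q, p_d = lm(query_tokens), lm(doc_tokens)
    score = 0.0
    for w, pq in p_q.items():
        pd = mu * p_d.get(w, 0.0) + (1.0 - mu) * background.get(w, 1e-9)
        score += pq * math.log(pq / pd)
    return score

docs = {"d1": "argentina pegged the peso to the dollar".split(),
        "d2": "the football match ended in a draw".split()}
background = lm([tok for d in docs.values() for tok in d])
query = "argentina pegging dollar".split()
print(sorted(docs, key=lambda d: kl_score(query, docs[d], background)))
```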
The scores in Equation 4 can be passed through a softmax function to derive a multinomial over the entire corpus (Lavrenko and Croft, 2001), p(d) = exp(−D(pq∥pd)) P d′ exp(−D(pq∥pd′)) (5) Recall in Section 2 that training a word2vec model weights word-context pairs according to the corpus frequency. Our query-based multinomial, p(d), provides a weighting function capturing the documents relevant to this topic. Although an estimation of the topicspecific documents from a query will be imprecise (i.e. some nonrelevant documents will be scored highly), the language use tends to be consistent with that found in the known relevant documents. We can train a local word embedding using an arbitrary optimization method by sampling documents from p(d) instead of uniformly from the corpus. In this work, we use word2vec, although any method that operates on a sample of documents can be used. 4 Query Expansion with Word Embeddings When using language models for retrieval, query expansion involves estimating an alternative to pq. Specifically, when each expansion term is associated with a weight, we normalize these weights to derive the expansion language model, pq+. This language model is then interpolated with the original query model, p1 q(w) = λpq(w) + (1 −λ)pq+(w) (6) This interpolated language model can then be used with Equation 4 to rank documents (Abdul-Jaleel et al., 2004). We will refer to this as the expanded query score of a document. Now we turn to using word embeddings for query expansion. Let U be an |V| × k term embedding matrix. If q is a |V| × 1 column term vector for a query, then the expansion term weights are UUTq. We then take the top k terms, normalize their weights, and compute pq+(w). We consider the following alternatives for U. The first approach is to use a global model trained by sampling documents uniformly. The second approach, which we propose in this paper, is to use a local model trained by sampling documents from p(d). 5 Methods 5.1 Data To evaluate the different retrieval strategies described in Section 3, we use the following datasets. Two newswire datasets, trec12 and robust, consist of the newswire documents and associated queries from TREC ad hoc retrieval evaluations. The trec12 corpus consists of Tipster disks 1 and 2; and the robust corpus consists of Tipster disks 4 and 5. Our third dataset, web, consists of the ClueWeb 2009 Category B Web corpus. For the Web corpus, we only retain documents with a Waterloo spam rank above 70.1 We present corpus statistics in Table 1. We consider several publicly available global embeddings. We use four GloVe embeddings of different dimensionality trained on the union of Wikipedia and Gigaword documents.2 We use one publicly available word2vec embedding trained on Google News documents.3 We also trained a global embedding for trec12 and robust using the entire corpus. Instead of training a global embedding on the large web collection, we use a GloVe embedding trained on Common Crawl data.4 We train local embeddings with word2vec using one of three retrieval sources. First, we consider documents retrieved from the target corpus of the query (i.e. trec12, robust, or web). We also consider training a local embed1https://plg.uwaterloo.ca/~gvcormac/ clueweb09spam/ 2http://nlp.stanford.edu/data/glove.6B.zip 3https://code.google.com/archive/p/ word2vec/ 4http://nlp.stanford.edu/data/glove.840B. 
300d.zip 370 docs words queries trec12 469,949 438,338 150 robust 528,155 665,128 250 web 50,220,423 90,411,624 200 news 9,875,524 2,645,367 wiki 3,225,743 4,726,862 Table 1: Corpora used for retrieval and local embedding training. ding by performing a retrieval on large auxiliary corpora. We use the Gigaword corpus as a large auxiliary news corpus. We hypothesize that retrieving from a larger news corpus will provide substantially more local training data than a target retrieval. We also use a Wikipedia snapshot from December 2014. We hypothesize that retrieving from a large, high fidelity corpus will provide cleaner language than that found in lower fidelity target domains such as the web. Table 1 shows the relative magnitude of these auxiliary corpora compared to the target corpora. All corpora in Table 1 were stopped using the SMART stopword list5 and stemmed using the Krovetz algorithm (Krovetz, 1993). We used the Indri implementation for indexing and retrieval.6 5.2 Evaluation We consider several standard retrieval evaluation metrics, including NDCG@10 and interpolated precision at standard recall points (J¨arvelin and Kek¨al¨ainen, 2002; van Rijsbergen, 1979). NDCG@10 provides insight into performance specifically at higher ranks. An interpolated precision recall graph describes system performance throughout the entire ranked list. 5.3 Training All retrieval experiments were conducted by performing 10-fold cross-validation across queries. Specifically, we cross-validate the number of expansion terms, k ∈ {5, 10, 25, 50, 100, 250, 500}, and interpolation weight, λ ∈[0, 1]. For local word2vec training, we cross-validate the learning rate α ∈ {10−1, 10−2, 10−3}. 5http://jmlr.csail.mit.edu/papers/volume5/ lewis04a/a11-smart-stop-list/english.stop 6http://www.lemurproject.org/indri/ All word2vec training used the publicly available word2vec cbow implementation.7 When training the local models, we sampled 1000 documents from p(d) with replacement. To compensate for the much smaller corpus size, we ran word2vec training for 80 iterations. Local word2vec models use a fixed embedding dimension of 400 although other choices did not significantly affect our results. Unless otherwise noted, default parameter settings were used. In our experiments, expanded queries rescore the top 1000 documents from an initial query likelihood retrieval. Previous results have demonstrated that this approach results in performance nearly identical with an expanded retrieval at a much lower cost (Diaz, 2015). Because publicly available embeddings may have tokenization inconsistent with our target corpora, we restricted the vocabulary of candidate expansion terms to those occurring in the initial retrieval. If a candidate term was not found in the vocabulary of the embedding matrix, we searched for the candidate in a stemmed version of the embedding vocabulary. In the event that the candidate term was still not found after this process, we removed it from consideration. 6 Results We present results for retrieval experiments in Table 2. We find that embedding-based query expansion outperforms our query likelihood baseline across all conditions. When using the global embedding, the news corpora benefit from the various embeddings in different situations. Interestingly, for trec12, using an embedding trained on the target corpus significantly outperforms all other global embeddings, despite using substantially less data to estimate the model. 
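For concreteness, the training and expansion procedure of Sections 3 through 5 can be summarized in the sketch below. It is a simplified illustration rather than the experimental implementation: gensim's Word2Vec stands in for the word2vec cbow tool, the original query model is taken to be uniform over the query terms, and the restriction of expansion candidates to the initial retrieval's vocabulary is omitted.

```python
import numpy as np
from gensim.models import Word2Vec  # stand-in for the word2vec cbow tool

def local_expansion(query_terms, docs, scores, k=10, lam=0.5, n_samples=1000):
    """docs: tokenized documents from an initial retrieval;
    scores: their query-likelihood scores (negative KL divergences)."""
    scores = np.asarray(scores, dtype=float)
    # Equation 5: a softmax over retrieval scores gives the sampling distribution p(d).
    p_d = np.exp(scores - scores.max())
    p_d /= p_d.sum()
    # Sample documents from p(d) with replacement and train a local embedding
    # (400 dimensions, 80 iterations, as in Section 5.3).
    sampled = [docs[i] for i in np.random.choice(len(docs), size=n_samples, p=p_d)]
    wv = Word2Vec(sampled, vector_size=400, epochs=80, min_count=1).wv
    # Expansion weights UU^T q: score every term against the summed query embedding.
    q_vec = np.sum([wv[t] for t in query_terms if t in wv], axis=0)
    cand = {w: float(wv[w] @ q_vec) for w in wv.index_to_key}
    top = sorted(cand, key=cand.get, reverse=True)[:k]
    norm = sum(cand[w] for w in top)  # assumes the top-k weights are positive
    p_plus = {w: cand[w] / norm for w in top}
    # Equation 6: interpolate the expansion model with the original query model.
    p_q = {t: 1.0 / len(query_terms) for t in query_terms}
    return {w: lam * p_q.get(w, 0.0) + (1.0 - lam) * p_plus.get(w, 0.0)
            for w in set(p_q) | set(p_plus)}
```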
While this performance may be due to the embedding having a tokenization consistent with the target corpus, it may also come from the fact that the corpus is more representative of the target documents than other embeddings which rely on online news or are mixed with non-news content. To some extent this supports our desire to move training closer to the target distribution. Across all conditions, local embeddings sig7https://code.google.com/p/word2vec/ 371 Table 2: Retrieval results comparing query expansion based on various global and local embeddings. Bolded numbers indicate the best expansion in that class of embeddings. Wilcoxon signed rank test between bolded numbers indicates statistically significant improvements (p < 0.05) for all collections. global local wiki+giga gnews target target giga wiki QL 50 100 200 300 300 400 400 400 400 trec12 0.514 0.518 0.518 0.530 0.531 0.530 0.545 0.535 0.563* 0.523 robust 0.467 0.470 0.463 0.469 0.468 0.472 0.465 0.475 0.517* 0.476 web 0.216 0.227 0.229 0.230 0.232 0.218 0.216 0.234 0.236 0.258* 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 trec12 recall precision QL global local 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 robust recall precision QL global local 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.1 0.2 0.3 0.4 0.5 0.6 web recall precision QL global local Figure 4: Interpolated precision-recall curves for query likelihood, the best global embedding, and the best local embedding from Table 2. nificantly outperform global embeddings for query expansion. For our two news collections, estimating the local model using a retrieval from the larger Gigaword corpus led to substantial improvements. This effect is almost certainly due to the Gigaword corpus being similar in writing style to the target corpus but, at the same time, providing significantly more relevant content (Diaz and Metzler, 2006). As a result, the local embedding is trained using a larger variety of topical material than if it were to use a retrieval from the smaller target corpus. An embedding trained with a retrieval from Wikipedia tended to perform worse most likely because the language is dissimilar from news content. Our web collection, on the other hand, benefitted more from embeddings trained using retrievals from the general Wikipedia corpus. The Gigaword corpus was less useful here because news-style language is almost certainly not representative of general web documents. Figure 4 presents interpolated precisionrecall curves comparing the baseline, the best global query expansion method, and the best local query expansion method. Interestingly, although global methods achieve strong performance for NDCG@10, these improvements over the baseline are not reflected in our precision-recall curves. Local methods, on the other hand, almost always strictly dominate both the baseline and global expansion across all recall levels. The results support the hypothesis that local embeddings provide better similarity measures than global embeddings for query expansion. In order to understand why, we first compare the performance differences between local and global embeddings. Figure 2 suggests that we should adopt a local embedding when the local unigram language model deviates from the corpus language model. To test this, we computed the KL divergence between the local unigram distribution, P d p(w|d)p(d), and the corpus unigram language model (CronenTownsend et al., 2002). 
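The divergence used in this analysis, between the retrieval-weighted unigram distribution Σ_d p(w|d) p(d) and the corpus language model, can be computed as in the brief sketch below (illustrative only; the flooring constant is an assumption):

```python
import math
from collections import Counter

def doc_lm(tokens):
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def local_vs_corpus_kl(docs, p_d, corpus_lm, eps=1e-12):
    """KL( sum_d p(w|d) p(d)  ||  corpus unigram language model )."""
    local = Counter()
    for doc, weight in zip(docs, p_d):
        for w, p in doc_lm(doc).items():
            local[w] += weight * p
    return sum(p * math.log(p / max(corpus_lm.get(w, 0.0), eps))
               for w, p in local.items() if p > 0)
```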
We hypothesize that, when this value is high, the topic language is different from the corpus language and the 372 Table 3: Kendall’s τ and Spearman’s ρ between improvement in NDCG@10 and local KL divergence with the corpus language model. The improvement is measured for the best local embedding over the best global embedding. τ ρ trec12 0.0585 0.0798 robust 0.0545 0.0792 web 0.0204 0.0283 global embedding will be inferior to the local embedding. We tested the rank correlation between this KL divergence and the relative performance of the local embedding with respect to the global embedding. These correlations are presented in Table 3. Unfortunately, we find that the correlation is low, although it is positive across collections. We can also qualitatively analyze the differences in the behavior of the embeddings. If we have access to the set of documents labeled relevant to a query, then we can compute the frequency of terms in this set and consider those terms with high frequency (after stopping and stemming) to be good query expansion candidates. We can then visualize where these terms lie in the global and local embeddings. In Figure 5, we present a two-dimensional projection (van der Maaten and Hinton, 2008) of terms for the query ‘ocean remote sensing’, with those good candidates highlighted. Our projection includes the top 50 candidates by frequency and a sample of terms occurring in the query likelihood retrieval. We notice that, in the global embedding, the good candidates are spread out amongst poorer candidates. By contrast, the local embedding clusters the candidates in general but also situates them closely around the query. As a result, we suspect that the similar terms extracted from the local embedding are more likely to include these good candidates. 7 Discussion The success of local embeddings on this task should alarm natural language processing researchers using global embeddings as a representational tool. For one, the approach of learning from vast amounts of data is only efglobal local Figure 5: Global versus local embedding of highly relevant terms. Each point represents a candidate expansion term. Red points have high frequency in the relevant set of documents. White points have low or no frequency in the relevant set of documents. The blue point represents the query. Contours indicate distance from the query. fective if the data is appropriate for the task at hand. And, when provided, much smaller high-quality data can provide much better performance. Beyond this, our results suggest that the approach of estimating global representations, while computationally convenient, may overlook insights possible at query time, or evaluation time in general. A similar local embedding approach can be adopted for any natural language processing task where topical locality is expected and can be estimated. Although we used a query to re-weight the corpus in our experiments, we could just as easily use alternative contextual information (e.g. a sentence, paragraph, or document) in other tasks. Despite these strong results, we believe that 373 there are still some open questions in this work. First, although local embeddings provide effectiveness gains, they can be quite inefficient compared to global embeddings. We believe that there is opportunity to improve the efficiency by considering offline computation of local embeddings at a coarser level than queries but more specialized than the corpus. 
If the retrieval algorithm is able to select the appropriate embedding at query time, we can avoid training the local embedding. Second, although our supporting experiments (Table 3, Figure 5) add some insight into our intuition, the results are not strong enough to provide a solid explanation. Further theoretical and empirical analysis is necessary. 8 Related Work Topical adaptation of models The shortcomings of learning a single global vector representation, especially for polysemic words, have been pointed out before (Reisinger and Mooney, 2010b). The problem can be addressed by training a global model with multiple vector embeddings per word (Reisinger and Mooney, 2010a; Huang et al., 2012) or topicspecific embeddings (Liu et al., 2015). The number of senses for each word may be fixed (Neelakantan et al., 2015), or determined using class labels (Trask et al., 2015). However, to the best of our knowledge, this is the first time that training topic-specific word embeddings has been explored. Several methods exist in the language modeling community for topic-dependent adaptation of language models (Bellegarda, 2004). These can lead to performance improvements in tasks such as machine translation (Zhao et al., 2004) and speech recognition (Nanjo and Kawahara, 2004). Topic-specific data may be gathered in advance, by identifying corpus of topic-specific documents. It may also be gathered during the discourse, using multiple hypotheses from N-best lists as a source of topicspecific language. Then a topic-specific language model is trained (or the global model is adapted) online using the topic-specific training data. A topic-dependent model may be combined with the global model using linear interpolation (Iyer and Ostendorf, 1999) or other more sophisticated approaches (Federico, 1996; Kuhn and De Mori, 1990). Similarly to the adaptation work, we use topicspecific documents to train a topic-specific model. In our case the documents come from a first round of retrieval for the user’s current query, and the word embedding model is trained based on sentences from the topicspecific document set. Unlike the past work, we do not focus on interpolating the local and global models, although this is a promising area for future work. In the current study we focus on a direct comparison between the local-only and global-only approach, for improving retrieval performance. Word embeddings for IR Information Retrieval has a long history of learning representations of words that are low-dimensional dense vectors. These approaches can be broadly classified into two families based on whether they are learnt based on a termdocument matrix or term co-occurence data. Using the term-document matrix for embedding leads to several well-studied approaches such as LSA (Deerwester et al., 1990), PLSA (Hofmann, 1999), and LDA (Blei et al., 2003; Wei and Croft, 2006). The performance of these models varies depending on the task, for example they are known to perform poorly for retrieval tasks unless combined with lexical features (Atreya and Elkan, 2011a). Term-cooccurence based embeddings, such as word2vec (Mikolov et al., 2013b; Mikolov et al., 2013a) and (Pennington et al., 2014b), have recently been remarkably popular for many natural language processing and logical reasoning tasks. However, there are relatively less known successful applications of these models in IR. Ganguly et. al. 
(Ganguly et al., 2015) used the word similarity in the word2vec embedding space as a way to estimate term transformation probabilities in a language modelling setting for retrieval. More recently, Nalisnick et. al. (Nalisnick et al., 2016) proposed to model document about-ness by computing the similarity between all pairs of query and document terms using dual embedding spaces. Both these approaches estimate the semantic relatedness between two terms as the cosine distance between them in the embedding space(s). We adopt a similar notion of term relatedness but focus on demon374 strating improved retrieval performance using locally trained embeddings. Local latent semantic analysis Despite the mathematical appeal of latent semantic analysis, several experiments suggest that its empirical performance may be no better than that of ranking using standard term vectors (Deerwester et al., 1990; Dumais, 1995; Atreya and Elkan, 2011b). In order to address the coarseness of corpus-level latent semantic analysis, Hull proposed restricting analysis to the documents relevant to a query (Hull, 1994). This approach significantly improved over corpus-level analysis for routing tasks, a result that has been reproduced in consequent research (Sch¨utze et al., 1995; Singhal et al., 1997). Our work can be seen as an extension of these results to more recent techniques such as word2vec. 9 Conclusion We have demonstrated a simple and effective method for performing query expansion with word embeddings. Importantly, our results highlight the value of locally-training word embeddings in a query-specific manner. The strength of these results suggests that other research adopting global embedding vectors should consider local embeddings as a potentially superior representation. Instead of using a “Sriracha sauce of deep learning,” as embedding techniques like word2vec have been called, we contend that the situation sometimes requires, say, that we make a b´echamel or a mole verde or a sambal—or otherwise learn to cook. References Nasreen Abdul-Jaleel, James Allan, W. Bruce Croft, Fernando Diaz, Leah Larkey, Xiaoyan Li, Donald Metzler, Mark D. Smucker, Trevor Strohman, Howard Turtle, and Courtney Wade. 2004. Umass at trec 2004: Novelty and hard. In Online Proceedings of 2004 Text REtrieval Conference. Mohannad al Masri, Catherine Berrut, and JeanPierre Chevallet. 2016. A comparison of deep learning based query expansion with pseudo-relevance feedback and mutual information. In Nicola Ferro, Fabio Crestani, MarieFrancine Moens, Josiane Mothe, Fabrizio Silvestri, Maria Giorgio Di Nunzio, Claudia Hauff, and Gianmaria Silvello, editors, Proceedings of the 38th European Conference on IR Research (ECIR 2016), pages 709–715, Cham. Springer International Publishing. Avinash Atreya and Charles Elkan. 2011a. Latent semantic indexing (lsi) fails for trec collections. ACM SIGKDD Explorations Newsletter, 12(2):5–10. Avinash Atreya and Charles Elkan. 2011b. Latent semantic indexing (lsi) fails for trec collections. SIGKDD Explor. Newsl., 12(2):5–10, March. R. Attar and A. S. Fraenkel. 1977. Local feedback in full-text retrieval systems. J. ACM, 24(3):397–417, July. Jerome R Bellegarda. 2004. Statistical language model adaptation: review and perspectives. Speech communication, 42(1):93–108. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. J. Mach. Learn. Res., 3:993–1022. W. Bruce Croft and John Lafferty. 2003. Language Modeling for Information Retrieval. Kluwer Academic Publishing. 
Steve Cronen-Townsend, Yun Zhou, and W. Bruce Croft. 2002. Predicting query performance. In SIGIR ’02: Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, pages 299–306, New York, NY, USA. ACM Press. Scott C. Deerwester, Susan T. Dumais, Thomas K. Landauer, George W. Furnas, and Richard A. Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society of Information Science, 41(6):391–407. Fernando Diaz and Donald Metzler. 2006. Improving the estimation of relevance models using large external corpora. In SIGIR ’06: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 154–161, New York, NY, USA. ACM Press. Fernando Diaz. 2015. Condensed list relevance models. In Proceedings of the 2015 International Conference on The Theory of Information Retrieval, ICTIR ’15, pages 313–316, New York, NY, USA, May. ACM. Susan T. Dumais. 1995. Latent semantic indexing (LSI): TREC-3 report. In Overview of the Third Text REtrieval Conference (TREC-3), pages 219–230. Marcello Federico. 1996. Bayesian estimation methods for n-gram language model adaptation. In Spoken Language, 1996. ICSLP 96. Proceedings., Fourth International Conference on, volume 1, pages 240–243. IEEE. 375 William A. Gale, Kenneth W. Church, and David Yarowsky. 1992. One sense per discourse. In Proceedings of the Workshop on Speech and Natural Language, HLT ’91, pages 233–237, Stroudsburg, PA, USA. Association for Computational Linguistics. Debasis Ganguly, Dwaipayan Roy, Mandar Mitra, and Gareth J.F. Jones. 2015. Word embedding based generalized language model for information retrieval. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’15, pages 795–798, New York, NY, USA. ACM. Zellig S. Harris. 1954. Distributional structure. WORD, 10(2-3):146–162. Thomas Hofmann. 1999. Probabilistic latent semantic indexing. In SIGIR ’99: Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval, pages 50–57, New York, NY, USA. ACM Press. Eric H Huang, Richard Socher, Christopher D Manning, and Andrew Y Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 873–882. Association for Computational Linguistics. David Hull. 1994. Improving text retrieval for the routing problem using latent semantic indexing. In Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’94, pages 282–291, New York, NY, USA. SpringerVerlag New York, Inc. R.M. Iyer and M. Ostendorf. 1999. Modeling long distance dependence in language: topic mixtures versus dynamic cache models. Speech and Audio Processing, IEEE Transactions on, 7(1):30–39, Jan. Kalervo J¨arvelin and Jaana Kek¨al¨ainen. 2002. Cumulated gain-based evaluation of ir techniques. TOIS, 20(4):422–446. Robert Krovetz. 1993. Viewing morphology as an inference process. In SIGIR ’93: Proceedings of the 16th annual international ACM SIGIR conference on Research and development in information retrieval, pages 191–202, New York, NY, USA. ACM Press. Roland Kuhn and Renato De Mori. 1990. A cachebased natural language model for speech recognition. 
Pattern Analysis and Machine Intelligence, IEEE Transactions on, 12(6):570–583. Victor Lavrenko and W. Bruce Croft. 2001. Relevance based language models. In Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval, pages 120–127. ACM Press. Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In Z. Ghahramani, M. Welling, C. Cortes, N.D. Lawrence, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2177–2185. Curran Associates, Inc. Yang Liu, Zhiyuan Liu, Tat-Seng Chua, and Maosong Sun. 2015. Topical word embeddings. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI’15, pages 2418–2424. AAAI Press. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and JeffDean. 2013b. Distributed representations of words and phrases and their compositionality. In C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 3111–3119. Curran Associates, Inc. Eric Nalisnick, Bhaskar Mitra, Nick Craswell, and Rich Caruana. 2016. Improving document ranking with dual word embeddings. In Proc. WWW. International World Wide Web Conferences Steering Committee. Hiroaki Nanjo and Tatsuya Kawahara. 2004. Language model and speaking rate adaptation for spontaneous presentation speech recognition. Speech and Audio Processing, IEEE Transactions on, 12(4):391–400. Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2015. Efficient non-parametric estimation of multiple embeddings per word in vector space. arXiv preprint arXiv:1504.06654. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014a. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014b. Glove: Global vectors for word representation. Proc. EMNLP, 12:1532–1543. Joseph Reisinger and Raymond Mooney. 2010a. A mixture model with sharing for lexical semantics. In Proceedings of the 2010 Conference 376 on Empirical Methods in Natural Language Processing, pages 1173–1182. Association for Computational Linguistics. Joseph Reisinger and Raymond J Mooney. 2010b. Multi-prototype vector-space models of word meaning. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 109–117. Association for Computational Linguistics. Hinrich Sch¨utze, David A. Hull, and Jan O. Pedersen. 1995. A comparison of classifiers and document representations for the routing problem. In Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’95, pages 229–237, New York, NY, USA. ACM. Hidetoshi Shimodaira. 2000. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227 – 244. Amit Singhal, Mandar Mitra, and Chris Buckley. 1997. Learning routing queries in a query zone. SIGIR Forum, 31(SI):25–32, July. Alessandro Sordoni, Yoshua Bengio, and Jian-Yun Nie. 2014. Learning concept embeddings for query expansion by quantum entropy minimization. 
In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, AAAI’14, pages 1586–1592. AAAI Press. Anastasios Tombros and C. J. van Rijsbergen. 2001. Query-sensitive similarity measures for the calculation of interdocument relationships. In CIKM ’01: Proceedings of the tenth international conference on Information and knowledge management, pages 17–24, New York, NY, USA. ACM Press. Anastasios Tombros, Robert Villa, and C. J. Van Rijsbergen. 2002. The effectiveness of query-specific hierarchic clustering in information retrieval. Inf. Process. Manage., 38(4):559– 582, July. Andrew Trask, Phil Michalak, and John Liu. 2015. sense2vec-a fast and accurate method for word sense disambiguation in neural word embeddings. arXiv preprint arXiv:1511.06388. Laurens van der Maaten and Geoffrey E. Hinton. 2008. Visualizing high-dimensional data using t-sne. Journal of Machine Learning Research, 9:2579–2605. C. J. van Rijsbergen. 1979. Information Retrieval. Butterworths. Xing Wei and W. Bruce Croft. 2006. LDA-based document models for ad-hoc retrieval. In SIGIR ’06: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 178–185, New York, NY, USA. ACM Press. Peter Willett. 1985. Query-specific automatic document classification. In International Forum on Information and Documentation, volume 10, pages 28–32. Jinxi Xu and W. Bruce Croft. 1996. Query expansion using local and global document analysis. In Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’96, pages 4–11, New York, NY, USA. ACM. David Yarowsky. 1993. One sense per collocation. In Proceedings of the Workshop on Human Language Technology, HLT ’93, pages 266–271, Stroudsburg, PA, USA. Association for Computational Linguistics. David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of the 33rd Annual Meeting on Association for Computational Linguistics, ACL ’95, pages 189–196, Stroudsburg, PA, USA. Association for Computational Linguistics. Bing Zhao, Matthias Eck, and Stephan Vogel. 2004. Language model adaptation for statistical machine translation with structured query models. In Proceedings of the 20th International Conference on Computational Linguistics, COLING ’04, Stroudsburg, PA, USA. Association for Computational Linguistics. 377
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 378–387, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Together We Stand: Siamese Networks for Similar Question Retrieval Arpita Das1 Harish Yenala1 Manoj Chinnakotla2,1 Manish Shrivastava1 1IIIT Hyderabad, Hyderabad, India {arpita.das,harish.yenala}@research.iiit.ac.in [email protected] 2Microsoft, Hyderabad, India [email protected] Abstract Community Question Answering (cQA) services like Yahoo! Answers1, Baidu Zhidao2, Quora3, StackOverflow4 etc. provide a platform for interaction with experts and help users to obtain precise and accurate answers to their questions. The time lag between the user posting a question and receiving its answer could be reduced by retrieving similar historic questions from the cQA archives. The main challenge in this task is the “lexicosyntactic” gap between the current and the previous questions. In this paper, we propose a novel approach called “Siamese Convolutional Neural Network for cQA (SCQA)” to find the semantic similarity between the current and the archived questions. SCQA consist of twin convolutional neural networks with shared parameters and a contrastive loss function joining them. SCQA learns the similarity metric for question-question pairs by leveraging the question-answer pairs available in cQA forum archives. The model projects semantically similar question pairs nearer to each other and dissimilar question pairs farther away from each other in the semantic space. Experiments on large scale reallife “Yahoo! Answers” dataset reveals that SCQA outperforms current state-of-theart approaches based on translation models, topic models and deep neural network 1https://answers.yahoo.com/ 2http://zhidao.baidu.com/ 3http://www.quora.com/ 4http://stackoverflow.com/ based models which use non-shared parameters. 1 Introduction The cQA forums have emerged as popular and effective means of information exchange on the Web. Users post queries in these forums and receive precise and compact answers in stead of a list of documents. Unlike in Web search, opinion based queries are also answered here by experts and users based on their personal experiences. The question and answers are also enhanced with rich metadata like categories, subcategories, user expert level, user votes to answers etc. One of the serious concerns in cQA is “question-starvation” (Li and King, 2010) where a question does not get immediate answer from any user. When this happens, the question may take several hours and sometimes days to get satisfactory answers or may not get answered at all. This delay in response may be avoided by retrieving semantically related questions from the cQA archives. If a similar question is found in the database of previous questions, then the corresponding best answer can be provided without any delay. However, the major challenge associated with retrieval of similar questions is the lexicosyntactic gap between them. Two questions may mean the same thing but they may differ lexically and syntactically. For example the queries “Why are yawns contagious?” and “Why do we yawn when we see somebody else yawning?” convey the same meaning but differ drastically from each other in terms of words and syntax. Several techniques have been proposed in the literature for similar question retrieval and they could be broadly classified as follows: 1. 
Classic Term Weighting Based Approaches: Classical IR based retrieval 378 models like BM25 (Robertson et al., 1994) and Language modeling for Information Retrieval (LMIR) (Zhai and Lafferty, 2004) score the similarity based on the weights of the matching text terms between the questions. 2. Translation Models: Learning word or phrase level translation models from question-answer pairs in parallel corpora of same language (Jeon et al., 2005; Xue et al., 2008; Zhou et al., 2011). The similarity function between questions is then defined as the probability of translating a given question into another. 3. Topic Models: Learning topic models from question-answer pairs (Ji et al., 2012; Cai et al., 2011; Zhang et al., 2014). Here, the similarity between questions, is defined in the latent topic space discovered by the topic model. 4. Deep Learning Based Approaches: Deep Learning based models like (Zhou et al., 2016),(Qiu and Huang, 2015), (Das et al., 2016) use variants of neural network architectures to model question-question pair similarity. Retrieving semantically similar questions can be thought of as a classification problem with large number of categories. Here, each category contains a set of related questions and the number of questions per category is small. It is possible that given a test question, we find that there are no questions semantically related to it in the archives, it will belong to a entirely new unseen category. Thus, only a subset of the categories is known during the time of training. The intuitive approach to solve this kind of problem would to learn a similarity metric between the question to be classified and the archive of previous questions. Siamese networks have shown promising results in such distance based learning methods (Bromley et al., 1993; Chopra et al., 2005). These networks possess the capability of learning the similarity metric from the available data, without requiring specific information about the categories. In this paper, we propose a novel unified model called Siamese Convolutional Neural Network for cQA. SCQA architecture contain deep convolutional neural networks as twin networks with a contrastive energy function at the top. These twin networks share the weights with each other (parameter sharing). The energy function used is suitable for discriminative training for Energy-Based models (LeCun and Huang, 2005). SCQA learns the shared model parameters and the similarity metric by minimizing the energy function connecting the twin networks. Parameter sharing guarantees that question and its relevant answer are nearer to each other in the semantic space while the question and any answer irrelevant to it are far away from each other. For example, the representations of “President of USA” and “Barack Obama” should be nearer to each other than those of “President of USA” and “Tom Cruise lives in USA”. The learnt similarity metric is used to retrieve semantically similar questions from the archives given a new posted question. Similar question pairs are required to train SCQA which is usually hard to obtain in large numbers. Hence, SCQA overcomes this limitation by leveraging Question-Answer pairs (Q, A) from the cQA archives. This also has additional advantages such as: • The knowledge and expertise of the answerers and askers usually differ in a cQA forum. 
The askers, who are novices or nonexperts, usually use less technical terminology whereas the answerers, who are typically experts, are more likely to use terms which are technically appropriate in the given realm of knowledge. Due to this, a model which learns from Question-Answer (Q, A) training data has the advantage of learning mappings from non-technical and simple terms to technical terms used by experts such as shortsight => myopia etc. This advantage will be lost if we learn from (Q, Q) pairs where both the questions are posed by nonexperts only. • Experts usually include additional topics that are correlated to the question topic which the original askers may not even be aware of. For example, for the question “how can I overcome short sight?”, an expert may give an answer containing the concepts “laser surgery”, ”contact lens”, “LASIK surgery” etc. Due to this, the concept short sight gets associated with these expanded concepts as well. Since, the askers are non-experts, such 379 rich concept associations are hard to learn from (Q, Q) training archives even if they are available in large scale. Thus, leveraging (Q, A) training data leads to learning richer concept/term associations in SCQA. In summary, the following are our main contributions in this paper: • We propose a novel model SCQA based on Siamese Convolutional Neural Network which use shared parameters to learn the similarity metric between question-answer pairs in a cQA dataset. • In SCQA, we overcome the non-availability of training data in the form of questionquestion pairs by leveraging existing question-answer pairs from the cQA archives which also helps in improving the effectiveness of the model. • We reduce the computational complexity by directly using character-level representations of question-answer pairs in stead of using sentence modeling based representations which also helps in handling spelling errors and out-of-vocabulary (OOV) words in documents. The rest of the paper is organized as follows. Section 2 presents the previous approaches to conquer the problem. Section 3 describes the architecture of SCQA. Sections 4 and 5 explain the training and testing phase of SCQA respectively. Section 6 introduces a variant of SCQA by adding textual similarity to it. Section 7 describes the experimental set-up, details of the evaluation dataset and evaluation metrics. In Section 8, quantitative and qualitative results are presented. Finally, Section 9 concludes the paper. 2 Related Work The classical retrieval models BM25 (Robertson et al., 1994), LMIR (Zhai and Lafferty, 2004) do not help much to capture semantic relatedness because they mainly consider textual similarity between queries. Researchers have used translation based models to solve the problem of question retrieval. Jeon et al. (2005) leveraged the similarity between the archived answers to estimate the translation probabilities. Xue et al. (2008) enhanced the performance of word based translation model by combining query likelihood language model to it. Zhou et al. (2011) used phrase based translation model where they considered question answer pairs as parallel corpus. However, Zhang et al. (2014) stated that questions and answers cannot be considered parallel because they are heterogeneous in lexical level and in terms of user behaviors. To overcome these vulnerabilities topic modeling was introduced by (Ji et al., 2012; Cai et al., 2011; Zhang et al., 2014). The approach assumes that questions and answers share some common latent topics. 
These techniques match questions not only on a term level but also on a topic level. Zhou et al. (2015) used a fisher kernel to model the fixed size representation of the variable length questions. The model enhances the embedding of the questions with the metadata “category” involved with them. Zhang et al. (2016) learnt representations of words and question categories simultaneously and incorporated the learnt representations into traditional language models. Following the recent trends, deep learning is also employed to solve this problem. Qiu et al. (2015) introduced convolutional neural tensor network (CNTN), which combines sentence modeling and semantic matching. CNTN transforms the word tokens into vectors by a lookup layer, then encode the questions and answers to fixed-length vectors with convolutional and pooling layers, and finally model their interactions with a tensor layer. Das et al. (2016) used deep structured topic modeling that combined topic model and paired convolutional networks to retrieve related questions. Zhou et al. (2016) used a deep neural network (DNN) to map the question answer pairs to a common semantic space and calculated the relevance of each answer given the query using cosine similarity between their vectors in that semantic space. Finally they fed the learnt semantic vectors into a learning to rank (LTR) framework to learn the relative importance of each feature. On a different line of research, several Textual-based Question Answering (QA) systems (Qanda5, QANUS6, QSQA7 etc.) are developed that retrieve answers from the Web and other textual sources. Similarly, structured QA systems 5http://www.openchannelfoundation.org/ projects/Qanda/ 6http://www.qanus.com/ 7http://www.dzonesoftware.com/ products/open-source-question-answer-software/ 380 F(Q) Convolutional Neural Network F(A) Convolutional Neural Network W || F(Q) - F(A) || S Q A F(Q) F(A) Figure 1: Architecture of Siamese network. (Aqualog8, NLBean9 etc.) obtain answers from structured information sources with predefined ontologies. QALL-ME Framework (Ferrandez et al., 2011) is a reusable multilingual QA architecture built using structured data modeled by an ontology. The reusable architecture of the system may be utilized later to incorporate multilingual question retrieval in SCQA. 2.1 Siamese Neural Network Siamese Neural Networks (shown in Figure 1) were introduced by Bromley et al. (1993) to solve the problem of signature verification. Later, Chopra et al. (2005) used the architecture with discriminative loss function for face verification. Recently these networks are used extensively to enhance the quality of visual search (Liu et al., 2008; Ding et al., 2008). Let, F(X) be the family of functions with set of parameters W. F(X) is assumed to be differentiable with respect to W. Siamese network seeks a value of the parameter W such that the symmetric similarity metric is small if X1 and X2 belong to the same category, and large if they belong to different categories. The scalar energy function S(Q, A) that measures the semantic relatedness between question answer pair (Q,A) can be defined as: S(Q, A) = ∥F(Q) −F(A)∥ (1) In SCQA question and relevant answer pairs are fed to train the network. The loss function is minimized so that S(Q, A) is small if the answer A is relevant to the question Q and large otherwise. 8http://technologies.kmi.open.ac.uk/ aqualog/ 9http://www.markwatson.com/opensource/ Figure 2: Architecture of SCQA. 
The network consists of repeating convolution, max pooling and ReLU layers and a fully connected layer. Also the weights W1 to W5 are shared between the sub-networks. 3 Architecture of SCQA As shown in Figure 2, SCQA consists of a pair of deep convolutional neural networks (CNN) with convolution, max pooling and rectified linear (ReLU) layers and a fully connected layer at the top. CNN gives a non linear projection of the question and answer term vectors in the semantic space. The semantic vectors yielded are connected to a layer that measures distance or similarity between them. The contrastive loss function combines the distance measure and the label. The gradient of the loss function with respect to the weights and biases shared by the sub-networks, is computed using back-propagation. Stochastic Gradient Descent method is used to update the parameters of the sub-networks. 3.1 Inputs to SCQA The size of training data used is in millions, thus representing every word with one hot vector would be practically infeasible. Word hashing introduced by Mcnamee et al. (2004) involves letter n-gram to reduce the dimensionality of term vectors. For a word, say, “table” represented as (#table#) where # is used as delimiter, letter 3-grams would be #ta, tab, abl, ble, le#. Thus word hashing is character level representation of documents which takes care of OOV words and words with minor spelling errors. It represents a query using a lower dimensional vector with dimension equal to number of unique letter trigrams in the training dataset (48,536 in our case). The input to the twin networks of SCQA are word hashed term vectors of the question and 381 answer pair and a label. The label indicates whether the sample should be placed nearer or farther in the semantic space. For positive samples (which are expected to be nearer in the semantic space), twin networks are fed with word hashed vectors of question and relevant answers which are marked as “best-answer” or “most voted answers” in the cQA dataset of Yahoo! Answers (question-relevant answer pair). For negative samples (which are expected to be far away from each other in the semantic space), twin networks are fed with word hashed vectors of question and answer of any other random question from the dataset (question-irrelevant answer pair). 3.2 Convolution Each question-answer pair is word hashed into (qiai) such that qi ∈Rnt and ai ∈Rnt where nt is the total number of unique letter trigrams in the training data. Convolution layer is applied on the word hashed question answer vectors by convolving a filter with weights c ∈Rhxw where h is the filter height and w is the filter width. A filter consisting of a layer of weights is applied to a small patch of word hashed vector to get a single unit as output. The filter is slided across the length of vector such that the resulting connectivity looks like a series of overlapping receptive fields which output of width w. 3.3 Max Pooling Max pooling performs a kind of non-linear downsampling. It splits the filter outputs into small nonoverlapping grids (larger grids result to greater the signal reduction), and take the maximum value in each grid as the value in the output of reduced size. Max pooling layer is applied on top of the output given by convolutional network to extract the crucial local features to form a fixed-length feature vector. 3.4 ReLU Non-linear function Rectified linear unit (ReLU) is applied element-wise to the output of max pooling layer. ReLU is defined as f(x) = max(0, x). 
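The letter-trigram word hashing of Section 3.1, which produces the inputs that these convolution, pooling and ReLU layers operate on, can be sketched in a few lines. This is not the authors' code: the toy trigram inventory below is for illustration only, whereas the model itself uses the 48,536 unique letter trigrams of its training data.

```python
# Minimal sketch of letter-trigram word hashing (Section 3.1).
# The trigram inventory is built from a toy corpus purely for illustration.
from collections import Counter

def letter_trigrams(word):
    """Letter trigrams of a word delimited by '#', e.g. 'table' -> #ta, tab, abl, ble, le#."""
    padded = "#" + word.lower() + "#"
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

def build_trigram_vocab(corpus):
    """Map every trigram seen in the corpus to a dimension index."""
    vocab = {}
    for text in corpus:
        for word in text.split():
            for tri in letter_trigrams(word):
                vocab.setdefault(tri, len(vocab))
    return vocab

def word_hash(text, vocab):
    """Bag-of-trigrams vector of a question or answer (dense list for readability)."""
    counts = Counter(tri for word in text.split()
                     for tri in letter_trigrams(word) if tri in vocab)
    vec = [0] * len(vocab)
    for tri, c in counts.items():
        vec[vocab[tri]] = c
    return vec

corpus = ["how can i overcome short sight", "laser surgery can correct myopia"]
vocab = build_trigram_vocab(corpus)
print(letter_trigrams("table"))   # ['#ta', 'tab', 'abl', 'ble', 'le#']
print(len(vocab), word_hash("short sight surgery", vocab)[:10])
```

The resulting sparse trigram vectors are what the twin sub-networks consume, which is why out-of-vocabulary words and minor spelling errors are handled gracefully.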
ReLU is preferred because it simplifies backpropagation, makes learning faster and also avoids saturation. 3.5 Fully Connected Layer The terminal layer of each convolutional sub-network is a fully connected layer. It converts the output of the last ReLU layer into a fixed-length semantic vector s \in \mathbb{R}^{n_s} for the input to the sub-network. We have empirically set the value of n_s to 128 in SCQA. 4 Training We train SCQA on a question while looking for semantic similarity with the answers relevant to it. SCQA differs from other deep learning counterparts through its property of parameter sharing. Training the network with a shared set of parameters not only reduces the number of parameters (and thus saves a lot of computation) but also ensures consistency of the representations of questions and answers in the semantic space. The shared parameters of the network are learnt with the aim of minimizing the semantic distance between the question and its relevant answers and maximizing the semantic distance between the question and irrelevant answers. Given an input \{q_i, a_i\}, where q_i and a_i are the i-th question-answer pair, and a label y_i with y_i \in \{1, -1\}, the loss function is defined as: \mathrm{loss}(q_i, a_i) = \begin{cases} 1 - \cos(q_i, a_i), & \text{if } y_i = 1; \\ \max(0, \cos(q_i, a_i) - m), & \text{if } y_i = -1; \end{cases} where m is the margin which decides by how much dissimilar pairs should be pushed away from each other; it generally varies between 0 and 1. The loss function is minimized such that question-answer pairs with label 1 (question-relevant answer pairs) are projected nearer to each other and those with label -1 (question-irrelevant answer pairs) are projected far away from each other in the semantic space. The model is trained by minimizing the overall loss function over a batch. The objective is to minimize: L(\Lambda) = \sum_{(q_i, a_i) \in C \cup C'} \mathrm{loss}(q_i, a_i) \quad (2) where C contains the batch of question-relevant answer pairs and C' contains the batch of question-irrelevant answer pairs. The parameters shared by the convolutional sub-networks are updated using Stochastic Gradient Descent (SGD). 5 Testing At test time, we need to retrieve similar questions given a query. We pair the query with every question in the archive and feed the pairs to SCQA. The term vectors of the question pairs are word hashed and fed to the twin sub-networks. The trained shared weights of SCQA project the question vectors into the semantic space. The similarity between the pairs is calculated using the similarity metric learnt during training, so SCQA outputs a distance measure (score) for each pair of questions. The threshold is set dynamically to the average similarity score across questions, and we output only those questions whose similarity exceeds this average. 6 Siamese Neural Network with Textual Similarity SCQA is trained using question-relevant answer pairs as positive samples and question-irrelevant answer pairs as negative samples. It models basic text similarity poorly because, in the (Q, A) training pairs, the answerers often do not repeat the question words while providing the answer. For example, for the question "Who is the President of the US?", the answerer would just provide "Barack Obama". Due to this, although the model learns that president of the US => Barack Obama, the similarity for president => president would not be high, and hence needs to be augmented through BM25 or a similar function. Though SCQA can strongly model semantic relations between documents, it needs boosting in the area of textual similarity.
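Before turning to that boosting, the training objective above can be made concrete. The sketch below is not the authors' implementation: it assumes the twin sub-networks have already produced 128-dimensional semantic vectors (random placeholders here) and only shows the cosine-based contrastive loss and its batch sum from Eq. (2).

```python
# Minimal sketch of the contrastive loss of Section 4 (not the authors' code).
# q_vec and a_vec stand for the 128-d semantic vectors produced by the
# weight-sharing twin sub-networks; here they are random placeholders.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def contrastive_loss(q_vec, a_vec, label, margin=0.5):
    """label = 1 for a question-relevant answer pair, -1 for an irrelevant one."""
    sim = cosine(q_vec, a_vec)
    if label == 1:
        return 1.0 - sim                 # pull relevant pairs together
    return max(0.0, sim - margin)        # push irrelevant pairs below the margin

def batch_loss(batch):
    """Objective (2): sum of pairwise losses over C (positives) and C' (negatives)."""
    return sum(contrastive_loss(q, a, y) for q, a, y in batch)

rng = np.random.default_rng(0)
batch = [(rng.standard_normal(128), rng.standard_normal(128), y) for y in (1, 1, -1, -1)]
print(batch_loss(batch))
```

In the full model these losses would be backpropagated through the shared convolutional weights with SGD, as described in Section 4; a framework such as PyTorch or TensorFlow would handle that part.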
The sense of word based similarity is infused to SCQA by using BM25 ranking algorithm. Lucene10 is used to calculate the BM25 scores for question pairs. The score from similarity metric of SCQA is combined with the BM25 score. A new similarity score is calculated by the weighted combination of the SCQA and BM25 score as: score = α ∗SCQAscore + (1 −α) ∗BM25score (3) where α control the weights given to SCQA and BM25 models. It range from 0 to 1. SCQA with this improved similarity metric is called Siamese Convolutional Neural Network for cQA with Textual Similartity (T-SCQA). Figure 4 depicts the testing phase of T-SCQA. This model will give better performance in datasets with good mix of questions that are lexically and semantically 10https://lucene.apache.org/ Hyperparameter Value Batch Size 100 Depth of CNN 3 Learning rate 0.01 Momentum 0.05 Kernel width of Convolution 10 Kernel width of MaxPooling 100 Length of semantic vector 128 Table 1: Hyperparameters of SCQA. similar. The value of α can be tuned according to the nature of dataset. 7 Experiments We collected Yahoo! Answers dataset from Yahoo! Labs Webscope11. Each question in the dataset contains title, description, best answer, most voted answers and meta-data like categories, sub categories etc. For training dataset, we randomly selected 2 million data and extracted question-relevant answer pairs and question-irrelevant answer pairs from them to train SCQA. Similarly, our validation dataset contains 400,000 question answer pairs. The hyperparameters of the network are tuned on the validation dataset. The values of the hyperparameters for which we obtained the best results is shown in Table 1. We used the annotated survey dataset of 1018 questions released by Zhang et al. (2014) as testset for all the models. On this gold data, we evaluated the performance of the models with three evaluation criteria: Mean Average Precision (MAP), Mean Reciprocal Rank (MRR) and Precision at K (P@K). Each question and answer was pre-processed by lower-casing, stemming, stopword and special character removal. 7.1 Parameter Sharing In order to find out whether parameter sharing helps in the present task we build two models named Deep Structured Neural Network for Community Question Answering(DSQA) and Deep Structured Neural Network for Community Question Answering with Textual Similarity T-DSQA. DSQA and T-DSQA have the same architecture as SCQA and T-SCQA with the exception that in 11http://webscope.sandbox.yahoo.com/ catalog.php?datatype=l 383 0.7 0.75 0.8 0.85 0.9 0.95 10 20 30 40 50 60 70 80 Value of Evaluation Metric Epoch Number MAP MRR P@1 Figure 3: Variation of evaluation metrics with the epochs. the former models weights are not shared by the convolutional sub-networks. The weightage α for controlling corresponding scores of SCQA and BM25 for the model T-SCQA was tuned on the validation set. 8 Results We did a comparative study of the results of the previous methods with respect to SCQA and TSCQA. The baseline performance is shown by query likelihood language model (LM). For the translation based methods translation(word), translation+LM and translation(phrase) we implemented the papers by Jeon et al. (2005), Xue et al. (2008), Zhou et al. (2011) respectively. The first paper deals with word based translation, the second enhanced the first by adding language model to it and the last paper implements phrase based translation method to bridge lexical gap. As seen from Table 2, the translation based methods outperforms the baseline significantly. 
The models are trained using GIZA++12 tool with the question and best answer pair as the parallel corpus. For the topic based Q-A topic model and Q-A topic model(s), we implemented the models QATM -PR (Question-Answer Topic Model) Ji et al.(2012) and TBLMSQATM−V (Supervised Question-Answer Topic Model with user votes as supervision) Zhang et al. (2014) respectively. Again it is visible from the Table 2 that topic based approaches show slight improvement over translation based methods but they show significant improvement over baseline. The mod12http://www.statmt.org/moses/giza/ GIZA++.html Method MAP MRR P@1 LMIR 0.762 0.844 0.717 translation(word) 0.786 0.870 0.807 translation+LM 0.787 0.869 0.804 translation(phrase) 0.789 0.875 0.817 Q-A topic model 0.787 0.879 0.810 Q-A topic model(s) 0.800 0.888 0.820 DSQA 0.755 0.921 0.751 T-DSQA 0.801 0.932 0.822 SCQA 0.811 0.895 0.830 T-SCQA 0.852∗ 0.934∗ 0.849∗ Table 2: Results on Yahoo! Answers dataset. The best results are obtained by T-SCQA (bold faced). The difference between the results marked(*) and other methods are statistically significant with p <0.001. els DSQA and T-DSQA were built using convolutional neural sub-networks joined by a distance measure at the top. There is no sharing of parameters involved between the sub-networks of these models. It is clear from the comparison of results between T-DSQA and T-SCQA that parameter sharing definitely helps in the task of similar question retrieval in cQA forums. T-SCQA outperforms all the previous approaches significantly. 8.1 Quantitative Analysis SCQA and T-SCQA learns the semantic relationship between the question and their best and most voted answers. It is observed that by varying the weights of SCQA and BM25 scores, the value of MAP changes significantly (Figure 5). The weight is tuned in the validation dataset. We trained our model for several epochs and observed how the results varied with the epochs. We found that the evaluation metrics changed with increasing the number of epochs but became saturated after epoch 60. The comparison of evaluation metrics with epochs can be visualised in Figure 3. The comparisons SCQA and T-SCQA with the previously proposed models is shown in Table 2. For baseline we considered the traditional language model LMIR. The results in the table are consistent with the literature which says translation based models outperform the baseline methods and topic based approaches outperform the translational methods. Also, it is observed that deep learning based solution with parameter sharing is more helpful for this task than without parameter sharing. Note, that the results of previous models stated in Table 2 differ from the original 384 Distance Metric W q r qi rij q Textual matching + BM25 Score Convolution Max pooling ReLU Fully Connected Layer Convolution Max pooling ReLU Fully Connected Layer Final Score SCQA Score W Figure 4: Testing phase of T-SCQA. Here the qi is the ith query and rij is the jth question retrieved by qi. The twin CNN networks share the parameters (W) with each other. The connecting distance metric layer outputs the SCQA score and the textual matching module outputs the BM25 score. The weighted combination of these scores give the final score. rij is stated similar to the query qi if the final score of the pair exceeds an appropriate threshold. Figure 5: The variation of MAP with α. papers since we tried to re-implement those models with our training data (to the best of our capability). 
Though we use the test data released by Zhang et al. (2014) we do not report their results in Table 2 due to the difference in training data used to train the models. In the test dataset released by Zhang et al. (2014), there are fair amount of questions that possess similarity in the word level hence T-SCQA performed better than SCQA for this dataset. TSCQA gives the best performance in all evaluation measures. The results of T-SCQA in Table 2 uses the trained model at epoch 60 with the value of α as 0.8. 8.2 Qualitative Analysis In Table 3 few examples are shown to depict how results of T-SCQA reflect strong semantic information when compared to other baseline methods. For Q1 we compare performance of LMIR and TSCQA. LMIR outputs the question by considering word based similarity. It focuses on matching the words “how”, “become”, “naturally” etc, hence it outputs “How can I be naturally funny?” which is irrelevant to the query. On the other hand, T −SCQA retrieves the questions that are semantically relevant to the query. For Q2 we compare the performance of T-SCQA with phrase based translation model (Zhou et al., 2011). The outputs of translation(phrase) model shows that the translation of “nursery” and “pre-school” to “daycare”, “going to university” to “qualifications” are highly probable. The questions retrieved are semantically related, however asking craft ideas for pre-school kids for the event of mother’s day is irrelevant in this context. The results of our model solely focuses on the qualifications, degrees and skills one needs to work in a nursery. For Q3 we compare the performance of T-SCQA with supervised topic model (Zhang et al., 2014). The questions retrieved by both the models revolve around the topic “effect of smoking on children”. While the topic model retrieve questions which deal with smoking by mother and its effect on child, TSCQA retrieve questions which deals not only with the affects of a mother smoking but also the effect of passive smoking on the child. For Q4 we com385 Query Comment Q1: How can I become naturally happy? LMIR 1.How can I be naturally happy? LMIR performs 2.How can I become naturally funny? word based 1.Are some of us naturally born happy or do we learn how to matching using T-SCQA become happy? “how”,“become”, 2.How can I become prettier and feel happier with myself? “naturally” etc. Q2: Do you need to go to university to work in a nursery or pre-school? For translation translation 1.What degree do you need to work in a nursery? (phrase) (phrase) 2. I work at a daycare with pre-school kids(3-5). Any ideas on university->degree crafts for mother’s day? nursery->daycare 1.Will my B.A hons in childhood studies put me in as an are highly probable unqualified nursery nurse? translations but craft T-SCQA 2.What skills are needed to work in a nursery, or learned from ideas for daycare working in a nursery? is irrelevant. Q3: Does smoking affect an unborn child? Both models Q-A topic 1.How do smoking cigarettes and drinking affect an unborn retrieve questions model(s) child? on topic “effect of 2.How badly will smoking affect an unborn child? smoking on children” 1.How does cigarette smoking and alcohol consumption by but T-SCQA could mothers affect unborn child? retrieve based on T-SCQA 2.Does smoking by a father affect the unborn child? If there passive smoking is no passive smoking, then is it still harmful? through father. Q4: How do I put a video on YouTube? T-DSQA could not 1.How can I download video from YouTube and put them decipher “put”. 
T-DSQA on my Ipod? It relates “put” to 2.I really want to put videos from YouTube to my Ipod..how? download and 1.How do I post a video on YouTube? transfer of videos T-SCQA 2.How can I make a channel on YouTube and upload videos while T-SCQA relates on it? plz help me... it to uploading videos. Table 3: This table compares the qualitative performance of T-SCQA with LMIR, phrase based translation model translation(phrase), supervised topic model Q-A topic model(s) and deep semantic model without parameter sharing T-DSQA. For queries Q1-4 T-SCQA show better performance than the previous models . pare the performance of T-SCQA with T-DSQA. TDSQA retrieves the questions that are related to downloading and transferring YouTube videos to other devices. Thus, T-DSQA cannot clearly clarify the meaning of “put” in Q4. However, the retrieved questions of T-SCQA are more aligned towards the ways to record videos and upload them in YouTube. The questions retrieved by T-SCQA are semantically more relevant to the query Q4. 9 Conclusions In this paper, we proposed SCQA for similar question retrieval which tries to bridge the lexicosyntactic gap between the question posed by the user and the archived questions. SCQA employs twin convolutional neural networks with shared parameters to learn the semantic similarity between the question and answer pairs. Interpolating BM25 scores into the model T-SCQA results in improved matching performance for both textual and semantic matching. Experiments on large scale real-life “Yahoo! Answers” dataset revealed that T-SCQA outperforms current state-ofthe-art approaches based on translation models, topic models and deep neural network based models which use non-shared parameters. As part of future work, we would like to enhance SCQA with the meta-data information like categories, user votes, ratings, user reputation of the questions and answer pairs. Also, we would like to experiment with other deep neural architectures such as Recurrent Neural Networks, Long Short Term Memory Networks, etc. to form the sub-networks. 386 References Jane Bromley, James W Bentz, L´eon Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard S¨ackinger, and Roopak Shah. 1993. Signature verification using a siamese time delay neural network. IJPRAI. Li Cai, Guangyou Zhou, Kang Liu, and Jun Zhao. 2011. Learning the latent topics for question retrieval in community QA. IJCNLP. Sumit Chopra, Raia Hadsell, and Yann LeCun. 2005. Learning a similarity metric discriminatively, with application to face verification. CVPR. Arpita Das, Manish Shrivastava, and Manoj Chinnakotla. 2016. Mirror on the wall: Finding similar questions with deep structured topic modeling. Springer. Shilin Ding, Gao Cong, Chin-Yew Lin, and Xiaoyan Zhu. 2008. Using conditional random fields to extract contexts and answers of questions from online forums. ACL. Oscar Ferrandez, Christian Spurk, Milen Kouylekov, Iustin Dornescu, Sergio Ferrandez, Matteo Negri, Ruben Izquierdo, David Tomas, Constantin Orasan, Guenter Neumann, et al. 2011. The qall-me framework: A specifiable-domain multilingual question answering architecture. Web semantics. Jiwoon Jeon, W. Bruce Croft, and Joon Ho Lee. 2005. Finding similar questions in large question and answer archives. CIKM. Zongcheng Ji, Fei Xu, Bin Wang, and Ben He. 2012. Question-answer topic model for question retrieval in community question answering. CIKM. Yann LeCun and Fu Jie Huang. 2005. Loss functions for discriminative training of energy-based models. AISTATS. 
Baichuan Li and Irwin King. 2010. Routing questions to appropriate answerers in community question answering services. CIKM. Yuanjie Liu, Shasha Li, Yunbo Cao, Chin-Yew Lin, Dingyi Han, and Yong Yu. 2008. Understanding and summarizing answers in community-based question answering services. ICCL. Paul Mcnamee and James Mayfield. 2004. Character n-gram tokenization for european language text retrieval. Information retrieval. Xipeng Qiu and Xuanjing Huang. 2015. Convolutional neural tensor network architecture for community-based question answering. IJCAI. Stephen E Robertson, Steve Walker, Susan Jones, Micheline Hancock-Beaulieu, Mike Gatford, et al. 1994. Okapi at trec-3. NIST Special Publication. Xiaobing Xue, Jiwoon Jeon, and W. Bruce Croft. 2008. Retrieval models for question and answer archives. SIGIR. Chengxiang Zhai and John Lafferty. 2004. A study of smoothing methods for language models applied to information retrieval. ACM Trans. Inf. Syst. Kai Zhang, Wei Wu, Haocheng Wu, Zhoujun Li, and Ming Zhou. 2014. Question retrieval with high quality answers in community question answering. CIKM. Kai Zhang, Wei Wu, Fang Wang, Ming Zhou, and Zhoujun Li. 2016. Learning distributed representations of data in community question answering for question retrieval. ICWSDM. Guangyou Zhou, Li Cai, Jun Zhao, and Kang Liu. 2011. Phrase-based translation model for question retrieval in community question answer archives. ACL:HLT. Guangyou Zhou, Tingting He, Jun Zhao, and Po Hu. 2015. Learning continuous word embedding with metadata for question retrieval in community question answering. ACL. Guangyou Zhou, Yin Zhou, Tingting He, and Wensheng Wu. 2016. Learning semantic representation with neural networks for community question answering retrieval. Knowledge-Based Systems. 387
2016
36
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 388–398, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics News Citation Recommendation with Implicit and Explicit Semantics Hao Peng∗, 1 Jing Liu,2 Chin-Yew Lin2 1School of EECS, Peking University, Beijing, 100871, China 2Microsoft Research, Beijing, 100080, China [email protected] {liudani, cyl}@microsoft.com Abstract In this work, we focus on the problem of news citation recommendation. The task aims to recommend news citations for both authors and readers to create and search news references. Due to the sparsity issue of news citations and the engineering difficulty in obtaining information on authors, we focus on content similarity-based methods instead of collaborative filtering-based approaches. In this paper, we explore word embedding (i.e., implicit semantics) and grounded entities (i.e., explicit semantics) to address the variety and ambiguity issues of language. We formulate the problem as a reranking task and integrate different similarity measures under the learning to rank framework. We evaluate our approach on a real-world dataset. The experimental results show the efficacy of our method. 1 Introduction When an author writes an online news article, s/he often cites previously published news reports to elaborate a mentioned event or support his/her point of view. For the convenience of the readers, the editor usually associates the words with hyperlinks. Through the links the readers can directly access the referenced articles to know more details about the events. If there is no reference for a mentioned event, the readers may search the related news reports for further reading. Hence, it is valuable to have automatic news citation recommendations for authors and readers to create or search news references. In this paper, we focus on the problem of news citation recommendation. As shown in Table 1, ∗Work done during internship at Microsoft Research. given a snippet of citing context (left), the task aims to retrieve a list of news articles (right) as references. This task differs from traditional recommendation tasks, e.g., citation recommendation for scientific papers, in that: (a) based on the statistics from our dataset, the number of references per news article is 4.56 on average, much less than the number of citations per academic paper (typically dozens); (b) the author-topic information is usually unavailable, since it is technically difficult to obtain author information from news articles. These differences make the collaborative filteringbased methods, which have been widely applied to paper citation recommendation, less available in our scenario. Therefore, in this paper we focus on content similarity-based methods to deal with the task of news citation recommendation. Previous studies use string-based overlap (Xu et al., 2014), machine translation measures (Madnani et al., 2012), and dependency syntax (Wan et al., 2006; Wang et al., 2015) to model text similarity. More recent work focuses on neural network methods (Yin and Sch¨utze, 2015; He et al., 2015; Hu et al., 2014; dos Santos et al., 2015; Lei et al., 2016). There are two major challenges rendering these approaches not suitable for this task: (i) the variety and (ii) the ambiguity of language. By variety, we mean that the same meaning may be expressed with different phrases. 
Taking the first row in Table 1 for example, Vlaar in the citing context refers to Ron Vlaar, a Dutch football player, who is referred to as Dutch star and Netherlands international in the cited article. By ambiguity, we mean that the same expression may have different meanings in different contexts. In the second example in Table 1, the mention tiger refers to tiger the mammal. By contrast, in “Detroit Tigers links: The Tigers are in trouble” for example, the word Tiger is the name of a team. In this paper, we explore both implicit and explicit semantics to ad388 Citing Context Cited Article · · · Man United and Arsenal on red alert as top Dutch star officially joins free agent list Manchester United and Arsenal have both been interested in Vlaar in the past, suggesting Southampton will have to fight hard to land him. The Netherlands international has joined the free agent list today and is no longer contractually obliged to remain at Villa Park. · · · · · · · · · Bangladesh ’s abundant tiger population has collapsed to just 100 Conservationists want the Bangladeshi government to step up and help save the tigers through greater administration and enforcement of anti-poaching laws, as Bangladesh does not legally protect tigers to the extent that other governments do, according to Inhabitat. In Bangladesh, a new census shows that tiger populations in the Sundarbans mangrove forest are more endangered than ever. The study, which used hidden cameras to track and record tigers, provides a more accurate update than previous surveys that used other methods. The year-long census, which ended this April, revealed only around 100 of the big cats remain in what was once home to the largest population of tigers on earth. · · · · · · Table 1: Two pair of news snippets. For readability concerns, we keep only the sentence associated with an anchor link in the citing part, and the title and lead paragraph of the cited part. dress the above issues. Specifically, the implicit semantics can be obtained from the word embedding trained on large scale corpus, and the explicit semantics through linking entity mentions to the grounded entities in a knowledge base. In this paper, we explore using both word embedding and grounded knowledge to model the relatedness between citing context and articles. We formulate the problem as a re-ranking task. We use learning to rank to integrate different similarity measures and evaluate the models on a real world dataset constructed from Bing News1. We further give quantitative analysis of the effects of word embedding and grounded entities in the task. In summary, the main contributions of this paper are three-fold: • We propose the task of news citation recommendation and construct a real-world dataset for this task. • We utilize both word embedding based similarity measures and knowledge-based methods to tackle the problem. We formulate the problem as a re-ranking task and leverage learning to rank algorithm to integrate different similarity measures. • We conduct extensive experiments on a large dataset. The results show the effectiveness of word embedding and grounded entities. We further quantitatively analyze how the implicit semantics from word embedding and explicit semantics from grounded knowledge benefit the task of interest. 1https://www.bing.com/news 2 Problem Formulation In this section, we introduce the news citation recommendation problem and formulate it as a reranking task. 
We first introduce definitions that will be used throughout the rest of the paper: Citing Context. A citing context is a sentence which contains an anchor text associated with a hyperlink. As shown in Table 1, the underlined words are associated with a hyperlink pointing to another news article, and the sentence (left) which contains the anchor is the citing context. Cited Article. Given a piece of citing context, the article that the hyperlink links to is defined as its cited article. It is expected that a news article is well-structured, and that its headline together with its lead paragraph gives a good brief description of the whole story (Kianmehr et al., 2009). In this paper, a news article can either be represented by its title and lead paragraph or by the passage as a whole. We conduct experiments under both of these settings. Candidate Article Set. Considering efficiency, we follow the procedure adopted by many recommendation systems (Lei et al., 2016; Tan et al., 2015) and formulate the problem as a re-ranking task. In other words, given a citing context, we first use efficient retrieval methods with high recall to generate a list of articles as the candidate article set, and then run the system to get a re-ranked list. News Citation Recommendation. Given a citing context, the task aims to construct an ordered list of news articles, at the top of which are the articles most relevant to the context that can serve as cited articles. 3 Method In this section, we first explain the similarity measures based on word embedding (implicit semantics) and grounded knowledge (explicit semantics) that deal with the variety and ambiguity problems. Then we briefly introduce the baselines and the learning to rank framework. 3.1 Implicit Semantics for Variety The distributed word representation of word2vec factors word distance and captures semantic similarities through vector arithmetic (Mikolov et al., 2013). In this work, we train a skip-gram model to bridge the vocabulary gap between context-article pairs. Previous work represents documents with the averaged vectors of their words (Tang et al., 2014; Tan et al., 2015). However, this may lose detailed information about the documents. In this paper, we adopt a different approach, explained below. Word Mover's Distance (WMD). Kusner et al. (2015) combine distributed word representations with the earth mover's distance (EMD) (Rubner et al., 1998; Wan, 2007) to measure the distance between documents. They use the Euclidean distance between words' low-dimensional representations as building blocks, and optimize a special case of the EMD to obtain the cumulative distance. More formally, let X = \{(x_1, w_{x_1}), (x_2, w_{x_2}), \dots, (x_m, w_{x_m})\} be the normalized bag-of-words representation of a citing context after removing stop-words, where word x_i appears w_{x_i} times (normalized by the total count of words in X), i = 1, 2, \dots, m. Similarly, we have the representation of a candidate article, Y = \{(y_1, w_{y_1}), (y_2, w_{y_2}), \dots, (y_n, w_{y_n})\}. The WMD calculates the minimum cumulative cost by solving the linear programming problem below: \min_{T} \sum_{i=1}^{m} \sum_{j=1}^{n} T_{ij} c_{ij} \quad \text{s.t.} \quad \sum_{j=1}^{n} T_{ij} = w_{x_i}, \; i = 1, 2, \dots, m, \qquad \sum_{i=1}^{m} T_{ij} = w_{y_j}, \; j = 1, 2, \dots, n, where T \in \mathbb{R}^{m \times n} is the transportation flow matrix, and c_{ij} indicates the distance between x_i and y_j. Here c_{ij} = \|\mathrm{vector}(x_i) - \mathrm{vector}(y_j)\|, where the function \mathrm{vector}(w) returns the word vector of w.
Then the distance is normalized by the total flow: \mathrm{WMD}(X, Y) = \frac{\sum_{i=1}^{m} \sum_{j=1}^{n} T_{ij} c_{ij}}{\sum_{i=1}^{m} \sum_{j=1}^{n} T_{ij}}. 3.2 Explicit Semantics for Ambiguity News articles tend to be well written and contain many named entity mentions. Making use of this property, we deal with the ambiguity problem by using grounded entities (explicit semantics). Given a context-article pair, we first recognize all named entity mentions on both sides and link them to knowledge bases (e.g., Wikipedia and Freebase), then use the following measures to model the similarity. • Entity Overlap. Given a context-article pair, we consider two metrics, namely precision and recall, to measure their entity overlap. The precision is defined as: \mathrm{precision} = \frac{\text{entity-overlap(citing, cited)}}{\text{entity-count(citing)}} and the recall as: \mathrm{recall} = \frac{\text{entity-overlap(citing, cited)}}{\text{entity-count(cited)}} • Embedding Based Matching. We build two separate information networks for Wikipedia entities using (a) the anchor links on Wikipedia pages and (b) the Freebase entity graph (Bollacker et al., 2008). Then we apply the Large-scale Information Network Embedding (LINE) (Tang et al., 2015) system2 to the networks to embed the entities into low-dimensional spaces. We then measure the similarity by the minimized cosine distance between the entities on the citing and the cited sides: \mathrm{minDIS}_{\mathrm{citing}} = \frac{1}{|\mathrm{citing}|} \sum_{i=1}^{|\mathrm{citing}|} \min_{y_j \in Y} (1 - \cos(x_i, y_j)), and vice versa: \mathrm{minDIS}_{\mathrm{cited}} = \frac{1}{|\mathrm{cited}|} \sum_{j=1}^{|\mathrm{cited}|} \min_{x_i \in X} (1 - \cos(x_i, y_j)), where X refers to the citing context and x_i \in X are the grounded entities in the citing part, and similarly for Y and y_j. 2https://github.com/tangjianpku/LINE
Therefore, we extract features under two different settings: (a) headlines and lead paragraphs only; (b) the full articles. Most of the features are extracted under both of the settings. However, feature 2 is much too computation-intensive and feature 7 needs POS-tagging as the preprocessing. Thus these two are only extracted under setting (a). 3.4 Learning to Rank Framework Many different learning to rank algorithms have been proposed to deal with the ranking problem, including pointwise, pairwise, and listwise approaches (Xia et al., 2008). Listwise methods receive ranked lists as training samples, and directly optimize the metric of interest by minimizing the respective cost. And it has been reported that the listwise method usually achieves better performance compared to others (Qin et al., 2008; Cao et al., 2007). In this work, we use the linear model and apply coordinate ascent for parameter optimization. 4 Experiments 4.1 Data Collection We collect one month’s news articles from Bing News. The citing context set consists of all the sentences associated with anchor link(s). For each piece of citing context, its cited article is extracted through its hyperlink. If there are multiple links associated with the context, only the first one is considered. We pair each citing context and its cited article as a ground truth sample. We further label as ground truths those articles sharing the same title as the cited article. This is rather reasonable since a single passage may have multiple reprints by difference sources. On average, there are 2.20 ground truth cited articles for each citing context in the dataset. In order to focus only on news events, we filter out those pairs whose hyperlinks are associated with three words or less (usually names for persons or places, and lead to definition pages). We also discard those samples whose citing contexts contain or are exactly the same as the titles of the cited articles. For example, “READ MORE: The stories you need to read, in one handy email” links to an article titled “The stories you need to read, in one handy email”. The dataset is preprocessed with Stanford CoreNLP toolkit (Manning et al., 2014), including sentence splitting, tokenizing for whole passages, and POS-tagging for titles and lead paragraphs. We use the JERL system by Luo et al. (2015) for entity detection and grounding. It recognizes entity mentions and links them to Wikipedia entries. 391 Feature Full Article? # of features Description Dealing with Variety 1 WMD n 1 Word vector based earth mover’s distance. Dealing with Ambiguity 2 Grounded Entity Overlap y 4 Precision and recall for grounded named entities. 3 Embedding-based Matching y 16 Minimized matching distance with LINE vectors. 4 Wikipedia Evidence n 2 Precision and recall for evidence from Wikipedia. Baselines 5 TF-IDF y 2 The cosine distance with TF-IDF. 6 Ungrounded Mention Overlap y 4 Precision and recall for ungrounded mentions. 7 Embedding-based Matching y 4 Minimized matching distance with averaged vectors. Total 33 Table 2: A list of all features used in the experiments. The third column indicates whether the corresponding feature is extracted from the full articles. If not, it’s extracted only from the headlines and lead paragraphs. We use each mention’s text span as an ungrounded mention, and its corresponding Wikipedia ID as a grounded entity. For instance, in Table 8, the detected text span Westminster is an ungrounded mention, and it’s grounded to the entry Parliament of the United Kindom. 
4.2 Selecting Candidates Given a citing context, we construct its candidate article set with the top 200 articles retrieved by TFIDF distance. In the experiments, approximately 92.61% of the ground truth cited articles appear in the candidate sets. We discard those that do not. We further randomly split the remaining 33318 pairs into training/validation/test sets with the proportion of 3:1:1. For each training pair, we randomly sample 5 articles from its candidate article set (excluding ground truth) and pair them with the citing context as negative samples. According to Tan et al. (2015), the number of negative samples does not significantly affect the linear learning to rank model’s performance. During validation and testing, all of the 200 candidates are taken into account. 4.3 Experimental Setup In the experiments, we set the TF-IDF as the baseline, and incrementally add different groups of features to the system. The word embedding is pretrained with skipgram model (Mikolov et al., 2013) on Wikipedia corpus and then fine-tuned using the method proposed in Wieting et al. (2015) on PPDB (Ganitkevitch et al., 2013). The embedding fine-tuned with paraphrase pairs can better capture the semantic relatedness of different phrase. In the experiments, we observe a 1% −2% improvement by the finetuned word representations compared to vanilla skip-gram vectors. We use the linear model in RankLib3 for the learning to rank implementation. Coordinate ascent is used for parameter optimization. The model is trained to directly optimize the evaluation metrics, Precision@1, Precision@5, NDCG@5 and MAP, respectively. For NDCG@5 measure, we set a binary relevance score, i.e., the scores equal to 1 for ground truths, 0 for negative samples. 4.4 Experimental Results Table 3 gives the performance of the baselines and the systems using different groups of features on test and validation sets. The results show that WMD brings a consistent improvement over its TF-IDF baseline, and so do grounded entities compared to ungrounded mentions. Individually added to the TF-IDF baseline, WMD has the largest performance boost, followed by grounded entity features. Besides, the additional information from grounded entity knowledge helps the model outperform the ungrounded mentions, with a consistent margin of 1.0%-2.0% NDCG@5. We further compare the performance of the models when using features from headlines+lead paragraphs only and those from full passages. As shown in Table 3, the former brings much better performance on each metric compared to the latter. It’s worth noting that there are ground truths mis-labeled as irrelevant in the dataset. 
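For reference, the NDCG@5 numbers in Table 3 use the binary relevance scheme described in Section 4.3; a minimal sketch of that computation is given below (a simplified stand-in for RankLib's implementation, with hypothetical document ids).

```python
# Minimal sketch of NDCG@5 with binary relevance (ground truth = 1, other = 0).
# `ranked_ids` is a system's ranked candidate list, `gold_ids` the cited articles.
import math

def ndcg_at_k(ranked_ids, gold_ids, k=5):
    gold = set(gold_ids)
    rels = [1 if doc in gold else 0 for doc in ranked_ids[:k]]
    dcg = sum(r / math.log2(i + 2) for i, r in enumerate(rels))
    ideal = sorted([1] * min(len(gold), k) + [0] * k, reverse=True)[:k]
    idcg = sum(r / math.log2(i + 2) for i, r in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

print(ndcg_at_k(["d3", "d7", "d1", "d9", "d2"], gold_ids=["d1", "d4"]))  # ~0.31
```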
A primary 3https://sourceforge.net/p/lemur/wiki/ RankLib/ 392 Precision@1 Precision@5 NDCG@5 MAP id Features Test Dev Test Dev Test Dev Test Dev Headline + Lead Para 1 TF-IDF 42.61 42.21 19.84 19.78 52.72 52.22 53.50 53.06 2 + Ungrounded Mentions 43.67 43.04 19.45 19.30 53.84 53.26 54.46 54.12 3 + Grounded Entities 44.52 44.02 20.84 20.51 55.99 55.0 56.55 56.09 4 + Ungrounded+Grounded 43.93 44.05 20.2 19.66 55.99 55.17 56.52 56.13 5 TF-IDF + WMD 45.94 45.84 21.11 21.62 57.20 57.50 58.12 58.34 6 + Ungrounded Mentions 46.44 46.63 21.05 21.56 57.61 57.80 58.55 58.78 7 + Grounded Entities 47.63 47.5 21.96 22.07 58.52 58.41 60.01 59.83 8 + Ungrounded+Grounded 47.23 46.84 21.58 21.56 59.01 58.91 59.88 59.66 Full Article 9 TF-IDF 49.3 48.11 23.33 23.06 60.51 59.54 60.71 59.73 10 + Ungrounded Mentions 50.46 50.42 23.81 23.67 61.97 61.73 62.6 61.94 11 + Grounded Entities 51.42 50.27 23.91 23.78 63.26 62.09 63.23 62.15 12 + Ungrounded+Grounded 51.46 50.23 23.85 23.74 62.94 62.48 63.15 63.02 13 TF-IDF + WMD 52.31 51.82 23.87 24.04 63.71 63.99 64.08 63.62 14 + Ungrounded Mentions 53.26 53.3 23.98 24.16 64.57 64.29 64.52 64.37 15 + Grounded Entities 54.12 53.29 24.37 24.05 65.29 64.48 65.32 64.53 16 + Ungrounded+Grounded 54.04 53.21 24.52 24.33 65.56 65.11 65.35 64.56 Table 3: Experimental results in percentage on the dataset collected from Bing News. id Features NDCG@5 on S NDCG@5 on ˜S Headline + Lead Para 1 TF-IDF 52.77 56.28 2 + Ungrounded Mentions 53.34 56.86 3 + Grounded Entities 55.03 58.57 4 + Ungrounded+Grounded 55.18 58.86 5 TF-IDF + WMD 56.51 60.13 6 + Ungrounded Mentions 57.04 60.82 7 + Grounded Entities 57.4 61.47 8 + Ungrounded+Grounded 58.05 61.78 Table 4: Experimental results in percentage on S and ˜S. S is a randomly constructed subset of the test set, and ˜S is obtained by manually labeling samples in S. reason is that news sites sometimes individually publish different reports on a certain event. And the articles don’t necessarily share the same title. To see how this affects the model, we randomly build a subset S of the test set and manually label the selected samples, which gives ˜S4. Table 4 compares the model’s performance on S and ˜S under Headline+Lead paragraph setting. There is a consistent improvement of NDCG@5 score on ˜S compared to that on S. Besides that, on manually labeled data, the model’s performance across different feature settings is almost in accord with that on the full test set. These results show that there are indeed mis-labeled ground truths in the dataset, but they have little influence when comparing different groups of features. 4Manually labeling all of the dev and test samples would be too time consuming, and we leave it to future work. 5 Analysis In this section, we give detailed win-loss analysis for the models trained with NDCG@5 metric under headlines+lead paragraphs setting. Specifically, given two systems with different feature configurations, we compare their performance on each test sample. The results are shown as a heatmap in Figure 1. X and Y axises indicate the identifiers for each feature group, following those in Table 3. For example, the data point at (5, 1) indicates that the inclusion of WMD brings better ranking scores to TF-IDF on 18.4% of the test samples; and as a trade off, it lowers the scores on 11.4% of the samples. We also observe that grounded entities brings gain to 15.9% of the samples, and loss for 9.6% of them. On average, two different groups of features disagree on 26.4% of the test samples. 
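The win-loss percentages visualized in Figure 1 can be reproduced from per-sample scores in a few lines; the sketch below uses hypothetical NDCG@5 values purely for illustration.

```python
# Minimal sketch of the per-sample win-loss comparison behind Figure 1.
# ndcg_a and ndcg_b are per-test-sample NDCG@5 scores of two feature configurations.
def win_loss(ndcg_a, ndcg_b):
    wins = sum(a > b for a, b in zip(ndcg_a, ndcg_b))
    losses = sum(a < b for a, b in zip(ndcg_a, ndcg_b))
    n = len(ndcg_a)
    return 100.0 * wins / n, 100.0 * losses / n

# Hypothetical scores for illustration only.
print(win_loss([0.6, 0.4, 0.9, 0.5], [0.5, 0.4, 1.0, 0.2]))  # (50.0, 25.0)
```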
We further give several mis-predictions by the model using certain groups of features, and illustrate how they are corrected by the inclusion of others (or the other way round). By misprediction, we mean that no ground truth cited article appears in the top 5 predictions of the returned list. 5.1 Dealing with Variety Table 5 shows a mis-prediction by TF-IDF, but corrected after including WMD. TF-IDF distance favors the high-score match393 ing keywords approval and rating between the citing context and mis-predicted article. On the other hand, distributed word representations factor the distances between word pairs, which helps to capture their semantic closeness, e.g., (Argentines, Argentina, Cosince distance: 0.210), (poll, election, 0.020), and (increasingly, growing, 0.286). WMD helps to bridge the vocabulary gap between the citing context and the cited article. On the other hand, though not often, the use of distributed representation can also create mistakes. Table 6 gives an example where the inclusion of the WMD feature changes a correct prediction by TF-IDF into a mistake. By analyzing the WMD’s transportation flow matrix T, we find that the used word embedding relates MP to minister, and publicly to government. More curiously, persons’ names are very similar in its semantic space: (Davies, Stephen, 0.602), and (Davies, Harper, 0.635). A possible reason could be that both of the two names are very common, and thus the cooccurrence-based representation learning method is not able to distinguish them. This also justifies our use of grounded entities as additional information: from the Wikipedia description for entity Stephen Harper, the system might be able to find out that he actually serves in Canadian government, not in the UK’s nor in the Welsh. 5.2 Dealing with Ambiguity Entity grounding helps by resolving the ambiguity e.g., alias, abbreviation, of the entity mentions. As shown in Table 7, tiger refers to the mammal in the ground truth pair. However, the same word refers to Detroit Tigers the team in the mispredicted article. This ambiguity is resolved when the mention is grounded to its Wikipedia entry. In another example shown in Table 8, ungrounded mention SNP, though detected, contributes little to supporting the ground truth pair. However, when it’s grounded to the entry Scottish National Party, the system leverages world knowledge and relates it to the mention of Scotland in the citing context. The inclusion of grounded entity information may also lead to mistakes, many of which are due to the limited performance of the entity recognition and disambiguation system. We’d like to discuss another kind of error here, shown in Table 9. In the citing context, The Daily Telegraph is a newspaper published in the UK. It has little to do with the involved event except for reporting it. However, the system favors a farmers’ story which Figure 1: Heatmap for win-loss analysis results. Point (x, y) indicates how much feature x wins (loses if negative) against y.The X and Y axises indicate the identifiers for each feature group, following those in Table 3. actually happened in the UK. We find that this contributes a lot to the system’s errors when including grounded entities. We leave it to future work to figure out how to deal with this issue. 6 Related Work This section reviews three lines of related work: (i) document recommendation, (ii) pharaphrase identification, (iii) question retrieval. 
6.1 Document Recommendation Existing literature mainly focuses on contentbased methods and collaborative filtering (Adomavicius and Tuzhilin, 2005). There are studies trying to recommend documents based on citation contexts, either through identifying the motivations of the citations (Aya et al., 2005), or through the topical similarity (Ritchie, 2008; Ramachandran and Amir, 2007). On the other hand, Mcnee et al. (2002) leverage multiple information sources from authorship, paper-citation relations, and cocitations to recommend research papers. Combining the context-based approaches and collaborative filtering, Torres et al. (2004) and Strohman et al. (2007) report better performance. Tang and Zhang (2009) use the Restricted Boltzmann Machine to model citations for placeholders, and Tan et al. (2015) integrate multiple features to recommend quotes for writings. In the news domain, context-based approaches are presumably favorable due to the fact that the articles are relatively content-rich and citationsparse. Previous studies manage to utilize information retrieval techniques to recommend news articles given a seed article (Yang et al., 2009; Bogers and van den Bosch, 2007). 394 Sides – Samples citing – An earlier poll showed Argentines are also increasingly happy with her performance as President, putting her approval rating at almost 43%, up from 31% in September. cited Ground Truth Kirchner’s Growing Popularity Could Skew Argentine Election As Argentina gears up for a presidential election in October, the approval ratings of the current president, Cristina Kirchner, are improving and her rising reputation could affect the results of the election to replace her. Top-1 Prediction Bill Shorten’s Approval Rating Falls in Wake of Royal Commission The opposition leader gained approval from only 27% of the voters surveyed, while 52% disapproved. Table 5: A mis-prediction by TF-IDF corrected by the inclusion of WMD. Sides – Samples citing – Chris Grayling was responding to a question from Gower Conservative MP Byron Davies about the regeneration investment fund for Wales “and the underselling of a large amount of publicly owned property”. cited Ground Truth Wales land deal leaves taxpayers 15m short A Welsh government spokesperson said there were conflicting valuations. Top-1 Prediction Conservative MP compares Stephen Harper government to Jesus, inspiring hilarious #CPCJesus tweets Is it time we started referring to Prime Minister Stephen Harper as “Our Lord and Saviour”? Table 6: A correct prediction by TF-IDF but then changes into a mistake when including WMD. Sides – Samples citing – Conservationists want the Bangladeshi government Government of Bangladesh to step up and help save the tigers through greater administration and enforcement of anti-poaching laws, as Bangladeshi Bangladesh does not legally protect tigers to the extent that other governments do, according to Inhabitat. cited Ground Truth Bangladeshi’s Bangladesh abundant tiger population has collapsed to just 100 In Bangladeshi Bangladesh , a new census shows that tiger populations in the Sundarbans Sundarbans mangrove forest are more endangered than ever. The study, which used hidden cameras to track and record tigers, provides a more accurate update than previous surveys that used other methods. 
Top-1 Prediction Detroit Tigers Detroit Tigers links: The Tigers are Detroit Tigers in trouble After losing three straight games prior to All-Star break, the Major League Baseball All-Star Game the Tigers don’t Detroit Tigers have much more time to waste if they want to stay in contention. Table 7: A mis-prediction by TF-IDF corrected by the inclusion of grounded entity features. The linked Wikipedia entries are indicated below the underlined entity mentions. Sides – Samples citing – With activities at Westminster challenging Parliament of the United Kingdom a narrow view of nationalism, and a planned charm offensive across the UK and Ireland, it is IrelandUnited Kingdom relations that the party intends to significantly expand its reach beyond Scotland Scotland . cited Ground Truth SNP launches Scottish National Party bid to extend influence beyond Scotland Scotland First Minister of Scotland and SNP leader Nicola Sturgeon Scottish National Party Nicola Sturgeon worked hard to reassure voters in the election campaign. Top-1 Prediction Apple Pay UK launch Apple Pay United Kingdom confirmed for mid-July Leaked documents from retailers suggest a launch date early next week. Table 8: A mis-prediction by TF-IDF+ungrounded mention features corrected by the TF-IDF+grounded entity features. The linked Wikipedia entries are indicated below the underlined entity mentions. 395 Sides – Samples citing – According to a UK Telegraph report, United Kindom The Daily Telegraph the government is now forcing farmers and food manufacturers to sell anywhere from 30-100% of their products to the state , as opposed to stores and supermarkets. cited Ground Truth Venezuelan Venezuela farmers ordered to hand over produce to state As Venezuela’s Venezuela food shortages worsen, the president of the country’s Food Industry Food Industry Chamber has said that authorities ordered producers of milk milk , pasta, oil oil , rice rice , sugar sugar and flour to supply their products to the state stores. Top-1 Prediction Welsh farmers United Kindom launch #NoLambWeek price campaign Fed-up Welsh farmers United Kindom are encouraging others to withhold their fat lambs for a week in protest at the current slump in the UK lamb trade. Table 9: A correct prediction by TF-IDF but then changes into a mistake when including grounded entity features. The linked Wikipedia entries are indicated below the underlined entity mentions. 6.2 Paraphrase Identification Several hand-crafted features have proven helpful in modeling sentence/phrase similarity, e.g., string-based overlap (Xu et al., 2014), machine translation measures (Madnani et al., 2012), and dependency syntax (Wan et al., 2006; Wang et al., 2015). Using the combination and discriminative re-weighting of the mentioned features, Ji and Eisenstein (2013) manage to obtain more competitive results. More recent work has switched the focus onto neural methods. Socher et al. (2011) recursively encode the representations of sentences by the compositions of words. Convolutional neural nets (LeCun et al., 1998; Collobert and Weston, 2008) are also exploited in the tasks of paraphrase identification and sentence matching (Yin and Sch¨utze, 2015; He et al., 2015; Hu et al., 2014). Story link detection (SLD) is a similar task which aims to classify whether two news stories discuss the same event. Farahat et al. (2003) leverage part of speech tagging technique as well as task-specific similarity measures to boost the system’s performance. Shah et al. 
(2006) show that entity based document representation is a better choice compared to word-based representations in SLD. In our scenario, the query is typically a piece of context sentence instead of an entire article. Therefore, we find that document level methods yield sub-optimal performance when used to model the similarity of citing context and the articles. Besides, due to the fact that there might be multiple reports for a single event, we consider it reasonable to formulate our problem into a ranking task instead of classification. 6.3 Question Retrieval The key problem in question retrieval lies in modeling questions’ similarity. Machine translation techniques (Jeon et al., 2005) and topic models (Duan et al., 2008) have been utilized by previous works. An alternative is representation learning. Zhou et al. (2015) use category-based meta-data to learn word embeddings. dos Santos et al. (2015) and Lei et al. (2016) obtain superior performance over hand-crafted features with CNN. News articles are more well-written than most documents in QA communities, which results in the feasibility of high-quality entity detection and grounding. 7 Discussions In this paper, we propose a novel problem of news citation recommendation, which aims to recommend news citations for references based on a citing context. We develop a re-ranking system leveraging implicit and explicit semantics for content similarity. We construct a real-world dataset. The experimental results show the efficacy of our approach. 8 Acknowledgments This research is partially supported by National Basic Research Program of China under Grant No. 2015CB352201, National Natural Science Foundation of China under Grant No. 61502014, and China Post-doctoral Foundation under Grant No. 2015M580927. 396 References Gediminas Adomavicius and Alexander Tuzhilin. 2005. Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions. IEEE Trans. Knowl. Data Eng., pages 734–749. Selcuk Aya, Carl Lagoze, and Thorsten Joachims. 2005. Citation classification and its applications. In Proceedings of the International Conference on Knowledge Management, pages 287–298. Toine Bogers and Antal van den Bosch. 2007. Comparing and evaluating information retrieval algorithms for news recommendation. In Proceedings of the 2007 ACM Conference on Recommender Systems, RecSys ’07, pages 141–144. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, pages 1247–1250. Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. 2007. Learning to rank: From pairwise approach to listwise approach. In Proceedings of the 24th International Conference on Machine Learning, pages 129–136. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: deep neural networks with multitask learning. In ICML. C´ıcero Nogueira dos Santos, Luciano Barbosa, Dasha Bogdanova, and Bianca Zadrozny. 2015. Learning hybrid representations to retrieve semantically equivalent questions. In ACL-IJCNLP 2015, pages 694–699. Huizhong Duan, Yunbo Cao, Chin-Yew Lin, and Yong Yu. 2008. Searching questions by identifying question topic and question focus. In ACL, pages 156– 164. Ayman Farahat, Francine Chen, and Thorsten Brants. 2003. Optimizing story link detection is not equivalent to optimizing new event detection. 
In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1, pages 232– 239. Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: the paraphrase database. In Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, pages 758– 764. Hua He, Kevin Gimpel, and Jimmy Lin. 2015. Multiperspective sentence similarity modeling with convolutional neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1576–1586. Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In Advances in Neural Information Processing Systems, pages 2042–2050. Jiwoon Jeon, W. Bruce Croft, and Joon Ho Lee. 2005. Finding similar questions in large question and answer archives. In Proceedings of the 14th ACM International Conference on Information and Knowledge Management, pages 84–90. Yangfeng Ji and Jacob Eisenstein. 2013. Discriminative improvements to distributional sentence similarity. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 891–896. Keivan Kianmehr, Shang Gao, Jawad Attari, M. Mushfiqur Rahman, KofiAkomeah, Reda Alhajj, Jon Rokne, and Ken Barker. 2009. Text summarization techniques: Svm versus neural networks. In Proceedings of the 11th International Conference on Information Integration and Web-based Applications Services, pages 487–491. Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kilian Q. Weinberger. 2015. From word embeddings to document distances. In ICML, pages 957–966. Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE. Tao Lei, Hrishikesh Joshi, Regina Barzilay, Tommi S. Jaakkola, Kateryna Tymoshenko, Alessandro Moschitti, and Llu´ıs M`arquez i Villodre. 2016. Semisupervised question retrieval with gated convolutions. In NAACL HLT 2016. Gang Luo, Xiaojiang Huang, Chin-Yew Lin, and Zaiqing Nie. 2015. Joint entity recognition and disambiguation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 879–888. Nitin Madnani, Joel R. Tetreault, and Martin Chodorow. 2012. Re-examining machine translation metrics for paraphrase identification. In Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, pages 182–190. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60. Sean M. Mcnee, Istvan Albert, Dan Cosley, Prateep Gopalkrishnan, Shyong K. Lam, Al M. Rashid, Joseph A. Konstan, and John Ried. 2002. On the recommending of citations for research papers. Proceedings of the 2002 ACM conference on Computer supported cooperative work, pages 116–125. 397 Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781. Tao Qin, Xu-Dong Zhang, Ming-Feng Tsai, De-Sheng Wang, Tie-Yan Liu, and Hang Li. 2008. Querylevel loss functions for information retrieval. Inf. Process. Manage., 44:838–855. Deepak Ramachandran and Eyal Amir. 2007. Bayesian Inverse Reinforcement Learning. 
Proceedings of the 20th International Joint Conference on Artical Intelligence, 51:2586–2591. Anna Ritchie. 2008. Citation Context Analysis for Information Retrieval. Ph.D. thesis. Yossi Rubner, Carlo Tomasi, and Leonidas J. Guibas. 1998. A metric for distributions with applications to image databases. In Proceedings of the Sixth International Conference on Computer Vision. Chirag Shah, W. Bruce Croft, and David Jensen. 2006. Representing documents with named entities for story link detection (sld). In Proceedings of the 15th ACM International Conference on Information and Knowledge Management, pages 868–869. Richard Socher, Eric H. Huang, Jeffrey Pennington, Andrew Y. Ng, and Christopher D. Manning. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems, pages 801– 809. Trevor Strohman, W. Bruce Croft, and David Jensen. 2007. Recommending citations for academic papers. In Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 705–706. Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2015. Learning to recommend quotes for writing. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, pages 2453–2459. Jie Tang and Jing Zhang. 2009. A discriminative approach to topic-based citation recommendation. In Advances in Knowledge Discovery and Data Mining, volume 5476, pages 572–579. Xuewei Tang, Xiaojun Wan, and Xun Zhang. 2014. Cross-language context-aware citation recommendation in scientific articles. In Proceedings of the 37th International ACM SIGIR Conference on Research &#38; Development in Information Retrieval, pages 817–826. Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. 2015. Line: Large-scale information network embedding. In WWW. ACM. R. Torres, S.M. McNee, M. Abel, J.A. Konstan, and J. Riedl. 2004. Enhancing digital libraries with techlens. In Digital Libraries, 2004. Proceedings of the 2004 Joint ACM/IEEE Conference on, pages 228–236. Stephen Wan, Mark Dras, Robert Dale, and Cecile Paris. 2006. Using dependency-based features to take the ’para-farce’ out of paraphrase. In Proceedings of the Australasian Language Technology Workshop 2006, pages 131–138. Xiaojun Wan. 2007. A novel document similarity measure based on earth mover’s distance. Inf. Sci., pages 3718–3730. Mingxuan Wang, Zhengdong Lu, Hang Li, and Qun Liu. 2015. Syntax-based deep matching of short texts. In IJCAI, pages 1354–1361. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. From paraphrase database to compositional paraphrase model and back. TACL, 3:345–358. Fen Xia, Tie-Yan Liu, Jue Wang, Wensheng Zhang, and Hang Li. 2008. Listwise approach to learning to rank: Theory and algorithm. In Proceedings of the 25th International Conference on Machine Learning, pages 1192–1199. Wei Xu, Alan Ritter, Chris Callison-Burch, William B. Dolan, and Yangfeng Ji. 2014. Extracting lexically divergent paraphrases from twitter. TACL, 2:435– 448. Yin Yang, Nilesh Bansal, Wisam Dakka, Panagiotis Ipeirotis, Nick Koudas, and Dimitris Papadias. 2009. Query by document. In Proceedings of the Second ACM International Conference on Web Search and Data Mining, pages 34–43. Wenpeng Yin and Hinrich Sch¨utze. 2015. Convolutional neural network for paraphrase identification. In The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 901–911. 
Guangyou Zhou, Tingting He, Jun Zhao, and Po Hu. 2015. Learning continuous word embedding with metadata for question retrieval in community question answering. In ACL-IJCNLP 2015, pages 250– 259. 398
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 399–408, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Grapheme-to-Phoneme Models for (Almost) Any Language Aliya Deri and Kevin Knight Information Sciences Institute Department of Computer Science University of Southern California {aderi, knight}@isi.edu Abstract Grapheme-to-phoneme (g2p) models are rarely available in low-resource languages, as the creation of training and evaluation data is expensive and time-consuming. We use Wiktionary to obtain more than 650k word-pronunciation pairs in more than 500 languages. We then develop phoneme and language distance metrics based on phonological and linguistic knowledge; applying those, we adapt g2p models for highresource languages to create models for related low-resource languages. We provide results for models for 229 adapted languages. 1 Introduction Grapheme-to-phoneme (g2p) models convert words into pronunciations, and are ubiquitous in speech- and text-processing systems. Due to the diversity of scripts, phoneme inventories, phonotactic constraints, and spelling conventions among the world’s languages, they are typically languagespecific. Thus, while most statistical g2p learning methods are language-agnostic, they are trained on language-specific data—namely, a pronunciation dictionary consisting of word-pronunciation pairs, as in Table 1. Building such a dictionary for a new language is both time-consuming and expensive, because it requires expertise in both the language and a notation system like the International Phonetic Alphabet, applied to thousands of word-pronunciation pairs. Unsurprisingly, resources have been allocated only to the most heavily-researched languages. GlobalPhone, one of the most extensive multilingual text and speech databases, has pronunciation dictionaries in only 20 languages (Schultz et al., 2013)1. 1We have been unable to obtain this dataset. lang word pronunciation eng anybody e̞ n iː b ɒ d iː pol żołądka z̻ o w o n̪ t̪ k a ben শক্ s̪ ɔ k t̪ ɔ hebחלומותʁ a l o m o t Table 1: Examples of English, Polish, Bengali, and Hebrew pronunciation dictionary entries, with pronunciations represented with the International Phonetic Alphabet (IPA). word eng deu nld gift ɡ ɪ f tʰ ɡ ɪ f t ɣ ɪ f t class kʰ l æ s k l aː s k l ɑ s send s e̞ n d z ɛ n t s ɛ n t Table 2: Example pronunciations of English words using English, German, and Dutch g2p models. For most of the world’s more than 7,100 languages (Lewis et al., 2009), no data exists and the many technologies enabled by g2p models are inaccessible. Intuitively, however, pronouncing an unknown language should not necessarily require large amounts of language-specific knowledge or data. A native German or Dutch speaker, with no knowledge of English, can approximate the pronunciations of an English word, albeit with slightly different phonemes. Table 2 demonstrates that German and Dutch g2p models can do the same. Motivated by this, we create and evaluate g2p models for low-resource languages by adapting existing g2p models for high-resource languages using linguistic and phonological information. To facilitate our experiments, we create several notable data resources, including a multilingual pronunciation dictionary with entries for more than 500 languages. The contributions of this work are: 399 • Using data scraped from Wiktionary, we clean and normalize pronunciation dictionaries for 531 languages. 
To our knowledge, this is the most comprehensive multilingual pronunciation dictionary available. • We synthesize several named entities corpora to create a multilingual corpus covering 384 languages. • We develop a language-independent distance metric between IPA phonemes. • We extend previous metrics for languagelanguage distance with additional information and metrics. • We create two sets of g2p models for “high resource” languages: 97 simple rule-based models extracted from Wikipedia’s “IPA Help” pages, and 85 data-driven models built from Wiktionary data. • We develop methods for adapting these g2p models to related languages, and describe results for 229 adapted models. • We release all data and models. 2 Related Work Because of the severe lack of multilingual pronunciation dictionaries and g2p models, different methods of rapid resource generation have been proposed. Schultz (2009) reduces the amount of expertise needed to build a pronunciation dictionary, by providing a native speaker with an intuitive rulegeneration user interface. Schlippe et al. (2010) crawl web resources like Wiktionary for wordpronunciation pairs. More recently, attempts have been made to automatically extract pronunciation dictionaries directly from audio data (Stahlberg et al., 2016). However, the requirement of a native speaker, web resources, or audio data specific to the language still blocks development, and the number of g2p resources remains very low. Our method avoids these issues by relying only on text data from high-resource languages. Instead of generating language-specific resources, we are instead inspired by research on cross-lingual automatic speech recognition (ASR) by Vu and Schultz (2013) and Vu et al. (2014), who exploit linguistic and phonetic relationships in low-resource scenarios. Although these works focus on ASR instead of g2p models and rely on audio data, they demonstrate that speech technology is portable across related languages. g2ph word trainingh pronh Mh→l pronl (a) g2ph→l word Mh→l trainingh pronl (b) Figure 1: Strategies for adapting existing language resources through output mapping (a) and training data mapping (b). 3 Method Given a low-resource language l without g2p rules or training data, we adapt resources (either an existing g2p model or a pronunciation dictionary) from a high-resource language h to create a g2p for l. We assume the existence of two modules: a phoneme-to-phoneme distance metric phon2phon, which allows us to map between the phonemes used by h to the phonemes used by l, and a closest language module lang2lang, which provides us with related language h. Using these resources, we adapt resources from h to l in two different ways: • Output mapping (Figure 1a): We use g2ph to pronounce wordl, then map the output to the phonemes used by l with phon2phon. • Training data mapping (Figure 1b): We use phon2phon to map the pronunciations in h’s pronunciation dictionary to the phonemes used by l, then train a g2p model using the adapted data. The next sections describe how we collect data, create phoneme-to-phoneme and languageto-language distance metrics, and build highresource g2p models. 4 Data This section describes our data sources, which are summarized in Table 3. 4.1 Phoible Phoible (Moran et al., 2014) is an online repository of cross-lingual phonological data. We use 400 Phoible Wiki IPA Help tables Wiktionary 1674 languages 97 languages 531 languages 2155 lang. inventories 24 scripts 49 scripts 2182 phonemes 1753 graph. 
segments 658k word-pron pairs 37 features 1534 phon. segments Wiktionary train Wiktionary test NE data 3389 unique g-p rules 85 languages 501 languages 384 languages 42 scripts 45 scripts 36 scripts 629k word-pron pairs 26k word-pron pairs 9.9m NEs Table 3: Summary of data resources obtained from Phoible, named entity resources, Wikipedia IPA Help tables, and Wiktionary. Note that, although our Wiktionary data technically covers over 500 languages, fewer than 100 include more than 250 entries (Wiktionary train). two of its components: language phoneme inventories and phonetic features. 4.1.1 Phoneme inventories A phoneme inventory is the set of phonemes used to pronounce a language, represented in IPA. Phoible provides 2156 phoneme inventories for 1674 languages. (Some languages have multiple inventories from different linguistic studies.) 4.1.2 Phoneme feature vectors For each phoneme included in its phoneme inventories, Phoible provides information about 37 phonological features, such as whether the phoneme is nasal, consonantal, sonorant, or a tone. Each phoneme thus maps to a unique feature vector, with features expressed as +, -, or 0. 4.2 Named Entity Resources For our language-to-language distance metric, it is useful to have written text in many languages. The most easily accessible source of this data is multilingual named entity (NE) resources. We synthesize 7 different NE corpora: ChineseEnglish names (Ji et al., 2009), Geonames (Vatant and Wick, 2006), JRC names (Steinberger et al., 2011), corpora from LDC2, NEWS 2015 (Banchs et al., 2015), Wikipedia names (Irvine et al., 2010), and Wikipedia titles (Lin et al., 2011); to this, we also add multilingual Wikipedia titles for place names from an online English-language gazetteer (Everett-Heath, 2014). This yields a list of 9.9m named entities (8.9 not including English data) across 384 languages, which include the En2LDC2015E13, LDC2015E70, LDC2015E82, LDC2015E90, LDC2015E84, LDC2014E115, and LDC2015E91 glish translation, named entity type, and script information where possible. 4.3 Wikipedia IPA Help tables To explain different languages’ phonetic notations, Wikipedia users have created “IPA Help” pages,3 which provide tables of simple grapheme examples of a language’s phonemes. For example, on the English page, the phoneme z has the examples “zoo” and “has.” We automatically scrape these tables for 97 languages to create simple graphemephoneme rules. Using the phon2phon distance metric and mapping technique described in Section 5, we clean each table by mapping its IPA phonemes to the language’s Phoible phoneme inventory, if it exists. If it does not exist, we map the phonemes to valid Phoible phonemes and create a phoneme inventory for that language. 4.4 Wiktionary pronunciation dictionaries Ironically, to train data-driven g2p models for high-resource languages, and to evaluate our low-resource g2p models, we require pronunciation dictionaries for many languages. A common and successful technique for obtaining this data (Schlippe et al., 2010; Schlippe et al., 2012a; Yao and Kondrak, 2015) is scraping Wiktionary, an open-source multilingual dictionary maintained by Wikimedia. We extract unique word-pronunciation pairs from the English, German, Greek, Japanese, Korean, and Russian sites of Wiktionary. (Each Wiktionary site, while written in its respective language, contains word entries in multiple languages.) 
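As one possible in-memory representation of the Phoible data described in Sections 4.1.1 and 4.1.2, the sketch below reads a hypothetical tab-separated export with a language code, a phoneme, and the 37 feature columns taking the values +, - or 0. The file layout and column order are our assumptions for illustration; the released Phoible data has its own schema.

```python
import csv
from collections import defaultdict

def load_phoible(path):
    # inventories: language code -> set of IPA phonemes
    # features:    phoneme       -> tuple of '+'/'-'/'0' feature values
    inventories = defaultdict(set)
    features = {}
    with open(path, encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            lang, phoneme, feats = row[0], row[1], tuple(row[2:])
            inventories[lang].add(phoneme)
            features[phoneme] = feats
    return inventories, features
```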
3https://en.wikipedia.org/wiki/Category: International_Phonetic_Alphabet_help 401 Since Wiktionary data is very noisy, we apply length filtering as discussed by Schlippe et al. (2012b), as well as simple regular expression filters for HTML. We also map Wiktionary pronunciations to valid Phoible phonemes and language phoneme inventories, if they exist, as discussed in Section 5. This yields 658k word-pronunciation pairs for 531 languages. However, this data is not uniformly distributed across languages—German, English, and French account for 51% of the data. We extract test and training data as follows: For each language with at least 1 word-pron pair with a valid word (at least 3 letters and alphabetic), we extract a test set of a maximum of 200 valid words. From the remaining data, for every language with 50 or more entries, we create a training set with the available data. Ultimately, this yields a training set with 629k word-pronunciation pairs in 85 languages, and a test set with 26k pairs in 501 languages. 5 Phonetic Distance Metric Automatically comparing pronunciations across languages is especially difficult in text form. Although two versions of the “sh” sound, “ʃ” and “ɕ,” sound very similar to most people and very different from “m,” to a machine all three characters seem equidistant. Previous research (Özbal and Strapparava, 2012; Vu and Schultz, 2013; Vu et al., 2014) has addressed this issue by matching exact phonemes by character or manually selecting comparison features; however, we are interested in an automatic metric covering all possible IPA phoneme pairs. We handle this problem by using Phoible’s phoneme feature vectors to create phon2phon, a distance metric between IPA phonemes. In this section we also describe how we use this metric to clean open-source data and build phonememapping models between languages. 5.1 phon2phon As described in Section 4.1.2, each phoneme in Phoible maps to a unique feature vector; each feature value is +, -, or 0, representing whether a feature is present, not present, or not applicable. (Tones, for example, can never be syllabic or stressed.) We convert each feature vector into a bit representation by mapping each value to 3 bits. + to 110, - to 101, and 0 to 000. This captures the idea that lang word scraped cleaned ces jód ˈjoːd j o d pusڅلورt͡saˈlor t s a l o r kan ¸ರತ bhārata b h a ɾ a t̪ a hye օդապար otʰɑˈpɑɾ o̞ t̪ʰ a p a l̪ ukr тарган tɑrˈɦɑn t̪ a r̪ h a n̪ Table 4: Examples of scraped and cleaned Wiktionary pronunciation data in Czech, Pashto, Kannada, Armenian, and Ukrainian. Data: all phonemes P, scraped phoneme set S, language inventory T Result: Mapping table M initialize empty table M; for ps in S do if ps /∈P and ASCII(ps) ∈P then ps = ASCII(ps); end pp = min ∀pt∈T(phon2phon(ps, pt)); add ps →pp to M; end Algorithm 1: A condensed version of our procedure for mapping scraped phoneme sets from Wikipedia and Wiktionary to Phoible language inventories. The full algorithm handles segmentation of the scraped pronunciation and heuristically promotes coverage of the Phoible inventory. the features + and - are more similar than 0. We then compute the normalized Hamming distance between every phoneme pair p1,2 with feature vectors f1,2 and feature vector length n as follows: phon2phon(p1, p2) = ∑n i=1 1, iffi 1 ̸= fi 2 n 5.2 Data cleaning We now combine phon2phon distances and Phoible phoneme inventories to map phonemes from scraped Wikipedia IPA help tables and Wiktionary pronunciation dictionaries to Phoible phonemes and inventories. 
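The two steps of Section 5.1, the 3-bit encoding of feature values and the normalized Hamming distance, can be transcribed directly. The description leaves slightly open whether the distance is normalized over feature positions or over bits; the sketch below takes it over the bit strings, and the toy vectors use 3 features rather than Phoible's 37.

```python
BITS = {"+": (1, 1, 0), "-": (1, 0, 1), "0": (0, 0, 0)}

def to_bits(feature_vector):
    # Encode each +/-/0 feature value as 3 bits, following Section 5.1.
    return [b for value in feature_vector for b in BITS[value]]

def phon2phon(feats_p1, feats_p2):
    # Normalized Hamming distance between the bit representations.
    b1, b2 = to_bits(feats_p1), to_bits(feats_p2)
    return sum(x != y for x, y in zip(b1, b2)) / len(b1)

# Toy 3-feature example (real Phoible vectors have 37 features):
# the vectors differ in one feature, giving 2 differing bits out of 9.
print(phon2phon(("+", "-", "0"), ("+", "+", "0")))   # ~0.22
```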
We describe a condensed version of our procedure in Algorithm 1, and provide examples of cleaned Wiktionary output in Table 4. 5.3 Phoneme mapping models Another application of phon2phon is to transform pronunciations in one language to another language’s phoneme inventory. We can do this by 402 lang avg phon script English German Latin French Hindi Gujarati Bengali Sanskrit Vietnamese Indonesian Sindhi Polish Table 5: Closest languages with Wikipedia versions, based on lang2lang averaged metrics, phonetic inventory distance, and script distance. creating a single-state weighted finite-state transducer (wFST) W for input language inventory I and output language inventory O: ∀pi∈I,po∈OW.add(pi, po, 1 −phon2phon(pi, po)) W can then be used to map a pronunciation to a new language; this has the interesting effect of modeling accents by foreign-language speakers: think in English (pronounced "θ ɪ ŋ kʰ") becomes "s̪ ɛ ŋ k" in German; the capital city Dhaka (pronounced in Bengali with a voiced aspirated "ɖ̤") becomes the unaspirated "d æ kʰ æ" in English. 6 Language Distance Metric Since we are interested in mapping high-resource languages to low-resource related languages, an important subtask is finding the related languages of a given language. The URIEL Typological Compendium (Littell et al., 2016) is an invaluable resource for this task. By using features from linguistic databases (including Phoible), URIEL provides 5 distance metrics between languages: genetic, geographic, composite (a weighted composite of genetic and geographic), syntactic, and phonetic. We extend URIEL by adding two additional metrics, providing averaged distances over all metrics, and adding additional information about resources. This creates lang2lang, a table which provides distances between and information about 2,790 languages. 6.1 Phoneme inventory distance Although URIEL provides a distance metric between languages based on Phoible features, it only takes into account broad phonetic features, such as whether each language has voiced plosives. This can result in some non-intuitive results: based on this metric, there are almost 100 languages phonetically equivalent to the South Asian language Gujarati, among them Arawak and Chechen. To provide a more fine-grained phonetic distance metric, we create a phoneme inventory distance metric using phon2phon. For each pair of language phoneme inventories L1,2 in Phoible, we compute the following: d(L1, L2) = ∑ p1∈L1 min p2∈L2(phon2phon(p1, p2)) and normalize by dividing by ∑ i d(L1, Li). 6.2 Script distance Although Urdu is very similar to Hindi, its different alphabet and writing conventions would make it difficult to transfer an Urdu g2p model to Hindi. A better candidate language would be Nepali, which shares the Devanagari script, or even Bengali, which uses a similar South Asian script. A metric comparing the character sets used by two languages is very useful for capturing this relationship. We first use our multilingual named entity data to extract character sets for the 232 languages with more than 500 NE pairs; then, we note that Unicode character names are similar for linguistically related scripts. This is most notable in South Asian scripts: for example, the Bengali ক, Gujarati ક, and Hindi कhave Unicode names BENGALI LETTER KA, GUJARATI LETTER KA, and DEVANAGARI LETTER KA, respectively. 
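The Unicode-name observation above can be verified with Python's standard unicodedata module, and it already anticipates the script-distance computation spelled out in the next few sentences (strip the script word from each character name, then compare languages by the overlap of the remaining names). The snippet is our own illustration rather than the paper's implementation; the full procedure also removes accent and form identifiers, which is omitted here.

```python
import unicodedata

for ch in "কકक":                       # Bengali, Gujarati, Devanagari "ka"
    print(ch, unicodedata.name(ch))
# ক BENGALI LETTER KA
# ક GUJARATI LETTER KA
# क DEVANAGARI LETTER KA

def reduced_names(chars):
    # Drop the leading script word, keeping e.g. 'LETTER KA'.
    return {unicodedata.name(c).split(" ", 1)[-1] for c in chars}

bengali = reduced_names("কখগ")
devanagari = reduced_names("कखग")
latin = reduced_names("abc")
print(bengali & devanagari)   # {'LETTER KA', 'LETTER KHA', 'LETTER GA'}
print(bengali & latin)        # set(): unrelated scripts share no names
```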
We remove script, accent, and form identifiers from the Unicode names of all characters in our character sets, to create a set of reduced character names used across languages. Then we create a binary feature vector f for every language, with each feature indicating the language’s use of a reduced character (like LETTER KA). The distance between two languages L1,2 can then be computed with a spatial cosine distance: d(L1, L2) = 1 − f1 · f2 ∥f1∥2 ∥f2∥2 6.3 Resource information Each entry in our lang2lang distance table also includes the following features for the second language: the number of named entities, whether it is in Europarl (Koehn, 2005), whether it has its own Wikipedia, whether it is primarily written in the same script as the first language, whether it has an IPA Help page, whether it is in our Wiktionary test set, and whether it is in our Wiktionary training set. Table 5 shows examples of the closest languages to English, Hindi, and Vietnamese, according to different lang2lang metrics. 403 0 2000 4000 6000 8000 10000 # training word-pronunciation pairs 0 10 20 30 40 50 60 PER zho eng tgl rus hbs Figure 2: Training data size vs. PER for 85 models trained from Wiktionary. Labeled languages: English (eng), Serbo-Croatian (hbs), Russian (rus), Tagalog (tgl), and Chinese macrolanguage (zho). 7 Evaluation Metrics The next two sections describe our high-resource and adapted g2p models. To evaluate these models, we compute the following metrics: • % of words skipped: This shows the coverage of the g2p model. Some g2p models do not cover all character sequences. All other metrics are computed over non-skipped words. • word error rate (WER): The percent of incorrect 1-best pronunciations. • word error rate 100-best (WER 100): The percent of 100-best lists without the correct pronunciation. • phoneme error rate (PER): The percent of errors per phoneme. A PER of 15.0 indicates that, on average, a linguist would have to edit 15 out of 100 phonemes of the output. We then average these metrics across all languages (weighting each language equally). 8 High Resource g2p Models We now build and evaluate g2p models for the “high-resource” languages for which we have either IPA Help tables or sufficient training data from Wiktionary. Table 6 shows our evaluation of these models on Wiktionary test data, and Table 7 shows results for individual languages. 8.1 IPA Help models We first use the rules scraped from Wikipedia’s IPA Help pages to build rule-based g2p models. We build a wFST for each language, with a path for each rule g →p and weight w = 1/count(g). This method prefers rules with longer grapheme segments; for example, for the word tin, the output "ʃ n" is preferred over the correct "tʰ ɪ n" because of the rule ti→ʃ. We build 97 IPA Help models, but have test data for only 91—some languages, like Mayan, do not have any Wiktionary entries. As shown in Table 6, these rule-based models do not perform very well, suffering especially from a high percentage of skipped words. This is because IPA Help tables explain phonemes’ relationships to graphemes, rather than vice versa. Thus, the English letter x is omitted, since its composite phonemes are better explained by other letters. 8.2 Wiktionary-trained models We next build models for the 85 languages in our Wiktionary train data set, using the wFSTbased Phonetisaurus (Novak et al., 2011) and MITLM (Hsu and Glass, 2008), as described by Novak et al (2012). 
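Returning briefly to the IPA Help models of Section 8.1: a rough approximation of applying such a rule table is sketched below. It matches grapheme segments greedily from left to right, preferring longer segments, which mimics the 1/count(g) weighting, and skips a word whenever no rule covers a character, which is where the high skip rates come from. The real system compiles the rules into a wFST and searches over all segmentations, so this greedy version is only illustrative, and the miniature rule table is hypothetical.

```python
def rule_based_g2p(word, rules):
    # Greedy longest-match application of grapheme -> phoneme rules,
    # e.g. the German IPA Help rule 'sch' -> 'ʃ'.
    pron, i = [], 0
    max_len = max(len(g) for g in rules)
    while i < len(word):
        for length in range(min(max_len, len(word) - i), 0, -1):
            segment = word[i:i + length]
            if segment in rules:
                pron.append(rules[segment])
                i += length
                break
        else:                  # no rule covers this character
            return None        # skip the whole word
    return " ".join(pron)

# Tiny hypothetical German-style rule table.
rules = {"sch": "ʃ", "s": "s", "c": "k", "h": "h", "u": "ʊ", "l": "l", "e": "ə"}
print(rule_based_g2p("schule", rules))   # ʃ ʊ l ə
```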
We use a maximum of 10k pairs of training data, a 7-gram language model, and 50 iterations of EM. These data-driven models outperform IPA Help models by a considerable amount, achieving a WER of 44.69 and PER of 15.06 averaged across all 85 languages. Restricting data to 2.5k or more training examples boosts results to a WER of 28.02 and PER of 7.20, but creates models for only 29 languages. However, in some languages good results are obtained with very limited data; Figure 2 shows the varying quality across languages and data availability. 8.3 Unioned models We also use our rule-based IPA Help tables to improve Wiktionary model performance. We accomplish this very simply, by prepending IPA help rules like the German sch→ʃ to the Wiktionary training data as word-pronunciation pairs, then running the Phonetisaurus pipeline. Overall, the unioned g2p models outperform both the IPA help and Wiktionary models; however, as shown in Table 7, the effects vary across different languages. It is unclear what effect language characteristics, quality of IPA Help rules, and training data size have on unioned model improvement. 404 model # langs % skip WER WER 100 PER ipa-help 91 21.49 78.13 59.18 35.36 wiktionary 85 4.78 44.69 23.15 15.06 unioned 85 3.98 44.17 21.97 14.70 ipa-help 56 22.95 82.61 61.57 35.51 wiktionary 56 3.52 40.28 20.30 13.06 unioned 56 2.31 39.49 18.51 12.52 Table 6: Results for high-resource models. The top portion of the table shows results for all models; the bottom shows results only for languages with both IPA Help and Wiktionary models. lang ben tgl tur deu # train 114 126 2.5k 10k ipa-help 100.0 64.8 69.0 40.2 wikt 85.6 34.2 39.0 32.5 unioned 66.2 36.2 39.0 24.5 Table 7: WER scores for Bengali, Tagalog, Turkish, and German models. Unioned models with IPA Help rules tend to perform better than Wiktionary-only models, but not consistently. 9 Adapted g2p Models Having created a set of high-resource models and our phon2phon and lang2lang metrics, we now explore different methods for adapting highresource models and data for related low-resource languages. For comparable results, we restrict the set of high-resource languages to those covered by both our IPA Help and Wiktionary data. 9.1 No mapping The simplest experiment is to run our g2p models on related low-resource languages, without adaptation. For each language l in our test set, we determine the top high-resource related languages h1,2,... according to the lang2lang averaged metric that have both IPA Help and Wiktionary data and the same script, not including the language itself. For IPA Help models, we choose the 3 most related languages h1,2,3 and build a g2p model from their combined g-p rules. For Wiktionary and unioned models, we compile 5k words from the closest languages h1,2,... such that each h contributes no more than one third of the data (adding IPA Help rules for unioned models) and train a model from the combined data. For each test word-pronunciation pair, we trivially map the word’s letters to the characters used in h1,2,... by removing accents where necessary; we then use the high-resource g2p model to produce a pronunciation for the word. For example, our Czech IPA Help model uses a model built from g-p rules from Serbo-Croatian, Polish, and Slovenian; the Wiktionary and unioned models use data and rules from these languages and Latin as well. This expands 56 g2p models (the languages covered by both IPA Help and Wiktionary models) to models for 211 languages. 
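The word and phoneme error rates reported in Tables 6 and 7 can be computed from 1-best outputs as sketched below, taking WER as the fraction of words whose 1-best pronunciation differs from the reference and PER as phoneme-level edit distance normalized by reference length. The normalization choice is one common convention and the example pronunciations are made up; the skip rate and 100-best variants are omitted.

```python
def edit_distance(a, b):
    # Standard Levenshtein distance between two phoneme sequences.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def wer_per(hyps, refs):
    # hyps/refs: parallel lists of pronunciations (lists of phoneme strings).
    errors = sum(h != r for h, r in zip(hyps, refs))
    edits = sum(edit_distance(h, r) for h, r in zip(hyps, refs))
    ref_len = sum(len(r) for r in refs)
    return 100.0 * errors / len(refs), 100.0 * edits / ref_len

hyps = [["ʃ", "ʊ", "l", "ə"], ["k", "l", "a", "s"]]
refs = [["ʃ", "uː", "l", "ə"], ["k", "l", "a", "s"]]
print(wer_per(hyps, refs))   # (50.0, 12.5)
```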
However, as shown in Table 8, results are very poor, with a very high WER of 92% using the unioned models and a PER of more than 50%. Interestingly, IPA Help models perform better than the unioned models, but this is primarily due to their high skip rate. 9.2 Output mapping We next attempt to improve these results by creating a wFST that maps phonemes from the inventories of h1,2... to l (as described in Section 5.3). As shown in Figure 1a, by chaining this wFST to h1,2...’s g2p model, we map the g2p model’s output phonemes to the phonemes used by l. In each base model type, this process considerably improves accuracy over the no mapping approach; however, the IPA Help skip rate increases (Table 8). 9.3 Training data mapping We now build g2p models for l by creating synthetic data for the Wiktionary and unioned models, as in Figure 1b. After compiling wordpronunciation pairs and IPA Help g-p rules from closest languages h1,2,..., we then map the pronunciations to l and use the new pronunciations as training data. We again create unioned models by adding the related languages’ IPA Help rules to the training data. This method performs slightly worse in accuracy than output mapping, a WER of 87%, but has a much lower skip rate of 7%. 405 method base model # langs % skip WER WER 100 PER ipa-help 211 12.46 91.57 78.96 54.84 no mapping wikt 211 8.99 93.15 80.36 57.07 unioned 211 8.54 92.38 79.26 57.21 ipa-help 211 12.68 85.45 67.07 47.94 output mapping wikt 211 15.00 86.48 66.20 46.84 unioned 211 11.72 84.82 63.63 46.25 training data mapping wikt 211 8.55 87.40 70.94 48.89 unioned 211 7.19 87.36 70.75 47.48 rescripted wikt +10 15.94 93.66 81.76 56.37 unioned +10 14.97 94.45 80.68 57.35 final wikt/unioned 229 6.77 88.04 69.80 48.01 Table 8: Results for adapted g2p models. Final adapted results (using the 85 languages covered by Wiktionary and unioned high-resource models, as well as rescripting) cover 229 languages. lang method base model rel langs word gold hyp eng no mapping ipa-help deu, nld, swe fuse f j uː z f ʏ s ɛ arz output mapping unioned fas, urdبانجوb æː n̪ ɡ uː b a n̪ d̪ ʃ uː afr training mapping unioned nld, lat, isl dood d ɔ t d uː t sah training mapping unioned rus, bul, ukr хатырык k a t̪ ɯ r̪ ɯ k k a t̪ i r̪ i k kan rescripted unioned hin, ben ದು￿ಷɭ d̪ u ʂ ʈʰ a d̪̤ uː ʂ ʈʰ guj rescripted unioned san, ben, hin ગળૠોએિશઆ k ɾ o e ç ɪ a k ɾ õː ə ʂ ɪ a Table 9: Sample words, gold pronunciations, and hypothesis pronunciations for English, Egyptian Arabic, Afrikaans, Yakut, Kannada, and Gujarati. 9.4 Rescripting Adaptation methods thus far have required that h and l share a script. However, this excludes languages with related scripts, like Hindi and Bengali. We replicate our data mapping experiment, but now allow related languages h1,2,... with different scripts from l but a script distance of less than 0.2. We then build a simple “rescripting” table based on matching Unicode character names; we can then map not only h’s pronunciations to l’s phoneme set, but also h’s word to l’s script. Although performance is relatively poor, rescripting adds 10 new languages, including Telugu, Gujarati, and Marwari. 9.5 Discussion Table 8 shows evaluation metrics for all adaptation methods. We also show results using all 85 Wiktionary models (using unioned where IPA Help is available) and rescripting, which increases the total number of languages to 229. Table 9 provides examples of output with different languages. 
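A minimal stand-in for the output-mapping step of Section 9.2 is shown below: every phoneme produced by the high-resource g2p model is replaced by its nearest phoneme in the target inventory. The real system composes a single-state wFST weighted by phon2phon; here the distance function, the truncated inventory, and the character-overlap toy metric are all placeholders supplied for the demo.

```python
def map_pronunciation(phonemes, target_inventory, distance):
    # Replace every phoneme with its nearest neighbour (under `distance`)
    # in the target language's inventory: a greedy, 1-best stand-in for
    # the wFST composition used for output mapping.
    return [min(target_inventory, key=lambda q: distance(p, q))
            for p in phonemes]

# Toy character-overlap distance purely for the demo; the real system
# uses the feature-based phon2phon distance of Section 5.
def toy_distance(p, q):
    return len(set(p) ^ set(q))

english_pron = ["θ", "ɪ", "ŋ", "kʰ"]               # e.g. g2p_eng("think")
german_like_inventory = ["s", "t", "ɛ", "ɪ", "ŋ", "k"]
print(map_pronunciation(english_pron, german_like_inventory, toy_distance))
# -> ['s', 'ɪ', 'ŋ', 'k']  (with real phon2phon, closer to the "s ɛ ŋ k"
#    accent example of Section 5.3)
```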
In general, mapping combined with IPA Help rules in unioned models provides the best results. Training data mapping achieves similar scores as output mapping as well as a lower skip rate. Word skipping is problematic, but could be lowered by collecting g-p rules for the low-resource language. Although the adapted g2p models make many individual phonetic errors, they nevertheless capture overall pronunciation conventions, without requiring language-specific data or rules. Specific points of failure include rules that do not exist in related languages (e.g., the silent “e” at the end of “fuse” and the conversion of "d̪ʃ" to "ɡ" in Egyptian Arabic), mistakes in phoneme mapping, and overall “pronounceability” of the output. 9.6 Limitations Although our adaptation strategies are flexible, several limitations prevent us from building a g2p model for any language. If there is not enough information about the language, our lang2lang table will not be able to provide related highresource languages. Additionally, if the language’s script is not closely related to another language’s and thus cannot be rescripted (as with Thai and Armenian), we are not able to adapt related g2p data or models. 406 10 Conclusion Using a large multilingual pronunciation dictionary from Wiktionary and rule tables from Wikipedia, we build high-resource g2p models and show that adding g-p rules as training data can improve g2p performance. We then leverage lang2lang distance metrics and phon2phon phoneme distances to adapt g2p resources for highresource languages for 229 related low-resource languages. Our experiments show that adapting training data for low-resource languages outperforms adapting output. To our knowledge, these are the most broadly multilingual g2p experiments to date. With this publication, we release a number of resources to the NLP community: a large multilingual Wiktionary pronunciation dictionary, scraped Wikipedia IPA Help tables, compiled named entity resources (including a multilingual gazetteer), and our phon2phon and lang2lang distance tables.4 Future directions for this work include further improving the number and quality of g2p models, as well as performing external evaluations of the models in speech- and text-processing tasks. We plan to use the presented data and methods for other areas of multilingual natural language processing. 11 Acknowledgements We would like to thank the anonymous reviewers for their helpful comments, as well as our colleagues Marjan Ghazvininejad, Jonathan May, Nima Pourdamghani, Xing Shi, and Ashish Vaswani for their advice. We would also like to thank Deniz Yuret for his invaluable help with data collection. This work was supported in part by DARPA (HR0011-15-C-0115) and ARL/ARO (W911NF-10-1-0533). Computation for the work described in this paper was supported by the University of Southern California’s Center for HighPerformance Computing. References Rafael E Banchs, Min Zhang, Xiangyu Duan, Haizhou Li, and A Kumaran. 2015. Report of NEWS 2015 machine transliteration shared task. In Proc. NEWS Workshop. 4Instructions for obtaining this data are available at the authors’ websites. John Everett-Heath. 2014. The Concise Dictionary of World Place-Names. Oxford University Press, 2nd edition. Bo-June Paul Hsu and James R Glass. 2008. Iterative language model estimation: efficient data structure & algorithms. In Proc. Interspeech. Ann Irvine, Chris Callison-Burch, and Alexandre Klementiev. 2010. Transliterating from all languages. In Proc. AMTA. 
Heng Ji, Ralph Grishman, Dayne Freitag, Matthias Blume, John Wang, Shahram Khadivi, Richard Zens, and Hermann Ney. 2009. Name extraction and translation for distillation. Handbook of Natural Language Processing and Machine Translation: DARPA Global Autonomous Language Exploitation. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proc. MT Summit. M Paul Lewis, Gary F Simons, and Charles D Fennig. 2009. Ethnologue: Languages of the world. SIL international, Dallas. Wen-Pin Lin, Matthew Snover, and Heng Ji. 2011. Unsupervised language-independent name translation mining from Wikipedia infoboxes. In Proc. Workshop on Unsupervised Learning in NLP. Patrick Littell, David Mortensen, and Lori Levin. 2016. URIEL. Pittsburgh: Carnegie Mellon University. http://www.cs.cmu.edu/~dmortens/ uriel.html. Accessed: 2016-03-19. Steven Moran, Daniel McCloy, and Richard Wright, editors. 2014. PHOIBLE Online. Max Planck Institute for Evolutionary Anthropology, Leipzig. Josef R Novak, D Yang, N Minematsu, and K Hirose. 2011. Phonetisaurus: A WFST-driven phoneticizer. The University of Tokyo, Tokyo Institute of Technology. Josef R Novak, Nobuaki Minematsu, and Keikichi Hirose. 2012. WFST-based grapheme-to-phoneme conversion: open source tools for alignment, modelbuilding and decoding. In Proc. International Workshop on Finite State Methods and Natural Language Processing. Gözde Özbal and Carlo Strapparava. 2012. A computational approach to the automation of creative naming. In Proc. ACL. Tim Schlippe, Sebastian Ochs, and Tanja Schultz. 2010. Wiktionary as a source for automatic pronunciation extraction. In Proc. Interspeech. Tim Schlippe, Sebastian Ochs, and Tanja Schultz. 2012a. Grapheme-to-phoneme model generation for Indo-European languages. In Proc. ICASSP. Tim Schlippe, Sebastian Ochs, Ngoc Thang Vu, and Tanja Schultz. 2012b. Automatic error recovery for pronunciation dictionaries. In Proc. Interspeech. 407 Tanja Schultz, Ngoc Thang Vu, and Tim Schlippe. 2013. GlobalPhone: A multilingual text & speech database in 20 languages. In Proc. ICASSP. Tanja Schultz. 2009. Rapid language adaptation tools and technologies for multilingual speech processing systems. In Proc. IEEE Workshop on Automatic Speech Recognition. Felix Stahlberg, Tim Schlippe, Stephan Vogel, and Tanja Schultz. 2016. Word segmentation and pronunciation extraction from phoneme sequences through cross-lingual word-to-phoneme alignment. Computer Speech & Language, 35:234 – 261. Ralf Steinberger, Bruno Pouliquen, Mijail Kabadjov, and Erik Van der Goot. 2011. JRC-Names: A freely available, highly multilingual named entity resource. In Proc. Recent Advances in Natural Language Processing. Bernard Vatant and Marc Wick. 2006. Geonames ontology. Online at http://www.geonames.org/ontology. Ngoc Thang Vu and Tanja Schultz. 2013. Multilingual multilayer perceptron for rapid language adaptation between and across language families. In Proc. Interspeech. Ngoc Thang Vu, David Imseng, Daniel Povey, Petr Motlicek Motlicek, Tanja Schultz, and Hervé Bourlard. 2014. Multilingual deep neural network based acoustic modeling for rapid language adaptation. In Proc. ICASSP. Lei Yao and Grzegorz Kondrak. 2015. Joint generation of transliterations from multiple representations. In Proc. NAACL HLT. 408
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 409–420, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Neural Word Segmentation Learning for Chinese Deng Cai and Hai Zhao∗ Department of Computer Science and Engineering Key Lab of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering Shanghai Jiao Tong University, Shanghai, China [email protected], [email protected] Abstract Most previous approaches to Chinese word segmentation formalize this problem as a character-based sequence labeling task so that only contextual information within fixed sized local windows and simple interactions between adjacent tags can be captured. In this paper, we propose a novel neural framework which thoroughly eliminates context windows and can utilize complete segmentation history. Our model employs a gated combination neural network over characters to produce distributed representations of word candidates, which are then given to a long shortterm memory (LSTM) language scoring model. Experiments on the benchmark datasets show that without the help of feature engineering as most existing approaches, our models achieve competitive or better performances with previous stateof-the-art methods. 1 Introduction Most east Asian languages including Chinese are written without explicit word delimiters, therefore, word segmentation is a preliminary step for processing those languages. Since Xue (2003), most methods formalize the Chinese word segmentation (CWS) as a sequence labeling problem with character position tags, which can be handled with su∗Corresponding author. This paper was partially supported by Cai Yuanpei Program (CSC No. 201304490199 and No. 201304490171), National Natural Science Foundation of China (No. 61170114 and No. 61272248), National Basic Research Program of China (No. 2013CB329401), Major Basic Research Program of Shanghai Science and Technology Committee (No. 15JC1400103), Art and Science Interdisciplinary Funds of Shanghai Jiao Tong University (No. 14JCRZ04), and Key Project of National Society Science Foundation of China (No. 15-ZDA041). pervised learning methods such as Maximum Entropy (Berger et al., 1996; Low et al., 2005) and Conditional Random Fields (Lafferty et al., 2001; Peng et al., 2004; Zhao et al., 2006a). However, those methods heavily depend on the choice of handcrafted features. Recently, neural models have been widely used for NLP tasks for their ability to minimize the effort in feature engineering. For the task of CWS, Zheng et al. (2013) adapted the general neural network architecture for sequence labeling proposed in (Collobert et al., 2011), and used character embeddings as input to a two-layer network. Pei et al. (2014) improved upon (Zheng et al., 2013) by explicitly modeling the interactions between local context and previous tag. Chen et al. (2015a) proposed a gated recursive neural network to model the feature combinations of context characters. Chen et al. (2015b) used an LSTM architecture to capture potential long-distance dependencies, which alleviates the limitation of the size of context window but introduced another window for hidden states. Despite the differences, all these models are designed to solve CWS by assigning labels to the characters in the sequence one by one. 
At each time step of inference, these models compute the tag scores of character based on (i) context features within a fixed sized local window and (ii) tagging history of previous one. Nevertheless, the tag-tag transition is insufficient to model the complicated influence from previous segmentation decisions, though it could sometimes be a crucial clue to later segmentation decisions. The fixed context window size, which is broadly adopted by these methods for feature engineering, also restricts the flexibility of modeling diverse distances. Moreover, word-level information, which is being the greater granularity unit as suggested in (Huang and Zhao, 2006), remains 409 Models Characters Words Tags character based (Zheng et al., 2013), ... ci−2, ci−1, ci, ci+1, ci+2 ti−1ti (Chen et al., 2015b) c0, c1, . . . , ci, ci+1, ci+2 ti−1ti word based (Zhang and Clark, 2007), ... c in wj−1, wj, wj+1 wj−1, wj, wj+1 Ours c0, c1, . . . , ci w0, w1, . . . , wj Table 1: Feature windows of different models. i(j) indexes the current character(word) that is under scoring. unemployed. To alleviate the drawbacks inside previous methods and release those inconvenient constrains such as the fixed sized context window, this paper makes a latest attempt to re-formalize CWS as a direct segmentation learning task. Our method does not make tagging decisions on individual characters, but directly evaluates the relative likelihood of different segmented sentences and then search for a segmentation with the highest score. To feature a segmented sentence, a series of distributed vector representations (Bengio et al., 2003) are generated to characterize the corresponding word candidates. Such a representation setting makes the decoding quite different from previous methods and indeed much more challenging, however, more discriminative features can be captured. Though the vector building is word centered, our proposed scoring model covers all three processing levels from character, word until sentence. First, the distributed representation starts from character embedding, as in the context of word segmentation, the n-gram data sparsity issue makes it impractical to use word vectors immediately. Second, as the word candidate representation is derived from its characters, the inside character structure will also be encoded, thus it can be used to determine the word likelihood of its own. Third, to evaluate how a segmented sentence makes sense through word interacting, an LSTM (Hochreiter and Schmidhuber, 1997) is used to chain together word candidates incrementally and construct the representation of partially segmented sentence at each decoding step, so that the coherence between next word candidate and previous segmentation history can be depicted. To our best knowledge, our proposed approach to CWS is the first attempt which explicitly models the entire contents of the segmenter’s state, including the complete history of both segmentation decisions and input characters. The compar Neural Network Scoring Model Decoder ··· ··· ··· ··· ··· Max-Margin Training 自然/语言/处理 自 +1.5 自然语 -1.5 然语 -1.5 然语言 +0.7 言处理 -2.1 处理 +1.5 处理 +1.5 +1.2 +0.8 +2.0 +2.3 +3.2 +0.3 +1.2 自然语言处理 (input sequence) 自/然语言/处理 (output sentence) (golden sentence) Figure 1: Our framework. isons of feature windows used in different models are shown in Table 1. 
Compared to both sequence labeling schemes and word-based models in the past, our model thoroughly eliminates context windows and can capture the complete history of segmentation decisions, which offers more possibilities to effectively and accurately model segmentation context. 2 Overview We formulate the CWS problem as finding a mapping from an input character sequence x to a word sequence y, and the output sentence y∗satisfies: y∗= arg max y∈GEN(x) ( n X i=1 score(yi|y1, · · · , yi−1)) where n is the number of word candidates in y, and GEN(x) denotes the set of possible segmentations for an input sequence x. Unlike all previous works, our scoring function is sensitive to the complete contents of partially segmented sentence. As shown in Figure 1, to solve CWS in this way, a neural network scoring model is designed to evaluate the likelihood of a segmented sentence. Based on the proposed model, a decoder is developed to find the segmented sentence with the highest score. Meanwhile, a max-margin method is utilized to perform the training by comparing 410 segmented sentence Lookup Table GCNN Unit LSTM Unit Predicting Scoring c1 c2 c3 c4 c5 c6 c7 c8 y1 y2 y3 y4 p1 p2 p3 p4 u Figure 2: Architecture of our proposed neural network scoring model, where ci denotes the i-th input character, yj denotes the learned representation of the j-th word candidate, pk denotes the prediction for the (k + 1)-th word candidate and u is the trainable parameter vector for scoring the likelihood of individual word candidates. the structured difference of decoder output and the golden segmentation. The following sections will introduce each of these components in detail. 3 Neural Network Scoring Model The score for a segmented sentence is computed by first mapping it into a sequence of word candidate vectors, then the scoring model takes the vector sequence as input, scoring on each word candidate from two perspectives: (1) how likely the word candidate itself can be recognized as a legal word; (2) how reasonable the link is for the word candidate to follow previous segmentation history immediately. After that, the word candidate is appended to the segmentation history, updating the state of the scoring system for subsequent judgements. Figure 2 illustrates the entire scoring neural network. 3.1 Word Score Character Embedding. While the scores are decided at the word-level, using word embedding (Bengio et al., 2003; Wang et al., 2016) immediately will lead to a remarkable issue that rare words and out-of-vocabulary words will be poorly estimated (Kim et al., 2015). In addition, the character-level information inside an n-gram can be helpful to judge whether it is a true word. Therefore, a lookup table of character embeddings is used as the bottom layer. Formally, we have a character dictionary D of size |D|. Then each character c ∈D is represented as a real-valued vector (character embedding) c ∈Rd, where d is the dimensionality of the vector space. The character embeddings are then stacked into an embedding matrix M ∈Rd×|D|. For a character c ∈D, its character embedding c ∈Rd is retrieved by the embedding layer according to its index. Gated Combination Neural Network. In order to obtain word representation through its characters, in the simplest strategy, character vectors are integrated into their word representation using a weight matrix W(L) that is shared across all words with the same length L, followed by a non-linear function g(·). 
Specifically, ci (1 ≤i ≤L) are d-dimensional character vector representations respectively, the corresponding word vector w will be d-dimensional as well: w = g(W(L)   c1 ... cL  ) (1) where W(L) ∈Rd×Ld and g is a non-linear function as mentioned above. Although the mechanism above seems to work well, it can not sufficiently model the complicated combination features in practice, yet. Gated structure in neural network can be useful for hybrid feature extraction according to (Chen et al., 2015a; Chung et al., 2014; Cho et al., 2014), 411 c1 cL ˆw w r1 rL zN z1 zL Figure 3: Gated combination neural network. we therefore propose a gated combination neural network (GCNN) especially for character compositionality which contains two types of gates, namely reset gate and update gate. Intuitively, the reset gates decide which part of the character vectors should be mixed while the update gates decide what to preserve when combining the characters information. Concretely, for words with length L, the word vector w ∈Rd is computed as follows: w = zN ⊙ˆw + L X i=1 zi ⊙ci where zN, zi (1 ≤i ≤L) are update gates for new activation ˆw and governed characters respectively, and ⊙indicates element-wise multiplication. The new activation ˆw is computed as: ˆw = tanh(W(L)   r1 ⊙c1 ... rL ⊙cL  ) where W(L) ∈Rd×Ld and ri ∈Rd (1 ≤i ≤L) are the reset gates for governed characters respectively, which can be formalized as:   r1 ... rL  = σ(R(L)   c1 ... cL  ) where R(L) ∈RLd×Ld is the coefficient matrix of reset gates and σ denotes the sigmoid function. The update gates can be formalized as:   zN z1 ... zL  = exp(U(L)   ˆw c1 ... cL  ) ⊙   1/Z 1/Z ... 1/Z   where U(L) ∈R(L+1)d×(L+1)d is the coefficient matrix of update gates, and Z ∈Rd is the normalization vector, Zk = L X i=1 [exp(U(L)   ˆw c1 ... cL  )]d×i+k where 0 ≤k < d. According to the normalization condition, the update gates are constrained by: zN + L X i=1 zi = 1 The gated mechanism is capable of capturing both character and character interaction characteristics to give an efficient word representation (See Section 6.3). Word Score. Denote the learned vector representations for a segmented sentence y with [y1, y2, · · · , yn], where n is the number of word candidates in the sentence. word score will be computed by the dot products of vector yi(1 ≤ i ≤n) and a trainable parameter vector u ∈Rd. Word Score(yi) = u · yi (2) It indicates how likely a word candidate by itself is to be a true word. 3.2 Link Score Inspired by the recurrent neural network language model (RNN-LM) (Mikolov et al., 2010; Sundermeyer et al., 2012), we utilize an LSTM system to capture the coherence in a segmented sentence. Long Short-Term Memory Networks. The LSTM neural network (Hochreiter and Schmidhuber, 1997) is an extension of the recurrent neural network (RNN), which is an effective tool for sequence modeling tasks using its hidden states for history information preservation. At each time step t, an RNN takes the input xt and updates its recurrent hidden state ht by ht = g(Uht−1 + Wxt + b) where g is a non-linear function. Although RNN is capable, in principle, to process arbitrary-length sequences, it can be difficult to train an RNN to learn long-range dependencies due to the vanishing gradients. LSTM addresses 412 yt−1 pt yt pt+1 yt+1 pt+2 ht−1 ht ht+1 Figure 4: Link scores (dashed lines). 
this problem by introducing a memory cell to preserve states over long periods of time, and controls the update of hidden state and memory cell by three types of gates, namely input gate, forget gate and output gate. Concretely, each step of LSTM takes input xt, ht−1, ct−1 and produces ht, ct via the following calculations: it = σ(Wixt + Uiht−1 + bi) ft = σ(Wfxt + Ufht−1 + bf) ot = σ(Woxt + Uoht−1 + bo) ˆct = tanh(Wcxt + Ucht−1 + bc) ct = ft ⊙ct−1 + it ⊙ˆct ht = ot ⊙tanh(ct) where σ, ⊙are respectively the element-wise sigmoid function and multiplication, it, ft, ot, ct are respectively the input gate, forget gate, output gate and memory cell activation vector at time t, all of which have the same size as hidden state vector ht ∈RH. Link Score. LSTMs have been shown to outperform RNNs on many NLP tasks, notably language modeling (Sundermeyer et al., 2012). In our model, LSTM is utilized to chain together word candidates in a left-to-right, incremental manner. At time step t, a prediction pt+1 ∈ Rd about next word yt+1 is made based on the hidden state ht: pt+1 = tanh(Wpht + bp) link score for next word yt+1 is then computed as: Link Score(yt+1) = pt+1 · yt+1 (3) Due to the structure of LSTM, the prediction vector pt+1 carries useful information detected from the entire segmentation history, including previous segmentation decisions. In this way, our model gains the ability of sequence-level discrimination rather than local optimization. 3.3 Sentence score Sentence score for a segmented sentence y with n word candidates is computed by summing up word scores (2) and link scores (3) as follow: s(y[1:n], θ) = n X t=1 (u · yt + pt · yt) (4) where θ is the parameter set used in our model. 4 Decoding The total number of possible segmented sentences grows exponentially with the length of character sequence, which makes it impractical to compute the scores of every possible segmentation. In order to get exact inference, most sequence-labeling systems address this problem with a Viterbi search, which takes the advantage of their hypothesis that the tag interactions only exist within adjacent characters (Markov assumption). However, since our model is intended to capture complete history of segmentation decisions, such dynamic programming algorithms can not be adopted in this situation. Algorithm 1 Beam Search. Input: model parameters θ beam size k maximum word length w input character sequence c[1 : n] Output: Approx. k best segmentations 1: π[0] ←{(score = 0, h = h0, c = c0)} 2: for i = 1 to n do 3: ▷Generate Candidate Word Vectors 4: X ←∅ 5: for j = max(1, i −w) to i do 6: w = GCNN-Procedure(c[j : i]) 7: X.add((index = j −1, word = w)) 8: end for 9: ▷Join Segmentation 10: Y ←{ y.append(x) | y ∈π[x.index] and x ∈X} 11: ▷Filter k-Max 12: π[i] ←k- arg max y∈Y y.score 13: end for 14: return π[n] To make our model efficient in practical use, we propose a beam-search algorithm with dynamic programming motivations as shown in Algorithm 1. The main idea is that any segmentation of the 413 first i characters can be separated as two parts, the first part consists of characters with indexes from 0 to j that is denoted as y, the rest part is the word composed by c[j+1 : i]. The influence from previous segmentation y can be represented as a triple (y.score, y.h, y.c), where y.score, y.h, y.c indicate the current score, current hidden state vector and current memory cell vector respectively. 
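Algorithm 1 translates almost line for line into Python. In the sketch below, `word_vector` (standing in for the GCNN composition), `lstm_step`, `word_score`, and `link_score` are placeholder callables for the neural components defined above; each beam item carries the (y.score, y.h, y.c) triple from the text plus the word history so the final segmentation can be read off. The toy instantiation at the end only exercises the control flow, not a learned model.

```python
from collections import namedtuple

Beam = namedtuple("Beam", "score h c words")   # (y.score, y.h, y.c) plus the word history

def beam_search(chars, word_vector, lstm_step, word_score, link_score,
                beam_size=4, max_word_len=4, h0=None, c0=None):
    """Approximate k-best segmentations of `chars`, following Algorithm 1."""
    n = len(chars)
    pi = [[] for _ in range(n + 1)]            # pi[i]: best partial segmentations of c[1:i]
    pi[0] = [Beam(0.0, h0, c0, [])]
    for i in range(1, n + 1):
        candidates = []
        for j in range(max(0, i - max_word_len), i):   # candidate word c[j+1:i]
            w = word_vector(chars[j:i])                # GCNN-Procedure
            for prev in pi[j]:                         # join with earlier segmentations
                score = prev.score + word_score(w) + link_score(prev.h, w)
                h, c = lstm_step(w, prev.h, prev.c)
                candidates.append(Beam(score, h, c, prev.words + [chars[j:i]]))
        pi[i] = sorted(candidates, key=lambda b: b.score, reverse=True)[:beam_size]
    return pi[n]

# Toy instantiation: identity "vectors", lexicon-based word score, zero link score.
lex = {"自然", "语言", "处理"}
beams = beam_search("自然语言处理",
                    word_vector=lambda s: s,
                    lstm_step=lambda w, h, c: (h, c),
                    word_score=lambda w: 1.0 if w in lex else -1.0,
                    link_score=lambda h, w: 0.0)
print(beams[0].words)                          # ['自然', '语言', '处理']
```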
Beam search ensures that the total time for segmenting a sentence of n characters is w × k × n, where w, k are maximum word length and beam size respectively. 5 Training We use the max-margin criterion (Taskar et al., 2005) to train our model. As reported in (Kummerfeld et al., 2015), the margin methods generally outperform both likelihood and perception methods. For a given character sequence x(i), denote the correct segmented sentence for x(i) as y(i). We define a structured margin loss ∆(y(i), ˆy) for predicting a segmented sentence ˆy: ∆(y(i), ˆy) = m X t=1 µ1{y(i),t ̸= ˆyt} where m is the length of sequence x(i) and µ is the discount parameter. The calculation of margin loss could be regarded as to count the number of incorrectly segmented characters and then multiple it with a fixed discount parameter for smoothing. Therefore, the loss is proportional to the number of incorrectly segmented characters. Given a set of training set Ω, the regularized objective function is the loss function J(θ) including an ℓ2 norm term: J(θ) = 1 |Ω| X (x(i),y(i))∈Ω li(θ) + λ 2 ||θ||2 2 li(θ) = max ˆy∈GEN(x(i)) (s(ˆy, θ) + ∆(y(i), ˆy) −s(y(i), θ)) where the function s(·) is the sentence score defined in equation (4). Due to the hinge loss, the objective function is not differentiable, we use a subgradient method (Ratliff et al., 2007) which computes a gradientlike direction. Following (Socher et al., 2013), we use the diagonal variant of AdaGrad (Duchi et al., 2011) with minibatchs to minimize the objective. Character embedding size d = 50 Hidden unit number H = 50 Initial learning rate α = 0.2 Margin loss discount µ = 0.2 Regularization λ = 10−6 Dropout rate on input layer p = 0.2 Maximum word length w = 4 Table 2: Hyper-parameter settings. The update for the i-th parameter at time step t is as follows: θt,i = θt−1,i − α qPt τ=1 g2 τ,i gt,i where α is the initial learning rate and gτ,i ∈R|θi| is the subgradient at time step τ for parameter θi. 6 Experiments 6.1 Datasets To evaluate the proposed segmenter, we use two popular datasets, PKU and MSR, from the second International Chinese Word Segmentation Bakeoff (Emerson, 2005). These datasets are commonly used by previous state-of-the-art models and neural network models. Both datasets are preprocessed by replacing the continuous English characters and digits with a unique token. All experiments are conducted with standard Bakeoff scoring program1 calculating precision, recall, and F1-score. 6.2 Hyper-parameters Hyper-parameters of neural network model significantly impact on its performance. To determine a set of suitable hyper-parameters, we divide the training data into two sets, the first 90% sentences as training set and the rest 10% sentences as development set. We choose the hyper-parameters as shown in Table 2. We found that the character embedding size has a limited impact on the performance as long as it is large enough. The size 50 is chosen as a good trade-off between speed and performance. The number of hidden units is set to be the same as the character embedding. Maximum word length determines the number of parameters in GCNN part and the time consuming of beam search, since the words with a length l > 4 are relatively rare, 1http://www.sighan.org/bakeoff2003/score 414 92 93 94 95 96 0 10 20 30 40 beam size=2 beam size=4 beam size=8 epochs F1-score(%) Figure 5: Performances of different beam sizes on PKU dataset. 
92 93 94 95 96 0 10 20 30 40 only word score only link score both epochs F1-score(%) Figure 6: Performances of different score strategies on PKU dataset. 0.29% in PKU training data and 1.25% in MSR training data, we set the maximum word length to 4 in our experiments.2 Dropout is a popular technique for improving the performance of neural networks by reducing overfitting (Srivastava et al., 2014). We also drop the input layer of our model with dropout rate 20% to avoid overfitting. 6.3 Model Analysis Beam Size. We first investigated the impact of beam size over segmentation performance. Figure 5 shows that a segmenter with beam size 4 is enough to get the best performance, which makes our model find a good balance between accuracy and efficiency. GCNN. We then studied the role of GCNN in our model. To reveal the impact of GCNN, we re-implemented a simplified version of our model, 2This 4-character limitation is just for consistence for both datasets. We are aware that it is a too strict setting, especially which makes additional performance loss in a dataset with larger average word length, i.e., MSR. models P R F Single layer (d = 50) 94.3 93.7 94.0 GCNN (d = 50) 95.8 95.2 95.5 Single layer (d = 100) 94.9 94.4 94.7 Table 3: Performances of different models on PKU dataset. PKU MSR +Dictionary ours theirs ours theirs (Chen et al., 2015a) 94.9 95.9 95.8 96.2 (Chen et al., 2015b) 94.6 95.7 95.7 96.4 This work 95.7 96.4 Table 4: Comparison of using different Chinese idiom dictionaries.3 which replaces the GCNN part with a single nonlinear layer as in equation (1). The results are listed in Table 3, which demonstrate that the performance is significantly boosted by exploiting the GCNN architecture (94.0% to 95.5% on F1-score), while the best performance that the simplified version can achieve is 94.7%, but using a much larger character embedding size. Link Score & Word Score. We conducted several experiments to investigate the individual effect of link score and word score, since these two types of scores are intended to estimate the sentence likelihood from two different perspectives: the semantic coherence between words and the existence of individual words. The learning curves of models with different scoring strategies are shown in Figure 6. The model with only word score can be regarded as the situation that the segmentation decisions are made only based on local window information. The comparisons show that such a model gives moderate performance. By contrast, the model with only link score gives a much better performance close to the joint model, which demonstrates that the complete segmentation history, which can not be effectively modeled in previous schemes, possesses huge appliance value for word segmentation. 6.4 Results 3The dictionary used in (Chen et al., 2015a; Chen et al., 2015b) is neither publicly released nor specified the exact source until now. We have to re-run their code using our selected dictionary to make a fair comparison. Our dictionary has been submitted along with this submission. 
415 Models PKU MSR P R F P R F (Zheng et al., 2013) 92.8 92.0 92.4 92.9 93.6 93.3 (Pei et al., 2014) 93.7 93.4 93.5 94.6 94.2 94.4 (Chen et al., 2015a)* 94.6 94.2 94.4 94.6 95.6 95.1 (Chen et al., 2015b) * 94.6 94.0 94.3 94.5 95.5 95.0 This work 95.5 94.9 95.2 96.1 96.7 96.4 +Pre-trained character embedding (Zheng et al., 2013) 93.5 92.2 92.8 94.2 93.7 93.9 (Pei et al., 2014) 94.4 93.6 94.0 95.2 94.6 94.9 (Chen et al., 2015a)* 94.8 94.1 94.5 94.9 95.9 95.4 (Chen et al., 2015b)* 95.1 94.4 94.8 95.1 96.2 95.6 This work 95.8 95.2 95.5 96.3 96.8 96.5 Table 5: Comparison with previous neural network models. Results with * are from our runs on their released implementations.5 Models PKU MSR PKU MSR (Tseng et al., 2005) 95.0 96.4 (Zhang and Clark, 2007) 94.5 97.2 (Zhao and Kit, 2008b) 95.4 97.6 (Sun et al., 2009) 95.2 97.3 (Sun et al., 2012) 95.4 97.4 (Zhang et al., 2013) 96.1* 97.4* (Chen et al., 2015a) 94.5 95.4 96.4* 97.6* (Chen et al., 2015b) 94.8 95.6 96.5* 97.4* This work 95.5 96.5 Table 6: Comparison with previous state-of-the-art models. Results with * used external dictionary or corpus. We first compare our model with the latest neural network methods as shown in Table 4. However, (Chen et al., 2015a; Chen et al., 2015b) used an extra preprocess to filter out Chinese idioms according to an external dictionary.4 Table 4 lists the results (F1-scores) with different dictionaries, which show that our models perform better when under the same settings. Table 5 gives comparisons among previous neural network models. In the first block of Table 5, the character embedding matrix M is randomly initialized. The results show that our proposed novel model outperforms previous neural network 4In detail, when a dictionary is used, a preprocess is performed before training and test, which scans original text to find out Chinese idioms included in the dictionary and replace them with a unique token. This treatment does not strictly follow the convention of closed-set setting defined by SIGHAN Bakeoff, as no linguistic resources, either dictionary or corpus, other than the training corpus, should be adopted. 5To make comparisons fair, we re-run their code (https://github.com/dalstonChen) without their unspecified Chinese idiom dictionary. methods. Previous works have found that the performance can be improved by pre-training the character embeddings on large unlabeled data. Therefore, we use word2vec (Mikolov et al., 2013) toolkit6 to pre-train the character embeddings on the Chinese Wikipedia corpus and use them for initialization. Table 5 also shows the results with additional pre-trained character embeddings. Again, our model achieves better performance than previous neural network models do. Table 6 compares our models with previous state-of-the-art systems. Recent systems such as (Zhang et al., 2013), (Chen et al., 2015b) and (Chen et al., 2015a) rely on both extensive feature engineering and external corpora to boost performance. Such systems are not directly comparable with our models. In the closed-set setting, our models can achieve state-of-the-art performance 6http://code.google.com/p/word2vec/ 416 Max. word length F1 score Time (Days) 4 96.5 4 5 96.7 5 6 96.8 6 Table 7: Results on MSR dataset with different maximum decoding word length settings. on PKU dataset but a competitive result on MSR dataset, which can attribute to too strict maximum word length setting for consistence as it is well known that MSR corpus has a much longer average word length (Zhao et al., 2010). 
Table 7 demonstrates the results on MSR corpus with different maximum decoding word lengths, in which both F1 scores and training time are given. The results show that the segmentation performance can indeed further be improved by allowing longer words during decoding, though longer training time are also needed. As 6character words are allowed, F1 score on MSR can be furthermore improved 0.3%. For the running cost, we roughly report the current computation consuming on PKU dataset.7 It takes about two days to finish 50 training epochs (for results in Figure 6 and the last row of Table 6) only with two cores of an Intel i7-5960X CPU. The requirement for RAM during training is less than 800MB. The trained model can be saved within 4MB on the hard disk. 7 Related Work Neural Network Models. Most modern CWS methods followed (Xue, 2003) treated CWS as a sequence labeling problems (Zhao et al., 2006b). Recently, researchers have tended to explore neural network based approaches (Collobert et al., 2011) to reduce efforts of feature engineering (Zheng et al., 2013; Qi et al., 2014; Chen et al., 2015a; Chen et al., 2015b). They modeled CWS as tagging problem as well, scoring tags on individual characters. In those models, tag scores are decided by context information within local windows and the sentence-level score is obtained via context-independently tag transitions. Pei et al. (2014) introduced the tag embedding as input to capture the combinations of context and tag history. However, in previous works, only the tag of previous one character was taken into consideration though theoretically the complete history of 7Our code is released at https://github.com/jcyk/CWS. actions taken by the segmenter should be considered. Alternatives to Sequence Labeling. Besides sequence labeling schemes, Zhang and Clark (2007) proposed a word-based perceptron method. Zhang et al. (2012) used a linear-time incremental model which can also benefits from various kinds of features including word-based features. But both of them rely heavily on massive handcrafted features. Contemporary to this work, some neural models (Zhang et al., 2016a; Liu et al., 2016) also leverage word-level information. Specifically, Liu et al. (2016) use a semi-CRF taking segment-level embeddings as input and Zhang et al. (2016a) use a transition-based framework. Another notable exception is (Ma and Hinrichs, 2015), which is also an embedding-based model, but models CWS as configuration-action matching. However, again, this method only uses the context information within limited sized windows. Other Techniques. The proposed model might furthermore benefit from some techniques in recent state-of-the-art systems, such as semisupervised learning (Zhao and Kit, 2008b; Zhao and Kit, 2008a; Sun and Xu, 2011; Zhao and Kit, 2011; Zeng et al., 2013; Zhang et al., 2013), incorporating global information (Zhao and Kit, 2007; Zhang et al., 2016b), and joint models (Qian and Liu, 2012; Li and Zhou, 2012). 8 Conclusion This paper presents a novel neural framework for the task of Chinese word segmentation, which contains three main components: (1) a factory to produce word representation when given its governed characters; (2) a sentence-level likelihood evaluation system for segmented sentence; (3) an efficient and effective algorithm to find the best segmentation. The proposed framework makes a latest attempt to formalize word segmentation as a direct structured learning procedure in terms of the recent distributed representation framework. 
Though our system outputs results that are better than the latest neural network segmenters but comparable to all previous state-of-the-art systems, the framework remains a great of potential that can be further investigated and improved in the future. 417 References Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. The Journal of Machine Learning Research, 3:1137–1155. Adam L Berger, Vincent J Della Pietra, and Stephen A Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational linguistics, 22(1):39–71. Xinchi Chen, Xipeng Qiu, Chenxi Zhu, and Xuanjing Huang. 2015a. Gated recursive neural network for chinese word segmentation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1744–1753. Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Pengfei Liu, and Xuanjing Huang. 2015b. Long short-term memory neural networks for chinese word segmentation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1197–1206. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1724–1734. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159. Thomas Emerson. 2005. The second international chinese word segmentation bakeoff. In Proceedings of the fourth SIGHAN workshop on Chinese language Processing, volume 133. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Chang-Ning Huang and Hai Zhao. 2006. Which is essential for chinese word segmentation: Character versus word. In The 20th Pacific Asia Conference on Language, Information and Computation, pages 1–12. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2015. Character-aware neural language models. arXiv preprint arXiv:1508.06615. Jonathan K. Kummerfeld, Taylor Berg-Kirkpatrick, and Dan Klein. 2015. An empirical analysis of optimization for max-margin nlp. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 273–279. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth Interntional Conference on Machine Learning. Zhongguo Li and Guodong Zhou. 2012. Unified dependency parsing of chinese morphological and syntactic structures. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1445–1454. Yijia Liu, Wanxiang Che, Jiang Guo, Bing Qin, and Ting Liu. 2016. 
Exploring segment representations for neural segmentation models. arXiv preprint arXiv:1604.05499. Jin Kiat Low, Hwee Tou Ng, and Wenyuan Guo. 2005. A maximum entropy approach to chinese word segmentation. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, volume 1612164, pages 448–455. Jianqiang Ma and Erhard Hinrichs. 2015. Accurate linear-time chinese word segmentation via embedding matching. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1733–1743. Tomas Mikolov, Martin Karafi´at, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In 11th Annual Conference of the International Speech Communication Association, pages 1045–1048. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Wenzhe Pei, Tao Ge, and Baobao Chang. 2014. Maxmargin tensor neural network for chinese word segmentation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 293–303. Fuchun Peng, Fangfang Feng, and Andrew McCallum. 2004. Chinese segmentation and new word detection using conditional random fields. In Proceedings of the 20th international conference on Computational Linguistics, page 562. 418 Yanjun Qi, Sujatha G Das, Ronan Collobert, and Jason Weston. 2014. Deep learning for character-based information extraction. In Advances in Information Retrieval, pages 668–674. Xian Qian and Yang Liu. 2012. Joint chinese word segmentation, pos tagging and parsing. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 501– 511. Nathan D Ratliff, J Andrew Bagnell, and Martin Zinkevich. 2007. (approximate) subgradient methods for structured prediction. In International Conference on Artificial Intelligence and Statistics, pages 380– 387. Richard Socher, John Bauer, Christopher D. Manning, and Ng Andrew Y. 2013. Parsing with compositional vector grammars. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 455–465. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Weiwei Sun and Jia Xu. 2011. Enhancing chinese word segmentation using unlabeled data. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 970–979. Xu Sun, Yaozhong Zhang, Takuya Matsuzaki, Yoshimasa Tsuruoka, and Jun’ichi Tsujii. 2009. A discriminative latent variable chinese segmenter with hybrid word/character information. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 56–64. Xu Sun, Houfeng Wang, and Wenjie Li. 2012. Fast online training with frequency-adaptive learning rates for chinese word segmentation and new word detection. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 253–262. Martin Sundermeyer, Ralf Schl¨uter, and Hermann Ney. 2012. Lstm neural networks for language modeling. In 13th Annual Conference of the International Speech Communication Association. 
Ben Taskar, Vassil Chatalbashev, Daphne Koller, and Carlos Guestrin. 2005. Learning structured prediction models: A large margin approach. In Proceedings of the 22nd international conference on Machine learning, pages 896–903. Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning. 2005. A conditional random field word segmenter for sighan bakeoff 2005. In Proceedings of the fourth SIGHAN workshop on Chinese language Processing, volume 171. Peilu Wang, Yao Qian, Hai Zhao, Frank K. Soong, Lei He, and Ke Wu. 2016. Learning distributed word representations for bidirectional lstm recurrent neural network. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Nianwen Xue. 2003. Chinese word segmentation as character tagging. Computational Linguistics and Chinese Language Processing, 8(1):29–48. Xiaodong Zeng, Derek F. Wong, Lidia S. Chao, and Isabel Trancoso. 2013. Graph-based semi-supervised model for joint chinese word segmentation and partof-speech tagging. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 770–779. Yue Zhang and Stephen Clark. 2007. Chinese segmentation with a word-based perceptron algorithm. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 840– 847. Kaixu Zhang, Maosong Sun, and Changle Zhou. 2012. Word segmentation on chinese mirco-blog data with a linear-time incremental model. In Second CIPSSIGHAN Joint Conference on Chinese Language Processing, pages 41–46. Longkai Zhang, Houfeng Wang, Xu Sun, and Mairgup Mansur. 2013. Exploring representations from unlabeled data with co-training for Chinese word segmentation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 311–321. Meishan Zhang, Yue Zhang, and Guohong Fu. 2016a. Transition-based neural word segmentation. In Proceedings of the 54nd Annual Meeting of the Association for Computational Linguistics. Zhiong Zhang, Hai Zhao, and Lianhui Qin. 2016b. Probabilistic graph-based dependency parsing with convolutional neural network. In Proceedings of the 54nd Annual Meeting of the Association for Computational Linguistics. Hai Zhao and Chunyu Kit. 2007. Incorporating global information into supervised learning for chinese word segmentation. In Proceedings of the 10th Conference of the Pacific Association for Computational Linguistics, pages 66–74. Hai Zhao and Chunyu Kit. 2008a. Exploiting unlabeled text with different unsupervised segmentation criteria for chinese word segmentation. Research in Computing Science, 33:93–104. 419 Hai Zhao and Chunyu Kit. 2008b. Unsupervised segmentation helps supervised learning of character tagging for word segmentation and named entity recognition. In Proceedings of the Third International Joint Conference on Natural Language Processing, pages 106–111. Hai Zhao and Chunyu Kit. 2011. Integrating unsupervised and supervised word segmentation: The role of goodness measures. Information Sciences, 181(1):163–183. Hai Zhao, Chang-Ning Huang, and Mu Li. 2006a. An improved chinese word segmentation system with conditional random field. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, volume 1082117. Hai Zhao, Chang-Ning Huang, Mu Li, and Bao-Liang Lu. 2006b. Effective tag set selection in chinese word segmentation via conditional random field modeling. 
In Proceedings of the 9th Pacific Association for Computational Linguistics, volume 20, pages 87–94. Hai Zhao, Chang-Ning Huang, Mu Li, and Bao-Liang Lu. 2010. A unified character-based tagging framework for chinese word segmentation. ACM Transactions on Asian Language Information Processing, 9(2):5. Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep learning for Chinese word segmentation and POS tagging. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 647–657. 420
2016
39
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 33–43, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Language to Logical Form with Neural Attention Li Dong and Mirella Lapata Institute for Language, Cognition and Computation School of Informatics, University of Edinburgh 10 Crichton Street, Edinburgh EH8 9AB [email protected], [email protected] Abstract Semantic parsing aims at mapping natural language to machine interpretable meaning representations. Traditional approaches rely on high-quality lexicons, manually-built templates, and linguistic features which are either domainor representation-specific. In this paper we present a general method based on an attention-enhanced encoder-decoder model. We encode input utterances into vector representations, and generate their logical forms by conditioning the output sequences or trees on the encoding vectors. Experimental results on four datasets show that our approach performs competitively without using hand-engineered features and is easy to adapt across domains and meaning representations. 1 Introduction Semantic parsing is the task of translating text to a formal meaning representation such as logical forms or structured queries. There has recently been a surge of interest in developing machine learning methods for semantic parsing (see the references in Section 2), due in part to the existence of corpora containing utterances annotated with formal meaning representations. Figure 1 shows an example of a question (left handside) and its annotated logical form (right handside), taken from JOBS (Tang and Mooney, 2001), a well-known semantic parsing benchmark. In order to predict the correct logical form for a given utterance, most previous systems rely on predefined templates and manually designed features, which often render the parsing model domain- or representation-specific. In this work, we aim to use a simple yet effective method to bridge the gap between natural language and logical form with minimal domain knowledge. Sequence Encoder Sequence/Tree Decoder LSTM answer(J,(compa ny(J,'microsoft'),j ob(J),not((req_de g(J,'bscs'))))) Attention Layer LSTM what microsoft jobs do not require a bscs? Input Utterance Logical Form Figure 1: Input utterances and their logical forms are encoded and decoded with neural networks. An attention layer is used to learn soft alignments. Encoder-decoder architectures based on recurrent neural networks have been successfully applied to a variety of NLP tasks ranging from syntactic parsing (Vinyals et al., 2015a), to machine translation (Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014), and image description generation (Karpathy and FeiFei, 2015; Vinyals et al., 2015b). As shown in Figure 1, we adapt the general encoder-decoder paradigm to the semantic parsing task. Our model learns from natural language descriptions paired with meaning representations; it encodes sentences and decodes logical forms using recurrent neural networks with long short-term memory (LSTM) units. We present two model variants, the first one treats semantic parsing as a vanilla sequence transduction task, whereas our second model is equipped with a hierarchical tree decoder which explicitly captures the compositional structure of logical forms. 
We also introduce an attention mechanism (Bahdanau et al., 2015; Luong et al., 2015b) allowing the model to learn soft alignments between natural language and logical forms and present an argument identification step to handle rare mentions of entities and numbers. Evaluation results demonstrate that compared to previous methods our model achieves similar or better performance across datasets and meaning representations, despite using no hand-engineered domain- or representation-specific features. 33 2 Related Work Our work synthesizes two strands of research, namely semantic parsing and the encoder-decoder architecture with neural networks. The problem of learning semantic parsers has received significant attention, dating back to Woods (1973). Many approaches learn from sentences paired with logical forms following various modeling strategies. Examples include the use of parsing models (Miller et al., 1996; Ge and Mooney, 2005; Lu et al., 2008; Zhao and Huang, 2015), inductive logic programming (Zelle and Mooney, 1996; Tang and Mooney, 2000; Thomspon and Mooney, 2003), probabilistic automata (He and Young, 2006), string/tree-to-tree transformation rules (Kate et al., 2005), classifiers based on string kernels (Kate and Mooney, 2006), machine translation (Wong and Mooney, 2006; Wong and Mooney, 2007; Andreas et al., 2013), and combinatory categorial grammar induction techniques (Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007; Kwiatkowski et al., 2010; Kwiatkowski et al., 2011). Other work learns semantic parsers without relying on logicalfrom annotations, e.g., from sentences paired with conversational logs (Artzi and Zettlemoyer, 2011), system demonstrations (Chen and Mooney, 2011; Goldwasser and Roth, 2011; Artzi and Zettlemoyer, 2013), question-answer pairs (Clarke et al., 2010; Liang et al., 2013), and distant supervision (Krishnamurthy and Mitchell, 2012; Cai and Yates, 2013; Reddy et al., 2014). Our model learns from natural language descriptions paired with meaning representations. Most previous systems rely on high-quality lexicons, manually-built templates, and features which are either domain- or representationspecific. We instead present a general method that can be easily adapted to different domains and meaning representations. We adopt the general encoder-decoder framework based on neural networks which has been recently repurposed for various NLP tasks such as syntactic parsing (Vinyals et al., 2015a), machine translation (Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014), image description generation (Karpathy and Fei-Fei, 2015; Vinyals et al., 2015b), question answering (Hermann et al., 2015), and summarization (Rush et al., 2015). Mei et al. (2016) use a sequence-to-sequence model to map navigational instructions to actions. Our model works on more well-defined meaning representations (such as Prolog and lambda calculus) and is conceptually simpler; it does not employ bidirectionality or multi-level alignments. Grefenstette et al. (2014) propose a different architecture for semantic parsing based on the combination of two neural network models. The first model learns shared representations from pairs of questions and their translations into knowledge base queries, whereas the second model generates the queries conditioned on the learned representations. However, they do not report empirical evaluation results. 
3 Problem Formulation Our aim is to learn a model which maps natural language input q = x1 · · · x|q| to a logical form representation of its meaning a = y1 · · · y|a|. The conditional probability p (a|q) is decomposed as: p (a|q) = |a| Y t=1 p (yt|y<t, q) (1) where y<t = y1 · · · yt−1. Our method consists of an encoder which encodes natural language input q into a vector representation and a decoder which learns to generate y1, · · · , y|a| conditioned on the encoding vector. In the following we describe two models varying in the way in which p (a|q) is computed. 3.1 Sequence-to-Sequence Model This model regards both input q and output a as sequences. As shown in Figure 2, the encoder and decoder are two different L-layer recurrent neural networks with long short-term memory (LSTM) units which recursively process tokens one by one. The first |q| time steps belong to the encoder, while the following |a| time steps belong to the decoder. Let hl t ∈Rn denote the hidden vector at time step t and layer l. hl t is then computed by: hl t = LSTM  hl t−1, hl−1 t  (2) where LSTM refers to the LSTM function being used. In our experiments we follow the architecture described in Zaremba et al. (2015), however other types of gated activation functions are possible (e.g., Cho et al. (2014)). For the encoder, h0 t = Wqe(xt) is the word vector of the current input token, with Wq ∈Rn×|Vq| being a parameter matrix, and e(·) the index of the corresponding 34 LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM Figure 2: Sequence-to-sequence (SEQ2SEQ) model with two-layer recurrent neural networks. token. For the decoder, h0 t = Wae(yt−1) is the word vector of the previous predicted word, where Wa ∈Rn×|Va|. Notice that the encoder and decoder have different LSTM parameters. Once the tokens of the input sequence x1, · · · , x|q| are encoded into vectors, they are used to initialize the hidden states of the first time step in the decoder. Next, the hidden vector of the topmost LSTM hL t in the decoder is used to predict the t-th output token as: p (yt|y<t, q) = softmax WohL t ⊺e (yt) (3) where Wo ∈R|Va|×n is a parameter matrix, and e (yt) ∈{0, 1}|Va| a one-hot vector for computing yt’s probability from the predicted distribution. We augment every sequence with a “start-ofsequence” <s> and “end-of-sequence” </s> token. The generation process terminates once </s> is predicted. The conditional probability of generating the whole sequence p (a|q) is then obtained using Equation (1). 3.2 Sequence-to-Tree Model The SEQ2SEQ model has a potential drawback in that it ignores the hierarchical structure of logical forms. As a result, it needs to memorize various pieces of auxiliary information (e.g., bracket pairs) to generate well-formed output. In the following we present a hierarchical tree decoder which is more faithful to the compositional nature of meaning representations. A schematic description of the model is shown in Figure 3. The present model shares the same encoder with the sequence-to-sequence model described in Section 3.1 (essentially it learns to encode input q as vectors). However, its decoder is fundamentally different as it generates logical forms in a topdown manner. 
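Before turning to the tree decoder, it may help to see how compact the SEQ2SEQ variant is in code. The sketch below is a minimal re-creation under arbitrary dimensions, written with PyTorch purely for illustration (the paper does not prescribe a framework); it omits the attention layer, dropout, and the input reversal mentioned later, and uses teacher forcing with token id 0 standing in for <s>.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal SEQ2SEQ sketch: the encoder LSTM's final states initialise the decoder."""
    def __init__(self, src_vocab, tgt_vocab, dim=200, layers=1):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, dim)   # W_q
        self.tgt_embed = nn.Embedding(tgt_vocab, dim)   # W_a
        self.encoder = nn.LSTM(dim, dim, num_layers=layers, batch_first=True)
        self.decoder = nn.LSTM(dim, dim, num_layers=layers, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)            # W_o in Equation (3)

    def forward(self, src_ids, tgt_in):
        """Return log p(y_t | y_<t, q); tgt_in is the gold output prefixed with <s>."""
        _, state = self.encoder(self.src_embed(src_ids))          # encode q
        dec_h, _ = self.decoder(self.tgt_embed(tgt_in), state)    # condition on q
        return torch.log_softmax(self.out(dec_h), dim=-1)

# Teacher-forced training step on one random "utterance"/"logical form" pair.
model = Seq2Seq(src_vocab=100, tgt_vocab=50)
src = torch.randint(0, 100, (1, 5))                    # 5 input tokens
gold = torch.randint(1, 50, (1, 7))                    # 7 logical-form tokens
tgt_in = torch.cat([torch.zeros(1, 1, dtype=torch.long), gold[:, :-1]], dim=1)
log_p = model(src, tgt_in)                             # shape (1, 7, 50)
loss = nn.NLLLoss()(log_p.reshape(-1, 50), gold.reshape(-1))   # negative log p(a|q)
```

At test time the same network is unrolled greedily, feeding each predicted token back in as the next decoder input until </s> is emitted.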
In order to represent tree structure, LSTM LSTM LSTM LSTM lambda $0 e and <n> LSTM LSTM LSTM LSTM LSTM LSTM <n> <n> </s> LSTM </s> from LSTM LSTM LSTM LSTM $0 dallas:ci </s> > LSTM LSTM LSTM LSTM <n> 1600:ti </s> LSTM LSTM departure _time $0 LSTM </s> Parent feeding Start decoding LSTM Encoder unit LSTM Decoder unit <n> Nonterminal Figure 3: Sequence-to-tree (SEQ2TREE) model with a hierarchical tree decoder. we define a “nonterminal” <n> token which indicates subtrees. As shown in Figure 3, we preprocess the logical form “lambda $0 e (and (>(departure time $0) 1600:ti) (from $0 dallas:ci))” to a tree by replacing tokens between pairs of brackets with nonterminals. Special tokens <s> and <(> denote the beginning of a sequence and nonterminal sequence, respectively (omitted from Figure 3 due to lack of space). Token </s> represents the end of sequence. After encoding input q, the hierarchical tree decoder uses recurrent neural networks to generate tokens at depth 1 of the subtree corresponding to parts of logical form a. If the predicted token is <n>, we decode the sequence by conditioning on the nonterminal’s hidden vector. This process terminates when no more nonterminals are emitted. In other words, a sequence decoder is used to hierarchically generate the tree structure. In contrast to the sequence decoder described in Section 3.1, the current hidden state does not only depend on its previous time step. In order to better utilize the parent nonterminal’s information, we introduce a parent-feeding connection where the hidden vector of the parent nonterminal is concatenated with the inputs and fed into LSTM. As an example, Figure 4 shows the decoding tree corresponding to the logical form “A B (C)”, where y1 · · · y6 are predicted tokens, and t1 · · · t6 denote different time steps. Span “(C)” corresponds to a subtree. Decoding in this example has two steps: once input q has been encoded, we first generate y1 · · · y4 at depth 1 until token </s> is 35 t1 t2 t3 t4 t5 t6 y1=A y3=<n> <s> q y6=</s> <(> y4=</s> y2=B y5=C Figure 4: A SEQ2TREE decoding example for the logical form “A B (C)”. predicted; next, we generate y5, y6 by conditioning on nonterminal t3’s hidden vectors. The probability p (a|q) is the product of these two sequence decoding steps: p (a|q) = p (y1y2y3y4|q) p (y5y6|y≤3, q) (4) where Equation (3) is used for the prediction of each output token. 3.3 Attention Mechanism As shown in Equation (3), the hidden vectors of the input sequence are not directly used in the decoding process. However, it makes intuitively sense to consider relevant information from the input to better predict the current token. Following this idea, various techniques have been proposed to integrate encoder-side information (in the form of a context vector) at each time step of the decoder (Bahdanau et al., 2015; Luong et al., 2015b; Xu et al., 2015). As shown in Figure 5, in order to find relevant encoder-side context for the current hidden state hL t of decoder, we compute its attention score with the k-th hidden state in the encoder as: st k = exp{hL k · hL t } P|q| j=1 exp{hL j · hL t } (5) where hL 1 , · · · , hL |q| are the top-layer hidden vectors of the encoder. 
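One way to realise this attention layer, mirroring Equations (5)-(7), is sketched below, again in PyTorch purely for illustration; apart from W1 and W2, the names and dimensions are arbitrary. The resulting vector feeds the output softmax given next.

```python
import torch
import torch.nn as nn

class DotAttention(nn.Module):
    """Dot-product attention over encoder states, as in Equations (5)-(7)."""
    def __init__(self, dim):
        super().__init__()
        self.W1 = nn.Linear(dim, dim, bias=False)
        self.W2 = nn.Linear(dim, dim, bias=False)

    def forward(self, dec_h, enc_h):
        # dec_h: (batch, dim) current decoder state h^L_t; enc_h: (batch, |q|, dim)
        scores = torch.softmax(torch.bmm(enc_h, dec_h.unsqueeze(2)).squeeze(2), dim=1)  # Eq. (5)
        context = torch.bmm(scores.unsqueeze(1), enc_h).squeeze(1)                      # Eq. (6)
        h_att = torch.tanh(self.W1(dec_h) + self.W2(context))                           # Eq. (7)
        return h_att, scores

# Shape check: one utterance with 5 encoder states, hidden size 200.
attention = DotAttention(200)
h_att, scores = attention(torch.randn(1, 200), torch.randn(1, 5, 200))
assert h_att.shape == (1, 200) and torch.allclose(scores.sum(dim=1), torch.ones(1))
```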
Then, the context vector is the weighted sum of the hidden vectors in the encoder: ct = |q| X k=1 st khL k (6) In lieu of Equation (3), we further use this context vector which acts as a summary of the encoder to compute the probability of generating yt as: hatt t = tanh W1hL t + W2ct (7) LSTM LSTM LSTM LSTM LSTM Attention Scores Figure 5: Attention scores are computed by the current hidden vector and all the hidden vectors of encoder. Then, the encoder-side context vector ct is obtained in the form of a weighted sum, which is further used to predict yt. p (yt|y<t, q) = softmax Wohatt t ⊺e (yt) (8) where Wo ∈R|Va|×n and W1, W2 ∈Rn×n are three parameter matrices, and e (yt) is a one-hot vector used to obtain yt’s probability. 3.4 Model Training Our goal is to maximize the likelihood of the generated logical forms given natural language utterances as input. So the objective function is: minimize − X (q,a)∈D log p (a|q) (9) where D is the set of all natural language-logical form training pairs, and p (a|q) is computed as shown in Equation (1). The RMSProp algorithm (Tieleman and Hinton, 2012) is employed to solve this non-convex optimization problem. Moreover, dropout is used for regularizing the model (Zaremba et al., 2015). Specifically, dropout operators are used between different LSTM layers and for the hidden layers before the softmax classifiers. This technique can substantially reduce overfitting, especially on datasets of small size. 3.5 Inference At test time, we predict the logical form for an input utterance q by: ˆa = arg max a′ p a′|q  (10) where a′ represents a candidate output. However, it is impractical to iterate over all possible results to obtain the optimal prediction. According to Equation (1), we decompose the probability p (a|q) so that we can use greedy search (or beam search) to generate tokens one by one. 36 Algorithm 1 describes the decoding process for SEQ2TREE. The time complexity of both decoders is O(|a|), where |a| is the length of output. The extra computation of SEQ2TREE compared with SEQ2SEQ is to maintain the nonterminal queue, which can be ignored because most of time is spent on matrix operations. We implement the hierarchical tree decoder in a batch mode, so that it can fully utilize GPUs. Specifically, as shown in Algorithm 1, every time we pop multiple nonterminals from the queue and decode these nonterminals in one batch. 3.6 Argument Identification The majority of semantic parsing datasets have been developed with question-answering in mind. In the typical application setting, natural language questions are mapped into logical forms and executed on a knowledge base to obtain an answer. Due to the nature of the question-answering task, many natural language utterances contain entities or numbers that are often parsed as arguments in the logical form. Some of them are unavoidably rare or do not appear in the training set at all (this is especially true for small-scale datasets). Conventional sequence encoders simply replace rare words with a special unknown word symbol (Luong et al., 2015a; Jean et al., 2015), which would be detrimental for semantic parsing. We have developed a simple procedure for argument identification. Specifically, we identify entities and numbers in input questions and replace them with their type names and unique IDs. 
For instance, we pre-process the training example “jobs with a salary of 40000” and its logical form “job(ANS), salary greater than(ANS, 40000, year)” as “jobs with a salary of num0” and “job(ANS), salary greater than(ANS, num0, year)”. We use the pre-processed examples as training data. At inference time, we also mask entities and numbers with their types and IDs. Once we obtain the decoding result, a post-processing step recovers all the markers typei to their corresponding logical constants. 4 Experiments We compare our method against multiple previous systems on four datasets. We describe these datasets below, and present our experimental settings and results. Finally, we conduct model analysis in order to understand what the model learns. The code is available at https://github. com/donglixp/lang2logic. 4.1 Datasets Our model was trained on the following datasets, covering different domains and using different meaning representations. Examples for each domain are shown in Table 1. JOBS This benchmark dataset contains 640 queries to a database of job listings. Specifically, questions are paired with Prolog-style queries. We used the same training-test split as Zettlemoyer and Collins (2005) which contains 500 training and 140 test instances. Values for the variables company, degree, language, platform, location, job area, and number are identified. GEO This is a standard semantic parsing benchmark which contains 880 queries to a database of U.S. geography. GEO has 880 instances split into a training set of 680 training examples and 200 test examples (Zettlemoyer and Collins, 2005). We used the same meaning representation based on lambda-calculus as Kwiatkowski et al. (2011). Values for the variables city, state, country, river, and number are identified. ATIS This dataset has 5, 410 queries to a flight booking system. The standard split has 4, 480 training instances, 480 development instances, and 450 test instances. Sentences are paired with lambda-calculus expressions. Values for the variables date, time, city, aircraft code, airport, airline, and number are identified. IFTTT Quirk et al. (2015) created this dataset by extracting a large number of if-this-then-that 37 Dataset Length Example JOBS 9.80 22.90 what microsoft jobs do not require a bscs? answer(company(J,’microsoft’),job(J),not((req deg(J,’bscs’)))) GEO 7.60 19.10 what is the population of the state with the largest area? (population:i (argmax $0 (state:t $0) (area:i $0))) ATIS 11.10 28.10 dallas to san francisco leaving after 4 in the afternoon please (lambda $0 e (and (>(departure time $0) 1600:ti) (from $0 dallas:ci) (to $0 san francisco:ci))) IFTTT 6.95 21.80 Turn on heater when temperature drops below 58 degree TRIGGER: Weather - Current temperature drops below - ((Temperature (58)) (Degrees in (f))) ACTION: WeMo Insight Switch - Turn on - ((Which switch? (””))) Table 1: Examples of natural language descriptions and their meaning representations from four datasets. The average length of input and output sequences is shown in the second column. recipes from the IFTTT website1. Recipes are simple programs with exactly one trigger and one action which users specify on the site. Whenever the conditions of the trigger are satisfied, the action is performed. Actions typically revolve around home security (e.g., “turn on my lights when I arrive home”), automation (e.g., “text me if the door opens”), well-being (e.g., “remind me to drink water if I’ve been at a bar for more than two hours”), and so on. 
Triggers and actions are selected from different channels (160 in total) representing various types of services, devices (e.g., Android), and knowledge sources (such as ESPN or Gmail). In the dataset, there are 552 trigger functions from 128 channels, and 229 action functions from 99 channels. We used Quirk et al.’s (2015) original split which contains 77, 495 training, 5, 171 development, and 4, 294 test examples. The IFTTT programs are represented as abstract syntax trees and are paired with natural language descriptions provided by users (see Table 1). Here, numbers and URLs are identified. 4.2 Settings Natural language sentences were lowercased; misspellings were corrected using a dictionary based on the Wikipedia list of common misspellings. Words were stemmed using NLTK (Bird et al., 2009). For IFTTT, we filtered tokens, channels and functions which appeared less than five times in the training set. For the other datasets, we filtered input words which did not occur at least two times in the training set, but kept all tokens in the logical forms. Plain string matching was employed to identify augments as described in Section 3.6. More sophisticated approaches could be used, however we leave this future work. Model hyper-parameters were cross-validated 1http://www.ifttt.com Method Accuracy COCKTAIL (Tang and Mooney, 2001) 79.4 PRECISE (Popescu et al., 2003) 88.0 ZC05 (Zettlemoyer and Collins, 2005) 79.3 DCS+L (Liang et al., 2013) 90.7 TISP (Zhao and Huang, 2015) 85.0 SEQ2SEQ 87.1 −attention 77.9 −argument 70.7 SEQ2TREE 90.0 −attention 83.6 Table 2: Evaluation results on JOBS. on the training set for JOBS and GEO. We used the standard development sets for ATIS and IFTTT. We used the RMSProp algorithm (with batch size set to 20) to update the parameters. The smoothing constant of RMSProp was 0.95. Gradients were clipped at 5 to alleviate the exploding gradient problem (Pascanu et al., 2013). Parameters were randomly initialized from a uniform distribution U (−0.08, 0.08). A two-layer LSTM was used for IFTTT, while a one-layer LSTM was employed for the other domains. The dropout rate was selected from {0.2, 0.3, 0.4, 0.5}. Dimensions of hidden vector and word embedding were selected from {150, 200, 250}. Early stopping was used to determine the number of epochs. Input sentences were reversed before feeding into the encoder (Sutskever et al., 2014). We use greedy search to generate logical forms during inference. Notice that two decoders with shared word embeddings were used to predict triggers and actions for IFTTT, and two softmax classifiers are used to classify channels and functions. 4.3 Results We first discuss the performance of our model on JOBS, GEO, and ATIS, and then examine our results on IFTTT. Tables 2–4 present comparisons against a variety of systems previously described 38 Method Accuracy SCISSOR (Ge and Mooney, 2005) 72.3 KRISP (Kate and Mooney, 2006) 71.7 WASP (Wong and Mooney, 2006) 74.8 λ-WASP (Wong and Mooney, 2007) 86.6 LNLZ08 (Lu et al., 2008) 81.8 ZC05 (Zettlemoyer and Collins, 2005) 79.3 ZC07 (Zettlemoyer and Collins, 2007) 86.1 UBL (Kwiatkowski et al., 2010) 87.9 FUBL (Kwiatkowski et al., 2011) 88.6 KCAZ13 (Kwiatkowski et al., 2013) 89.0 DCS+L (Liang et al., 2013) 87.9 TISP (Zhao and Huang, 2015) 88.9 SEQ2SEQ 84.6 −attention 72.9 −argument 68.6 SEQ2TREE 87.1 −attention 76.8 Table 3: Evaluation results on GEO. 10-fold crossvalidation is used for the systems shown in the top half of the table. The standard split of ZC05 is used for all other systems. 
Method Accuracy ZC07 (Zettlemoyer and Collins, 2007) 84.6 UBL (Kwiatkowski et al., 2010) 71.4 FUBL (Kwiatkowski et al., 2011) 82.8 GUSP-FULL (Poon, 2013) 74.8 GUSP++ (Poon, 2013) 83.5 TISP (Zhao and Huang, 2015) 84.2 SEQ2SEQ 84.2 −attention 75.7 −argument 72.3 SEQ2TREE 84.6 −attention 77.5 Table 4: Evaluation results on ATIS. in the literature. We report results with the full models (SEQ2SEQ, SEQ2TREE) and two ablation variants, i.e., without an attention mechanism (−attention) and without argument identification (−argument). We report accuracy which is defined as the proportion of the input sentences that are correctly parsed to their gold standard logical forms. Notice that DCS+L, KCAZ13 and GUSP output answers directly, so accuracy in this setting is defined as the percentage of correct answers. Overall, SEQ2TREE is superior to SEQ2SEQ. This is to be expected since SEQ2TREE explicitly models compositional structure. On the JOBS and GEO datasets which contain logical forms with nested structures, SEQ2TREE outperforms SEQ2SEQ by 2.9% and 2.5%, respectively. SEQ2TREE achieves better accuracy over SEQ2SEQ on ATIS too, however, the difference is smaller, since ATIS is a simpler domain without complex nested structures. We find that adding atMethod Channel +Func F1 retrieval 28.9 20.2 41.7 phrasal 19.3 11.3 35.3 sync 18.1 10.6 35.1 classifier 48.8 35.2 48.4 posclass 50.0 36.9 49.3 SEQ2SEQ 54.3 39.2 50.1 −attention 54.0 37.9 49.8 −argument 53.9 38.6 49.7 SEQ2TREE 55.2 40.1 50.4 −attention 54.3 38.2 50.0 (a) Omit non-English. Method Channel +Func F1 retrieval 36.8 25.4 49.0 phrasal 27.8 16.4 39.9 sync 26.7 15.5 37.6 classifier 64.8 47.2 56.5 posclass 67.2 50.4 57.7 SEQ2SEQ 68.8 50.5 60.3 −attention 68.7 48.9 59.5 −argument 68.8 50.4 59.7 SEQ2TREE 69.6 51.4 60.4 −attention 68.7 49.5 60.2 (b) Omit non-English & unintelligible. Method Channel +Func F1 retrieval 43.3 32.3 56.2 phrasal 37.2 23.5 45.5 sync 36.5 24.1 42.8 classifier 79.3 66.2 65.0 posclass 81.4 71.0 66.5 SEQ2SEQ 87.8 75.2 73.7 −attention 88.3 73.8 72.9 −argument 86.8 74.9 70.8 SEQ2TREE 89.7 78.4 74.2 −attention 87.6 74.9 73.5 (c) ≥3 turkers agree with gold. Table 5: Evaluation results on IFTTT. tention substantially improves performance on all three datasets. This underlines the importance of utilizing soft alignments between inputs and outputs. We further analyze what the attention layer learns in Figure 6. Moreover, our results show that argument identification is critical for smallscale datasets. For example, about 92% of city names appear less than 4 times in the GEO training set, so it is difficult to learn reliable parameters for these words. In relation to previous work, the proposed models achieve comparable or better performance. Importantly, we use the same framework (SEQ2SEQ or SEQ2TREE) across datasets and meaning representations (Prolog-style logical forms in JOBS and lambda calculus in the other two datasets) without modification. Despite this relatively simple approach, we observe that SEQ2TREE ranks second on JOBS, and is tied for first place with ZC07 on ATIS. 
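Since accuracy on JOBS, GEO, and ATIS is exact match against the gold logical form, the evaluation itself is straightforward; the sketch below assumes both forms are available as canonicalised strings and ignores the normalisation of variable names that a real comparison would need.

```python
def exact_match_accuracy(predictions, golds):
    """Proportion of input sentences whose predicted logical form equals the gold one."""
    assert len(predictions) == len(golds)
    correct = sum(p.strip() == g.strip() for p, g in zip(predictions, golds))
    return correct / len(golds)

# Toy usage with made-up outputs (not real system predictions):
preds = ["job(ANS), salary_greater_than(ANS, num0, year)", "(population:i co0)"]
golds = ["job(ANS), salary_greater_than(ANS, num0, year)", "(area:i co0)"]
print(exact_match_accuracy(preds, golds))  # 0.5
```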
39 </s> degid0 a requir not do that num0 pay job which <s> job ( ANS ) , salary_greater_than ( ANS , num0 , year ) , \+ ( ( req_deg ( ANS , degid0 ) ) ) </s> (a) which jobs pay num0 that do not require a degid0 </s> ci1 to ci0 from trip round fare class first what <s> ( lambda $0e( exists $1( and( round_trip $1)( class_type $1 first:cl)( from $1 ci0)( to $1 ci1)( =( fare $1) $0)))) </s> (b) what’s first class fare round trip from ci0 to ci1 </s> tomorrow ci1 to ci0 from flight earliest the is what <s> argmin $0 ( and ( flight $0 ) ( from $0 ci0 ) ( to $0 ci1 ) ( tomorrow $0 ) ) ( departure_time $0 ) </s> (c) what is the earliest flight from ci0 to ci1 tomorrow </s> co0 the in elev highest the is what <s> argmax $0 ( and ( place:t $0 ) ( loc:t $0 co0 ) ) ( elevation:i $0 ) </s> (d) what is the highest elevation in the co0 Figure 6: Alignments (same color rectangles) produced by the attention mechanism (darker color represents higher attention score). Input sentences are reversed and stemmed. Model output is shown for SEQ2SEQ (a, b) and SEQ2TREE (c, d). We illustrate examples of alignments produced by SEQ2SEQ in Figures 6a and 6b. Alignments produced by SEQ2TREE are shown in Figures 6c and 6d. Matrices of attention scores are computed using Equation (5) and are represented in grayscale. Aligned input words and logical form predicates are enclosed in (same color) rectangles whose overlapping areas contain the attention scores. Also notice that attention scores are computed by LSTM hidden vectors which encode context information rather than just the words in their current positions. The examples demonstrate that the attention mechanism can successfully model the correspondence between sentences and logical forms, capturing reordering (Figure 6b), manyto-many (Figure 6a), and many-to-one alignments (Figures 6c,d). For IFTTT, we follow the same evaluation protocol introduced in Quirk et al. (2015). The dataset is extremely noisy and measuring accuracy is problematic since predicted abstract syntax trees (ASTs) almost never exactly match the gold standard. Quirk et al. view an AST as a set of productions and compute balanced F1 instead which we also adopt. The first column in Table 5 shows the percentage of channels selected correctly for both triggers and actions. The second column measures accuracy for both channels and functions. The last column shows balanced F1 against the gold tree over all productions in the proposed derivation. We compare our model against posclass, the method introduced in Quirk et al. and several of their baselines. posclass is reminiscent of KRISP (Kate and Mooney, 2006), it learns distributions over productions given input sentences represented as a bag of linguistic features. The retrieval baseline finds the closest description in the training data based on character string-edit-distance and returns the recipe for that training program. The phrasal method uses phrase-based machine translation to generate the recipe, whereas sync extracts synchronous grammar rules from the data, essentially recreating WASP (Wong and Mooney, 2006). Finally, they use a binary classifier to predict whether a production should be present in the derivation tree corresponding to the description. Quirk et al. (2015) report results on the full test data and smaller subsets after noise filtering, e.g., when non-English and unintelligible descriptions are removed (Tables 5a and 5b). 
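To make the IFTTT metric concrete, the short sketch below computes balanced F1 between the multisets of productions in a predicted abstract syntax tree and the gold tree. Encoding each production as a string such as 'TRIGGER -> Weather' is an illustrative choice, not the representation used by Quirk et al. (2015).

```python
from collections import Counter

def balanced_f1(pred_productions, gold_productions):
    """Balanced F1 over the productions of the predicted and gold ASTs."""
    pred, gold = Counter(pred_productions), Counter(gold_productions)
    overlap = sum((pred & gold).values())   # multiset intersection
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(gold.values())
    return 2 * precision * recall / (precision + recall)

pred = ["ROOT -> TRIGGER ACTION", "TRIGGER -> Weather", "ACTION -> WeMo Insight Switch"]
gold = ["ROOT -> TRIGGER ACTION", "TRIGGER -> Weather", "ACTION -> Android SMS"]
print(round(balanced_f1(pred, gold), 3))  # 0.667
```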
They also ran their system on a high-quality subset of description-program pairs which were found in the gold standard and at least three humans managed to independently reproduce (Table 5c). Across all subsets our models outperforms posclass and related baselines. Again we observe that SEQ2TREE consistently outperforms SEQ2SEQ, albeit with a small margin. Compared to the previous datasets, the attention mechanism and our argument iden40 tification method yield less of an improvement. This may be due to the size of Quirk et al. (2015) and the way it was created – user curated descriptions are often of low quality, and thus align very loosely to their corresponding ASTs. 4.4 Error Analysis Finally, we inspected the output of our model in order to identify the most common causes of errors which we summarize below. Under-Mapping The attention model used in our experiments does not take the alignment history into consideration. So, some question words, expecially in longer questions, may be ignored in the decoding process. This is a common problem for encoder-decoder models and can be addressed by explicitly modelling the decoding coverage of the source words (Tu et al., 2016; Cohn et al., 2016). Keeping track of the attention history would help adjust future attention and guide the decoder towards untranslated source words. Argument Identification Some mentions are incorrectly identified as arguments. For example, the word may is sometimes identified as a month when it is simply a modal verb. Moreover, some argument mentions are ambiguous. For instance, 6 o’clock can be used to express either 6 am or 6 pm. We could disambiguate arguments based on contextual information. The execution results of logical forms could also help prune unreasonable arguments. Rare Words Because the data size of JOBS, GEO, and ATIS is relatively small, some question words are rare in the training set, which makes it hard to estimate reliable parameters for them. One solution would be to learn word embeddings on unannotated text data, and then use these as pretrained vectors for question words. 5 Conclusions In this paper we presented an encoder-decoder neural network model for mapping natural language descriptions to their meaning representations. We encode natural language utterances into vectors and generate their corresponding logical forms as sequences or trees using recurrent neural networks with long short-term memory units. Experimental results show that enhancing the model with a hierarchical tree decoder and an attention mechanism improves performance across the board. Extensive comparisons with previous methods show that our approach performs competitively, without recourse to domain- or representation-specific features. Directions for future work are many and varied. For example, it would be interesting to learn a model from question-answer pairs without access to target logical forms. Beyond semantic parsing, we would also like to apply our SEQ2TREE model to related structured prediction tasks such as constituency parsing. Acknowledgments We would like to thank Luke Zettlemoyer and Tom Kwiatkowski for sharing the ATIS dataset. The support of the European Research Council under award number 681760 “Translating Multiple Modalities into Text” is gratefully acknowledged. References Jacob Andreas, Andreas Vlachos, and Stephen Clark. 2013. Semantic parsing as machine translation. In Proceedings of the 51st ACL, pages 47–52, Sofia, Bulgaria. Yoav Artzi and Luke Zettlemoyer. 2011. 
Bootstrapping semantic parsers from conversations. In Proceedings of the 2011 EMNLP, pages 421–432, Edinburgh, United Kingdom. Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. TACL, 1(1):49–62. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the ICLR, San Diego, California. Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. O’Reilly Media. Qingqing Cai and Alexander Yates. 2013. Semantic parsing freebase: Towards open-domain semantic parsing. In 2nd Joint Conference on Lexical and Computational Semantics, pages 328–338, Atlanta, Georgia. David L. Chen and Raymond J. Mooney. 2011. Learning to interpret natural language navigation instructions from observations. In Proceedings of the 15th AAAI, pages 859–865, San Francisco, California. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 EMNLP, pages 1724–1734, Doha, Qatar. 41 James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the world’s response. In Proceedings of CONLL, pages 18–27, Uppsala, Sweden. Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vymolova, Kaisheng Yao, Chris Dyer, and Gholamreza Haffari. 2016. Incorporating structural alignment biases into an attentional neural translation model. In Proceedings of the 2016 NAACL, San Diego, California. Ruifang Ge and Raymond J. Mooney. 2005. A statistical semantic parser that integrates syntax and semantics. In Proceedings of CoNLL, pages 9–16, Ann Arbor, Michigan. Dan Goldwasser and Dan Roth. 2011. Learning from natural instructions. In Proceedings of the 22nd IJCAI, pages 1794–1800, Barcelona, Spain. Edward Grefenstette, Phil Blunsom, Nando de Freitas, and Karl Moritz Hermann. 2014. A deep architecture for semantic parsing. In Proceedings of the ACL 2014 Workshop on Semantic Parsing, Atlanta, Georgia. Yulan He and Steve Young. 2006. Semantic processing using the hidden vector state model. Speech Communication, 48(3-4):262–275. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28, pages 1684– 1692. Curran Associates, Inc. S´ebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of 53rd ACL and 7th IJCNLP, pages 1– 10, Beijing, China. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 EMNLP, pages 1700–1709, Seattle, Washington. Andrej Karpathy and Li Fei-Fei. 2015. Deep visualsemantic alignments for generating image descriptions. In Proceedings of CVPR, pages 3128–3137, Boston, Massachusetts. Rohit J. Kate and Raymond J. Mooney. 2006. Using string-kernels for learning semantic parsers. In Proceedings of the 21st COLING and 44th ACL, pages 913–920, Sydney, Australia. Rohit J. Kate, Yuk Wah Wong, and Raymond J. Mooney. 2005. Learning to transform natural to formal languages. In Proceedings of the 20th AAAI, pages 1062–1068, Pittsburgh, Pennsylvania. Jayant Krishnamurthy and Tom Mitchell. 2012. 
Weakly supervised training of semantic parsers. In Proceedings of the 2012 EMNLP, pages 754–765, Jeju Island, Korea. Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification. In Proceedings of the 2010 EMNLP, pages 1223–1233, Cambridge, Massachusetts. Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2011. Lexical generalization in CCG grammar induction for semantic parsing. In Proceedings of the 2011 EMNLP, pages 1512–1523, Edinburgh, United Kingdom. Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the 2013 EMNLP, pages 1545–1556, Seattle, Washington. Percy Liang, Michael I. Jordan, and Dan Klein. 2013. Learning dependency-based compositional semantics. Computational Linguistics, 39(2):389–446. Wei Lu, Hwee Tou Ng, Wee Sun Lee, and Luke S. Zettlemoyer. 2008. A generative model for parsing natural language to meaning representations. In Proceedings of the 2008 EMNLP, pages 783–792, Honolulu, Hawaii. Minh-Thang Luong, Ilya Sutskever, Quoc V Le, Oriol Vinyals, and Wojciech Zaremba. 2015a. Addressing the rare word problem in neural machine translation. In Proceedings of the 53rd ACL and 7th IJCNLP, pages 11–19, Beijing, China. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015b. Effective approaches to attentionbased neural machine translation. In Proceedings of the 2015 EMNLP, pages 1412–1421, Lisbon, Portugal. Hongyuan Mei, Mohit Bansal, and Matthew R Walter. 2016. Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. In Proceedings of the 30th AAAI, Phoenix, Arizona. to appear. Scott Miller, David Stallard, Robert Bobrow, and Richard Schwartz. 1996. A fully statistical approach to natural language interfaces. In ACL, pages 55–61. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In Proceedings of the 30th ICML, pages 1310–1318, Atlanta, Georgia. Hoifung Poon. 2013. Grounded unsupervised semantic parsing. In Proceedings of the 51st ACL, pages 933–943, Sofia, Bulgaria. 42 Ana-Maria Popescu, Oren Etzioni, and Henry Kautz. 2003. Towards a theory of natural language interfaces to databases. In Proceedings of the 8th IUI, pages 149–157, Miami, Florida. Chris Quirk, Raymond Mooney, and Michel Galley. 2015. Language to code: Learning semantic parsers for if-this-then-that recipes. In Proceedings of 53rd ACL and 7th IJCNLP, pages 878–888, Beijing, China. Siva Reddy, Mirella Lapata, and Mark Steedman. 2014. Large-scale semantic parsing without question-answer pairs. TACL, 2(Oct):377–392. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 EMNLP, pages 379–389, Lisbon, Portugal. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc. Lappoon R. Tang and Raymond J. Mooney. 2000. Automated construction of database interfaces: Intergrating statistical and relational learning for semantic parsing. In Proceedings of the 2000 EMNLP, pages 133–141, Hong Kong, China. Lappoon R. Tang and Raymond J. Mooney. 2001. Using multiple clause constructors in inductive logic programming for semantic parsing. 
In Proceedings of the 12th ECML, pages 466–477, Freiburg, Germany. Cynthia A. Thomspon and Raymond J. Mooney. 2003. Acquiring word-meaning mappings for natural language interfaces. Journal of Artifical Intelligence Research, 18:1–44. T. Tieleman and G. Hinton. 2012. Lecture 6.5— RmsProp: Divide the gradient by a running average of its recent magnitude. Technical report. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of the 54th ACL, Berlin, Germany. Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015a. Grammar as a foreign language. In Advances in Neural Information Processing Systems 28, pages 2755– 2763. Curran Associates, Inc. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015b. Show and tell: A neural image caption generator. In Proceedings of CVPR, pages 3156–3164, Boston, Massachusetts. Yuk Wah Wong and Raymond J. Mooney. 2006. Learning for semantic parsing with statistical machine translation. In Proceedings of the 2006 NAACL, pages 439–446, New York, New York. Yuk Wah Wong and Raymond J. Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In Proceedings of the 45th ACL, pages 960–967, Prague, Czech Republic. W. A. Woods. 1973. Progress in natural language understanding: An application to lunar geology. In Proceedings of the June 4-8, 1973, National Computer Conference and Exposition, pages 441–450, New York, NY. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd ICML, pages 2048– 2057, Lille, France. Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2015. Recurrent neural network regularization. In Proceedings of the ICLR, San Diego, California. John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of the 19th AAAI, pages 1050–1055, Portland, Oregon. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of the 21st UAI, pages 658–666, Toronto, ON. Luke Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In In Proceedings of the EMNLPCoNLL, pages 678–687, Prague, Czech Republic. Kai Zhao and Liang Huang. 2015. Type-driven incremental semantic parsing with polymorphism. In Proceedings of the 2015 NAACL, pages 1416–1421, Denver, Colorado. 43
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 421–431, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Transition-Based Neural Word Segmentation Meishan Zhang1 and Yue Zhang2 and Guohong Fu1 1. School of Computer Science and Technology, Heilongjiang University, Harbin, China 2. Singapore University of Technology and Design [email protected], yue [email protected], [email protected] Abstract Character-based and word-based methods are two main types of statistical models for Chinese word segmentation, the former exploiting sequence labeling models over characters and the latter typically exploiting a transition-based model, with the advantages that word-level features can be easily utilized. Neural models have been exploited for character-based Chinese word segmentation, giving high accuracies by making use of external character embeddings, yet requiring less feature engineering. In this paper, we study a neural model for word-based Chinese word segmentation, by replacing the manuallydesigned discrete features with neural features in a word-based segmentation framework. Experimental results demonstrate that word features lead to comparable performances to the best systems in the literature, and a further combination of discrete and neural features gives top accuracies. 1 Introduction Statistical word segmentation methods can be categorized character-based (Xue, 2003; Tseng et al., 2005) and word-based (Andrew, 2006; Zhang and Clark, 2007) approaches. The former casts word segmentation as a sequence labeling problem, using segmentation tags on characters to mark their relative positions inside words. The latter, in contrast, ranks candidate segmented outputs directly, extracting both character and full-word features. An influential character-based word segmentation model (Peng et al., 2004; Tseng et al., 2005) uses B/I/E/S labels to mark a character as the beginning, internal (neither beginning nor end), end and only-character (both beginning and end) of a character-based word-based discrete Peng et al. (2004) Andrew (2006) Tseng et al. (2005) Zhang and Clark (2007) neural Zheng et al. (2013) this work Chen et al. (2015b) Figure 1: Word segmentation methods. word, respectively, employing conditional random field (CRF) to model the correspondence between the input character sequence and output label sequence. For each character, features are extracted from a five-character context window and a twolabel history window. Subsequent work explores different label sets (Zhao et al., 2006), feature sets (Shi and Wang, 2007) and semi-supervised learning (Sun and Xu, 2011), reporting state-of-the-art accuracies. Recently, neural network models have been investigated for the character tagging approach. The main idea is to replace manual discrete features with automatic real-valued features, which are derived automatically from distributed character representations using neural networks. In particular, convolution neural network1 (Zheng et al., 2013), tensor neural network (Pei et al., 2014), recursive neural network (Chen et al., 2015a) and longshort-term-memory (LSTM) (Chen et al., 2015b) have been used to derive neural feature representations from input word sequences, which are fed into a CRF inference layer. In this paper, we investigate the effectiveness of word embedding features for neural network segmentation using transition-based models. 
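As a concrete illustration of the B/I/E/S character-tagging scheme described above, the following sketch converts a segmented sentence into per-character labels; it is a generic rendering of the labelling convention rather than code from any of the cited taggers.

```python
def words_to_bies(words):
    """Map segmented words to B/I/E/S tags per character:
    B = beginning, I = internal, E = end, S = single-character word."""
    tagged = []
    for word in words:
        if len(word) == 1:
            tagged.append((word, "S"))
            continue
        for i, ch in enumerate(word):
            if i == 0:
                tagged.append((ch, "B"))
            elif i == len(word) - 1:
                tagged.append((ch, "E"))
            else:
                tagged.append((ch, "I"))
    return tagged

# The running example sentence used later in the paper.
print(words_to_bies(["中国", "外企", "业务", "发展", "迅速"]))
# [('中', 'B'), ('国', 'E'), ('外', 'B'), ('企', 'E'), ('业', 'B'), ('务', 'E'), ...]
```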
Since it is challenging to integrate word features to the CRF inference framework of the existing 1The term in this paper is used to denote the neural network structure with convolutional layers, which is different from the typical convolution neural network that has a pooling layer upon convolutional layers (Krizhevsky et al., 2012). 421 step action buffer(· · · w−1w0) queue(c0c1 · · · ) 0 φ 中国· · · 1 SEP 中 国外· · · 2 APP 中国 外企· · · 3 SEP 中国外 企业· · · 4 APP 中国外企 业务· · · 5 SEP 中国外企业 务发· · · 6 APP 中国外企业务 发展· · · 7 SEP · · · 业务发 展迅速 8 APP · · · 业务发展 迅速 9 SEP · · · 发展迅 速 10 APP · · · 发展迅速 φ Figure 2: Segmentation process of “中国(Chinese) 外企(foreign company) 业务(business) 发展(develop) 迅速(quickly)”. character-based methods, we take inspiration from word-based discrete segmentation instead. In particular, we follow Zhang and Clark (2007), using the transition-based framework to decode a sentence from left-to-right incrementally, scoring partially segmented results using both character-level and word-level features. Beam-search is applied to reduce error propagation and large-margin training with early-update (Collins and Roark, 2004) is used for learning from inexact search. We replace the discrete word and character features of Zhang and Clark (2007) with word and character embeddings, respectively, and change their linear model into a deep neural network. Following Zheng et al. (2013) and Chen et al. (2015b), we use convolution neural networks to achieve local feature combination and LSTM to learn global sentence-level features, respectively. The resulting model is a word-based neural segmenter that can leverage rich embedding features. Its correlation with existing work on Chinese segmentation is shown in Figure 1. Results on standard benchmark datasets show the effectiveness of word embedding features for neural segmentation. Our method achieves stateof-the-art results without any preprocess based on external knowledge such as Chinese idioms of Chen et al. (2015a) and Chen et al. (2015b). We release our code under GPL for research reference.2 2 Baseline Discrete Model We exploit the word-based segmentor of Zhang and Clark (2011) as the baseline system. It incrementally segments a sentence using a transition system, with a state holding a partially-segmented 2https://github.com/SUTDNLP/NNTransitionSegmentor. sentence in a buffer s and ordering the next incoming characters in a queue q. Given an input Chinese sentence, the buffer is initially empty and the queue contains all characters of the sentence, a sequence of transition actions are used to consume characters in the queue and build the output sentence in the buffer. The actions include: • Append (APP), which removes the first character from the queue, and appends it to the last word in the buffer; • Separate (SEP), which moves the first character of the queue onto the buffer as a new (sub) word. Given the input sequence of characters “中国 外企业务发展迅速” (The business of foreign company in China develops quickly), the correct output can be derived using action sequence “SEP APP SEP APP SEP APP SEP APP SEP APP”, as shown in Figure 2. Search. Based on the transition system, the decoder searches for an optimal action sequence for a given sentence. Denote an action sequence as A = a1 · · · an. 
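The APP/SEP transition system just described fits in a few lines of code; the sketch below replays the gold action sequence from Figure 2 to recover the segmentation. Representing the buffer and queue as Python lists is an illustrative choice, and scoring and beam search are deliberately left out here.

```python
def apply_actions(characters, actions):
    """Apply SEP/APP actions left to right: SEP moves the next character onto
    the buffer as a new word, APP appends it to the last word in the buffer."""
    buffer = []                  # partially segmented words
    queue = list(characters)     # remaining characters of the sentence
    for action in actions:
        ch = queue.pop(0)
        if action == "SEP":
            buffer.append(ch)
        elif action == "APP":
            buffer[-1] += ch
        else:
            raise ValueError("unknown action: %s" % action)
    return buffer

chars = "中国外企业务发展迅速"
actions = ["SEP", "APP"] * 5     # the gold sequence from Figure 2
print(apply_actions(chars, actions))
# ['中国', '外企', '业务', '发展', '迅速']
```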
We define the score of A as the total score of all actions in the sequence, which is computed by: score(A) = X a∈A score(a) = X a∈A w · f(s, q, a), where w is the model parameters, f is a feature extraction function, s and q are the buffer and queue of a certain state before the action a is applied. The feature templates are shown in Table 1, which are the same as Zhang and Clark (2011). These base features include three main source of information. First, characters in the front of the queue and the end of the buffer are used for scoring both separate and append actions (e.g. c0). Second, words that are identified are used to guide separate actions (e.g. w0). Third, relevant information of identified words, such as their lengths and first/last characters are utilized for additional features (e.g. len(w−1)). We follow Zhang and Clark (2011) in using beam-search for decoding, shown in Algorith 1, where Θ is the set of model parameters. Initially the beam contains only the initial state. At each step, each state in the beam is extended by applying both SEP and APP, resulting in a set of new states, which are scored and ranked. The top B are 422 Feature templates Action c−1c0 APP, SEP w−1, w−1w−2, w−1c0, w−2len(w−1) SEP start(w−1)c0, end(w−1)c0 start(w−1)end(w−1), end(w−2)end(w−1) w−2len(w−1), len(w−2)w−1 w−1, where len(w−1) = 1 Table 1: Feature templates for the baseline model, where wi denotes the word in the buffer, ci denotes the character in the queue, as shown in Figure 2, start(.), end(.) and len(.) denote the first, last character and length of a word, respectively. Algorithm 1 Beam-search decoding. function DECODE(c1 · · · cn, Θ) agenda ←{ (φ, c1 · · · cn, score=0.0) } for i in 1 · · · n beam ←{ } for cand in agenda new ←SEP(cand, ci, Θ) ADDITEM(beam, new) new ←APP(cand, ci, Θ) ADDITEM(beam, new) agenda ←TOP-B(beam, B) best ←BESTITEM(agenda) w1 · · · wm ←EXTRACTWORDS(best) used as the beam states for the next step. The same process replaces until all input character are processed, and the highest-scored state in the beam is taken for output. Online leaning with max-margin is used, which is given in section 4. 3 Transition-Based Neural Model We use a neural network model to replace the discrete linear model for scoring transition action sequences. For better comparison between discrete and neural features, the overall segmentation framework of the baseline is used, which includes the incremental segmentation process, the beamsearch decoder and the training process integrated with beam-search (Zhang and Clark, 2011). In addition, the neural network scorer takes the similar feature sources as the baseline, which includes character information over the input, word information of the partially constructed output, and the history sequence of the actions that have been applied so far. The overall architecture of the neural scorer is shown in Figure 3. Given a certain state score(SEP) score(APP) · · · hsep · · · happ · · · rc · · · rw · · · ra word sequence character sequence action sequence RNN RNN RNN · · · w−1w0 · · · c−1c0c1 · · · · · · a−1a0 Figure 3: Scorer for the neural transition-based Chinese word segmentation model. We denote the last word in the buffer as w0, the next incoming character as c0 in the queue in consistent with Figure 2, and the last applied action as a0. 
configuration (s, q), we use three separate recurrent neural networks (RNN) to model the word sequence · · · w−1w0, the character sequence · · · c−1c0c1 · · · , and the action sequence · · · a−1a0, respectively, resulting in three dense real-valued vectors {rw, rc and ra}, respectively. All the three feature vectors are used scoring the SEP action. For APP, on the other hand, we use only the character and action features rc and ra because the last word w0 in the buffer is a partial word. Formally, given rw, rc, ra, the action scores are computed by: score(SEP) = wsephsep score(APP) = wapphapp where hsep = tanh(Wsep[rw, rc, ra] + bsep) happ = tanh(Wapp[rc, ra] + bapp) Wsep, Wapp, bsep, bapp, wsep, wapp are model parameters. The neural networks take the embedding forms of words, characters and actions as input, for extracting rw, rc and ra, respectively. We exploit the LSTM-RNN structure (Hochreiter and Schmidhuber, 1997), which can better capture non-local syntactic and semantic information from a sequential input, yet reducing gradient explosion or diminishing during training. In general, given a sequence of input vectors x0 · · · xn, the LSTM-RNN computes a sequence of hidden vectors h0 · · · hn, respectively, with each hi being determined by the input xi and the previous hidden vector hi−1. A cell structure ce is 423 ... wi ... wi−1 ...... ... xw i ...... (a) word representation ... ai ... ai−1 ...... ... xa i ...... (b) action representation ... ⊕ ... ci, ci−1ci ... ⊕ ... ci−1, ci−2ci−1 ... ⊕ ... ci+1, ci+1ci ...... ...... ... xc i ...... ...... (c) character representation Figure 4: Input representations of LSTMS for ra (actions) rw (words) and rc (characters). used to carry long-term memory information over the history h0 · · · hi for calculating hi, and information flow is controlled by an input gate ig, an output gate og and a forget gate fg. Formally, the calculation of hi using hi−1 and xi is: igi = σ(Wigxi + Uighi−1 + Vigcei−1 + big) fgi = σ(Wfgxi + Ufghi−1 + Vfgcei−1 + bfg) cei = fgi ⊙cei−1+ igi ⊙tanh(Wcexi + Ucehi−1 + bce) ogi = σ(Wogxi + Uoghi−1 + Vogcei + bog) hi = ogi ⊙tanh(cei), where U, V, W, b are model parameters, and ⊙denotes Hadamard product. When used to calculate rw, rc and ra, the general LSTM structure above is given different input sequences x0 · · · xn, according to the word, character and action sequences, respectively. 3.1 Input representation Words. Given a word w, we use a looking-up matrix Ew to obtain its embedding ew(w). The matrix can be obtained through pre-training on large size of auto segmented corpus. As shown in Figure 4(a), we use a convolutional neural layer upon a two-word window to obtain · · · xw −1xw 0 for the LSTM for rw, with the following formula: xw i = tanh Ww[ew(wi−1), ew(wi)] + bw  Actions. We represent an action a with an embedding ea(a) from a looking-up table Ea, and apply the similar convolutional neural network to obtain · · · xa −1xa 0 for ra, as shown in Figure 4(b). Given the input action sequence · · · a−1a0, the xa i is computed by: xa i = tanh Wa[ea(ai−1), ea(ai)] + ba  Characters. We make embeddings for both character unigrams and bigrams by looking-up matrixes Ec and Ebc, respectively, the latter being shown to be useful by Pei et al. (2014). For each character ci, the unigram embedding ec(ci) and the bigram embedding ebc(ci−1ci) are concatenated, before being given to a CNN with a convolution size of 5. 
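For reference, here is a minimal NumPy sketch of the LSTM step defined by the gate equations above, including the peephole terms V on the input, forget and output gates. The toy dimensions and the small random initialisation are placeholders (the hyper-parameters reported later use 50-dimensional hidden vectors for the word and character LSTMs).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d_x, d_h = 50, 50                 # toy sizes
rng = np.random.RandomState(0)
def mat(rows, cols):
    return rng.uniform(-0.01, 0.01, size=(rows, cols))

# One weight triple (W, U, V) and a bias per gate, as in the equations above;
# the cell candidate ('ce') has no peephole term.
W = {g: mat(d_h, d_x) for g in ("ig", "fg", "og", "ce")}
U = {g: mat(d_h, d_h) for g in ("ig", "fg", "og", "ce")}
V = {g: mat(d_h, d_h) for g in ("ig", "fg", "og")}
b = {g: np.zeros(d_h) for g in ("ig", "fg", "og", "ce")}

def lstm_step(x, h_prev, ce_prev):
    """One step of the LSTM used for r_w, r_c and r_a, following the text."""
    ig = sigmoid(W["ig"] @ x + U["ig"] @ h_prev + V["ig"] @ ce_prev + b["ig"])
    fg = sigmoid(W["fg"] @ x + U["fg"] @ h_prev + V["fg"] @ ce_prev + b["fg"])
    ce = fg * ce_prev + ig * np.tanh(W["ce"] @ x + U["ce"] @ h_prev + b["ce"])
    og = sigmoid(W["og"] @ x + U["og"] @ h_prev + V["og"] @ ce + b["og"])
    h = og * np.tanh(ce)
    return h, ce

h, ce = lstm_step(rng.randn(d_x), np.zeros(d_h), np.zeros(d_h))
print(h.shape)  # (50,)
```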
For the character sequence · · · c−1c0c1 · · · of a given state (s, q), we compute its input vectors · · · xc −1xc 0xc 1 · · · for the LSTM for rc by: xc i = tanh Wc[ec(ci−2) ⊕ebc(ci−3ci−2), · · · , ec(ci) ⊕ebc(ci−1ci), · · · , ec(ci+2) ⊕ebc(ci+1ci+2)] + bc  For all the above input representations, the looking-up tables Ew, Ea , Ec, Ebc and the weights Ww, Wa, Wc, bw, ba, bc are model parameters. For calculating rw and ra, we apply the LSTMs directly over the sequences · · · xw −1xw 0 and · · · xa −1xa 0 for words and actions, and use the outputs hw 0 and ha 0 for rw and ra, respectively. For calculating rc, we further use a bi-directional extension of the original LSTM structure. In particular, the base LSTM is applied to the input character sequence both from left to right and from right to left, leading to two hidden node sequences · · · hcL −1hcL 0 hcL 1 · · · and · · · hcR −1hcR 0 hcR 1 · · · , respectively. For the current character c0, hcL 0 and hcR 0 are concatenated to form the final vector rc. This is feasible because the character sequence is input and static, and previous work has demonstrated better capability of bi-directional LSTM for modeling sequences (Yao and Zweig, 2015). 3.2 Integrating discrete features Our model can be extended by integrating the baseline discrete features into the feature layer. In particular, score(SEP) = w′sep(hsep ⊕fsep) score(APP) = w′app(happ ⊕fapp), where fsep and fapp represent the baseline sparse vector for SEP and APP features, respectively, and ⊕denotes the vector concatenation operation. 424 Algorithm 2 Max-margin training with earlyupdate. function TRAIN(c1 · · · cn, ag 1 · · · ag n, Θ) agenda ←{ (φ, c1 · · · cn, score=0.0) } for i in 1 · · · n beam ←{ } for cand in agenda new ←SEP(cand, ci, Θ) if {ag i ̸= SEP} new.score += η ADDITEM(beam, new) new ←APP(cand, ci, Θ) if {ag i ̸= APP} new.score += η ADDITEM(beam, new) agenda ←TOP-B(beam, B) if {ITEM(ag 1 · · · ag i ) /∈agenda} Θ = Θ −f BESTITEM(agenda)  Θ = Θ + f ITEM((ag 1 · · · ag i )  return if {ITEM(ag 1 · · · ag n) ̸= BESTITEM(agenda)} Θ = Θ −f BESTITEM(agenda)  Θ = Θ + f ITEM((ag 1 · · · ag n)  4 Training To train model parameters for both the discrete and neural models, we exploit online learning with early-update as shown in Algorithm 2. A maxmargin objective is exploited,3 which is defined as: L(Θ) = 1 K K X k=1 l(Ag k, Θ) + λ 2 ∥Θ ∥2 l(Ag k, Θ) = max A score(Ak, Θ) + η · δ(Ak, Ag k)  −score(Ag k, Θ), where Θ is the set of all parameters, {Ag k}K n=1 are gold action sequences to segment the training corpus, Ak is the model output action sequence, λ is a regularization parameter and η is used to tune the loss margins. For the discrete models, f(·) denotes the features extracted according to the feature templates in Table 1. For the neural models, f(·) denotes the corresponding hsep and happ. Thus only the output layer is updated, and we further use backpropagation to learn the parameters of the other layers (LeCun et al., 2012). We use online Ada3Zhou et al. (2015) find that max-margin training did not yield reasonable results for neural transition-based parsing, which is different from our findings. One likely reason is that when the number of labels is small max-margin is effective. CTB60 PKU MSR Training #sent 23k 17k 78k #word 641k 1,010k 2,122k Development #sent 2.1k 1.9k 8.7k #word 60k 100k 246k Test #sent 2.8k 1.9k 4.0k #word 82k 104k 106k Table 2: Statistics of datasets. 
Type hyper-parameters Network d(hsep) = 100, d(happ) = 80 structure d(ha i ) = 20, d(xa i ) = 20 d(hw i ) = 50, d(xw i ) = 50 d(hcL i ) = d(hcR i ) = 50, d(xc i) = 50 d(ew(wi)) = 50, d(ea(ai)) = 20 d(ec(ci)) = 50, d(ebc(ci−1ci)) = 50 Training λ = 10−8, α = 0.01, η = 0.2 Table 3: Hyper-parameter values in our model. Grad (Duchi et al., 2011) to minimize the objective function for both the discrete and neural models. All the matrix and vector parameters are initialized by uniform sampling in (−0.01, 0.01). 5 Experiments 5.1 Experimental Settings Data. We use three datasets for evaluation, namely CTB6, PKU and MSR. The CTB6 corpus is taken from Chinese Treebank 6.0, and the PKU and MSR corpora can be obtained from BakeOff 2005 (Emerson, 2005). We follow Zhang et al. (2014), splitting the CTB6 corpus into training, development and testing sections. For the PKU and MSR corpora, only the training and test datasets are specified and we randomly split 10% of the training sections for development. Table 1 shows the overall statistics of the three datasets. Embeddings. We use word2vec4 to pre-train word, character and bi-character embeddings on Chinese Gigaword corpus (LDC2011T13). In order to train full word embeddings, the corpus is segmented automatically by our baseline model. Hyper-parameters. The hyper-parameter values are tuned according to development performances. We list their final values in Table 3. 5.2 Development Results To better understand the word-based neural models, we perform several development experiments. All the experiments in this section are conducted on the CTB6 development dataset. 4http://word2vec.googlecode.com/ 425 5 10 15 20 86 88 90 92 94 96 (a) discrete 5 10 15 20 b16 b8 b4 b2 b1 (b) neural(-tune) 5 10 15 20 (c) neural(+tune) Figure 5: Accuracies against the training epoch using beam sizes 1, 2, 4, 8 and 16, respectively. 5.2.1 Embeddings and beam size We study the influence of beam size on the baseline and neural models. Our neural model has two choices of using pre-trained word embeddings. We can either fine-tune or fix the embeddings during training. In case of fine-tuning, only words in the training data can be learned, while embeddings of out-of-vocabulary (OOV) words could not be used effectively.5 In addition, following Dyer et al. (2015) we randomly set words with frequency 1 in the training data as the OOV words in order to learn the OOV embedding, while avoiding overfitting. If the pretrained word embeddings are not fine-tuned, we can utilize all word embeddings. Figure 5 shows the development results, where the training curve of the discrete baseline is shown in Figure 5(a) and the curve of the neural model without and with fine tuning are shown in 5(b) and 5(c), respectively. The performance increases with a larger beam size in all settings. When the beam increases into 16, the gains levels out. The results of the discrete model and the neural model without fine-tuning are highly similar, showing the usefulness of beam-search. On the other hand, with fine-tuning, the results are different. The model with beam size 1 gives better accuracies compared to the other models with the same beam size. However, as the beam size increases, the performance increases very little. The results are consistent with Dyer et al. (2015), who find that beam-search improves the results only slightly on dependency parsing. 
When a beam size of 16 is used, this model performs the 5We perform experiments using random initialized word embeddings as well when fine-tune is used, which is a fully supervised model. The performance is slightly lower. Model P R F neural 95.21 95.69 95.45 -word 91.81 92.00 91.90 -character unigram 94.89 95.56 95.22 -character bigram 94.93 95.53 95.23 -action 95.00 95.31 95.17 +discrete features 96.38 96.22 96.30 (combined) Table 4: Feature experiments. 0.8 0.84 0.88 0.92 0.96 1 0.8 0.84 0.88 0.92 0.96 1 discrete neural Figure 6: Sentence accuracy comparisons for the discrete and the neural models. worst compared with the discrete model and the neural model without fine-tuning. This is likely because the fine-tuning of embeddings leads to overfitting of in-vocabulary words, and underfitting over OOV words. Based on the observation, we exploit fixed word embeddings in our final models. 5.2.2 Feature ablation We conduct feature ablation experiments to study the effects of the word, character unigram, character bigram and action features to the neural model. The results are shown in Table 4. Word features are particularly important to the model, without which the performance decreases by 4.5%. The effects of the character unigram, bigram and action features are relatively much weaker.6 This demonstrates that in the word-based incremental search framework, words are the most crucial information to the neural model. 5.2.3 Integrating discrete features Prior work has shown the effectiveness of integrating discrete and neural features for several NLP tasks (Turian et al., 2010; Wang and Manning, 6In all our experiments, we fix the character unigram and bigram embeddings, because fine-tuning of these embeddings results in little changes. 426 Models P R F word-based models discrete 95.29 95.26 95.28 neural 95.34 94.69 95.01 combined 96.11 95.79 95.95 character-based models discrete 95.38 95.12 95.25 neural 94.59 94.92 94.76 combined 95.63 95.60 95.61 other models Zhang et al. (2014) N/A N/A 95.71 Wang et al. (2011) 95.83 95.75 95.79 Zhang and Clark (2011) 95.46 94.78 95.13 Table 5: Main results on CTB60 test dataset. 2013; Durrett and Klein, 2015; Zhang and Zhang, 2015). We investigate the usefulness of such integration to our word-based segmentor on the development dataset. We study it by two ways. First, we compare the error distributions between the discrete and the neural models. Intuitively, different error distributions are necessary for improvements by integration. We draw a scatter graph to show their differences, with the (x, y) values of each point denoting the F-measure scores of the two models with respect to sentences, respectively. As shown in Figure 6, the points are rather dispersive, showing the differences of the two models. Further, we directly look at the results after integration of both discrete and neural features. As shown in Table 4, the integrated model improves the accuracies from 95.45% to 96.30%, demonstrating that the automatically-induced neural features contain highly complementary information to the manual discrete features. 5.3 Final Results Table 6 shows the final results on CTB6 test dataset. 
For thorough comparison, we implement discrete, neural and combined character-based models as well.7 In particular, the character-based discrete model is a CRF tagging model using character unigrams, bigrams, trigrams and tag transitions (Tseng et al., 2005), and the character-based neural model exploits a bi-directional LSTM layer to model character sequences8 and a CRF layer for 7The code is released for research reference under GPL at https://github.com/SUTDNLP/NNSegmentation. 8We use a concatenation of character unigram and bigram embeddings at each position as the input to LSTM, because our experiments show that the character bigram embeddings are useful, without which character-based neural models are significantly lower than their discrete counterparts. Models PKU MSR our word-based models discrete 95.1 97.3 neural 95.1 97.0 combined 95.7 97.7 character-based models discrete 94.9 96.8 neural 94.4 97.2 combined 95.4 97.2 other models Cai and Zhao (2016) 95.5 96.5 Ma and Hinrichs (2015) 95.1 96.6 Pei et al. (2014) 95.2 97.2 Zhang et al. (2013a) 96.1 97.5 Sun et al. (2012) 95.4 97.4 Zhang and Clark (2011) 95.1 97.1 Sun (2010) 95.2 96.9 Sun et al. (2009) 95.2 97.3 Table 6: Main results on PKU and MSR test datasets. output (Chen et al., 2015b).9 The combined model uses the same method for integrating discrete and neural features as our word-based model. The word-based models achieve better performances than character-based models, since our model can exploit additional word information learnt from large auto-segmented corpus. We also compare the results with other models. Wang et al. (2011) is a semi-supervised model that exploits word statistics from auto-segmented raw corpus, which is similar with our combined model in using semi-supervised word information. We achieve slightly better accuracies. Zhang et al. (2014) is a joint segmentation, POS-tagging and dependency parsing model, which can exploit syntactic information. To compare our models with other state-of-theart models in the literature, we report the performance on the PKU and MSR datasets also.10 Our combined model gives the best result on the MSR dataset, and the second best on PKU. The method of Zhang et al. (2013a) gives the best performance on PKU by co-training on large-scale data. 5.4 Error Analysis To study the differences between word-based and character-based neural models, we conduct error analysis on the test dataset of CTB60. First, 9Bi-directional LSTM is slightly better than a single leftright LSTM used in Chen et al. (2015b). 10The results of Chen et al. (2015a) and Chen et al. (2015b) are not listed, because they take a preprocessing step by replacing Chinese idioms with a uniform symbol in their test data. 427 0.8 0.84 0.88 0.92 0.96 1 0.8 0.84 0.88 0.92 0.96 1 word character Figure 7: Sentence accuracy comparisons for word- and character-based neural models. 5 10 15 20 25 30 35 40 45 50+ 92 94 96 98 F-measures(%) word character Figure 8: F-measure against character length. we examine the error distribution on individual sentences. Figure 7 shows the F-measure values of each test sentence by word- and characterbased neural models, respectively, where the xaxis value denotes the F-measure value of the word-based neural model, and the y-axis value denotes its performance of the character-based neural model. We can see that the majority scatter points are off the diagonal line, demonstrating strong differences between the two models. This results from the differences in feature sources. 
Second, we study the F-measure distribution of the two neural models with respect to sentence lengths. We divide the test sentences into ten bins, with bin i denoting sentence lengths in [5 ∗(i −1), 5 ∗i]. Figure 8 shows the results. According to the figure, we observe that word-based neural model is relatively weaker for sentences with length in [5, 10], while can better tackle long sentences. Third, we compare the two neural models by their capabilities of modeling words with different lengths. Figure 9 shows the results. The perfor1 2 3 4+ 75 80 85 90 95 Figure 9: F-measure against word length, where the boxes with red dots denote the performances of word-based neural model, and the boxes with blue slant lines denote character-based neural model. mances are lower for words with lengths beyond 2, and the performance drops significantly for words with lengths over 3. Overall, the word-based neural model achieves comparable performances with the character-based model, but gives significantly better performances for long words, in particular when the word length is over 3. This demonstrates the advantage of word-level features. 6 Related Work Xue (2003) was the first to propose a charactertagging method to Chinese word segmentation, using a maximum entropy model to assign B/I/E/S tags to each character in the input sentence separately. Peng et al. (2004) showed that better results can be achieved by global learning using a CRF model. This method has been followed by most subsequent models in the literature (Tseng et al., 2005; Zhao, 2009; Sun et al., 2012). The most effective features have been character unigrams, bigrams and trigrams within a five-character window, and a bigram tag window. Special characters such as alphabets, numbers and date/time characters are also differentiated for extracting features. Zheng et al. (2013) built a neural network segmentor, which essentially substitutes the manual discrete features of Peng et al. (2004), with dense real-valued features induced automatically from character embeddings, using a deep neural network structure (Collobert et al., 2011). A tag transition matrix is used for inference, which makes the model effectively. Most subsequent work on neural segmentation followed this method, improving the extraction of emission features by using more complex neural network structures. Mansur et al. (2013) experimented with embeddings of richer features, and in particular charac428 ter bigrams. Pei et al. (2014) used a tensor neural network to achieve extensive feature combinations, capturing the interaction between characters and tags. Chen et al. (2015a) used a recursive network structure to the same end, extracting more combined features to model complicated character combinations in a five-character window. Chen et al. (2015b) used a LSTM model to capture long-range dependencies between characters in a sentence. Xu and Sun (2016) proposed a dependency-based gated recursive neural network to efficiently integrate local and long-distance features. The above methods are all character-based models, making no use of full word information. In contrast, we leverage both character embeddings and word embeddings for better accuracies. For word-based segmentation, Andrew (2006) used a semi-CRF model to integrate word features, Zhang and Clark (2007) used a perceptron algorithm with inexact search, and Sun et al. (2009) used a discriminative latent variable model to make use of word features. 
Recently, there have been several neural-based models using word-level embedding features (Morita et al., 2015; Liu et al., 2016; Cai and Zhao, 2016), which are different from our work in the basic framework. For instance, Liu et al. (2016) follow Andrew (2006) using a semi-CRF for structured inference. We followed the global learning and beamsearch framework of Zhang and Clark (2011) in building a word-based neural segmentor. The main difference between our model and that of Zhang and Clark (2011) is that we use a neural network to induce feature combinations directly from character and word embeddings. In addition, the use of a bi-directional LSTM allows us to leverage non-local information from the word sequence, and look-ahead information from the incoming character sequence. The automatic neural features are complementary to the manual discrete features of Zhang and Clark (2011). We show that our model can accommodate the integration of both types of features. This is similar in spirit to the work of Sun (2010) and Wang et al. (2014), who integrated features of character-based and word-based segmentors. Transition-based framework with beam search has been widely exploited in a number of other NLP tasks, including syntactic parsing (Zhang and Nivre, 2011; Zhu et al., 2013), information extraction (Li and Ji, 2014) and the work of joint models (Zhang et al., 2013b; Zhang et al., 2014). Recently, the effectiveness of neural features has been studied for this framework. In the natural language parsing community, it has achieved great success. Representative work includes Zhou et al. (2015), Weiss et al. (2015), Watanabe and Sumita (2015) and Andor et al. (2016). In this work, we apply the transition-based neural framework to Chinese segmentation, in order to exploit wordlevel neural features such as word embeddings. 7 Conclusion We proposed a word-based neural model for Chinese segmentation, which exploits not only character embeddings as previous work does, but also word embeddings pre-trained from large scale corpus. The model achieved comparable performances compared with a discrete word-based baseline, and also the state-of-the-art characterbased neural models in the literature. We further demonstrated that the model can utilize discrete features conveniently, resulting in a combined model that achieved top performances compared with previous work. Finally, we conducted several comparisons to study the differences between our word-based model with character-based neural models, showing that they have different error characteristics. Acknowledgments We thank the anonymous reviewers, Yijia Liu and Hai Zhao for their constructive comments, which help to improve the final paper. This work is supported by National Natural Science Foundation of China (NSFC) under grant 61170148, Natural Science Foundation of Heilongjiang Province (China) under grant No.F2016036, the Singapore Ministry of Education (MOE) AcRF Tier 2 grant T2MOE201301 and SRG ISTD 2012 038 from Singapore University of Technology and Design. Yue Zhang is the corresponding author. References Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proceedings of the ACL 2016. Galen Andrew. 2006. A hybrid markov/semi-markov conditional random field for sequence segmentation. 429 In Proceedings of the 2006 Conference on EMNLP, pages 465–472, Sydney, Australia, July. Deng Cai and Hai Zhao. 2016. 
Neural word segmentation learning for Chinese. In Proceedings of ACL 2016. Xinchi Chen, Xipeng Qiu, Chenxi Zhu, and Xuanjing Huang. 2015a. Gated recursive neural network for chinese word segmentation. In Proceedings of the 53nd ACL, pages 1744–1753, July. Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Pengfei Liu, and Xuanjing Huang. 2015b. Long short-term memory neural networks for chinese word segmentation. In Proceedings of the 2015 EMNLP, pages 1197–1206, September. Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL’04), Main Volume, pages 111–118, Barcelona, Spain, July. R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159. Greg Durrett and Dan Klein. 2015. Neural crf parsing. In Proceedings of the 53nd ACL, pages 302– 312, July. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of the 53nd ACL, pages 334–343, July. Thomas Emerson. 2005. The second international chinese word segmentation bakeoff. In Proceedings of the Second SIGHAN Workshop on Chinese Language Processing, pages 123–133. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105. Yann A LeCun, L´eon Bottou, Genevieve B Orr, and Klaus-Robert M¨uller. 2012. Efficient backprop. In Neural networks: Tricks of the trade, pages 9–48. Springer. Qi Li and Heng Ji. 2014. Incremental joint extraction of entity mentions and relations. In Proceedings of the ACL 2014. Yijia Liu, Wanxiang Che, Jiang Guo, Bing Qin, and Ting Liu. 2016. Exploring segment representations for neural segmentation models. In Proceedings of IJCAI 2016. Jianqiang Ma and Erhard Hinrichs. 2015. Accurate linear-time chinese word segmentation via embedding matching. In Proceedings of the 53nd ACL, pages 1733–1743, July. Mairgup Mansur, Wenzhe Pei, and Baobao Chang. 2013. Feature-based neural language model and chinese word segmentation. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 1271–1277, Nagoya, Japan, October. Asian Federation of Natural Language Processing. Hajime Morita, Daisuke Kawahara, and Sadao Kurohashi. 2015. Morphological analysis for unsegmented languages using recurrent neural network language model. In Proceedings of the 2015 Conference on EMNLP, pages 2292–2297. Wenzhe Pei, Tao Ge, and Baobao Chang. 2014. Maxmargin tensor neural network for chinese word segmentation. In Proceedings of the 52nd ACL, pages 293–303, Baltimore, Maryland, June. Fuchun Peng, Fangfang Feng, and Andrew McCallum. 2004. Chinese segmentation and new word detection using conditional random fields. In Proceedings of Coling 2004, pages 562–568, Geneva, Switzerland, Aug 23–Aug 27. Yanxin Shi and Mengqiu Wang. 2007. A dual-layer crfs based joint decoding method for cascaded segmentation and labeling tasks. In IJCAI, pages 1707– 1712. 
Weiwei Sun and Jia Xu. 2011. Enhancing chinese word segmentation using unlabeled data. In Proceedings of the 2011 Conference on EMNLP, pages 970–979, July. Xu Sun, Yaozhong Zhang, Takuya Matsuzaki, Yoshimasa Tsuruoka, and Jun’ichi Tsujii. 2009. A discriminative latent variable chinese segmenter with hybrid word/character information. In Proceedings of NAACL 2009, pages 56–64, June. Xu Sun, Houfeng Wang, and Wenjie Li. 2012. Fast online training with frequency-adaptive learning rates for chinese word segmentation and new word detection. In Proceedings of the 50th ACL, pages 253– 262, July. Weiwei Sun. 2010. Word-based and character-based word segmentation models: Comparison and combination. In Coling 2010: Posters, pages 1211–1219, August. Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning. 2005. A conditional random field word segmenter for sighan bakeoff 2005. In Proceedings of the fourth SIGHAN workshop, pages 168–171. 430 Joseph Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384–394, July. Mengqiu Wang and Christopher D. Manning. 2013. Effect of non-linear deep architecture in sequence labeling. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 1285–1291, Nagoya, Japan, October. Asian Federation of Natural Language Processing. Yiou Wang, Jun’ichi Kazama, Yoshimasa Tsuruoka, Wenliang Chen, Yujie Zhang, and Kentaro Torisawa. 2011. Improving chinese word segmentation and pos tagging with semi-supervised methods using large auto-analyzed data. In Proceedings of 5th IJCNLP, pages 309–317, Chiang Mai, Thailand, November. Mengqiu Wang, Rob Voigt, and Christopher D. Manning. 2014. Two knives cut better than one: Chinese word segmentation with dual decomposition. In Proceedings of the 52nd ACL, pages 193–198, Baltimore, Maryland, June. Taro Watanabe and Eiichiro Sumita. 2015. Transitionbased neural constituent parsing. In Proceedings of the 53rd ACL, pages 1169–1179, July. David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. In Proceedings of the 53rd ACL, pages 323–333, July. Jingjing Xu and Xu Sun. 2016. Dependency-based gated recursive neural network for chinese word segmentation. In Proceedings of ACL 2016. Nianwen Xue. 2003. Chinese word segmentation as character tagging. International Journal of Computational Linguistics and Chinese Language Processing, 8(1). Kaisheng Yao and Geoffrey Zweig. 2015. Sequence-to-sequence neural net models for grapheme-to-phoneme conversion. arXiv preprint arXiv:1506.00196. Yue Zhang and Stephen Clark. 2007. Chinese segmentation with a word-based perceptron algorithm. In Proceedings of the 45th ACL, pages 840–847, Prague, Czech Republic, June. Yue Zhang and Stephen Clark. 2011. Syntactic processing using the generalized perceptron and beam search. Computational Linguistics, 37(1):105–151. Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of the 49th ACL, pages 188–193, June. Meishan Zhang and Yue Zhang. 2015. Combining discrete and continuous features for deterministic transition-based dependency parsing. In Proceedings of the 2015 EMNLP, pages 1316–1321, September. Longkai Zhang, Houfeng Wang, Xu Sun, and Mairgup Mansur. 2013a. 
Exploring representations from unlabeled data with co-training for Chinese word segmentation. In Proceedings of the EMNLP 2013, pages 311–321, Seattle, Washington, USA, October. Meishan Zhang, Yue Zhang, Wanxiang Che, and Ting Liu. 2013b. Chinese parsing exploiting characters. In Proceedings of the 51st ACL, pages 125–134, August. Meishan Zhang, Yue Zhang, Wanxiang Che, and Ting Liu. 2014. Character-level chinese dependency parsing. In Proceedings of the 52nd ACL, pages 1326–1336, Baltimore, Maryland, June. Hai Zhao, Chang-Ning Huang, Mu Li, and Bao-Liang Lu. 2006. Effective tag set selection in chinese word segmentation via conditional random field modeling. In Proceedings of PACLIC, volume 20, pages 87–94. Citeseer. Hai Zhao. 2009. Character-level dependencies in chinese: Usefulness and learning. In Proceedings of the EACL, pages 879–887, Athens, Greece, March. Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep learning for Chinese word segmentation and POS tagging. In Proceedings of the 2013 Conference on EMNLP, pages 647–657, Seattle, Washington, USA, October. Hao Zhou, Yue Zhang, Shujian Huang, and Jiajun Chen. 2015. A neural probabilistic structuredprediction model for transition-based dependency parsing. In Proceedings of the 53rd ACL, pages 1213–1222, July. Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. 2013. Fast and accurate shiftreduce constituent parsing. In Proceedings of the 51st ACL, pages 434–443, August. 431
2016
40
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 432–441, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics A Parallel-Hierarchical Model for Machine Comprehension on Sparse Data Adam Trischler∗ adam.trischler Zheng Ye∗ jeff.ye Xingdi Yuan eric.yuan Jing He jing.he Philip Bachman phil.bachman Maluuba Research Montreal, Qu´ebec, Canada Kaheer Suleman [email protected] Abstract Understanding unstructured text is a major goal within natural language processing. Comprehension tests pose questions based on short text passages to evaluate such understanding. In this work, we investigate machine comprehension on the challenging MCTest benchmark. Partly because of its limited size, prior work on MCTest has focused mainly on engineering better features. We tackle the dataset with a neural approach, harnessing simple neural networks arranged in a parallel hierarchy. The parallel hierarchy enables our model to compare the passage, question, and answer from a variety of trainable perspectives, as opposed to using a manually designed, rigid feature set. Perspectives range from the word level to sentence fragments to sequences of sentences; the networks operate only on word-embedding representations of text. When trained with a methodology designed to help cope with limited training data, our Parallel-Hierarchical model sets a new state of the art for MCTest, outperforming previous feature-engineered approaches slightly and previous neural approaches by a significant margin (over 15 percentage points). 1 Introduction Humans learn in a variety of ways—by communication with each other and by study, the reading of text. Comprehension of unstructured text by machines, at a near-human level, is a major goal for natural language processing. It has garnered ∗A. Trischler and Z. Ye contributed equally to this work. significant attention from the machine learning research community in recent years. Machine comprehension (MC) is evaluated by posing a set of questions based on a text passage (akin to the reading tests we all took in school). Such tests are objectively gradable and can be used to assess a range of abilities, from basic understanding to causal reasoning to inference (Richardson et al., 2013). Given a text passage and a question about its content, a system is tested on its ability to determine the correct answer (Sachan et al., 2015). In this work, we focus on MCTest, a complex but data-limited comprehension benchmark, whose multiple-choice questions require not only extraction but also inference and limited reasoning (Richardson et al., 2013). Inference and reasoning are important human skills that apply broadly, beyond language. We present a parallel-hierarchical approach to machine comprehension designed to work well in a data-limited setting. There are many use-cases in which comprehension over limited data would be handy: for example, user manuals, internal documentation, legal contracts, and so on. Moreover, work towards more efficient learning from any quantity of data is important in its own right, for bringing machines more in line with the way humans learn. Typically, artificial neural networks require numerous parameters to capture complex patterns, and the more parameters, the more training data is required to tune them. Likewise, deep models learn to extract their own features, but this is a data-intensive process. Our model learns to comprehend at a high level even when data is sparse. 
The key to our model is that it compares the question and answer candidates to the text using several distinct perspectives. We refer to a question combined with one of its answer candidates as a hypothesis (to be detailed below). The seman432 tic perspective compares the hypothesis to sentences in the text viewed as single, self-contained thoughts; these are represented using a sum and transformation of word embedding vectors, similarly to Weston et al. (2014). The word-by-word perspective focuses on similarity matches between individual words from hypothesis and text, at various scales. As in the semantic perspective, we consider matches over complete sentences. We also use a sliding window acting on a subsentential scale (inspired by the work of Hill et al. (2015)), which implicitly considers the linear distance between matched words. Finally, this word-level sliding window operates on two different views of story sentences: the sequential view, where words appear in their natural order, and the dependency view, where words are reordered based on a linearization of the sentence’s dependency graph. Words are represented throughout by embedding vectors (Bengio et al., 2000; Mikolov et al., 2013). These distinct perspectives naturally form a hierarchy that we depict in Figure 1. Language is hierarchical, so it makes sense that comprehension relies on hierarchical levels of understanding. The perspectives of our model can be considered a type of feature. However, they are implemented by parametric differentiable functions. This is in contrast to most previous efforts on MCTest, whose numerous hand-engineered features cannot be trained. Our model, significantly, can be trained end-to-end with backpropagation. To facilitate learning with limited data, we also develop a unique training scheme. We initialize the model’s neural networks to perform specific heuristic functions that yield decent (though not impressive) performance on the dataset. Thus, the training scheme gives the model a safe, reasonable baseline from which to start learning. We call this technique training wheels. Computational models that comprehend (insofar as they perform well on MC datasets) have been developed contemporaneously in several research groups (Weston et al., 2014; Sukhbaatar et al., 2015; Hill et al., 2015; Hermann et al., 2015; Kumar et al., 2015). Models designed specifically for MCTest include those of Richardson et al. (2013), and more recently Sachan et al. (2015), Wang et al. (2015), and Yin et al. (2016). In experiments, our Parallel-Hierarchical model achieves state-of-the-art accuracy on MCTest, outperforming these existing methods. Below we describe related work, the mathematical details of our model, and our experiments, then analyze our results. 2 The Problem In this section, we borrow from Sachan et al. (2015), who laid out the MC problem nicely. Machine comprehension requires machines to answer questions based on unstructured text. This can be viewed as selecting the best answer from a set of candidates. In the multiple-choice case, candidate answers are predefined, but candidate answers may also be undefined yet restricted (e.g., to yes, no, or any noun phrase in the text) (Sachan et al., 2015). For each question q, let T be the unstructured text and A = {ai} the set of candidate answers to q. The machine comprehension task reduces to selecting the answer that has the highest evidence given T. As in Sachan et al. (2015), we combine an answer and a question into a hypothesis, hi = f(q, ai). 
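As a minimal illustration of this reduction (a sketch in Python with our own function names; the scoring function and the simple concatenation of question and answer are stand-ins, not the model described below), answer selection is an argmax over candidate hypotheses:

```python
def form_hypothesis(question, answer):
    # One simple combination option noted later in the paper: plain
    # concatenation of the question and the answer candidate.
    return question + " " + answer

def select_answer(question, candidates, text, evidence):
    """Reduce machine comprehension to an argmax over candidates: choose
    the a_i whose hypothesis h_i = f(q, a_i) has the highest evidence
    given the text T, for any premise/hypothesis scoring function."""
    return max(candidates,
               key=lambda a: evidence(text, form_hypothesis(question, a)))
```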
To facilitate comparisons of the text with the hypotheses, we also break down the passage into sentences tj, T = {tj}. In our setting, q, ai, and tj each represent a sequence of embedding vectors, one for each word and punctuation mark in the respective item. 3 Related Work Machine comprehension is currently a hot topic within the machine learning community. In this section we will focus on the best-performing models applied specifically to MCTest, since it is somewhat unique among MC datasets (see Section 5). Generally, models can be divided into two categories: those that use fixed, engineered features, and neural models. The bulk of the work on MCTest falls into the former category. Manually engineered features often require significant effort on the part of a designer, and/or various auxiliary tools to extract them, and they cannot be modified by training. On the other hand, neural models can be trained end-to-end and typically harness only a single feature: vectorrepresentations of words. Word embeddings are fed into a complex and possibly deep neural network which processes and compares text to question and answer. Among deep models, mechanisms of attention and working memory are common, as in Weston et al. (2014) and Hermann et al. (2015). 433 3.1 Feature-engineering models Sachan et al. (2015) treated MCTest as a structured prediction problem, searching for a latent answerentailing structure connecting question, answer, and text. This structure corresponds to the best latent alignment of a hypothesis with appropriate snippets of the text. The process of (latently) selecting text snippets is related to the attention mechanisms typically used in deep networks designed for MC and machine translation (Bahdanau et al., 2014; Weston et al., 2014; Hill et al., 2015; Hermann et al., 2015). The model uses event and entity coreference links across sentences along with a host of other features. These include specifically trained word vectors for synonymy; antonymy and class-inclusion relations from external database sources; dependencies and semantic role labels. The model is trained using a latent structural SVM extended to a multitask setting, so that questions are first classified using a pretrained top-level classifier. This enables the system to use different processing strategies for different question categories. The model also combines question and answer into a well-formed statement using the rules of Cucerzan and Agichtein (2005). Our model is simpler than that of Sachan et al. (2015) in terms of the features it takes in, the training procedure (stochastic gradient descent vs. alternating minimization), question classification (we use none), and question-answer combination (simple concatenation or mean vs. a set of rules). Wang et al. (2015) augmented the baseline feature set from Richardson et al. (2013) with features for syntax, frame semantics, coreference chains, and word embeddings. They combined features using a linear latent-variable classifier trained to minimize a max-margin loss function. As in Sachan et al. (2015), questions and answers are combined using a set of manually written rules. The method of Wang et al. (2015) achieved the previous state of the art, but has significant complexity in terms of the feature set. Space does not permit a full description of all models in this category, but we refer the reader to the contributions of Smith et al. (2015) and Narasimhan and Barzilay (2015) as well. 
Despite its relative lack of features, the ParallelHierarchical model improves upon the featureengineered state of the art for MCTest by a small amount (about 1% absolute) as detailed in Section 5. 3.2 Neural models Neural models have, to date, performed relatively poorly on MCTest. This is because the dataset is sparse and complex. Yin et al. (2016) investigated deep-learning approaches concurrently with the present work. They measured the performance of the Attentive Reader (Hermann et al., 2015) and the Neural Reasoner (Peng et al., 2015), both deep, end-to-end recurrent models with attention mechanisms, and also developed an attention-based convolutional network, the HABCNN. Their network operates on a hierarchy similar to our own, providing further evidence of the promise of hierarchical perspectives. Specifically, the HABCNN processes text at the sentence level and the snippet level, where the latter combines adjacent sentences (as we do through an n-gram input). Embedding vectors for the question and the answer candidates are combined and encoded by a convolutional network. This encoding modulates attention over sentence and snippet encodings, followed by maxpooling to determine the best matches between question, answer, and text. As in the present work, matching scores are given by cosine similarity. The HABCNN also makes use of a question classifier. Despite the conceptual overlap between the HABCNN and our approach, the ParallelHierarchical model performs significantly better on MCTest (more than 15% absolute) as detailed in Section 5. Other neural models tested in Yin et al. (2016) fare even worse. 4 The Parallel-Hierarchical Model Let us now define our machine comprehension model in full. We first describe each of the perspectives separately, then describe how they are combined. Below, we use subscripts to index elements of sequences, like word vectors, and superscripts to indicate whether elements come from the text, question, or answer. In particular, we use the subscripts k, m, n, p to index sequences from the text, question, answer, and hypothesis, respectively, and superscripts t, q, a, h. We depict the model schematically in Figure 1. 4.1 Semantic Perspective The semantic perspective is similar to the Memory Networks approach for embedding inputs into memory space (Weston et al., 2014). Each sen434 Semantic Sentential SW-sequential SW-dependency MLP Word-by-word top N tj tj |tj+1 unigram bigram tj |tj+1 tj-1| trigram MLP+Sum MLP Embedding q ai Mi Figure 1: Schematic of the Parallel-Hierarchical model. SW stands for “sliding window.” MLP represents a fully connected neural network. tence of the text is a sequence of d-dimensional word vectors: tj = {tk}, tk ∈Rd. The semantic vector st is computed by embedding the word vectors into a D-dimensional space using a two-layer network that implements weighted sum followed by an affine tranformation and a nonlinearity; i.e., st = f At X k ωktk + bt A ! . (1) The matrix At ∈RD×d, the bias vector bt A ∈ RD, and for f we use the leaky ReLU function. The scalar ωk is a trainable weight associated with each word in the vocabulary. These scalar weights implement a kind of exogenous or bottomup attention that depends only on the input stimulus (Mayer et al., 2004). They can, for example, learn to perform the function of stopword lists in a soft, trainable way, to nullify the contribution of unimportant filler words. 
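A minimal numpy sketch of this computation (eq. 1); the names are ours, and the trainable parameters appear as plain arrays rather than as part of any particular deep-learning framework:

```python
import numpy as np

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

def semantic_vector(word_vecs, word_weights, A, b):
    """Eq. (1): a weighted sum of the sentence's word vectors (the scalar
    weights act as soft, exogenous attention), followed by an affine
    transformation and a leaky-ReLU nonlinearity."""
    # word_vecs: (n, d); word_weights: (n,); A: (D, d); b: (D,)
    weighted_sum = word_weights @ word_vecs      # (d,)
    return leaky_relu(A @ weighted_sum + b)      # (D,)
```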
The semantic representation of a hypothesis is formed analogously, except that we concatenate the question word vectors qm and answer word vectors an as a single sequence {hp} = {qm, an}. For semantic vector sh of the hypothesis, we use a unique transformation matrix Ah ∈RD×d and bias vector bh A ∈RD. These transformations map a text sentence and a hypothesis into a common space where they can be compared. We compute the semantic match between text sentence and hypothesis using the cosine similarity, Msem = cos(st, sh). (2) 4.2 Word-by-Word Perspective The first step in building the word-by-word perspective is to transform word vectors from a text sentence, question, and answer through respective neural functions. For the text, ˜tk = f Bttk + bt B  , where Bt ∈RD×d, bt B ∈RD and f is again the leaky ReLU. We transform the question and the answer to ˜qm and ˜an analogously using distinct matrices and bias vectors. In contrast to the semantic perspective, we keep the question and answer candidates separate in the wordby-word perspective. This is because matches to answer words are inherently more important than matches to question words, and we want our model to learn to use this property. 4.2.1 Sentential Inspired by the work of Wang and Jiang (2015) in paraphrase detection, we compute matches between hypotheses and text sentences at the word level. This computation uses the cosine similarity as before: cq km = cos(˜tk, ˜qm), (3) ca kn = cos(˜tk, ˜an). (4) The word-by-word match between a text sentence and question is determined by taking the maximum over k (finding the text word that best matches each question word) and then taking a weighted mean over m (finding the average match over the full question): Mq = 1 Z X m ωm max k cq km. (5) Here, ωm is the word weight for the question word and Z normalizes these weights to sum to one over the question. We define the match between a sentence and answer candidate, Ma, analogously. Finally, we combine the matches to question and answer according to Mword = α1Mq + α2Ma + α3MqMa. (6) Here, the α are trainable parameters that control the relative importance of the terms. 435 4.2.2 Sequential Sliding Window The sequential sliding window is related to the original MCTest baseline by Richardson et al. (2013). Our sliding window decays from its focus word according to a Gaussian distribution, which we extend by assigning a trainable weight to each location in the window. This modification enables the window to use information about the distance between word matches; the original baseline (Richardson et al., 2013) used distance information through a predefined function. The sliding window scans over the words of the text as one continuous sequence, without sentence breaks. Each window is treated like a sentence in the previous subsection, but we include a location-based weight λ(k). This weight is based on a word’s position in the window, which, given a window, depends on its global position k. The cosine similarity is adapted as sq km = λ(k) cos(˜tk, ˜qm), (7) for the question and analogously for the answer. We initialize the location weights with a Gaussian and fine-tune them during training. The final matching score, denoted as Msws, is computed as in (5) and (6) with sq km replacing cq km. 4.2.3 Dependency Sliding Window The dependency sliding window operates identically to the linear sliding window, but on a different view of the text passage. The output of this component is Mswd and is formed analogously to Msws. 
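Before turning to the details of the dependency view, the sentential word-by-word match of Section 4.2.1 (eqs. 3–6) can be sketched as follows; this is an illustrative reimplementation with our own names, not the released code:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def word_match(sent_vecs, query_vecs, query_weights):
    """Eq. (5): for each query word, take its best-matching sentence word
    (max over k), then average using the normalized word weights."""
    best = np.array([max(cosine(t, q) for t in sent_vecs) for q in query_vecs])
    w = np.asarray(query_weights, dtype=float)
    return float(w @ best / w.sum())

def sentential_match(sent_vecs, question_vecs, q_weights,
                     answer_vecs, a_weights, alphas):
    """Eq. (6): combine the question match and the answer match with
    trainable interpolation weights alpha_1..alpha_3."""
    m_q = word_match(sent_vecs, question_vecs, q_weights)
    m_a = word_match(sent_vecs, answer_vecs, a_weights)
    a1, a2, a3 = alphas
    return a1 * m_q + a2 * m_a + a3 * m_q * m_a
```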
The dependency perspective uses the Stanford Dependency Parser (Chen and Manning, 2014) as an auxiliary tool. Thus, the dependency graph can be considered a fixed feature. Moreover, linearization of the dependency graph, because it relies on an eigendecomposition, is not differentiable. However, we handle the linearization in data preprocessing so that the model sees only reordered word-vector inputs. Specifically, we run the Stanford Dependency Parser on each text sentence to build a dependency graph. This graph has nw vertices, one for each word in the sentence. From the dependency graph we form the Laplacian matrix L ∈Rnw×nw and determine its eigenvectors. The second eigenvector u2 of the Laplacian is known as the Fiedler vector. It is the solution to the minimization minimize g N X i,j=1 ηij(g(vi) −g(vj))2, (8) where vi are the vertices of the graph and ηij is the weight of the edge from vertex i to vertex j (Golub and Van Loan, 2012). The Fiedler vector maps a weighted graph onto a line such that connected nodes stay close, modulated by the connection weights.1 This enables us to reorder the words of a sentence based on their proximity in the dependency graph. The reordering of the words is given by the ordered index set I = arg sort(u2). (9) To give an example of how this works, consider the following sentence from MCTest and its dependency-based reordering: Jenny, Mrs. Mustard ’s helper, called the police. the police, called Jenny helper, Mrs. ’s Mustard. Sliding-window-based matching on the original sentence will answer the question Who called the police? with Mrs. Mustard. The dependency reordering enables the window to determine the correct answer, Jenny. 4.3 Combining Distributed Evidence It is important in comprehension to synthesize information found throughout a document. MCTest was explicitly designed to ensure that it could not be solved by lexical techniques alone, but would instead require some form of inference or limited reasoning (Richardson et al., 2013). It therefore includes questions where the evidence for an answer spans several sentences. To perform synthesis, our model also takes in ngrams of sentences, i.e., sentence pairs and triples strung together. The model treats these exactly as it treats single sentences, applying all functions detailed above. A later pooling operation combines scores across all n-grams (including the single-sentence input). This is described in the next subsection. 1We experimented with assigning unique edge weights to unique relation types in the dependency graph. However, this had negligible effect. We hypothesize that this is because dependency graphs are trees, which do not have cycles. 436 With n-grams, the model can combine information distributed across contiguous sentences. In some cases, however, the required evidence is spread across distant sentences. To give our model some capacity to deal with this scenario, we take the top N sentences as scored by all the preceding functions, and then repeat the scoring computations, viewing these top N as a single sentence. The reasoning behind these approaches can be explained well in a probabilistic setting. If we consider our similarity scores to model the likelihood of a text sentence given a hypothesis, p(tj | hi), then the n-gram and top N approaches model a joint probability p(tj1, tj2, . . . , tjk | hi). We cannot model the joint probability as a product of individual terms (score values) because distributed pieces of evidence are likely not independent. 
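Returning briefly to the dependency view of Section 4.2.3, the reordering in eqs. (8)–(9) can be sketched as below, assuming the parse is already available as a list of (head, dependent) index pairs; all edge weights are set to 1 here, which the footnote suggests makes little practical difference:

```python
import numpy as np

def fiedler_reorder(words, edges):
    """Reorder a sentence's words by the Fiedler vector, i.e., the
    eigenvector of the dependency graph's Laplacian with the second
    smallest eigenvalue (eq. 9: I = argsort(u2))."""
    n = len(words)
    adjacency = np.zeros((n, n))
    for head, dep in edges:          # unweighted, undirected graph
        adjacency[head, dep] = adjacency[dep, head] = 1.0
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    eigvals, eigvecs = np.linalg.eigh(laplacian)
    u2 = eigvecs[:, np.argsort(eigvals)[1]]      # Fiedler vector
    return [words[k] for k in np.argsort(u2)]
```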
4.4 Combining Perspectives We use a multilayer perceptron (MLP) to combine Msem, Mword, Mswd, and Msws, as well as the scores for separate n-grams, as a final matching score Mi for each answer candidate. This MLP has multiple layers of staged input, because the distinct scores have different dimensionality: there is one Msem and one Mword for each story sentence, and one Mswd and one Msws for each application of the sliding window. The MLP’s activation function is linear. Our overall training objective is to minimize the ranking loss L(T, q, A) = max(0, µ + max i Mi̸=i∗−Mi∗), (10) where µ is a constant margin, i∗indexes the correct answer. We take the maximum over i so that we are ranking the correct answer over the bestranked incorrect answer (of which there are three). This approach worked better than comparing the correct answer to the incorrect answers individually as in Wang et al. (2015). Our implementation of the Parallel-Hierarchical model, built in Theano (Bergstra et al., 2010) using the Keras framework (Chollet, 2015), is available on Github.2 4.5 Training Wheels Before training, we initialized the neural-network components of our model to perform sensible heuristic functions. Training did not converge on the small MCTest without this vital approach. 2https://github.com/Maluuba/mctest-model Empirically, we found that we could achieve above 50% accuracy on MCTest using a simple sum of word vectors followed by a dot product between the story-sentence sum and the hypothesis sum. Therefore, we initialized the network for the semantic perspective to perform this sum, by initializing Ax as the identity matrix and bx A as the zero vector, x ∈{t, h}. Recall that the activation function is a ReLU so that positive outputs are unchanged. We also found basic word-matching scores to be helpful, so we initialized the word-by-word networks likewise. The network for perspectivecombination was initialized to perform a sum of individual scores, using a zero bias-vector and a weight matrix of ones, since we found that each perspective contributed positively to the overall result. This training wheels approach is related to other techniques from the literature. For instance, Socher et al. (2013) proposed the identitymatrix initialization in the context of parsing, and Le et al. (2015) proposed it in the context of recurrent neural networks (to preserve the error signal through backpropagation). In residual networks (He et al., 2015), shortcut connections bypass certain layers in the network so that a simpler function can be trained in conjunction with the full model. 5 Experiments 5.1 The Dataset MCTest is a collection of 660 elementary-level children’s stories and associated questions, written by human subjects. The stories are fictional, ensuring that the answer must be found in the text itself, and carefully limited to what a young child can understand (Richardson et al., 2013). The more challenging variant consists of 500 stories with four multiple-choice questions each. Despite the elementary level, stories and questions are more natural and more complex than those found in synthetic MC datasets like bAbI (Weston et al., 2014) and CNN (Hermann et al., 2015). MCTest is challenging because it is both complicated and small. As per Hill et al. (2015), “it is very difficult to train statistical models only on MCTest.” Its size limits the number of parameters that can be trained, and prevents learning any complex language modeling simultaneously with the capacity to answer questions. 
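As a concrete reference for the objective in eq. (10) and the training-wheels initialization of Section 4.5, a minimal numpy sketch follows (our own names; the margin value shown is only a placeholder):

```python
import numpy as np

def ranking_loss(scores, correct_idx, margin=1.0):
    """Eq. (10): hinge loss that ranks the correct answer above the
    best-ranked incorrect candidate by at least the margin mu."""
    scores = np.asarray(scores, dtype=float)
    incorrect = np.delete(scores, correct_idx)
    return max(0.0, margin + incorrect.max() - scores[correct_idx])

def training_wheels_init(d):
    """Section 4.5: start the semantic transformation at the identity
    (A = I, b = 0), so the network initially computes a plain sum of
    word vectors; requires square weight matrices (d = D)."""
    return np.eye(d), np.zeros(d)
```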
437 5.2 Training and Model Details In this section we describe important details of the training procedure and model setup. For a complete list of hyperparameter settings, our stopword list, and other minutiæ, we refer interested readers to our Github repository. For word vectors we use Google’s publicly available embeddings, trained with word2vec on the 100-billion-word News corpus (Mikolov et al., 2013). These vectors are kept fixed throughout training, since we found that training them was not helpful (likely because of MCTest’s size). The vectors are 300-dimensional (d = 300). We do not use a stopword list for the text passage, instead relying on the trainable word weights to ascribe global importance ratings to words. These weights are initialized with the inverse document frequency (IDF) statistic computed over the MCTest corpus.3 However, we do use a short stopword list for questions. This list nullifies query words such as {who, what, when, where, how}, along with conjugations of the verbs to do and to be. Following earlier methods, we use a heuristic to improve performance on negation questions (Sachan et al., 2015; Wang et al., 2015). When a question contains the words which and not, we negate the hypothesis ranking scores so that the minimum becomes the maximum. This heuristic leads to an improvement around 6% on the validation set. The most important technique for training the model was the training wheels approach. Without this, training was not effective at all (see the ablation study in Table 2). The identity initialization requires that the network weight matrices are square (d = D). We found dropout (Srivastava et al., 2014) to be particularly effective at improving generalization from the training to the test set, and used 0.5 as the dropout probability. Dropout occurs after all neural-network transformations, if those transformations are allowed to change with training. Our best performing model held networks at the wordby-word level fixed. For combining distributed evidence, we used up to trigrams over sentences and our bestperforming model reiterated over the top two sentences (N = 2). 3We override the IDF initialization for words like not, which are frequent but highly informative. We used the Adam optimizer with the standard settings (Kingma and Ba, 2014) and a learning rate of 0.003. To determine the best hyperparameters we performed a search over 150 settings based on validation-set accuracy. MCTest’s original validation set is too small for reliable hyperparameter tuning, so, following Wang et al. (2015), we merged the training and validation sets of MCTest-160 and MCTest-500, then split them randomly into a 250-story training set and a 200story validation set. This repartition of the data did not affect overall performance per se; rather, the larger validation set made it easier to choose hyperparameters because validation results were more consistent. 5.3 Results Table 1 presents the performance of featureengineered and neural methods on the MCTest test set. Accuracy scores are divided among questions whose evidence lies in a single sentence (single) and across multiple sentences (multi), and among the two variants. Clearly, MCTest-160 is easier. The first three rows represent featureengineered methods. Richardson et al. (2013) + RTE is the best-performing variant of the original baseline published along with MCTest. It uses a lexical sliding window and distance-based measure, augmented with rules for recognizing textual entailment. 
We described the methods of Sachan et al. (2015) and Wang et al. (2015) in Section 3. On MCTest-500, the Parallel Hierarchical model significantly outperforms these methods on single questions (> 2%) and slightly outperforms the latter two on multi questions (≈0.3%) and overall (≈1%). The method of Wang et al. (2015) achieves the best overall result on MCTest-160. We suspect this is because our neural method suffered from the relative lack of training data. The last four rows in Table 1 are neural methods that we discussed in Section 3. Performance measures are taken from Yin et al. (2016). Here we see our model outperforming the alternatives by a large margin across the board (> 15%). The Neural Reasoner and the Attentive Reader are large, deep models with hundreds of thousands of parameters, so it is unsurprising that they performed poorly on MCTest. The specifically-designed HABCNN fared better, its convolutional architecture cutting down on the parameter count. Because there are similarities between our model and the 438 Method MCTest-160 accuracy (%) MCTest-500 accuracy (%) Single (112) Multiple (128) All Single (272) Multiple (328) All Richardson et al. (2013) + RTE 76.78 62.50 69.16 68.01 59.45 63.33 Sachan et al. (2015) 67.65 67.99 67.83 Wang et al. (2015) 84.22 67.85 75.27 72.05 67.94 69.94 Attentive Reader 48.1 44.7 46.3 44.4 39.5 41.9 Neural Reasoner 48.4 46.8 47.6 45.7 45.6 45.6 HABCNN-TE 63.3 62.9 63.1 54.2 51.7 52.9 Parallel-Hierarchical 79.46 70.31 74.58 74.26 68.29 71.00 Table 1: Experimental results on MCTest. Ablated component Validation accuracy (%) 70.13 n-gram 66.25 Top N 66.63 Sentential 65.00 SW-sequential 68.88 SW-dependency 69.75 Word weights 66.88 Trainable embeddings 63.50 Training wheels 34.75 Table 2: Ablation study on MCTest-500 (all). HABCNN, we hypothesize that the performance difference is attributable to the greater simplicity of our model and our training wheels methodology. 6 Analysis and Discussion We measure the contribution of each component of the model by ablating it. Results on the validation set are given in Table 2. Not surprisingly, the n-gram functionality is important, contributing almost 4% accuracy improvement. Without this, the model has almost no means for synthesizing distributed evidence. The top N function contributes similarly to the overall performance, suggesting that there is a nonnegligible number of multi questions that have their evidence distributed across noncontiguous sentences. Ablating the sentential component made a significant difference, reducing performance by about 5%. Simple word-by-word matching is obviously useful on MCTest. The sequential sliding window contributes about 1.3%, suggesting that word-distance measures are not overly important. Similarly, the dependency-based sliding window makes a very minor contribution. We found this surprising. It may be that linearization of the dependency graph removes too much of its information. The exogenous word weights make a significant contribution of over 3%. Allowing the embeddings to change with training reduced performance fairly significantly, almost 8%. As discussed, this is a case of having too many parameters for the available training data. Finally, we see that the training wheels methodology had enormous impact. Without heuristic-based initialization of the model’s various weight matrices, accuracy goes down to about 35%, which is only ten points over random chance. 
Analysis reveals that most of our system’s test failures occur on questions about quantity (e.g., How many...?) and temporal order (e.g., Who was invited last?). Quantity questions make up 9.5% of our errors on the validation set, while order questions make up 10.3%. This weakness is not unexpected, since our architecture lacks any capacity for counting or tracking temporal order. Incorporating mechanisms for these forms of reasoning is a priority for future work (in contrast, the Memory Network model (Weston et al., 2014) is quite good at temporal reasoning). The Parallel-Hierarchical model is simple. It does no complex language or sequence modeling. Its simplicity is a response to the limited data of MCTest. Nevertheless, the model achieves stateof-the-art results on the multi questions, which (putatively) require some limited reasoning. Our model is able to handle them reasonably well just by stringing important sentences together. Thus, the model imitates reasoning with a heuristic. This suggests that, to learn true reasoning abilities, MCTest is too simple a dataset—and it is almost certainly too small for this goal. However, it may be that human language processing can be factored into separate processes of comprehension and reasoning. If so, the ParallelHierarchical model is a good start on the former. Indeed, if we train the method exclusively on single questions then its results become even more impressive: we can achieve a test accuracy of 79.1% on MCTest-500. Note that this boost in performance comes from training on only about half the data. The ‘single’ questions can be con439 sidered a close analogue of the RTE task, at which our model becomes very adept even with less data. Incorporating the various views of our model amounts to encoding prior knowledge about the problem structure. This is similar to the purpose of feature engineering, except that the views can be fully trained. Encoding problem structure into the structure of neural networks is not new: as another example, the convolutional architecture has led to large gains in vision tasks. 7 Conclusion We have presented the novel Parallel-Hierarchical model for machine comprehension, and evaluated it on the small but complex MCTest. Our model achieves state-of-the-art results, outperforming several feature-engineered and neural approaches. Working with our model has emphasized to us the following (not necessarily novel) concepts, which we record here to promote further empirical validation. • Good comprehension of language is supported by hierarchical levels of understanding (cf. Hill et al. (2015)). • Exogenous attention (the trainable word weights) may be broadly helpful for NLP. • The training wheels approach, that is, initializing neural networks to perform sensible heuristics, appears helpful for small datasets. • Reasoning over language is challenging, but easily simulated in some cases. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Yoshua Bengio, R´ejean Ducharme, and Pascal Vincent. 2000. A neural probabilistic language model. In Advances in Neural Information Processing Systems, pages 932–938. J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio. 2010. Theano: a CPU and GPU math expression compiler. In In Proc. of SciPy. Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. 
In EMNLP, pages 740–750. Franc¸ois Chollet. 2015. keras. https://github.com/fchollet/keras. Silviu Cucerzan and Eugene Agichtein. 2005. Factoid question answering over unstructured and structured web content. In TREC, volume 72, page 90. Gene H Golub and Charles F Van Loan. 2012. Matrix computations, volume 3. JHU Press. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1684– 1692. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children’s books with explicit memory representations. arXiv preprint arXiv:1511.02301. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Ankit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, and Richard Socher. 2015. Ask me anything: Dynamic memory networks for natural language processing. arXiv preprint arXiv:1506.07285. Quoc V Le, Navdeep Jaitly, and Geoffrey E Hinton. 2015. A simple way to initialize recurrent networks of rectified linear units. arXiv preprint arXiv:1504.00941. Andrew R Mayer, Jill M Dorflinger, Stephen M Rao, and Michael Seidenberg. 2004. Neural networks underlying endogenous and exogenous visual–spatial orienting. Neuroimage, 23(2):534– 541. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Karthik Narasimhan and Regina Barzilay. 2015. Machine comprehension with discourse relations. In 53rd Annual Meeting of the Association for Computational Linguistics. Baolin Peng, Zhengdong Lu, Hang Li, and Kam-Fai Wong. 2015. Towards neural network-based reasoning. arXiv preprint arXiv:1508.05508. Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, volume 1, page 2. 440 Mrinmaya Sachan, Avinava Dubey, Eric P Xing, and Matthew Richardson. 2015. Learning answerentailing structures for machine comprehension. In Proceedings of ACL. Ellery Smith, Nicola Greco, Matko Bosnjak, and Andreas Vlachos. 2015. A strong lexical matching method for the machine comprehension test. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1693– 1698, Lisbon, Portugal, September. Association for Computational Linguistics. Richard Socher, John Bauer, Christopher D Manning, and Andrew Y Ng. 2013. Parsing with compositional vector grammars. In ACL (1), pages 455–465. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in Neural Information Processing Systems, pages 2431–2439. Shuohang Wang and Jing Jiang. 2015. Learning natural language inference with lstm. arXiv preprint arXiv:1512.08849. Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. 2015. Machine comprehension with syntax, frames, and semantics. In Proceedings of ACL, Volume 2: Short Papers, page 700. 
Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks. arXiv preprint arXiv:1410.3916. Wenpeng Yin, Sebastian Ebert, and Hinrich Sch¨utze. 2016. Attention-based convolutional neural network for machine comprehension. arXiv preprint arXiv:1602.04341. 441
2016
41
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 442–452, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Combining Natural Logic and Shallow Reasoning for Question Answering Gabor Angeli Stanford University Stanford, CA 94305 Neha Nayak Stanford University Stanford, CA 94305 Christopher D. Manning Stanford University Stanford, CA 94305 {angeli, nayakne, manning}@cs.stanford.edu Abstract Broad domain question answering is often difficult in the absence of structured knowledge bases, and can benefit from shallow lexical methods (broad coverage) and logical reasoning (high precision). We propose an approach for incorporating both of these signals in a unified framework based on natural logic. We extend the breadth of inferences afforded by natural logic to include relational entailment (e.g., buy →own) and meronymy (e.g., a person born in a city is born the city’s country). Furthermore, we train an evaluation function – akin to gameplaying – to evaluate the expected truth of candidate premises on the fly. We evaluate our approach on answering multiple choice science questions, achieving strong results on the dataset. 1 Introduction Question answering is an important task in NLP, and becomes both more important and more difficult when the answers are not supported by handcurated knowledge bases. In these cases, viewing question answering as textual entailment over a very large premise set can offer a means of generalizing reliably to open domain questions. A natural approach to textual entailment is to treat it as a logical entailment problem. However, this high-precision approach is not feasible in cases where a formal proof is difficult or impossible. For example, consider the following hypothesis (H) and its supporting premise (P) for the question Which part of a plant produces the seeds?: P: Ovaries are the female part of the flower, which produces eggs that are needed for making seeds. H: A flower produces the seeds. This requires a relatively large amount of inference: the most natural atomic fact in the sentence is that ovaries produce eggs. These inferences are feasible in a limited domain, but become difficult the more open-domain reasoning they require. In contrast, even a simple lexical overlap classifier could correctly predict the entailment. In fact, such a bag-of-words entailment model has been shown to be surprisingly effective on the Recognizing Textual Entailment (RTE) challenges (MacCartney, 2009). On the other hand, such methods are also notorious for ignoring even trivial cases of nonentailment that are easy for natural logic, e.g., recognizing negation in the example below: P: Eating candy for dinner is an example of a poor health habit. H: Eating candy is an example of a good health habit. We present an approach to leverage the benefits of both methods. Natural logic – a proof theory over the syntax of natural language – offers a framework for logical inference which is already familiar to lexical methods. As an inference system searches for a valid premise, the candidates it explores can be evaluated on their similarity to a premise by a conventional lexical classifier. We therefore extend a natural logic inference engine in two key ways: first, we handle relational entailment and meronymy, increasing the total number of inferences that can be made. 
We further implement an evaluation function which quickly provides an estimate for how likely a candidate premise is to be supported by the knowledge base, without running the full search. This can then more easily match a known premise despite still not matching exactly. We present the following contributions: (1) we extend the classes of inferences NaturalLI can perform on real-world sentences by incorporating relational entailment and meronymy, and by operat442 ing over dependency trees; (2) we augment NaturalLI with an evaluation function to provide an estimate of entailment for any query; and (3) we run our system over the Aristo science questions corpus, achieving the strong results. 2 Background We briefly review natural logic and NaturalLI – the existing inference engine we use. Much of this paper will extend this system, with additional inferences (Section 3) and a soft lexical classifier (Section 4). 2.1 Natural Logic Natural logic is a formal proof theory that aims to capture a subset of logical inferences by appealing directly to the structure of language, without needing either an abstract logical language (e.g., Markov Logic Networks; Richardson and Domingos (2006)) or denotations (e.g., semantic parsing; Liang and Potts (2015)). We use the logic introduced by the NatLog system (MacCartney and Manning, 2007; 2008; 2009), which was in turn based on earlier theoretical work on Monotonicity Calculus (van Benthem, 1986; S´anchez Valencia, 1991). We adopt the precise semantics of Icard and Moss (2014); we refer the reader to this paper for a more thorough introduction to the formalism. At a high level, natural logic proofs operate by mutating spans of text to ensure that the mutated sentence follows from the original – each step is much like a syllogistic inference. Each mutation in the proof follows three steps: 1. An atomic lexical relation is induced by either inserting, deleting or mutating a span in the sentence. For example, in Figure 1, mutating The to No induces the ⋏relation; mutating cat to carnivore induces the ⊑relation. The relations ≡and ⊑are variants of entailment; ⋏and ⇃↾are variants of negation. 2. This lexical relation between words is projected up to yield a relation between sentences, based on the polarity of the token. For instance, The cat eats animals ⊑some carnivores eat animals. We explain this in more detail below. 3. These sentence level relations are joined together to produce a relation between a premise, and a hypothesis multiple mutations away. For example in Figure 1, if we join ⊑, ≡, ⊑, and ⋏, we get negation (⇃↾). The notion of projecting a relation from a lexical item to a sentence is important to understand.1 To illustrate, cat ⊑animal, and some cat meows ⊑some animal meows (recall, ⊑denotes entailment), but no cat barks ̸⊑no animal barks. Despite differing by the same lexical relation, the sentence-level relation is different in the two cases. We appeal to two important concepts: monotonicity – a property of arguments to natural language operators; and polarity – a property of tokens. From the example above, some is monotone in its first argument (i.e., cat or animal), and no is antitone in its first argument. This means that the first argument to some is allowed to mutate up the specified hierarchy (e.g., hypernymy), whereas the first argument to no is allowed to mutate down. Polarity is a property of tokens in a sentence determined by the operators acting on it. 
All lexical items have upward polarity by default; monotone operators – like some, several, or a few – preserve polarity. Antitone operators – like no, not, and all (in its first argument) – reverse polarity. For example, mice in no cats eat mice has downward polarity, whereas mice in no cats don’t eat mice has upward polarity (it is in the scope of two downward monotone operators). As a final note, although we refer to the monotonicity calculus described above as natural logic, this formalism is only one of many possible natural logics. For example, McAllester and Givan (1992) introduce a syntax for first order logic which they call Montagovian syntax. This syntax has two key advantages over first order logic: first, the “quantifier-free” version of the syntax (roughly equivalent to the monotonicity calculus we use) is computationally efficient while still handling limited quantification. Second, the syntax more closely mirrors that of natural language. 2.2 NaturalLI We build our extensions within the framework of NaturalLI, introduced by Angeli and Manning (2014). NaturalLI casts inference as a search problem: given a hypothesis and an arbitrarily large corpus of text, it searches through the space of lexical mutations (e.g., cat →carnivore), with associated costs, until a premise is found. An example search using NaturalLI is given in Figure 1. The relations along the edges denote re1For clarity we describe a simplified semantics here; NaturalLI implements the semantics described in Icard and Moss (2014). 443 No carnivores eat animals? The carnivores eat animals The cat eats animals The cat ate an animal The cat ate a mouse ⊒ ≡ ⊒ ⋏ No animals eat animals No animals eat things ⊒ . . . ⊒ . . . Figure 1: An illustration of NaturalLI searching for a candidate premise to support the hypothesis at the root of the tree. We are searching from a hypothesis no carnivores eat animals, and find a contradicting premise the cat ate a mouse. The edge labels denote Natural Logic inference steps. lations between the associated sentences – i.e., the projected lexical relations from Section 2.2. Importantly, and in contrast with traditional entailment systems, NaturalLI searches over an arbitrarily large knowledge base of textual premises rather than a single premise/hypothesis pair. 3 Improving Inference in NaturalLI We extend NaturalLI in three ways to improve its coverage. We adapt the search algorithm to operate over dependency trees rather than the surface forms (Section 3.1). We enrich the class of inferences warranted by natural logic beyond hypernymy and operator rewording to also encompass meronymy and relational entailment (Section 3.2). Lastly, we handle token insertions during search more elegantly (Section 3.3). The general search algorithm in NaturalLI is parametrized as follows: First, an order is chosen to traverse the tokens in a sentence. For example, the original paper traverses tokens left-toright. At each token, one of three operations can be performed: deleting a token (corresponding to inserting a word in the proof derivation), mutating a token, and inserting a token (corresponding to deleting a token in the proof derivation). 3.1 Natural logic over Dependency Trees Operating over dependency trees rather than a token sequence requires reworking (1) the semantics of deleting a token during search, and (2) the order in which the sentence is traversed. 
We recently defined a mapping from Stanford Dependency relations to the associated lexical relation deleting the dependent subtree would induce (Angeli et al., 2015). We adapt this mapping to yield the relation induced by inserting a given dependency edge, corresponding to our deletions in search; we also convert the mapping to use Universal Dependencies (de Marneffe et al., 2014). This now lends a natural deletion operation: at a given node, the subtree rooted at that node can be deleted to induce the associated natural logic relation. For example, we can infer that all truly notorious villains have lairs from the premise all villains have lairs by observing that deleting an amod arc induces the relation ⊒, which in the downward polarity context of villains↓projects to ⊑(entailment): All↑truly↓notorious↓villains↓have↑lairs↑. operator nsubj amod advmod dobj An admittedly rare but interesting subtlety in the order we chose to traverse the tokens in the sentence is the effect mutating an operator has on the polarity of its arguments. For example, mutating some to all changes the polarity of its first argument. There are cases where we must mutate the argument to the operator before the operator itself, as well as cases where we must mutate the operator before its arguments. Consider, for instance: P: All felines have a tail H: Some cats have a tail where we must first mutate cat to feline, versus: P: All cats have a tail H: Some felines have a tail where we must first mutate some to all. Therefore, our traversal first visits each operator, then performs a breadth-first traversal of the tree, and then visits each operator a second time. 3.2 Meronymy and Relational Entailment Although natural logic and the underlying monotonicity calculus has only been explored in the context of hypernymy, the underlying framework can be applied to any partial order. Natural language operators can be defined as a mapping from denotations of objects to truth values. The domain of word denotations is then or444 (a) x = Felix kitten cat animal thing Denotation of word x False True Truth Value of Sentence all x drink milk some x bark (b) x = Hilo Big Island Hawaii USA North America Denotation of word x False True Truth Value of Sentence Obama was born in x x is an island Figure 2: An illustration of monotonicity using different partial orders. (a) The monotonicity of all and some in their first arguments, over a domain of denotations. (b) An illustration of the born in monotone operator over the meronymy hierarchy, and the operator is an island as neither monotone or antitone. dered by the subset operator, corresponding to ordering by hypernymy over the words.2 However, hypernymy is not the only useful partial ordering over denotations. We include two additional orderings as motivating examples: relational entailment and meronymy. Relational Entailment For two verbs v1 and v2, we define v1 ≤v2 if the first verb entails the second. In many cases, a verb v1 may entail a verb v2 even if v2 is not a hypernym of v1. For example, to sell something (hopefully) entails owning that thing. Apart from context-specific cases (e.g., orbit entails launch only for man-made objects), these hold largely independent of context. Note that the usual operators apply to relational entailments – if all cactus owners live in Arizona then all cactus sellers live in Arizona. This information was incorporated using data from VERBOCEAN (Chklovski and Pantel, 2004), adapting the confidence weights as transition costs. 
VERBOCEAN uses lexicosyntactic patterns to score pairs of verbs as candidate participants in a set of relations. We approximate the VERBOCEAN relations stronger-than(v1, v2) (e.g., to kill is stronger than to wound) and 2Truth values are a trivial partial order corresponding to entailment: if t1 ≤t2 (i.e., t1 ⊑t2), and you know that t1 is true, then t2 must be true. happens-before(v2, v1) (e.g., buying happens before owning) to indicate that v1 entails v2. These verb entailment transitions are incorporated using costs derived from the original weights from Chklovski and Pantel (2004). Meronymy The most salient use-case for meronymy is with locations. For example, if Obama was born in Hawaii, then we know that Obama was born in America, because Hawaii is a meronym of (part of) America. Unlike relational entailment and hypernymy, meronymy is operated on by a distinct set of operators: if Hawaii is an island, we cannot necessarily entail that America is an island. We semi-automatically collect a set of 81 operators (e.g., born in, visited) which then compose in the usual way with the conventional operators (e.g., some, all). These operators consist of dependency paths of length 2 that co-occurred in newswire text with a named entity of type PERSON and two different named entities of type LOCATION, such that one location was a meronym of the other. All other operators are considered nonmonotone with respect to the meronym hierarchy. Note that these are not the only two orders that can be incorporated into our framework; they just happen to be two which have lexical resources available and are likely to be useful in real-world entailment tasks. 3.3 Removing the Insertion Transition Inserting words during search poses an inherent problem, as the space of possible words to insert at any position is on the order of the size of the vocabulary. In NaturalLI, this was solved by keeping a trie of possible insertions, and using that to prune this space. This is both computationally slow and adapts awkwardly to a search over dependency trees. Therefore, this work instead opts to perform a bidirectional search: when constructing the knowledge base, we add not only the original sentence but also all entailments with subtrees deleted. For example, a premise of some furry cats have tails would yield two facts for the knowledge base: some furry cats have tails as well as some cats have tails. For this, we use the process described in Angeli et al. (2015) to generate short entailed sentences from a long utterance using natural logic. This then leaves the reverse search to only deal with mutations and inference insertions, 445 which are relatively easier. The new challenge this introduces, of course, is the additional space required to store the new facts. To mitigate this, we hash every fact into a 64 bit integer, and store only the hashed value in the knowledge base. We construct this hash function such that it operates over a bag of edges in the dependency tree. This has two key properties: it allows us to be invariant to the word order of of the sentence, and more importantly it allows us to run our search directly over modifications to this hash function. To elaborate, we notice that each of the two classes of operations our search is performing are done locally over a single dependency edge. When adding an edge, we can simply take the XOR of the hash saved in the parent state and the hash of the added edge. 
When mutating an edge, we XOR the hash of the parent state with the edge we are mutating, and again with the mutated edge. In this way, each search node need only carry an 8 byte hash, local information about the edge currently being considered (8 bytes), global information about the words deleted during search (5 bytes), a 3 byte backpointer to recover the inference path, and 8 bytes of operator metadata – 32 bytes in all, amounting to exactly half a cache line on our machines. This careful attention to data structures and memory layout turn out to have a large impact on runtime efficiency. More details are given in Angeli (2016). 4 An Evaluation Function for NaturalLI There are many cases – particularly as the length of the premise and the hypothesis grow – where despite our improvements NaturalLI will fail to find any supporting premises; for example: P: Food serves mainly for growth, energy and body repair, maintenance and protection. H: Animals get energy for growth and repair from food. In addition to requiring reasoning with multiple implicit premises (a concomitant weak point of natural logic), a correct interpretation of the sentence requires fairly nontrivial nonlocal reasoning: Food serves mainly for x →Animals get x from food. Nonetheless, there enough lexical clues in the sentence that even a simple entailment classifier would get the example correct. We build such a classifier and adapt it as an evaluation function inside NaturalLI in case no premises are found during search. 4.1 A Standalone Entailment Classifier Our entailment classifier is designed to be as domain independent as possible; therefore we define only 5 unlexicalized real-valued features, with an optional sixth feature encoding the score output by the Solr information extraction system (in turn built upon Lucene). In fact, this classifier is a stronger baseline than it may seem: evaluating the system on RTE-3 (Giampiccolo et al., 2007) yielded 63.75% accuracy – 2 points above the median submission. All five of the core features are based on an alignment of keyphrases between the premise and the hypothesis. A keyphrase is defined as a span of text which is either (1) a possibly empty sequence of adjectives and adverbs followed by a sequence of nouns, and optionally followed by either of or the possessive marker (’s), and another noun (e.g., sneaky kitten or pail of water); (2) a possibly empty sequence of adverbs followed by a verb (e.g., quietly pounce); or (3) a gerund followed by a noun (e.g., flowing water). The verb to be is never a keyphrase. We make a distinction between a keyphrase and a keyword – the latter is a single noun, adjective, or verb. We then align keyphrases in the premise and hypothesis by applying a series of sieves. First, all exact matches are aligned to each other. Then, prefix or suffix matches are aligned, then if either keyphrase contains the other they are aligned as well. Last, we align a keyphrase in the premise pi to a keyphrase in the hypothesis hk if there is an alignment between pi−1 and hk−1 and between pi+1 and hk+1. This forces any keyphrase pair which is “sandwiched” between aligned pairs to be aligned as well. An example alignment is given in Figure 3. Features are extracted for the number of alignments, the numbers of alignments which do and do not match perfectly, and the number of keyphrases in the premise and hypothesis which were not aligned. 
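A minimal sketch of these alignment sieves and the resulting counts, assuming keyphrases have already been extracted as strings (the POS-pattern extraction described above is omitted, and the tie-breaking choices here are our own assumptions):

```python
# Sketch of the keyphrase alignment sieves and the count features.

def align_keyphrases(premise, hypothesis):
    """Map hypothesis keyphrase indices to premise keyphrase indices."""
    alignment = {}

    def run_sieve(match_fn):
        for k, h in enumerate(hypothesis):
            if k in alignment:
                continue
            for i, p in enumerate(premise):
                if match_fn(p, h):
                    alignment[k] = i
                    break

    run_sieve(lambda p, h: p == h)                          # 1. exact match
    run_sieve(lambda p, h: p.startswith(h) or h.startswith(p)
                        or p.endswith(h) or h.endswith(p))  # 2. prefix/suffix
    run_sieve(lambda p, h: p in h or h in p)                # 3. containment
    # 4. "sandwiched" between two aligned neighbours
    for k in range(1, len(hypothesis) - 1):
        if k not in alignment and k - 1 in alignment and k + 1 in alignment \
                and alignment[k + 1] - alignment[k - 1] == 2:
            alignment[k] = alignment[k - 1] + 1
    return alignment

def alignment_features(premise, hypothesis):
    a = align_keyphrases(premise, hypothesis)
    perfect = sum(1 for k, i in a.items() if hypothesis[k] == premise[i])
    return {"aligned": len(a), "perfect": perfect, "imperfect": len(a) - perfect,
            "unaligned_premise": len(premise) - len(set(a.values())),
            "unaligned_hypothesis": len(hypothesis) - len(a)}

print(alignment_features(
    ["heat energy", "transferred", "stove", "boil", "water", "pan"],
    ["heat water", "stove", "thermal energy", "transferred"]))
```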
A feature for the Solr score of the premise given the hypothesis is optionally included; we revisit this issue in the evaluation. 4.2 An Evaluation Function for Search A version of the classifier constructed in Section 4.1, but over keywords rather than keyphrases can be incorporated directly into NaturalLI’s search to give a score for each candidate premise 446 Heat energy is being transferred when a stove is used to boil water in a pan. When you heat water on a stove, thermal energy is transferred. Figure 3: An illustration of an alignment between a premise and a hypothesis. Keyphrases can be multiple words (e.g., heat energy), and can be approximately matched (e.g., to thermal energy). In the premise, used, boil and pan are unaligned. Note that heat water is incorrectly tagged as a compound noun. visited. This can be thought of as analogous to the evaluation function in game-playing search – even though an agent cannot play a game of Chess to completion, at some depth it can apply an evaluation function to its leaf states. Using keywords rather than keyphrases is in general a hindrance to the fuzzy alignments the system can produce. Importantly though, this allows the feature values to be computed incrementally as the search progresses, based on the score of the parent state and the mutation or deletion being performed. For instance, if we are deleting a word which was previously aligned perfectly to the premise, we would subtract the weight for a perfect and imperfect alignment, and add the weight for an unaligned premise keyphrase. This has the same effect as applying the trained classifier to the new state, and uses the same weights learned for this classifier, but requires substantially less computation. In addition to finding entailments from candidate premises, our system also allows us to encode a notion of likely negation. We can consider the following two statements na¨ıvely sharing every keyword. Each token marked with its polarity: P: some↑cats↑have↑tails↑ H: no↑cats↓have↓tails↓ However, we note that all of the keyword pairs are in opposite polarity contexts. We can therefore define a pair of keywords as matching in NaturalLI if the following two conditions hold: (1) their lemmatized surface forms match exactly, and (2) they have the same polarity in the sentence. The second constraint encodes a good approximation for negation. To illustrate, consider the polarity signatures of common operators: Operators Subj. polarity Obj. polarity Some, few, etc. ↑ ↑ All, every, etc. ↓ ↑ Not all, etc. ↑ ↓ No, not, etc. ↓ ↓ Most, many, etc. – ↑ We note that most contradictory operators (e.g., some/no; all/not all) induce the exact opposite polarity on their arguments. Otherwise, pairs of operators which share half their signature are usually compatible with each other (e.g., some and all). This suggests a criterion for likely negation: If the highest classifier score is produced by a contradictory candidate premise, we have reason to believe that we may have found a contradiction. To illustrate with our example, NaturalLI would mutate no cats have tails to the cats have tails, at which point it has found a contradictory candidate premise which has perfect overlap with the premise some cats have tails. Even had we not found the exact premise, this suggests that the hypothesis is likely false. 5 Related Work This work is similar in many ways to work on recognizing textual entailment – e.g., Schoenmackers et al. (2010), Berant et al. (2011), Lewis and Steedman (2013). 
In the RTE task, a single premise and a single hypothesis are given as input, and a system must return a judgment of either entailment or nonentailment (in later years, nonentailment is further split into contradiction and independence). These approaches often rely on alignment features, similar to ours, but do not generally scale to large premise sets (i.e., a comprehensive knowledge base). The discourse commitments in Hickl and Bensley (2007) can be thought of as similar to the additional entailed facts we add to the knowledge base (Section 3.3). In another line of work, Tian et al. (2014) approach the 447 RTE problem by parsing into Dependency Compositional Semantics (DCS) (Liang et al., 2011). This work particularly relevant in that it also incorporates an evaluation function (using distributional similarity) to augment their theorem prover – although in their case, this requires a translation back and forth between DCS and language. Beltagy et al. (To appear 2016) takes a similar approach, but encoding distributional information directly in entailment rules in a Markov Logic Network (Richardson and Domingos, 2006). Many systems make use of structured knowledge bases for question answering. Semantic parsing methods (Zettlemoyer and Collins, 2005; Liang et al., 2011) use knowledge bases like Freebase to find support for a complex question. Knowledge base completion (e.g., Chen et al. (2013), Bordes et al. (2011), or Riedel et al. (2013)) can be thought of as entailment, predicting novel knowledge base entries from the original database. In contrast, this work runs inference over arbitrary text without needing a structured knowledge base. Open IE (Wu and Weld, 2010; Mausam et al., 2012) QA approaches – e.g., Fader et al. (2014) are closer to operating over plain text, but still requires structured extractions. Of course, this work is not alone in attempting to incorporate strict logical reasoning into question answering systems. The COGEX system (Moldovan et al., 2003) incorporates a theorem prover into a QA system, boosting overall performance on the TREC QA task. Similarly, Watson (Ferrucci et al., 2010) incorporates logical reasoning components alongside shallower methods. This work follows a similar vein, but both the theorem prover and lexical classifier operate over text, without requiring either the premises or axioms to be in logical forms. On the Aristo corpus we evaluate on, Hixon et al. (2015) proposes a dialog system to augment a knowledge graph used for answering the questions. This is in a sense an oracle measure, where a human is consulted while answering the question; although, they show that their additional extractions help answer questions other than the one the dialog was collected for. 6 Evaluation We evaluate our entailment system on the Regents Science Exam portion of the Aristo dataset (Clark et al., 2013; Clark, 2015). The dataset consists of a collection of multiple-choice science questions from the New York Regents 4th Grade Science Exams (NYSED, 2014). Each multiple choice option is translated to a candidate hypotheses. A large corpus is given as a knowledge base; the task is to find support in this knowledge base for the hypothesis. Our system is in many ways well-suited to the dataset. Although certainly many of the facts require complex reasoning (see Section 6.4), the majority can be answered from a single premise. 
Unlike FraCaS (Cooper et al., 1996) or the RTE challenges, however, the task does not have explicit premises to run inference from, but rather must infer the truth of the hypothesis from a large collection of supporting text. 6.1 Data Processing We make use of two collections of unlabeled corpora for our experiments. The first of these is the Barron’s study guide (BARRON’S), consisting of 1200 sentences. This is the corpus used by Hixon et al. (2015) for their conversational dialog engine Knowbot, and therefore constitutes a more fair comparison against their results. However, we also make use of the full SCITEXT corpus (Clark et al., 2014). This corpus consists of 1 316 278 supporting sentences, including the Barron’s study guide alongside simple Wikipedia, dictionaries, and a science textbook. Since we lose all document context when searching over the corpus with NaturalLI, we first pre-process the corpus to resolve high-precision cases of pronominal coreference, via a set of very simple high-precision sieves. This finds the most recent candidate antecedent (NP or named entity) which, in order of preference, matches either the pronoun’s animacy, gender, and number. Filtering to remove duplicate sentences and sentences containing non-ASCII characters yields a total of 822 748 facts in the corpus. These sentences were then indexed using Solr. The set of promising premises for the soft alignment in Section 4, as well as the Solr score feature in the lexical classifier (Section 4.1), were obtained by querying Solr using the default similarity metric and scoring function. On the query side, questions were converted to answers using the same methodology as Hixon et al. (2015). In cases where the question contained multiple sentences, only the last sentence was considered. As 448 discussed in Section 6.4, we do not attempt reasoning over multiple sentences, and the last sentence is likely the most informative sentence in a longer passage. 6.2 Training an Entailment Classifier To train a soft entailment classifier, we needed a set of positive and negative entailment instances. These were collected on Mechanical Turk. In particular, for each true hypothesis in the training set and for each sentence in the Barron’s study guide, we found the top 8 results from Solr and considered these to be candidate entailments. These were then shown to Turkers, who decided whether the premise entailed the hypothesis, the hypothesis entailed the premise, both, or neither. Note that each pair was shown to only one Turker, lowering the cost of data collection, but consequently resulting in a somewhat noisy dataset. The data was augmented with additional negatives, collected by taking the top 10 Solr results for each false hypothesis in the training set. This yielded a total of 21 306 examples. The scores returned from NaturalLI incorporate negation in two ways: if NaturalLI finds a contradictory premise, the score is set to zero. If NaturalLI finds a soft negation (see Section 4.2), and did not find an explicit supporting premise, the score is discounted by 0.75 – a value tuned on the training set. For all systems, any premise which did not contain the candidate answer to the multiple choice query was discounted by a value tuned on the training set. 6.3 Experimental Results We present results on the Aristo dataset in Table 1, alongside prior work and strong baselines. 
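Before discussing the numbers, a rough sketch of how the adjustments just described combine into a final score (the 0.0 and 0.75 values are from the text; the missing-answer discount below is a placeholder for the value tuned on the training set):

```python
# Rough sketch of the score adjustments from Section 6.2; not the exact
# implementation. missing_answer_discount stands in for a tuned value.

def adjusted_score(raw_score, found_contradiction, found_soft_negation,
                   found_support, premise_contains_answer,
                   missing_answer_discount=0.5):
    if found_contradiction:                 # explicit contradictory premise
        return 0.0
    score = raw_score
    if found_soft_negation and not found_support:
        score *= 0.75                       # soft-negation discount
    if not premise_contains_answer:
        score *= missing_answer_discount
    return score
```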
In all cases, NaturalLI is run with the evaluation function enabled; the limited size of the text corpus and the complexity of the questions would cause the basic NaturalLI system to perform poorly. The test set for this corpus consists of only 68 examples, and therefore both perceived large differences in model scores and the apparent best system should be interpreted cautiously. NaturalLI consistently achieves the best training accuracy, and is more stable between configurations on the test set. For instance, it may be consistently discarding lexically similar but actually contradictory premises that often confuse some subset of the baselines. KNOWBOT is the dialog system presented in Hixon et al. (2015). We report numbers for two System Barron’s SCITEXT Train Test Train Test KNOWBOT (held-out) 45 – – – KNOWBOT (oracle) 57 – – – Solr Only 49 42 62 58 Classifier 53 52 68 60 + Solr 53 48 66 64 Evaluation Function 52 54 61 63 + Solr 50 45 62 58 NaturalLI 52 51 65 61 + Solr 55 49 73 61 + Solr + Classifier 55 49 74 67 Table 1: Accuracy on the Aristo science questions dataset. All NaturalLI runs include the evaluation function. Results are reported using only the Barron’s study guide or SCITEXT as the supporting KNOWBOT is the dialog system presented in Hixon et. al (2015). The held-out version uses additional facts from other question’s dialogs; the oracle version made use of human input on the question it was answering. The test set did not exist at the time KNOWBOT was published. variants of the system: held-out is the system’s performance when it is not allowed to use the dialog collected from humans for the example it is answering; oracle is the full system. Note that the oracle variant is a human-in-the-loop system. We additionally present three baselines. The first simply uses Solr’s IR confidence to rank entailment (Solr Only in Table 1). The max IR score of any premise given a hypothesis is taken as the score for that hypothesis. Furthermore, we report results for the entailment classifier defined in Section 4.1 (Classifier), optionally including the Solr score as a feature. We also report performance of the evaluation function in NaturalLI applied directly to the premise and hypothesis, without any inference (Evaluation Function). Last, we evaluate NaturalLI with the improvements presented in this paper (NaturalLI in Table 1). We additionally tune weights on our training set for a simple model combination with (1) Solr (with weight 6:1 for NaturalLI) and (2) the standalone classifier (with weight 24:1 for NaturalLI). Empirically, both parameters were observed to be fairly robust. To demonstrate the system’s robustness on a larger dataset, we additionally evaluate on a test set of 250 additional science exam questions, with 449 System Test Accuracy Solr Only 46.8 Classifier 43.6 NaturalLI 46.4 + Solr 48.0 Table 2: Results of our baselines and NaturalLI on a larger dataset of 250 examples. All NaturalLI runs include the evaluation function. an associated 500 example training set (and 249 example development set). These are substantially more difficult as they contain a far larger number of questions that require an understanding of a more complex process. Nonetheless, the trend illustrated in Table 1 holds for this larger set, as shown in Table 2. 
Note that with a web-scale corpus, accuracy of an IR-based system can be pushed up to 51.4%; a PMI-based solver, in turn, achieves an accuracy of 54.8% – admittedly higher than our best system (Clark et al., 2016).3 An interesting avenue of future work would be to run NaturalLI over such a large web-scale corpus, and to incorporate PMI-based statistics into the evaluation function. 6.4 Discussion We analyze some common types of errors made by the system on the training set. The most common error can be attributed to the question requiring complex reasoning about multiple premises. 29 of 108 questions in the training set (26%) contain multiple premises. Some of these cases can be recovered from (e.g., This happens because the smooth road has less friction.), while others are trivially out of scope for our method (e.g., The volume of water most likely decreased.). Although there is usually still some signal for which answer is most likely to be correct, these questions are fundamentally out-of-scope for the approach. Another class of errors which deserves mention are cases where a system produces the same score for multiple answers. This occurs fairly frequently in the standalone classifier (7% of examples in training; 4% loss from random guesses), and especially often in NaturalLI (11%; 6% loss from random guesses). This offers some insight into why incorporating other models – even with low weight – can offer significant boosts in the per3Results from personal correspondence with the authors. formance of NaturalLI. Both this and the previous class could be further mitigated by having a notion of a process, as in Berant et al. (2014). Other questions are simply not supported by any single sentence in the corpus. For example, A human offspring can inherit blue eyes has no support in the corpus that does not require significant multi-step inferences. A remaining chunk of errors are simply classification errors. For example, Water freezing is an example of a gas changing to a solid is marked as the best hypothesis, supported incorrectly by An ice cube is an example of matter that changes from a solid to a liquid to a gas, which after mutating water to ice cube matches every keyword in the hypothesis. 7 Conclusion We have improved NaturalLI to be more robust for question answering by running the inference over dependency trees, pre-computing deletions, and incorporating a soft evaluation function for predicting likely entailments when formal support could not be found. Lastly, we show that relational entailment and meronymy can be elegantly incorporated into natural logic. These features allow us to perform large-scale broad domain question answering, achieving strong results on the Aristo science exams corpus. Acknowledgments We thank the anonymous reviewers for their thoughtful comments. We gratefully acknowledge the support of the Allen Institute for Artificial Intelligence, and in particular Peter Clark and Oren Etzioni for valuable discussions, as well as for access to the Aristo corpora and associated preprocessing. We would also like to acknowledge the support of the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of AI2, DARPA, AFRL, or the US government. 450 References Gabor Angeli and Christopher D. Manning. 
2014. NaturalLI: Natural logic inference for common sense reasoning. In EMNLP. Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging linguistic structure for open domain information extraction. In ACL. Gabor Angeli. 2016. Learning Open Domain Knowledge From Text. Ph.D. thesis, Stanford University. Islam Beltagy, Stephen Roller, Pengxiang Cheng, Katrin Erk, and Raymond J. Mooney. To appear, 2016. Representing meaning with a combination of logical and distributional models. Computational Linguistics. Jonathan Berant, Ido Dagan, and Jacob Goldberger. 2011. Global learning of typed entailment rules. In Proceedings of ACL, Portland, OR. Jonathan Berant, Vivek Srikumar, Pei-Chun Chen, Brad Huang, Christopher D Manning, Abby Vander Linden, Brittany Harding, and Peter Clark. 2014. Modeling biological processes for reading comprehension. In Proc. EMNLP. Antoine Bordes, Jason Weston, Ronan Collobert, Yoshua Bengio, et al. 2011. Learning structured embeddings of knowledge bases. In AAAI. Danqi Chen, Richard Socher, Christopher D Manning, and Andrew Y Ng. 2013. Learning new facts from knowledge bases with neural tensor networks and semantic word vectors. arXiv preprint arXiv:1301.3618. Timothy Chklovski and Patrick Pantel. 2004. Verbocean: Mining the web for fine-grained semantic verb relations. In EMNLP. Peter Clark, Philip Harrison, and Niranjan Balasubramanian. 2013. A study of the knowledge base requirements for passing an elementary science test. In AKBC. Peter Clark, Niranjan Balasubramanian, Sumithra Bhakthavatsalam, Kevin Humphreys, Jesse Kinkead, Ashish Sabharwal, and Oyvind Tafjord. 2014. Automatic construction of inferencesupporting knowledge bases. AKBC. Peter Clark, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Turney, and Daniel Khashabi. 2016. Combining retrieval, statistics, and inference to answer elementary science questions. Peter Clark. 2015. Elementary school science and math tests as a driver for AI: Take the Aristo challenge! AAAI. Robin Cooper, Dick Crouch, Jan Van Eijck, Chris Fox, Johan Van Genabith, Jan Jaspars, Hans Kamp, David Milward, Manfred Pinkal, Massimo Poesio, et al. 1996. Using the framework. Technical report, The FraCaS Consortium. Marie-Catherine de Marneffe, Timothy Dozat, Natalia Silveira, Katri Haverinen, Filip Ginter, Joakim Nivre, and Christopher D Manning. 2014. Universal Stanford dependencies: A cross-linguistic typology. In Proceedings of LREC. Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In KDD. David Ferrucci, Eric Brown, Jennifer Chu-Carroll, Jmes Fan, David Gondek, Aditya Kalyanpur, Adam Lally, J. William Murdock, Eric Nyberg, John Prager, Nico Schlaefer, and Chris Welty. 2010. The AI behind Watson. The AI Magazine. Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. In Proc. of the ACL-PASCAL workshop on textual entailment and paraphrasing. Association for Computational Linguistics. Andrew Hickl and Jeremy Bensley. 2007. A discourse commitment-based framework for recognizing textual entailment. In ACL-PASCAL Workshop on Textual Entailment and Paraphrasing. Ben Hixon, Peter Clark, and Hannaneh Hajishirzi. 2015. Learning knowledge graphs for question answering through conversational dialog. NAACL. Thomas Icard, III and Lawrence Moss. 2014. Recent progress on monotonicity. Linguistic Issues in Language Technology. 
Mike Lewis and Mark Steedman. 2013. Combined distributional and logical semantics. TACL, 1:179– 192. Percy Liang and Christopher Potts. 2015. Corpusbased semantics and pragmatics. Annual Review of Linguistics, 1(1). Percy Liang, Michael Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In ACL. Bill MacCartney and Christopher D Manning. 2007. Natural logic for textual inference. In ACL-PASCAL Workshop on Textual Entailment and Paraphrasing. Bill MacCartney and Christopher D Manning. 2008. Modeling semantic containment and exclusion in natural language inference. In Coling. Bill MacCartney and Christopher D Manning. 2009. An extended model of natural logic. In Proceedings of the eighth international conference on computational semantics. 451 Bill MacCartney. 2009. Natural Language Inference. Ph.D. thesis, Stanford. Mausam, Michael Schmitz, Robert Bart, Stephen Soderland, and Oren Etzioni. 2012. Open language learning for information extraction. In EMNLP. David A McAllester and Robert Givan. 1992. Natural language syntax and first-order inference. Artificial Intelligence, 56(1):1–20. Dan Moldovan, Christine Clark, Sanda Harabagiu, and Steve Maiorano. 2003. COGEX: A logic prover for question answering. In NAACL. NYSED. 2014. The grade 4 elementary-level science test. http://www.nysedregents. org/Grade4/Science/home.html. Matthew Richardson and Pedro Domingos. 2006. Markov logic networks. Machine learning, 62(12):107–136. Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In NAACL-HLT. V´ıctor Manuel S´anchez Valencia. 1991. Studies on natural logic and categorial grammar. Ph.D. thesis, University of Amsterdam. Stefan Schoenmackers, Oren Etzioni, Daniel S Weld, and Jesse Davis. 2010. Learning first-order horn clauses from web text. In EMNLP. Ran Tian, Yusuke Miyao, and Takuya Matsuzaki. 2014. Logical inference on dependency-based compositional semantics. In ACL. Johan van Benthem. 1986. Essays in logical semantics. Springer. Fei Wu and Daniel S Weld. 2010. Open information extraction using wikipedia. In ACL. Association for Computational Linguistics. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In UAI. AUAI Press. 452
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 453–463, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Easy Questions First? A Case Study on Curriculum Learning for Question Answering Mrinmaya Sachan Eric P. Xing School of Computer Science Carnegie Mellon University {mrinmays, epxing}@cs.cmu.edu Abstract Cognitive science researchers have emphasized the importance of ordering a complex task into a sequence of easy to hard problems. Such an ordering provides an easier path to learning and increases the speed of acquisition of the task compared to conventional learning. Recent works in machine learning have explored a curriculum learning approach called selfpaced learning which orders data samples on the easiness scale so that easy samples can be introduced to the learning algorithm first and harder samples can be introduced successively. We introduce a number of heuristics that improve upon selfpaced learning. Then, we argue that incorporating easy, yet, a diverse set of samples can further improve learning. We compare these curriculum learning proposals in the context of four non-convex models for QA and show that they lead to real improvements in each of them. 1 Introduction A key challenge in building an intelligent agent is in modeling the incrementality and the cumulative nature of human learning (Skinner, 1958; Peterson, 2004; Krueger and Dayan, 2009). Children typically learn grade by grade, progressing from simple concepts to more complex ones. Given a complex set of concepts, it is often the case that some concepts are easier than others. Some concepts can even be prerequisite to learning other concepts. Hence, evolving a useful curriculum where easy concepts are presented first and more complex concepts are gradually introduced can be beneficial for learning. We explore methods for learning a curriculum in the context of non-convex models for question answering. Curriculum learning (CL) (Bengio et al., 2009) and self-paced learning (SPL) (Kumar et al., 2010) have been recently introduced in machine learning literature. However, their usefulness in the context of NLP tasks such as QA has not been studied so far. The main challenge in learning a curriculum is that it requires the identification of easy and hard concepts in the given training dataset. However, in real-world applications, such a ranking of training samples is difficult to obtain. Furthermore, a human judgement of ‘easiness’ of a task might not correlate with what is easy for the algorithm in the feature and hypothesis space employed for the given application. SPL combines the selection of the curriculum and the learning task in a single objective. The easiness of a question in self-paced learning is defined by its local loss. We propose and study other heuristics that define a measure of easiness and learn the curriculum by selecting samples using this measure. These heuristics are similar to those used in active learning, but with one key difference. In curriculum learning, all the training examples and labels are already known, which is not the case in active learning. Our experiments show that these heuristics work well in practice. While the strategy of learning from easy questions first and then gradually handling harder questions is supported by many cognitive scientists, others (Cantor, 1946) argue that it is also important to expose the learning to diverse (even if sometimes harder) examples. 
We argue that the right curriculum should not only be arranged in the increasing order of difficulty but also introduce the learner to sufficient number of diverse examples that are sufficiently dissimilar from what has already been introduced to the learning process. We showed that the above heuristics when coupled with diversity lead to significant improvements. 453 We provide empirical evaluation on four QA models: (a) an alignment-based approach (Sachan et al., 2015) for machine comprehension – a reading comprehension task (Richardson et al., 2013) with a set of questions and associated texts, (b) an alignment-based approach (Sachan et al., 2016) for a multiple-choice elementary science test (Clark and Etzioni, 2016), (c) QANTA (Iyyer et al., 2014) – a recursive neural network for answering quiz bowl questions, and (d) memory networks (Weston et al., 2014) – a recurrent neural network with a long-term memory component for answering 20 pre-defined tasks for machine comprehension. We show value in our approaches for curriculum learning on all these settings. Our paper has the following contributions: 1. In our knowledge, this is the first application of curriculum learning to the task of QA and one of the first in NLP. We hope to make the NLP and ML communities aware of the benefits of CL for non-convex optimization. 2. We perform an in-depth analysis of SPL, and propose heuristics which offer significant improvements over SPL; the state-of-the-art in curriculum learning. 3. We stress on diversity of questions in the curriculum during learning and propose a method that learns a curriculum while capturing diversity to gain more improvements. 2 Problem Setting for QA For each question qi ∈Q, let Ai = {ai1, . . . , aim} be the set of candidate answers to the question. Let a∗ i be the correct answer. The candidate answers may be pre-defined, as in multiple-choice QA, or may be undefined but easy to extract with a high degree of confidence (e.g., by using a pre-existing system). We want to learn a function f : (q, K) → a that, given a question qi and background knowledge K (texts/resources required to answer the question), outputs an answer ˆai ∈Ai. We consider a scoring function Sw(q, a; K) (with model parameters w) and a prediction rule fw(qi) = ˆai = arg max aij∈Ai Sw(qi, aij; K). Let ∆(ˆai, a∗ i ) be the cost of giving a wrong answer. We consider the empirical risk minimization (ERM) framework given a loss function L and a regularizer Ω: min w X qi∈Q Lw(a∗ i , fw(qi); K) + Ω(w) (1) 3 QA Models The field of QA is quite rich. Solutions proposed have ranged from various IR based approaches that treat this as a problem of retrieval from existing knowledge bases or perform inference using a large corpus of unstructured texts by learning a similarity between the question and a set of candidate answers (Yih et al., 2013). A comprehensive review of QA is out of scope of this paper. So we point the interested readers to Jurafsky and Martin (2000), chapter 28 for a more comprehensive review. In this paper, we will explore curriculum learning in the context of non-convex models for QA. The models will be (1) latent structural SVM (Yu and Joachims, 2009) based solutions for standardized question-answering tests and (2) deep learning models (Iyyer et al., 2014; Weston et al., 2014) for QA. Recently, researchers have proposed standardized tests as ‘drivers for progress in AI’ (Clark and Etzioni, 2016). 
Some example standardized tests are reading comprehensions (Richardson et al., 2013), algebra word problems (Kushman et al., 2014), geometry problems (Seo et al., 2014), entrance exams (Fujita et al., 2014; Arai and Matsuzaki, 2014), etc. These tests are usually in the form of question-answers and focus on elementary learning. The idea of learning the curriculum could be especially useful in the context of standardized tests. Standardized tests (Clark and Etzioni, 2016) are implicitly incremental in nature, covering various levels of difficulty. Thus they are rich sources of data for building systems that learn incrementally. These datasets can also help us understand the shaping hypothesis as we can use them to verify if easier questions are indeed getting picked by our incremental learning algorithm before harder questions. On the other hand, deep learning models (LeCun et al., 2015) have recently shown good performance in many standard NLP and vision tasks, including QA. These models usually learn representations of data and the QA model jointly. The models use a cascade of many layers of nonlinear processing units, leading to a highly non-convex model and a large parameter space. This renders these models susceptible to local-minima. Hence, the idea of learning the curricula is also very useful in the context of deep-learning models, as the technique of processing questions in the increasing order of difficulty often leads to better minima 454 Text: … Natural greenhouse gases include carbon dioxide, methane, water vapor, and ozone ... CFCs and ! some other man-made compounds are also greenhouse gases … Hypothesis: The important greenhouse gases are Carbon dioxide , Methane, Ozone and CFC Q: What are the important greenhouse gases? ! A: Carbon dioxide, Methane, Ozone and CFC Figure 1: Alignment structure for an example question from the science QA dataset. The question and answer candidate are combined to generate a hypothesis sentence. Then alignments (shown by red lines) are found between the hypothesis and the appropriate snippet in the texts. (as shown in our results). 3.1 Alignment Based Models Alignment based models for QA (Yih et al., 2013; Sachan et al., 2015; Sachan et al., 2016) cast QA as a textual entailment problem by converting each question-answer candidate pair (qi, aij) into a hypothesis statement hij. For example, the question “What are the important greenhouse gases?” and answer candidate “Carbon dioxide, Methane, Ozone and CFC” in Figure 1 can be combined to achieve a hypothesis “The important greenhouse gases are Carbon dioxide , Methane, Ozone and CFC.”. A set of question matching/rewriting rules are used to achieve this transformation. These rules match the question into one of a large set of pre-defined templates and apply a unique transformation to the question and answer candidate to achieve the hypothesis statement. For each question qi, the QA task thereby reduces to picking the hypothesis ˆhi that has the highest likelihood among the set of hypotheses hi = {hi1, . . . , him} generated for that question of being entailed by a body of relevant texts. The body of relevant texts can vary for each instance of the QA task. For example, it could be just the passage in a reading comprehension task, or a set of science textbooks in the science QA task. Let h∗ i ∈hi be the correct hypothesis. 
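To make the question-to-hypothesis rewriting concrete, here is a sketch with a single hypothetical rule; the systems above match against a large bank of such templates, so this rule and its fallback are illustrative assumptions rather than their rule set:

```python
import re

def to_hypothesis(question, answer):
    # Hypothetical rule for "What are the X?" questions.
    match = re.match(r"What are the (?P<subject>.+)\?$", question)
    if match:
        return f"The {match.group('subject')} are {answer}."
    # Crude fallback, purely for illustration.
    return f"{question.rstrip('?')}: {answer}."

print(to_hypothesis("What are the important greenhouse gases?",
                    "Carbon dioxide, Methane, Ozone and CFC"))
# -> The important greenhouse gases are Carbon dioxide, Methane, Ozone and CFC.
```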
The model considers the quality of word alignment from a hypothesis hij (formed by combining question-answer candidates (qi, aij)) to snippets in the textbooks as a proxy for the evidence. The alignment depends on: (a) snippet from the relevant texts chosen to be aligned to the hypothesis and (b) word alignment from the hypothesis to the snippet. The snippet from the texts to be aligned to the hypothesis is determined by picking a subset of sentences in the texts. Then each hypothesis word is aligned to a unique word in the snippet. See Figure 1 for an illustration. The choice of snippets composed with the word alignment is latent. Let zij represent the latent structure for the question-answer candidate pair (qi, ai,j). A natural solution is to treat QA as a problem of ranking the hypothesis set hi such that the correct hypothesis is at the top of this ranking. Hence, a scoring function Sw(h, z) is learnt such that the score given to the correct hypothesis h∗ i and the corresponding latent structure z∗ i is higher than the score given to any other hypothesis and its corresponding latent structure. In fact, in a max-margin fashion, the model learns the scoring function such that Sw(h∗ i , z∗ i ) > Sw(hij, zij) + ∆(h∗ i , hij) −ξi for all hj ∈h \ h∗for some slack ξi. This can be formulated as the following optimization problem: min ||w|| 1 2||w||2 2 + C X i ξi s.t. Sw(h∗ i , z∗ i ) ≥max zij Sw(hij, zij) + ∆(h∗ i , hij) −ξi It is intuitive to use 0-1 cost, i.e. ∆(h∗ i , hij) = 1(h∗ i ̸= hij) If the scoring function is convex then this objective is in concave-convex form and can be minimized by the concave-convex programming procedure (CCCP) (Yuille and Rangarajan, 2003). The scoring function is assumed to be linear: Sw(h, z) = wT ψ(h, z). Here, ψ(h, z) is a task-dependent feature map (see Sachan et al. (2015) and Sachan et al. (2016) for details). 3.2 Deep Learning Models We briefly review two neural network models for QA – Iyyer et al. (2014) and Weston et al. (2014). QANTA: QANTA (Iyyer et al., 2014) answers quiz bowl questions using a dependency tree structured recursive neural network. It combines predictions across sentences to produce a question answering neural network with trans-sentential averaging. The model is optimized using AdaGrad (Duchi et al., 2011). In quiz bowl, questions typically consist of four to six sentences and are associated with factoid answers. Every sentence in the question is guaranteed to contain clues that uniquely identify its answer, even without the context of previous sentences1. Recently, QANTA had beaten the well-known Jeopardy! star Ken Jennings at an exhibition quiz bowl contest. Memory Networks: Memory networks (Weston et al., 2014) are essentially recurrent neural networks with a long-term memory component. The memory can be read and written to, and can be used for prediction. The memory can be seen as 1Refer to Figure 1 in (Iyyer et al., 2014) for an example 455 acting like a dynamic knowledge base. The model is trained using a margin ranking loss and stochastic gradient descent. It was evaluated on a set of synthetic QA tasks. For each task, a set of statements were generated by a simulation of 4 characters, 3 objects and 5 rooms using an automated grammar with characters moving around, picking up and dropping objects are given, followed by a question whose answer is typically a single word2. 
4 Curriculum Learning Studies in cognitive science (Skinner, 1958; Peterson, 2004; Krueger and Dayan, 2009) have shown that humans learn much better when the training examples are not randomly presented but organized in increasing order of difficulty. The idea of shaping, which consists of training a machine learning algorithm with a curriculum was first introduced by (Elman, 1993) in the context of grammatical structure learning using a recurrent connectionist network. This idea also lent support for the much debated Newport’s “less is more” hypothesis (Goldowsky and Newport, 1993; Newport, 1990) that child language acquisition is aided, rather than hindered, by limited cognitive resources. Curriculum learning (Bengio et al., 2009) is a recent idea in machine learning, where a curriculum is designed by ranking samples based on manually curated difficulty measures. These measurements are usually not known in real-world scenarios, and are hard to elicit from humans. 4.1 Self-paced Learning Self-paced learning (SPL) (Kumar et al., 2010; Jiang et al., 2014a; Jiang et al., 2015) reformulates curriculum learning as an optimization problem by jointly modeling the curriculum and the task at hand. Let v ∈[0, 1]|Q| be the weight vector that models the weight of the sample questions in the curriculum. The SPL model includes a weighted loss term on all samples and an additional self-paced regularizer imposed on sample weights v. SPL formulation for the ERM framework described in eq 1 can be rewritten as: min w,v∈[0,1]|Q| X qi∈Q viLw(a∗ i , fw(qi); K) + g(vi, λ) +Ω(w) 2Refer to Table 1 in (Weston et al., 2015) for examples The problem usually has closed-form solution with respect to v (described later; lets call the solution v∗(λ; L) for now). g(v, λ) is usually called the self-paced regularizer with the “age” or “pace” parameter λ. g is convex with respect to v ∈[0, 1]|Q|. Furthermore, v(λ; L) is monotonically decreasing with respect to L, and limL→0 v∗(λ; L) = 1 and limL→∞v∗(λ; L) = 0. This means that the model inclines to select easy samples (with smaller losses) in favor of complex samples (with larger losses). Finally, v∗(λ; L) is monotonically increasing with respect to λ, and limλ→0 v∗(λ; L) = 0 and limλ→∞v∗(λ; L) ≤1. This means that when the model “ages” (i.e. the age parameter λ gets larger), it tends to incorporate more, probably complex samples to train a ‘mature’ model. Four popular self-paced regularizers in the literature (Kumar et al., 2010; Jiang et al., 2014a; Zhao et al., 2015) are hard, soft logarithmic, soft linear and mixture. These SP-regularizers, summarized with corresponding closed form solutions for v are shown in Table 1. Hard weighting is usually less appropriate as it cannot discriminate the importance of samples. However, soft weighting assigns real-valued weights and reflects the latent importance of samples in training. The soft linear regularizer linearly weighs samples with respect to their losses and the soft logarithmic penalizes the weight logarithmically. Mixture weighting combines both hard and soft weighting schemes. We can solve the model in the SPL regime by iteratively updating v (closed form solution for v is shown in Table 1) and w (by CCCP, AdaGrad or SGD), and gradually increasing the age parameter λ to let harder and harder problems in. 
Since its inception, variations of SPL such as self-paced re-ranking (Jiang et al., 2014a), selfpaced learning with diversity (Jiang et al., 2014b), self-paced multiple-instance learning (Zhang et al., 2015) and self-paced curriculum learning (Jiang et al., 2015) have been proposed. The techniques have been shown to be useful in some computer vision tasks (Lee and Grauman, 2011; Kumar et al., 2011; Tang et al., 2012; Supancic and Ramanan, 2013; Jiang et al., 2014a). SPL is different from active learning (Settles, 1995) in the sense that the training examples (and labels) are already provided and the solution only orders the examples to achieve a better solution. On the other hand, active learning tries to interactively query 456 Regularizer g(v; λ) v∗(λ; L) Hard −λv  1, if L ≤λ 0, o/w Soft Linear λ( 1 2v2 −v)  −L λ + 1, if L ≤λ 0, otherwise Soft Logarithmic P qi∈Q  (1 −λ)vi −(1−λ)vi log(1−λ)   log(L+1−λ) log(1−λ) , if L ≤λ 0, o/w Mixed γ2 v+ γ λ        1, if L ≤  λγ λ+γ 2 0, if L ≥λ2 γ  1 √ L −1 λ  , o/w Table 1: Various SP-regularizers for SPL. the user (or another information source) to achieve a better model with few queries. Curriculum learning is also related to teaching dimension (Khan et al., 2011) which studies the strategies that humans follow as they teach a target concept to a robot by assuming a teaching goal of minimizing the learner’s expected generalization error at each iteration. One can also think of curriculum learning as an approach for achieving a better local optimum in non-convex problems. 5 Improved Curriculum Learning Heuristics SPL selects questions based on the local loss term of the question. This is not the only way to define ‘easiness’ of the question. Hence, we suggest some other heuristics for selecting the order of questions to be presented to our learning algorithm. The heuristics select the next question qi ∈Q \ Q0 given the current model (M) and the set of questions already presented for learning (Q0). We assume access to a minimization oracle (CCCP/AdaGrad/SGD) for the QA models. We explore the following heuristics: 1) Greedy Optimal (GO): The simplest and greedy optimal heuristic (Schohn and Cohn, 2000) would be to pick a question qi ∈Q \ Q0 which has the minimum expected effect on the model. The expected effect on adding qi can be written as: P aij∈Ai p(a∗ i = aij) P qj∈Q0∪qi E h Lw(a∗ j, fw(qj); K) i . p(a∗ i = aij) can be estimated by normalizing Sw(q, a; K). P qj∈Q0∪qi E h Lw(a∗ j, fw(qj); K) i can be estimated by retraining the model on Q0 ∪qi. 2) Change in Objective (CiO): Choose the question qi ∈Q \ Q0 that causes the smallest increase in the objective. If there are multiple questions with the smallest increase in objective, pick one of them randomly. 3) Mini-max (M2): Chooses question qi ∈Q\Q0 that minimizes the regularized expected risk when including the question with the answer candidate aij that yields the maximum error. ˆqi = arg min qi∈Q\Q0 max aij∈Ai Lw(aij, fw(qi); K) 4) Expected Change in Objective (ECiO): In this greedy heuristic, we pick a question qi ∈ Q \ Q0 which has the minimum expected effect on the model. The expected effect can be written as P aij∈Ai p(a∗ i = aij)×E [Lw(a∗ i , fw(qi); K)]. Here, p(a∗ i = aij) can be achieved by normalizing Sw(q, a; K) and E [Lw(a∗ i , fw(qi); K)] can be estimated by running inference for qi. 
4) Change in Objective-Expected Change in Objective (CiO - ECiO): We pick a question qi ∈Q \ Q0 which has the minimum value of the difference between the change in objective and the expected change in objective. Intuitively, the difference represents how much the model is surprised to see this new question. 5) Correctly Answered (CA): Pick a question qi ∈Q \ Q0 which is answered by the model M with the minimum cost ∆(ˆai, a∗ i ). If there are multiple questions with minimum cost, pick one of them randomly. 6) Farthest from Decision Boundary (FfDB): This heuristic applies for latent structural SVMs only. Here, we choose the question qi ∈Q \ Q0 whose predicted answer ˆai is farthest from the decision boundary: max z∗wT φ(qi, a∗, z∗, K) = max ˆz wT φ(q, ˆa, ˆz, K) + ∆(ˆa, a∗). 5.1 Timing Considerations: A key consideration in applying the above heuristics is efficiency as the QA models considered (latent structural SVM and deep learning) are compu457 tationally expensive. Among our selection strategies, GO and CiO require updating the model, M2, ECiO, CA and FfDB require performing inference on the candiate questions, while CiO - ECiO requires both retraining as well as inference. Consequently, M2, ECiO, CA and FfDB are most efficient. We can also gain considerable speed-up by picking questions in batches. This results in significant speed-up with small loss in accuracy. We will discuss the batch question selection setup in more detail in our experiments. 5.2 Smarter Selection Strategies: We further describe some improvements to the above selection strategies: 1) Ensemble Strategy: In this strategy, we combine all of the above heuristics into an ensemble. The ensemble computes the ratio of the score of the suggested question pick and the average score over remaining Q\Q0 questions for all the heuristics and picks the question with the highest ratio. As we will see in our results, this ensemble works well in practice. 2) Importance-Weighting (IW): Importance weighting is a common technique in active learning literature (Tong and Koller, 2002; Beygelzimer et al., 2009; Beygelzimer et al., 2010), which mitigates the problem that if we query questions actively instead of selecting them uniformly at random, the training (and test) question sets are no longer independent and identically distributed (i.i.d.). In other words, the training set will have a sample selection bias that can impair prediction performance. To mitigate this, we propose to sample questions from a biased sample distribution eD. To achieve eD, we introduce the weighted loss eLw(a, fw(q); K) = ew(q, a) × Lw(a, fw(q); K) where ew(q, a) is the weighting function ew(q, a) = pD(q,a) p e D(q,a) which represents how likely it is to observe (q, a) under D compared to eD. In this setting, we can show that the generalization error under eD is the same as that under D: E(q,a)∼e D h eLw(a, fw(q); K) i = Z (q,a) p e D(q, a)pD(q, a) p e D(q, a)Lw(a, fw(q); K)d(q, a) = Z (q,a) pD(q, a)Lw(a, fw(q); K)d(q, a) = E(q,a)∼D [Lw(a, fw(q); K)] Thus, given appropriate weights ew(q, a), we modify our loss-function in order to compute an unbiased estimator of the generalization error. Each question-answer is assigned with a non-negative weight. For latent structural SVMs, one can minimize the weighted loss by simply multiplying the corresponding regularization parameter Ci with a corresponding term. In neural networks, this is simply achieved by multiplying the gradients with the corresponding weights. 
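A minimal sketch of the neural-network case, using plain SGD for concreteness (the gradient function, weighting function, and learning rate below are placeholders; the actual models use the optimizers described in Section 3):

```python
import numpy as np

def weighted_sgd_step(params, batch, grad_fn, weight_fn, lr=0.01):
    """Scale each example's gradient by its importance weight before updating."""
    grad = np.zeros_like(params)
    for question, answer in batch:
        grad += weight_fn(question, answer) * grad_fn(params, question, answer)
    return params - lr * grad / len(batch)

# Toy usage with a quadratic loss, purely for illustration.
params = np.array([0.0, 0.0])
toy_grad = lambda p, q, a: 2 * (p - a)              # gradient of ||p - a||^2
toy_weight = lambda q, a: 1.5 if q == "hard" else 1.0
params = weighted_sgd_step(params,
                           [("easy", np.array([1.0, 0.0])),
                            ("hard", np.array([0.0, 1.0]))],
                           toy_grad, toy_weight)
```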
The weights can be set by an appropriate heuristic, e.g. proportional to distance from the decision boundary. 5.3 Incorporating Diversity with Explore and Exploit (E&E): The strategy of learning from easy questions first and then gradually handling harder questions is intuitive as it helps the learning process. Yet, it has one key deficiency. Under curriculum learning, by focusing on easy questions first, our learning algorithm is usually not exposed to a diverse set of questions. This is particularly a problem for deeplearning approaches that learn representations during the process of learning. Hence, when a harder question arrives, it is usually hard for the learner to adjust to this new question as the current representation may not be appropriate for the new level of difficulty. This motivates our E&E strategy. The explore and exploit strategy ensures that while we still select easy questions first, we also want to make our selection as diverse as possible. We define a measure for diversity as the angle between the hyperplanes that the question samples induce in feature space: ∠(φ(qi, a∗ i , z∗ i , K), φ(qi′, a∗ i′, z∗ i′, K)) = Cosine−1  |φ(qi,a∗ i ,z∗ i ,K)φ(qi′,a∗ i′,z∗ i′,K)| ||φ(qi,a∗ i ,z∗ i ,K)||||φ(qi′,a∗ i′,z∗ i′,K)||  . The E&E solution picks the question which optimizes a convex combination of the curriculum learning objective and the sum of angles between the candidate question pick and questions in Q0. The convex combination is tuned on the development set. 6 Experiments 6.1 Datasets As described, we study curriculum learning on four different tasks. The first task is question answering for reading comprehensions. We use MCTest-500 dataset (Richardson et al., 2013), a freely available set of 500 stories (300 train, 50 dev and 150 test) and associated questions to evaluate our model. Each story in MCTest has four 458 Machine Comprehension Science QA QANTA Memory Networks No Curriculum (NC) 66.62±0.22 42.77±0.04 70.40±0.07 75.03±0.06 SPL Hard 67.36±0.16 43.85±0.18 70.66±0.19 71.01±0.09 Soft Linear 68.04±0.17 43.80±0.22 71.65±0.18 72.33±0.07 Soft Log 68.89±0.16 44.19±0.20 71.92±0.16 73.32±0.09 Mixed 69.47±0.18 44.86±0.20 72.89±0.19 74.28±0.13 Heuristics CA 66.86±0.06 42.93±0.08 70.78±0.08 70.96±0.04 M2 66.98±0.12 43.19±0.17 71.02±0.18 69.73±0.06 ECiO 67.39±0.14 44.00±0.22 71.66±0.19 71.01±0.07 GO 67.65±0.12 44.35±0.15 71.94±0.17 71.28±0.06 CiO 68.20±0.10 44.56±0.12 72.61±0.14 71.98±0.06 FfDB 68.32±0.11 44.78±0.13 CiO-ECiO 68.65±0.13 44.97±0.11 73.34±0.10 73.22±0.05 Heur++ Ensemble 69.26±0.08 45.48±0.07 74.11±0.07 74.24±0.04 +IW 69.86±0.10 45.86±0.12 75.02±0.15 74.55±0.05 +E&E 69.93±0.13 46.57±0.17 76.24±0.15 77.64±0.11 +IW+E&E 70.16±0.14 46.68±0.19 76.89±0.18 77.85±0.09 SPL+E&E Hard 68.03±0.17 44.50±0.20 72.34±0.19 74.43±0.06 Soft Linear 68.51±0.19 44.43±0.21 73.16±0.19 75.74±0.07 Soft Log 69.27±0.18 44.92±0.20 73.47±0.18 76.63±0.10 Mixed 69.89±0.21 45.58±0.21 74.39±0.21 77.12±0.15 Table 2: Accuracy on the test set obtained on the four experiments, comparing results when no curriculum (NC) was learnt, when we use self-paced learning (SPL) with four variations of SP-regularizers, the six heuristics and four improvements proposed by us. Each cell reports the mean±se (standard error) accuracy over 10 repetitions of each experimental configuration. multiple-choice questions, each with four answer choices. Each question has exactly one correct answer. The second task is science question answering. 
We use a mix of 855 third, fourth and fifth grade science questions derived from a variety of regional and state science exams3 for training and evaluating our model. We used publicly available science textbooks available through ck12.org and Simple English Wikipedia4 as texts required to answer the questions. The model retrieves a section from the textbook or a Wikipedia page (using a lucene index on the sections and Wikipedia pages) by querying for the hypothesis hij and then aligning the hypothesis to snippets in the document. For QANTA (Iyyer et al., 2014), we use questions from quiz bowl tournaments for training as in Iyyer et al. (2014). The dataset contains 20,407 questions with 2347 answers. For each answer in the dataset, its corresponding Wikipedia page is also provided. Finally, for memory networks (Weston et al., 2014), we use the synthetic QA tasks defined in Weston et al. (2015) (version 1.1 of the dataset). There are 20 different types of tasks that probe different forms of reasoning and deduction. Each task consists of a set of statements, followed by a question whose answer is typically a single word or a set of words. We report mean accuracy 3http://aristo-public-data.s3.amazonaws.com/AI2Elementary-NDMC-Feb2016.zip 4https://dumps.wikimedia.org/simplewiki/20151102/ across these 20 tasks. 6.2 Results We implemented and compared the six selection heuristics (§5) with the suggested improvements (§5.2) and self-paced learning (§4) with the explore and exploit extension for both alignment based models (§3.1) and two deep learning models (§3.2). We use accuracy (proportion of test questions correctly answered) as our evaluation metric. In all our experiments, we begin with zero training data (random initialization). For alignment based models, we select 1 percent of training set questions after every epoch (an epoch is defined as a single pass through the current training set by the optimization oracle) and add them to the training set based on the selection strategy. For deep learning models, we discovered that the learning was a lot slower so we added 0.1 percent of new training set questions after every epoch. Hyper parameters of the alignment based models and the deep learning models were fixed to the corresponding values proposed in their corresponding papers (pre-tuned for the optimization oracle on a held-out development set). All the results reported in this paper are averaged over 10 runs of each experiment. Table 5.3 reports test accuracies obtained on all the QA tasks, comparing the aforementioned proposals against corresponding models when curriculum learning is not used. We can observe from 459 0 1 2 3 4 5 6 7 8 9 10 0 0.2 0.4 0.6 0.8 1 Net Rela3ve change in params*10^(-x) Diversity Interpolant Machine Comprehension Science QA QANTA Memory Networks Figure 2: Relative change in parameters*10−x where x = 2 for machine comprehension and science QA, 4 for QANTA and memory networks when CL is used. these results that variants of SPL (and E&E) as well as the heuristics (and improvements) lead to improvements in the final test accuracy for both alignment-based models and QANTA. The surprising ineffectiveness of the heuristics and SPL for memory networks essentially boils down to the abrupt restructure of memory the model has to do for curriculum learning. We provide support for this argument in Figure 2 which plots the net relative change in all the parameters W until convergence 1 No. of parameters ∞ P e:epoch=1 ||We+1−We||1 ||We||1 ! 
for each of the four tasks on the model Ensemble+E&E against the linear interpolant used to tune the explore and exploit combination. As the interpolant grows from 0 to 1, more and more diverse questions get selected. We can observe that the change in parameters decreases as more diverse questions are selected for all the four tasks. Furthermore, once we bring in diversity (change the interpolant from 0 to 0.1), the relative change in parameters drops sharply for both neural network approaches. The drop is sharpest for memory networks. Easier examples usually require less memory than hard examples. Memory networks have no incentive to utilize only a fraction of its state for easy examples. They simply use the entire memory capacity. This implies that harder examples appearing later require a restructuring of all memory patterns. The network needs to change its memory representation every time in order to free space and accommodate the harder example. This process of memory pattern restructuring is difficult to achieve, so it could be the reason for the relatively poor performance of naive curriculum learning and SPL strategies. However, as we can see from the previous results, the explore and exploit strategy of mixing in some harder examples avoids the problem of having to abruptly restructure memory patterns. The extra samples of all difficulties prevent the network from utilizing all the memory on the easy examples, thus eliminating the need to restructure memory patterns. From Table 5.3 , we can observe that the choice of the SP-regularizer is important. The soft regularizers perform better than the hard regularizer. The mixed regularizer (with mixture weighting) performs even better. We can also observe that all the heuristics work as well as SPL, despite being a lot simpler. The heuristics arranged in increasing order of performance are: CA, M2, ECiO, GO, CiO, FfDB and CiO-ECiO,. The differences between the heuristics are larger for alignment-based models and smaller for deep learning models. The ECiO heuristic has very similar performance to SPL with hard SP-regularizer. This is understandable as SPL also selects ‘easy’ questions based on their expected objective value. The Ensemble is a significant improvement over the individual heuristics. Importance weighting (IW) and the explore and exploit strategies (E&E) provide further improvements. E&E is crucial to making curriculum learning work for deep learning approaches as described before. Motivated by the success of E&E, we also extended it to SPL5 by tuning a convex combination as before. E&E provides improvements across all the experiments for all the SPL experiments. While, the strategy is more important for memory networks, it leads to improvements on all the tasks. In order to understand the curriculum learning process and to test the hypothesis that the procedure indeed selects easy questions first, successively moving on to harder questions, we plot the number of questions of grade 3, 4 and 5 picked by SPL, Ensemble and Ensemble+E&E against the epoch number in Figure 3. We can observe that all the three methods pick more questions from grade 3 initially, successively moving on to more and more grade 4 questions and finally more grade 5 questions. Both Ensemble and Ensemble+E&E are more aggressive at learning this curriculum than SPL. Ensemble becomes too aggressive so 5This is different from Jiang et al. (2014c) which encourages diversity in samples across groups. On the other hand, we encourage diversity in feature space. 
460 0 10 20 30 40 50 60 70 80 90 100 0 20 40 60 80 100 Training Set Ques8ons Picked Epochs E&E (Grade 3) E&E (Grade 4) E&E(Grade5) Ensemble (Grade 3) Ensemble (Grade 4) Ensemble (Grade 5) SPL (Grade 3) SPL (Grade 4) SPL (Grade 5) Figure 3: Number of grade 3, 4 and 5 questions picked vs Epoch for various CL approaches for Science QA. 25 30 35 40 45 50 55 0 10 20 30 40 50 60 70 80 90 100 110 120 130 140 150 160 Test Accuracy Epochs Grade3 (NC) Grade4 (NC) Grade5 (NC) Overall (NC) Grade3 (CL) Grade4 (CL) Grade5 (CL) Overall (CL) Figure 4: Test split accuracy on grade 3, 4 and 5 questions picked vs Epoch for Science QA when CL is used/not used. E&E, initially increases the number of grade 4 and grade 5 questions received by the learner, thereby incorporating diversity in learning. In order to further the claim that curriculum learning follows the principal of learning simpler concepts first and then learning successively harder and harder concepts, we plot the test accuracy on grade 3, 4 and 5 questions with curriculum learning (CL) – i.e. Ensemble+E&E and without curriculum learning (NC) against the epoch number in Figure 4. Here, we can see that the test accuracy increases for questions in all three grade levels. With curriculum learning, the accuracy on grade 3 questions rises sharply in the beginning. This rise is sharper than the case when curriculum learning is not used. Grade 3 test accuracy for curriculum learning then saturates (saturates earlier compared to the case when curriculum learning is not used). The improvements due to curriculum learning for grade 4 questions mainly occur in epochs 30-140. The final epochs of curriculum learning see greater gain in test accuracy for grade 5 questions over the case when curriculum learning is not used. All these experiments together support the intuition of curriculum learning. The models indeed pick and learn from easier questions first and successively learn from harder and harder questions. We also tried variants of our models where we used curriculum learning on grade 3 questions, followed by grade 4 and grade 5 questions. However, this did not lead to significant improvements. Perhaps, this is because questions that are easy for humans may not always correspond to what is easy for our algorithms. Characterizing what is easy for algorithms and how it relates to what is easy for humans is an interesting question for future research. 7 Conclusion Curriculum learning is inspired by the way humans acquire knowledge and skills: by mastering simple concepts first, and progressing through information with increasing difficulty to grasp more complex topics. We studied self-paced learning, an approach for curriculum learning that expresses the difficulty of a data sample in terms of the value of the objective function and builds the curriculum via a joint optimization framework. We proposed a number of heuristics, an ensemble, and several improvements for selecting the curriculum that improves upon self-paced learning. We stressed on another important aspect of human learning – diversity, that requires that the right curriculum should not only arrange the data samples in increasing order of difficulty but should also introduce the learner to a small number of samples that are sufficiently dissimilar to the samples that have already been introduced to the learning process. We showed that our heuristics when coupled with diversity lead to significant improvements in a number of question answering tasks. 
The approach is quite general and we hope that this paper will encourage more NLP researchers to explore curriculum learning in their own works. Acknowledgments We thank the anonymous reviewers, along with Emmanouil A. Platanios and Snigdha Chaturvedi for their valuable comments and suggestions that helped improve the quality of this paper. This work was supported by the following research grants: NSF IIS1218282, NSF IIS1447676 and AFOSR FA95501010247. 461 References [Arai and Matsuzaki2014] Noriko H Arai and Takuya Matsuzaki. 2014. The impact of ai on education– can a robot get into the university of tokyo? In Proc. ICCE, pages 1034–1042. [Bengio et al.2009] Yoshua Bengio, J´erˆome Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41–48. ACM. [Beygelzimer et al.2009] Alina Beygelzimer, Sanjoy Dasgupta, and John Langford. 2009. Importance weighted active learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 49–56. ACM. [Beygelzimer et al.2010] Alina Beygelzimer, John Langford, Zhang Tong, and Daniel J Hsu. 2010. Agnostic active learning without constraints. In Advances in Neural Information Processing Systems, pages 199–207. [Cantor1946] Nathaniel Freeman Cantor. 1946. Dynamics of learning. Foster and Stewart publishing corporation, Buffalo, NY. [Clark and Etzioni2016] Peter Clark and Oren Etzioni. 2016. My computer is an honor student - but how intelligent is it? standardized tests as a measure of ai. In Proceedings of AI Magazine. [Duchi et al.2011] John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159. [Elman1993] Jeffrey L Elman. 1993. Learning and development in neural networks: The importance of starting small. Cognition, 48(1):71–99. [Fujita et al.2014] Akira Fujita, Akihiro Kameda, Ai Kawazoe, and Yusuke Miyao. 2014. Overview of todai robot project and evaluation framework of its nlp-based problem solving. World History, 36:36. [Goldowsky and Newport1993] B.N. Goldowsky and E.L. Newport. 1993. Modeling the effects of processing limitations on the acquisition of morphology: The less is more hypothesis. In Proceedings of the 11th West Coast Conference on Formal Linguistics. [Iyyer et al.2014] Mohit Iyyer, Jordan Boyd-Graber, Leonardo Claudino, Richard Socher, and Hal Daum´e III. 2014. A neural network for factoid question answering over paragraphs. In Proceedings of Empirical Methods in Natural Language Processing. [Jiang et al.2014a] Lu Jiang, Deyu Meng, Teruko Mitamura, and Alexander G Hauptmann. 2014a. Easy samples first: Self-paced reranking for zero-example multimedia search. In Proceedings of the ACM International Conference on Multimedia, pages 547– 556. ACM. [Jiang et al.2014b] Lu Jiang, Deyu Meng, Shoou-I Yu, Zhenzhong Lan, Shiguang Shan, and Alexander Hauptmann. 2014b. Self-paced learning with diversity. In Advances in Neural Information Processing Systems, pages 2078–2086. [Jiang et al.2014c] Lu Jiang, Deyu Meng, Shoou-I Yu, Zhenzhong Lan, Shiguang Shan, and Alexander Hauptmann. 2014c. Self-paced learning with diversity. In Advances in Neural Information Processing Systems, pages 2078–2086. [Jiang et al.2015] Lu Jiang, Deyu Meng, Qian Zhao, Shiguang Shan, and Alexander G Hauptmann. 2015. Self-paced curriculum learning. In TwentyNinth AAAI Conference on Artificial Intelligence. 
[Jurafsky and Martin2000] Daniel Jurafsky and James H Martin. 2000. Speech and language processing: An introduction to natural language processing, computational linguistics, and speech recognition. Prentice Hall. [Khan et al.2011] Faisal Khan, Bilge Mutlu, and Xiaojin Zhu. 2011. How do humans teach: On curriculum learning and teaching dimension. In Advances in Neural Information Processing Systems, pages 1449–1457. [Krueger and Dayan2009] Kai A Krueger and Peter Dayan. 2009. Flexible shaping: How learning in small steps helps. Cognition, 110(3):380–394. [Kumar et al.2010] M Pawan Kumar, Benjamin Packer, and Daphne Koller. 2010. Self-paced learning for latent variable models. In Advances in Neural Information Processing Systems, pages 1189–1197. [Kumar et al.2011] M Pawan Kumar, Haithem Turki, Dan Preston, and Daphne Koller. 2011. Learning specific-class segmentation from diverse data. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 1800–1807. IEEE. [Kushman et al.2014] Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. [LeCun et al.2015] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature, 521(7553):436–444. [Lee and Grauman2011] Yong Jae Lee and Kristen Grauman. 2011. Learning the easy things first: Self-paced visual category discovery. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1721–1728. IEEE. [Newport1990] Elissa L Newport. 1990. Maturational constraints on language learning. Cognitive science, 14(1):11–28. 462 [Peterson2004] Gail B Peterson. 2004. A day of great illumination: Bf skinner’s discovery of shaping. Journal of the Experimental Analysis of Behavior, 82(3):317–328. [Richardson et al.2013] Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP). [Sachan et al.2015] Mrinmaya Sachan, Avinava Dubey, Eric P Xing, and Matthew Richardson. 2015. Learning answer-entailing structures for machine comprehension. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. [Sachan et al.2016] Mrinmaya Sachan, Avinava Dubey, and Eric P. Xing. 2016. Science question answering using instructional materials. CoRR, abs/1602.04375. [Schohn and Cohn2000] Greg Schohn and David Cohn. 2000. Less is more: Active learning with support vector machines. In Proceedings of the 17th Annual International Conference on Machine Learning. ACM. [Seo et al.2014] Min Joon Seo, Hannaneh Hajishirzi, Ali Farhadi, and Oren Etzioni. 2014. Diagram understanding in geometry questions. In Proceedings of AAAI. [Settles1995] Burr Settles. 1995. Active learning literature survey. University of Wisconsin, Madison, 52(55-66):11. [Skinner1958] Burrhus F Skinner. 1958. Reinforcement today. American Psychologist, 13(3):94. [Supancic and Ramanan2013] James Supancic and Deva Ramanan. 2013. Self-paced learning for long-term tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2379–2386. [Tang et al.2012] Kevin Tang, Vignesh Ramanathan, Li Fei-Fei, and Daphne Koller. 2012. Shifting weights: Adapting object detectors from image to video. In Advances in Neural Information Processing Systems, pages 638–646. [Tong and Koller2002] Simon Tong and Daphne Koller. 
2002. Support vector machine active learning with applications to text classification. The Journal of Machine Learning Research, 2:45–66. [Weston et al.2014] Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks. CoRR, abs/1410.3916. [Weston et al.2015] Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698. [Yih et al.2013] Wentau Yih, Ming-Wei Chang, Christopher Meek, and Andrzej Pastusiak. 2013. Question answering using enhanced lexical semantic models. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. [Yu and Joachims2009] Chun-Nam Yu and T. Joachims. 2009. Learning structural svms with latent variables. In Proceedings of International Conference on Machine Learning (ICML). [Yuille and Rangarajan2003] A. L. Yuille and Anand Rangarajan. 2003. The concave-convex procedure. Neural Comput. [Zhang et al.2015] Dingwen Zhang, Deyu Meng, Chao Li, Lu Jiang, Qian Zhao, and Junwei Han. 2015. A self-paced multiple-instance learning framework for co-saliency detection. June. [Zhao et al.2015] Qian Zhao, Deyu Meng, Lu Jiang, Qi Xie, Zongben Xu, and Alexander G Hauptmann. 2015. Self-paced learning for matrix factorization. In Proceedings of AAAI. 463
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 464–473, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Improved Representation Learning for Question Answer Matching Ming Tan, Cicero dos Santos, Bing Xiang & Bowen Zhou IBM Watson Core Technologies Yorktown Heights, NY, USA {mingtan,cicerons,bingxia,zhou}@us.ibm.com Abstract Passage-level question answer matching is a challenging task since it requires effective representations that capture the complex semantic relations between questions and answers. In this work, we propose a series of deep learning models to address passage answer selection. To match passage answers to questions accommodating their complex semantic relations, unlike most previous work that utilizes a single deep learning structure, we develop hybrid models that process the text using both convolutional and recurrent neural networks, combining the merits on extracting linguistic information from both structures. Additionally, we also develop a simple but effective attention mechanism for the purpose of constructing better answer representations according to the input question, which is imperative for better modeling long answer sequences. The results on two public benchmark datasets, InsuranceQA and TREC-QA, show that our proposed models outperform a variety of strong baselines. 1 Introduction Passage-level answer selection is one of the essential components in typical question answering (QA) systems. It can be defined as follows: Given a question and a pool of candidate passages, select the passages that contain the correct answer. The performance of the passage selection task is not only crucial to non-factoid QA systems, where a question is expected to be answered with a sequence of descriptive text (e.g. the question in Table 1), but also very important to factoid QA systems, where the answer passage selection step is Question: Does Medicare cover my spouse? Ground-truth answer: If your spouse has worked and paid Medicare taxes for the entire required 40 quarters, or is eligible for Medicare by virtue of being disabled or some other reason, your spouse can receive his/her own medicare benefits. If your spouse has not met those qualifications, if you have met them, and if your spouse is age 65, he/she can receive Medicare based on your eligibility. Another candidate answer: If you were married to a Medicare eligible spouse for at least 10 years, you may qualify for Medicare. If you are widowed, and have not remarried, and you were married to your spouse at least 9 months before your spouse’s death, you may be eligible for Medicare benefits under a spouse provision. Table 1: An example of a question with the ground-truth answer and a negative answer extracted from the InsuranceQA dataset. also known as passage scoring. In factoid QA, if the sentences selected by the passage scorer module do not contain the answer, it will definitely lead to an incorrect response from the QA system. One central challenge of this task lies in the complex and versatile semantic relations observed between questions and passage answers. For example, while the task of supporting passage selection for factoid QA may be largely cast as a textual entailment problem, what makes an answer better than another in the real world for non-factoid QA often depends on many factors. 
Specifically, different from many other pairmatching NLP tasks, the linguistic similarities between questions and answers may or may not be indicative for our task. This is because, depending on what the question is looking for, a good answer may come in different forms: sometimes a correct 464 answer completes the question precisely with the missing information, and in other scenarios, good answers need to elaborate part of the question to rationalize it, and so on. For instance, the question in Table 1 only contains five words, while the best answer uses 60 words for elaboration. On the other hand, the best answers from a pool can also be noisy and include extraneous information irrelevant to the question. Additionally, while a good answer must relate to the question, they often do not share common lexical units. For instance, in the example question, “cover” is not directly mentioned in the answer. This issue may confuse simple word-matching systems. These challenges consequently make handcrafting features much less desirable compared to deep learning based methods. Furthermore, they also require our systems to learn how to distinguish useful pieces from irrelevant ones, and further, to focus more on the former. Finally, the system should be capable of capturing the nuances between the best answer and an acceptable one. For example, the second answer in Table 1 is suitable for a questioner, whose spouse is Medicare eligible, asking about his/her own coverage, while the example question is more likely asked by a person, who is Medicare eligible, asking about his/her spouse’ coverage. Clearly, the first answer is more appropriate for the question, although the second one implicitly answers it. A good system should reflect this preference. While this task is usually approached as a pairwise-ranking problem, the best strategy to capture the association between the questions and answers is still an open problem. Established approaches normally suffer from two weaknesses at this point. First, prior work, such as (Feng et al., 2015; Wang and Nyberg, 2015), resort to either convolutional neural network (CNN) or recurrent neural network (RNN) respectively. However, each structure describes only one semantic perspective of the text. CNN emphasizes the local interaction within n-gram, while RNN is designed to capture long range information and forget unimportant local information. How to combine the merits from both has not been sufficiently explored. Secondly, previous approaches are usually based on independently generated question and answer embeddings; the quality of such representations, however, usually degrades as the answer sequences grow longer. In this work, we propose a series of deep learning models in order to address such weaknesses. We start with the basic discriminative framework for answer selection. We first propose two independent models, Convolutional-pooling LSTM and Convolution-based LSTM, which are designed to benefit from both of the two popular deep learning structures to distinguish better between useful and irrelevant pieces presented in questions and answers. Next, by breaking the independence assumption of the question and answer embedding, we introduce an effective attention mechanism to generate answer representations according to the question, such that the embeddings do not overlook informative parts of the answers. 
We report experimental results for two answer selection datasets: (1) InsuranceQA (Feng et al., 2015) 1, a recently released large-scale nonfactoid QA dataset from the insurance domain, and (2) TREC-QA 2, which was created by Wang et al. (2007) based on Text REtrieval Conference (TREC) QA track data. The contribution of this paper is hence threefold: 1) We propose hybrid neural networks, which learn better representations for both questions and answers by combining merits of both RNN and CNN. 2) We prove the effectiveness of attention on the answer selection task, which has not been sufficiently explored in prior work. 3) We achieve the state-of-the-art results on both TRECQA and InsuranceQA datasets. The rest of the paper is organized as follows: Section 2 describes the related work for answer selection; Section 3 provides the details of the proposed models; Experimental settings and results are discussed in Section 4 and 5; Finally, we draw conclusions in Section 6. 2 Related work Previous work on answer selection normally used feature engineering, linguistic tools, or external resources. For example, semantic features were constructed based on WordNet in (Yih et al., 2013). This model pairs semantically related words based on word semantic relations. In (Wang and Manning, 2010; Wang et al., 2007), the answer selection problem was transformed to a syntacti1git clone https://github.com/shuzi/insuranceQA.git (We use the V1 version of this dataset). 2The data is obtained from (Yao et al., 2013) http://cs.jhu.edu/˜xuchen/packages/jacana-qa-naacl2013data-results.tar.bz2 465 cal matching between the question/answer parse trees. Some work tried to fulfill the matching using minimal edit sequences between dependency parse trees (Heilman and Smith, 2010; Yao et al., 2013). Discriminative tree-edit feature extraction and engineering over parsing trees were automated in (Severyn and Moschitti, 2013). Such methods might suffer from the availability of additional resources, the effort of feature engineering and the systematic complexity introduced by the linguistic tools, such as parse trees and dependency trees. Some recent work has used deep learning methods for the passage-level answer selection task. The approaches normally pursue the solution on the following directions. First, a joint feature vector is constructed based on both the question and the answer, and then the task can be converted into a classification or ranking problem (Wang and Nyberg, 2015; Hu et al., 2014). Second, recently proposed models for text generation can intrinsically be used for answer selection and generation (Bahdanau et al., 2015; Vinyals and Le, 2015). Finally, the question and answer representations can be learned and then matched by certain similarity metrics (Feng et al., 2015; Yu et al., 2014; dos Santos et al., 2015; Qiu and Huang, 2015). Fundamentally, our proposed models belong to the last category. Meanwhile, attention-based systems have shown very promising results on a variety of NLP tasks, such as machine translation (Bahdanau et al., 2015; Sutskever et al., 2014), machine reading comprehension (Hermann et al., 2015), text summarization (Rush et al., 2015) and text entailment (Rockt¨aschel et al., 2016). Such models learn to focus their attention to specific parts of their input and most of them are based on a one-way attention, in which the attention is basically performed merely over one type of input based on another (e.g. 
over target languages based on the source languages for machine translation, or over documents according to queries for reading comprehension). Most recently, several two-way attention mechanisms are proposed, where the information from the two input items can influence the computation of each others representations. Rockt¨aschel et al. (2016) develop a two-way attention mechanism including another one-way attention over the premise conditioned on the hypothesis, in addition to the one over hypothesis conditioned on premise. dos Santos et al. (2016) and Yin et al. (2015) generate interactive attention weights on both inputs by assignment matrices. Yin et al. (2015) use a simple Euclidean distance to compute the interdependence between the two input texts, while dos Santos et al. (2016) resort to attentive parameter matrices. 3 Approaches In this section, we first present our basic discriminative framework for answer selection based on long short-term memory (LSTM), which we call QA-LSTM. Next, we detail the proposed hybrid and attentive neural networks that are built on top of the QA-LSTM framework. 3.1 LSTM for Answer Selection Our LSTM implementation is similar to the one in (Graves et al., 2013) with minor modifications. Given an input sequence X = {x(1), x(2), · · · , x(n)}, where x(t) is an Edimension word vector in this paper, the hidden vector h(t) (with size H) at the time step t is updated as follows. it = σ(Wix(t) + Uih(t −1) + bi) (1) ft = σ(Wfx(t) + Ufh(t −1) + bf) (2) ot = σ(Wox(t) + Uoh(t −1) + bo) (3) ˜Ct = tanh(Wcx(t) + Uch(t −1) + bc)(4) Ct = it ∗˜Ct + ft ∗Ct−1 (5) ht = ot ∗tanh(Ct) (6) There are three gates (input i, forget f and output o), and a cell memory vector Ct. σ is the sigmoid function. W ∈RH×E, U ∈RH×H and b ∈RH×1 are the network parameters. Single-direction LSTMs suffer from the weakness of not making use of the contextual information from the future tokens. Bidirectional LSTMs (biLSTMs) use both the previous and future context by processing the sequence in two directions, and generate two sequences of output vectors. The output for each token is the concatenation of the two vectors from both directions, i.e. ht = −→ ht ∥←− ht. QA-LSTM: Our basic answer selection framework is shown in Figure 1. Given an input pair (q,a), where q is a question and a is a candidate answer, first we retrieve the word embeddings (WEs) of both q and a. Then, we separately apply a biLSTM over the two sequences of WEs. Next, 466 we generate a fixed-sized distributed vector representations using one of the following three approaches: (1) the concatenation of the last vectors on both directions of the biLSTM; (2) average pooling over all the output vectors of the biLSTM; (3) max pooling over all the output vectors. Finally, we use cosine similarity sim(q, a) to score the input (q, a) pair. It is important to note that the same biLSTM is applied to both q and a. Similar to (Feng et al., 2015; Weston et al., 2014; Hu et al., 2014), we define the training objective as a hinge loss. L = max{0, M −sim(q, a+)+sim(q, a−)} (7) where a+ is a ground truth answer, a−is an incorrect answer randomly chosen from the entire answer space, and M is a margin. We treat any question with more than one ground truth as multiple training examples. During training, for each question we randomly sample K negative answers, but only use the one with the highest L to update the model. Finally, dropout operation is performed on the representations before cosine similarity matching. 
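A compact sketch of QA-LSTM is given below: a single biLSTM shared between question and answer, max pooling over the output vectors, cosine scoring, and the hinge loss of Eq. 7 computed over the hardest of the K sampled negatives. This is only an illustrative PyTorch rendering, not the authors' implementation; the vocabulary size, sequence lengths and negative-sample count are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QALSTM(nn.Module):
    """Minimal QA-LSTM sketch: one shared biLSTM encodes question and answer,
    max pooling yields fixed-size vectors, cosine similarity scores the pair."""

    def __init__(self, vocab_size, emb_dim=100, hidden=141):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)

    def encode(self, tokens):                    # tokens: (batch, seq_len) word ids
        h, _ = self.lstm(self.emb(tokens))       # (batch, seq_len, 2 * hidden)
        return h.max(dim=1).values               # max pooling over time

    def forward(self, q, a):
        return F.cosine_similarity(self.encode(q), self.encode(a), dim=-1)

def hinge_loss(model, q, a_pos, a_negs, margin=0.2):
    """Loss of Eq. 7, keeping only the hardest of the K sampled negative answers."""
    pos = model(q, a_pos)                                     # (batch,)
    negs = torch.stack([model(q, a) for a in a_negs], dim=0)  # (K, batch)
    losses = torch.clamp(margin - pos.unsqueeze(0) + negs, min=0.0)
    return losses.max(dim=0).values.mean()

# Toy usage with random token ids (batch of 2, K = 3 sampled negatives).
model = QALSTM(vocab_size=5000)
q = torch.randint(1, 5000, (2, 7))
a_pos = torch.randint(1, 5000, (2, 50))
a_negs = [torch.randint(1, 5000, (2, 50)) for _ in range(3)]
loss = hinge_loss(model, q, a_pos, a_negs)
loss.backward()
print(float(loss))
```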
The same scoring function, loss function and negative sampling procedure is also used in the NN architectures presented in what follows. 3.2 Convolutional LSTMs The pooling strategies used in QA-LSTM suffer from the incapability of filtering important local information, especially when dealing with long answer sequences. Also, it is well known that LSTM models successfully keep the useful information from longrange dependency. But the strength has a tradeoff effect of ignoring the local n-gram coherence. This can be partially alleviated with bidirectional architectures. Meanwhile, the convolutional structures have been widely used in the question answering tasks, Figure 1: Basic Model: QA-LSTM such as (Yu et al., 2014; Feng et al., 2015; Hu et al., 2014). Classical convolutional layers usually emphasize the local lexical connections of the n-gram. However, the local pieces are associated with each other only at the pooling step. No longrange dependencies are taken into account during the formulation of convolution vectors. Fundamentally, recurrent and convolutional neural networks have their own pros and cons, due to their different topologies. How to keep both merits motivates our studies of the following two hybrid models. 3.2.1 Convolutional-pooling LSTMs In Figure 2 we detail the convolutional-pooling LSTM architecture. In this NN architecture, we replace the simple pooling layers (average/maxpooling) by a convolutional layer, which allows to capture richer local information by applying a convolution over sequences of LSTM output vectors. The number of output vectors k (context window size) considered by the convolution is a hyper-parameter of the model. The convolution structure adopted in this work is as follows: Z ∈Rk|h|×L is a matrix where the m-th column is the concatenation of k hidden vectors generated from biLSTM centralized in the m-th word of the sequence, L is the length of the sequence after wide convolution (Kalchbrenner et al., 2014). The output of the convolution with c filters is, C = tanh(WcpZ) (8) where Wcp are network parameters, and C ∈ Rc×L. The j-th element of the representation vectors (oq and oa) is computed as follows, [oj] = max 1<l<L [Cj,l] (9) Figure 2: Convolutional-pooling LSTM 467 3.2.2 Convolution-based LSTMs In Figure 3, we detail our second hybrid NN architecture. The aim of this approach is to capture the local n-gram interaction at the lower level using a convolution. At the higher level, we build bidirectional LSTMs, which extract the long range dependency based on convoluted n-gram. Combining convolutional and recurrent structures have been investigated in prior work other than question answering (Donahue et al., 2015; Zuo et al., 2015; Sainath et al., 2015). As shown in Figure 3, the model first retrieves word vectors for each token in the sequence. Next, we compose the matrix D ∈RkE×L, where each column l in D consists of the concatenation of k word vectors of size E centered at the l-th word. The matrix X ∈Rc×L, which is the output of the convolution with c filters is computed as follows: X = tanh(WcbD) (10) The matrix X is the input to the biLSTM structure in Eqs. 1-6. After the biLSTM step, we use maxpooling over the biLSTM output vectors to obtain the representations of both q and a. 3.3 Attentive LSTMs In the previous subsections, the two most popular deep learning architectures are integrated to generate semantic representations for questions and answers from both the long-range sequential and local n-gram perspectives. 
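The convolution-plus-pooling step of Eqs. 8-9 can be sketched as a small module applied to the biLSTM output sequence. In the sketch below the input dimension (141x2 = 282) and filter count (c = 400) follow the InsuranceQA settings reported later in the paper, while the window size k = 3 is only illustrative:

```python
import torch
import torch.nn as nn

class ConvPoolingHead(nn.Module):
    """Sketch of the convolutional-pooling layer of Eqs. 8-9: a convolution with c
    filters over windows of k biLSTM output vectors, followed by max pooling over
    time. It replaces the plain pooling step of QA-LSTM; sizes are illustrative."""

    def __init__(self, lstm_out_dim=282, k=3, c=400):
        super().__init__()
        # Wide convolution over the time axis; the padding realizes the wide-
        # convolution output length L + k - 1 used in the paper.
        self.conv = nn.Conv1d(lstm_out_dim, c, kernel_size=k, padding=k - 1)

    def forward(self, h):                 # h: (batch, seq_len, lstm_out_dim)
        z = h.transpose(1, 2)             # (batch, lstm_out_dim, seq_len) for Conv1d
        c = torch.tanh(self.conv(z))      # Eq. 8: C = tanh(Wcp Z)
        return c.max(dim=2).values        # Eq. 9: max over positions, one value per filter

# Toy usage on random "biLSTM outputs" for a batch of 2 answers of length 90.
head = ConvPoolingHead()
h = torch.randn(2, 90, 282)
print(head(h).shape)                      # torch.Size([2, 400])
```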
QA-LSTM and the two proposed hybrid models are basically siamese networks (Chopra et al., 2005). These structures overlook another potential issue. The answers might be extremely long and contain lots of words that are not related to the question at hand. No matter what advanced neural networks are exploited at the answer side, the resulting representation might still be distracted by non-useful information. A typical example is the Figure 3: Convolution-based LSTM second candidate answer in Table 1. If the construction of the answer representation is not aware of the input question, the representation might be strongly influenced by n-grams such as “are widowed” and “your spouse’s death”, which are informative if we only look at the candidate answer, but are not so important for the input question. We address this problem by developing a simple attention model for the answer vector generation, in order to alleviate this weakness by dynamically aligning the more informative parts of answers to the questions. Inspired by the work in (Hermann et al., 2015), we develop a very simple but efficient word-level attention on the basic model. In Figure 4, we detail our Attentive LSTM architecture. Prior to the average or mean pooling, each biLSTM output vector is multiplied by a softmax weight, which is determined by the question representation from biLSTM. Specifically, given the output vector of biLSTM on the answer side at time step t, ha(t), and the question representation, oq, the updated vector eha(t) for each answer token are formulated below. ma,q(t) = Wamha(t) + Wqmoq (11) sa,q(t) ∝ exp(wT ms tanh(ma,q(t))) (12) eha(t) = ha(t)sa,q(t) (13) where Wam, Wqm and wms are attention parameters. Conceptually, the attention mechanism gives more weight to certain words of the candidate answer, where the weights are computed by taking into consideration information from the question. The expectation is that words in the candidate answer that are more important with regard to the input question should receive larger weights. The attention mechanism in this paper is conceptually analogous to the one used in one-layer Figure 4: Attentive LSTM 468 Train Validation Test1 Test2 # of Qs 12887 1000 1800 1800 # of As 18540 1454 2616 2593 Table 2: Numbers of Qs and As in InsuranceQA. memory network (Sukhbaatar et al., 2015). The fundamental difference is that the transformed question vector and answer unit vectors are combined in an inner-product pattern in order to generate attentive weights in memory network, whereas this work adopts a summation operation (Eq. 11). 4 InsuranceQA Experiments The first dataset we use to evaluate the proposed approaches is the InsuranceQA, which has been recently proposed by Feng et al. (2015). We use the first version of this dataset. This dataset contains question and answer pairs from the insurance domain and is already divided into a training set, a validation set, and two test sets. We do not see any obvious categorical differentiation between two tests’ questions. We list the numbers of questions and answers of the dataset in Table 2. We refer the reader to (Feng et al., 2015), for more details regarding the InsuranceQA data. In this dataset, a question may have multiple correct answers, and normally the questions are much shorter than answers. The average length of questions in tokens is 7, while the average length of answers is 94. Such difference posts additional challenges for the answer selection task. This corpus contains 24981 unique answers in total. 
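(Returning briefly to §3.3 for concreteness: the word-level attention of Eqs. 11-13 reduces to the small module sketched below. The 282-dimensional biLSTM outputs match the InsuranceQA configuration; the remaining names and batch shapes are placeholders, and this is not the authors' released code.)

```python
import torch
import torch.nn as nn

class WordLevelAttention(nn.Module):
    """Sketch of the attention of Eqs. 11-13: each biLSTM output vector on the
    answer side is rescaled by a softmax weight computed from the question
    representation o_q, before pooling produces the final answer embedding."""

    def __init__(self, dim=282):
        super().__init__()
        self.W_am = nn.Linear(dim, dim, bias=False)
        self.W_qm = nn.Linear(dim, dim, bias=False)
        self.w_ms = nn.Linear(dim, 1, bias=False)

    def forward(self, h_a, o_q):
        # h_a: (batch, ans_len, dim) answer-side biLSTM outputs; o_q: (batch, dim).
        m = self.W_am(h_a) + self.W_qm(o_q).unsqueeze(1)        # Eq. 11
        s = torch.softmax(self.w_ms(torch.tanh(m)), dim=1)      # Eq. 12
        return h_a * s                                          # Eq. 13: weighted outputs

# Toy usage: the re-weighted outputs would then go through max or average pooling.
att = WordLevelAttention()
h_a, o_q = torch.randn(2, 90, 282), torch.randn(2, 282)
print(att(h_a, o_q).shape)                                      # torch.Size([2, 90, 282])
```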
For the development and test sets, the InsuranceQA also includes an answer pool of 500 candidate answers for each question. These answer pools were constructed by including the correct answer(s) and randomly selected candidates from the complete set of unique answers. The top-1 accuracy of the answer selection is reported. 4.1 Setup The proposed models are implemented with Theano (Bastien et al., 2012) and all experiments are conducted in a GPU cluster. We use the accuracy on validation set to select the best epoch and best hyper-parameter settings for testing. The word embeddings are pre-trained, using word2vec (Mikolov et al., 2013) 3. The training data for the word embeddings is a Wikipedia cor3https://code.google.com/p/word2vec/ pus of 164 million tokens combined with the questions and answers in the InsuranceQA training set. The word vector size is set to 100. Word embeddings are also part of the parameters and are optimized during the training. Stochastic Gradient Descent (SGD) is the optimization strategy. The learning rate λ is 1.1. We get the best performances when the negative answer count K = 50. We also tried different margins in the hing loss function, and finally fixed the margin as M=0.2. We train our models in mini-batches (with batch size as 20), and the maximum length L of questions and answers is 200. Any tokens out of this range are discarded. In order to get more obvious comparison between the proposed models and the basic framework, with respect to the ground-truth answer length in Fig. 5, we also provide the results of K = 1. In this case, we set M = 0.1, λ = 0.1 and mini-batches as 100 to get the best performance on the validation set. Also, the dimension of LSTM output vectors is 141x2 for bidirectional LSTM in QA-LSTM, Attentive LSTM and Convolutional-pooling LSTM, such that biLSTM has a comparable number of parameters with a single-direction LSTM with 200 dimensions. For Convolution-based LSTM, since LSTM structure is built on the top of CNN, we fixed the CNN output as 282 dimensions and tune the biLSTM hidden vector size in the experiments. Because the sequences within a mini-batch have different lengths, we use a mask matrix to indicate the real length of each sequence. 4.2 Baselines For comparison, we report the performances of four baselines in the top group in Table 3: two state-of-the-art non-DL approaches and two variations of a strong DL approach based on CNN. Bag-of-word: The idf-weighted sum of word vectors is used as a feature vector. The candidates are ranked by the cosine similarity to the question. Metzler-Bendersky IR model: A state-of-theart weighted dependency model (Bendersky et al., 2010; Bendersky et al., 2011), which employs a weighted combination of term-based and term proximity-based features to score each candidate. Architecture-II in (Feng et al., 2015): A CNN model is employed to learn distributed representations of questions and answers. Cosine similarity is used to rank answers. 
469 Model Validation Test1 Test2 Bag-of-word 31.9 32.1 32.2 Metzler-Bendersky IR model 52.7 55.1 50.8 CNN (Feng et al., 2015) 61.8 62.8 59.2 CNN with GESD (Feng et al., 2015) 65.4 65.3 61.0 A QA-LSTM (head/tail) 54.8 53.6 51.0 B QA-LSTM (avg pooling,K=50) 55.0 55.7 52.4 C QA-LSTM (max pooling,K=1) 64.3 63.1 58.0 D QA-LSTM (max pooling,K=50) 66.6 66.6 63.7 E Conv-pooling LSTM (c=4000,K=1) 66.2 64.6 62.2 F Conv-pooling LSTM (c=200,K=50) 66.4 67.4 63.5 G Conv-pooling LSTM (c=400,K=50) 67.8 67.5 64.4 H Conv-based LSTM (|h|=200,K=50) 66.0 66.1 63.0 I Conv-based LSTM (|h|=400,K=50) 67.1 67.6 64.4 J QA-CNN (max-pooling, k = 3) 61.6 62.2 57.9 K Attentive CNN (max-pooling, k = 3) 62.3 63.3 60.2 L Attentive LSTM (avg-pooling K=1) 68.4 68.1 62.2 M Attentive LSTM (avg-pooling K=50) 68.4 67.8 63.2 N Attentive LSTM (max-pooling K=50) 68.9 69.0 64.8 Table 3: The experimental results of InsuranceQA. Architecture-II with Geometricmean of Euclidean and Sigmoid Dot product (GESD): Cosine similarity is replaced by GESD, which got the best performance in (Feng et al., 2015). 4.3 Results and discussions In this section, we provide detailed analysis on the experimental results. Table 3 summarizes the results of our models on InsuranceQA. From Row (A) to (D), we list QA-LSTM without either CNN structure or attention. They vary on the pooling method used. We can see that by concatenating the last vectors from both directions, (A) performs the worst. We see that using max-pooling (C) is much better than average pooling (B). The potential reason may be that the max-pooling extracts more local values for each dimension. Compared to (C), (D) is better, showing the need of multiple negative answers in training. Row (E) to (I) show the results of Convolutional-pooling LSTMs and Convolutionbased LSTMs with different filter sizes c, biLSTM hidden sizes |h| and negative answer pool size K. Increasing the negative answer pool size, we are allowed to use less filter counts (F vs E). Larger filter counts help on the test accuracies (G vs F) for Convolutional-pooling LSTMs. We have the same observation with larger biLSTM hidden vector size for Convolution-based LSTMs. Both convolutional models outperform the plain QA-LSTM (D) by about 1.0% on test1, and 0.7% on test2. Rows (L-N) correspond to QA-LSTM with the attention model, with either max-pooling or average pooling. We observe that max-pooling is better than avg-pooling, which is consistent with QALSTMs. In comparison to Model (D), Model (N) shows over 2% improvement on both validation and Test1 sets. And (N) gets improvements over the best baseline in Table 3 by 3.5%, 3.7% and 3.8% on the validation, Test1 and Test2 sets, respectively. Compared to Architecture II in (Feng et al., 2015), which involved a large number of CNN filters, (N) model also has fewer parameters. We also test the proposed attention mechanism on convolutional networks. (J) replaces the LSTM in QA-LSTM with a convolutional layer. We set the filter size c = 400 and window size k = 3 according to the validation accuracy. (K) performs the similar attention on the convolutional output of the answers. Similar to biLSTM, the attention on the convolutional layer gives over 2% accuracy improvement on both test sets, which proves the attention’s efficiency on both CNN and RNN structures. Finally, we investigate the proposed models on how they perform with respect to long answers. 
To better illustrate the performance difference, we 470 Models MAP MRR (Yao et al., 2013) 0.631 0.748 (Severyn and Moschitti, 2013) 0.678 0.736 (Yih et al., 2013)-BDT 0.694 0.789 (Yih et al., 2013)-LCLR 0.709 0.770 (Wang and Nyberg, 2015) 0.713 0.791 Architecture-II (Feng et al., 2015) 0.711 0.800 (Severyn and Moschitti, 2015) 0.671 0.728 w/o additional features (Severyn and Moschitti, 2015) 0.746 0.808 with additional features A. QA-CNN 0.714 0.807 B. QA-LSTM (max-pooling) 0.733 0.819 C. Conv-pooling LSTM 0.742 0.819 D. Conv-based LSTM 0.737 0.827 E. Attentive LSTM 0.753 0.830 Table 4: The test set results on TREC-QA compare the models with K = 1 (i.e. the models C, E, L). We divide the questions of Test1 and Test2 sets into eleven buckets, according to the average length of their ground truth answers. As shown in Figure 5, QA-LSTM gets better or similar performance compared to the proposed models on buckets with shorter answers (L ≤50, 50 < L ≤55, 55 < L ≤60). As the answer lengths increase, the gap between QA-LSTM and other models becomes more obvious. It suggests the effectiveness of Convolutional-pooling LSTM and Attentive LSTM for long-answer questions. In (Feng et al., 2015), GESD outperforms cosine similarity in their models. However, the proposed models with GESD as similarity scores do not provide any improvement on the accuracy. 5 TREC-QA Experiments In this section we detail our experimental setup and results using the TREC-QA dataset. 5.1 Data, metrics and baselines We test the models on TREC-QA dataset, created based on Text REtrieval Conference (TREC) QA track (8-13) data. More detail of the generation steps for this data can be found in (Wang et al., 2007). We follow the exact approach of train/dev/test questions selection in (Wang and Nyberg, 2015), in which all questions with only positive or negative answers are removed. Finally, we have 1162 training, 65 development and 68 test questions. Similar to previous work, we use Mean Average Precision (MAP) and Mean Reciprocal Rank (MRR) as evaluation metrics, which are evaluated using the official scripts. In the top part of Table 4, we list the performance of recent prior work on this dataset. We implemented the Architecture II in (Feng et al., 2015) from scratch. The CNN structure in (Severyn and Moschitti, 2015) combined with additional human-designed features achieved the best MAP and MRR. 5.2 Setup We keep the configurations same as those in InsuranceQA in section 4.1, except the following differences: 1) Following Wang and Nyberg (2015), we use 300-dimensional vectors that were trained and provided by word2vec (Mikolov et al., 2013) using a part of the Google News dataset 4. 2) Since the word vectors of TREC-QA have a greater dimension than InsuranceQA, we accordingly have larger biLSTM hidden vectors and CNN filters, in order not to lose information from word vectors. Here we set both of them as 600. 3) We use the models from the epoch with the best MAP on the validation set. 4) We also observe that because of the smaller data size, we need a decayed learning rate λ in order to stablize the models’ training. Specifically, we set the initial λ0 = 1.1, and decrease it for each epoch T > 1 as λT = λ0/T. 5) We fix the negative answer size K = 50 during training. 5.3 Results The bottom part of Table 4 shows the performance of the proposed models. For the comparison purpose, we replace biLSTM with a convolution in Model (A), and also use max-pooling to get question and answer embeddings, and call this model QA-CNN. 
QA-LSTM (B) improves MAP and MRR in more than 1% when compared to QA-CNN (A). Compared to (B), convolutionalpooling (C) performs better on MAP by 0.9%, and convolution-based models on MAP by 0.4% and MRR by 0.8%. Attentive LSTM is the best proposed model, and outperforms the best baseline (Severyn and Moschitti, 2015) by 0.7% on MAP and 2.2% on MRR. Note that the best result in (Severyn and Moschitti, 2015) was obtained by combining CNN-based features with additional human-defined features. In contrast, our attentive LSTM model achieves higher performance without using any human-defined features. 6 Conclusion In this paper, we address the following problem for the answer passage selection: how can we construct the embeddings for questions and candidate 4https://code.google.com/archive/p/word2vec/ 471 Figure 5: The accuracy of Test1 and Test2 of InsuranceQA sets for three models, i.e. maxpooling QALSTM (C), Convolutional-pooling LSTM (E) and Attentive LSTM (L) in Table 3, on different levels of ground truth answer lengths on each test set. The figures show the accuracy of each bucket. answers, in order to better distinguish the correct answers from other candidates? We propose three independent models in two directions. First, we develop two hybrid models which combine the strength of both recurrent and convolutional neural networks. Second, we introduce a simple oneway attention mechanism, in order to generate answer embeddings influenced by the question context. Such attention fixes the issue of independent generation of the question and answer embeddings in previous work. All proposed models are departed from a basic architecture, built on bidirectional LSTMs. We conduct experiments on InsuranceQA and TREC-QA datasets, and the experimental results demonstrate that the proposed models outperform a variety of strong baselines. Potential future work include: 1) Evaluating the proposed approaches for different tasks, such as community QA and textual entailment; 2) Including the sentential attention mechanism; 3) Integrating the hybrid and the attentive mechanisms into a single framework. References Dzmitry Bahdanau, KyungHyun Cho, and Yoshua. Bengio. 2015. Neural machine translation by jointly learning to align and translate. Proceedings of International conference of learning representations. Frederic Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. 2012. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop. Michael Bendersky, Donald Metzler, and W. Bruce Croft. 2010. Learning concept importance using a weighted dependence model. In in Proceedings of the Third ACM International Conference on Web Search and Data Mining (WSDM). Michael Bendersky, Donald Metzler, and W. Bruce Croft. 2011. Parameterized concept weighting in verbose queries. In in Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR). Sumit Chopra, Raia Hadsell, and Yann LeCun. 2005. Learning a similarity metric discriminatively, with application to face verification. Computer Vision and Pattern Recognition (CVPR). Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. 2015. Long-term recurrent convolutional networks for visual recognition and description. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June. 
C´ıcero dos Santos, Luciano Barbosa, Dasha Bogdanova, and Bianca Zadrozny. 2015. Learning hybrid representations to retrieve semantically equivalent questions. In Proceedings of ACL, pages 694– 699, Beijing, China, July. C´ıcero dos Santos, Ming Tan, Bing Xiang, and Bowen Zhou. 2016. Attentive pooling networks. CoRR, abs/1602.03609. Minwei Feng, Bing Xiang, Michael Glass, Lidan Wang, and Bowen Zhou. 2015. Applying deep learning to answer selection: A study and an open task. IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 472 Michael Heilman and Noah A. Smith. 2010. Tree edit models for recognizing textual entailments, paraphrases, and answers to questions. Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics (NAACL). Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems (NIPS). Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. Advances in Neural Information Processing Systems (NIPS). Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In ACL. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems (NIPS). Xipeng Qiu and Xuanjing Huang. 2015. Convolutional neural tensor network architecture for community-based question answering. Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI). Tim Rockt¨aschel, Edward Grefenstette, Karl Moritz Hermann, Tom´as Kocisk´y, and Phil Blunsom. 2016. Reasoning about entailment with neural attention. International Conference on Learning Representations (ICLR). Alexander Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for sentence summarization. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Tara N. Sainath, Andrew Senior Oriol Vinyals, and Hasim Sak. 2015. Convolutional, long short-term memory, fully connected deep neural networks. In Acoustics, Speech and Signal Processing (ICASSP), IEEE International Conference. Aliaksei Severyn and Alessandro Moschitti. 2013. Automatic feature engineering for answer selection and extraction. In Proceedings of Conference on Empirical Methods in Natural Language Processing (EMNLP). Aliaksei Severyn and Alessandro Moschitti. 2015. Learning to rank short text pairs with convolutional deep neural networks. In SIGIR. Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In Advances in Neural Information Processing Systems (NIPS). Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (NIPS). Oriol Vinyals and Quoc V. Le. 2015. A neural conversational model. Proceedings of the 31st International Conference on Machine Learning. Mengqiu Wang and Christopher Manning. 2010. 
Probabilistic tree-edit models with structured latent variables for textual entailment and question answering. The Proceedings of the 23rd International Conference on Computational Linguistics (COLING). Di Wang and Eric Nyberg. 2015. A long shortterm memory model for answer sentence selection in question answering. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. Mengqiu Wang, Noah Smith, and Mitamura Teruko. 2007. What is the jeopardy model? a quasisynchronous grammar for qa. The Proceedings of EMNLP-CoNLL. Jason Weston, Sumit Chopra, and Keith Adams. 2014. #tagspace: Semantic embeddings from hashtags. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Xuchen Yao, Benjamin Durme, and Peter Clark. 2013. Answer extraction as sequence tagging with tree edit distance. Proceedings of NAACL-HLT. Wen-tau Yih, Ming-Wei Chang, Christopher Meek, and Andrzej Pastusiak. 2013. Question answering using enhanced lexical semantic models. Proceedings of the 51st Annual Meeting of the Association for Computational Linguist (ACL). Wenpeng Yin, Hinrich Schutze, Bing Xiang, and Bowen Zhou. 2015. Abcnn: attention-based convolutional neural network for modeling sentence pairs. CoRR, abs/1512.05193. Lei Yu, Karl M. Hermann, Phil Blunsom, and Stephen Pulman. 2014. Deep learning for answer sentence selection. NIPS Deep Learning Workshop. Zhen Zuo, Bing Shuai, Gang Wang, Xiao Liu, Xingxing Wang, Bing Wang, and Yushi Chen. 2015. Convolutional recurrent neural networks: Learning spatial dependencies for image representation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 473
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 474–483, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Tables as Semi-structured Knowledge for Question Answering Sujay Kumar Jauhar Carnegie Mellon University Pittsburgh, PA, USA [email protected] Peter D. Turney Allen Institute for Artificial Intelligence Seattle, WA, USA [email protected] Eduard Hovy Carnegie Mellon University Pittsburgh, PA, USA [email protected] Abstract Question answering requires access to a knowledge base to check facts and reason about information. Knowledge in the form of natural language text is easy to acquire, but difficult for automated reasoning. Highly-structured knowledge bases can facilitate reasoning, but are difficult to acquire. In this paper we explore tables as a semi-structured formalism that provides a balanced compromise to this tradeoff. We first use the structure of tables to guide the construction of a dataset of over 9000 multiple-choice questions with rich alignment annotations, easily and efficiently via crowd-sourcing. We then use this annotated data to train a semistructured feature-driven model for question answering that uses tables as a knowledge base. In benchmark evaluations, we significantly outperform both a strong unstructured retrieval baseline and a highlystructured Markov Logic Network model. 1 Introduction Question answering (QA) has emerged as a practical research problem for pushing the boundaries of artificial intelligence (AI). Dedicated projects and open challenges to the research community include examples such as Facebook AI Research’s challenge problems for AI-complete QA (Weston et al., 2015) and the Allen Institute for AI’s (AI2) Aristo project (Clark, 2015) along with its recently completed Kaggle competition1. The reason for this emergence is the diversity of core language and reasoning problems that a complex, integrated 1https://www.kaggle.com/c/ the-allen-ai-science-challenge task like QA exposes: information extraction (Srihari and Li, 1999), semantic modelling (Shen and Lapata, 2007; Narayanan and Harabagiu, 2004), logic and reasoning (Moldovan et al., 2003), and inference (Lin and Pantel, 2001). Complex tasks such as QA require some form of knowledge base to store facts about the world and reason over them. By knowledge base, we mean any form of knowledge: structured (e.g., tables, ontologies, rules) or unstructured (e.g., natural language text). For QA, knowledge has been harvested and used in a number of different modes and formalisms: large-scale extracted and curated knowledge bases (Fader et al., 2014), structured models such as Markov Logic Networks (Khot et al., 2015), and simple text corpora in information retrieval approaches (Tellex et al., 2003). There is, however, a fundamental trade-off in the structure and regularity of a formalism and its ability to be curated, modelled or reasoned with easily. For example, simple text corpora contain no structure, and are therefore hard to reason with in a principled manner. Nevertheless, they are easily and abundantly available. In contrast, Markov Logic Networks come with a wealth of theoretical knowledge connected with their usage in principled inference. However, they are difficult to induce automatically from text or to build manually. In this paper we explore tables as semistructured knowledge for multiple-choice question (MCQ) answering. 
Specifically, we focus on tables that represent general knowledge facts, with cells that contain free-form text (Secton 3 details the nature and semantics of these tables). The structural properties of tables, along with their free-form text content represents a semi-structured balanced compromise in the trade-off between degree of structure and ubiquity. We present two main contributions, with tables and their structural properties playing a crucial role in both. First, 474 we crowd-source a collection of over 9000 MCQs with alignment annotations to table elements, using tables as guidelines in efficient data harvesting. Second, we develop a feature-driven model that uses these MCQs to perform QA, while factchecking and reasoning over tables. Others have used tables in the context of QA. Question bank creation for tables has been investigated (Pasupat and Liang, 2015), but without structural guidelines or the alignment information that we propose. Similarly, tables have been used in QA reasoning (Yin et al., 2015b; Neelakantan et al., 2015; Sun et al., 2016) but have not explicitly attempted to encode all the semantics of table structure (see Section 3.1). To the best of our knowledge, no previous work uses tables for both creation and reasoning in a connected framework. We evaluate our model on MCQ answering for three benchmark datasets. Our results consistently and significantly outperform a strong retrieval baseline as well as a Markov Logic network model (Khot et al., 2015). We thus show the benefits of semi-structured data and models over unstructured or highly-structured counterparts. We also validate our curated MCQ dataset and its annotations as an effective tool for training QA models. Finally, we find that our model learns generalizations that permit inference when exact answers may not even be contained in the knowledge base. 2 Related Work Our work with tables, semi-structured knowledge bases and QA relates to several parallel lines of research. In terms of dataset creation via crowdsourcing, Aydin et al. (2014) harvest MCQs via a gamified app, although their work does not involve tables. Pasupat and Liang (2015) use tables from Wikipedia to construct a set of QA pairs. However their annotation setup does not impose structural constraints from tables, and does not collect finegrained alignment to table elements. On the inference side Pasupat and Liang (2015) also reason over tables to answer questions. Unlike our approach, they do not require alignments to table cells. However, they assume knowledge of the table that contains the answer, a priori – which we do not. Yin et al. (2015b) and Neelakantan et al. (2015) also use tables in the context of QA, but deal with synthetically generated query data. Sun et al. (2016) perform cell search over web tables via relational chains, but are more generally interested in web queries. Clark et al. (2016) combine different levels of knowledge for QA, including an integer-linear program for searching over table cells. None of these other efforts leverage tables for generation of data. Our research more generally pertains to natural language interfaces for databases. Answering questions in this context refers to executing queries over relational databases (Cafarella et al., 2008; Pimplikar and Sarawagi, 2012). Yin et al. (2015a) consider databases where information is stored in n-tuples, which are essentially tables. 
Also, investigation of the relational structure of tables is connected with research on database schema analysis and induction (Venetis et al., 2011; Syed et al., 2010). Finally, unstructured text and structured formats links to work on open information extraction (Etzioni et al., 2008) and knowledge base population (Ji and Grishman, 2011). 3 Tables as Semi-structured Knowledge Representation Tables can be found on the web containing a wide range of heterogenous data. To focus and facilitate our work on QA we select a collection of tables that were specifically designed for the task. Specifically we use AI2’s Aristo Tablestore2. However, it should be noted that the contributions of this paper are not tied to specific tables, as we provide a general methodology that could equally be applied to a different set of tables. The structural properties of this class of tables is further described in Section 3.1. The Aristo Tablestore consists of 65 handcrafted tables organized by topic. Some of the topics are bounded, containing only a fixed number of facts, such as the possible phase changes of matter (see Table 1). Other topics are unbounded, containing a very large or even infinite number of facts, such as the kind of energy used in performing an action (the corresponding tables can only contain a sample subset of these facts). A total of 3851 facts (one fact per row) are present in the manually constructed tables. An individual table has between 2 and 5 content columns. The target domain for these tables is two 4th grade science exam datasets. The majority of the tables were constructed to contain topics and facts 2http://allenai.org/content/data/ AristoTablestore-Nov2015Snapshot.zip 475 Phase Change Initial State Final State Form of Energy Transfer Melting causes a solid to change into a liquid by adding heat Vaporization causes a liquid to change into a gas by adding heat Condensation causes a gas to change into a liquid by removing heat Sublimation causes a solid to change into a gas by adding heat Table 1: Part of a table concerning phase changes in matter. Rows are facts. Columns without header text provide filler text, so that each row forms a sentence. In columns with header text, the header describes the type of entry in the column; the header is a hypernym of the text in the body below. from the publicly available Regents dataset3. The rest were targeted at an unreleased dataset called Monarch. In both cases only the training partition of each dataset was used to formulate and handcraft tables. However, for unbounded topics, additional facts were added to each table, using science education text books and websites. 3.1 Table Semantics and Relations Part of a table from the Aristo Tablestore is given as an example in Table 1. The format is semistructured: the rows of the table (with the exception of the header) are a list of sentences, but with well-defined recurring filler patterns. Together with the header, these patterns divide the rows into meaningful columns. This semi-structured data format is flexible. Since facts are presented as sentences, the tables can act as a text corpus for information retrieval. At the same time the structure can be used – as we do – to focus on specific nuggets of information. The flexibility of these tables allows us to compare our table-based system to an information retrieval baseline. Such tables have some interesting structural semantics, which we will leverage throughout the paper. A row in a table corresponds to a fact4. 
The cells in a row correspond to concepts, entities, or processes that participate in this fact. A content column5 corresponds to a group of concepts, entities, or processes that are the same type. The header cell of the column is an abstract description of the type. We may view the head as a hypernym and the cells in the column below as co-hyponyms of the head. The header row defines a generalization of which the rows in the table are specific instances. This structure is directly relevant to multiplechoice QA. Facts (rows) form the basis for creat3http://allenai.org/content/data/ Regents.zip 4Also predicates, or more generally frames with typed arguments. 5Different from filler columns, which only contain a recurring pattern, and no information in their header cells. ing or answering questions, while instances of a type (columns) act as the choices of an MCQ. We use these observations both for crowd-sourcing MCQ creation as well as for designing features to answer MCQs with tables. 4 Crowd-sourcing Multiple-choice Questions from Tables We use Amazon’s Mechanical Turk (MTurk) service to generate MCQs by imposing constraints derived from the structure of the tables. These constraints help annotators create questions with scaffolding information, and lead to consistent quality in the generated output. An additional benefit of this format is the alignment information, linking cells in the tables to the MCQs generated by the Turkers. The alignment information is generated as a by-product of making the MCQs. We present Turkers with a table such as the one in Figure 1. Given this table, we choose a target cell to be the correct answer for a new MCQ; for example, the red cell in Figure 1. First, Turkers create a question by using information from the rest of the row containing the target (i.e., the blue cells in Figure 1), such that the target is its correct answer. Then they select the cells in the row that they used to construct the question. Following this, they construct four succinct choices for the question, one of which is the correct answer and the other three are distractors. Distractors are formed from other cells in the column containing the target (i.e. yellow cells in Figure 1). If there are insufficient unique cells in the column Turkers create their own. Annotators can rephrase and shuffle the contents of cells as required. In addition to an MCQ, we obtain alignment information with no extra effort from annotators. We know which table, row, and column contains the answer, and thus we know which header cells might be relevant to the question. We also know the cells of a row that were used to construct a question. 476 Figure 1: Example table from MTurk annotation task illustrating constraints. We ask Turkers to construct questions from blue cells, such that the red cell is the correct answer, and yellow cells form distractors. Task Avg. Time (s) $/hour % Reject Rewrite 345 2.61 48 Paraphrase 662 1.36 49 Add choice 291 2.47 24 Write new 187 5.78 38 TabMCQ 72 5.00 2 Table 2: Comparison of different ways of generating MCQs with MTurk. What is the orbital event with the longest day and the shortest night? A) Summer solstice B) Winter solstice C) Spring equinox D) Fall equinox Steel is a/an of electricity A) Separator B) Isolator C) Insulator D) Conductor Table 3: Examples of MCQs generated by MTurk. Correct answer choices are in bold. 
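The row/column semantics above, and the way they constrain MCQ creation (question context from the target's row, distractors from its column), can be made concrete with a short sketch. This is our own illustration, not the annotation interface; filler columns are omitted and the cell texts are abbreviated.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Table:
    """A semi-structured table: a header row plus rows of free-form text cells."""
    headers: List[str]          # column headers act as hypernyms of the cells below
    rows: List[List[str]]       # each row is a fact; each cell a concept/entity/process

def mcq_constraints(table: Table, row_idx: int, col_idx: int):
    """Given a target cell (the intended correct answer), return the cells a
    Turker may use to phrase the question (the rest of the row) and the
    candidate distractors (other cells in the same column, i.e. co-hyponyms
    of the column header)."""
    target = table.rows[row_idx][col_idx]
    question_cells = [c for j, c in enumerate(table.rows[row_idx]) if j != col_idx]
    distractors = sorted({r[col_idx] for i, r in enumerate(table.rows) if i != row_idx} - {target})
    return target, question_cells, distractors

# Example with the phase-change fragment from Table 1 (filler text dropped)
phase = Table(
    headers=["Phase Change", "Initial State", "Final State", "Form of Energy Transfer"],
    rows=[["Melting", "solid", "liquid", "adding heat"],
          ["Vaporization", "liquid", "gas", "adding heat"],
          ["Condensation", "gas", "liquid", "removing heat"],
          ["Sublimation", "solid", "gas", "adding heat"]])

answer, context, distractors = mcq_constraints(phase, row_idx=1, col_idx=0)
print(answer)       # Vaporization
print(context)      # ['liquid', 'gas', 'adding heat']
print(distractors)  # ['Condensation', 'Melting', 'Sublimation']
```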
4.1 The TabMCQ Dataset We created a HIT (the MTurk acronym for Human Intelligence Task) for every non-filler cell (see Section 3) from each one of the 65 manually constructed tables of the Aristo Tablestore. We paid annotators 10 cents per MCQ, and asked for 1 annotation per HIT for most tables. For an initial set of four tables which we used in a pilot study, we asked for three annotations per HIT6. We required Turkers to have a HIT approval rating of 95% or higher, with a minimum of at least 500 HITs approved. We restricted the demographics of our workers to the US. Table 2 compares our method with other studies conducted at AI2 to generate MCQs. These methods attempt to generate new MCQs from existing 6The goal was to obtain diversity in the MCQs created for a target cell. The results were not sufficiently conclusive to warrant a threefold increase in the cost of creation. ones, or write them from scratch, but do not involve tables in any way. Our annotation procedure leads to faster data creation, with consistent output quality that resulted in the lowest percentage of rejected HITs. Manual inspection of the generated output also revealed that questions are of consistently good quality. They are good enough for training machine learning models and many are good enough as evaluation data for QA. A sample of generated MCQs is presented in Table 3. We implemented some simple checks to evaluate the data before approving HITs. These included things like checking whether an MCQ has at least three choices and whether choices are repeated. We had to further prune our data to discard some MCQs due to corrupted data or badly constructed MCQs. A total of 159 MCQs were lost through the cleanup. In the end our complete data consists of 9092 MCQs, which is – to the best of our knowledge – orders of magnitude larger than any existing collection of science exam style MCQs available for research. These MCQs also come with alignment information to tables, rows, columns and cells. The dataset, bundled together with the Aristo Tablestore, can be freely downloaded7. 5 Solving MCQs with Table Cell Search Consider the MCQ “What is the process by which water is changed from a liquid to a gas?” with choices “melting, sublimation, vaporization, condensation”, and the table given in Figure 1. Finding the correct answer amounts to finding a cell in the table that is most relevant to a candidate QA pair. In other words, a relevant cell should confirm the assertion made by a particular QA pair. By applying the reasoning used to create MCQs 7http://ai2-website.s3.amazonaws.com/ data/TabMCQ_v_1.0.zip 477 (see Section 4) in the inverse direction, finding these relevant cells becomes the task of finding an intersection between rows and columns of interest. Consider the table in Figure 1: assuming we have some way of aligning a question to a row (blue cells) and choices to a column (yellow cells), then the relevant cell is at the intersection of the two (the red cell). This alignment is precisely what we get as a by-product of the annotation task we setup in Section 4 to harvest MCQs. We can thus featurize connections between MCQs and elements of tables and use the alignment data to train a model over the features. This is outlined in the next section, describing our Feature Rich Table Embedding Solver (FRETS). 5.1 Model and Training Objective Let Q = {q1, ..., qN} denote a set of MCQs, and An = {a1 n, ..., ak n} be the set of candidate answer choices for a given question qn. 
Let the set of tables be defined as T = {T1, ..., TM}. Given a table Tm, let tij m be the cell in that table corresponding to the ith row and jth column. We define a log-linear model that scores every cell tij m of every table in our collection according to a set of discrete weighted features, for a given QA pair. We have the following: log p(tij m|qn, ak n; An, T ) = X d λdfd(qn, ak n, tij m; An, T ) −log Z (1) Here λd are weights and fd(qn, ak n, tij m; An, T ) are features. These features should ideally leverage both structure and content of tables to assign high scores to relevant cells, while assigning low scores to irrelevant cells. Z is the partition function, defined as follows: Z = X m,i,j exp X d λdfd(qn, ak n, tij m; An, T ) ! (2) Z normalizes the scores associated with every cell over all the cells in all the tables to yield a probability distribution. During inference the partition term log Z can be ignored, making scoring cells of every table for a given QA pair efficient. These scores translate to a solution for an MCQ. Every QA pair produces a hypothetical fact, and as noted in Section 3.1, the row of a table is in essence a fact. Relevant cells (if they exist) should confirm the hypothetical fact asserted by a given QA pair. During inference, we assign the score of the highest scoring row (or the most likely fact) to a hypothetical QA pair. Then the correct solution to the MCQ is simply the answer choice associated with the QA pair that was assigned the highest score. Mathematically, this is expressed as follows: a∗ n = arg max akn max m,i X j X d λdfd(qn, ak n, tij m; An, T ) (3) 5.1.1 Training Since FRETS is a log-linear model, training involves optimizing a set of weights λd. As training data, we use alignment information between MCQs and table elements (see Section 4.1). The predictor value that we try to maximize with our model is an alignment score that is closest to the true alignments in the training data. True alignments to table cells for a given QA pair are essentially indicator values but we convert them to numerical scores as follows8. For a correct QA hypothesis we assign a score of 1.0 to cells whose row and column and both aligned to the MCQ (i.e. cells that exactly answer the question), 0.5 to cells whose row but not column is aligned in some way to the question (i.e. cells that were used to construct the question), and 0.0 otherwise. For an incorrect QA hypothesis we assign a score of 0.1 to random cells from tables that contain no alignments to the QA (so all except one), with a probability of 1%, while all other cells are scored 0.0. The intuition behind this scoring scheme is to guide the model to pick relevant cells for correct answers, while encouraging it to pick faulty evidence with low scores for incorrect answers. Given these scores assigned to all cells of all tables for all QA pairs in the training set, suitably normalized to a probability distribution over tables for a given QA pair, we can then proceed to train our model. We use cross-entropy, which minimizes the following loss: 8On training data, we experimented with a few different scoring heuristics and found that these ones worked well. 
478 Level Feature Description Intuition S-Var Cmpct Table Table score Ratio of words in t to q+a Topical consistency ♦ †TF-IDF table score Same but TF-IDF weights Topical consistency ♦ Row-question score Ratio of words in r to q Question align ♦ Row Row-question w/o focus score Ratio of words in r to q-(af+qf) Question align ♦ Header-question score Ratio of words in h to q Prototype align ♦ Column overlap Ratio of elements in c and A Choices align ♦ Column Header answer-type match Ratio of words in ch to af Choices hypernym align ♦ Header question-type match Ratio of words in ch to qf Question hypernym align ♦ †Cell salience Salience of s to q+a QA hypothesis assert ♦ Cell †Cell answer-type entailment Entailment score between s and af Hypernym-hyponym align Cell answer-type similarity Avg. vector sim between s and af Hypernym-hyponym sim. Table 4: Summary of features. For a question (q) and answer (a) we compute scores for elements of tables: whole tables (t), rows (r), header rows (h), columns (c), column headers (ch) and cells (s). Answer-focus (af) and question-focus (qf) terms added where appropriate. Features marked ♦denote soft-matching variants, marked with while those marked with a † are described in further detail in Section 5.2. Finally, features denote those that received high weights during training with all features, and were subsequently selected to form a compact FRETS model. L(⃗λ) = X qn ak n∈An X m,i,j p(t∗ij m |qn, ak n; T )· log p(tij m|qn, ak n; An, T ) (4) Here p(t∗ij m |qn, ak n; T ) is the normalized probability of the true alignment scores. While this is an indirect way to train our model to pick the best answer, in our pilot experiments it worked better than direct maximum likelihood or ranking with hinge loss, achieving a training accuracy of almost 85%. Our experimental results on the test suite, presented in the next section, also support the empirical effectiveness of this approach. 5.2 Features The features we use are summarized in Table 4. These features compute statistics between question-answer pairs and different structural components of tables. While the features are weighted and summed for each cell individually, they can capture more global properties such as scores associated with tables, rows or columns in which the specific cell is contained. Features are divided into four broad categories based on the level of granularity at which they operate. In what follows we give some details of Table 4 that require further elaboration. 5.2.1 Soft matching Many of the features that we implement are based on string overlap between bags of words. However, since the tables are defined statically in terms of a fixed vocabulary (which may not necessarily match words contained in an MCQ), these overlap features will often fail. We therefore soften the constraint imposed by hard word overlap by a more forgiving soft variant. More specifically we introduce a word-embedding based soft matching overlap variant for every feature in the table marked with ♦. The soft variant targets high recall while the hard variant aims at providing high precision. We thus effectively have almost twice the number of features listed. Mathematically, let a hard overlap feature define a score |S1 ∩S2| / |S1| between two bags of words S1 and S2. We can define the denominator S1 here, without loss of generality. 
Then, a corresponding word-embedding soft overlap feature is given by this formula: 1 |S1| X wi∈S1 max wj∈S2 sim( ⃗wi, ⃗wj) (5) Intuitively, rather than matching a word to its exact string match in another set, we instead match it to its most similar word, discounted by the score of that similarity. 5.2.2 Question parsing We parse questions to find the desired answertype and, in rarer cases, question-type words. For example, in the question “What form of energy is required to convert water from a liquid to a gas?”, the type of the answer we are expecting is a “form of energy”. Generally, this answer-type corresponds to a hypernym of the answer choices, and can help find relevant information in the table, specifically related to columns. 479 By carefully studying the kinds of question patterns in our data, we implemented a rule-based parser that finds answer-types from queries. This parser uses a set of hand-coded regular expressions over phrasal chunks. The parser is designed to have high accuracy, so that we only produce an output for answer-types in high confidence situations. In addition to producing answer-types, in some rarer cases we also detect hypernyms for parts of the questions. We call this set of words question-type words. Together, the question-type and answer-type words are denoted as focus words in the question. 5.2.3 TF-IDF weighting TF-IDF scores for weighting terms are precomputed for all words in all the tables. We do this by treating every table as a unique document. At run-time we discount scores by table length as well as length of the QA pair under consideration to avoid disproportionately assigning high scores to large tables or long MCQs. 5.2.4 Salience The salience of a string for a particular QA pair is an estimate of how relevant it is to the hypothesis formed from that QA pair. It is computed by taking words in the question, pairing them with words in an answer choice and then computing PMI statistics between these pairs from a large corpus. A high salience score indicates words that are particularly relevant for a given QA pair hypothesis. 5.2.5 Entailment To calculate the entailment score between two strings, we use several features, such as overlap, paraphrase probability, lexical entailment likelihood, and ontological relatedness, computed with n-grams of varying lengths. 5.2.6 Normalization All the features in Table 4 produce numerical scores, but the range of these scores vary to some extent. To make our final model more robust, we normalize all feature scores to have a range between 0.0 and 1.0. We do this by finding the maximum and minimum values for any given feature on a training set. Subsequently, instead of using the raw feature value of a feature fd, we instead replace it with (fd −min fd) / (max fd −min fd). 6 Experimental Results We train FRETS (Section 5) on the TabMCQ dataset (Section 4) using adaptive gradient descent with an L2 penalty of 1.0 and a mini-batch size of 500 training instances. We train two variants: one consisting of all the features from Table 4, the other – a compact model – consisting of the most important features (above a threshold) from the first model by feature-weight. These features are noted by in the final column of Table 4. We run experiments on three 4th grade science exam MCQ datasets: the publicly available Regents dataset, the larger but unreleased dataset called Monarch, and a third even larger public dataset of Elementary School Science Questions (ESSQ)9. 
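Before turning to the datasets in detail, the soft overlap of Equation (5) and the min–max normalization of Section 5.2.6 can be illustrated with a small sketch. This is our own illustration: the toy three-dimensional vectors and cosine similarity stand in for the retrofitted 300-dimensional embeddings used in the paper.

```python
import numpy as np

def soft_overlap(S1, S2, emb):
    """Word-embedding soft variant of the hard overlap |S1 ∩ S2| / |S1| (Eq. 5):
    each word in S1 is matched to its most similar word in S2, discounted by
    the similarity of that match."""
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))
    sims = [max(cos(emb[w1], emb[w2]) for w2 in S2) for w1 in S1 if w1 in emb]
    return sum(sims) / max(len(S1), 1)

def minmax_normalize(value, lo, hi):
    """Rescale a raw feature value to [0, 1] using the min/max observed on training data."""
    return (value - lo) / (hi - lo) if hi > lo else 0.0

# Toy 3-d embeddings; every word in S1 has a close (but not identical) match in S2,
# so the soft score is high even though the hard overlap is zero.
emb = {"water": np.array([0.9, 0.1, 0.0]),
       "liquid": np.array([0.8, 0.2, 0.1]),
       "gas": np.array([0.1, 0.9, 0.2]),
       "vapor": np.array([0.2, 0.8, 0.3])}
print(round(soft_overlap({"water", "vapor"}, {"liquid", "gas"}, emb), 3))
print(minmax_normalize(0.42, lo=0.1, hi=0.9))   # 0.4
```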
For the first two datasets we use the test splits only, since the training sets were directly studied to construct the Aristo Tablestore, which was in turn used to generate our TabMCQ training data. On ESSQ we use all the questions since they are independent of the tables. The Regents test set consists of 129 MCQs, the Monarch test set of 250 MCQs, and ESSQ of 855 MCQs. Since we are investigating semi-structured models, we compare against two baselines. The first is an unstructured information retrieval method, which uses the Lucene search engine. To apply Lucene to the tables, we ignore their structure and simply use rows as plain-text sentences. The score for top retrieved hits are used to rank the different choices of MCQs. The second baseline is the highly-structured Markov-logic Network (MLN) model from Khot et al. (2015) as reported in Clark et al. (2016), who use the model as a baseline10. Note that Clark et al. (2016) achieve a score of 71.3 on Regents Test, which is higher than FRETS’ scores (see Table 5), but their results are not comparable to ours because they use an ensemble of algorithms. In contrast, we use a single algorithm with a much smaller collection of knowledge. FRETS rivals the best individual algorithm from their work. We primarily use the tables from the Aristo Tablestore as knowledge base data in three different settings: with only tables constructed for Regents (40 tables), with only supplementary tables constructed for Monarch (25 tables), and with all ta9http://aristo-public-data.s3. amazonaws.com/AI2-Elementary-NDMC-Feb2016. zip 10We do not re-implement the MLN, and therefore only cite results from previous work on part of our test suite. 480 Model Data Regents Test Monarch Test ESSQ Lucene Regents Tables 37.5 32.6 36.9 Monarch Tables 28.4 27.3 27.7 Regents+Monarch Tables 34.8 35.3 37.3 Waterloo Corpus 55.4 51.8 54.4 MLN 47.5 (Khot et al., 2015) Regents Tables 60.7 47.2 51.0 FRETS Monarch Tables 56.0 45.6 48.4 (Compact) Regents+Monarch Tables 59.9 47.6 50.7 Regents Tables 59.1 52.8 54.4 FRETS Monarch Tables 52.9 49.8 49.5 Regents+Monarch Tables 59.1 52.4 54.9 Table 5: Evaluation results on three benchmark datasets using different sets of tables as knowledge bases. Best results on a dataset are highlighted in bold. bles together (all 65 tables; see Section 3). For the Lucene baseline we also experiment with several orders of magnitude more data by indexing over the 5 × 1010 words Waterloo corpus compiled by Charles Clarke at the University of Waterloo. Data is not a variable for MLN, since we directly cite results from Clark et al. (2016). The word vectors we used in soft matching feature variants (i.e., ♦features from Table 4) for all our experiments were trained on 300 million words of Newswire English from the monolingual section of the WMT-2011 shared task data. These vectors were improved post-training by retrofitting (Faruqui et al., 2014) them to PPDB (Ganitkevitch et al., 2013). The results of these experiments is presented in Table 5. All numbers are reported in percentage accuracy. We perform statistical significance testing on these results using Fisher’s exact test with a p-value of 0.05 and report them in our discussions. First, FRETS – in both full and compact form – consistently outperforms the baselines, often by large margins. For Lucene, the improvements over all but the Waterloo corpus baseline are statistically significant. 
Thus FRETS is able to capitalize on data more effectively and rival an unstructured model with access to orders of magnitude more data. For MLN, the improvements are statistically significant in the case of Regents and Regents+Monarch tables. FRETS is thus performing better than a highly structured model while making use of a much simpler data formalism. Our models are able to effectively generalize. With Monarch tables, the Lucene baseline is little better than random (25%). But with the same knowledge base data, FRETS is competitive and sometimes scores higher than the best Lucene or MLN models (although this difference is statistiModel REG MON ESSQ FRETS 59.1 52.4 54.9 w/o tab features 59.1 47.6 52.8 w/o row features 49.0 40.4 44.3 w/o col features 59.9 47.2 53.1 w/o cell features 25.7 25.0 24.9 w/o ♦features 62.2 47.5 53.3 Table 6: Ablation study on FRETS, removing groups of features based on level of granularity. ♦ refers to the soft matching features from Table 4. Best results on a dataset are highlighted in bold. cally insignificant). These results indicate that our models are able to effectively capture both content and structure, reasoning approximately (and effectively) when the knowledge base may not even contain the relevant information to answer a question. The Monarch tables themselves seem to add little value, since results for Regents tables by themselves are just as good or better than Regents+Monarch tables. This is not a problem with FRETS, since the same phenomenon is witnessed with the Lucene baseline. It is noteworthy, however, that our models do not suffer from the addition of more tables, showing that our search procedure over table cells is robust. Finally, dropping some features in the compact model doesn’t always hurt performance, in comparison with the full model. This indicates that potentially higher scores are possible by a principled and detailed feature selection process. In these experiments the difference between the two FRETS models on equivalent data is statistically insignificant. 6.1 Ablation Study To evaluate the contribution of different features we perform an ablation study, by individually removing groups of features from the full FRETS 481 model, and re-training. Evaluation of these partial models is given in Table 6. In this experiment we use all tables as knowledge base data. Judging by relative score differential, cell features are by far the most important group, followed by row features. In both cases the drops in score are statistically significant. Intuitively, these results make sense, since row features are crucial in alignment to questions, while cell features capture the most fine-grained properties. It is less clear which among the other three feature groups is dominant, since the differences are not statistically significant. It is possible that cell features replicate information of other feature groups. For example, the cell answer-type entailment feature indirectly captures the same information as the header answer-type match feature (a column feature). Similarly, salience captures weighted statistics that are roughly equivalent to the coarsegrained table features. Interestingly, the success of these fine-grained features would explain our improvements over the Lucene baseline in Table 5, which is incapable of such fine-grained search. 7 Conclusions We have presented tables as knowledge bases for question answering. 
We explored a connected framework in which tables are first used to guide the creation of MCQ data with alignment information to table elements, then jointly with this data are used in a feature-driven model to answer unseen MCQs. A central research question of this paper was the trade-off between the degree of structure in a knowledge base and its ability to be harvested or reasoned with. On three benchmark evaluation sets our consistently and significantly better scores over an unstructured and a highly-structured baseline strongly suggest that tables can be considered a balanced compromise in this trade-off. We also showed that our model is able to generalize from content to structure, thus reasoning about questions whose answer may not even be contained in the knowledge base. We are releasing our dataset of more than 9000 MCQs and their alignment information, to the research community. We believe it offers interesting challenges that go beyond the scope of this paper – such as question parsing, or textual entailment – and are exciting avenues for future research. Acknowledgement We’d like to thank AI2 for funding this research and the creation of our MCQ dataset. The first and third authors of this paper were also supported in part by the following grants: NSF grant IIS1143703, NSF award IIS-1147810, DARPA grant FA87501220342. Thanks also go to the anonymous reviewers, whose valuable comments helped to improve the quality of the paper. References Bahadir Ismail Aydin, Yavuz Selim Yilmaz, Yaliang Li, Qi Li, Jing Gao, and Murat Demirbas. 2014. Crowdsourcing for multiple-choice question answering. In Twenty-Sixth IAAI Conference. Michael J Cafarella, Alon Halevy, Daisy Zhe Wang, Eugene Wu, and Yang Zhang. 2008. Webtables: exploring the power of tables on the web. Proceedings of the VLDB Endowment, 1(1):538–549. Peter Clark, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Turney, and Daniel Khashabi. 2016. Combining retrieval, statistics, and inference to answer elementary science questions. Proceedings of the 30th AAAI Conference on Artificial Intelligence, AAAI-2016. Peter Clark. 2015. Elementary school science and math tests as a driver for ai: Take the aristo challenge. Proceedings of IAAI, 2015. Oren Etzioni, Michele Banko, Stephen Soderland, and Daniel S Weld. 2008. Open information extraction from the web. Communications of the ACM, 51(12):68–74. Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1156– 1165. ACM. Manaal Faruqui, Jesse Dodge, Sujay K Jauhar, Chris Dyer, Eduard Hovy, and Noah A Smith. 2014. Retrofitting word vectors to semantic lexicons. arXiv preprint arXiv:1411.4166. Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In Proceedings of NAACL-HLT, pages 758–764, Atlanta, Georgia, June. Association for Computational Linguistics. Heng Ji and Ralph Grishman. 2011. Knowledge base population: Successful approaches and challenges. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 1148– 1158. Association for Computational Linguistics. 482 Tushar Khot, Niranjan Balasubramanian, Eric Gribkoff, Ashish Sabharwal, Peter Clark, and Oren Etzioni. 2015. Exploring markov logic networks for question answering. Proceedings of EMNLP, 2015. 
Dekang Lin and Patrick Pantel. 2001. Discovery of inference rules for question-answering. Natural Language Engineering, 7(04):343–360. Dan Moldovan, Christine Clark, Sanda Harabagiu, and Steve Maiorano. 2003. Cogex: A logic prover for question answering. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1, pages 87–93. Association for Computational Linguistics. Srini Narayanan and Sanda Harabagiu. 2004. Question answering based on semantic structures. In Proceedings of the 20th international conference on Computational Linguistics, page 693. Association for Computational Linguistics. Arvind Neelakantan, Quoc V Le, and Ilya Sutskever. 2015. Neural programmer: Inducing latent programs with gradient descent. arXiv preprint arXiv:1511.04834. Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. arXiv preprint arXiv:1508.00305. Rakesh Pimplikar and Sunita Sarawagi. 2012. Answering table queries on the web using column keywords. Proceedings of the VLDB Endowment, 5(10):908–919. Dan Shen and Mirella Lapata. 2007. Using semantic roles to improve question answering. In EMNLPCoNLL, pages 12–21. Rohini Srihari and Wei Li. 1999. Information extraction supported question answering. Technical report, DTIC Document. Huan Sun, Xiaodong He, Wen-tau Yih, Yu Su, and Xifeng Yan. 2016. Table cell search for question answering. In Proceedings of the 25th International Conference on World Wide Web (to appear). Zareen Syed, Tim Finin, Varish Mulwad, and Anupam Joshi. 2010. Exploiting a web of semantic data for interpreting tables. In Proceedings of the Second Web Science Conference. Stefanie Tellex, Boris Katz, Jimmy Lin, Aaron Fernandes, and Gregory Marton. 2003. Quantitative evaluation of passage retrieval algorithms for question answering. In Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval, pages 41–47. ACM. Petros Venetis, Alon Halevy, Jayant Madhavan, Marius Pas¸ca, Warren Shen, Fei Wu, Gengxin Miao, and Chung Wu. 2011. Recovering semantics of tables on the web. Proc. VLDB Endow., 4(9):528–538, June. Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2015. Towards ai-complete question answering: a set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698. Pengcheng Yin, Nan Duan, Ben Kao, Junwei Bao, and Ming Zhou. 2015a. Answering questions with complex semantic constraints on open knowledge bases. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pages 1301–1310. ACM. Pengcheng Yin, Zhengdong Lu, Hang Li, and Ben Kao. 2015b. Neural enquirer: Learning to query tables. arXiv preprint arXiv:1512.00965. 483
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 484–494, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Neural Summarization by Extracting Sentences and Words Jianpeng Cheng Mirella Lapata ILCC, School of Informatics, University of Edinburgh 10 Crichton Street, Edinburgh EH8 9AB [email protected] [email protected] Abstract Traditional approaches to extractive summarization rely heavily on humanengineered features. In this work we propose a data-driven approach based on neural networks and continuous sentence features. We develop a general framework for single-document summarization composed of a hierarchical document encoder and an attention-based extractor. This architecture allows us to develop different classes of summarization models which can extract sentences or words. We train our models on large scale corpora containing hundreds of thousands of document-summary pairs1. Experimental results on two summarization datasets demonstrate that our models obtain results comparable to the state of the art without any access to linguistic annotation. 1 Introduction The need to access and digest large amounts of textual data has provided strong impetus to develop automatic summarization systems aiming to create shorter versions of one or more documents, whilst preserving their information content. Much effort in automatic summarization has been devoted to sentence extraction, where a summary is created by identifying and subsequently concatenating the most salient text units in a document. Most extractive methods to date identify sentences based on human-engineered features. These include surface features such as sentence position and length (Radev et al., 2004), the words in the title, the presence of proper nouns, content features such as word frequency (Nenkova et al., 2006), and event features such as action nouns (Filatova and Hatzivassiloglou, 2004). Sentences are 1Resources are available for download at http:// homepages.inf.ed.ac.uk/s1537177/resources.html typically assigned a score indicating the strength of presence of these features. Several methods have been used in order to select the summary sentences ranging from binary classifiers (Kupiec et al., 1995), to hidden Markov models (Conroy and O’Leary, 2001), graph-based algorithms (Erkan and Radev, 2004; Mihalcea, 2005), and integer linear programming (Woodsend and Lapata, 2010). In this work we propose a data-driven approach to summarization based on neural networks and continuous sentence features. There has been a surge of interest recently in repurposing sequence transduction neural network architectures for NLP tasks such as machine translation (Sutskever et al., 2014), question answering (Hermann et al., 2015), and sentence compression (Rush et al., 2015). Central to these approaches is an encoderdecoder architecture modeled by recurrent neural networks. The encoder reads the source sequence into a list of continuous-space representations from which the decoder generates the target sequence. An attention mechanism (Bahdanau et al., 2015) is often used to locate the region of focus during decoding. We develop a general framework for singledocument summarization which can be used to extract sentences or words. Our model includes a neural network-based hierarchical document reader or encoder and an attention-based content extractor. 
The role of the reader is to derive the meaning representation of a document based on its sentences and their constituent words. Our models adopt a variant of neural attention to extract sentences or words. Contrary to previous work where attention is an intermediate step used to blend hidden units of an encoder to a vector propagating additional information to the decoder, our model applies attention directly to select sentences or words of the input document as the output summary. Similar neural attention architectures have been previously used for geometry reasoning (Vinyals et al., 2015), under the name Pointer Networks. 484 One stumbling block to applying neural network models to extractive summarization is the lack of training data, i.e., documents with sentences (and words) labeled as summary-worthy. Inspired by previous work on summarization (Woodsend and Lapata, 2010; Svore et al., 2007) and reading comprehension (Hermann et al., 2015) we retrieve hundreds of thousands of news articles and corresponding highlights from the DailyMail website. Highlights usually appear as bullet points giving a brief overview of the information contained in the article (see Figure 1 for an example). Using a number of transformation and scoring algorithms, we are able to match highlights to document content and construct two large scale training datasets, one for sentence extraction and the other for word extraction. Previous approaches have used small scale training data in the range of a few hundred examples. Our work touches on several strands of research within summarization and neural sequence modeling. The idea of creating a summary by extracting words from the source document was pioneered in Banko et al. (2000) who view summarization as a problem analogous to statistical machine translation and generate headlines using statistical models for selecting and ordering the summary words. Our word-based model is similar in spirit, however, it operates over continuous representations, produces multi-sentence output, and jointly selects summary words and organizes them into sentences. A few recent studies (Kobayashi et al., 2015; Yogatama et al., 2015) perform sentence extraction based on pre-trained sentence embeddings following an unsupervised optimization paradigm. Our work also uses continuous representations to express the meaning of sentences and documents, but importantly employs neural networks more directly to perform the actual summarization task. Rush et al. (2015) propose a neural attention model for abstractive sentence compression which is trained on pairs of headlines and first sentences in an article. In contrast, our model summarizes documents rather than individual sentences, producing multi-sentential discourse. A major architectural difference is that our decoder selects output symbols from the document of interest rather than the entire vocabulary. This effectively helps us sidestep the difficulty of searching for the next output symbol under a large vocabulary, with lowfrequency words and named entities whose representations can be challenging to learn. Gu et al. (2016) and Gulcehre et al. (2016) propose a similar “copy” mechanism in sentence compression and other tasks; their model can accommodate both generation and extraction by selecting which sub-sequences in the input sequence to copy in the output. 
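To make the contrast concrete, the sketch below shows attention used directly as a selection mechanism: scores over input positions are normalized, and that distribution itself picks an input sentence rather than being blended into a context vector. This is only a schematic illustration with toy weights and dimensions, not the architecture detailed in Section 4.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pointer_select(decoder_state, encoder_states, W_dec, W_enc, v):
    """Score each input position by comparing the decoder state with each
    encoder state, then normalize over input positions. The distribution is
    used directly to pick an input unit (a sentence or word) as output."""
    scores = np.array([v @ np.tanh(W_dec @ decoder_state + W_enc @ h)
                       for h in encoder_states])
    probs = softmax(scores)
    return probs, int(np.argmax(probs))

# Toy example: 4 input sentences with 8-dimensional encoder states
rng = np.random.default_rng(0)
enc = [rng.standard_normal(8) for _ in range(4)]
dec = rng.standard_normal(8)
W_d, W_e, v = rng.standard_normal((8, 8)), rng.standard_normal((8, 8)), rng.standard_normal(8)
probs, picked = pointer_select(dec, enc, W_d, W_e, v)
print(probs.round(3), picked)   # distribution over the 4 input sentences, index of the chosen one
```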
We evaluate our models both automatically (in terms of ROUGE) and by humans on two datasets: the benchmark DUC 2002 document summarization corpus and our own DailyMail news highlights corpus. Experimental results show that our summarizers achieve performance comparable to state-of-the-art systems employing handengineered features and sophisticated linguistic constraints. 2 Problem Formulation In this section we formally define the summarization tasks considered in this paper. Given a document D consisting of a sequence of sentences {s1,··· ,sm} and a word set {w1,··· ,wn}, we are interested in obtaining summaries at two levels of granularity, namely sentences and words. Sentence extraction aims to create a summary from D by selecting a subset of j sentences (where j < m). We do this by scoring each sentence within D and predicting a label yL ∈{0,1} indicating whether the sentence should be included in the summary. As we apply supervised training, the objective is to maximize the likelihood of all sentence labels yL = (y1 L,··· ,ym L ) given the input document D and model parameters θ: log p(yL|D;θ) = m ∑ i=1 log p(yi L|D;θ) (1) Although extractive methods yield naturally grammatical summaries and require relatively little linguistic analysis, the selected sentences make for long summaries containing much redundant information. For this reason, we also develop a model based on word extraction which seeks to find a subset of words2 in D and their optimal ordering so as to form a summary ys = (w′ 1,··· ,w′ k),w′ i ∈D. Compared to sentence extraction which is a sequence labeling problem, this task occupies the middle ground between full abstractive summarization which can exhibit a wide range of rewrite operations and extractive 2The vocabulary can also be extended to include a small set of commonly-used (high-frequency) words. 485 AFL star blames vomiting cat for speeding :::::::: Adelaide:::::: Crows::::::::: defender :::::: Daniel::::: Talia:::: has :::: kept::: his::::::: driving:::::::: license, :::::: telling::a ::::: court::: he :::: was :::::::: speeding ::::: 36km:::: over:::: the :::: limit:::::::: because ::he:::: was::::::::: distracted::: by::: his:::: sick:::: cat. The 22-year-old AFL star, who drove 96km/h in a 60km/h road works zone on the South Eastern expressway in February, said he didn’t see the reduced speed sign because he was so distracted by his cat vomiting violently in the back seat of his car. :: In:::: the ::::::::: Adelaide ::::::::::: magistrates ::::: court::: on:::::::::::: Wednesday,:::::::::: Magistrate::::: Bob::::::: Harrap:::::: fined ::::: Talia:::::: $824 ::: for ::::::::: exceeding::: the:::::: speed :::: limit::: by::::: more:::: than:::::::: 30km/h. He lost four demerit points, instead of seven, because of his significant training commitments. • Adelaide Crows defender Daniel Talia admits to speeding but says he didn’t see road signs because his cat was vomiting in his car. • 22-year-old Talia was fined $824 and four demerit points, instead of seven, because of his ’significant’ training commitments. Figure 1: DailyMail news article with highlights. Underlined sentences bear label 1, and 0 otherwise. summarization which exhibits none. We formulate word extraction as a language generation task with an output vocabulary restricted to the original document. 
In our supervised setting, the training goal is to maximize the likelihood of the generated sentences, which can be further decomposed by enforcing conditional dependencies among their constituent words: log p(ys|D;θ)= k ∑ i=1 log p(w′ i|D,w′ 1,···,w′ i−1;θ) (2) In the following section, we discuss the data elicitation methods which allow us to train neural networks based on the above defined objectives. 3 Training Data for Summarization Data-driven neural summarization models require a large training corpus of documents with labels indicating which sentences (or words) should be in the summary. Until now such corpora have been limited to hundreds of examples (e.g., the DUC 2002 single document summarization corpus) and thus used mostly for testing (Woodsend and Lapata, 2010). To overcome the paucity of annotated data for training, we adopt a methodology similar to Hermann et al. (2015) and create two large-scale datasets, one for sentence extraction and another one for word extraction. In a nutshell, we retrieved3 hundreds of thousands of news articles and their corresponding highlights from DailyMail (see Figure 1 for an example). The highlights (created by news editors) 3The script for constructing our datasets is modified from the one released in Hermann et al. (2015). are genuinely abstractive summaries and therefore not readily suited to supervised training. To create the training data for sentence extraction, we reverse approximated the gold standard label of each document sentence given the summary based on their semantic correspondence (Woodsend and Lapata, 2010). Specifically, we designed a rulebased system that determines whether a document sentence matches a highlight and should be labeled with 1 (must be in the summary), and 0 otherwise. The rules take into account the position of the sentence in the document, the unigram and bigram overlap between document sentences and highlights, the number of entities appearing in the highlight and in the document sentence. We adjusted the weights of the rules on 9,000 documents with manual sentence labels created by Woodsend and Lapata (2010). The method obtained an accuracy of 85% when evaluated on a held-out set of 216 documents coming from the same dataset and was subsequently used to label 200K documents. Approximately 30% of the sentences in each document were deemed summary-worthy. For the creation of the word extraction dataset, we examine the lexical overlap between the highlights and the news article. In cases where all highlight words (after stemming) come from the original document, the document-highlight pair constitutes a valid training example and is added to the word extraction dataset. For out-of-vocabulary (OOV) words, we try to find a semantically equivalent replacement present in the news article. Specifically, we check if a neighbor, represented 486 by pre-trained4 embeddings, is in the original document and therefore constitutes a valid substitution. If we cannot find any substitutes, we discard the document-highlight pair. Following this procedure, we obtained a word extraction dataset containing 170K articles, again from the DailyMail. 4 Neural Summarization Model The key components of our summarization model include a neural network-based hierarchical document reader and an attention-based hierarchical content extractor. The hierarchical nature of our model reflects the intuition that documents are generated compositionally from words, sentences, paragraphs, or even larger units. 
We therefore employ a representation framework which reflects the same architecture, with global information being discovered and local information being preserved. Such a representation yields minimum information loss and is flexible allowing us to apply neural attention for selecting salient sentences and words within a larger context. In the following, we first describe the document reader, and then present the details of our sentence and word extractors. 4.1 Document Reader The role of the reader is to derive the meaning representation of the document from its constituent sentences, each of which is treated as a sequence of words. We first obtain representation vectors at the sentence level using a single-layer convolutional neural network (CNN) with a max-overtime pooling operation (Kalchbrenner and Blunsom, 2013; Zhang and Lapata, 2014; Kim et al., 2016). Next, we build representations for documents using a standard recurrent neural network (RNN) that recursively composes sentences. The CNN operates at the word level, leading to the acquisition of sentence-level representations that are then used as inputs to the RNN that acquires document-level representations, in a hierarchical fashion. We describe these two sub-components of the text reader below. Convolutional Sentence Encoder We opted for a convolutional neural network model for representing sentences for two reasons. Firstly, singlelayer CNNs can be trained effectively (without any long-term dependencies in the model) and secondly, they have been successfully used for 4We used the Python Gensim library and the 300-dimensional GoogleNews vectors. sentence-level classification tasks such as sentiment analysis (Kim, 2014). Let d denote the dimension of word embeddings, and s a document sentence consisting of a sequence of n words (w1,··· ,wn) which can be represented by a dense column matrix W ∈Rn×d. We apply a temporal narrow convolution between W and a kernel K ∈Rc×d of width c as follows: fi j = tanh(W j:j+c−1 ⊗K+b) (3) where ⊗equates to the Hadamard Product followed by a sum over all elements. fi j denotes the j-th element of the i-th feature map fi and b is the bias. We perform max pooling over time to obtain a single feature (the ith feature) representing the sentence under the kernel K with width c: si,K = max j fi j (4) In practice, we use multiple feature maps to compute a list of features that match the dimensionality of a sentence under each kernel width. In addition, we apply multiple kernels with different widths to obtain a set of different sentence vectors. Finally, we sum these sentence vectors to obtain the final sentence representation. The CNN model is schematically illustrated in Figure 2 (bottom). In the example, the sentence embeddings have six dimensions, so six feature maps are used under each kernel width. The blue feature maps have width two and the red feature maps have width three. The sentence embeddings obtained under each kernel width are summed to get the final sentence representation (denoted by green). Recurrent Document Encoder At the document level, a recurrent neural network composes a sequence of sentence vectors into a document vector. Note that this is a somewhat simplistic attempt at capturing document organization at the level of sentence to sentence transitions. One might view the hidden states of the recurrent neural network as a list of partial representations with each focusing mostly on the corresponding input sentence given the previous context. 
These representations altogether constitute the document representation, which captures local and global sentential information with minimum compression. The RNN we used has a Long Short-Term Memory (LSTM) activation unit for ameliorating the vanishing gradient problem when training long sequences (Hochreiter and Schmidhuber, 487 Figure 2: A recurrent convolutional document reader with a neural sentence extractor. 1997). Given a document d = (s1,··· ,sm), the hidden state at time step t, denoted by ht, is updated as:   it ft ot ˆct  =   σ σ σ tanh  W· ht−1 st  (5) ct = ft ⊙ct−1 +it ⊙ˆct (6) ht = ot ⊙tanh(ct) (7) where W is a learnable weight matrix. Next, we discuss a special attention mechanism for extracting sentences and words given the recurrent document encoder just described, starting from the sentence extractor. 4.2 Sentence Extractor In the standard neural sequence-to-sequence modeling paradigm (Bahdanau et al., 2015), an attention mechanism is used as an intermediate step to decide which input region to focus on in order to generate the next output. In contrast, our sentence extractor applies attention to directly extract salient sentences after reading them. The extractor is another recurrent neural network that labels sentences sequentially, taking into account not only whether they are individually relevant but also mutually redundant. The complete architecture for the document encoder and the sentence extractor is shown in Figure 2. As can be seen, the next labeling decision is made Figure 3: Neural attention mechanism for word extraction. with both the encoded document and the previously labeled sentences in mind. Given encoder hidden states (h1,··· ,hm) and extractor hidden states (¯h1,··· , ¯hm) at time step t, the decoder attends the t-th sentence by relating its current decoding state to the corresponding encoding state: ¯ht = LSTM(pt−1st−1, ¯ht−1) (8) p(yL(t) = 1|D) = σ(MLP(¯ht : ht)) (9) where MLP is a multi-layer neural network with as input the concatenation of ¯ht and ht. pt−1 represents the degree to which the extractor believes the previous sentence should be extracted and memorized (pt−1=1 if the system is certain; 0 otherwise). In practice, there is a discrepancy between training and testing such a model. During training we know the true label pt−1 of the previous sentence, whereas at test time pt−1 is unknown and has to be predicted by the model. The discrepancy can lead to quickly accumulating prediction errors, especially when mistakes are made early in the sequence labeling process. To mitigate this, we adopt a curriculum learning strategy (Bengio et al., 2015): at the beginning of training when pt−1 cannot be predicted accurately, we set it to the true label of the previous sentence; as training goes on, we gradually shift its value to the predicted label p(yL(t −1) = 1|d). 4.3 Word Extractor Compared to sentence extraction which is a purely sequence labeling task, word extraction is closer to a generation task where relevant content must be selected and then rendered fluently and grammatically. A small extension to the structure of the sequential labeling model makes it suitable for generation: instead of predicting a label for the next sentence at each time step, the model directly outputs the next word in the summary. 
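To make the sentence extractor of Equations (8)-(9) concrete before turning to word extraction, here is a simplified NumPy sketch of decoding with scheduled sampling; the LSTM cell follows Equations (5)-(7) with a single stacked gate matrix, a single linear layer stands in for the MLP, and `sample_gold_prob` is a stand-in for the curriculum schedule.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """Equations (5)-(7), with W mapping [x; h_prev] to the stacked i, f, o, c-hat gates."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2 * H]), sigmoid(z[2 * H:3 * H])
    c = f * c_prev + i * np.tanh(z[3 * H:])
    return o * np.tanh(c), c

def extract_sentences(S, H_enc, W, b, w_out, b_out, gold=None, sample_gold_prob=1.0):
    """S: (m, d) sentence vectors; H_enc: (m, H) encoder states h_t.
    Returns p(y_t = 1 | D) for each sentence (Eqs. 8-9); during training the gold label of
    the previous sentence is fed with probability sample_gold_prob (curriculum stand-in)."""
    m, d = S.shape
    H = H_enc.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    p_prev, probs = 1.0, []
    for t in range(m):
        # Eq. (8): the previous confidence p_{t-1} gates how much of s_{t-1} is carried over
        # (the very first step has no previous sentence, so a zero vector is fed instead).
        x = p_prev * S[t - 1] if t > 0 else np.zeros(d)
        h, c = lstm_step(x, h, c, W, b)
        # Eq. (9): a single linear layer over [h_bar_t ; h_t] stands in for the MLP.
        p = float(sigmoid(w_out @ np.concatenate([h, H_enc[t]]) + b_out))
        probs.append(p)
        use_gold = gold is not None and np.random.rand() < sample_gold_prob
        p_prev = float(gold[t]) if use_gold else p
    return probs

# Toy run: 4 sentences, 6-dimensional sentence vectors, hidden size 8.
rng = np.random.RandomState(2)
m, d, H = 4, 6, 8
S, H_enc = rng.randn(m, d), rng.randn(m, H)
W, b = 0.1 * rng.randn(4 * H, d + H), np.zeros(4 * H)
w_out, b_out = 0.1 * rng.randn(2 * H), 0.0
print(extract_sentences(S, H_enc, W, b, w_out, b_out, gold=[1, 0, 1, 0]))
```

We now turn to the architecture of the word extractor.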
The 488 model uses a hierarchical attention architecture: at time step t, the decoder softly5 attends each document sentence and subsequently attends each word in the document and computes the probability of the next word to be included in the summary p(w′ t = wi|d,w′ 1,··· ,w′ t−1) with a softmax classifier: ¯ht = LSTM(w′t−1, ¯ht−1)6 (10) at j = zT tanh(We ¯ht +Wrh j),hj ∈D (11) bt j = softmax(at j) (12) ˜ht = m ∑ j=1 bt jh j (13) ut i = vT tanh(We′ ˜ht +Wr′wi),wi ∈D (14) p(w′ t = wi|D,w′ 1,··· ,w′ t−1) = softmax(ut i) (15) In the above equations, wi corresponds to the vector of the i-th word in the input document, whereas z, We, Wr, v, We′, and Wr′ are model weights. The model architecture is shown in Figure 3. The word extractor can be viewed as a conditional language model with a vocabulary constraint. In practice, it is not powerful enough to enforce grammaticality due to the lexical diversity and sparsity of the document highlights. A possible enhancement would be to pair the extractor with a neural language model, which can be pretrained on a large amount of unlabeled documents and then jointly tuned with the extractor during decoding (Gulcehre et al., 2015). A simpler alternative which we adopt is to use n-gram features collected from the document to rerank candidate summaries obtained via beam decoding. We incorporate the features in a log-linear reranker whose feature weights are optimized with minimum error rate training (Och, 2003). 5 Experimental Setup In this section we present our experimental setup for assessing the performance of our summarization models. We discuss the datasets used for 5A simpler model would use hard attention to select a sentence first and then a few words from it as a summary, but this would render the system non-differentiable for training. Although hard attention can be trained with the REINFORCE algorithm (Williams, 1992), it requires sampling of discrete actions and could lead to high variance. 6We empirically found that feeding the previous sentencelevel attention vector as additional input to the LSTM would lead to small performance improvements. This is not shown in the equation. training and evaluation, give implementation details, briefly introduce comparison models, and explain how system output was evaluated. Datasets We trained our sentence- and wordbased summarization models on the two datasets created from DailyMail news. Each dataset was split into approximately 90% for training, 5% for validation, and 5% for testing. We evaluated the models on the DUC-2002 single document summarization task. In total, there are 567 documents belonging to 59 different clusters of various news topics. Each document is associated with two versions of 100-word7 manual summaries produced by human annotators. We also evaluated our models on 500 articles from the DailyMail test set (with the human authored highlights as goldstandard). We sampled article-highlight pairs so that the highlights include a minimum of 3 sentences. The average byte count for each document is 278. Implementation Details We trained our models with Adam (Kingma and Ba, 2014) with initial learning rate 0.001. The two momentum parameters were set to 0.99 and 0.999 respectively. We performed mini-batch training with a batch size of 20 documents. All input documents were padded to the same length with an additional mask variable storing the real length for each document. The size of word, sentence, and document embeddings were set to 150, 300, and 750, respectively. 
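The hierarchical attention step of Equations (10)-(15) can be made concrete with the following NumPy sketch; all weights are random placeholders, the decoder LSTM update of Equation (10) is elided, and the attention hidden size `A` is an illustrative choice.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def word_extraction_step(h_bar, H_sent, W_doc_words, z, We, Wr, v, We_p, Wr_p):
    """One decoding step of the hierarchical attention word extractor.
    h_bar: (H,) decoder state (the LSTM update of Eq. 10 is not shown here).
    H_sent: (m, H) sentence-level encoder states h_j.
    W_doc_words: (n_words, d) embeddings of the candidate words in the document.
    We_p / Wr_p play the roles of We' / Wr' in Eq. (14)."""
    # Eqs. (11)-(12): attention scores over the document sentences.
    a = np.array([z @ np.tanh(We @ h_bar + Wr @ h_j) for h_j in H_sent])
    b = softmax(a)
    # Eq. (13): attention-weighted sentence context.
    h_tilde = (b[:, None] * H_sent).sum(axis=0)
    # Eqs. (14)-(15): score each document word against the context; the softmax is
    # restricted to document words, i.e. the vocabulary constraint described in the text.
    u = np.array([v @ np.tanh(We_p @ h_tilde + Wr_p @ w_i) for w_i in W_doc_words])
    return softmax(u)            # p(w'_t = w_i | D, w'_1 ... w'_{t-1})

# Toy dimensions: hidden size 8, 3 sentences, 5 candidate words, word embeddings of size 6.
rng = np.random.RandomState(3)
H, m, n_words, d, A = 8, 3, 5, 6, 7          # A = attention hidden size (illustrative)
h_bar, H_sent = rng.randn(H), rng.randn(m, H)
W_doc_words = rng.randn(n_words, d)
z, We, Wr = rng.randn(A), rng.randn(A, H), rng.randn(A, H)
v, We_p, Wr_p = rng.randn(A), rng.randn(A, H), rng.randn(A, d)
probs = word_extraction_step(h_bar, H_sent, W_doc_words, z, We, Wr, v, We_p, Wr_p)
print(probs, probs.sum())                    # a distribution over the document words
```

The remaining implementation details are given below.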
For the convolutional sentence model, we followed Kim et al. (2016)8 and used a list of kernel sizes {1, 2, 3, 4, 5, 6, 7}. For the recurrent document model and the sentence extractor, we used as regularization dropout with probability 0.5 on the LSTM input-to-hidden layers and the scoring layer. The depth of each LSTM module was 1. All LSTM parameters were randomly initialized over a uniform distribution within [-0.05, 0.05]. The word vectors were initialized with 150 dimensional pre-trained embeddings.9 Proper nouns pose a problem for embeddingbased approaches, especially when these are rare 7According to the DUC2002 guidelines http: //www-nlpir.nist.gov/projects/duc/guidelines/ 2002.html, the generated summary should be within 100 words. 8The CNN-LSTM architecture is publicly available at https://github.com/yoonkim/lstm-char-cnn. 9We used the word2vec (Mikolov et al., 2013) skip-gram model with context window size 6, negative sampling size 10 and hierarchical softmax 1. The model was trained on the Google 1-billion word benchmark (Chelba et al., 2014). 489 or unknown (e.g., at test time). Rush et al. (2015) address this issue by adding a new set of features and a log-linear model component to their system. As our model enjoys the advantage of generation by extraction, we can force the model to inspect the context surrounding an entity and its relative position in the sentence in order to discover extractive patterns, placing less emphasis on the meaning representation of the entity itself. Specifically, we perform named entity recognition with the package provided by Hermann et al. (2015) and maintain a set of randomly initialized entity embeddings. During training, the index of the entities is permuted to introduce some noise but also robustness in the data. A similar data augmentation approach has been used for reading comprehension (Hermann et al., 2015). A common problem with extractive methods based on sentence labeling is that there is no constraint on the number of sentences being selected at test time. We address this by reranking the positively labeled sentences with the probability scores obtained from the softmax layer (rather than the label itself). In other words, we are more interested in is the relative ranking of each sentence rather than their exact scores. This suggests that an alternative to training the network would be to employ a ranking-based objective or a learning to rank algorithm. However, we leave this to future work. We use the three sentences with the highest scores as the summary (also subject to the word or byte limit of the evaluation protocol). Another issue relates to the word extraction model which is challenging to batch since each document possesses a distinct vocabulary. We sidestep this during training by performing negative sampling (Mikolov et al., 2013) which trims the vocabulary of different documents to the same length. At each decoding step the model is trained to differentiate the true target word from 20 noise samples. At test time we still loop through the words in the input document (and a stop-word list) to decide which word to output next. System Comparisons We compared the output of our models to various summarization methods. These included the standard baseline of simply selecting the “leading” three sentences from each document as the summary. We also built a sentence extraction baseline classifier using logistic regression and human engineered features. 
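The test-time selection heuristic described above, which ranks sentences by their extraction probability rather than their hard label, can be sketched as follows; the word budget and helper name are illustrative, and budget handling is simplified relative to the DUC/DailyMail protocols.

```python
def select_summary(sentences, probs, max_sentences=3, word_budget=100):
    """sentences: list of token lists; probs: p(y = 1) from the extractor's softmax layer.
    Sentences are ranked by probability rather than by their hard 0/1 label, and the three
    highest-scoring ones that fit the budget are kept, then restored to document order."""
    ranked = sorted(range(len(sentences)), key=lambda i: probs[i], reverse=True)
    chosen, used = [], 0
    for i in ranked:
        if len(chosen) == max_sentences:
            break
        if used + len(sentences[i]) > word_budget:
            continue
        chosen.append(i)
        used += len(sentences[i])
    return [" ".join(sentences[i]) for i in sorted(chosen)]

doc = [["the", "driver", "was", "not", "hurt"],
       ["a", "gang", "set", "the", "car", "on", "fire"],
       ["police", "are", "investigating", "the", "arson"],
       ["the", "car", "was", "a", "white", "sedan"]]
print(select_summary(doc, probs=[0.35, 0.92, 0.81, 0.40]))
```

Returning to the system comparisons, the logistic regression (LREG) baseline relies on the hand-engineered features described next.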
The classifier was trained on the same datasets as our neural network models with the following features: sentence length, sentence position, number of entities in the sentence, sentence-tosentence cohesion, and sentence-to-document relevance. Sentence-to-sentence cohesion was computed by calculating for every document sentence its embedding similarity with every other sentence in the same document. The feature was the normalized sum of these similarity scores. Sentence embeddings were obtained by averaging the constituent word embeddings. Sentence-to-document relevance was computed similarly. We calculated for each sentence its embedding similarity with the document (represented as bag-of-words), and normalized the score. The word embeddings used in this baseline are the same as the pre-trained ones used for our neural models. In addition, we included a neural abstractive summarization baseline. This system has a similar architecture to our word extraction model except that it uses an open vocabulary during decoding. It can also be viewed as a hierarchical documentlevel extension of the abstractive sentence summarizer proposed by Rush et al. (2015). We trained this model with negative sampling to avoid the excessive computation of the normalization constant. Finally, we compared our models to three previously published systems which have shown competitive performance on the DUC2002 single document summarization task. The first approach is the phrase-based extraction model of Woodsend and Lapata (2010). Their system learns to produce highlights from parsed input (phrase structure trees and dependency graphs); it selects salient phrases and recombines them subject to length, coverage, and grammar constraints enforced via integer linear programming (ILP). Like ours, this model is trained on document-highlight pairs, and produces telegraphic-style bullet points rather than full-blown summaries. The other two systems, TGRAPH (Parveen et al., 2015) and URANK (Wan, 2010), produce more typical summaries and represent the state of the art. TGRAPH is a graph-based sentence extraction model, where the graph is constructed from topic models and the optimization is performed by constrained ILP. URANK adopts a unified ranking system for both single- and multidocument summarization. Evaluation We evaluated the quality of the summaries automatically using ROUGE (Lin and Hovy, 2003). We report unigram and bigram over490 DUC 2002 ROUGE-1 ROUGE-2 ROUGE-L LEAD 43.6 21.0 40.2 LREG 43.8 20.7 40.3 ILP 45.4 21.3 42.8 NN-ABS 15.8 5.2 13.8 TGRAPH 48.1 24.3 — URANK 48.5 21.5 — NN-SE 47.4 23.0 43.5 NN-WE 27.0 7.9 22.8 DailyMail ROUGE-1 ROUGE-2 ROUGE-L LEAD 20.4 7.7 11.4 LREG 18.5 6.9 10.2 NN-ABS 7.8 1.7 7.1 NN-SE 21.2 8.3 12.0 NN-WE 15.7 6.4 9.8 Table 1: ROUGE evaluation (%) on the DUC-2002 and 500 samples from the DailyMail. lap (ROUGE-1,2) as a means of assessing informativeness and the longest common subsequence (ROUGE-L) as a means of assessing fluency. In addition, we evaluated the generated summaries by eliciting human judgments for 20 randomly sampled DUC 2002 test documents. Participants were presented with a news article and summaries generated by a list of systems. These include two neural network systems (sentenceand word-based extraction), the neural abstractive system described earlier, the lead baseline, the phrase-based ILP model10 of Woodsend and Lapata (2010), and the human authored summary. 
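The cohesion and relevance features of the LREG baseline can be pictured with the sketch below, assuming sentences are represented by averaged word embeddings; dividing the similarity sum by the number of comparisons is one plausible reading of "normalized sum", and representing the document by the mean of its sentence vectors approximates the bag-of-words document representation.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def sentence_vectors(sentences, emb, d=50):
    """Represent each sentence by the average of its constituent word embeddings."""
    return [np.mean([emb.get(w, np.zeros(d)) for w in s], axis=0) for s in sentences]

def cohesion_and_relevance(sentences, emb, d=50):
    vecs = sentence_vectors(sentences, emb, d)
    doc_vec = np.mean(vecs, axis=0)        # stands in for the bag-of-words document vector
    feats = []
    for i, v in enumerate(vecs):
        sims = [cosine(v, u) for j, u in enumerate(vecs) if j != i]
        cohesion = sum(sims) / max(len(sims), 1)    # normalised sum of pairwise similarities
        relevance = cosine(v, doc_vec)
        feats.append((cohesion, relevance))
    return feats

rng = np.random.RandomState(4)
emb = {w: rng.randn(50) for w in "a gang set the car on fire police investigated".split()}
sents = [["a", "gang", "set", "the", "car", "on", "fire"],
         ["police", "investigated", "the", "fire"]]
print(cohesion_and_relevance(sents, emb))
```

We now return to the human evaluation setup.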
Subjects were asked to rank the summaries from best to worst (with ties allowed) in order of informativeness (does the summary capture important information in the article?) and fluency (is the summary written in well-formed English?). We elicited human judgments using Amazon’s Mechanical Turk crowdsourcing platform. Participants (self-reported native English speakers) saw 2 random articles per session. We collected 5 responses per document. 6 Results Table 1 (top half) summarizes our results on the DUC 2002 test dataset using ROUGE. NN-SE represents our neural sentence extraction model, 10We are grateful to Kristian Woodsend for giving us access to the output of his system. Unfortunately, we do not have access to the output of TGRAPH or URANK for inclusion in the human evaluation. Models 1st 2nd 3rd 4th 5th 6th MeanR LEAD 0.10 0.17 0.37 0.15 0.16 0.05 3.27 ILP 0.19 0.38 0.13 0.13 0.11 0.06 2.77 NN-SE 0.22 0.28 0.21 0.14 0.12 0.03 2.74 NN-WE 0.00 0.04 0.03 0.21 0.51 0.20 4.79 NN-ABS 0.00 0.01 0.05 0.16 0.23 0.54 5.24 Human 0.27 0.23 0.29 0.17 0.03 0.01 2.51 Table 2: Rankings (shown as proportions) and mean ranks given to systems by human participants (lower is better). NN-WE our word extraction model, and NN-ABS the neural abstractive baseline. The table also includes results for the LEAD baseline, the logistic regression classifier (LREG), and three previously published systems (ILP, TGRAPH, and URANK). The NN-SE outperforms the LEAD and LREG baselines with a significant margin, while performing slightly better than the ILP model. This is an encouraging result since our model has only access to embedding features obtained from raw text. In comparison, LREG uses a set of manually selected features, while the ILP system takes advantage of syntactic information and extracts summaries subject to well-engineered linguistic constraints, which are not available to our models. Overall, our sentence extraction model achieves performance comparable to the state of the art without sophisticated constraint optimization (ILP, TGRAPH) or sentence ranking mechanisms (URANK). We visualize the sentence weights of the NN-SE model in the top half of Figure 4. As can be seen, the model is able to locate text portions which contribute most to the overall meaning of the document. ROUGE scores for the word extraction model are less promising. This is somewhat expected given that ROUGE is n-gram based and not very well suited to measuring summaries which contain a significant amount of paraphrasing and may deviate from the reference even though they express similar meaning. However, a meaningful comparison can be carried out between NN-WE and NN-ABS which are similar in spirit. We observe that NN-WE consistently outperforms the purely abstractive model. As NN-WE generates summaries by picking words from the original document, decoding is easier for this model compared to NN-ABS which deals with an open vocabulary. 
The extraction-based generation approach is more robust for proper nouns and rare words, which pose a serious problem to open vocabulary mod491 sentence extraction: a gang of at least three people poured gasoline on a car that stopped to fill up at entity5 gas station early on Saturday morning and set the vehicle on fire a gang of at least three people poured gasoline on a car that stopped to fill up at entity5 gas station early on Saturday morning and set the vehicle on fire the driver of the car, who has not been identified, said he got into an argument with the suspects while he was pumping gas at a entity13 in entity14 the driver of the car, who has not been identified, said he got into an argument with the suspects while he was pumping gas at a entity13 in entity14 the group covered his white entity16 in gasoline and lit it ablaze while there were two passengers inside the group covered his white entity16 in gasoline and lit it ablaze while there were two passengers inside at least three people poured gasoline on a car and lit it on fire at a entity14 gas station explosive situation the passengers and the driver were not hurt during the incident but the car was completely ruined the man’s grandmother said the fire was lit after the suspects attempted to carjack her grandson, entity33 reported the man’s grandmother said the fire was lit after the suspects attempted to carjack her grandson, entity33 reported she said:’ he said he was pumping gas and some guys came up and asked for the car ’ they pulled out a gun and he took off running ’ they took the gas tank and started spraying ’ no one was injured during the fire , but the car ’s entire front end was torched , according to entity52 the entity53 is investigating the incident as an arson and the suspects remain at large the entity53 is investigating the incident as an arson and the suspects remain at large surveillance video of the incident is being used in the investigation before the fire , which occurred at 12:15am on Saturday , the suspects tried to carjack the man hot case the entity53 is investigating the incident at the entity67 station as an arson word extraction: gang poured gasoline in the car, entity5 Saturday morning. the driver argued with the suspects. his grandmother said the fire was lit by the suspects attempted to carjack her grandson. entities: entity5:California entity13:76-Station entity14: South LA entity16:Dodge Charger entity33:ABC entity52:NBC entity53:LACFD entity67:LA76 Figure 4: Visualization of the summaries for a DailyMail article. The top half shows the relative attention weights given by the sentence extraction model. Darkness indicates sentence importance. The lower half shows the summary generated by the word extraction. els. An example of the generated summaries for NN-WE is shown at the lower half of Figure 4. Table 1 (lower half) also shows system results on the 500 DailyMail news articles (test set). In general, we observe similar trends to DUC 2002, with NN-SE performing the best in terms of all ROUGE metrics. Note that scores here are generally lower compared to DUC 2002. This is due to the fact that the gold standard summaries (aka highlights) tend to be more laconic and as a result involve a substantial amount of paraphrasing. The results of our human evaluation study are shown in Table 2. Specifically, we show, proportionally, how often our participants ranked each system 1st, 2nd, and so on. 
Perhaps unsurprisingly, the human-written descriptions were considered best and ranked 1st 27% of the time, however closely followed by our NN-SE model which was ranked 1st 22% of the time. The ILP system was mostly ranked in 2nd place (38% of the time). The rest of the systems occupied lower ranks. We further converted the ranks to ratings on a scale of 1 to 6 (assigning ratings 6...1 to rank placements 1...6). This allowed us to perform Analysis of Variance (ANOVA) which revealed a reliable effect of system type. Specifically, post-hoc Tukey tests showed that NN-SE and ILP are significantly (p < 0.01) better than LEAD, NN-WE, and NN-ABS but do not differ significantly from each other or the human goldstandard. 7 Conclusions In this work we presented a data-driven summarization framework based on an encoder-extractor architecture. We developed two classes of models based on sentence and word extraction. Our models can be trained on large scale datasets and learn informativeness features based on continuous representations without recourse to linguistic annotations. Two important ideas behind our work are the creation of hierarchical neural structures that reflect the nature of the summarization task and generation by extraction. The later effectively enables us to sidestep the difficulties of generating under a large vocabulary, essentially covering the entire dataset, with many low-frequency words and named entities. Directions for future work are many and varied. One way to improve the word-based model would be to take structural information into account during generation, e.g., by combining it with a tree-based algorithm (Cohn and Lapata, 2009). It would also be interesting to apply the neural models presented here in a phrase-based setting similar to Lebret et al. (2015). A third direction would be to adopt an information theoretic perspective and devise a purely unsupervised approach that selects summary sentences and words so as to minimize information loss, a task possibly achievable with the dataset created in this work. Acknowledgments We would like to thank three anonymous reviewers and members of the ILCC at the School of Informatics for their valuable feedback. The support of the European Research Council under award number 681760 “Translating Multiple Modalities into Text” is gratefully acknowledged. 492 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR 2015, San Diego, California. Michele Banko, Vibhu O. Mittal, and Michael J. Witbrock. 2000. Headline generation based on statistical translation. In Proceedings of the 38th ACL, pages 318–325, Hong Kong. Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems 28, pages 1171–1179. Curran Associates, Inc. Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2014. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005. Trevor Anthony Cohn and Mirella Lapata. 2009. Sentence compression as tree transduction. Journal of Artificial Intelligence Research, pages 637–674. Conroy and O’Leary. 2001. Text summarization via hidden Markov models. In Proceedings of the 34th Annual ACL SIGIR, pages 406–407, New Oleans, Louisiana. G¨unes¸ Erkan and Dragomir R. Radev. 2004. 
Lexpagerank: Prestige in multi-document text summarization. In Proceedings of the 2004 EMNLP, pages 365–371, Barcelona, Spain. Elena Filatova and Vasileios Hatzivassiloglou. 2004. Event-based extractive summarization. In Stan Szpakowicz Marie-Francine Moens, editor, Text Summarization Branches Out: Proceedings of the ACL04 Workshop, pages 104–111, Barcelona, Spain. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th ACL, Berlin, Germany. to appear. Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On using monolingual corpora in neural machine translation. arXiv preprint arXiv:1503.03535. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th ACL, Berlin, Germany. to appear. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28, pages 1684– 1692. Curran Associates, Inc. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent convolutional neural networks for discourse compositionality. In Proceedings of the Workshop on Continuous Vector Space Models and their Compositionality, pages 119–126, Sofia, Bulgaria. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2016. Character-aware neural language models. In Proceedings of the 30th AAAI, Phoenix, Arizon. to appear. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 EMNLP, pages 1746–1751, Doha, Qatar. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Hayato Kobayashi, Masaki Noguchi, and Taichi Yatsuka. 2015. Summarization based on embedding distributions. In Proceedings of the 2015 EMNLP, pages 1984–1989, Lisbon, Portugal. Julian Kupiec, Jan O. Pedersen, and Francine Chen. 1995. A trainable document summarizer. In Proceedings of the 18th Annual International ACM SIGIR, pages 68–73, Seattle, Washington. R´emi Lebret, Pedro O Pinheiro, and Ronan Collobert. 2015. Phrase-based image captioning. In Proceedings of the 32nd ICML, Lille, France. Chin-Yew Lin and Eduard H. Hovy. 2003. Automatic evaluation of summaries using n-gram cooccurrence statistics. In Proceedings of HLT NAACL, pages 71–78, Edmonton, Canada. Rada Mihalcea. 2005. Language independent extractive summarization. In Proceedings of the ACL Interactive Poster and Demonstration Sessions, pages 49–52, Ann Arbor, Michigan. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, pages 3111–3119. Curran Associates, Inc. Ani Nenkova, Lucy Vanderwende, and Kathleen McKeown. 2006. A compositional context sensitive multi-document summarizer: exploring the factors that influence summarization. In Proceedings of the 29th Annual ACM SIGIR, pages 573–580, Washington, Seattle. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st ACL, pages 160–167, Sapporo, Japan. 493 Daraksha Parveen, Hans-Martin Ramsl, and Michael Strube. 
2015. Topical coherence for graph-based extractive summarization. In Proceedings of the 2015 EMNLP, pages 1949–1954, Lisbon, Portugal, September. Dragomir Radev, Timothy Allison, Sasha BlairGoldensohn, John Blitzer, Arda Celebi, Stanko Dimitrov, Elliott Drabek, Ali Hakim, Wai Lam, Danyu Liu, et al. 2004. Mead-a platform for multidocument multilingual text summarization. Technical report, Columbia University Academic Commons. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 EMNLP, pages 379–389, Lisbon, Portugal. Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc. Krysta Svore, Lucy Vanderwende, and Christopher Burges. 2007. Enhancing single-document summarization by combining RankNet and third-party sources. In Proceedings of the 2007 EMNLPCoNLL, pages 448–457, Prague, Czech Republic. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems 28, pages 2674–2682. Curran Associates, Inc. Xiaojun Wan. 2010. Towards a unified approach to simultaneous single-document and multi-document summarizations. In Proceedings of the 23rd COLING, pages 1137–1145. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256. Kristian Woodsend and Mirella Lapata. 2010. Automatic generation of story highlights. In Proceedings of the 48th ACL, pages 565–574, Uppsala, Sweden. Dani Yogatama, Fei Liu, and Noah A. Smith. 2015. Extractive summarization by maximizing semantic volume. In Proceedings of the 2015 EMNLP, pages 1961–1966, Lisbon, Portugal. Xingxing Zhang and Mirella Lapata. 2014. Chinese poetry generation with recurrent neural networks. In Proceedings of 2014 EMNLP, pages 670–680, Doha, Qatar. 494
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 495–504, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Neural Networks For Negation Scope Detection Federico Fancellu and Adam Lopez and Bonnie Webber School of Informatics University of Edinburgh 11 Crichton Street, Edinburgh f.fancellu[at]sms.ed.ac.uk, {alopez,bonnie}[at]inf.ed.ac.uk Abstract Automatic negation scope detection is a task that has been tackled using different classifiers and heuristics. Most systems are however 1) highly-engineered, 2) English-specific, and 3) only tested on the same genre they were trained on. We start by addressing 1) and 2) using a neural network architecture. Results obtained on data from the *SEM2012 shared task on negation scope detection show that even a simple feed-forward neural network using word-embedding features alone, performs on par with earlier classifiers, with a bi-directional LSTM outperforming all of them. We then address 3) by means of a specially-designed synthetic test set; in doing so, we explore the problem of detecting the negation scope more in depth and show that performance suffers from genre effects and differs with the type of negation considered. 1 Introduction Amongst different extra-propositional aspects of meaning, negation is one that has received a lot of attention in the NLP community. Previous work have focused in particular on automatically detecting the scope of negation, that is, given a negative instance, to identify which tokens are affected by negation (§2). As shown in (1), only the first clause is negated and therefore we mark he and the car, along with the predicate was driving as inside the scope, while leaving the other tokens outside. (1) He was not driving the car and she left to go home. In the BioMedical domain there is a long line of research around the topic (e.g. Velldal et al. (2012) and Prabhakaran and Boguraev (2015)), given the importance of recognizing negation for information extraction from medical records. In more general domains, efforts have been more limited and most of the work centered around the *SEM2012 shared task on automatically detecting negation (§3), despite the recent interest (e.g. machine translation (Wetzel and Bond, 2012; Fancellu and Webber, 2014; Fancellu and Webber, 2015)). The systems submitted for this shared task, although reaching good overall performance are highly feature-engineered, with some relying on heuristics based on English (Read et al. (2012)) or on tools that are available for a limited number of languages (e.g. Basile et al. (2012), Packard et al. (2014)), which do not make them easily portable across languages. Moreover, the performance of these systems was only assessed on data of the same genre (stories from Conan Doyle’s Sherlock Holmes) but there was no attempt to test the approach on data of different genre. Given these shortcomings, we investigate whether neural network based sequence-tosequence models (§ 4) are a valid alternative. The first advantage of neural networks-based methods for NLP is that we could perform classification by means of unsupervised word-embeddings features only, under the assumption that they also encode structural information previous system had to explicitly represent as features. 
If this assumption holds, another advantage of continuous representations is that, by using a bilingual word-embedding space, we would be able to transfer the model cross-lingually, obviating the problem of the lack of annotated data in other languages. The paper makes the following contributions: 1. Comparable or better performance: We show that neural networks perform on par with previously developed classifiers, with a bi-directional LSTM outperforming them 495 when tested on data from the same genre. 2. Better understanding of the problem: We analyze in more detail the difficulty of detecting negation scope by testing on data of different genre and find that the performance of wordembedding features is comparable to that of more fine-grained syntactic features. 3. Creation of additional resources: We create a synthetic test set of negative sentences extracted from Simple English Wikipedia (§ 5) and annotated according to the guidelines released during the *SEM2012 shared task (Morante et al., 2011), that we hope will guide future work in the field. 2 The task Before formalizing the task, we begin by giving some definitions. A negative sentence n is defined as a vector of words ⟨w1, w2...wn⟩containing one or more negation cues, where the latter can be a word (e.g. not), a morpheme (e.g. im-patient) or a multi-word expression (e.g. by no means, no longer) inherently expressing negation. A word is a scope token if included in the scope of a negation cue. Following Blanco and Moldovan (2011), in the *SEM2012 shared task the negation scope is understood as part of a knowledge representation focused around a negated event along with its related semantic roles and adjuncts (or its head in the case of a nominal event). This is exemplified in (2) (from Blanco and Moldovan (2011)) where the scope includes both the negated event eat along the subject the cow, the object grass and the PP with a fork. (2) The cow did n’t eat grass with a fork.1 Each cue defines its own negation instance, here defined as a tuple I(n,c) where c ∈{1,0}|n| is a vector of length n s.t. ci = 1 if wi is part of the cue and 0 otherwise. Given I the goal of automatic scope detection is to predict a vector s ∈{O,I}|n| s.t. si = I (inside of the scope) if wi is in the scope of the cue or O (outside) otherwise. In (3) for instance, there are two cues, not and no longer, each one defining a separate negation instance, I1(n,c1) and I2(n,c2), and each with its own scope, s1 and s2. In both (3a) and (3b), n = 1In the *SEM2012 shared task, negation is not considered as a downward monotone function and definite expressions are included in its scope. [I, do, not, love, you, and, you, are, no, longer, invited]; in (3a), the vector c1 is 1 only at index 3 (w2=‘not’), while in (3b) c2 is 1 at position 9, 10 (where w9 w10 = ‘no longer’); finally the vectors s1 and s2 are I only at the indices of the words underlined and O anywhere else. (3) a. I do not love you and you are no longer invited b. I do not love you and you are no longer invited There are the two main challenges involved in detecting the scope of negation: 1) a sentence can contain multiple instances of negation, sometimes nested and 2) scope can be discontinuous. As for 1), the classifier must correctly classify each word as being inside or outside the scope and assign each word to the correct scope; in (4) for instance, there are two negation cues and therefore two scopes, one spanning the entire sentence (3a.) 
and the other the subordinate only (3b.), with the latter being nested in the former (given that, according to the guidelines, if we negate the event in the main, we also negate its cause). (4) a. I did not drive to school because my wife was not feeling well .2 b. I did not drive to school because my wife was not feeling well . In (5), the classifier should instead be able to capture the long range dependency between the subject and its negated predicate, while excluding the positive VP in the middle. (5) Naomi went to visit her parents to give them a special gift for their anniversary but never came back . In the original task, the performance of the classifier is assessed in terms of precision, recall and F1 measure over the number of words correctly classified as part of the scope (scope tokens) and over the number of scopes predicted that exactly 2One might object that the scope only spans over the subordinate given that it is the part of the scope most likely to be interpreted as false (It is not the case that I drove to school because my wife was not at home, but for other reasons). In the *SEM2012 shared task however this is defined separately as the focus of negation and considered as part of the scope. One reason to distinguish the two is the high ambiguity of the focus: one can imagine for instance that if the speaker stresses the words to school this will be most likely considered the focus and the statement interpreted as It is not the case that I drive to school because my wife was not feeling well (but I drove to the hospital instead). 496 match the gold scopes (exact scope match). As for latter, recall is a measure of accuracy since we score how many scopes we fully predict (true positives) over the total number of scopes in our test set (true positives and false negatives); precision takes instead into consideration false positives, that is those negation instances that are predicted as having a scope but in reality don’t have any. This is the case of the interjection No (e.g. ‘No, leave her alone’) that never take scope. 3 Previous work Table 1 summarizes the performance of systems previously developed to resolve the scope of negation in non-Biomedical texts. In general, supervised classifiers perform better than rule-based systems, although it is a combination of hand-crafted heuristics and SVM rankers to achieve the best performance. Regardless of the approach used, the syntactic structure (either constituent or dependency-based) of the sentence is often used to detect the scope of negation. This is because the position of the cue in the tree along with the projection of its parent/governor are strong indicators of scope boundaries. Moreover, given that during training we basically learn which syntactic patterns the scope are likely to span, it is also possible to hypothesize that this system should scale well to other genre/domain, as long as we can have a parse for the sentence; this however was never confirmed empirically. Although informative, these systems suffers form three main shortcomings: 1) they are highly-engineered (as in the case of Read et al. (2012)) and syntactic features add up to other PoS, word and lemma n-gram features, 2) they rely on the parser producing a correct parse and 3) they are English specific. Other systems (Basile et al., 2012; Packard et al., 2014) tried to traverse a semantic representation instead. Packard et al. 
(2014) achieves the best results so far, using hand-crafted heuristics to traverse the MRS (Minimal Recursion Semantics) structures of negative sentences. If the semantic parser cannot create a reliable representation for a sentence, the system ‘backs-off’ to the hybrid model of Read et al. (2012), which uses syntactic information instead. This system suffers however from the same shortcomings mentioned above, in particular, given that MRS representation can only be built for a small set of languages. 4 Scope detection using Neural Networks In this paper, we experiment with two different neural networks architecture: a one hidden layer feed-forward neural network and a bidirectional LSTM (Long Short Term Memory, BiLSTM below) model. We chose to ‘start simple’ from a feed-forward network to investigate whether even a simple model can reach good performance using word-embedding features only. We then turned to a BiLSTM because a better fit for the task. BiLSTM are sequential models that operate both in forward and backwards fashion; the backward pass is especially important in the case of negation scope detection, given that a scope token can appear in a string before the cue and it is therefore important that we see the latter first to classify the former. We opted in this case for LSTM over RNN cells given that their inner composition is able to better retain useful information when backpropagating the error.4 Both networks take as input a single negative instance I(n,c). We represent each word wi ∈n as a d-dimensional word-embedding vector x ∈ Rd (d=50). In order to encode information about the cue, each word is also represented by a cueembedding vector c ∈Rd of the same dimensionality of x. c can only take two representations, cue, if ci=1, or notcue otherwise. We also define Evxd w as the word-embedding matrix, where v is the vocabulary size, and E2xd c as the cue-embedding matrix. In the case of a feed-forward neural network, the input for each word wi ∈n is the concatenation of its representation with the ones of its neighboring words in a context window of length l. This is because feed-forward networks treat the input units as separate and information about how words are arranged as sequences must be explicitly encoded in the input. We define these concatenations xconc and cconc as xwi−l...xwi−1 ; xwi ; xwi+1...xwi+l and cwi−l...cwi−1 ; cwi ; cwi+1...cwi+l respectively. We chose the value of l after analyzing the negation scopes in the dev set. We found that although the furthest scope tokens are 23 and 31 positions away from the cue on the left and the right respectively, 95% of the scope tokens fall in a window of 9 tokens to the left and 15 to the right, these two values being the window sizes we con4For more details on LSTM and related mathematical formulations, we refer to reader to Hochreiter and Schmidhuber (1997) 497 Scope tokens3 Exact scope match Method Prec. Rec. F1 Prec. Rec. 
F1 *SEM2012 Closed track UiO1 (Read et al., 2012) heuristics + SVM 81.99 88.81 85.26 87.43 61.45 72.17 UiO2 (Lapponi et al., 2012) CRF 86.03 81.55 83.73 85.71 62.65 72.39 FBK (Chowdhury and Mahbub, 2012) CRF 81.53 82.44 81.89 88.96 58.23 70.39 UWashington (White, 2012) CRF 83.26 83.77 83.51 82.72 63.45 71.81 UMichigan (Abu-Jbara and Radev, 2012) CRF 84.85 80.66 82.70 90.00 50.60 64.78 UABCoRAL (Gyawali and Solorio, 2012) SVM 85.37 68.86 76.23 79.04 53.01 63.46 Open track UiO2 (Lapponi et al., 2012) CRF 82.25 82.16 82.20 85.71 62.65 72.39 UGroningen (Basile et al., 2012) rule-based 69.20 82.27 75.15 76.12 40.96 53.26 UCM-1 (de Albornoz et al., 2012) rule-based 85.37 68.53 76.03 82.86 46.59 59.64 UCM-2 (Ballesteros et al., 2012) rule-based 58.30 67.70 62.65 67.13 38.55 48.98 Packard et al. (2014) heuristics + SVM 86.1 90.4 88.2 98.8 65.5 78.7 Table 1: Summary of previous work on automatic detection of negation scope. sider for our input. The probability of a given input is then computed as follows: h = σ(Wxxconc + Wccconc + b) y = g(Wyh + by) (1) where W and b the weight and biases matrices, h the hidden layer representation, σ the sigmoid activation function and g the softmax operation (g(zm)= ezm/ P k ezk) to assign a probability to the input of belonging to either the inside (I) or outside (O) of the scope classes. In the biLSTM, no concatenation is performed, given that the structure of the network is already sequential. The input to the network for each word wi are the word-embedding vector xwi and the cue-embedding vector cwi, where wi constitutes a time step. The computation of the hidden layer at time t and the output can be represented as follows: it = σ(W(i) x x + W(i) c c + W(i) h ht−1 + b(i)) ft = σ(W(f) x x + W(f) c c + W(f) h ht−1 + b(f)) ot = σ(W(o) x x + W(o) c c + W(o) h ht−1 + b(o)) ˜ct = tanh(W(c) x x + W(c) c c + W(c) h ht−1 + b(c)) ct = ft · ˜ct−1 + it · ˜ct hback/forw = ot · tanh(ct) yt = g(Wy(hback; hforw) + by) (2) where the Ws are the weight matrices, ht−1 the hidden layer state a time t-1, it, ft, ot the input, forget and the output gate at the time t and hback ; hforw the concatenation of the backward and forward hidden layers. Finally, in both networks our training objective is to minimise, for each negative instance, the negative log likelihood J(W,b) of the correct predictions over gold labels: J(W, b) = −1 l l X i=1 y(wi) log hθ(x(wi)) + (1 −y(wi)) log(1 −hθ(x(wi))) (3) where l is the length of the sentence n ∈I, x(wi) the probability for the word wi to belong to either the I or O class and y(wi) its gold label. An overview of both architectures is shown in Figure 1. 4.1 Experiments Training, development and test set are a collection of stories from Conan Doyle’s Sherlock Holmes annotated for cue and scope of negation and released in concomitance with the *SEM2012 shared task.5 For each word, the correspondent lemma, POS tag and the constituent subtree it belongs to are also annotated. If a sentence contains multiple instances of negation, each is annotated separately. Both training and testing is done on negative sentences only, i.e. those sentences with at least one cue annotated. Training and test size are of 848 and 235 sentences respectively. If a sentence contains multiple negation instances, we create as many copies as the number of instances. If the sentence contains a morphological cue (e.g. impatient) we split it into affix (im-) and root (patient), and consider the former as cue and the latter as part of the scope. 
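To make the BiLSTM tagger of Equation (2) concrete, the following NumPy sketch represents each token by a word embedding concatenated with a cue/notcue embedding, runs a forward and a backward LSTM pass, and applies a softmax over the concatenated hidden states to predict I (inside the scope) or O (outside). All weights are random placeholders, so the toy predictions are arbitrary; the single stacked gate matrix per direction folds the separate word- and cue-input matrices of Equation (2) into one, which is equivalent up to a reparameterisation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_pass(X, W, b, H):
    """Run an LSTM over the rows of X; W maps [x_t; h_{t-1}] to the stacked i, f, o, c-hat gates."""
    h, c, states = np.zeros(H), np.zeros(H), []
    for x in X:
        z = W @ np.concatenate([x, h]) + b
        i, f, o = sigmoid(z[:H]), sigmoid(z[H:2 * H]), sigmoid(z[2 * H:3 * H])
        c = f * c + i * np.tanh(z[3 * H:])
        h = o * np.tanh(c)
        states.append(h)
    return np.stack(states)

def bilstm_scope_tagger(tokens, cue_mask, word_emb, cue_emb, Wf, bf, Wb, bb, Wy, by, H):
    """Predict I (inside scope) / O (outside) for each token of one negation instance."""
    # Input: word embedding concatenated with the 'cue' or 'notcue' embedding.
    X = np.stack([np.concatenate([word_emb[w], cue_emb[c]]) for w, c in zip(tokens, cue_mask)])
    h_fwd = lstm_pass(X, Wf, bf, H)
    h_bwd = lstm_pass(X[::-1], Wb, bb, H)[::-1]
    logits = np.concatenate([h_fwd, h_bwd], axis=1) @ Wy + by      # (n, 2)
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    return ["I" if p[1] > p[0] else "O" for p in probs]

# Toy instance for "I do not love you"; cue_mask marks 'not' as the cue.
rng = np.random.RandomState(5)
d, H = 50, 8                                 # d = 50 as in the paper; H is a toy hidden size
tokens = ["I", "do", "not", "love", "you"]
cue_mask = [0, 0, 1, 0, 0]
word_emb = {w: rng.randn(d) for w in tokens}
cue_emb = {0: rng.randn(d), 1: rng.randn(d)}  # notcue / cue vectors
Wf, bf = 0.1 * rng.randn(4 * H, 2 * d + H), np.zeros(4 * H)
Wb, bb = 0.1 * rng.randn(4 * H, 2 * d + H), np.zeros(4 * H)
Wy, by = 0.1 * rng.randn(2 * H, 2), np.zeros(2)
print(bilstm_scope_tagger(tokens, cue_mask, word_emb, cue_emb, Wf, bf, Wb, bb, Wy, by, H))
```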
Both neural network architectures are implemented using TensorFlow (Abadi et al., 2015) with a 200-units hidden layer (400 in total for two concatenated hidden layers in the BiLSTM), the Adam optimizer (Kingma and Ba, 2014) with a 5For the statistics regarding the data, we refer the reader to Morante and Blanco (2012). 498 Figure 1: An example of scope detection using feed-forward and BiLSTM for the tokens ‘you are no longer invited’ in the instance in ex. (3b). starting learning rate of 0.0001, learning rate decay after 10 iterations without improvement and early stopping. In both cases we experimented with different settings: 1. Simple baseline: In order to understand how hard the task of negation scope detection is, we created a simple baseline by tagging as part of the scope all the tokens 4 words to the left and 6 to the right of the cue; these values were found to be the average span of the scope in either direction in the training data. 2. Cue info (C): The word-embedding matrix is randomly initialised and updated relying on the training data only. Information about the cue is fed through another set of embedding vectors, as shown in 4. This resembles the ‘Closed track’ of the *SEM2012 shared task since no external resource is used. 3. Cue info + external embeddings (E): This is the same as setting (2) except that the embeddings are pre-trained using external data. We experimented with both keeping the wordembedding matrix fixed and updating it during training but we found small or no difference between the two settings. To do this, we train a word-embedding matrix using Word2Vec (Mikolov et al., 2013) on 770 million tokens (for a total of 30 million sentences and 791028 types) from the ‘One Billion Words Language Modelling’ dataset 6 and the Sherlock Holmes data (5520 sentences) combined. The dataset was tokenized and morphological cues split into negation affix and root to match the Conan Doyle’s data. In order to perform this split, we matched each word against an hand-crafted list of words containing affixal negation7; this method have an accuracy of 0.93 on the Conan Doyle test data. 4. Adding PoS / Universal PoS information (PoS/uni PoS): This was mainly to assess whether we could get further improvement by adding additional information. For all the setting above, we also add an extra embedding input vector for the POS or Universal POS of each word wi. As for the word and the cue embeddings, PoS-embedding information are fed to the hidden layer through a separate weight matrix. When pre-trained, the training data for the external PoS-embedding matrix is the same used for building the word embedding representation, except that in this case we feed the PoS / Universal PoS tag for each word. As in (3), we experimented with both updating the tag-embedding matrix and keeping it fixed but found again small or no difference between the two settings. In order to maintain consistency with the original data, we perform PoS tagging using the GENIA tagger (Tsuruoka et al., 2005)8 and then map the resulting tags to universal POS tags.9 4.2 Results The results for the scope detection task are shown in Table 2. 6Available at https://code.google.com/ archive/p/word2vec/ 7The list was courtesy of Ulf Hermjakob and Nathan Schneider. 
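For reference, the window baseline of setting (1) above amounts to a few lines of code; this sketch assumes a single one-token cue per instance (multi-word and affixal cues would need the full cue span) and excludes the cue token itself from the predicted scope, a detail the description leaves open.

```python
def window_baseline(tokens, cue_index, left=4, right=6):
    """Mark as in-scope every token within `left` positions before and `right` positions
    after the cue (the average scope span in the training data); the cue itself is left out."""
    scope = []
    for i in range(len(tokens)):
        in_window = cue_index - left <= i <= cue_index + right
        scope.append("I" if in_window and i != cue_index else "O")
    return scope

tokens = "Naomi never came back to visit her parents after that day".split()
print(list(zip(tokens, window_baseline(tokens, cue_index=1))))
```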
8https://github.com/saffsd/geniatagger 9Mapping available at https://github.com/ slavpetrov/universal-pos-tags 499 Results for both architecture when wordembedding features only are used (C and C + E) show that neural networks are a valid alternative for scope detection, with bi-directional LSTM being able to outperform all previously developed classifiers on both scope token recognition and exact scope matching. Moreover, a bi-directional LSTM shows similar performance to the hybrid system of Packard et al. (2014) (rule-based + SVM as a back-off) in absence of any hand-crafted heuristics. It is also worth noticing that although pretraining the word-embedding and PoS-embedding matrices on external data leads to a slight improvement in performance, the performance of the systems using internal data only is already competitive; this is a particularly positive result considering that the training data is relatively small. Finally, adding universal POS related information leads to a better performance in most cases. The fact that the best system is built using language-independent features only is an important result when considering the portability of the model across different languages. 4.3 Error analysis In order to understand the kind of errors our best classifier makes, we performed an error analysis on the held-out set. First, we investigate whether the per-instance prediction accuracy correlates with scope-related (length of the scope to the left, to the right and combined; maximum length of the gap in a discontinuous scope) and cue-related (type of cue -oneword, prefixal, suffixal, multiword-) variables. We also checked whether the neural network is biased towards the words it has seen in the training(for instance, if it has seen the same token always labeled as O it will then classify it as O). For our best biLSTM system, we found only weak to moderate negative correlations with the following variables: • length of the gap, if the scope is discontinuous (r=-0.1783, p = 0.004); • overall scope length (r=-0.3529, p < 0.001); • scope length to the left and to the right (r=0.3251 and -0.2659 respectively with p < 0.001) • presence of a prefixal cue (r=-0.1781, p = 0.004) • presence of a multiword cue (r=-0.1868, p = 0.0023) meaning that the variables considered are not strong enough to be considered as error patterns. For this reason we also manually analyzed the 96 negation scopes that the best biLSTM system predicted incorrectly and noticed several error patterns: • in 5 cases, the scope should only span on the subordinate but end up including elements from the main. In (6) for instance, where the system prediction is reported in curly brackets, the BiLSTM ends up including the main predicate with its subject in the scope. (6) You felt so strongly about it that {I knew you could} not {think of Beecher without thinking of that also} . • in 5 cases, the system makes an incorrect prediction in presence of the syntactic inversion, where a subordinate appears before the main clause; in (7) for instance, the system extends the prediction to the main clause when the scope should instead span the subordinate only. (7) But {if she does} not {wish to shield him she would give his name} • in 8 cases, where two VPs, one positive and one negative, are coordinated, the system ends up including in the scope the positive VP as well, as shown in (8). We hypothesized this is due to the lack of such examples in the training set. (8) Ah, {you do} n’t {know Sarah ’s temper or you would wonder no more} . 
As in Packard et al. (2014), we also noticed that in 15 cases, the gold annotations do not follow the guidelines; in the case of a negated adverb in particular, as shown in (9a) and (9b) the annotations do not seem to agree on whether consider as scope only the adverb or the entire clause around it. 500 Scope tokens Exact scope match System gold tp fp fn Prec. Rec. F1 Prec. Rec. F1 Baseline 1830 472 3031 1358 13.47 25.79 17.70 0.0 0.0 0.0 Best closed track: UiO1 N/A N/A N/A N/A 81.99 88.81 85.26 87.43 61.45 72.17 Packard et al. (2014) N/A N/A N/A N/A 86.1 90.4 88.2 98.8 65.5 78.7 FF - C 1830 1371 273 459 83.39 74.91 78.92 93.61 34.10 50.00 FF - C + PoS 1830 1413 235 417 85.74 77.21 81.25 92.51 37.50 53.33 FF - C + Uni PoS 1830 1435 276 395 83.86 78.41 81.05 93.06 36.57 52.51 FF - C + E 1830 1455 398 375 78.52 79.50 79.01 89.53 30.19 45.16 FF - C + PoS + E 1830 1413 179 417 88.75 77.21 82.58 96.63 44.23 60.68 FF - C + Uni PoS + E 1830 1412 158 418 89.93 77.15 83.05 96.58 43.46 59.94 BiLSTM - C 1830 1583 175 247 90.04 86.50 88.23 98.71 58.77 73.68 BiLSTM - C + PoS 1830 1591 203 239 88.68 86.93 87.80 98.70 58.01 73.07 BiLSTM - C + Uni Pos 1830 1592 193 238 89.18 86.95 88.07 98.96 57.63 72.77 BiLSTM - C + E 1830 1570 157 260 90.90 85.79 88.27 99.37 60.83 75.47 BiLSTM - C + PoS + E 1830 1546 148 284 91.26 84.48 87.74 98.75 60.30 74.88 BiLSTM - C + Uni PoS + E 1830 1552 124 272 92.62 85.13 88.72 99.40 63.87 77.77 Table 2: Results for the scope detection task on the held-out set. Results are plotted against the simple baseline, the best system so far (Packard et al., 2014) and the system with the highest F1 for scope tokens classification amongst the ones submitted for the *SEM2012 shared task. We also report the number of gold scope tokens, true positive (tp), false positives(fp) and false negatives(fn). (9) a. [...] tossing restlessly from side to side [..] b. [...] glaring helplessly at the frightful thing which was hunting him down. 5 Evaluation on synthetic data set 5.1 Methodology One question left unanswered by previous work is whether the performance of scope detection classifiers is robust against data of a different genre and whether different types of negation lead to difference in performance. To answer this, we compare two of our systems with the only original submission to the *SEM2012 we found available (White, 2012)10. We decided to use both our best system, BiLSTM+C+UniPoS+E and a sub-optimal systems, BiLSTM+C+E to also assess the robustness of non-English specific features. The synthetic test set here used is built on sentences extracted from Simple Wikipedia and manually annotated for cue and scope according to the annotation guidelines released in concomitance with the *SEM2012 shared task (Morante et al., 2011). We created 7 different subsets to test different types of negative sentences: Simple: we randomly picked 50 positive sentences, containing only one predicate, no dates and no named entities, and we made them negative by 10In order for the results to be comparable, we feed White’s system with the cues from the gold-standard instead of automatically detecting them. adding a negation cue (do support or minor morphological changes were added when required). If more than a lexical negation cue fit in the context, we used them all by creating more than one negative counterpart, as shown in (10). The sentences were picked to contain different kind of predicates (verbal, existential, nominal, adjectival). (10) a. Many people disagree on the topic b. 
Many people do not disagree on the topic c. Many people never disagree on the topic Lexical: we randomly picked 10 sentences11 for each lexical (i.e. one-word) cue in training data (these are not, no, none, nobody, never, without) Prefixal: we randomly picked 10 sentences for each prefixal cue in the training data (un-, im-, in-, dis-, ir-) Suffixal: we randomly picked 10 sentences for the suffixal cue -less. Multi-word: we randomly picked 10 sentences for each multi-word cue (neither...nor,no longer,by no means). Unseen: we include 10 sentences for each of the negative prefixes a- (e.g. a-cyclic), ab- (e.g. ab-normal) non- (e.g. non-Communist) that are not annotated as cue in the Conan Doyle corpus, 11In some cases, we ended up with more than 10 examples for some cues given that some of the sentences we picked contained more than a negation instance. 501 Scope tokens Exact scope match Data gold tp fp fn Prec. Rec. F1 Prec. Rec. F1 White (2012) simple 850 830 0 20 100.00 97.65 98.81 100.00 93.98 96.90 lexical 814 652 101 162 86.59 80.10 83.22 100.00 58.41 73.75 prefixal 316 232 103 83 68.98 73.40 71.12 100.00 32.76 49.35 suffixal 100 78 7 22 91.76 78.00 84.32 100.00 69.23 81.82 multi-word 269 190 12 49 89.62 70.63 79.00 100.00 9.00 16.67 unseen 220 138 40 82 77.53 62.73 69.35 100.00 38.89 56.00 avg. 2569 2120 263 418 85.74 77.08 80.97 100.00 50.37 62.41 BiLSTM - C+ E simple 850 827 0 23 100.00 97.29 98.62 100.00 88.72 94.02 lexical 814 618 120 133 85.01 83.66 84.33 100.00 40.35 57.50 prefixal 316 235 156 81 60.10 74.36 66.47 100.00 10.34 18.75 suffixal 100 53 5 47 91.52 53.46 67.50 100.00 15.28 26.66 multi-word 269 192 22 79 93.65 71.37 81.01 100.00 36.36 53.00 unseen 220 151 79 69 66.09 69.05 67.54 100.00 22.22 36.36 avg. 2569 2076 382 432 82.72 74.86 77.57 100.00 35.54 47.76 BiLSTM - C+ UniPos + E simple 850 816 0 34 100.00 96 97.95 100.00 82.70 90.05 lexical 814 668 97 146 87.32 82.06 84.61 100.00 42.10 59.25 prefixal 316 231 128 85 64.34 73.10 68.44 100.00 20.68 34.28 suffixal 100 54 3 47 94.73 53.46 68.35 100.00 38.46 55.55 multi-word 269 202 19 67 91.40 75.09 82.44 100.00 27.27 42.85 unseen 220 152 56 71 73.07 68.16 70.53 100.00 25.00 40.00 avg. 2569 2123 303 449 85.14 74.64 78.72 100.00 39.36 53.66 Table 3: Results for the scope detection task on the synthetic test set. to test whether the system can generalise the classification to unseen cues.12 5.2 Results Table 3. shows the results for the comparison on the synthetic test set. The first thing worth noticing is that by using word-embedding features only it is possible to reach comparable performance with a classifier using syntactic features, with universal PoS generally contributing to a better performance; this is particularly evident in the multiword and lexical sub-sets. In general, genre effects hinder both systems; however, considering that the training data is less than 1000 sentences, results are relatively good. Performance gets worse when dealing with morphological cues and in particular in the case of our classifier, with suffixal cues; at a closer inspection however, the cause of such poor performance is attributable to a discrepancy between the annotation guidelines and the training data, already noted in §4.4. The guidelines state in fact that ‘If the negated affix is attached to an adverb that is a complement of a verb, the negation scopes over the entire clause’(Morante et al., 2011, p. 21) and we annotated suffixal negation in this way. 
However, 3 out of 4 examples of suffixal negation in adverbs in the training data (e.g. 9a.) mark the 12The data, along with the code, is freely available at https://github.com/ffancellu/NegNN scope on the adverbial root only and that’s what our classifiers learn to do. Finally, it can be noticed that our system does worse at exact scope matching than the CRF classifier. This is because White (2012)’s CRF model is build on constituency-based features that will then predict scope tokens based on constituent boundaries (which, as we said, are good indicator of scope boundaries), while neural networks, basing the prediction only on word-embedding information, might extend the prediction over these boundaries or leave ‘gaps’ within. 6 Conclusion and Future Work In this work, we investigated and confirmed that neural networks sequence-to-sequence models are a valid alternative for the task of detecting the scope of negation. In doing so we offer a detailed analysis of its performance on data of different genre and containing different types of negation, also in comparison with previous classifiers, and found that non-English specific continuous representation can perform batter than or on par with more fine-grained structural features. Future work can be directed towards answering two main questions: Can we improve the performance of our classifier? To do this, we are going to explore whether adding language-independent structural informa502 tion (e.g. universal dependency information) can help the performance on exact scope matching. Can we transfer our model to other languages? Most importantly, we are going to test the model using word-embedding features extracted from a bilingual embedding space. Acknowledgments This project was also founded by the European Unions Horizon 2020 research and innovation programme under grant agreement No 644402 (HimL). The authors would like to thank Naomi Saphra, Nathan Schneider and Claria Vania for the valuable suggestions and the three anonymous reviewers for their comments. References M Abadi, A Agarwal, P Barham, E Brevdo, Z Chen, C Citro, GS Corrado, A Davis, J Dean, M Devin, et al. 2015. Tensorflow: Large-scale machine learning on heterogeneous systems. White paper, Google Research. Amjad Abu-Jbara and Dragomir Radev. 2012. Umichigan: A conditional random field model for resolving the scope of negation. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 328–334. Association for Computational Linguistics. Miguel Ballesteros, Alberto D´ıaz, Virginia Francisco, Pablo Gerv´as, Jorge Carrillo De Albornoz, and Laura Plaza. 2012. Ucm-2: a rule-based approach to infer the scope of negation via dependency parsing. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 288–293. Association for Computational Linguistics. Valerio Basile, Johan Bos, Kilian Evang, and Noortje Venhuizen. 2012. Ugroningen: Negation detection with discourse representation structures. 
In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 301–309. Association for Computational Linguistics. Eduardo Blanco and Dan I Moldovan. 2011. Some issues on detecting negation from text. In FLAIRS Conference, pages 228–233. Citeseer. Md Chowdhury and Faisal Mahbub. 2012. Fbk: Exploiting phrasal and contextual clues for negation scope detection. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 340–346. Association for Computational Linguistics. Jorge Carrillo de Albornoz, Laura Plaza, Alberto D´ıaz, and Miguel Ballesteros. 2012. Ucm-i: A rule-based syntactic approach for resolving the scope of negation. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 282–287. Association for Computational Linguistics. Federico Fancellu and Bonnie L Webber. 2014. Applying the semantics of negation to smt through nbest list re-ranking. In EACL, pages 598–606. Federico Fancellu and Bonnie Webber. 2015. Translating negation: A manual error analysis. ExProM 2015, page 1. Binod Gyawali and Thamar Solorio. 2012. Uabcoral: a preliminary study for resolving the scope of negation. In Proceedings of the First Joint Conference on Lexical and Computational SemanticsVolume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 275–281. Association for Computational Linguistics. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Emanuele Lapponi, Erik Velldal, Lilja Øvrelid, and Jonathon Read. 2012. Uio 2: sequence-labeling negation using dependency features. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 319–327. Association for Computational Linguistics. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Roser Morante and Eduardo Blanco. 2012. * sem 2012 shared task: Resolving the scope and focus of negation. In Proceedings of the First Joint Conference 503 on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 265–274. Association for Computational Linguistics. Roser Morante, Sarah Schrauwen, and Walter Daelemans. 2011. Annotation of negation cues and their scope: Guidelines v1. Computational linguistics and psycholinguistics technical report series, CTRS003. Woodley Packard, Emily M Bender, Jonathon Read, Stephan Oepen, and Rebecca Dridan. 2014. 
Simple negation scope resolution through deep parsing: A semantic solution to a semantic problem. In ACL (1), pages 69–78. Vinodkumar Prabhakaran and Branimir Boguraev. 2015. Learning structures of negations from flat annotations. Lexical and Computational Semantics (* SEM 2015), page 71. Jonathon Read, Erik Velldal, Lilja Øvrelid, and Stephan Oepen. 2012. Uio 1: Constituent-based discriminative ranking for negation resolution. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 310–318. Association for Computational Linguistics. Yoshimasa Tsuruoka, Yuka Tateishi, Jin-Dong Kim, Tomoko Ohta, John McNaught, Sophia Ananiadou, and Jun’ichi Tsujii. 2005. Developing a robust partof-speech tagger for biomedical text. Advances in informatics, pages 382–392. Erik Velldal, Lilja Øvrelid, Jonathon Read, and Stephan Oepen. 2012. Speculation and negation: Rules, rankers, and the role of syntax. Computational linguistics, 38(2):369–410. Dominikus Wetzel and Francis Bond. 2012. Enriching parallel corpora for statistical machine translation with semantic negation rephrasing. In Proceedings of the Sixth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 20– 29. Association for Computational Linguistics. James Paul White. 2012. Uwashington: Negation resolution using machine learning methods. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 335–339. Association for Computational Linguistics. 504
2016
47
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 505–515, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics CSE: Conceptual Sentence Embeddings based on Attention Model Yashen Wang, Heyan Huang∗, Chong Feng, Qiang Zhou, Jiahui Gu and Xiong Gao Beijing Engineering Research Center of High Volume Language Information Processing and Cloud Computing Applications, School of Computer, Beijing Institute of Technology, Beijing, P. R. China {yswang,hhy63,fengchong,qzhou,gujh,gaoxiong}@bit.edu.cn Abstract Most sentence embedding models typically represent each sentence only using word surface, which makes these models indiscriminative for ubiquitous homonymy and polysemy. In order to enhance representation capability of sentence, we employ conceptualization model to assign associated concepts for each sentence in the text corpus, and then learn conceptual sentence embedding (CSE). Hence, this semantic representation is more expressive than some widely-used text representation models such as latent topic model, especially for short-text. Moreover, we further extend CSE models by utilizing a local attention-based model that select relevant words within the context to make more efficient prediction. In the experiments, we evaluate the CSE models on two tasks, text classification and information retrieval. The experimental results show that the proposed models outperform typical sentence embed-ding models. 1 Introduction Many natural language processing applications require the input text to be represented as a fixedlength feature, of which sentence representation is very important. Perhaps the most common fixedlength vector representation for texts is the bag-ofwords or bag-of-n-grams (Harris, 1970). However, they suffer severely from data sparsity and high dimensionality, and have very little sense about the semantics of words or the distances between the words. Recently, in sentence representation and classification, deep neural network (DNN) approaches have achieved state-of-the-art results (Le ∗The contact author. and Mikolov, 2014; Liu et al., 2015; Palangi et al., 2015; Wieting et al., 2015). Despite of their usefulness, recent sentence embeddings face several challenges: (i) Most sentence embedding models represent each sentence only using word surface, which makes these models indiscriminative for ubiquitous polysemy; (ii) For short-text, however, neither parsing nor topic modeling works well because there are simply not enough signals in the input; (iii) Setting window size of context words is very difficult. To solve these problems, we must derive more semantic signals from the input sentence, e.g., concepts. Besides, we should assigned different attention for different contextual word, to enhance the influence of words that are relevant for each prediction. This paper proposed Conceptual Sentence Embedding (CSE), an unsupervised framework that learns continuous distributed vector representations for sentence. Specially, by innovatively introducing concept information, this concept-level vector representations of sentence are learned to predict the surrounding words or target word in contexts. Our research is inspired by the recent work in learning vector representations of words using deep learning strategy (Mikolov et al., 2013a; Le and Mikolov, 2014). More precisely, we first obtain concept distribution of the sentence, and generate corresponding concept vector. 
Then we concatenate or average the sentence vector, contextual word vectors with concept vector of the sentence, and predict the target word in the given context. All of the sentence vectors and word vectors are trained by the stochastic gradient descent and backpropagation (Rumelhart et al., 1986). At prediction time, sentence vectors are inferred by fixing the word vectors and observed sentence vectors, and training the new sentence vector until convergence. In parallel, the concept of attention has gained 505 popularity recently in neural natural language processing researches, which allowing models to learn alignments between different modalities (Bahdanau et al., 2014; Bansal et al., 2014; Rush et al., 2015). In this work, we further propose the extensions to CSE, which adds an attention model that considers contextual words differently depending on the word type and its relative position to the predicted word. The main intuition behind the extended model is that prediction of a word is mainly dependent on certain words surrounding it. In summary, the basic idea of CSE is that, we allow each word to have different embeddings under different concepts. Taking word apple into consideration, it may indicate a fruit under the concept food, and indicate an IT company under the concept information technology. Hence, concept information significantly contributes to the discriminative of sentence vector. Moreover, an important advantage of the proposed conceptual sentence embeddings is that they could be learned from unlabeled data. Another advantage is that we take the word order into account, in the same way of ngram model, while bag-of-n-grams model would create a very high-dimensional representation that tends to generalize poorly. To summarize, this work contributes on the following aspects: We integrate concepts and attention-based strategy into basic sentence embedding representation, and allow the resulting conceptual sentence embedding to model different meanings of a word under different concept. The experimental results on text classification task and information retrieval task demonstrate that this concept-level sentence representation is robust. The outline of the paper is as follows. Section 2 surveys related researches. Section 3 formally de-scribes the proposed model of conceptual sentence embedding. Corresponding experimental results are shown in Section 4. Finally, we conclude the paper. 2 Related Works Conventionally, one-hot sentence representation has been widely used as the basis of bag-of-words (BOW) text model. However, it can-not take the semantic information into consideration. Recently, in sentence representation and classification, deep neural network approaches have achieved state-of-the-art results (Le and Mikolov, 2014; Liu et al., 2015; Ma et al., 2015; Palangi et al., 2015; Wieting et al., 2015), most of which are inspired by word embedding (Mikolov et al., 2013a). (Le and Mikolov, 2014) proposed the paragraph vector (PV) that learns fixed-length representations from variable-length pieces of texts. Their model represents each document by a dense vector which is trained to predict words in the document. However, their model depends only on word surface, ignoring semantic information such as topics or concepts. In this paper, we extent PV by introducing concept information. 
Aiming at enhancing discriminativeness for ubiquitous polysemy, (Liu et al., 2015) employed latent topic models to assign topics for each word in the text corpus, and learn topical word embeddings (TWE) and sentence embeddings based on both words and their topics. Besides, to combine deep learning with linguistic structures, many syntax-based embedding algorithms have been proposed (Severyn et al., 2014; Wang et al., 2015b) to utilize long-distance dependencies. However, short-texts usually do not observe the syntax of a written language, nor do they contain enough signals for statistical inference (e.g., topic model). Therefore, neither parsing nor topic modeling works well because there are simply not enough signals in the input, and we must derive more semantic signals from the input, e.g., concepts, which have been demonstrated effective in knowledge representation (Wang et al., 2015c; Song et al., 2015). Shot-text conceptualization, is an interesting task to infer the most likely concepts for terms in the short-text, which could help better make sense of text data, and extend the texts with categorical or topical information (Song et al., 2011). Therefore, our models utilize shorttext conceptualization algorithm to discriminate concept-level sentence senses and provide a good performance on short-texts. Recently, attention model has been used to improve many neural natural language pro-cessing researches by selectively focusing on parts of the source data (Bahdanau et al., 2014; Bansal et al., 2014; Wang et al., 2015a). To the best of our knowledge, there has not been any other work exploring the use of attentional mechanism for sentence embeddings. 3 Conceptual Sentence Embedding This paper proposes four conceptual sentence embedding models. The first one is based on continu506 ous bag-of-word model (denoted as CSE-1) which have not taken word order into consideration. To overcome this drawback, its extension model (denoted as CSE-2), which is based on Skip-Gram model, is proposed. Based on the basic conceptual sentence embedding models above, we obtain their variants (aCSE-1 and aCSE-2) by introducing attention model. 3.1 CBOW Model & Skip-Gram Model As inspiration of the proposed conceptual sentence embedding models, we start by discussing previous models for learning word vectors (Mikolov et al., 2013a; Mikolov et al., 2013b) firstly. Let us overview the framework of Continuous Bag-of-Words (CBOW) firstly, which is shown in Figure 1(a). Each word is typically mapped to an unique vector, represented by a column in a word matrix W ∈ℜd∗|V |. Wherein, V denotes the word vocabulary and d is embedding dimension of word. The column is indexed by position of the word in V . The concatenation or average of the vectors, the context vector wt, is then used as features for predicting the target word in the current context. Formally, Given a sentence S = {w1, w2, . . . , wl}, the objective of CBOW is to maximize the average log probability: L(S)= 1 (l−2k−2) Pl−k t=k+1 log Pr(wt|wt−k,···,wt+k) (1) Wherein, k is the context windows size of target word wt. The prediction task is typically done via a softmax function, as follows: Pr(wt|wt−k, · · · , wt+k) = eywt P wi∈V eywi (2) Each of y(wt) is an un-normalized logprobability for each target word wt, as follows: ywt = Uh(wt−k, . . . , wt+k); W) + b (3) Wherein, U and b are softmax parameters. And h(·) is constructed by a concatenation or average of word vectors {wt−k, . . . , wt+k} extracted from word matrix W according to {wt−k, . . . , wt+k}. 
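As a small illustration of this prediction step, the following numpy sketch computes Eqs. (2)-(3) with an averaged context representation; the toy vocabulary size, embedding dimension, word indices and random parameters are placeholders rather than settings from the paper, and in practice hierarchical softmax or negative sampling would replace the full softmax.

```python
import numpy as np

def softmax(scores):
    # Numerically stable softmax over vocabulary scores.
    scores = scores - scores.max()
    exp = np.exp(scores)
    return exp / exp.sum()

# Toy sizes: |V| = 10 words, d = 8 dimensions, window k = 2.
V, d, k = 10, 8, 2
rng = np.random.default_rng(0)
W = rng.normal(size=(d, V))      # word matrix, one column per word
U = rng.normal(size=(V, d))      # softmax projection
b = np.zeros(V)                  # softmax bias

context_ids = [1, 2, 4, 5]       # w_{t-2}, w_{t-1}, w_{t+1}, w_{t+2}
target_id = 3                    # w_t

h = W[:, context_ids].mean(axis=1)     # averaged context representation h(.)
y = U @ h + b                          # un-normalized log-probabilities, Eq. (3)
p_target = softmax(y)[target_id]       # Pr(w_t | context), Eq. (2)
print(p_target)
```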
For illustration purposes, we utilize average here. On the condition of average, the context vector ct is obtained by averaging the embeddings of each word, as follows: ct = 1 2k X −k≤c≤k,c̸=0 wt+c (4) The framework of Skip-Gram (Figure 1(b)) aims to predict context words given a target word wt in a sliding window, instead of predicting the current word based on its context. Formally, given a sentence S = {w1, w2, . . . , wl}, the objective of Skip-Gram is to maximize the following average log probability: L(S)= 1 (l−2k) Pl−k t=k+1 P −k≤c≤k,c̸=0 log Pr(wt+c|wt) (5) Wherein, wt and wc are respectively the vector representations of the target word wt and the context word wc. Usually, during the training stage of CBOW and Skip-Gram: (i) in order to make the models efficient for learning, the techniques of hierarchical softmax and negative sampling are used to ensure the models efficient for learning (Morin and Bengio, 2005; Mikolov et al., 2013a); (ii) the word vectors are trained by using stochastic gradient descent where the gradient is obtained via backpropagation (Rumelhart et al., 1986). After the training stage converges, words with similar meaning are mapped to a similar position in the semantic vector space. e.g., ‘powerful’ and ‘strong’ are close to each other. W wt-k wt-k+1 wt+k-1 wt+k … W W W wt wt W wt-k wt-k+1 wt+k-1 wt+k … (a) (b) Figure 1: (a) CBOW model and (b) Skip-Gram model. 3.2 CSE based on CBOW Model Intuitively, the proposed (attention-based) conceptual sentence embedding model for learning sentence vectors, is inspired by the methods for learning the word vectors. The inspiration is that, in researches of word embeddings: (i) The word vectors are asked to contribute to a prediction task about the target word or the surrounding words in the context; (ii) The word representation vectors are initialized randomly, however they could 507 finally capture precise semantics as an indirect result. Therefore, we will utilize this idea in our sentence vectors in a similar manner: The conceptassociated sentence vectors are also asked to contribute to the prediction task of the target word or surrounding words in given contextual text windows. Furthermore, attention model will attribute different influence value to different contextual words. We describe the first conceptual sentence embedding model, denoted as CSE-1, which is based on CBOW. In the framework of CSE-1 (Figure 2(a)), each sentence, denoted by sentence ID, is mapped to a unique vector s, represented by a column in matrix S. And its concept distribution θC are generated from a knowledge-based text conceptualization algorithm (Wang et al., 2015c). Moreover, similar to word embedding methods, each word wi is also mapped to a unique vector wi, represented by a column in matrix W. The surrounding words in contextual text window {wt−k, . . . , wt+k}, sentence ID and concept distribution θC corresponding to this sentence are the inputs. Besides, C is a fixed linear operator similar to the one used in (Huang et al., 2013) that converts the concept distribution θC to a concept vector, denoted as c. Note that, this makes our model very different from (Le and Mikolov, 2014) where no concept information is used, and experimental results demonstrate the efficiency of introducing concept information. It is clear that CSE-1 also does not take word order into consideration just like CBOW. Afterward, the sentence vector s, surrounding word vectors {wt−k, . . . 
, wt+k} and the concept vector c are concatenated or averaged to predict the target word wt in current context. In reality, the only change in this model compared to the word embedding method is in Eq. 3, where h(·) is constructed from not only W but also C and S. Note that, the sentence vector is shared across all contexts generated from the same sentence but not across sentences. Wherein, the contexts are fixedlength (length is 2k) and sampled from a sliding window over the current sentence. However, the word matrix W is shared across sentences. In summary, the procedure of CSE-1 itself is described as follows. A probabilistic conceptualization algorithm (Wang et al., 2015c) is employed here to obtain the corresponding concepts about given sentence: Firstly, we preprosess and Sentence ID Conceptualization C S wt-k wt-k+1 wt+k-1 wt+k … W wt-k wt-k+1 wt+k-1 wt+k … W W W wt Sentence ID Conceptualization C S θC θC c s c s (a) (b) Figure 2: CSE-1 model (a) and CSE-2 model (b). Green circles indicate word embeddings, blue circles indicate concept embeddings, and purple circles indicate sentence embeddings. Besides, orange circles indicate concept distribution θC generated by knowledge-based text conceptualization algorithm. segment the given sentence into a set of words; Then, based on a probabilistic lexical knowledgebase Probase (Wu et al., 2012), the heterogeneous semantic graph for these words and their corresponding concepts are constructed (Figure 3 shows an example); Finally, we utilize a simple iterative process to identify the most likely mapping from words to concepts. After efforts above, we could conceptualize words in given sentence, and access the concepts and corresponding probabilities, which is the concept distribution θC mentioned before. Note that, the concept distribution yields an important influence on the entire framework of conceptual sentence embedding, by contributing greatly to the semantic representation. During the training stage, we aim at obtaining word matrix W, sentence matrix S, and softmax weights {U, b} on already observed sentences. The techniques of hierarchical softmax and negative sampling are used to make the model efficient for learning. W and S are trained using stochastic gradient descent: At each step of stochastic gradient descent, we sample a fixed-length context from the given sentence, compute the error gradient which is obtained via backpropagation, and then use the gradient to update the parameters. During the inferring stage, we get sentence vectors for new sentences (unobserved before) by adding more columns in S and gradient descending on S while holding W, U and b fixed. Finally, we use S to make a prediction about multi-labels by using a standard classifier in output layer. 508 FRUIT microsoft office apple ipad COMPANY BRAND PRODUCT LOCATION BUILDING 0.80 0.41 0.16 0.86 0.81 0.91 0.31 ACCESSORY 0.53 0.82 0.76 0.37 Figure 3: Semantic graph of example sentence microsoft unveils office for apples ipad. Rectangles indicate terms occurred in given sentence, and ellipses indicate concept defined in knowledge-base (e.g., Probase). Bule solid links indicate isA relationship between terms and concepts, and red dashed lines indicate correlation relationship between two concepts. Numerical values on the line is corresponding probabilities. 3.3 CSE based on Skip-Gram Model The above method considers the combination of the sentence vector with the surrounding word vectors and concept vector to predict the target word in given text window. 
However, it loses information about word order to some extent, just like CBOW. In fact, there exists another way of modeling the prediction procedure: we could ignore the context words in the input, but force the model to predict words randomly sampled from the fixed-length contexts in the output. As shown in Figure 2 (b), only the sentence vector s and the concept vector c are used to predict the next word in a text window. That means contextual words are no longer used as inputs; instead, they become what the output layer predicts. Hence, this model is similar to the Skip-Gram model in word embedding (Mikolov et al., 2013b). In reality, what this means is that at each iteration of stochastic gradient descent, we sample a text window {wt−k, . . . , wt+k}, then sample a random word from this text window and form a classification task given the sentence vector s and the corresponding concept vector c. We denote this sort of conceptual sentence embedding model as CSE-2. The scheme of CSE-2 is similar to that of CSE-1 as described above. In addition to being conceptually simple, CSE-2 requires less storage: we only need to store {U, b, S} as opposed to {U, b, S, W} in CSE-1. 3.4 CSE based on Attention Model As mentioned above, setting a good value for the contextual window size k is difficult: a larger value of k may introduce degenerative behavior in the model, with more effort spent predicting words that are conditioned on unrelated words, while a smaller value of k may lead to cases where the window is not large enough to include words that are semantically related (Bansal et al., 2014; Wang et al., 2015a). To solve these problems, we extend the proposed models by introducing an attention model (Bahdanau et al., 2014; Rush et al., 2015), allowing them to consider contextual words within the window in a non-uniform way. For illustration purposes, we extend CSE-1 here with the attention model. Following (Wang et al., 2015a), we rewrite Eq.(4) as follows: $c_t = \frac{1}{2k} \sum_{-k \le c \le k,\, c \ne 0} a_{t+c}(w_{t+c})\, w_{t+c}$ (6) That is, we replace the average of the surrounding word vectors in Eq.(4) with a weighted sum of these vectors: each contextual word w_{t+c} is attributed a different attention level, representing how much the attention model believes it is important to look at in order to predict the target word w_t. The attention factor a_i(w_i) for word w_i in position i is formulated as a softmax function over contextual words (Bahdanau et al., 2014), as follows: $a_i(w) = \frac{e^{d_{w,i} + r_i}}{\sum_{-k \le c \le k,\, c \ne 0} e^{d_{w,c} + r_c}}$ (7) Here, d_{w,i} is an element of the matrix $D \in \Re^{|V| \times 2k}$, a set of parameters determining the importance of each word type in each relative position i (distance to the left/right of the target word w_t). Moreover, r_i, an element of $R \in \Re^{2k}$, is a bias conditioned only on the relative position i. Note that attention models have been reported to be expensive for large lookup tables in terms of storage and performance (Bahdanau et al., 2014; Wang et al., 2015a). Nevertheless, the computation here is simple: computing the attention of all words in the input requires only 2k operations, as it simply requires retrieving one value from the lookup matrix D for each word and one value from the bias vector R for each position in the context. Although this strategy may not be the best approach and more elaborate attention models exist (Bahdanau et al., 2014; Luong et al., 2015), the proposed attention model is a proper balance of computational efficiency and complexity.
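As a concrete illustration, the following is a minimal numpy sketch of Eqs. (6)-(7): position-specific scores are looked up from D, position biases from R, normalized with a softmax over the 2k context slots, and used to form the attention-weighted context vector. The toy shapes, random parameters, and word indices are placeholders rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
V, d, k = 10, 8, 2                       # toy vocabulary size, embedding dim, half window
W = rng.normal(size=(V, d))              # word embeddings (one row per word)
D = rng.normal(size=(V, 2 * k))          # per word-type, per relative-position scores d_{w,i}
R = np.zeros(2 * k)                      # per relative-position biases r_i

context_ids = [1, 2, 4, 5]               # words at relative positions -k..-1, +1..+k

# Eq. (7): attention factors, a softmax over the 2k context slots.
logits = np.array([D[w, i] + R[i] for i, w in enumerate(context_ids)])
attn = np.exp(logits - logits.max())
attn = attn / attn.sum()

# Eq. (6): attention-weighted context vector (the 1/(2k) factor kept as in the text).
c_t = (attn[:, None] * W[context_ids]).sum(axis=0) / (2 * k)
print(attn, c_t.shape)
```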
Thus, besides {W,C,S} in CSE models, D and R are added into parameter set which relates to 509 gradients of the loss function Eq.(1). All parameters are computed with backpropagation and updated after each training instance using a fixed learning rate. We denote the attention-based CSE1 model above as aCSE-1. With limitation of space, attention variant of CSE-2, denoted as aCSE-2, is not described here, however the principle is similar to aCSE-1. W w1 microsoft w2 unveil w4 for w5 ipad W W W w apple Sentence ID Conceptualization C S w3 office W a1 a2 a3 a4 a5 ... θc c s Figure 4: aCSE-1 model. The illustration of example sentence ‘mcrosoft unveils office for apple’s ipad’ for predicting word ‘apple’. Taking example ‘microsoft unveils office for apple’s ipad’ into consideration. The prediction of the polysemy word ‘apple’ by CSE-1 is shown in Figure 4, and darker cycle cell indicate higher attention value. We could observe that preposition word ‘for’ tend to be attributed very low attention, while context words, especially noun-words which contribute much to conceptualization (such as ‘ipad’, ‘office’, and ‘microsoft’) are attributed higher weights as these word own more predictive power. Wherein, ‘ipad’ is assigned the highest attention value as it close to the predicted word and co-occurs with it more frequently. As described before, concept distribution θC yields a considerable influence on conceptual sentence embedding. This is because, each dimensionality of this distribution denotes the probability of the concept (topic or category) this sentence is respect to. In other words, the concept distribution is a solid semantic representation of the sentence. Nevertheless, the information in each dimensionality of sentence (or word) vector makes no sense. Hence, there exist a linear operator in CSE-1, CSE-2, aCSE-1, and aCSE-2, which transmit the concept distribution into word vector and sentence vector, as shown in Figure 2 and Figure 3. 4 Experiments and Results In this section, we show experiments on two text understanding problems, text classification and information retrieval, to evaluate related models in several aspects. These tasks are always used to evaluate the performance of sentence embedding methods (Liu et al., 2015; Le and Mikolov, 2014). The source codes and datasets of this paper are publicly available1. 4.1 Datasets We utilize four datasets for training and evaluating. For text classification task, we use three datasets: NewsTile, TREC and Twitter. Dataset Tweet11 is used for evaluation in information retrieval task. Moreover, we construct dataset Wiki to fully train topic model-based models. NewsTitle: The news articles are extracted from a large news corpus, which contains about one million articles searched from Web pages. We organize volunteers to classify these news articles manually into topics according its article content (Song et al., 2015), and we select six topics: company, health, entertainment, food, politician, and sports. We randomly select 3,000 news articles in each topic, and only keep its title and its first one line of article. The average word count of titles is 9.41. TREC: It is the corpus for question classification on TREC (Li and Roth, 2002), which is widely used as benchmark in text classification task. There are 5,952 sentences in the entire dataset, classified into the 6 categories as follows: person, abbreviation, entity, description, location and numeric. 
Tweet11: This is the official tweet collections used in TREC Microblog Task 2011 and 2012 (Ounis et al., 2011; Soboroff et al., 2012). Using the official API, we crawled a set of local copies of the corpus. Our local Tweets11 collection has a sample of about 16 million tweets, and a set of 49 (TMB2011) and 60 (TMB2012) timestamped topics. Twitter: This dataset is constructed by manually labeling the previous dataset Tweet11. Similar to dataset NewsTitle, we ask our volunteers to label these tweets. After manually labeling, the dataset contains 12,456 tweets which are in four 1http://hlipca.org/index. php/2014-12-09-02-55-58/ 2014-12-09-02-56-24/58-acse 510 categories: company, country, entertainment, and device. The average length of the tweets is 13.16 words. Because of its noise and sparsity, this social media dataset is very challenging for the comparative models. Moreover, we also construct a Wikipedia dataset (denoted as Wiki) for training. We preprocess the Wikipedia articles2 with the following rules. First, we remove the articles less than 100 words, as well as the articles less than 10 links. Then we remove all the category pages and disambiguation pages. Moreover, we move the content to the right redirection pages. Finally we obtain about 3.74 million Wikipedia articles for indexing and training. 4.2 Alternative Algorithms We compare the proposed models with the following comparative algorithms. BOW: It is a simple baseline which represents each sentence as bag-of-words, and uses TF-IDF scores (Salton and Mcgill, 1986) as features to generate sentence vector. LDA: It represents each sentence as its topic distribution inferred by latent dirichlet allocation (Blei et al., 2003). We train this model in two ways: (i) on both Wikipedia articles and the evaluation datasets above, and (ii) only on the evaluation datasets. We report the better of the two. PV: Paragraph Vector models are variablelength text embedding models, including the distributed memory model (PV-DM) and the distributed bag-of-words model (PV-DBOW). It has been reported to achieve the state-of-the-art performance on task of sentiment classification (Le and Mikolov, 2014), however it only utilizes word surface. TWE: By taking advantage of topic model, it overcome ambiguity to some extent (Liu et al., 2015). Typically, TWE learn topic models on training set. It further learn topical word embeddings using the training set, then generate sentence embeddings for both training set and testing set. (Liu et al., 2015) proposed three models for topical word embedding, and we present the best results here. Besides, We also train TWE in two ways like LDA. 2http://en.wikipedia.org/wiki/ Wikipedia:Databasedown-load 4.3 Experiment Setup The details about parameter settings of the comparative algorithms are described in this section, respectively. For TWE, CSE-1, CSE-2 and their attention variants aCSE-1, and aCSE-2, the structure of the hierarchical softmax is a binary Huffman tree (Mikolov et al., 2013a; Mikolov et al., 2013b), where short codes are assigned to frequent words. This is a good speedup trick because common words are accessed quickly (Le and Mikolov, 2014).We set the dimensions of sentence, word, topic and concept embeddings as 5,000, which is like the number of concept clusters in Probase (Wu et al., 2012; Wang et al., 2015c). Meanwhile, we have done many experiments on choosing the context window size (k). 
We perform experiments on increasing windows size from 3 to 11, and different size works differently on different dataset with different average length of short-texts. And we choose the result of windows size of 5 present here, because it performs best in almost datasets. Usually, in project layer, the sentence vector, the context vector and the concept vectors could be averaged or concatenated for combination to predict the next word in a context. We perform experiments following these two strategies respectively, and report the better of the two. In fact, the concatenation performs better since averaging different types of vectors may cause loss of information somehow. For BOW and LDA, we remove stop words by using InQuery stop-word list. For BOW, we select top 50,000 words according to TF-IDF scores as features. For both LDA and TWE, in the text classification task, we set the topic number to be the cluster number or twice, and report the better of the two; while in the information retrieval task, we experimented with a varying number of topics from 100 to 500, which gives similar performance, and we report the final results of using 500 topics. In summary, we use the sentence vectors generated by each algorithm as features and run a linear classifier using Liblinear (Fan et al., 2010) for evaluation. 4.4 Text Classification In this section, we run the multi-class text classification experiments on the dataset NewsTitle, Twitter, and TREC. We report precision, recall and F-measure for comparison (as shown in Table 1). Statistical t-test are employed here. To de511 NewsTitle Twitter TREC Model P R F P R F P R F BOW 0.782 0.791 0.786 0.437 0.429 0.433 0.892 0.891 0.891 LDA 0.717 0.705 0.711 0.342 0.308 0.324 0.813 0.809 0.811 PV-DBOW 0.725 0.719 0.722 0.413 0.408 0.410 0.824 0.819 0.821 PV-DM 0.748 0.740 0.744 0.426 0.424 0.425 0.836 0.825 0.830 TWE 0.811β 0.803β 0.807β 0.459β 0.438 0.448β 0.898β 0.886β 0.892β CSE-1 0.815 0.809 0.812 0.461 0.449 0.454 0.896 0.890 0.893 CSE-2 0.827 0.817 0.822 0.475 0.447 0.462 0.901 0.895 0.898 aCSE-1 0.824 0.818 0.821 0.471 0.454 0.462 0.901 0.897 0.899 aCSE-2 0.831αβ 0.820αβ 0.825αβ 0.477αβ 0.450αβ 0.463αβ 0.909αβ 0.904αβ 0.906αβ Table 1: Evaluation results of multi-class text classification task. cide whether the improvement by method A over method B is significant, the t-test calculates a value p based on the performance of A and B. The smaller p is, the more significant is the improvement. If the p is small enough (p < 0.05), we conclude that the improvement is statistically significant. In Table 1, the superscript α and β respectively denote statistically significant improvements over TWE and PV-DM. Without regard to attention-based model firstly, we could conclude that CSE-2 outperforms all the baselines significantly (expect for recall in Twitter). This fully indicates that the proposed model could capture more precise semantic information of sentence as compared to topic model-based models and other embedding models. Because the concepts we obtained contribute significantly to the semantic representation of sentence, meanwhile suffer slightly from texts noisy and sparsity. Moreover, as compared to BOW, CSE-1 and CSE-2 manage to reduce the feature space by 90 percent, while among them, CSE-2 needs to store less data comparing with CSE-1. By introducing attention model, performances of CSE models are entirely promoted, as compared aCSE-2 with original CSE-2, which demonstrates the advantage of attention model. 
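The significance markers in Table 1 rest on the t-test comparison described above; the following is a minimal sketch of how such a test can be run, assuming a paired test over matched per-class (or per-fold) scores of two systems. The score lists and the choice of a paired variant are illustrative assumptions, not values reported in the paper.

```python
from scipy import stats

# Hypothetical matched F1 scores of system A and system B over the same evaluation splits.
scores_a = [0.825, 0.463, 0.906, 0.812, 0.454]
scores_b = [0.807, 0.448, 0.892, 0.744, 0.425]

t_stat, p_value = stats.ttest_rel(scores_a, scores_b)   # paired t-test
print(p_value, p_value < 0.05)   # improvement called significant when p < 0.05
```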
PV-DM and PV-DBOW are reported as the state-of-the-art model for sentence embedding. From the results we can also see that, the proposed model CSE-2 and aCSE-2 significantly outperforms PV-DBOW. As expected, LDA performs worst, even worse than BOW, because it is trained on very sparse short-texts (i.e., question and social media text), where there is no enough statistical information to infer word co-occurrence and word topics, and latent topic model suffer extremely from the sparsity of the short-text. Besides, the number of topics slightly impacts the performance of LDA. In future, we may conduct more experiments to explore genuine reasons. As described in section 3, aCSE-2 (CSE-2) performs better than aCSE-1 (CSE-1), because the former one take word order into consideration. Based on Skip-Gram similarly, CSE-2 outperforms TWE. Although TWE aims at enhancing sentence representation by using topic model, neither parsing nor topic modeling would work well because shorttexts lack enough signals for inference. Whats more, sentence embeedings are generated by simple aggregating over all topical word embeddings of each word in this sentence in TWE, which limits its capability of semantic representation. Overall, nearly all the alternative algorithms perform worse on Twitter, especially LDA and TWE. This is mainly because that data in Twitter are more challenging for topic model as short-texts are noisy, sparse, and ambiguous. Although the training on larger corpus, i.e., way (i), contributes greatly to improving the performance of these topic-model based algorithms, they only have similar performance to CSE-1 and could not transcend the attention-based variants. Certainly, we could also train TWE (even LDA) on a very larger corpus, and could expect a letter better results. However, training latent topic model on very large dataset is very slow, although many fast algorithms of topic models are available (Smola and Narayanamurthy, 2010; Ahmed et al., 2012). Whats more, from the complexity analysis, we could conclude that, compared with PV, CSE only need a little more space to store look-ups matrix D 512 and R; while compared with CSE and PV, TWE require more parameters to store more discriminative information for word embedding. 4.5 Information Retrieval The information retrieval task is also utilized to evaluate the proposed models, and we want to examine whether a sentence should be retrieved given a query. Specially, we mainly focus on shorttext retrieval by utilizing official tweet collection Tweet11, which is the benchmark dataset for microblog retrieval. We index all tweets in this collection by using Indri toolkit, and then perform a general relevance-pseudo feedback procedure, as follows: (i) Given a query, we firstly obtain associated tweets, which are before query issue time, via preliminary retrieval as feedback tweets. (ii) We generate the sentence representation vector of both original query and these feedback tweets by the alternative algorithms above. (iii) With efforts above, we compute cosine scores between query vector and each tweet vector to measure the semantic similarity between the query and candidate tweets, and then re-rank the feedback tweets with descending cosine scores. We utilize the official metric for the TREC Microblog track, i.e., Precision at 30 (P@30), and Mean Average Precision (MAP), for evaluating the ranking performance of different algorithms. Experimental results for this task are shown in Table 2. 
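Step (iii) above amounts to scoring each feedback tweet against the query by the cosine similarity of their sentence vectors and sorting in descending order. A minimal numpy sketch follows; the vector dimensionality and the random vectors stand in for whichever sentence embedding model is being evaluated.

```python
import numpy as np

def rerank_by_cosine(query_vec, tweet_vecs):
    # Cosine similarity between the query vector and each feedback tweet vector.
    q = query_vec / (np.linalg.norm(query_vec) + 1e-12)
    T = tweet_vecs / (np.linalg.norm(tweet_vecs, axis=1, keepdims=True) + 1e-12)
    scores = T @ q
    order = np.argsort(-scores)          # descending cosine score
    return order, scores[order]

rng = np.random.default_rng(2)
query_vec = rng.normal(size=100)         # sentence vector of the query
tweet_vecs = rng.normal(size=(50, 100))  # sentence vectors of 50 feedback tweets
order, scores = rerank_by_cosine(query_vec, tweet_vecs)
print(order[:30])                        # the top 30 tweets, as scored for P@30
```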
Besides, we also operate a query-by-query analysis and conduct t-test to demonstrate the improvements on both metrics are statistically significant. In Table 2, the superscript α and β respectively denote statistically significant improvements over TWE and PV-DM (p < 0.05). As shown in Table 2, the CSE-2 significantly outperforms all these models, and exceeds the best baseline model (TWE) by 11.9% in MAP and 4.5% in P@30, which is a statistically significant improvement. Without regard to attention-based model firstly, such an improvement comes from the CSE-2’s ability to embed the contextual and semantic information of the sentences into a finite dimension vector. Topic model based algorithms (e.g., LDA and TWE) suffer extremely from the sparsity and noise of tweet collection. For the twitter data, since we are not able to find appropriate long texts, latent topic models are not performed. We could observe that attention-based CSE model (aCSE-1 and aCSE-2) improves over oTMB2011 TMB2012 Model MAP P@30 MAP P@30 BOW 0.304 0.412 0.321 0.494 LDA 0.281 0.409 0.311 0.486 PV-DBOW 0.285 0.412 0.324 0.491 PV-DM 0.327 0.431 0.340 0.524 TWE 0.331 0.446β 0.347β 0.511 CSE-1 0.337 0.451 0.344 0.512 CSE-2 0.367 0.461 0.360 0.517 aCSE-1 0.342 0.459 0.351 0.516 aCSE-2 0.370αβ0.464αβ 0.364αβ0.522αβ Table 2: Results of information retrieval. riginal CSE model (CSE-1 and CSE-2). However, attention model promotes CSE-1 significantly, while aCSE-2 obtain similar results compared to CSE-2, indicating that attention model leads to small improvement for Skip-Gram based CSE model. We argue that it is because Skip-Gram itself gives less weight to the distant words by sampling less from those words, which is essentially similar to attention model somehow. 5 Conclusion By inducing concept information, the proposed conceptual sentence embedding maintains and enhances the semantic information of sentence embedding. Furthermore, we extend the proposed models by introducing attention model, which allows it to consider contextual words within the window in a non-uniform way while maintaining the efficiency. We compare them with different algorithms, including bag-of-word models, topic model-based model and other state-of-the-art sentence embedding models. The experimental results demonstrate that the proposed method performs the best and shows improvement over the compared methods, especially for short-texts. Acknowledgments The work was supported by National Natural Science Foundation of China (Grant No. 61132009), National Basic Research Program of China (973 Program, Grant No. 2013CB329303), and National Hi-Tech Research & Development Program (863 Pro-gram, Grant No. 2015AA015404). 513 References Amr Ahmed, Moahmed Aly, Joseph Gonzalez, Shravan Narayanamurthy, and Alexander J. Smola. 2012. Scalable inference in latent variable models. In International Conference on Web Search and Web Data Mining, WSDM 2012, Seattle, Wa, Usa, February, pages 123–132. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. Eprint Arxiv. Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2014. Tailoring continuous word representations for dependency parsing. In Meeting of the Association for Computational Linguistics, pages 809–815. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, 3:993–1022. Rongen Fan, Kaiwei Chang, Cho Jui Hsieh, Xiangrui Wang, and Chih Jen Lin. 2010. 
Liblinear: A library for large linear classification. Journal of Machine Learning Research, 9(12):1871–1874. Zellig S. Harris. 1970. Distributional Structure. Springer Netherlands. Posen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In ACM International Conference on Conference on Information and Knowledge Management, pages 2333–2338. Quoc V. Le and Tomas. Mikolov. 2014. Distributed representations of sentences and documents. Eprint Arxiv, 4:1188–1196. Xin Li and Dan Roth. 2002. Learning question classifiers. In COLING. Yang Liu, Zhiyuan Liu, Tat-Seng Chua, and Maosong Sun. 2015. Topical word embeddings. In TwentyNinth AAAI Conference on Artificial Intelligence. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In EMNLP. Mingbo Ma, Liang Huang, Bing Xiang, and Bowen Zhou. 2015. Dependency-based convolutional neural networks for sentence embedding. In Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. Computer Science. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems, 26:3111–3119. Frederic Morin and Yoshua Bengio. 2005. Hierarchical probabilistic neural network language model. Aistats. Iadh Ounis, Craig MacDonald, Jimmy Lin, and Ian Soboroff. 2011. Overview of the trec-2011 microblog track. H Palangi, L Deng, Y Shen, J Gao, X He, J Chen, X Song, and R Ward. 2015. Deep sentence embedding using the long short term memory network: Analysis and application to information retrieval. Arxiv, 24(4):694–707. David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. 1986. Learning representations by backpropagating errors. Nature, 323(6088):533–536. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Gerard Salton and Michael J. Mcgill. 1986. Introduction to modern information retrieval. McGraw-Hill,. Aliaksei Severyn, Alessandro Moschitti, Manos Tsagkias, Richard Berendsen, and Maarten De Rijke. 2014. A syntax-aware re-ranker for microblog retrieval. In SIGIR, pages 1067–1070. Alexander Smola and Shravan Narayanamurthy. 2010. An architecture for parallel topic models. Proceedings of the Vldb Endowment, 3(1):703–710. Ian Soboroff, Iadh Ounis, Craig MacDonald, and Jimmy Lin. 2012. Overview of the trec-2012 microblog track. In TREC. Yangqiu Song, Haixun Wang, Zhongyuan Wang, Hongsong Li, and Weizhu Chen. 2011. Short text conceptualization using a probabilistic knowledgebase. In Proceedings of the Twenty-Second international joint conference on Artificial Intelligence Volume Volume Three, pages 2330–2336. Yangqiu Song, Shusen Wang, and Haixun Wang. 2015. Open domain short text conceptualization: a generative + descriptive modeling approach. In Proceedings of the 24th International Conference on Artificial Intelligence. Ling Wang, Tsvetkov Yulia, Amir Silvio, Fermandez Ramon, Dyer Chris, Black Alan W, Trancoso Isabel, and Lin Chu-Cheng. 2015a. 
Not all contexts are created equal: Better word representations with variable attention. In Conference on Empirical Methods in Natural Language Processing, pages 1367–1372. Mingxuan Wang, Zhengdong Lu, Hang Li, and Qun Liu. 2015b. Syntax-based deep matching of short texts. Computer Science. 514 Zhongyuan Wang, Kejun Zhao, Haixun Wang, Xiaofeng Meng, and Ji-Rong Wen. 2015c. Query understanding through knowledge-based conceptualization. In Proceedings of the 24th International Conference on Artificial Intelligence. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. Towards universal paraphrastic sentence embeddings. Computer Science. Wentao Wu, Hongsong Li, Haixun Wang, and Kenny Q. Zhu. 2012. Probase: a probabilistic taxonomy for text understanding. In Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data, pages 481–492. 515
2016
48
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 516–525, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics DocChat: An Information Retrieval Approach for Chatbot Engines Using Unstructured Documents Zhao Yan† ∗, Nan Duan‡ , Junwei Bao+ , Peng Chen§ , Ming Zhou‡ , Zhoujun Li† , Jianshe Zhou¶ †State Key Laboratory of Software Development Environment, Beihang University ‡Microsoft Research +Harbin Institute of Technology §Microsoft Search Technology Center ¶BAICIT, Capital Normal University †{yanzhao, lizj}@buaa.edu.cn [email protected] ‡§{nanduan, peche, mingzhou}@microsoft.com ¶[email protected] Abstract Most current chatbot engines are designed to reply to user utterances based on existing utterance-response (or Q-R)1 pairs. In this paper, we present DocChat, a novel information retrieval approach for chatbot engines that can leverage unstructured documents, instead of Q-R pairs, to respond to utterances. A learning to rank model with features designed at different levels of granularity is proposed to measure the relevance between utterances and responses directly. We evaluate our proposed approach in both English and Chinese: (i) For English, we evaluate DocChat on WikiQA and QASent, two answer sentence selection tasks, and compare it with state-of-the-art methods. Reasonable improvements and good adaptability are observed. (ii) For Chinese, we compare DocChat with XiaoIce2, a famous chitchat engine in China, and side-by-side evaluation shows that DocChat is a perfect complement for chatbot engines using Q-R pairs as main source of responses. 1 Introduction Building chatbot engines that can interact with humans with natural language is one of the most challenging problems in artificial intelligence. Along with the explosive growth of social media, like community question answering (CQA) websites (e.g., Yahoo Answers and WikiAnswers) and social media websites (e.g., Twitter and Weibo), ∗Contribution during internship at Microsoft Research. 1For convenience sake, we denote all utterance-response pairs (either QA pairs or conversational exchanges from social media websites like Twitter) as Q-R pairs in this paper. 2http://www.msxiaoice.com the amount of utterance-response (or Q-R) pairs has experienced massive growth in recent years, and such a corpus greatly promotes the emergence of various data-driven chatbot approaches. Instead of multiple rounds of conversation, we only consider a much simplified task, short text conversation (STC) in which the response R is a short text and only depends on the last user utterance Q. Previous methods for the STC task mostly rely on Q-R pairs and fall into two categories: Retrieval-based methods (e.g., Ji et al., 2014). This type of methods first retrieve the most possible ⟨ˆQ, ˆR⟩pair from a set of existing Q-R pairs, which best matches current utterance Q based on semantic matching models, then take ˆR as the response R. One disadvantage of such a method is that, for many specific domains, collecting such QR pairs is intractable. Generation based methods (e.g., Shang et al., 2015). This type of methods usually uses an encoder-decoder framework which first encode Q as a vector representation, then feed this representation to decoder to generate response R. Similar to retrieval-based methods, such approaches also depend on existing Q-R pairs as training data. 
Like other language generation tasks, such as machine translation and paraphrasing, the fluency and naturality of machine generated text is another drawback. To overcome the issues mentioned above, we present a novel response retrieval approach, DocChat, to find responses based on unstructured documents. For each user utterance, instead of looking for the best Q-R pair or generating a word sequence based on language generation techniques, our method selects a sentence from given documents directly, by ranking all possible sentences based on features designed at different levels of granularity. On one hand, using documents rather than Q-R pairs greatly improve the adapt516 ability of chatbot engines on different chatting topics. On the other hand, all responses come from existing documents, which guarantees their fluency and naturality. We also show promising results in experiments, on both QA and chatbot scenarios. 2 Task Description Formally, given an utterance Q and a document set D, the document-based chatbot engine retrieves response R based on the following three steps: • response retrieval, which retrieves response candidates C from D based on Q: C = Retrieve(Q, D) Each S ∈C is a sentence existing in D. • response ranking, which ranks all response candidates in C and selects the most possible response candidate as ˆS: ˆS = arg max S∈C Rank(S, Q) • response triggering, which decides whether it is confident enough to response Q using ˆS: I = Trigger( ˆS, Q) where I is a binary value. When I equals to true, let the response R = ˆS and output R; otherwise, output nothing. In the following three sections, we will describe solutions of these three components one by one. 3 Response Retrieval Given a user utterance Q, the goal of response retrieval is to efficiently find a small number of sentences from D, which have high possibility to contain suitable sentences as Q’s response. Although it is not necessarily true that a good response always shares more words with a given utterance, this measurement is still helpful in finding possible response candidates (Ji et al., 2014). In this paper, the BM25 term weighting formulas (Jones et al., 2000) is used to retrieve response candidates from documents. Given each document Dk ∈D, we collect a set of sentence triples ⟨Sprev, S, Snext⟩from Dk, where S denotes a sentence in Dk, Sprev and Snext denote S’s previous sentence and next sentence respectively. Two special tags, ⟨BOD⟩and ⟨EOD⟩, are added at the beginning and end of each passage, to make sure that such sentence triples can be extracted for every sentence in the document. The reason for indexing each sentence together with its context sentences is intuitive: If a sentence within a document can respond to an utterance, then its context should be revelent to the utterance as well. 4 Response Ranking Given a user utterance Q and a response candidate S, the ranking function Rank(S, Q) is designed as an ensemble of individual matching features: Rank(S, Q) = X k λk · hk(S, Q) where hk(·) denotes the k-th feature function, λk denotes hk(·)’s corresponding weight. We design features at different levels of granularity to measure the relevance between S and Q, including word-level, phrase-level, sentencelevel, document-level, relation-level, type-level and topic-level, which will be introduced below. 
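To make the overall flow concrete before the individual features are described, the following is a minimal sketch of the retrieve-rank-trigger pipeline with the weighted feature ensemble Rank(S, Q) = Σk λk · hk(S, Q). The retriever, the single toy feature, the weights, and the score threshold used for triggering are all illustrative assumptions; the paper's actual features follow in the next subsections and its triggering decision is a separate component.

```python
def rank(sentence, utterance, feature_fns, weights):
    # Rank(S, Q) = sum_k lambda_k * h_k(S, Q): a weighted ensemble of matching features.
    return sum(w * h(sentence, utterance) for h, w in zip(feature_fns, weights))

def respond(utterance, retrieve, feature_fns, weights, threshold=0.2):
    # Step 1: response retrieval -- candidate sentences C from the document set.
    candidates = retrieve(utterance)
    if not candidates:
        return None
    # Step 2: response ranking -- pick the highest-scoring candidate S_hat.
    best = max(candidates, key=lambda s: rank(s, utterance, feature_fns, weights))
    # Step 3: response triggering -- only answer when confident enough
    # (a plain score threshold here; the paper treats triggering as its own component).
    if rank(best, utterance, feature_fns, weights) >= threshold:
        return best
    return None

# Toy usage with a single word-overlap feature and a trivial retriever.
def word_overlap(s, q):
    return len(set(s.lower().split()) & set(q.lower().split())) / (len(q.split()) or 1)

docs = ["DocChat selects sentences from documents.", "BM25 retrieves candidate sentences."]
print(respond("Does DocChat select sentences from documents?",
              retrieve=lambda q: docs,
              feature_fns=[word_overlap],
              weights=[1.0]))
```

In the paper itself, the feature functions hk are the word-, phrase-, sentence-, document-, relation-, type- and topic-level signals listed above, and the weights λk are learned by a learning-to-rank model rather than set by hand.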
4.1 Word-level Feature We define three word-level features in this work: (1) hWM(S, Q) denotes a word matching feature that counts the number (weighted by the IDF value of each word in S) of non-stopwords shared by S and Q. (2) hW2W (S, Q) denotes a word-toword translation-based feature that calculates the IBM model 1 score (Brown et al., 1993) of S and Q based on word alignments trained on ‘questionrelated question’ pairs using GIZA++ (Och and Ney, 2003). (3) hW2V (S, Q) denotes a word embedding-based feature that calculates the average cosine distance between word embeddings of all non-stopword pairs ⟨vSj, vQi⟩. vSj represent the word vector of jth word in S and vQj represent the word vector of ith word in Q. 4.2 Phrase-level Feature 4.2.1 Paraphrase We first describe how to extract phrase-level paraphrases from an existing SMT (statistical machine translation) phrase table. PT = {⟨si, ti, p(ti|si), p(si|ti)⟩}3 is a phrase table, which is extracted from a bilingual corpus, where si (or ti) denotes a phrase, in source 3We omit lexical weights that are commonly used in phrase tables, as they are not useful in paraphrase extraction. 517 (or target) language, p(ti|si) (or p(si|ti)) denotes the translation probability from si (or ti) to ti (or si). We follow Bannard and CallisonBurch (2005) to extract a paraphrase table PP = {⟨si, sj, score(sj; si)⟩}. si and sj denote two phrases in source language, score(sj; si) denotes a confidence score that si can be paraphrased to sj, which is computed based on PT: score(sj; si) = X t {p(t|si) · p(sj|t)} The underlying idea of this approach is that, two source phrases that are aligned to the same target phrase trend to be paraphrased. We then define a paraphrase-based feature as: hPP (S, Q) = PN n=1 P|S|−n j=0 CountP P (Sj+n−1 j ,Q) |S|−n+1 N where Sj+n−1 j denotes the consecutive word sequence (or phrase) in S, which starts from Sj and ends with Sj+n−1, N denotes the maximum n-gram order (here is 3). CountPP (Sj+n−1 j , Q) is computed based on the following rules: • If Sj+n−1 j ∈Q, then CountP P (Sj+n−1 j , Q) = 1; • Else, if ⟨Sj+n−1 j , s, score(s; Sj+n−1 j )⟩ ∈ PP and Sj+n−1 j ’s paraphrase s occurs in Q, then CountP P (Sj+n−1 j , Q) = score(s; Sj+n−1 j ) • Else, CountP P (Sj+n−1 j , Q) = 0. 4.2.2 Phrase-to-Phrase Translation Similar to hPP (S, Q), a phrase translation-based feature based on a phrase table PT is defined as: hPT (S, Q) = PN n=1 P|S|−n j=0 CountP T (Sj+n−1 j ,Q) |S|−n+1 N where CountPT (Sj+n−1 j , Q) is computed based on the following rules: • If Sj+n−1 j ∈Q, then CountP T (Sj+n−1 j , Q) = 1; • Else, if ⟨Sj+n−1 j , s, p(Sj+n−1 j |s), p(s|Sj+n−1 j )⟩∈ PT and Sj+n−1 j ’s translation s ∈Q, then CountP T (Sj+n−1 j , Q) = p(Sj+n−1 j |s) · p(s|Sj+n−1 j ) • Else, CountP T (Sj+n−1 j , Q) = 0 We train a phrase table based on ‘question-answer’ pairs crawled from community QA websites. 4.3 Sentence-level Feature We first present an attention-based sentence embedding method based on a convolution neural network (CNN), whose input is a sentence pair and output is a sentence embedding pair. Two features will be introduced in Section 4.3.1 and 4.3.2, which are designed based on two sentence embedding models trained using different types of data. 
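Before turning to these sentence-embedding models, the two simplest word-level features defined above can be sketched directly; the IDF table, stopword list, and embedding dictionary are assumed inputs rather than part of the original system.

```python
from typing import Dict, List, Set
import numpy as np

def h_wm(S: List[str], Q: List[str],
         idf: Dict[str, float], stopwords: Set[str]) -> float:
    """IDF-weighted count of non-stopwords shared by S and Q."""
    q_words = {w.lower() for w in Q} - stopwords
    shared = {w.lower() for w in S} & q_words
    return sum(idf.get(w, 0.0) for w in shared)

def h_w2v(S: List[str], Q: List[str],
          emb: Dict[str, np.ndarray], stopwords: Set[str]) -> float:
    """Average cosine similarity over all non-stopword (S-word, Q-word) pairs."""
    s_vecs = [emb[w] for w in S if w not in stopwords and w in emb]
    q_vecs = [emb[w] for w in Q if w not in stopwords and w in emb]
    if not s_vecs or not q_vecs:
        return 0.0
    sims = [float(np.dot(s, q) / (np.linalg.norm(s) * np.linalg.norm(q) + 1e-12))
            for s in s_vecs for q in q_vecs]
    return sum(sims) / len(sims)
```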
In the input layer, given a sentence pair ⟨SX, SY ⟩, an attention matrix A ∈R|SX|×|SY | is generated based on pre-trained word embeddings of SX and SY , where each element Ai,j ∈A is computed as: Ai,j = cosine(vSX i , vSY j ) where vSX i (or vSY j ) denotes the embedding vector of the ith (or jth) word in SX (or SY ). Then, column-wise and row-wise max-pooling are applied to A to generate two attention vectors V SX ∈R|SX| and V SY ∈R|SY |, where the kth elements of V SX and V SY are computed as: V SX k = max 1<l<|SY |{Ak,l} and V SY k = max 1<l<|SX|{Al,k} V SX k (or V SY k ) can be interpreted as the attention score of the kth word in SX (or SY ) with regard to all words in SY (or SX). Next, two attention distributions DSX ∈R|SX| and DSY ∈R|SY | are generated for SX and SY based on V SX and V SY respectively, where the kth elements of DSX and DSY are computed as: DSX k = eV SX k P|SX| l=1 eV SX l and D SY k = eV SY k P|SY | l=1 eV SY l DSX k (or DSY k ) can be interpreted as the normalized attention score of the kth word in SX (or SY ) with regard to all words in SY (or SX). Last, we update each pre-trained word embedding vSX k (or vSY k ) to ˆvSX k (or ˆvSY k ), by multiplying every value in vSX k (or vSY k ) with DSX k (or DSY k ). The underlying intuition of updating pre-trained word embeddings is to re-weight the importance of each word in SX (or SY ) based on SY (or SX), instead of treating them in an equal manner. In the convolution layer, we first derive an input matrix ZSX = {l1, ..., l|SX|}, where lt is the concatenation of a sequence of m = 2d−14 updated word embeddings [ˆvSX t−d, ..., ˆvSX t , ..., ˆvSX t+d], centralized in the tth word in SX. Then, the convo4In this paper, m is set to 3. 518 lution layer performs sliding window-based feature extraction to project each vector representation lt ∈ZSX to a contextual feature vector hSX t : hSX t = tanh(Wc · lt) where Wc is the convolution matrix, tanh(x) = 1−e−2x 1+e−2x is the activation function. The same operation is performed to SY as well. In the pooling layer, we aggregate local features extracted by the convolution layer from SX, and form a sentence-level global feature vector with a fixed size independent of the length of the input sentence. Here, max-pooling is used to force the network to retain the most useful local features by lSX p = [vSX 1 , ..., vSX K ], where: vSX i = max t=1,...,|SX|{hSX t (i)} hSX t (i) denotes the ith value in the vector hSX t . The same operation are performed to SY as well. In the output layer, one more non-linear transformation is applied to lSX p : y(SX) = tanh(Ws · lSX p ) Ws is the semantic projection matrix, y(SX) is the final sentence embedding of SX. The same operation is performed to SY to obtain y(SY ). We train model parameters Wc and Ws by minimizing the following ranking loss function: L = max{0, M −cosine(y(SX), y(SY )) +cosine(y(SX), y(S− Y ))} where M is a constant, S− Y is a negative instance. 4.3.1 Causality Relationship Modeling We train the first attention-based sentence embedding model based on a set of ‘question-answer’ pairs as input sentence pairs, and then design a causality relationship-based feature as: hSCR(S, Q) = cosine(ySCR(S), ySCR(Q)) ySCR(S) and ySCR(Q) denote the sentence embeddings of S and Q respectively. We expect this feature captures the causality relationship between questions and their corresponding answers, and works on question-like utterances. 
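The attention-based re-weighting performed in the input layer above can be summarised in a few lines of numpy; the convolution, pooling, and output layers are omitted, and the variable names are ours.

```python
import numpy as np

def attention_reweight(X: np.ndarray, Y: np.ndarray):
    """X: |S_X| x d pre-trained word embeddings of S_X; Y: |S_Y| x d of S_Y."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    Yn = Y / (np.linalg.norm(Y, axis=1, keepdims=True) + 1e-12)
    A = Xn @ Yn.T                           # A[i, j] = cosine(v^SX_i, v^SY_j)
    v_x = A.max(axis=1)                     # V^SX: max over words of S_Y
    v_y = A.max(axis=0)                     # V^SY: max over words of S_X
    d_x = np.exp(v_x) / np.exp(v_x).sum()   # attention distribution D^SX
    d_y = np.exp(v_y) / np.exp(v_y).sum()   # attention distribution D^SY
    # re-weight each pre-trained embedding by its normalized attention score
    return X * d_x[:, None], Y * d_y[:, None]
```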
4.3.2 Discourse Relationship Modeling We train the second attention-based sentence embedding model based on a set of ‘sentence-next sentence’ pairs as input sentence pairs, and then design a discourse relationship-based feature as: hSDR(S, Q) = cosine(ySDR(S), ySDR(Q)) ySDR(S) and ySDR(Q) denote the sentence embeddings of S and Q respectively. We expect this feature learns and captures the discourse relationship between sentences and their next sentences, and works on statement-like utterances. Here, a large number of ‘sentence-next sentence’ pairs can be easily obtained from documents. 4.4 Document-level Feature We take document-level information into consideration to measure the semantic similarity between Q and S, and define two context features as: hDM(S∗, Q) = cosine(ySCR(S∗), ySCR(Q)) where S∗can be Sprev and Snext that denote previous and next sentences of S in the original document. The sentence embedding model trained based on ‘question-answer’ pairs (in Section 4.3.1) is directly used to generate context embeddings for hDM(Sprev, Q) and hDM(Snext, Q). So no further training data is needed for this feature. 4.5 Relation-level Feature Given a structured knowledge base, such as Freebase5, a single relation question Q (in natural language) with its answer can be first parsed into a fact formatted as ⟨esbj, rel, eobj⟩, where esbj denotes a subject entity detected from the question, rel denotes the relationship expressed by the question, eobj denotes an object entity found from the knowledge base based on esbj and rel. Then we can get ⟨Q, rel⟩pairs. This rel can help for modeling semantic relationships between Q and R. For example, the Q-A pair ⟨What does Jimmy Neutron do? −inventor⟩ can be parsed into ⟨Jimmy Neutron, fictional character occupation, inventor⟩where the rel is fictional character occupation. Similar to Yih et al. (2014), We use ⟨Q, rel⟩ pairs as training data, and learn a rel-CNN model, which can encode each question Q (or each relation rel) into a relation embedding. For a given question Q, the corresponding relation rel+ is 5http://www.freebase.com/ 519 treated as a positive example, and randomly selected other relations are used as negative examples rel−. The posterior probability of rel+ given Q is computed as: P(rel+|Q) = ecosine(y(rel+),y(Q)) P rel−ecosine(y(rel−),y(Q)) y(rel) and y(Q) denote relation embeddings of rel and Q based on rel-CNN. rel-CNN is trained by maximizing the log-posterior. We then define a relation-based feature as: hRE(S, Q) = cosine(yRE(Q), yRE(S)) yRE(S) and yRE(Q) denote relation embeddings of S and Q respectively, coming from rel-CNN. 4.6 Type-level Feature We extend each ⟨Q, esbj, rel, eobj⟩in the SimpleQuestions data set to ⟨Q, esbj, rel, eobj, type⟩, where type denotes the type name of eobj based on Freebase. Thus, we obtain ⟨Q, type⟩pairs. Similar to rel-CNN, we use ⟨Q, type⟩pairs to train another CNN model, denoted as type-CNN. Based on which, we define a type-based feature as: hTE(S, Q) = cosine(yTE(Q), yTE(S)) yTE(S) and yTE(Q) denote type embeddings of S and Q respectively, coming from type-CNN. 4.7 Topic-level Feature 4.7.1 Unsupervised Topic Model As the assumption that Q-R pair should share similar topic distribution, We define an unsupervised topic model-based feature hUTM as the average cosine distance between topic vectors of all non-stopword pairs ⟨vSj, vQi⟩, where vw = [p(t1|w), ..., p(tN|w)]T denotes the topic vector of a given word w. 
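The rel-CNN and type-CNN encoders above (and the topic-CNN below) share the same softmax-over-cosine training objective; a minimal sketch follows, where the embedding vectors are assumed to come from the respective encoders and the positive class is included in the normalisation, which we take to be the intended reading of the formula. The estimation of p(ti|w) for the unsupervised topic feature is completed below.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def posterior(q_emb: np.ndarray, pos_emb: np.ndarray, neg_embs: list) -> float:
    """P(rel+ | Q): softmax over cosine scores of the positive relation and
    randomly sampled negative relations; training maximises its logarithm."""
    scores = np.array([cosine(pos_emb, q_emb)] +
                      [cosine(n, q_emb) for n in neg_embs])
    exps = np.exp(scores)
    return float(exps[0] / exps.sum())
```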
Given a corpus, various topic modeling methods, such as pLSI (probabilistic latent semantic indexing) and LDA (latent Dirichlet allocation), can be used to estimate p(ti|w), which denotes the probability that w belongs to a topic ti. 4.7.2 Supervised Topic Model One shortcoming of the unsupervised topic model is that, the topic size is pre-defined, which might not reflect the truth on a specific corpus. In this paper, we explore a supervised topic model approach as well, based on ‘sentence-topic’ pairs. We crawl a large number of ⟨S, topic⟩pairs from Wikipedia documents, where S denotes a sentence, topic denotes the content name of the section that S extracted from. Such content names are labeled by Wikipedia article editors, and can be found in the Contents fields. Similar to rel-CNN and type-CNN, we use the ⟨S, topic⟩pairs to train another CNN model, denoted as topic-CNN. Based on which, we define a supervised topic model-based feature as: hSTM(S, Q) = cosine(ySTM(S), ySTM(Q)) ySTM(S) and ySTM(Q) denote topic embeddings of S and Q respectively, coming from topic-CNN. 4.8 Learning to Ranking Model We employ a regression-based learning to rank method (Nallapati, 2004) to train response ranking model, based on a set of labeled ⟨Q, C⟩pairs, Feature weights in the ranking model are trained by SGD based on the training data that consists of a set of ⟨Q, C⟩pairs, where Q denotes a user utterance and C denotes a set of response candidates. Each candidate S in C is labeled by + or −, which indicates whether S is a suitable response of Q (+), or not (−). As manually labeled data, such as WikiQA (Yang et al., 2015), needs expensive human annotation effort, we propose an automatic way to collect training data. First, ‘question-answer’ (or Q-A) pairs {Qi, Ai}M i=1 are crawled from community QA websites. Qi denotes a question. Ai denotes Qi’s answer, which includes one or more sentences Ai = {s1, ..., sK}. Then, we index answer sentences of all questions. Next, for each question Qi, we run response retrieval to obtain answer sentence candidates Ci = {s ′ 1, ..., s ′ N}. Last, if we know the correct answer sentences of each question Qi, we can then label each candidate in Ci as + or −. In experiments, manually labeled data (WikiQA) is used in open domain question answering scenario, and automatically generated data is used in chatbot scenario. 5 Response Triggering There are two types of utterances, chit-chat utterances and informative utterances. The former should be handled by chit-chat engines, and the latter is more suitable to our work, as documents usually contain formal and informative contents. Thus, we have to respond to informative utterances only. Response retrieval cannot always guarantee to return a candidate set that contains 520 at least one suitable response, but response ranking will output the best possible candidate all the time. So, we have to decide which responses are confident enough to be output, and which are not. In this paper, we define response triggering as a function that decides whether a response candidate S has enough confidence to be output: I = Trigger(S, Q) = IU(Q) ∧IRank(S, Q) ∧IR(S) where Trigger(Q, S) returns true, if and only if all its three sub-functions return true. IU(Q) returns true, if Q is an informative query. We collect and label chit-chat queries based on conversational exchanges from social media websites to train the classifier. 
IRank(S, Q) returns true, if the score s(S, Q) exceeds an empirical threshold τ: s(S, Q) = 1 1 + e−α·Rank(S,Q) where α is the scaling factor that controls the distribution of s(·) smooth or sharp. Both α and τ are selected based on a separated development set. IR(S) returns true, if (i) the length of S is less than a pre-defined threshold, and (ii) S does not start with a phrase that expresses a progressive relation, such as but also, besides, moreover and etc., as the contents of sentences starting with such phrases usually depend on their context sentences, and they are not suitable for responses. 6 Related Work For modeling dialogue. Previous works mainly focused on rule-based or learning-based approaches (Litman et al., 2000; Schatzmann et al., 2006; Williams and Young, 2007). These methods require efforts on designing rules or labeling data for training, which suffer the coverage issue. For short text conversation. With the fast development of social media, such as microblog and CQA services, large scale conversation data and data-driven approaches become possible. Ritter et al. (2011) proposed an SMT based method, which treats response generation as a machine translation task. Shang et al. (2015) presented an RNN based method, which is trained based on a large number of single round conversation data. Grammatical and fluency problems are the biggest issue for such generation-based approaches. Retrievalbased methods selects the most suitable response to the current utterance from the large number of Q-R pairs. Ji et al. (2014) built a conversation system using learning to rank and semantic matching techniques. However, collecting enough Q-R pairs to build chatbots is often intractable for many domains. Compared to previous methods, DocChat learns internal relationships between utterances and responses based on statistical models at different levels of granularity, and relax the dependency on Q-R pairs as response sources. These make DocChat as a general response generation solution to chatbots, with high adaptation capability. For answer sentence selection. Prior work in measuring the relevance between question and answer is mainly in word-level and syntactic-level (Wang and Manning, 2010; Heilman and Smith, 2010; Yih et al., 2013). Learning representation by neural network architecture (Yu et al., 2014; Wang and Nyberg, 2015; Severyn and Moschitti, 2015) has become a hot research topic to go beyond word-level or phrase-level methods. Compared to previous works we find that, (i) Large scale existing resources with noise have more advantages as training data. (ii) Knowledge-based semantic models can play important roles. 7 Experiments 7.1 Evaluation on QA (English) Take into account response ranking task and answer selection task are similar, we first evaluate DocChat in a QA scenario as a simulation. Here, response ranking is treated as the answer selection task, and response triggering is treated as the answer triggering task. 7.1.1 Experiment Setup We select WikiQA6 as the evaluation data, as it is precisely constructed based on natural language questions and Wikipedia documents, which contains 2,118 ‘question-document’ pairs in the training set, 296 ‘question-document’ pairs in development set, and 633 ‘question-document’ pairs in testing set. Each sentence in the document of a given question is labeled as 1 or 0, where 1 denotes the current sentence is a correct answer sentence, and 0 denotes the opposite meaning. 
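Pulling together the three sub-functions of Section 5 above, the triggering decision reduces to the sketch below; the informative-query classifier, the length threshold, and the progressive-phrase list are placeholders for the components described in the text, and α = 0.9, τ = 0.5 are the development-set values reported in Section 7.1.3.

```python
import math
from typing import Callable

PROGRESSIVE_PHRASES = ("but also", "besides", "moreover")   # illustrative list

def i_rank(rank_score: float, alpha: float = 0.9, tau: float = 0.5) -> bool:
    """s(S, Q) = 1 / (1 + exp(-alpha * Rank(S, Q))), compared against tau."""
    return 1.0 / (1.0 + math.exp(-alpha * rank_score)) > tau

def i_r(S: str, max_len: int = 50) -> bool:
    """Length check plus progressive-phrase filter (threshold value assumed)."""
    short_enough = len(S.split()) < max_len
    no_progressive_start = not S.lower().startswith(PROGRESSIVE_PHRASES)
    return short_enough and no_progressive_start

def trigger(S: str, Q: str, rank_score: float,
            is_informative: Callable[[str], bool]) -> bool:
    """I = I_U(Q) AND I_Rank(S, Q) AND I_R(S)."""
    return is_informative(Q) and i_rank(rank_score) and i_r(S)
```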
Given a question, the task of WikiQA is to select answer sentences from all sentences in a question’s corresponding document. The training data settings of response ranking features are described below. 6http://aka.ms/WikiQA 521 Fw denotes 3 word-level features, hWM, hW2W and hW2V . For hW2W , GIZA++ is used to train word alignments on 11.6M ‘question-related question’ pairs (Fader et al., 2013) crawled from WikiAnswers.7. For hW2V , Word2Vec (Mikolov et al., 2013) is used to train word embedding on sentences from Wikipedia in English. Fp denotes 2 phrase-level features, hPP and hPT . For hPP , bilingual data8 is used to extract a phrase-based translation table (Koehn et al., 2003), from which paraphrases are extracted (Section 4.2.1). For hPT , GIZA++ trains word alignments on 4M ‘question-answer’ pairs9 crawled from Yahoo Answers10, and then a phrase table is extracted from word alignments using the intersect-diag-grow refinement. Fs denotes 2 sentence-level features, hSCR and hSDR. For hSCR, 4M ‘question-answer’ pairs (the same to hPT ) is used to train the CNN model. For hSDR, we randomly select 0.5M ‘sentence-next sentence’ pairs from English Wikipedia. Fd denotes document-level feature hDM. Here, we didn’t train a new model. Instead, we just reuse the CNN model used in hSCR. Fr and Fty denote relation-level feature hRE and type-level feature hTE. Bordes et al. (2015) released the SimpleQuestions data set11, which consists of 108,442 English questions. Each question (e.g., What does Jimmy Neutron do?) is written by human annotators based on a triple in Freebase which formatted as ⟨esbj, rel, eobj⟩(e.g., ⟨Jimmy Neutron, fictional character occupation, inventor⟩) Here, as described in Section 4.5 and 4.6, ‘question-relation’ pairs and ‘question-type’ pairs based upon SimpleQuestions data set are used to train hRE and hTE. Fto denotes 2 topic-level features, hUTM and hSTM. For hUTM, we run LightLDA (Yuan et al., 2015) on sentences from English Wikipedia, where the topic is set to 1,000. For hSTM, 4M ‘sentence-topic’ pairs are extracted from English Wikipedia (Section 4.7.2), where the most frequent 25,000 content names are used as topics. 7http://wiki.answers.com 8We use 0.5M Chinese-English bilingual sentences in phrase table extraction, i.e., LDC2003E07, LDC2003E14, LDC2005T06, LDC2005T10, LDC2005E83, LDC2006E26, LDC2006E34, LDC2006E85 and LDC2006E92. 9For each question, we only select the first sentence in its answer to construct a ‘question-answer’ pair, as it contains more causality information than sentences in other positions. 10https://answers.yahoo.com 11https://research.facebook.com/research/-babi/ Features MAP MRR Fw 60.25% 61.70% Fp 61.31% 62.61% Fs 61.99% 64.32% Fd 59.15% 61.17% Fr 46.95% 45.89% Fty 45.67% 43.37% Fto 58.34% 59.96% Table 1: Impacts of features at different levels. # Methods MAP MRR (1) Yih et al. (2013) 59.93% 60.68% (2) Yang et al. (2015) 65.20% 66.52% (3) Miao et al. (2015) 68.86% 70.69% (4) Yin et al. (2015) 69.21% 71.08% (5) DocChat 68.25% 70.73% (6) DocChat+(2) 70.08% 72.22% Table 2: Evaluation of AS task on WikiQA. 7.1.2 Results on Answer Selection (AS) The performance of answer selection is evaluated by Mean Average Precision (MAP) and Mean Reciprocal Rank (MRR). Among all ‘questiondocument’ pairs in WikiQA, only one-third of documents contain answer sentences to their corresponding questions. Similar to previous work, questions without correct answers in the candidate sentences are not taken into account. 
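For reference, the two metrics can be computed as in the sketch below, assuming each question is given as a ranked list of 0/1 relevance labels (1 = correct answer sentence, best-ranked first); questions with no correct candidate are skipped, as noted above.

```python
from typing import List, Tuple

def map_mrr(ranked_labels: List[List[int]]) -> Tuple[float, float]:
    """ranked_labels[i]: relevance labels of question i's candidates, ordered
    by the system's ranking score (best first)."""
    ap_scores, rr_scores = [], []
    for labels in ranked_labels:
        if not any(labels):
            continue                      # question has no correct candidate
        hits, precisions = 0, []
        for rank, rel in enumerate(labels, start=1):
            if rel:
                hits += 1
                precisions.append(hits / rank)
        ap_scores.append(sum(precisions) / hits)          # average precision
        rr_scores.append(1.0 / (labels.index(1) + 1))     # reciprocal rank
    n = len(ap_scores)
    return (sum(ap_scores) / n, sum(rr_scores) / n) if n else (0.0, 0.0)
```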
We first evaluate the impact of features at each level, and show results in Table 1. Fw, Fp, and Fs perform best among all features, which makes sense, as they can capture lexical features. Fr and Fty perform not very good, but make sense, as the training data (i.e. SimpleQuestions) are based on Freebase instead of Wikipedia. Interestingly, we find that Fto and Fd can achieve comparable results as well. We think the reason is that, their training data come from Wikipedia, which fit the WikiQA task very well. We evaluate the quality of DocChat on WikiQA, and show results in Table 2. The first four rows in Table 2 represent four baseline methods, including: (1) Yih et al. (2013), which makes use of rich lexical semantic features; (2) Yang et al. (2015), which uses a bi-gram CNN model with average pooling; (3) Miao et al. (2015), which uses an enriched LSTM with a latent stochastic attention mechanism to model similarity between Q-R pairs; and (4) Yin et al. (2015), which adds the attention mechanism to the CNN architecture. Table 2 shows that, without using WikiQA’s training set (only development set for ranking weights), DocChat can achieve comparable per522 Methods MAP MRR CNNW ikiQA 0.6575 0.7534 CNNQASent 0.6951 0.7633 DocChat 0.6896 0.7688 Table 3: Evaluation of AS on QASent. formance with state-of-the-art baselines. Furthermore, by combining the CNN model proposed by Yang et al. (2015) and trained on WikiQA training set, we achieve the best result on both metrics. Compared to previous methods, we think DocChat has the following two advantages: First, our feature models depending on existing resources are readily available (such as Q-Q pairs, Q-A pairs, ‘sentence-next sentence’ pairs, and etc.), instead of requiring manually annotated data (such as WikiQA and QASent). Training of the response ranking model does need labeled data, but the size demanded is acceptable. Second, as the training data used in our approach come from open domain resources, we can expect a high adaptation capability and comparable results on other WikiQAlike tasks, as our models are task-independent. To verify the second advantage, we evaluate DocChat on another answer selection data set, QASent (Wang et al., 2007), and list results in Table 3. CNNW ikiQA and CNNQASent refer to the results of Yang et al. (2015)’s method, where the CNN models are trained on WikiQA’s training set and QASent’s training set respectively. All these three methods train feature weights using QASent’s development set. Table 3 tells, DocChat outperforms CNNW ikiQA in terms of MAP and MRR, and achieves comparable results compared to CNNQASent. The comparisons results show a good adaptation capability of DocChat. Table 4 evaluates the contributions of features at different levels of granularity. To highlight the differences, we report the percent deviation by removing different features at the same level from DocChat. From Table 4 we can see that, 1) Each feature group is indispensable to DocChat; 2) Features at sentence-level are most important than other feature groups; 3) Compared to results in Table 1, combining all features can significantly promote the performance. 7.1.3 Evaluation of Answer Triggering (AT) In both QA and chatbot, response triggering is important. Similar to Yang et al. (2015), we also evaluate answer triggering using Precision, Recall, and F1 score as metrics. 
We use the WikiQA deModels MAP Change MRR Change DocChat 68.25% 70.73% DocChat - Fw 66.06% -2.19 67.99% -2.74 DocChat - Fp 66.80% -1.45 68.66% -2.07 DocChat - Fs 65.49% -2.76 67.27% -3.46 DocChat - Fd 68.02% -0.23 69.79% -0.94 DocChat - Fr 67.00% -1.25 69.07% -1.66 DocChat - Fty 67.09% -1.16 69.28% -1.45 DocChat - Fto 66.85% -1.40 68.96% -1.77 Table 4: Impacts of different feature groups. Methods Precision Recall F1 Yang et al. (2015) 28.34 35.80 31.64 DocChat 28.95 44.44 35.06 Table 5: Evaluation of AT on WikiQA. velopment set to tune the scaling factor α and trigger threshold τ that are described in Section 5, where α is set to 0.9 and τ is set to 0.5. Table 5 shows the evaluation results compare to Yang et al. (2015). We think the improvements come from the fact that our response ranking model are more discriminative, as more semantic-level features are leveraged. 7.2 Evaluation on Chatbot (Chinese) XiaoIce is a famous Chinese chatbot engine, which can be found in many platforms including WeChat official accounts (like business pages on Facebook Messenger). The documents that each official account maintains and post to their followers can be easily obtained from the Web. Meanwhile, a WeChat official account can choose to authorize XiaoIce to respond to its followers’ utterances. We design an interesting evaluation below to compare DocChat with XiaoIce, based on the publicly available documents. 7.2.1 Experiment Setup For ranking features, 17M ‘question-related questions’ pairs crawled from Baidu Zhidao are used to train word alignments for hW2W ; sentences from Chinese Wikipedia are used to train word embeddings for hW2V and a topic model for hUTM; the same bilingual phrase table described in last experiment is also used to extract a Chinese paraphrase table for hPP which use Chinese as the source language; 5M ‘question-answer’ pairs crawled from Baidu Zhidao are used for hPT , hSCR and hDM; 0.5M ‘sentence-next sentence’ pairs from Chinese Wikipedia are used for hSDR; 1.3M ‘sentence-topic pairs’ crawled from Chinese Wikipedia are used to train topic−CNN for 523 Utterance Response \•®{¤oº (Do you know the history of Beijing?) [XiaoIce Response]: ·{¤‘ÆØÐ" (I am not good at history class) [DocChat Response]:®{¤aȧŒ±J ˆ3000cc" (Beijing is a historical city that can be traced back to 3,000 years ago.) Table 6: XiaoIce response is more colloquial, as it comes from Q-R pairs; while DocChat response is more formal, as it comes from documents. hSTM. As there is no knowledge base based labeled data for Chinese, we ignore relation-level feature hRE and type-level feature hTE. For ranking weights, we generate 90,321 ⟨Q, C⟩ pairs based on Baidu Zhidao Q-A pairs by the automatic method described in Section 4.8. This data set is used to train the learning to rank model feature weights {λk} by SGD. For documents, we randomly select 10 WeChat official accounts, and index their documents separately. The average number of documents is 600. Human annotators are asked to freely issue 100 queries to each official account to get XiaoIce response. Thus, we obtain 100 ⟨query, XiaoIce response⟩pairs for each official account. We also send the same 100 queries of each official account to DocChat based on official account’s corresponding document index, and obtain another 100 ⟨query, DocChat response⟩pairs. Given these 1,000 ⟨query, XiaoIce response, DocChat response⟩triples, we let human annotators do a side-by-side evaluation, by asking them which response is better for each query. 
Note that, the source of each response is masked during evaluation procedure. Table 6 gives an example. 7.2.2 DocChat v.s. XiaoIce Table 7 shows the results. Better (or Worse) denotes a DocChat response is better (or worse) than a XiaoIce response, Tie denotes a DocChat response and a XiaoIce response are equally good or bad. From Table 7 we observe that: (1) 156 DocChat responses (58+47+51) out of 1,000 queries are triggered. The trigger rate of DocChat is 15.6%. We check un-triggered queries, and find most of them are chitchat, such as ”hi”, ”hello”, ”who are you?”. (2) Better cases are more than worse cases. Most queries in better cases are nonchitchat ones, and their contents are highly related to the domain of their corresponding WeChat official accounts. (3) Our proposed method is a perfect complement for chitchat engines on inBetter Worse Tie Compare to XiaoIce 58 47 51 Table 7: Chatbot side-by-side evaluation. formative utterances. The reasons for bad cases are two-fold: First, a DocChat response overlaps with a query, but cannot actually response it. For this issue, we need to refine the capability of our response ranking model on measuring causality relationships. Second, we wrongly send a chitchat query to DocChat, as currently, we only use a white list of chitchat queries for chitchat/non-chitchat classification (Section 5). 8 Conclusion This paper presents a response retrieval method for chatbot engines based on unstructured documents. We evaluate our method on both question answering and chatbot scenarios, and obtain promising results. We leave better triggering component and multiple rounds of conversation handling to be addressed in our future work. Acknowledgments This paper is supported by Beijing Advanced Innovation Center for Imaging Technology (No.BAICIT-2016001), the National Natural Science Foundation of China (Grant Nos. 61170189, 61370126), National High Technology Research and Development Program of China under grant (No.2015AA016004), the Fund of the State Key Laboratory of Software Development Environment (No.SKLSDE-2015ZX-16). References Colin Bannard and Chris Callison-Burch. 2005. Paraphrasing with bilingual parallel corpora. In Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL), pages 597–604. Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075. Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational linguistics, 19(2):263–311. Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2013. Paraphrase-driven learning for open question answering. In Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL). 524 Michael Heilman and Noah A Smith. 2010. Tree edit models for recognizing textual entailments, paraphrases, and answers to questions. In Proceedings of Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 1011–1019. Zongcheng Ji, Zhengdong Lu, and Hang Li. 2014. An information retrieval approach to short text conversation. arXiv preprint arXiv:1408.6988. K Sparck Jones, Steve Walker, and Stephen E. Robertson. 2000. A probabilistic model of information retrieval: development and comparative experiments: Part 2. Information Processing & Management, 36(6):809–840. 
Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. Proceedings of Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACLHLT), 1:48–54. Diane Litman, Satinder Singh, Michael Kearns, and Marilyn Walker. 2000. Njfun: a reinforcement learning spoken dialogue system. In Proceedings of the 2000 ANLP/NAACL Workshop on Conversational systems-Volume 3, pages 17–20. Yishu Miao, Lei Yu, and Phil Blunsom. 2015. Neural variational inference for text processing. arXiv preprint arXiv:1511.06038. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems (NIPS), pages 3111–3119. Ramesh Nallapati. 2004. Discriminative models for information retrieval. In Proceedings of the international ACM SIGIR conference on Research and development in information retrieval, pages 64–71. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Alan Ritter, Colin Cherry, and William B Dolan. 2011. Data-driven response generation in social media. In Proceedings of the conference on Empirical Methods in Natural Language Processing (EMNLP), pages 583–593. Jost Schatzmann, Karl Weilhammer, Matt Stuttle, and Steve Young. 2006. A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies. The knowledge engineering review, 21(02):97–126. Aliaksei Severyn and Alessandro Moschitti. 2015. Learning to rank short text pairs with convolutional deep neural networks. In Proceedings of ACM SIGIR Conference on Research and Development in Information Retrieval, pages 373–382. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL), pages 1577–1586. Mengqiu Wang and Christopher D Manning. 2010. Probabilistic tree-edit models with structured latent variables for textual entailment and question answering. In Proceedings of the International Conference on Computational Linguistics (COLING), pages 1164–1172. Di Wang and Eric Nyberg. 2015. A long shortterm memory model for answer sentence selection in question answering. In Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL), pages 707–712. Mengqiu Wang, Noah A Smith, and Teruko Mitamura. 2007. What is the jeopardy model? a quasisynchronous grammar for qa. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), volume 7, pages 22– 32. Jason D Williams and Steve Young. 2007. Partially observable markov decision processes for spoken dialog systems. Computer Speech & Language, 21(2):393–422. Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. Wikiqa: A challenge dataset for open-domain question answering. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2013–2018. Wen-tau Yih, Ming-Wei Chang, Christopher Meek, and Andrzej Pastusiak. 2013. Question answering using enhanced lexical semantic models. In Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL), pages 1744–1753. Wen-tau Yih, Xiaodong He, and Christopher Meek. 2014. Semantic parsing for single-relation question answering. 
In Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL), pages 643–648. Wenpeng Yin, Hinrich Sch¨utze, Bing Xiang, and Bowen Zhou. 2015. Abcnn: Attention-based convolutional neural network for modeling sentence pairs. arXiv preprint arXiv:1512.05193. Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. Deep learning for answer sentence selection. NIPS Deep Learning and Representation Learning Workshop. Jinhui Yuan, Fei Gao, Qirong Ho, Wei Dai, Jinliang Wei, Xun Zheng, Eric Po Xing, Tie-Yan Liu, and Wei-Ying Ma. 2015. Lightlda: Big topic models on modest computer clusters. In Proceedings of the Annual International Conference on World Wide Web (WWW), pages 1351–1361. 525
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 44–53, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Unsupervised Person Slot Filling based on Graph Mining Dian Yu Heng Ji Computer Science Department Rensselaer Polytechnic Institute Troy, NY 12180, USA {yud2,jih}@rpi.edu Abstract Slot filling aims to extract the values (slot fillers) of specific attributes (slots types) for a given entity (query) from a largescale corpus. Slot filling remains very challenging over the past seven years. We propose a simple yet effective unsupervised approach to extract slot fillers based on the following two observations: (1) a trigger is usually a salient node relative to the query and filler nodes in the dependency graph of a context sentence; (2) a relation is likely to exist if the query and candidate filler nodes are strongly connected by a relation-specific trigger. Thus we design a graph-based algorithm to automatically identify triggers based on personalized PageRank and Affinity Propagation for a given (query, filler) pair and then label the slot type based on the identified triggers. Our approach achieves 11.6%-25% higher F-score over state-ofthe-art English slot filling methods. Our experiments also demonstrate that as long as a few trigger seeds, name tagging and dependency parsing capabilities exist, this approach can be quickly adapted to any language and new slot types. Our promising results on Chinese slot filling can serve as a new benchmark. 1 Introduction The goal of the Text Analysis Conference Knowledge Base Population (TAC-KBP) Slot Filling (SF) task (McNamee and Dang, 2009; Ji et al., 2010; Ji et al., 2011; Surdeanu and Ji, 2014) is to extract the values (fillers) of specific attributes (slot types) for a given entity (query) from a largescale corpus and provide justification sentences to support these slot fillers. KBP defines 25 slot types for persons (e.g., spouse) and 16 slots for organizations (e.g., founder). For example, given a person query “Dominich Dunne” and slot type spouse, a SF system may extract a slot filler “Ellen Griffin” and its justification sentence E1 as shown in Figure 1. E1: Ellen Griffin Dunne, from whom he was divorced in 1965, died in 1997. Ellen Griffin Dunne whom in he was 1965 Dominick Dunne in 1997 from case nmod nsubjpass auxpass nmod case coreference case Person Person | Query Year Year died divorced Figure 1: Extended dependency tree for E1. Slot filling remains a very challenging task. The two most successful state-of-the-art techniques are as follows. (1) Supervised classification. Considering any pair of query and candidate slot filler as an instance, these approaches train a classifier from manually labeled data through active learning (Angeli et al., 2014b) or noisy labeled data through distant supervision (Angeli et al., 2014a; Surdeanu et al., 2010) to predict the existence of a specific relation between them. (2) Pattern matching. These approaches extract and generalize lexical and syntactic patterns automatically or semi-automatically (Sun et al., 2011; Li et al., 2012; Yu et al., 2013; Hong et al., 2014). They usually suffer from low recall due to numerous different ways to express a certain relation type (Surdeanu and Ji, 2014). For example, none of the top-ranked patterns (Li et al., 2012) based on dependency paths in Table 1 can capture the spouse slot in E1. 
44 Query poss−1 Slot Filler Query poss−1 [wife-widow-husband] appos Slot Filler Query nsubj−1 married dobj Slot Filler Query appos wife prep of Slot Filler Query nsubjpass−1 survived agent Slot Filler Table 1: Dependency patterns for slot spouse. Both of the previous methods have poor portability to a new language or a new slot type. Furthermore, both methods focus on the flat relation representation between the query and the candidate slot filler, while ignoring the global graph structure among them and other facts in the context. When multiple facts about a person entity are presented in a sentence, the author (e.g., a news reporter or a discussion forum poster) often uses explicit trigger words or phrases to indicate their relations with the entity. As a result, these interdependent facts and query entities are strongly connected via syntactic or semantic relations. Many slot types, especially when the queries are person entities, are indicated by such triggers. We call these slots trigger-driven slots. In this paper, we define a trigger as the smallest extent of a text which most clearly indicates a slot type. For example, in E1, “divorced” is a trigger for spouse while “died” is a trigger for death-related slots. Considering the limitations of previous flat representations for the relations between a query (Q) and a candidate slot filler (F), we focus on analyzing the whole dependency tree structure that connects Q, F and other semantically related words or phrases in each context sentence. Our main observation is that there often exists a trigger word (T) which plays an important role in connecting Q and F in the dependency tree for trigger-driven slots. From the extended dependency tree shown in Figure 1, we can clearly see that “divorced” is most strongly connected to the query mention (“he”) and the slot filler (“Ellen Griffin Dunne”). Therefore we can consider it as a trigger word which explicitly indicates a particular slot type. Based on these observations, we propose a novel and effective unsupervised graph mining approach for person slot filling by deeply exploring the structures of dependency trees. It consists of the following three steps: • Step 1 - Candidate Relation Identification: Construct an extended dependency tree for each sentence including any mention referring to the query entity. Identify candidate slot fillers based on slot type constraints (e.g., the spouse fillers are limited to person entities) (Section 2). • Step 2 - Trigger Identification: Measure the importance of each node in the extended dependency tree relative to Q and F, rank them and select the most important ones as the trigger set (Section 3). • Step 3 - Slot Typing: For any given new slot type, automatically expand a few trigger seeds using the Paraphrase Database (Ganitkevitch et al., 2013). Then we use the expanded trigger set to label the slot types of identified triggers (Section 4). This framework only requires name tagging and dependency parsing as pre-processing, and a few trigger seeds as input, and thus it can be easily adapted to a new language or a new slot type. Experiments on English and Chinese demonstrate that our approach dramatically advances state-ofthe-art results for both pre-defined KBP slot types and new slot types. 2 Candidate Relation Identification We first present how to build an extended dependency graph for each evidence sentence (Section 2.1) and generate query and filler candidate mentions (Section 2.2). 
2.1 Extended Dependency Tree Construction Given a sentence containing N words, we construct an undirected graph G = (V, E), where V = {v1, . . . , vN} represents the words in a sentence, E is an edge set, associated with each edge eij representing a dependency relation between vi and vj. We first apply a dependency parser to generate basic uncollapsed dependencies by ignoring the direction of edges. Figure 1 shows the dependency tree built from the example sentence. In addition, we annotate an entity, time or value mention node with its type. For example, in Figure 1, “Ellen Griffin Dunne” is annotated as a person, and “1997” is annotated as a year. Finally we perform co-reference resolution, which introduces implicit links between nodes that refer to the same entity. We replace any nominal or pronominal entity mention with its coreferential name mention. For example, “he” is replaced by “Dominick Dunne” in Figure 1. Formally, an extended dependency tree is an annotated tree of entity mentions, phrases and their links. 45 2.2 Query Mention and Filler Candidate Identification Given a query q and a set of relevant documents, we construct a dependency tree for each sentence. We identify a person entity e as a query mention if e matches the last name of q or e shares two or more tokens with q. For example, “he/Dominick Dunne” in Figure 1 is identified as a mention referring to the query Dominick Dunne. For each sentence which contains at least one query mention, we regard all other entities, values and time expressions as candidate fillers and generate a set of entity pairs (q, f), where q is a query mention, and f is a candidate filler. In Example E1, we can extract three entity pairs (i.e., {Dominick Dunne} × {Ellen Griffin Dunne, 1997, 1965}). For each entity pair, we represent the query mention and the filler candidate as two sets of nodes Q and F respectively, where Q, F ⊆V . 3 Trigger Identification In this section, we proceed to introduce an unsupervised graph-based method to identify triggers for each query and candidate filler pair. We rank all trigger candidates (Section 3.1) and then keep the top ones as the trigger set (Section 3.2). 3.1 Trigger Candidate Ranking As we have discussed in Section 1, we can consider trigger identification problem as finding the important nodes relative to Q and F in G. Algorithms such as Pagerank (Page et al., 1999) are designed to compute the global importance of each node relative to all other nodes in a graph. By redefining the importance according to our preference toward F and Q, we can extend PageRank to generate relative importance scores. We use the random surfer model (Page et al., 1999) to explain our motivation. Suppose a random surfer keeps visiting adjacent nodes in G at random. The expected percentage of surfers visiting each node converges to the PageRank score. We extend PageRank by introducing a “back probability” β to determine how often surfers jump back to the preferred nodes (i.e., Q or F) so that the converged score can be used to estimate the relative probability of visiting these preferred nodes. Given G and a set of preferred nodes R where R ⊆V , we denote the relative importance for all v ∈V with respect to R as I(v | R), following the work of White and Smyth (2003). For a node vk, we denote N(k) as the set of neighbors of vk. We use π(k), the k-th component of the vector π, to denote the stationary distribution of vk where 1 ≤k ≤|V |. 
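Before the ranking equation is completed below, here is a small sketch of Step 1 under our own input assumptions: the sentence's dependency edges are loaded into an undirected networkx graph, and every query-mention node set is paired with every other entity/value/time mention as a candidate (Q, F) relation.

```python
from itertools import product
from typing import Iterable, List, Tuple
import networkx as nx

def build_graph(num_tokens: int,
                dep_edges: Iterable[Tuple[int, int]]) -> nx.Graph:
    """Undirected token graph; dependency edge directions are ignored."""
    g = nx.Graph()
    g.add_nodes_from(range(num_tokens))
    g.add_edges_from(dep_edges)
    return g

def candidate_pairs(query_mentions: List[List[int]],
                    other_mentions: List[List[int]]):
    """All (Q, F) pairs; each mention is a list of token indices (Q, F subsets of V)."""
    return list(product(query_mentions, other_mentions))
```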
We define a preference vector pR = {p1, ..., p|V |} such that the probabilities sum to 1, and pk denotes the relative importance attached to vk. pk is set to 1/|R| for vk ∈R, otherwise 0. Let A be the matrix corresponding to the graph G where Ajk = 1/|N(k)| and Ajk = 0 otherwise. For a given pR, we can obtain the personalized PageRank equation (Jeh and Widom, 2003): π = (1 −β)Aπ + βpR (1) where β ∈[0, 1] determines how often surfers jump back to the nodes in R. We set β = 0.3 in our experiment. The solution π to Equation 1 is a steady-state importance distribution induced by pR. Based on a theorem of Markov Theory, a solution π with P|V | k=1 π(k) = 1 always exists and is unique (Motwani and Raghavan, 1996). We define relative importance scores based on the personalized ranks described above, i.e., I(v | R) = π(v) after convergence, and we compute the importance scores for all the nodes in V relative to Q and F respectively. A query mention in a sentence is more likely to be involved in multiple relations while a filler is usually associated with only one slot type. Therefore we combine two relative importance scores by assigning a higher priority to I(v | F) as follows. I(v | {Q, F}) = I(v | F) + I(v | F) · I(v | Q) (2) We discard a trigger candidate if it is (or part of) an entity which can only act as a query or a slot filler. We assume a trigger can only be a noun, verb, adjective, adverb or preposition. In addition, verbs, nouns and adjectives are more informative to be triggers. Thus, we remove any trigger candidate v if it has a higher I(v | {Q, F}) than the first top-ranked verb/noun/adjective trigger candidate. For example, we rank the candidate triggers based on the query and slot filler pair (“Dominick Dunne”, “Ellen Griffin Dunne”) as shown in Figure 2. 46 E1: Ellen Griffin Dunne, from whom he was divorced in 1965, died in 1997. Ellen Griffin Dunne whom in he was 1965 Dominick Dunne in 1997 from acl:relcl case nmod nsubjpass auxpass nmod case coreference case Person | Filler Person | Query Date Date 0.128 0.078 0.013 0.006 0.006 died divorced Figure 2: Importance scores of trigger candidates relative to query and filler in E1. 3.2 Trigger Candidate Selection Given Q and F, we can obtain a relative importance score I(v | {Q, F}) for each candidate trigger node v in V as shown in Section 3.1. We denote the set of trigger candidates as T = {t1, · · · , tn} where n ≤|V |. Since a relation can be indicated by a single trigger word, a trigger phrase or even multiple non-adjacent trigger words, it is difficult to set a single threshold even for one slot type. Instead, we aim to automatically classify top ranked candidates into one group (i.e., a trigger set) so that they all have similar higher scores compared to other candidates. Therefore, we define this problem as a clustering task. We mainly consider clustering algorithms which do not require pre-specified number of clusters. We apply the affinity propagation approach to take as input a collection of real-valued similarity scores between pairs of candidate triggers. Realvalued messages are exchanged between candidate triggers until a high-quality set of exemplars (centers of clusters), and corresponding clusters gradually emerges (Frey and Dueck, 2007). There are two kinds of messages exchanged between candidate triggers: one is called responsibility γ(i, j), sent from ti to a candidate exemplar tj; the other is availability α(i, j), sent from the candidate exemplar tj to ti. 
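Stepping back to Equations 1 and 2, the relative-importance scores that feed this clustering step can be obtained with an off-the-shelf personalized PageRank, e.g. networkx (an implementation choice we assume; note that networkx's damping factor corresponds to 1 − β, so β = 0.3 gives a damping factor of 0.7). The affinity-propagation updates that consume these scores are spelled out next.

```python
from typing import Dict, Set
import networkx as nx

def relative_importance(g: nx.Graph, preferred: Set, beta: float = 0.3) -> Dict:
    """I(v | R): personalized PageRank with jump-back probability beta to R."""
    p = {v: (1.0 / len(preferred) if v in preferred else 0.0) for v in g.nodes()}
    return nx.pagerank(g, alpha=1.0 - beta, personalization=p)

def combined_importance(g: nx.Graph, q_nodes: Set, f_nodes: Set) -> Dict:
    """I(v | {Q, F}) = I(v | F) + I(v | F) * I(v | Q)  (Equation 2)."""
    i_f = relative_importance(g, f_nodes)
    i_q = relative_importance(g, q_nodes)
    return {v: i_f[v] + i_f[v] * i_q[v] for v in g.nodes()}
```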
The calculation of each procedure iterates until convergence. To begin with, the availabilities are initialized to zero: α(i, j) = 0. Then the responsibilities are computed using the following rule: γ(i, j) ←s(i, j) − max j′s.t.j′̸=j{α(i, j′) + s(i, j′)} (3) where the similarity score s(i, j) indicates how well tj is suited to be the exemplar for ti. Whereas the above responsibility update lets all candidate exemplars compete for the ownership of a trigger candidate ti, the following availability update gathers evidence from trigger candidates as to whether each candidate exemplar would make a good exemplar: α(i, j) ←min n 0, γ(j, j) + X i′s.t.i′ /∈{i,j} max{0, γ(i′, j)} o (4) Given T, we can generate an n × n affinity matrix M which serves as the input of the affinity propagation. Mij represents the negative squared difference in relative importance score between ti and tj (Equation 5). Mij = −(I(i | {Q, F}) −I(j | {Q, F}))2 (5) We compute the average importance score for all the clusters after convergence and keep the one with the highest average score as the trigger set. For example, given the query and slot filler pair in Figure 3, we obtain trigger candidates T = {died, divorced, from, in, in} and their corresponding relative importance scores. After the above clustering, we obtain three clusters and choose the cluster {divorced} with the highest average relative importance score (0.128) as the trigger set. 0.006 0.078 0.128 0.013 0.006 E1: Ellen Griffin Dunne, from whom he was divorced in 1965, died in 1997. Ellen Griffin Dunne in Dominick Dunne in from Person | Filler Person | Query died divorced Average = 0.006 + 0.013 + 0.006 /3 ≈0.008 Cluster 1 Cluster 3 Cluster 2 Figure 3: Trigger candidate filtering for E1. 4 Slot Type Labeling In this section, we will introduce how to label the slot type for an identified relation tuple (Q, T, F). The simplest solution is to match T against existing trigger gazetteers for certain types of slots. For 47 E1: Ellen Griffin Dunne, from whom he was divorced in 1965, died in 1997. Ellen Griffin Dunne Dominick Dunne Person | Filler Person | Query divorced wife husband divorce marry … Trigger Gazetteer for slot spouse { Dominick Dunne|Query, spouse, Ellen Griffin Dunne|Filler } Figure 4: Example of slot type labeling. example, Figure 4 shows how we label the relation as a spouse slot type. In fact, some trigger gazetteers have already been constructed by previous work such as (Yu et al., 2015). However, manual construction of these triggers heavily rely upon labeled training data and high-quality patterns, which would be unavailable for a new language or a new slot type. Inspired by the trigger-based event extraction work (Bronstein et al., 2015), we propose to extract trigger seeds from the slot filling annotation guideline 1 and then expand them by paraphrasing techniques. For each slot type we manually select two trigger seeds from the guideline and then use the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013; Pavlick et al., 2015) to expand these seeds. Specifically, we select top-20 lexical paraphrases based on similarity scores as our new triggers for each slot type. Some examples are shown in Table 2. Seeds Slot Types Expanded Triggers assassinate death kill, die, slay, murder graduate schools PhD, supervisor, diploma sister siblings twin, half-brother, sibling marriage spouse married, spouse, matrimony Table 2: PPDB-based trigger expansion examples. 
5 Filler Validation After we label each relation tuple, we perform the following validation steps to filter noise and remove redundancy. For many slot types, there are some specific constraints on entity types of slot fillers defined in the task specification. For example, employee or member of fillers should be either organizations or geopolitical entities, while family slots (e.g., spouse and children) expect person entities. We apply these constraints to further validate all relation tuples. 1http://www.nist.gov/tac/2015/KBP/ColdStart/guidelines/ TAC KBP 2015 Slot Descriptions V1.0.pdf Moreover, single-value slots can only have a single filler (e.g., date of birth), while listvalue slots can take multiple fillers (e.g., cities of residence). However, we might extract conflicting relation tuples from multiple sentences and sources. For each relation tuple, it can also be extracted from multiple sentences, and thus it may receive multiple relative importance scores. We aim to keep the most reliable relation tuple for a single-value slot. For a single-value slot, suppose we have a collection of relation tuples R which share the same query. Given r ∈R with a set of relative importance scores I = {i1, i2, · · · , in}, we can regard the average score of I as the credibility score of r. The reason is that the higher the relative importance score, the more likely the tuple is to be correct. In our experiments, we use the weighted arithmetic mean as follows so that higher scores can contribute more to the final average: ¯i = Pn k=1 wk · ik Pn k=1 wk (6) where wk denotes the non-negative weight of ik. When we regard the weight wk equal to the score ik, Equation 6 can be simplified as: ¯i = Pn k=1 w2 k Pn k=1 wk (7) We calculate the weighted mean ¯i for each r ∈ R and keep the relation tuple with the highest ¯i. 6 Experiments 6.1 Data and Scoring Metric In order to evaluate the quality of our proposed framework and its portability to a new language, we use TAC-KBP2013 English Slot Filling (ESF), TAC-KBP 2015 English Cold Start Slot Filling (CSSF) and TAC-KBP2015 Chinese Slot Filling (CSF) data sets for which we can compare with the ground truth and state-of-the-art results reported in previous work. The source collection includes news documents, web blogs and discussion forum posts. In ESF there are 50 person queries and on average 20 relevant documents per query; while in CSF there are 51 person queries, and on average 5 relevant documents per query. 48 Slot Type Our Approach Roth’13 Angeli’14 siblings 62.9 48.0 40 other family 42.4 11.8 0 spouse 58.7 40.0 66 children 66.7 27.3 27 parents 43.1 47.8 39 schools attended 81.4 30.2 60 date of birth 87.0 60.0 92 date of death 73.2 3.2 48 state of birth 55.6 30.8 17 state of death 88.2 53.3 0 city of birth 70.0 64.0 25 city of death 72.7 73.7 30 country of birth 75.0 0.0 0 country of death 70.0 46.2 18 states of residence 57.1 25.6 12 cities of res. 61.4 38.8 38 countries of res. 45.7 20.0 41 employee of 43.8 18.5 38 Overall 57.4 32.3 – Table 3: English Slot Filling F1 (%) (KBP2013 SF data set). We only test our method on 18 trigger-driven person slot types shown in Table 3. Some other slot types (e.g., age, origin, religion and title) do not rely on lexical triggers in most cases; instead the query mention and the filler are usually adjacent or seperated by a comma. 
In addition, we do not deal with the two remaining triggerdriven person slot types (i.e., cause of death and charges) since these slots often expect other types of concepts (e.g., a disease or a crime phrase). We use the official TAC-KBP slot filling evaluation scoring metrics: Precision (P), Recall (R) and F-score (F1) (Ji et al., 2010) to evaluate our results. 6.2 English Slot Filling We apply Stanford CoreNLP (Manning et al., 2014) for English part-of-speech (POS) tagging, name tagging, time expression extraction, dependency parsing and coreference resolution. In Table 3 we compare our approach with two stateof-the-art English slot filling methods: a distant supervision method (Roth et al., 2013) and a hybrid method that combines distant and partial supervision (Angeli et al., 2014b). Our method outperforms both methods dramatically. KBP2015 English cold start slot filling is a task which combines entity mention extraction and slot filing (Surdeanu and Ji, 2014). Based on the released evaluation queries from KBP2015 Cold Start Slot Filling, our approach achieves 39.2% overall Fscore on 18 person trigger-driven slot types, which Slot Type Our Approach Angeli’15 siblings 48.0 26.1 other family 0.0 33.3 spouse 14.3 15.4 children 72.8 0.0 parents 25.0 14.3 schools attended 63.6 42.1 date of birth 0.0 80.0 date of death 44.0 0.0 state of birth 0.0 33.3 state of death 0.0 15.4 city of birth 0.0 85.7 city of death 0.0 0.0 country of birth 0.0 66.7 country of death 100.0 0.0 states of residence 0.0 0.0 cities of res. 0.0 50.0 countries of res. 0.0 0.0 employee of 60.0 26.7 Overall 39.2 27.6 Table 4: English Cold Start Slot Filling F1 (%) (KBP2015 CSSF data set). is significantly better than state-of-the-art (Angeli et al., 2015) on the same set of news documents (Table 4). Compared to the previous work, our method discards a trigger-driven relation tuple if it is not supported by triggers. For example, “Poland” is mistakenly extracted as the country of residence of “Mandelbrot” by distant supervision (Roth et al., 2013) from the following sentence: A professor emeritus at Yale University, Mandelbrot was born in Poland but as a child moved with his family to France where he was educated. maybe because the relation tuple (Mandelbrot, live in, Poland) indeed exists in external knowledge bases. Given the same entity pair, our method identifies “born” as the trigger word and labels the slot type as country of birth. When there are several triggers indicating different slot types in a sentence, our approach performs better in associating each trigger with the filler it dominates by analyzing the whole dependency tree. For example, given a sentence: Haig is survived by his wife of 60 years, Patricia; his children Alexander, Brian and Barbara; eight grandchildren; and his brother, the Rev. Francis R. Haig. (Haig, sibling, Barbara) is the only relation tuple extracted from the above sentence by the previous method. Given the entity pair (Haig, Barbara), the relative importance score of “children” (0.1) is higher than the score of “brother” (0.003), 49 and “children” is kept as the only trigger candidate after clustering. Therefore, we extract the tuple (Haig, children, Barbara) instead. In addition, we successfully identify the missing fillers for other slot types: spouse (Patricia), children (Alexander, Brian and Barbara) and siblings (Francis R. Haig) by identifying their corresponding triggers. 
In addition, flat relation representations fail to extract the correct relation (i.e., alternate names) between “Dandy Don” and “Meredith” since “brother” is close to both of them in the following sentence: In high school and at Southern Methodist University, where, already known as Dandy Don (a nickname bestowed on him by his brother) , Meredith became an all-American. 6.3 Adapting to New Slot Types Our framework can also be easily adapted to new slot types. We evaluate it on three new person list-value slot types: friends, colleagues and collaborators. We use “friend” as the slot-specific trigger for the slot friends and “colleague” for the slot colleagues. “collaborate”, “cooperate” and “partner” are used to type the slot collaborators. We manually annotate ground truth for evaluation. It is difficult to find all the correct fillers for a given query from millions of documents. Therefore, we only calculate precision. Experiments show we can achieve 56.3% for friends, 100% for colleagues and 60% for collaborators (examples shown in Table 5). 6.4 Impact of Trigger Mining In Section 3.2, we keep top-ranked trigger candidates based on clustering rather than threshold tuning. We explore a range of thresholds for comparison, as shown in Figure 5. Our approach achieves 57.4% F-score, which is comparable to the highest F-score 58.1% obtained by threshold tuning. We also measure the impact of the size of the trigger gazetteer. We already outperform state-ofthe-art by using PPDB to expand triggers mined from guidelines as shown in Table 6. As the size of the trigger gazetteer increases, our method (marked with a ⋆) achieves better performance. 6.5 Chinese Slot Filling As long as we have the following resources: (1) a POS tagger, (2) a name tagger, (3) a dependen1 1 2 3 4 5 6 40 42 44 46 48 50 52 54 56 58 F-score (%) Top N candidates as triggers Threshold Tuning Affinity Propagation Figure 5: The effect of the number of trigger candidates on ESF. Method Size F1 (%) State-of-the-art (Roth et al., 2013) – 32.3 Guideline seeds⋆ 20 27.3 Guideline seeds + PPDB expansion⋆ 220 38.9 Manually Constructed Trigger Gazetteers⋆ 7,463 57.4 Table 6: The effect of trigger gazetteers on ESF (size: the number of triggers). cy parser and (4) slot-specific trigger gazetteers, we can apply the framework to a new language. Coreference resolution is optional. We demonstrate the portability of our framework to Chinese since all the resources mentioned above are available. We apply Stanford CoreNLP (Manning et al., 2014) for Chinese POS tagging, name tagging (Wang et al., 2013) and dependency parsing (Levy and Manning, 2003). To explore the impact of the quality of annotation resources, we also use a Chinese language analysis tool: Language Technology Platform (LTP) (Che et al., 2010). We use the full set of Chinese trigger gazetteers published by Yu et al. (2015). Experimental results (Table 7) demonstrate that our approach can serve as a new and promising benchmark. As far as we know, there are no results available for comparison. However, the performance of Chinese SF is heavily influenced by the relatively low performance of name tagging since our method returns an empty result if it fails to find any query metnion. About 20% and 16% queries cannot be recognized by CoreNLP and LTP respectively. One reason is that many Chinese names are also common words. For example, a buddhist monk’s name “觉醒”(wake) is identified as a verb rather than a person entity. 
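As a rough illustration of the clustering-based trigger selection in Section 6.4, the sketch below clusters trigger candidates with Affinity Propagation and keeps the cluster with the highest mean score. The choice to cluster on the one-dimensional relative importance scores, and the rule for picking the winning cluster, are assumptions made for illustration; the exact output also depends on whether the clustering converges on such small inputs.

```python
# Hedged sketch of trigger-candidate selection via clustering (Section 6.4).
# Assumption: candidates are clustered on their relative importance scores,
# and only the cluster whose members have the highest mean score is kept.
import numpy as np
from sklearn.cluster import AffinityPropagation

def select_triggers(candidates):
    """candidates: dict mapping candidate word -> relative importance score."""
    words = list(candidates)
    scores = np.array([[candidates[w]] for w in words])  # shape (n, 1)
    labels = AffinityPropagation(random_state=0).fit_predict(scores)
    # keep the cluster with the highest mean relative importance
    best = max(set(labels), key=lambda c: scores[labels == c].mean())
    return [w for w, c in zip(words, labels) if c == best]

# Example usage with hypothetical scores for one sentence.
scores = {"children": 0.1, "survived": 0.02, "brother": 0.003, "wife": 0.004}
print(select_triggers(scores))
```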
50 Evidence Sentence Slot Type Query Extracted Fillers Many of his subjects were friends from his previous life , such as Elizabeth Taylor and Gloria Vanderbilt . friends Dominick Dunne Gloria Vanderbilt; Elizabeth Taylor Toby Keith hit an emotional note with a performance of “Cryin’ For Me (Wayman’s Song),” dedicated to his late friend, jazz artist and former basketball star Wayman Tisdale, who died last May. friends Wayman Tisdale Toby Keith “I think all of her writing came from her heart,” Michael Glaser, a longtime colleague at St. Mary’s and former Maryland poet laureate, said last week. colleagues Lucille Clifton Michael Glaser Cunningham has collaborated on two books: “Changes: Notes on Choreography,” with Frances Starr, and “The Dancer and the Dance,” with Jacqueline Lesschaeve. collaborators Merce Cunningham Jacqueline Lesschaeve Table 5: Examples for new slot types. A dependency parser is indispensable to produce reliable rankings of trigger candidates. Unfortunately, a high-quality parser for a new language is often not available because of languagespecific features. For example, in Chinese a single sentence about a person’s biography often contains more than five co-ordinated clauses, each of which includes a trigger. Therefore a dependency parser adapted from English often mistakenly identifies one of the triggers as a main predicate of the sentence. In addition, Chinese is a very concise language. For example, a “[Person Name][Organization Suffix]”structure can indicate various different types of relations between the person name and the organization: “杨明牙医诊所”(Yang Ming Clinic) indicates ownership, “邵逸夫图书馆”(Shao Yifu Library) indicates sponsorship, “丰子恺研 究中心”(Feng Zikai Research Center) indicates Slot Type CoreNLP-based LTP-based siblings 40.0 57.1 other family 40.0 0.0 spouse 40.0 48.0 children 19.0 21.4 parents 0.0 25.0 schools attended 11.1 17.1 date of birth 42.4 0.0 date of death 48.5 0.0 state of birth 38.1 52.2 state of death 55.6 70.0 city of birth 28.6 26.7 city of death 33.3 42.9 country of birth 11.8 11.8 country of death 0.0 0.0 states of residence 30.8 29.6 cities of residence 27.3 34.8 country of residence 6.5 0.0 employee of 31.0 31.2 Overall 29.6 28.3 Table 7: Chinese Slot Filling F1 (%) (KBP2015 CSF data set). research theme, and “罗京治丧委员会”(Luojing Commemoration Committee) indicates commemoration. None of them includes an explicit trigger nor indicates employment relation. It requires more fine-grained dependency relation types to distinguish them. Finally, compared to English, Chinese tends to have more variants for some types of triggers (e.g., there are at least 31 different titles for “wife”in Chinese). Some of them are implicit and require shallow inference. For example, “投奔”(to seek shelter or asylum) indicates a residence relation in most cases. 7 Related Work Besides the methods based on distant supervision (e.g., (Surdeanu et al., 2010; Roth et al., 2013; Angeli et al., 2014b)) discussed in Section 6.2, pattern-based methods have also been proven to be effective in SF in the past years (Sun et al., 2011; Li et al., 2012; Yu et al., 2013). Dependency-based patterns achieve better performance since they can capture long-distance relations. Most of these approaches assume that a relation exists between Q and F if there is a dependency path connecting Q and F and all the words on the path are equally regarded as trigger candidates. We explore the complete graph structure of a sentence rather than chains/subgraphs as in previous work. 
Our previous research focused on identifying the relation between F and T by extracting filler candidates from the identified scope of a trigger (e.g., (Yu et al., 2015)). We found that each slot-specific trigger has its own scope, and corresponding fillers seldom appear outside its scope. We did not compare with results from this previous approach which did not consider redundancy removal required in the official evaluations. 51 Soderland et al. (2013) built their SF system based on Open Information Extraction (IE) technology. Our method achieves much higher recall since dependency trees can capture the relations among query, slot filler and trigger in more complicated long sentences. In addition, our triggers are automatically labeled so that we do not need to design manual rules to classify relation phrases as in Open IE. 8 Conclusions and Future Work In this paper, we demonstrate the importance of deep mining of dependency structures for slot filling. Our approach outperforms state-of-the-art and can be rapidly portable to a new language or a new slot type, as long as there exists capabilities of name tagging, POS tagging, dependency parsing and trigger gazetteers. In the future we aim to label slot types based on contextual information as well as sentence structures instead of trigger gazetteers only. There are two primary reasons. First, a trigger can serve for multiple slot types. For example, slot children and its inverse slot parents share a subset of triggers. Second, a trigger word can have multiple different meanings. For example, a sibling trigger word “sister” can also represent a female member of a religious community. We attempt to combine multi-prototype approaches (e.g., (Reisinger and Mooney, 2010)) to better disambiguate senses of trigger words. Besides considering the cross-sentence conflicts, we also want to investigate the within-sentence conflicts caused by the competition of triggers. A trigger identified by our approach is the most important node in the dependency tree relative to the given entity pair. However, this trigger might be more important to another entity pair, which shares the same filler, in the same sentence. A promising solution is to rank all the entities in the sentence based on their importance relative to the identified trigger and the filler candidate. Acknowledgement We would like to thank Chris Callison-Burch for providing English and Chinese paraphrase resources. This work was supported by the DARPA LORELEI Program No. HR0011-15-C0115, DARPA DEFT Program No. FA8750-132-0041, ARL NS-CTA No. W911NF-09-2-0053, NSF CAREER Award IIS-1523198. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on. References G. Angeli, S. Gupta, M. Jose, C. Manning, C. R´e, J. Tibshirani, J. Wu, S. Wu, and C. Zhang. 2014a. Stanford’s 2014 slot filling systems. In Proc. Text Analysis Conference (TAC 2014). G. Angeli, J. Tibshirani, J. Wu, and C. Manning. 2014b. Combining distant and partial supervision for relation extraction. In Proc. Empirical Methods on Natural Language Processing (EMNLP 2014). G. Angeli, V. Zhong, D. Chen, J. Bauer, A. Chang, V. Spitkovsky, and C. Manning. 2015. Bootstrapped self training for knowledge base population. In Proc. 
Text Analysis Conference (TAC 2015). O. Bronstein, I. Dagan, Q. Li, H. Ji, and A. Frank. 2015. Seed-based event trigger labeling: How far can event descriptions get us? In Proc. Association for Computational Linguistics (ACL 2015). W. Che, Z. Li, and T. Liu. 2010. Ltp: A chinese language technology platform. In Proc. Computational Linguistics (COLING 2010). B. Frey and D. Dueck. 2007. Clustering by passing messages between data points. science. J. Ganitkevitch, B. Van Durme, and C. CallisonBurch. 2013. PPDB: The paraphrase database. In Proc. North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL-HLT 2013). Y. Hong, X. Wang, Y. Chen, J. Wang, T. Zhang, J. Zheng, D. Yu, and Q. Li. 2014. Rpi blender tac-kbp2014 knowledge base population system. In Proc. Text Analysis Conference (TAC 2014). G. Jeh and J. Widom. 2003. Scaling personalized web search. In Proc. World Wide Web (WWW 2003). H. Ji, R. Grishman, H. Dang, K. Griffitt, and Joe Ellis. 2010. An overview of the tac2010 knowledge base population track. In Proc. Text Analysis Conference (TAC 2010). H. Ji, R. Grishman, and H. Dang. 2011. An overview of the tac2011 knowledge base population track. In Proc. Text Analysis Conference (TAC 2011). R. Levy and C. Manning. 2003. Is it harder to parse chinese, or the chinese treebank? In Proc. Association for Computational Linguistics (ACL 2003). 52 Y. Li, S. Chen, Z. Zhou, J. Yin, H. Luo, L. Hong, W. Xu, G. Chen, and J. Guo. 2012. Pris at tac2012 kbp track. In Proc. Text Analysis Conference (TAC 2012). C. Manning, M. Surdeanu, J. Bauer, J. Finkel, S. Bethard, and D. McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proc. Association for Computational Linguistics (ACL 2014). P. McNamee and H. Dang. 2009. Overview of the tac 2009 knowledge base population track. In Proc. Text Analysis Conference (TAC 2009). R. Motwani and P. Raghavan. 1996. Randomized algorithms. ACM Computing Surveys (CSUR). L. Page, S. Brin, R. Motwani, and T. Winograd. 1999. The pagerank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab. E. Pavlick, P. Rastogi, J. Ganitkevitch, and C. Van Durme, B.and Callison-Burch. 2015. Ppdb 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification. In Proc. Association for Computational Linguistics (ACL 2015). J. Reisinger and R. Mooney. 2010. Multiprototype vector-space models of word meaning. In Proc. North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL-HLT 2010). B. Roth, T. Barth, M. Wiegand, M. Singh, and D. Klakow. 2013. Effective slot filling based on shallow distant supervision methods. In Proc. Text Analysis Conference (TAC 2013). S. Soderland, J. Gilmer, R. Bart, O. Etzioni, and D. Weld. 2013. Open ie to kbp relations in 3 hours. In Proc. Text Analysis Conference (TAC 2013). A. Sun, R. Grishman, B. Min, and W. Xu. 2011. Nyu 2011 system for kbp slot filling. In Proc. Text Analysis Conference (TAC 2011). M. Surdeanu and H. Ji. 2014. Overview of the english slot filling track at the tac2014 knowledge base population evaluation. In Proc. Text Analysis Conference (TAC 2014). M. Surdeanu, D. McClosky, J. Tibshirani, J. Bauer, A. Chang, V. Spitkovsky, and C. Manning. 2010. A simple distant supervision approach for the tac-kbp slot filling task. In Proc. Text Analysis Conference (TAC 2010). M. Wang, W. Che, and C. Manning. 2013. 
Joint word alignment and bilingual named entity recognition using dual decomposition. In Proc. Association for Computational Linguistics (ACL 2013). S. White and P. Smyth. 2003. Algorithms for estimating relative importance in networks. In Proc. Knowledge discovery and data mining (KDD 2003). D. Yu, H. Li, T. Cassidy, Q. Li, H. Huang, Z. Chen, H. Ji, Y. Zhang, and D. Roth. 2013. Rpi-blender tac-kbp2013 knowledge base population system. In Proc. Text Analysis Conference (TAC 2013). D. Yu, H. Ji, S. Li, and C. Lin. 2015. Why read if you can scan: Scoping strategy for biographical fact extraction. In Proc. North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL-HLT 2015). 53
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 526–536, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Investigating the Sources of Linguistic Alignment in Conversation Gabriel Doyle Department of Psychology Stanford University Stanford, CA 94305 [email protected] Michael C. Frank Department of Psychology Stanford University Stanford, CA 94305 [email protected] Abstract In conversation, speakers tend to “accommodate” or “align” to their partners, changing the style and substance of their communications to be more similar to their partners’ utterances. We focus here on “linguistic alignment,” changes in word choice based on others’ choices. Although linguistic alignment is observed across many different contexts and its degree correlates with important social factors such as power and likability, its sources are still uncertain. We build on a recent probabilistic model of alignment, using it to separate out alignment attributable to words versus word categories. We model alignment in two contexts: telephone conversations and microblog replies. Our results show evidence of alignment, but it is primarily lexical rather than categorical. Furthermore, we find that discourse acts modulate alignment substantially. This evidence supports the view that alignment is shaped by strategic communicative processes related to the ongoing discourse. 1 Introduction In conversation, people tend to adapt to one another across a broad range of behaviors. This adaptation behavior is collectively known as “communication accommodation” (Giles et al., 1991). Linguistic alignment, the use of similar words to a conversational partner, is one prominent form of accommodation. Alignment is found robustly across many settings, including inperson, computer-mediated, and web-based conversation (Danescu-Niculescu-Mizil et al., 2012; Giles et al., 1979; Niederhoffer and Pennebaker, 2002). In addition, the strength of alignment to conversational partners varies with relevant sociological factors, such as the power of the partners, their social network centrality, and their likability. Potentially, this alignment could be used to infer these factors in situations where they are difficult to observe directly. Although linguistic alignment appears to reflect important social dynamics, the mechanisms underlying alignment are still not well-understood. One particular question is whether alignment is supported by relatively automatic priming mechanisms, or higher-level, discourse and communicative strategies. The Interactive Alignment Model proposes that conversational partners prime each other, causing alignment via the primed reuse of structures ranging from individual lexical items to syntactic abstractions (Pickering and Garrod, 2004). In contrast, Accommodation Theory emphasizes the relatively more communicative and strategic nature of alignment (Giles et al., 1991). Relative to this theoretical landscape, a number of questions have emerged. First, does alignment occur at structural levels? If alignment is driven by interactive priming of structures, effects of alignment should be expected not only at the lexical level but also for structural elements or categories as well. In contrast, if alignment is primarily communicative, then alignment strength might differ and be greater for specific words that serve particular conversational or discourse functions in a particular situation. Second, does alignment vary with conversational goals? 
If alignment is driven primarily by priming, it should be relatively consistent across different aspects of a discourse. In contrast, from a strategic or communicative perspective, alignment – in which preceding words and concepts are reused – must be balanced against a need to move the conversation forward by introducing new words and concepts. Thus, on a communica526 tive account, alignment should be modulated by the speaker’s discourse act, reflecting whether the balance of the concern is convergence on a current focus or conveyal of new information. Our goal in the current work is to investigate these questions. We make use of a recent probabilistic model of linguistic alignment, modifying it to operate robustly over corpora with highly varying distributional structures and to consider both lexical and category-based alignment. We use two corpora of spontaneous conversations, the Switchboard Corpus and a corpus of Twitter conversations, to perform two experiments. First, in both datasets we measure alignment across different levels of representation and find very limited evidence for category-level alignment. Second, we make use of annotations in Switchboard to measure alignment across different discourse acts, finding that the level of alignment depends on the discourse actions that are included in the analysis. Taken together, these findings are consistent with the idea that alignment arises from discourselevel, strategic processes that operate primarily over lexical items. 2 Previous Work 2.1 Why does alignment matter? Linguistic alignment, like other kinds of accommodation, can be a critical part of achieving social goals. Performance in cooperative decisionmaking tasks is positively related to the participants’ linguistic convergence (Fusaroli et al., 2012; Kacewicz et al., 2013). Romantically, match-making in speed dating and stability in established relationships have both been linked to increased alignment (Ireland et al., 2011). Alignment can also improve perceived persuasiveness, encouraging listeners to follow good health practices (Kline and Ceropski, 1984) or to leave larger tips (van Baaren et al., 2003). Alignment is also important as an indicator of implicit sociological variables. Less powerful conversants generally accommodate to more to powerful conversants. Prominent examples include interviews and jury trials (Willemyns et al., 1997; Gnisci, 2005; Danescu-Niculescu-Mizil et al., 2012). A similar effect is found for network structure: speakers align more to more networkcentral speakers (Noble and Fern´andez, 2015). Additionally, factors such as gender, likability, respect, and attraction all interact with the magnitude of accommodation (Bilous and Krauss, 1988; Natale, 1975). 2.2 Sources of linguistic alignment Despite the important outcomes associated with alignment, its sources are not clear. The most prominent strand of work on alignment has focused on the level of word categories, looking at how interlocutors change their frequency of using, for instance, pronouns or quantitative words (Danescu-Niculescu-Mizil et al., 2012; Ireland et al., 2011). These results show alignment effects at the category level, but it is in principle possible that these effects arose purely from alignment on individual words (and that conclusion would not be inconsistent with the interpretation of that work). Syntactic alignment is one area in which theoretical predictions have been tested, though results have been somewhat equivocal. 
The Interactive Alignment Model has generally been taken to suggest that there should be cross-person priming of syntactic categories and structures (Pickering and Garrod, 2004). But while some studies have found support for syntactic priming (Gries, 2005; Dubey et al., 2005), others have found negative or null alignment (Healey et al., 2014; Reitter et al., 2006). In one particularly thorough study, Healey et al. (2014) found across two corpora that speakers syntactically diverged from their interlocutors once lexical alignment was accounted for. Furthermore, positive alignment is generally regarded as a good conversational tactic, but there is clearly a limit to its virtues, at least when it comes to content words. Alignment is inherently backward-looking, while the general goal of a conversation is to exchange information that is not already known by both parties, an inherently forward-looking goal. Perhaps because of this, some recent work finding positive alignment has limited itself to “non-topical” word categories, which are less contentful (Danescu-NiculescuMizil et al., 2011; Doyle et al., 2016). And suggestively, alignment within a task-relevant syntactic category was a better predictor of decisionmaking performance than overall lexical alignment (Fusaroli et al., 2012). In sum, although individual studies do bear on the sources of alignment, the picture is still not clear. Because most work on alignment has been done either on categories of words or aggregating 527 across the lexicon, we do not have a good sense of whether there are systematic differences in alignment at different levels of representation. A further complication is that there is no standard measure of alignment; we turn to this issue next. 2.3 Measures of alignment The metrics used in previous work fall into two basic categories: distributional and conditional. Distributional methods such as Linguistic Style Matching (LSM) (Niederhoffer and Pennebaker, 2002; Ireland et al., 2011) or the Zelig Quotient (Jones et al., 2014) calculate the similarity between the conversation participants over their frequencies of word or word category use in all utterances within the conversation. In contrast, conditional metrics, such as Local Linguistic Alignment (LLA) (Fusaroli et al., 2012; Wang et al., 2014) and the metric used by Danescu-Nicolescu-Mizil et al. (2011), look at how a message conditions its reply, with alignment indicated by elevated word use in the reply when that word was in the preceding message. While distributional methods have been popular, a major weakness of such methods is that they do not necessarily show true alignment, only similarity. A high level of distributional similarity does not imply that two conversational partners have aligned to one another, because they might instead have been similar to begin with. In contrast, conditional measures allow for stronger inferences about the temporal sequence of alignment (even though they cannot guarantee any causal interpretation). Thus, we focus here on conditional measures exclusively. By-message conditional methods Several existing conditional methods have started from the simplified representation that messages either do or do not contain particular words (“markers”), irrespective of message length or marker count. (Danescu-Niculescu-Mizil et al., 2012; Doyle et al., 2016). We refer to these as “by-message” methods. 
Consider the following example of conditional alignment, using pronouns as the marker: Bob aligns to Alice if his replies are more likely to contain a pronoun when in response to a message from Alice that contains a pronoun. Bob’s reply Alice’s message has pronoun no pronoun has pronoun 8 2 no pronoun 5 5 Here, Alice sends 10 messages that contain at least one pronoun, and 8 of Bob’s replies contain at least one pronoun. But Alice also sends 10 messages that don’t contain any pronouns, and only 5 of Bob’s replies to these contain pronouns. This increased likelihood of a pronoun-containing reply to a pronoun-containing message is the conditional alignment. Different models quantify this conditional alignment slightly differently. DanescuNiculescu-Mizil et al. (2011) proposed a subtractive conditional probability model, where alignment is the difference between the likelihood of a pronoun-containing reply B to a pronoun-containing message A and the probability of a pronoun-containing reply to any message: alignSCP = p(B|A) −p(B) (1) Doyle et al. (2016) showed that this measure can be affected by the overall frequency of the category being aligned on, though. To correct this issue, they proposed a Hierarchical Alignment Model (HAM), which defines alignment as a linear effect on the log-odds of a reply containing the relevant marker (e.g., a pronoun), similar to a linear predictor in a logistic regression.1 (2) alignHAM ≈logit−1(p(B|A)) − logit−1(p(B|¬A)) These binary conditional methods depend on the assumption that all messages have similar, and small, numbers of words, however. The probability that a message contains at least one of any marker of interest is dependent on the message’s length, so if messages vary substantially in their length, these alignment values can be at least noisy, if not biased. They are also not robust as messages increase in length, since the likelihood that a message contains any marker approaches 1 as message length increases. By-word conditional methods A solution to the problem of variable message lengths is simply to shift from binarized data to count data. Instead of counting how many times Bob’s replies contain at least one pronoun, we can count what proportion of his replies’ word tokens are pronouns. 1Because the HAM estimated this quantity via Bayesian inference, the inferred alignment value depends on the prior and number of messages observed, so unlike the other measures, this equality is only approximate. 528 Some existing measures use a related quantity, the proportion of the preceding message that appears in its reply, to estimate alignment, notably Local Linguistic Alignment (LLA) (Fusaroli et al., 2012; Wang et al., 2014) and the lexical similarity (LS) measure of Healey et al. (2014). LLA is defined as the number of word tokens (wi) that appear in both the message (Ma) and the reply (Mb), divided by the product of the total number of word tokens in the message and reply: alignLLA = P wi∈Mb δ(wi ∈Ma) length(Ma)length(Mb) (3) These measures have an aspect of conditionality, as they only count words that appear in both the message and the reply. But they nevertheless fail to control for the baseline frequency of the initial marker, and hence may be biased in measurements across words or categories of different frequencies (Doyle et al., 2016). They also can be affected by reply length, as the maximum alignment estimate is only possible when the reply is shorter than the message. 
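To make the contrast between by-message and by-word conditional measures concrete, here is a minimal Python sketch of the subtractive conditional probability measure (Equation 1) and of LLA (Equation 3). These are illustrative re-implementations, not the original authors' code; the marker set and token lists are toy examples.

```python
# Two conditional alignment measures, computed for one marker category
# (e.g., pronouns) from tokenized (message, reply) pairs.

def align_scp(pairs, marker):
    """Subtractive conditional probability (Equation 1): p(B|A) - p(B),
    where A/B indicate that the message/reply contains the marker."""
    a = [any(w in marker for w in msg) for msg, _ in pairs]
    b = [any(w in marker for w in rep) for _, rep in pairs]
    p_b = sum(b) / len(b)
    p_b_given_a = sum(bi for ai, bi in zip(a, b) if ai) / max(sum(a), 1)
    return p_b_given_a - p_b

def align_lla(message, reply):
    """Local Linguistic Alignment (Equation 3) for one message-reply pair."""
    shared = sum(1 for w in reply if w in set(message))
    return shared / (len(message) * len(reply))

# Example with toy tokenized turns and a small pronoun marker set.
pronouns = {"i", "you", "he", "she", "it", "we", "they"}
pairs = [(["did", "he", "like", "it"], ["yeah", "he", "loved", "it"]),
         (["nice", "weather", "today"], ["sure", "is"])]
print(align_scp(pairs, pronouns))
print(align_lla(*pairs[0]))
```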
All of these by-word conditional models treat the reply as a bag of words, without order information. The by-word models, including the WHAM model we propose, are agnostic about reply length effects, correcting for the artifactual length effects of by-message models, but assuming that all messages have similar alignment strengths independent of length. This is in contrast to models that explicitly model priming effects as decaying over time (Reitter et al., 2006; Reitter, 2008), which predict higher alignment in shorter replies. Future by-word alignment models could infer a discounting for words that occur later in the reply, similar to the beta value on the log-distance from the prime proposed in Reitter et al. (2006). Our goal in this work is to create a model that combines the benefits of the existing by-message conditional models with the length-robustness of a by-word conditional method. We present WHAM, a modification of the HAM model that satisfies this goal. 3 The Word-Based Hierarchical Alignment Model (WHAM) We propose the Word-Based Hierarchical Alignment Model (WHAM). Like HAM, WHAM assumes that word use in replies is shaped by whether the preceding message contained the marker of interest. But WHAM uses marker token frequencies within replies, so that a 40-word reply with two instances of the marker is represented differently from a 3-word reply containing one instance. For each marker, WHAM treats each reply as a series of token-by-token independent draws from a binomial distribution. The binomial probability µ is dependent on whether the preceding message did (µalign) or did not (µbase) contain the marker, and the inferred alignment value is the difference between these probabilities in log-odds space (ηalign). The graphical model is shown in Figure 1. For a set of message-reply pairs between a speaker-replier dyad (a, b), we first separate the replies into two sets based on whether the preceding message contained the marker m (the “alignment” set) or not (the “baseline” set). All replies within a set are then aggregated in a single bagof-words representation, with marker token counts Calign m,a,b and Cbase m,a,b, and total token counts Nbase m,a,b and Nbase m,a,b, the observed variables on the far right of the model. Moving from right to left, these counts are assumed to come from binomial draws with probability µalign m,a,b or µbase m,a,b. The µ values are generated from η values in log-odds space by an inverse-logit transform, similar to linear predictors in logistic regression. The ηbase variables are representations of the baseline frequency of a marker in log-odds space, and µbase is simply a conversion of ηbase to probability space, the equivalent of an intercept term in a logistic regression. ηalign is an additive value, with µalign = logit−1(ηbase + ηalign), the equivalent of a binary feature coefficient in a logistic regression. Alignment is then the change in logodds of the replier using m above baseline usage, given that the initial message uses m. The remainder of the model is a hierarchy of normal distributions that allow social and word category structure to be integrated into the analysis. In the present work, we have three levels in the hierarchy: category level, marker level,2 and conversational dyad level. All of these normal distributions have identical standard deviations σ2 = .25.3 A Cauchy(0, 2.5) distribution 2In the lexical and category-not-word alignment models, these markers are words within a category. 
The category alignment model does not include this level, since all words in a category are treated identically. 3This value was chosen as a good balance between rea529 C N ηbase s ηbase m ηbase m,a,b µbase m,a,b Cbase m,a,b ηalign s ηalign m ηalign m,a,b µalign m,a,b Calign m,a,b Category Marker Dyad N N logit−1 Binom N N logit−1 Binom N base m,a,b N align m,a,b (a, b) ∈D m ∈s s ∈S Figure 1: The Word-Based Hierarchical Alignment Model (WHAM). A chain of normal distributions generates a linear predictor η, which is converted into a probability µ for binomial draws of the words in each reply. gives a relatively uninformative prior for the baseline marker frequency (Gelman et al., 2008). The alignment hierarchy is headed by a normal distribution centered at 0, biasing the model equally in favor of positive and negative alignments. For our marker set, we adopt the Linguistic Inquiry and Word Count (LIWC) system to categorize words (Pennebaker et al., 2007). We use a set of 11 categories that have shown alignment effects in previous work (Danescu-Niculescu-Mizil et al., 2011). These can be loosely grouped into a set of five syntactic categories (articles, conjunctions, prepositions, pronouns, and quantifiers) and six conceptual categories (certainty, discrepancy, exclusion, inclusion, negation, and tentative). Categories and example elements are shown in Table 1. We manually lemmatized all words in each category. We implemented WHAM in RStan (Carpenter, 2015), with code available at http: //github.com/langcog/disc_align. 3.1 Validating WHAM A major goal of our by-word alignment model, WHAM, is to fix the length issues discussed in Section 2.3. We test WHAM and the by-message HAM model on simulated data, using a method similar to Simulation 2 in Doyle et al. (2016), to sonable parameter convergence (improved by smaller σ2) and good model log-probability (improved by larger σ2). Swbd Twit Category Examples Size Prob Prob Article a, the 2 .053 .047 Certainty always, never 17 .014 .015 Conjunction but, and, though 18 .077 .051 Discrepancy should, would 21 .015 .019 Exclusive without, exclude 77 .038 .028 Inclusive with, include 57 .057 .028 Negation not, never 12 .020 .023 Preposition to, in, by, from 97 .097 .091 Pronoun it, you 55 .17 .16 Quantifier few, many 23 .028 .025 Tentative maybe, perhaps 28 .033 .025 Table 1: Marker categories for linguistic alignment, with examples, number of distinct word lemmas, and token probability of in a reply in Switchboard and Twitter. see how robust they are to different reply lengths. We generate 500 speaker-replier dyads, each exchanging an average of 5 message pairs (drawn from a geometric distribution). Each message pair consists of a message whose length in words is drawn from a uniform distribution [1, 25], and a reply of length L. Because our goal is to test the effect of length on the models’ performances, we create separate simulated datasets for different values of L, and see whether the model correctly estimates the alignment value ηalign. Three independent simulations were run for each alignmentlength pair. 
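The simulation setup just described can be sketched as follows. This is not the authors' code: the geometric parameterization and the simple empirical check at the end (a raw log-odds difference rather than the full hierarchical Stan fit) are assumptions made for illustration.

```python
# Hedged sketch of the WHAM validation simulation: reply tokens are binomial
# draws whose log-odds are eta_base, plus eta_align when the preceding
# message contained the marker.
import numpy as np

rng = np.random.default_rng(0)

def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate_dyads(n_dyads=500, mean_pairs=5, reply_len=10,
                   base_freq=0.1, eta_align=0.5):
    eta_base = np.log(base_freq / (1 - base_freq))
    rows = []  # (marker in message?, marker tokens in reply, reply length)
    for _ in range(n_dyads):
        for _ in range(rng.geometric(1.0 / mean_pairs)):
            msg_len = rng.integers(1, 26)
            msg_has_marker = rng.binomial(msg_len, base_freq) > 0
            mu = inv_logit(eta_base + (eta_align if msg_has_marker else 0.0))
            rows.append((msg_has_marker, rng.binomial(reply_len, mu), reply_len))
    return rows

data = simulate_dyads()

def rate(flag):
    c = sum(k for f, k, n in data if f == flag)
    n = sum(n for f, k, n in data if f == flag)
    return c / n

# Empirical log-odds difference; should land near the simulated eta_align of 0.5.
est = (np.log(rate(True) / (1 - rate(True)))
       - np.log(rate(False) / (1 - rate(False))))
print(round(float(est), 2))
```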
We present data here for a simulated 530 G G G G GGGG GGGGGGG G G GG GG GG GGGGGGG GGGG GGGGGG GGGGG GG G GG GG GGGGGGGG GG G GGGGGGGGGGG G GGG G GGG GGG GG G G G GGGG GGGGG G GG GGG GG G GGG GGG G G G G GG GG G G GG GGG GG G G G G G GG GG G G G G G G G G G G −1.0 −0.5 0.0 0.5 −1 0 1 2 WHAM HAM −1.0 −0.5 0.0 0.5 1.0 true alignment estimated alignment reply length G G G G G 1 5 10 25 50 Figure 2: Actual versus estimated alignment on simulated data. Lines are loess-fit curves; colors represent the reply length in the simulation run. WHAM estimates alignment accurately regardless of reply length; HAM is highly affected by length. word category with a baseline frequency of 0.1, around the middle of the attested category frequency range (see Table 1). Figure 2 plots the true alignment value in the simulations against the model-estimated alignment values. Different colors represent different reply lengths L, ranging from single-word replies (light yellow) to 50-word replies (dark orange). The WHAM model shows consistently accurate alignment estimates over the range of simulated alignment values and reply lengths. The HAM model estimates the alignment far less accurately, and the reply length biases its estimates. 4 Data Moving on to real data, we use two corpora for our experiments. The first is a collection of Twitter conversations collected by Doyle & Frank (2015) to examine information density in conversation. This corpus focuses on conversations within a set of 14 mostly distinct sub-communities on Twitter, and contains 63,673 conversation threads, covering 228,923 total tweets. We divide these conversations into message pairs, also called conversational turns, which are two consecutive tweets within a conversation thread. The second tweet is always in reply to the first (according to the Twitter API), although this does not necessarily mean that the content of the reply is a response to the preceding tweet. Retweets (including explicit retweets and some common manual retweet methods) were removed automatically. This processing leaves us with 122,693 message pairs, spanning 2,815 users. The tweets were parsed into word tokens using the Twokenizer (Owoputi et al., 2013). The second corpus is the SwDA version of the Switchboard corpus (Godfrey et al., 1992; Jurafsky et al., 1997).4 This corpus is a collection of transcribed telephone conversations, with each utterance labeled with the discourse act it is performing (e.g., statement of opinion, signal of nonunderstanding). It contains 221,616 total utterances in 1,155 conversations. We combine consecutive utterances by the same speaker without interruption from the listener into a single message and treat consecutive pairs of messages from different speakers as conversation turns, resulting in 110,615 message pairs. 5 Experiment 1: Lexical- and Category-Level Alignment Our first experiment examines how alignment differs across the lexical and categorical levels. We use the WHAM framework to infer alignment on word and category counts, and also introduce a measure to estimate the influence of one word in a category on other words in its category, “categorynot-word” alignment. We include this last type of alignment because it is possible that the category alignment effects in previous work are the result of lexical alignment on the individual words in the category, without any influence across words in the category. 
If categorical alignment is a real effect over and above lexical alignment, as an interactive-priming source for alignment would suggest, then the presence of a word in a message should not only increase the chance of seeing that word in the reply, but also other words in its category. 5.1 Category-not-word-alignment model Assessing the amount of alignment triggered across words in a category (which we call “category-not-word alignment” or CNW) is not trivial, as there are a variety of interactions between lexical items within a category that can cause the lexical alignment to actually be less than 4Available courtesy of Christopher Potts at http:// compprag.christopherpotts.net/swda.html. 531 Reply Message ∅ he she ∅ 25 25 25 he 20 50 10 she 20 10 50 Table 2: A theoretical case where lexical alignment surpasses categorical alignment due to negative CNW between the words. the category alignment. Table 2 illustrates this with a theoretical distribution over the pronouns he and she; one use of the pronoun he makes another use more likely (A: Did he like the movie? B: Yeah, he loved it.) while also reducing the likelihood of she, since the topic of conversation is now a male, and vice versa for she. For both he and she, the lexical alignment is approximately logit−1(p(B|A) −p(B|¬A)) = logit−1(50 80 − 25 75) ≈1.2, but categorical alignment is approximately logit−1(120 160 −50 75) ≈0.4. On the other hand, the pronouns you and I might trigger each other more than themselves (A: Did you like the movie? B: Yeah, I loved it.). The differences between lexical, categorical, and CNW alignment are also relevant to discussions of “lexical boosts” in the syntactic priming literature, an increased priming effect at the categorical level when there is lexical repetition. Lexicalist residual activation accounts (Pickering and Branigan, 1998) predict such a boost, while implicit learning accounts do not (Bock and Griffin, 2000; Chang et al., 2006). In the context of this experiment, such a lexical boost could make lexical and categorical alignment appear elevated and closer together, but would not have a substantial effect on CNW alignment.5 To investigate CNW alignment, we look at a subset of the data: for each word w, exclude all messages that contain a word from that category (S) that is not w. This limits the category alignment influence on the reply to the single word w. Then, instead of looking at how often w appears in the reply, we look at how often all other words in category S appear in the reply. The model then infers the influence of w on the other words in the category independent of their lexical alignment. 5The categories being investigated in our work contain mostly non-topical, closed-class words, which have not exhibited lexical boosts in past research (Bock, 1989; Pickering and Branigan, 1998; Hartsuiker et al., 2008), but such boosting may be detectable in estimates on topical categories. Within the WHAM model, we change the count variables C· and N· so that Calign is the number of tokens of {S −w} in replies to messages containing w but not {S −w}. Cbase is then the number in replies to messages not containing any words in S. Similarly, Nalign is the total token counts over replies containing w but not any other words in S, and Nbase the total token counts over replies containing no words in S. 5.2 Methods We conducted three sets of simulations, fitting the model with marker categories, individual words, and with the CNW scheme described above. 
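The CNW count construction just described can be written down compactly; the sketch below is illustrative (toy tokenized turns, not the authors' implementation) and computes the four counts for a single word w within its category S.

```python
# Category-not-word (CNW) counts for one word w in category S:
# messages containing any *other* word of S are excluded, and the reply
# counts tally occurrences of those other words rather than of w itself.

def cnw_counts(pairs, w, category):
    """pairs: list of (message_tokens, reply_tokens) for one dyad.
    Returns (C_align, N_align, C_base, N_base) for word w in `category`."""
    others = set(category) - {w}
    c_align = n_align = c_base = n_base = 0
    for msg, rep in pairs:
        msg_set = set(msg)
        if msg_set & others:
            continue  # exclude messages containing other words of the category
        other_count = sum(1 for t in rep if t in others)
        if w in msg_set:          # "alignment" set: message contains w but not S-{w}
            c_align += other_count
            n_align += len(rep)
        else:                     # "baseline" set: message contains no word of S
            c_base += other_count
            n_base += len(rep)
    return c_align, n_align, c_base, n_base

pronouns = {"he", "she", "it", "you", "i"}
pairs = [(["did", "he", "like", "the", "movie"], ["yeah", "he", "loved", "it"]),
         (["nice", "day"], ["it", "sure", "is"])]
print(cnw_counts(pairs, "he", pronouns))
```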
In each, the model was fit with two chains of 200 iterations of the sampler for each dataset. We then extracted alignment estimates from each of the final 100 samples, and we report 95% highest posterior density intervals on ηalign S . 5.3 Results Figure 3 shows the alignment on each marker category in the Twitter and Switchboard corpora. There were substantial differences in the overall rate of alignment between the corpora: Mean category alignment on Twitter was .19, while Switchboard category alignment was −.051. These differences may reflect the nature of the two discourse contexts: Replies on Twitter are composed while looking at the preceding message, encouraging the replier to take more account of the other tweeter’s words, and a replier can draft and edit their reply to make it better fit the conversation. Messages on Switchboard, on the other hand, are evanescent, so a replier must compose a reply without looking back at the message, without editing, and in real-time. Differences in the discourse structure of these corpora may also be contributing, an effect we will consider in Experiment 2. Despite the difference in reply construction in the two corpora, the results across levels of alignment were similar. Alignment was found primarily at the lexical – rather than the category – level. Lexical and category alignment were not significantly different from each other, but the strength of lexical alignment was significantly larger than the CNW alignment, according to a t-test over categories (Twitter: t(10) = .21, p < .001; Swbd: t(10) = .12, p = .003). CNW alignment was significantly negative on Switchboard (t(10) = −.11, p = .01) and not significantly different from zero on Twitter (t(10) = .009, p = .79). 532 Syntactic Conceptual ●● ● ●● ● ● ● ● ●●● ●● ● ●●● ● ● ● ● ● ● ●●● ●● ● ● ● ● ●● ● ●● ● ●● ● ● ● ● ● ●● ● ●● ●● ● ●● ● ● ● ● ● ●● ●● ● −0.5 0.0 0.5 1.0 −0.5 0.0 0.5 1.0 Twitter Switchboard article conjunction preposition pronouns quantifier certainty discrepancy exclusion inclusion negation tentative estimated alignment (log odds) ● ● ● Category Lexical CNW Figure 3: Categorical (red), lexical (blue), and CNW (green) alignments plotted by category, on the Twitter (left) and Switchboard (right) datasets. 95% HPD intervals from WHAM shown. WHAM – unlike other previous measures – provides estimates of alignment that are unbiased by either marker frequency or message length, but we still observed modest alignment on Twitter, replicating previous work (Doyle et al., 2016; Danescu-Niculescu-Mizil et al., 2011). Alignment was smaller in Switchboard, and in both cases there were no category effects. Thus, the categorical alignment results may result primarily from lexical alignment, inconsistent with the predictions of interactive priming accounts of alignment. 6 Experiment 2: Discourse Acts and Alignment Messages within a discourse can serve a very wide range of purposes. This variety has effects for both linguistic structure and the relationship to neighboring messages. For example, a simple yes/no question is likely to receive a short, constrained reply, while a statement of an opinion is more likely to yield a longer reply. In addition, different types of messages can either introduce new information to the conversation (e.g., statements, questions, offers) or look back at existing information (e.g., acknowledgments, reformulations, yes/no answers). We hypothesize that alignment will be substantially different depending on the discourse act, as speakers’ conversational goals vary. 
Thus, our second experiment examines how alignment differs depending on discourse act. We focus on a particular kind of discourse act, the backchannel (Yngve, 1970). Backchannels are extremely common in Switchboard, accounting for almost 20% of utterances, and include utterances such as single words signaling understanding or misunderstanding (yeah, uh-huh, no) or simple messages expressing empathy without trying to take a full conversational turn (It must have been tough). Backchannels are a particularly interesting case because their short and constrained nature makes it difficult to align on some categories (e.g., backchannels rarely contain quantifiers or prepositions), while the purpose of giving feedback to the speaker makes it important to align on others (e.g., matching the positive/negative tone or certainty of a speaker). In addition, backchannels are primarily restricted to spoken corpora. Twitter conversations contain far fewer backchannels than Switchboard, which may account for some of their alignment differences—especially as the results of this experiment suggest that backchannels reduce overall alignment. 6.1 Methods We use the discourse-annotated Switchboard corpus to compare alignment in conversations containing backchannels with those whose backchannels have been removed. We make this comparison by creating a second corpus, removing every utterance classified as a backchannel from the corpus prior to parsing the utterances into conversation turns as before. 533 syntactic conceptual GG G GGG G G G G G G G G G GG G G G G GGG G G G G GG GG G −0.5 0.0 0.5 1.0 article conjunction preposition pronouns quantifier certainty discrepancy exclusion inclusion negation tentative marker category estimated alignment alignment type G G G category lexical CNW Switchboard alignments (w/o backchannels) Figure 4: Categorical (red), lexical (blue), and CNW (green) alignments on the Switchboard dataset with backchannels removed. 95% HPD intervals from WHAM shown. 6.2 Results Alignment values for the Switchboard corpus without backchannels are shown in Figure 4. As expected, alignment is on average higher without the backchannels (p = .09 for category, p < .05 for lexical and CNW), reflecting the constrained nature of backchannels. Lexical alignment is significantly higher than category alignment (t(10) = −.08, p = .03), consistent with the findings of Experiment 1. The mean category alignment without backchannels is .029. Figure 5 compares the category alignments for the full Switchboard corpus (green) and Switchboard without backchannels (orange). Alignment on the full corpus is lower for all but two categories, exhibiting the reduced opportunity for alignment provided by backchannels. Syntactic category alignment is especially affected by backchannels, whose constrained forms provide very little ability to align syntactically. Interestingly, the two categories that do show greater alignment when backchannels are included are certainty and negation. These categories are both important for backchannels; a negative backchannel is generally inappropriate in reply to a non-negative message, and similarly a confident backchannel would often be out of place in reply to an uncertain message. These influences of discourse acts on alignment are more consistent with a discourse-strategic origin for alignment than a priming-based account. 
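A minimal sketch of the Experiment 2 preprocessing is given below: backchannel utterances are dropped before consecutive same-speaker utterances are merged into messages and consecutive messages are paired into turns. The utterance data structure and the assumption that backchannels carry the SwDA act tag "b" are illustrative, not taken from the authors' code.

```python
# Build message pairs from a discourse-annotated conversation, optionally
# removing backchannels first (Section 6.1).

def build_turns(utterances, drop_backchannels=True):
    """utterances: list of (speaker, act_tag, tokens) in conversation order."""
    if drop_backchannels:
        utterances = [u for u in utterances if u[1] != "b"]
    # merge consecutive utterances by the same speaker into one message
    messages = []
    for spk, _, toks in utterances:
        if messages and messages[-1][0] == spk:
            messages[-1][1].extend(toks)
        else:
            messages.append((spk, list(toks)))
    # consecutive messages from different speakers form message pairs
    return [(m1[1], m2[1]) for m1, m2 in zip(messages, messages[1:])]

convo = [("A", "sd", ["i", "went", "hiking"]),
         ("B", "b", ["uh-huh"]),
         ("A", "sd", ["it", "was", "great"]),
         ("B", "sv", ["i", "love", "hiking", "too"])]
print(build_turns(convo))
```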
syntactic conceptual G G G G G G GG G G G G GG G G G G G G G G −0.4 −0.2 0.0 0.2 0.4 article conjunction preposition pronouns quantifier certainty discrepancy exclusion inclusion negation tentative marker category estimated alignment corpus G G Swbd Swbd w/o backchannels Category alignment and backchannels Figure 5: Comparing categorical alignment on the Switchboard dataset with and without backchannels. 95% HPD intervals from WHAM shown. 7 Discussion Linguistic alignment is a prominent type of communicative accommodation, but its sources are unclear. We presented WHAM, a length-robust extension of a probabilistic alignment model. Using this model, we find evidence that linguistic alignment is primarily lexical, and that it is strongly affected by at least some aspects of the discourse goal of a message. This combination of a primarily-lexical origin for linguistic alignment and its variation by word category and discourse act suggest that alignment is primarily a higher-level discourse strategy rather than a low-level priming-based mechanism. This set of results is consistent with both Accommodation Theory and the set of findings, reviewed above, that sociological factors affect the level of observed alignment. The effect of discourse acts on alignment further suggests that alignment is not a completely automatic process but rather one of many discourse strategies that speakers use to achieve their conversational goals. Acknowledgments We wish to thank Dan Yurovsky, Aaron Chuey, and Jake Prasad for their work on and discussion of earlier versions of the model, Herb Clark for discussions of potential effects of message length, and, of course, the reviewers. The authors were funded by NSF BCS 1528526, NSF BCS 1456077, and a grant from the Stanford Data Science Initiative. 534 References Frances R. Bilous and Robert M. Krauss. 1988. Dominance and accommodation in the conversational behaviours of same-and mixed-gender dyads. Language & Communication. Kay Bock and Zenzi M. Griffin. 2000. The persistence of structural priming: Transient activation or implicit learning. Journal of Experimental Psychology: General, 129:177–192. Kay Bock. 1989. Closed-class immanence in sentence production. Cognition, 31:163–186. Bob Carpenter. 2015. Stan: A Probabilistic Programming Language. Journal of Statistical Software. Franklin Chang, Gary S. Dell, and Kay Bock. 2006. Becoming syntactic. Psychological Review, 113:234–272. Cristian Danescu-Niculescu-Mizil, Michael Gamon, and Susan Dumais. 2011. Mark my words!: linguistic style accommodation in social media. In Proceedings of the 20th international conference on World Wide Web - WWW ’11, page 745, New York, New York, USA. ACM Press. Cristian Danescu-Niculescu-Mizil, Lillian Lee, Bo Pang, and Jon Kleinberg. 2012. Echoes of power: Language effects and power differences in social interaction. In Proceedings of the 21st international conference on World Wide Web WWW ’12, page 699. Gabriel Doyle and Michael C. Frank. 2015. Audience size and contextual effects on information density in Twitter conversations. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics. Gabriel Doyle, Dan Yurovsky, and Michael C. Frank. 2016. A robust framework for estimating linguistic alignment in Twitter conversations. In WWW 2016. Amit Dubey, Patrick Sturt, and Frank Keller. 2005. Parallelism in coordination as an instance of syntactic priming: Evidence from corpus-based modeling. 
In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 827–834. Association for Computational Linguistics. Riccardo Fusaroli, Bahador Bahrami, Karsten Olsen, Andreas Roepstorff, Geraint Rees, Chris Frith, and Kristian Tyl´en. 2012. Coming to Terms: Quantifying the Benefits of Linguistic Coordination. Psychological Science, 23(8):931–939. Andrew Gelman, Aleks Jakulin, Maria Grazia Pittau, and Yu-Sung Su. 2008. A weakly informative default prior distribution for logistic and other regression models. The Annals of Applied Statistics. Howard Giles, Klaus R. Scherer, and Donald M. Taylor. 1979. Speech markers in social interaction. In Klaus R. Scherer and Howard Giles, editors, Social markers in speech, pages 343–81. Cambridge University Press, Cambridge. Howard Giles, Nikolas Coupland, and Justine Coupland. 1991. Accommodation theory: Communication, context, and consequences. In Howard Giles, Justine Coupland, and Nikolas Coupland, editors, Contexts of accommodation: Developments in applied sociolinguistics. Cambridge University Press, Cambridge. Augusto Gnisci. 2005. Sequential strategies of accommodation: A new method in courtroom. British Journal of Social Psychology, 44(4):621–643. John J. Godfrey, Edward C. Holliman, and Jane McDaniel. 1992. Switchboard: Telephone speech corpus for research and development. In 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing., volume 1, pages 517–520. IEEE. Stefan Th Gries. 2005. Syntactic priming: A corpusbased approach. Journal of psycholinguistic research, 34(4):365–399. Robert J. Hartsuiker, Sarah Bernolet, Sofie Schoonbaert, Sara Speybroeck, and Dieter Vanderelst. 2008. Syntactic priming persists while the lexical boost decays: Evidence from written and spoken dialogue. Journal of Memory and Language, 58:214–238. Patrick G. T. Healey, Matthew Purver, and Christine Howes. 2014. Divergence in dialogue. PloS one, 9(6):e98598. Molly E. Ireland, Richard B. Slatcher, Paul W. Eastwick, Lauren E. Scissors, Eli J. Finkel, and James W. Pennebaker. 2011. Language style matching predicts relationship initiation and stability. Psychological Science, 22:39–44. Simon Jones, Rachel Cotterill, Nigel Dewdney, Kate Muir, and Adam Joinson. 2014. Finding Zelig in text: A measure for normalising linguistic accommodation. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics, pages 455–465. Dan Jurafsky, Elizabeth Shriberg, and Debra Biasca. 1997. Switchboard swbd-damsl shallow-discoursefunction annotation coders manual. Institute of Cognitive Science Technical Report, pages 97–102. Ewa Kacewicz, James W. Pennebaker, Matthew Davis, Moongee Jeon, and Arthur C. Graesser. 2013. Pronoun use reflects standings in social hierarchies. Journal of Language and Social Psychology, 33(2):125–143. Susan L. Kline and Janet M. Ceropski. 1984. Personcentered communication in medical practice. In Human Decision-Making, pages 120–141. SIU Press, Carbondale. 535 Michael Natale. 1975. Convergence of mean vocal intensity in dyadic communication as a function of social desirability. Journal of Personality and Social Psychology, 32(5):790–804. Kate G. Niederhoffer and James W. Pennebaker. 2002. Linguistic style matching in social interaction. Journal of Language and Social Psychology, 21(4):337– 360. Bill Noble and Raquel Fern´andez. 2015. Centre Stage: How Social Network Position Shapes Linguistic Coordination. 
In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics. Olutobi Owoputi, Brendan O’Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah Smith. 2013. Improved Part-of-Speech Tagging for Online Conversational Text with Word Clusters. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 380–391. James W. Pennebaker, Roger J. Booth, and Martha E. Francis. 2007. Linguistic Inquiry and Word Count: LIWC. Martin J. Pickering and H. P. Branigan. 1998. The representation of verbs: Evidence from syntactic priming in language production. Journal of Memory and Language, 39:633–651. Martin J. Pickering and Simon Garrod. 2004. Toward a mechanistic psychology of dialogue. Behavioral and brain sciences, 27(2):169–190. David Reitter, Johanna D. Moore, and Frank Keller. 2006. Priming of syntactic rules in task-oriented dialogue and spontaneous conversation. In Proceedings of the 28th Annual Conference of the Cognitive Science Society. David Reitter. 2008. Context Effects in Language Production: Models of Syntactic Priming in Dialogue Corpora. Ph.D. thesis, U. of Edinburgh. Rick B. van Baaren, Rob W. Holland, Bregje Steenaert, and Ad van Knippenberg. 2003. Mimicry for money: Behavioral consequences of imitation. Journal of Experimental Social Psychology, 39(4):393–398. Yafei Wang, David Reitter, and John Yen. 2014. Linguistic Adaptation in Conversation Threads: Analyzing Alignment in Online Health Communities. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Michael Willemyns, Cynthia Gallois, Victor Callan, and Jeffrey Pittam. 1997. Accent accommodation in the employment interview. Journal of Language and Social Psychology, 15(1):3–22. Victor Yngve. 1970. On getting a word in edgewise. In Papers from the Sixth Regional Meeting of the Chicago Linguistics Society, pages 567–577. 536
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 537–546, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Entropy Converges Between Dialogue Participants: Explanations from an Information-Theoretic Perspective Yang Xu and David Reitter College of Information Sciences and Technology The Pennsylvania State University University Park, PA 16802, USA [email protected], [email protected] Abstract The applicability of entropy rate constancy to dialogue is examined on two spoken dialogue corpora. The principle is found to hold; however, new entropy change patterns within the topic episodes of dialogue are described, which are different from written text. Speaker’s dynamic roles as topic initiators and topic responders are associated with decreasing and increasing entropy, respectively, which results in local convergence between these speakers in each topic episode. This implies that the sentence entropy in dialogue is conditioned on different contexts determined by the speaker’s roles. Explanations from the perspectives of grounding theory and interactive alignment are discussed, resulting in a novel, unified informationtheoretic approach of dialogue. 1 Introduction Information in written text and speech is strategically distributed. It has been claimed to be ordered such that the rate of information is not only close to the channel capacity, but also approximately constant (Genzel and Charniak, 2002, 2003; Jaeger, 2010); these results were developed within the framework of Information Theory (Shannon, 1948). In these studies, the per-word cross-entropy of a sentence is used to model the amount of information transmitted. Language is treated as a series of random variables of words. Most existing work examined written text as opposed to speech. Spoken dialogue is different from written text in many ways. For example, dialogue contains more irregular or ungrammatical components, such as incomplete utterances, disfluencies etc. (Jurafsky and Martin, 2014, ch 12), which are “theoretically uninterested complexities that are unwanted” (Pickering and Garrod, 2004). Dialogue is also different from written text in high level discourse structure. The paragraphs in written text, which function as relatively standalone topic units, are constructed under the guidance of one consistent author. On the other hand, the constitution and transformation of topics in dialogue are more dynamic processes, which are the result of the joint activity from multiple speakers (Linell, 1998). In nature, written text is a monologue, while dialogue is a joint activity (Clark, 1996). From the application perspective, investigating entropy in dialogue can help us better understand which speaker contributes the most information, and thus may potentially benefit tasks such as conversational roles identification (Traum, 2003) etc. From the theoretical perspective, we believe that such investigation will reveal some unique features of the formation of higher level discourse structure in dialogue that are different from written text, e.g., topic episode shifts, because previous studies have found the correlation between entropy decrease and potential topic shift in written text (Qian and Jaeger, 2011). Finally, entropy is closely related to predictability and processing demands, which has implications for cognitive aspects of communication. The main purpose of this study is to characterize how lexical entropy changes in spoken language. 
We will focus on spontaneous dialogue of two speakers and carry out two steps of investigation. First, we examine the overall entropy patterns within dialogue as a whole context that does not differentiate speakers. Second, we zoom in to topic episodes within dialogue and explore how each of the two speakers’ entropy develops. The goal of the second step is to account the complexity of topic shifts within spoken dialogues and to reach a more detailed understanding 537 of human communication from an informationtheoretic perspective. If topic shifts in dialogue do correlate with changes in entropy, how do they affect the two speakers, only one of whom typically initiates the topic shift, while another follows along? To answer this question, we use the transcribed text data from two well-developed corpora. 2 Related Work 2.1 The principle of entropy rate constancy The constancy rate principle governing language generation in human communication was first proposed by Genzel and Charniak (2002). Inspired by ideas from Information Theory (Shannon, 1948), this principle asserts that people communicate (written or spoken) in a way that keeps the rate of information being transmitted approximately constant. Genzel and Charniak (2002) provide evidence to support this principle by formulating the problem into Equation 1. They treat text as a sequence of random variables Xi, and Xi corresponds to the i th word in the corpus. They focus on the entropy of a word conditioned on its context, i.e., Xi|X1 = w1, . . . , Xi−1 = wi−1, and decompose the context into two parts: the global context Ci that refers to all the words from preceding sentences, and the local context Li that refers to all the preceding words within the same sentence as Xi. Thus, the conditioned entropy of Xi is also decomposed into two terms (see the right side of Equation 1): the local measure of entropy (first term), and the mutual information between the word and global context (second term). H(Xi|Ci, Li) = H(Xi|Li) −I(Xi, Ci|Li) (1) The constancy rate principle predicts that the left side of Equation 1 should be constant as i increases. Because H(Xi|Ci, Li) itself is difficult to estimate (because it is hard to define Ci mathematically), and that the mutual information turn I(Xi, Ci|Li) is known to increase with i, the whole problem becomes examining whether the local measure of entropy H(Xi|Li) also increases with i. Genzel and Charniak (2002) have confirmed this prediction by showing that H(Xi|Li) does increase with i within multiple genres of written text of different languages. The constancy rate principle also leads to an interesting prediction about the relationship between entropy change and topic shift in text. Generally, a sentence that initiate a shift in topic will have lower mutual information between its context, because the previous context provides little information to the new topic. Thus, a topic shift corresponds to the drop of the mutual information term I(Xi, Ci|Li). Then in order to keep constancy of the left term as predicted by the principle, the entropy term needs to decrease when a topic shift happens. Genzel and Charniak (2003) verified this prediction by showing that paragraph-starting sentences have lower entropy than non-paragraphstarting ones, with the assumption that a new paragraph often indicates a topic shift in written text. More recently, latent topic modeling (Qian and Jaeger, 2011) showed that lower sentence entropy was associated with topic shifts. 
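Both the sentence-position increase of the local entropy and its drop at topic shifts follow from the decomposition in Equation 1. Restated in standard notation, as a paraphrase for clarity rather than an additional claim of the original work:

\begin{align*}
H(X_i \mid C_i, L_i) &= H(X_i \mid L_i) - I(X_i ; C_i \mid L_i) \approx \text{const.}\\
I(X_i ; C_i \mid L_i) \text{ grows as the global context accumulates} &\;\Rightarrow\; H(X_i \mid L_i) \text{ must grow with } i,\\
I(X_i ; C_i \mid L_i) \text{ drops at a topic shift} &\;\Rightarrow\; H(X_i \mid L_i) \text{ must drop there.}
\end{align*}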
Genzel and Charniak’s work has been extended to integrate non-linguistic information into the principle. Doyle and Frank (2015) leveraged Twitter data to find further support to the constancy rate principle: the entropy of message gradually increases as the context builds up, and it sharply goes down when there is a sudden change in the non-linguistic context (Baseball world series news reports, Doyle and Frank, 2015). Uniform Information Density (UID) (Jaeger and Levy, 2006) extends the principle in a framework that governs how people manage the amount of information in language production, from lexical levels to all levels of linguistic representations, e.g., syntactic levels. Its core idea is that people avoid salient changes in the density of information (i.e., amount of information per amount of linguistic signal) by making specific linguistic choices under certain contexts (Jaeger, 2010). 2.2 Topic shift in dialogues As a conversation unfolds, topic changes naturally happen when a current topic is exhausted or a new one occurs, which is referred to as topic shift in the field of Conversation Analysis (CA) (Ng and Bradac, 1993; Linell, 1998). In CA, the basic unit of topical structure analysis in dialogue is episode, which refers to a sequence of speech events that are “about” something specific in the world (Linell, 1998, ch 10, p 187). Here, to be precise, we use the term topic episode. According to related theories in CA, the for538 Table 1: Basic statistics of corpora Statistics Switchboard BNC # of dialogues. 1126 1346 Avg # of turns in dialogue 109 52 Avg # of sentences in dialogue 141 70 mation of topic episode is a joint accomplishment from two speakers and a product of initiatives and responses (Linell, 1990). When establishing a new topic jointly, one speaker first produces an initiatory contribution that introduce a “candidate” topic, and the other speaker makes a response that shares his perspective on that (Linell, 1998). From the information theoretic point of view, the initiator of a new topic plays a role of introducing novelty or surprisal into the context, while the other speaker, the responder, is more of a commenter or evaluator of information, who does not contribute as much in terms of novelty. Since previous studies have shown that the decrease of sentence entropy is correlated with topic shifts in written text (Genzel and Charniak, 2003; Qian and Jaeger, 2011), it is reasonable to expect the same effect to be present at the boundaries of topic episodes in dialogue. Furthermore, considering the initiator vs. responder discrepancy in speaker roles, we expect their entropy change patterns also to be different. 3 Overall Trend of Entropy in Dialogue In this section we examine whether the overall entropy increase trend is present in dialogue text. 3.1 Corpus data The Switchboard corpus (Godfrey et al., 1992) and the British National Corpus (BNC) (BNC, 2007) are used in this study. Switchboard contains 1126 dialogues by telephone between two native NorthAmerican English speakers in each dialogue. We use only a subset of BNC (spoken part) that contain spoken conversations with exactly two participants, so that the dialogue structures are consistent with Switchboard. 3.2 Computing Entropy of One Sentence We use language model to estimate the sentence entropy, which is similar to Genzel and Charniak (2003)’s method. A sentence is considered as a sequence of words, W = {w1, w2, . . . , wn}, and its per-word entropy is estimated by: H(w1 . . . 
wn) = −1 n X wi∈W log P(wi|w1 . . . wi−1) where P(wi|w1 . . . wi−1) is estimated using a trigram language model. The model is trained using Katz backoff (Katz, 1987) and Lidstone smoothing (Chen and Goodman, 1996). For the two corpora respectively, we extract the first 100 sentences from each conversation, and apply a 10-fold cross-validation, i.e., dividing all the data into 10 folds. Then we choose each fold as the testing set, and compute the entropy of each sentence in it, using the language model trained against the rest of the folds. 3.3 Eliminating sentence length effects Intuitively, longer sentences tend to convey more information than short ones. Thus, the per-word entropy of a sentence should be correlated with the sentence length, i.e., the number of words. This correlation is confirmed in our data by calculating the Pearson correlation between the per-word entropy and sentence length: For Switchboard, r = 0.258, p < 0.001; for BNC, r = 0.088, p < 0.001. Sentence length is found to vary with its relative position in text (Keller, 2004). Thus, in order to truly examine the variation pattern of sentence entropy within dialogue, we need to eliminate the effect of sentence length from it. We calculate a normalized entropy that is independent of sentence length in the following way. (This method is used by Genzel and Charniak (2003) to get the length-independent tree depth and branching factor of sentence.) First, we compute ¯e(n), the average per-word entropy of sentences of the same length n, for all lengths (n = 1, 2, . . . ) that have occurred ¯e(n) = 1/|L(n)| X s∈L(n) e(s) where e : S →R is the original per-word entropy of a sentence s, and L(n) = s|l(s) = n is the set of sentences of length n. Then we compute the sentence-length adjusted entropy measure that we want by e′(s) = e(s) ¯e(n) This normalized entropy measure sums up to 1, and is not sensitive to sentence length. In later part 539 of this paper, we demonstrate our results in both entropy and normalized entropy because the former is the direct measure of information content. 3.4 Results We plot the per-word entropy and normalized entropy of sentence against its global position, which is the sentence position from the beginning of the dialogue (Figure 1). It can be seen that both measures increase with global position. BNC shows larger slope than Switchboard, and the latter has a flatter curve but sharper increase at the early stage of conversations. To test the reliability of the observed increasing trend, we fit linear mixed-effect models using entropy and normalized entropy as response variables, and the global position of sentence as predictor (fixed effect), with a random intercept grouped by distinct dialogues. The lme4 package in R is used (Bates et al., 2014). The results show that the fixed effects of global position are significant for both measures in both corpora: Entropy in Switchboard, β = 4.2 × 10−3, p < 0.001; normalized entropy in Switchboard, β = 5.9 × 10−4, p < 0.001; entropy in BNC, β = 1.5 × 10−2, p < 0.001; normalized entropy in BNC, β = 1.4 × 10−3, p < 0.001). In particular, since the curves of Switchboard seem flat after a boost in the early phase (between 0 to 5 in global position), we fit extra models to examine whether the entropy increase for global positions larger than 10 is significant. The long-term changes are reliable, too: Entropy, β = 3.4 × 10−3, p < 0.001; normalized entropy, β = 5.1 × 10−4, p < 0.001. In sum, we find increasing entropy over the course of the whole dialogue. 
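For concreteness, the two measures used above, per-word entropy under a smoothed trigram model (Section 3.2) and its length-normalized variant (Section 3.3), can be sketched in Python. This is a minimal illustration, not the authors' code: it uses Lidstone smoothing only and omits the Katz backoff and 10-fold cross-validation described above, and the class and function names are ours.

import math
from collections import Counter, defaultdict

class LidstoneTrigramLM:
    """Trigram language model with Lidstone (add-gamma) smoothing."""
    def __init__(self, gamma=0.1):
        self.gamma = gamma
        self.tri = Counter()   # counts of (w1, w2, w3)
        self.bi = Counter()    # counts of (w1, w2) histories
        self.vocab = set()

    def fit(self, sentences):
        for words in sentences:
            padded = ["<s>", "<s>"] + list(words) + ["</s>"]
            self.vocab.update(padded)
            for a, b, c in zip(padded, padded[1:], padded[2:]):
                self.tri[(a, b, c)] += 1
                self.bi[(a, b)] += 1

    def prob(self, w, context):
        a, b = context
        v = len(self.vocab) or 1
        return (self.tri[(a, b, w)] + self.gamma) / (self.bi[(a, b)] + self.gamma * v)

def per_word_entropy(lm, words):
    """H(w1..wn) = -(1/n) * sum_i log2 P(wi | w_{i-2}, w_{i-1})."""
    padded = ["<s>", "<s>"] + list(words)
    return -sum(math.log2(lm.prob(w, (padded[i], padded[i + 1])))
                for i, w in enumerate(words)) / max(len(words), 1)

def normalize_by_length(entropies, lengths):
    """e'(s) = e(s) / mean per-word entropy of sentences with the same length."""
    by_len = defaultdict(list)
    for e, n in zip(entropies, lengths):
        by_len[n].append(e)
    mean_e = {n: sum(es) / len(es) for n, es in by_len.items()}
    return [e / mean_e[n] for e, n in zip(entropies, lengths)]

# Tiny usage example with toy data (real experiments train on held-out folds).
train = [["how", "are", "you"], ["i", "am", "fine", "thanks"]]
lm = LidstoneTrigramLM(gamma=0.1)
lm.fit(train)
test = [["how", "are", "you", "today"], ["fine", "thanks"]]
ents = [per_word_entropy(lm, s) for s in test]
norm = normalize_by_length(ents, [len(s) for s in test])

The sentence-level values produced in this way are the response variables to which the mixed-effects models above are fitted.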
These findings are consistent with previous findings on written text. 4 Topic Shift and Speaker Roles Since the topic structure of dialogue differs from written text, it is our interest to investigate how this difference affects the sentence entropy patterns. First, we identify the boundaries of topic episodes, and examine the presence of entropy drop effect at the boundaries. Second, we differentiate the speakers’ roles in initiating the topic episode, i.e., initiator vs. responder, and compare their entropy change patterns within the episode. 4.1 Topic segmentation There are multiple computational frameworks for topic segmentation, such as the Bayesian model (Eisenstein and Barzilay, 2008), Hidden Markov model (Blei and Moreno, 2001), latent topic model (Blei et al., 2003) etc. Considering that performance is not the prior requirement in our task, and also to avoid being confounded by segmentation method that utilize entropy measure per se, we use a less sophisticated cohesion-based TextTiling algorithm (Hearst, 1997) to carry out topic segmentation. TextTiling algorithm inserts boundaries into dialogue as a sequence of sentences. We treat the segments between those boundaries as topic episodes. For each episode within a dialogue, we assign it a unique episode index, indicating its relative position in the dialogue (e.g., from 1 to N for a dialogue that contains N episodes). For each sentence, we assign it a within-episode position, indicating its relative position within the topic episode. In Figure 2 we plot the entropy (and normalized) of sentence against the within-episode positions, grouped by episode index. Due to the space limit, we only present the first 6 topic episodes and the first 10 sentences in each episode. It can be seen that entropy drops at the beginning of topic episode, and then increases within the episode. To examine the reliability of the entropy increase within topic episodes, we fit linear mixed effect models using entropy (and normalized) as response variables, and the within-episode position of sentence as predictor (fixed effect), with a random intercept grouped by the unique episode index of each topic episode. We find a significant fixed effect of within-episode position on both measures for both corpora: Entropy in Switchboard, β = 5.9 × 10−4, p < 0.001; normalized entropy in Switchboard, β = 4.5 × 10−3, p < 0.001; entropy in BNC, β = 2.5 × 10−2, p < 0.001; normalized entropy in BNC, β = 3.0 × 10−3, p < 0.001. Our results show that when we treat the sentences in dialogue indiscriminately, their entropy change patterns at topic boundaries are consistent with previous findings on written text. 4.2 Identifying topic initiating utterances Having dialogue segmented into topic episodes, our next step is to identify each speaker’s role in initiating the topic. According to the theories 540 8 10 12 14 0 25 50 75 100 global position entropy (a) 0.8 0.9 1.0 1.1 1.2 1.3 0 25 50 75 100 global position normalized entropy (b) corpus BNC Switchboard Figure 1: Entropy (a) and normalized entropy (b) against global position of sentences (from 1 to 100). Shadow area indicates 95% bootstrapped Confidence Interval. reviewed in Section 2.2, the key to identify the speaker roles is to identify who produces the initiatory “candidate” topic. To be convenient, we use the term topic initiating utterance (TIU) to refer to the very first utterance produced by the initiator to bring up the new topic. Here, we give an empirical operational definition of TIU. 
Since we treat dialogue as a series of sentences, and apply the TextTiling algorithm to insert topic boundaries indiscriminately (without differentiating whether adjacent sentences are from the same speaker or not), it results in two types of topic boundaries: Within-turn boundaries, the ones located in the middle of a turn (i.e., from one speaker). Between-turn boundaries, the ones located at the gap between two different turns (i.e., from two speakers). Our survey shows that in Switchboard 27.2% of the topic boundaries are within turns, and 72.8% are between turns. For BNC the two proportions are 41.2% and 58.8% respectively. Intuitively, a within-turn topic boundary suggests that the speaker of the current turn is initiating the topic shift. On the other hand, a betweenturn boundary suggests that the following speaker who first gives substantial contribution is more likely to be the initiator of the next topic. Following this intuition, for within-turn boundaries, we define TIU as the rest part of current turn after the boundary. For between-turn boundaries, we define TIU as the whole body of the next relatively long turn after the boundary, whose length is larger than N words. Note that the determination of threshold N is totally empirical, because our goal is to identify the most probable TIU, based on the intuition that longer sentences tend to contain more information, and thus are more likely to initiate a new topic. For the results shown later in this paper, we use N = 5, and our experiments draw similar results for N ≥5. The operational definition of TIU is demonstrated in Figure 3. 4.3 The effect of topic initiator vs. responder Based on the operational definition of topic initiating utterance (TIU), we distinguish the two speakers’ roles in each topic segment: the author of TIU is the initiator of the current topic, while the other speaker is the responder. Again, we plot the sentence entropy (and normalized) against the within-episode position respectively, this time, grouped by speaker roles (initiator vs. responder) in Figure 4. It can be seen that at the beginning of a topic, initiators have significantly higher entropy than responders. As the topic develops, the initiators’ entropy decreases (Figure 4a) or stays relatively steady (Figure 4b), and the responder’s entropy increases. Together they form a convergence trend within topic episode. We use standard linear mixed models to examine the convergence trend observed, i.e., to test 541 episode 1 episode 2 episode 3 episode 4 episode 5 episode 6 8 10 12 1 2 3 4 5 6 7 8 9 10 1 2 3 4 5 6 7 8 9 10 1 2 3 4 5 6 7 8 9 10 1 2 3 4 5 6 7 8 9 10 1 2 3 4 5 6 7 8 9 10 1 2 3 4 5 6 7 8 9 10 within−episode position entropy corpus BNC Switchboard (a) Entropy vs. within-episode position episode 1 episode 2 episode 3 episode 4 episode 5 episode 6 0.8 0.9 1.0 1.1 1 2 3 4 5 6 7 8 9 10 1 2 3 4 5 6 7 8 9 10 1 2 3 4 5 6 7 8 9 10 1 2 3 4 5 6 7 8 9 10 1 2 3 4 5 6 7 8 9 10 1 2 3 4 5 6 7 8 9 10 within−episode position normalized entropy corpus BNC Switchboard (b) Normalized entropy vs. within-episode position Figure 2: Entropy (a) and normalized entropy (b) against within-episode position grouped by episode index. The x-axis in each block indicates the within-episode position of sentence. The number 1 to 6 on top of the blocks are episode indexes. 
Shadow area indicates 95% bootstrapped Confidence Interval Within-turn topic boundary Speaker A: Speaker A: Speaker B: Between-turn topic boundary topic initiating utterance (TIU) topic initiating utterance (TIU) …… Figure 3: Operational definition of topic initiating utterances (TIUs). The red vertical bars indicate the topic boundaries placed using TextTiling. A complete horizontal bar of one color represents a turn from one speaker (green for speaker A and blue for speaker B). The upper line shows the case of within-turn topic boundary, and the lower line shows the case of between-turn topic boundary. whether the initiators’ entropy reliably decreases and whether the responders’ entropy reliably increases. Models are fitted for initiators and responders respectively, using the entropy (and normalized) as response variables, and the withinepisode position as predictor (fixed effect), with a random intercept grouped by the unique episode index. Our models show that for the entropy measure, the fixed effect of within-episode position is reliably negative for initiators (Switchboard, β = −3.6 × 10−2, p < 0.001; BNC, β = −2.9 × 10−2, p < 0.05) and reliably positive for responders (Switchboard, β = 3.3 × 10−1, p < 0.001; BNC, β = 1.4 × 10−1, p < 0.001). For the normalized entropy measure, the fixed effect of within-episode position is insignificant for initiators, which means there is neither increase nor decrease, and is reliably positive for responders (Switchboard, β = 1.4 × 10−2, p < 0.001; BNC, β = 1.2 × 10−2, p < 0.001). Thus, the convergence trend is confirmed. The entropy change patterns of topic initiators (decrease or remain constant within topic episode) are inconsistent with previous findings that assert an entropy increase in written text (Genzel and 542 G G G G G G G G G G G G G G G G G G G G 6 8 10 1 2 3 4 5 6 7 8 9 10 within−episode position entropy group G G BNC: initiator BNC: responder Switchboard: initiator Switchboard: responder (a) G G G G G G G G G G G G G G G G G G G G 0.85 0.90 0.95 1.00 1.05 1 2 3 4 5 6 7 8 9 10 within−episode position normalized entropy group G G BNC: initiator BNC: responder Switchboard: initiator Switchboard: responder (b) Figure 4: Entropy (a) and normalized entropy (b) against within-episode position grouped by speaker roles (topic initiator vs. topic responder) Charniak, 2002, 2003), which will be discussed in the next section. 5 Discussion 5.1 Summary Our main contribution is that we find new entropy change patterns in dialogues that are different from those in written text. Specifically, when distinguishing the speakers’ roles by topic initiator vs. responder, we see that the initiator’s entropy decreases (or remain steady) whilst the responder’s increases within a topic episode, and together they form a convergence pattern. The partial trend of entropy decrease in topic initiators seems to be contrary to the principle of entropy rate constancy, but as we will discuss next, it is actually an effect of the unique topic shift mechanism of dialogues that is different from written text, which does not violate the principle. From an information theoretic perspective, we view dialogue as a process of information exchange, in which the interlocutors play the roles of information provider and receiver, interactively within each topic episode. 
Beyond differences in speaker roles, we do observe that sentence entropy increases with its global position in the dialogue, which is consistent with written text data (Genzel and Charniak, 2002, 2003; Qian and Jaeger, 2011; Keller, 2004). Thus, overall speaking, spoken dialogue do follow the general principle of entropy rate constancy. 5.2 Dialogue as a process of information exchange By combining topic segmentation techniques and fine-grained discourse analysis, we provide a new angle to view the big picture of human communication: the perspective of how information is distributed between different speakers. One critical difference between written text and spoken text in conversation is that there is only one direct input source of information in the former, i.e., the author of the text, but for the latter, there are multiple direct input sources, i.e., the multiple speakers. That means, when language production is treated as a process of choosing proper words (or other representations) within a context, the definition of “context” is different between the two categories of text. In written language (see Equation 1 in Section 2), Ci, the global context of a word Xi, is assumed to be all the words in preceding sentences. This is a reasonable assumption, because when one author is writing a complete piece of text, he may organize information smoothly to keep the entropy rate constant. Within a dialogue, for any upcoming utterance, all preceding utterances together can be viewed as the shared context for the two speakers. To help us un543 derstand the nature of this shared context, we propose the following mental experiment. Suppose we, as researchers and “super-readers”, observe the transcript of a dialogue between interlocutors A and B. To us, all utterances are based upon the context of previous ones, which is why we can observe consistent entropy increase within the whole dialogue (Figure 1 in Section 3). Also, to us, a new topic episode in dialogue is just like a new paragraph in written text, within which we can observe steady entropy increase without differentiating the utterances from the two speakers. By contrast, let’s look at the context used by the two speakers. They will not necessarily leverage the preceding utterances as a coherent context. A topic initiator introduces new information from a context outside of the dialogue. Therefore the mutual information between the initiator’s current sentence and the previous context is reduced, which causes the sentence entropy to start high before decreasing. On the other side, a topic responder relies much on the previous shared context (because he is not an active topic influencer). The responder is dynamically updating the context as the initiator pours new information into the mix. This causes the mutual information with the previous context to be high, and thus the sentence entropy start low before increasing again. We think that the respective cognitive load in the topic responder imposed by following the other speaker in a new topic direction may be complemented by reduced information at the language level. This is, again, compatible with a cognitive communication framework that imposes a tendency to limit or keep constant overall information levels. It is also an example of extralinguistic information that causes complementary entropy changes in a speaker’s language (cf., Doyle and Frank, 2015). 
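In the notation of Equation 1, this account can be summarized as follows. The restatement is illustrative, resting on the assumption that initiator and responder effectively condition on different amounts of shared dialogue context; it is not a formal derivation given in the paper:

\[
H(X_i \mid L_i) \;=\; H(X_i \mid C_i, L_i) \;+\; I(X_i ; C_i \mid L_i).
\]

At the start of a topic episode the mutual-information term is small for the initiator, whose material comes from outside the dialogue, so the local entropy starts high and then falls or stays flat; it is large for the responder, whose contributions lean on the shared context, so the local entropy starts low and rises. Together this yields the convergence pattern of Figure 4.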
5.3 Dialogue as a process of building up common ground Our findings can also be explained by a theory of grounding (Clark and Brennan, 1991; Clark, 1996) of communication. Dialogue can be seen as a joint activity during which multiple speakers contribute alternatively to build common ground (Clark and Brennan, 1991). Common ground can be understood as the mutual knowledge shared between interlocutors. Clark (1996) proposes that joint activities have a number of characteristics: First, participants play different roles in the activity. Second, a major activity is usually comprised of sequences of subactivities, and the participants’ role may differ from sub-activity to next. Third, to achieve the goal of the activity, it requires coordination between participants of different roles. In our design, the local roles of topic initiator vs. topic responder correspond to roles suggested by the joint-activity theory. The initiator sets up the dominant goal of the sub-activity, i.e., developing a new topic episode, and the responder joins him or her in order to achieve the goal. The converging sentence entropy indicates that the mutual knowledge between them is accumulating, i.e., the common ground is being gradually built up. Once the goal is achieved, i.e., the current topic is fully developed, a new goal will emerge, and a new common ground needs to be built again, which is sometimes accompanied by a change in participants roles. 5.4 Convergence of linguistic behaviors One mechanism that may lead to the convergence of sentence entropy may be the interactive alignment of linguistic features between speakers (Pickering and Garrod, 2004); repeating words and syntactic structure leads to increased similarity. The entropy-converging pattern also reflects the convergence of higher-level dialogical behavior, say, speakership occupancy; the discrepancy between the two speakers’ roles gradually becomes smaller, i.e., the “speaker” becomes more of a “listener”, and vice versa. A psychologist might treat the fragmented topic episodes in dialogues as the locus where interlocutors build temporarily shared understanding (Linell, 1998), through the process of “synchronization of two streams of consciousness” (Schutz, 1967). 6 Conclusion In this study, we validate the principle of entropy rate constancy in spoken dialogue, using two common corpora. Besides the results that are consistent with previous findings on written text, we find new entropy change patterns unique to dialogue. Speakers that actively initiate a new topic tend to use language with higher entropy compared to the language of those who passively respond to the topic shift. These two speaker’s respective entropy 544 levels converge as the topic develops. A model of this phenomenon may provide explanations from the perspectives of information exchange, common ground building, and the convergence of linguistic behaviors in general. With this, we put forward what we think is a new perspective to analyzing dialogue. As much dialogue happens for the purpose of information exchange, loosely defined, it makes sense to apply information-theoretic models to the semantics as well as the form of speaker’s messages. The quantitative approach taken here augments rather than supplants speech acts (Searle, 1976), identifying who leads the dialogic process by introducing topics and shifting them. 
Furthermore, our approach actually provides a unified perspective of dialogue that combines Grounding theory (Clark and Brennan, 1991) and Interactive Alignment (Pickering and Garrod, 2004). These two models are often described as opposite; by applying each theory to the dialogic structure between and within topic episodes, we find both of them can explain our findings. The entropy measure of information content quantifies interlocutors’ contributions to common ground and also allows us to show convergence patterns. This unified information-theoretic perspective may eventually allow us to identify further systematic patterns of information exchange between dialogue participants. There is, of course, no reason to think that multi-party dialogue should work differently; we leave the empirical examination as an open task. Acknowledgments This work has been funded by the National Science Foundation under CRII IIS grant 1459300. References Douglas Bates, Martin Mächler, Ben Bolker, and Steve Walker. 2014. Fitting linear mixedeffects models using lme4. arXiv preprint arXiv:1406.5823 . David M Blei and Pedro J Moreno. 2001. Topic segmentation with an aspect hidden markov model. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, New Orleans, LA, USA, pages 343–348. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. The Journal of Machine Learning Research 3:993–1022. BNC. 2007. The British National Corpus, version 3 (BNC XML Edition). Stanley F Chen and Joshua Goodman. 1996. An empirical study of smoothing techniques for language modeling. In Proceedings of the 34th Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, pages 310–318. Herbert H Clark. 1996. Using language. Cambridge University Press. Herbert H Clark and Susan E Brennan. 1991. Grounding in communication. Perspectives on socially shared cognition 13(1991):127–149. Gabriel Doyle and Michael C Frank. 2015. Shared common ground influences information density in microblog texts. In Proceedings of NAACLHLT. Denver, CO, USA. Jacob Eisenstein and Regina Barzilay. 2008. Bayesian unsupervised topic segmentation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Honolulu, HI, USA, pages 334–343. Dmitriy Genzel and Eugene Charniak. 2002. Entropy rate constancy in text. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, Philadelphia, PA, USA, pages 199–206. Dmitriy Genzel and Eugene Charniak. 2003. Variation of entropy and parse trees of sentences as a function of the sentence number. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Sapporo, Japan, pages 65–72. John J Godfrey, Edward C Holliman, and Jane McDaniel. 1992. Switchboard: Telephone speech corpus for research and development. In Acoustics, Speech, and Signal Processing, 1992. ICASSP-92., 1992 IEEE International Conference on. IEEE, volume 1, pages 517–520. Marti A Hearst. 1997. Texttiling: Segmenting text into multi-paragraph subtopic passages. Computational linguistics 23(1):33–64. 545 T Florian Jaeger. 2010. Redundancy and reduction: Speakers manage syntactic information density. Cognitive Psychology 61(1):23–62. T Florian Jaeger and Roger P Levy. 2006. 
Speakers optimize information density through syntactic reduction. In Advances in Neural Information Processing Systems. pages 849–856. Dan Jurafsky and James H Martin. 2014. Speech and language processing. Pearson. Slava M Katz. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. Acoustics, Speech and Signal Processing, IEEE Transactions on 35(3):400–401. Frank Keller. 2004. The entropy rate principle as a predictor of processing effort: An evaluation against eye-tracking data. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Barcelona, Spain, volume 317, page 324. Per Linell. 1990. The power of dialogue dynamics. Harvester Wheatsheaf. Per Linell. 1998. Approaching dialogue: Talk, interaction and contexts in dialogical perspectives, volume 3. John Benjamins Publishing. Sik Hung Ng and James J Bradac. 1993. Power in language: Verbal communication and social influence. Sage. Martin J Pickering and Simon Garrod. 2004. Toward a mechanistic psychology of dialogue. Behavioral and Brain Sciences 27(02):169–190. Ting Qian and T Florian Jaeger. 2011. Topic shift in efficient discourse production. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society. Boston, MA, USA, pages 3313–3318. Alfred Schutz. 1967. The phenomenology of the social world. Northwestern University Press. John R Searle. 1976. A classification of illocutionary acts. Language in Society 5(01):1–23. Claude Elwood Shannon. 1948. A mathematical theory of communication. The Bell System Technical Journal 27:379–423. David Traum. 2003. Issues in multiparty dialogues. In Advances in Agent Communication, Springer, pages 201–211. 546
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 547–557, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Finding the Middle Ground - A Model for Planning Satisficing Answers Sabine Janzen Saarland University Saarbr¨ucken, Germany [email protected] Wolfgang Maaß Saarland University Saarbr¨ucken, Germany [email protected] Tobias Kowatsch University of St. Gallen St. Gallen, Switzerland [email protected] Abstract To establish sophisticated dialogue systems, text planning needs to cope with congruent as well as incongruent interlocutor interests as given in everyday dialogues. Little attention has been given to this topic in text planning in contrast to dialogues that are fully aligned with anticipated user interests. When considering dialogues with congruent and incongruent interlocutor interests, dialogue partners are facing the constant challenge of finding a balance between cooperation and competition. We introduce the concept of fairness that operationalize an equal and adequate, i.e. equitable satisfaction of all interlocutors’ interests. Focusing on Question-Answering (QA) settings, we describe an answer planning approach that support fair dialogues under congruent and incongruent interlocutor interests. Due to the fact that fairness is subjective per se, we present promising results from an empirical study (N=107) in which human subjects interacted with a QA system implementing the proposed approach. 1 Introduction For building dialogue systems that cope with contradictions and individual interests of dialog partners, text planning is required to process incongruent and congruent interests of interlocutors. So far, research on dialogue systems focusses on supporting dialogues that are fully aligned with anticipated user interests, e.g., (Hovy, 1991; Grosz and Kraus, 1996; Moore and Paris, 1993; Lochbaum, 1998; Rich and Sidner, 1997), and, thus, maximizing cooperativeness (Bunt and Black, 2000, 191 p. 5). Few approaches exist that investigate text planning with pure conflict, e.g., (Jameson et al., 1994; Hadjinikolis et al., 2013; Black and Atkinson, 2011; Prakken, 2006). When considering dialogues with congruent as well as incongruent interlocutors interests, dialogue partners are facing the constant challenge of finding a balance between cooperation and competition (Parikh, 2010). We introduce the concept of fairness that operationalize an equal and adequate, i.e. equitable satisfaction of all interlocutors’ interests (Oxford Dictionaries, 2016). Focusing on Question-Answering (QA) settings, we describe an answer planning approach that support fair dialogues under congruent and incongruent interests of interlocutors. Due to the fact that fairness is subjective per se, we present results from an empirical study in which human subjects interacted with a QA system in various dialogue settings. When determining appropriate answers in text planning, approaches range from (1) wrong answer avoidance concepts technically checking the correctness of answers, e.g., Dong et al. (2011), and (2) opponent models in persuasion dialogues for choosing most suitable arguments, e.g., Hadjinikolis et al. (2013), to (3) the prediction of emotions of interlocutors to generate answers, e.g., Hasegawa et al. (2013). 
Here, related work is relevant that focuses on the determination of appropriate answers by processing concepts like users’ intentions (e.g., Levelt (1993)), desires (e.g., Rao & Georgeff (1995)), preferences (e.g., Li et al. (2013)), objectives (e.g., Schelling (1960)) and goals (e.g., Traum et al. (2008)) which we will hereafter subsume under the term motives. Motives refer to objectives or situations that interlocutors would like to accomplish, e.g., to find the best price when shopping. According to the belief-desire-intention model, motives can be described as desires in the sense of a motivational state (Georgeff et al., 1998). Motives do not in547 volve the mandatory purpose of being recognizable by other participants; so they are equivalent with the concept of intentions in (Levelt, 1993). Regarding the processing of congruent and incongruent motives, existing approaches rather focus on motives of single interlocutors or on joint motives, e.g., Paquette (2012), Li et al. (2013). In the following, the aggregation of congruent and incongruent interlocutor motives in dialogues will be described as mixed motives. In this work, we propose a model that formalizes answer planning as psychological game (Bjorndahl et al., 2013) embedded in text planning approaches (Mann and Thompson, 1986; Moore and Paris, 1993) for creating dialogues perceived as fair by all interlocutors. Since traditional formalization of motives by means of utility functions is not sufficient to handle complex interactions as given in the considered dialogue setting (Bjorndahl et al., 2013), psychological games enrich classical game settings with user models, i.e. in our case explicit representations of mixed motives. One appeal of the model is the consideration of answer planning as psychological game that lifts the process of finding appropriate answers from the short-term linguistic level to the long-term motive level in contrast to other approaches (van Deemter, 2009; Stevens et al., 2015). Interlocutors do not have preferences for answers, but try to satisfy motives. So, we assume that this approach enables a more sophisticated simulation of human behavior in mixed motive interactions as well as the establishment of “cooperativeness in response formulation” (Bunt and Black, 2000, p. 5) for creating dialogues perceived as fair. By exemplifying the model within a QA system as natural language sales assistant for conducting sales dialogues, we were able to evaluate the proposed approach in an empirical user study (N=107) in terms of perceived fairness of created dialogues with promising results. 2 Planning Answers given Mixed Motives Adopting a computational pragmatics perspective, we intend to compute relevant linguistic aspects of answers based on contextual aspects given by mixed motives (Bunt and Black, 2000, p. 3). When searching for answers that support an equitable satisfaction of mixed motives during dialogue, Ωrepresents the solution space with potential answers. An objective function f : Ω→R assigns values to all answers x ∈Ωfor representing their potential in satisfying motives of interlocutor i ∈I. Of course, all interlocutors I prefer answers x that satisfy best their motives; xa ≻xb ⇔ f(xa) > f(xb). So, the goal would be to find an answer x ∈Ωwith highest satisfaction of motives f(x) of interlocutor i ∈I in the sense of an optimal solution x*; i.e. f(x*) = max{f(x)|x ∈Ω}. But, in order to achieve fair outcomes regarding an equitable, i.e. 
equal and adequate satisfaction of mixed motives, this definition is not sufficient. First, decision making takes places in the context of social dialogue interaction, i.e. answers have to be selected based on multiple objective functions since motives of all interlocutors i ∈I shall be satisfied; fi : Ω→R. For capturing the aspect of equal motive satisfaction, the potential of answers has to be represented absolutely and relatively. In other words, the performance of an answer in satisfying motives of an interlocutor i ∈I is combined with its performance in satisfying motives of counterparts −i ∈I; max{fi,-i(x)|x ∈Ω}. Second, the aforementioned conflict between cooperation and competition needs to be solved adequately. Since it is impossible to find an answer satisfying all motives of all interlocutors at any time in the dialogue, we search for a compromise in form of a solution i.e. an answer x+ with a minimum quality s so that f(x+) ≥s. Adopting the concept of satisficing by Simon (1956), an approach that attempts to find the best alternative available in contrast to optimal decision making, the goal is to find an answer x ∈Ωwith highest sufficient satisfaction of motives f(x) ≥s of all interlocutors I in the sense of a satisficing solution x+; i.e. f(x+) = max{fi,-i(x) ≥s|x ∈Ω}. 3 Model for Planning Satisficing Answers To capture these issues, we defined a model for planning satisficing answers in dialogues with mixed motives. In the considered setting, a user with motives poses questions to a QA system that takes the role of a proxy for indirect interlocutors, e.g., retailers in online shopping scenarios. The QA system adopts their motives and develops strategies to satisfy them. Adopted motives as well as user motives that are anticipated by the system represent mixed motives in the dialogue. Task of the system is to process these mixed mo548 tives with the objective to create a dialogue that is perceived as fair by all interlocutors after a finite number of question answer pairs. As full satisfaction of all motives of all interlocutors at any time in a mixed motive dialogue is not possible, the QA system has to find a compromise, i.e. it has to plan answers that satisfice mixed motives during dialogue. Let us start by describing an example dialogue between a customer and a retailer in a shopping scenario: Q: Is the range of this wifirouter appropriate for a house with 3 floors? A: In case of 3 floors, I would recommend an additional wifirepeater that got very good feedback by other customers. You can buy both router and repeater as a bundle with 15% discount. In this dialogue snippet, the customer intends to get comprehensive product information regarding the wifirouter; the retailer also wants to satisfy informational needs of the customer to establish excellent services. Beyond these congruent motives, the retailer wants to increase revenue and to raise sales figures. A balance between mixed motives is found by giving information regarding the wifi router as well as preferences of other customers followed by a discounted bundle offer. In order to implement this kind of behavior into dialogue systems, the model for planning satisficing answers separates linguistics from conceptual non-linguistic aspects (Traum and Larsson, 2003; Allen et al., 2001) and consists of three main modules: linguistic module, mapper and mixed motive module (cf. Fig. 1). The linguistic module takes care for handling user questions as input as well for generating answers as output. 
Essential components of the linguistic module are the linguistic intention model and flexible text planning technologies. For the latter, we apply text plans according to the Rhetorical Structure Theory (Mann and Thompson, 1986) in form of plan operators (Moore and Paris, 1993) for generating answers. Each plan operator consists of a single compulsive part, called nucleus, that is related with diverse optional text segments, mentioned as satellites. We assume that beside supporting the effect of the nucleus, satellites represent an opportunity to satisfice mixed motives during dialogue. Satellites are linked with entities of the linguistic intention model, means linguistic intentions that capture the intended effects, i.e. functions of satellites within answers (Grosz and Sidner, 1986). By means of second module - the mapper - linguistic intentions are mapped onto motives and vice versa (cf. Fig. 1). Therefore domain-specific knowledge about correlations between linguistic intentions and mixed motives is required that is induced by a domain configurator and has to be derived empirically. Last, the mixed motive module combines an explicit representation and situated processing of mixed motives (Cohen and Levesque, 1990) with a game-theoretical equilibrium approach (Nash, 1951) to establish a psychological game setting (Bjorndahl et al., 2013) (cf. Fig. 1). Our approach operates by assuming that interlocutors are rational. That means they act strategically and purposively in pursuit of their own motives that they try to maximally satisfy. Therefore, we assume that game theory is an adequate prospect to deliver the analytical tools for planning answers in the context of mixed motives. In game theory literature, equilibrium concepts are widely applied, e.g., Nash equilibrium (Nash, 1951). A Nash equilibrium is an outcome that holds because no involved actor has a rational incentive to deviate from it, i.e., the final result is “good enough” for all actors in the sense of a happy medium. Adapted to this work, this refers to a satisficing combination of motives at a particular time in the dialogue, that is good enough for planning an answer that supports equitable satisfaction of mixed motives. 3.1 Concepts From a conceptual perspective, the model uses several core entities. First, we have players p ∈ P that represent interlocutors I. Players have domain-specific motives m for participating in the dialogue. For each player p ∈P, we assume a MotiveSet that consists of individual motives, IndM, as well as of motives the player, i.e. the interlocutor anticipates from counterparts, AntM. MotiveSetp = IndMp + AntM-p (1) Mixed motives MM are represented by the nonredundant aggregation of 1...n MotiveSet of players p ∈P in the dialogue. MM = {MotiveSetp1 . . . MotiveSetpn} (2) All motives m ∈MM are operationalized by means of real-valued weights for each player covered by a weight vector −−−−−→ Weightm. Motives are formed earlier and persist during dialogue, but 549 Figure 1: Model for planning satisficing answers in dialogues with mixed motives players deliberate about weights of motives continuously (Bratman, 1987). The achievement and thereby satisfaction of motives is supported by linguistic intentions li ∈LI that are satisfied by satellites sat that are offered by plan operators and integrated into an answer. That means motives are achieved, if answers were given, that contributed to satisfaction of these motives. 
3.2 Algorithm and example For introducing the proposed approach, we will give an example course of satisficing answer planning starting with user question and ending with system answer. The description of the process will be supported by a model view marked with step numbers in Fig. 1 as well as by an algorithmic view in Alg. 1. In the example, we apply domainspecific knowledge that was derived empirically in the retailing domain. Although, in literature review, customer and retailer motives in sales dialogues were specified. Combinations of these motives were analyzed in simulated sales conversations between real retailers (N=3) and subjects acting as customers (N=12). Recorded as video files, conversations and identified motives were validated in a web-based user study (N=120) regarding their naturalness and relevance. Sales conversations were transcribed, aggregated to a text corpus and analyzed regarding question and answer structures. So, the domain-specific knowledge representation used in the example bases on results of this empirical analysis and covers all core model concepts introduced before: a mixed motive model with empirically derived default weights consisting of 19 customer and 4 retailer motives (cf. Tab. 1); 39 question and 33 answer schemata (McKeown, 1985), 31 plan operators (Moore and Motive m ∈MM Weight pa Weight pb High level of reliability of product (mR) 1.90 1.00 Fair price of product (mFP) 0.70 0.00 Exclusive design of product (mED) 0.53 1.00 Comprehensive product information (mCPI) 1.67 1.00 Improving customer relationship (mICR) 0.00 4.00 Increase revenue (mIR) 4.00 4.00 Table 1: Extract of domain-specific mixed motive model with default weights for player (pa) and player (pb) representing customer and retailer Paris, 1993), 21 satellites with 18 linguistic intentions (cf. Tab. 2) and 14 rhetorical relations (Hobbs, 1978; Hovy, 1993; Mann and Thompson, 1986), and exemplary product information. Imagine a sales conversation regarding consumer electronics between customer and retailer represented by player (pa) and player (pb). Sets of motives by players are equal regarding the motives included but differ in weights of individual and anticipated motives by players (cf. Tab. 1). MM = MotiveSetpa + MotiveSetpb (3) MotiveSetpa = IndM pa + AntM pb MotiveSetpb = IndM pb + AntM pa The customer poses a question concerning products with a specific feature: “How many tablets offer the wififeatures 802.11A, 802.11B, 802.11G, 802.11n?” Based on the identified question schema as well as the determined communicative function of the question, a dialogue system that instantiates the proposed model selects an appropriate plan operator (cf. Fig. 1, step 1 & 2). 550 Ling. 
Intention li ∈LI supports m ∈ MM Description Advantages (liA) {mICR, mFP, mED, mIPD, mR, mHLS, mACB, mSCD, mI, mHLP, mPB, mQ} Integration of information about advantages of product(s) into answer External Review (liER) {mSI} Presentation of customer reviews My Product (liMP) {mICR, mSP, mR, mHLC} Mentioning products that could be interesting for customer Functionality (liF) {mEU} Extension of answer regarding product functions Opinion (liO) {mHEM, mSP, mR, mSI} Integration of subjective (retailer) opinion into answer Table 2: Extract of domain-specific linguistic intentions li ∈LI with supported motives and description 3.2.1 Definition of set S and determination of SatisfactionSet In our case, a plan operator named NUMBER OF PRODUCTS is selected that offers an obligatory nucleus and a set S of four optional satellites (cf. Fig. 2): S = {satAAS, satVER, satDF, satEUP} (4) Overall objective is to determine set S+ out of set S, that consists of satellites that - besides supporting the effect of the nucleus - contribute to satisficing mixes motives of customer and retailer during dialogue (cf. Alg. 1). According to (Grosz and Sidner, 1986; Moore and Paris, 1993), satellites are linked with linguistic intentions; i.e. they fulfill certain functions regarding the overall dialogue. Set S is sent to the linguistic intention handler that specifies the SatisfactionSet (cf. Fig. 1, step 3 and Alg. 1, line 1-4). This set covers linguistic intentions that can be satisfied by satellites of set S (cf. Tab. 2): SatisfactionSet = {liA, liER, liF, liMP} (5) Figure 2: Plan operator NUMBER OF PRODUCTS 3.2.2 Mapping linguistic intentions onto mixed motives Next, linguistic intentions have to be mapped onto motives. The m:n correlation between linguistic intentions and motives (Moore and Paris, 1993) is domain-specific, has to be specified empirically and is induced by the domain configurator (cf. Fig. 1, step 5). Each motive is supported by a set of linguistic intentions that contribute to the achievement of this motive (cf. Fig. 3). On the other hand, each linguistic intention can support the achievement of several motives. By processing Figure 3: Correlations between motives (M) and linguistic intentions (LI) the supports-relation between both concepts, the mapper specifies the RelevanceSet based on the SatisfactionSet. The resulting RelevanceSet represents all mixed motives relevant for planning the actual answer (cf. Alg. 1, line 5-8): RelevanceSet = {mQ, mR, mIPD, mHCS, mACB, mICR, mI, mSCD, mPB, mEU, mFP, mED, mHLP, mSI, mSP, mHLC} (6) 3.2.3 Satisficing mixed motives Having identified the RelevanceSet, we now intend to identify a satisficing combination of the involved motives. Therefore, the mapper sends the RelevanceSet to the mixed motive model handler for specifying the SatisficingSet that consists of motives that (1) are sufficiently interesting for all interlocutors (i.e. weighted positively), and (2) have preferably low conflict potential (i.e. small differences in player weights) (cf. Fig. 1, step 6). Satisficing mixed motives is considered as multiplayer non-zero-sum game that is played for infinitely many rounds, more precisely pairs of user questions and system answers. In each round of the game, it has to be decided which motives 551 Algorithm 1 Determining set S+ of satisficing satellites Require: set of default satellites S = {sat1 . . . satn}; set of players P = {p1 . . . pn}; set of mixed motives MM = {m1 . . . mn}; set of linguistic intentions LI = {li1 . . . 
lin} Ensure: set of satisficing satellites S+ = {sat1 . . . satn} 1: Initialize SatisfactionSet = {li1 . . . lin ∈LI|li.isSatisfiedBy(sat ∈S)} 2: for ∀sat ∈S do 3: SatisfactionSet ⇐SatisfactionSetsat ∪SatisfactionSet 4: end for 5: Initialize RelevanceSet = {m1 . . . mn ∈MM|m.isSupportedBy(li ∈SatisfactionSet)} 6: for ∀li ∈SatisfactionSet do 7: RelevanceSet ⇐RelevanceSetli ∪RelevanceSet 8: end for 9: Determine StrategySet ⇐P(RelevanceSet) 10: Initialize StrategyProfiles = {−→s 1 . . . −→s n} 11: for ∀s ∈StrategySet; ∀p ∈P do 12: Calculate LocalPayout(s) 13: Define −→s = {s1 . . . sn ∈StrategySet|LocalPayout(sp *|s-p) ≥LocalPayout(sp|s-p)} 14: StrategyProfiles.add(−→s ) 15: end for 16: for ∀−→s ∈StrategyProfiles do 17: if LocalPayout(sp *|s* -p) ≥LocalPayout(sp|s* -p) then 18: −→s * ⇐−→s 19: end if 20: end for 21: Determine SatisficingSet = {m1 . . . mn ∈s ∈−→s *} 22: if SatisficingSet ̸= ∅then 23: Initialize SupportSet = {li1 . . . lin ∈LI|li.supports(m ∈SatisficingSet)} 24: for ∀m ∈SatisficingSet do 25: SupportSet ⇐SupportSetm ∪SupportSet 26: end for 27: Return S+ = {sat1 . . . satn ∈S|sat.satisfies(li ∈SupportSet ∩SatisfactionSet)} 28: else 29: Return S+ = {∅} 30: end if of the RelevanceSet are selected as trigger for planning an answer that supports the creation of dialogues perceived as fair by all interlocutors. The equilibrium identifier specifies strategy sets Sp = {s1 . . . sn} for all players P by generating the power set of the RelevanceSet (cf. Fig. 1, step 7 and Alg. 1, line 9). Each of the 137 resulting strategies s = {m1 . . . mn} represents a possible combination of motives or the empty set and is measured by a normalized local payout for each player based on weights of involved motives. Spa = Spb = {s1 . . . s137}; s18 = {mQ, mR} (7) LocalPayoutpa,s18 = 0.1280; LocalPayoutpb,s18 = 0.0090 Strategy sets of players are identical regarding types of covered strategies, but differ in local payouts that can be expected by players when playing this strategy as shown in eq. (7). As players prefer those strategies that provide high local payouts, the equilibrium identifier identifies strategies s* ∈Sp for each player that represent best answers regarding the behavior of counterparts −→ s-p: LocalPayout(s*, −→ s-p) ≥LocalPayout(s, −→ s-p), ∀s ∈Sp (8) Best answers of players in the sense of highest local payouts are aggregated to 17 strategy profiles, each a vector consisting of two strategies one for each player (cf. Alg. 1, line 10-15): −→s = {sx, sy}; sx ∈Spa, sy ∈Spb. Next, strategy profiles are selected that meet the Nash equilibrium condition, i.e. those strategy profiles exclusively cover strategies that represent mutual best answers of players (cf. Alg. 1, line 16-20): LocalPayout(s*, −→ s-p *) ≥LocalPayout(s, −→ s-p *) (9) ∀s ∈−→ s1 . . . −→ sn In our example, we find two Nash equilibria. Those two strategy profiles represent best answers for the player p as well as the whole group of players P in the sense of a solution with minimum quality. No player has an incentive to deviate from those strategy profile because then its local payout would decrease. With −→s = {s36, s36}, we select the non-pareto-dominant option for finding the strategy profile with the lowest difference in local payouts following the idea of the model to 552 create a balance between mixed motives. With each answer planning, players generate local payouts that are added during the course of dialogue to global payouts. 
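To make the equilibrium step concrete, the selection of a satisficing motive combination (Alg. 1, lines 9-21) can be sketched in Python. The payout function below is a simplified stand-in, since the exact normalized LocalPayout is not reproduced in this excerpt, and the motive names, weights, and function names are illustrative placeholders rather than the system's implementation or the empirical defaults of Table 1.

from itertools import combinations

def power_set(items):
    """All subsets of the relevance set, including the empty set (Alg. 1, line 9)."""
    items = list(items)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def local_payout(weights, own, other):
    """Toy stand-in for LocalPayout: a player's weighted coverage of the motives
    triggered by the joint profile, discounted by the size of that set."""
    joint = own | other
    if not joint:
        return 0.0
    return sum(weights.get(m, 0.0) for m in joint) / (1.0 + len(joint))

def nash_satisficing(relevance_set, weights_a, weights_b):
    """Return the motive combination of an equilibrium profile with the
    smallest payout difference (cf. Alg. 1, lines 10-21)."""
    strategies = power_set(relevance_set)
    equilibria = []
    for sa in strategies:
        for sb in strategies:
            pa = local_payout(weights_a, sa, sb)
            pb = local_payout(weights_b, sb, sa)
            # Nash condition: neither player can improve by deviating alone.
            if all(local_payout(weights_a, alt, sb) <= pa for alt in strategies) and \
               all(local_payout(weights_b, alt, sa) <= pb for alt in strategies):
                equilibria.append((sa, sb, abs(pa - pb)))
    if not equilibria:
        return frozenset()
    sa, sb, _ = min(equilibria, key=lambda e: e[2])  # balance the two payouts
    return sa | sb  # the SatisficingSet

# Hypothetical weights for illustration only (not the empirical defaults).
weights_customer = {"reliability": 1.9, "fair_price": 0.7, "product_info": 1.7}
weights_retailer = {"reliability": 1.0, "customer_relationship": 4.0}
relevance = {"reliability", "fair_price", "product_info", "customer_relationship"}
print(sorted(nash_satisficing(relevance, weights_customer, weights_retailer)))

The tie-break by smallest payout difference mirrors the choice of the non-pareto-dominant equilibrium above, i.e. the profile that best balances the two players' payouts.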
Instead of gaining high global payouts, the objective of the model is to balance payouts of players during dialogue or to approximate them in case of drifting apart. We assume that similar global payouts of players can be regarded as evidence for satisficed mixed motives. Based on the selected strategy profile, involved motives are aggregated to the SatisficingSet = {mR, mICR} that represents a combination of mixed motives that is satisficing for all players in this time in the dialogue (cf. Alg. 1, line 21). 3.2.4 Mapping mixed motives onto linguistic intentions The resulting SatisficingSet is forwarded to the mapper for mapping back motives onto linguistic intentions (cf. Fig. 1, step 8 & 9). In case, the SatisficingSet covers zero motives, no mapping takes place, the process ends and none of the satellites in set S, cf. eq (4), will be considered in the actual answer planning. Otherwise, the mapper determines the set of supporting linguistic intentions by processing the inverse is-supported-byrelation between motives and linguistic intentions (cf. Alg. 1, line 22-30) (cf. Fig. 3). Comparing this set with the SatisfactionSet (cf. eq. (5)), an intersection called SupportSet is created that represents the set of linguistic intentions that will be satisfied in current answer planning: SupportSet = {liA, liMP} (10) 3.2.5 Determination of set S+ and generation of answer The linguistic intention handler determines the final set of satellites S+ by analyzing 1:1 relations between linguistic intentions of the SupportSet and satellites of the set S (cf. Fig. 1, step 10 and Alg. 1, line 27). The resulting set S+ = {satAAS, satEUP} consists of two satellites: Alternatives Advantages Survey (satAAS) and Emotion User Preferences (satEUP). The text plan lib handler adjusts the final text plan regarding the selected satellites before sending it to the answer generator (cf. Fig. 1, step 11 & 12). Last, the text plan provided by the plan operator NUMBER OF PRODUCTS is transformed into an answer. Thereby, answer schemata referenced by nucleus as well as satellites of set S+ are instantiated (cf. Fig. 1, step 13): Q: “How many tablets offer the wififeatures 802.11A, 802.11B, 802.11G, 802.11n?” A: “[nuc The following tablets offer this feature: Sony SGPT122 Xperia.] [satAAS Due to its features, e.g., storage capacity: 32GB, Sony SGPT122 Xperia has some advantages compared to other products in this category.] [satEUP How about having a look at Sony SGPT122 Xperia by Sony?] 3.2.6 Summary In summary, satisficing answer planning is considered as a game consisting of four components ⟨P, S, F, A⟩: the set of players P = {pa, pb}, strategies of players S = {Spa, Spb}, objective functions of players F = {fpa, fpb}, and a state space A = {a1 . . . at} that represents the rounds of the game, i.e. answers planned in the dialogue. The game starts in an initial state a1. At a particular time t in the dialogue, the equilibrium identifier observes the state at characterized by P, S, and F and identifies best answers for all players; st ∈Sp; ∀p ∈P. Consequential, a strategy profile meeting the Nash equilibrium condition, −→s t = {stpa, stpb}, is specified and resulting payouts are observed: f(at, −→s t) →LocalPayout →R. The calculation of local payouts by means of objective functions f ∈F in state at does not depend solely on the selected strategy profile, but on results of former states in A, i.e. all answers planned in the dialogue until at. That means, infinite playing of the described non-zero-sum game a1, −→s 1, . . . 
, at, −→s t, . . . generates a stream of payouts f1, f2, . . . , ft = f(at, −→s t). Besides relevant motives of the RelevanceSet, answer planning in state at+1 is directly influenced by local payouts ft(at, −→s t) in at leading to a continuous deliberation of the mixed motive model during dialogue. 4 Implementation and Evaluation Based on the proposed model (cf. Fig. 1), we implemented a German text-based QA system in form of a online shopping assistant (cf. Fig. 4)1. Users are able to construct questions termby-term. Having tapped the last term of a question, the answer is given. The QA system uses the domain-specific knowledge representation mentioned in section 3.2 formalized in RDF2. 1QA system was implemented as web application: http://redqueen.iss.uni-saarland.de/satin 2Resource Description Framework 553 Figure 4: Web-based QA system with posed question and given answer in German 4.1 Setting To evaluate our approach, we conducted a user study with the implemented prototype in German that was set up as lab experiment. Goal of this study was to assess the perceived fairness and naturalness of the dialogue with the QA system as well as the extent of motive satisfaction of participants. For that purpose, four randomized groups were formed. Each group was characterized by a combination of motives by users (fair price of product (mFP) or exclusive design of product (mED)) and the QA system representing the retailer (increasing revenue (mIR) or improving customer relationship (mICR)) (cf. Tab. 3). These Table 3: Groups and mixed motive combinations of user study mixed motives were combined systematically by means of scenarios given to users and a manipulated mixed motive model of the QA system. Before interacting with the QA system that was embedded into a web-based questionnaire, participants had to opportunity to get to know the QA system and interacting with it for the first time (cf. Fig. 8). Participants were then asked to pose questions to the QA system and to evaluate generated answers against the background of their motive (e.g., mFP) and the related scenario, e.g.: “You are searching for a new tablet that shall be functional regarding standby and storage capacity. A fair price is important; no need for the latest innovation. You do not want to spend a lot of money for the new tablet. You are price conscious.” Participants were told to interact with the QA system as long as it needed to gain the information that was required by the scenario. Finally, sevenpoint Likert scales ranging from strongly disagree (1), neither (4) to strongly agree (7) were used to assess the perceived fairness of the dialogue, the naturalness of the dialogue and the motive satisfaction. Tab. 4 lists the questionnaire items for each of these constructs. 4.2 Results In summary, 120 subjects participated in the experiment. A complete dataset from 107 participants (58,3% female) with an average age of 24.3 (SD=6.9) was considered for analysis. On average, interactions between participants (N=107) and the QA system covered 5.19 question answer pairs (cf. example dialogue in appendix A). 556 questions were posed by subjects; 35.07% of them were propositional questions (e.g., “Is product A up-to-date?”), 62.41% set questions (e.g., “Where is the difference between product A and product B?”) and 2.52% choice questions (e.g., “Which product is better than product A?”), cf. Bunt et al. (2010). 
Due to the fact that Cronbachs alpha values for all three multi-item constructs lie clearly above the recommended threshold of .70 (Nunnally, 1967), Figure 5: Subject during interaction with QA system in user study which indicates a good to excellent reliability of the scales, we calculated aggregated mean scores for each construct. The descriptive statistics of the three core constructs are presented in Tab. 4. 554 Table 4: Descriptive statistics and results of one-sample t-tests for the empirical core constructs (N=107) Additionally, results of one-sample t-tests are provided to evaluate whether the aggregated scores lie significantly above or below the neutral scale value of 4. Results indicate that the participants were undecided with respect to the “Perceived Naturalness of Dialogue” with the QA system. We assume that this is owed to the restricted QA setting since there were no significant differences among the four groups (F(3,104) = 2.06, p = .11) (cf. Tab. 3). However, the data support the conclusion that participants perceived the dialogue as fair and that they were able to sufficiently satisfy their motives. Assuming rather conflicting motives of subject and QA system as given for instance in group #4 in Tab. 3, it could be assumed that perceived fairness and motive satisfaction should be smaller than in rather congruent motive combinations as shown in group #1. Nonetheless, the mean value of the construct “Perceived Fairness of Dialogue” was 5.17 across all groups (significant above mean value 4) and there were no significant differences between the randomized groups (F(3,104) = 1.59, p = .20). Furthermore, “Motive Satisfaction” was rated with a mean value of 5.16 across all groups (significant above mean value 4) and again, there was not a significant effect of the group on motive satisfaction at the .05 level of significance (F(3,104) = 2.33, p = .08). Overall, this indicates a positive evaluation of the QA system regarding its ability to generate satisficing answers despite of mixed motives of interlocutors. 5 Conclusion We considered dialogues with congruent as well as incongruent interlocutor motives, where dialogue partners are facing the constant challenge of finding a balance between cooperation and competition. Despite of the overall presence of dialogues with such mixed motives in everyday life, little attention has been given to this topic in text planning in contrast to scrutinized dialogue systems that support dialogues fully aligned with anticipated user interests. Focusing on Question-Answering (QA) settings, we introduced a model that formalizes answer planning as psychological game embedded in text planning approaches for supporting fair dialogues under mixed motives. The model was exemplified within a QA sales assistant with domain-specific world knowledge for conducting sales dialogues. Due to the fact that fairness is subjective per se, we presented results from an empirical study (N=107) in which human subjects interacted with the QA system in various mixed motive settings. Results indicate a positive evaluation of the systems performance in planning answers that support fair dialogues despite of mixed motives of interlocutors. Acknowledgments This work was partially funded by the German Federal Ministry for Education and Research (BMBF) under the contract 01IS12030. 555 References James F. Allen, Donna K Byron, Myroslava Dzikovska, George Ferguson, Lucian Galescu, and Amanda Stent. 2001. Toward conversational humancomputer interaction. 
AI magazine, 22(4):27. Adam Bjorndahl, Joseph Y Halpern, and Rafael Pass. 2013. Language-based games. In Proc. of the 23rd Int. Joint Conf. on Artificial Intelligence, pages 2967–2971. Elizabeth Black and Katie Atkinson. 2011. Choosing persuasive arguments for action. In 10th Int. Conf. on Autonomous Agents and Multiagent Systems, pages 905–912. Michael Bratman. 1987. Intention, Plans, and Practical Reason. Center for the Study of Language and Information. Harry Bunt and William Black. 2000. The abc of computational pragmatics. Abduction, Belief and Context in Dialogue: Studies in Computational Pragmatics, pages 1–46. Harry Bunt, Jan Alexandersson, Jean Carletta, JaeWoong Choe, Alex Chengyu Fang, Koiti Hasida, Kiyong Lee, Volha Petukhova, Andrei PopescuBelis, Laurent Romary, et al. 2010. Towards an iso standard for dialogue act annotation. In Seventh conference on International Language Resources and Evaluation (LREC’10). Philip R. Cohen and Hector J. Levesque. 1990. Intention is choice with commitment. Artif. Intell., 42(23):213–261. Tiansi Dong, Ulrich Furbach, Ingo Gl¨ockner, and Bj¨orn Pelzer. 2011. A natural language question answering system as a participant in human q&a portals. In Proc. of the 22nd Int. Joint Conf. on Artificial Intelligence, pages 2430–2435. Michael P. Georgeff, Barney Pell, Martha E. Pollack, Milind Tambe, and Michael Wooldridge. 1998. The belief-desire-intention model of agency. In Proc. of the 5th Int. Workshop on Intelligent Agents V, Agent Theories, Architectures, and Languages, pages 1– 10. Barbara J. Grosz and Sarit Kraus. 1996. Collaborative plans for complex group action. Artificial Intelligence, 86(2):269 – 357. Barbara J. Grosz and Candace L. Sidner. 1986. Attention, intentions, and the structure of discourse. Comput. Linguist., 12(3):175–204. Christos Hadjinikolis, Yiannis Siantos, Sanjay Modgil, Elizabeth Black, and Peter McBurney. 2013. Opponent modelling in persuasion dialogues. In Proc. of the 23rd Int. Joint Conf. on Artificial Intelligence, pages 164–170. Takayuki Hasegawa, Nobuhiro Kaji, Naoki Yoshinaga, and Masashi Toyoda. 2013. Predicting and eliciting addressee’s emotion in online dialogue. In Proc. of the 51st Annual Meeting of the Association for Computational Linguistics, pages 964–972. Jerry R. Hobbs. 1978. Why is Discourse Coherent?: Technical Note 176. Stanford Research Inst., Menlo Park. Eduard H. Hovy, 1991. Approaches to the planning of coherent text, volume 119 of The Kluwer International Series in Engineering and Computer Science, pages 83–102. Springer. Eduard H. Hovy, 1993. Automated discourse generation using discourse structure relations, pages 341– 385. MIT Press. Anthony Jameson, Bernhard Kipper, Alassane Ndiaye, Ralph Sch¨afer, Joep Simons, Thomas Weis, and Detlev Zimmermann. 1994. Cooperating to be noncooperative: The dialog system pracma. In Proc. of KI 1994. Springer. Willem JM Levelt. 1993. Speaking: From intention to articulation, volume 1. MIT press. Fangtao Li, Yang Gao, Shuchang Zhou, Xiance Si, and Decheng Dai. 2013. Deceptive answer prediction with user preference graph. In ACL (1), pages 1723– 1732. Citeseer. Karen E Lochbaum. 1998. A collaborative planning model of intentional structure. Comput. Linguist., 24(4):525–572. William C. Mann and Sandra A. Thompson. 1986. Assertions from discourse structure. In Proc. of Workshop on Strategic comp. natural language, pages 257–270. Kathleen R. McKeown. 1985. Discourse strategies for generating natural-language text. Artificial Intelligence, 27(1):1–41. Johanna D. 
Moore and C´ecile L. Paris. 1993. Planning text for advisory dialogues: capturing intentional and rhetorical information. Comput. Linguist., 19(4):651–694. John Nash. 1951. Non-cooperative games. Annals of Mathematics, 54(2):286–295. Jum C. Nunnally. 1967. Psychometric Theory. McGraw-Hill, New York. Oxford Dictionaries. 2016. ”fair”. Oxford University Press (http://oxforddictionaries.com). Michel A. Paquette. 2012. Speech acts, dialogues and the common ground. In FLAIRS Conference. Prashant Parikh. 2010. Language and Equilibrium. MIT Press. 556 Henry Prakken. 2006. Formal systems for persuasion dialogue. The Knowledge Engineering Review, 21(02):163–188. Anand S. Rao and Michael P. Georgeff. 1995. Bdi agents: From theory to practice. In Proc. of the 1st Intl. Conf. on Multiagent Systems (ICMAS), pages 312–319. The MIT Press. Charles Rich and Candace L. Sidner. 1997. Collagen: when agents collaborate with people. In Proceedings of the first international conference on Autonomous agents, AGENTS ’97, pages 284–291, New York, NY, USA. ACM. Thomas C Schelling. 1960. The strategy of conflict. Harvard university press. Herbert A. Simon. 1956. Rational choice and the structure of the environment. Psychological review, 63(2):129. Jon Scott Stevens, Anton Benz, Sebastian Reue, and Ralf Klabunde. 2015. A strategic reasoning model for generating alternative answers. In ACL (1), pages 534–542. The Association for Computer Linguistics. David Traum and Staffan Larsson, 2003. The information state approach to dialogue management, pages 325–353. Springer. David Traum, Stacy C Marsella, Jonathan Gratch, Jina Lee, and Arno Hartholt. 2008. Multi-party, multiissue, multi-strategy negotiation for multi-modal virtual agents. In Intelligent Virtual Agents, pages 117–130. Springer. Kees van Deemter. 2009. What game theory can do for nlg: the case of vague language. In Proceedings of the 12th European Workshop on Natural Language Generation, pages 154–161. A Example dialogue between user and QA system In the following, an example dialogue between a user and the QA system consisting of four question answer pairs is given. Questions (Q) are segmentally constructed by the user. Answers (A) consisting of text, images, links to webpages and/or external product offers are generated by the QA system. (1) Q: Which tablets are available in lower price range? (2) A: In this price range, the following products are available: Lenovo Yoga. Compared to other products in this category, Lenovo Yoga offers advantages regarding its price of 198.99e. Currently, a lot of customers ask for Lenovo Yoga. Regarding the price-performance ratio, the product is absolutely recommended. (3) Q: Is there a discount for Lenovo Yoga? (4) A: For Lenovo Yoga, there is a discount of 1%. The total comes to 197.00e. Lenovo Yoga goes with Apple iPod Nano, Sony NWZ-E585B Walkman, Apple iPhone 5S and Apple iPhone 4S. (5) Q: How many tablets offer a standby time of at least 18h? (6) A: The following tablets offer this feature: Lenovo Yoga. How about having a look at Lenovo Yoga by Lenovo? (7) Q: Are there any offers for Lenovo Yoga? (8) A: You can order this product online, as you can see here: 557
2016
52
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 558–567, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics A Sentence Interaction Network for Modeling Dependence between Sentences Biao Liu1, Minlie Huang1∗, Song Liu2, Xuan Zhu2, Xiaoyan Zhu1 1State Key Lab. of Intelligent Technology and Systems 1National Lab. for Information Science and Technology 1Dept. of Computer Science and Technology, Tsinghua University, Beijing, China 2Samsung R&D Institute, Beijing, China [email protected], {aihuang, zxy-dcs}@tsinghua.edu.cn Abstract Modeling interactions between two sentences is crucial for a number of natural language processing tasks including Answer Selection, Dialogue Act Analysis, etc. While deep learning methods like Recurrent Neural Network or Convolutional Neural Network have been proved to be powerful for sentence modeling, prior studies paid less attention on interactions between sentences. In this work, we propose a Sentence Interaction Network (SIN) for modeling the complex interactions between two sentences. By introducing “interaction states” for word and phrase pairs, SIN is powerful and flexible in capturing sentence interactions for different tasks. We obtain significant improvements on Answer Selection and Dialogue Act Analysis without any feature engineering. 1 Introduction There exist complex interactions between sentences in many natural language processing (NLP) tasks such as Answer Selection (Yu et al., 2014; Yin et al., 2015), Dialogue Act Analysis (Kalchbrenner and Blunsom, 2013), etc. For instance, given a question and two candidate answers below, though they are all talking about cats, only the first Q What do cats look like? A1 Cats have large eyes and furry bodies. A2 Cats like to play with boxes and bags. answer correctly answers the question about cats’ appearance. It is important to appropriately model the relation between two sentences in such cases. ∗Correspondence author For sentence pair modeling, some methods first project the two sentences to fix-sized vectors separately without considering the interactions between them, and then fed the sentence vectors to other classifiers as features for a specific task (Kalchbrenner and Blunsom, 2013; Tai et al., 2015). Such methods suffer from being unable to encode context information during sentence embedding. A more reasonable way to capture sentence interactions is to introduce some mechanisms to utilize information from both sentences at the same time. Some methods attempt to introduce an attention matrix which contains similarity scores between words and phrases to approach sentence interactions (Socher et al., 2011; Yin et al., 2015). While the meaning of words and phrases may drift from contexts to contexts, simple similarity scores may be too weak to capture the complex interactions, and a more powerful interaction mechanism is needed. In this work, we propose a Sentence Interaction Network (SIN) focusing on modeling sentence interactions. The main idea behind this model is that each word in one sentence may potentially influence every word in another sentence in some degree (the word “influence” here may refer to “answer” or “match” in different tasks). So, we introduce a mechanism that allows information to flow from every word (or phrase) in one sentence to every word (or phrase) in another sentence. 
These “information flows” are real-valued vectors describing how words and phrases interact with each other, for example, a word (or phrase) in one sentence can modify the meaning of a word (or phrase) in another sentence through such “information flows”. Specifically, given two sentences s1 and s2, for every word xt in s1, we introduce a “candidate interaction state” for every word xτ in s2. This 558 state is regarded as the “influence” of xτ to xt, and is actually the “information flow” from xτ to xt mentioned above. By summing over all the “candidate interaction states”, we generate an “interaction state” for xt, which represents the influence of the whole sentence s2 to word xt . When feeding the “interaction state” and the word embedding together into Recurrent Neural Network (with Long Short-Time Memory unit in our model), we obtain a sentence vector with context information encoded. We also add a convolution layer on the word embeddings so that interactions between phrases can also be modeled. SIN is powerful and flexible for modeling sentence interactions in different tasks. First, the “interaction state” is a vector, compared with a single similarity score, it is able to encode more information for word or phrase interactions. Second, the interaction mechanism in SIN can be adapted to different functions for different tasks during training, such as “word meaning adjustment” for Dialogue Act Analysis or “Answering” for Answer Selection. Our main contributions are as follows: • We propose a Sentence Interaction Network (SIN) which utilizes a new mechanism to model sentence interactions. • We add convolution layers to SIN, which improves the ability to model interactions between phrases. • We obtain significant improvements on Answer Selection and Dialogue Act Analysis without any handcrafted features. The rest of the paper is structured as follows: We survey related work in Section 2, introduce our method in Section 3, present the experiments in Section 4, and summarize our work in Section 5. 2 Related Work Our work is mainly related to deep learning for sentence modeling and sentence pair modeling. For sentence modeling, we have to first represent each word as a real-valued vector (Mikolov et al., 2010; Pennington et al., 2014) , and then compose word vectors into a sentence vector. Several methods have been proposed for sentence modeling. Recurrent Neural Network (RNN) (Elman, 1990; Mikolov et al., 2010) introduces a hidden state to represent contexts, and repeatedly feed the hidden state and word embeddings to the network to update the context representation. RNN suffers from gradient vanishing and exploding problems which limit the length of reachable context. RNN with Long Short-Time Memory Network unit (LSTM) (Hochreiter and Schmidhuber, 1997; Gers, 2001) solves such problems by introducing a “memory cell” and “gates” into the network. Recursive Neural Network (Socher et al., 2013; Qian et al., 2015) and LSTM over tree structures (Zhu et al., 2015; Tai et al., 2015) are able to utilize some syntactic information for sentence modeling. Kim (2014) proposed a Convolutional Neural Network (CNN) for sentence classification which models a sentence in multiple granularities. For sentence pair modeling, a simple idea is to first project the sentences to two sentence vectors separately with sentence modeling methods, and then feed these two vectors into other classifiers for classification (Tai et al., 2015; Yu et al., 2014; Yang et al., 2015). 
The drawback of such methods is that separately modeling the two sentences is unable to capture the complex sentence interactions. Socher et al. (2011) model the two sentences with Recursive Neural Networks (Unfolding Recursive Autoencoders), and then feed similarity scores between words and phrases (syntax tree nodes) to a CNN with dynamic pooling to capture sentence interactions. Hu et al. (2014) first create an “interaction space” (matching score matrix) by feeding word and phrase pairs into a multilayer perceptron (MLP), and then apply CNN to such a space for interaction modeling. Yin et al. (2015) proposed an Attention based Convolutional Neural Network (ABCNN) for sentence pair modeling. ABCNN introduces an attention matrix between the convolution layers of the two sentences, and feed the matrix back to CNN to model sentence interactions. There are also some methods that make use of rich lexical semantic features for sentence pair modeling (Yih et al., 2013; Yang et al., 2015), but these methods can not be easily adapted to different tasks. Our work is also related to context modeling. Hermann et al. (2015) proposed a LSTM-based method for reading comprehension. Their model is able to effectively utilize the context (given by a document) to answer questions. Ghosh et al. (2016) proposed a Contextual LSTM (CLSTM) which introduces a topic vector into LSTM for context modeling. The topic vector in CLSTM is 559 Figure 1: RNN (a) and LSTM (b) 1 computed according to those already seen words, and therefore reflects the underlying topic of the current word. 3 Method 3.1 Background: RNN and LSTM Recurrent Neural Network (RNN) (Elman, 1990; Mikolov et al., 2010), as depicted in Figure 1(a), is proposed for modeling long-distance dependence in a sequence. Its hidden layer is connected to itself so that previous information is considered in later times. RNN can be formalized as ht = f(Wxxt + Whht−1 + bh) where xt is the input at time step t and ht is the hidden state. Though theoretically, RNN is able to capture dependence of arbitrary length, it tends to suffer from the gradient vanishing and exploding problems which limit the length of reachable context. In addition, an additive function of the previous hidden layer and the current input is too simple to describe the complex interactions within a sequence. RNN with Long Short-Time Memory Network unit (LSTM, Figure 1(b)) (Hochreiter and Schmidhuber, 1997; Gers, 2001) solves such problems by introducing a “memory cell” and “gates” into the network. Each time step is associated with a subnet known as a memory block in which a “memory cell” stores the context information and “gates” control which information should be added or discarded or reserved. LSTM can be formalized as ft = σ(Wf · [xt, ht−1] + bf) it = σ(Wi · [xt, ht−1] + bi) ˜Ct = tanh(WC · [xt, ht−1] + bC) 1This figure referred to http://colah.github.io/posts/201508-Understanding-LSTMs/ Ct = ft ∗Ct−1 + it ∗˜Ct ot = σ(Wo · [xt, ht−1] + bo) ht = ot ∗tanh(Ct) where ∗ means element-wise multiplication, ft, it, ot is the forget, input and output gate that control which information should be forgot, input and output, respectively. ˜Ct is the candidate information to be added to the memory cell state Ct. ht is the hidden state which is regarded as a representation of the current time step with contexts. In this work, we use LSTM with peephole connections, namely adding Ct−1 to compute the forget gate ft and the input gate it, and adding Ct to compute the output gate ot. 
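To make the peephole variant concrete, one step of such an LSTM can be sketched as follows. This is a minimal NumPy sketch rather than the authors' implementation: the way the cell state is concatenated into the gate inputs and all shapes are our own assumptions.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_peephole_step(x_t, h_prev, C_prev, params):
    # One step of an LSTM with peephole connections: the forget and input
    # gates also see C_{t-1}, and the output gate also sees C_t.
    Wf, bf, Wi, bi, WC, bC, Wo, bo = params
    z = np.concatenate([x_t, h_prev, C_prev])
    f_t = sigmoid(Wf @ z + bf)                                   # forget gate
    i_t = sigmoid(Wi @ z + bi)                                   # input gate
    C_tilde = np.tanh(WC @ np.concatenate([x_t, h_prev]) + bC)   # candidate cell state
    C_t = f_t * C_prev + i_t * C_tilde                           # new memory cell
    o_t = sigmoid(Wo @ np.concatenate([x_t, h_prev, C_t]) + bo)  # output gate peeps at C_t
    h_t = o_t * np.tanh(C_t)                                     # hidden state
    return h_t, C_t

# toy usage with random parameters (illustrative shapes only)
d_x, d_h = 4, 3
rng = np.random.RandomState(0)
mat = lambda r, c: 0.1 * rng.randn(r, c)
params = (mat(d_h, d_x + 2 * d_h), np.zeros(d_h),   # Wf, bf
          mat(d_h, d_x + 2 * d_h), np.zeros(d_h),   # Wi, bi
          mat(d_h, d_x + d_h),     np.zeros(d_h),   # WC, bC
          mat(d_h, d_x + 2 * d_h), np.zeros(d_h))   # Wo, bo
h, C = lstm_peephole_step(rng.randn(d_x), np.zeros(d_h), np.zeros(d_h), params)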
3.2 Sentence Interaction Network (SIN) Sentence Interaction Network (SIN, Figure 2) models the interactions between two sentences in two steps. First, we use a LSTM (referred to as LSTM1) to model the two sentences s1 and s2 separately, and the hidden states related to the t-th word in s1 and the τ-th word in s2 are denoted as z(1) t and z(2) τ respectively. For simplicity, we will use the position (t, τ) to denote the corresponding words hereafter. Second, we propose a new mechanism to model the interactions between s1 and s2 by allowing information to flow between them. Specifically, word t in s1 may be potentially influenced by all words in s2 in some degree. Thus, for word t in s1, a candidate interaction state ˜c(i) tτ and an input gate i(i) tτ are introduced for each word τ in s2 as follows: ˜c(i) tτ = tanh(W (i) c · [z(1) t , z(2) τ ] + b(i) c ) i(i) tτ = σ(W (i) i · [z(1) t , z(2) τ ] + b(i) i ) here, the superscript “i” indicates “interaction”. W (i) c , W (i) i , b(i) c , b(i) i are model parameters. The interaction state c(i) t for word t in s1 can then be formalized as c(i) t = |s2| X τ=1 ˜c(i) tτ ∗i(i) tτ where |s2| is the length of sentence s2, and c(i) t can be viewed as the total interaction information received by word t in s1 from sentence s2. The interaction states of words in s2 can be similarly 560 Figure 2: SIN for modeling sentence s1 at timestep t. First, we model s1 and s2 separately with LSTM1 and obtain the hidden states z(1) t for s1 and z(2) τ for s2. Second, we compute interaction states based on these hidden states, and incorporate c(i) t into LSTM2. Information flows (interaction states) from s1 to s2 are not depicted here for simplicity. computed by exchanging the position of z(1) t and z(2) τ in ˜c(i) tτ and i(i) tτ while sharing the model parameters. We now introduce the interaction states into another LSTM (referred to as LSTM2) to compute the sentence vectors. Therefore, information can flow between the two sentences through these states. For sentence s1, at timestep t, we have ft = σ(Wf · [xt, ht−1, c(i) t , Ct−1] + bf) it = σ(Wi · [xt, ht−1, c(i) t , Ct−1] + bi) ˜Ct = tanh(WC · [xt, ht−1, c(i) t ] + bC) Ct = ft ∗Ct−1 + it ∗˜Ct ot = σ(Wo · [xt, ht−1, c(i) t , Ct] + bo) ht = ot ∗tanh(Ct) By averaging all hidden states of LSTM2, we obtain the sentence vector vs1 of s1, and the sentence vector vs2 of s2 can be computed similarly. vs1 and vs2 can then be used as features for different tasks. In SIN, the candidate interaction state ˜c(i) tτ represents the potential influence of word τ in s2 to word t in s1, and the related input gate i(i) tτ controls the degree of the influence. The element-wise multiplication ˜c(i) tτ ∗i(i) tτ is then the actual influence. By summing over all words in s2, the interaction state c(i) t gives the influence of the whole sentence s2 to word t. 3.3 SIN with Convolution (SIN-CONV) SIN is good at capturing the complex interactions of words in two sentences, but not strong enough for phrase interactions. Since convolutional neural network is widely and successfully used for modeling phrases, we add a convolution layer before SIN to model phrase interactions between two sentences. Let v1, v2, ..., v|s| be the word embeddings of a sentence s, and let ci ∈Rwd, 1 ≤i ≤|s| −w + 1, be the concatenation of vi:i+w−1, where w is the window size. The representation pi for phrase vi:i+w−1 is computed as: pi = tanh(F · ci + b) where F ∈Rd×wd is the convolution filter, and d is the dimension of the word embeddings. 
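As a rough illustration of the interaction mechanism of Section 3.2, which in SIN-CONV is applied to phrase vectors exactly as it is applied to word vectors, the interaction states of sentence s1 with respect to s2 can be computed from the LSTM1 hidden states as sketched below. This is a minimal NumPy sketch; the hidden-state values, parameter shapes and initialisation are illustrative assumptions, not the trained model.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def interaction_states(Z1, Z2, Wc, bc, Wi, bi):
    # Interaction states of sentence s1 with respect to s2.
    # Z1: |s1| x d hidden states of LSTM1 for s1; Z2: |s2| x d for s2.
    # For every word t in s1, a candidate state and an input gate are computed
    # against every word tau in s2, and the gated candidates are summed over tau.
    C = []
    for z1_t in Z1:
        c_t = np.zeros(Wc.shape[0])
        for z2_tau in Z2:
            pair = np.concatenate([z1_t, z2_tau])
            c_cand = np.tanh(Wc @ pair + bc)   # candidate interaction state
            gate = sigmoid(Wi @ pair + bi)     # input gate controlling the influence
            c_t += c_cand * gate               # element-wise: actual influence of tau on t
        C.append(c_t)
    return np.stack(C)                          # |s1| x d interaction states

# toy usage with random hidden states and parameters (illustrative only)
rng = np.random.RandomState(1)
d = 5
Z1, Z2 = rng.randn(3, d), rng.randn(4, d)
Wc, Wi = 0.1 * rng.randn(d, 2 * d), 0.1 * rng.randn(d, 2 * d)
bc, bi = np.zeros(d), np.zeros(d)
print(interaction_states(Z1, Z2, Wc, bc, Wi, bi).shape)  # (3, 5)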
In SIN-CONV, we first use a convolution layer to obtain phrase representations for the two sentences s1 and s2, and the SIN interaction procedure is then applied to these phrase representations as before to model phrase interactions. The average of all hidden states are treated as sentence vectors vcnn s1 and vcnn s2 . Thus, SIN-CONV is SIN with word vectors substituted by phrase vectors. The 561 two phrase-based sentence vectors are then fed to a classifier along with the two word-based sentence vectors together for classification. The LSTM and interaction parameters are not shared between SIN and SIN-CONV. 4 Experiments In this section, we test our model on two tasks: Answer Selection and Dialogue Act Analysis. Both tasks require to model interactions between sentences. We also conduct auxiliary experiments for analyzing the interaction mechanism in our SIN model. 4.1 Answer Selection Selecting correct answers from a set of candidates for a given question is quite crucial for a number of NLP tasks including question-answering, natural language generation, information retrieval, etc. The key challenge for answer selection is to appropriately model the complex interactions between the question and the answer, and hence our SIN model is suitable for this task. We treat Answer Selection as a classification task, namely to classify each question-answer pair as “correct” or “incorrect”. Given a questionanswer pair (q, a), after generating the question and answer vectors vq and va using SIN, we feed them to a logistic regression layer to output a probability. And we maximize the following objective function: pθ(q, a) = σ(W · [vq, va]) + b) L = X (q,a) ˆyq,a log pθ(q, a)+ (1 −ˆyq,a) log(1 −pθ(q, a)) where ˆyq,a is the true label for the question-answer pair (q, a) (1 for correct, 0 for incorrect). For SINCONV, the sentence vector vcnn q and vcnn a are also fed to the logistic regression layer. During evaluation, we rank the answers of a question q according to the probability pθ(q, a). The evaluation metrics are mean average precision (MAP) and mean reciprocal rank (MRR). 4.1.1 Dataset The WikiQA2(Yang et al., 2015) dataset is used for this task. Following Yin et al. (2015), we filtered out those questions that do not have any 2http://aka.ms/WikiQA Q QA pair A/Q correct A/Q Train 2,118 20,360 9.61 0.49 Dev 126 1,130 8.97 1.11 Test 243 2,351 9.67 1.21 Table 1: Statistics of WikiQA (Q=Question, A=Answer) correct answers from the development and test set. Some statistics are shown in Table 1. 4.1.2 Setup We use the 100-dimensional GloVe vectors3 (Pennington et al., 2014) to initialize our word embeddings, and those words that do not appear in Glove vectors are treated as unknown. The dimension of all hidden states is set to 100 as well. The window size of the convolution layer is 2. To avoid overfitting, dropout is introduced to the sentence vectors, namely setting some dimensions of the sentence vectors to 0 with a probability p (0.5 in our experiment) randomly. No handcrafted features are used in our methods and the baselines. Mini-batch Gradient Descent (30 questionanswer pairs for each mini batch), with AdaDelta tuning learning rate, is used for model training. We update model parameters after every mini batch, check validation MAP and save model after every 10 batches. We run 10 epochs in total, and the model with highest validation MAP is treated as the optimal model, and we report the corresponding test MAP and MRR metrics. 
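Since candidate answers are ranked by pθ(q, a), the two reported metrics can be computed per question as sketched below. This is a generic sketch of MAP and MRR (variable names are ours); it assumes each question has at least one correct answer, which holds after the filtering described above.

def map_mrr(questions):
    # questions: list of (scores, labels) pairs, one per question;
    # scores are model probabilities, labels are 1 for correct answers, 0 otherwise.
    ap_list, rr_list = [], []
    for scores, labels in questions:
        ranked = [l for _, l in sorted(zip(scores, labels), key=lambda p: -p[0])]
        # average precision: mean of precision@k over positions of correct answers
        hits, precisions = 0, []
        for k, label in enumerate(ranked, start=1):
            if label == 1:
                hits += 1
                precisions.append(hits / k)
        ap_list.append(sum(precisions) / max(hits, 1))
        # reciprocal rank of the first correct answer
        first = next((k for k, l in enumerate(ranked, start=1) if l == 1), None)
        rr_list.append(1.0 / first if first else 0.0)
    return sum(ap_list) / len(ap_list), sum(rr_list) / len(rr_list)

# e.g. one question with three candidate answers, the second one correct
print(map_mrr([([0.2, 0.7, 0.1], [0, 1, 0])]))  # (1.0, 1.0)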
4.1.3 Baselines We compare our SIN and SIN-CONV model with 5 baselines listed below: • LCLR: The model utilizes rich semantic and lexical features (Yih et al., 2013). • PV: The cosine similarity score of paragraph vectors of the two sentences is used to rank answers (Le and Mikolov, 2014). • CNN: Bigram CNN (Yu et al., 2014). • ABCNN: Attention based CNN, no handcrafted features are used here (Yin et al., 2015). • LSTM: The question and answer are modeled by a simple LSTM. Different from SIN, there is no interaction between sentences. 3http://nlp.stanford.edu/projects/glove/ 562 4.1.4 Results Results are shown in Table 2. SIN performs much better than LSTM, PV and CNN, this justifies that the proposed interaction mechanism well captures the complex interactions between the question and the answer. But SIN performs slightly worse than ABCNN because it is not strong enough at modeling phrases. By introducing a simple convolution layer to improve its phrase-modeling ability, SINCONV outperforms all the other models. For SIN-CONV, we do not observe much improvements by using larger convolution filters (window size ≥3) or stacking more convolution layers. The reason may be the fact that interactions between long phrases is relatively rare, and in addition, the QA pairs in the WikiQA dataset may be insufficient for training such a complex model with long convolution windows. 4.2 Dialogue Act Analysis Dialogue acts (DA), such as Statement, Yes-NoQuestion, Agreement, indicate the sentence pragmatic role as well as the intention of the speakers (Williams, 2012). They are widely used in natural language generation (Wen et al., 2015), speech and meeting summarization (Murray et al., 2006; Murray et al., 2010), etc. In a dialogue, the DA of a sentence is highly relevant to the content of itself and the previous sentences. As a result, to model the interactions and long-range dependence between sentences in a dialogue is crucial for dialogue act analysis. Given a dialogue (n sentences) d = [s1, s2, ..., sn], we first use a LSTM (LSTM1) to model all the sentences independently. The hidden states of sentence si obtained at this step are used to compute the interaction states of sentence si+1, and SIN will generate a sentence vector vsi using another LSTM (LSTM2) for each sentence si in the dialogue (see Section 3.2) . These sentence vectors can be used as features for dialogue act analysis. We refer to this method as SIN (or SIN-CONV for adding a convolution layer). For dialogue act analysis, we add a softmax layer on the sentence vector vsi to predict the probability distribution: pθ(yj|vsi) = exp(vT si · wj + bj) P k exp(vTsi · wk + bk) 4With extra handcrafted features, ABCNN’s performance is: MAP(0.692), MRR(0.711). Model MAP MRR LCLR 0.599 0.609 PV 0.511 0.516 CNN 0.619 0.628 ABCNN 0.660 0.677 LSTM 0.634 0.648 SIN 0.657 0.672 SIN-CONV 0.674 0.693 Table 2: Results on answer selection4. Figure 3: SIN-LD for dialogue act analysis. LSTM1 is not shown here for simplicity. x(sj) t means word t in sj, c(i,sj) t means the interaction state for word t in sj. where yj is the j-th DA tag, wj and bj is the weight vector and bias corresponding to yj. We maximize the following objective function: L = X d∈D |d| X i=1 log pθ(ˆysi|vsi) where D is the training set, namely a set of dialogues, |d| is the length of the dialogue, si is the i-th sentence in d, ˆysi is the true dialogue act label of si. 
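The classification layer itself is a standard softmax over the tag set; a small sketch of pθ(yj | vsi) and of the per-dialogue objective is given below (variable names and shapes are ours, purely for illustration).

import numpy as np

def da_tag_distribution(v_s, W, b):
    # Softmax over dialogue-act tags for one sentence vector v_s;
    # W holds one weight vector w_j per tag, b the corresponding biases.
    logits = W @ v_s + b
    logits -= logits.max()            # for numerical stability
    e = np.exp(logits)
    return e / e.sum()

def dialogue_log_likelihood(sentence_vectors, gold_tags, W, b):
    # The training objective for one dialogue: the sum of log-probabilities
    # of the gold dialogue-act tag of every sentence.
    return sum(np.log(da_tag_distribution(v, W, b)[y])
               for v, y in zip(sentence_vectors, gold_tags))

# toy usage: 3 sentences, 100-dimensional sentence vectors, 42 tags
rng = np.random.RandomState(3)
W, b = 0.1 * rng.randn(42, 100), np.zeros(42)
vs = [rng.randn(100) for _ in range(3)]
print(dialogue_log_likelihood(vs, [0, 5, 1], W, b))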
In order to capture long-range dependence in the dialogue, we can further join up the sentence vector vsi with another LSTM (LSTM3). The hidden state hsi of LSTM3 are treated as the final sentence vector, and the probability distribution is given by substituting vsi with hsi in pθ(yj|vsi). We refer to this method as SIN-LD (or SIN-CONV-LD for adding a convolution layer), where LD means long-range dependence. Figure 3 shows the whole structure (LSTM1 is not shown here for simplicity). 563 Dialogue Act Example Train(%) Test(%) Statement-non-Opinion Me, I’m in the legal department. 37.0 31.5 Backchannel/Acknowledge Uh-huh. 18.8 18.3 Statement-Opinion I think it’s great 12.8 17.2 Abandoned/Uninterpretable So,7.6 8.6 Agreement/Accept That’s exactly it. 5.5 5.0 Appreciation I can imagine. 2.4 1.8 Yes-No-Question Do you have to have any special training? 2.3 2.0 Non-Verbal [Laughter], [Throat-clearing] 1.8 2.3 Yes-Answers Yes. 1.5 1.7 Conventional-closing Well, it’s been nice talking to you. 1.3 1.9 Other Labels(32) 9.1 9.8 Total number of sentences 196258 4186 Total number of dialogues 1115 19 Table 3: Dialogue act labels 4.2.1 Dataset We use the Switch-board Dialogue Act (SwDA) corpus (Calhoun et al., 2010) in our experiments5. SwDA contains the transcripts of several people discussing a given topic on the telephone. There are 42 dialogue act tags in SwDA,6 and we list the 10 most frequent tags in Table 3. The same data split as in Stolcke et al. (2000) is used in our experiments. There are 1,115 dialogues in the training set and 19 dialogues in the test set7. We also randomly split the original training set as a new training set (1,085 dialogues) and a validation set (30 dialogues). 4.2.2 Setup The setup is the same as that in Answer Selection except: (1) Only the most common 10,000 words are used, other words are all treated as unknown. (2) Each mini batch contains all sentences from 3 dialogues for Mini-batch Gradient Descent. (3) The evaluation metric is accuracy. (4) We run 30 epochs in total. (5) We use the last hidden state of LSTM2 as sentence representation since the sentences here are much shorter compared with those in Answer Selection. 4.2.3 Baselines We compare with the following baselines: • unigram, bigram, trigram LM-HMM: HMM variants (Stolcke et al., 2000). 5http://compprag.christopherpotts.net /swda.html. 6SwDA actually contains 43 tags in which “+” should not be treated as a valid tag since it means continuation of the previous sentence. 7http://web.stanford.edu/%7ejurafsky/ws97/ Model Accuracy(%) unigram LM-HMM 68.2 bigram LM-HMM 70.6 trigram LM-HMM 71.0 RCNN 73.9 LSTM 72.8 SIN 74.8 SIN-CONV 75.1 SIN-LD 76.0 SIN-CONV-LD 76.5 Table 4: Accuracy on dialogue act analysis. Interannotator agreement is 84%. • RCNN: Recurrent Convolutional Neural Networks (Kalchbrenner and Blunsom, 2013). Sentences are first separately embedded with CNN, and then joined up with RNN. • LSTM: All sentences are modeled separately by one LSTM. Different from SIN, there is no sentence interactions in this method. 4.2.4 Results Results are shown in Table 4. HMM variants, RCNN and LSTM model the sentences separately during sentence embedding, and are unable to capture the sentence interactions. With our interaction mechanism, SIN outperforms LSTM, and proves that well modeling the interactions between sentences in a dialogue is important for dialogue act analysis. After introducing a convolution layer, SIN-CONV performs slightly better than SIN. 
SIN-LD and SIN-CONV-LD model the long-range dependence in the dialogue with another LSTM, and obtain further improvements.

4.3 Interaction Mechanism Analysis

We investigate the interaction states of SIN for Answer Selection to see how our proposed interaction mechanism works. Given the question-answer pair in Table 5, for SIN there is a candidate interaction state $\tilde{c}^{(i)}_{\tau t}$ and an input gate $i^{(i)}_{\tau t}$ from each word t in the question to each word $\tau$ in the answer. We investigate the L2-norm $\|\tilde{c}^{(i)}_{\tau t} * i^{(i)}_{\tau t}\|_2$ to see how words in the two sentences interact with each other. Note that we have linearly mapped the original L2-norm value to [0, 1] as follows:

f(x) = \frac{x - x_{min}}{x_{max} - x_{min}}

Table 5: A question-answer pair example.
Q: what creates a cloud
A: in meteorology, a cloud is a visible mass of liquid droplets or frozen crystals made of water or various chemicals suspended in the atmosphere above the surface of a planetary body.

Figure 4: L2-norm of the interaction states from question to answer (linearly mapped to [0, 1]).

As depicted in Figure 4, we can see that the word "what" in the question has little impact on the answer through interactions. This is reasonable since "what" appears frequently in questions and does not carry much information for answer selection. (Our statements here focus on the interaction, in the sense of "answering" or "matching"; words like "what" and "why" are of course very important for answering questions from the general QA perspective, since they determine the type of answers.) On the contrary, the phrase "creates a cloud", especially the word "cloud", transmits much information through interactions to the answer; this conforms with human knowledge, since we rely on these words to answer the question as well. In the answer, interactions concentrate on the phrase "a cloud is a visible mass of liquid droplets", which seems to be a good and complete answer to the question. Although there are also other highly related words in the answer, they are almost ignored. The reason may be a failure to model such a complex phrase (three relatively simple sentences joined by "or"), or the existence of the previous phrase, which is already a good answer. This experiment clearly shows how the interaction mechanism works in SIN. Through interaction states, SIN is able to figure out what the question is asking about, namely to detect those highly informative words in the question, and which part of the answer can answer the question.

5 Conclusion and Future Work

In this work, we propose the Sentence Interaction Network (SIN), which utilizes a new mechanism for modeling interactions between two sentences. We also introduce a convolution layer into SIN (SIN-CONV) to improve its phrase modeling ability, so that phrase interactions can be handled. SIN is powerful and flexible for modeling sentence interactions in different tasks. Experiments show that the proposed interaction mechanism is effective, and we obtain significant improvements on Answer Selection and Dialogue Act Analysis without any handcrafted features. Previous work has shown that it is important to utilize syntactic structures for modeling sentences. We also find that LSTM is sometimes unable to model complex phrases. So, we are going to extend SIN to a tree-based SIN for sentence modeling as future work. Moreover, applying the models to other tasks, such as semantic relatedness measurement and paraphrase identification, would also be interesting attempts.
6 Acknowledgments This work was partly supported by the National Basic Research Program (973 Program) under grant No. 2012CB316301/2013CB329403, the National Science Foundation of China under grant No. 61272227/61332007, and the Beijing Higher Education Young Elite Teacher Project. The work was also supported by Tsinghua University – Beijing Samsung Telecom R&D Center Joint Laboratory for Intelligent Media Computing. References Sasha Calhoun, Jean Carletta, Jason M Brenier, Neil Mayo, Dan Jurafsky, Mark Steedman, and David Beaver. 2010. The nxt-format switchboard corpus: a rich resource for investigating the syntax, semantics, pragmatics and prosody of dialogue. Language resources and evaluation, 44(4):387–419. Jeffrey L Elman. 1990. Finding structure in time. Cognitive science, 14(2):179–211. Felix Gers. 2001. Long short-term memory in recurrent neural networks. Unpublished PhD dissertation, ´Ecole Polytechnique F´ed´erale de Lausanne, Lausanne, Switzerland. Shalini Ghosh, Oriol Vinyals, Brian Strope, Scott Roy, Tom Dean, and Larry Heck. 2016. Contextual lstm (clstm) models for large scale nlp tasks. arXiv preprint arXiv:1602.06291. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1684– 1692. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In Advances in Neural Information Processing Systems, pages 2042–2050. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent convolutional neural networks for discourse compositionality. arXiv preprint arXiv:1306.3584. Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882. Quoc V Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. arXiv preprint arXiv:1405.4053. Tomas Mikolov, Martin Karafi´at, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH 2010, 11th Annual Conference of the International Speech Communication Association, Makuhari, Chiba, Japan, September 26-30, 2010, pages 1045–1048. Gabriel Murray, Steve Renals, Jean Carletta, and Johanna Moore. 2006. Incorporating speaker and discourse features into speech summarization. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 367–374. Association for Computational Linguistics. Gabriel Murray, Giuseppe Carenini, and Raymond Ng. 2010. Generating and validating abstracts of meeting conversations: a user study. In Proceedings of the 6th International Natural Language Generation Conference, pages 105–113. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. Qiao Qian, Bo Tian, Minlie Huang, Yang Liu, Xuan Zhu, and Xiaoyan Zhu. 2015. Learning tag embeddings and tag-specific composition functions in recursive neural network. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, volume 1, pages 1365–1374. Richard Socher, Eric H Huang, Jeffrey Pennin, Christopher D Manning, and Andrew Y Ng. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems, pages 801–809. Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the conference on empirical methods in natural language processing (EMNLP), volume 1631, page 1642. Citeseer. Andreas Stolcke, Klaus Ries, Noah Coccaro, Elizabeth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational linguistics, 26(3):339–373. 566 Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075. Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, PeiHao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned lstm-based natural language generation for spoken dialogue systems. arXiv preprint arXiv:1508.01745. Jason D Williams. 2012. A belief tracking challenge task for spoken dialog systems. In NAACL-HLT Workshop on Future Directions and Needs in the Spoken Dialog Community: Tools and Data, pages 23–24. Association for Computational Linguistics. Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. Wikiqa: A challenge dataset for open-domain question answering. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Citeseer. Wen-tau Yih, Ming-Wei Chang, Christopher Meek, and Andrzej Pastusiak. 2013. Question answering using enhanced lexical semantic models. Wenpeng Yin, Hinrich Sch¨utze, Bing Xiang, and Bowen Zhou. 2015. Abcnn: Attention-based convolutional neural network for modeling sentence pairs. arXiv preprint arXiv:1512.05193. Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. Deep learning for answer sentence selection. arXiv preprint arXiv:1412.1632. Xiaodan Zhu, Parinaz Sobhani, and Hongyu Guo. 2015. Long short-term memory over tree structures. arXiv preprint arXiv:1503.04881. 567
2016
53
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 568–577, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Towards more variation in text generation: Developing and evaluating variation models for choice of referential form Thiago Castro Ferreira and Emiel Krahmer and Sander Wubben Tilburg center for Cognition and Communication (TiCC) Tilburg University The Netherlands {tcastrof,e.j.krahmer,s.wubben}@tilburguniversity.edu Abstract In this study, we introduce a nondeterministic method for referring expression generation. We describe two models that account for individual variation in the choice of referential form in automatically generated text: a Naive Bayes model and a Recurrent Neural Network. Both are evaluated using the VaREG corpus. Then we select the best performing model to generate referential forms in texts from the GREC-2.0 corpus and conduct an evaluation experiment in which humans judge the coherence and comprehensibility of the generated texts, comparing them both with the original references and those produced by a random baseline model. 1 Introduction Automatic text generation is the process of converting non-linguistic data into coherent and comprehensible text (Reiter and Dale, 2000). In recent years, interest in text generation has substantially increased, due to the emergence of new applications such as “robot-journalism” (Clerwall, 2014). Even though computers these days are perfectly capable of automatically producing text, the results are arguably often rather rigid, always producing the same kind and style of text, which makes them somewhat “boring” to read, especially when reading multiple texts in succession. Human-written texts, by contrast, do not suffer from this problem, presumably because human authors have an innate tendency to produce variation in their use of words and constructions. Indeed, psycholinguistic research has shown that when speakers produce referring expressions in comparable contexts, they non-deterministically vary both the form and the contents of their references (Dale and Viethen, 2010; Van Deemter et al., 2012). In this paper, we present and evaluate models of referring expression generation that mimic this human non-determinacy and show that this enables us to generate varied references in texts, which, in terms of coherence and comprehensibility, did not yield significant differences from human-produced references according to human judges. In particular, in this study we focus on the choice of referential form, which is the first decision to be made by referring expression generation models (Reiter and Dale, 2000) and which determines whether a reference takes the form of a proper name, a pronoun, a definite description, etc. Several such models have been proposed (Reiter and Dale, 2000; Henschel et al., 2000; Callaway and Lester, 2002; Krahmer and Theune, 2002; Gupta and Bandopadhyay, 2009; Greenbacker and McCoy, 2009). However, all of these are fully deterministic, always choosing the same referential form in the same context. The fact that these models are generally based on text corpora which have only one gold standard form per reference (the one produced by the original author) does not help either. When the corpus contains, say, a description at some point in the text, this does not mean that, for example, a proper name could not occur in that position as well (Yeh and Mellish, 1997; Ferreira et al., 2016). Generally, we just don’t know. 
To counter this problem, a recent corpus, called VaREG, was developed in which 20 different writers were asked to produce references for a particular topic in a variety of texts, giving rise to a distribution over forms per reference (Ferreira et al., 2016). This gives us the possibility to distinguish situations where there is more or less agreement between writers in their choices of referential form. But it also enables a new paradigm for choosing referential forms, 568 where instead of predicting the most likely referential form, we can in fact predict the frequency in which a reference assumes a specific form, allowing us to turn the choice of referential form into a non-deterministic probabilistic model. In this study, we introduce two different models that take the individual variation into account for the choice of referential form, one based on Naive Bayes and one on Recurrent Neural Networks. Both are evaluated using the VaREG corpus. Furthermore, we use the best performing model to generate referential forms in texts from the GREC-2.0 corpus, based on the roulette-wheel generation process (Belz, 2008), and conduct an evaluation experiment in which humans judge the coherence and comprehensibility of the generated texts, comparing them both with the original references and those produced by a random baseline model. 2 Related Studies Several models for the choice of referential form have been proposed in the literature. They can roughly be distinguished in two groups: rulebased and data-driven models. Many rule-based models were created for pronominalization, i.e, to choose whether an object or person should be referred to using a pronoun or not. Reiter and Dale (2000) proposed one of the first rule-based models, which opts for a pronominal reference only if the referent was previously mentioned in the discourse and no mention to an entity of same gender can be found between the reference and its antecedent. Henschel et al. (2000) presented a pronominalization model based on recency, discourse status, syntactic position, parallelism and ambiguity. To decide among a pronoun or a definite description, Callaway and Lester (2002) also proposed a rulebased model which makes the choices based on information about the discourse, rhetorical structure, recency and distance. Krahmer and Theune (2002) extended the Incremental algorithm so that if a referent achieves a level of salience in the discourse (measured by a salience weight), a pronoun is used. Otherwise, a definite description is produced to distinguish the referent from the distractors. Aiming to make choices similar to humans, some studies proposed machine learning models trained on human choices of referential form. The GREC project (Belz et al., 2010) motivated the development of many of those data-driven models. One of the project’s shared tasks aimed to predict the form of the references to the main topics of texts taken from Wikipedia. Among the participants of the task, Gupta and Bandopadhyay (2009) presented a model that combined rules and a machine learning technique based on semantic and syntactic category, paragraph and sentence positions, and reference number. Similarly, Greenbacker and McCoy (2009) proposed a decision tree that, besides the features used in Gupta and Bandopadhyay (2009), was also based on recency and part-of-speech features. For more information on the GREC shared task, see Belz et al. (2010). One limitation that these models all have in common is that they fail to model individual variation. 
According to their predictions, a reference will always assume the most likely referential form. For example, a model that takes into account syntactic position will always choose the same referential form for the subject of a sentence, while humans tend to vary in their choices of referential form. One of the reasons for this problem arises from the data these models are trained on. Most corpora only contain one referring expression per reference. Only the newly introduced VaREG corpus takes variation into account, containing 20 different expressions for each reference, allowing us to model distributions over referential slots. 3 The VaREG corpus The VaREG corpus was collected for the study of individual variation in the choice of referential form (Ferreira et al., 2016). The corpus is based on a number of texts, which were presented to participants in such a way that all references to the main topic of the text had been replaced with gaps. Each participant was asked to fill each of those gaps with a referring expression for the topic. The resulting corpus consists of 9,588 referring expressions, produced by 78 participants for 563 referential gaps - around 20 referring expressions per reference - in 36 English texts. The texts were equally distributed over 3 genres: news texts, reviews of commercial products and encyclopedic texts. The references were annotated according to their syntactic position (subject, object, etc.), referential status (new or old, in text, paragraph and sentence) and recency (number of words between previous reference to the same object or entity), 569 and the referring expressions of the participants were classified into 5 referential forms: proper names, pronouns, definite descriptions, demonstratives and empty references. The analysis of the corpus revealed considerable variation among participants in their choices of referential forms. Various factors influenced the amount of variation that occurred. High amounts of variation, for example, were found in product reviews and also in the object position of sentences. Besides allowing us to distinguish between situations with relatively high and relatively low individual variation in choices of referential form, this corpus introduces a new paradigm for the development and evaluation of models for referential choice. Rather than predicting the most likely form of a reference, as is usually done, the new corpus allows us to develop a model that can predict the frequency with which a particular reference can assume different referential forms. In this study, we explore this possibility. 4 Models We model the individual variation in the choice of referential form in the following way: each reference consists of a tuple (X, y), where X is the set of feature values that describes the reference and y is a distribution of referential forms that indicates the frequency (in proportion) in which X assumes each form. So given X, we expect to find a distribution ˆy similar to y. Table 1 depicts the features used to describe X. The influence of those discourse factors in the choice of referential form has been often studied in the literature. Concerning syntactic position, Brennan (1995) argued that references in the subject position of a sentence are more likely to be shorter than references in the the object position. 
In favor of status and recency, Chafe (1994) showed that references to previously mentioned referents in the discourse and ones that are close to their antecedents are more likely to be shorter than references to new referents or ones that are distant from their antecedents. All features were defined categorically, including the recency. This latter is treated by describing if a reference’s antecedent is 10 or less words away, between 11 and 20 words, between 21 and 30 words, between 31 and 40 words and more than 40 words away. To predict a distribution ˆy based on X, we propose two models: a Naive Bayes and a Recurrent Neural Network. 4.1 Naive Bayes Given a set of referential forms F, the probability that a reference assumes a particular form f ∈F according to this model is given by: P(f | X) ∝ P(f) Q x∈X P(x | f) P f′∈F P(f′) Q x∈X P(x | f′) (1) To avoid zero probabilities, we used additive smoothing with α = 2e−308. So given a reference described by X, ˆy is the distribution over F: ˆy =   P(f1 | X) ... P(f|F| | X)   (2) 4.2 Recurrent Neural Network Some referential theories support the idea that a referential form is chosen based on previous choices to the same referent. Arnold (1998) argued that subjects of a sentence are more likely to be later pronominalized, as well as references in parallel syntactic position with their antecedents. Chafe (1994) sustained that referents mentioned in recent clauses also tend to be pronominalized. Since Naive Bayes does not take into account the sequential nature of text, we use a Recurrent Neural Network (RNN) to be able to take context into account. RNN is a powerful structure to handle sequences of data. It can map a sequence of references (X1, ..., Xt) to their referential forms distributions (y1, ..., yt) based on the previous steps. Our approach here is similar to the one presented by Mesnil et al. (2013). But instead of word continuous representations, a referential embedding is created for each combination of feature values in X. So given a reference Xt and a context window size win, the embeddings of the references Xt−1 t−win/2, Xt and Xt+win/2 t+1 are merged to form a representation et. This representation is used in equations 3 and 4 to find a distribution over the referential forms that Xt could assume. ht = sigmoid(W hxet + W hhht−1) (3) ˆyt = softmax(W yhht) (4) 570 Feature Description Syntactic position Subject, object or a genitive noun phrase in the sentence. Referential Status First mention to the referent (new) or not (old) at the level of text, paragraph and sentence. Recency Distance between a given reference and the last, previous reference to the same referent. Table 1: Features used to describe the references. We assume a sequence of tuples {(X1, y1)..., (Xt, yt)} as all the references to a referent throughout a text. We trained our RNN using Backpropagation Through Time. To measure the error among y and ˆy, we use cross entropy as a cost function. The values for the remaining parameters of the RNN are introduced in Table 2. We chose them based on an ad-hoc analysis, where we searched for an optimal combination to obtain the best predictions. 
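To make the recurrence concrete, the forward pass of equations 3 and 4 can be sketched as follows (a minimal NumPy illustration rather than the implementation used in the study; the merging of the context-window embeddings into e_t is taken as given, and training with Backpropagation Through Time under the cross-entropy cost is omitted):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rnn_forward(window_embeddings, W_hx, W_hh, W_yh):
    # window_embeddings: one merged embedding e_t per reference to the referent
    # (dimension win * 50 under the settings of Table 2); returns one predicted
    # distribution over the 5 referential forms per reference.
    h = np.zeros(W_hh.shape[0])
    y_hats = []
    for e_t in window_embeddings:
        h = sigmoid(W_hx @ e_t + W_hh @ h)   # equation 3
        y_hats.append(softmax(W_yh @ h))     # equation 4
    return y_hats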
Batch Size 10 Context Window Size 3 Epochs 15 Embedding Dimension 50 Hidden Layer Size 50 Learning Rate 0.1 Table 2: RNN Settings 5 Individual Variation Experiments For each reference slot encountered in the VaREG corpus, we evaluated how well a model takes the individual variation into account in the choice of referential form by comparing its predicted distribution of referential forms (ˆy) with the real distribution (y). We performed this comparison through two experiments. In the first, the models were trained and tested with VaREG corpus. In the second, we aimed to check to what extent the referring expressions from the GREC-2.0 corpus are similar in form to the referring expressions from VaREG corpus by training the models with the first corpus and testing with the second. 5.1 Method 4-fold-cross-validation was used to train the models in the first experiment. The number of folds was chosen based on the set-up of the VaREG corpus, which consists of 4 groups of texts. Given the structure of the corpus, we decided that training our model with 3 groups of texts and testing it on the held-out group was the most natural solution to avoid overfitting. Each fold has the same amount of texts per genre. Unlike VaREG, GREC-2.0 corpus does not have a set of referring expressions for the exact same reference. So, in the second experiment, the referential form distributions y were defined globally by grouping the references by X and computing the frequency of each referential form. We also re-annoted the GREC-2.0 corpus to make it compatible with the VaREG corpus. In particular, we added features for status and recency to the GREC-2.0 corpus and made the terminology consistent beween the two corpora1. Both the VaREG corpus and the re-annotated GREC-2.0 corpus are publicly available2. 5.2 Metrics For each reference, Jensen-Shannon divergence (Lin, 1991) was used to measure the similarity between y and ˆy: JSD(y||ˆy) = 1 2D(y||m) + 1 2D(ˆy||m) (5) where m = 1 2(y + ˆy) In this measure, D is the Kullback-Leibler divergence (Kullback, 1968). The Jensen-Shannon divergence ranges from 0 to 1, in which 0 indicates full convergence of the two distributions and 1 full divergence. Therefore, a lower number indicates a better individual variation modeling. To check the behaviour of ˆy based on y in each reference, the referential forms of both distributions were ranked and their relation were analysed with the Spearman’s rank correlation coefficient. This measure ranges between -1 and 1, where 1 indicates a fully opposed behaviour among the variables and 1 the exact same behaviour among them. 0 indicates a non-linear correlation among the involved variables. 1Texts also used in VaREG had their references removed from the GREC-2.0 version used in here. 2http://ilk.uvt.nl/˜tcastrof/acl2016 571 5.3 Baselines We considered two baseline models in the experiments. The first, called Random, assumes ˆy as a random distribution of forms for each reference. The second model, called ParagraphStatus, always chooses a proper name when the reference is to a new topic in the paragraph (the distribution will assume the value 1 to the proper name form and 0 to the others), and a pronoun otherwise (value 1 to the pronoun form and 0 to the others). 
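As an illustration, the divergence of equation 5 and the ParagraphStatus baseline can be written out as below (our own sketch; the ordering of the forms in FORMS is an assumption, not taken from the corpus annotation):

import numpy as np

FORMS = ['proper name', 'pronoun', 'definite description', 'demonstrative', 'empty']

def jensen_shannon(y, y_hat):
    # Equation 5 with base-2 logarithms, so the value lies between 0 and 1.
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    m = 0.5 * (y + y_hat)
    def kl(p, q):  # Kullback-Leibler divergence D(p || q), skipping zero-probability forms
        mask = p > 0
        return np.sum(p[mask] * np.log2(p[mask] / q[mask]))
    return 0.5 * kl(y, m) + 0.5 * kl(y_hat, m)

def paragraph_status_baseline(new_in_paragraph):
    # All probability mass on a proper name for paragraph-new referents,
    # all mass on a pronoun otherwise.
    y_hat = np.zeros(len(FORMS))
    y_hat[FORMS.index('proper name') if new_in_paragraph else FORMS.index('pronoun')] = 1.0
    return y_hat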
5.4 Results 5.4.1 Cross-validation on VaREG corpus Models JSD ρy,ˆy Random 0.63 -0.01 ParagraphStatus 0.43 0.66 NB+Syntax−Status−Recency 0.39 0.69 NB−Syntax+Status−Recency 0.32 0.75 NB−Syntax−Status+Recency 0.41 0.68 NB+Syntax+Status−Recency 0.31 0.75 NB+Syntax−Status+Recency 0.38 0.70 NB−Syntax+Status+Recency 0.33 0.73 NB+Syntax+Status+Recency 0.31 0.74 RNN+Syntax−Status−Recency 0.37 0.71 RNN−Syntax+Status−Recency 0.36 0.72 RNN−Syntax−Status+Recency 0.40 0.70 RNN+Syntax+Status−Recency 0.33 0.73 RNN+Syntax−Status+Recency 0.37 0.71 RNN−Syntax+Status+Recency 0.36 0.72 RNN+Syntax+Status+Recency 0.33 0.72 Table 3: Average Jensen-Shannon divergence and Spearman’s correlation coefficient of the models in Experiment 1. Table 3 depicts the Jensen-Shannon divergence and Spearman’s correlation coefficient of the models cross-validated on VaREG corpus. All our models outperformed the baselines. Considering the models in which the references are described by only one kind of feature, it seems that the status features (+Status) are the ones that best contributed to model the individual variation in the choice of referential form, whereas the recency (+Recency) is the worst. Syntactic position is sandwiched among the previous two. In the comparison within Naive Bayes and RNN models, the ones in which the references are described by syntactic position and referential status (+Syntax+Status−Recency) obtained the best results for both measures. Figure 1 depicts the average Jensen-Shannon divergences by genre of Naive Bayes and RNN models in which the references are described by this combination of features. Both models presented the best results in News Review Encyclopedic 0.25 0.3 0.35 0.4 NB RNN Figure 1: Jensen-Shannon divergence of NB+Syntax+Status−Recency (NB) and RNN+Syntax+Status−Recency (RNN) by genre in Experiment 1. Error bars represent 95% confidence intervals. encyclopedic texts, and the worst in product reviews. Although RNNs are able to model the individual variation in a reference based on its antecedents, they did not introduce significantly better results than Naive Bayes. In fact, NB+Syntax+Status−Recency is significantly better than RNN+Syntax+Status−Recency in modeling the individual variation in news (Wilcoxon Z = 11574.5, p < 0.01) and encyclopedic texts (Wilcoxon Z = 4232.5, p < 0.001). 5.4.2 Training on GREC-2.0 and evaluating on VaREG corpus Models JSD ρy,ˆy Random 0.63 -0.01 ParagraphStatus 0.43 0.66 NB+Syntax+Status−Recency 0.36 0.67 NB+Syntax+Status+Recency 0.37 0.64 RNN+Syntax+Status−Recency 0.37 0.62 RNN+Syntax+Status+Recency 0.37 0.64 Table 4: Average Jensen-Shannon divergence and Spearman’s correlation coefficient of the models in Experiment 2. Table 4 shows the results of models trained with GREC-2.0 and tested with VaREG corpus. These models are the two versions of Naive Bayes, and the two versions of RNN which were best evaluated in the previous experiment. The results of this experiment follow the results of the previous one. Our models outperformed the baselines and NB+Syntax+Status−Recency was the model that obtained the best results for both measures. 572 News Review Encyclopedic 0.3 0.4 0.5 NB RNN Figure 2: Average Jensen-Shannon divergence of NB+Syntax+Status−Recency (NB) and RNN+Syntax+Status-Recency (RNN) by genre in Experiment 2. Error bars represent 95% confidence intervals. Figure 2 depicts the Jensen-Shannon divergence measures of models NB+Syntax+StatusRecency and RNN+Syntax+Status-Recency by text genre. 
As in the previous experiment, both Naive Bayes and RNN models best modeled the individual variation in encyclopedic texts. Moreover, there was not significant difference among NB+Syntax+Status-Recency and RNN+Syntax+Status-Recency in the three text genres. In general, the models trained with VaREG corpus seemed to model the individual variation in the choice of referential form better than the models trained with GREC-2.0 corpus. 6 Coherence and comprehensibility of the texts In this section, we investigate to what extent texts generated by our method, including variation of referential form, are judged coherent and comprehensible by readers. We do this by comparing texts from the GREC-2.0 corpus in which all references were (re)generated using our method, with the original text and with a variant that includes random variation of referential form. 6.1 Our model for choice of referential form To generate the referring expressions for the topic of a given text of GREC-2.0, we first group all references by syntactic position and referential status values. Then for each group, we shuffle the references and choose their forms according to the distribution predicted by our best performing model (the NB+Syntax+Status−Recency trained on VaREG). The choice of referential forms follows the roulette-wheel generation process (Belz, 2008). This process entails that if a group has 5 references and our model predicts a distribution of 0.75 proper names and 0.25 pronouns, 4 references of the group will be proper names and 1 a pronoun. This covers the selection of referential forms (deciding which form to use at which particular point in the text). To deal with their linguistic realisation, we implemented the following heuristics. For the cases in which a proper name reference is selected, we choose a realization depending on referential status. If the reference is the first mention to the topic in the text, the reference is realized with the topic’s longest proper name. Otherwise, the reference is realized with its shortest proper name. For the cases in which a definite description is selected, but where the original GREC-2.0 corpus does not provide a description for the topic, we select the shortest predicate adjective of the first sentence of the text, immediately following the main verb. For instance, for the sentence “Alan Mathison Turing was an English mathematician, logician, and cryptographer.”, the selected definite description would be “The English mathematician”. In the cases where a reference should assume the form of a demonstrative, the definite article of the definite description is replaced by the demonstrative “this” (In the previous example, “This English mathematician”). 6.2 Evaluation Method We evaluated three versions of each text. The Original is the original text in the corpus, including the original referring expressions selected by the author. We compare this with a Random variant, which does include variation of referential forms, but selects them in a fully random way. Finally, in the third, Generated version, all references are generated according to the method outlined at Section 6.1. Table 5 depicts an example of text in the three versions. In total, we make 3 versions of 9 pseudorandomly selected texts (5 covering animate topics and 4 inanimate ones, varying in length) from the GREC-2.0 corpus, yielding 27 texts in total. 
These were distributed over 3 lists, such that each list contains one variant of each text, and there is an equal number of texts from the 3 conditions (Original, Random, Generated). In all texts, all 573 Version Text Original Spain, officially the Kingdom of Spain, is a country located in Southern Europe, with two small exclaves in North Africa (both bordering Morocco). Spain is a democracy which is organized as a parliamentary monarchy. It is a developed country with the ninth-largest economy in the world. It is the largest of the three sovereign nations that make up the Iberian Peninsula–the others are Portugal and the microstate of Andorra. Random It, officially the Kingdom of Spain, is a country located in Southern Europe, with two small exclaves in North Africa (both bordering Morocco). The country is a democracy that is organized as a parliamentary monarchy. It is a developed country with the ninth-largest economy in the world. This country is the largest of the three sovereign nations that make up the Iberian Peninsula–the others are Portugal and the microstate of Andorra. Generated Spain, officially the Kingdom of Spain, is a country located in Southern Europe, with two small exclaves in North Africa (both bordering Morocco). Spain is a democracy that is organized as a parliamentary monarchy. The country is a developed country with the ninth-largest economy in the world. It is the largest of the three sovereign nations that make up the Iberian Peninsula–the others are Portugal and the microstate of Andorra. Table 5: Example of text in the Original, Random and Generated version. references to the topic were highlighted in yellow. The experiment was run on CrowdFlower and is publicly available3. The experiment was performed by 30 participants (10 per list). Their average age was 36 years, and 22 were female. All were proficient in English (the language of the experiment), 26 participants were native speakers. They were asked to rate each text in terms of how coherent and comprehensible they considered it, on a scale from 1 (Very Bad) to 5 (Very Good). 6.3 Results Figure 3 depicts the average coherence and comprehensibility of the texts where their topics are described by the Original, Random and Generated approaches, respectively. Inspection of this Figure clearly shows that the Random texts are rated lower than both the Original and the Generated texts, and that the latter are rated very similarly on both dimensions. This is confirmed by the statistical analysis. According to a Friedman test, there is statistically significant difference in the coherence (χ2 = 11.79, p < 0.005) and comprehensibility (χ2 = 8.98, p = 0.01) for the three kinds of texts. We then conducted a post hoc analysis with Wilcoxon signed-rank test corrected for multiple comparisons using the Bonferroni method, resulting in a significance level set at p < 0.017. Texts of the Original approach are statistically more coherent (Z = 322, p < 0.017) and comprehensible (Z = 407.5, p < 0.017) than texts of the Random one. Texts of the Generated approach are also statistically more coherent (Z = 275, p < 0.017), but not more comprehensible (Z = 378, p < 0.05) than texts of the Random one. Finally, and cru3http://ilk.uvt.nl/˜tcastrof/acl2016 cially, comparing Original and Generated texts revealed no significant differences for coherence (Z = 540, p < 0.5) nor for comprehensibility (Z = 391.5, p < 0.5). 
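For completeness, this kind of analysis can be reproduced along the following lines with SciPy (a sketch only: how the crowdworkers' ratings are aggregated into aligned vectors is our assumption, and SciPy reports the Wilcoxon signed-rank statistic, which need not coincide with the values quoted above):

from scipy import stats

def compare_conditions(original, random_, generated, alpha=0.05):
    # original, random_, generated: aligned lists of ratings for the same items
    # under the three conditions (e.g. coherence scores).
    chi2, p = stats.friedmanchisquare(original, random_, generated)
    report = {'friedman': (chi2, p)}
    threshold = alpha / 3          # Bonferroni correction over 3 comparisons: 0.05 / 3 ~ 0.017
    pairs = {'original_vs_random': (original, random_),
             'generated_vs_random': (generated, random_),
             'original_vs_generated': (original, generated)}
    for name, (a, b) in pairs.items():
        statistic, p_pair = stats.wilcoxon(a, b)
        report[name] = (statistic, p_pair, p_pair < threshold)
    return report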
7 Discussion In this paper we explored the possibilities of introducing more variation in automatically generated texts, by trying to model individual variation in the selection of referential form. We relied on a new corpus (VaREG (Ferreira et al., 2016)), which does not contain a single expression for each reference in a text, but rather a distribution of referential forms produced by 20 different people. In contrast to earlier models for referential choice which always deterministically choose the most likely form of a reference, we proposed a Naive Bayes and a Recurrent Neural Network model which aimed to predict the frequency distribution with which a reference can assume a specific referential form, based on discourse features including syntactic position, referential status and recency. Given a reference, we evaluated how well each different model could capture the individual variation found in the VaREG corpus by comparing its predicted distribution of referential forms with the real one in the corpus. We trained the models in two different ways: first using the VaREG, and second using the GREC-2.0 corpus. The Naive Bayes model, trained on VaREG corpus, in which the references were described by syntactic position and referential status features was the one that best modeled the individual variation in the choice of referential form. Features Referential status features were the most helpful for modeling the individual variation in the choice of referential form. They were fol574 Original Random Generated 3.4 3.6 3.8 4 4.2 (a) Coherence Original Random Generated 3.8 4 4.2 4.4 (b) Comprehensibility Figure 3: Average coherence (3a) and comprehensibility (3b) of the texts with the original, randomized and generated referring expressions. Error bars represent 95% confidence intervals. lowed by the syntactic position feature. Both of these findings are consistent with the observations about human variation in the selection of referential forms, as discussed by Ferreira et al. (2016). This study argued that writers are more likely to vary in their choices when a reference is in the object position, and when it is an old mention in the text, but new in the sentence. Recency was not a helpful feature for our models, and this may be due to the way the feature was represented i.e., as a categorical rather than a continuous feature. Moreover, the recency feature was measured in terms of words between the current reference and the most recent previous one to the same referent. Perhaps, it would be better to measure recency in terms of different discourse entities mentioned between two references to the same referent. Genre In agreement with Ferreira et al. (2016), we also found that genre mattered. For modeling variation, our models performed best when applied to encyclopedic texts, and worst in product reviews, with news sandwiched in between. Naive Bayes model vs. RNNs Although the RNNs were able to model individual variation in the choice of referential form to some extent, they did not perform significantly better than the Naive Bayes models, which might have to do with the relatively small dataset. However, we think the size of the corpus matches the relatively low complexity of the problem we address. In the most complex case (i.e., when a reference is described by its syntactic position, status and recency), an input can be represented in 120 different ways to predict a multinomial distribution of size 5 (number of referential forms). 
This complexity is much smaller than other problems typically modeled by RNNs. In text production, for instance, an input may be represented by thousands of words to predict a large multinomial distribution over a vocabulary (Sutskever et al., 2014). Additionally, it is important to stress that we actually have a real multinomial distribution to compare with the distribution predicted by the RNN in each situation. We observed that it is possible to compute more fine-grained error costs in our case, which makes the RNN converge faster when it is backpropagated. In sum, we believe that those two factors combined compensate for the size of the dataset. A possible explanation for the non-difference among the Naive Bayes model and RNNs is the use of the referential status features, which perhaps are already enough to model the relation among a reference and its antecedents. VaREG corpus vs. GREC-2.0 corpus Interestingly, our proposed models yielded better performance when trained on the VaREG than on the GREC-2.0 corpus. This shows a difference among the referential choices of both corpora. We conjecture this difference is partly due to differences in text genres, since the VaREG corpus contains texts from three different genres, whereas the GREC-2.0 corpus only has encyclopedic texts. Earlier work has also highlighted the influence of text genre on the amount of individual variation in writers’ choices for referential forms (Ferreira et al., 2016). Coherence and comprehensibility In the second part of the study, we used the best performing model to generate referential forms in texts from the GREC-2.0 corpus, using a roulette-based model sampling from the predicted distributions over referential forms. We evaluated the texts gen575 erated in this way in an experiment in which humans were asked to judge the coherence and comprehensibility of the generated texts, comparing them both with the original references and those produced by a random baseline model. In terms of coherence and comprehensibility, we found that the texts in which the references were generated by our model were not significantly different than the human generated ones, and significantly better than the randomly generated ones. This shows that our solution does not only model the individual variation in the choice of referential form, but that this also does not negatively affect the quality of the texts. This is an important step towards developing new models for automatic text generation that are less predictable and more varied. Acknowledgments This work has been supported by the National Council of Scientific and Technological Development from Brazil (CNPq). References Jennifer E Arnold. 1998. Reference form and discourse patterns. Ph.D. thesis, Stanford University Stanford, CA. Anja Belz. 2008. Automatic generation of weather forecast texts using comprehensive probabilistic generation-space models. Nat. Lang. Eng. 14(4):431–455. Anja Belz, Eric Kow, Jette Viethen, and Albert Gatt. 2010. Empirical methods in natural language generation. Springer-Verlag, Berlin, Heidelberg, chapter Generating Referring Expressions in Context: The GREC Task Evaluation Challenges, pages 294–327. Susan E. Brennan. 1995. Centering attention in discourse. Language and Cognitive Processes 10(2):137–167. Charles B. Callaway and James C. Lester. 2002. Pronominalization in generated discourse and dialogue. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. 
Association for Computational Linguistics, Stroudsburg, PA, USA, ACL ’02, pages 88–95. Wallace L. Chafe. 1994. Discourse, Consciousness, and Time: The Flow and Displacement of Conscious Experience in Speaking and Writing. University of Chicago Press. Christer Clerwall. 2014. Enter the robot journalist: Users’ perceptions of automated content. Journalism Practice 8(5):519–531. Robert Dale and Jette Viethen. 2010. Attributecentric referring expression generation. In Empirical methods in natural language generation, Springer, pages 163–179. Thiago Castro Ferreira, Emiel Krahmer, and Sander Wubben. 2016. Individual variation in the choice of referential form. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California. Charles F Greenbacker and Kathleen F McCoy. 2009. Feature selection for reference generation as informed by psycholinguistic research. In Proceedings of the CogSci 2009 Workshop on Production of Referring Expressions (PRECogsci 2009). Samir Gupta and Sivaji Bandopadhyay. 2009. Junlg-msr: A machine learning approach of main subject reference selection with rule based improvement. In Proceedings of the 2009 Workshop on Language Generation and Summarisation. Association for Computational Linguistics, Stroudsburg, PA, USA, UCNLG+Sum ’09, pages 103–104. Renate Henschel, Hua Cheng, and Massimo Poesio. 2000. Pronominalization revisited. In Proceedings of the 18th conference on Computational linguistics-Volume 1. Association for Computational Linguistics, pages 306–312. Emiel Krahmer and Mari¨et Theune. 2002. Efficient context-sensitive generation of referring expressions. In K. van Deemter and R. Kibble, editors, Information sharing: Reference and presupposition in language generation and interpretation, CSLI, Stanford, CA, pages 223– 264. Solomon Kullback. 1968. Information theory and statistics. Courier Corporation. Jianhua Lin. 1991. Divergence measures based on the shannon entropy. Information Theory, IEEE Transactions on 37(1):145–151. Gr´egoire Mesnil, Xiaodong He, Li Deng, and Yoshua Bengio. 2013. Investigation of recurrent-neural-network architectures and 576 learning methods for spoken language understanding. In INTERSPEECH 2013, 14th Annual Conference of the International Speech Communication Association, Lyon, France, August 25-29, 2013. pages 3771–3775. Ehud Reiter and Robert Dale. 2000. Building natural language generation systems. Cambridge University Press, New York, NY, USA. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada. pages 3104–3112. Kees Van Deemter, Albert Gatt, Roger PG van Gompel, and Emiel Krahmer. 2012. Toward a computational psycholinguistics of reference production. Topics in cognitive science 4(2):166–183. Ching-Long Yeh and Chris Mellish. 1997. An empirical study on the generation of anaphora in chinese. Comput. Linguist. 23(1):171–190. 577
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 578–587, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics How Much is 131 Million Dollars? Putting Numbers in Perspective with Compositional Descriptions Arun Tejasvi Chaganty Computer Science Department Stanford University [email protected] Percy Liang Computer Science Department Stanford University [email protected] Abstract How much is 131 million US dollars? To help readers put such numbers in context, we propose a new task of automatically generating short descriptions known as perspectives, e.g. “$131 million is about the cost to employ everyone in Texas over a lunch period”. First, we collect a dataset of numeric mentions in news articles, where each mention is labeled with a set of rated perspectives. We then propose a system to generate these descriptions consisting of two steps: formula construction and description generation. In construction, we compose formulae from numeric facts in a knowledge base and rank the resulting formulas based on familiarity, numeric proximity and semantic compatibility. In generation, we convert a formula into natural language using a sequence-to-sequence recurrent neural network. Our system obtains a 15.2% F1 improvement over a non-compositional baseline at formula construction and a 12.5 BLEU point improvement over a baseline description generation. 1 Introduction When posed with a mention of a number, such as “Cristiano Ronaldo, the player who Madrid acquired for [. . . ] a $131 million” (Figure 1), it is often difficult to comprehend the scale of large (or small) absolute values like $131 million (Paulos, 1988; Seife, 2010). Studies have shown that providing relative comparisons, or perspectives, such as “about the cost to employ everyone in Texas over a lunch period” significantly improves comprehension when measured in terms of memory retention or outlier detection (Barrio et al., 2016). Cristiano Ronaldo, the player who Madrid acquired for [. . . ] $131 million. $1.3e8 ≈71e3 $/per/yr | {z } 1 × 27e6 per | {z } 2 × 30 min | {z } 3 1 = cost of an employee 2 = population of Texas 3 = time taken for lunch about the cost to employ everyone in Texas over a lunch period. construction generation mention formula perspective Figure 1: An overview of the perspective generation task: given a numeric mention, generate a short description (a perspective) that allows the reader to appreciate the scale of the mentioned number. In our system, we first construct a formula over facts in our knowledge base and then generate a description of that formula. Previous work in the HCI community has relied on either manually generated perspectives (Barrio et al., 2016) or present a fact as is from a knowledge base (Chiacchieri, 2013). As a result, these approaches are limited to contexts in which a relevant perspective already exists. In this paper, we generate perspectives by composing facts from a knowledge base. For example, we might describe $100,000 to be “about twice the median income for a year”, and describe $5 million to be the “about how much the average person makes over their lifetime”. Leveraging compositionality allows us to achieve broad coverage of numbers from a relatively small collection of familiar facts, e.g. median income and a person’s 578 lifetime. Using compositionality in perspectives is also concordant with our understanding of how people learn to appreciate scale. 
Jones and Taylor (2009) find that students learning to appreciate scale do so mainly by anchoring with familiar concepts, e.g. $50,000 is slightly less than the median income in the US, and by unitization, i.e. improvising a system of units that is more relatable, e.g. using the Earth as a measure of mass when describing the mass of Jupiter to be that of 97 Earths. Here, compositionality naturally unitizes the constituent facts: in the examples above, money was unitized in terms of median income, and time was unitized in a person’s lifetime. Unitization and anchoring have also been proposed by Chevalier et al. (2013) as the basis of a design methodology for constructing visual perspectives called concrete scales. When generating compositional perspectives, we must address two key challenges: constructing familiar, relevant and meaningful formulas and generating easy-to-understand descriptions or perspectives. We tackle the first challenge using an overgenerate-and-rank paradigm, selecting formulas using signals from familiarity, compositionality, numeric proximity and semantic similarity. We treat the second problem of generation as a translation problem and use a sequence-tosequence recurrent neural network (RNN) to generate perspectives from a formula. We evaluate individual components of our system quantitatively on a dataset collected using crowdsourcing. Our formula construction method improves on F1 over a non-compositional baseline by about 17.8%. Our generation method improves over a simple baseline by 12.5 BLEU points. 2 Problem statement The input to the perspective generation task is a sentence s containing a numeric mention x: a span of tokens within the sentence which describes a quantity with value x.value and of unit x.unit. In Figure 1, the numeric mention x is “$131 million”, x.value = 1.31e8 and x.unit = $. The output is a description y that puts x in perspective. We have access to a knowledge base K with numeric tuples t = (t.value, t.unit, t.description). Table 1 has a few examples of tuples in our knowledge base. Units (e.g. $/per/yr) are fractions composed either of fundamental units (length, area, volume, mass, time) or of ordinal units (e.g. cars, Description Value Unit cost of an employee 71e3 $/year/person population of Texas 27e3 person number of employees at Google 57e3 person average household size 2.54 person time taken for a basketball game 60 minute average lifetime for a person 79 year a week 1 week time taken for lunch 30 minute cost of property in the Bay area 1e3 $/ft2 area of a city block 10e3 m2 Table 1: A subset of our knowledge base of numeric tuples. Tuples with fractional units (e.g. $/ft2) can be combined with other tuples to create formulas. people, etc.). The first step of our task, described in Section 4, is to construct a formula f over numeric tuples in K that has the same value and unit as the numeric mention x. A valid formula comprises of an arbitrary multiplier f.m and a sequence of tuples f.tuples. The value of a formula, f.value, is simply the product of the multiplier and the values of the tuples, and the unit of the formula, f.unit, is the product of the units of the tuples. In Figure 1, the formula has a multiplier of 1 and is composed of tuples 1 , 2 and 3 ; it has a value of 1.3e8 and a unit of $. The second step of our task, described in Section 5, is to generate a perspective y, a short noun phrase that realizes f. Typically, the utterance will be formed using variations of the descriptions of the tuples in f.tuples. 
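A minimal sketch of this representation (the class and attribute names are ours; units are encoded as exponents over the fundamental and ordinal units, so that $/person/yr becomes {'$': 1, 'person': -1, 'year': -1}):

import math
from collections import Counter
from dataclasses import dataclass

@dataclass
class KBTuple:
    value: float
    unit: Counter        # e.g. Counter({'$': 1, 'person': -1, 'year': -1})
    description: str

@dataclass
class Formula:
    multiplier: float
    tuples: list         # list of KBTuple

    @property
    def value(self):
        # product of the multiplier and the tuple values
        return self.multiplier * math.prod(t.value for t in self.tuples)

    @property
    def unit(self):
        # product of the tuple units, with cancelled exponents dropped
        exponents = Counter()
        for t in self.tuples:
            exponents.update(t.unit)
        return Counter({u: e for u, e in exponents.items() if e})

Under this encoding, a formula is valid for a mention x exactly when its unit equals x.unit and its multiplier is chosen so that its value equals x.value.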
3 Dataset construction We break our data collection task into two steps, mirroring formula selection and description generation: first, we collect descriptions of formulas constructed exhaustively from our knowledge base (for generation), and then we use these descriptions to collect preferences for perspectives (for construction). Collecting the knowledge base. We manually constructed a knowledge base with 142 tuples and 9 fundamental units1 from the United States Bu1Namely, length, area, volume, time, weight, money, people, cars and guns. These units were chosen because they 579 reau of Statistics, the orders of magnitude topic on Wikipedia and other Wikipedia pages. The facts chosen are somewhat crude; for example, though “the cost of an employee” is a very context dependent quantity, we take its value to be the median cost for an employer in the United States, $71,000. Presenting facts at a coarse level of granularity makes them more familiar to the general reader while still being appropriate for perspective generation: the intention is to convey the right scale, not necessarily the precise quantity. Collecting numeric mentions. We collected 53,946 sentences containing numeric mentions from the newswire section of LDC2011T07 using simple regular expression patterns like $([0-9]+(,[0-9]+)*(.[0-9]+)? ((hundred)|(thousand)|(million)| (billion)|(trillion))). The values and units of the numeric mentions in each sentence were normalized and converted to fundamental units (e.g. from miles to length). We then randomly selected up to 200 mentions of each of the 9 types in bins with boundaries 10−3, 1, 103, 106, 109, 1012 leading to 4,931 mentions that are stratified by unit and magnitude.2 Finally, we chose mentions which could be described by at least one numeric expression, resulting in the 2,041 mentions that we use in our experiments (Figure 2). We note that there is a slight bias towards mentions of money and people because these are more common in the news corpus. Generating formulas. Next, we exhaustively generate valid formulas from our knowledge base. We represent the knowledge base as a graph over units with vertices and edges annotated with tuples (Figure 3). Every vertex in this graph is labeled with a unit u and contains the set of tuples with this unit: {t ∈K : t.unit = u}. Additionally, for every vertex in the graph with a unit of the form u1/u2, where u2 has no denominator, we add an edge from u1/u2 to u1, annotated with all tuples of type u2: in Figure 3 we add an edge from money/person to money annotated with the three person tuples in Table 1. The set of formulas with unit u is obtained by enumerating all paths in the graph which terminate at the vertex u. The multiplier of the formula is set so that the value of were well represented in the corpus. 2Some types had fewer than 200 mentions for some bins. 10−2 10−1 100 101 102 103 104 105 106 107 108 109 1010 Mention value 0 50 100 150 200 250 300 350 400 450 Number of mentions weight area car time gun volume person length money Figure 2: A histogram of the absolute values of numeric mentions by type. There are 100–300 mentions of each unit. person (3 tuples) time (4 tuples) area (1 tuple) money (0 tuples) money/time/person (1 tuples) money/person (0 tuples) money/area (1 tuples) time (4 tuples) person (3 tuples) area (1 tuples) Figure 3: The graph over tuples generated from the knowledge base subset in Table 1. the formula matches the value of the mention. 
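Building on the sketch in Section 2, the path enumeration can be written as below (again our own illustration: unit names are assumed to be already normalized to the fundamental and ordinal units, and formula length is capped here at three tuples):

from collections import Counter

def multiply_units(u1, u2):
    out = Counter(u1)
    out.update(u2)
    return Counter({k: v for k, v in out.items() if v})

def enumerate_formulas(kb, mention_value, target_unit, max_tuples=3):
    # kb: list of KBTuple; target_unit: Counter, e.g. Counter({'$': 1}).
    # Each recursive step multiplies in a tuple with a single positive unit that
    # still appears in the denominator of the running product, i.e. it follows
    # one edge of the unit graph of Figure 3.
    target = Counter(target_unit)

    def extend(seq, unit):
        if unit == target:
            value = 1.0
            for t in seq:
                value *= t.value
            yield Formula(multiplier=mention_value / value, tuples=list(seq))
            return
        if len(seq) == max_tuples:
            return
        denominators = {u for u, e in unit.items() if e < 0}
        for t in kb:
            if len(t.unit) == 1 and set(t.unit) <= denominators \
                    and all(e == 1 for e in t.unit.values()):
                yield from extend(seq + [t], multiply_units(unit, t.unit))

    for t in kb:
        yield from extend([t], Counter(t.unit))

With time units normalized, applying enumerate_formulas to the mention of Figure 1 (value 1.31e8, unit $) should recover, among others, the cost-of-an-employee x population-of-Texas x time-taken-for-lunch formula with a multiplier close to 1.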
For example, the formula in Figure 1 was constructed by traversing the graph from money/time/person to money: we start with a tuple in money/time/person (cost of an employee) and then multiply by a tuple with unit time (time for lunch) and then by unit person (population of Texas), thus traversing two edges to arrive at money. Using the 142 tuples in our knowledge base, we generate a total of 1,124 formulas sans multiplier. Collecting descriptions of formulas. The main goal of collecting descriptions of formulas is to train a language generation system, though these descriptions will also be useful while collecting training data for formula selection. For every unit in our knowledge base and every value in the set {10−7, 10−6 . . . , 1010}, we generated all valid formulas. We further restricted this set to formulas with a multiplier between 1/100 and 100, based on the rationale that human cognition of scale sharply drops beyond an order of magnitude (Tretter et al., 580 Figure 4: A screenshot of the crowdsourced task to generate natural language descriptions, or perspectives, from formulas. 2006). In total, 5000 formulas were presented to crowdworkers on Amazon Mechanical Turk, with a prompt asking them to rephrase the formula as an English expression (Figure 4).3 We obtained 5–7 descriptions of each formula, leading to a total of 31,244 unique descriptions. Collecting data on formula preference. Finally, given a numeric mention, we ask crowdworkers which perspectives from the description dataset they prefer. Note that formulas generated for a particular mention may differ in multiplier with a formula in the description dataset. We thus relax our constraints on factual accuracy while collecting this formula preference dataset: for each mention x, we choose a random perspective from the description dataset described above corresponding to a formula whose value is within a factor of 2 from the mention’s value, x.value. A smaller factor led to too many mentions without a valid comparison, while a larger one led to blatant factual inaccuracies. The perspectives were partitioned into sets of four and displayed to crowdworkers along with a “None of the above” option with the following prompt: “We would like you to pick up to two of these descriptions that are useful in understanding the scale of the highlighted number” (Figure 5). A formula is rated to be useful by simple majority.4 Figure 6 provides a summary of the dataset collected, visualizing how many formulas are useful, controlling for the size of the formula. The exhaustive generation procedure produces a large number of spurious formulas like “20 × trash generated in the US × a minute × number of employees on Medicare”. Nonetheless, compositional 3Crowdworkers were paid $0.08 per description. 4Crowdworkers were paid $0.06 to vote on each set of perspectives. Figure 5: A screenshot of the crowdsourced task to identify which formulas are useful to crowdworkers in understanding the highlighted mentioned number. formulas are quite useful in the appropriate context; Table 2 presents some mentions with highly rated perspectives and formulas. 4 Formula selection We now turn to the first half of our task: given a numeric mention x and a knowledge base K, select a formula f over K with the same value and unit as the mention. It is easy to generate a very large number of formulas for any mention. For the example, “Cristiano Ronaldo, the player who Madrid acquired for [...] 
$131 million.”, the small knowledge base in Table 1 can generate the 12 different formulas,5 including the following: 1. 1 × the cost of an employee × the population of Texas × the time taken for lunch. 2. 400 × the cost of an employee × average household size × a week. 3. 1 × the cost of an employee × number of employees at Google × a week. 4. 1 × cost of property in the Bay Area × area of a city block. Some of the formulas above are clearly worse than others: the key challenge is picking a formula that will lead to a meaningful and relevant perspective. Criteria for ranking formulas. We posit the following principles to guide our choice in features (Table 3). 5The full knowledge base described in Section 3 can generate 242 formulas with the unit money (sans multiplier). 581 Sentence That’s about ... Formula The Billings-based Stillwater Mining produced 601,000 ounces of platinum. 4 times the weight of an elephant. 4 × weight of an elephant. Authorities estimate there are about 60 million guns in Yemen. twice the gun ownership of the population of Texas 2 × gun ownership × population of Texas Water is flowing into Taihu lake at a rate of 150 cubic meters per second. how much water would flow from a tap left on for a week. rate of flow of water from tap × a week The bank had held auctions, selling around US$1 billion worth of three-month bills. half the cost of employing the population of Texas for a work day. 1/2 × cost of an employee × time taken for a work day × population of Texas The government[s] have promised to rent about 1.2 million sq. feet. the area of forest logged in a single minute 90 × area of forest logged × a minute Table 2: Examples of numeric mentions, perspectives and their corresponding formulas in the dataset. All the examples except the last one are rated to be useful by crowdworkers. 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Fraction of times formula is rated useful 0 2 4 6 8 10 12 14 16 Number of formulas Formulas of length 1 Formulas of length 2 Formulas of length 3 Figure 6: A histogram comparing formula length to ratings of usefulness (clipped for readability). Non-compositional perspectives with a single tuple are broadly useful. Useful compositional perspectives tend to be more context-specific than non-compositional ones, and many of the formulas that can be generated from the knowledge base are spurious. Proximity: A numeric perspective should be within an order of magnitude of the mentioned value. Conception of scale quickly fails with quantities that exceed “human scales” (Tretter et al., 2006): numbers that are significantly away from 1/10 and 10. We use this principle to prune formulas with multipliers not in the range [1/100, 100] (e.g. example 2 above) and introduce features for numeric proximity. Type Features # Proximity sign(log(f.m)), | log(f.m)| 1 Familiarity I[t] 142 Compatibility I[t, t′] 20022 Similarity wvec(s)⊤ wvec(t.description) 1 Table 3: Feature templates used to score a formulas f and their counts (#), where f.m is the formula’s multiplier and t, t′ ∈f.tuples are tuples in the formula. Familiarity: A numeric perspective should be composed of concepts familiar to the reader. The most common technique cited by those who do well at scale cognition tests is reasoning in terms of familiar objects (Tretter et al., 2006; Jones and Taylor, 2009; Chevalier et al., 2013). 
Intuitively, the average American reader may not know exactly how many people are in Texas, but is familiar enough with the quantity to effectively reason using Texas’ population as a unit. On the other hand, it is less likely that the same reader is familiar with even the concept of Angola’s population. Of course, because it is so personal, familiarity is difficult to capture. With additional information about the reader, e.g. their location, it is possible to personalize the chosen tuples (Kim et al., 2016). Without this information, we back off to a global preference on tuples by using indicator features for each tuple in the formula. 582 Formula Score Studies estimate 36,000 people die on average each year from seasonal flu. 1/4 × global death rate × a day 0.67 5 × death rate in the US × a day 0.64 1/3 × number of employees at Microsoft 0.60 Gazprom’s exports to Europe [. . . ] will total 60 billion cubic meters . . . oil produced by the US × average lifetime 0.78 average coffee consumption × population of the world × average lifetime 0.78 2 × average coffee consumption × population of Asia × average lifetime 0.73 Table 4: The top three examples outputted by the ranking system with the scores reported by the system. Compatibility: Similarly, some tuple combinations are more natural (“median income × a month”) while others are less so (“weight of a person × population of Texas”). We model compatibility between tuples in a formula using an indicator feature. Similarity: A numeric perspective should be relevant to the context. Apart from helping with scale cognition, a perspective should also place the mentioned quantity in appropriate context: for example, NASA’s budget of $17 billion could be described as 0.1% of the United States’ budget or the amount of money it could cost to feed Los Angeles for a year. While both perspectives are appropriate, the former is more relevant than the latter. We model context relevance using word vector similarity between the tuples of the formula and the sentence containing the mention as a proxy for semantic similarity. Word vectors for a sentence or tuple description are computed by taking the mean of the word vectors for every non-stop-word token. The word vectors at the token level are computed using word2vec (Mikolov et al., 2013). Evaluation. We train a logistic regression classifier using the features described in Table 3 using the perspective ratings collected in Section 3. Recall that the formula for each perspective in the dataset is assigned a positive (“useful”) label if it was labeled to be useful to the majority of the workers. Table 5a presents results on classifying formulas as useful with a feature ablation.6 Familiarity and compatibility are the most useful features when selecting formulas, each having a significant increase in F1 over the proximity baseline. There are minor gains from combining these two features. On the other hand, semantic similarity does not affect performance relative to the baseline. We find that this is mainly due to the disproportionate number of unfamiliar formulas present in the dataset that drown out any signal. Table 4 presents two examples of the system’s ranking of formulas. 5 Perspective generation Our next goal is to generate natural language descriptions, also known as perspectives, given a formula. Our approach models the task as a sequence-to-sequence translation task from formulas to natural language. 
We first describe a rulebased baseline and then describe a recurrent neural network (RNN) with an attention-based copying mechanism (Jia and Liang, 2016). Baseline. As a simple approach to generate perspectives, we just combine tuples in the formula with the neutral prepositions of and for, e.g. “1/5th of the cost of an employee for the population of Texas for the time taken for lunch.” Sequence-to-sequence RNN. We use formulaperspective pairs from the dataset to create a sequence-to-sequence task: the input is composed using the formula’s multiplier and descriptions of its tuples connected with the symbol ‘*’; the output is the perspective (Figure 7). Our system is based on the model described in Jia and Liang (2016). Given a sequence of input tokens (x = (xi)), the model computes a contextdependent vector (b = (bi)) for each token using a bidirectional RNN with LSTM units. We then generate the output sequence (yj) left to right as follows. At each output position, we have a hidden state vector (sj) which is used to produce an “attention” distribution (αj = (αji)) over input tokens: αji = Attend(sj, bi). This distribution is used to generate the output token and update the hidden state vector. To generate the token, we ei6Significance results are computed by the bootstrap test as described in Berg-Kirkpatrick et al. (2012) using the output of classifiers trained on the entire training set. 583 Feature set Train Dev P R F1 P R F1 Proximity 56.4 48.7 52.2 56.3 48.8 52.3 Similarity 65.1 34.9 45.4 65.1 34.9 45.4 Familiarity∗ 70.5 63.5 66.8 69.6 62.9 66.1 Compatibility+ 66.9 74.4 70.4 65.4 73.1 69.0 F + C† 73.8 70.3 72.1 71.5 68.9 70.1 F + C + P† 73.8 70.3 72.1 71.5 68.9 70.1 F + C + P + S† 73.8 70.3 72.0 71.4 68.6 69.9 (a) the formula construction system. Precision, Recall and F1 are crossvalidated on 10-folds. ∗significant F1 versus P and S with p < 0.01. +significant F1 versus P, S and F with p < 0.01. †significant F1 versus P, S, F and C with p < 0.05. System Train BLEU Test BLEU Baseline 65.00 57.32 RNN∗ 81.50 69.79 (b) the description generation system. ∗significant BLEU score versus the baseline with p < 0.01. Table 5: Evaluation of perspective generation subsystems. 7 * the cost of an employee * a week xi input bidirectional RNN bi sj attend αji output copy 7 times the cost of employing one person for one week yj output input encoding state vector attention vector Figure 7: We model description generation as a sequence transduction task, with input as formulas (at bottom) and output as perspectives (at top). We use a RNN with an attention-based copying mechanism. ther sample a word from the current state or copy a word from the input using attention. Allowing our model to copy from the input is helpful for our task, since many of the entities are repeated verbatim in both input and output. We refer the reader to Jia and Liang (2016) for more details. Evaluation. We split the perspective description dataset into a training and test set such that no formula in the test set contains the same set of tuples as a formula in the training set.7 Table 5b compares the performance of the baseline and sequence-to-sequence RNN using BLEU. 7Note that formulas with the same set of tuples can occur multiple times in the either the training or test set with different multipliers. The sequence-to-sequence RNN performs significantly better than the baseline, producing more natural rephrasings. Table 6 shows some output generated by the system (see Table 6). 
6 Human evaluation In addition to the automatic evaluations for each component of the system, we also ran an end-toend human evaluation on an independent set of 211 mentions collected using the same methodology described in Section 3. Crowdworkers were asked to choose between perspectives generated by our full system (LR+RNN) and those generated by the baseline of picking the numerically closest tuple in the knowledge base (BASELINE). They could also indicate if either both or none of the shown perspectives appeared useful.8 Table 7 summarizes the results of the evaluation and an error analysis conducted by the authors. Errors were characterized as either being errors in generation (e.g. Table 6) or violations of the criteria in selecting good formulas described in Section 4 (Table 7c). The other category mostly contains cases where the output generated by LR+RNN appears reasonable by the above criteria but was not chosen by a majority of workers. A few of the mentions shown did not properly describe a numeric quantity, e.g. “...claimed responsibility for a 2009 gun massacre ...” and were labeled invalid mentions. The most common error is the selection of a formula that is not contextually relevant to the mentioned text because no such 8Crowdworkers were paid $0.06 per to choose a perspective for each mention. Each mention and set of perspectives were presented to 5 crowdworkers. 584 Input formula Generated perspective 7 × the cost of an employee × a week 7 times the cost of employing one person for one week 1/10 × the cost of an employee × the population of California × the time taken for a football game one tenth the cost of an employee during a football game by the population of California 1 × coffee consumption × a minute × population of the world the amount of coffee consumed in one minute on the world 6 × weight of a person × population of California six times the weight of the people who is worth Table 6: Examples of perspectives generated by the sequence-to-sequence RNN. The model is able to capture rephrasings of fact descriptions and reordering of the facts. However, it often confuses prepositions and, very rarely, can produce nonsensical utterances. LR+RNN BASELINE perspective rated useful? # Yes Yes 31 Yes No 63 No Yes 61 No No 56 (a) A summary of the number of times the perspective generated by LR+RNN or BASELINE was rated useful by a majority of crowdworkers. Cause of error # Proximity 9 Familiarity 6 Compatibility 8 Similarity 49 Generation 24 Other 14 Invalid mention 7 Total 117 (b) An analysis of errors produced by LR+RNN when its perspectives were not rated useful. Errors caused by poor formula selection are further categorized by selection criteria violated. Cat. Mention LR+RNN perspective (vs. BASELINE) Prox. ...ready to ship about 2,300 miles across the Pacific to the mainland ... three times the distance from San Francisco to Los Angeles (vs. the distance from San Francisco to Dallas TX). Sim. China had disposed of about 100,000 tons of CFCs” ... one fifth of the weight of garbage produced in the United States by the population of Texas in one week. (vs. the average food wasted every year). Fam. ...the project could save New England ratepayers $4.6 billion in energy costs over 25 years. one eighth the cost of employing the population of Asia for one hour. (vs. the construction cost of The Cosmopolitan in Las Vegas.) Comp. Hominids started shaping stone tools about 2.6 million years ago. 5 times the total time taken to build the number of cars registered. (vs. 
17000 times the average lifetime for a tree). (c) Examples of errors categorized by the criteria defined in Section 4. Table 7: Results of an end-to-end human evaluation of the output produced by our perspective generation system (LR+RNN) and a baseline (BASELINE) that picks the numerically closest tuple in the knowledge base for each mention. 585 Mention Perspective (that’s about...) + In 2007, Turkmenistan exported 50 billion cubic meters of gas to Russia. the amount of oil produced by the US during a lifetime + It can carry up to 10 nuclear warheads and has a range of 8,000 km. the distance from San Francisco to Beijing the 2.7 million square feet that Mission Bay’s largest developer is entitled to build twice the area of forest logged in a minute Las Vegas Sands claims the 10.5 million square feet is the largest building in Asia. one half of an area of an average farm Table 8: Examples of perspectives generated by our system that frame the mentioned quantity to be larger or smaller (top to bottom) than initially the authors thought. formula exists within the knowledge base (within an order of magnitude of the mentioned value): a larger knowledge base would significantly decrease these errors. 7 Related work and discussion We have proposed a new task of perspective generation. Compositionality is the key ingredient of our approach, which allows us synthesize information across multiple sources of information. At the same time, compositionality also poses problems for both formula selection and description generation. On the formula selection side, we must compose facts that make sense. For semantic compatibility between the mention and description, we have relied on simple word vectors (Mikolov et al., 2013), but more sophisticated forms of semantic relations on larger units of text might yield better results (Bowman et al., 2015). On the description generation side, there is a long line of work in generating natural language descriptions of structured data or logical forms Wong and Mooney (2007); Chen and Mooney (2008); Lu and Ng (2012); Angeli et al. (2010). We lean on the recent developments of neural sequence-to-sequence models (Sutskever et al., 2014; Bahdanau et al., 2014; Luong et al., 2015). Our problem bears some similarity to the semantic parsing work of Wang et al. (2015), who connect generated canonical utterances (representing logical forms) to real utterances. If we return to our initial goal of helping people understand numbers, there are two important directions to explore. First, we have used a small knowledge base, which limits the coverage of perspectives we can generate. Using Freebase (Bollacker et al., 2008) or even open information extraction (Fader et al., 2011) would dramatically increase the number of facts and therefore the scope of possible perspectives. Second, while we have focused mostly on basic compatibility, it would be interesting to explore more deeply how the juxtaposition of facts affects framing. Table 8 presents several examples generated by our system that frame the mentioned quantities to be larger or smaller than the authors originally thought. We think perspective generation is an exciting setting to study aspects of numeric framing (Teigen, 2015). Reproducibility All code, data, and experiments for this paper are available on the CodaLab platform at https: //worksheets.codalab.org/worksheets/ 0x243284b4d81d4590b46030cdd3b72633/. 
Acknowledgments We would like to thank Glen Chiacchieri for providing us information about the Dictionary of Numbers, Maneesh Agarwala for useful discussions and references, Robin Jia for sharing code for the sequence-to-sequence RNN, and the anonymous reviewers for their constructive feedback. This work was partially supported by the Sloan Research fellowship to the second author. References G. Angeli, P. Liang, and D. Klein. 2010. A simple domain-independent probabilistic approach to generation. In Empirical Methods in Natural Language Processing (EMNLP). D. Bahdanau, K. Cho, and Y. Bengio. 2014. Neural machine translation by jointly learn586 ing to align and translate. arXiv preprint arXiv:1409.0473 . P. J. Barrio, D. G. Goldstein, and J. M. Hofman. 2016. Improving the comprehension of numbers in the news. In Conference on Human Factors in Computing Systems (CHI). T. Berg-Kirkpatrick, D. Burkett, and D. Klein. 2012. An empirical investigation of statistical significance in NLP. In Empirical Methods in Natural Language Processing (EMNLP). pages 995–1005. K. Bollacker, C. Evans, P. Paritosh, T. Sturge, and J. Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In International Conference on Management of Data (SIGMOD). pages 1247– 1250. S. Bowman, G. Angeli, C. Potts, and C. D. Manning. 2015. A large annotated corpus for learning natural language inference. In Empirical Methods in Natural Language Processing (EMNLP). D. L. Chen and R. J. Mooney. 2008. Learning to sportscast: A test of grounded language acquisition. In International Conference on Machine Learning (ICML). pages 128–135. F. Chevalier, R. Vuillemot, and G. Gali. 2013. Using concrete scales: A practical framework for effective visual depiction of complex measures. IEEE Transactions on Visualization and Computer Graphics 19:2426–2435. G. Chiacchieri. 2013. Dictionary of numbers. http://www.dictionaryofnumbers. com/. A. Fader, S. Soderland, and O. Etzioni. 2011. Identifying relations for open information extraction. In Empirical Methods in Natural Language Processing (EMNLP). R. Jia and P. Liang. 2016. Data recombination for neural semantic parsing. In Association for Computational Linguistics (ACL). M. G. Jones and A. R. Taylor. 2009. Developing a sense of scale: Looking backward. Journal of Research in Science Teaching 46:460–475. Y. Kim, J. Hullman, and M. Agarwala. 2016. Generating personalized spatial analogies for distances and areas. In Conference on Human Factors in Computing Systems (CHI). W. Lu and H. T. Ng. 2012. A probabilistic forestto-string model for language generation from typed lambda calculus expressions. In Empirical Methods in Natural Language Processing (EMNLP). pages 1611–1622. M. Luong, H. Pham, and C. D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Empirical Methods in Natural Language Processing (EMNLP). pages 1412–1421. T. Mikolov, K. Chen, G. Corrado, and Jeffrey. 2013. Efficient estimation of word representations in vector space. arXiv . J. A. Paulos. 1988. Innumeracy: Mathematical illiteracy and its consequences. Macmillan. C. Seife. 2010. Proofiness: How you’re being fooled by the numbers. Penguin. I. Sutskever, O. Vinyals, and Q. V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (NIPS). pages 3104–3112. K. H. Teigen. 2015. Framing of numeric quantities. The Wiley Blackwell Handbook of Judgment and Decision Making pages 568–589. T. R. Tretter, M. 
G. Jones, and J. Minogue. 2006. Accuracy of scale conceptions in science: Mental maneuverings across many orders of spatial magnitude. Journal of Research in Science Teaching 43:1061–1085. Y. Wang, J. Berant, and P. Liang. 2015. Building a semantic parser overnight. In Association for Computational Linguistics (ACL). Y. W. Wong and R. J. Mooney. 2007. Generation by inverting a semantic parser that uses statistical machine translation. In Human Language Technology and North American Association for Computational Linguistics (HLT/NAACL). pages 172–179. 587
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 588–598, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Generating Factoid Questions With Recurrent Neural Networks: The 30M Factoid Question-Answer Corpus Iulian Vlad Serban∗◦ University of Montreal 2920 chemin de la Tour, Montr´eal, QC, Canada Alberto Garc´ıa-Dur´an∗⋄ Universit´e de Technologie de Compi`egne CNRS Rue du Dr Schweitzer, Compigne, France Caglar Gulcehre◦ University of Montreal 2920 chemin de la Tour, Montr´eal, QC, Canada Sungjin Ahn◦ University of Montreal 2920 chemin de la Tour, Montr´eal, QC, Canada Sarath Chandar◦ University of Montreal 2920 chemin de la Tour, Montr´eal, QC, Canada Aaron Courville◦ University of Montreal 2920 chemin de la Tour, Montr´eal, QC, Canada Yoshua Bengio†◦ University of Montreal 2920 chemin de la Tour, Montr´eal, QC, Canada Abstract Over the past decade, large-scale supervised learning corpora have enabled machine learning researchers to make substantial advances. However, to this date, there are no large-scale questionanswer corpora available. In this paper we present the 30M Factoid QuestionAnswer Corpus, an enormous questionanswer pair corpus produced by applying a novel neural network architecture on the knowledge base Freebase to transduce facts into natural language questions. The produced question-answer pairs are evaluated both by human evaluators and using automatic evaluation metrics, including well-established machine translation and sentence similarity metrics. Across all evaluation criteria the questiongeneration model outperforms the competing template-based baseline. Furthermore, when presented to human evaluators, the generated questions appear to be comparable in quality to real human-generated questions. * First authors. ◦Email: {iulian.vlad.serban,caglar.gulcehre, sungjin.ahn,sarath.chandar.anbil.parthipan, aaron.courville,yoshua.bengio}@umontreal.ca ⋄Email: [email protected] † CIFAR Senior Fellow 1 Introduction A major obstacle for training question-answering (QA) systems has been due to the lack of labeled data. The question answering field has focused on building QA systems based on traditional information retrieval procedures (Lopez et al., 2011; Dumais et al., 2002; Voorhees and Tice, 2000). More recently, researchers have started to utilize large-scale knowledge bases (KBs) (Lopez et al., 2011), such as Freebase (Bollacker et al., 2008), WikiData (Vrandeˇci´c and Kr¨otzsch, 2014) and Cyc (Lenat and Guha, 1989).1 Bootstrapping QA systems with such structured knowledge is clearly beneficial, but it is unlikely alone to overcome the lack of labeled data. To take into account the rich and complex nature of human language, such as paraphrases and ambiguity, it would appear that labeled question and answer pairs are necessary. The need for such labeled pairs is even more critical for training neural network-based QA systems, where researchers until now have relied mainly on hand-crafted rules and heuristics to synthesize artificial QA corpora (Bordes et al., 2014; Bordes et al., 2015). Motivated by these recent developments, in this paper we focus on generating questions based on the Freebase KB. We frame question generation as a transduction problem starting from a Freebase fact, represented by a triple consisting of a subject, a relationship and an object, which is trans1Freebase is now a part of WikiData. 
588 duced into a question about the subject, where the object is the correct answer (Bordes et al., 2015). We propose several models, largely inspired by recent neural machine translation models (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015), and we use an approach similar to Luong et al. (2015) for dealing with the problem of rare-words. We evaluate the produced questions in a human-based experiment as well as with respect to automatic evaluation metrics, including the well-established machine translation metrics BLEU and METEOR and a sentence similarity metric. We find that the question-generation model outperforms the competing template-based baseline, and, when presented to untrained human evaluators, the produced questions appear to be indistinguishable from real human-generated questions. This suggests that the produced questionanswer pairs are of high quality and therefore that they will be useful for training QA systems. Finally, we use the best performing model to construct a new factoid question-answer corpus – The 30M Factoid Question-Answer Corpus – which is made freely available to the research community.2 2 Related Work Question generation has attracted interest in recent years with notable work by Rus et al. (2010), followed by the increasing interest from the Natural Language Generation (NLG) community. A simple rule-based approach was proposed in different studies as wh-fronting or wh-inversion (Kalady et al., 2010; Ali et al., 2010). This comes at the disadvantage of not making use of the semantic content of words apart from their syntactic role. The problem of determining the question type (e.g. that a Where-question should be triggered for locations), which requires knowledge of the category type of the elements involved in the sentence, has been addressed in two different ways: by using named entity recognizers (Mannem et al., 2010; Yao and Zhang, 2010) or semantic role labelers (Chen et al., 2009). In Curto et al. (2012) questions are split into classes according to their syntactic structure, prefix of the question and the category of the answer, and then a pattern is learned to generate questions for that class of questions. After the identification of key points, Chen et al. (2009) apply handcrafted-templates to generate questions framed in the right target expression by 2www.agarciaduran.org following the analysis of Graesser et al. (1992), who classify questions according to a taxonomy consisting of 18 categories. The works discussed so far propose ways to map unstructured text to questions. This implies a two-step process: first, transform a text into a symbolic representation (e.g. a syntactic representation of the sentence), and second, transform the symbolic representation of the text into the question (Yao et al., 2012). On the other hand, going from a symbolic representation (structured information) to a question, as we will describe in the next section, only involves the second step. Closer to our approach is the work by Olney et al. (2012). They take triples as input, where the edge relation defines the question template and the head of the triple replaces the placeholder token in the selected question template. In the same spirit, Duma et al. (2013) generate short descriptions from triples by using templates defined by the relationship and replacing accordingly the placeholder tokens for the subject and object. Our baseline is similar to that of Olney et al. (2012), where a set of relationship-specific templates are defined. 
These templates include placeholders to replace the string of the subject. The main difference with respect to their work is that our baseline does not explicitly define these templates. Instead, each relationship has as many templates as there are different ways of framing a question with that relationship in the training set. This yields more diverse and semantically richer questions by effectively taking advantage of the fact-question pairs, which Olney et al. did not have access to in their experiments. Unlike the work by Berant and Liang (2014), which addresses the problem of deterministically generating a set of candidate logical forms with a canonical realization in natural language for each, our work addresses the inverse problem: given a logical form (fact) it outputs the associated question. It should also be noted that recent work in question answering have used simpler rule-based and template-based approaches to generate synthetic questions to address the lack of question-answer pairs to train their models (Bordes et al., 2014; Bordes et al., 2015). 589 3 Task Definition 3.1 Knowledge Bases In general, a KB can be viewed as a multirelational graph, which consists of a set of nodes (entities) and a set of edges (relationships) linking nodes together. In Freebase (Bollacker et al., 2008) these relationships are directed and always connect exactly two entities. For example, in Freebase the two entities fires creek and nantahala national forest are linked together by the relationship contained by. Since the triple {fires creek, contained by, nantahala national forest} represents a complete and self-contained piece of information, it is also called a fact where fires creek is the subject (head of the edge), contained by is the relationship and nantahala national forest is the object (tail of the edge). 3.2 Transducing Facts to Questions We aim to transduce a fact into a question, such that: 1. The question is concerned with the subject and relationship of the fact, and 2. The object of the fact represents a valid answer to the generated question. We model this in a probabilistic framework as a directed graphical model: P(Q|F) = N Y n=1 P(wn|w<n, F), (1) where F = (subject, relationship, object) represents the fact, Q = (w1, . . . , wN) represents the question as a sequence of tokens w1, . . . , wN, and w<n represents all the tokens generated before token wn. In particular, wN represents the question mark symbol ’?’. 3.3 Dataset We use the SimpleQuestions dataset (Bordes et al., 2015) in order to train our models. This is by far the largest dataset of question-answer pairs created by humans based on a KB. It contains over 100K question-answer pairs created by users on Amazon Mechanical Turk3 in English based on the Freebase KB. In order to create the questions, human participants were shown one whole Freebase fact 3www.mturk.com Questions Entities Relationships Words 108,442 131,684 1,837 ∼77k Table 1: Statistics of SimpleQuestions at a time and they were asked to phrase a question such that the object of the presented fact becomes the answer of the question.4 Consequently, both the subject and the relationship are explicitly given in each question. But indirectly characteristics of the object may also be given since the humans have an access to it as well. Often when phrasing a question the annotators tend to be more informative about the target object by giving specific information about it in the question produced. 
For example, in the question What city is the American actress X from? the city name given in the object informs the human participant that it was in America - information, which was not provided by either the subject or relationship of the fact. We have also observed that the questions are often ambiguous: that is, one can easily come up with several possible answers that may fit the specifications of the question. Table 1 shows statistics of the dataset. 4 Model We propose to attack the problem with the models inspired by the recent success of neural machine translation models (Sutskever et al., 2014; Bahdanau et al., 2015). Intuitively, one can think of the transduction task as a “lossy translation” from structured knowledge (facts) to human language (questions in natural language), where certain aspects of the structured knowledge is intentionally left out (e.g. the name of the object). These models typically consist of two components: an encoder, which encodes the source phrase into one or several fixed-size vectors, and a decoder, which decodes the target phrase based on the results of the encoder. 4.1 Encoder In contrast to the neural machine translation framework, our source language is not a proper language but instead a sequence of three variables making up a fact. We propose an encoder sub-model, which encodes each atom of the fact into an embedding. Each atom {s, r, o}, may 4It is not necessary for the object to be the only answer, but it is required to be one of the possible answers. 590 stand for subject, relationship and object, respectively, of a fact F = (s, r, o) is represented as a 1-of-K vector xatom, whose embedding is obtained as eatom = Einxatom, where Ein ∈RDEnc×K is the embedding matrix of the input vocabulary and K is the size of that vocabulary. The encoder transforms this embedding into Enc(F)atom ∈RHDec as Enc(F)atom = WEnceatom, where WEnc ∈RHDec×DEnc. This embedding matrix, Ein, could be another parameter of the model to be learned, however, as discussed later (see Section 4.3), we have learned it separately and beforehand with TransE (Bordes et al., 2013), a model aimed at modeling this kind of multi-relational data. We fix it and do not allow the encoder to tune it during training. We call fact embedding Enc(F) ∈R3HDec the concatenation [Enc(F)s, Enc(F)r, Enc(F)o] of the atom embeddings, which is the input for the next module. 4.2 Decoder For the decoder, we use a GRU recurrent neural network (RNN) (Cho et al., 2014) with an attention-mechanism (Bahdanau et al., 2015) on the encoder representation to generate the associated question Q to that fact F. Recently, it has been shown that the GRU RNN performs equally well across a range of tasks compared to other RNN architectures, such as the LSTM RNN (Greff et al., 2015). The hidden state of the decoder RNN is computed at each time step n as: gr n = σ(WrEoutwn−1 + Crc(F, hn−1) + Urhn−1) (2) gu n = σ(WuEoutwn−1 + Cuc(F, hn−1) + Uuhn−1) (3) ˜h = tanh(WEoutwn−1 + Cc(F, hn−1) (4) + U(gr n ◦hn−1)) hn = gu n ◦hn−1 + (1 −gu n) ◦˜h, (5) where σ is the sigmoid function, s.t. σ(x) ∈[0, 1], and the circle, ◦, represents element-wise multiplication. The initial state h0 of this RNN is given by the output of a feedforward neural network fed with the fact embedding. 
The product Eoutwn ∈RDDec is the decoder embedding vector corresponding to the word wn (coded as a 1of-V vector, with V being the size of the output vocabulary), the variables Ur, Uu, U, Cr, Cu, C ∈ RHDec×HDec, Wr, Wu, W ∈RHDec×DDec are the paFigure 1: The computational graph of the question-generation model, where Enc(F) is the fact embedding produced by the encoder model, and c(F, hn−1) for n = 1, . . . , N is the fact representation weighed according to the attentionmechanism, which depends on both the fact F and the previous hidden state of the decoder RNN hn−1 . For the sake of simplicity, the attentionmechanism is not shown explicitly. rameters of the GRU and c(F, hn−1) is the context vector (defined below Eq. 6). The vector gr is called the reset gate, gu as the update gate and ˜h the candidate activation. By adjusting gr and gu appropriately, the model is able to create linear skip-connections between distant hidden states, which in turn makes the credit assignment problem easier and the gradient signal stronger to earlier hidden states. Then, at each time step n the set of probabilities over word tokens is given by applying a softmax layer over Votanh(Vhhn + VwEoutwn−1 + Vcc(F, hn−1)), where Vo ∈RV ×HDec, Vh, Vc ∈RHDec×HDec and Vw ∈RHDec×DDec. Lastly, the function c(F, hn−1) is computed using an attention-mechanism: c(F, hn−1) = αs,n−1Enc(F)s + αr,n−1Enc(F)r + αo,n−1Enc(F)o, (6) where αs,n−1, αr,n−1, αr,n−1 are real-valued scalars, which weigh the contribution of the subject, relationship and object representations. 591 They correspond to the attention of the model, and are computed by applying a one-layer neural network with tanh-activation function on the encoder representations of the fact, Enc(F), and the previous hidden state of the RNN, hn−1, followed by the sigmoid function to restrict the attention values to be between zero and one. The need for the attention-mechanism is motivated by the intuition that the model needs to attend to the subject only once during the generation process while attending to the relationship at all other times during the generation process. The model is illustrated in Figure 1. 4.3 Modeling the Source Language A particular problem with the model presented above is related to the embeddings for the entities, relationships and tokens, which all have to be learned in one way or another. If we learn these naively on the SimpleQuestions training set, the model will perform poorly when it encounters previously unseen entities, relationships or tokens. Furthermore, the multi-relational graph defined by the facts in SimpleQuestions is extremely sparse, i.e. each node has very few edges to other nodes, as can be expected due to high ratio of unique entities over number of examples. Therefore, even for many of the entities in SimpleQuestions, the model may perform poorly if the embedding is learned solely based on the SimpleQuestions dataset alone. On the source side, we can resolve this issue by initializing the subject, relationship and object embeddings to those learned by applying multi-relational embedding-based models to the knowledge base. Multi-relational embeddingbased models (Bordes et al., 2011) have recently become popular to learn distributed vector embeddings for knowledge bases, and have shown to scale well and yield good performance. Due to its simplicity and good performance, we choose to use TransE (Bordes et al., 2013) to learn such embeddings. 
TransE is a translation-based model, whose energy function is trained to output low values when the fact expresses true information, i.e. a fact which exists in the knowledge base, and otherwise high values. Formally, the energy function is defined as f(s, r, o) = ||es + er −eo||2, where es, er and eo are the real-valued embedding vectors for the subject, relationship and object of a fact. Further details are given by Bordes et al. (2013). Embeddings for entities with few connections are easy to learn, yet the quality of these embeddings depends on how inter-connected they are. In the extreme case where the subject and object of a triple only appears once in the dataset, the learned embeddings of the subject and object will be semantically meaningless. This happens very often in SimpleQuestions, since only around 5% of the entities have more than 2 connections in the graph. Thus, by applying TransE directly over this set of triples, we would eventually end up with a layout of entities that does not contain clusters of semantically close concepts. In order to guarantee an effective semantic representation of the embeddings, we have to learn them together with additional triples extracted from the whole Freebase graph to complement the SimpleQuestions graph with relevant information for this task. We need a coarse representation for the entities contained in SimpleQuestions, capturing the basic information, like the profession or nationality, the annotators tend to use when phrasing the questions, and accordingly we have ensured the embeddings contain this information by taking triples coming from the Freebase graph5 regarding: 1. Category information: given by the type/instance relationship, this ensures that all the entities of the same semantic category are close to each other. Although one might think that the expected category of the subject/object could be inferred directly from the relationship, there are fine-grained differences in the expected types that be extracted only directly by observing this category information. 2. Geographical information: sometimes the annotators have included information about nationality (e.g. Which French president. . . ?) or location (e.g. Where in Germany. . . ?) of the subject and/or object. This information is given by the relationships person/nationality and location/contained by. By including these facts in the learning, we ensure the existence of a fine-grained layout of the embeddings regarding this information within a same category. 5Extracted from one of the latest Freebase dumps (downloaded in mid-August 2015) https://developers. google.com/freebase/data 592 Closest neighbors to Warner Bros. Entertainment Manchester hindi language SQ Billy Gibbons Ricky Anane nepali indian Jenny Lewis Lee Dixon Naseeb Lies of Love Jerri Bryne Ghar Ek Mandir Swordfish Greg Wood standard chinese SQ + FB Paramount Pictures Oxford dutch language Sony Pictures Entertainment Sale italian language Electronic Arts Liverpool danish language CBS Guildford bengali language Table 2: Examples of differences in the local structure of the vector space embeddings when adding more FB facts 3. Gender: similarly, sometimes annotators have included information about gender (e.g. Which male audio engineer. . . ?). This information is given by the relationship person/gender. To this end, we have included more than 300, 000 facts from Freebase in addition to the facts in SimpleQuestions for training. 
Table 2 shows the differences in the embeddings before and after adding additional facts for training the TransE representations. 4.4 Generating Questions To resolve the problem of data sparsity and previously unseen words on the target side, we draw inspiration from the placeholders proposed for handling rare words in neural machine translation by Luong et al. (2015). For every question and answer pair, we search for words in the question which overlap with words in the subject string of the fact.6 We heuristically estimate the sequence of most likely words in the question, which correspond to the subject string. These words are then replaced by the placeholder token <placeholder>. For example, given the fact {fires creek, contained by, nantahala national forest} the original question Which forest is Fires Creek in? is transformed into the question Which forest is <placeholder>in?. The model is trained on these modified questions, which means that model only has to learn decoder embeddings for tokens which are not in the subject string. At test time, after outputting a question, all placeholder tokens are replaced by the subject string and then the outputs are evaluated. We call this the Single-Placeholder (SP) model. The main difference with respect to that of Luong et al. (2015) is that we do not use placeholder tokens in the input language, be6We use the tool difflib: https://docs.python. org/2/library/difflib.html. cause then the entities and relationships in the input would not be able to transmit semantic (e.g. topical) information to the decoder. If we had included placeholder tokens in the input language, the model would not be able to generate informative words regarding the subject in the question (e.g. it would be impossible for the model to learn that the subject Paris may be accompanied by the words French city when generating a question, because it would not see Paris but only a placeholder token). A single placeholder token for all question types could unnecessarily limit the model. We therefore also experiment with another model, called the Multi-Placeholder (MP) model, which uses 60 different placeholder tokens such that the placeholder for a given question is chosen based on the subject category extracted from the relationship (e.g. contained by is classified in the category location, and so the transformed question would be Which forest is <location placeholder> in?). This could make it easier for the model to learn to phrase questions about a diverse set of entities, but it also introduces additional parameters, since there are now 60 placeholder embeddings to be learned, and therefore the model may suffer from overfitting. This way of addressing the sparsity in the output reduces the vocabulary size to less than 7000 words. 4.5 Template-based Baseline To compare our neural network models, we propose a (non-parametric) template-based baseline model, which makes use of the entire training set when generating a question. The baseline operates on questions modified with the placeholder as in the preceding section. Given a fact F as input, the baseline picks a candidate fact Fc in the training set at uniformly random, where Fc has the same relationship as F. Then the baseline considers the questions corresponding to Fc and as in the 593 SP model, in the final step the placeholder token in the question is replaced by the subject string of the fact F. 5 Experiments 5.1 Training Procedure All neural network models were implemented in Theano (Theano Development Team, 2016). 
To train the neural network models, we optimized the log-likelihood using the first-order gradient-based optimization algorithm Adam (Kingma and Ba, 2015). To decide when to stop training we used early stopping with patience (Bengio, 2012) on the METEOR score obtained for the validation set. In all experiments, we use the default split of the SimpleQuestions dataset into training, validation and test sets. We trained TransE embeddings with embedding dimensionality 200 for each subject, relationship and object. Based on preliminary experiments, for all neural network models we fixed the learning rate to 0.00025 and clipped parameter gradients with norms larger than 0.1. We further fixed the embedding dimensionality of words to be 200, and the hidden state of the decoder RNN to have dimensionality 600. 5.2 Evaluation To investigate the performance of our models, we make use of both automatic evaluation metrics and human evaluators. 5.2.1 Automatic Evaluation Metrics BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005) are two widely used evaluation metrics in statistical machine translation and automatic image-caption generation (Chen et al., 2015). Similar to statistical machine translation, where a phrase in the source language is mapped to a phrase in the target language, in this task a KB fact is mapped to a natural language question. Both tasks are highly constrained, e.g. the set of valid outputs is limited. This is true in particular for short phrases, such as one sentence questions. Furthermore, in both tasks, the majority of valid outputs are paraphrases of each other, which BLEU and METEOR have been designed to capture. We therefore believe that BLEU and METEOR constitute reasonable performance metrics for evaluating the generated questions. Although we believe that METEOR and BLEU are reasonable evaluation metrics, they may have not recognize certain paraphrases, in particular paraphrases of entities. We therefore also make use of a sentence similarity metric, as proposed by Rus and Lintean (2012), which we will denote Embedding Greedy (Emb. Greedy). The metric makes use of a word similarity score, which in our experiments is the cosine similarity between two Word2Vec word embeddings (Mikolov et al., 2013).7 The metric finds a (non-exclusive) alignment between words in the two questions, which maximizes the similarity between aligned words, and computes the sentence similarity as the mean over the word similarities between aligned words. The results are shown in Table 3. Example questions produced by the model with multiple placeholders are shown in Table 4. The neural network models outperform the templatebased baseline by a clear margin across all metrics. The template-based baseline is already a relatively strong model, because it makes use of a separate template for each relationship. Qualitatively the neural networks outperform the baseline model in cases where they are able to levage additional knowledge about the entities (see first, third and fifth example in Table 4). On the other hand, for rare relationships the baseline model appears to perform better, because it is able to produce a reasonable question if only a single example with the same relationship exists in the training set (see eighth example in Table 4). Given enough training data this suggests that neural networks are generally better at the question generation task compared to hand-crafted template-based procedures, and therefore that they may be useful for generating question answering corpora. 
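For reference, a minimal sketch of the Emb. Greedy computation is given below. It is illustrative only: it assumes a dict-like embeddings object mapping words to vectors, skips out-of-vocabulary words, and averages the greedy score in both directions, which is one common convention; the exact convention used in the experiments is not spelled out here.

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def greedy_match(tokens_a, tokens_b, embeddings):
    # Mean over tokens of A of the best cosine similarity to any token of B;
    # the alignment is non-exclusive, so a token of B may be matched repeatedly.
    scores = []
    for a in tokens_a:
        if a not in embeddings:
            continue
        best = max((cosine(embeddings[a], embeddings[b])
                    for b in tokens_b if b in embeddings), default=0.0)
        scores.append(best)
    return float(np.mean(scores)) if scores else 0.0

def emb_greedy(question_1, question_2, embeddings):
    a, b = question_1.split(), question_2.split()
    return 0.5 * (greedy_match(a, b, embeddings) + greedy_match(b, a, embeddings))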
Furthermore, it appears that the best performing models are the models where TransE are trained on the largest set of triples (TransE++). This set contains, apart from the supporting triples described in Section 4.3, triples involving entities which are highly connected to the entities found in the SimpleQuestions facts. In total, around 30 millions of facts, which have been used to generate the 30M Factoid Question-Answer Corpus. Lastly, it is not clear whether the model with a single placeholder or the model with multiple placeholders performs best. This motivates the following human study. 7We use the Word2Vec embeddings pretrained on the Google News Corpus: https://code.google.com/ p/word2vec/. 594 Model BLEU METEOR Emb. Greedy Baseline 31.36 33.12 74.02 SP Triples 33.27 35.07 76.72 MP Triples 32.76 34.97 76.70 SP Triples TransE++ 33.32 35.38 76.78 MP Triples TransE++ 33.28 35.29 77.01 Table 3: Test performance for all models w.r.t. BLEU, METEOR and Emb. Greedy performance metrics, where SP indicates models with a single placeholder and MP models with multiple placeholders. TransE++ indicates models where the TransE embeddings have been pretrained on a larger set of triples. The best performance on each metric is marked in bold font. Fact Human Baseline MP Triples TransE++ bayuvi dupki – contained by – europe where is bayuvi dupki? what state is the city of bayuvi dupki located in? what continent is bayuvi dupki in? illinois – contains – ludlow township what is in illinois? what is a tributary found in illinois? what is the name of a place within illinois? neo contra – publisher – konami who published neo contra? which company published the game neo contra? who is the publisher for the computer videogame neo contra? fumihiko maki – structures designed – makuhari messe fumihiko maki designed what structure? what park did fumihiko maki help design? what’s a structure designed by fumihiko maki? cheryl hickey – profession – actor what is cheryl hickey’s profession? what is cheryl hickey? what is cheryl hickey’s profession in the entertainment industry? cherry – drugs with this flavor – tussin expectorant for adults 100 syrup name a cherry flavored drug? what is a cherry flavored drug? what’s a drug that cherry shaped like? pop music – artists – nikki flores what artist is known for pop music? An example of pop music is what artist? who’s an american singer that plays pop music? Table 4: Test examples and corresponding questions. 5.2.2 Human Evaluation Study We carry out pairwise preference experiments on Amazon Mechanical Turk. Initially, we considered carrying out separate experiments for measuring relevancy and fluency respectively, since this is common practice in machine translation. However, the relevancy of a question is determined solely by a single factor, i.e. the relationship, since by construction the subject is always in the question. Measuring relevancy is therefore not very useful in our task. To verify this we carried out an internal pairwise preference experiment with human subjects, who were repeatedly shown a fact and two questions and asked to select the most relevant question. We found that 93% of the questions generated by the MP Triples TransE++ model were either judged better or at least as good as the human generated questions w.r.t. relevancy. The remaining 7% questions of the MP Triples TransE++ model questions were also judged relevant questions, although less so compared to the human generated questions. 
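The questions in Table 4 are produced with the subject string substituted back in for the placeholder token. As a hedged illustration of the substitution heuristic of Section 4.4 (the authors mention difflib in a footnote; the exact matching procedure shown here is an assumption), the training-side replacement might look as follows; the multi-placeholder model would insert a category-specific token such as <location placeholder> instead.

import difflib

def insert_placeholder(question_tokens, subject_tokens, placeholder="<placeholder>"):
    # Replace the question tokens that best match the subject string with a
    # single placeholder token.
    sm = difflib.SequenceMatcher(None, question_tokens, subject_tokens)
    m = sm.find_longest_match(0, len(question_tokens), 0, len(subject_tokens))
    if m.size == 0:
        return question_tokens
    return question_tokens[:m.a] + [placeholder] + question_tokens[m.a + m.size:]

q = "which forest is fires creek in ?".split()
print(insert_placeholder(q, "fires creek".split()))
# -> ['which', 'forest', 'is', '<placeholder>', 'in', '?']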
In the next experiment, we therefore measure the holistic quality of the questions. We setup experiments comparing: HumanBaseline (human and baseline questions), HumanMP (human and MP Triples TransE++ questions) and Baseline-MP (baseline and MP Triples TransE++ questions). We show human evaluators a fact along with two questions, one question from each model for the corresponding fact, and ask the them to choose the question which is most relevant to the fact and most natural. The human evaluator also has the option of not choosing either question. This is important if both questions are equally good or if neither of the questions make sense. At the beginning of each experiment, we show the human evaluators two examples of statements and a corresponding pair of questions, where we briefly explain the form of the statements and how questions relate to those statements. Following the introductory examples, we present the facts and cor595 Model A Model B Model A Preference (%) Model B Preference (%) Fleiss’ kappa Human Baseline ∗56.329 ± 5.469 34.177 ± 5.230 0.242 Baseline MP Triples TransE++ 32.484 ± 5.180 ∗60.828 ± 5.399 0.234 Human MP Triples TransE++ 38.652 ± 5.684 51.418 ± 5.833 0.182 Table 5: Pairwise human evaluation preferences computed across evaluators with 95% confidence intervals. The preferred model in each experiment is marked in bold font. An asterisk next to the preferred model indicates a statistically significance likelihood-ratio test, which shows that the model is preferred in at least half of the presented examples with 95% confidence. The name MP Triples TransE++ indicates the model with multiple placeholders and TransE embeddings pretrained on a larger set of triples. The last column shows the Fleiss’ kappa averaged across batches (HITs) with different evaluators and questions. responding pair of questions one by one. To avoid presentation bias, we randomly shuffle the order of the examples and the order in which questions are shown by each model. During each experiment, we also show four check facts and corresponding check questions at random, which any attentive human annotator should be able to answer easily. We discard responses of human evaluators who fail any of these four checks. The preference of each example is defined as the question which is preferred by the majority of the evaluators. Examples where neither of the two questions are preferred by the majority of the evaluators, i.e. when there is an equal number of evaluators who prefer each question, are assigned to a separate preference class called “comparable”.8 The results are shown in Table 5. In total, 3, 810 preferences were recorded by 63 independent human evaluators. The questions produced by each model model pair were evaluated in 5 batches (HITs). Each human evaluated 44-75 examples (facts and corresponding question pairs) in each batch and each example was evaluated by 3-5 evaluators. In agreement with the automatic evaluation metrics, the human evaluators strongly prefer either the human or the neural network model over the template-based baseline. Furthermore, it appears that humans cannot distinguish between the human-generated questions and the neural network questions, on average showing a preference towards the later over the former ones. We hypothesize this is because our model penalizes uncommon and unnatural ways to frame questionsand sometimes, includes specific information about the target object that the humans do not (see last example in Table 4). 
This confirms our earlier 8The probabilities for the “comparable” class in Table 5 can be computed in each row as 100 minus the third and fourth column in the table. assertion, that the neural network questions can be used for building question answering systems. 6 Conclusion We propose new neural network models for mapping knowledge base facts into corresponding natural language questions. The neural networks combine ideas from recent neural network architectures for statistical machine translation, as well as multi-relational knowledge base embeddings for overcoming sparsity issues and placeholder techniques for handling rare words. The produced question and answer pairs are evaluated using automatic evaluation metrics, including BLEU, METEOR and sentence similarity, and are found to outperform a template-based baseline model. When evaluated by untrained human subjects, the question and answer pairs produced by our best performing neural network appears to be comparable in quality to real human-generated questions. Finally, we use our best performing neural network model to generate a corpus of 30M question and answer pairs, which we hope will enable future researchers to improve their question answering systems. Acknowledgments The authors acknowledge IBM Research, NSERC, Canada Research Chairs and CIFAR for funding. The authors thank Yang Yu, Bing Xiang, Bowen Zhou and Gerald Tesauro for constructive feedback, and Antoine Bordes, Nicolas Usunier, Sumit Chopra and Jason Weston for providing the SimpleQuestions dataset. This research was enabled in part by support provided by Calcul Qubec (www.calculquebec.ca) and Compute Canada (www.computecanada.ca). 596 References [Ali et al.2010] Husam Ali, Yllias Chali, and Sadid A Hasan. 2010. Automation of question generation from sentences. In Proceedings of QG2010: The Third Workshop on Question Generation, pages 58– 67. [Bahdanau et al.2015] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations. [Banerjee and Lavie2005] Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for mt evaluation with improved correlation with human judgments. In ACL, Workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pages 65–72. [Bengio2012] Yoshua Bengio. 2012. Practical recommendations for gradient-based training of deep architectures. In Neural Networks: Tricks of the Trade, pages 437–478. Springer. [Berant and Liang2014] Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In ACL, pages 1415–1425. [Bollacker et al.2008] Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247– 1250. [Bordes et al.2011] Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio. 2011. Learning structured embeddings of knowledge bases. In AAAI 2011. [Bordes et al.2013] Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems, pages 2787– 2795. [Bordes et al.2014] Antoine Bordes, Jason Weston, and Nicolas Usunier. 2014. Open question answering with weakly supervised embedding models. 
In Machine Learning and Knowledge Discovery in Databases - European Conference, (ECML PKDD), pages 165–180. [Bordes et al.2015] Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Largescale simple question answering with memory networks. arXiv preprint arXiv:1506.02075. [Chen et al.2009] Wei Chen, Gregory Aist, and Jack Mostow. 2009. Generating questions automatically from informational text. In Proceedings of the 2nd Workshop on Question Generation (AIED 2009), pages 17–24. [Chen et al.2015] Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, and C. Lawrence Zitnick. 2015. Microsoft COCO captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325. [Cho et al.2014] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1724–1734. [Curto et al.2012] Sergio Curto, A Mendes, and Luisa Coheur. 2012. Question generation based on lexicosyntactic patterns learned from the web. Dialogue and Discourse, 3(2):147–175. [Duma and Klein2013] Daniel Duma and Ewan Klein. 2013. Generating natural language from linked data: Unsupervised template extraction. ACL, pages 83– 94. [Dumais et al.2002] Susan Dumais, Michele Banko, Eric Brill, Jimmy Lin, and Andrew Ng. 2002. Web question answering: Is more always better? In Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, pages 291–298. [Graesser et al.1992] Arthur C Graesser, Sallie E Gordon, and Lawrence E Brainerd. 1992. QUEST: A model of question answering. Computers and Mathematics with Applications, 23(6):733–745. [Greff et al.2015] Klaus Greff, Rupesh Kumar Srivastava, Jan Koutn´ık, Bas R Steunebrink, and J¨urgen Schmidhuber. 2015. LSTM: A search space odyssey. arXiv preprint arXiv:1503.04069. [Kalady et al.2010] Saidalavi Kalady, Ajeesh Elikkottil, and Rajarshi Das. 2010. Natural language question generation using syntax and keywords. In Proceedings of QG2010: The Third Workshop on Question Generation, pages 1–10. questiongeneration. org. [Kingma and Ba2015] Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations. [Lenat and Guha1989] Douglas B. Lenat and Ramanathan V. Guha. 1989. Building large knowledge-based systems; representation and inference in the Cyc project. Addison-Wesley Longman Publishing Co., Inc. [Lopez et al.2011] Vanessa Lopez, Victoria Uren, Marta Sabou, and Enrico Motta. 2011. Is question answering fit for the semantic web? a survey. Semantic Web, 2(2):125–155. [Luong et al.2015] Minh-Thang Luong, Ilya Sutskever, Quoc V Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In ACL, pages 11–19. 597 [Mannem et al.2010] Prashanth Mannem, Rashmi Prasad, and Aravind Joshi. 2010. Question generation from paragraphs at upenn: Qgstec system description. In Proceedings of QG2010: The Third Workshop on Question Generation, pages 84–91. [Mikolov et al.2013] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119. 
[Olney et al.2012] Andrew M Olney, Arthur C Graesser, and Natalie K Person. 2012. Question generation from concept maps. Dialogue and Discourse, 3(2):75–99. [Papineni et al.2002] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL, pages 311–318. [Rus and Lintean2012] Vasile Rus and Mihai Lintean. 2012. A comparison of greedy and optimal assessment of natural language student input using wordto-word similarity metrics. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP, NAACL, pages 157–162. [Rus et al.2010] Vasile Rus, Brendan Wyse, Paul Piwek, Mihai Lintean, Svetlana Stoyanchev, and Cristian Moldovan. 2010. The first question generation shared task evaluation challenge. In Proceedings of the 6th International Natural Language Generation Conference, pages 251–257. [Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems. [Theano Development Team2016] Theano Development Team. 2016. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May. [Voorhees and Tice2000] Ellen M Voorhees and DM Tice. 2000. Overview of the trec-9 question answering track. In TREC. [Vrandeˇci´c and Kr¨otzsch2014] Denny Vrandeˇci´c and Markus Kr¨otzsch. 2014. Wikidata: a free collaborative knowledgebase. Communications of the ACM, 57(10):78–85. [Yao and Zhang2010] Xuchen Yao and Yi Zhang. 2010. Question generation with minimal recursion semantics. In Proceedings of QG2010: The Third Workshop on Question Generation, pages 68–75. [Yao et al.2012] Xuchen Yao, Gosse Bouma, and Yi Zhang. 2012. Semantics-based question generation and implementation. Dialogue and Discourse, 3(2):11–42. 598
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 599–609, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Latent Predictor Networks for Code Generation Wang Ling♦Edward Grefenstette♦Karl Moritz Hermann♦ Tom´aˇs Koˇcisk´y♦♣Andrew Senior♦Fumin Wang♦Phil Blunsom♦♣ ♦Google DeepMind ♣University of Oxford {lingwang,etg,kmh,tkocisky,andrewsenior,awaw,pblunsom}@google.com Abstract Many language generation tasks require the production of text conditioned on both structured and unstructured inputs. We present a novel neural network architecture which generates an output sequence conditioned on an arbitrary number of input functions. Crucially, our approach allows both the choice of conditioning context and the granularity of generation, for example characters or tokens, to be marginalised, thus permitting scalable and effective training. Using this framework, we address the problem of generating programming code from a mixed natural language and structured specification. We create two new data sets for this paradigm derived from the collectible trading card games Magic the Gathering and Hearthstone. On these, and a third preexisting corpus, we demonstrate that marginalising multiple predictors allows our model to outperform strong benchmarks. 1 Introduction The generation of both natural and formal languages often requires models conditioned on diverse predictors (Koehn et al., 2007; Wong and Mooney, 2006). Most models take the restrictive approach of employing a single predictor, such as a word softmax, to predict all tokens of the output sequence. To illustrate its limitation, suppose we wish to generate the answer to the question “Who wrote The Foundation?” as “The Foundation was written by Isaac Asimov”. The generation of the words “Issac Asimov” and “The Foundation” from a word softmax trained on annotated data is unlikely to succeed as these words are sparse. A robust model might, for example, employ one preFigure 1: Example MTG and HS cards. dictor to copy “The Foundation” from the input, and a another one to find the answer “Issac Asimov” by searching through a database. However, training multiple predictors is in itself a challenging task, as no annotation exists regarding the predictor used to generate each output token. Furthermore, predictors generate segments of different granularity, as database queries can generate multiple tokens while a word softmax generates a single token. In this work we introduce Latent Predictor Networks (LPNs), a novel neural architecture that fulfills these desiderata: at the core of the architecture is the exact computation of the marginal likelihood over latent predictors and generated segments allowing for scalable training. We introduce a new corpus for the automatic generation of code for cards in Trading Card Games (TCGs), on which we validate our model 1. TCGs, such as Magic the Gathering (MTG) and Hearthstone (HS), are games played between two players that build decks from an ever expanding pool of cards. Examples of such cards are shown in Figure 1. Each card is identified by its attributes 1Dataset available at https://deepmind.com/publications.html 599 (e.g., name and cost) and has an effect that is described in a text box. Digital implementations of these games implement the game logic, which includes the card effects. 
This is attractive from a data extraction perspective as not only are the data annotations naturally generated, but we can also view the card as a specification communicated from a designer to a software engineer. This dataset presents additional challenges to prior work in code generation (Wong and Mooney, 2006; Jones et al., 2012; Lei et al., 2013; Artzi et al., 2015; Quirk et al., 2015), including the handling of structured input—i.e. cards are composed by multiple sequences (e.g., name and description)—and attributes (e.g., attack and cost), and the length of the generated sequences. Thus, we propose an extension to attention-based neural models (Bahdanau et al., 2014) to attend over structured inputs. Finally, we propose a code compression method to reduce the size of the code without impacting the quality of the predictions. Experiments performed on our new datasets, and a further pre-existing one, suggest that our extensions outperform strong benchmarks. The paper is structured as follows: We first describe the data collection process (Section 2) and formally define our problem and our baseline method (Section 3). Then, we propose our extensions, namely, the structured attention mechanism (Section 4) and the LPN architecture (Section 5). We follow with the description of our code compression algorithm (Section 6). Our model is validated by comparing with multiple benchmarks (Section 7). Finally, we contextualize our findings with related work (Section 8) and present the conclusions of this work (Section 9). 2 Dataset Extraction We obtain data from open source implementations of two different TCGs, MTG in Java2 and HS in Python.3 The statistics of the corpora are illustrated in Table 1. In both corpora, each card is implemented in a separate class file, which we strip of imports and comments. We categorize the content of each card into two different groups: singular fields that contain only one value; and text fields, which contain multiple words representing different units of meaning. In MTG, there are six singular fields (attack, defense, rarity, set, id, and 2github.com/magefree/mage/ 3github.com/danielyule/hearthbreaker/ MTG HS Programming Language Java Python Cards 13,297 665 Cards (Train) 11,969 533 Cards (Validation) 664 66 Cards (Test) 664 66 Singular Fields 6 4 Text Fields 8 2 Words In Description (Average) 21 7 Characters In Code (Average) 1,080 352 Table 1: Statistics of the two TCG datasets. health) and four text fields (cost, type, name, and description), whereas HS cards have eight singular fields (attack, health, cost and durability, rarity, type, race and class) and two text fields (name and description). Text fields are tokenized by splitting on whitespace and punctuation, with exceptions accounting for domain specific artifacts (e.g., Green mana is described as “{G}” in MTG). Empty fields are replaced with a “NIL” token. The code for the HS card in Figure 1 is shown in Figure 2. The effect of “drawing cards until the player has as many cards as the opponent” is implemented by computing the difference between the players’ hands and invoking the draw method that number of times. This illustrates that the mapping between the description and the code is nonlinear, as no information is given in the text regarding the specifics of the implementation. 
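As a hedged illustration of these tokenization conventions (the released extraction scripts may differ), a text field could be processed as follows; the artifact pattern shown is an assumption covering single mana symbols such as "{G}".

import re

ARTIFACT = re.compile(r"\{.\}")   # e.g. the mana symbol "{G}" must stay one token

def tokenize_text_field(value):
    # Split on whitespace and punctuation, keep domain artifacts whole, and
    # replace empty fields with the NIL token.
    if not value or not value.strip():
        return ["NIL"]
    tokens = []
    for chunk in value.split():
        if ARTIFACT.fullmatch(chunk):
            tokens.append(chunk)
        else:
            tokens.extend(re.findall(r"\w+|[^\w\s]", chunk))
    return tokens

print(tokenize_text_field("Add {G} to your mana pool."))
# -> ['Add', '{G}', 'to', 'your', 'mana', 'pool', '.']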
class DivineFavor(SpellCard): def __init__(self): super().__init__("Divine Favor", 3, CHARACTER_CLASS.PALADIN, CARD_RARITY.RARE) def use(self, player, game): super().use(player, game) difference = len(game.other_player.hand) - len(player.hand) for i in range(0, difference): player.draw() Figure 2: Code for the HS card “Divine Favor”. 3 Problem Definition Given the description of a card x, our decoding problem is to find the code ˆy so that: ˆy = argmax y log P(y | x) (1) Here log P(y | x) is estimated by a given model. We define y = y1..y|y| as the sequence of characters of the code with length |y|. We index each input field with k = 1..|x|, where |x| quantifies the 600 number of input fields. |xk| denotes the number of tokens in xk and xki selects the i-th token. 4 Structured Attention Background When |x| = 1, the attention model of Bahdanau et al. (2014) applies. Following the chain rule, log P(y|x) = P t=1..|y| log P(yt|y1..yt−1, x), each token yt is predicted conditioned on the previously generated sequence y1..yt−1 and input sequence x1 = x11..x1|x1|. Probability are estimated with a softmax over the vocabulary Y : p(yt|y1..yt−1, x1) = softmax yt∈Y (ht) (2) where ht is the Recurrent Neural Network (RNN) state at time stamp t, which is modeled as g(yt−1, ht−1, zt). g(·) is a recurrent update function for generating the new state ht based on the previous token yt−1, the previous state ht−1, and the input text representation zt. We implement g using a Long Short-Term Memory (LSTM) RNNs (Hochreiter and Schmidhuber, 1997). The attention mechanism generates the representation of the input sequence x = x11..x1|x1|, and zt is computed as the weighted sum zt = P i=1..|x1| aih(x1i), where ai is the attention coefficient obtained for token x1i and h is a function that maps each x1i to a continuous vector. In general, h is a function that projects x1i by learning a lookup table, and then embedding contextual words by defining an RNN. Coefficients ai are computed with a softmax over input tokens x11..x1|x1|: ai = softmax x1i∈x (v(h(x1i), ht−1)) (3) Function v computes the affinity of each token x1i and the current output context ht−1. A common implementation of v is to apply a linear projection from h(x1i) : ht−1 (where : is the concatenation operation) into a fixed size vector, followed by a tanh and another linear projection. Our Approach We extend the computation of zt for cases when x corresponds to multiple fields. Figure 3 illustrates how the MTG card “Serra Angel” is encoded, assuming that there are two singular fields and one text field. We first encode each token xki using the C2W model described in Ling et al. (2015), which is a replacement for lookup tables where word representations are learned at the Figure 3: Illustration of the structured attention mechanism operating on a single time stamp t. character level (cf. C2W row). A context-aware representation is built for words in the text fields using a bidirectional LSTM (cf. Bi-LSTM row). Computing attention over multiple input fields is problematic as each input field’s vectors have different sizes and value ranges. Thus, we learn a linear projection mapping each input token xki to a vector with a common dimensionality and value range (cf. Linear row). Denoting this process as f(xki), we extend Equation 3 as: aki = softmax xki∈x (v(f(xki), ht−1)) (4) Here a scalar coefficient aki is computed for each input token xki (cf. “Tanh”, “Linear”, and “Softmax” rows). 
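To make the coefficient computation in Equation 4 concrete, the following numpy sketch scores every token across all fields against the previous decoder state and normalises with a single softmax. It assumes the per-token vectors have already been produced by the C2W / Bi-LSTM encoders; all weight matrices are random stand-ins for learned parameters, and the dimensionalities loosely follow the setup reported in Section 7.

```python
# Sketch of the structured attention coefficients (Equation 4); weights are
# random placeholders for learned parameters.
import numpy as np

rng = np.random.default_rng(0)
d_common, d_state, d_hidden = 300, 300, 200

# Token vectors per field; singular and text fields may have different widths.
fields = {"cost": rng.normal(size=(1, 64)),
          "name": rng.normal(size=(2, 128)),
          "description": rng.normal(size=(12, 128))}
# Per-field linear projections f mapping tokens into a common space ("Linear" row).
proj = {k: rng.normal(size=(v.shape[1], d_common)) * 0.01 for k, v in fields.items()}

# Affinity function v(.): concatenate, linear, tanh, linear to a scalar.
W1 = rng.normal(size=(d_common + d_state, d_hidden)) * 0.01
W2 = rng.normal(size=(d_hidden, 1)) * 0.01

def attention_coefficients(h_prev):
    projected, scores = [], []
    for k, tokens in fields.items():
        for x_ki in tokens:
            f_ki = x_ki @ proj[k]                                  # common space
            score = (np.tanh(np.concatenate([f_ki, h_prev]) @ W1) @ W2).item()
            projected.append(f_ki)
            scores.append(score)
    scores = np.array(scores)
    a = np.exp(scores - scores.max())
    return a / a.sum(), np.stack(projected)   # coefficients a_ki and features f(x_ki)

a, f = attention_coefficients(h_prev=rng.normal(size=d_state))
print(a.shape, f.shape)  # (15,) (15, 300): one coefficient per token across all fields
```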
Thus, the overall input representation zt is computed as: zt = X k=1..|x|,i=1..|xk| aijf(xki) (5) 5 Latent Predictor Networks Background In order to decode from x to y, many words must be copied into the code, such as the name of the card, the attack and the cost values. If we observe the HS card in Figure 1 and the respective code in Figure 2, we observe that the name “Divine Favor” must be copied into the class name and in the constructor, along with the cost of the card “3”. As explained earlier, this problem is not specific to our task: for instance, in the dataset of Oda et al. (2015), a model must learn to map from timeout = int ( timeout ) to “convert timeout into an integer.”, where the name of the variable “timeout” must be copied into the output sequence. The same issue exists for proper nouns in machine translation 601 Figure 4: Generation process for the code init(‘Tirion Fordring’,8,6,6) using LPNs. which are typically copied from one language to the other. Pointer networks (Vinyals et al., 2015) address this by defining a probability distribution over a set of units that can be copied c = c1..c|c|. The probability of copying a unit ci is modeled as: p(ci) = softmax ci∈c (v(h(ci), q)) (6) As in the attention model (Equation 3), v is a function that computes the affinity between an embedded copyable unit h(ci) and an arbitrary vector q. Our Approach Combining pointer networks with a character-based softmax is in itself difficult as these generate segments of different granularity and there is no ground truth of which predictor to use at each time stamp. We now describe Latent Predictor Networks, which model the conditional probability log P(y|x) over the latent sequence of predictors used to generate y. We assume that our model uses multiple predictors r ∈ R, where each r can generate multiple segments st = yt..yt+|st|−1 with arbitrary length |st| at time stamp t. An example is illustrated in Figure 4, where we observe that to generate the code init(‘Tirion Fordring’,8,6,6), a pointer network can be used to generate the sequences y13 7 =Tirion and y22 14=Fordring (cf. “Copy From Name” row). These sequences can also be generated using a character softmax (cf. “Generate Characters” row). The same applies to the generation of the attack, health and cost values as each of these predictors is an element in R. Thus, we define our objective function as a marginal log likelihood function over a latent variable ω: log P(y | x) = log X ω∈¯ω P(y, ω | x) (7) Formally, ω is a sequence of pairs rt, st, where rt ∈R denotes the predictor that is used at timestamp t and st the generated string. We decompose P(y, ω | x) as the product of the probabilities of segments st and predictors rt: P(y, ω | x) = Y rt,st∈ω P(st, rt | y1..yt−1, x) = Y rt,st∈ω P(st | y1..yt−1, x, rt)P(rt | y1..yt−1, x) where the generation of each segment is performed in two steps: select the predictor rt with probability P(rt | y1..yt−1, x) and then generate st conditioned on predictor rt with probability log P(st | y1..yt−1, x, rt). The probability of each predictor is computed using a softmax over all predictors in R conditioned on the previous state ht−1 and the input representation zt (cf. “Select Predictor” box). Then, the probability of generating the segment st depends on the predictor type. We define three types of predictors: 602 Character Generation Generate a single character from observed characters from the training data. Only one character is generated at each time stamp with probability given by Equation 2. 
Copy Singular Field For singular fields only the field itself can be copied, for instance, the value of the attack and cost attributes or the type of card. The size of the generated segment is the number of characters in the copied field and the segment is generated with probability 1. Copy Text Field For text fields, we allow each of the words xki within the field to be copied. The probability of copying a word is learned with a pointer network (cf. “Copy From Name” box), where h(ci) is set to the representation of the word f(xki) and q is the concatenation ht−1 : zt of the state and input vectors. This predictor generates a segment with the size of the copied word. It is important to note that the state vector ht−1 is generated by building an RNN over the sequence of characters up until the time stamp t −1, i.e. the previous context yt−1 is encoded at the character level. This allows the number of possible states to remain tractable at training time. 5.1 Inference At training time we use back-propagation to maximize the probability of observed code, according to Equation 7. Gradient computation must be performed with respect to each computed probability P(rt | y1..yt−1, x) and P(st | y1..yt−1, x, rt). The derivative ∂log P(y|x) ∂P(rt|y1..yt−1,x) yields: ∂αtP(rt | y1..yt−1, x)βt,rt + ξrt P(y | x)∂P(rt | y1..yt−1, x) = αtβt,rt α|y|+1 Here αt denotes the cumulative probability of all values of ω up until time stamp t and α|y|+1 yields the marginal probability P(y | x). βt,rt = P(st | y1..yt−1)βt+|st|−1 denotes the cumulative probability starting from predictor rt at time stamp t, exclusive. This includes the probability of the generated segment P(st | y1..yt−1, x, rt) and the probability of all values of ω starting from timestamp t+ |st|−1, that is, all possible sequences that generate segment y after segment st is produced. For completeness, ξr denotes the cumulative probabilities of all ω that do not include rt. To illustrate this, we refer to Figure 4 and consider the timestamp t = 14, where the segment s14 =Fordring is generated. In this case, the cumulative probability α14 is the sum of the path that generates the sequence init(‘Tirion with characters alone, and the path that generates the word Tirion by copying from the input. β21 includes the probability of all paths that follow the generation of Fordring, which include 2×3×3 different paths due to the three decision points that follow (e.g. generating 8 using a character softmax vs. copying from the cost). Finally, ξr refers to the path that generates Fordring character by character. While the number of possible paths grows exponentially, α and β can be computed efficiently using the forward-backward algorithm for SemiMarkov models (Sarawagi and Cohen, 2005), where we associate P(rt | y1..yt−1, x) to edges and P(st | y1..yt−1, x, rt) to nodes in the Markov chain. The derivative ∂log P(y|x) ∂P(st|y1..yt−1,x,rt) can be computed using the same logic: ∂αt,stP(st | y1..yt−1, x, rt)βt+|st|−1 + ξrt P(y | x)∂P(st | y1..yt−1, x, rt) = αt,rtβt+|st|−1 α|y|+1 Once again, we denote αt,rt = αtP(rt | y1..yt−1, x) as the cumulative probability of all values of ω that lead to st, exclusive. An intuitive interpretation of the derivatives is that gradient updates will be stronger on probability chains that are more likely to generate the output sequence. 
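A minimal sketch of the forward (α) recursion behind this marginalisation is given below; the predictor and segment probability functions are toy stand-ins for the model's softmax outputs, and only the forward pass is shown. The recursion also makes the interpretation of the derivatives concrete: derivations that contribute more mass to the marginal receive the larger share of the gradient.

```python
# Forward-pass sketch for the marginal P(y | x) over latent segmentations
# (the alpha recursion behind Equation 7); probabilities below are toy values.
def forward_marginal(y, predictors, predictor_prob, segment_prob):
    """
    y: target string; predictors: list of predictor names;
    predictor_prob(r, t) ~ P(r_t | y_1..y_{t-1}, x)
    segment_prob(r, t) -> list of (segment, P(s_t | y_1..y_{t-1}, x, r_t)).
    Returns alpha[len(y)], i.e. the marginal probability of y.
    """
    n = len(y)
    alpha = [0.0] * (n + 1)
    alpha[0] = 1.0
    for t in range(n):
        if alpha[t] == 0.0:
            continue
        for r in predictors:
            p_r = predictor_prob(r, t)
            for seg, p_s in segment_prob(r, t):
                if y.startswith(seg, t):                 # segment consistent with y
                    alpha[t + len(seg)] += alpha[t] * p_r * p_s
    return alpha[n]

# Toy example: a character predictor and a name-copying predictor for "ab".
char_p = {"a": 0.3, "b": 0.4}
marginal = forward_marginal(
    "ab", ["char", "copy"],
    predictor_prob=lambda r, t: 0.5,
    segment_prob=lambda r, t: ([("ab", 0.9)] if r == "copy" and t == 0
                               else list(char_p.items()) if r == "char"
                               else []),
)
print(marginal)  # 0.5*0.9 (copy "ab") + 0.5*0.3 * 0.5*0.4 (char "a" then char "b") = 0.48
```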
For instance, if the model learns a good predictor to copy names, such as Fordring, other predictors that can also generate the same sequences, such as the character softmax will allocate less capacity to the generation of names, and focus on elements that they excel at (e.g. generation of keywords). 5.2 Decoding Decoding is performed using a stack-based decoder with beam search. Each state S corresponds to a choice of predictor rt and segment st at a given time stamp t. This state is scored as V (S) = log P(st | y1..yt−1, x, rt) + log P(rt | y1..yt−1, x) + V (prev(S)), where prev(S) denotes the predecessor state of S. At each time stamp, the n states with the highest scores V are expanded, where n is the size of the beam. For each predictor rt, each output st generates a new state. Finally, at each timestamp t, all states 603 which produce the same output up to that point are merged by summing their probabilities. 6 Code Compression As the attention-based model traverses all input units at each generation step, generation becomes quite expensive for datasets such as MTG where the average card code contains 1,080 characters. While this is not the essential contribution in our paper, we propose a simple method to compress the code while maintaining the structure of the code, allowing us to train on datasets with longer code (e.g., MTG). The idea behind that method is that many keywords in the programming language (e.g., public and return) as well as frequently used functions and classes (e.g., Card) can be learned without character level information. We exploit this by mapping such strings onto additional symbols Xi (e.g., public class copy() → “X1 X2 X3()”). Formally, we seek the string ˆv among all strings V (max) up to length max that maximally reduces the size of the corpus: ˆv = argmax v∈V (max) (len(v) −1)C(v) (8) where C(v) is the number of occurrences of v in the training corpus and len(v) its length. (len(v) −1)C(v) can be seen as the number of characters reduced by replacing v with a nonterminal symbol. To find q(v) efficiently, we leverage the fact that C(v) ≤C(v′) if v contains v′. It follows that (max −1)C(v) ≤(max −1)C(v′), which means that the maximum compression obtainable for v at size max is always lower than that of v′. Thus, if we can find a ¯v such that (len(¯v) −1)C(¯v) > (max −1)C(v′), that is ¯v at the current size achieves a better compression rate than v′ at the maximum length, then it follows that all sequences that contain v can be discarded as candidates. Based on this idea, our iterative search starts by obtaining the counts C(v) for all segments of size s = 2, and computing the best scoring segment ¯v. Then, we build a list L(s) of all segments that achieve a better compression rate than ¯v at their maximum size. At size s + 1, only segments that contain a element in L(s −1) need to be considered, making the number of substrings to be tested to be tractable as s increases. The algorithm stops once s reaches max or the newly generated list L(s) contains no elements. X v size X1 card)⇓{⇓super(card);⇓}⇓@Override⇓public 1041 X2 bility 1002 X3 ;⇓this. 964 X4 (UUID ownerId)⇓{⇓super(ownerId 934 X5 public 907 X6 new 881 X7 copy() 859 X8 }”)X3expansionSetCode = ” 837 X9 X6CardType[]{CardType. 815 X10 ffect 794 Table 2: First 10 compressed units in MTG. We replaced newlines with ⇓and spaces with . Once ˆv is obtained, we replace all occurrences of ˆv with a new non-terminal symbol. This process is repeated until a desired average size for the code is reached. 
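The compression loop can be sketched as a greedy search for the substring maximising (len(v) − 1)C(v). This brute-force version recounts substrings at every round and omits the pruning bound described above; it is only meant to make the objective and the replacement step concrete.

```python
# Greedy code-compression sketch (Equation 8): repeatedly replace the substring
# that saves the most characters with a fresh non-terminal symbol.
from collections import Counter

def best_substring(corpus, max_len):
    counts = Counter()
    for code in corpus:
        for size in range(2, max_len + 1):
            for i in range(len(code) - size + 1):
                counts[code[i:i + size]] += 1
    # score = characters saved by replacing v with a single-symbol non-terminal
    return max(counts, key=lambda v: (len(v) - 1) * counts[v])

def compress(corpus, max_len=20, rounds=3):
    replacements = []
    for k in range(rounds):
        v = best_substring(corpus, max_len)
        symbol = f"⟨X{k + 1}⟩"                     # fresh non-terminal symbol
        corpus = [code.replace(v, symbol) for code in corpus]
        replacements.append((symbol, v))
    return corpus, replacements

corpus = ["public class Copy() { return new Card(); }",
          "public class Draw() { return new Card(); }"]
compressed, table = compress(corpus)
print(table)   # decoding later expands each symbol back to its original string
```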
While training is performed on the compressed code, the decoding will undergo an additional step, where the compressed code is restored by expanding the all Xi. Table 2 shows the first 10 replacements from the MTG dataset, reducing its average size from 1080 to 794. 7 Experiments Datasets Tests are performed on the two datasets provided in this paper, described in Table 1. Additionally, to test the model’s ability of generalize to other domains, we report results in the Django dataset (Oda et al., 2015), comprising of 16000 training, 1000 development and 1805 test annotations. Each data point consists of a line of Python code together with a manually created natural language description. Neural Benchmarks We implement two standard neural networks, namely a sequence-tosequence model (Sutskever et al., 2014) and an attention-based model (Bahdanau et al., 2014). The former is adapted to work with multiple input fields by concatenating them, while the latter uses our proposed attention model. These models are denoted as “Sequence” and “Attention”. Machine Translation Baselines Our problem can also be viewed in the framework of semantic parsing (Wong and Mooney, 2006; Lu et al., 2008; Jones et al., 2012; Artzi et al., 2015). Unfortunately, these approaches define strong assumptions regarding the grammar and structure of the output, which makes it difficult to generalize for other domains (Kwiatkowski et al., 2010). However, the work in Andreas et al. (2013) provides 604 evidence that using machine translation systems without committing to such assumptions can lead to results competitive with the systems described above. We follow the same approach and create a phrase-based (Koehn et al., 2007) model and a hierarchical model (or PCFG) (Chiang, 2007) as benchmarks for the work presented here. As these models are optimized to generate words, not characters, we implement a tokenizer that splits on all punctuation characters, except for the “ ” character. We also facilitate the task by splitting CamelCase words (e.g., class TirionFordring →class Tirion Fordring). Otherwise all class names would not be generated correctly by these methods. We used the models implemented in Moses to generate these baselines using standard parameters, using IBM Alignment Model 4 for word alignments (Och and Ney, 2003), MERT for tuning (Sokolov and Yvon, 2011) and a 4-gram Kneser-Ney Smoothed language model (Heafield et al., 2013). These models will be denoted as “Phrase” and “Hierarchical”, respectively. Retrieval Baseline It was reported in (Quirk et al., 2015) that a simple retrieval method that outputs the most similar input for each sample, measured using Levenshtein Distance, leads to good results. We implement this baseline by computing the average Levenshtein Distance for each input field. This baseline is denoted “Retrieval”. Evaluation A typical metric is to compute the accuracy of whether the generated code exactly matches the reference code. This is informative as it gives an intuition of how many samples can be used without further human post-editing. However, it does not provide an illustration on the degree of closeness to achieving the correct code. Thus, we also test using BLEU-4 (Papineni et al., 2002) at the token level. There are clearly problems with these metrics. For instance, source code can be correct without matching the reference. 
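As a reference for how these scores are computed, the sketch below implements exact-match accuracy and a simplified corpus-level token BLEU (modified n-gram precisions with a brevity penalty). It is an illustration, not the exact scoring script used here. Both measures penalise code that is functionally correct but does not match the reference surface form.

```python
# Simplified evaluation sketch: exact-match accuracy and token-level BLEU-4.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(references, hypotheses, max_n=4):
    matches, totals = [0] * max_n, [0] * max_n
    ref_len = hyp_len = 0
    for ref, hyp in zip(references, hypotheses):
        ref_len, hyp_len = ref_len + len(ref), hyp_len + len(hyp)
        for n in range(1, max_n + 1):
            ref_counts, hyp_counts = ngrams(ref, n), ngrams(hyp, n)
            matches[n - 1] += sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
            totals[n - 1] += sum(hyp_counts.values())
    precisions = [(m + 1e-9) / (t + 1e-9) for m, t in zip(matches, totals)]
    bp = 1.0 if hyp_len >= ref_len else math.exp(1 - ref_len / max(hyp_len, 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

def exact_match(references, hypotheses):
    return sum(r == h for r, h in zip(references, hypotheses)) / len(references)

refs = [["player", ".", "draw", "(", ")"], ["x", "=", "1"]]
hyps = [["player", ".", "draw", "(", ")"], ["x", "=", "2"]]
print(exact_match(refs, hyps), corpus_bleu(refs, hyps))
```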
The code in Figure 2, could have also been implemented by calling the draw function in an cycle that exists once both players have the same number of cards in their hands. Some tasks, such as the generation of queries (Zelle and Mooney, 1996), have overcome this problem by executing the query and checking if the result is the same as the annotation. However, we shall leave the study of these methologies for future work, as adapting these methods for our tasks is not trivial. For instance, the correctness cards with conditional (e.g. if player has no cards, then draw a card) or non-deterministc (e.g. put a random card in your hand) effects cannot be simply validated by running the code. Setup The multiple input types (Figure 3) are hyper-parametrized as follows: The C2W model (cf. “C2W” row) used to obtain continuous vectors for word types uses character embeddings of size 100 and LSTM states of size 300, and generates vectors of size 300. We also report on results using word lookup tables of size 300, where we replace singletons with a special unknown token with probability 0.5 during training, which is then used for out-of-vocabulary words. For text fields, the context (cf. “Bi-LSTM” row) is encoded with a Bi-LSTM of size 300 for the forward and backward states. Finally, a linear layer maps the different input tokens into a common space with of size 300 (cf. “Linear” row). As for the attention model, we used an hidden layer of size 200 before applying the non-linearity (row “Tanh”). As for the decoder (Figure 4), we encode output characters with size 100 (cf. “output (y)” row), and an LSTM state of size 300 and an input representation of size 300 (cf. “State(h+z)” row). For each pointer network (e.g., “Copy From Name” box), the intersection between the input units and the state units are performed with a vector of size 200. Training is performed using mini-batches of 20 samples using AdaDelta (Zeiler, 2012) and we report results using the iteration with the highest BLEU score on the validation set (tested at intervals of 5000 mini-batches). Decoding is performed with a beam of 1000. As for compression, we performed a grid search over compressing the code from 0% to 80% of the original average length over intervals of 20% for the HS and Django datasets. On the MTG dataset, we are forced to compress the code up to 80% due to performance issues when training with extremely long sequences. 7.1 Results Baseline Comparison Results are reported in Table 3. Regarding the retrieval results (cf. “Retrieval” row), we observe the best BLEU scores among the baselines in the card datasets (cf. “MTG” and “HS” columns). A key advantage of this method is that retrieving existing entities guarantees that the output is well formed, with no 605 MTG HS Django BLEU Acc BLEU Acc BLEU Acc Retrieval 54.9 0.0 62.5 0.0 18.6 14.7 Phrase 49.5 0.0 34.1 0.0 47.6 31.5 Hierarchical 50.6 0.0 43.2 0.0 35.9 9.5 Sequence 33.8 0.0 28.5 0.0 44.1 33.2 Attention 50.1 0.0 43.9 0.0 58.9 38.8 Our System 61.4 4.8 65.6 4.5 77.6 62.3 – C2W 60.9 4.4 67.1 4.5 75.9 60.9 – Compress 59.7 6.1 76.3 61.3 – LPN 52.4 0.0 42.0 0.0 63.3 40.8 – Attention 39.1 0.5 49.9 3.0 48.8 34.5 Table 3: BLEU and Accuracy scores for the proposed task on two in-domain datasets (HS and MTG) and an out-of-domain dataset (Django). 
Compression 0% 20% 40% 60% 80% Seconds Per Card Softmax 2.81 2.36 1.88 1.42 0.94 LPN 3.29 2.65 2.35 1.93 1.41 BLEU Scores Softmax 44.2 46.9 47.2 51.4 52.7 LPN 59.7 62.8 61.1 66.4 67.1 Table 4: Results with increasing compression rates with a regular softmax (cf. “Softmax”) and a LPN (cf. “LPN”). Performance values (cf. “Seconds Per Card” block) are computed using one CPU. syntactic errors such as producing a non-existent function call or generating incomplete code. As BLEU penalizes length mismatches, generating code that matches the length of the reference provides a large boost. The phrase-based translation model (cf. “Phrase” row) performs well in the Django (cf. “Django” column), where mapping from the input to the output is mostly monotonic, while the hierarchical model (cf. “Hierarchical” row) yields better performance on the card datasets as the concatenation of the input fields needs to be reordered extensively into the output sequence. Finally, the sequence-to-sequence model (cf. “Sequence” row) yields extremely low results, mainly due to the lack of capacity needed to memorize whole input and output sequences, while the attention based model (cf. “Attention” row) produces results on par with phrase-based systems. Finally, we observe that by including all the proposed components (cf. “Our System” row), we obtain significant improvements over all baselines in the three datasets and is the only one that obtains non-zero accuracies in the card datasets. Component Comparison We present ablation results in order to analyze the contribution of each of our modifications. Removing the C2W model (cf. “– C2W” row) yields a small deterioration, as word lookup tables are more susceptible to sparsity. The only exception is in the HS dataset, where lookup tables perform better. We believe that this is because the small size of the training set does not provide enough evidence for the character model to scale to unknown words. Surprisingly, running our model compression code (cf. “– Compress” row) actually yields better results. Table 4 provides an illustration of the results for different compression rates. We obtain the best results with an 80% compression rate (cf. “BLEU Scores” block), while maximising the time each card is processed (cf. “Seconds Per Card” block). While the reason for this is uncertain, it is similar to the finding that language models that output characters tend to under-perform those that output words (J´ozefowicz et al., 2016). This applies when using the regular optimization process with a character softmax (cf. “Softmax” rows), but also when using the LPN (cf. “LPN” rows). We also note that the training speed of LPNs is not significantly lower as marginalization is performed with a dynamic program. Finally, a significant decrease is observed if we remove the pointer networks (cf. “– LPN” row). These improvements also generalize to sequence-to-sequence models (cf. “– Attention” row), as the scores are superior to the sequence-tosequence benchmark (cf. “Sequence” row). Result Analysis Examples of the code generated for two cards are illustrated in Figure 5. We obtain the segments that were copied by the pointer networks by computing the most likely predictor for those segments. We observe from the marked segments that the model effectively copies the attributes that match in the output, including the name of the card that must be collapsed. As expected, the majority of the errors originate from inaccuracies in the generation of the effect of the card. 
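The marking of copied segments referred to above can be sketched as picking, for each segment of the decoded derivation, the predictor with the highest joint probability; the derivation and probabilities below are illustrative stand-ins for actual model scores.

```python
# Toy sketch: label each output segment with its most likely predictor and
# flag it as copied if that predictor is a pointer ("copy_*") predictor.
def mark_copies(derivation):
    """derivation: list of (segment, {predictor: joint probability}) pairs."""
    marked = []
    for segment, scores in derivation:
        best = max(scores, key=scores.get)
        marked.append((segment, best, best.startswith("copy")))
    return marked

derivation = [
    ("init('",   {"char_softmax": 0.04}),
    ("Tirion",   {"char_softmax": 0.001, "copy_name": 0.54}),
    ("',8,6,6)", {"char_softmax": 0.02, "copy_cost": 0.01}),
]
for seg, pred, copied in mark_copies(derivation):
    print(f"{seg!r:12} {pred:14} copied={copied}")
```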
While it is encouraging to observe that a small percentage of the cards are generated correctly, it is worth mentioning that these are the result of many cards possessing similar effects. The “Madder Bomber” card is generated correctly as there is a similar card “Mad Bomber” in the training set, which implements the same effect, except that it deals 3 damage instead of 6. Yet, it is a promising result that the model was able to capture 606 this difference. However, in many cases, effects that radically differ from seen ones tend to be generated incorrectly. In the card “Preparation”, we observe that while the properties of the card are generated correctly, the effect implements a unrelated one, with the exception of the value 3, which is correctly copied. Yet, interestingly, it still generates a valid effect, which sets a minion’s attack to 3. Investigating better methods to accurately generate these effects will be object of further studies. Figure 5: Examples of decoded cards from HS. Copied segments are marked in green and incorrect segments are marked in red. 8 Related Work While we target widely used programming languages, namely, Java and Python, our work is related to studies on the generation of any executable code. These include generating regular expressions (Kushman and Barzilay, 2013), and the code for parsing input documents (Lei et al., 2013). Much research has also been invested in generating formal languages, such as database queries (Zelle and Mooney, 1996; Berant et al., 2013), agent specific language (Kate et al., 2005) or smart phone instructions (Le et al., 2013). Finally, mapping natural language into a sequence of actions for the generation of executable code (Branavan et al., 2009). Finally, a considerable effort in this task has focused on semantic parsing (Wong and Mooney, 2006; Jones et al., 2012; Lei et al., 2013; Artzi et al., 2015; Quirk et al., 2015). Recently proposed models focus on Combinatory Categorical Grammars (Kushman and Barzilay, 2013; Artzi et al., 2015), Bayesian Tree Transducers (Jones et al., 2012; Lei et al., 2013) and Probabilistic Context Free Grammars (Andreas et al., 2013). The work in natural language programming (Vadas and Curran, 2005; Manshadi et al., 2013), where users write lines of code from natural language, is also related to our work. Finally, the reverse mapping from code into natural language is explored in (Oda et al., 2015). Character-based sequence-to-sequence models have previously been used to generate code from natural language in (Mou et al., 2015). Inspired by these works, LPNs provide a richer framework by employing attention models (Bahdanau et al., 2014), pointer networks (Vinyals et al., 2015) and character-based embeddings (Ling et al., 2015). Our formulation can also be seen as a generalization of Allamanis et al. (2016), who implement a special case where two predictors have the same granularity (a sub-token softmax and a pointer network). Finally, HMMs have been employed in neural models to marginalize over label sequences in (Collobert et al., 2011; Lample et al., 2016) by modeling transitions between labels. 9 Conclusion We introduced a neural network architecture named Latent Prediction Network, which allows efficient marginalization over multiple predictors. Under this architecture, we propose a generative model for code generation that combines a character level softmax to generate language-specific tokens and multiple pointer networks to copy keywords from the input. 
Along with other extensions, namely structured attention and code compression, our model is applied on on both existing datasets and also on a newly created one with implementations of TCG game cards. Our experiments show that our model out-performs multiple benchmarks, which demonstrate the importance of combining different types of predictors. References M. Allamanis, H. Peng, and C. Sutton. 2016. A Convolutional Attention Network for Extreme Summarization of Source Code. ArXiv e-prints, February. Jacob Andreas, Andreas Vlachos, and Stephen Clark. 2013. Semantic parsing as machine translation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 47–52, August. 607 Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage ccg semantic parsing with amr. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1699–1710, September. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544. S. R. K. Branavan, Harr Chen, Luke S. Zettlemoyer, and Regina Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 82–90. David Chiang. 2007. Hierarchical phrase-based translation. Comput. Linguist., 33(2):201–228, June. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493–2537, November. Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In Proceedings of the 51th Annual Meeting on Association for Computational Linguistics, pages 690–696. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735– 1780, November. Bevan Keeley Jones, Mark Johnson, and Sharon Goldwater. 2012. Semantic parsing with bayesian tree transducers. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 488–496. Rafal J´ozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. CoRR, abs/1602.02410. Rohit J. Kate, Yuk Wah Wong, and Raymond J. Mooney. 2005. Learning to transform natural to formal languages. In Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI05), pages 1062–1068, Pittsburgh, PA, July. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pages 177–180. Nate Kushman and Regina Barzilay. 2013. Using semantic unification to generate regular expressions from natural language. 
In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 826–836, Atlanta, Georgia, June. Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2010. Inducing probabilistic ccg grammars from logical form with higherorder unification. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1223–1233. G. Lample, M. Ballesteros, S. Subramanian, K. Kawakami, and C. Dyer. 2016. Neural Architectures for Named Entity Recognition. ArXiv e-prints, March. Vu Le, Sumit Gulwani, and Zhendong Su. 2013. Smartsynth: Synthesizing smartphone automation scripts from natural language. In Proceeding of the 11th Annual International Conference on Mobile Systems, Applications, and Services, pages 193– 206. Tao Lei, Fan Long, Regina Barzilay, and Martin Rinard. 2013. From natural language specifications to program input parsers. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1294–1303, Sofia, Bulgaria, August. Wang Ling, Tiago Lu´ıs, Lu´ıs Marujo, R´amon Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015. Finding function in form: Compositional character models for open vocabulary word representation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Wei Lu, Hwee Tou Ng, Wee Sun Lee, and Luke S. Zettlemoyer. 2008. A generative model for parsing natural language to meaning representations. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, EMNLP ’08, pages 783–792, Stroudsburg, PA, USA. Association for Computational Linguistics. Mehdi Hafezi Manshadi, Daniel Gildea, and James F. Allen. 2013. Integrating programming by example and natural language programming. In Marie desJardins and Michael L. Littman, editors, AAAI. AAAI Press. Lili Mou, Rui Men, Ge Li, Lu Zhang, and Zhi Jin. 2015. On end-to-end program generation from user intention by deep neural networks. CoRR, abs/1510.07211. 608 Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Comput. Linguist., 29(1):19–51, March. Yusuke Oda, Hiroyuki Fudaba, Graham Neubig, Hideaki Hata, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Learning to generate pseudo-code from source code using statistical machine translation. In 30th IEEE/ACM International Conference on Automated Software Engineering (ASE), Lincoln, Nebraska, USA, November. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318. Chris Quirk, Raymond Mooney, and Michel Galley. 2015. Language to code: Learning semantic parsers for if-this-then-that recipes. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pages 878–888, Beijing, China, July. Sunita Sarawagi and William W. Cohen. 2005. Semimarkov conditional random fields for information extraction. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17, pages 1185–1192. MIT Press. Artem Sokolov and Franc¸ois Yvon. 2011. Minimum Error Rate Semi-Ring. In Mikel Forcada and Heidi Depraetere, editors, Proceedings of the European Conference on Machine Translation, pages 241–248, Leuven, Belgium. 
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. CoRR, abs/1409.3215. David Vadas and James R. Curran. 2005. Programming with unrestricted natural language. In Proceedings of the Australasian Language Technology Workshop 2005, pages 191–199, Sydney, Australia, December. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In C. Cortes, N.D. Lawrence, D.D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2674–2682. Curran Associates, Inc. Yuk Wah Wong and Raymond J. Mooney. 2006. Learning for semantic parsing with statistical machine translation. In Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 439–446. Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR, abs/1212.5701. John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In AAAI/IAAI, pages 1050–1055, Portland, OR, August. AAAI Press/MIT Press. 609
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 610–620, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Easy Things First: Installments Improve Referring Expression Generation for Objects in Photographs Sina Zarrieß David Schlangen Dialogue Systems Group // CITEC // Faculty of Linguistics and Literary Studies Bielefeld University, Germany [email protected] Abstract Research on generating referring expressions has so far mostly focussed on “oneshot reference”, where the aim is to generate a single, discriminating expression. In interactive settings, however, it is not uncommon for reference to be established in “installments”, where referring information is offered piecewise until success has been confirmed. We show that this strategy can also be advantageous in technical systems that only have uncertain access to object attributes and categories. We train a recently introduced model of grounded word meaning on a data set of REs for objects in images and learn to predict semantically appropriate expressions. In a human evaluation, we observe that users are sensitive to inadequate object names - which unfortunately are not unlikely to be generated from low-level visual input. We propose a solution inspired from human task-oriented interaction and implement strategies for avoiding and repairing semantically inaccurate words. We enhance a word-based REG with contextaware, referential installments and find that they substantially improve the referential success of the system. 1 Introduction A speaker who wants to refer to an object in a visual scene will try to produce a referring expression (RE) that (i) is semantically adequate, i.e. accurately describes the visual properties of the target referent, and (ii) is pragmatically and contextually appropriate, i.e. distinguishes the target from girl in front man on right anywhere brown Figure 1: Example images and REs from the ReferIt corpus (Kazemzadeh et al., 2014) other objects in the scene but does not overload the listener with unnecessary information. Figure 1 illustrates this with two examples from a corpus of REs collected from human subjects for objects in images (Kazemzadeh et al., 2014). Research on referring expression generation (REG) has mostly focussed on (ii), modeling pragmatic adequacy in attribute selection tasks, using as input a fully specified, symbolic representation of the visual attributes of an object and its distractors in a scene (Dale and Reiter, 1995; Krahmer and Van Deemter, 2012). In this paper, we follow a more recent trend (Kazemzadeh et al., 2014; Gkatzia et al., 2015) and investigate REG on real-world images. In this setting, a low-level visual representation of an image (a scene) segmented into regions (objects), including the region of the target referent, constitutes the input. This task is closely related to the recently very active field of image-to-text generation, where deep learning approaches have been used to directly map low-level visual input to natural language sentences, e.g. (Vinyals et al., 2015; Chen and Lawrence Zitnick, 2015; Devlin et al., 2015). Similarly, we propose to cast REG on images as a word selection task. Thus, we base this work on a model of perceptually grounded word meaning, which associates words with classifiers that predict their semantic appropriateness given 610 the low-level visual features of an object (Kennington and Schlangen, 2015). 
As our first contribution, we train this model on the ReferIt corpus (Kazemzadeh et al., 2014) and define decoding mechanisms tailored to REG. Large-scale recognition of objects and their attributes in images is still a non-trivial task. Consequently, REG systems now face the challenge of dealing with semantically inadequate expressions. For instance, in Figure 1, the system might not precisely distinguish between man or woman and generate an inadequate, confusing RE like man in the middle. Therefore, we focus on evaluating our system in an object identification task with users, in contrast to previous approaches to REG on images (Mao et al., 2015). In order to assess possible sources of misunderstanding more precisely, our set-up also introduces a restricted form of interaction: instead of measuring “one-shot” performance only, users have three trials for identifying a referent. In this set-up, we find that different parameter settings of the systems (e.g. their visual inputs) have a clear effect on the referential success rates, while automatic evaluation measures reflect the interactive effectiveness rather poorly. Research on reference in human interaction has noticed that conversation partners try to minimize their joint effort and often prefer to present simple expressions that can be expanded on or repaired, if necessary (Clark and Wilkes-Gibbs, 1986). This strategy, called “referring in installments” is very effective for achieving common ground in taskoriented interaction (Fang et al., 2014) and is attested in dialogue data (Striegnitz et al., 2012). The connection between reference in installments on the one and the status of distractors and distinguishing expressions on the other hand is relatively unexplored, though it seems natural to combine the two perspectives (DeVault et al., 2005). Figure 1 shows an example for very a simple but highly effective expression - it mentions color as a salient and distinguishing property while avoiding a potentially unclear object name. As our second contribution, we extend our probabilistic word selection model to work in a simple interactive installment component that tries to avoid semantically inadequate words as much as possible and only expands the expression in case of misunderstanding. We present an algorithm that generates these installments depending on the context, based on ideas from traditional REG algorithms like (Dale and Reiter, 1995). We find that a context-aware installment strategy greatly improves referential success as it helps to avoid and repair misunderstandings and offers a combined treatment of semantic and pragmatic adequacy. 2 Background 2.1 Approaches to REG “One-shot REG” Foundational work in REG has investigated attribute selection algorithms (Dale and Reiter, 1995) that compute a distinguishing referring expression for an object in a visual scene, which is defined as a target object r, set of distractor objects D = {d1, d2, d3, ...} and a set of attributes A = {type, position, size, color, ...}. A manually specified database typically associates the target and distractors in D with atomic values for each attribute, cf. (Krahmer and Van Deemter, 2012). In this setting, an attribute a1 ∈A is said to rule out a distractor object from D, if the target and distractor have different values. This is mostly based on the assumption that we have objects of particular types (e.g. people, furniture, etc.) 
and that the system has perfect knowledge about these object types and, consequently, about potential distractors of the target. This does not apply to REG on real-world images which, as we will show in this paper, triggers some new challenges and research questions for this field. Subsequent work has shown that human speakers do not necessarily produce minimally distinguishing expressions (van Deemter et al., 2006; Viethen and Dale, 2008; Koolen et al., 2011), and has tried to account for the wide range of factors - such as different speakers, modalities, object categories - that are related to attribute selection, cf. (Mitchell et al., 2010; Koolen and Krahmer, 2010; Clarke et al., 2013; Tarenskeen et al., 2015). Task-oriented REG has looked at reference as a collaborative process where a speaker and a listener try to reach a common goal (Clark and Wilkes-Gibbs, 1986; Heeman and Hirst, 1995; DeVault et al., 2005). Given the real-time constraints of situated interaction, a speaker often has to start uttering before she has found the optimal expression, but at the same time, she can tailor, extend, adapt, revise or correct her referring expressions in case the listener signals that he did not understand. Thus, human speakers can flex611 ibly split and adapt their REs over several utterances during an interaction, a phenomenon called “reference in installments”. In a corpus analysis of the S-GIVE domain, (Striegnitz et al., 2012) showed that installments are pervasive in humanhuman interaction in a task-oriented environment. However, while there has been research on goaloriented and situated REG (Stoia et al., 2006; Kelleher and Kruijff, 2006; Striegnitz et al., 2011; Garoufiand Koller, 2013), installments have been rarely implemented and empirically tested in interactive systems. A noticeable exception is the work by Fang et al. (2014) who use reinforcement learning to induce an installment strategy that is targeted at robots that have uncertain knowledge about the objects in their environment. Using relatively simple computer-generated scenes and a standard representations of objects as sets of attributes, they learn a strategy that first guides the user to objects that the system can recognize with high confidence. Our work is targeted at more complex scenes in real-world images and large domains where no a priori knowledge about object types and their attributes is given. Mao et al. (2015) use a convolutional neural network and an LSTM to generate REs directly and on the same data sets as we do in this paper, but they only report automatic evaluation results. 2.2 The ReferIt corpus We train and evaluate our system on the ReferIt data set collected by Kazemzadeh et al. (2014). The basis of the corpus is a collection of “20,000 still natural images taken from locations around the world” (Grubinger et al., 2006), which was augmented by Escalante et al. (2010) with segmentation masks identifying objects in the images (an average of 5 objects per image). This dataset also provides manual annotations of region labels and a vector of visual features for each region (e.g. region area, width, height, and color-related features). There are 256 types of objects (i.e. labels), out of which 140 labels are used for more than 50 regions (Escalante et al., 2010). Kazemzadeh et al. 
(2014) collected a large number of expressions referring to objects (for which segmentations exist) from these images (130k REs for 96k objects), using a game-based crowd-sourcing approach, and they have assembled an annotated test set. 2.3 The WAC model Given a corpus of REs aligned with objects in images, we can train a model that predicts semantically appropriate words given the visual representation of an image region. We adopt the WAC (“words-as-classifiers”) model (Kennington and Schlangen, 2015), which was originally used for reference resolution in situated dialogue. However, WAC is essentially a task-independent approach to predicting semantic appropriateness of words in visual contexts and can be flexibly combined with task-dependent decoding procedures. The WAC model pairs each word w in its vocabulary V with an individual classifier that maps the low-level, real-valued visual properties of an object o to a semantic appropriateness score. In order to learn the meaning of e.g. the word red, the visual properties of all objects described as red in a corpus of REs are given as positive instances to a supervised (logistic regression) learner. Negative instances are randomly samples from the complementary set of utterances (e.g. not containing red). We used this relatively simple model in our work, because first of all we wanted to test wether it scales from a controlled domain of typical reference game scenes (Kennington and Schlangen, 2015) to real-world images. Second, as compared to standard object recognisers that predict abstract image labels annotated in e.g. ImageNet (Deng et al., 2009), this model directly captures the relation between actual words used in REs and visual properties of the corresponding referents. Following (Schlangen et al., 2016), we can easily base our classifiers on such a high-performance convolutional neural network (Szegedy et al., 2015), by applying it on our images and extracting the final fully-connected layer before the classification layer (see Section 3.1). 3 Word-based REG for Image Objects We describe a word selection model for REG on images, which reverses the decoding procedure of our reference resolution model (Kennington and Schlangen, 2015; Schlangen et al., 2016). The main question we pursue here is whether we can predict semantically adequate words for visually represented target objects in real-world images and achieve communicative success in a taskoriented evaluation. 612 3.1 A Basic Algorithm for REG with WAC Given a visual representation of an object, we can apply all word classifiers from the vocabulary of our WAC model and obtain an appropriateness ranking over words. As these WAC scores do not reflect appropriateness in the linguistic context, i.e. the previously generated words, we combine them with simple language model (bigram) probabilities (LM) computed on our corpus. The combination of WAC and LM scores is used to rank our vocabulary with respect to appropriateness given the visual features of the target referent and linguistic context. Algorithm 1 shows our implementation of the decoding step, a beam search that iteratively adds n words with the highest combined LM and WAC score to a its agenda and terminates after a prespecified number of maximum steps. The algorithm takes the number of iterations as input, so it searches for the optimal RE given a fixed length. Deciding how many words have to be generated is very related to deciding how many attributes to include in more traditional REG. 
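The per-word training recipe behind WAC can be sketched with scikit-learn: one ℓ1-regularised logistic regression per word, positives drawn from objects whose REs contain the word, and negatives sampled at a fixed ratio (cf. the training details in Section 3.2). The feature matrix and toy REs below are random placeholders, not corpus data.

```python
# Sketch of WAC-style training: one logistic regression classifier per word,
# fit on visual feature vectors; positives = objects described with the word.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_objects, n_feats = 2000, 27                      # e.g. the SAIAPR feature set
features = rng.normal(size=(n_objects, n_feats))
refexps = [("left" if rng.random() < 0.3 else "right") + " object"
           for _ in range(n_objects)]              # toy REs, one per object

def train_wac_classifier(word, features, refexps, neg_ratio=7, seed=0):
    pos_idx = [i for i, re_ in enumerate(refexps) if word in re_.split()]
    neg_pool = [i for i in range(len(refexps)) if i not in set(pos_idx)]
    neg_idx = list(np.random.default_rng(seed).choice(
        neg_pool, size=min(len(neg_pool), neg_ratio * len(pos_idx)), replace=False))
    X = features[pos_idx + neg_idx]
    y = np.array([1] * len(pos_idx) + [0] * len(neg_idx))
    clf = LogisticRegression(penalty="l1", solver="liblinear")
    return clf.fit(X, y)

wac = {w: train_wac_classifier(w, features, refexps) for w in ["left", "right"]}
# Appropriateness score of "left" for a new object region:
print(wac["left"].predict_proba(features[:1])[0, 1])
```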
As a first approach, we have trained an additional regression classifier that predicts the length of the RE, given the number of objects in the scene and the visual properties of the target. Algorithm 1 Baseline REG with WAC 1: function WORD-GEN(object, maxsteps, V ) 2: Agenda ←{‘start′} 3: for n ∈0..maxsteps do 4: Beam ←∅ 5: for re ∈Agenda do 6: w−1 ←LAST(re) 7: for w ∈BIGRAMS(w−1, V ) do 8: s = WAC(w, object) + LM(w, w−1) 9: renew ←APPEND(re, word) 10: Beam ←Beam ∪{(renew, s)} 11: end for 12: end for 13: Agenda ←K-BEST(Beam, k) 14: end for 15: return K-BEST(Agenda, 1) 16: end function 3.2 Experimental Set-up Data We use the same test set as Kazemzadeh et al. (2014) that is divided into the 3 subsets, each containing 500 objects: “Set A contains objects randomly sampled from the entire dataset, Set B was sampled from the most frequently occurring object categories in the dataset, excluding the less interesting categories, Set C contains objects sampled from images that contain at least 2 objects of the same category, excluding the less interesting categories.”1 For each object, there are 3 humangenerated reference REs. We train the WAC model on the set of images that are not contained in the test set, which amounts to 100384 REs. The classifiers We use Schlangen et al. (2016)’s WAC model that is a trained on the REFERIT data (Kazemzadeh et al., 2014) based on the SAIAPR collection (Grubinger et al., 2006). We train binary logistic regression classifiers (with ℓ1 regularisation) for the 400 most frequent words from the training set.2 During training, we only consider non-relational expressions, as words from relational expressions would introduce further noise. Each classifier is trained with the same balance of positive and negative examples, a fixed ratio of 1 positive to 7 negative. Additionally, we train a regression classifier that predicts the expected length of the RE given the visual features of the target object and the number of objects in the entire scene. We also train a simple bigram language model on the data. Feature sets In this experiment, we manipulate the features sets of the underlying word classifiers. We train it on (i) a small set of 27 low-level visual features extracted and provided by Escalante et al. (2010), called SAIAPR features below, and (ii) a larger set of features automatically learned by a state-of-the-art convolutional neural network, “GoogLeNet” (Szegedy et al., 2015). We derive representations of our visual inputs with this CNN, that was trained on data from the ImageNet corpus (Deng et al., 2009), and extract the final fullyconnected layer before the classification layer, to give us a 1024 dimensional representation of the region. We augment this with 7 features that encode information about the region relative to the image: the (relative) coordinates of two corners, its (relative) area, distance to the center, and orientation of the image. The full representation hence is a vector of 1031 features. The feature extraction for (ii) is described in more detail in (Schlangen et al., 2016). Generally, the SAIAPR features represent interpretable visual information on position, area, and color of an image region, they could be associated with particular visual attributes. This is not possible with the GoogLeNet features. 1Where objects mostly located in the background like ‘sky’, ‘mountain’ are considered to be less interesting. 2We used scikit learn (Pedregosa et al., 2011). 
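To make the decoding in Algorithm 1 concrete, here is a runnable toy version of the word-level beam search, combining a WAC appropriateness score with a bigram LM score. The vocabulary and scoring functions are illustrative stand-ins for the trained classifiers and language model.

```python
# Toy sketch of the beam search in Algorithm 1 (WAC score + bigram LM score).
def word_gen(obj, max_steps, vocab, wac_score, lm_logprob, beam_size=3):
    agenda = [(["<s>"], 0.0)]                      # (partial RE, accumulated score)
    for _ in range(max_steps):
        beam = []
        for re_, score in agenda:
            prev = re_[-1]
            for w in vocab:
                s = score + wac_score(w, obj) + lm_logprob(prev, w)
                beam.append((re_ + [w], s))
        agenda = sorted(beam, key=lambda x: x[1], reverse=True)[:beam_size]
    best = max(agenda, key=lambda x: x[1])
    return best[0][1:]                             # drop the start symbol

# Toy scores: the target is an object on the left best described as "girl".
vocab = ["girl", "man", "left", "right", "front"]
wac = {"girl": -0.2, "man": -1.5, "left": -0.1, "right": -2.0, "front": -0.9}
bigram = {("<s>", "girl"): -0.3, ("girl", "left"): -0.4}

print(word_gen("obj1", max_steps=2, vocab=vocab,
               wac_score=lambda w, o: wac[w],
               lm_logprob=lambda p, w: bigram.get((p, w), -2.5)))
# -> ['girl', 'left'] under these toy scores
```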
613 3.3 Automatic Evaluation To the best of our knowledge, end-to-end REG performance has not been reported on the ReferIt data set before. Table 1 shows corpus-based BLEU and NIST measures calculated on the test set (using 3 references for each RE). The results indicate a minor gain of the GoogLeNet features. We also evaluate a version of the GoogLeNetbased system that instantiates the beam search with the gold length of the RE from the corpus (GoogLeNetglen). This leads to a small improvement in BLEU and NIST, indicating that the length prediction is not a critical factor. BLEU NIST 1-gram 2-gram 1-gram 2-gram SAIAPR 0.33 0.19 1.5 1.7 GoogLeNet 0.35 0.21 1.9 2.3 GoogLeNetglen 0.38 0.19 2.0 2.6 Table 1: Automatic evaluation for word-based REG systems 3.4 A Game-based Human Evaluation Set-up In parallel to the reference game in (Kazemzadeh et al., 2014), we set up a game between a computer that generates REs and a human player who clicks on the location of the described object that he identifies based on the RE. After each click, the GUI presents some canned feedback and informs the player whether he clicked on the intended object. In case of an unsuccessful click, the player has two more trials. In the following, we report the success rates with respect to each trial and the different test sets. This setup will trigger a certain amount of user guesses such that the success rates do not correspond perfectly to semantic accuracies. But it accounts for the increased difficulty as well as the interactive nature of the task. See Section 4.4 for an analysis of learning effects in this set-up and (Gatt et al., 2009; Belz and Hastie, 2014) for general discussion on REG and NLG evaluation. Success rate/ trial Error 1st 2nd 3rd red. SAIAPR 32.2 40.3 46.3 20.8 GoogLeNet 41.6 53.4 59.1 29.9 GoogLeNetglen 37.6 51 58.7 33.8 human 90.6 94.6 98.3 81.9 Table 2: Human success and error reduction rates in object identification task, for different sets of visual features For each player, we randomly sampled the games from the entire test set, but balanced the items so that they were equally distributed across the 3 test subsets A, B, C (see above) and the three systems. We also included human REs from the corpus. In total, we collected 1201 games played by 8 participants. Results In Table 2, we report the cumulative success rates for the different systems across the different trials, i.e. the success rate in the 3rd trial corresponds to the overall proportion of successfully identified referents. First of all, this suggests that the differences in performance between the systems is much bigger in terms of their communicative effectiveness as in terms of the corpusbased measures (Table 1). Thus, on the one hand, the GoogLeNet features are clearly superior to SAIAPR, whereas differences between GoogLeNet and GoogLeNetglen are minor. Interestingly, the GoogLeNet features improve 1st trial as well as overall success, leading to a much better error reduction rate3 in object identification between the first and third trial. This means that, here, humans are more likely to recover from misunderstandings and indicates that REs generated by the SAIAPR system are more semantically inadequate. Success rate (3rd trial) Set A Set B Set C SAIAPR 35.7 63.8 40.7 GoogLeNet 57 67.7 53.1 GoogLeNetglen 50 74 53 human 99.1 99 96.5 Table 3: Human success rates for baseline REG systems trained on different visual feature sets In Table 3, we report the overall success rates for the different test sets. 
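The cumulative success rates and the error reduction rate in Table 2 can be reproduced from per-game click logs with a few lines. The log format assumed below (the 1-based trial of the correct click, or None if all three trials failed) is an illustration, not the logging format actually used.

```python
# Sketch: cumulative success per trial and error reduction between trials 1 and 3.
def success_rates(trial_of_success, n_trials=3):
    total = len(trial_of_success)
    cumulative = []
    for t in range(1, n_trials + 1):
        hits = sum(1 for s in trial_of_success if s is not None and s <= t)
        cumulative.append(hits / total)
    errors_first = 1.0 - cumulative[0]
    errors_last = 1.0 - cumulative[-1]
    error_reduction = (errors_first - errors_last) / errors_first
    return cumulative, error_reduction

# Toy log: 4 games solved on trial 1, 2 on trial 2, 1 on trial 3, 3 unsolved.
log = [1, 1, 1, 1, 2, 2, 3, None, None, None]
print(success_rates(log))  # ([0.4, 0.6, 0.7], 0.5): half of the 1st-trial errors recovered
```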
All systems have a clearly higher performance on the B Set which contains the most frequent object types. Surprisingly, all systems have a largely comparable performance on Set A and C whereas only C contains images with distractors in the sense of traditional REG. This shows that describing objects which belong to an infrequent type in a semantically adequate way, which is necessary in Set A, is equally challenging as reaching pragmatic adequacy which is called for in Set C. 3Calculated as (#error1st −#error3rd)/#error1st 614 3.5 Error Analysis When users click on a distractor object instead of the intended referent in our object identification task, there can be several sources of misunderstanding. For instance, it is possible that the system produced REs that are interpretable but not specific and distinguishing enough to identify the target. It is also possible that the system selected words that are semantically inadequate such that the expression becomes completely misleading. We can get some insight into possible sources of misunderstanding by comparing the the clicked-on distractor objects to their intended target, using the object labels annotated for each image region (see Section 2.2). The analysis of mismatches between the expected label of the target and the label of the object actually clicked on by the user reveals that many errors are due to semantic inadequacies and apparently severe interpretation failures: Looking at the total number of clicks on distractor objects, 80% are clicks on a distractor with a different label than the target.4, e.g. the user clicked on a ‘tree’ instead of a ‘man’. This is clear evidence for semantic inadequacies, suggesting that the systems often generate an inadequate noun for the object type. An example for such a label mismatch is shown in Figure 2 where the system generated “person” for referring to a “sign”, such that the user first clicked on distractor objects that are persons. Similarly, we can get some evidence about how users try to repair misunderstandings, by comparing a distractor clicked on in the first trial to another distractor clicked on in the subsequent second, or third trial. Interestingly, we find that users do not seem to be aware of the fact that the system does not always generate the correct noun and do not generally try to click on objects with a different label. Only in 39% of the unsuccessful second trials, users decided for a distractor object with a different label, even though the first click had been unsuccessful. For instance, in Figure 2, the user clicked on the other person in the image in the second trial, although this referent is clearly not on the right. This suggests that users do not easily revise their RE interpretation with respect to the intended type of referent. Moreover, we can compare the different distractor clicks with respect to their spatial distance 4The percentage varies between saiapr (86%), GoogLeNet (71%) regenerated: “person on the right” rehuman: “sign on the blue shelf in the back” Figure 2: Example for an unsuccessful trial in object identification; first click: ⃝, second click: ⋄, third click: △, target: ▽) to the target. We find that after an unsuccessful first trial, users click on an object that has a greater distance to the target in 70% of the cases (as e.g. in Figure 2). This means that users often try to repair the misunderstanding with respect to the intended location, rather than with respect to the intended object type. 
Intuitively, this behaviour makes sense: a human speaker is more likely to confuse e.g. left and right than e.g. man and tree. From the perspective of the system this is a problematic situation: words like left and right are much easier to generate (based on simple positional features) than nouns like man and tree. 4 Towards interactive, contextual REG In this Section, we extend our word-based REG to deal with semantic inadequacies. We take a first step towards interactive REG and implement installments, a pervasive strategy in human taskoriented interaction. The main idea is that the system should try to avoid semantically inadequate expressions wherever possible and, if misunderstanding occurs, try to react appropriately. 4.1 Procedure When a speaker or system refers in installments, they do not need to generate an RE in one shot, but can start with an initial, simple RE that is extended or reformulated if this becomes necessary in the interaction, i.e. if the listener cannot identify the referent. This setting is a straightforward extension of our game-based evaluation in Section 3.4, where users had 3 trials for identifying a referent: instead of generating a single RE for the target 615 and presenting it in every trial, we now produce a triple (re1, re2, re3), where re1 will be used in the first trial, re2 in the second trial, etc. In this set-up, we want to investigate whether installments and reformulations help to avoid semantic inadequacies and improve referential success, i.e. whether a dynamic approach to REG compares favourably to the non-dynamic version of our system (see Section 3). This question is, however, closely linked to another, more intricate question: what is the best strategy to realize installments that, on the one hand, provide enough information so that a user can eventually identify the referent and, on the other hand, avoid misleading words? To date, even highly interactive systems do not generally treat installments, or if they do, only realise them via templates, e.g. (Stoia et al., 2006; Staudte et al., 2012; Garoufiand Koller, 2013; Dethlefs and Cuay´ahuitl, 2015). As pointed out by Liu et al. (2012), data-driven approaches are not straightforward to set-up, due to the “mismatched perceptual basis” between a human listener and an REG system. Based on the insights of our error analysis in Section 3.4, we will rely on a general installment strategy that is mostly targeted at avoiding semantically inadequate object names, and emphasizing the fact that location words generated by the system convey more reliable information. We have implemented two versions of this general strategy: (i) pattern-based installments that always avoid object names in their initial expression and dynamically extend this if necessary, (ii) contextdependent installments that condition the initial expression on the complexity of the scene and extend the initial expression accordingly, inspired by standard approaches to attribute selection in REG (Krahmer and Van Deemter, 2012). Thus, we do not test initial or reformulated expressions in isolation, but the strategy as a whole, which is similar to (Fang et al., 2014). 4.2 Pattern-based Installments This system generates a triple of REs for each image, corresponding to the respective trials in the object identification task. The triple for patternbased installments is defined as follows: • re1: a short RE that only contains location words, e.g. bottom left • re2: a longer RE that contains location words and an object name, e.g. 
the car on the left • re3: a reformulation of re2 that hedges the object name and suggests an alternative object name, e.g. vehicle or bottle on the left Figure 3(a) illustrates a case where this pattern is useful: the target is a horse, the biggest and most salient object in the image, which can be easily identified with a simple locative expression. As horses are not frequent in the training data, the system unfortunately generates hat guy as the most likely object name. This RE would be very misleading indeed if presented to a listener, as one of the distractors actually is a person with a hat. Generation Procedure In order to generate the above installment triples with our REG system, we simply restrict the vocabulary of the underlying WAC-model. Thus, we divided the 400 word classifiers into the following subsets: • V1: 20 location words (manually defined) • V2: V1 + 183 object names (extracted from annotated section of the ReferIt corpus) • V3: entire vocabulary This basic installment-based system does not use V3 (but see below). For generating the hedge of the object name in the third trial (re3) we use the top-second and top-third word from the ranking that WAC produces over all object type words given the visual features of the target. 4.3 Context-dependent Installments Our context-dependent installment strategy determines the initial RE (re1) based on the surrounding scene and generates subsequent reformulations (re2,re3) accordingly. Initial REs and Distractors As we do not have a symbolic representation of the distractor objects and their properties, we use the word-based REG system to decide whether an RE can be expected to be distinguishing for the target in the scene. This is similar to (Roy, 2002). Algorithm 2 shows the procedure for determining the initial RE (re1). Same as before, we restrict the vocabulary of the underlying WAC model, e.g. to contain only location words. But now, we apply the word generation function to the target object and to all the other objects in the set of distractors (D). If the algorithm generates an identical chunk for the target and one of its distractors, it continues with a less restricted vocabulary and a longer expression. It terminates when it has found an RE that is optimal only for the target. This algorithm proceeds on the level of 616 chunks, instead of single words, as e.g. location is often described by several words (e.g. bottom left). Algorithm 2 A Context-aware REG Algorithm 1: function INC-GEN(object, maxsteps, D, V ) 2: for n ∈2..maxsteps do 3: Vn ←RESTRICT(V, n) 4: re ←WORD-GEN(object, Vn) 5: for d ∈D do 6: red ←WORD-GEN(d, Vn) 7: if red = re then 8: break 9: end if 10: end for 11: return re 12: end for 13: end function As we found that the linguistic quality degrades for longer REs, we limit the maximal RE length to 6 words. We obtain 3 types of initial REs predicted to be distinguishing for a target by Algorithm 2: • refloc: 2 word RE, only location words (V1), Figure 3(a) • refobject: 4 word RE, location words and object names (V2), Figure 3(b) • refatt: 6 word RE, all attributes from the entire vocabulary (V3), Figure 3(c) On our test set, this produces distinguishing REs for all targets, except 4 cases for which we use an initial 6 word RE as well. Reformulations We have several options for generating the reformulation REs (re2,re3) - e.g. hedging the object name, extending the RE with more words, removing potentially misleading words, etc. 
- which are more or less appropriate, depending on the initial RE predicted by Algorithm 2. Therefore, we implemented the following types of installment triples that dynamically extend or reduce the initial RE: 1. (refloc, refobject, refobject,hedge), this corresponds to the pattern in Section 4.2 2. (refobject, refobject,hedge,refatt) 3. (refatt, refatt,hedge,refloc) Figure 3 shows examples for each triple. 4.4 Human Evaluation Set-up We use the task-oriented setup from Section 3.4 with 3 trials per image. But instead of presenting the same RE in each trial, the system now updates the phrases according to the RE triples described above. We have recruited 5 players and collected 1200 games, split equally between (a) Start with Location: re1: „in front“ re2: „hat guy in front“ re3: „hat or mountain in front“ (b) Start with Location, Object Type: re1: „building on left side“ re2: „house or bus on left side“ re3: „yellow house or bus on top left side“ (c) Start with Location, Object Type,Other: re1: „green plants on far right side“ re2: „shrub or stand on right side“ re3: „on right“ Figure 3: Examples for context-dependent installments the pattern-based installment (Section 4.2) and the context-dependent installment strategy (Section 4.3). In this evaluation, we only use word classifiers trained on GoogLeNet features. Results Table 4 shows that even the simple, pattern-based installment system improves the 1st trial success rate compared to the non-interactive baseline (the GoogLeNet-based system from Section 3) and is clearly superior with respect to its overall success and error reduction rate over trials. This suggests that a fair amount of target objects can be identified by users based on very simple, locative REs as semantically inadequate object names are avoided. Another important finding here is the high rate of error reduction during the 2nd and 3rd trial achieved by the installmentbased system. In the non-interactive system, users did not have additional cues for repairing their misunderstanding and probably guessed other possible targets in individual, more or less systematic ways. Apparently, even simple strategies for extending and hedging the initially presented RE provide very helpful cues for repairing initial misunderstandings. As we expected, the pattern-based installment system is clearly improved by our contextdependent approach to generating installments. This systems seems to strike a much better balance between generating simple expressions that avoid 617 Success rate/ trial Error 1st 2nd 3rd red. No install. 41.6 53.4 59.1 29.9 Pattern install. 46.8 69.2 80.9 64.1 Contextual install. 50.5 74.9 86 71.71 Table 4: Human evaluation for installment-based REG systems inadequate object names on the one and contextually appropriate expressions on the other hand. It improves the pattern-based installments in terms of 1st trial success rate and overall success and error reduction rate. The finding that installment strategies should be combined with insights from traditional distractororiented REG is further corroborated when we compare the success rates on the different subsets of our test set, see Table 5. Thus, the performance of the context-dependent installment system is much more stable on the different subsets than the pattern-based system which has a clear dip in success rate on Set C, which contains target referents with distractors of the same object type. 
This result suggests that our approach to determine distinguishing REs based purely on predictions of word-based REG (Section 4.3) presents a viable solution for REG on images, where information on distractors is not directly assessable in the lowlevel representation of the scene. Success rate (3rd trial) Set A Set B Set C No install. 57 67.7 53.1 Pattern install. 80.8 84.3 77.5 Contextual install. 86 87.5 84.5 Table 5: Human evaluation on different test sets for installment-based REG systems Finally, the graph in Figure 4 shows the average success rates over time and provides more evidence for the effectiveness of installments. We observe a clear learning effect in the non-interactive system, meaning that users faced unexpected interpretation problems due to inaccurate expressions, but adapted to the situation to some extent. In contrast, both installment systems have stable performance over time, which indicates that system behaviour is immediately understandable and predictable for human users. 0 50 100 150 200 # Game 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 % Accuracy Context install. Pattern install. No install. Figure 4: Participants’ success rates in object identification over time 5 Discussion and Conclusion We have presented an REG system that approaches the task as a word selection problem and circumvents manual specification of attributes in symbolic scene representations as required in traditional REG (Krahmer and Van Deemter, 2012), or manual specification of attribute-specific functions that map particular low-level visual features to attributes or words as in (Roy, 2002; Kazemzadeh et al., 2014). This knowledge-lean approach allows us to use automatically learned ConvNet features and obtain a promising baseline that predicts semantically appropriate words based on visual object representations. We have argued and demonstrated that REG in more realistic settings greatly benefits from a taskoriented, interactive account and should explore principled strategies for repairing and avoiding misunderstandings due to semantically inaccurate REs. In order to achieves this, we have augmented our approach with some manually designed installment strategies. An obvious direction for future work is to automatically induce such a strategy, based on confidence measures that automatically predict the trust-worthiness of a word for an object. Another extension that we have planned for future work is to implement relational expressions, similar to (Kennington and Schlangen, 2015). Based on relational expressions, we will be able to generate reformulations and installments tailored to the interaction with the user. For instance, a very natural option for installments is to relate the wrong target object clicked on by the user to the intended target, e.g. something like to the left of that one, the bigger object. 618 Acknowledgments We acknowledge support by the Cluster of Excellence “Cognitive Interaction Technology” (CITEC; EXC 277) at Bielefeld University, which is funded by the German Research Foundation (DFG). References Anja Belz and Helen Hastie. 2014. Comparative evaluation and shared tasks for nlg in interactive systems. In Amanda Stent and Srinivas Bangalore, editors, Natural Language Generation in Interactive Systems, pages 302–350. Cambridge University Press. Cambridge Books Online. Xinlei Chen and C Lawrence Zitnick. 2015. Mind’s eye: A recurrent visual representation for image caption generation. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2422–2431. Herbert H Clark and Deanna Wilkes-Gibbs. 1986. Referring as a collaborative process. Cognition, 22(1):1–39. Alasdair DF Clarke, Micha Elsner, and Hannah Rohde. 2013. Where’s wally: the influence of visual salience on referring expression generation. Frontiers in psychology, 4. Robert Dale and Ehud Reiter. 1995. Computational interpretations of the gricean maxims in the generation of referring expressions. Cognitive Science, 19(2):233–263. Jia Deng, W. Dong, Richard Socher, L.-J. Li, K. Li, and L. Fei-Fei. 2009. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09. Nina Dethlefs and Heriberto Cuay´ahuitl. 2015. Hierarchical reinforcement learning for situated natural language generation. Natural Language Engineering, 21(03):391–435. David DeVault, Natalia Kariaeva, Anubha Kothari, Iris Oved, and Matthew Stone. 2005. An information-state approach to collaborative reference. In Proceedings of the ACL 2005 on Interactive poster and demonstration sessions, pages 1–4. Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He, Geoffrey Zweig, and Margaret Mitchell. 2015. Language models for image captioning: The quirks and what works. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 100–105, Beijing, China, July. Association for Computational Linguistics. Hugo Jair Escalante, Carlos a. Hern´andez, Jesus a. Gonzalez, a. L´opez-L´opez, Manuel Montes, Eduardo F. Morales, L. Enrique Sucar, Luis Villase˜nor, and Michael Grubinger. 2010. The segmented and annotated IAPR TC-12 benchmark. Computer Vision and Image Understanding, 114(4):419–428. Rui Fang, Malcolm Doering, and Joyce Y. Chai. 2014. Collaborative Models for Referring Expression Generation in Situated Dialogue. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence. Konstantina Garoufiand Alexander Koller. 2013. Generation of effective referring expressions in situated context. Language and Cognitive Processes. Albert Gatt, Anja Belz, and Eric Kow. 2009. The tuna-reg challenge 2009: Overview and evaluation results. In Proceedings of the 12th European Workshop on Natural Language Generation, pages 174–182. Association for Computational Linguistics. Dimitra Gkatzia, Verena Rieser, Phil Bartie, and William Mackaness. 2015. From the virtual to the real world: Referring to objects in real-world spatial scenes. In Proceedings of EMNLP 2015. Association for Computational Linguistics. Michael Grubinger, Paul Clough, Henning M¨uller, and Thomas Deselaers. 2006. The IAPR TC-12 benchmark: a new evaluation resource for visual information systems. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2006), pages 13– 23, Genoa, Italy. Peter A Heeman and Graeme Hirst. 1995. Collaborating on referring expressions. Computational linguistics, 21(3):351–382. Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara L Berg. 2014. ReferItGame: Referring to Objects in Photographs of Natural Scenes. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pages 787–798, Doha, Qatar. John D Kelleher and Geert-Jan M Kruijff. 2006. Incremental generation of spatial referring expressions in situated dialog. 
In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 1041–1048. Casey Kennington and David Schlangen. 2015. Simple learning and compositional application of perceptually grounded word meanings for incremental reference resolution. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 292–301, Beijing, China, July. Association for Computational Linguistics. Ruud Koolen and Emiel Krahmer. 2010. The d-tuna corpus: A dutch dataset for the evaluation of referring expression generation algorithms. In LREC. Ruud Koolen, Albert Gatt, Martijn Goudbeek, and Emiel Krahmer. 2011. Factors causing overspecification in definite descriptions. Journal of Pragmatics, 43(13):3231– 3250. Emiel Krahmer and Kees Van Deemter. 2012. Computational generation of referring expressions: A survey. Computational Linguistics, 38(1):173–218. Changsong Liu, Rui Fang, and Joyce Y Chai. 2012. Towards mediating shared perceptual basis in situated dialogue. In Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 140–149. Association for Computational Linguistics. Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L. Yuille, and Kevin Murphy. 2015. Generation and comprehension of unambiguous object descriptions. CoRR, abs/1511.02283. 619 Margaret Mitchell, Kees van Deemter, and Ehud Reiter. 2010. Natural reference to objects in a visual domain. In Proceedings of the 6th international natural language generation conference, pages 95–104. Association for Computational Linguistics. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830. Deb K Roy. 2002. Learning visually grounded words and syntax for a scene description task. Computer Speech & Language, 16(3):353–385. David Schlangen, Sina Zarriess, and Casey Kennington. 2016. Resolving references to objects in photographs using the words-as-classifiers model. In Proceedings of the 54rd Annual Meeting of the Association for Computational Linguistics (ACL 2016). Maria Staudte, Alexander Koller, Konstantina Garoufi, and Matthew W Crocker. 2012. Using listener gaze to augment speech generation in a virtual 3d environment. In Proceedings of the 34th Annual Conference of the Cognitive Science Society. To appear. Laura Stoia, Darla Magdalene Shockley, Donna K Byron, and Eric Fosler-Lussier. 2006. Noun phrase generation for situated dialogs. In Proceedings of the fourth international natural language generation conference, pages 81–88. Association for Computational Linguistics. Kristina Striegnitz, Alexandre Denis, Andrew Gargett, Konstantina Garoufi, Alexander Koller, and Mari¨et Theune. 2011. Report on the second second challenge on generating instructions in virtual environments (give-2.5). In Proceedings of the 13th European Workshop on Natural Language Generation, pages 270–279. Kristina Striegnitz, Hendrik Buschmeier, and Stefan Kopp. 2012. Referring in installments: a corpus study of spoken object references in an interactive virtual environment. 
In Proceedings of the Seventh International Natural Language Generation Conference, pages 12–16. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going deeper with convolutions. In CVPR 2015, Boston, MA, USA, June. Sammie Tarenskeen, Mirjam Broersma, and Bart Geurts. 2015. hand me the yellow stapler or hand me the yellow dress: Colour overspecification depends on object category. page 140. Kees van Deemter, Ielka van der Sluis, and Albert Gatt. 2006. Building a semantically transparent corpus for the generation of referring expressions. In Proceedings of the Fourth International Natural Language Generation Conference, pages 130–132. Association for Computational Linguistics. Jette Viethen and Robert Dale. 2008. The use of spatial relations in referring expression generation. In Proceedings of the Fifth International Natural Language Generation Conference, pages 59–67. Association for Computational Linguistics. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Computer Vision and Pattern Recognition. 620
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 621–631, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Collective Entity Resolution with Multi-Focal Attention Amir Globerson∗and Nevena Lazic and Soumen Chakrabarti† and Amarnag Subramanya and Michael Ringgaard and Fernando Pereira Google, Mountain View CA, USA [email protected], [email protected], [email protected], {asubram, ringgaard, pereira}@google.com Abstract Entity resolution is the task of linking each mention of an entity in text to the corresponding record in a knowledge base (KB). Coherence models for entity resolution encourage all referring expressions in a document to resolve to entities that are related in the KB. We explore attentionlike mechanisms for coherence, where the evidence for each candidate is based on a small set of strong relations, rather than relations to all other entities in the document. The rationale is that documentwide support may simply not exist for non-salient entities, or entities not densely connected in the KB. Our proposed system outperforms state-of-the-art systems on the CoNLL 2003, TAC KBP 2010, 2011 and 2012 tasks. 1 Introduction Entity resolution (ER) is the task of mapping mentions of entities in text to corresponding records in a knowledge base (KB) (Bunescu and Pasca, 2006; Cucerzan, 2007; Kulkarni et al., 2009; Dredze et al., 2010; Hoffart et al., 2011; Hachey et al., 2013). ER is a challenging problem because mentions are often ambiguous on their own, and can only be resolved given appropriate context. For example, the mention Beirut may refer to the capital of Lebanon, the band from New Mexico, or a drinking game (Figure 1). Names may also refer to entities that are not in the KB, a problem known as NIL detection. Most ER systems consist of a mention model, a context model, and a coherence model (Milne and Witten, 2008; Cucerzan, 2007; Ratinov et al., ∗Currently at Tel Aviv University † Currently at IIT Bombay 2011; Hoffart et al., 2011; Hachey et al., 2013). The mention model associates each entity with its possible textual representations (also known as aliases or surface forms). The context model helps resolve an ambiguous mention using textual features extracted from the surrounding context. The coherence model, the focus of this work, encourages all mentions to resolve to entities that are related to each other. Relations may be established via the KB, Web links, embeddings, or other resources. Coherence models often define an objective function that includes local and pairwise candidate scores, where the pairwise scores correspond to some notion of coherence or relation strength.1 Support for a candidate is typically aggregated over relations to all other entities in the document. One problem with this approach is that it may dilute evidence for entities that are not salient in the document, or not well-connected in the KB. Our work aims to address this issue. We introduce a novel coherence model with an attention mechanism, where the score for each candidate only depends on a small subset of mentions. Attention has recently been used with considerable empirical success in tasks such as translation (Bahdanau et al., 2014) and image caption generation (Xu et al., 2015). We argue that attention is also desirable for collective ER due to the discussed imbalance in the number of relations for different entities. Attention models typically have a single focus, implemented using the softmax function. 
Our model allows each candidate to focus on multiple mentions, and, to implement it, we introduce a novel smooth version of the multi-focus attention 1An exception to this framework are topic models in which a topic may generate both entities and words, e.g., (Kataria et al., 2011; Han and Sun, 2012; Houlsby and Ciaramita, 2014). 621 Beirut (city in Leb.) Beirut (band) Beirut (game) y1 Santa Fe (city in NM) Santa Fe (film) Santa Fe (city in Cuba) y2 New Mexico (state) New Mexico (university) New Mexico (ship) y3 Figure 1: Illustration of the ER problem for three mentions “Beirut”, “New Mexico” and “Santa Fe”. each mention has three possible disambiguations. Edges link disambiguations that have Wikipedia links between their respective pages. function, which generalizes soft-max. Our system uses mention and context models similar to those of Lazic et al. (2015), along with our novel multi-focal attention model to enforce coherence, leading to significant performance improvements on CoNLL 2003 (Hoffart et al., 2011) and TAC KBP 2010–2012 tasks (Ji et al., 2010; Ji et al., 2011; Mayfield et al., 2012). In particular, we achieve a 20% relative reduction in error from Chisholm and Hachey (2015) on CoNLL, and a 22% error reduction from Cucerzan (2012) on TAC 2012. Our contributions thus consist of defining a novel multi-focal attention model and applying it successfully to an entity resolution system. 2 Definitions and notation We are given a document with n mentions, where each mention i has a set of ni candidate entities Ci = {ci,1, ..., ci,ni}. The goal is to assign a label yi ∈Ci to each mention. Similarly to previous work, our approach to disambiguation relies on local and pairwise candidate scores, which we denote by si(yi) and sij(yi, yj) respectively. The local score is based only on local evidence, such as the mention phrase and textual features, while the pairwise score is based on the relatedness of the two candidates. In Sections 3.2 and 3.3 we discuss how these scores may be parameterized and learned. Many systems (Cucerzan, 2007; Milne and Witten, 2008; Kulkarni et al., 2009) simply hardwire pairwise scores. Coherence models typically attempt to maximize a global objective function that assigns a score to each complete labeling y = (y1, . . . , yn). An example of such a function is the sum of all singleton and pairwise scores for each label:2 g(y) = X i si(yi) + X i X j:j̸=i sij(yi, yj). (1) One disadvantage of this approach is that maximizing g corresponds to finding the MAP assignment of a general pairwise Markov random field, and is hence NP hard for the general case (Wainwright and Jordan, 2008). Another limitation is that non-salient entities may be related to very few other entities mentioned in the document, and summing over all mentions may dilute the evidence for such entities. In this paper we explore alternative objectives, relying on attention and tractable inference. 3 Attention model We now describe our multi-focal attention model. We first introduce the inference approach and optimization objective, and then provide details on how scores are calculated and learned. 3.1 Inference As noted earlier, the global score function in Eq. (1) is hard to maximize. Here we simplify inference by decomposing the task over mentions, which makes it easy to integrate attention in terms of both inference and learning. 3.1.1 Star model We start by considering a simple attention-free model in which inference is tractable, which we call a star model. 
For a particular mention i, the star model is a graphical model that contains yi, 2The scores usually depend not only on the labels, but also on the input text. We omit this dependence for brevity. 622 y1 y2 y3 y4 (a) y1 y2 y3 y4 (b) y1 y2 y3 y4 (c) Figure 2: (a) The complete graph corresponding to Eq. (1). (b) A star shaped subgraph corresponding to y2. This will be used to obtaining the label y2. (c) The star graph for y3. all interactions between yi and other labels, and no other interactions, as illustrated in Fig. 2. While the star graph centered at i contains up to n variables, we will only use it to infer the label of mention i. Let qij(yi) be the support for label yi from mention j, defined as follows: qij(yi) = max yj sij(yi, yj) + sj(yj), (2) and we also define qii(yi) = −∞to simplify notation for later. We define the following score function for mention i: fi(yi) = si(yi) + X j:j̸=i qij(yi) (3) and predict the label yi = arg maxy fi(y). Due to the structure of the star graph, inference is easy and can be done in O(nC2), where C is the maximum number of candidates. A similar decomposition has previously been used in the context of approximate learning for structured prediction (Sontag et al., 2011). Note that we do not view this approach as an approximation to the global problem, but rather as our inference procedure. 3.1.2 Adding attention The score function in Eq. (3) aggregates pairwise scores for each label yi over all mentions. In this section, we restrict this to only consider K mentions with the strongest relations to yi.3 Let amxK(z) be the sum of the largest K values in the vector z = (z1, . . . , zn). For each label yi, we redefine the score function to be fi(yi) = si(yi) + amxK(qi(yi)), (4) 3It is possible to relax this to allow up to K relations, but we focus on exactly K for simplicity. where qi(yi) = (qi1(yi), . . . , qin(yi)) and qij(yi) is as defined in Eq. (2). The inference rule is again yi = arg maxy fi(y), and the computational cost is O(nC2 + n log n) since sorting is required.4 3.1.3 Soft attention Previous work on attention has shown that it is advantageous to use a soft form of attention, where the level of attention is not zero or one, but can rather take intermediate values. Existing attention models focus on a single object, such as a single word (Bahdanau et al., 2014) or a single image window (Xu et al., 2015). In such models, it is natural to change the max function in the attention operator to a soft-max. In our case, the attention beam contains K elements, and we require a different notion of a soft-max, which we develop below. To obtain a soft version of the function amxK(z), we first use an alternative definition. Denote by S the set u = (u1, . . . , un) such that 0 ≤ui ≤1 and P i ui = K. Then amxK(z) is equivalent to the optimization problem: max ·u∈S z · u (5) The optimization problem above is a linear program, whose solution is the sum of top K elements of z as required. This follows since the optimal ui can easily be shown to attain only integral values. Given this optimization view of amxK(z) it is natural to smooth it (Nesterov, 2005) by adding a non-linearity to the optimization. Since the variables are non-negative, one possible choice is an entropy-like regularizer. We shall see that this choice results in a closed form solution, and also recovers the standard soft-max case for K = 1. 
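Before turning to the smooth relaxation, the hard version of the decomposed inference in Eqs. (2)–(4) can be sketched as follows. This is a minimal illustration only: the array and dictionary representation of the singleton scores s_i(y_i) and pairwise scores s_ij(y_i, y_j), and the choice to let mention pairs without a stored score contribute nothing, are assumptions of the sketch, not the implementation used in the experiments.

```python
import numpy as np

def star_attention_predict(s_local, s_pair, K):
    """Hard multi-focus attention inference (Eqs. 2-4), decomposed over mentions.

    s_local: list of n arrays; s_local[i][y] = s_i(y) for each candidate y of mention i.
    s_pair:  dict mapping (i, j) to an array of shape (n_i, n_j) holding s_ij(y_i, y_j);
             pairs absent from the dict are simply skipped (an assumption of this sketch).
    """
    n = len(s_local)
    labels = []
    for i in range(n):
        best_y, best_f = None, -np.inf
        for y in range(len(s_local[i])):
            # q_ij(y) = max_{y_j} [ s_ij(y, y_j) + s_j(y_j) ]   (Eq. 2)
            q = []
            for j in range(n):
                if j == i:
                    continue
                pair = s_pair.get((i, j))
                if pair is None:
                    continue
                q.append(float(np.max(pair[y] + np.asarray(s_local[j]))))
            # keep only the K strongest supporting mentions      (Eq. 4, hard version)
            f = s_local[i][y] + sum(sorted(q, reverse=True)[:K])
            if f > best_f:
                best_y, best_f = y, f
        labels.append(best_y)
    return labels
```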
Consider the optimization problem: smxK(z) = max u∈S X i ziui −β−1 X i ui log ui, (6) where β is a tuned hyperparameter.5 The following proposition provides a closed form solution for smxK, as well as its gradient. Proposition 3.1. Assume w.l.o.g. that z is sorted such that z1 ≥. . . ≥zn. Denote by R the maximum index r ∈{1, . . . , K −1} such that: zr ≥β−1 log Pn j=r+1 exp (βzj) K −r (7) 4Note that if K < log n, we spend only nK instead of n log n time. 5Note that −P i ui log ui is different from the entropy function since variables ui sum to K and not to 1. 623 If this doesn’t hold for any r, set R = 0. Then: smxK(z) = R X j=1 zj+K −R β log Pn j=R+1 exp (βzj) K −R (8) The function smxK(z) is differentiable with a gradient v given by: vi = ( 1 1 ≤i ≤R (K −R) exp(βzi) Pn j=R+1 exp(βzj) R < i ≤n ) (9) Proof is provided in the appendix. As noted, K = 1 recovers the standard softmax function.6 As β →∞, smxK will approach the sum of the top K elements as expected. For finite β we have a soft version of amxK. Our soft attention based model will therefore consider the soft-variant of Eq. (4): fi(yi) = si(yi) + smxK(qi(yi)) , (10) and maximize f(yi) to obtain the label. 3.2 Score parameterization Thus far we assumed the singleton and pairwise scores were given. We next discuss how to parameterize and learn these scores. As in other structured prediction work, we will assume that the scores are functions of the features of the input x and labels. Specifically, denote a set of singleton features for mention i and label yi by φs i(x, yi) ∈ Rns and a set of pairwise features for mentions i and j and their labels by φp ij(x, yi, yj) ∈Rnp. Then the model has two sets of weights ws and wp and the scores are obtained as a linear combination of the features. Namely:7 si(yi; ws) = ws · φs i(x, yi) sij(yi, yj; wp) = wp · φp ij(x, yi, yj) , where we have explicitly denoted the dependence of the scores on the weight vectors. See Sec. 6.2.2 for details on how the features are chosen. It is of course possible to consider non-linear alternatives for the score function, as in recent deep learning 6When we refer to the soft-max function, we mean the function β−1 log P exp (βai), which is an often used differentiable convex upper bound of the max function (e.g., see (Gimpel and Smith, 2010)). Soft-max sometimes also refers to the activation function exp(ai) P j exp(aj). The latter is in fact the gradient of the former (for β = 1). 7We again omit the dependence of the scores on the input x for brevity. parsing models (Chen and Manning, 2014; Weiss et al., 2015), but we focus on the linear case for simplicity. 3.3 Parameter learning The parameters ws, wp are learned from labeled data, as explained next. Since inference decomposes over mentions, we use a simple hinge loss for each mention. Denote by y∗ i the ground truth label for mention i, and let si(yi) ≡ (si1(yi), . . . , sin(yi)). Then the hinge loss for mention i is: Li = max yi [si(yi) + smxK(si(yi)) −si(y∗ i ) −smxK(si(y∗ i )) + ∆(yi, y∗ i )] where ∆(yi, y∗ i ) is zero if yi = y∗ i and one otherwise. If there are unlabeled mentions in the training data, we add those to the star graph, and maximize over the unknown labels in the positive and negative part of the hinge loss. The overall loss is simply the sum of losses for all the mentions, plus ℓ2 regularization over ws, wp. We minimize the loss using AdaGrad (Duchi et al., 2011) with learning rate η = 0.1. 
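As an illustration of Proposition 3.1, the sketch below evaluates smxK(z) and its gradient directly from the closed form in Eqs. (7)–(9). It is a minimal NumPy transcription under our own choices of function name and of scipy's logsumexp for numerical stability; it is not the training code used in the experiments.

```python
import numpy as np
from scipy.special import logsumexp

def smx_k(z, K, beta):
    """Smooth top-K maximum smx_K(z) and its gradient (Proposition 3.1).

    Returns (value, grad); grad is given in the original (unsorted) order of z.
    """
    z = np.asarray(z, dtype=float)
    order = np.argsort(-z)          # indices sorting z in decreasing order
    zs = z[order]
    n = len(zs)

    # R: the largest r in {1, ..., K-1} satisfying Eq. (7); R = 0 if none does.
    R = 0
    for r in range(1, K):
        tail = logsumexp(beta * zs[r:])                  # log sum_{j>r} exp(beta z_j)
        if zs[r - 1] >= (tail - np.log(K - r)) / beta:
            R = r

    # Value, Eq. (8).
    tail = logsumexp(beta * zs[R:])
    value = zs[:R].sum() + (K - R) / beta * (tail - np.log(K - R))

    # Gradient, Eq. (9): ones for the top R entries, scaled soft-max weights below.
    grad_sorted = np.empty(n)
    grad_sorted[:R] = 1.0
    grad_sorted[R:] = (K - R) * np.exp(beta * zs[R:] - tail)
    grad = np.empty(n)
    grad[order] = grad_sorted
    return value, grad
```

For K = 1 the loop over r is empty, R = 0, and the value reduces to the standard soft-max β−1 log Σ exp(βzi); for large β the gradient weights concentrate on the K largest entries, approaching amxK as noted above.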
4 Single-link model To motivate our modeling choices of using multifocal attention and decomposed inference, we additionally consider a simple baseline model with single-focus attention and global inference. In this approach, which we name single-link, each mention i attends to exactly one other mention that maximizes the pairwise relation score. The corresponding objective can be written as gSL(y) = X i si(yi) + max j sij(yi, yj)  (11) where sij(yi, yj) = −∞if there is no relation between yi and yj, and we set sii(yi, yi) = 0. While exact inference in this model remains intractable, we can find approximate solutions using max-sum belief propagation (Kschischang et al., 2001). As a reminder, max-sum is an iterative algorithm for MAP inference which can be described in terms of messages sent from model factors ga(ya) to each of their variables y ∈ya. At convergence, each variable is assigned to the value that maximizes belief b(y), defined as the sum of incoming messages. The message updates 624 have the following form: µga→Y (y) = max ya\y  ga(ya) + X j̸=i q\a j (yj)  (12) where q\a j (yj) is the sum of all messages to yj except the one from factor ga. While the singlelink model contains high-order factors over n variables, computing the messages from these factors is tractable and requires sorting. 5 Related work Ji (2016) and Ling et al. (2015) provide summaries of recent ER research. Here we review work related to the three main facets of our approach. 5.1 Coherence scores Several systems (Milne and Witten, 2008; Kulkarni et al., 2009; Hoffart et al., 2011) use the “Milne and Witten” measure for relatedness between a pair of entities, which is based on the number of Wikipedia articles citing each entity page, and the number of articles citing both; Cucerzan (2007) has also relied on the Wikipedia category structure. Internal links from one entity page to another in Wikipedia also provide direct evidence of relatedness between them. Another (possibly more noisy) source of information are Web pages containing links (Singh et al., 2012) to Wikipedia pages of both entities. Such links have been used in several recent systems (Cheng and Roth, 2013; Chisholm and Hachey, 2015). Yamada et al. (2016) train embedding vectors for entities, and use them to define similarities. 5.2 Collective inference for ER Optimizing most global coherence objectives is intractable. Milne and Witten (2008) and Ferragina and Scaiella (2010) decompose the problem over mentions and select the candidate that maximizes their relatedness score, which includes relations to all other mentions. Hoffart et al. (2011) use an iterative heuristic to remove unpromising mentionentity edges. Cucerzan (2007) creates a relation vector for each candidate, and disambiguates each entity to the candidate whose vector is most similar to the aggregate (which includes both correct and incorrect labels). Cheng and Roth (2013) use an integer linear program solver and Kulkarni et al. (2009) use a convex relaxation. Ratinov et al. (2011) use relation scores as features in a ranking SVM. Belief propagation without attention has been used by Ganea et al. (2015). Personalized PageRank (PPR) (Jeh and Widom, 2003) is another tractable alternative, adopted by several recent systems (Han and Sun, 2011; He et al., 2013; Alhelbawy and Gaizauskas, 2014; Pershina et al., 2015). Laplacian smoothing (Huang et al., 2014) is closely related. 
5.3 Attention models Attention models have shown great promise in several applications, including machine translation (Bahdanau et al., 2014) and image caption generation (Xu et al., 2015). We address a new application of attention, and introduce a significantly different attention mechanism, which allows each variable to focus on multiple objects. We develop a novel smooth version of the multi-focus attention function, which generalizes the single focus softmax-function. While some existing entity resolution systems (Jin et al., 2014; Lazic et al., 2015) may be viewed as having attention mechanisms, these are intended for single textual features and not readily extensible to structured inference. 6 Experiments 6.1 Evaluation data CoNLL: The CoNLL dataset (Hoffart et al., 2011) contains 1393 articles with about 34K mentions, and the standard performance metric is mention-averaged accuracy. The documents are partitioned into train, test-a and test-b. Like most authors, we report performance on the 231 test-b documents with 4483 linkable mentions. TAC KBP: The TAC KBP 2010, 2011, and 2012 evaluation datasets (Ji et al., 2010; Ji et al., 2011; Mayfield et al., 2012) include 2250, 2250, and 2226 mentions respectively, of which roughly half are linkable to the reference KB. The competition evaluation includes NIL entities; participants are required to cluster NIL mentions across documents so that all mentions of each unknown entity are assigned a unique identifier. For these datasets, we report in-KB accuracy, overall accuracy (with all NILs in one cluster), and the competition metric B3+F1 which evaluates NIL clustering. 6.2 Experimental setup 6.2.1 KB and entity aliases Our KB is derived from the Wikipedia subset of Freebase (Bollacker et al., 2008), with about 4M 625 entities. To obtain our mention prior (the probability of candidate entities given a mention), we collect alias counts from Wikipedia page titles (including redirects and disambiguation pages), Freebase aliases, and Wikipedia anchor text. 99.31% of CoNLL test-b mentions are covered by the KB, and 96.19% include the gold entity in the candidates. We optionally use the mapping from aliases to candidate entities released by Hoffart et al. (2011), obtained by extending the “means” tables of YAGO (Hoffart et al., 2013). When released, it had 100% mention and gold recall on CoNLL, i.e. every annotated mention could be mapped to at least one entity, and the set of entities included the gold entity. However, changes in canonical Wikipedia URLs, accented characters and unicode usually result in mention losses over time, as not all URLs can be mapped to the KB (Hasibi et al., 2016, Sec. 4). For CoNLL only, we experiment with a third alias-entity mapping derived from Hoffart et al. (2011) by Pershina et al. (2015); we call it “HP”. It is not known how candidates were pruned, but it has high recall and very low ambiguity: only 12.6 on CoNLL test-b, compared to 22.34 in our KB and 65.9 in YAGO. Unsurprisingly, using only this source of aliases results in high accuracy on CoNLL (Pershina et al., 2015; Yamada et al., 2016). Table 1 lists the statistics of the three alias-entity mappings and some of their combinations on the CoNLL test-b dataset. Table 2 provides the same statistics on the TAC KBP datasets (restricted to non-NIL mentions) for the of the YAGO+KB aliasentity mapping. 6.2.2 Local and pairwise scores Our baseline system is similar in design and accuracy to Plato (Lazic et al., 2015). 
Given the referent phrase mi and textual context features bi, it computes the probability of a candidate entity as pi(c) ∝p(c|mi)p(bi|c). The system resolves mentions independently and does not have an explicit coherence model; however, it does capture some coherence information indirectly as referent phrases are included as string context features. We experiment with several versions of the mention prior p(c|mi) as described in the previous section. Scores for single-link model: In the single-link model, we simply set the local score for mention i Alias Mention Gold Uniq. Avg. map recall recall % ambig. KB 99.31 96.19 17.93 22.3 YAGO 97.17 96.30 15.50 65.9 +KB 99.84 99.51 16.28 73.6 HP 99.87 99.84 17.98 12.6 +KB 99.87 99.87 16.40 28.7 All 99.87 99.87 15.37 78.7 Table 1: Alias-entity map statistics on CoNLL test-b, 4483 gold mentions. Mention recall is the percentage of mentions with at least one known entity; gold recall is the percentage of mentions where the gold entity was included in the candidates. Unique aliases map to exactly one entity. The last column shows the number of candidates averaged over test-b mentions. Dataset Mention Gold Uniq. Avg. recall recall % ambig. TAC 2010 98.14 93.04 22.45 45.34 TAC 2011 98.40 89.23 27.82 49.13 TAC 2012 97.36 87.83 20.00 68.93 Table 2: YAGO+KB alias-entity map statistics on the TAC KBP datasets, restricted to non-NIL mentions. and candidate c to si(c) = ln pi(c) 1−pi(c), so that likely candidates get positive scores. We set the pairwise score between two candidates heuristically to sij(yi, yj) = ln o(yi, yj) + 2.3, where o(yi, yj) is the number of outlinks from the Wikipedia page of yi to the page of yj. We consider up to three candidates for each mention for CONLL, and ten for TAC; if the baseline probability of the top candidate exceeds 0.9, we only consider the top candidate. Including more candidates did not make a difference in performance, as additional candidates had low baseline scores and were almost never chosen in practice. Scores for attention model: Local features φs i(x, yi) for the attention model are derived from pi(c). As the attention models have no probabilistic interpretation, we inject as features log pi(c) and log(1 −pi(c)). We set log 0 = 0 by convention, and handle the case where log is undefined by introducing two additional binary indicator features for pi(c) = 0 and pi(c) = 1. Edge features φp ij are set based on three sources of information: (1) number of Freebase relations 626 System Alias map In-KB acc. % Lazic (2015) N/A 86.4 Our baseline KB 87.9 Single link KB 88.2 Attention KB 89.5 Chisholm (2015) YAGO 88.7 Ganea (2015) YAGO 87.6 Our baseline KB+YAGO 85.2 Single link KB+YAGO 86.6 Attention KB+YAGO 91.0 Our baseline KB+HP 89.9 Single link KB+HP 89.9 Attention KB+HP 91.7 Our baseline KB+HP* 91.9 Single link KB+HP* 92.1 Attention KB+HP* 92.7 Pershina (2015) HP 91.8 Yamada (2016) HP 93.1 Table 3: CoNLL test-b evaluation for recent competitive systems and our models, using different alias-entity maps. “KB+HP*” means we train and score entities using KB+HP, but output entities only in HP. between yi and yj, (2) number of hyperlinks between Wikipedia pages of yi and yj (in either direction), and (3) number of mentions of yi on the Wikipedia page of yj and vice versa, after annotating Wikipedia with our baseline resolver. We cap each count to five and encode it using five binary indicator features, where the jth feature is set to 1 if the count is j and 0 otherwise. 
Additionally, for each count c we add a feature log (1 + c). We also added a binary feature which is one if yi = yj. We train the scores for the attention model on the 946 CoNLL train documents for CoNLL, and on the TAC 2009 evaluation and TAC 2010 training documents for TAC. 6.3 Results CoNLL: Table 3 compares our models to recent competitive systems on CoNLL test-b in terms of mention-averaged (micro) accuracy. We also note the alias-entity map used in each system, as the corresponding gold recall is an upper bound on accuracy, and alias ambiguity determines the difficulty of the task. Therefore performance is not strictly comparable between maps. Our baseline is slightly better than Lazic et al. (2015), but degrades after adding YAGO aliases which increase ambiguity. The attention model provides a substantial gain over the baseline, and outperforms Chisholm and Hachey (2015) by 2.3% in absolute accuracy. The extremely low ambiguity (Tab. 1) of the HP alias mapping, coupled with guaranteed gold recall, makes the task too easy to be considered a realistic benchmark. Although we match Pershina et al. (2015) using KB+HP, for completeness, we provide the performance of our system with candidate entities restricted to those in HP (KB+HP*), but this is not equivalent to using only HP during training and inference. With KB+HP*, we outperform Pershina et al. (2015), and are competitive with recent unpublished work by Yamada et al. (2016), which uses entity and word embeddings. Including embeddings as features in our system may lead to further gains. TAC KBP: Table 4 shows our results for the TAC KBP 2010, 2011, and 2012 evaluation datasets, where we used the KB+YAGO entityalias map for all our experiments. To compute NIL clusters required for B3 + F1, we simply rely on the fact that our KB is larger than the TAC reference KB, similarly to previous work. We assign a unique NIL label to all mentions of an entity that is in our KB but not in TAC. For mentions that cannot be linked to our KB, we simply use the mention string as the NIL identifier. Once again, our attention models improve the performance over the baseline system in nearly all experiments, with multi-focus attention outperforming single-link. Compared to prior work, we achieve competitive performance on TAC 2010 and the best results to date on TAC 2011 and TAC 2012. Table 5 shows two examples from the TAC 2011 dataset in which our multi-focus attention model improves over the baseline, along with the focus mentions in the document. 6.4 Effect of K and β on attention We set the size of the multi-focus attention beam K based on accuracy on CoNLL test-a (for CoNLL) and training accuracy (for TAC). Fig. 3 shows the effect of K on the performance on CoNLL test-a dataset. Performance peaks for K = 6, with a sharp decrease after K = 10. This validates our central premise: all-pairs label coupling may hurt accuracy. In Sec. 3.1.3 we proposed an extension of softmax smoothing to the K attention case. 
In our 627 System In-KB Overall B3+F1 acc.(%) acc.(%) Chisholm (2015) 80.7 Ling (2015) 88.8 Yamada (2016) 85.2 Our baseline 84.5 87.6 83.0 Single link 84.3 87.5 82.8 Attention 87.2 88.7 84.4 Cucerzan (2011) 86.8 84.1 Lazic (2015) 79.3 86.5 84.0 Ling (2015) 81.6 Our baseline 81.5 86.8 84.3 Single link 82.8 87.3 84.9 Attention 84.3 88.0 85.6 Cucerzan (2012) R1 72.0 76.2 72.1 Cucerzan (2012) R3 71.2 76.6 73.0 Lazic (2015) 74.2 76.6 71.2 Ling (2015) 66.7 Our baseline 78.8 80.3 76.9 Single link 79.7 80.7 77.3 Attention 82.4 81.9 78.9 Table 4: Results on the TAC 2010 (top), TAC 2011 (middle), and TAC 2012 bottom evaluation datasets. 1 2 3 4 5 6 7 8 10 15 88 89 90 91 K Accuracy (%) Figure 3: Effect of parameter K on entity linking accuracy. Trained on CoNLL train and tested on CoNLL test-a. experiments we cross-validated over a wide range of β values, including β = ∞which corresponds to taking the exact sum of K largest values. We found that the optimal value in most cases was large: β = 10, 100, or even ∞. This suggests that a hard attention model, where exactly K mentions are picked is adequate in the current settings. 7 Conclusion We have described an attention-based approach to collective entity resolution, motivated by the observation that a non-salient entity in a long document may only have relations to a small subset of other entities. We explored two approaches to attention: a multi-focus attention model with tractable inference decomposed over mentions, and a single-focus model with global inference implemented using belief propagation. Our empirical results show that the methods results in significant performance gains across several benchmarks. Experiments in varying the size of the attention beam K in the star-shaped model suggest that multi-focus attention is beneficial. It is of course possible to extend the global single-link model to the multi-focus case, by modifying the model factors and resulting messages. However, the simplicity of the star-shaped model, its empirical effectiveness, and ease of learning parameters make it an attractive approach for easily incorporating attention into existing resolution models. The model can also readily be applied to other structured prediction problems in language processing, such as selecting antecedents in coreference resolution. Deep learning has recently been used in mutliple NLP applications, including parsing (Chen and Manning, 2014) and translation (Bahdanau et al., 2014). Learning the local and pairwise scores in our model using a deep architecture rather than a linear model would likely lead to performance improvements. The star-shaped model is particularly amenable to this architecture, as it can be implemented via a feed-forward sequence of operations (including sorting, which can be implemented with soft-max gates). Finally, one may consider a more elaborate model in which attention depends on the current state of the system; for example, the state can summarize the mention context. The dynamics of the underlying state can be modeled by recurrent neural networks or LSTMs (Bahdanau et al., 2014). In conclusion, we have shown that attention is an effective mechanism for improving entity resolution models, and that it can be implemented via a simple inference mechanism, where model parameters can be easily learned. 8 Proof of Proposition 3.1 Begin with the optimization problem in Eq. (6). Introduce the following Lagrange multipliers: λ for the P i ui = K constraint, and αi ≥0 for the ui ≤1 constraint. 
We can ignore the ui ≥0 constraint, as it will turn out to be satisfied. Denote 628 Sentence with mention Entity Attn. focus mentions Caroline has dropped her name base: Caroline (given name) Democratic Party from consideration for the seat attn: Caroline Kennedy New York that Hillary has left vacant. Robert Kennedy Chris Johnson had just 13 tackles last base: Chris Johnson (running back) Oakland Raiders season, and the Raiders currently have attn: Chris Johnson (cornerback) Oakland Raiders have 11 defensive backs on their roster. Oakland Raiders Table 5: Examples of gains by our algorithm, showing the resolved mention, the entities it resolves to in the baseline and the attention models, and the mentions in the document that are attended to (here K = 3). In the first example, the baseline labels the mention “Caroline” as the given name, whereas the attention model attends to mentions that identify it as the diplomat Caroline Kennedy. In the second example, both models resolve “Chris Johnson” to football players, but the attention model finds the correct one by attending to three mentions of his former team, the Oakland Raiders. the corresponding Lagrangian by L(u, λ, α). We will show the result by using the dual g(λ, α) = maxu L(u, λ, α) and the fact that the solution of Eq. (6) is minλ,α g(λ, α). Maximizing L with respect to ui yields: ui = eβzi−1+βλ−βαi (13) From this we can obtain the convex dual g(λ, α), and after minimizing over λ we arrive at: g(α) = Kβ−1 log P i eβzi−βαi K + X i αi (14) Next, we maximize the above with respect to α ≥ 0. Introduce Lagrange multipliers γi for the constraint αi ≥0 and the corresponding Lagrangian ¯L(α, γ). We propose a solution for α, γ and show that it satisfies the KKT conditions. Minimizing ¯L wrt α we can characterize the optimal γ as: γi = −K eβzi−βαi P i eβzi−βαi + 1 (15) Set αi as follows: αi = ( zi −1 β log Pn i=R+1 eβzi K−R 1 ≤i ≤R 0 R < i ≤n (16) It can now be confirmed that the α, γ from Equations 16 and 15 satisfy the KKT conditions. Plugging the α value into g(α) yields the solution in the proposition. Differentiability follows from Nesterov (2005) and the gradient is ui in Eq. (13). References [Alhelbawy and Gaizauskas2014] Ayman Alhelbawy and Robert Gaizauskas. 2014. Graph ranking for collective named entity disambiguation. In Proc. 52nd Annual Meeting of the Association for Computational Linguistics, ACL 14, pages 75–80. [Bahdanau et al.2014] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. [Bollacker et al.2008] Kurt D. Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proc. of the 2008 ACM SIGMOD International Conference on Management of Data, pages 1247–1250. ACM. [Bunescu and Pasca2006] Razvan C. Bunescu and Marius Pasca. 2006. Using encyclopedic knowledge for named entity disambiguation. In Proc. 11th Conference of the European Chapter of the Association for Computational Linguistics, EACL 06. [Chen and Manning2014] Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In EMNLP, pages 740–750. [Cheng and Roth2013] Xiao Cheng and Dan Roth. 2013. Relational inference for wikification. In EMNLP Conference, pages 1787–1796. [Chisholm and Hachey2015] Andrew Chisholm and Ben Hachey. 2015. Entity disambiguation with web links. 
Transactions of the Association for Computational Linguistics, 3:145–156. [Cucerzan2007] Silviu Cucerzan. 2007. Large-scale named entity disambiguation based on Wikipedia data. In Proc. of EMNLP-CoNLL 2007, pages 708– 716. [Cucerzan2012] Silviu Cucerzan. 2012. The MSR system for entity linking at TAC 2012. In In Proc. of the Text Analysis Conference, TAC 12. [Dredze et al.2010] Mark Dredze, Paul McNamee, Delip Rao, Adam Gerber, and Tim Finin. 2010. 629 Entity disambiguation for knowledge base population. In Proc. of the 23rd International Conference on Computational Linguistics, COLING 10, pages 277–285. [Duchi et al.2011] John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159. [Ferragina and Scaiella2010] Paolo Ferragina and Ugo Scaiella. 2010. TAGME: on-the-fly annotation of short text fragments (by Wikipedia entities). In Proc. of the 19th ACM International Conference on Information Knowledge and Management, CIKM 10, pages 1625–1628. ACM. [Ganea et al.2015] Octavian-Eugen Ganea, Marina Horlescu, Aurelien Lucchi, Carsten Eickhoff, and Thomas Hofmann. 2015. Probabilistic bag-ofhyperlinks model for entity linking. arXiv preprint arXiv:1509.02301. [Gimpel and Smith2010] Kevin Gimpel and Noah A Smith. 2010. Softmax-margin crfs: Training loglinear models with cost functions. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 733–736. Association for Computational Linguistics. [Hachey et al.2013] Ben Hachey, Will Radford, Joel Nothman, Matthew Honnibal, and James R. Curran. 2013. Evaluating entity linking with Wikipedia. Artificial Intelligence, 194(0):130 – 150. [Han and Sun2011] Xianpei Han and Le Sun. 2011. A generative entity-mention model for linking entities with knowledge base. In Proc. of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, volume 1 of ACLHLT 11. ACL. [Han and Sun2012] Xianpei Han and Le Sun. 2012. An entity-topic model for entity linking. In EMNLP-CoNLL, pages 105–115. [Hasibi et al.2016] Faegheh Hasibi, Krisztian Balog, and Svein Erik Bratsberg. 2016. On the reproducibility of the TagMe entity linking system. In Advances in Information Retrieval, pages 436–449. Springer. [He et al.2013] Zhengyan He, Shujie Liu, Mu Li, Ming Zhou, Longkai Zhang, and Houfeng Wang. 2013. Learning entity representation for entity disambiguation. In Proc. of the 51st Annual Meeting of the Association for Computational Linguistics, ACL 13, pages 30–34. [Hoffart et al.2011] Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen F¨urstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In Proc. of the 2011 Conference on Empirical Methods in Natural Language Processing, EMNLP11. ACL. [Hoffart et al.2013] Johannes Hoffart, Fabian M Suchanek, Klaus Berberich, and Gerhard Weikum. 2013. Yago2: A spatially and temporally enhanced knowledge base from wikipedia. Artificial Intelligence, 194:28–61. [Houlsby and Ciaramita2014] Neil Houlsby and Massimiliano Ciaramita. 2014. A scalable Gibbs sampler for probabilistic entity linking. In Advances in Information Retrieval, pages 335–346. Springer. [Huang et al.2014] Hongzhao Huang, Yunbo Cao, Xiaojiang Huang, Heng Ji, and Chin-Yew Lin. 2014. 
Collective tweet wikification based on semisupervised graph regularization. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 380–390, Baltimore, Maryland, June. Association for Computational Linguistics. [Jeh and Widom2003] Glen Jeh and Jennifer Widom. 2003. Scaling personalized web search. In Proceedings of the 12th international conference on World Wide Web, pages 271–279. ACM. [Ji et al.2010] Heng Ji, Ralph Grishman, Hoa Trang Dang, Kira Griffitt, and Joe Ellis. 2010. Overview of the TAC 2010 knowledge base population track. In Proc. of the 3rd Text Analysis Conference, TAC 10. [Ji et al.2011] Heng Ji, Ralph Grishman, and Hoa Trang Dang. 2011. Overview of the TAC 2011 knowledge base population track. In Proc. of the 4th Text Analysis Conference, TAC 11. [Ji2016] Heng Ji. 2016. Entity discovery and linking and Wikification reading list. Online. http://nlp.cs.rpi.edu/kbp/2014/ elreading.html. [Jin et al.2014] Yuzhe Jin, Emre Kiciman, Kuansan Wang, and Ricky Loynd. 2014. Entity linking at the tail: sparse signals, unknown entities, and phrase models. In Proc. of the 7th ACM International Conference on Web Search and Data Mining, WSDM ’14, pages 453–462, New York, NY, USA. ACM. [Kataria et al.2011] Saurabh S. Kataria, Krishnan S. Kumar, Rajeev R. Rastogi, Prithviraj Sen, and Srinivasan H. Sengamedu. 2011. Entity disambiguation with hierarchical topic models. In Proc. of the 17th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 1037–1045. ACM. [Kschischang et al.2001] Frank R Kschischang, Brendan J Frey, and Hans-Andrea Loeliger. 2001. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47(2):498–519. [Kulkarni et al.2009] Sayali Kulkarni, Amit Singh, Ganesh Ramakrishnan, and Soumen Chakrabarti. 2009. Collective annotation of Wikipedia entities in web text. In Proc. of the 15th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 457–466. ACM. 630 [Lazic et al.2015] Nevena Lazic, Amarnag Subramanya, Michael Ringgaard, and Fernando Pereira. 2015. Plato: A selective context model for entity resolution. Transactions of the Association for Computational Linguistics, 3:503–515. [Ling et al.2015] Xiao Ling, Sameer Singh, and Daniel S Weld. 2015. Design challenges for entity linking. Transactions of the Association for Computational Linguistics, 3:315–328. [Mayfield et al.2012] James Mayfield, Javier Artiles, and Hoa Trang Dang. 2012. Overview of the TAC 2012 knowledge base population track. In Proc. of the 5th Text Analysis Conference, TAC 12. [Milne and Witten2008] David N. Milne and Ian H. Witten. 2008. Learning to link with Wikipedia. In Proc. of the 17th ACM Conference on Information and Knowledge Management, CIKM 07, pages 509–518. [Nesterov2005] Yu Nesterov. 2005. Smooth minimization of non-smooth functions. Mathematical programming, 103(1):127–152. [Pershina et al.2015] Maria Pershina, Yifan He, and Ralph Grishman. 2015. Personalized Page Rank for named entity disambiguation. In Proc. 2015 Annual Conference of the North American Chapter of the ACL, NAACL HLT 14, pages 238–243. [Ratinov et al.2011] Lev-Arie Ratinov, Dan Roth, Doug Downey, and Mike Anderson. 2011. Local and global algorithms for disambiguation to Wikipedia. In Proc. of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, ACLHLT 11, pages 1375– 1384. ACL. 
[Singh et al.2012] Sameer Singh, Amarnag Subramanya, Fernando Pereira, and Andrew McCallum. 2012. Wikilinks: A large-scale cross-document coreference corpus labeled via links to Wikipedia. Technical Report UM-CS-2012-015, University of Massachusetts, Amherst. [Sontag et al.2011] D. Sontag, O. Meshi, T. Jaakkola, and A. Globerson. 2011. More data means less inference: A pseudo-max approach to structured learning. In R. Zemel and J. Shawe-Taylor, editors, Advances in Neural Information Processing Systems 23, pages 2181–2189. MIT Press, Cambridge, MA. [Wainwright and Jordan2008] Martin J Wainwright and Michael I Jordan. 2008. Graphical models, exponential families, and variational inference. Foundations and Trends R⃝in Machine Learning, 1(1-2):1– 305. [Weiss et al.2015] David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 323– 333, Beijing, China, July. Association for Computational Linguistics. [Xu et al.2015] Kelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044. [Yamada et al.2016] Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. 2016. Joint learning of the embedding of words and entities for named entity disambiguation. CoRR, abs/1601.01343. 631
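Returning to the derivation above: the closed form in Eqs. (13)-(16) has a simple reading, namely that the R largest scores saturate at an attention weight of 1 while the remaining K - R units of mass are spread over the tail scores by a softmax at inverse temperature beta, so that as beta grows the weights approach a hard top-K selection. The following is a minimal numerical sketch of that reading (our reconstruction for illustration, not the authors' implementation; the function name and the feasibility test used to pick R are ours).

import numpy as np

def topk_attention_weights(z, K, beta):
    # Sketch of the gradient u in Eq. (13) under the duals of Eqs. (15)-(16):
    # saturated scores receive weight 1 and the tail shares the remaining
    # K - R units of mass through a softmax over beta * z.
    z = np.asarray(z, dtype=float)
    order = np.argsort(-z)                 # scores sorted in descending order
    zs = z[order]
    u_sorted = np.ones_like(zs)
    for R in range(K):                     # R = number of saturated scores
        tail = zs[R:]
        soft = np.exp(beta * (tail - tail.max()))
        soft = (K - R) * soft / soft.sum()
        if soft[0] <= 1.0 + 1e-12:         # KKT feasibility: no weight above 1
            u_sorted[R:] = soft
            break
    u = np.empty_like(u_sorted)
    u[order] = u_sorted                    # undo the sort
    return u                               # entries lie in [0, 1] and sum to K

# topk_attention_weights([2.0, 1.0, 0.5, -1.0], K=2, beta=20.0)
#   -> approximately [1.0, 1.0, 0.0, 0.0], i.e. a (smoothed) top-2 selection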
2016
59
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 54–65, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics A Multi-media Approach to Cross-lingual Entity Knowledge Transfer Di Lu1, Xiaoman Pan1, Nima Pourdamghani2, Shih-Fu Chang3, Heng Ji1, Kevin Knight2 1 Computer Science Department, Rensselaer Polytechnic Institute {lud2,panx2,jih}@rpi.edu 2 Information Sciences Institute, University of Southern California {damghani,knight}@isi.edu 3 Electrical Engineering Department, Columbia University [email protected] Abstract When a large-scale incident or disaster occurs, there is often a great demand for rapidly developing a system to extract detailed and new information from lowresource languages (LLs). We propose a novel approach to discover comparable documents in high-resource languages (HLs), and project Entity Discovery and Linking results from HLs documents back to LLs. We leverage a wide variety of language-independent forms from multiple data modalities, including image processing (image-to-image retrieval, visual similarity and face recognition) and sound matching. We also propose novel methods to learn entity priors from a large-scale HL corpus and knowledge base. Using Hausa and Chinese as the LLs and English as the HL, experiments show that our approach achieves 36.1% higher Hausa name tagging F-score over a costly supervised model, and 9.4% higher Chineseto-English Entity Linking accuracy over state-of-the-art. 1 Introduction In many situations such as disease outbreaks and natural calamities, we often need to develop an Information Extraction (IE) component (e.g., a name tagger) within a very limited time to extract information from low-resource languages (LLs) (e.g., locations where Ebola outbreaks from Hausa documents). The main challenge lies in the lack of labeled data and linguistic processing tools in these languages. A potential solution is to extract and project knowledge from high-resource languages (HLs) to LLs. A large amount of non-parallel, domain-rich, topically-related comparable corpora naturally exist across LLs and HLs for breaking incidents, such as coordinated news streams (Wang et al., 2007) and code-switching social media (Voss et al., 2014; Barman et al., 2014). However, without effective Machine Translation techniques, even just identifying such data in HLs is not a trivial task. Fortunately many of such comparable documents are presented in multiple data modalities (text, image and video), because press releases with multimedia elements generate up to 77% more views than text-only releases (Newswire, 2011). In fact, they often contain the same or similar images and videos, which are languageindependent. In this paper we propose to use images as a hub to automatically discover comparable corpora. Then we will apply Entity Discovery and Linking (EDL) techniques in HLs to extract entity knowledge, and project results back to LLs by leveraging multi-source multi-media techniques. In the following we will elaborate motivations and detailed methods for two most important EDL components: name tagging and Cross-lingual Entity Linking (CLEL). For CLEL we choose Chinese as the LL and English as HL because Chineseto-English is one of the few language pairs for which we have ground-truth annotations from official shared tasks (e.g., TAC-KBP (Ji et al., 2015)). 
Since Chinese name tagging is a well-studied problem, we choose Hausa instead of Chinese as the LL for name tagging experiment, because we can use the ground truth from the DARPA LORELEI program1 for evaluation. Entity and Prior Transfer for Name Tagging: In the first case study, we attempt to use HL extraction results directly to validate and correct names 1http://www.darpa.mil/program/low-resource-languagesfor-emergent-incidents 54 Figure 1: Image Anchored Comparable Corpora Retrieval. extracted from LLs. For example, in the Hausa document in Figure 1, it would be challenging to identify the location name “Najeriya” directly from the Hausa document because it’s different from its English counterpart. But since its translation “Nigeria” appears in the topically-related English document, we can use it to infer and validate its name boundary. Even if topically-related documents don’t exist in an HL, similar scenarios (e.g., disease outbreaks) and similar activities of the same entity (e.g., meetings among politicians) often repeat over time. Moreover, by running a highperforming HL name tagger on a large amount of documents, we can obtain entity prior knowledge which shows the probability of a related name appearing in the same context. For example, if we already know that “Nigeria”, “Borno”, “Goodluck Jonathan”, “Boko Haram” are likely to appear, then we could also expect “Mouhammed Ali Ndume” and “Mohammed Adoke” might be mentioned because they were both important politicians appointed by Goodluck Jonathan to consider opening talks with Boko Haram. Or more generally if we know the LL document is about politics in China in 1990s, we could estimate that famous politicians during that time such as “Deng Xiaoping” are likely to appear in the document. Next we will project these names extracted from HL documents directly to LL documents to identify and verify names. In addition to textual evidence, we check visual similarity to match an HL name with its equivalent in LL. And we apply face recognition techniques to verify person names by image search. This idea matches human knowledge acquisition procedure as well. For example, when a child is watching a cartoon and shifting between versions in two languages, s/he can easily infer translation pairs for the same conFigure 2: Examples of Cartoons in Chinese (left) and English (right). cept whose images appear frequently (e.g., “宝宝 (baby)” and “螃蟹(crab)” in “Dora Exploration”, “海盗(pirate)” in “the Garden Guardians”, and “亨利(Henry)” in “Thomas Train”), as illustrated in Figure 2. Representation and Structured Knowledge Transfer for Entity Linking: Besides data sparsity, another challenge for low-resource language IE lies in the lack of knowledge resources. For example, there are advanced knowledge representation parsing tools available (e.g., Abstract Meaning Representation (AMR) (Banarescu et al., 2013)) and large-scale knowledge bases for English Entity Linking, but not for other languages, including some medium-resource ones such as Chinese. 
For example, the following documents are both about the event of Pistorius killing his girl friend Reeva: • LL document: 南非残疾运动员皮斯托瑞 斯被指控杀害女友瑞娃于其茨瓦内的家 中。皮斯托瑞斯是南非著名的残疾人田 径选手,有“刀锋战士”之称。...(The disabled South African sportsman Oscar Pis55 Pistorius Oscar Pistorius coreference South Africa modify Blade Runner modify Reeva model modify die-01 kill-01 en/South_Africa en/Pretoria langlink zh/ྲۦᴩڥԵ ᝮ኎ٖ (Tshwane) en/Reeva_Steenkamp zh/ታষ·ේୋࣔฦ ታষ (Reeva) redirect ጼේಓታේ ܖᶋ ታষ ᝮ኎ٖ (South Africa) (Tshwane) langlink renamed link Retrieve relevant entity set en/Johannesburg en/Cape_Town en/Bivane_River en/Jacob_Zuma …… capital Knowledge Base Walker LL Documents HL Documents LL entity mentions AMR based Knowledge Graph Image anchored comparable document retrieval Entity Linking (Reeva) (Pistorius) Figure 3: Cross-lingual Knowledge Transfer for Entity Linking. torius was charged to killing his girl friend Reeva at his home in Tshwane. Pistorius is a famous runner in South Africa, also named as “Blade Runner”...) • HL document: In the early morning of Thursday, 14 February 2013, “Blade Runner” Oscar Pistorius shot and killed South African model Reeva Steenkamp... From the LL documents we may only be able to construct co-occurrence based knowledge graph and thus it’s difficult to link rare entity mentions such as “瑞娃(Reeva)” and “茨瓦内(Tshwane)” to an English knowledge base (KB). But if we apply an HL (e.g., English) entity linker, we could construct much richer knowledge graphs from HL documents using deep knowledge representations such as AMR, as shown in Figure 3, and link all entity mentions to the KB accurately. Moreover, if we start to walk through the KB, we can easily reach from English related entities to the entities mentioned in LL documents. For example, we can walk from “South Africa” to its capital “Pretoria” in the KB, which is linked to its LL form “比勒陀利亚” through a language link and then is re-directed to “茨瓦内” mentioned in the LL document through a redirect link. Therefore we can infer that “茨瓦内” should be linked to “Pretoria” in the KB. Compared to most previous cross-lingual projection methods, our approach does not require domain-specific parallel corpora or lexicons, or in fact, any parallel data at all. It also doesn’t require any labeled data in LLs. Using Hausa and Chinese as the LLs and English as HL for case study, experiments demonstrate that our approach can achieve 36.1% higher Hausa name tagging over a costly supervised model trained from 337 documents, and 9.4% higher Chinese-to-English Entity Linking accuracy over a state-of-the-art system. 2 Approach Overview Figure 4 illustrates the overall framework. It consists of two steps: (1) Apply languageindependent key phrase extraction methods on each LL document, then use key phrases as a query to retrieve seed images, and then use the seed images to retrieve matching images, and retrieve HL documents containing these images (Section 3); (2) Extract knowledge from HL documents, and design knowledge transfer methods to refine LL extraction results. Figure 4: Overall Framework. We will present two case studies on name tagging (Section 5) and cross-lingual entity linking (CLEL) (Section 6) respectively. Our projection approach consists of a series of non-traditional multi-media multi-source methods based on textual and visual similarity, face recognition, as well as entity priors learned from both unstructured data and structured KB. 
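The knowledge-base walk motivated around Figure 3 can be made concrete with a toy graph: starting from entities already linked in the HL documents, follow neighbour edges one step, then language links and redirect/renamed links, and collect the resulting LL surface forms paired with their English KB entries. The snippet below is only an illustrative sketch of that walk; the three dictionaries are hypothetical stand-ins for the real KB and encode nothing beyond the Pretoria example from the text.

# Toy stand-ins for the KB relations used in the walk (hypothetical data).
neighbors = {"South Africa": ["Pretoria", "Johannesburg", "Jacob Zuma"]}
langlink = {"Pretoria": "比勒陀利亚"}          # English entry -> Chinese title
redirects = {"比勒陀利亚": ["茨瓦内"]}          # Chinese title -> redirected/renamed titles

def collect_ll_aliases(linked_entities):
    pairs = []                                 # (LL surface form, English KB entity)
    for e0 in linked_entities:                 # entities linked in the HL documents
        for e1 in [e0] + neighbors.get(e0, []):   # one-step walk through the KB
            zh = langlink.get(e1)
            if zh is None:
                continue
            for alias in [zh] + redirects.get(zh, []):
                pairs.append((alias, e1))
    return pairs

# collect_ll_aliases(["South Africa"])
#   -> [("比勒陀利亚", "Pretoria"), ("茨瓦内", "Pretoria")]
# so the LL mention "茨瓦内" can be overridden to link to Pretoria, as in Section 6.2.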
56 3 Comparable Corpora Discovery In this section we will describe the detailed steps of acquiring HL documents for a given LL document via anchoring images. Using a cluster of images as a hub, we attempt to connect topicallyrelated documents in LL and HL. We will walk through each step for the motivating example in Figure 1. 3.1 Key Phrase Extraction For an LL document (e.g., Figure 1 for the walk-through example), we start by extracting its key phrases using the following three languageindependent methods: (1) TextRank (Mihalcea and Tarau, 2004), which is a graph-based ranking model to determine key phrases. (2) Topic modeling based on Latent Dirichlet allocation (LDA) model (Blei et al., 2003), which can generate a small number of key phrases representing the main topics of each document. (3) The title of the document if it’s available. 3.2 Seed Image Retrieval Using the extracted key phrases together as one single query, we apply Google Image Search to retrieve top 15 ranked images as seeds. To reduce the noise introduced by image search, we filter out images smaller than 100×100 pixels because they are unlikely to appear in the main part of web pages. We also filter out an image if its web page contains less than half of the tokens in the query. Figure 1 shows the anchoring images retrieved for the walk-through example. 3.3 HL Document Retrieval Using each seed image, we apply Google imageto-image search to retrieve more matching images, and then use the TextCat tool (Cavnar et al., 1994) as a language identifier to select HL documents containing these images. It shows three English documents retrieved for the first image in Figure 1. For related topics, more images may be available in HLs than LLs. To compensate this data sparsity problem, using the HL documents retrieved as a seed set, we repeat the above steps one more time by extracting key phrases from the HL seed set to retrieve more images and gather more HL documents. For example, a Hausa document about “Arab Spring” includes protests that happened in Algeria, Bahrain, Iran, Libya, Yemen and Jordan. The HL documents retrieved by LL key phrases and images in the first step missed the detailed information about protests in Iran. However the second step based on key phrases and images from HL successfully retrieved detailed related documents about protests in Iran. Applying the above multimedia search, we automatically discover domain-rich non-parallel data. Next we will extract facts from HLs and project them to LLs. 4 HL Entity Discovery and Linking 4.1 Name Tagging After we acquire HL (English in this paper) comparable documents, we apply a state-of-the-art English name tagger (Li et al., 2014) based on structured perceptron to extract names. From the output we filter out uninformative names such as news agencies. If the same name receives multiple types across documents, we use the majority one. 4.2 Entity Linking We apply a state-of-the-art Abstract Meaning Representation (AMR) parser (Wang et al., 2015a) to generate rich semantic representations. Then we apply an AMR based entity linker (Pan et al., 2015) to link all English entity mentions to the corresponding entities in the English KB. Given a name nh, this entity linker first constructs a Knowledge Graph g(nh) with nh at the hub and leaf nodes obtained from names reachable by AMR graph traversal from nh. A subset of the leaf nodes are selected as collaborators of nh. Names connected by AMR conjunction relations are grouped into sets of coherent names. 
For each name nh, an initial ranked list of entity candidates E = {e1, ..., eM} is generated based on a salience measure (Medelyan and Legg, 2008). Then a Knowledge Graph g(em) is generated for each entity candidate em in nh’s entity candidate list E. The entity candidates are then re-ranked according to Jaccard Similarity, which computes the similarity between g(nh) and g(em): J(g(nh), g(em)) = |g(nh)∩g(em)| |g(nh)∪g(em)|. Finally, the entity candidate with the highest score is selected as the appropriate entity for nh. Moreover, the Knowledge Graphs of coherent mentions will be merged and linked collectively. 4.3 Entity Prior Acquisition Given the English entities discovered from the above, we aim to automatically mine related en57 tities to further expand the expected entity set. We use a large English corpus and English knowledge base respectively as follows. If a name nh appears frequently in these retrieved English documents,2 we further mine other related names n′ h which are very likely to appear in the same context as nh in a large-scale news corpus (we use English Gigaword V5.0 corpus3 in our experiment). For each pair of names ⟨n′ h, nh⟩, we compute P(n′ h|nh) based on their co-occurrences in the same sentences. If P(n′ h|nh) is larger than a threshold,4 and n′ h is a person name, then we add n′ h into the expected English name set. Let E0 = {e1, ..., eN} be the set of entities in the KB that all mentions in English documents are linked to. For each ei ∈E0, we ‘walk’ one step from it in the KB to retrieve all of its neighbors N(ei). We denote the set of neighbor nodes as E1 = {N(e1), ..., N(eN)}. Then we extend the expected English entity set as E0 ∪E1. Table 1 shows some retrieved neighbors for entity “Elon Musk”. Relation Neighbor is founder of SpaceX is founder of Tesla Motors is spouse of Justine Musk birth place Pretoria alma mater University of Pennsylvania parents Errol Musk relatives Kimbal Musk Table 1: Neighbors of Entity “Elon Musk”. 5 Knowledge Transfer for Name Tagging In this section we will present the first case study on name tagging, using English as HL and Hausa as LL. 5.1 Name Projection After expanding the English expected name set using entity prior, next we will try to carefully select, match and project each expected name (nh) from English to the one (nl) in Hausa documents. We scan through every n-gram (n in the order 3, 2, 1) in Hausa documents to see if any of them match an English name based on the following multi-media language-independent low-cost heuristics. 2for our experiment we choose those that appear more than 10 times 3https://catalog.ldc.upenn.edu/LDC2011T07 40.02 in our experiment. Spelling: If nh and nl are identical (e.g., “Brazil”), or with an edit distance of one after lower-casing and removing punctuation (e.g., nh = “Mogadishu” and nl = “Mugadishu”), or substring match (nh = “Denis Samsonov” and nl = “Samsonov”). Pronunciation: We check the pronunciations of nh and nl based on Soundex (Odell, 1956), Metaphone (Philips, 1990) and NYSIIS (Taft, 1970) algorithms. We consider two codes match if they are exactly the same or one code is a part of the other. If at least two coding systems match between nh and nl, we consider they are equivalents. Visual Similarity: When two names refer to the same entity, they usually share certain visual patterns in their related images. 
For example, using the textual clues above is not sufficient to find the Hausa equivalent “Majalisar Dinkin Duniya” for “United Nations”, because their pronunciations are quite different. However, Figure 5 shows the images retrieved by “Majalisar Dinkin Duniya” and “United Nations” are very similar.5 We first retrieve top 50 images for each mention using Google image search. Let Ih and Il denote two sets of images retrieved by an nh and a candidate nl (e.g., nh = “United Nations” and nl = “Majalisar Dinkin Duniya” in Figure 5), ih ∈Ih and il ∈Il. We apply the Scale-invariant feature transform (SIFT) detector (Lowe, 1999) to count the number of matched key points between two images, K(ih, il), as well as the key points in each image, P(ih) and P(il). SIFT key point is a circular image region with an orientation, which can provide feature description of the object in the image. Key points are maxima/minima of the Difference of Gaussians after the image is convolved with Gaussian filters at different scales. They usually lie in high-contrast regions. Then we define the similarity (0 ∼1) between two phrases as: S(nh, nl) = max ih∈Ih max il∈Il K(ih, il) min(P(ih), P(il)) (1) Based on empirical results from a separate small development set, we decide two phrases match if S(nh, nl) > 10%. This visual similarity computation method, though seemingly simple, has been 5Although existing Machine Translation (MT) tools like Google Translate can correctly translate this example phrase from Hausa to English, here we use it as an example to illustrate the low-resource setting when direct MT is not available. 58 one of the principal techniques in detecting nearduplicate visual content (Ke et al., 2004). (a) Majalisar Dinkin Duniya (b) United Nations Figure 5: Matched SIFT Key points. 5.2 Person Name Verification through Face Recognition For each name candidate, we apply Google image search to retrieve top 10 images (examples in Figure 6). If more than 5 images contain and only contain 1-2 faces, we classify the name as a person. We apply face detection technique based on Haar Feature (Viola and Jones, 2001). This technique is a machine learning based approach where a cascade function is trained from a large amount of positive and negative images. In the future we will try other alternative methods using different feature sets such as Histograms of Oriented Gradients (Dalal and Triggs, 2005). Figure 6: Face Recognition for Validating Person Name ‘Nawaz Shariff’. 6 Knowledge Transfer for Entity Linking In this section we will present the second case study on Entity Linking, using English as HL and Chinese as LL. We choose this language pair because its ground-truth Entity Linking annotations are available through the TAC-KBP program (Ji et al., 2015). 6.1 Baseline LL Entity Linking We apply a state-of-the-art language-independent cross-lingual entity linking approach (Wang et al., 2015b) to link names from Chinese to an English KB. For each name n, this entity linker uses the cross-lingual surface form dictionary ⟨f, {e1, e2, ..., eM}⟩, where E = {e1, e2, ..., eM} is the set of entities with surface form f in the KB according to their properties (e.g., labels, names, aliases), to locate a list of candidate entities e ∈E and compute the importance score by an entropy based approach. 
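The visual similarity score of Eq. (1) in Section 5.1 above can be approximated with off-the-shelf SIFT features. The sketch below computes the per-image-pair term K(ih, il) / min(P(ih), P(il)) and takes the maximum over the two retrieved image sets; the grayscale preprocessing and Lowe's ratio test for counting matched key points are our own choices rather than details taken from the paper, and two names would count as a visual match when the score exceeds the 10% threshold mentioned above.

import cv2

def pairwise_sift_similarity(path_h, path_l):
    # Matched SIFT key points between two images, normalised by the smaller
    # key-point count, as in the inner term of Eq. (1).
    sift = cv2.SIFT_create()
    img_h = cv2.imread(path_h, cv2.IMREAD_GRAYSCALE)
    img_l = cv2.imread(path_l, cv2.IMREAD_GRAYSCALE)
    kp_h, des_h = sift.detectAndCompute(img_h, None)
    kp_l, des_l = sift.detectAndCompute(img_l, None)
    if des_h is None or des_l is None:
        return 0.0
    matches = cv2.BFMatcher().knnMatch(des_h, des_l, k=2)
    good = sum(1 for pair in matches
               if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance)
    return good / min(len(kp_h), len(kp_l))

def visual_similarity(images_h, images_l):
    # S(nh, nl) of Eq. (1): the best-matching pair over the two image sets.
    return max(pairwise_sift_similarity(h, l)
               for h in images_h for l in images_l)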
6.2 Representation and Structured Knowledge Transfer Then for each expected English entity eh, if there is a cross-lingual link to link it to an LL (Chinese) entry el in the KB, we added the title of the LL entry or its redirected/renamed page cl as its LL translation. In this way we are able to collect a set of pairs of ⟨cl, eh⟩, where cl is an expected LL name, and eh is its corresponding English entity in the KB. For example, in Figure 3, we can collect pairs including “(瑞娃, Reeva Steenkamp)”, “(瑞娃·斯廷坎普, Reeva Steenkamp)”, “(茨瓦内, Pretoria)” and “(比勒陀利亚, Pretoria)”. For each mention in an LL document, we then check whether it matches any cl, if so then use eh to override the baseline LL Entity Linking result. Table 2 shows some ⟨cl, eh⟩pairs with frequency. Our approach not only successfully retrieves translation variants of “Beijing” and “China Central TV”, but also alias and abbreviations. eh el cl Freq. Beijing 北京市 北京(Beijing) 553 北京市(Beijing City) 227 燕京(Yanjing) 15 京师(Jingshi) 3 北平(Beiping) 2 首都(Capital) 1 蓟(Ji) 1 燕都(Yan capital) 1 China Central TV 中国 中央 电视 台 央视(Central TV) 19 CCTV 16 中央电视台(Central TV) 13 中国央视(China Central TV) 3 Table 2: Representation and Structured Knowledge Transfer for Expected English Entities “Beijing” and “China Central TV”. 7 Experiments In this section we will evaluate our approach on name tagging and Cross-lingual Entity Linking. 7.1 Data For name tagging, we randomly select 30 Hausa documents from the DARPA LORELEI program as our test set. It includes 63 person names (PER), 64 organizations (ORG) 225 geo-political entities 59 (GPE) and locations (LOC). For this test set, in total we retrieved 810 topically-related English documents. We found that 80% names in the ground truth appear at least once in the retrieved English documents, which shows the effectiveness of our image-anchored comparable data discovery method. For comparison, we trained a supervised Hausa name tagger based on Conditional Random Fields (CRFs) from the remaining 337 labeled documents, using lexical features (character ngrams, adjacent tokens, capitalization, punctuations, numbers and frequency in the training data). We learn entity priors by running the Stanford name tagger (Manning et al., 2014) on English Gigaword V5.0 corpus.6 The corpus includes 4.16 billion tokens and 272 million names (8.28 million of which are unique). For Cross-lingual Entity Linking, we use 30 Chinese documents from the TAC-KBP2015 Chinese-to-English Entity Linking track (Ji et al., 2015) as our test set. It includes 678 persons, 930 geo-political names, 437 organizations and 88 locations. The English KB is derived from BaseKB, a cleaned version of English Freebase. 89.7% of these mentions can be linked to the KB. Using the multi-media approach, we retrieved 235 topicallyrelated English documents. 7.2 Name Tagging Performance Table 3 shows name tagging performance. We can see that our approach dramatically outperforms the supervised model. We conduct the Wilcoxon Matched-Pairs Signed-Ranks Test on ten folders. The results show that the improvement using visual evidence is significant at a 95% confidence level and the improvement using entity prior is significant at a 99% confidence level. Visual Evidence greatly improves organization tagging because most of them cannot be matched by spelling or pronunciation. Face detection helps identify many person names missed by the supervised name tagger. 
For example, in the following sentence, “Nawaz Shariff” is mistakenly classified as a location by the supervised model due to the designator “kasar (country)” appearing in its left context. Since faces can be detected from all of the top 10 retrieved images (Figure 6), we fix its type to person. • Hausa document: “Yansanda sun dauki 6https://catalog.ldc.upenn.edu/LDC2011T07 wannan matakin ne kwana daya bayanda PM kasar Nawaz Shariff ya fidda sanarwar inda ya bukaci... (The Police took this step a day after the PM of the country Nawaz Shariff threw out the statement in which he demanded that...)” Face detection is also effective to resolve classification ambiguity. For example, the common person name “Haiyan” can also be used to refer to the Typhoon in Southeast Asia. Both of our HL and LL name taggers mistakenly label “Haiyan” as a person in the following documents: • Hausa document: “...a yayinda mahaukaciyar guguwar teku da aka lakawa suna Haiyan ta fada tsibiran Leyte da Samar. (...as the violent typhoon, which has been given the name, Haiyan, has swept through the island of Leyte and Samar.)” • Retrieved English comparable document: “As Haiyan heads west toward Vietnam, the Red Cross is at the forefront of an international effort to provide food, water, shelter and other relief...” In contrast using face detection results we successfully remove it based on processing the retrieved images as shown in Figure 7. Figure 7: Top 9 Retrieved Images for ‘Haiyan’. Entity priors successfully provide more detailed and richer background knowledge than the comparable English documents. For example, the main topic of one Hausa document is the former president of Nigeria Olusegun Obasanjo accusing the current President Goodluck Jonathan, and a comment by the former 1990s military administrator of Kano Bawa Abdullah Wase is quoted. But Bawa Abdullah Wase is not mentioned in any related English documents. However, based on entity priors we observe that “Bawa Abdullah Wase” appears frequently in the same contexts as “Nigeria” and “Kano”, and thus we successfully project it back to the Hausa sentence: “Haka ma Bawa Abdullahi Wase ya ce akawai abun dubawa a kalamun 60 System Identification F-score Classification Accuracy Overall F-score PER ORG LOC7 ALL Supervised 36.52 38.64 42.38 40.25 76.84 30.93 Our Approach 77.69 60.00 70.55 70.59 95.00 67.06 Our Approach w/o Visual Evidence 73.77 46.58 70.74 67.98 94.77 64.43 Our Approach w/o Entity Prior 64.91 60.00 70.55 67.59 94.71 64.02 Table 3: Name Tagging Performance (%). tsohon shugaban kasa kuma tsohon jamiin tsaro. (In the same vein, Bawa Abdullahi Wase said that there were things to take away from the former President’s words.)”. The impact of entity priors on person names is much more significant than other categories because multiple person entities often co-occur in some certain events or related topics which might not be fully covered in the retrieved English documents. In contrast most expected organizations and locations already exist in the retrieved English documents. For the same local topic, Hausa documents usually describe more details than English documents, and include more unsalient entities. For example, for the president election in Ivory Coast, a Hausa document mentions the officials of the electoral body such as “Damana Picasse”: “Wani wakilin hukumar zaben daga jamiyyar shugaba Gbagbo, Damana Picasse, ya kekketa takardun sakamakon a gaban yan jarida, ya kuma ce ba na halal ba ne. 
(An official of the electoral body from president Gbagbo’s party, Damana Picasse, tore up the result document in front of journalists, and said it is not legal.)”. In contrast, no English comparable documents mention their names. The entity prior method is able to extract many names which appear frequently together with the president name “Gbagbo”. 7.3 Entity Linking Performance Table 4 presents the Cross-lingual Entity Linking performance. We can see that our approach significantly outperforms our baseline and the best reported results on the same test set (Ji et al., 2015). Our approach is particularly effective for rare nicknames (e.g., “C罗” (C Luo) is used to refer to Cristiano Ronaldo) or ambiguous abbreviations (e.g., “邦联” (federal) can refer to Confederate States of America, 邦联制(Confederation) and many other entities) for which the contexts in LLs are not sufficient for making correct linking decisions due to the lack of rich knowledge representation. Our approach produces worse linking results than the baseline for a few cases when the same abbreviation is used to refer to multiple entities in the same document. For example, when “巴” is used to refer to both “巴西(Brazil)” or “巴 勒斯坦(Palestine)” in the same document, our approach mistakenly links all mentions to the same entity. Figure 8: An Example of Cross-lingual Crossmedia Knowledge Graph. 7.4 Cross-lingual Cross-media Knowledge Graph As an end product, our framework will construct cross-lingual cross-media knowledge graphs. An example about the Ebola scenario is presented in Figure 8, including entity nodes extracted from both Hausa (LL) and English (HL), anchored by images; and edges extracted from English. 8 Related Work Some previous cross-lingual projection methods focused on transferring data/annotation (e.g., (Pad´o and Lapata, 2009; Kim et al., 2010; Faruqui and Kumar, 2015)), shared feature representation/model (e.g., (McDonald et al., 2011; Kozhevnikov and Titov, 2013; Kozhevnikov and Titov, 2014)), or expectation (e.g., (Wang and Manning, 2014)). Most of them relied on a large 61 Overall Linkable Entities Approach PER ORG GPE LOC ALL PER ORG GPE LOC ALL Baseline 49.12 60.18 80.97 80.68 66.57 67.27 67.61 81.05 80.68 74.70 State-of-the-art 49.85 64.30 75.38 96.59 65.87 68.28 72.24 75.46 96.59 73.91 Our Approach 52.36 67.05 93.33 93.18 74.92 71.72 75.32 93.43 93.18 84.06 Our Approach w/o KB Walker 50.44 67.05 84.41 90.91 70.32 69.09 75.32 84.50 90.91 78.91 Table 4: Cross-lingual Entity Linking Accuracy (%). amount of parallel data to derive word alignment and translations, which are inadequate for many LLs. In contrast, we do not require any parallel data or bi-lingual lexicon. We introduce new cross-media techniques for projecting HLs to LLs, by inferring projections using domain-rich, nonparallel data automatically discovered by image search and processing. Similar image-mediated approaches have been applied to other tasks such as cross-lingual document retrieval (Funaki and Nakayama, 2015) and bilingual lexicon induction (Bergsma and Van Durme, 2011). Besides visual similarity, their method also relied on distributional similarity computed from a large amount of unlabeled data, which might not be available for some LLs. 
Our name projection and validation approaches are similar to other previous work on bi-lingual lexicon induction from non-parallel corpora (e.g., (Fung and Yee, 1998; Rapp, 1999; Shao and Ng, 2004; Munteanu and Marcu, 2005; Sproat et al., 2006; Klementiev and Roth, 2006; Hassan et al., 2007; Udupa et al., 2009; Ji, 2009; Darwish, 2010; Noeman and Madkour, 2010; Bergsma and Van Durme, 2011; Radford et al., ; Irvine and Callison-Burch, 2013; Irvine and Callison-Burch, 2015)) and name translation mining from multilingual resources such as Wikipedia (e.g. (Sorg and Cimiano, 2008; Adar et al., 2009; Nabende, 2010; Lin et al., 2011)). We introduce new multimedia evidence such as visual similarity and face recognition for name validation, and also exploit a large amount of monolingual HL data for mining entity priors to expand the expected entity set. For Cross-lingual Entity Linking, some recent work (Finin et al., 2015) also found cross-lingual coreference resolution can greatly reduce ambiguity. Some other methods also utilized global knowledge in the English KB to improve linking accuracy via quantifying link types (Wang et al., 2015b), computing pointwise mutual information for the Wikipedia categories of consecutive pairs of entities (Sil et al., 2015), or using linking as feedback to improve name classification (Sil and Yates, 2013; Heinzerling et al., 2015; Besancon et al., 2015; Sil et al., 2015). 9 Conclusions and Future Work We describe a novel multi-media approach to effectively transfer entity knowledge from highresource languages to low-resource languages. In the future we will apply visual pattern recognition and concept detection techniques to perform deep content analysis of the retrieved images, so we can do matching and inference on concept/entity level instead of shallow visual similarity. We will also extend anchor image retrieval from documentlevel into phrase-level or sentence-level to obtain richer background information. Furthermore, we will exploit edge labels while walking through a knowledge base to retrieve more relevant entities. Our long-term goal is to extend this framework to other knowledge extraction and population tasks such as event extraction and slot filling to construct multimedia knowledge bases effectively from multiple languages with low cost. Acknowledgments This work was supported by the U.S. DARPA LORELEI Program No. HR0011-15-C-0115, ARL/ARO MURI W911NF-10-1-0533, DARPA Multimedia Seedling grant, DARPA DEFT No. FA8750-13-2-0041 and FA8750-13-2-0045, and NSF CAREER No. IIS-1523198. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on. References Eytan Adar, Michael Skinner, and Daniel S Weld. 2009. Information arbitrage across multi-lingual 62 wikipedia. In Proceedings of the Second ACM International Conference on Web Search and Data Mining, pages 94–103. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of Association for Computational Linguistics 2013 Workshop on Linguistic Annotation and Interoperability with Discourse. Utsab Barman, Amitava Das, Joachim Wagner, and Jennifer Foster. 2014. 
Code mixing: A challenge for language identification in the language of social media. In Proceedings of Conference on Empirical Methods in Natural Language Processing Workshop on Computational Approaches to Code Switching. Shane Bergsma and Benjamin Van Durme. 2011. Learning bilingual lexicons using the visual similarity of labeled web images. In Proceedings of International Joint Conference on Artificial Intelligence. Romaric Besancon, Hani Daher, Herv´e Le Borgne, Olivier Ferret, Anne-Laure Daquo, and Adrian Popescu. 2015. Cea list participation at tac edl english diagnostic task. In Proceedings of the Text Analysis Conference. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. the Journal of machine Learning research, 3(993-1022). William B Cavnar, John M Trenkle, et al. 1994. Ngram-based text categorization. In Proceedings of SDAIR1994. Navneet Dalal and Bill Triggs. 2005. Histograms of oriented gradients for human detection. In Proceedings of the Conference on Computer Vision and Pattern Recognition. Kareem Darwish. 2010. Transliteration mining with phonetic conflation and iterative training. In Proceedings of the 2010 Named Entities Workshop. Manaal Faruqui and Shankar Kumar. 2015. Multilingual open relation extraction using cross-lingual projection. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics. Tim Finin, Dawn Lawrie, Paul McNamee, James Mayfield, Douglas Oard, Nanyun Peng, Ning Gao, YiuChang Lin, Josh MacLin, and Tim Dowd. 2015. HLTCOE participation in TAC KBP 2015: Cold start and TEDL. In Proceedings of the Text Analysis Conference. Ruka Funaki and Hideki Nakayama. 2015. Imagemediated learning for zero-shot cross-lingual document retrieval. In Proceedings of Conference on Empirical Methods in Natural Language Processing. Pascale Fung and Lo Yuen Yee. 1998. An ir approach for translating new words from nonparallel and comparable texts. In Proceedings of the 17th international conference on Computational linguisticsVolume 1. Ahmed Hassan, Haytham Fahmy, and Hany Hassan. 2007. Improving named entity translation by exploiting comparable and parallel corpora. AMML07. Benjamin Heinzerling, Alex Judea, and Michael Strube. 2015. Hits at tac kbp 2015: Entity discovery and linking, and event nugget detection. In Proceedings of the Text Analysis Conference. Ann Irvine and Chris Callison-Burch. 2013. Combining bilingual and comparable corpora for low resource machine translation. In Proc. WMT. Ann Irvine and Chris Callison-Burch. 2015. Discriminative Bilingual Lexicon Induction. Computational Linguistics, 1(1). Heng Ji, Joel Nothman, Ben Hachey, and Radu Florian. 2015. Overview of tac-kbp2015 tri-lingual entity discovery and linking. In Proceedings of the Text Analysis Conference. Heng Ji. 2009. Mining name translations from comparable corpora by creating bilingual information networks. In Proceedings of the 2nd Workshop on Building and Using Comparable Corpora: from Parallel to Non-parallel Corpora. Yan Ke, Rahul Sukthankar, Larry Huston, Yan Ke, and Rahul Sukthankar. 2004. Efficient near-duplicate detection and sub-image retrieval. In Proceedings of ACM International Conference on Multimedia. Seokhwan Kim, Minwoo Jeong, Jonghoon Lee, and Gary Geunbae Lee. 2010. A cross-lingual annotation projection approach for relation detection. In Proceedings of the 23rd International Conference on Computational Linguistics. Alexandre Klementiev and Dan Roth. 2006. 
Weakly supervised named entity transliteration and discovery from multilingual comparable corpora. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics. Mikhail Kozhevnikov and Ivan Titov. 2013. Crosslingual transfer of semantic role labeling models. In Proceedings of Association for Computational Linguistics. Mikhail Kozhevnikov and Ivan Titov. 2014. Crosslingual model transfer using feature representation projection. In Proceedings of Association for Computational Linguistics. Qi Li, Heng Ji, Yu Hong, and Sujian Li. 2014. Constructing information networks using one single model. In Proceedings of Conference on Empirical Methods in Natural Language Processing. 63 Wen-Pin Lin, Matthew Snover, and Heng Ji. 2011. Unsupervised language-independent name translation mining from wikipedia infoboxes. In Proceedings of the First workshop on Unsupervised Learning in NLP, pages 43–52. Association for Computational Linguistics. David G Lowe. 1999. Object recognition from local scale-invariant features. In Proceedings of the seventh IEEE international conference on Computer vision. Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In In Proceedings of Association for Computational Linguistics. Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In Proceedings of Conference on Empirical Methods in Natural Language Processing. Olena Medelyan and Catherine Legg. 2008. Integrating cyc and wikipedia: Folksonomy meets rigorously defined common-sense. In Wikipedia and Artificial Intelligence: An Evolving Synergy, Papers from the 2008 AAAI Workshop. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into texts. In Proceedings of Conference on Empirical Methods in Natural Language Processing. Dragos Stefan Munteanu and Daniel Marcu. 2005. Improving machine translation performance by exploiting non-parallel corpora. Computational Linguistics, 31(4)(477–504). Peter Nabende. 2010. Mining transliterations from wikipedia using pair hmms. In Proceedings of the 2010 Named Entities Workshop, pages 76–80. Association for Computational Linguistics. PR Newswire. 2011. Earned media evolved. In White Paper. Sara Noeman and Amgad Madkour. 2010. Language independent transliteration mining system using finite state automata framework. In Proceedings of the 2010 Named Entities Workshop, pages 57–61. Association for Computational Linguistics. Margaret King Odell. 1956. The Profit in Records Management. Systems (New York). Sebastian Pad´o and Mirella Lapata. 2009. Crosslingual annotation projection for semantic roles. Journal of Artificial Intelligence Research, 36:307– 340. Xiaoman Pan, Taylor Cassidy, Ulf Hermjakob, Heng Ji, and Kevin Knight. 2015. Unsupervised entity linking with abstract meaning representation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics–Human Language Technologies. Lawrence Philips. 1990. Hanging on the metaphone. Computer Language, 7(12). Will Radford, Xavier Carreras, and James Henderson. Named entity recognition with document-specific KB tag gazetteers. Reinhard Rapp. 1999. Automatic identification of word translations from unrelated English and German corpora. 
In Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics, pages 519– 526. Li Shao and Hwee Tou Ng. 2004. Mining new word translations from comparable corpora. In Proceedings of the 20th international conference on Computational Linguistics, page 618. Avirup Sil and Alexander Yates. 2013. Re-ranking for joint named-entity recognition and linking. In Proceedings of the 22nd ACM international conference on Conference on information & knowledge management. Avirup Sil, Georgiana Dinu, and Radu Florian. 2015. The ibm systems for trilingual entity discovery and linking at tac 2015. In Proceedings of the Text Analysis Conference. Philipp Sorg and Philipp Cimiano. 2008. Enriching the crosslingual link structure of wikipedia-a classification-based approach. In Proceedings of the AAAI 2008 Workshop on Wikipedia and Artifical Intelligence. Richard Sproat, Tao Tao, and ChengXiang Zhai. 2006. Named entity transliteration with comparable corpora. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics. Robert L Taft. 1970. Name Search Techniques. New York State Identification and Intelligence System, Albany, New York, US. Raghavendra Udupa, K Saravanan, A Kumaran, and Jagadeesh Jagarlamudi. 2009. Mint: A method for effective and scalable mining of named entity transliterations from large comparable corpora. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pages 799–807. Paul Viola and Michael Jones. 2001. Rapid object detection using a boosted cascade of simple features. In Proceedings of the Conference on Computer Vision and Pattern Recognition. Clare R Voss, Stephen Tratz, Jamal Laoudi, and Douglas M Briesch. 2014. Finding romanized arabic dialect in code-mixed tweets. In Proceedings of Language Resources and Evaluation Conference. 64 Mengqiu Wang and Christopher D Manning. 2014. Cross-lingual projected expectation regularization for weakly supervised learning. Transactions of the Association for Computational Linguistics. Xuanhui Wang, ChengXiang Zhai, Xiao Hu, and Richard Sproat. 2007. Mining correlated bursty topic patterns from coordinated text streams. In Proceedings of the 13th ACM SIGKDD international conference on Knowledge discovery and data mining. Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015a. Boosting Transition-based AMR Parsing with Refined Actions and Auxiliary Analyzers. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. Han Wang, Jin Guang Zheng, Xiaogang Ma, Peter Fox, and Heng Ji. 2015b. Language and domain independent entity linking with quantified collective validation. In Proceedings of Conference on Empirical Methods in Natural Language Processing. 65
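As a small addendum to Section 4.3 above, the entity-prior statistic P(n'h | nh) can be estimated with nothing more than sentence-level co-occurrence counts over the tagged HL corpus. The sketch below shows one plausible reading of that computation (the input format and the conditional-probability estimate are our assumptions; the 0.02 threshold is the one reported in the footnote).

from collections import Counter
from itertools import combinations

def entity_priors(tagged_sentences, threshold=0.02):
    # tagged_sentences: iterable of lists of names found in each HL sentence.
    # Returns, for each name n, the names n' with P(n' | n) above the threshold,
    # estimated from how often n' and n co-occur in the same sentence.
    name_count = Counter()
    pair_count = Counter()
    for names in tagged_sentences:
        names = set(names)
        name_count.update(names)
        for a, b in combinations(sorted(names), 2):
            pair_count[(a, b)] += 1
            pair_count[(b, a)] += 1
    priors = {}
    for (n_prime, n), c in pair_count.items():
        p = c / name_count[n]              # P(n' | n)
        if p > threshold:
            priors.setdefault(n, []).append((n_prime, p))
    return priors

# On a large tagged corpus this surfaces pairs such as "Bawa Abdullahi Wase"
# given "Nigeria" or "Kano", matching the behaviour described in Section 7.2.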
2016
6
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 632–642, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Which Coreference Evaluation Metric Do You Trust? A Proposal for a Link-based Entity Aware Metric Nafise Sadat Moosavi and Michael Strube Heidelberg Institute for Theoretical Studies gGmbH Schloss-Wolfsbrunnenweg 35 69118 Heidelberg, Germany {nafise.moosavi|michael.strube}@h-its.org Abstract Interpretability and discriminative power are the two most basic requirements for an evaluation metric. In this paper, we report the mention identification effect in the B3, CEAF, and BLANC coreference evaluation metrics that makes it impossible to interpret their results properly. The only metric which is insensitive to this flaw is MUC, which, however, is known to be the least discriminative metric. It is a known fact that none of the current metrics are reliable. The common practice for ranking coreference resolvers is to use the average of three different metrics. However, one cannot expect to obtain a reliable score by averaging three unreliable metrics. We propose LEA, a Link-based Entity-Aware evaluation metric that is designed to overcome the shortcomings of the current evaluation metrics. LEA is available as branch LEA-scorer in the reference implementation of the official CoNLL scorer. 1 Introduction There exists a variety of models (e.g. pairwise, entity-based, and ranking) and feature sets (e.g. string match, lexical, syntactic, and semantic) to be used in coreference resolution. There is no known formal way to prove which coreference model is superior to the others and which set of features is more beneficial/less useful in coreference resolution. The only way to compare different models, features or implementations of coreference resolvers is to compare the values of the existing coreference resolution evaluation metrics. By comparing the evaluation scores, we determine which system performs best, which model suits coreference resolution better, and which feature set is useful for improving the recall or precision of a coreference resolver. Therefore, evaluation metrics play an important role in the advancement of the underlying technology. It is imperative for the evaluation metrics to be reliable. However, it is not a trivial task to score output entities with various kinds of coreference errors. Several evaluation metrics have been introduced for coreference resolution (Vilain et al., 1995; Bagga and Baldwin, 1998; Luo, 2005; Recasens and Hovy, 2011; Tuggener, 2014). Metrics that are being used widely are MUC (Vilain et al., 1995), B3 (Bagga and Baldwin, 1998), CEAF (Luo, 2005), and BLANC (Recasens and Hovy, 2011). There are known flaws for each of these metrics. Besides, the agreement between all these metrics is relatively low (Holen, 2013), and it is not clear which metric is the most reliable. The CoNLL-2011/2012 shared tasks (Pradhan et al., 2011; Pradhan et al., 2012) ranked participating systems using an average of three metrics, i.e. MUC, B3, and CEAF, following a proposal by (Denis and Baldridge, 2009a). Averaging three unreliable scores does not result in a reliable one. Besides, when an average score is used for comparisons, it is not possible to analyse recall and precision to determine which output is more precise and which one covers more coreference information. This is indeed a requirement for coreference resolvers to be used in end-tasks. 
Therefore, averaging individual metrics is nothing but a compromise. As mentioned by Luo (2005), interpretability and discriminative power are two basic requirements for a reasonable evaluation metric. In regard to the interpretability requirement a high score should indicate that the vast majority of coreference relations and entities are detected correctly. Similarly, a system that resolves none of the coreference relations or entities should get a zero score. 632 MUC B3 CEAFe BLANC R P F1 R P F1 R P F1 R P F1 Base 69.31 76.23 72.60 55.83 66.07 60.52 54.88 59.41 57.05 57.46 65.77 61.31 More precise 69.31 82.29 75.24 53.94 69.32 60.67 50.92 55.85 53.27 53.60 66.45 59.25 Less precisea 69.31 69.46 69.38 60.53 63.98 62.21 64.82 53.06 58.35 68.68 67.02 67.68 Less preciseb 69.31 74.70 71.90 60.61 69.14 64.60 69.50 47.61 56.51 68.74 67.37 67.87 Table 1: Counterintuitive values of B3, CEAF and BLANC recall and precision. An evaluation metric should also be discriminative. It should be able to discriminate between good and bad coreference decisions. In this paper, we report on a drawback for B3, CEAF, and BLANC which violates the interpretability requirement. We also show that this flaw invalidates the recall/precision analysis of coreference outputs based on these three metrics. We then review the current evaluation metrics with their known flaws to explain why we cannot trust them and need a new reliable one. Finally, we propose LEA, a Link-based Entity Aware evaluation metric that is designed to overcome problems of the existing metrics. We have begun the process of integrating the LEA metric in the official CoNLL scorer1 so as to continue the progress made in recent years to produce replicable evaluation metrics. In order to use the LEA metric, there is no additional requirement than that of the CoNLL scorer v8.01 2. 2 The Mention Identification Effect All the proposed evaluation metrics for coreference resolution use recall, precision and F1 for reporting the performance of a coreference resolver. Recall is an indicator of the fraction of correct coreference information, i.e. coreference links or entities, that is resolved. Precision is an indicator of the fraction of resolved coreference information that is correct. F1 is the weighted harmonic mean of recall and precision. While we usually use F1 for comparing coreference resolution systems, it is also important for the corresponding recall and precision values to be interpretable and discriminative. Coreference resolution is not an end-task itself but it is an important step toward text understanding. Depending on the task, recall or precision may be more important. For example, as Stuckhardt (2003) argues, a coreference resolver needs high precision 1Currently available as branch LEA-scorer in https://github.com/conll/ reference-coreference-scorers. 2LEA scores will be obtained by running the command perl scorer.pl lea goldFile systemFile. to meet the specific requirements of text summarization and question answering. In this section, we show that the recall and precision of the B3, CEAF and BLANC metrics are neither interpretable nor reliable. We choose the output of the state-of-the-art coreference resolver of Wiseman et al. (2015) on the CoNLL 2012 English test set as the base output. The CoNLL 2012 English test set contains 222 documents (comprising 348 partially annotated sections). This test set contains 19,764 coreferring mentions that belong to 4,532 different entities. 
In Table 1, Base represents the scores of (Wiseman et al., 2015) on the CoNLL 2012 test set. All reported scores in this paper are computed by the official CoNLL scorer v8.01 (Pradhan et al., 2014). Assume Mk,r is the set of mentions that exists in both key and response entities. Let Lk(m) and Lr(m) be the set of coreference links of mention m in the key and response entities, respectively. Mention m is an incorrectly resolved mention if m ∈Mk,r and Lk(m) ∩Lr(m) = ∅. Therefore, m is a coreferent mention that has at least one coreference link in the response entities. However, none of its detected coreference links in the response entities are correct. By removing the incorrectly resolved mentions, the response entities will become more precise. The precision improves because the wrong links that are related to the incorrectly resolved mentions have been removed. Besides, the recall will not change because no correct coreference relations or entities have been added or removed. We make the Base output more precise by removing all 1075 incorrectly resolved mentions from the response entities. The score for this more precise output is shown as More precise in Table 1. As can be seen, (1) recall changes for all the metrics except for MUC; (2) both CEAFe recall and precision significantly decrease; and (3) BLANC recall notably decreases so that F1 drops significantly in comparison to Base. On the other hand, adding completely incorrect 633 entities to the response entities should not affect the recall and it should decrease the precision. Assume Md,k,¯r is the set of mentions of document d that exists in the key entities but is missing from the response entities. We can add completely incorrect entities to the Base output as follows: (1) By linking m1 ∈Md,k,¯r to mention m2 ∈Md,k,¯r that is non-coreferent with m1. All the new wrong entities are of size two (Less precisea). (2) By linking m1 ∈Md,k,¯r to all mentions of Md,k,¯r that are non-coreferent with m1. In this case the new entities are larger but their number is smaller (Less preciseb). The number of new entities is 1350 and 283 for the first and second case, respectively. As can be seen from the results of Table 1, (1) recall changes for all metrics except for MUC; and (2) the B3, CEAF and BLANC scores improve significantly over those of Base when the output is doubtlessly worse. These experiments show that B3, CEAF and BLANC are not reliable for recall-precision analysis. We refer to the problem that is causing these contradictory results as the mention identification effect. 3 Reasons for the Unreliable Results In this section, we briefly give an overview of the common evaluation metrics for coreference resolution. We also discuss the shortcomings of each metric, including the mention identification effect, that may lead to counterintuitive and unreliable results. In all metrics, K is the key entity set and R is the response entity set. 3.1 MUC MUC is the earliest systematic coreference evaluation metric and is introduced by Vilain et al. (1995). MUC is a link-based metric. It computes recall based on the minimum number of missing links in the response entities in comparison to the key entities. MUC recall is defined as: Recall = P ki∈K(|ki| −|p(ki)|) P ki∈K(|ki| −1) where p(ki) is the set of partitions that is created by intersecting ki with the corresponding response entities. MUC precision is computed by switching the role of the key and response entities. 
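The MUC formula just given is easy to implement directly once entities are represented as sets of mentions. The following is a minimal sketch (our illustration, not the official scorer); mentions of a key entity that appear in no response entity are counted as singleton parts of p(ki).

def muc_recall(key, response):
    # MUC recall: sum over key entities of (|k| - |p(k)|) / (|k| - 1), where
    # p(k) is the partition of k induced by the response entities.
    num, den = 0, 0
    for k in key:
        parts, unaligned = set(), 0
        for m in k:
            hit = next((i for i, r in enumerate(response) if m in r), None)
            if hit is None:
                unaligned += 1             # missing mention -> singleton part
            else:
                parts.add(hit)
        num += len(k) - (len(parts) + unaligned)
        den += len(k) - 1
    return num / den if den else 0.0

def muc_precision(key, response):
    # Precision switches the roles of the key and response entities.
    return muc_recall(response, key)

# key = [{"a", "b", "c"}]; response = [{"a", "b"}, {"c", "d"}]
# muc_recall(key, response) -> (3 - 2) / (3 - 1) = 0.5 (one of two links found)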
It is not trivial to determine which evaluation metric discriminates coreference responses best. However, MUC is known to be the least discriminative coreference resolution metric (Bagga and Baldwin, 1998; Luo, 2005; Recasens and Hovy, 2011). The MUC evaluation is only based on the minimum number of missing/extra links in the response compared to the key entities. For instance, MUC does not differentiate whether an extra link merges two singletons or the two most prominent entities of the text. However, the latter error does more damage than the first one. Another major problem with MUC is that it has an incorrect preference in ranking coreference outputs. MUC favors the outputs in which entities are over-merged (Luo, 2005). For instance, if we link all the key mentions of the CoNLL 2012 test set into a single response entity, the corresponding MUC scores, i.e. Recall=100, Precision=78.44 and F1=87.91, will be all higher than those of the state-of-the-art system (Base in Table 1). 3.2 BCUBED The B3 score is introduced by Bagga and Baldwin (1998). B3 is a mention-based metric, i.e., the overall recall/precision is computed based on the recall/precision of the individual mentions. For each mention m in the key entities, B3 recall considers the fraction of the correct mentions that are included in the response entity of m. B3 recall is computed as follows: Recall = P ki∈K P rj∈R |ki∩rj|2 |ki| P ki∈K |ki| Similar to MUC, B3 precision is computed by switching the role of the key and response entities. The mention identification effect arises in B3, because B3 uses mentions instead of coreference relations to evaluate the response entities. Therefore, if a mention exists in a response entity, it is considered as a resolved mention regardless of whether it has a correct coreference relation in the response entity. Luo (2005) argues that B3 leads to counterintuitive results for boundary cases: (1) consider a system that makes no decision and leaves every key mention as a singleton. B3 precision for this system is 100%. However, not all of the recognized system entities (i.e. singletons), or the detected coreference relations (i.e. every mention only coreferent with itself) are correct; (2) consider a system that merges all key mentions into a single entity. B3 recall for this system is 100%. 634 Luo (2005) interprets this recall as counterintuitive because the key entities have not been found in the response. The intuitiveness or counterintuitiveness of this recall value depends on the evaluator’s point of view. From one point of view, all of the key mentions, that are supposed to be in the same entity, are indeed in the same entity. Finally, as discussed by Luo and Pradhan (2016), B3 cannot properly handle repeated mentions in the response entities. If a gold mention is repeated in several response entities, B3 receives credit for all the repetitions. The repeated response mentions issue is not an imaginary problem (Luo and Pradhan, 2016). It can happen if system mentions are read from a parse tree where an NP node has a single child, a pronoun, and where both the nodes are considered as candidate mentions. 3.3 CEAF The CEAF metric is introduced by Luo (2005). CEAF’s main assumption is that each key entity should only be mapped to one reference entity, and vice versa. CEAF uses a similarity measure (φ) to evaluate the similarity of two entities. It uses the Kuhn-Munkres algorithm to find the best one-toone mapping of the key to the response entities (g∗) using the given similarity measure. 
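Before turning to the CEAF formulas below, the B3 recall just defined can be written in a few lines. As with the MUC sketch above, entities are assumed to be sets of mention identifiers:

```python
def b_cubed(key, response):
    """B3 recall/precision/F1 (Bagga and Baldwin, 1998) over entity partitions."""
    def score(gold, system):
        numerator = sum(len(g & s) ** 2 / len(g) for g in gold for s in system)
        denominator = sum(len(g) for g in gold)
        return numerator / denominator if denominator else 0.0

    recall = score(key, response)
    precision = score(response, key)   # roles of key and response switched
    f1 = 2 * recall * precision / (recall + precision) if recall + precision else 0.0
    return recall, precision, f1
```

Note that, as discussed above, this counting also credits a gold mention that is repeated across several response entities more than once.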
Assuming K∗is the set of key entities that is included in the optimal mapping, recall is computed as: Recall = P ki∈K∗φ(ki, g∗(ki)) P ki∈K φ(ki, ki) (1) For computing CEAF precision, the denominator of Equation 1 is changed to P Ri∈R φ(ri, ri). Based on φ, there are two variants of CEAF: (1) mention-based CEAF (CEAFm), which computes the similarity as the number of common mentions between two entities, i.e. φ(ki, rj) = |ki ∩rj|; and (2) entity-based CEAF (CEAFe), in which φ(ki, rj) = 2×|ki∩rj| |ki|+|rj| . The denominator of Equation 1 for CEAFe is the number of key entities. Similar to B3, the mention identification effect of CEAF is caused by both similarity measures of CEAF using the number of common mentions between two entities, i.e. |ki ∩rj|. In this way, even if the two mapped entities (ki and rj) have only one mention in common, CEAFm rewards recall and precision by 1 P ki |ki| and 1 P rj |rj|, respectively. CEAFe rewards recall and precision by 2 (|ki|+|rj|)×|K| and 2 (|ki|+|rj|)×|R|, respectively. If instead of the number of common mentions, [The American administration](1) committed a fatal mistake when [it1](1) [executed](2) [this man](3), in a way for which [it2](1) will pay a hefty price in the near future. [[His1](3) survival](4) would have benefited [it3](1) much more than [[his2](3) execution](2) if [they1](1) understood politics as [they2](1) should, because [[his3](3) survival](4) could have been a card to threaten [the sectarians](5) and keep [them1](5) as servants to [them1](1) and [their](1) schemes. Figure 1: Sample text from CoNLL 2012. Response entities cr1 r1={the American administration, it1, it2, it3} , r2={they1, they2, them, their} cr2 r1={the American administration, it1, it2, it3} Table 2: Different system outputs for Figure 1. we would use the number of common coreference links between two entities in both CEAFm and CEAFe similarity measures, this problem would be solved. However, even if we handle the mention identification effect by using coreference relations rather than mentions in the similarity measures, CEAF may still result in counterintuitive results. As mentioned by Denis and Baldridge (2009b), CEAF ignores all correct decisions of unaligned response entities that may lead to unreliable results. In order to illustrate this, we use a sample text from the CoNLL 2012 development set as an example (Figure 1). Gold mentions are enclosed in square brackets. Mentions with the same text are marked with different indices. The indices in parentheses denote to which key entity the mentions belong Consider cr1 and cr2 in Table 2, which are different responses for entity (1) of Figure 1. cr1 resolves many coreference relations of entity (1). However, it misses that they1 could refer to an entity which is already referred to by ’it’. Therefore cr1 produces two entities instead of one because of this missing relation. On the other hand, cr2 only recognizes half of the correct coreference relations of entity (1). As can be seen from Table 3, CEAF prefers cr2 over cr1 even though cr1 makes more correct decisions. CEAF only selects one of the output entities of cr1 for giving credit to the correct decisions. MUC B3 CEAFm CEAFe BLANC cr1 92.30 66.66 50.00 44.44 60.00 cr2 60.00 40.00 66.66 66.66 32.29 Table 3: F1 scores for Table 2’s response entities. 635 The other response entity is only used for penalizing the precision of cr1. 
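The one-to-one alignment behind this behavior is easy to make explicit. The sketch below computes CEAFe, assuming entities are sets of mention identifiers and using scipy's implementation of the Kuhn-Munkres algorithm (a library choice on our part, not a detail from the paper):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ceaf_e(key, response):
    """Entity-based CEAF: phi(k, r) = 2|k & r| / (|k| + |r|), best one-to-one mapping."""
    phi = np.array([[2.0 * len(k & r) / (len(k) + len(r)) for r in response]
                    for k in key])
    # Kuhn-Munkres maximizes total similarity; only aligned response entities
    # contribute to the numerator, every unaligned one is pure precision loss
    rows, cols = linear_sum_assignment(-phi)
    total = phi[rows, cols].sum()
    recall = total / len(key)           # sum over k of phi(k, k) equals |K|
    precision = total / len(response)   # sum over r of phi(r, r) equals |R|
    f1 = 2 * recall * precision / (recall + precision) if recall + precision else 0.0
    return recall, precision, f1
```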
This counterintuitive result is only because of the stringent constraint of CEAF that the mapping of key to response entities should be one-to-one. Another problem with CEAFe, mentioned by Stoyanov et al. (2009), is that it weights entities equally regardless of their sizes. The system that does not detect entity (1), the most prominent entity of Figure 1, gets the same score as that of a system which does not detect entity (4) of size 2. 3.4 BLANC BLANC (Recasens and Hovy, 2011; Luo et al., 2014) is a link-based metric that adapts the Rand index (Rand, 1971) to coreference resolution evaluation. Let Ck and Cr be the sets of coreference links in the key and response entities, respectively. Assume Nk and Nr are the sets of non-coreference links in the key and response entities, respectively. Recall and precision of coreference links are computed as: Rc = |Ck ∩Cr| |Ck| , Pc = |Ck ∩Cr| |Cr| Recall and precision of non-coreference links are computed as: Rn = |Nk ∩Nr| |Nk| , Pn = |Nk ∩Nr| |Nr| BLANC recall and precision are computed by averaging the recall and precision of coreference and non-coreference links, e.g. Recall= Rc+Rn 2 . The BLANC measure is the newest but the least popular metric for evaluating coreference resolvers. Because of considering non-coreferent relations, the mention identification effect affects BLANC most strongly. When the number of gold mentions that exist in the response entities is larger, the number of detected non-coreference links will also get larger. Therefore, it results in higher values for BLANC recall and precision ignoring whether those gold mentions are resolved. 4 LEA In this section, we present our new evaluation metric, namely the Link-Based Entity-Aware metric (LEA). LEA is designed to overcome the shortcomings of the current evaluation metrics. For each entity, LEA considers how important the entity is and how well it is resolved. Therefore, LEA evaluates a set of entities as follows: P ei∈E(importance(ei) × resolution-score(ei)) P ek∈E importance(ek) We consider the size of an entity as a measure of importance, i.e. importance(e) = |e|. Therefore, the more prominent entities of the text get higher importance values. However, according to the end-task or domain used, one can choose other importance measures based on factors besides ei’s size, e.g. ei’s entity type or ei’s mention types. For example, as suggested by Holen (2013), each mention carries different information values, and considering this information could benefit the quantitative evaluation of coreference resolution. The importance measure of LEA is the appropriate place to incorporate this kind of information. Entity e with n mentions has link(e) = n × (n−1)/2 unique coreference links. The resolution score of key entity ki is computed as the fraction of correctly resolved coreference links of ki: resolution-score(ki) = X rj∈R link(ki ∩rj) link(ki) For each ki, LEA checks all the response entities to see whether they are partial matches for ki. rj is a partial match for ki, if it contains at least one of the coreference links of ki. Thus, if a response entity only contains one mention of ki, it is not a partial mapping of ki. Having the definitions of importance and resolution-score, LEA recall is computed as: Recall = P ki∈K(|ki| × P rj∈R link(ki∩rj) link(ki) ) P kz∈K |kz| LEA precision is computed by switching the role of the key and response entities: Precision = P ri∈R(|ri| × P kj∈K link(ri∩kj) link(ri) ) P rz∈R |rz| LEA handles singletons by self-links. 
A self-link is a link connecting a mention to itself. Self-links indicate that a mention is only coreferent with itself and not with other mentions. By considering self-links, the number of links in a singleton is one. If entity ki is a singleton, link(ki ∩rj) is one only if rj is a singleton and contains the same mention as ki. In summary, LEA is a link-based metric with the following properties: 636 – LEA takes into account all coreference links instead of only extra/missing links. Therefore, it has more discriminative power than MUC. – LEA evaluates resolved coreference relations instead of resolved mentions. LEA also does not rely on non-coreferent links in order to detect entity structures or singletons. Therefore, the mention identification effect does not apply to LEA recall and precision. As a result, one can trust LEA recall or precision. – LEA allows one-to-many mappings of entities. Unlike CEAF, all correct coreference relations are rewarded by LEA. More splits (or similarly merges) in entity ki result in a smaller P rj∈R link(ki ∩rj). Therefore, splitting (merging) of an entity in several entities will be penalized implicitly in resolution-score. – LEA takes the importance of missing/extra entities into account. Therefore, unlike CEAFe, it differentiates between the outputs missing the most prominent and the smallest entities. – LEA considers resolved coreference relations instead of resolved mentions. Therefore, the existence of repeated mentions in different response entities is not troublesome for LEA. 5 An Illustrative Example In this section, we use the example from Pradhan et al. (2014) to show the process of computing the LEA scores. In this example, K = {k1 = {a, b, c}, k2 = {d, e, f, g}} is the set of key entities and R = {r1 = {a, b}, r2 = {c, d}, r3 = {f, g, h, i}} is the set of response entities. Here we assume that importance corresponds to entity size. Hence, importance(k1) = 3 and importance(k2) = 4. The sets of coreference links in k1 and k2 are {ab, ac, bc} and {de, df, dg, ef, eg, fg}, respectively. Therefore, link(k1) = 3 and link(k2) = 6. ab is the only common link between k1 and r1. There are no common links between k1 and the two other response entities. Similarly, k2 has one common link with r3 and it has no common links with r1 or r2. Therefore, resolution-score(k1) = 1+0+0 3 and resolution-score(k2) = 0+0+1 6 . As a result LEA recall is computed as: P importance(ki) × resolution-score(ki) P importance(kj) = 3 × 1 3 + 4 × 1 6 3 + 4 ≈0.24 By changing the roles of key and response entities, LEA precision is computed as: 2 × 1+0 1 + 2 × 0+0 1 + 4 × 0+1 6 2 + 2 + 4 ≈0.33 6 Evaluation on Real Data Table 4 shows the scores of the state-of-the-art coreference resolvers developed by Wiseman et al. (2015), Martschat and Strube (2015), and Peng et al. (2015). Clark and Manning (2015)’s resolver is also among the state-of-the-art systems but we did not have access to their output. Considering the average score of MUC, B3, and CEAFe, Martschat, and Peng perform equally. However, according to LEA, Martschat performs significantly better based on an approximate randomization test (Noreen, 1989). CEAFe also agrees with LEA for this ranking. However, CEAFe recall and precision are similar for Peng while based on LEA, Peng’s precision is marginally better than recall. 
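For readers who want to verify these numbers, a minimal implementation of LEA is short. The sketch below assumes entities are sets of hashable mention identifiers, takes importance to be entity size, and reproduces the illustrative example from Section 5; it is not the official scorer implementation:

```python
def lea(key, response):
    """LEA (Link-based Entity-Aware) recall/precision/F1 sketch."""
    def links(entity):
        n = len(entity)
        return n * (n - 1) // 2 if n > 1 else 1   # singletons carry one self-link

    def common_links(e1, e2):
        common = e1 & e2
        if len(e1) == 1 and len(e2) == 1:
            return 1 if len(common) == 1 else 0   # matching singletons
        return len(common) * (len(common) - 1) // 2

    def score(gold, system):
        numerator = sum(len(g) * sum(common_links(g, s) for s in system) / links(g)
                        for g in gold)             # importance x resolution-score
        denominator = sum(len(g) for g in gold)
        return numerator / denominator if denominator else 0.0

    recall, precision = score(key, response), score(response, key)
    f1 = 2 * recall * precision / (recall + precision) if recall + precision else 0.0
    return recall, precision, f1

# The Section 5 example:
K = [{"a", "b", "c"}, {"d", "e", "f", "g"}]
R = [{"a", "b"}, {"c", "d"}, {"f", "g", "h", "i"}]
print(lea(K, R))   # -> roughly (0.24, 0.33, 0.28), matching the worked example
```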
In addition to the state-of-the-art systems, we report the scores of boundary cases in the CoNLL 2012 test set in Table 4: (1) sys-sing: all system mentions as singletons; and (2) sys-1ent: all system mentions in a single entity. Table 5 presents the evaluations of the participating systems in the CoNLL 2012 shared task (closed task with predicted mentions). The rankings are specified in parentheses. For the LEA rankings we also perform a significance test. The systems without significant differences have the same ranking. The main difference between the rankings of avg. and LEA is the rank of xu. Based on LEA, xu is significantly better than chen and chunyuang, while avg. ranks these two above xu. The recall values of chen and chunyuang for mention identification are 75.08 and 75.23, which are higher than those of the best performing systems, i.e 72.75 for fernandes, and 74.23 for martschat. chen and chunyuang include 1850 and 1735 gold mentions in their outputs that have not a single correct coreference link. On the other hand, the number of these gold mentions in xu is 757. Therefore, these different rankings could be a direct result of the mention identification effect. Overall, using one reliable metric instead of an average score benefits us in two additional ways: (1) we can perform a significance test to check whether there is a meaningful difference, and (2) the recall and precision values are meaningful. 637 MUC B3 CEAFe CoNLL LEA R P F1 R P F1 R P F1 Avg. F1 R P F1 Wiseman 69.31 76.23 72.60 55.83 66.07 60.52 54.88 59.41 57.05 63.39 51.78 62.12 56.48 Martschat 68.55 77.22 72.63 54.64 66.78 60.11 52.85 60.30 56.33 63.02 50.64 62.87 56.10 Peng 69.54 75.80 72.53 56.91 65.40 60.86 55.49 55.98 55.73 63.04 51.91 58.97 55.21 sys-sing 0.00 0.00 0.00 19.72 39.05 26.20 50.32 4.99 9.08 11.76 0.00 0.00 0.00 sys-1ent 88.01 29.58 44.28 84.87 2.53 4.91 1.50 19.63 2.80 17.33 82.31 2.27 4.43 Table 4: Results on the CoNLL 2012 test set. MUC B3 CEAFm CEAFe BLANC CoNLL avg. LEA fernandes 70.51 (1) 57.58 (1) 61.42 53.86 (1) 58.75 60.65 (1) 53.28 (1) martschat 66.97 (3) 54.62 (2) 58.77 51.46 (2) 55.04 57.68 (2) 49.99 (2) bjorkelund 67.58 (2) 54.47 (3) 58.19 50.21(3) 55.42 57.42 (3) 49.98 (2) chang 66.38 (4) 52.99 (4) 57.10 48.94 (4) 53.86 56.10 (4) 48.50 (4) chen 63.71 (7) 51.76 (5) 55.77 48.10 (5) 52.87 54.52 (5) 46.24 (6) chunyuang 63.82 (6) 51.21 (6) 55.10 47.58 (6) 52.65 54.20 (6) 45.84 (6) shou 62.91 (8) 49.44 (9) 53.16 46.66 (7) 50.44 53.00 (7) 43.97 (8) yuan 62.55 (9) 50.11 (8) 54.53 45.99 (8) 52.10 52.88 (8) 44.76 (8) xu 66.18 (5) 50.30 (7) 51.31 41.25 (11) 46.47 52.58 (9) 46.83 (5) uryupina 60.89 (10) 46.24 (10) 49.31 42.93 (9) 46.04 50.02 (10) 41.15 (10) songyang 59.83 (12) 45.90 (11) 49.58 42.36 (10) 45.10 49.36 (11) 41.25 (10) zhekova 53.52 (13) 35.66 (13) 39.66 32.16 (12) 34.80 40.45 (12) 29.98 (12) xinxin 48.27 (14) 35.73 (12) 37.99 31.90 (13) 36.54 38.63 (13) 29.22 (12) li 50.84 (11) 32.29 (14) 36.28 25.21 (14) 31.85 36.11 (14) 27.32 (14) Table 5: The results of the CoNLL 2012 shared task. 0% 20% 40% 60% 80% 100% 0 20 40 60 80 100 F1 MUC B3 CEAFm CEAFe BLANC LEA Figure 2: Resolved coreference links ratio without incorrect links. 7 Analysis In this section we analyze the behavior of the evaluation metrics based on various coreference resolution errors. The set of key entities in all experiments contains: one entity of size 20, two entities of size 10, three entities of size 5, one entity of size 4, and ten entities of size 2. 
7.1 Correct Links We analyze different metrics based on the ratio of correctly resolved coreference links: (1) without wrong coreference links (Figure 2), and (2) with wrong coreference links (Figure 3). In the 0% 20% 40% 60% 80% 100% 0 20 40 60 80 100 F1 MUC B3 CEAFm CEAFe BLANC LEA Figure 3: Resolved coreference links ratio in the presence of incorrect links. experiments of Figure 2, only mentions that are correctly resolved exist in the response. In Figure 3, apart from the mentions that are resolved correctly, other mentions are linked to at least one non-coreferent mention. Therefore, mention detection F1 is always 100%. The following observations can be drawn from these experiments: (1) MUC and LEA are the only measures which give a zero score to the response that contains no correct coreference relations; (2) in our experiments, CEAFe shows an unreasonable drop when the correct link ratio changes from 0% to 20%; and (3), in Figure 2, the BLANC 638 0% 20% 40% 60% 80% 100% 0 20 40 60 80 100 F1 MUC B3 CEAFm CEAFe BLANC LEA %correct links Figure 4: Resolving entities in decreasing order. F1 of B3, CEAF, and LEA are the same. F1 values are less than or equal to those of B3 and LEA. However, in Figure 3 that contains both coreferent and non-coreferent links, BLANC F1 is at least 20% higher than that of other metrics. 7.2 Correct Entities Apart from the correctly resolved links, a coreference metric should also take into account the resolved entities. In this section, we analyze the coreference resolution metrics based on the number and the size of the correctly resolved entities. In these experiments, each entity is either resolved completely, or all of its mentions are absent from the response. In Figure 4, the key entities are added to the response in decreasing order of their size. Figure 5 shows the experiments in which the entities are resolved in increasing order. The ratio of the correctly resolved coreference links is shown in both figures. We can observe the following points from Figure 4 and Figure 5: (1) CEAFe results in the same F1 values regardless of the size of entities that are resolved or are missing; (2) B3, CEAFm and LEA result in the same F1 values; and (3) BLANC is very sensitive to the total number of links. 7.3 Splitting/Merging Entities The effect of splitting a single entity into two or more entities is studied in Figure 6. The overall effect of merging entities would be similar to that of splitting if the roles of the key and response entities change. In each experiment, only one key entity is split in a way that no singletons are created. For example, 18-2 in the horizontal axis indicates 0% 20% 40% 60% 80% 100% 0 20 40 60 80 100 F1 MUC B3 CEAFm CEAFe BLANC LEA %correct links Figure 5: Resolving entities in increasing order. F1 of B3, CEAF, and LEA are the same. 18-2 3-2 2-2 16-4 10-10 5-3-2 9-9-2 9-5-6 85 90 95 100 F1 MUC B3 CEAFm CEAFe BLANC LEA Figure 6: Effect of splitting entities. that an entity of size 20 is split into two entities of size 18 and 2. The following observations can be drawn from Figure 6: (1) MUC only recognizes the number of splits regardless of the size of entities; (2) CEAFe does not differentiate 2-2 from 10-10, and 9-92 from 9-5-6; and (3) the highest disagreement is for ranking different numbers of splits in entities with different sizes, i.e., B3: 18-2>5-3-2>164, BLANC: 5-3-2>18-2>16-4, CEAF: 18-2>164>5-3-2, and LEA: 18-2>16-4>5-3-2. These are the cases that are even for humans hard to rank. 
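One point from these synthetic experiments can be reproduced with the lea sketch given earlier. The snippet below builds the key entities described at the start of this section over arbitrary integer mention identifiers (only the sizes matter) and scores the 18-2 split, with all other entities assumed to be resolved correctly (an assumption on our part about the experimental setup):

```python
def make_entities(sizes, start=0):
    """Build disjoint entities of the given sizes over integer mention ids."""
    entities, next_id = [], start
    for size in sizes:
        entities.append(set(range(next_id, next_id + size)))
        next_id += size
    return entities

# One entity of size 20, two of size 10, three of size 5, one of size 4, ten of size 2.
key = make_entities([20, 10, 10, 5, 5, 5, 4] + [2] * 10)

# "18-2": the size-20 entity is split into 18 + 2; all other entities stay intact.
big = sorted(key[0])
response = [set(big[:18]), set(big[18:])] + [set(e) for e in key[1:]]

print(lea(key, response))   # under this construction precision is 1.0; only recall drops
```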
7.4 Extra/Missing Mentions Figure 7 shows the effect of extra mentions, i.e. mentions that are not included in any key entity. If we change the roles of the key and response enti639 1-2 1-10 2-0 2-2 2-10 3-0 3-2 3-10 96 98 100 F1 MUC B3 CEAFm CEAFe BLANC LEA Figure 7: Effect of extra mentions. ties, the overall effect of missing mentions would be similar. In the horizontal axis, the first number shows the number of extra mentions. The second number shows the size of the entity to which extra mentions are added. A zero entity size indicates the extra mentions are linked together. The following points are worth noting from the results of Figure 7: (1) MUC and CEAFm are the least discriminative metrics when the system output includes extra mentions; (2) except for CEAFe, other metrics rank 3-10 as the worst output;(3) CEAFe recognizes both 2-0 and 3-0 as the worst outputs. However, in these outputs the extra mentions are linked together and therefore no incorrect information is added to the correctly resolved entities; and (4) LEA is the only metric that recognizes error 2-0 is less harmful than 1-2 or 1-10. However, LEA does not discriminate the different outputs in which only one extra mention is added to an entity. If k extra mentions are added to an entity of size n, the corresponding resolution error multiplied by the importance of the response entity is (n + k) × (1 − n×(n−1) (n+k)×(n+k−1)) . If k = 1, this equation is 2 regardless of n’s value. 7.5 Mention Identification The mention identification effect is shown in Figure 8. In all experiments, the number of correct coreference links is zero. The horizontal axis shows the mention identification accuracy in the system output. The F1 of B3, CEAF and BLANC in these experiments clearly contrast the interpretability requirement. A coreference resolver with a non-zero score should have resolved some of the coreference relations. 0% 20% 40% 60% 80% 100% 0 20 40 F1 MUC B3 CEAFm CEAFe BLANC LEA Figure 8: Effect of mention identification. 8 Conclusions Current coreference resolution evaluation metrics have flaws which make them unreliable for comparing coreference resolvers. There is also a low agreement between the rankings of different metrics. The current solution is to use an average value of different metrics for comparisons. Averaging unreliable scores does not result in a reliable one. Indeed, recall and precision comparisons of coreference resolvers are not possible based on an average score. We first report the mention identification effect on B3, CEAF and BLANC which causes these metrics to report misleading values. The only metric that is resistant to the mention identification effect is the least discriminative one, i.e. MUC. We introduce LEA, the Link-based Entity-Aware metric, as a new evaluation metric for coreference resolution. LEA is a simple intuitive metric that overcomes the drawbacks of the current metrics. It can be easily adapted for entity evaluation in different domains or applications in which entities with various attributes are of different importance. Acknowledgments The authors would like to thank Sameer Pradhan, Mark-Christoph M¨uller, Mohsen Mesgar and Sebastian Martschat for their helpful comments. We would also like to thank Sam Wiseman and Haoruo Peng for providing us with their coreference system outputs. This work has been funded by the Klaus Tschira Foundation, Heidelberg, Germany. The first author has been supported by a Heidelberg Institute for Theoretical Studies PhD. scholarship. 
640 References Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In Proceedings of the 1st International Conference on Language Resources and Evaluation, Granada, Spain, 28–30 May 1998, pages 563–566. Kevin Clark and Christopher D. Manning. 2015. Entity-centric coreference resolution with model stacking. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Beijing, China, 26–31 July 2015, pages 1405–1415. Pascal Denis and Jason Baldridge. 2009a. Global joint models for coreference resolution and named entity classification. Procesamiento del Lenguaje Natural, (42):87–96. Pascal Denis and Jason Baldridge. 2009b. Global joint models for coreference resolution and named entity classification. Procesamiento del Lenguaje Natural, 42:87–96, March. Gordana Ilic Holen. 2013. Critical reflections on evaluation practices in coreference resolution. In Proceedings of the 2013 NAACL HLT Student Research Workshop, Atlanta, Georgia, 9-14 June 2013, pages 1–7. Xiaoqiang Luo and Sameer Pradhan. 2016. Evaluation metrics. In M. Poesio, R. Stuckardt, and Y. Versley, editors, Anaphora Resolution: Algorithms, Resources, and Applications. Springer. To appear. Xiaoqiang Luo, Sameer Pradhan, Marta Recasens, and Eduard Hovy. 2014. An extension of BLANC to system mentions. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 24–29, Baltimore, Maryland, June. Association for Computational Linguistics. Xiaoqiang Luo. 2005. On coreference resolution performance metrics. In Proceedings of the Human Language Technology Conference and the 2005 Conference on Empirical Methods in Natural Language Processing, Vancouver, B.C., Canada, 6–8 October 2005, pages 25–32. Sebastian Martschat and Michael Strube. 2015. Latent structures for coreference resolution. Transactions of the Association for Computational Linguistics, 3:405–418. Eric W. Noreen. 1989. Computer Intensive Methods for Hypothesis Testing: An Introduction. Wiley, New York, N.Y. Haoruo Peng, Kai-Wei Chang, and Dan Roth. 2015. A joint framework for coreference resolution and mention head detection. In Proceedings of the 19th Conference on Computational Natural Language Learning, Beijing, China, 30–31 July 2015, pages 12–21. Sameer Pradhan, Lance Ramshaw, Mitchell Marcus, Martha Palmer, Ralph Weischedel, and Nianwen Xue. 2011. CoNLL-2011 Shared Task: Modeling unrestricted coreference in OntoNotes. In Proceedings of the Shared Task of the 15th Conference on Computational Natural Language Learning, Portland, Oreg., 23–24 June 2011, pages 1–27. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL2012 Shared Task: Modeling multilingual unrestricted coreference in OntoNotes. In Proceedings of the Shared Task of the 16th Conference on Computational Natural Language Learning, Jeju Island, Korea, 12–14 July 2012, pages 1–40. Sameer Pradhan, Xiaoqiang Luo, Marta Recasens, Eduard Hovy, Vincent Ng, and Michael Strube. 2014. Scoring coreference partitions of predicted mentions: A reference implementation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Baltimore, Md., 22–27 June 2014, pages 30– 35. William R. Rand. 1971. Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association, 66(336):846–850. Marta Recasens and Eduard Hovy. 2011. 
BLANC: Implementing the Rand index for coreference evaluation. Natural Language Engineering, 17(4):485– 510. Veselin Stoyanov, Nathan Gilbert, Claire Cardie, and Ellen Riloff. 2009. Conundrums in noun phrase coreference resolution: Making sense of the stateof-the-art. In Proceedings of the Joint Conference of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing, Singapore, 2–7 August 2009, pages 656–664. Roland Stuckhardt. 2003. Coreference-based summarization and question answering: A case for high precision anaphor resolution. In Proceedings of the 2003 International Symposium on Reference Resolution and Its Applications to Question Answering and Summarization, Venice, Italy, 23–24 June 2003, pages 33–42. Don Tuggener. 2014. Coreference resolution evaluation for higher level applications. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, Volume 2: Short Papers, Gothenburg, Sweden, 26–30 April 2014, pages 231–235. Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A modeltheoretic coreference scoring scheme. In Proceedings of the 6th Message Understanding Conference (MUC-6), pages 45–52, San Mateo, Cal. Morgan Kaufmann. 641 Sam Wiseman, Alexander M. Rush, Stuart Shieber, and Jason Weston. 2015. Learning anaphoricity and antecedent ranking features for coreference resolution. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Beijing, China, 26–31 July 2015, pages 1416–1426. 642
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 643–653, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Improving Coreference Resolution by Learning Entity-Level Distributed Representations Kevin Clark Computer Science Department Stanford University [email protected] Christopher D. Manning Computer Science Department Stanford University [email protected] Abstract A long-standing challenge in coreference resolution has been the incorporation of entity-level information – features defined over clusters of mentions instead of mention pairs. We present a neural network based coreference system that produces high-dimensional vector representations for pairs of coreference clusters. Using these representations, our system learns when combining clusters is desirable. We train the system with a learning-to-search algorithm that teaches it which local decisions (cluster merges) will lead to a high-scoring final coreference partition. The system substantially outperforms the current state-of-the-art on the English and Chinese portions of the CoNLL 2012 Shared Task dataset despite using few hand-engineered features. 1 Introduction Coreference resolution, the task of identifying which mentions in a text refer to the same realworld entity, is fundamentally a clustering problem. However, many recent state-of-the-art coreference systems operate solely by linking pairs of mentions together (Durrett and Klein, 2013; Martschat and Strube, 2015; Wiseman et al., 2015). An alternative approach is to use agglomerative clustering, treating each mention as a singleton cluster at the outset and then repeatedly merging clusters of mentions deemed to be referring to the same entity. Such systems can take advantage of entity-level information, i.e., features between clusters of mentions instead of between just two mentions. As an example for why this is useful, it is clear that the clusters {Bill Clinton} and {Clinton, she} are not referring to the same entity, but it is ambiguous whether the pair of mentions Bill Clinton and Clinton are coreferent. Previous work has incorporated entity-level information through features that capture hard constraints like having gender or number agreement between clusters (Raghunathan et al., 2010; Durrett et al., 2013). In this work, we instead train a deep neural network to build distributed representations of pairs of coreference clusters. This captures entity-level information with a large number of learned, continuous features instead of a small number of hand-crafted categorical ones. Using the cluster-pair representations, our network learns when combining two coreference clusters is desirable. At test time it builds up coreference clusters incrementally, starting with each mention in its own cluster and then merging a pair of clusters each step. It makes these decisions with a novel easy-first cluster-ranking procedure that combines the strengths of cluster-ranking (Rahman and Ng, 2011) and easy-first (Stoyanov and Eisner, 2012) coreference algorithms. Training incremental coreference systems is challenging because the coreference decisions facing a model depend on previous decisions it has already made. We address this by using a learning-to-search algorithm inspired by SEARN (Daum´e III et al., 2009) to train our neural network. 
This approach allows the model to learn which action (a cluster merge) available from the current state (a partially completed coreference clustering) will eventually lead to a high-scoring coreference partition. Our system uses little manual feature engineering, which means it is easily extended to multiple languages. We evaluate our system on the English and Chinese portions of the CoNLL 2012 Shared Task dataset. The cluster-ranking model significantly outperforms a mention-ranking model that 643 does not use entity-level information. We also show that using an easy-first strategy improves the performance of the cluster-ranking model. Our final system achieves CoNLL F1 scores of 65.29 for English and 63.66 for Chinese, substantially outperforming other state-of-the-art systems.1 2 System Architecture Our cluster-ranking model is a single neural network that learns which coreference cluster merges are desirable. However, it is helpful to think of the network as being composed of distinct subnetworks. The mention-pair encoder produces distributed representations for pairs of mentions by passing relevant features through a feedforward neural network. The cluster-pair encoder produces distributed representations for pairs of clusters by applying a pooling operation over the representations of relevant mention pairs, i.e., pairs where one mention is in each cluster. The clusterranking model then scores pairs of clusters by passing their representations through a single neural network layer. We also train a mention-ranking model that scores pairs of mentions by passing their representations through a single neural network layer. Its parameters are used to initialize the clusterranking model, and the scores it produces are used to prune which candidate cluster merges the cluster-ranking model considers, allowing the cluster-ranking model to run much faster. The system architecture is summarized in Figure 1. Mention-Pair Encoder Cluster-Pair Encoder Cluster-Ranking Model Mention-Ranking Model Pretraining, Search space pruning Figure 1: System architecture. Solid arrows indicate one neural network is used as a component of the other; the dashed arrow indicates other dependencies. 3 Building Representations In this section, we describe the neural networks producing distributed representations of pairs of 1Code and trained models are available at https:// github.com/clarkkev/deep-coref. Candidate Antecedent Embeddings Candidate Antecedent Features Mention Features Mention Embeddings Hidden Layer h2 Mention-Pair Representation rm Input Layer h0 Hidden Layer h1 ReLU(W1h0 + b1) ReLU(W2h1 + b2) ReLU(W3h2 + b3) Pair and Document Features Figure 2: Mention-pair encoder. mentions and pairs of coreference clusters. We assume that a set of mentions has already been extracted from each document using a method such as the one in Raghunathan et al. (2010). 3.1 Mention-Pair Encoder Given a mention m and candidate antecedent a, the mention-pair encoder produces a distributed representation of the pair rm(a, m) ∈Rd with a feedforward neural network, which is shown in Figure 2. The candidate antecedent may be any mention that occurs before m in the document or NA, indicating that m has no antecedent. We also experimented with models based on Long Short-Term Memory recurrent neural networks (Hochreiter and Schmidhuber, 1997), but found these to perform slightly worse when used in an end-to-end coreference system due to heavy overfitting to the training data. Input Layer. 
For each mention, the model extracts various words and groups of words that are fed into the neural network. Each word is represented by a vector wi ∈Rdw. Each group of words is represented by the average of the vectors of each word in the group. For each mention and pair of mentions, a small number of binary features and distance features are also extracted. Distances and mention lengths are binned into one of the buckets [0, 1, 2, 3, 4, 5-7, 8-15, 16-31, 32-63, 64+] and then encoded in a one-hot vector in addition to being included as continuous features. The full set of features is as follows: Embedding Features: Word embeddings of the head word, dependency parent, first word, last word, two preceding words, and two following words of the mention. Averaged word embeddings of the five preceding words, five following 644 words, all words in the mention, all words in the mention’s sentence, and all words in the mention’s document. Additional Mention Features: The type of the mention (pronoun, nominal, proper, or list), the mention’s position (index of the mention divided by the number of mentions in the document), whether the mentions is contained in another mention, and the length of the mention in words. Document Genre: The genre of the mention’s document (broadcast news, newswire, web data, etc.). Distance Features: The distance between the mentions in sentences, the distance between the mentions in intervening mentions, and whether the mentions overlap. Speaker Features: Whether the mentions have the same speaker and whether one mention is the other mention’s speaker as determined by string matching rules from Raghunathan et al. (2010). String Matching Features: Head match, exact string match, and partial string match. The vectors for all of these features are concatenated to produce an I-dimensional vector h0, the input to the neural network. If a = NA, the features defined over mention pairs are not included. For this case, we train a separate network with an identical architecture to the pair network except for the input layer to produce anaphoricity scores. Our set of hand-engineered features is much smaller than the dozens of complex features typically used in coreference systems. However, we found these features were crucial for getting good model performance. See Section 6.1 for a feature ablation study. Hidden Layers. The input gets passed through three hidden layers of rectified linear (ReLU) units (Nair and Hinton, 2010). Each unit in a hidden layer is fully connected to the previous layer: hi(a, m) = max(0, Wihi−1(a, m) + bi) where W1 is a M1 × I weight matrix, W2 is a M2 × M1 matrix, and W3 is a d × M2 matrix. The output of the last hidden layer is the vector representation for the mention pair: rm(a, m) = h3(a, m). Cluster-Pair Representation Mention-Pair Representations Pooling !! ! c2 c1 Mention-Pair Encoder !! ! !! ! rc(c1, c2) Rm(c1, c2) !! ! Figure 3: Cluster-pair encoder. 3.2 Cluster-Pair Encoder Given two clusters of mentions ci = {mi 1, mi 2, ..., mi |ci|} and cj = {mj 1, mj 2, ..., mj |cj|}, the cluster-pair encoder produces a distributed representation rc(ci, cj) ∈R2d. The architecture of the encoder is summarized in Figure 3. The cluster-pair encoder first combines the information contained in the matrix of mention-pair representations Rm(ci, cj) = [rm(mi 1, mj 1), rm(mi 1, mj 2), ..., rm(mi |ci|, mj |cj|)] to produce rc(ci, cj). This is done by applying a pooling operation. 
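A compact sketch of this encoder, written in PyTorch purely for illustration (the paper does not specify a framework): the hidden sizes default to the values reported later in the training details, the input dimension is a placeholder, and the pooling helper anticipates the max/average concatenation spelled out next.

```python
import torch
import torch.nn as nn

class MentionPairEncoder(nn.Module):
    """Three ReLU hidden layers mapping the feature vector h0 to r_m(a, m) = h3(a, m)."""
    def __init__(self, input_dim, m1=1000, m2=500, d=500):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, m1), nn.ReLU(),
            nn.Linear(m1, m2), nn.ReLU(),
            nn.Linear(m2, d), nn.ReLU(),
        )

    def forward(self, h0):        # h0: (batch, input_dim) concatenated pair features
        return self.net(h0)       # r_m: (batch, d)

def cluster_pair_representation(pair_reps):
    """Pool R_m(ci, cj), shape (|ci| * |cj|, d), into r_c(ci, cj) of size 2d."""
    return torch.cat([pair_reps.max(dim=0).values, pair_reps.mean(dim=0)], dim=-1)
```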
In particular it concatenates the results of max-pooling and average-pooling, which we found to be slightly more effective than using either one alone: rc(ci, cj)k = ( max {Rm(ci, cj)k,·} for 0 ≤k < d avg {Rm(ci, cj)k−d,·} for d ≤k < 2d 4 Mention-Ranking Model Rather than training a cluster-ranking model from scratch, we first train a mention-ranking model that assigns each mention its highest scoring candidate antecedent. There are two key advantages of doing this. First, it serves as pretraining for the cluster-ranking model; in particular the mentionranking model learns effective weights for the mention-pair encoder. Second, the scores produced by the mention-ranking model are used to provide a measure of which coreference decisions are easy (allowing for an easy-first clustering strategy) and which decisions are clearly wrong (these decisions can be pruned away, significantly reducing the search space of the cluster-ranking model). The mention-ranking model assigns a score sm(a, m) to a mention m and candidate an645 tecedent a representing their compatibility for coreference. This is produced by applying a single fully connected layer of size one to the representation rm(a, m) produced by the mention-pair encoder: sm(a, m) = Wmrm(a, m) + bm where Wm is a 1 × d weight matrix. At test time, the mention-ranking model links each mention with its highest scoring candidate antecedent. Training Objective. We train the mentionranking model with the slack-rescaled maxmargin training objective from Wiseman et al. (2015), which encourages separation between the highest scoring true and false antecedents of the current mention. Suppose the training set consists of N mentions m1, m2, ..., mN. Let A(mi) denote the set of candidate antecedents of a mention mi (i.e., mentions preceding mi and NA), and T (mi) denote the set of true antecedents of mi (i.e., mentions preceding mi that are coreferent with it or {NA} if mi has no antecedent). Let ˆti be the highest scoring true antecedent of mention mi: ˆti = argmax t∈T (mi) sm(t, mi) Then the loss is given by NP i=1 max a∈A(mi)∆(a, mi)(1 + sm(a, mi) −sm(ˆti, mi)) where ∆(a, mi) is the mistake-specific cost function ∆(a, mi) =            αFN if a = NA ∧T (mi) ̸= {NA} αFA if a ̸= NA ∧T (mi) = {NA} αWL if a ̸= NA ∧a /∈T (mi) 0 if a ∈T (mi) for “false new,” “false anaphoric,” “wrong link,” and correct coreference decisions. The different error penalties allow the system to be tuned for coreference evaluation metrics by biasing it towards making more or fewer coreference links. Finding Effective Error Penalties. We fix αWL = 1.0 and search for αFA and αFN out of {0.1, 0.2, ..., 1.5} with a variant of grid search. Each new trial uses the unexplored set of hyperparameters that has the closest Manhattan distance to the best setting found so far on the dev set. We stopped the search when all immediate neighbors (within 0.1 distance) of the best setting had been explored. We found (αFN, αFA, αWL) = (0.8, 0.4, 1.0) to be best for English and (αFN, αFA, αWL) = (0.7, 0.4, 1.0) to be best for Chinese on the CoNLL 2012 data. We attribute our smaller false new cost from the one used by Wiseman et al. (they set αFN = 1.2) to using more precise mention detection, which results in fewer links to NA. Training Details. We initialized our word embeddings with 50 dimensional ones produced by word2vec (Mikolov et al., 2013) on the Gigaword corpus for English and 64 dimensional ones provided by Polyglot (Al-Rfou et al., 2013) for Chinese. 
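A sketch of this objective for a single mention, again in PyTorch for illustration only: scores is assumed to hold s_m(a, m) for every candidate in A(m) including NA, and the cost constants are the English values reported above.

```python
import torch

ALPHA_FN, ALPHA_FA, ALPHA_WL = 0.8, 0.4, 1.0   # English penalties from this section

def mention_ranking_loss(scores, true_antecedents, na_index):
    """Slack-rescaled max-margin loss for one mention.

    scores: 1-D tensor of s_m(a, m) over A(m); true_antecedents: set of indices
    in T(m), which equals {na_index} when the mention is non-anaphoric.
    """
    best_true = max(true_antecedents, key=lambda t: scores[t].item())
    terms = []
    for a in range(scores.size(0)):
        if a in true_antecedents:
            continue                                  # correct decision: cost 0
        if a == na_index:
            cost = ALPHA_FN                           # "false new"
        elif na_index in true_antecedents:
            cost = ALPHA_FA                           # "false anaphoric"
        else:
            cost = ALPHA_WL                           # "wrong link"
        # the clamp plays the role of the zero-cost correct-decision terms in the max
        terms.append(cost * torch.clamp(1 + scores[a] - scores[best_true], min=0))
    return torch.stack(terms).max() if terms else scores.new_zeros(())
```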
Averaged word embeddings were held fixed during training while the embeddings used for single words were updated. We set our hidden layer sizes to M1 = 1000, M2 = d = 500 and minimized the training objective using RMSProp (Hinton and Tieleman, 2012). To regularize the network, we applied L2 regularization to the model weights and dropout (Hinton et al., 2012) with a rate of 0.5 on the word embeddings and the output of each hidden layer. Pretraining. As in Wiseman et al. (2015), we found that pretraining is crucial for the mentionranking model’s success. We pretrained the network in two stages, minimizing the following objectives from Clark and Manning (2015): All-Pairs Classification − NP i=1 [ P t∈T (mi) log p(t, mi) + P f∈F(mi) log(1 −p(f, mi))] Top-Pairs Classification − NP i=1 [ max t∈T (mi) log p(t, mi) + min f∈F(mi) log(1 −p(f, mi))] Where F(mi) is the set of false antecedents for mi and p(a, mi) = sigmoid(s(a, mi)). The top pairs objective is a middle ground between the all-pairs classification and mention ranking objectives: it only processes high-scoring mentions, but is probabilistic rather than max-margin. We first pretrained the network with all-pairs classification for 150 epochs and then with top-pairs classification for 50 epochs. See Section 6.1 for experiments on 646 the two-stage pretraining. 5 Cluster-Ranking Model Although a strong coreference system on its own, the mention-ranking model has the disadvantage of only considering local information between pairs of mentions, so it cannot consolidate information at the entity-level. We address this problem by training a cluster-ranking model that scores pairs of clusters instead of pairs of mentions. Given two clusters of mentions ci and cj, the cluster-ranking model produces a score sc(ci, cj) representing their compatibility for coreference. This is produced by applying a single fully connected layer of size one to the representation rc(ci, cj) produced by the cluster-pair encoder: sc(ci, cj) = Wcrc(ci, cj) + bc where Wc is a 1 × 2d weight matrix. Our cluster-ranking approach also uses a measure of anaphoricity, or how likely it is for a mention m to have an antecedent. This is defined as sNA(m) = WNArm(NA, m) + bNA where WNA is a 1 × d matrix. 5.1 Cluster-Ranking Policy Network At test time, the cluster ranker iterates through every mention in the document, merging the current mention’s cluster with a preceding one or performing no action. We view this procedure as a sequential decision process where at each step the algorithm observes the current state x and performs some action u. Specifically, we define a state x = (C, m) to consist of C = {c1, c2, ...}, the set of existing coreference clusters, and m, the current mention being considered. At a start state, each cluster in C contains a single mention. Let cm ∈C be the cluster containing m and A(m) be a set of candidate antecedents for m: mentions occurring previously in the document. Then the available actions U(x) from x are • MERGE[cm, c], where c is a cluster containing a mention in A(m). This combines cm and c into a single coreference cluster. • PASS. This leaves the clustering unchanged. After determining the new clustering C′ based on the existing clustering C and action u, we consider another mention m′ to get the next state x′ = (C′, m′). 
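A sketch of a single such step, assuming the scoring functions s_c and s_NA are available as callables and that clusters are mutable sets of mentions; the greedy choice of the highest-scoring action matches the inference procedure defined next. All names here are schematic.

```python
def cluster_ranking_step(clusters, mention, antecedents, score_merge, score_na):
    """Apply one MERGE/PASS decision for `mention` and return the chosen action.

    clusters: list of sets of mentions; antecedents: the (pruned) candidate set A(m);
    score_merge(c_m, c) stands in for s_c, score_na(m) for s_NA.
    """
    current = next(c for c in clusters if mention in c)
    candidates = [c for c in clusters
                  if c is not current and c & antecedents]   # clusters touching A(m)
    actions = [("PASS", None, score_na(mention))]
    actions += [("MERGE", c, score_merge(current, c)) for c in candidates]
    kind, target, _ = max(actions, key=lambda a: a[2])
    if kind == "MERGE":
        clusters.remove(current)   # fold the current mention's cluster into target
        target |= current
    return kind, target
```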
Using the scoring functions sc and sNA, we define a policy network π that assigns a probability distribution over U(x) as follows: π(MERGE[cm, c]|x) ∝esc(cm,c) π(PASS|x) ∝esNA(m) During inference, π is executed by taking the highest-scoring (most probable) action at each step. 5.2 Easy-First Cluster Ranking The last detail needed is the ordering in which to consider mentions. Cluster-ranking models in prior work order the mentions according to their positions in the document, processing them leftto-right (Rahman and Ng, 2011; Ma et al., 2014). However, we instead sort the mentions in descending order by their highest scoring candidate coreference link according to the mention-ranking model. This causes inference to occur in an easyfirst fashion where hard decisions are delayed until more information is available. Easy-first orderings have been shown to improve the performance of other incremental coreference strategies (Raghunathan et al., 2010; Stoyanov and Eisner, 2012) because they reduce the problem of errors compounding as the algorithm runs. We also find it beneficial to prune the set of candidate antecedents A(m) for each mention m. Rather than using all previously occurring mentions as candidate antecedents, we only include high-scoring ones, which greatly reduces the size of the search space. This allows for much faster learning and inference; we are able to remove over 95% of candidate actions with no decrease in the model’s performance. For both of these two preprocessing steps, we use s(a, m) −s(NA, m) as the score of a coreference link between a and m. 5.3 Deep Learning to Search We face a sequential prediction problem where future observations (visited states) depend on previous actions. This is challenging because it violates the common i.i.d. assumption made in machine learning. Learning-to-search algorithms are effective for this sort of problem, and have been applied successfully to coreference resolution (Daum´e III and Marcu, 2005; Clark and Manning, 2015) as 647 Algorithm 1 Deep Learning to Search for i = 1 to num epochs do Initialize the current training set Γ = ∅ for each example (x, y) ∈D do Run the policy π to completion from start state x to obtain a trajectory of states {x1, x2, ..., xn} for each state xi in the trajectory do for each possible action u ∈U(xi) do Execute u on xi and then run the reference policy πref until reaching an end state e Assign u a cost by computing the loss on the end state: l(u) = L(e, y) end for Add the state xi and associated costs l to Γ end for end for Update π with gradient descent, minimizing P (x,l)∈Γ P u∈U(x) π(u|x)l(u). end for well as other structured prediction tasks in natural language processing (Daum´e III et al., 2014; Chang et al., 2015a). We train the cluster-ranking model using a learning-to-search algorithm inspired by SEARN (Daum´e III et al., 2009), which is described in Algorithm 1. The algorithm takes as input a dataset D of start states x (in our case documents with each mention in its own singleton coreference cluster) and structured labels y (in our case gold coreference clusters). Its goal is to train the policy π so when it executes from x, reaching a final state e, the resulting loss L(e, y) is small. We use the negative of the B3 coreference metric for this loss (Bagga and Baldwin, 1998). 
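Algorithm 1 can be restated compactly as a Python-style loop. Every helper below (run_policy, available_actions, apply_action, rollout, update) is a schematic stand-in passed in as a callable, not a function from the authors' code:

```python
def train_with_search(policy, reference_policy, data, loss_fn, num_epochs,
                      run_policy, available_actions, apply_action, rollout, update):
    """Learning-to-search training loop, mirroring Algorithm 1."""
    for _ in range(num_epochs):
        training_set = []                                        # the set Gamma
        for start_state, gold in data:
            # run the current policy to completion to obtain a trajectory of states
            for state in run_policy(policy, start_state):
                costs = {}
                for action in available_actions(state):
                    # execute the action, then roll out with the reference policy
                    end_state = rollout(reference_policy, apply_action(state, action))
                    costs[action] = loss_fn(end_state, gold)     # e.g. negative B3
                training_set.append((state, costs))
        # minimize the risk: sum over states and actions of pi(u|x) * l(u)
        update(policy, training_set)
```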
Although our system evaluation also includes the MUC (Vilain et al., 1995) and CEAFφ4 (Luo, 2005) metrics, we do not incorporate them into the loss because MUC has the flaw of treating all errors equally and CEAFφ4 is slow to compute. For each example (x, y) ∈D, the algorithm obtains a trajectory of states x1, x2, ..., xn visited by the current policy by running it to completion (i.e., repeatedly taking the highest scoring action until reaching an end state) from the start state x. This exposes the model to states at train time similar to the ones it will face at test time, allowing it to learn how to cope with mistakes. Given a state x in a trajectory, the algorithm then assigns a cost l(u) to each action u ∈U(x) by executing the action, “rolling out” from the resulting state with a reference policy πref until reaching an end state e, and computing the resulting loss L(e, y). This rolling out procedure allows the model to learn how a local action will affect the final score, which cannot be otherwise computed because coreference evaluation metrics do not decompose over cluster merges. The policy network is then trained to minimize the risk associated with taking each action: P u∈U(x) π(u|x)l(u). Reference policies typically refer to the gold labels to find actions that are likely to be beneficial. Our reference policy πref takes the action that increases the B3 score the most each step, breaking ties randomly. It is generally recommended to use a stochastic mixture of the reference policy and the current learned policy during rollouts when the reference policy is not optimal (Chang et al., 2015b). However, we find only using the reference policy (which is close to optimal) to be much more efficient because it does not require neural network computations and is deterministic, which means the costs of actions can be cached. Training details. We update π using RMSProp and apply dropout with a rate of 0.5 to the input layer. For most experiments, we initialize the mention-pair encoder component of the clusterranking model with the learned weights from the mention-ranking model, which we find to greatly improve performance (see Section 6.2). Runtime. The full cluster-ranking system runs end-to-end in slightly under 1 second per document on the English test set when using a GPU (including scoring all pairs of mentions with the mention-ranking model for search-space pruning). This means the bottleneck for the overall system is the syntactic parsing required for mention detection (around 4 seconds per document). 648 Model English F1 Chinese F1 Full Model 65.52 64.41 – MENTION –1.27 –0.74 – GENRE –0.25 –2.91 – DISTANCE –2.42 –2.41 – SPEAKER –1.26 –0.93 – MATCHING –2.07 –3.44 Table 1: CoNLL F1 scores of the mention-ranking model on the dev sets without mention, document genre, distance, speaker, and string matching hand-engineered features. 6 Experiments and Results Experimental Setup. We run experiments on the English and Chinese portions of the CoNLL 2012 Shared Task data (Pradhan et al., 2012). The models are evaluated using three of the most popular coreference metrics: MUC, B3, and Entity-based CEAF (CEAFφ4). We generally report the average F1 score (CoNLL F1) of the three, which is common practice in coreference evaluation. We used the most recent version of the CoNLL scorer (version 8.01), which implements the original definitions of the metrics. Mention Detection. Our experiments were run using system-produced predicted mentions. 
We used the rule-based mention detection algorithm from Raghunathan et al. (2010), which first extracts pronouns and maximal NP projections as candidate mentions and then filters this set with rules that remove spurious mentions such as numeric entities and pleonastic it pronouns. 6.1 Mention-Ranking Model Experiments Feature Ablations. We performed a feature ablation study to determine the importance of the hand-engineered features included in our model. The results are shown in Table 1. We find the small number of non-embedding features substantially improves model performance, especially the distance and string matching features. This is unsurprising, as the additional features are not easily captured by word embeddings and historically such features have been very important in coreference resolvers (Bengtson and Roth, 2008). The Importance of Pretraining. We evaluate the benefit of the two-step pretraining for the All-Pairs Top-Pairs English F1 Chinese F1 Yes Yes 65.52 64.41 Yes No –0.36 –0.24 No Yes –0.54 –0.33 No No –3.58 –5.43 Table 2: CoNLL F1 scores of the mention-ranking model on the dev sets with different pretraining methods. Model English F1 Chinese F1 Full Model 66.01 64.86 – PRETRAINING –5.01 –6.85 – EASY-FIRST –0.15 –0.12 – L2S –0.32 –0.25 Table 3: CoNLL F1 scores of the cluster-ranking model on the dev sets with various ablations. – PRETRAINING: initializing model parameters randomly instead of from the mention-ranking model, – EASY-FIRST: iterating through mentions in order of occurrence instead of according to their highest scoring candidate coreference link, – L2S: training on a fixed trajectory of correct actions instead of using learning to search. mention-ranking model and report results in Table 2. Consistent with Wiseman et al. (2015), we find pretraining to greatly improve the model’s accuracy. We note in particular that the model benefits from using both pretraining steps from Section 4, which more smoothly transitions the model from a mention-pair classification objective that is easy to optimize to a max-margin objective better suited for a ranking task. 6.2 Cluster-Ranking Model Experiments We evaluate the importance of three key details of the cluster ranker: initializing it with the mentionranking model’s weights, using an easy-first ordering of mentions, and using learning to search. The results are shown in Table 3. Pretrained Weights. We compare initializing the cluster-ranking model randomly with initializing it with the weights learned by the mentionranking model. Using pretrained weights greatly improves performance. We believe the clusterranking model has difficulty learning effective weights from scratch due to noise in the signal coming from cluster-level decisions (an overall bad cluster merge may still involve a few cor649 rect pairwise links) and the smaller amount of data used to train the cluster-ranking model (many possible actions are pruned away during preprocessing). We believe the score would be even lower without search-space pruning, which stops the model from considering many bad actions. Easy-First Cluster Ranking. We compare the effectiveness of easy-first cluster-ranking with the commonly used left-to-right approach. Using a left-to-right strategy simply requires changing the preprocessing step ordering the mentions so mentions are sorted by their position in the document instead of their highest scoring coreference link according to the mention-ranking model. 
We find the easy-first approach slightly outperforms using a left-to-right ordering of mentions. We believe this is because delaying hard decisions until later reduces the problem of early mistakes causing later decisions to be made incorrectly. Learning to Search. We also compare learning to search with the simpler approach of training the model on a trajectory of gold coreference decisions (i.e., training on a fixed cost-sensitive classification dataset). Using this approach significantly decreases performance. We attribute this to the model not learning how to deal with mistakes when it only sees correct decisions during training. 6.3 Capturing Semantic Similarity Using semantic information to improve coreference accuracy has had mixed in results in previous research, and has been called an “uphill battle” in coreference resolution (Durrett and Klein, 2013). However, word embeddings are well known for being effective at capturing semantic relatedness, and we show here that neural network coreference models can take advantage of this. Perhaps the case where semantic similarity is most important is in linking nominals with no head match (e.g., “the nation” and “the country”). We compare the performance of our neural network model with our earlier statistical system (Clark and Manning, 2015) at classifying mention pairs of this type as being coreferent or not. The neural network shows substantial improvement (18.9 F1 vs. 10.7 F1) on this task compared to the more modest improvement it gets at classifying any pair of mentions as coreferent (68.7 F1 vs. 66.1 F1). Some example wins are shown in Table 4. These types of coreference links are quite rare in the CoNLL data (about 1.2% of the positive corefAntecedent Anaphor the country’s leftist rebels the guerrillas the company the New York firm the suicide bombing the attack the gun the rifle the U.S. carrier the ship Table 4: Examples of nominal coreferences with no head match that the neural model gets correct, but the system from Clark and Manning (2015) gets incorrect. erence links in the test set), so the improvement does not significantly contribute to the final system’s score, but it does suggest progress on this difficult type of coreference problem. 6.4 Final System Performance In Table 5 we compare the results of our system with state-of-the-art approaches for English and Chinese. Our mention-ranking model surpasses all previous systems. We attribute its improvement over the neural mention ranker from Wiseman et al. (2015) to our model using a deeper neural network, pretrained word embeddings, and more sophisticated pretraining. The cluster-ranking model improves results further across both languages and all evaluation metrics, demonstrating the utility of incorporating entity-level information. The improvement is largest in CEAFφ4, which is encouraging because CEAFφ4 is the most recently proposed metric, designed to correct flaws in the other two (Luo, 2005). We believe entity-level information is particularly useful for preventing bad merges between large clusters (see Figure 4 for an example). However, it is worth noting that in practice the much more complicated cluster-ranking model brings only fairly modest gains in performance. 7 Related Work There has been extensive work on machine learning approaches to coreference resolution (Soon et al., 2001; Ng and Cardie, 2002), with mentionranking models being particularly popular (Denis and Baldridge, 2007; Durrett and Klein, 2013; Bj¨orkelund and Kuhn, 2014). 
We train a neural mention-ranking model inspired by Wiseman et al. (2015) as a starting point, but then use it to pretrain a cluster-ranking model that benefits from entity-level information. Wise650 MUC B3 CEAFφ4 Prec. Rec. F1 Prec. Rec. F1 Prec. Rec. F1 Avg. F1 CoNLL 2012 English Test Data Clark and Manning (2015) 76.12 69.38 72.59 65.64 56.01 60.44 59.44 52.98 56.02 63.02 Peng et al. (2015) – – 72.22 – – 60.50 – – 56.37 63.03 Wiseman et al. (2015) 76.23 69.31 72.60 66.07 55.83 60.52 59.41 54.88 57.05 63.39 Wiseman et al. (2016) 77.49 69.75 73.42 66.83 56.95 61.50 62.14 53.85 57.70 64.21 NN Mention Ranker 79.77 69.10 74.05 69.68 56.37 62.32 63.02 53.59 57.92 64.76 NN Cluster Ranker 78.93 69.75 74.06 70.08 56.98 62.86 62.48 55.82 58.96 65.29 CoNLL 2012 Chinese Test Data Chen & Ng (2012) 59.92 64.69 62.21 60.26 51.76 55.69 51.61 58.84 54.99 57.63 Bj¨orkelund & Kuhn (2014) 69.39 62.57 65.80 61.64 53.87 57.49 59.33 54.65 56.89 60.06 NN Mention Ranker 72.53 65.72 68.96 65.49 56.87 60.88 61.93 57.11 59.42 63.09 NN Cluster Ranker 73.85 65.42 69.38 67.53 56.41 61.47 62.84 57.62 60.12 63.66 Table 5: Comparison with the current state-of-the-art approaches on the CoNLL 2012 test sets. NN Mention Ranker and NN Cluster Ranker are contributions of this work. Russian President Vladimir Putin, his, the Russian President, President Clinton’s, Bill Clinton, Mr. Clinton’s he … , { { } } incorrect link predicted by the mention-ranking model … , … , Figure 4: Thanks to entity-level information, the cluster-ranking model correctly declines to merge these two large clusters when running on the test set. However, the mention-ranking model incorrectly links the Russian President and President Clinton’s, which greatly reduces the final precision score. man et al. (2016) extend their mention-ranking model by incorporating entity-level information produced by a recurrent neural network running over the candidate antecedent-cluster. However, this is an augmentation to a mention-ranking model, and not fundamentally a clustering model as our cluster ranker is. Entity-level information has also been incorporated in coreference systems using joint inference (McCallum and Wellner, 2003; Poon and Domingos, 2008; Haghighi and Klein, 2010) and systems that build up coreference clusters incrementally (Luo et al., 2004; Yang et al., 2008; Raghunathan et al., 2010). We take the latter approach, and in particular combine the cluster-ranking (Rahman and Ng, 2011; Ma et al., 2014) and easy-first (Stoyanov and Eisner, 2012; Clark and Manning, 2015) clustering strategies. These prior systems all express entity-level information in the form of hand-engineered features and constraints instead of entity-level distributed representations that are learned from data. We train our system using a learning-to-search algorithm similar to SEARN (Daum´e III et al., 2009). Learning-to-search style algorithms have been employed to train coreference resolvers on trajectories of decisions similar to those that would be seen at test-time by Daum´e et al. (2005), Ma et al. (2014), and Clark and Manning (2015). Other works use structured perceptron models for the same purpose (Stoyanov and Eisner, 2012; Fernandes et al., 2012; Bj¨orkelund and Kuhn, 2014). 8 Conclusion We have presented a coreference system that captures entity-level information with distributed representations of coreference cluster pairs. 
These learned, dense, high-dimensional feature vectors provide our cluster-ranking coreference model with a strong ability to distinguish beneficial cluster merges from harmful ones. The model is trained with a learning-to-search algorithm that allows it to learn how local decisions will affect the final coreference score. We evaluate our system on the English and Chinese portions of the CoNLL 2012 Shared Task and report a substantial improvement over the current state-of-the-art. Acknowledgments We thank Will Hamilton, Jon Gauthier, and the anonymous reviewers for their thoughtful comments and suggestions. This work was supported by NSF Award IIS-1514268. 651 References Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual NLP. Conference on Natural Language Learning (CoNLL), pages 183–192. Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In The First International Conference on Language Resources and Evaluation Workshop on Linguistics Coreference, pages 563–566. Eric Bengtson and Dan Roth. 2008. Understanding the value of features for coreference resolution. In Empirical Methods in Natural Language Processing (EMNLP), pages 294–303. Anders Bj¨orkelund and Jonas Kuhn. 2014. Learning structured perceptrons for coreference resolution with latent antecedents and non-local features. In Association of Computational Linguistics (ACL), pages 47–57. Kai-Wei Chang, He He, Hal Daum´e III, and John Langford. 2015a. Learning to search for dependencies. arXiv preprint arXiv:1503.05615. Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daum´e III, and John Langford. 2015b. Learning to search better than your teacher. In International Conference on Machine Learning (ICML). Chen Chen and Vincent Ng. 2012. Combining the best of two worlds: A hybrid approach to multilingual coreference resolution. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Conference on Computational Natural Language Learning - Shared Task, pages 56–63. Kevin Clark and Christopher D. Manning. 2015. Entity-centric coreference resolution with model stacking. In Association for Computational Linguistics (ACL). Hal Daum´e III and Daniel Marcu. 2005. A largescale exploration of effective global features for a joint entity detection and tracking model. In Empirical Methods in Natural Language Processing (EMNLP), pages 97–104. Hal Daum´e III, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine Learning, 75(3):297–325. Hal Daum´e III, John Langford, and Stephane Ross. 2014. Efficient programmable learning to search. arXiv preprint arXiv:1406.1837. Pascal Denis and Jason Baldridge. 2007. A ranking approach to pronoun resolution. In International Joint Conferences on Artificial Intelligence (IJCAI), pages 1588–1593. Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In Empirical Methods in Natural Language Processing (EMNLP), pages 1971–1982. Greg Durrett, David Leo Wright Hall, and Dan Klein. 2013. Decentralized entity-level modeling for coreference resolution. In Association for Computational Linguistics (ACL), pages 114–124. Eraldo Rezende Fernandes, C´ıcero Nogueira Dos Santos, and Ruy Luiz Milidi´u. 2012. Latent structure perceptron with feature induction for unrestricted coreference resolution. 
In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Conference on Computational Natural Language Learning - Shared Task, pages 41–48. Aria Haghighi and Dan Klein. 2010. Coreference resolution in a modular, entity-centered model. In Human Language Technology and North American Association for Computational Linguistics (HLTNAACL), pages 385–393. Geoffrey Hinton and Tijmen Tieleman. 2012. Lecture 6.5-RmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4. Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing coadaptation of feature detectors. arXiv preprint arXiv:1207.0580. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Xiaoqiang Luo, Abe Ittycheriah, Hongyan Jing, Nanda Kambhatla, and Salim Roukos. 2004. A mentionsynchronous coreference resolution algorithm based on the Bell tree. In Association for Computational Linguistics (ACL), page 135. Xiaoqiang Luo. 2005. On coreference resolution performance metrics. In Empirical Methods in Natural Language Processing (EMNLP), pages 25–32. Chao Ma, Janardhan Rao Doppa, J Walker Orr, Prashanth Mannem, Xiaoli Fern, Tom Dietterich, and Prasad Tadepalli. 2014. Prune-and-score: Learning for greedy coreference resolution. In Empirical Methods in Natural Language Processing (EMNLP). Sebastian Martschat and Michael Strube. 2015. Latent structures for coreference resolution. Transactions of the Association for Computational Linguistics (TACL), 3:405–418. Andrew McCallum and Ben Wellner. 2003. Toward conditional models of identity uncertainty with application to proper noun coreference. In Proceedings of the IJCAI Workshop on Information Integration on the Web. 652 Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems (NIPS), pages 3111–3119. Vinod Nair and Geoffrey E. Hinton. 2010. Rectified linear units improve restricted boltzmann machines. In International Conference on Machine Learning (ICML), pages 807–814. Vincent Ng and Claire Cardie. 2002. Improving machine learning approaches to coreference resolution. In Association of Computational Linguistics (ACL), pages 104–111. Haoruo Peng, Kai-Wei Chang, and Dan Roth. 2015. A joint framework for coreference resolution and mention head detection. Conference on Natural Language Learning (CoNLL), 51:12. Hoifung Poon and Pedro Domingos. 2008. Joint unsupervised coreference resolution with markov logic. In Empirical Methods in Natural Language Processing (EMNLP), pages 650–659. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. Conll2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Conference on Computational Natural Language Learning - Shared Task, pages 1–40. Karthik Raghunathan, Heeyoung Lee, Sudarshan Rangarajan, Nathanael Chambers, Mihai Surdeanu, Dan Jurafsky, and Christopher Manning. 2010. A multi-pass sieve for coreference resolution. In Empirical Methods in Natural Language Processing (EMNLP), pages 492–501. Altaf Rahman and Vincent Ng. 2011. Narrowing the modeling gap: a cluster-ranking approach to coreference resolution. 
Journal of Artificial Intelligence Research (JAIR), pages 469–521. Wee Meng Soon, Hwee Tou Ng, and Daniel Chung Yong Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521–544. Veselin Stoyanov and Jason Eisner. 2012. Easyfirst coreference resolution. In International Conference on Computational Linguistics (COLING), pages 2519–2534. Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A modeltheoretic coreference scoring scheme. In Proceedings of the 6th conference on Message understanding, pages 45–52. Sam Wiseman, Alexander M Rush, Stuart M Shieber, and Jason Weston. 2015. Learning anaphoricity and antecedent ranking features for coreference resolution. In Association of Computational Linguistics (ACL), pages 92–100. Sam Wiseman, Alexander M Rush, Stuart M Shieber, and Jason Weston. 2016. Learning global features for coreference resolution. In Human Language Technology and North American Association for Computational Linguistics (HLT-NAACL). Xiaofeng Yang, Jian Su, Jun Lang, Chew Lim Tan, Ting Liu, and Sheng Li. 2008. An entity-mention model for coreference resolution with inductive logic programming. In Association of Computational Linguistics (ACL), pages 843–851. 653
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 654–665, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Effects of Creativity and Cluster Tightness on Short Text Clustering Performance Catherine Finegan-Dollak1 Reed Coke1 Rui Zhang1 Xiangyi Ye2 Dragomir Radev1 {cfdollak, reedcoke, ryanzh, yexy, radev}@umich.edu 1Department of EECS, University of Michigan, Ann Arbor, MI USA 2Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI USA Abstract Properties of corpora, such as the diversity of vocabulary and how tightly related texts cluster together, impact the best way to cluster short texts. We examine several such properties in a variety of corpora and track their effects on various combinations of similarity metrics and clustering algorithms. We show that semantic similarity metrics outperform traditional n-gram and dependency similarity metrics for kmeans clustering of a linguistically creative dataset, but do not help with less creative texts. Yet the choice of similarity metric interacts with the choice of clustering method. We find that graphbased clustering methods perform well on tightly clustered data but poorly on loosely clustered data. Semantic similarity metrics generate loosely clustered output even when applied to a tightly clustered dataset. Thus, the best performing clustering systems could not use semantic metrics. 1 Introduction Corpora of collective discourse—texts generated by multiple authors in response to the same stimulus—have varying properties depending on the stimulus and goals of the authors. For instance, when multiple puzzle-composers write crossword puzzle clues for the same word, they will try to write creative, unique clues to make the puzzle interesting and challenging; clues for “star” could be “Paparazzi’s target” or “Sky light.” In contrast, people writing a descriptive caption for a photograph can adopt a less creative style. Corpora may also differ on how similar texts within a particular class are to one another, compared to how similar they are to texts from other classes. For example, entries in a cartoon captioning contest that all relate to the same cartoon may vary widely in subject, while crossword clues for the same word would likely be more tightly clustered. This paper studies how such text properties affect the best method of clustering short texts. Choosing how to cluster texts involves two major decisions: choosing a similarity metric to determine which texts are alike, and choosing a clustering method to group those texts. We hypothesize that creativity may drive authors to express the same concept in a wide variety of ways, leading to data that can benefit from different similarity metrics than less creative texts. At the same time, we hypothesize that tightly clustered datasets—datasets where each text is much more similar to texts in its cluster than to texts from other clusters—can be clustered by powerful graph-based methods such as Markov Clustering (MCL) and Louvain, which may fail on more loosely clustered data. This paper explores the interaction of these effects. Recently, distributional semantics has been popular and successful for measuring text similarity (Socher et al., 2011; Cheng and Kartsaklis, 2015; He et al., 2015; Kenter and de Rijke, 2015; Kusner et al., 2015; Ma et al., 2015; Tai et al., 2015; Wang et al., 2015). 
Word embeddings represent similar words in similar locations in vector space: “cat” is closer to “feline” than to “bird.” It would be natural to expect such semantics-based approaches to be useful for clustering, particularly for corpora where authors have tried to express similar ideas in unique ways. And indeed, this paper will show that, depending on the choice of clustering method, semantics-based similarity 654 measures such as summed word embeddings and deep neural networks can have an advantage over more traditional similarity metrics, such as n-gram counts, n-gram tf-idf vectors, and dependency tree kernels, when applied to creative texts. However, unlike in most text similarity tasks, in clustering the choice of similarity metric interacts with both the choice of clustering method and the properties of the text. Graph-based clustering techniques can be quite effective in clustering short texts (Rangrej et al., 2011), yet this paper will show that they are sensitive to how tightly clustered the data is. Moreover, the tightness of clusters in a dataset is a property of both the underlying data and the similarity metric. We show that when the underlying data can be clustered tightly enough to use powerful graph-based clustering methods, using semantics-based similarity metrics actually creates a disadvantage compared to methods that rely on the surface form of the text, because semantic metrics reduce tightness. The remainder of this paper is organized as follows. Section 2 summarizes related work. Section 3 describes four datasets of short texts. In Section 4, we describe the similarity metrics and clustering methods used in our experiments, as well as the evaluation measures. Section 5 shows that semantics-based similarity metrics have some advantage when clustering short texts from the most creative dataset, but ultimately do not perform the best when graph-based clustering is an option. In Section 6, we demonstrate the powerful effect that tightness of clusters has on the best combination of similarity metric and clustering method for a given dataset. Finally, Section 7 draws conclusions. 2 Related Work The most similar work to the present paper is Shrestha et al. (2012), which acknowledged that the similarity metric and the clustering method could both contribute to clustering results. It compared four similarity methods and also tested four clustering methods. Unlike the present work, it did not consider distributional semantics-based similarity measures or similarity measures that incorporated deep learning. In addition, it reported that the characteristics of the corpora “overshadow[ed] the effect of the similarity measures,” making it difficult to conclude that there were any significant differences between the similarity measures. Several papers address the choice of similarity metric for short text clustering without varying the clustering method. Yan et al. (2012) proposed an alternative term weighting scheme to use in place of tf-idf when clustering using non-negative matrix factorization. King et al. (2013) used the cosine similarity between feature vectors that included context word and part-of-speech features and spelling features and applied Louvain clustering to the resulting graph. Xu et al. (2015) used a convolutional neural network to represent short texts and found that, when used with the k-means clustering algorithm, this deep semantic representation outperformed tf-idf, Laplacian eigenmaps, and average embeddings for clustering. 
Other papers focused on choosing the best clustering method for short texts, but kept the similarity metric constant. Rangrej et al. (2011) compared k-means, singular value decomposition, and affinity propagation for tweets, finding affinity propagation the most effective, using tf-idf with cosine similarity or Jaccard for a similarity measure. Errecalde et al. (2010) describe an AntTreebased clustering method. They used the cosine similarity of tf-idf vectors as well. Yin (2013) also use the cosine similarity of tf-idf vectors for a twostage clustering algorithm for tweets. One common strategy for short text clustering has been to take advantage of outside sources of knowledge (Banerjee et al., 2007; Wang et al., 2009a; Petersen and Poon, 2011; Rosa et al., 2011; Wang et al., 2014). The present work relies only on the texts themselves, not external information. 3 Datasets Collective discourse (Qazvinian and Radev, 2011; King et al., 2013) involves multiple writers generating texts in response to the same stimulus. In a corpus of texts relating to several stimuli, it may be desirable to cluster according to which stimulus each text relates to—for instance, grouping all of the news headlines about the same event together. Here, we consider texts triggered by several types of stimuli: photographs that need descriptive captions, cartoons that need humorous captions, and crossword answers that need original clues. Each need shapes the properties of the texts. Pascal and Flickr Captions. The Pascal Captions dataset (hereinafter PAS) and the 8K ImageFlickr dataset (Rashtchian et al., 2010) are sets of captions solicited from Mechanical Turkers for photographs from Flickr and from the Pat655 tern Analysis, Statistical Modeling, and Computational Learning (PASCAL) Visual Object Classes Challenge (Everingham et al., 2010). PAS includes twenty categories of images (e.g., dogs, as in Example (1)) and 4998 captions. Each category has fifty images with approximately five captions for each image. We use the category as the gold standard cluster. The 8K ImageFlickr set includes 38,390 captions for 7663 photographs; we treat the image a caption is associated with as the gold standard cluster. To keep dataset sizes comparable, we use a randomly selected subset of 5000 captions (998 clusters) from ImageFlickr (hereinafter FLK). (1) “a man walking a small dog on a very wavy beach” “A person in a large black coats walks a white dog on the beach through rough waves.” “Walking a dog on the edge of the ocean” This task did not encourage creativity; instructions said to “describe the image in one complete but simple sentence.” This could lead to sentences within a cluster being rather similar to each other. However, because photographs may contain overlapping elements—for instance, a photograph in the “bus” category of PAS might also show cars, while a photograph in the “cars” category could also contain a bus—texts in one cluster can also be quite similar to texts from other clusters. Thus, these datasets should not be very tightly clustered. New Yorker Cartoon Captions. The New Yorker magazine has a weekly competition in which readers submit possible captions for a captionless cartoon (Example (2)) (Radev et al., 2015). We use the cartoon each caption is associated with as its gold standard cluster. The complete dataset includes over 1.9 million captions for 366 cartoons. For this work, we use a total of 5000 captions from 20 randomly selected cartoons as the “TOON” dataset. (2) “Objection, Your Honor! 
Alleged killer whale.” “My client maintains that the penguin had a gun!” “I demand a change of venue to a maritime court!” Since caption writers seek to stand out from the crowd, we expect high creativity. This may encourage a more varied vocabulary than the FLK and PAS captions that merely describe the image. We also expect wide variation in the meanings of captions for the same cartoon, due to the different joke senses submitted for each, leading to low intra-cluster similarity. Moreover, some users may submit the same caption for more than one cartoon, so we can expect surprisingly high intercluster similarity despite the wide variation in cartoon prompt images. We therefore do not expect TOON to be tightly clustered. Crossword Clues. A dataset of particularly creative texts is comprised of crossword clues.1 We use the clues as texts and the answer words as their gold standard cluster; all of the clues in Example (3) belong to the “toe” cluster. (3) Part of the foot Little piggy tic-tacThe third O of OOO The complete crossword clues dataset includes 1.7M different clues corresponding to 174,638 unique answers. The “CLUE” dataset includes 5000 clues corresponding to 20 unique answers selected by randomly choosing answers that have 250 or more unique clues, and then randomly choosing 250 of those clues for each answer. Since words repeat, crossword authors must be creative to come up with clues that will not bore cruciverbalists. CLUE should thus contain many alternative phrasings for essentially the same idea. At the same time, there is likely to be relatively little overlap between clues for different answers, so CLUE should be tightly clustered. 1Collected from http://crosswordgiant.com/ 656 4 Method Here we describe the similarity metrics and clustering methods, as well as evaluation measures. 4.1 Similarity Metrics We hypothesize that creative texts with wide vocabularies will benefit from similarity metrics based on semantic representation of the text, rather than its surface form. We therefore compare three metrics that rely on surface forms of words—ngram count vectors, tf-idf vectors, and dependency tree segment counts—to three semantic ones— summed Word2Vec embeddings, LSTM autoencoders, and skip-thought vectors. In each case, we represent texts as vectors and find their cosine similarities; if cosine similarity can be negative, we add one and normalize by two to ensure similarity in the range [0, 1]. N-Gram Counts. First we consider n-gram count vectors. We use three variations: (1) unigrams, (2) unigrams and bigrams, and (3) unigrams, bigrams, and trigrams. N-Gram tf-idf. We also consider weighting n-grams by tf-idf, as calculated by sklearn (Pedregosa et al., 2011). Dependency Counts. Grammatical information has been found to be useful in text, particularly short text, similarity. (Liu and Gildea, 2005; Zhang et al., 2005; Wang et al., 2009b; Heilman and Smith, 2010; Tian et al., 2010; ˇSari´c et al., 2012; Tai et al., 2015). To leverage this information, previous work has used dependency kernels (Tian et al., 2010), which measure similarity by the fraction of identical dependency parse segments between two sentences. Here, we accomplish the same effect using a count vector for each sentence, with the dependency parse segments as the vocabulary. We define the set of segments for a dependency parse to consist of, for each word, the word, its parent, and the dependency relation that connects them as shown in Example (4). (4) Part of shoe a. Segment 1: (part, ROOT, nsubj) b. 
Segment 2: (of, part, prep) c. Segment 3: (shoe, of, pobj) Word2Vec. For each word, we obtain, if possible, a vector learned via Word2Vec (Mikolov et al., 2013) from the Google News corpus.2 We repre2https://code.google.com/archive/p/ word2vec/ sent a sentence as the normalized sum of its word vectors. LSTM Autoencoder. We use Long ShortTerm Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997) to build another semanticsbased sentence representation. We train an LSTM autoencoder consisting of an encoder network and a decoder network. The encoder reads the input sentence and produces a single vector as the hidden state at the last time step. The decoder takes this hidden state vector as input and attempts to reconstruct the original sentence. The LSTM autoencoder is trained to minimize the reconstruction loss. After training, we extract the hidden state at the last time step of encoder as the vector representation for a sentence. We use 300-dimensional word2vec vectors pretrained on GoogleNews and generate 300-dimensional hidden vectors. LSTM autoencoders are separately trained for each dataset with 20% for validation. Skip-thoughts (Kiros et al., 2015) trains encoder-decoder Recurrent Neural Networks (RNN) without supervision to predict the next and the previous sentences given the current sentence. The pretrained skip-thought model computes vectors as sentence representations. 4.2 Clustering Methods We explore five clustering methods: k-means, spectral, affinity propagation, Louvain, and MCL. K-means is a popular and straightforward clustering algorithm (Berkhin, 2006) that takes a parameter k, the number of clusters, and uses an expectation-maximization approach to find k centroids in the data. In the expectation phase points are assigned to their nearest cluster centroid. In the maximization phase the centroids of are recomputed for each cluster of assigned points. Kmeans is not a graph-based clustering algorithm, but rather operates in a vector space. Spectral clustering (Donath and Hoffman, 1973; Shi and Malik, 2000; Ng et al., 2001) is a graph-based clustering approach that finds the graph Laplacian of a similarity matrix, builds a matrix of the first k eigenvectors of the Laplacian, and then applies further clustering to this matrix. The method can be viewed as an approximation of a normalized min-cuts algorithm or of a random walks approach. We use the default implementation provided by sklearn, which applies a Gaussian kernel to determine the graph Laplacian and uses 657 k-means for the subsequent clustering step. Affinity propagation finds exemplars for each cluster and then assigns nodes to a cluster based on these exemplars (Frey and Dueck, 2007). This involves updating two matrices R and A, respectively representing the responsibility and availability of each node. A high value for R(i,k) indicates that node xi would be a good exemplar for cluster k. A high value for A(i,k) indicates that node xi is likely to belong to cluster k. We use the default implementation provided by sklearn. Louvain initializes each node to be its own cluster, then greedily maximizes modularity (Section 6.1) by iteratively merging clusters that are highly interconnected (Blondel et al., 2008). Markov Cluster Algorithm (MCL) simulates flow on a network via random walk (Van Dongen, 2000). The sequence of nodes is represented via a Markov chain. 
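As a rough sketch of how a similarity metric feeds into these clustering methods, the snippet below builds unigram tf-idf vectors, clusters them directly with k-means, and passes a cosine-similarity matrix to spectral clustering as a precomputed affinity. This is only one plausible wiring: the experiments reported here use sklearn's default spectral clustering (a Gaussian kernel over the vectors), and the short texts listed are made up for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans, SpectralClustering
from sklearn.metrics.pairwise import cosine_similarity

texts = ["part of the foot", "little piggy", "sky light", "paparazzi's target"]

# Unigram tf-idf representation of each short text.
X = TfidfVectorizer(ngram_range=(1, 1)).fit_transform(texts)

# k-means operates on the vectors themselves.
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Spectral clustering can instead take a similarity matrix; embedding-based
# similarities, which may be negative, would first be shifted into [0, 1].
S = cosine_similarity(X)
sp_labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                               random_state=0).fit_predict(S)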
By applying inflation to the transition matrix, the algorithm can maintain the cluster structure pronounced in the transition matrix of this random walk—a structure that would otherwise disappear over time.3 4.3 Evaluation Methods Adjusted Rand Index We use the sklearn implementation of the Adjusted Rand Index (ARI)4 (Hubert and Arabie, 1985): ARI = RI −Expected RI max RI −Expected RI (1) where RI is the Rand Index, RI = TP + TN TP + FP + FN + TN (2) TP is the number of true positives, TN is true negatives, and FP and FN are false positives and false negatives, respectively. The Rand Index ranges from 0 to 1. ARI adjusts the Rand Index for chance, so that the score ranges from -1 to 1. Random labeling will achieve an ARI score close to 0; perfect labeling achieves an ARI of 1. Purity is a score in the [0, 1] range that indicates to what extent sentences in the same predicted cluster actually belong to the same cluster. Given Ω= {ω1, ω2, ..., ωK}, the predicted clusters, C = {c1, c2, ..., cJ}, the true clusters, and N, the number of examples, purity is Purity(Ω, C) = 1 N X k∈K max j∈J |ωk ∩cj| (3) 3We use the implementation from http://micans. org/mcl/ with inflation=2.0. 4Equivalent to Cohen’s Kappa (Warrens, 2008). Normalized Mutual Information (NMI). We use the sklearn implementation of NMI: NMI(Ω, C) = MI(Ω, C) p H(C) · H(Ω) (4) The numerator is the mutual information (MI) of predicted cluster labels Ωand true cluster labels C. MI describes how much knowing what the predicted clusters are increases knowledge about what the actual classes are. Using marginal entropy (H(x)), NMI normalizes MI so that it ranges from 0 to 1. If C and Ωare identical—that is, if the clusters are perfect—NMI will be 1. 5 Vocabulary Width 5.1 Descriptive Statistics for Vocabulary Width We predict that creative texts have a wider vocabulary than functional texts. We use two measures to reflect this wide vocabulary: the type/token ratio in the dataset (TTR), and that ratio normalized by the mean length of a text in the dataset. TTR is an obvious estimate of the width of the vocabulary of a corpus. However, all other things being equal, a corpus of many very short texts triggered by the same stimulus would have more repeated words, proportional to the total number of tokens in the corpus, than would a corpus of a smaller number of longer texts. We might therefore normalize the ratio of types to tokens by dividing by the mean length of a text in the dataset, leading to the normalized type-to-token ratio (NTTR) and TTR values shown in Table 1. CLUE TOON PAS FLK TTR 0.1680 0.1064 0.0625 0.0561 NTTR 0.0377 0.0086 0.0058 0.0047 Table 1: Vocabulary properties of each dataset FLK, PAS, and CLUE conform to expectations. The creative CLUE has TTR more than double that of the more functional PAS and FLK. The effect is more pronounced using NTTR. Surprisingly, TOON falls closer to the PAS and FLK end of the spectrum, suggesting that vocabulary width does not capture the creativity in the captioning competition; perhaps the creativity of cartoon captions is about expressing different ideas, rather than finding unique ways to express the same idea. For the experiments based on vocabulary width, we therefore compare PAS and CLUE. 658 5.2 Experiments We hypothesize that if a dataset uses a wide variety of words to express the same ideas, similarity metrics that rely on the surface form of the sentence will be at a disadvantage compared to similarity metrics based in distributional semantics. 
Thus, word2vec, LSTM autoencoders, and skip-thoughts ought to perform better than the n-gram-based methods and dependency count method when applied to CLUE, but should enjoy no advantage when applied to PAS. We begin by comparing the performance of all similarity metrics on PAS and CLUE, using kmeans for clustering. We then also examine their performance with MCL. 5.3 Results and Discussion Table 2 compares the performance of all similarity metrics on PAS and CLUE using k-means and MCL. Using k-means on PAS, the unigram tf-idf similarity metric gives the strongest performance for purity and NMI and came in a close second for ARI. LSTM slightly outperformed the other similarity metrics on ARI, but had middle-of-theroad results on the other evaluations. Overall, the semantics-based similarity metrics gave reasonable but not exceptional ARI and purity results, but were at the low end on NMI. This is consistent with our hypothesis that when authors are not trying to express creativity by using a wider vocabulary, surface-based similarity metrics suffice. For k-means on CLUE, the picture is quite different: the semantics-based similarity metrics markedly outperformed any other similarity metric on ARI. LSTM also provides the best purity score, followed by skip-thought. The semantics-based metrics do not stand out for NMI, though. Based on these results, we conclude that semantics-based measures provide a significant advantage over traditional similarity metrics when using k-means on the wide-vocabulary, creative CLUE. When clustering with MCL, however, the semantics-based methods perform exceptionally poorly on both datasets. Interestingly, the n-grambased similarity metrics performed very well when paired with MCL on CLUE—outperforming the best of the k-means scores—while the same metrics performed terribly with MCL on PAS. We hypothesize that the semantics-based similarity metrics produce less tightly clustered data than the surface-form-based metrics do, and that this may make clustering difficult for some graphbased clustering methods. The next section describes how we test this hypothesis. 6 Tightness of Clusters 6.1 Descriptive Statistics for Tightness Two pieces contribute to cluster tightness: the dataset itself and the choice of similarity metric. To illustrate, we represent each text with the vector for its similarity metric—for instance, the sum of its word2vec vectors or the unigram tf-idf vector— and reduce it to two dimensions using linear discriminant analysis. We plot five randomly selected gold standard clusters. Plots for unigram tf-idf and word2vec representations of PAS and CLUE are shown in Figures 1 and 2. These support the intuition that semantics-based similarity metrics are not as tightly clustered as n-gram-based metrics. Note also that the CLUE unigram tf-idf clusters appear tighter than the PAS unigram tf-idf clusters. To quantify this, we compute modularity (Newman, 2004; Newman, 2006):5 Q = 1 2m X ij  Aij −kikj 2m  δ(ci, cj) (5) Aij is the edge weight between nodes i and j. δ(ci, cj) indicates whether i and j belong to the same cluster. m is the number of edges. ki is the degree of vertex i, so kikj 2m is the expected number of edges between i and j in a random graph. Thus, modularity is highest when nodes in a cluster are highly interconnected, but sparsely connected to nodes in different clusters. We use this statistic in an unconventional way, determining the modularity of the golden clusters. 
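A direct implementation of Equation 5 applied to the gold-standard partition might look as follows. The sketch assumes a dense pairwise similarity matrix already normalized into [0, 1] and ignores self-similarities; the paper does not spell out these graph-construction details.

import numpy as np

def gold_modularity(similarity, gold_labels):
    # Modularity Q (Eq. 5) of a weighted similarity graph, using the
    # gold-standard clusters as the partition.
    A = np.array(similarity, dtype=float)
    np.fill_diagonal(A, 0.0)               # drop self-similarity edges
    labels = np.asarray(gold_labels)
    k = A.sum(axis=1)                      # weighted degree k_i
    two_m = A.sum()                        # 2m = total edge weight
    same_cluster = labels[:, None] == labels[None, :]   # delta(c_i, c_j)
    return ((A - np.outer(k, k) / two_m) * same_cluster).sum() / two_m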
Table 3 shows the modularities for all four datasets using the unigram, trigram, unigram tfidf, trigram tf-idf, dependency, word2vec, and skipthoughts similarity metrics. As suggested by Figures 1 and 2, the CLUE surface-formbased similarities have the highest modularity by far. The surface-form-based similarities for all datasets have much higher modularity than any of the semantics-based similarities; indeed, the 5Newman (2010) notes that modularity for even a perfectly mixed network generally cannot be 1 and describes a normalized modularity formula. We calculated both normalized and non-normalized modularity and found the pattern of results to be the same, so we report only modularity. 659 k-Means MCL PAS CLUE PAS CLUE Metric ARI Purity NMI ARI Purity NMI ARI Purity NMI ARI Purity NMI Unigram 0.0286 0.141 0.110 0.0137 0.173 0.153 1.00E-05 0.058 0.051 0.0620 0.527 0.439 Bigram 0.0230 0.143 0.111 0.0124 0.165 0.142 2.50E-05 0.065 0.070 0.0835 0.585 0.465 Trigram 0.0289 0.139 0.108 0.0148 0.178 0.156 3.60E-05 0.069 0.081 0.1034 0.608 0.478 Uni. tf-idf 0.0445 0.189 0.169 0.0180 0.202 0.188 2.20E-05 0.061 0.060 0.1482 0.643 0.506 Bi. tf-idf 0.0287 0.158 0.135 0.0156 0.205 0.205 3.86E-04 0.104 0.135 0.1327 0.722 0.544 Tri. tf-idf 0.0345 0.176 0.142 0.0134 0.195 0.213 6.49E-03 0.212 0.230 0.1280 0.751 0.561 Dependency 0.0122 0.131 0.104 0.0071 0.169 0.207 2.07E-02 0.280 0.264 0.0832 0.745 0.543 Word2Vec 0.0274 0.142 0.103 0.0527 0.189 0.165 0.000 0.050 0.000 0.0000 0.050 0.000 LSTM 0.0453 0.170 0.142 0.0837 0.240 0.202 0.000 0.050 0.000 0.0000 0.050 0.000 Skipthought 0.0311 0.140 0.106 0.0691 0.215 0.180 0.000 0.050 0.000 0.0000 0.050 0.0009 Table 2: A comparison of all similarity metrics on PAS and CLUE datasets, clustered using k-means and MCL. For all evaluations, higher scores are better. Figure 1: Plots of unigram tf-idf (left) and word2vec (right) vectors representing five randomly selected clusters of CLUE: clues for words “ets,” “stay,” “yes,” “easel,” and “aha.” Figure 2: Plots of unigram tf-idf (left) and word2vec (right) vectors representing five randomly selected clusters of PAS: images containing “bus,” “boat,” “car,” “bird,” and “motorbike.” semantics-based similarities rarely have modularity much higher than zero. Thus, we conclude both that CLUE is more tightly clustered than the other datasets and that surface-form-based measures yield tighter clusters than semantics-based measures. CLUE’s tight clustering could be due in part to its particularly short texts. Additionally, it might reflect the semantics of the dataset: words that the clues hint at may be less similar to one another than the categories in PAS are to each other. For instance, some images in PAS’s “bus” category include cars, and vice-versa. The difference between semantics-based and surface-form similarity metrics likely arises from the fact that similarity of a word pair is a yes-or660 Metric PAS Clues TOON FLK Unigram 0.0254 0.1849 0.0214 0.0065 Bigram 0.0312 0.2216 0.0293 0.0103 Trigram 0.0347 0.2447 0.0352 0.0135 Uni. tf-idf 0.0587 0.3005 0.0519 0.0184 Bi. tf-idf 0.0877 0.3875 0.0950 0.0394 Tri. tf-idf 0.0347 0.4339 0.1311 0.0618 Dependency 0.0799 0.4729 0.0451 0.0299 Word2Vec 0.0020 0.0036 0.0008 0.0004 LSTM 0.0072 0.0121 0.0020 0.0009 Skipthought 0.0009 0.0028 0.0006 0.0003 Table 3: Modularity for all datasets no question to surface-form-based metrics, but a question of degree to semantics-based ones. 
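A minimal numeric illustration of this contrast, using toy three-dimensional vectors in place of the 300-dimensional word2vec embeddings used in the experiments (the vector values below are invented purely for illustration):

import numpy as np

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

emb = {"man":   np.array([0.9, 0.1, 0.0]), "dog": np.array([0.1, 0.9, 0.2]),
       "woman": np.array([0.8, 0.2, 0.1]), "cat": np.array([0.2, 0.8, 0.3]),
       "walks": np.array([0.3, 0.3, 0.9]), "with": np.array([0.2, 0.1, 0.1])}

s1, s2 = ["man", "walks", "dog"], ["woman", "with", "cat"]

# Surface-form view: the two texts share no unigrams, so their count or
# tf-idf vectors are orthogonal and the cosine similarity is exactly 0.
print(len(set(s1) & set(s2)))      # 0

# Semantic view: the summed word vectors still point in similar directions,
# giving a clearly non-zero similarity between the two texts.
v1 = sum(emb[w] for w in s1)
v2 = sum(emb[w] for w in s2)
print(round(cos(v1, v2), 2))       # well above 0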
According to semantics-based methods, “cat” is more similar to “feline” than it is to “dog,” but more similar to “dog” than to “motorcycle.” This creates some similarity between texts from different clusters, blurring the lines between them. Thus, “The man walks his dog” and “A woman with a cat” are entirely dissimilar according to surface form methods, but not according to the semantics-based measures. Even if the nodes in a cluster are highly interconnected, if the connections between nodes in different clusters are too strong, modularity will be low. To determine whether cluster tightness influences the best clustering method, we tested all clustering methods on all four datasets using unigram, trigram, unigram tf-idf, trigram tf-idf, word2vec, and skipthought similarity metrics. 6.2 Results and Discussion As can be seen in Figure 3, the best ARI results by a large margin were those on the tightly clustered CLUE. Louvain, which provides the best ARI for CLUE, and MCL, which provides the second best, both performed most strongly when paired with the surface-form-based similarity metrics (n-gram counts, tf-idf, and dependency count), which had high modularity relative to the semantics-based metrics. Although CLUE also differs from the other datasets in that it has the shortest mean text length, text length by itself cannot explain the observed differences in performance, since the pattern of graph-based clustering methods working best with modular data is consistent within each dataset as well as between datasets. CLUE is also the only dataset where the semantics-based similarity metrics performed exceptionally well with any of the clustering methods. Recall from Table 1 that CLUE had a markedly wider vocabulary than any other dataset. This further supports our findings in Section 5.3 regarding how creativity affects the usefulness of semantics-based similarity metrics. FLK, which had the lowest modularity, cannot be clustered by the spectral, Louvain, or MCL algorithms. K-means provides the strongest performance, followed by affinity propagation. TOON has the worst ARI results. Its bestperforming clustering methods are the graphbased Louvain and MCL methods. Both perform well only when paired with the most modular similarity metrics. Louvain seems less sensitive to modularity than MCL does. MCL’s best performance by far for TOON is when it is paired with trigram tf-idf, which also had the highest modularity; its performance when paired with the lowermodularity similarity metrics rapidly falls away. In contrast, Louvain fares reasonably well with the lower n-gram tf-idfs, which also had lower modularity than trigram tf-idf. Louvain and MCL follow a similar pattern on PAS: both perform at their peak on the most modular similarity metric (dependency), but Louvain handles slightly less modular similarity metrics nearly as well as the most modular one, while MCL quickly falters. K-means’ performance is not correlated with modularity. This makes sense, as k-means is the only non-graph-based method. Methods like MCL, which is based on a random walk, may be stymied by too many highly-weighted paths between clusters; the random walk can too easily reach a different neighborhood from where it started. But k-means relies on how close texts are to centroids, not to other texts, and so would be less affected. The fact that k-means nevertheless performs poorly on TOON suggests that this dataset may be particularly difficult to cluster. 
An interesting test would be to measure inter-annotator agreement on TOON. 7 Conclusions and Future Work This work has shown that creativity can influence the best way to cluster text. When using k-means to cluster a dataset where authors tried to be creative, similarity metrics utilizing distri661 Figure 3: All similarity metrics and all clustering methods for the four datasets. butional semantics outperformed those that relied on surface forms. We also showed that semanticsbased methods do not provide a notable advantage when applying k-means to less creative datasets. Since traditional similarity metrics are often faster to calculate, use of slower semantics-based methods should be limited to creative datasets. Unlike most work on clustering short texts, we examined how the similarity metric interacts with the clustering method. Even for a creative dataset, if the underlying data is tightly clustered, the use of semantics-based similarity measures can actually hurt performance. Traditional metrics applied to such tightly clustered data generate more modular output that enables the use of sophisticated, graph-based clustering methods such as MCL and Louvain. When either the underlying data or the similarity metrics applied to it produce loose clusters with low modularity, the sophisticated graph clustering algorithms fail, and we must fall back on simpler methods. Future work can manipulate datasets’ text properties to confirm that a specific property is the cause of observed differences in clustering. Such work should alter the datasets TTR and NTTR while holding mean length of texts constant. A pilot effort to use word embeddings to alter the variety of vocabulary in a dataset has so far not succeeded, but future experiments altering vocabulary width or modularity of a dataset and finding that the modified dataset behaved like natural datasets with the same properties could increase confidence in causality. Future work can also explore finer clusters within these datasets, such as clustering CLUE by word sense of the answers and TOON by joke sense. These results are a first step towards determining the best way to cluster a new dataset based on properties of the text. Future work will explore further how the goals of short text authors translate into measurable properties of the texts they write, and how measuring those properties can help predict which similarity metrics and clustering methods will combine to provide the best performance. Acknowledgments The authors are grateful for the help and insights of Jonathan Kummerfeld, Spruce Bondera, Kyle Bouwens, Kurt McEwan, Francisco Rivera Reyes, Clayton Thorrez, Chongruo Wu, Yue Xu, Harry Zhang, and Joel Tetreault. We appreciate the comments of the anonymous reviewers, which helped us improve the paper. Xiangyi Ye was sponsored by the University of Michigan Undergraduate Research Opportunity Program (UROP). 662 References Somnath Banerjee, Krishnan Ramanathan, and Ajay Gupta. 2007. Clustering short texts using Wikipedia. In Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 787– 788. ACM. Pavel Berkhin. 2006. A survey of clustering data mining techniques. In Grouping Multidimensional Data, pages 25–71. Springer. Vincent D. Blondel, Jean-Loup Guillaume, Renaud Lambiotte, and Etienne Lefebvre. 2008. Fast unfolding of communities in large networks. Journal of statistical mechanics: theory and experiment, 2008(10):P10008. Jianpeng Cheng and Dimitri Kartsaklis. 2015. 
Syntaxaware multi-sense word embeddings for deep compositional models of meaning. arXiv preprint arXiv:1508.02354. William E. Donath and Alan J. Hoffman. 1973. Lower bounds for the partitioning of graphs. IBM Journal of Research and Development, 17(5):420–425. Marcelo Luis Errecalde, Diego Alejandro Ingaramo, and Paolo Rosso. 2010. A new anttree-based algorithm for clustering short-text corpora. Journal of Computer Science & Technology, 10. Mark Everingham, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. 2010. The PASCAL visual object classes (VOC) challenge. International Journal of Computer Vision, 88(2):303–338. Brendan J. Frey and Delbert Dueck. 2007. Clustering by passing messages between data points. Science, 315(5814):972–976. Hua He, Kevin Gimpel, and Jimmy Lin. 2015. Multiperspective sentence similarity modeling with convolutional neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1576–1586. Michael Heilman and Noah A. Smith. 2010. Tree edit models for recognizing textual entailments, paraphrases, and answers to questions. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10, pages 1011–1019, Stroudsburg, PA, USA. Association for Computational Linguistics. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Lawrence Hubert and Phipps Arabie. 1985. Comparing partitions. Journal of Classification, 2(1):193– 218. Tom Kenter and Maarten de Rijke. 2015. Short text similarity with word embeddings. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pages 1411– 1420. ACM. Ben King, Rahul Jha, Dragomir R Radev, and Robert Mankoff. 2013. Random walk factoid annotation for collective discourse. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. Ryan Kiros, Yukun Zhu, Ruslan R. Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Systems, pages 3276–3284. Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Q. Weinberger. 2015. From word embeddings to document distances. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 957–966. Ding Liu and Daniel Gildea. 2005. Syntactic features for evaluation of machine translation. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 25–32. Chenglong Ma, Weiqun Xu, Peijia Li, and Yonghong Yan. 2015. Distributional representations of words for short text classification. In Proceedings of NAACL-HLT, pages 33–38. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119. Mark E. J. Newman. 2004. Analysis of weighted networks. Physical Review E, 70(5):056131. Mark E. J. Newman. 2006. Modularity and community structure in networks. Proceedings of the National Academy of Sciences, 103(23):8577–8582. Mark Newman. 2010. Networks: An Introduction. Oxford University Press. Andrew Y. Ng, Michael I. Jordan, and Yair Weiss. 2001. On spectral clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems, pages 849–856. MIT Press. F. 
Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830. 663 Henry Petersen and Josiah Poon. 2011. Enhancing short text clustering with small external repositories. In Proceedings of the Ninth Australasian Data Mining Conference-Volume 121, pages 79–90. Australian Computer Society, Inc. Vahed Qazvinian and Dragomir R. Radev. 2011. Learning from collective human behavior to introduce diversity in lexical choice. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT ’11, pages 1098–1108, Stroudsburg, PA, USA. Association for Computational Linguistics. Dragomir R. Radev, Amanda Stent, Joel R. Tetreault, Aasish Pappu, Aikaterini Iliakopoulou, Agustin Chanfreau, Paloma de Juan, Jordi Vallmitjana, Alejandro Jaimes, Rahul Jha, and Robert Mankoff. 2015. Humor in collective discourse: Unsupervised funniness detection in the new yorker cartoon caption contest. CoRR, abs/1506.08126. Aniket Rangrej, Sayali Kulkarni, and Ashish V. Tendulkar. 2011. Comparative study of clustering techniques for short text documents. In Proceedings of the 20th International Conference Companion on World Wide Web, WWW ’11, pages 111–112, New York, NY, USA. ACM. Cyrus Rashtchian, Peter Young, Micah Hodosh, and Julia Hockenmaier. 2010. Collecting image annotations using amazon’s mechanical turk. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk, pages 139–147. Association for Computational Linguistics. Kevin Dela Rosa, Rushin Shah, Bo Lin, Anatole Gershman, and Robert Frederking. 2011. Topical clustering of tweets. Proceedings of the ACM SIGIR: SWSM. Frane ˇSari´c, Goran Glavaˇs, Mladen Karan, Jan ˇSnajder, and Bojana Dalbelo Baˇsi´c. 2012. Takelab: Systems for measuring semantic text similarity. In Proceedings of the First Joint Conference on Lexical and Computational Semantics, pages 441–448. Association for Computational Linguistics. Jianbo Shi and Jitendra Malik. 2000. Normalized cuts and image segmentation. Technical report. Prajol Shrestha, Christine Jacquin, and B´eatrice Daille, 2012. Computational Linguistics and Intelligent Text Processing: 13th International Conference, CICLing 2012, New Delhi, India, March 11-17, 2012, Proceedings, Part II, chapter Clustering Short Text and Its Evaluation, pages 169–180. Springer Berlin Heidelberg, Berlin, Heidelberg. Richard Socher, Eric H. Huang, Jeffrey Pennin, Christopher D. Manning, and Andrew Y. Ng. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems, pages 801– 809. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075. Yun Tian, Haisheng Li, Qiang Cai, and Shouxiang Zhao. 2010. Measuring the similarity of short texts by word similarity and tree kernels. In Information Computing and Telecommunications (YC-ICT), 2010 IEEE Youth Conference on, pages 363–366. IEEE. Stijn Van Dongen. 2000. Graph Clustering by Flow Simulation. Ph.D. thesis, University of Utrecht. Jun Wang, Yiming Zhou, Lin Li, Biyun Hu, and Xia Hu. 2009a. 
Improving short text clustering performance with keyword expansion. In The Sixth International Symposium on Neural Networks (ISNN 2009), pages 291–298. Springer. Kai Wang, Zhaoyan Ming, and Tat-Seng Chua. 2009b. A syntactic tree matching approach to finding similar questions in community-based qa services. In Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’09, pages 187–194, New York, NY, USA. ACM. Yu Wang, Lihui Wu, and Hongyu Shao. 2014. Clusters merging method for short texts clustering. Open Journal of Social Sciences, 2(09):186. Peng Wang, Jiaming Xu, Bo Xu, Cheng-Lin Liu, Heng Zhang, Fangyuan Wang, and Hongwei Hao. 2015. Semantic clustering and convolutional neural network for short text categorization. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, volume 2, pages 352–357. Matthijs J. Warrens. 2008. On the equivalence of cohen’s kappa and the hubert-arabie adjusted rand index. Journal of Classification, 25(2):177–183. Jiaming Xu, Peng Wang, Guanhua Tian, Bo Xu, Jun Zhao, Fangyuan Wang, and Hongwei Hao. 2015. Short text clustering via convolutional neural networks. In Proceedings of NAACL-HLT, pages 62– 69. Xiaohui Yan, Jiafeng Guo, Shenghua Liu, Xue-qi Cheng, and Yanfeng Wang. 2012. Clustering short text using ncut-weighted non-negative matrix factorization. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, pages 2259–2262. ACM. Jie Yin. 2013. Clustering microtext streams for event identification. In IJCNLP, pages 719–725. 664 Min Zhang, Jian Su, Danmei Wang, Guodong Zhou, and Chew Lim Tan. 2005. Discovering relations between named entities from a large raw corpus using tree similarity-based clustering. In Natural Language Processing–IJCNLP 2005, pages 378–389. Springer. 665
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 666–675, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Generative Topic Embedding: a Continuous Representation of Documents Shaohua Li1,2 Tat-Seng Chua1 Jun Zhu3 Chunyan Miao2 [email protected] [email protected] [email protected] [email protected] 1. School of Computing, National University of Singapore 2. Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly (LILY) 3. Department of Computer Science and Technology, Tsinghua University Abstract Word embedding maps words into a lowdimensional continuous embedding space by exploiting the local word collocation patterns in a small context window. On the other hand, topic modeling maps documents onto a low-dimensional topic space, by utilizing the global word collocation patterns in the same document. These two types of patterns are complementary. In this paper, we propose a generative topic embedding model to combine the two types of patterns. In our model, topics are represented by embedding vectors, and are shared across documents. The probability of each word is influenced by both its local context and its topic. A variational inference method yields the topic embeddings as well as the topic mixing proportions for each document. Jointly they represent the document in a low-dimensional continuous space. In two document classification tasks, our method performs better than eight existing methods, with fewer features. In addition, we illustrate with an example that our method can generate coherent topics even based on only one document. 1 Introduction Representing documents as fixed-length feature vectors is important for many document processing algorithms. Traditionally documents are represented as a bag-of-words (BOW) vectors. However, this simple representation suffers from being high-dimensional and highly sparse, and loses semantic relatedness across the vector dimensions. Word Embedding methods have been demonstrated to be an effective way to represent words as continuous vectors in a low-dimensional embedding space (Bengio et al., 2003; Mikolov et al., 2013; Pennington et al., 2014; Levy et al., 2015). The learned embedding for a word encodes its semantic/syntactic relatedness with other words, by utilizing local word collocation patterns. In each method, one core component is the embedding link function, which predicts a word’s distribution given its context words, parameterized by their embeddings. When it comes to documents, we wish to find a method to encode their overall semantics. Given the embeddings of each word in a document, we can imagine the document as a “bag-of-vectors”. Related words in the document point in similar directions, forming semantic clusters. The centroid of a semantic cluster corresponds to the most representative embedding of this cluster of words, referred to as the semantic centroids. We could use these semantic centroids and the number of words around them to represent a document. In addition, for a set of documents in a particular domain, some semantic clusters may appear in many documents. By learning collocation patterns across the documents, the derived semantic centroids could be more topical and less noisy. Topic Models, represented by Latent Dirichlet Allocation (LDA) (Blei et al., 2003), are able to group words into topics according to their collocation patterns across documents. 
When the corpus is large enough, such patterns reflect their semantic relatedness, hence topic models can discover coherent topics. The probability of a word is governed by its latent topic, which is modeled as a categorical distribution in LDA. Typically, only a small number of topics are present in each document, and only a small number of words have high probability in each topic. This intuition motivated Blei et al. (2003) to regularize the topic distributions with Dirichlet priors. 666 Semantic centroids have the same nature as topics in LDA, except that the former exist in the embedding space. This similarity drives us to seek the common semantic centroids with a model similar to LDA. We extend a generative word embedding model PSDVec (Li et al., 2015), by incorporating topics into it. The new model is named TopicVec. In TopicVec, an embedding link function models the word distribution in a topic, in place of the categorical distribution in LDA. The advantage of the link function is that the semantic relatedness is already encoded as the cosine distance in the embedding space. Similar to LDA, we regularize the topic distributions with Dirichlet priors. A variational inference algorithm is derived. The learning process derives topic embeddings in the same embedding space of words. These topic embeddings aim to approximate the underlying semantic centroids. To evaluate how well TopicVec represents documents, we performed two document classification tasks against eight existing topic modeling or document representation methods. Two setups of TopicVec outperformed all other methods on two tasks, respectively, with fewer features. In addition, we demonstrate that TopicVec can derive coherent topics based only on one document, which is not possible for topic models. The source code of our implementation is available at https://github.com/askerlee/topicvec. 2 Related Work Li et al. (2015) proposed a generative word embedding method PSDVec, which is the precursor of TopicVec. PSDVec assumes that the conditional distribution of a word given its context words can be factorized approximately into independent log-bilinear terms. In addition, the word embeddings and regression residuals are regularized by Gaussian priors, reducing their chance of overfitting. The model inference is approached by an efficient Eigendecomposition and blockwiseregression method (Li et al., 2016b). TopicVec differs from PSDVec in that in the conditional distribution of a word, it is not only influenced by its context words, but also by a topic, which is an embedding vector indexed by a latent variable drawn from a Dirichlet-Multinomial distribution. Hinton and Salakhutdinov (2009) proposed to model topics as a certain number of binary hidden variables, which interact with all words in the document through weighted connections. Larochelle and Lauly (2012) assigned each word a unique topic vector, which is a summarization of the context of the current word. Huang et al. (2012) proposed to incorporate global (document-level) semantic information to help the learning of word embeddings. The global embedding is simply a weighted average of the embeddings of words in the document. Le and Mikolov (2014) proposed Paragraph Vector. It assumes each piece of text has a latent paragraph vector, which influences the distributions of all words in this text, in the same way as a latent word. It can be viewed as a special case of TopicVec, with the topic number set to 1. 
Typically, however, a document consists of multiple semantic centroids, and the limitation of only one topic may lead to underfitting. Nguyen et al. (2015) proposed Latent Feature Topic Modeling (LFTM), which extends LDA to incorporate word embeddings as latent features. The topic is modeled as a mixture of the conventional categorical distribution and an embedding link function. The coupling between these two components makes the inference difficult. They designed a Gibbs sampler for model inference. Their implementation1 is slow and infeasible when applied to a large corpous. Liu et al. (2015) proposed Topical Word Embedding (TWE), which combines word embedding with LDA in a simple and effective way. They train word embeddings and a topic model separately on the same corpus, and then average the embeddings of words in the same topic to get the embedding of this topic. The topic embedding is concatenated with the word embedding to form the topical word embedding of a word. In the end, the topical word embeddings of all words in a document are averaged to be the embedding of the document. This method performs well on our two classification tasks. Weaknesses of TWE include: 1) the way to combine the results of word embedding and LDA lacks statistical foundations; 2) the LDA module requires a large corpus to derive semantically coherent topics. Das et al. (2015) proposed Gaussian LDA. It uses pre-trained word embeddings. It assumes that words in a topic are random samples from a multivariate Gaussian distribution with the topic embedding as the mean. Hence the probability that a 1https://github.com/datquocnguyen/LFTM/ 667 Name Description S Vocabulary {s1, · · · , sW } V Embedding matrix (vs1, · · · , vsW ) D Document set {d1, · · · , dM} vsi Embedding of word si asisj, A Bigram residuals tik, T i Topic embeddings in doc di rik, ri Topic residuals in doc di zij Topic assignment of the j-th word j in doc di φi Mixing proportions of topics in doc di Table 1: Table of notations word belongs to a topic is determined by the Euclidean distance between the word embedding and the topic embedding. This assumption might be improper as the Euclidean distance is not an optimal measure of semantic relatedness between two embeddings2. 3 Notations and Definitions Throughout this paper, we use uppercase bold letters such as S, V to denote a matrix or set, lowercase bold letters such as vwi to denote a vector, a normal uppercase letter such as N, W to denote a scalar constant, and a normal lowercase letter as si, wi to denote a scalar variable. Table 1 lists the notations in this paper. In a document, a sequence of words is referred to as a text window, denoted by wi, · · · , wi+l, or wi:wi+l. A text window of chosen size c before a word wi defines the context of wi as wi−c, · · · , wi−1. Here wi is referred to as the focus word. Each context word wi−j and the focus word wi comprise a bigram wi−j, wi. We assume each word in a document is semantically similar to a topic embedding. Topic embeddings reside in the same N-dimensional space as word embeddings. When it is clear from context, topic embeddings are often referred to as topics. Each document has K candidate topics, arranged in the matrix form T i = (ti1 · · · tiK), referred to as the topic matrix. Specifically, we fix ti1 = 0, referring to it as the null topic. In a document di, each word wij is assigned to a topic indexed by zij ∈{1, · · · , K}. 
Geometrically this means the embedding vwij tends to align 2Almost all modern word embedding methods adopt the exponentiated cosine similarity as the link function, hence the cosine similarity may be assumed to be a better estimate of the semantic relatedness between embeddings derived from these methods. with the direction of ti,zij. Each topic tik has a document-specific prior probability to be assigned to a word, denoted as φik = P(k|di). The vector φi = (φi1, · · · , φiK) is referred to as the mixing proportions of these topics in document di. 4 Link Function of Topic Embedding In this section, we formulate the distribution of a word given its context words and topic, in the form of a link function. The core of most word embedding methods is a link function that connects the embeddings of a focus word and its context words, to define the distribution of the focus word. Li et al. (2015) proposed the following link function: P(wc | w0 : wc−1) ≈P(wc) exp  v⊤ wc c−1 X l=0 vwl + c−1 X l=0 awlwc  . (1) Here awlwc is referred as the bigram residual, indicating the non-linear part not captured by v⊤ wcvwl. It is essentially the logarithm of the normalizing constant of a softmax term. Some literature, e.g. (Pennington et al., 2014), refers to such a term as a bias term. (1) is based on the assumption that the conditional distribution P(wc | w0 : wc−1) can be factorized approximately into independent logbilinear terms, each corresponding to a context word. This approximation leads to an efficient and effective word embedding algorithm PSDVec (Li et al., 2015). We follow this assumption, and propose to incorporate the topic of wc in a way like a latent word. In particular, in addition to the context words, the corresponding embedding tik is included as a new log-bilinear term that influences the distribution of wc. Hence we obtain the following extended link function: P(wc | w0:wc−1, zc, di) ≈P(wc)· exp n v⊤ wc c−1 X l=0 vwl + tzc  + c−1 X l=0 awlwc+rzc o , (2) where di is the current document, and rzc is the logarithm of the normalizing constant, named the topic residual. Note that the topic embeddings tzc may be specific to di. For simplicity of notation, we drop the document index in tzc. To restrict the impact of topics and avoid overfitting, we constrain the magnitudes of all topic embeddings, so that they are always within a hyperball of radius γ. 668 w1 · · · w0 wc zc θd α vsi µi Word Embeddings asisj hij Residuals Gaussian Gaussian Mult Dir t Topic Embeddings wc ∈d T d ∈D Documents V A Figure 1: Graphical representation of TopicVec. It is infeasible to compute the exact value of the topic residual rk. We approximate it by the context size c = 0. Then (2) becomes: P(wc | k, di) = P(wc) exp  v⊤ wctk + rk . (3) It is required that P wc∈S P(wc | k) = 1 to make (3) a distribution. It follows that rk = −log  X sj∈S P(sj) exp{v⊤ sjtk}  . (4) (4) can be expressed in the matrix form: r = −log(u exp{V ⊤T }), (5) where u is the row vector of unigram probabilities. 5 Generative Process and Likelihood The generative process of words in documents can be regarded as a hybrid of LDA and PSDVec. Analogous to PSDVec, the word embedding vsi and residual asisj are drawn from respective Gaussians. For the sake of clarity, we ignore their generation steps, and focus on the topic embeddings. The remaining generative process is as follows: 1. For the k-th topic, draw a topic embedding uniformly from a hyperball of radius γ, i.e. tk ∼ Unif(Bγ); 2. 
For each document di: (a) Draw the mixing proportions φi from the Dirichlet prior Dir(α); (b) For the j-th word: i. Draw topic assignment zij from the categorical distribution Cat(φi); ii. Draw word wij from S according to P(wij | wi,j−c:wi,j−1, zij, di). The above generative process is presented in plate notation in Figure (1). 5.1 Likelihood Function Given the embeddings V , the bigram residuals A, the topics T i and the hyperparameter α, the complete-data likelihood of a single document di is: p(di, Zi, φi|α, V , A, T i) =p(φi|α)p(Zi|φi)p(di|V , A, T i, Zi) =Γ(PK k=1 αk) QK k=1 Γ(αk) K Y j=1 φαj−1 ij · Li Y j=1 φi,zijP(wij) · exp  v⊤ wij  j−1 X l=j−c vwil + tzij  + j−1 X l=j−c awilwij+ ri,zij ! , (6) where Zi = (zi1, · · · , ziLi), and Γ(·) is the Gamma function. Let Z, T , φ denote the collection of all the document-specific {Zi}M i=1, {T i}M i=1, {φi}M i=1, respectively. Then the complete-data likelihood of the whole corpus is: p(D, A, V , Z, T , φ|α, γ, µ) = W Y i=1 P(vsi; µi) W,W Y i,j=1 P(asisj; f(hij)) K Y k Unif(Bγ) · M Y i=1 {p(φi|α)p(Zi|φi)p(di|V , A, T i, Zi)} = 1 Z(H, µ)UK γ exp{− W,W X i,j=1 f(hi,j)a2 sisj− W X i=1 µi∥vsi∥2} · M Y i=1 Γ(PK k=1 αk) QK k=1 Γ(αk) K Y j=1 φαj−1 ij · Li Y j=1  φi,zijP(wij) · exp n v⊤ wij  j−1 X l=j−c vwil+tzij  + j−1 X l=j−c awilwij+ri,zij o , (7) where P(vsi; µi) and P(asisj; f(hij)) are the two Gaussian priors as defined in (Li et al., 2015). 669 Following the convention in (Li et al., 2015), hij, H are empirical bigram probabilities, µ are the embedding magnitude penalty coefficients, and Z(H, µ) is the normalizing constant for word embeddings. Uγ is the volume of the hyperball of radius γ. Taking the logarithm of both sides, we obtain log p(D, A, V , Z, T , φ|α, γ, µ) =C0 −log Z(H, µ) −∥A∥2 f(H) − W X i=1 µi∥vsi∥2 + M X i=1  K X k=1 log φik(mik + αk −1) + Li X j=1  ri,zij +v⊤ wij  j−1 X l=j−c vwil + tzij  + j−1 X l=j−c awilwij  , (8) where mik = PLi j=1 δ(zij = k) counts the number of words assigned with the k-th topic in di, C0 = M log Γ(PK k=1 αk) QK k=1 Γ(αk) +PM,Li i,j=1 log P(wij)−K log Uγ is constant given the hyperparameters. 6 Variational Inference Algorithm 6.1 Learning Objective and Process Given the hyperparameters α, γ, µ, the learning objective is to find the embeddings V , the topics T , and the word-topic and document-topic distributions p(Zi, φi|di, A, V , T ). Here the hyperparameters α, γ, µ are kept constant, and we make them implicit in the distribution notations. However, the coupling between A, V and T , Z, φ makes it inefficient to optimize them simultaneously. To get around this difficulty, we learn word embeddings and topic embeddings separately. Specifically, the learning process is divided into two stages: 1. In the first stage, considering that the topics have a relatively small impact to word distributions and the impact might be “averaged out” across different documents, we simplify the model by ignoring topics temporarily. Then the model falls back to the original PSDVec. The optimal solution V ∗, A∗is obtained accordingly; 2. In the second stage, we treat V ∗, A∗as constant, plug it into the likelihood function, and find the corresponding optimal T ∗, p(Z, φ|D, A∗, V ∗, T ∗) of the full model. As in LDA, this posterior is analytically intractable, and we use a simpler variational distribution q(Z, φ) to approximate it. 6.2 Mean-Field Approximation and Variational GEM Algorithm In this stage, we fix V = V ∗, A = A∗, and seek the optimal T ∗, p(Z, φ|D, A∗, V ∗, T ∗). 
As V ∗, A∗are constant, we also make them implicit in the following expressions. For an arbitrary variational distribution q(Z, φ), the following equalities hold Eq log p(D, Z, φ|T ) q(Z, φ)  =Eq [log p(D, Z, φ|T )] + H(q) = log p(D|T ) −KL(q||p), (9) where p = p(Z, φ|D, T ), H(q) is the entropy of q. This implies KL(q||p) = log p(D|T ) −  Eq [log p(D, Z, φ|T )] + H(q)  = log p(D|T ) −L(q, T ). (10) In (10), Eq [log p(D, Z, φ|T )] + H(q) is usually referred to as the variational free energy L(q, T ), which is a lower bound of log p(D|T ). Directly maximizing log p(D|T ) w.r.t. T is intractable due to the hidden variables Z, φ, so we maximize its lower bound L(q, T ) instead. We adopt a mean-field approximation of the true posterior as the variational distribution, and use a variational algorithm to find q∗, T ∗maximizing L(q, T ). The following variational distribution is used: q(Z, φ; π, θ) = q(φ; θ)q(Z; π) = M Y i=1   Dir(φi; θi) Li Y j=1 Cat(zij; πij)   . (11) We can obtain (Li et al., 2016a) L(q, T ) = M X i=1 ( K X k=1  Li X j=1 πk ij + αk −1  ψ(θik) −ψ(θi0)  + Tr(T ⊤ i Li X j=1 vwijπ⊤ ij) + r⊤ i Li X j=1 πij ) + H(q) + C1, (12) 670 where T i is the topic matrix of the i-th document, and ri is the vector constructed by concatenating all the topic residuals rik. C1 = C0−log Z(H, µ)−∥A∥2 f(H)−PW i=1 µi∥vsi∥2+ PM,Li i,j=1  v⊤ wij Pj−1 k=j−c vwik +Pj−1 k=j−c awikwij  is constant. We proceed to optimize (12) with a Generalized Expectation-Maximization (GEM) algorithm w.r.t. q and T as follows: 1. Initialize all the topics T i = 0, and correspondingly their residuals ri = 0; 2. Iterate over the following two steps until convergence. In the l-th step: (a) Let the topics and residuals be T = T (l−1), r = r(l−1), find q(l)(Z, φ) that maximizes L(q, T (l−1)). This is the Expectation step (E-step). In this step, log p(D|T ) is constant. Then the q that maximizes L(q, T (l)) will minimize KL(q||p), i.e. such a q is the closest variational distribution to p measured by KL-divergence; (b) Given the variational distribution q(l)(Z, φ), find T (l), r(l) that improve L(q(l), T ), using Gradient descent method. This is the generalized Maximization step (M-step). In this step, π, θ, H(q) are constant. 6.2.1 Update Equations of π, θ in E-Step In the E-step, T = T (l−1), r = r(l−1) are constant. Taking the derivative of L(q, T (l−1)) w.r.t. πk ij and θik, respectively, we can obtain the optimal solutions (Li et al., 2016a) at: πk ij ∝exp{ψ(θik) + v⊤ wijtik + rik}. (13) θik = Li X j=1 πk ij + αk. (14) 6.2.2 Update Equation of T i in M-Step In the Generalized M-step, π = π(l), θ = θ(l) are constant. For notational simplicity, we drop their superscripts (l). To update T i, we first take the derivative of (12) w.r.t. T i, and then take the Gradient Descent method. The derivative is obtained as (Li et al., 2016a): ∂L(q(l), T ) ∂T i = Li X j=1 vwijπ⊤ ij + K X k=1 ¯mik ∂rik ∂T i , (15) where ¯mik = PLi j=1 πk ij = E[mik], the sum of the variational probabilities of each word being assigned to the k-th topic in the i-th document. ∂rik ∂T i is a gradient matrix, whose j-th column is ∂rik ∂tij . Remind that rik = −log  EP(s)[exp{v⊤ s tik}]  . When j ̸= k, it is easy to verify that ∂rik ∂tij = 0. When j = k, we have ∂rik ∂tik = e−rik · EP(s)[exp{v⊤ s tik}vs] = e−rik · X s∈W exp{v⊤ s tik}P(s)vs = e−rik · exp{t⊤ ikV }(u ◦V ), (16) where u◦V is to multiply each column of V with u element-by-element. Therefore ∂rik ∂T i = (0, · · · ∂rik ∂tik , · · · , 0). 
Plugging it into (15), we obtain ∂L(q(l), T ) ∂T i = Li X j=1 vwijπ⊤ ij+( ¯mi1 ∂ri1 ∂ti1 , · · · , ¯miK ∂riK ∂tiK ). We proceed to optimize T i with a gradient descent method: T (l) i = T (l−1) + λ(l, Li)∂L(q(l), T ) ∂T i , where λ(l, Li) = L0λ0 l·max{Li,L0} is the learning rate function, L0 is a pre-specified document length threshold, and λ0 is the initial learning rate. As the magnitude of ∂L(q(l),T ) ∂T i is approximately proportional to the document length Li, to avoid the step size becoming too big a on a long document, if Li > L0, we normalize it by Li. To satisfy the constraint that ∥t(l) ik ∥≤γ, when t(l) ik > γ, we normalize it by γ/∥t(l) ik ∥. After we obtain the new T , we update r(m) i using (5). Sometimes, especially in the initial few iterations, due to the excessively big step size of the gradient descent, L(q, T ) may decrease after the update of T . Nonetheless the general direction of L(q, T ) is increasing. 6.3 Sharing of Topics across Documents In principle we could use one set of topics across the whole corpus, or choose different topics for different subsets of documents. One could choose a way to best utilize cross-document information. For instance, when the document category information is available, we could make the documents in each category share their respective set 671 of topics, so that M categories correspond to M sets of topics. In the learning algorithm, only the update of πk ij needs to be changed to cater for this situation: when the k-th topic is relevant to the document i, we update πk ij using (13); otherwise πk ij = 0. An identifiability problem may arise when we split topic embeddings according to document subsets. In different topic groups, some highly similar redundant topics may be learned. If we project documents into the topic space, portions of documents in the same topic in different documents may be projected onto different dimensions of the topic space, and similar documents may eventually be projected into very different topic proportion vectors. In this situation, directly using the projected topic proportion vectors could cause problems in unsupervised tasks such as clustering. A simple solution to this problem would be to compute the pairwise similarities between topic embeddings, and consider these similarities when computing the similarity between two projected topic proportion vectors. Two similar documents will then still receive a high similarity score. 7 Experimental Results To investigate the quality of document representation of our TopicVec model, we compared its performance against eight topic modeling or document representation methods in two document classification tasks. Moreover, to show the topic coherence of TopicVec on a single document, we present the top words in top topics learned on a news article. 7.1 Document Classification Evaluation 7.1.1 Experimental Setup Compared Methods Two setups of TopicVec were evaluated: • TopicVec: the topic proportions learned by TopicVec; • TV+WV: the topic proportions, concatenated with the mean word embedding of the document (same as the MeanWV below). We compare the performance of our methods against eight methods, including three topic modeling methods, three continuous document representation methods, and the conventional bag-ofwords (BOW) method. The count vector of BOW is unweighted. 
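For reference, the unweighted BOW baseline can be reproduced in a few lines with scikit-learn (the same library the classification pipeline uses); the exact vectorizer options shown here are our own choice rather than a setting reported in the paper, and train_docs / test_docs stand for lists of raw comment strings.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Raw, unweighted term counts (no tf-idf weighting).  Lowercasing and English
# stop-word removal are illustrative preprocessing choices; train_docs and
# test_docs are assumed to be lists of document strings.
vectorizer = CountVectorizer(lowercase=True, stop_words='english')
X_train = vectorizer.fit_transform(train_docs)
X_test = vectorizer.transform(test_docs)
```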
The topic modeling methods include: • LDA: the vanilla LDA (Blei et al., 2003) in the gensim library3; • sLDA: Supervised Topic Model4 (McAuliffe and Blei, 2008), which improves the predictive performance of LDA by modeling class labels; • LFTM: Latent Feature Topic Modeling5 (Nguyen et al., 2015). The document-topic proportions of topic modeling methods were used as their document representation. The document representation methods are: • Doc2Vec: Paragraph Vector (Le and Mikolov, 2014) in the gensim library6. • TWE: Topical Word Embedding7 (Liu et al., 2015), which represents a document by concatenating average topic embedding and average word embedding, similar to our TV+WV; • GaussianLDA: Gaussian LDA8 (Das et al., 2015), which assumes that words in a topic are random samples from a multivariate Gaussian distribution with the mean as the topic embedding. Similar to TopicVec, we derived the posterior topic proportions as the features of each document; • MeanWV: The mean word embedding of the document. Datasets We used two standard document classification corpora: the 20 Newsgroups9 and the ApteMod version of the Reuters-21578 corpus10. The two corpora are referred to as the 20News and Reuters in the following. 20News contains about 20,000 newsgroup documents evenly partitioned into 20 different categories. Reuters contains 10,788 documents, where each document is assigned to one or more categories. For the evaluation of document classification, documents appearing in two or more categories were removed. The numbers of documents in the categories of Reuters are highly imbalanced, and we only selected the largest 10 categories, leaving us with 8,025 documents in total. 3https://radimrehurek.com/gensim/models/ldamodel.html 4http://www.cs.cmu.edu/˜chongw/slda/ 5https://github.com/datquocnguyen/LFTM/ 6https://radimrehurek.com/gensim/models/doc2vec.html 7https://github.com/largelymfs/topical word embeddings/ 8https://github.com/rajarshd/Gaussian LDA 9http://qwone.com/˜jason/20Newsgroups/ 10http://www.nltk.org/book/ch02.html 672 The same preprocessing steps were applied to all methods: words were lowercased; stop words and words out of the word embedding vocabulary (which means that they are extremely rare) were removed. Experimental Settings TopicVec used the word embeddings trained using PSDVec on a March 2015 Wikipedia snapshot. It contains the most frequent 180,000 words. The dimensionality of word embeddings and topic embeddings was 500. The hyperparameters were α = (0.1, · · · , 0.1), γ = 5. For 20news and Reuters, we specified 15 and 12 topics in each category on the training set, respectively. The first topic in each category was always set to null. The learned topic embeddings were combined to form the whole topic set, where redundant null topics in different categories were removed, leaving us with 281 topics for 20News and 111 topics for Reuters. The initial learning rate was set to 0.1. After 100 GEM iterations on each dataset, the topic embeddings were obtained. Then the posterior document-topic distributions of the test sets were derived by performing one E-step given the topic embeddings trained on the training set. LFTM includes two models: LF-LDA and LFDMM. We chose the better performing LF-LDA to evaluate. TWE includes three models, and we chose the best performing TWE-1 to compare. LDA, sLDA, LFTM and TWE used the specified 50 topics on Reuters, as this is the optimal topic number according to (Lu et al., 2011). 
On the larger 20news dataset, they used the specified 100 topics. Other hyperparameters of all compared methods were left at their default values. GaussianLDA was specified 100 topics on 20news and 70 topics on Reuters. As each sampling iteration took over 2 hours, we only had time for 100 sampling iterations. For each method, after obtaining the document representations of the training and test sets, we trained an ℓ-1 regularized linear SVM one-vs-all classifier on the training set using the scikit-learn library11. We then evaluated its predictive performance on the test set. Evaluation metrics Considering that the largest few categories dominate Reuters, we adopted macro-averaged precision, recall and F1 measures as the evaluation metrics, to avoid the average results being dominated by the performance of the 11http://scikit-learn.org/stable/modules/svm.html 20News Reuters Prec Rec F1 Prec Rec F1 BOW 69.1 68.5 68.6 92.5 90.3 91.1 LDA 61.9 61.4 60.3 76.1 74.3 74.8 sLDA 61.4 60.9 60.9 88.3 83.3 85.1 LFTM 63.5 64.8 63.7 84.6 86.3 84.9 MeanWV 70.4 70.3 70.1 92.0 89.6 90.5 Doc2Vec 56.3 56.6 55.4 84.4 50.0 58.5 TWE 69.5 69.3 68.8 91.0 89.1 89.9 GaussianLDA 30.9 26.5 22.7 46.2 31.5 35.3 TopicVec 71.4 71.3 71.2 91.8 92.0 91.7 TV+WV1 72.1 71.9 71.8 91.4 91.9 91.5 1Combined features of TopicVec topic proportions and MeanWV. Table 2: Performance on multi-class text classification. Best score is in boldface. Avg. Features BOW MeanWV TWE TopicVec TV+WV 20News 50381 500 800 281 781 Reuters 17989 500 800 111 611 Table 3: Number of features of the five best performing methods. top categories. Evaluation Results Table 2 presents the performance of the different methods on the two classification tasks. The highest scores were highlighted with boldface. It can be seen that TV+WV and TopicVec obtained the best performance on the two tasks, respectively. With only topic proportions as features, TopicVec performed slightly better than BOW, MeanWV and TWE, and significantly outperformed four other methods. The number of features it used was much lower than BOW, MeanWV and TWE (Table 3). GaussianLDA performed considerably inferior to all other methods. After checking the generated topic embeddings manually, we found that the embeddings for different topics are highly similar to each other. Hence the posterior topic proportions were almost uniform and non-discriminative. In addition, on the two datasets, even the fastest Alias sampling in (Das et al., 2015) took over 2 hours for one iteration and 10 days for the whole 100 iterations. In contrast, our method finished the 100 EM iterations in 2 hours. 673 Figure 2: Topic Cloud of the pharmaceutical company acquisition news. 7.2 Qualitative Assessment of Topics Derived from a Single Document Topic models need a large set of documents to extract coherent topics. Hence, methods depending on topic models, such as TWE, are subject to this limitation. In contrast, TopicVec can extract coherent topics and obtain document representations even when only one document is provided as input. To illustrate this feature, we ran TopicVec on a New York Times news article about a pharmaceutical company acquisition12, and obtained 20 topics. Figure 2 presents the most relevant words in the top-6 topics as a topic cloud. We first calculated the relevance between a word and a topic as the frequency-weighted cosine similarity of their embeddings. Then the most relevant words were selected to represent each topic. 
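A minimal sketch of this relevance computation is given below. Reading "frequency-weighted" as weighting the cosine similarity by the word's frequency in the document is our interpretation, and the dictionary-based containers are illustrative rather than taken from the released implementation.

```python
import numpy as np

def topic_relevance(word_vecs, word_freqs, topic_vec, top_n=10):
    """Rank words by frequency-weighted cosine similarity to a topic embedding.

    word_vecs: dict word -> numpy embedding vector
    word_freqs: dict word -> frequency of the word in the document (assumed weighting)
    topic_vec: numpy embedding vector of the topic
    """
    t = topic_vec / np.linalg.norm(topic_vec)
    scores = {}
    for w, v in word_vecs.items():
        cosine = float(v @ t) / np.linalg.norm(v)
        scores[w] = word_freqs.get(w, 0) * cosine
    return sorted(scores.items(), key=lambda x: -x[1])[:top_n]
```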
The sizes of the topic slices are proportional to the topic proportions, and the font sizes of individual words are proportional to their relevance to the topics. Among these top-6 topics, the largest and smallest topic proportions are 26.7% and 9.9%, respectively. As shown in Figure 2, words in obtained topics were generally coherent, although the topics were 12http://www.nytimes.com/2015/09/21/business/a-hugeovernight-increase-in-a-drugs-price-raises-protests.html only derived from a single document. The reason is that TopicVec takes advantage of the rich semantic information encoded in word embeddings, which were pretrained on a large corpus. The topic coherence suggests that the derived topic embeddings were approximately the semantic centroids of the document. This capacity may aid applications such as document retrieval, where a “compressed representation” of the query document is helpful. 8 Conclusions and Future Work In this paper, we proposed TopicVec, a generative model combining word embedding and LDA, with the aim of exploiting the word collocation patterns both at the level of the local context and the global document. Experiments show that TopicVec can learn high-quality document representations, even given only one document. In our classification tasks we only explored the use of topic proportions of a document as its representation. However, jointly representing a document by topic proportions and topic embeddings would be more accurate. Efficient algorithms for this task have been proposed (Kusner et al., 2015). Our method has potential applications in various scenarios, such as document retrieval, classification, clustering and summarization. Acknowlegement We thank Xiuchao Sui and Linmei Hu for their help and support. We thank the anonymous mentor provided by ACL for the careful proofreading. This research is funded by the National Research Foundation, Prime Minister’s Office, Singapore under its IDM Futures Funding Initiative and IRC@SG Funding Initiative administered by IDMPO. References Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, pages 1137–1155. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. the Journal of machine Learning research, 3:993–1022. Rajarshi Das, Manzil Zaheer, and Chris Dyer. 2015. Gaussian LDA for topic models with word embeddings. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics 674 and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 795–804, Beijing, China, July. Association for Computational Linguistics. Geoffrey E Hinton and Ruslan R Salakhutdinov. 2009. Replicated softmax: an undirected topic model. In Advances in neural information processing systems, pages 1607–1614. Eric H Huang, Richard Socher, Christopher D Manning, and Andrew Y Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 873–882. Association for Computational Linguistics. Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Q. Weinberger. 2015. From word embeddings to document distances. In David Blei and Francis Bach, editors, Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 957– 966. JMLR Workshop and Conference Proceedings. 
Hugo Larochelle and Stanislas Lauly. 2012. A neural autoregressive topic model. In Advances in Neural Information Processing Systems, pages 2708–2716. Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1188–1196. Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211–225. Shaohua Li, Jun Zhu, and Chunyan Miao. 2015. A generative word embedding model and its low rank positive semidefinite solution. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1599–1609, Lisbon, Portugal, September. Association for Computational Linguistics. Shaohua Li, Tat-Seng Chua, Jun Zhu, and Chunyan Miao. 2016a. Generative topic embedding: a continuous representation of documents (extended version with proofs). Technical report. https://github.com/askerlee/topicvec/blob/ master/topicvec-ext.pdf. Shaohua Li, Jun Zhu, and Chunyan Miao. 2016b. PSDVec: a toolbox for incremental and scalable word embedding. To appear in Neurocomputing. Yang Liu, Zhiyuan Liu, Tat-Seng Chua, and Maosong Sun. 2015. Topical word embeddings. In AAAI, pages 2418–2424. Yue Lu, Qiaozhu Mei, and ChengXiang Zhai. 2011. Investigating task performance of probabilistic topic models: an empirical study of PLSA and LDA. Information Retrieval, 14(2):178–203. Jon D McAuliffe and David M Blei. 2008. Supervised topic models. In Advances in neural information processing systems, pages 121–128. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS 2013, pages 3111–3119. Dat Quoc Nguyen, Richard Billingsley, Lan Du, and Mark Johnson. 2015. Improving topic models with latent feature word representations. Transactions of the Association for Computational Linguistics, 3:299–313. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. Proceedings of the Empiricial Methods in Natural Language Processing (EMNLP 2014), 12. 675
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 676–685, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics Detecting Common Discussion Topics Across Culture From News Reader Comments Bei Shi1, Wai Lam1, Lidong Bing2 and Yinqing Xu1 1Department of Systems Engineering and Engineering Management The Chinese University of Hong Kong, Hong Kong 2Machine Learning Department Carnegie Mellon University, Pittsburgh, PA 15213 {bshi,wlam,yqxu}@se.cuhk.edu.hk [email protected] Abstract News reader comments found in many on-line news websites are typically massive in amount. We investigate the task of Cultural-common Topic Detection (CTD), which is aimed at discovering common discussion topics from news reader comments written in different languages. We propose a new probabilistic graphical model called MCTA which can cope with the language gap and capture the common semantics in different languages. We also develop a partially collapsed Gibbs sampler which effectively incorporates the term translation relationship into the detection of cultural-common topics for model parameter learning. Experimental results show improvements over the state-of-the-art model. 1 Introduction Nowadays the rapid development of information and communication technology enables more and more people around the world to engage in the movement of globalization. One effect of globalization is to facilitate greater connections between people bringing cultures closer than before. This also contributes to the convergence of some elements of different cultures (Melluish, 2014). For example, there is a growing tendency of people watching the same movie, listening to the same music, and reading the news about the same event. This kind of cultural homogenization brings the emergence of commonality of some aspects of different cultures worldwide. It would be beneficial to identify such common aspects among cultures. For example, it can provide some insights for global market and international business (Cavusgil et al., 2014). Many news websites from different regions in the world report significant events which are of interests to people from different continents. These websites also allow readers around the world to give their comments in their own languages. The volume of comments is often enormous especially for popular events. In a news website, readers from a particular culture background tend to write comments in their own preferred languages. For some important or global events, we observe that readers from different cultures, via different languages, express common discussion topics. For instance, on March 8 2014, Malaysia Airlines Flight MH370, carrying 227 passengers and 12 crew members, disappeared. Upon the happening of this event, many news articles around the world reported it and many readers from different continents commented on this event. Through analyzing the reader comments manually, we observe that both English-speaking and Chinese-speaking readers expressed in their corresponding languages their desire for praying for the MH370 flight. This is an example of a cultural-common discussion topic. Identifying such cultural-common topics automatically can facilitate better understanding and organization of the common concerns or interests of readers with different language background. Such technology can be deployed for developing various applications. 
One application is to build a reader comment digest system that can organize comments by cultural-common discussion topics and rank the topics by popularity. This provides a functionality of analyzing the common focus of readers from different cultures on a particular event. An example of such application is shown in Figure 3. Under each event, reader comments are grouped by cultural-common topics. 676 In this paper, we investigate the task of Cultural-common Topic Detection (CTD) on multilingual news reader comments. Reader comments about a global event, written in different languages, from different news websites around the world exist in massive amount. The main goal of this task is to discover cultural-common discussion topics from raw multilingual news reader comments for a news event. One challenge is that the discussion topics are unknown. Another challenge is related to the language gap issue. Precisely, the words of reader comments in different languages are composed of different terms in their corresponding languages. Such language gap issue poses a great deal of challenge for identifying cultural-common discussion topics in multilingual news comments settings. One recent work done by Prasojo et al. (2015) is to organize news reader comments around entities and aspects discussed by readers. Such organization of reader comments cannot handle the identification of common discussion topics. On the other hand, the Muto model proposed by BoydGraber and Blei (2009) can extract common topics from multilingual documents. This model merely outputs cross-lingual topics of matching word pairs. One example of such kind of topic contains key terms of word pairs such as “plane:飞 机ocean:海洋. . . ”. The assumption of one-toone mapping of words has some drawbacks. One drawback is that the correspondence of identified common topics is restricted to the vocabulary level. Another drawback is that the one-toone mapping of words cannot fit the original word occurrences well. For example, the English term “plane” appears in the English documents frequently while the Chinese translation “飞机” appears less. It is not reasonable that “plane” and “飞 机” share the same probability mass in common topics. Another closely related existing work is the PCLSA model proposed by Zhang et al. (2010). PCLSA employs a mixture of English words and Chinese words to represent common topics. It incorporates bilingual constraints into the Probabilistic Latent Semantic Analysis (PLSA) model (Hofmann, 2001) and assumes that word pairs in the dictionary share similar probability in a common topic. However, similar to one-to-one mapping of words, such bilingual constraints cannot handle well the original word co-occurrence in each language resulting in a degradation of the coherence and interpretability of common topics. We propose a new probabilistic graphical model which is able to detect cultural-common topics from multilingual news reader comments in an unsupervised manner. In principle, no labeled data is needed. In this paper, we focus on dealing with two languages, namely, English and Chinese news reader comments. Different from prior works, we design a technique based on auxiliary distributions which incorporates word distributions from the other language and can capture the common semantics on the topic level. We develop a partially collapsed Gibbs sampler which decouples the inference of topic distribution and word distribution. 
We also incorporate the term translation relationship, derived from a bilingual dictionary, into the detection of cultural-common topics for model parameter learning. We have prepared a data set by collecting English and Chinese reader comments from different regions reflecting different culture. Our experimental results are encouraging showing improvements over the state-of-the-art model. 2 Related Work Prasojo et al. (2015) and Biyani et al. (2015) organized news reader comments via identified entities or aspects. Such kind of organization via entities or aspects cannot capture common topics discussed by readers. Digesting merely based on entities fails to work in multilingual settings due to the fact that the common entities have distinct mentions in different languages. Zhai et al. (2004) discovered common topics from comparable texts via a PLSA based mixture model. Paul and Girju (2009) proposed a MixedCollection Topic Model for finding common topics from different collections. Despite the fact that the above models can find a kind of common topic, they only deal with a single language setting without considering the language gap. Some works discover common latent topics from multilingual corpora. For aligned corpora, they assume that the topic distribution in each document is the same (Vuli´c et al., 2011; Vuli´c and Moens, 2014; Erosheva et al., 2004; Fukumasu et al., 2012; Mimno et al., 2009; Ni et al., 2009; Zhang et al., 2013; Peng et al., 2014). However, aligned corpora are often unavailable for most domains. For unaligned corpora, cross-lingual topic models use some language resources, such 677 as a bilingual dictionary or a bilingual knowledge base to bridge the language gap (Boyd-Graber and Blei, 2009; Zhang et al., 2010; Jagarlamudi and Daum´e III, 2010). As mentioned above, the goals of Boyd-Graber and Blei (2009) as well as Jagarlamud and Daum´e (2010) focus on mining the correspondence of topics at the vocabulary level, which are different from that of Zhang et al. (2010) and ours. The model in Zhang et al. (2010) adds the constraints of word translation pairs into PLSA. These constraints cannot handle the original word co-occurrences well. In contrast, we consider the language gap by incorporating word distributions from the other language, capturing the common semantics on the topic level. Moreover, we use a fully Bayesian paradigm with a prior distribution. Some existing topic methods conduct crosslingual sentiment analysis (Lu et al., 2011; Guo et al., 2010; Lin et al., 2014; Boyd-Graber and Resnik, 2010). These models are not suitable for our CTD task because they mainly detect common elements related to product aspects. Moreover some works focus more on detecting sentiments. 3 Our Proposed Model 3.1 Model Description The problem definition of the CTD task is described as follows. For a particular event, both English and Chinese news reader comments are collected from different regions reflecting different culture. The set of English comments is denoted by E and the set of Chinese comments is denoted by C. The goal of the CTD task is to extract cultural-common topics k ∈{1, 2, . . . , K} from E and C. The set of multilingual news reader comments of each event are processed within the same event. Our proposed model is called Multilingual Cultural-common Topic Analysis (MCTA) which is based on graphical model paradigm as depicted in Figure 1. The plate on the right represents cultural-common topics. 
Each cultural-common topic k is represented by an English word distribution ϕe k over English vocabulary Λe and a Chinese word distribution ϕc k over Chinese vocabulary Λc. We make use of a bilingual dictionary, which is composed of many-to-many word translations among English and Chinese words. To capture common semantics of multilingual news reader comments, we design two auxiliary distriβ ϕe ηe α θe d θc d ze n zc n we n wc n ηc ϕc Ne d K Nc d Ne dw Nc dw Figure 1: Our proposed graphical model butions ηe, with dimension Λe, and ηc, with dimension Λc, to help the generation of ϕe k and ϕc k. Precisely, we generate ηe and ηc from the Dirichlet prior distributions Dir(β·1|Λe|) and Dir(β·1|Λc|) respectively, where 1D denotes a D-dimensional vector whose components are 1. Then we draw ϕe k from the mixture of ηe k and the translation of ηc k. It is formulated as: ϕe k ∝λ(ηc k)T Mc→e + (1 −λ)ηe k where ηe, ηc ∼Dir(β) (1) where λ ∈(0, 1) is a parameter which balances the nature of original topics and transferred information from the other language. Mc→e is a mapping |Λc| × |Λe| matrix from Λc to Λe. Each element Mc→e ij is the mapping occurrence probability of the English term we j given the Chinese term wc i in the set of news reader comments. This probability is calculated as: Mc→e ij = C(we j) + 1 |T(wc i)| + P we∈T(wc i ) C(we) (2) where C(we j) is the count of we j in all news reader comments and T(wc i) is the set of English translations of wc i found in the bilingual dictionary. The “add-one” smoothing is adopted. Note that the sum of each row is equal to 1. Using the same principle, we can derive ϕc k which can be formulated as: ϕc k ∝λ(ηe k)T Me→c + (1 −λ)ηc k where ηe, ηc ∼Dir(β) (3) As a result, the incorporation of ηe k and ηc k on the topic level encourages the word distribution ϕe k and ϕc k to share common semantic components of reader comments in different languages. The upper left plate in Figure 1 represents English reader comments. Ne d denotes the number of English reader comments and Ne dw denotes 678 the number of words in the English comment de. Each English reader comment de is characterized by a K-dimensional topic membership vector θe d, which is assumed to be generated by the prior Dir(α · 1K). For each word we n in an English comment de, we generate the topic zn e from θe d. We generate the word we n from the corresponding distribution ϕe k. The bottom left plate in Figure 1 represents Chinese reader comments. Similarly, we generate the topic distribution θc d from the prior Dir(α · 1K). The topic zn c of each word wc n in a Chinese comment dc is generated from θc d. We generate word wc n from the corresponding distribution ϕc k. The generative process is formally depicted as: • For each topic k ∈K - choose auxiliary distributions ηe k ∼Dir(β· 1|Λe|) and ηc k ∼Dir(β · 1|Λc|) - choose English word distribution ϕe k and ϕc k using Eq. 1 and Eq. 3 respectively. • For each English comment de ∈E, choose θe d ∼Dir(α · 1K) - For each position n in de - draw ze n ∼Multi(θe d) - draw we n ∼Multi(ϕe k) • For each Chinese comment dc ∈C, choose θc d ∼Dir(α · 1K) - For each position n in dc - draw zc n ∼Multi(θc d) - draw wc n ∼Multi(ϕc k) Note that for simplicity, we present our model on the bilingual setting of Chinese and English. It can be extended to multilingual setting via introducing auxiliary distributions for each language. Each topic word distribution for each language is generated by the convex combination of all the auxiliary distributions. 
3.2 Posterior Inference In order to decouple the inference of zn and ϕk for each language, we develop a partially collapsed Gibbs method which just discards θe d and θc d. Given ϕe k, we sample the new assignments of the topic ze di in English news reader comments de with the following conditional probability: P(ze di = k|ze,¬i, W e, α, ϕe k) ∝(Ne,¬i dk +αk)×ϕe k (4) where ze,¬i denotes the topic assignments except the assignment of the ith word. Ne dk is the number Algorithm 1 Partially Collapsed Gibbs Sampling for MCTA 1: Initialize z, ϕe k, ϕc k, ηe k, ηc k 2: for iter = 1 to Maxiter do 3: for each English comment d in E do 4: for each word we n in d do 5: draw ze n using Eq. 4 6: end for 7: end for 8: for each Chinese comment d in C do 9: for each word wc n in d do 10: draw zc n using Eq. 5. 11: end for 12: end for 13: Update ηe k, ηc k by Eq. 8 and Eq. 9 14: Update ϕe k, ϕc k according to Eq. 1 and Eq. 3 15: end for 16: Output θdk by Eq. 10 of words in English document de whose topics are assigned to k. Similarly, we sample zc di with the following equation: P(zc di = k|zc,¬i, W c, α, ϕc k) ∝(Nc,¬i dk +αk)×ϕc k (5) Given the topic assignments, the probability of the entire comment set can be: p(W|z, ϕe k, ϕc k) = Y w∈Λe (ϕe kw)Ne kw× Y w∈Λc (ϕc kw)Nc kw (6) where Ne kw is the number of words w in English news reader comments assigned to the topic k and Nc kw is the number of words w in Chinese news reader comments assigned to the topic k. Using Eq. 6, we can obtain the posterior likelihood related to ηe k and ηc k: LMAP = X wi∈Λe Ne kwi log(λ X wj∈Λc Mc→e ji ηc kwj + (1 −λ)ηe kwi) + X wi∈Λc Nc kwi log(λ X wj∈Λe Me→c ji ηe kwj + (1 −λ)ηc kwi) + X wi∈Λe (β −1) log ηe kwi + X wi∈Λc (β −1) log ηc kwi (7) We optimize Eq. 7 under the constraints of P wi∈Λe ηe kwi = 1 and P wi∈Λc ηc kwi = 1. Using the fixed-point method, we obtain the update 679 equations of ηe kwt and ηc kwt shown in Eq. 8 and Eq. 9. ηe kwt ∝[ (1 −λ)Ne kwt λ P wj∈Λc Mc→e jt ηc kwj + (1 −λ)ηe kwt + X wi∈Λc λN c kwiMe→c ti λ P wj∈Λe Me→c ji ηe kwj + (1 −λ)ηc kwi ]ηe kwt + β (8) ηc kwt ∝[ (1 −λ)Nc kwt λ P wj∈Λe Me→c jt ηe kwj + (1 −λ)ηc kwt + X wi∈Λe λN e kwiMc→e ti λ P wj∈Λc Mc→e ji ηc kwj + (1 −λ)ηe kwi ]ηc kwt + β (9) Moreover, the posterior estimates for the topic distribution θd can be computed as follows. θdk = Ndk + α P k∈K Ndk + Kα (10) The whole detailed algorithm is depicted in Algorithm 1. When λ = 0, the updated equations of ηe k and ηc k can be simplified as: ηe kwt ∝Ne kwt + β ηc kwt ∝Nc kwt + β (11) Then we have: ϕe k ∼Dir(Ne kw1 + β, Ne kw2 + β, . . . ) ϕc k ∼Dir(Nc kw1 + β, Nc kw2 + β, . . . ) (12) Therefore, the algorithm degrades to a Gibbs sampler of LDA. 4 Experiments 4.1 Data Set and Preprocessing We have prepared a data set by collecting English and Chinese comments from different regions reflecting different culture for some significant events as depicted in Table 1. The English reader comments are collected from Yahoo1 and the Chinese reader comments are collected from Sina News2. We first remove news reader comments whose length is less than 5 words. We remove the punctuations and the stop words. 
For English comments, we also stem each word to its root 1http://news.yahoo.com 2http://news.sina.com.cn/world/ Event Title #English comments #Chinese comments 1 MH370 flight accident 8608 5223 2 ISIS in Iraq 6341 3263 3 Ebola occurs 2974 1622 4 Taiwan Crashed Plane 6780 2648 5 iphone6 publish 5837 4352 6 Shooting of Michael Brown 17547 3693 7 Charlie Hebdo shooting 1845 551 8 Shanghai stampede 3824 3175 9 Lee Kuan Yew death 2418 1534 10 AIIB foundation 7221 3198 Table 1: The statistics for the data set form using Porter Stemmer (Porter, 1980). For the Chinese reader comments, we use the Jieba package3 to segment and remove Chinese stop words. We utilize an English-Chinese dictionary from MDBG4. 4.2 Comparative Methods The PCLSA model proposed by Zhang et al. (2010) can be regarded as the state-of-the-art model for detecting latent common topics from multilingual text documents. We implemented PCLSA as one of the comparative methods in our experiments. Another comparative model used in the experiment is LDA (Blei et al., 2003), which can generate K English topics and K Chinese topics from English and Chinese reader comments respectively. Then we translate Chinese topics into English topics and use symmetric KL divergence to align translated Chinese topics with original English topics. Each aligned topic pair is regarded as a cultural-common topic. 4.3 Experiment Settings For each event, we partitioned the comments into a subset of 90% for the graphical model parameter estimation. The remaining 10% is used as a holdout data for the evaluation of the CCP metric as discussed in Section 4.4.1. We repeated the runs five times. For each run, we randomly split the comments to obtain the holdout data. As a result, we have five runs for our method as well as comparative methods. We make use of the holdout data of one event, namely the event “MH370 3https://github.com/fxsjy/jieba 4http://www.mdbg.net/chindict/chindict.php?page=cccedict 680 Flight Accident”, to estimate the number of topics K for all models and λ in Eq. 1 for our model. The setting of K is described in Section 4.4.3. We set λ = 0.5 after tuning. For hyper-parameters, we set α to 0.5 and β to 0.01. When performing our Gibbs algorithm, we set the maximum iteration number as 1000, and the burn-in sweeps as 100. 4.4 Cultural-common Topic Evaluation We conduct quantitative experiments to evaluate how well our MCTA model can discover culturalcommon topics. 4.4.1 Evaluation Metrics We use two metrics to evaluate the topic quality. The first metric is the “cross-collection perplexity” measure denoted as CCP which is similar to the one used in Zhang et al. (2010). The CCP of high quality cultural-common topics should be lower than those topics which are not shared by the English and Chinese reader comments. The calculation of CCP consists of two steps: 1) For each k ∈K, we translate ϕe k into Chinese word distribution T(ϕe k) and translate ϕc k English word distribution T(ϕc k). To translate ϕe k and ϕc k, we look up the bilingual dictionary and conduct word-toword translation. If one word has several translations, we distribute its probability mass equally to each English translation. 2) We use T(ϕe k) to fit the holdout Chinese comments C and T(ϕc k) to fit the holdout English comments E using Eq. 13 (Blei et al., 2003). Eq. 13 depicts the calculation of CCP. The lower the CCP value is, the better the performance is. 
CCP = 1 2 exp{− P d∈E P w∈d P k∈K log p(k|θd)p(w|T(ϕc k)) P d∈E N e d }+ 1 2 exp{− P d∈C P w∈d P k∈K log p(k|θd)p(w|T(ϕe k)) P d∈C N c d } (13) For each detected common topic, we wish to evaluate the degree of commonality. We design another metric called “topic commonality distance” denoted by TCD. We first evaluate the KLdivergence between the English topic and translated Chinese topic. We also evaluate the KLdivergence between the Chinese topic and translated English topic. Then TCD is computed as the average sum of the two KL-divergences. The lower the TCD measure is, the better the topic is. Event LDA PCLSA MCTA 1 1963.57 1842.24 1784.05 2 1940.03 1831.55 1756.92 3 1958.09 1905.43 1808.01 4 1916.49 1847.16 1775.32 5 1901.44 1797.92 1744.07 6 1916.70 1853.66 1786.77 7 1945.22 1897.15 1824.10 8 1942.29 1862.14 1749.43 9 1943.53 1856.70 1739.66 10 1866.23 1815.44 1749.49 avg. 1929.36 1850.94 1771.78 Table 2: Topic quality evaluation as measured by CCP The topic detected by PCLSA is a mixture of English and Chinese words. We obtain English representation and Chinese representation of the topic by the conditional probabilities as given in Eq. 14. p(we|ϕe k) = p(we|ϕk) P w∈Λe p(w|ϕk) p(wc|ϕc k) = p(wc|ϕk) P w∈Λc p(w|ϕk) (14) 4.4.2 Experimental Results The average CCP values of the three models are shown in Table 2. Our MCTA model achieves the best performance compared with PCLSA and LDA. Both MCTA and PCLSA achieve a better CCP than LDA because they can bridge the language gap in the multilingual news reader comments to some extent. Compared with PCLSA, our MCTA model demonstrates a 4.2% improvement. Our MCTA model provides a better characterization of the collections. One reason is that our MCTA model learns the word distribution of cultural-common topics using an effective topic modeling with a prior Dirichlet distribution. It is similar to the advantage of LDA over PLSA. Moreover, the bilingual constraints in PCLSA cannot handle the original natural word co-occurrence well in each language. In contrast, MCTA represents cultural-common topics as a mixture of the original topics and the translated topics, which capture the comment semantics more effectively. The average TCD of three models are shown in Table 3. Our MCTA outperforms the two comparative methods. The cultural-common topics iden681 Event LDA PCLSA MCTA 1 0.029 0.0075 0.0042 2 0.029 0.0072 0.0043 3 0.033 0.0076 0.0046 4 0.031 0.0075 0.0046 5 0.033 0.0086 0.0069 6 0.029 0.0066 0.0058 7 0.036 0.0080 0.0044 8 0.033 0.0079 0.0034 9 0.034 0.0088 0.0036 10 0.029 0.0067 0.0036 avg. 0.032 0.0076 0.0045 Table 3: Topic quality evaluation as measured by TCD tified by MCTA have better topic commonality because our MCTA model can capture the common semantics between news reader comments in different languages. 4.4.3 Determining Number of Topics As mentioned in Section 4.3, we use the holdout data of one event to determine K. For each λ ∈{0.2, 0.5, 0.8}, we vary K in the range of [5, 200]. Figure 2 depicts the effect of K on the cross-collection perplexity as measured by CCP. We can see that CCP decreases with the increase of the number of topics. Moreover, through manual inspection we observed that when K is 30 or more, even though CCP decreases, the topics will be repeated. Similar observations for the number of topics can be found in Paul and Girju (2009). Therefore, we set K = 30. We can also see that our model is not very sensitive to the balance parameter λ. 
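For readers who want to reproduce this sweep, the core of CCP is an ordinary held-out perplexity computed with translated topics. The Python sketch below shows that core under one common reading of Eq. 13, namely that the topic mixture is summed inside the logarithm; CCP then averages the two directions (English comments scored with translated Chinese topics and vice versa) with weight 1/2. The array layout and the smoothing constant are our assumptions, not details given above.

import numpy as np

def holdout_perplexity(docs, theta, phi_translated):
    # docs: list of held-out documents, each a list of word ids in phi's vocabulary
    # theta: (D, K) per-document topic proportions from Eq. 10
    # phi_translated: (K, V) topic-word distributions after dictionary translation, T(phi) in Eq. 13
    log_lik, n_tokens = 0.0, 0
    for d, doc in enumerate(docs):
        for w in doc:
            log_lik += np.log(theta[d] @ phi_translated[:, w] + 1e-12)  # guard against log(0)
        n_tokens += len(doc)
    return np.exp(-log_lik / n_tokens)

Selecting K then amounts to fitting the model for each candidate K in [5, 200], computing CCP on the holdout comments, and stopping where the curve flattens and topics begin to repeat, which is how K = 30 was chosen.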
0 50 100 150 200 K 1680 1700 1720 1740 1760 1780 1800 1820 1840 CCP λ = 0.2 λ = 0.5 λ = 0.8 Figure 2: The effect of K Event LDA PCLSA MCTA 1 0.128 0.117 0.138 2 0.144 0.126 0.158 3 0.122 0.117 0.120 4 0.138 0.138 0.169 5 0.128 0.109 0.152 6 0.134 0.138 0.152 7 0.103 0.108 0.111 8 0.110 0.099 0.124 9 0.080 0.085 0.096 10 0.138 0.133 0.154 avg. 0.122 0.117 0.137 Table 4: Topic coherence evaluation 4.5 Topic Coherence Evaluation We also evaluate the coherence of topics generated by PCLSA and MCTA, which indicates the interpretability of topics. Following Newman et al. (2010), we use a pointwise mutual information (PMI) score to measure the topic coherence. We compute the average PMI score of top 20 topic word pairs using Eq. 15. Newman et al. (2010) observed that it is important to use an external data set to evaluate PMI. Therefore, we use a 20word sliding window in Wikipedia (Shaoul, 2010) to identify the co-occurrence of word pairs. PMI(wi, wj) = log P(wi, wj) P(wi)P(wj) (15) The experimental results are shown in Table 4. We can see that our MCTA model generally improves the coherence of the learned topics compared with PCLSA. The word-to-word bilingual constraints in PCLSA are not as effective. On the other hand, our MTCA model incorporates the bilingual translations using auxiliary distributions which incorporate word distributions from the other language on the topic level and can capture common semantics of multilingual reader comments. 5 Application and Case Study We present an application for news comment digest and show some examples of detected culturalcommon discussion topics in Figure 3. Under each event, the system can group reader comments into cultural-common discussion topics which can capture common concerns of readers in different languages. For each common topic, it shows top ranked words and corresponding reader comments 682 Event: MH370 Flight Accident Topic Terms Reader Comments family hope love dead people victim passenger sad sorry life 家庭(family) 家 属(family) 家人(family) 亲人(family) 失 望(disappoint) 希望(hope) 心酸(sad) 愿(wish) 痛(pain) 心痛(sad) I feel sorry for the families of the victims of this flight - this aircraft piece being found probably brings that terrible day back I feel so sorry for the relatives of the missing passengers who are doomed to spend the rest of their lives getting their hopes continuously... The family members should now begin to have a closure as the plane’s flaperon has been identified. The Australians have been proven correct as... 时间真快,一年半的时间过去了,不知那些失去亲人的朋友们走出悲痛了没有?唯愿逝 者在天堂安息,生者在人间安康! 家属朋友们,失去亲人的痛苦的,但生活是美好的,一定要好好生活,让逝者安心! 每一次的提起370 就会让那些失去亲人的家人心痛折磨一次! ... ocean island search India mile area locate Australia drift west 洋流(ocean currents) 印度 洋(Indian ocean) 区 域(Region) 海里(mile) 搜 索(search) 搜寻(search) 搜救(rescure) 海底(sea floor) 澳洲(Australia) 海 域(sea area) They were looking in the West Australian current. That would have brought the part to the north of Australia. If it got into I equatorial current... They need to start their sonar scans about 1000 miles south of the tip of India seeing how the currents in that ocean work, and how long it took for that piece to float to the island so far out. It’s pretty simple to estimate seeing how Fukushima fishing boats travelled a set distance over a set time, given a set current... look at current maps. well off the western coast of Aus is the S. Equitorial Current in the Indian Ocean which flows in a circular counter clockwise pattern. 
It most certainly could have come from a plane that crashed off the AUS coast. 这么多阴谋论者说这是美国搞的鬼,我只能呵呵了,美国的调查结论说是在南印度洋, 在澳大利亚那边,现在发现的位置是不是和美国的调查结果一致?洋流的运动方向和推 测地点、残骸地点是否符合?为什么一定要把空难说成某国的阴谋才甘心? 南印度洋的洋流是自东向西,这个残骸落在这里,那么飞机应该坠毁在东方的海面上。 即使这片残骸属于MH370客机,在留尼汪被发现也并不意味着飞机的失事地点就在留尼 汪。假设飞机在澳大利亚海域坠毁,其残骸很有可能被洋流带到印度洋,一年以后被海 浪冲上安德烈海滩. ... ... Event:ISIS in Iraq Topic Terms Reader Comments muslim islam religion world christian god people believe jew human 信仰(belief) 宗 教(religion) 世界(world) 相信(believe) 全世 界(world) 伊斯兰(Islam) 穆斯林(muslim) 人(people) 犹太(jew) 人 类(human) 1 don’t understand Muslims, Islam or the Holy Qur’an! The aim of Islam is not to instil Sharia over the entire world, Islam preaches that you believe in God worship Him alone and do right good by your belief..... Oh, I get it. It’s about the badness of Muslims being humbled and humiliated in prison by Americans. But IS rapes and mutilates and pilliages... If there was no Muslim religion in Iraq, there would be no ISIS because there would have been no necessity for a thug like Saddam to control... ISIS是个宗教极端组织没错,但是如果ISIS没有下这个令,而是被故意栽赃,其用意显然 不在针对ISIS本身 宗教不是祸首,真正的魁首是打着宗教旗号的极端分子,都说国人没信仰但是一样有右 翼激进分子,激进的民族主义,然后是民粹,最后是种族主义,纳粹不也是这么一步一 步上台的么,历史总是似曾相识。 可以看出一切邪恶都是出自宗教!宗教欺骗人类的另一面!以后别拿我们汉族没有信仰 来说事了!看看你们信仰的后果吧! ... ... Event:AIIB Foundation Topic Terms Reader Comments bank aiib world imf asian develop investment institution infrastructure member 银行(bank) 金融(finance) 世界银行(The World Bank) 世行(The World Bank) 金融机构(Finance institution) 亚洲开发银 行(Asian Development Bank) 成员(member) 国 际货币基金组织(IMF) 国 际(International) 世 界(world) Looks like all the rats are jumping off the sinking world bank and IMF ship. America has pushed their bulling ways long enough and people are... The Federal Reserve, the World Bank, The IMF, and the BIS are failed, self-serving institutions. One can only hope that China will stimulate world growth and the suppression by the west will finally come to an end. The US dollar no longer deserves to be the world’s reserve currency. Bank shopping !!! No more stranglehold by the IMF and World Bank. If Ukraine had only waited another year. Too bad. 把米国和日本排除在亚投行之外,让他们自己单独经营亚洲开发银行和世界银行![哈哈] 欧洲国家被美国坑惨了,世界银行、国际货币基金的钱都在为美国服务,反过来美国又 利用乌克兰危机打压欧元,如今欧洲国家看明白了,还是中国靠普。 世界银行和亚洲开发银行都对亚投行表示欢迎,明显有些言不由衷。信他才怪 ... Figure 3: Some sample common discussion topics of some events 683 according to θe dk and θc dk. Considering the event “MH370 Flight accident”, it shows two of the detected cultural-common topics. The first one indicates that readers pray for the family in the accident and the second one is related to the search of the crashed plane. For the common topic about praying for the family, we can see that the topics contain both English words and Chinese words which are very relevant and share common semantics of “family” and “hope”. Moreover, the corresponding English and Chinese reader comments, both of which mention the family in the accident, illustrate a high coherent common discussion topic. Similarly for the second common topic, there are common semantics between English and Chinese top ranked words about the search of the crashed plane. Some of the English comments and Chinese comments mention the query of the position of the crashed plane. Interesting common topics are also generated for other events, such as the common topic of religion for the event “ISIS in Iraq” and the topic of economic organization for the event “AIIB foundation”. 6 Conclusions We investigate the task of cultural-common discussion topic detection from multilingual news reader comments. 
To tackle the task, we develop a new model called MCTA which can cope with the language gap and extract coherent culturalcommon topics from multilingual news reader comments. We also develop a partially collapsed Gibbs sampler which incorporates the term translation relationship into the detection of culturalcommon topics effectively for model parameter learning. Acknowledgments The work described in this paper is substantially supported by grants from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Code: 14203414) and the Direct Grant of the Faculty of Engineering, CUHK (Project Code: 4055034). This work is also affiliated with the CUHK MoE-Microsoft Key Laboratory of Human-centric Computing and Interface Technologies. We also thank the anonymous reviewers for their insightful comments. References Prakhar Biyani, Cornelia Caragea, and Narayan Bhamidipati. 2015. Entity-specific sentiment classification of yahoo news comments. arXiv preprint arXiv:1506.03775. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, 3:993–1022. Jordan Boyd-Graber and David M Blei. 2009. Multilingual topic models for unaligned text. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, pages 75–82. Jordan Boyd-Graber and Philip Resnik. 2010. Holistic sentiment analysis across languages: Multilingual supervised latent dirichlet allocation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 45–55. S Tamer Cavusgil, Gary Knight, John R Riesenberger, Hussain G Rammal, and Elizabeth L Rose. 2014. International business: strategy, management and the new realities. Pearson Australia. Elena Erosheva, Stephen Fienberg, and John Lafferty. 2004. Mixed-membership models of scientific publications. Proceedings of the National Academy of Sciences, 101(suppl 1):5220–5227. Kosuke Fukumasu, Koji Eguchi, and Eric P Xing. 2012. Symmetric correspondence topic models for multilingual text analysis. In Advances in Neural Information Processing Systems, pages 1286–1294. Honglei Guo, Huijia Zhu, Zhili Guo, Xiaoxun Zhang, and Zhong Su. 2010. Opinionit: a text mining system for cross-lingual opinion analysis. In Proceedings of the 19th ACM International Conference on Information and Knowledge Management, pages 1199–1208. Thomas Hofmann. 2001. Unsupervised learning by probabilistic latent semantic analysis. Machine Learning, 42(1-2):177–196. Jagadeesh Jagarlamudi and Hal Daum´e III. 2010. Extracting multilingual topics from unaligned comparable corpora. In Proceedings of the 32nd European Conference on IR Research, pages 444–456. Springer. Zheng Lin, Xiaolong Jin, Xueke Xu, Weiping Wang, Xueqi Cheng, and Yuanzhuo Wang. 2014. A crosslingual joint aspect/sentiment model for sentiment analysis. In Proceedings of the 23rd ACM International Conference on Information and Knowledge Management, pages 1089–1098. Bin Lu, Chenhao Tan, Claire Cardie, and Benjamin K Tsou. 2011. Joint bilingual sentiment classification with unlabeled parallel corpora. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 320–330. 684 Steve Melluish. 2014. Globalization, culture and psychology. International Review of Psychiatry, 26(5):538–543. David Mimno, Hanna M Wallach, Jason Naradowsky, David A Smith, and Andrew McCallum. 2009. Polylingual topic models. 
In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 880–889. David Newman, Jey Han Lau, Karl Grieser, and Timothy Baldwin. 2010. Automatic evaluation of topic coherence. In Human Language Technologies: the 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 100–108. Xiaochuan Ni, Jian-Tao Sun, Jian Hu, and Zheng Chen. 2009. Mining multilingual topics from wikipedia. In Proceedings of the 18th International Conference on World Wide Web, pages 1155–1156. Michael Paul and Roxana Girju. 2009. Cross-cultural analysis of blogs and forums with mixed-collection topic models. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1408–1417. Nanyun Peng, Yiming Wang, and Mark Dredze. 2014. Learning polylingual topic models from codeswitched social media documents. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 2, pages 674– 679. Martin F Porter. 1980. An algorithm for suffix stripping. Program, 14(3):130–137. Radityo Eko Prasojo, Mouna Kacimi, and Werner Nutt. 2015. Entity and aspect extraction for organizing news comments. In Proceedings of the 24th ACM International Conference on Information and Knowledge Management, pages 233–242. Cyrus Shaoul. 2010. The westbury lab wikipedia corpus. Edmonton, AB: University of Alberta. Ivan Vuli´c and Marie-Francine Moens. 2014. Probabilistic models of cross-lingual semantic similarity in context based on latent cross-lingual concepts induced from comparable data. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 349–362. Ivan Vuli´c, Wim De Smet, and Marie-Francine Moens. 2011. Identifying word translations from comparable corpora using latent topic models. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 479–484. ChengXiang Zhai, Atulya Velivelli, and Bei Yu. 2004. A cross-collection mixture model for comparative text mining. In Proceedings of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 743–748. Duo Zhang, Qiaozhu Mei, and ChengXiang Zhai. 2010. Cross-lingual latent topic extraction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1128– 1137. Tao Zhang, Kang Liu, and Jun Zhao. 2013. Cross lingual entity linking with bilingual topic model. In Proceedings of the 23rd International Joint Conference on Artificial Intelligence, pages 2218–2224. 685
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 686–696, Berlin, Germany, August 7-12, 2016. c⃝2016 Association for Computational Linguistics A Discriminative Topic Model using Document Network Structure Weiwei Yang Computer Science University of Maryland College Park, MD [email protected] Jordan Boyd-Graber Computer Science University of Colorado Boulder, CO Jordan.Boyd.Graber@ colorado.edu Philip Resnik Linguistics and UMIACS University of Maryland College Park, MD [email protected] Abstract Document collections often have links between documents—citations, hyperlinks, or revisions—and which links are added is often based on topical similarity. To model these intuitions, we introduce a new topic model for documents situated within a network structure, integrating latent blocks of documents with a max-margin learning criterion for link prediction using topicand word-level features. Experiments on a scientific paper dataset and collection of webpages show that, by more robustly exploiting the rich link structure within a document network, our model improves link prediction, topic quality, and block distributions. 1 Introduction Documents often appear within a network structure: social media mentions, retweets, and follower relationships; Web pages by hyperlinks; scientific papers by citations. Network structure interacts with the topics in the text, in that documents linked in a network are more likely to have similar topic distributions. For instance, a citation link between two papers suggests that they are about a similar field, and a mentioning link between two social media users often indicates common interests. Conversely, documents’ similar topic distributions can suggest links between them. For example, topic model (Blei et al., 2003, LDA) and block detection papers (Holland et al., 1983) are relevant to our research, so we cite them. Similarly, if a social media user A finds another user B with shared interests, then A is more likely to follow B. Our approach is part of a natural progression of network modeling in which models integrate more information in more sophisticated ways. Some past methods only consider the network itself (Kim and Leskovec, 2012; Liben-Nowell and Kleinberg, 2007), which loses the rich information in text. In other cases, methods take both links and text into account (Chaturvedi et al., 2012), but they are modeled separately, not jointly, limiting the model’s ability to capture interactions between the two. The relational topic model (Chang and Blei, 2010, RTM) goes further, jointly modeling topics and links, but it considers only pairwise document relationships, failing to capture network structure at the level of groups or blocks of documents. We propose a new joint model that makes fuller use of the rich link structure within a document network. Specifically, our model embeds the weighted stochastic block model (Aicher et al., 2014, WSBM) to identify blocks in which documents are densely connected. WSBM basically categorizes each item in a network probabilistically as belonging to one of L blocks, by reviewing its connections with each block. Our model can be viewed as a principled probabilistic extension of Yang et al. (2015), who identify blocks in a document network deterministically as strongly connected components (SCC). Like them, we assign a distinct Dirichlet prior to each block to capture its topical commonalities. 
Jointly, a linear regression model with a discriminative, max-margin objective function (Zhu et al., 2012; Zhu et al., 2014) is trained to reconstruct the links, taking into account the features of documents’ topic and word distributions (Nguyen et al., 2013), block assignments, and inter-block link rates. We validate our approach on a scientific paper abstract dataset and collection of webpages, with citation links and hyperlinks respectively, to predict links among previously unseen documents and from those new documents to training documents. Embedding the WSBM in a network/topic 686    L L a b y A D D D Figure 1: Weighted Stochastic Block Model model leads to substantial improvements in link prediction over previous models; it also improves block detection and topic interpretability. The key advantage in embedding WSBM is its flexibility and robustness in the face of noisy links. Our results also lend additional support for using maxmargin learning for a “downstream” supervised topic model (McAuliffe and Blei, 2008), and that predictions from lexical as well as topic features improves performance (Nguyen et al., 2013). The rest of this paper is organized as follows. Section 2 introduces two previous link-modeling methods, WSBM and RTM. Section 3 presents our methods to incorporate block priors in topic modeling and include various features in link prediction, as well as the aggregated discriminative topic model whose posterior inference is introduced in Section 4. In Section 5 we show how our model can improve link prediction and (often) improve topic coherence. 2 Dealing with Links 2.1 Weighted Stochastic Block Model WSBM (Aicher et al., 2014) is a generalized stochastic block model (Holland et al., 1983; Wang and Wong, 1987, SBM) and predicts nonnegative integer-weight links, instead of binaryweight links. A block is a collection of documents which are densely connected with each other but sparsely connected with documents in other blocks. WSBM assumes that a document belongs to exactly one block. A link connecting two documents in blocks l and l′ has a weight generated from a Poisson distribution with parameters Ωl,l′ which has a Gamma prior with parameters a and b, as Figure 1 shows. The whole generative process is: 1. For each pair of blocks (l, l′) ∈{1, . . . , L}2 (a) Draw inter-block link rate Ωl,l′ ∼Gamma(a, b) 2. Draw block distribution µ ∼Dir(γ) 3. For each document d ∈{1, . . . , D} (a) Draw block assignment yd ∼Mult(µ) Figure 2: SCC can be distracted by spurious links connecting two groups, while WSBM maintains the distinction.   K  ' d N d N d  ' d  ' ,d d B dz d w 'dz ' d w   Figure 3: A Two-document Segment of RTM 4. For each link (d, d′) ∈{1, . . . , D}2 (a) Draw link weight Ad,d′ ∼Poisson(Ωyd,yd′ ) WSBM is a probabilistic block detection algorithm and more robust than some deterministic algorithms like SCC, which is vulnerable to noisy links. For instance, we would intuitively say Figure 2 has two blocks—as denoted by coloring— whether or not the dashed link exists. If the dashed link does not exist, both WSBM and SCC can identify two blocks. However, if the dashed link does exist, SCC will return only one big block that contains all nodes, while WSBM still keeps the nodes in two reasonable blocks. 2.2 Relational Topic Model RTM (Chang and Blei, 2010) is a downstream model that generates documents and links simultaneously (Figure 3). Its generative process is: 1. For each topic k ∈{1, . . . 
, K} (a) Draw word distribution φk ∼Dir(β) (b) Draw topic regression parameter ηk ∼N(0, ν2) 2. For each document d ∈{1, . . . , D} (a) Draw topic distribution θd ∼Dir(α) (b) For each token td,n in document d i. Draw topic assignment zd,n ∼Mult(θd) ii. Draw word wd,n ∼Mult(φzd,n) 3. For each explicit link (d, d′) (a) Draw link weight Bd,d′ ∼Ψ(· | zd, zd′, η) In the inference process, the updating of topic assignments is guided by links so that linked documents are more likely to have similar topic distributions. Meanwhile, the linear regression (whose 687   '  K  L  d N D  z w y Figure 4: Graphical Model of BP-LDA output is fed into link probability function Ψ) is updated to maximize the network likelihood using current topic assignments. 3 Discriminative Topic Model with Block Prior and Various Features Our model is able to identify blocks from the network with an embedded WSBM, extract topic patterns of each block as prior knowledge, and use all this information to reconstruct the links. 3.1 LDA with Block Priors (BP-LDA) As argued in the introduction, linked documents are likely to have similar topic distributions, which can be generalized to documents in the same block. Inspired by this intuition and the block assignment we obtain in the previous section, we want to extract some prior knowledge from these blocks. Thus we propose an LDA with block priors, hence BP-LDA, as shown in Figure 4, which has the following generative process: 1. For each topic k ∈{1, . . . , K} (a) Draw word distribution φk ∼Dir(β) 2. For each block l ∈{1, . . . , L} (a) Draw topic distribution πl ∼Dir(α′) 3. For each document d ∈{1, . . . , D} (a) Draw topic distribution θd ∼Dir(απyd) (b) For each token td,n in document d i. Draw topic assignment zd,n ∼Mult(θd) ii. Draw word wd,n ∼Mult(φzd,n) Unlike conventional LDA, which uses an uninformative topic prior, BP-LDA puts a Dirichlet prior π on each block to capture the block’s topic distribution and use it as an informative prior when drawing each document’s topic distribution. In other words, a document’s topic distribution— i.e., what the document is about—is not just informed by the words present in the document but the broader context of its network neighborhood.   K  ' d N d N d  ' d   L L  d w ' d w dz 'dz  d y ' dy   ' ,d d B Topical Feature Lexical Feature Block Feature Figure 5: A two-document segment of VF-RTM. Various features are denoted by grayscale. Bd,d′ is observed, but we keep it in white background to avoid confusion. 3.2 RTM with Various Features (VF-RTM) Building on Chang and Blei (2010), we want to generate the links between documents based on various features, hence VF-RTM. In addition to topic distributions, VF-RTM also includes documents’ word distributions (Nguyen et al., 2013) and the link rate of two documents’ assigned blocks, with the intent that these additional features improve link generation. VF-RTM involves the relationship between a pair of documents, so it is difficult to show the whole model; therefore Figure 5 illustrates with a two-document segment. The generative process is: 1. For each pair of blocks (l, l′) ∈{1, . . . , L}2 (a) Draw block regression parameter ρl,l′ ∼N(0, ν2) 2. For each topic k ∈{1, . . . , K} (a) Draw word distribution φk ∼Dir(β) (b) Draw topic regression parameter ηk ∼N(0, ν2) 3. For each word v ∈{1, . . . , V } (a) Draw lexical regression parameter τv ∼N(0, ν2) 4. For each document d ∈{1, . . . , D} (a) Draw topic distribution θd ∼Dir(α) (b) For each token td,n in document d i. 
Draw topic assignment zd,n ∼Mult(θd) ii. Draw word wd,n ∼Mult(φzd,n) 5. For each explicit link (d, d′) (a) Draw link weight Bd,d′ ∼Ψ(· | yd, yd′, Ω, zd, zd′, wd, wd′, η, τ, ρ) Links are generated by a link probability function Ψ which takes the regression value Rd,d′ of documents d and d′ as an argument. Assuming documents d and d′ belong to blocks l and l′ respectively, Rd,d′ is Rd,d′ = ηT(zd ◦zd′) + τ T(wd ◦wd′) + ρl,l′Ωl,l′, (1) 688 where zd is a K-length vector with each element zd,k = 1 Nd P n 1 (zd,n = k); wd is a V -length vector with each element wd,v = 1 Nd P n 1 (wd,n = v); ◦denotes the Hadamard (element-wise) product;1 η, τ, and ρ are the weight vectors and matrix for topic-based, lexicalbased and rate-based predictions, respectively. A common choice of Ψ is a sigmoid (Chang and Blei, 2010). However, we instead use hinge loss so that VF-RTM can use the max-margin principle, making more effective use of side information when inferring topic assignments (Zhu et al., 2012). Using hinge loss, the probability that documents d and d′ are linked is Pr (Bd,d′) = exp (−2 max(0, ζd,d′)) , (2) where ζd,d′ = 1−Bd,d′Rd,d′. Positive and negative link weights are denoted by 1 and -1, respectively, in contrast to sigmoid loss. 3.3 Aggregated Model Finally, we put all the pieces together and propose LBH-RTM: RTM with lexical weights (L), block priors (B), and hinge loss (H). Its graphical model is given in Figure 6. 1. For each pair of blocks (l, l′) ∈{1, . . . , L}2 (a) Draw inter-block link rate Ωl,l′ ∼Gamma(a, b) (b) Draw block regression parameter ρl,l′ ∼N(0, ν2) 2. Draw block distribution µ ∼Dir(γ) 3. For each block l ∈{1, . . . , L} (a) Draw topic distribution πl ∼Dir(α′) 4. For each topic k ∈{1, . . . , K} (a) Draw word distribution φk ∼Dir(β) (b) Draw topic regression parameter ηk ∼N(0, ν2) 5. For each word v ∈{1, . . . , V } (a) Draw lexical regression parameter τv ∼N(0, ν2) 6. For each document d ∈{1, . . . , D} (a) Draw block assignment yd ∼Mult(µ) (b) Draw topic distribution θd ∼Dir(απyd) (c) For each token td,n in document d i. Draw topic assignment zd,n ∼Mult(θd) ii. Draw word wd,n ∼Mult(φzd,n) 7. For each link (d, d′) ∈{1, . . . , D}2 (a) Draw link weight Ad,d′ ∼Poisson(Ωyd,yd′ ) 8. For each explicit link (d, d′) (a) Draw link weight Bd,d′ ∼Ψ(· | yd, yd′, Ω, zd, zd′, wd, wd′, η, τ, ρ) A and B are assumed independent in the model, but they can be derived from the same set of links in practice. 1As Chang and Blei (2010) point out, the Hadamard product is able to capture similarity between hidden topic representations of two documents. Algorithm 1 Sampling Process 1: Set λ = 1 and initialize topic assignments 2: for m = 1 to M do 3: Optimize η, τ, and ρ using L-BFGS 4: for d = 1 to D do 5: Draw block assignment yd 6: for each token n do 7: Draw a topic assignment zd,n 8: end for 9: for each explicit link (d, d′) do 10: Draw λ−1 d,d′ (and then λd,d′) 11: end for 12: end for 13: end for Link set A is primarily used to find blocks, so it treats all links deterministically. In other words, the links observed in the input are considered explicit positive links, while the unobserved links are considered explicit negative links, in contrast to the implicit links in B. In terms of link set B, while it adopts all explicit positive links from the input, it does not deny the existence of unobserved links, or implicit negative links. Thus B consists of only explicit positive links. However, to avoid overfitting, we sample some implicit links and add them to B as explicit negative links. 
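To make the link component concrete, the sketch below evaluates the regression value of Eq. 1 and the hinge-loss link probability of Eq. 2 for a single document pair. It is an illustration of the equations, not the authors' implementation; the arguments (the empirical topic and word proportions of the pair, the weights eta and tau, the matrix rho, and the block rates Omega) are assumed to be NumPy arrays.

import numpy as np

def link_score(z_d, z_dp, w_d, w_dp, eta, tau, rho, omega, l, lp):
    # Eq. 1: eta^T (z_d o z_d') + tau^T (w_d o w_d') + rho_{l,l'} * Omega_{l,l'},
    # where o is the element-wise (Hadamard) product.
    return eta @ (z_d * z_dp) + tau @ (w_d * w_dp) + rho[l, lp] * omega[l, lp]

def link_prob_hinge(B, R):
    # Eq. 2: Pr(B_{d,d'}) = exp(-2 max(0, zeta)) with zeta = 1 - B * R and B in {+1, -1}.
    return np.exp(-2.0 * max(0.0, 1.0 - B * R))

A sampled implicit link simply enters this function with B = -1, which is how the explicit negative links described above contribute to the objective.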
4 Posterior Inference Posterior inference (Algorithm 1) consists of the sampling of topic and block assignments and the optimization of weight vectors and matrix.2 We add an auxiliary variable λ for hinge loss (see Section 4.2), so the updating of λ is not necessary when using sigmoid loss. The sampling procedure is an iterative process after initialization (Line 1). In each iteration, we first optimize the weight vectors and matrix (Line 3) before updating documents’ block assignments (Line 5) and topic assignments (Line 7). When using hinge loss, the auxiliary variable λ for every explicit link needs to be updated (Line 10). 4.1 Sampling Block Assignments Block assignment sampling is done by Gibbs sampling, using the block assignments and links in A 2More details about sampling procedures and equations in this section (including the sampling and optimization equations using sigmoid loss) are available in the supplementary material. 689    '   K  L   L L ' d N d N d  ' d  a b d y ' dy ' ,d d A ' ,d d B dz d w 'dz ' d w     Figure 6: The graphical model of LBH-RTM for two documents, in which a weighted stochastic block model is embedded (γ, µ, y, a, b, Ω, and A). Each document’s topic distribution has an informative prior π. The model predicts links between documents (B) based on topics (z), words (w), and interblock link rates (Ω), using a max-margin objective. excluding document d and its related links.3 The probability that d is assigned to block l is Pr(yd = l | A−d, y−d, a, b, γ) ∝  N −d l + γ  × Y l′ S−d e (l, l′) + b S−d w (l,l′)+a S−d e (l, l′) + b + Se(d, l′) S−d w (l,l′)+a+Sw(d,l′) Sw(d,l′)−1 Y i=0  S−d w (l, l′) + a + i  , (3) where Nl is the number of documents assigned to block l; −d denotes that the count excludes document d; Sw(d, l) and Sw(l, l′) are the sums of link weights from document d to block l and from block l to block l′, respectively: Sw(d, l) = X d′:yd′ =l Ad,d′ (4) Sw(l, l′) = X d:yd=l Sw(d, l′). (5) Se(d, l) is the number of possible links from document d to l (i.e., assuming document d connects to every document in block l), which equals Nl. The number of possible links from block l to l′ is Se(l, l′) (i.e., assuming every document in block l connects to every document in block l′): Se(l, l′) =  Nl × Nl′ l ̸= l′ 1 2Nl(Nl −1) l = l′. (6) If we rearrange the terms of Equation 3 and put the terms which have Sw(d, l′) together, the value 3These equations deal with undirected edges, but they can be adapted for directed edges. See supplementary material. of Sw(d, l′) increases (i.e., document d is more densely connected with documents in block l′), the probability of assigning d to block l decreases exponentially. Thus if d is more densely connected with block l and sparsely connected with other blocks, it is more likely to be assigned to block l. 4.2 Sampling Topic Assignments Following Polson and Scott (2011), by introducing an auxiliary variable λd,d′, the conditional probability of assigning td,n, the n-th token in document d, to topic k is Pr(zd,n = k | z−d,n, w−d,n, wd,n = v, yd = l) ∝  N −d,n d,k + απ−d,n l,k  N −d,n k,v + β N −d,n k,· + V β Y d′ exp  −(ζd,d′ + λd,d′)2 2λd,d′  , (7) where Nd,k is the number of tokens in document d that are assigned to topic k; Nk,v denotes the count of word v assigned to topic k; Marginal counts are denoted by ·; −d,n denotes that the count excludes td,n; d′ denotes all documents that have explicit links with document d. 
The block topic prior π−d,n l,k is estimated based on the maximal path assumption (Cowans, 2006; Wallach, 2008): π−d,n l,k = P d′:yd′ =l N −d,n d′,k + α′ P d′:yd′ =l N −d,n d′,· + Kα′ . (8) the link prediction argument ζd,d′ is ζd,d′ = 1 −Bd,d′  ηk Nd,· Nd′,k Nd′,· + R−d,n d,d′  . (9) 690 where R−d,n d,d′ = K X k=1 ηk N −d,n d,k Nd,· Nd′,k Nd′,· + V X v=1 τv Nd,v Nd,· Nd′,v Nd′,· + ρyd,yd′ Ωyd,yd′ . (10) Looking at the first term of Equation 7, the probability of assigning td,n to topic k depends not only on its own topic distribution, but also the topic distribution of the block it belongs to. The links also matter: Equation 9 gives us the intuition that a topic which could increase the likelihood of links is more likely to be selected, which forms an interaction between topics and the link graph— the links are guiding the topic sampling while updating topic assignments is maximizing the likelihood of the link graph. 4.3 Parameter Optimization While topic assignments are updated iteratively, the weight vectors and matrix η, τ, and ρ are optimized in each global iteration over the whole corpus using L-BFGS (Liu and Nocedal, 1989). It takes the likelihood of generating B using η, τ, ρ, and current topic and block assignments as the objective function, and optimizes it using the partial derivatives with respect to every weight vector/matrix element. The log likelihood of B using hinge loss is L(B) ∝− X d,d′ R2 d,d′ −2(1 + λd,d′)Bd,d′Rd,d′ 2λd,d′ − K X k=1 η2 k 2ν2 − V X v=1 τ 2 v 2ν2 − L X l=1 L X l′=1 ρ2 l,l′ 2ν2 . (11) We also need to update the auxiliary variable λd,d′. Since the likelihood of λd,d′ follows a generalized inverse Gaussian distribution GIG  λd,d′; 1 2, 1, ζ2 d,d′  , we sample its reciprocal λ−1 d,d′ from an inverse Gaussian distribution as Pr λ−1 d,d′ | z, w, η, τ, ρ  = IG  λ−1 d,d′; 1 |ζd,d′|, 1  . (12) 5 Experimental Results We evaluate using the two datasets. The first one is CORA dataset (McCallum et al., 2000). After removing stopwords and words that appear in fewer than ten documents, as well as documents with no Model PLR CORA WEBKB RTM (Chang and Blei, 2010) 419.33 141.65 LCH-RTM (Yang et al., 2015) 459.55 150.32 BS-RTM 391.88 127.25 LBS-RTM 383.25 125.41 LBH-RTM 360.38 111.79 Table 1: Predictive Link Rank Results words or links, our vocabulary has 1,240 unique words. The corpus has 2,362 computer science paper abstracts with 4,231 citation links. The second dataset is WEBKB. It is already preprocessed and has 1,703 unique words in vocabulary. The corpus has 877 web pages with 1,608 hyperlinks. We treat all links as undirected. Both datasets are split into 5 folds, each further split into a development and test set with approximately the same size when used for evaluation. 5.1 Link Prediction Results In this section, we evaluate LBH-RTM and its variations on link prediction tasks using predictive link rank (PLR). A document’s PLR is the average rank of the documents to which it has explicit positive links, among all documents, so lower PLR is better. Following the experiment setup in Chang and Blei (2010), we train the models on the training set and predict citation links within held-out documents as well as from held-out documents to training documents. 
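As a concrete reference point for this evaluation, the sketch below computes mean PLR from a matrix of predicted link scores. It is our own illustrative helper: the paper does not specify tie-breaking or whether a document is excluded from its own candidate list, so both choices here are assumptions.

import numpy as np

def predictive_link_rank(scores, gold_links):
    # scores: (D, D) predicted link scores (higher = more likely), e.g. R_{d,d'}
    # gold_links: dict mapping a held-out document id to the set of ids it truly links to
    per_doc = []
    for d, truth in gold_links.items():
        candidates = [c for c in range(scores.shape[0]) if c != d]
        order = sorted(candidates, key=lambda c: -scores[d, c])
        rank_of = {c: r + 1 for r, c in enumerate(order)}
        per_doc.append(np.mean([rank_of[t] for t in truth]))
    return float(np.mean(per_doc))

Lower values are better, since they mean the truly linked documents are ranked near the top of the candidate list.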
We tune two important parameters—α and negative edge ratio (the ratio of the number of sampled negative links to the number of explicit positive links)—on the development set and apply the trained model which performs the best on the development set to the test set.4 The cross validation results are given in Table 1, where models are differently equipped with lexical weights (L), WSBM prior (B), SCC prior (C), hinge loss (H), and sigmoid loss (S).5 Link prediction generally improves with incremental application of prior knowledge and more sophisticated learning techniques. The embedded WSBM brings around 6.5% and 10.2% improvement over RTM in PLR on the 4We also tune the number of blocks for embedded WSBM and set it to 35 (CORA) and 20 (WEBKB). The block topic priors are not applied on unseen documents, since we don’t have available links. 5The values of RTM are different from the result reported by Chang and Blei (2010), because we re-preprocessed the CORA dataset and used different parameters. 691 CORA and WEBKB datasets, respectively. This indicates that the blocks identified by WSBM are reasonable and consistent with reality. The lexical weights also help link prediction (LBS-RTM), though less for BS-RTM. This is understandable since word distributions are much sparser and do not make as significant a contribution as topic distributions. Finally, hinge loss improves PLR substantially (LBH-RTM), about 14.1% and 21.1% improvement over RTM on the CORA and WEBKB datasets respectively, demonstrating the effectiveness of max-margin learning. The only difference between LCH-RTM and LBH-RTM is the block detection algorithm. However, their link prediction performance is poles apart—LCH-RTM even fails to outperform RTM. This implies that the quality of blocks identified by SCC is not as good as WSBM, which we also illustrate in Section 5.4. 5.2 Illustrative Example We illustrate our model’s behavior qualitatively by looking at two abstracts, Koplon and Sontag (1997) and Albertini and Sontag (1992) from the CORA dataset, designated K and A for short. Paper K studies the application of Fourier-type activation functions in fully recurrent neural networks. Paper A shows that if two neural networks have equal behaviors as “black boxes”, they must have the same number of neurons and the same weights (except sign reversals). From the titles and abstracts, we can easily find that both of them are about neural networks (NN). They both contain words like neural, neuron, network, recurrent, activation, and nonlinear, which corresponds to the topic with words neural, network, train, learn, function, recurrent, etc. There is a citation between K and A. The ranking of this link improves as the model gets more sophisticated (Table 2), except LCH-RTM, which is consistent with our PLR results. In Figure 7, we also show the proportions of topics that dominate the two documents according to the various models. There are multiple topics dominating K and A according to RTM (Figure 7(a)). As the model gets more sophisticated, the NN topic proportion gets higher. Finally, only the NN topic dominates the two documents when LBH-RTM is applied (Figure 7(e)). LCH-RTM gives the highest proportion to the NN topic (Figure 7(b)). 
However, the NN topic Model Rank of the Link RTM 1,265 LCH-RTM 1,385 BS-RTM 635 LBS-RTM 132 LBH-RTM 106 Table 2: PLR of the citation link between example documents K and A (described in Section 5.2) Model FET LLR CORA WEBKB CORA WEBKB RTM 0.1330 0.1312 3.001 6.055 LCH-RTM 0.1418 0.1678 3.071 6.577 BS-RTM 0.1415 0.1950 3.033 6.418 LBS-RTM 0.1342 0.1963 2.984 6.212 LBH-RTM 0.1453 0.2628 3.105 6.669 Table 3: Average Association Scores of Topics splits into two topics and the proportions are not assigned to the same topic, which greatly brings down the link prediction performance. The splitting of the NN topic also happens in other models (Figure 7(a) and 7(d)), but they assign proportions to the same topic(s). Further comparing with LBH-RTM, the blocks detected by SCC are not improving the modeling of topics and links—some documents that should be in two different blocks are assigned to the same one, as we will show in Section 5.4. 5.3 Topic Quality Results We use an automatic coherence detection method (Lau et al., 2014) to evaluate topic quality. Specifically, for each topic, we pick out the top n words and compute the average association score of each pair of words, based on the held-out documents in development and test sets. We choose n = 25 and use Fisher’s exact test (Upton, 1992, FET) and log likelihood ratio (Moore, 2004, LLR) as the association measures (Table 3). The main advantage of these measures is that they are robust even when the reference corpus is not large. Coherence improves with WSBM and maxmargin learning, but drops a little when adding lexical weights except the FET score on the WEBKB dataset, because lexical weights are intended to improve link prediction performance, not topic quality. Topic quality of LBH-RTM is also better than that of LCH-RTM, suggesting that WSBM benefits topic quality more than SCC. 692 0.0 0.2 0.4 0.6 0.8 1.0 NN-1 NN-2 Sequential Model Vision Belief Network Knowledge Base Parallel Computing A K (a) RTM Topic Proportions 0.0 0.2 0.4 0.6 0.8 1.0 NN-1 NN-2 Sequential Model Algorithm Bound A K (b) LCH-RTM Topic Proportions 0.0 0.2 0.4 0.6 0.8 1.0 NN System Behavior Research Grant Optimization-1 Optimization-2 A K (c) BS-RTM Topic Proportions 0.0 0.2 0.4 0.6 0.8 1.0 NN-1 NN-2 Random Process Optimization Evolutionary Comput. A K (d) LBS-RTM Topic Proportions 0.0 0.2 0.4 0.6 0.8 1.0 NN Bayesian Network Linear Function MCMC A K (e) LBH-RTM Topic Proportions Figure 7: Topic proportions given by various models on our two illustrative documents (K and A, described in described in Section 5.2). As the model gets more sophisticated, the NN topic proportion gets higher and finally dominates the two documents when LBH-RTM is applied. Though LCH-RTM gives the highest proportion to the NN topic, it splits the NN topic into two and does not assign the proportions to the same one. Block 1 2 #Nodes 42 84 #Links in the Block 55 142 #Links across Blocks 2 Table 4: Statistics of Blocks 1 (learning theory) and 2 (Bayes nets), which are merged in SCC. 5.4 Block Analysis In this section, we illustrate the effectiveness of the embedded WSBM over SCC.6 As we have argued, WSBM is able to separate two internally densely-connected blocks even if there are few links connecting them, while SCC tends to merge them in this case. As an example, we focus on two blocks in the CORA dataset identified by WSBM, designated Blocks 1 and 2. Some statistics are given in Table 4. The two blocks are very sparsely connected, but comparatively quite densely connected inside either block. 
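Statistics of this kind follow directly from the block assignments and the link list; the short helper below (our own, not part of the model) counts within-block and cross-block links in the way Table 4 reports them.

from collections import Counter

def block_link_counts(edges, block_of):
    # edges: iterable of undirected (d, d') document pairs
    # block_of: dict mapping each document id to its block id
    within, across = Counter(), Counter()
    for d, dp in edges:
        b, bp = block_of[d], block_of[dp]
        if b == bp:
            within[b] += 1
        else:
            across[tuple(sorted((b, bp)))] += 1
    return within, across

Applied to Blocks 1 and 2 this gives 55 and 142 within-block links against only 2 cross-block links, which is exactly the pattern that leads SCC to merge the two blocks while WSBM keeps them apart.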
The two blocks’ topic distributions also reveal their differences: abstracts in Block 1 mainly focus on learning theory (learn, algorithm, bound, result, etc.) and MCMC (markov, chain, distribution, converge, etc.). Abstracts in Block 2, however, have higher 6We omit the comparison of WSBM with other models, because this has been done by Aicher et al. (2014). In addition, WSBM is a probabilistic method while SCC is deterministic. They are not comparable quantitatively, so we compare them qualitatively. weights on Bayesian networks (network, model, learn, bayesian, etc.) and Bayesian estimation (estimate, bayesian, parameter, analysis, etc.), which differs from Block 1’s emphasis. Because of the two inter-block links, SCC merges the two blocks into one, which makes the block topic distribution unclear and misleads the sampler. WSBM, on the other hand, keeps the two blocks separate, which generates a high-quality prior for the sampler. 6 Related Work Topic models are widely used in information retrieval (Wei and Croft, 2006), word sense disambiguation (Boyd-Graber et al., 2007), dialogue segmentation (Purver et al., 2006), and collaborative filtering (Marlin, 2003). Topic models can be extended in either upstream or downstream way. Upstream models generate topics conditioned on supervisory information (Daum´e III, 2009; Mimno and McCallum, 2012; Li and Perona, 2005). Downstream models, on the contrary, generates topics and supervisory data simultaneously, which turns unsupervised topic models to (semi-)supervised ones. Supervisory data, like labels of documents and links between documents, can be generated from either a maximum likelihood estimation approach (McAuliffe and Blei, 2008; Chang and 693 Blei, 2010; Boyd-Graber and Resnik, 2010) or a maximum entropy discrimination approach (Zhu et al., 2012; Yang et al., 2015). In block detection literature, stochastic block model (Holland et al., 1983; Wang and Wong, 1987, SBM) is one of the most basic generative models dealing with binary-weighted edges. SBM assumes that each node belongs to only one block and each link exists with a probability that depends on the block assignments of its connecting nodes. It has been generalized for degreecorrection (Karrer and Newman, 2011), bipartite structure (Larremore et al., 2014), and categorial values (Guimer`a and Sales-Pardo, 2013), as well as nonnegative integer-weight network (Aicher et al., 2014, WSBM). Our model combines both topic model and block detection in a unified framework. It takes text, links, and the interaction between text and links into account simultaneously, contrast to the methods that only consider graph structure (Kim and Leskovec, 2012; Liben-Nowell and Kleinberg, 2007) or separate text and links (Chaturvedi et al., 2012). 7 Conclusions and Future Work We introduce LBH-RTM, a discriminative topic model that jointly models topics and document links, detecting blocks in the document network probabilistically by embedding the weighted stochastic block model, rather via connectedcomponents as in previous models. A separate Dirichlet prior for each block captures its topic preferences, serving as an informed prior when inferring documents’ topic distributions. Maxmargin learning learns to predict links from documents’ topic and word distributions and block assignments. Our model better captures the connections and content of paper abstracts, as measured by predictive link rank and topic quality. 
LBH-RTM yields topics with better coherence, though not all techniques contribute to the improvement. We support our quantitative results with qualitative analysis looking at a pair of example documents and at a pair of blocks, highlighting the robustness of embedded WSBM over blocks defined as SCC. As next steps, we plan to explore model variations to support a wider range of use cases. For example, although we have presented a version of the model defined using undirected binary weight edges in the experiment, it would be straightforward to adapt to model both directed/undirected and binary/nonnegative real weight edges. We are also interested in modeling changing topics and vocabularies (Blei and Lafferty, 2006; Zhai and Boyd-Graber, 2013). In the spirit of treating links probabilistically, we plan to explore application of the model in suggesting links that do not exist but should, for example in discovering missed citations, marking social dynamics (Nguyen et al., 2014), and identifying topically related content in multilingual networks of documents (Hu et al., 2014). Acknowledgment This research has been supported in part, under subcontract to Raytheon BBN Technologies, by DARPA award HR0011-15-C-0113. Boyd-Graber is also supported by NSF grants IIS/1320538, IIS/1409287, and NCSE/1422492. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsors. References Christopher Aicher, Abigail Z. Jacobs, and Aaron Clauset. 2014. Learning latent block structure in weighted networks. Journal of Complex Networks. Francesca Albertini and Eduardo D. Sontag. 1992. For neural networks, function determines form. In Proceedings of IEEE Conference on Decision and Control. David M. Blei and John D. Lafferty. 2006. Dynamic topic models. In Proceedings of the International Conference of Machine Learning. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research. Jordan Boyd-Graber and Philip Resnik. 2010. Holistic sentiment analysis across languages: Multilingual supervised latent Dirichlet allocation. In Proceedings of Empirical Methods in Natural Language Processing. Jordan Boyd-Graber, David M. Blei, and Xiaojin Zhu. 2007. A topic model for word sense disambiguation. In Proceedings of Empirical Methods in Natural Language Processing. Jonathan Chang and David M. Blei. 2010. Hierarchical relational models for document networks. The Annals of Applied Statistics. 694 Snigdha Chaturvedi, Hal Daum´e III, Taesun Moon, and Shashank Srivastava. 2012. A topical graph kernel for link prediction in labeled graphs. In Proceedings of the International Conference of Machine Learning. Philip J. Cowans. 2006. Probabilistic Document Modelling. Ph.D. thesis, University of Cambridge. Hal Daum´e III. 2009. Markov random topic fields. In Proceedings of the Association for Computational Linguistics. Roger Guimer`a and Marta Sales-Pardo. 2013. A network inference method for large-scale unsupervised identification of novel drug-drug interactions. PLoS Computational Biology. Paul W. Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt. 1983. Stochastic blockmodels: First steps. Social Networks. Yuening Hu, Ke Zhai, Vlad Eidelman, and Jordan Boyd-Graber. 2014. Polylingual tree-based topic models for translation domain adaptation. In Proceedings of the Association for Computational Linguistics. Brian Karrer and Mark EJ Newman. 2011. 
Stochastic blockmodels and community structure in networks. Physical Review E. Myunghwan Kim and Jure Leskovec. 2012. Latent multi-group membership graph model. In Proceedings of the International Conference of Machine Learning. Ren´ee Koplon and Eduardo D. Sontag. 1997. Using Fourier-neural recurrent networks to fit sequential input/output data. Neurocomputing. Daniel B. Larremore, Aaron Clauset, and Abigail Z. Jacobs. 2014. Efficiently inferring community structure in bipartite networks. Physical Review E. Jey Han Lau, David Newman, and Timothy Baldwin. 2014. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In Proceedings of the Association for Computational Linguistics. Fei-Fei Li and Pietro Perona. 2005. A Bayesian hierarchical model for learning natural scene categories. In Computer Vision and Pattern Recognition. David Liben-Nowell and Jon Kleinberg. 2007. The link-prediction problem for social networks. Journal of the American Society for Information Science and Technology. Dong C. Liu and Jorge Nocedal. 1989. On the limited memory BFGS method for large scale optimization. Mathematical Programming. Benjamin Marlin. 2003. Modeling user rating profiles for collaborative filtering. In Proceedings of Advances in Neural Information Processing Systems. Jon D. McAuliffe and David M. Blei. 2008. Supervised topic models. In Proceedings of Advances in Neural Information Processing Systems. Andrew Kachites McCallum, Kamal Nigam, Jason Rennie, and Kristie Seymore. 2000. Automating the construction of Internet portals with machine learning. Information Retrieval. David Mimno and Andrew McCallum. 2012. Topic models conditioned on arbitrary features with Dirichlet-multinomial regression. In Proceedings of Uncertainty in Artificial Intelligence. Robert Moore. 2004. On log-likelihood-ratios and the significance of rare events. In Proceedings of Empirical Methods in Natural Language Processing. Viet-An Nguyen, Jordan Boyd-Graber, and Philip Resnik. 2013. Lexical and hierarchical topic regression. In Proceedings of Advances in Neural Information Processing Systems. Viet-An Nguyen, Jordan Boyd-Graber, Philip Resnik, Deborah Cai, Jennifer Midberry, and Yuanxin Wang. 2014. Modeling topic control to detect influence in conversations using nonparametric topic models. Machine Learning. Nicholas G. Polson and Steven L. Scott. 2011. Data augmentation for support vector machines. Bayesian Analysis. Matthew Purver, Thomas L. Griffiths, Konrad P. K¨ording, and Joshua B. Tenenbaum. 2006. Unsupervised topic modelling for multi-party spoken discourse. In Proceedings of the Association for Computational Linguistics. Graham JG Upton. 1992. Fisher’s exact test. Journal of the Royal Statistical Society. Hanna M. Wallach. 2008. Structured Topic Models for Language. Ph.D. thesis, University of Cambridge. Yuchung J. Wang and George Y. Wong. 1987. Stochastic blockmodels for directed graphs. Journal of the American Statistical Association. Xing Wei and W. Bruce Croft. 2006. LDA-based document models for ad-hoc retrieval. In Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval. Weiwei Yang, Jordan Boyd-Graber, and Philip Resnik. 2015. Birds of a feather linked together: A discriminative topic model using link-based priors. In Proceedings of Empirical Methods in Natural Language Processing. Ke Zhai and Jordan Boyd-Graber. 2013. Online latent Dirichlet allocation with infinite vocabulary. 
In Proceedings of the International Conference of Machine Learning. 695 Jun Zhu, Amr Ahmed, and Eric P. Xing. 2012. MedLDA: Maximum margin supervised topic models. Journal of Machine Learning Research. Jun Zhu, Ning Chen, Hugh Perkins, and Bo Zhang. 2014. Gibbs max-margin topic models with data augmentation. Journal of Machine Learning Research. 696