Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 753–762, Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1070

A Nested Attention Neural Hybrid Model for Grammatical Error Correction

Jianshu Ji†, Qinlong Wang†, Kristina Toutanova‡∗, Yongen Gong†, Steven Truong†, Jianfeng Gao§
†Microsoft AI & Research ‡Google Research §Microsoft Research, Redmond
†§{jianshuj,qinlwang,yongeg,stevetr,jfgao}@microsoft.com ‡[email protected]

Abstract

Grammatical error correction (GEC) systems strive to correct both global errors in word order and usage, and local errors in spelling and inflection. Building on recent work in neural machine translation, we propose a new hybrid neural model with nested attention layers for GEC. Experiments show that the new model can effectively correct errors of both types by incorporating word and character-level information, and that the model significantly outperforms previous neural models for GEC as measured on the standard CoNLL-14 benchmark dataset. Further analysis also shows that the superiority of the proposed model can be largely attributed to the use of the nested attention mechanism, which has proven particularly effective in correcting local errors that involve small edits in orthography.

1 Introduction

One of the most successful approaches to grammatical error correction (GEC) is to cast the problem as (monolingual) machine translation (MT), where we translate from possibly ungrammatical English sentences to corrected ones (Brockett et al., 2006; Gao et al., 2010; Junczys-Dowmunt and Grundkiewicz, 2016). Such systems, which are based on phrase-based MT models typically trained on large sets of sentence-correction pairs, can correct global errors in word order and usage as well as local errors in spelling and inflection. The approach has proven superior to systems based on local classifiers that can only fix focused errors in prepositions, determiners, or inflected forms (Rozovskaya and Roth, 2016).

∗This work was conducted while the third author worked at Microsoft Research.

Recently, neural machine translation (NMT) systems have achieved substantial improvements in translation quality over phrase-based MT systems (Sutskever et al., 2014; Bahdanau et al., 2015). Thus, there is growing interest in applying neural systems to GEC (Yuan and Briscoe, 2016; Xie et al., 2016). In this paper, we significantly extend previous work and explore new neural models to meet the unique challenges of GEC. The core component of most NMT systems is a sequence-to-sequence (S2S) model, which encodes a sequence of source words into a vector and then generates a sequence of target words from that vector. Unlike phrase-based MT models, the S2S model can capture long-distance, or even global, word dependencies, which are crucial to correcting global grammatical errors and helping users achieve native-speaker fluency (Sakaguchi et al., 2016). Thus, the S2S model is expected to perform better on GEC than phrase-based models.
However, as we will show in this paper, to achieve the best performance on GEC, we still need to extend the standard S2S model to address several task-specific challenges, which we will describe below. First, a GEC model needs to deal with an extremely large vocabulary that consists of a large number of words and their (mis)spelling variations. Second, the GEC model needs to capture structure at different levels of granularity in order to correct errors of different types. For example, while correcting spelling and local grammar errors requires only word-level or sub-word level information, e.g., violets →violates (spelling) or violate →violates (verb form), correcting errors in word order or usage requires global semantic relationships among phrases and words. Standard approaches in neural machine translation, also applied to grammatical error correction by Yuan and Briscoe (2016), address the large vocabulary problem by restricting the vocabulary to a limited number of high-frequency words and re753 sorting to standard word translation dictionaries to provide translations for the words that are out of the vocabulary (OOV). However, this approach often fails to take into account the OOVs in context for making correction decisions, and does not generalize well to correcting words that are unseen in the parallel training data. An alternative approach, proposed by Xie et al. (2016), applies a character-level sequence to sequence neural model. Although the model eliminates the OOV issue, it cannot effectively leverage word-level information for GEC, even if it is used together with a separate word-based language model. Our solution to the challenges mentioned above is a novel, hybrid neural model with nested attention layers that infuse both word-level and character-level information. The architecture of the model is illustrated in Figure 1. The word-level information is used for correcting global grammar and fluency errors while the character-level information is used for correcting local errors in spelling or inflected forms. Contextual information is crucial for GEC. Using the proposed model, by combining embedding vectors and attention at both word and character levels, we model all contextual words, including OOVs, in a unified context vector representation. In particular, as we will discuss in Section 5, the character-level attention layer captures most useful information for correcting local errors that involve small edits in orthography. Our model differs substantially from the wordlevel S2S model of Yuan and Briscoe (2016) and the character-level S2S model of Xie et al. (2016) in the way we infuse information at both the word level and the character level. We extend the wordcharacter hybrid model of Luong and Manning (2016), which was originally developed for machine translation, by introducing a character attention layer. This allows the model to learn substitution patterns at both the character level and the word level in an end-to-end fashion, using sentencecorrection pairs. We validate the effectiveness of our model on the CoNLL-14 benchmark dataset (Ng et al., 2014). Results show that the proposed model outperforms all previous neural models for GEC, including the hybrid model of Luong and Manning (2016), which we apply to GEC for the first time. 
When integrated with a large word-based n-gram language model, our GEC system achieves an F0.5 of 45.15 on CoNLL-14, substantially exceeding the previously reported top performance of 40.56 achieved by a neural model combined with an external language model (Xie et al., 2016).

Figure 1: Architecture of Nested Attention Hybrid Model

2 Related Work

A variety of classifier-based and MT-based techniques have been applied to grammatical error correction. The CoNLL-14 shared task overview paper of Ng et al. (2014) provides a comparative evaluation of approaches. Two notable advances after the shared task have been in the areas of combining classifiers and phrase-based MT (Rozovskaya and Roth, 2016) and adapting phrase-based MT to the GEC task (Junczys-Dowmunt and Grundkiewicz, 2016). The latter work reported the highest performance to date on the task, an F0.5 score of 49.5 on the CoNLL-14 test set. This method integrates discriminative training toward the task-specific evaluation function, a rich set of features, and multiple large language models. Neural approaches to the task are less explored. We believe that the advances from Junczys-Dowmunt and Grundkiewicz (2016) are complementary to the ones we propose for neural MT, and could be integrated with neural models to achieve even higher performance.

Two prior works explored sequence-to-sequence neural models for GEC (Xie et al., 2016; Yuan and Briscoe, 2016), while Chollampatt et al. (2016) integrated neural features into a phrase-based system for the task. Neural models were also applied to the related sub-task of grammatical error identification (Schmaltz et al., 2016). Yuan and Briscoe (2016) demonstrated the promise of neural MT for GEC but did not adapt the basic sequence-to-sequence model with attention to the task's unique challenges, falling back to traditional word-alignment models to address vocabulary coverage with a post-processing heuristic. Xie et al. (2016) built a character-level sequence-to-sequence model, which achieves open vocabulary and character-level modeling, but has difficulty with global word-level decisions.

The primary focus of our work is the integration of character- and word-level reasoning in neural models for GEC, to capture global fluency errors as well as local errors in spelling and closely related morphological variants, while obtaining open vocabulary coverage. This is achieved with the help of character- and word-level encoders and decoders with two nested levels of attention. Our model is inspired by advances in sub-word level modeling in neural machine translation. We build mostly on the hybrid model of Luong and Manning (2016) and expand its capability to correct rare words through fine-grained character-level attention. We directly compare our model to the one of Luong and Manning (2016) on the grammar correction task. Alternative methods for MT include modeling of word pieces to achieve open vocabulary (Sennrich et al., 2016), and more recently, fully character-level modeling (Lee et al., 2017). None of these models integrates two nested levels of attention, although an empirical evaluation of these approaches for GEC would also be interesting.

3 Nested Attention Hybrid Model

Our model is hybrid, and uses both word-level and character-level representations. It consists of a word-based sequence-to-sequence model as a backbone, and additional character-level encoder, decoder, and attention components, which focus on words that are outside the word-level model's vocabulary.
3.1 Word-based sequence-to-sequence model as backbone

The word-based backbone closely follows the basic neural sequence-to-sequence architecture with attention proposed by Bahdanau et al. (2015) and applied to grammatical error correction by Yuan and Briscoe (2016). For completeness, we give a sketch here. It uses recurrent neural networks to encode the input sentence and to decode the output sentence. Given a sequence of embedding vectors, corresponding to a sequence of input words $x$:

$x = (x_1, \ldots, x_T)$   (1)

the encoder creates a corresponding context-specific sequence of hidden state vectors $e$:

$e = (h_1, \ldots, h_T)$

The hidden state $h_t$ at time $t$ is computed as:

$f_t = \mathrm{GRU}_{enc_f}(f_{t-1}, x_t)$,  $b_t = \mathrm{GRU}_{enc_b}(b_{t+1}, x_t)$,  $h_t = [f_t; b_t]$,

where $\mathrm{GRU}_{enc_f}$ and $\mathrm{GRU}_{enc_b}$ stand for gated recurrent unit functions as described in Cho et al. (2014). We use the symbol GRU with different subscripts to represent GRU functions using different sets of parameters (for example, we use the $enc_f$ and $enc_b$ subscripts to denote the parameters of the forward and backward word-level encoder units).

The decoder network is also an RNN using GRU units, and defines a sequence of hidden states $\bar{d}_1, \ldots, \bar{d}_S$ used to define the probability of an output sequence $y_1, \ldots, y_S$ as follows. The context vector $c_s$ at time step $s$ is computed as:

$c_s = \sum_{j=1}^{T} \alpha_{sj} h_j$   (2)

where:

$\alpha_{sk} = \frac{u_{sk}}{\sum_{j=1}^{T} u_{sj}}$   (3)

$u_{sk} = \phi_1(d_s)^{\top} \phi_2(h_k)$   (4)

Here $\phi_1$ and $\phi_2$ denote feedforward linear transformations followed by a tanh nonlinearity. The next hidden state $\bar{d}_s$ is then defined as:

$d_s = \mathrm{GRU}_{dec}(\bar{d}_{s-1}, y_{s-1})$,  $\bar{d}_s = \mathrm{ReLU}(W[c_s; d_s])$

where $y_{s-1}$ is the embedding of the output token at time $s-1$. ReLU indicates rectified linear units (Hahnloser et al., 2000). The probability of each target word $y_s$ is computed as:

$p(y_s \mid y_{<s}, x) = \mathrm{softmax}(g(\bar{d}_s))$,

where $g$ is a function that maps the decoder state into a vector of size equal to the dimensionality of the target vocabulary. The model is trained by minimizing the cross-entropy loss, which for a given $(x, y)$ pair is:

$\mathrm{Loss}(x, y) = -\sum_{s=1}^{S} \log p(y_s \mid y_{<s}, x)$   (5)

For parallel training data $C$, the loss is:

$\mathrm{Loss} = -\sum_{(x,y) \in C} \sum_{s=1}^{S} \log p(y_s \mid y_{<s}, x)$
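To make the attention step in Eqs. (2)–(4) concrete, the following NumPy sketch computes the attention weights, the context vector, and the combined state for a single decoding step. It is a minimal illustration under assumptions, not the authors' implementation: the names W_phi1, W_phi2, and W_comb and all shapes are placeholders, and the sketch follows Eq. (3) literally (a plain sum normalization rather than a softmax).

```python
import numpy as np

def word_attention_step(d_s, H, W_phi1, W_phi2, W_comb):
    """One word-level attention step (Eqs. 2-4).
    d_s: decoder state, shape (d,); H: encoder states h_1..h_T, shape (T, 2d).
    W_phi1: (a, d), W_phi2: (a, 2d), W_comb: (d, 3d) -- hypothetical shapes."""
    phi1 = np.tanh(W_phi1 @ d_s)                 # phi_1(d_s)
    phi2 = np.tanh(H @ W_phi2.T)                 # phi_2(h_k) for all k, shape (T, a)
    u = phi2 @ phi1                              # u_sk = phi_1(d_s)^T phi_2(h_k)
    alpha = u / u.sum()                          # Eq. (3): literal sum normalization
    c_s = alpha @ H                              # Eq. (2): context vector, shape (2d,)
    d_bar = np.maximum(0.0, W_comb @ np.concatenate([c_s, d_s]))  # ReLU(W[c_s; d_s])
    return alpha, c_s, d_bar
```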
3.2 Hybrid encoder and decoder with two nested levels of attention

The word-level backbone models a limited vocabulary of source and target words, and represents out-of-vocabulary tokens with special UNK symbols. In the standard word-level NMT approach, valuable information is lost for source OOV words, and target OOV words are predicted using post-processing heuristics.

Hybrid encoder. Our hybrid architecture overcomes the loss of source information in the word-level backbone by building up compositional representations of the source OOV words using a character-level recurrent neural network with GRU units. These representations are used in place of the special source UNK embeddings in the backbone, and contribute to the contextual encoding of all source tokens. For example, a three-word input sentence where the last term is out-of-vocabulary will be represented as the following vector of embeddings in the word-level model: $x = (x_1, x_2, x_3)$, where $x_3$ would be the embedding for the UNK symbol. The hybrid encoder builds up a word embedding for the third word based on its character sequence $x^c_1, \ldots, x^c_M$. The encoder computes a sequence of hidden states $e^c$ for this character sequence with a forward character-level GRU network:

$e^c = (h^c_1, \ldots, h^c_M)$   (6)

The last state $h^c_M$ is used as an embedding of the unknown word. The sequence of embeddings for our example three-word sequence becomes $x = (x_1, x_2, h^c_M)$. We use the same dimensionality for word embedding vectors $x_i$ and composed character sequence vectors $h^c_M$ to ensure the two ways to define embeddings are compatible. Our hybrid source encoder architecture is similar to the one proposed by Luong and Manning (2016).

Nested attention hybrid decoder. In traditional word-based sequence-to-sequence models, special target UNK tokens are used to represent outputs that are outside the target vocabulary. A post-processing UNK-replacement method is then used (Cho et al., 2015; Yuan and Briscoe, 2016) to replace these special tokens with target words. The hybrid model of Luong and Manning (2016) uses a jointly trained character-level decoder to generate target words corresponding to UNK tokens, and outperforms the traditional approach in the machine translation task. However, unlike machine translation, models for grammar correction conduct "translation" in the same language, and often need to apply a small number of local edits to the character sequence of a source word corresponding to the target UNK word. For example, rare but correct words such as entity names need to be copied as is, and local spelling errors or errors in inflection need to be corrected. The architecture of Luong and Manning (2016) does not have direct access to a source character sequence, but only uses a single fixed-dimensionality embedding of source unknown words, aggregated with additional contextual information from the source.

To address the needs of the grammatical error correction task, we propose a novel hybrid decoder with two nested levels of attention: word-level and character-level. The character-level attention serves to provide the decoder with direct access to the relevant source character sequence. More specifically, the probability of each target word is defined as follows. For words in the target vocabulary, the probability is defined by the word-level backbone. For words outside the vocabulary, the probability of each token is the probability of UNK according to the backbone, multiplied by the probability of the word's character sequence. The probability of the target character sequence corresponding to an UNK token at position $s$ in the target is defined using a character-level decoder. As in Luong and Manning (2016), the "separate path" architecture is used to capture the relevant context and define the initial state for the character-level decoder:

$\hat{d}_s = \mathrm{ReLU}(\hat{W}[c_s; d_s])$

where $\hat{W}$ are parameters different from $W$, and $\hat{d}_s$ is not used by the word-level model in predicting the subsequent tokens, but is only used to initialize the character-level decoder.

To be able to attend to the relevant source character sequence when generating the target character sequence, we use the concept of hard attention (Xu et al., 2015), but use an argmax approximation for inference instead of sampling. A similar approach to represent discrete hidden structure in a variety of architectures is used in Kong et al. (2017). The source index $z_s$ corresponding to the target position $s$ is defined according to the word-level attention model:

$z_s = \arg\max_{k \in 0 \ldots T-1} \alpha_{sk}$

where $\alpha_{sk}$ are the intermediate outputs of the word-level attention model we described in Eq. (3). The character-level decoder generates a character sequence $y^c = (y^c_1, \ldots, y^c_N)$, conditioned on the initial vector $\hat{d}_s$ and the source index $z_s$.
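The character-level composition of OOV source words (Eq. 6) and the hard-attention choice of the source index z_s can be sketched as follows. The GRU cell is written out with hypothetical weight names and no bias terms; this is an illustrative simplification, not the released code.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_cell(h_prev, x, p):
    """A plain GRU cell; p holds hypothetical weight matrices Wz, Uz, Wr, Ur, Wh, Uh."""
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h_prev)
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h_prev)
    h_tilde = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h_prev))
    return (1 - z) * h_prev + z * h_tilde

def compose_oov_word(char_embeddings, p, dim):
    """Forward character-level GRU over an OOV word (Eq. 6).
    Returns the last state h^c_M (used as the word embedding) and the full
    sequence e^c of character states (used later by the nested attention)."""
    h = np.zeros(dim)
    states = []
    for x_c in char_embeddings:
        h = gru_cell(h, x_c, p)
        states.append(h)
    return h, np.stack(states)

def hard_source_index(alpha_s):
    """z_s = argmax_k alpha_sk: the source position the character-level
    decoder attends to when the word-level decoder emits UNK."""
    return int(np.argmax(alpha_s))
```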
The characters are generated using a hidden state vector $d^c_n$ at each time step, via $\mathrm{softmax}(g^c(d^c_n))$, where $g^c$ maps the state to the target character vocabulary space. If the source word $x_{z_s}$ is in the source vocabulary, the model is analogous to the one of Luong and Manning (2016) and does not use character-level attention: the source context is available only in aggregated form to initialize the state of the decoder. The state $d^c_n$ for step $n$ in the character-level decoder is then defined as follows, where $\mathrm{GRU}_{cdec}$ denotes the gated recurrent cell of this decoder, with its own parameters:

$d^c_n = \begin{cases} \mathrm{GRU}_{cdec}(\hat{d}_s, y^c_{n-1}) & n = 0 \\ \mathrm{GRU}_{cdec}(d^c_{n-1}, y^c_{n-1}) & n > 0 \end{cases}$

In contrast, if the corresponding source token $x_{z_s}$ is also an out-of-vocabulary word, we define a second, nested level of character attention and use it in the character-level decoder. The character-level attention focuses on individual characters of the source word $x_{z_s}$. If $e^c$ are the source character hidden vectors computed as in Eq. (6), the recurrence equations for the character-level decoder with nested attention are:

$\bar{d}^c_n = \mathrm{ReLU}(W^c[c^c_n; d^c_n])$

$d^c_n = \begin{cases} \mathrm{GRU}_{cdecNested}(\hat{d}_s, y^c_{n-1}) & n = 0 \\ \mathrm{GRU}_{cdecNested}(\bar{d}^c_{n-1}, y^c_{n-1}) & n > 0 \end{cases}$

where $c^c_n$ is the context vector obtained using character-level attention over the sequence $e^c$ and the last state of the character-level decoder $d^c_n$, computed following Equations 2, 3 and 4, but using a different set of parameters. These equations show that the character-level decoder with nested attention can use both the word-level state $\hat{d}_s$ and the character-level context $c^c_n$ and hidden state $d^c_n$ to perform global and local editing operations.

Since we introduced two architectures for the character-level decoder depending on whether the source word $x_{z_s}$ is OOV, the combined loss function for end-to-end training is defined as:

$\mathrm{Loss}_{total} = \mathrm{Loss}_w + \alpha \mathrm{Loss}_{c1} + \beta \mathrm{Loss}_{c2}$

Here $\mathrm{Loss}_w$ is the standard word-level loss in Eq. (5); the character-level losses $\mathrm{Loss}_{c1}$ and $\mathrm{Loss}_{c2}$ are losses for target OOV words corresponding to known and unknown source tokens, respectively. $\alpha$ and $\beta$ are hyper-parameters that balance the loss terms.

As seen, our proposed nested attention hybrid model uses character-level attention only when both a predicted target word and its corresponding source input word are OOV. While the model can be naturally generalized to integrate character-level attention for known words, the original hybrid model proposed by Luong and Manning (2016) does not use any character-level information for known words. Thus, for a controlled evaluation of the impact of adding character-level attention only, in this paper we limit character-level attention to OOV words, which already use characters as a basis for building their embedding vectors. A thorough investigation of the impact of character-level information in the encoder, attention, and decoder for known words as well is an interesting topic for future research.

Decoding for word-level and hybrid models. Beam search is used to decode hypotheses according to the word-level backbone model. For the hybrid model architecture, word-level beam search is conducted first; for each target UNK token, character-level beam search is then used to generate a corresponding target word.

4 Experiments

4.1 Dataset and Evaluation

We use standard, publicly available datasets for training and evaluation. One data source is the NUS Corpus of Learner English (NUCLE) (Dahlmeier et al., 2013), which is provided as a training set for the CoNLL-13 and CoNLL-14 shared tasks.
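Returning briefly to Section 3.2 before the data description, the nested character-level recurrence and the combined objective above can be sketched as below. The sketch reuses gru_cell from the earlier sketch; the attention score mirrors Eqs. (2)–(4) with separate character-level parameters, and all parameter names are hypothetical placeholders.

```python
import numpy as np

def nested_char_decoder_step(state_prev, yc_prev, e_c, p):
    """One step of the character-level decoder with nested attention.
    For n = 0, state_prev is the initial vector d_hat_s; for n > 0 it is the
    previous combined state d_bar^c_{n-1}.  e_c holds the source character
    states from Eq. (6)."""
    dc_n = gru_cell(state_prev, yc_prev, p["gru"])                    # recurrence
    scores = np.tanh(e_c @ p["W2"].T) @ np.tanh(p["W1"] @ dc_n)       # character-level scores
    alpha_c = scores / scores.sum()                                   # character attention weights
    cc_n = alpha_c @ e_c                                              # character context c^c_n
    dc_bar = np.maximum(0.0, p["Wc"] @ np.concatenate([cc_n, dc_n]))  # ReLU(W^c[c^c_n; d^c_n])
    return dc_bar, dc_n

def combined_loss(loss_w, loss_c1, loss_c2, alpha=0.5, beta=0.5):
    """Loss_total = Loss_w + alpha * Loss_c1 + beta * Loss_c2; the paper reports
    alpha = beta = 0.5 for the nested model, selected on the development set."""
    return loss_w + alpha * loss_c1 + beta * loss_c2
```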
From the original corpus of size about 60K parallel sentences, we randomly selected close to 5K sentence pairs for use as a validation set, and 45K parallel sentences for use in training. A second data source 757 Training Validation Development Test #Sent pairs 2,608,679 4,771 1,381 1,312 Table 1: Overview of the datasets used. Source #Sent pairs NUCLE 45,422 CLC 1,517,174 lang-8 1,046,083 Total 2,608,679 Table 2: Training data by source. is the Cambridge Learner Corpus (CLC) (Nicholls, 2003), from which we extracted a substantially larger set of parallel sentences. Finally, we used additional training examples from the Lang-8 Corpus of Learner English v1.0 (Tajiri et al., 2012). As Lang-8 data is crowd-sourced, we used heuristics to filter out noisy examples: we removed sentences longer than 100 words and sentence pairs where the correction was substantially shorter than the input text. Table 2 shows the number of sentence pairs from each source used for training. We evaluate the performance of the models on the standard sets from the CoNLL-14 shared task (Ng et al., 2014). We report final performance on the CoNLL-14 test set without alternatives, and analyze model performance on the CoNLL-13 development set (Dahlmeier et al., 2013). We use the development and validation sets for model selection. The sizes of all datasets in number of sentences are shown in Table 1. We report performance in F0.5-measure, as calculated by the m2scorer— the official implementation of the scoring metric in the shared task. 1 Given system outputs and gold-standard edits, m2scorer computes the F0.5 measure of a set of system edits against a set of gold-standard edits. 4.2 Baseline We evaluate our model in comparison to the strong baseline of a word-based neural sequenceto-sequence model with attention, with postprocessing for handling out-of-vocabulary words (Yuan and Briscoe, 2016); we refer to this model as word NMT+UNK replacement. Like Yuan and Briscoe (2016), we use a traditional wordalignment model (GIZA++) to derive a wordcorrection lexicon from the parallel training set. However, in decoding, we don’t use GIZA++ to find the corresponding source word for each tar1http://www.comp.nus.edu.sg/˜nlp/sw/ m2scorer.tar.gz get OOV, but follow Cho et al. (2015), Section 3.3 to use the NMT system’s attention weights instead. The target OOV is then replaced by the most likely correction of the source word from the wordcorrection lexicon, or by the source word itself if there are no available corrections. 4.3 Training Details and Results The embedding size for all word and characterlevel encoders and decoders is set to 1000, and the hidden unit size is also 1000. To reproduce the model of Yuan and Briscoe (2016), we selected the word vocabulary for the baseline by choosing the 30K most frequent words in the source and target respectively to form the source and target vocabularies. In preliminary experiments for the hybrid models, we found that selecting the same vocabulary of 30K words for the source and target based on combined frequency was better (.003 in F0.5) and use that method for vocabulary selection instead. However, there was no gain observed by using such a vocabulary selection method in the baseline. Although the source and target vocabularies in the hybrid models are the same, like in the word-level model, the embedding parameters for source and target words are not shared. The hyper-parameters for the losses in our models are selected based on the development set and set as follows: α = β = 0.5. 
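As an aside on data preparation, the Lang-8 filtering heuristic described in Section 4.1 can be written as a short predicate. The 0.5 length-ratio threshold and whitespace tokenization are assumptions; the paper only states that corrections "substantially shorter" than the input were removed.

```python
def keep_lang8_pair(source, correction, max_len=100, min_ratio=0.5):
    """Drop sentences longer than 100 words and pairs whose correction is
    much shorter than the input (min_ratio is a hypothetical threshold)."""
    src, cor = source.split(), correction.split()
    if len(src) > max_len or len(cor) > max_len:
        return False
    return len(cor) >= min_ratio * len(src)
```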
All models are trained with mini-batch size of 128 (batches are shuffled), initial learning rate of 0.0003 and a 0.95 decay ratio if the cost increases in two consecutive 100 iterations. The gradient is rescaled whenever its norm exceeds 10, and dropout is used with a probability of 0.15. Parameters are uniformly initialized in [− √ (3) √ 1000, √ (3) √ 1000]. We perform inference on the validation set every 5000 iterations to log word-level cost and characterlevel costs; we save parameter values for the model every 10000 iterations as well as the end of each epoch. The stopping point for training is selected based on development set F0.5 among the top 20 parameter sets with best validation set value of the loss function. Training of the nested attention hybrid model takes approximately five days on a Tesla k40m GPU. The basic hybrid model trains in around four days and the word-level backbone trains in approximately three days. Table 3 shows the performance of the baseline and our nested attention hybrid model on the development and test sets. In addition to the word-level 758 Model Performance Dev Test Word NMT + UNK replacement 26.17 38.77 Hybrid model 28.49 40.44 Nested Attention Hybrid Model 28.61 41.53 Table 3: F0.5 results on the CoNLL-13 and CoNLL-14 test sets of main model architectures. baseline, we include the performance of a hybrid model with a single level of attention, which follows the work of Luong and Manning (2016) for machine translation, and is the first application of a hybrid word/character-level model to grammatical error correction. Based on hyper-parameter selection, the character-level component weight of the loss is α = 1 for the basic hybrid model. As shown in Table 3, our implementation of the word NMT+UNK replacement baseline approaches the performance of the one reported in Yuan and Briscoe (2016) (38.77 versus 39.9). We attribute the difference to differences in the training set and the word-alignment methods used. Our reimplementation serves to provide a controlled experimental evaluation of the impact of hybrid models and nested attention on the GEC task. As seen, our nested attention hybrid model substantially improves upon the baseline, achieving a gain of close to 3 points on the test set. The hybrid word/character model with a single level of attention brings a large improvement as well, showing the importance of character-level information for this task. We delve deeper into the impact of nested attention for the hybrid model in Section 5. 4.4 Integrating a Web-scale Language Model The value of large language models for grammatical error correction is well known, and such models have been used in classifier and MT-based systems. To establish the potential of such models in word-based neural sequence-to-sequence systems, we integrate a web-scale count-based language model. In particular, we use the modified Kneser-Ney 5-gram language model trained from Common Crawl (Buck et al., 2014), made available for download by Junczys-Dowmunt and Grundkiewicz (2016). Candidates generated by neural models are reranked using the following linear interpolation of log probabilities: sy|x = log PNN(y|x) + λ log PLM(y). Here λ is a hyper-parameter that balances the weights of the neural network model and the language model. 
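The language-model integration is a simple log-linear reranking of candidate corrections. A sketch of the candidate selection and of the grid search over λ described in the next paragraph is shown below; log_p_nn and log_p_lm are hypothetical callables wrapping the neural model and the Common Crawl 5-gram LM, and f05 stands for an m2scorer-style metric.

```python
def rerank(source, candidates, log_p_nn, log_p_lm, lam):
    """Pick the candidate maximizing s(y|x) = log P_NN(y|x) + lambda * log P_LM(y)."""
    return max(candidates, key=lambda y: log_p_nn(source, y) + lam * log_p_lm(y))

def tune_lambda(dev_set, nbest, log_p_nn, log_p_lm, f05):
    """Grid-search lambda over [0.0, 2.0] with step 0.1, selecting by dev F0.5."""
    def dev_score(lam):
        hyps = [rerank(x, nbest(x), log_p_nn, log_p_lm, lam) for x, _ in dev_set]
        return f05(hyps, [gold for _, gold in dev_set])
    return max((round(0.1 * i, 1) for i in range(21)), key=dev_score)
```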
We tuned λ separately Model Performance Dev Test Character-based NMT + LM (Xie et al., 2016) 40.56 Word NMT + UNK replacement + LM 31.73 42.82 Hybrid model + LM 33.21 44.99 Nested Attention Hybrid Model + LM 33.47 45.15 Table 4: F0.5 results on the CoNLL-13 and CoNLL-14 test sets of main model architectures, when combined with a large language model. for each neural model variant, by exploring values in the range [0.0, 2.0] with step size 0.1, and selecting according to development set F0.5. The selected values of λ are: 1.6 for word NMT + UNK replacement and 1.0 for the nested attention model. Table 4 shows the impact of the LM when combined with the neural models implemented in this work. The table also lists the results reported by Xie et al. (2016), for their character-level neural model combined with a large word-level language model. Our best results exceed the ones reported in the prior work by more than 4 points, although we should note that Xie et al. (2016) used a smaller parallel data set for training. 5 Analysis We analyze the impact of sub-word level information and the two nested levels of attention in more detail by looking at the performance of the models on different segments of the data. In particular, we analyze the performance of the models on sentences containing OOV source words versus ones without OOV words, and corrections to orthographically similar versus dissimilar word forms. 5.1 Performance by Segment: OOV versus Non-OOV We present a comparative performance analysis of models on the CoNLL-13 development set. First, we divide the set into two segments: OOV and NonOOV, based on whether there is at least one OOV word in the given source input. Table 5 shows that both hybrid architectures substantially outperform the word-level model in both segments of the data. The additional nested character-level attention of our hybrid model brings a sizable improvement over the basic hybrid model in the OOV segment and a small degradation in the non-OOV segment. We should note that in future work characterlevel attention can be added for non-OOV source words in the nested attention model, which could improve performance on this segment as well. 759 Model NonOOV OOV Overall Word NMT + UNK replacement 27.61 21.57 26.17 Hybrid model 29.36 25.92 28.49 Nested Attention Hybrid Model 29.00 27.39 28.61 Table 5: F0.5 results on the CoNLL-13 set of main model architectures, on different segments of the set according to whether the input contains OOVs. source This greatly violets the rights of people . gold This greatly violates the rights of people . word NMT + UNK replacement This greatly violets the rights of people . Nested Attention Hybrid Model This greatly violates the rights of people . Table 6: An example sentence from the OOV segment where the nested attention hybrid model improves performance. Table 6 shows an example where the nested attention hybrid model successfully corrects a misspelling resulting in an OOV word on the source, whereas the baseline word-level system simply copies the source word without fixing the error (since this particular error is not observed in the parallel training set). 5.2 Impact of Nested Attention on Different Error Types To analyze more precisely the impact of the additional character-level attention introduced by our design, we continue to investigate the OOV segment in more detail. 
The concept of edit, which is also used by the official M2 score metric, is defined as a minimal pair of corresponding sub-strings in a source sentence and a correction. For example, in the sentence fragment pair "Even though there is a risk of causing harms to someone, people still are prefers to keep their pets without a leash." → "Even though there is a risk of causing harm to someone, people still prefer to keep their pets without a leash.", the minimal edits are "harms → harm" and "are prefers → prefer". The F0.5 score is computed using weighted precision and recall of the set of a system's edits against one or more sets of reference edits.

For our in-depth analysis, we classify edits in the OOV segment into two types, small changes and large changes, based on whether the source and target phrase of the edit are orthographically similar or not. More specifically, we say that the target and source phrases are orthographically similar iff the character edit distance is at most 2 and the source or target is at most 8 characters long, or the edit ratio is below 0.25, where

$\text{edit ratio} = \frac{\text{character edit distance}}{\min(\mathrm{len}(src), \mathrm{len}(tgt)) + 0.1}$,

$\mathrm{len}(\cdot)$ denotes the number of characters, and $src$ and $tgt$ denote the two sides of the edit. There are 307 gold edits in the "small changes" portion of the CoNLL-13 OOV segment, and 481 gold edits in the "large changes" portion.

Small Changes Portion              P       R       F0.5
Hybrid model                       43.86   16.29   32.77
Nested Attention Hybrid Model      48.25   17.92   36.04
Large Changes Portion              P       R       F0.5
Hybrid model                       32.52    8.32   20.56
Nested Attention Hybrid Model      33.05    8.11   20.46

Table 7: Precision, Recall and F0.5 results on CoNLL-13, on the "small changes" and "large changes" portions of the OOV segment.

Our hypothesis is that the additional character-level attention layer is particularly useful to model edits among orthographically similar words. Table 7 contrasts the impact of character-level attention on the two portions of the data. We can see that the gains in the "small changes" portion are indeed quite large, indicating that the fine-grained character-level attention empowers the model to more accurately correct confusions among phrases with high character-level similarity. The impact in the "large changes" portion is slightly positive in precision and slightly negative in recall. Thus most of the benefit of the additional character-level attention stems from improvements in the "small changes" portion.

Table 8 shows an example input which illustrates the precision gain of the nested attention hybrid model. The input sentence has a source OOV word which is correct. The hybrid model introduces an error in this word, because it uses only a single source context vector, aggregating the character-level embedding of the source OOV word together with other source words. The additional character-level attention layer in the nested hybrid model enables the correct copying of this long source OOV word, without employing the heuristic mechanism of the word-level NMT system.

source                          Population ageing : A more and more attention-getting topic
gold                            Population ageing : A more and more attention-getting topic
Word NMT + UNK replacement      Population ageing : A more and more attention-getting topic
Hybrid Model                    Population ageing : A more and more attention-teghting topic
Nested Attention Hybrid Model   Population ageing : A more and more attention-getting topic

Table 8: An example where the nested attention hybrid model outperforms the non-nested model.
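The "small change" criterion can be implemented directly. The sketch below reads "the source or target is at most 8 characters long" as min(len(src), len(tgt)) <= 8, which is one possible interpretation of the wording.

```python
def char_edit_distance(a, b):
    """Character-level Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def is_small_change(src, tgt):
    """Orthographically similar iff edit distance <= 2 with a short side (<= 8 chars),
    or edit ratio = dist / (min(len) + 0.1) < 0.25."""
    dist = char_edit_distance(src, tgt)
    if dist <= 2 and min(len(src), len(tgt)) <= 8:
        return True
    return dist / (min(len(src), len(tgt)) + 0.1) < 0.25
```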
6 Conclusions We have introduced a novel hybrid neural model with two nested levels of attention: word-level and character-level. The model addresses the unique challenges of the grammatical error correction task and achieves the best reported results on the CoNLL-14 benchmark among fully neural systems. Our nested attention hybrid model deeply combines the strengths of word and character level information in all components of an end-to-end neural model: the encoder, the attention layers, and the decoder. This enables it to correct both global wordlevel and local character-level errors in a unified way. The new architecture contributes substantial improvement in correction of confusions among rare or orthographically similar words compared to word-level sequence-to-sequence and non-nested hybrid models. Acknowledgements We would like to thank the ACL reviewers for their insightful suggestions, Victoria Zayats for her help with reproducing the baseline word-level NMT system and Yu Shi, Daxin Jiang and Michael Zeng for the helpful discussions. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR. Chris Brockett, William B Dolan, and Michael Gamon. 2006. Correcting ESL errors using phrasal SMT techniques. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics. pages 249–256. Christian Buck, Kenneth Heafield, and Bas Van Ooyen. 2014. N-gram counts and language models from the Common Crawl. In LREC. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). S´ebastien Jean Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of ACL. Shamil Chollampatt, Kaveh Taghipour, and Hwee Tou Ng. 2016. Neural network translation models for grammatical error correction. In Proceedings of IJCAI. Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a large annotated corpus of learner English: The NUS corpus of learner English. In Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications. pages 22–31. Jianfeng Gao, Xiaolong(Shiao-Long) Li, Daniel Micol, Chris Quirk, and Xu Sun. 2010. A large scale rankerbased system for search query spelling correction. In The 23rd International Conference on Computational Linguistics. Richard HR Hahnloser, Rahul Sarpeshkar, Misha A Mahowald, Rodney J Douglas, and H Sebastian Seung. 2000. Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature 405(6789):947–951. Marcin Junczys-Dowmunt and Roman Grundkiewicz. 2016. Phrase-based machine translation is state-ofthe-art for automatic grammatical error correction. In EMNLP. Lingpeng Kong, Chris Alberti, Daniel Andor, Ivan Bogatyy, and David Weiss. 2017. Dragnn: A transitionbased framework for dynamically connected neural networks. Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2017. Fully character-level neural machine translation without explicit segmentation. TACL 5. Minh-Thang Luong and Christopher D. Manning. 2016. 
Achieving open vocabulary neural machine translation with hybrid word-character models. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 761 Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2014. The CoNLL-2014 shared task on grammatical error correction. In CoNLL Shared Task. pages 1–14. Diane Nicholls. 2003. The Cambridge Learner Corpus: Error coding and analysis for lexicography and ELT. In Proceedings of the Corpus Linguistics 2003 conference. volume 16, pages 572–581. Alla Rozovskaya and Dan Roth. 2016. Grammatical error correction: Machine translation and classifiers. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). pages 2205–2215. Keisuke Sakaguchi, Courtney Napoles, Matt Post, and Joel Tetreault. 2016. Reassessing the goals of grammatical error correction: Fluency instead of grammaticality. Transactions of the Association for Computational Linguistics 4:169–182. Allen Schmaltz, Yoon Kim, Alexander M Rush, and Stuart M Shieber. 2016. Sentence-level grammatical error identification as sequence-to-sequence correction. arXiv preprint arXiv:1604.04677 . Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. pages 3104–3112. Toshikazu Tajiri, Mamoru Komachi, and Yuji Matsumoto. 2012. Tense and aspect error correction for ESL learners using global context. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers-Volume 2. pages 198–202. Ziang Xie, Anand Avati, Naveen Arivazhagan, Dan Jurafsky, and Andrew Y. Ng. 2016. Neural language correction with character-based attention. CoRR abs/1603.09727. http://arxiv.org/abs/1603.09727. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C Courville, Ruslan Salakhutdinov, Richard S Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In ICML. volume 14, pages 77–81. Zheng Yuan and Ted Briscoe. 2016. Grammatical error correction using neural machine translation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 762
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 763–772 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1071 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 763–772 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1071 TextFlow: A Text Similarity Measure based on Continuous Sequences Yassine Mrabet [email protected] Halil Kilicoglu [email protected] Lister Hill National Center for Biomedical Communications U.S. National Library of Medicine 8600 Rockville Pike, 20894, Bethesda, MD, USA Dina Demner-Fushman [email protected] Abstract Text similarity measures are used in multiple tasks such as plagiarism detection, information ranking and recognition of paraphrases and textual entailment. While recent advances in deep learning highlighted further the relevance of sequential models in natural language generation, existing similarity measures do not fully exploit the sequential nature of language. Examples of such similarity measures include ngrams and skip-grams overlap which rely on distinct slices of the input texts. In this paper we present a novel text similarity measure inspired from a common representation in DNA sequence alignment algorithms. The new measure, called TextFlow, represents input text pairs as continuous curves and uses both the actual position of the words and sequence matching to compute the similarity value. Our experiments on eight different datasets show very encouraging results in paraphrase detection, textual entailment recognition and ranking relevance. 1 Background The number of pages required to print the content of the World Wide Web was estimated to 305 billion in a 2015 article1. While a big part of this content consists of visual information such as pictures and videos, texts also continue growing at a very high pace. A recent study shows that the average webpage weights 1,200 KB with plain text accounting for up to 16% of that size2. While efficient distribution of textual data and computations are the key to deal with the increas1http://goo.gl/p9lt7V 2http://goo.gl/c41wpa ing scale of textual search, similarity measures still play an important role in refining search results to more specific needs such as the recognition of paraphrases and textual entailment, plagiarism detection and fine-grained ranking of information. These tasks are also often performed on dedicated document collections for domain-specific applications where text similarity measures can be directly applied. Finding relevant approaches to compute text similarity motivated a lot of research in the last decades (Sahami and Heilman, 2006; Hatzivassiloglou et al., 1999), and more recently with deep learning methods (Socher et al., 2011; Yih et al., 2011; Severyn and Moschitti, 2015). However, most of the recent advances focused on designing high performance classification methods, trained and tested for specific tasks and did not offer a standalone similarity measure that could be applied (i) regardless of the application domain and (ii) without requiring training corpora. For instance, Yih and Meek (2007) presented an approach to improve text similarity measures for web search queries with a length ranging from one word to short sequences of words. 
The proposed method is tailored to this specific task, and the training models are not expected to perform well on different kinds of data such as sentences, questions or paragraphs. In a more general study, Achananuparp et al. (2008) compared several text similarity measures for paraphrase recognition, textual entailment, and the TREC 9 question variants task. In their experiments the best performance was obtained with a linear combination of semantic and lexical similarities, including a word order similarity proposed in (Li et al., 2006). This word order similarity is computed by constructing first two vectors representing the common words between two given sentences and using their respective positions in the sentences as term 763 weights. The similarity value is then obtained by subtracting the two vectors and taking the absolute value. While such representation takes into account the actual positions of the words, it does not allow detecting sub-sequence matches and takes into account missing words only by omission. More generally, existing standalone (or traditional) text similarity measures rely on the intersections between token sets and/or text sizes and frequency, including measures such as the Cosine similarity, Euclidean distance, Levenshtein (Sankoff and Kruskal, 1983), Jaccard (Jain and Dubes, 1988) and Jaro (Jaro, 1989). The sequential nature of natural language is taken into account mostly through word n-grams and skipgrams which capture distinct slices of the analysed texts but do not preserve the order in which they appear. In this paper, we use intuitions from a common representation in DNA sequence alignment to design a new standalone similarity measure called TextFlow (XF). The proposed measure uses at the same time the full sequence of input texts in a natural sub-sequence matching approach together with individual token matches and mismatches. Our contributions can be detailed further as follows: • A novel standalone similarity measure which: – exploits the full sequence of words in the compared texts. – is asymmetric in a way that allows it to provide the best performance on different tasks (e.g., paraphrase detection, textual entailment and ranking). – when required, it can be trained with a small set of parameters controlling the impact of sub-sequence matching, position gaps and unmatched words. – provides consistent high performance across tasks and datasets compared to traditional similarity measures. • A neural network architecture to train TextFlow parameters for specific tasks. • An empirical study on both performance consistency and standard evaluation measures, performed with eight datasets from three different tasks. Figure 1: Dot matrix example for 2 DNA sequences (Mount, 2004) • A new evaluation measure, called CORE, used to better show the consistency of a system at high performance using both its rank average and rank variance when compared to competing systems over a set of datasets. 2 The TextFlow Similarity XF is inspired from a dot matrix representation commonly used in pairwise DNA sequence alignment (cf. figure 1). We use a similar dot matrix representation for text pairs and draw a curve oscillating around the diagonal (cf. figure 2). The area under the curve is considered to be the distance between the two text pairs which is then normalized with the matrix surface. For practical computation, we transform this first intuitive representation using the delta of positions as in figure 3. 
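As a preview of the exact computation formalized below in Eqs. (1)–(3), the following Python sketch gives one literal reading of the canonical XF value. The treatment of S_i (the longest contiguous match through x_i) and the indexing conventions involve interpretation on our part, so this is an illustrative sketch rather than a reimplementation of the released Java code.

```python
def delta_p(i, X, Y, m):
    """Minimum position difference for X[i] between X and Y, with the X
    position scaled by |Y|/|X|; missing words cost the reference value m."""
    if X[i] not in Y:
        return float(m)
    scaled = i * len(Y) / len(X)
    return min(abs(scaled - j) for j, y in enumerate(Y) if y == X[i])

def longest_match_through(i, X, Y):
    """Length of the longest contiguous token sequence shared by X and Y that
    covers position i (1 if X[i] is unmatched); a simplified reading of S_i."""
    if X[i] not in Y:
        return 1
    best = 1
    for j, y in enumerate(Y):
        if y != X[i]:
            continue
        left = 0
        while i - left - 1 >= 0 and j - left - 1 >= 0 and X[i - left - 1] == Y[j - left - 1]:
            left += 1
        right = 0
        while i + right + 1 < len(X) and j + right + 1 < len(Y) and X[i + right + 1] == Y[j + right + 1]:
            right += 1
        best = max(best, left + right + 1)
    return best

def textflow(X, Y):
    """Canonical XF(X, Y): 1 minus the normalized area under the
    delta-of-positions curve, split into triangular and rectangular parts."""
    n, m = len(X), len(Y)
    area = 0.0
    for i in range(1, n):
        d_cur, d_prev = delta_p(i, X, Y, m), delta_p(i - 1, X, Y, m)
        t = abs(d_cur - d_prev) / 2.0          # T_{i,i-1}
        r = min(d_cur, d_prev)                 # R_{i,i-1}
        area += (t + r) / longest_match_through(i, X, Y)
    return 1.0 - area / (n * m)
```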
In this setting, the Y axis is the delta of positions of a word occurring in the two texts being compared. If the word does not occur in the target text, the delta is considered to be a maximum reference value (l in figure 2). The semantics are: the bigger the area under the curve is, the lower the similarity between the compared texts. XF values are real numbers in the [0,1] interval, with 1 indicating a perfect match, and 0 indicating that the compared texts do not have any common tokens. With this representation, we are able to take into account all matched words and sub-sequences at the same time.

Figure 2: Illustration of TextFlow Intuition
Figure 3: Practical TextFlow Computation

The exact value for the XF similarity between two texts $X = \{x_1, x_2, \ldots, x_n\}$ and $Y = \{y_1, y_2, \ldots, y_m\}$ is therefore computed as:

$\mathrm{XF}(X, Y) = 1 - \frac{1}{nm} \sum_{i=2}^{n} \frac{1}{S_i} T_{i,i-1}(X, Y) - \frac{1}{nm} \sum_{i=2}^{n} \frac{1}{S_i} R_{i,i-1}(X, Y)$   (1)

with $T_{i,i-1}(X, Y)$ corresponding to the triangular area in the $[i-1, i]$ step (cf. figure 3) and $R_{i,i-1}(X, Y)$ corresponding to the rectangular component. They are expressed as:

$T_{i,i-1}(X, Y) = \frac{|\Delta P(x_i, X, Y) - \Delta P(x_{i-1}, X, Y)|}{2}$   (2)

and:

$R_{i,i-1}(X, Y) = \mathrm{Min}(\Delta P(x_i, X, Y), \Delta P(x_{i-1}, X, Y))$   (3)

With:

• $\Delta P(x_i, X, Y)$ the minimum difference between the positions of $x_i$ in X and Y. The position of $x_i$ in X is multiplied by the factor $\frac{|Y|}{|X|}$ for normalization. If $x_i \notin X \cap Y$, $\Delta P(x_i, X, Y)$ is set to the same reference value equal to $m$ (i.e., the cost of a missing word is set by default to the length of the target text), and:

• $S_i$ is the length of the longest matching sequence between X and Y including the word $x_i$, if $x_i \in X \cap Y$, or 1 otherwise.

XF computation is performed in $O(nm)$ in the worst case, where we have to check all tokens in the target text Y for all tokens in the input text X. XF is an asymmetric similarity measure. Its asymmetric aspect has interesting semantic applications, as we show in the example below (cf. figure 2). The minimum value of XF provided the best differentiation between positive and negative text pairs when looking for semantic equivalence (i.e., paraphrases), while the maximum value was among the top three for the textual entailment example. We conduct this comparison at a larger scale in the evaluation section.

We add 3 parameters to XF in order to represent the importance that should be given to position deltas (position factor $\alpha$), missing words (sensitivity factor $\beta$), and sub-sequence matching (sequence factor $\gamma$), such that:

$\mathrm{XF}_{\alpha,\beta,\gamma}(X, Y) = 1 - \frac{1}{\beta nm} \sum_{i=2}^{n} \frac{\alpha}{S_i^{\gamma}} T^{\beta}_{i,i-1}(X, Y) - \frac{1}{\beta nm} \sum_{i=2}^{n} \frac{\alpha}{S_i^{\gamma}} R^{\beta}_{i,i-1}(X, Y)$   (4)

With:

$T^{\beta}_{i,i-1}(X, Y) = \frac{|\Delta_{\beta} P(x_i, X, Y) - \Delta_{\beta} P(x_{i-1}, X, Y)|}{2}$   (5)

$R^{\beta}_{i,i-1}(X, Y) = \mathrm{Min}(\Delta_{\beta} P(x_i, X, Y), \Delta_{\beta} P(x_{i-1}, X, Y))$   (6)

and:

• $\Delta_{\beta} P(x_i, X, Y) = \beta m$, if $x_i \notin X \cap Y$

• $\alpha < \beta$: forces missing words to always cost more than matched words.

• $S_i^{\gamma} = \begin{cases} 1 & \text{if } S_i = 1 \text{ or } x_i \notin X \cap Y \\ S_i^{\gamma} & \text{for } S_i > 1 \end{cases}$

The $\gamma$ factor increases or decreases the impact of sub-sequence matching, $\alpha$ applies to individual token matches whether inside or outside a sequence, and $\beta$ increases or decreases the impact of

Positive Entailment
E1 Under a blue sky with white clouds, a child reaches up to touch the propeller of a plane standing parked on a field of grass.
E2 A child is reaching to touch the propeller of a plane.
Negative Entailment
E3 Two men on bicycles competing in a race.
E4 Men are riding bicycles on the street.
Positive Paraphrase
P1 The most serious breach of royal security in recent years occurred in 1982 when 30-year-old Michael Fagan broke into the queen's bedroom at Buckingham Palace.
P2 It was the most serious breach of royal security since 1982 when an intruder, Michael Fagan, found his way into the Queen’s bedroom at Buckingham Palace. Negative Paraphrase P3 “Americans don’t cut and run, we have to see this misadventure through,” she said. P4 She also pledged to bring peace to Iraq: “Americans don’t cut and run, we have to see this misadventure through.” Task Entailment Recognition Paraphrase Detection Sentence Pair (E1, E2) (E3, E4) (E1, E2) - (E3, E4) (P1, P2) (P3, P4) (P1, P2) - (P3, P4) Example class (Pos/Neg) (Pos) (Neg) (Gap) (Pos) (Neg) (Gap) Jaro-Winkler 0.629 0.712* -0.083** 0.884 0.738 0.146 Levenshtein 0.351 0.259 0.092 0.708 0.577 0.130 Jaccard 0.250* 0.143 0.107 0.571* 0.583 -0.012 Cosine 0.462 0.250 0.212 0.730 0.746** -0.016 Word Overlap 0.800 0.250 0.550 0.800 0.875* -0.075 MIN(XF (x,y), XF(y,x)) 0.260** 0.563** -0.303* 0.693** 0.497 0.196 MAX(XF(x,y), XF(y,x)) 0.707 0.563** 0.144 0.832 0.739 0.093 Figure 4: Example sentences and similarity values. The best value per column is highlighted. The second best is underlined. Worst and second worst values are followed by one and two stars. Entailment examples are taken from SNLI (Bowman et al., 2015). Paraphrase examples are taken from MSRP 4. missing tokens as well as the normalization quantity βnm in equation 4 to keep the similarity values in the [0,1] range. 2.1 Parameter Training By default XF has canonical parameters set to 1. However, when needed, α, β, and γ can be learned on training data for a specific task. We designed a neural network to perform this task, with a hidden layer dedicated to compute the exact XF value. To do so we compute, for each input text pair, the coefficients vector that would lead exactly to the XF value when multiplied by the vector < α β , α βγ , 1 >. Figure 5) presents the training neural network considering several types of sequences (or translations) of the input text pairs (e.g., lemmas, words, synsets). We use identity as activation function in the dedicated XF layer in order to have a correct comparison with the other similarity measures, including canonical XF where the similarity value is provided in the input layer (cf. figure 6). 3 Evaluation Datasets. This evaluation was performed on 8 datasets from 3 different classification tasks: Textual Entailment Recognition, Paraphrase Detection, and ranking relevance. The datasets are as follows: • RTE 1, 2, and 3: the first three datasets from the Recognizing Textual Entailment (RTE) challenge (Dagan et al., 2006). Each dataset consists of sentence pairs which are annotated with 2 labels: entailment, and nonentailment. They contain respectively (200, 800), (800, 800), and (800, 800) (train, test) pairs. • Guardian: an RTE dataset collected from 78,696 Guardian articles5 published from January 2004 onwards and consisting of 32K pairs which we split randomly in 90%/10% training/test sets. Positive examples were collected from the titles and first sentences. Negative examples were collected from the same source by selecting consecutive sentences and random sentences. • SNLI: a recent RTE dataset consisting of 560K training sentence pairs annotated with 5https://github.com/daoudclarke/ rte-experiment 766 Figure 5: NN architecture A1 for XF Parameter Training 3 labels: entailment, neutral and contradiction (Bowman et al., 2015). We discarded the contradiction pairs as they do not necessarily represent dissimilar sentences and are therefore a random noise w.r.t. our similarity measure evaluation. 
• MSRP: the Microsoft Research Paraphrase corpus, consisting of 5,800 sentence pairs annotated with a binary label indicating whether the two sentences are paraphrases or not. • Semeval-16-3B: a dataset of questionquestion similarity collected from StackOverflow (Nakov et al., 2016). The dataset contains 3,169 training pairs and 700 test pairs. Three labels are considered: ”Perfect Match”, ”Relevant” or ”Irrelevant”. We combined the first two into the same positive category for our evaluation. • Semeval-14-1: a corpus of Sentences Involving Compositional Knowledge (Marelli et al., 2014) consisting of 10,000 English sentence pairs annotated with both similarity scores and relevance labels. Features. After a preprocessing step where we removed stopwords, we computed the similarity values using 7 different types of sequences constructed, respectively, with the following value from each token: • Word (plain text value) • Lemma • Part-Of-Speech (POS) tag • WordNet Synset6 OR Lemma • WordNet Synset OR Lemma for Nouns • WordNet Synset OR Lemma for Verbs • WordNet Synset OR Lemma for Nouns and Verbs. In the last 4 types of sequences the lemma is used when there is no corresponding WordNet synset. In a first experiment we compare different aggregation functions on top of XF (minimum, maximum and average) in table 1. We used the LibLinear7 SVM classifier for this task. In the second part of the evaluation, we use neural networks to compare the efficiency of XFc, XFt and other similarity measures with in the same setting. We use the neural net described in figure 5 for the trained version XFt and the equivalent architecture presented in figure 6 for all other similarity measures. For canonical XFc we use by default the best aggregation for the task as observed in table 3. 6https://wordnet.princeton.edu/ 7https://www.csie.ntu.edu.tw/˜cjlin/ liblinear/ 767 Task Entailment Recognition Paraphrase Detection Ranking Relevance Datasets RTE 1 RTE 2 RTE 3 Guardian SNLI MSRP Semeval16-t3B Semeval12-t17 XF MIN 55.3 53.8 60.0 77.3 58.0 72.1 77.4 77.8 XF AVG 51.4 1 57.2 62.5 84.9 62.0 72.0 77.6 79.5 XF MAX 53.9 61.3 64.7 86.7 64.3 71.4 76.7 77.7 Table 1: Accuracy evaluation with different aggregations of XF using an SVM classifier. Figure 6: NN Architecture A2 for the equivalent evaluation of other similarity measures. Similarity Measures. We considered nine traditional similarity measures included in the Simmetrics distribution8: Cosine, Euclidean distance, Word Overlap, Dice coefficient (Dice, 1945), Jaccard(Jain and Dubes, 1988), Damerau, Jaro-Winkler (JW) (Porter et al., 1997), Levenshtein (LEV) (Sankoff and Kruskal, 1983), and Longest Common Subsequence (LCS) (Friedman and Sideli, 1992). Implementation. XF was implemented in Java as an extension of the Simmetrics package, made available at this address9. The neural networks were implemented in Python with TensorFlow10. We also share the training sets used for both parameter training and evaluation. The evaluation was performed on a 4-core laptop with 32GB of RAM. The initial parameters for XFt were chosen with a random function. Evaluation Measures. We use the standard accuracy values and F1, precision and recall for the 8https://github.com/Simmetrics/ simmetrics 9https://github.com/ymrabet/TextFlow 10https://www.tensorflow.org/ positive class (i.e., entailment, paraphrase, and ranking relevance). 
We also study the relative rank in performance of each similarity measure across all datasets using the average rank, the rank variance and a new evaluation measure called Consistent peRformancE (CORE), computed as follows for a system m, a set of datasets D, a set of systems S, and an evaluation measure E ∈ {F1, Precision, Recall, Accuracy}: CORE D,S,E (m) = MIN p∈S AV G d∈D (RS(Ed(p)) + Vd∈D(RS(Ed(p)))  AV G d∈D RS(Ed(m))  + Vd∈D RS(Ed(m))  (7) With RS(Ed(m)) the rank of m according to the evaluation measure E on dataset d w.r.t. competing systems S. Vd∈D(RS(Ed(m))) is the rank variance of m over datasets. The results in tables 2, 3, and 4 demonstrate the intuition. Basically, CORE tells us how consistent a system/method is in having high performance, relatively to the set of competing systems S. The maximum value of CORE is 1 for the best performing system according to its rank. It also allows quantifying how consistently a system achieves high performance for the remaining systems. TextFlow outperformed the results obtained with a combination of word order similarity and semantic similarities tested in (Achananuparp et al., 2008), with gaps of +1.0 in F1 and +6.1 accuracy on MSRP and +4.2 F1 and +2.7% accuracy on RTE 3. 4 Discussion 4.1 Canonical Text Flow TFc had the best average and micro-average accuracy on the 8 classification datasets, with a gap of +0.4 to +6.3 when compared to the state-of-the-art measures. It also reached the best precision average with a gap of +1.8 to +6.3. On the F1 score level XFc achieved the second best performance with a gap of -1.7, mainly caused by its underperformance in recall, where it had the third best performance with a gap of -6.3 (cf. table 3). On a rank level, XFc had the best consistent rank for 768 Cosine Euc Overlap Dice Jaccard Damerau JW LEV LCS XFC XFT RTE 1 .561 .564 .550 .504 .511 .557 .532 .561 .568 .550 .575 RTE 2 .575 .555 .598 .566 .572 .548 .541 .551 .548 .597 .612 RTE 3 .652 .562 .636 .637 .630 .567 .538 .567 .562 .627 .647 Guardian .748 .750 .820 .778 .780 .847 .726 .847 .848 .867 .876 SNLI .621 .599 .665 .612 .608 .631 .556 .630 .619 .641 .656 MSRP .719 .689 .720 .729 .731 .687 .699 .685 .717 .724 .732 Semeval-16-3B .756 .734 .769 .781 .780 .759 .751 .759 .737 .777 .782 Semeval-14-1 .790 .756 .779 .783 .786 .749 .719 .749 .757 .783 .798 AVG .678 .651 .692 .674 .675 .668 .633 .669 .670 .696 .710 Micro Avg .699 .675 .725 .700 .700 .701 .646 .701 .701 .726 .739 RANK Avg 5.1 8.2 4.5 5.6 5.5 6.9 10.1 6.7 6.7 4.1 1.2 RANK Var. 9.0 5.9 4.3 10.0 8.6 5.3 1.6 6.2 8.2 2.7 0.2 CORE 0.104 0.103 0.167 0.094 0.104 0.121 0.125 0.113 0.098 0.215 1.000 Table 2: Accuracy values using. The best result is highlighted, the second best is underlined. Cosine Euc Overlap Dice Jaccard Damerau JW LEV LCS XFC XFT RTE 1 .612 .564 .636 .512 .523 .578 .513 .583 .494 .565 .599 RTE 2 .579 .590 .662 .565 .558 .549 .516 .551 .555 .616 .646 RTE 3 .705 .598 .682 .695 .682 .608 .556 .607 .603 .665 .690 Guardian .742 .749 .816 .774 .776 .849 .713 .849 .850 .862 .873 SNLI .582 .576 .641 .562 .564 .627 .479 .627 .611 .594 .585 MSRP .808 .797 .812 .814 .813 .784 .802 .783 .804 .804 .810 Semeval-16-3B .632 .462 .625 .648 .644 .544 .545 .547 .508 .633 .662 Semeval-14-1 .764 .707 .748 .753 .746 .706 .680 .706 .714 .744 .673 AVG .678 .630 .702 .665 .663 .655 .600 .656 .642 .685 .692 Micro Avg .684 .656 .716 .679 .677 .691 .608 .692 .688 .702 .687 RANK Avg 4.5 8.12 3.12 5.12 5.5 6.89 9.88 6.62 7.12 4.62 3.88 RANK Var. 
9.7 4.7 4.4 14.7 6.6 8.7 1.8 9.1 8.1 2.3 11.0 CORE 0.485 0.538 0.915 0.348 0.571 0.443 0.588 0.438 0.452 1.000 0.464 Table 3: F1 scores. The best result is highlighted, the second best is underlined. Cosine Euc Overlap Dice Jaccard Damerau JW LEV LCS XFC XFT RTE 1 .548 .564 .534 .503 .510 .552 .535 .555 .596 .546 .566 RTE 2 .574 .547 .571 .567 .578 .547 .546 .551 .546 .588 .594 RTE 3 .624 .565 .618 .611 .610 .568 .547 .568 .564 .616 .627 Guardian .759 .753 .836 .789 .789 .839 .749 .840 .839 .891 .894 SNLI .644 .608 .690 .642 .632 .631 .577 .630 .621 .679 .735 MSRP .740 .705 .732 .749 .755 .723 .713 .722 .743 .760 .765 Semeval-16-3B .634 .708 .678 .698 .698 .732 .698 .729 .674 .700 .686 Semeval-14-1 .745 .738 .738 .743 .769 .716 .672 .716 .727 .762 .740 AVG .659 .649 .675 .663 .668 .664 .630 .664 .664 .693 .701 Micro Avg .693 .674 .721 .699 .704 .694 .645 .693 .693 .737 .752 RANK Avg. 5.6 7.5 5.9 5.9 5.1 6.1 9.6 6.1 7.1 3.2 2.5 RANK Var. 9.4 10.0 6.4 5.3 7.8 7.0 4.6 7.6 11.6 3.1 6.9 CORE 0.420 0.361 0.515 0.567 0.488 0.482 0.446 0.462 0.338 1.000 0.676 Table 4: Precision values. The best result is highlighted, the second best is underlined. 769 Cosine Euc Overlap Dice Jaccard Damerau JW LEV LCS XFC XFT RTE 1 .693 .564 .786 .521 .536 .607 .493 .614 .421 .585 .635 RTE 2 .585 .640 .787 .562 .540 .550 .490 .550 .565 .647 .707 RTE 3 .810 .634 .761 .805 .773 .654 .566 .651 .649 .724 .768 Guardian .726 .744 .797 .758 .764 .858 .681 .858 .862 .835 .853 SNLI .531 .548 .600 .499 .510 .624 .409 .625 .601 .527 .486 MSRP .890 .916 .912 .890 .881 .857 .915 .856 .876 .854 .860 Semeval-16 .631 .343 .579 .605 .597 .433 .446 .438 .408 .579 .639 SICK .784 .678 .759 .763 .724 .696 .688 .695 .701 .727 .616 AVG .706 .633 .748 .675 .665 .660 .586 .661 .635 .685 .696 Micro Avg .683 649 .720 .668 .659 .695 .587 .695 .688 .677 .645 RANK Avg. 3.9 7.1 3.5 5.5 6.4 6.1 9.0 6.1 6.6 5.9 5.4 RANK Var. 9.6 12.4 3.4 9.4 5.4 8.1 10.0 11.0 11.7 5.8 14.3 CORE 0.516 0.355 1.000 0.464 0.588 0.486 0.365 0.405 0.378 0.591 0.353 Table 5: Recall values. The best result is highlighted, the second best is underlined. accuracy, F1 and precision, and the second best for recall. 4.2 Trained Text Flow When compared to state-of-the-art measures and to canonical XF, the trained version, XFt, obtained the best accuracy with a gap ranging from +1.4 to +7.8. XFt also obtained the second best F1 average with a -1.0 gap, but with clear inconsistencies according to the dataset. XFt obtained the best precision with a gap ranging from +0.8 to +7.1. XFt did not perform well on recall with 64.5% micro-average compared to WordOverlap with 72%. Both its recall and F1 performance can be explained by the fact that the measure was trained to optimize accuracy, and not the F1 score for the positive class; which also suggests that the approach could be adapted to F1 optimization if needed. 4.3 Synthesis Canonical XF was more consistent than trained XF on all dimensions except accuracy, for which XFt was optimized. We argue that this consistency was made possible through the asymmetry of XF which allowed it to adapt to different kinds of similarities (i.e., equivalence/paraphrase, inference/entailment, and mutual distance/ranking). These results also show that the actual position difference is a relevant factor for text similarity. 
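As a brief computational aside before the discussion continues, the CORE measure of Eq. 7 can be read as the ratio between the smallest (average rank + rank variance) achieved by any competing system over the datasets and the same quantity for system m, so that the most consistently high-ranked system scores exactly 1. The sketch below implements that reading of the formula; it is an illustration rather than the authors' code, and the tie-breaking in the ranking and the variance estimator are assumptions.

```python
import numpy as np

def core(scores):
    """CORE (Consistent peRformancE) under the reading of Eq. 7 above.

    scores: array of shape (n_systems, n_datasets) holding one evaluation
    measure (e.g., accuracy) per system and dataset. Returns one CORE value
    per system; the system with the smallest (mean rank + rank variance)
    gets CORE = 1.
    """
    scores = np.asarray(scores, dtype=float)
    # Rank systems within each dataset: rank 1 = best score on that dataset.
    order = (-scores).argsort(axis=0)        # ties broken arbitrarily (assumption)
    ranks = np.empty_like(order)
    n_systems = scores.shape[0]
    for d in range(scores.shape[1]):
        ranks[order[:, d], d] = np.arange(1, n_systems + 1)
    # Average rank and rank variance across datasets, per system.
    avg_rank = ranks.mean(axis=1)
    var_rank = ranks.var(axis=1)              # variance estimator is an assumption
    penalty = avg_rank + var_rank
    return penalty.min() / penalty            # best (lowest) penalty maps to 1.0

# Toy usage: 3 systems evaluated on 4 datasets.
toy = [[0.70, 0.72, 0.68, 0.71],   # consistently good
       [0.75, 0.60, 0.74, 0.58],   # strong on some datasets but erratic
       [0.65, 0.66, 0.64, 0.66]]   # consistently mediocre
print(core(toy))
```

On this toy input the consistently good system receives CORE = 1.0 while the erratic one is penalised, which is exactly the behaviour reported for XFC and XFT in Tables 2 to 4.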
We explain it mainly by the natural flow of language where the important entities and relations are often expressed first, in contrast with a purely logical-driven approach which has to consider, for instance, that active forms and passive forms are equivalent in meaning. The difference in positions is also not read literally, both because of the higher impact associated to missed words and to the α parameter which allows leveraging their impact in the trained version. 4.4 Additional Experiments In additional experiments, we compared TFc and TFt with the other similarity measures when applied to bi-grams and tri-grams instead of individual tokens. The results were significantly lower on all datasets (between 3 and 10 points loss in accuracy) for both the soa measures and TextFlow variants. This result could be explained by the fact that n-grams are too rigid when a sub-sequence varies even slightly, e.g., the insertion of a new word inside a 3-words sequence leads to a tri-gram mismatch and reduces bi-gram overlap from 100% to 50% for the considered sub-sequence. This issue is not encountered with TextFlow as it relies on the token level, and such an insertion will not cancel or reduce significantly the gains from the correct ordering of the words. It must be noted here that not all languages grant the same level of importance to sequences and that additional multilingual tests have to be carried out. In addition to binary classification output such as textual entailment and paraphrase recognition, text similarity measures can be evaluated more precisely when we consider the correlation of their values for ranking purposes. We conducted ranking correlation experiments on three test datasets provided at the semantic text similarity task at Semeval 2012, with gold score values for their text pairs. The datasets have 750 sentence pairs each, and are extracted from 770 the Microsoft Research video descriptions corpus, MSRP and the SMTeuroparl11. When compared to the traditional similarity measures, TextFlow had the best correlation on the first two datasets with, for instance, 0.54 and 0.46 pearson correlation on the lemmas sequences and the second best on the MSRP extract where the Cosine similarity had the best performance with 0.71 vs 0.68, noting that the Cosine similarity uses word frequencies when the evaluated version of TextFlow did not use word-level weights. Including word weights is one of the promising perspectives in line with this work as it could be done simply by making the deltas vary according to the weight/importance of the (un)matched word. Also, in such a setting, the impact of a sequence of N words will naturally increase or decrease according to the word weights (cf. figure 3). We conducted a preliminary test using the inverse document frequency of the words as extracted from Wikipedia with Gensim12, which led to an improvement of up to 2% for most datasets, with performance decreasing slightly on two of them. Other kinds of weights could also be included just as easily, such as contextual word relatedness using embeddings or other semantic relatedness factors such as WordNet distances (Pedersen et al., 2004). 5 Conclusion We presented a novel standalone similarity measure that takes into account continuous word sequences. An evaluation on eight datasets show promising results for textual entailment recognition, paraphrase detection and ranking. 
Among the potential extensions of this work are the inclusion of different kinds of weights such as TF-IDF, embedding relatedness and semantic relatedness. We also intend to test other variants around the same concept, including considering the matched words and sequences to have a negative weight to balance further the weight of missing words. Acknowledgements This work was supported in part by the Intramural Research Program of the NIH, National Library of Medicine. 11goo.gl/NVnybD 12https://radimrehurek.com/gensim/ References Palakorn Achananuparp, Xiaohua Hu, and Xiajiong Shen. 2008. The evaluation of sentence similarity measures. In Data warehousing and knowledge discovery, Springer, pages 305–316. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326 . Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment challenge. In Machine learning challenges. evaluating predictive uncertainty, visual object classification, and recognising tectual entailment, Springer, pages 177–190. Lee R Dice. 1945. Measures of the amount of ecologic association between species. Ecology 26(3):297– 302. Carol Friedman and Robert Sideli. 1992. Tolerating spelling errors during patient validation. Computers and Biomedical Research 25(5):486–509. Vasileios Hatzivassiloglou, Judith L Klavans, and Eleazar Eskin. 1999. Detecting text similarity over short passages: Exploring linguistic feature combinations via machine learning. In Proceedings of the 1999 joint sigdat conference on empirical methods in natural language processing and very large corpora. Citeseer, pages 203–212. Anil K Jain and Richard C Dubes. 1988. Algorithms for clustering data. Prentice-Hall, Inc. Matthew A Jaro. 1989. Advances in record-linkage methodology as applied to matching the 1985 census of tampa, florida. Journal of the American Statistical Association 84(406):414–420. Yuhua Li, David McLean, Zuhair A Bandar, James D O’shea, and Keeley Crockett. 2006. Sentence similarity based on semantic nets and corpus statistics. IEEE transactions on knowledge and data engineering 18(8):1138–1150. Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. 2014. Semeval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. SemEval-2014 . David W Mount. 2004. Bioinformatics: sequence and genome analysis. Cold Spring Harbor Laboratory Press. Preslav Nakov, Llu´ıs M`arquez, Alessandro Moschitti, Walid Magdy, Hamdy Mubarak, Abed Alhakim Freihat, Jim Glass, and Bilal Randeree. 2016. Semeval-2016 task 3: Community question answering. In Proceedings of the 771 10th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2016, San Diego, CA, USA, June 16-17, 2016. pages 525–545. http://aclweb.org/anthology/S/S16/S16-1083.pdf. Ted Pedersen, Siddharth Patwardhan, and Jason Michelizzi. 2004. Wordnet:: Similarity: measuring the relatedness of concepts. In Demonstration papers at HLT-NAACL 2004. Association for Computational Linguistics, pages 38–41. Edward H Porter, William E Winkler, et al. 1997. Approximate string comparison and its effect on an advanced record linkage system. In Advanced record linkage system. US Bureau of the Census, Research Report. Citeseer. Mehran Sahami and Timothy D Heilman. 2006. 
A web-based kernel function for measuring the similarity of short text snippets. In Proceedings of the 15th international conference on World Wide Web. AcM, pages 377–386. David Sankoff and Joseph B Kruskal. 1983. Time warps, string edits, and macromolecules: the theory and practice of sequence comparison. Reading: Addison-Wesley Publication, 1983, edited by Sankoff, David; Kruskal, Joseph B. 1. Aliaksei Severyn and Alessandro Moschitti. 2015. Learning to rank short text pairs with convolutional deep neural networks. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, pages 373–382. Richard Socher, Eric H Huang, Jeffrey Pennington, Andrew Y Ng, and Christopher D Manning. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In NIPS. volume 24, pages 801–809. Wen-Tau Yih and Christopher Meek. 2007. Improving similarity measures for short segments of text. In AAAI. volume 7, pages 1489–1494. Wen-tau Yih, Kristina Toutanova, John C Platt, and Christopher Meek. 2011. Learning discriminative projections for text similarity measures. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning. Association for Computational Linguistics, pages 247–256. 772
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 773–783 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1072 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 773–783 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1072 Friendships, Rivalries, and Trysts: Characterizing Relations between Ideas in Texts Chenhao Tan∗ Dallas Card† Noah A. Smith∗ ∗Paul G. Allen School of Computer Science & Engineering †School of Computer Science University of Washington Carnegie Mellon University Seattle, WA 98195, USA Pittsburgh, PA 15213, USA [email protected] [email protected] [email protected] Abstract Understanding how ideas relate to each other is a fundamental question in many domains, ranging from intellectual history to public communication. Because ideas are naturally embedded in texts, we propose the first framework to systematically characterize the relations between ideas based on their occurrence in a corpus of documents, independent of how these ideas are represented. Combining two statistics—cooccurrence within documents and prevalence correlation over time—our approach reveals a number of different ways in which ideas can cooperate and compete. For instance, two ideas can closely track each other’s prevalence over time, and yet rarely cooccur, almost like a “cold war” scenario. We observe that pairwise cooccurrence and prevalence correlation exhibit different distributions. We further demonstrate that our approach is able to uncover intriguing relations between ideas through in-depth case studies on news articles and research papers. 1 Introduction Ideas exist in the mind, but are made manifest in language, where they compete with each other for the scarce resource of human attention. Milton (1644) used the “marketplace of ideas” metaphor to argue that the truth will win out when ideas freely compete; Dawkins (1976) similarly likened the evolution of ideas to natural selection of genes. We propose a framework to quantitatively characterize competition and cooperation between ideas in texts, independent of how they might be represented. By “ideas”, we mean any discrete conceptual units that can be identified as being present or absent in a document. In this work, we consider representing ideas using keywords and topics obtained in an unsupervised fashion, but our way of characterizing the relations between ideas could be applied to many other types of textual representations, such as frames (Card et al., 2015) and hashtags. What does it mean for two ideas to compete in texts, quantitatively? Consider, for example, the issue of immigration. There are two strongly competing narratives about the roughly 11 million people1 who are residing in the United States without permission. One is “illegal aliens”, who “steal” jobs and deny opportunities to legal immigrants; the other is “undocumented immigrants”, who are already part of the fabric of society and deserve a path to citizenship (Merolla et al., 2013). Although prior knowledge suggests that these two narratives compete, it is not immediately obvious what measures might reveal this competition in a corpus of writing about immigration. One question is whether or not these two ideas cooccur in the same documents. 
In the example above, these narratives are used by distinct groups of people with different ideologies. The fact that they don’t cooccur is one clue that they may be in competition with each other. However, cooccurrence is insufficient to express the selection process of ideas, i.e., some ideas fade out over time, while others rise in popularity, analogous to the populations of species in nature. Of the two narratives on immigration, we may expect one to win out at the expense of another as public opinion shifts. Alternatively, we might expect to see these narratives reinforcing each other, as both sides intensify their messaging in response to growing opposition, much like the U.S.S.R. and 1As of 2014, according to the most recent numbers from the Center for Migration Studies (Warren, 2016). 773 anti-correlated correlated likely to cooccur tryst friendship 1 9 8 0 1 9 9 0 2 0 0 0 2 0 1 0 immigration, deportation detainee, detention 1 9 8 0 1 9 9 0 2 0 0 0 2 0 1 0 immigrant, undocumented obama, president unlikely to cooccur head-to-head arms-race 1 9 8 0 1 9 9 0 2 0 0 0 2 0 1 0 immigrant, undocumented illegal, alien 1 9 8 0 1 9 9 0 2 0 0 0 2 0 1 0 immigration, deportation republican, party Figure 1: Relations between ideas in the space of cooccurrence and prevalence correlation (prevalence correlation is shown explicitly and cooccurrence is encoded in row captions). We use topics from LDA (Blei et al., 2003) to represent ideas. Each topic is named with a pair of words that are most strongly associated with the topic in LDA. Subplots show examples of relations between topics found in U.S. newspaper articles on immigration from 1980 to 2016, color coded to match the description in text. The y-axis represents the proportion of news articles in a year (in our corpus) that contain the corresponding topic. All examples are among the top 3 strongest relations in each type except (“immigrant, undocumented”, “illegal, alien”), which corresponds to the two competing narratives. We explain the formal definition of strength in §2. the U.S. during the cold war. To capture these possibilities, we use prevalence correlation over time. Building on these insights, we propose a framework that combines cooccurrence within documents and prevalence correlation over time. This framework gives rise to four possible types of relation that correspond to the four quadrants in Fig. 1. We explain each type using examples from news articles in U.S. newspapers on immigration from 1980 to 2016. Here, we have used LDA to identify ideas in the form of topics, and we denote each idea with a pair of words most strongly associated with the corresponding topic. Friendship (correlated over time, likely to cooccur). The “immigrant, undocumented” topic tends to cooccur with “obama, president” and both topics have been rising during the period of our dataset, likely because the “undocumented immigrants” narrative was an important part of Obama’s framing of the immigration issue (Haynes et al., 2016). Head-to-head (anti-correlated over time, unlikely to cooccur). “immigrant, undocumented” and “illegal, alien” are in a head-to-head competition: these two topics rarely cooccur, and “immigrant, undocumented” has been growing in prevalence, while the usage of “illegal, alien” in newspapers has been declining. This observation agrees with a report from Pew Research Center (Guskin, 2013). Tryst (anti-correlated over time, likely to cooccur). The two off-diagonal examples use topics related to law enforcement. 
Overall, “immigration, deportation” and “detention, jail” often cooccur but “detention, jail” has been declining, while “immigration, deportation” has been rising. This possibly relates to the promises to overhaul the immigration detention system (Kalhan, 2010).2 Arms-race (correlated over time, unlikely to cooccur). One of the above law enforcement topics (“immigration, deportation”) and a topic on the Republican party (“republican, party”) hold an arms-race relation: they are both growing in prevalence over time, but rarely cooccur, perhaps suggesting an underlying common cause. [Footnote 2: The tryst relation is the least intuitive, yet is nevertheless observed. The pattern of being anti-correlated yet likely to cooccur is typically found when two ideas exhibit a friendship pattern (cooccurring and correlated), but only briefly, and then diverge.] Note that our terminology describes the relations between ideas in texts, not necessarily between the entities to which the ideas refer. For example, we find that the relation between “Israel” and “Palestine” is “friendship” in news articles on terrorism, based on their prevalence correlation and cooccurrence in that corpus. We introduce the formal definition of our framework in §2 and apply it to news articles on five issues and research papers from ACL Anthology and NIPS as testbeds. We operationalize ideas using topics (Blei et al., 2003) and keywords (Monroe et al., 2008). To explore whether the four relation types exist and how strong these relations are, we first examine the marginal and joint distributions of cooccurrence and prevalence correlation (§3). We find that cooccurrence shows a unimodal normal-shaped distribution but prevalence correlation demonstrates more diverse distributions across corpora. As we would expect, there are, in general, more and stronger friendship and head-to-head relations than arms-race and tryst relations. Second, we demonstrate the effectiveness of our framework through in-depth case studies (§4). We not only validate existing knowledge about some news issues and research areas, but also identify hypotheses that require further investigation. For example, using keywords to represent ideas, a top pair with the tryst relation in news articles on terrorism is “arab” and “islam”; they are likely to cooccur, but “islam” is rising in relative prevalence while “arab” is declining. This suggests a conjecture that the news media have increasingly linked terrorism to a religious group rather than an ethnic group. We also show relations between topics in ACL that center around machine translation. Our work is a first step towards understanding relations between ideas from text corpora, a complex and important research question. We provide some concluding thoughts in §6.

2 Computational Framework

The aim of our computational framework is to explore relations between ideas. We thus assume that the set of relevant ideas has been identified, and those expressed in each document have been tabulated. Our open-source implementation is at https://github.com/Noahs-ARK/idea_relations/. In the following, we introduce our formal definitions and datasets.

$$\forall x,y \in I,\quad \widehat{\mathrm{PMI}}(x,y) = \log\frac{\hat{P}(x,y)}{\hat{P}(x)\,\hat{P}(y)} = C + \log\frac{1+\sum_{t}\sum_{k}\mathbb{1}\{x\in d_{tk}\}\cdot\mathbb{1}\{y\in d_{tk}\}}{\bigl(1+\sum_{t}\sum_{k}\mathbb{1}\{x\in d_{tk}\}\bigr)\cdot\bigl(1+\sum_{t}\sum_{k}\mathbb{1}\{y\in d_{tk}\}\bigr)} \qquad (1)$$

$$\hat{r}(x,y) = \frac{\sum_{t}\bigl(\hat{P}(x\mid t)-\overline{\hat{P}(x\mid t)}\bigr)\bigl(\hat{P}(y\mid t)-\overline{\hat{P}(y\mid t)}\bigr)}{\sqrt{\sum_{t}\bigl(\hat{P}(x\mid t)-\overline{\hat{P}(x\mid t)}\bigr)^{2}}\;\sqrt{\sum_{t}\bigl(\hat{P}(y\mid t)-\overline{\hat{P}(y\mid t)}\bigr)^{2}}} \qquad (2)$$

Figure 2: Eq.
1 is the empirical pointwise mutual information for two ideas, our measure of cooccurrence of ideas; note that we use add-one smoothing in estimating PMI. Eq. 2 is the Pearson correlation between two ideas’ prevalence over time.

2.1 Cooccurrence and Prevalence Correlation

As discussed in the introduction, we focus on two dimensions to quantify relations between ideas: 1. cooccurrence reveals to what extent two ideas tend to occur in the same contexts; 2. similarity between the relative prevalence of ideas over time reveals how two ideas relate in terms of popularity or coverage. Our input is a collection of documents, each represented by a set of ideas and indexed by time. We denote a static set of ideas as $I$ and a text corpus that consists of these ideas as $C = \{D_1, \ldots, D_T\}$, where $D_t = \{d_{t1}, \ldots, d_{tN_t}\}$ gives the collection of documents at timestep $t$, and each document, $d_{tk}$, is represented as a subset of ideas in $I$. Here $T$ is the total number of timesteps, and $N_t$ is the number of documents at timestep $t$. It follows that the total number of documents is $N = \sum_{t=1}^{T} N_t$. In order to formally capture the two dimensions above, we employ two commonly-used statistics. First, we use empirical pointwise mutual information (PMI) to capture the cooccurrence of ideas within the same document (Church and Hanks, 1990); see Eq. 1 in Fig. 2. Positive $\widehat{\mathrm{PMI}}$ indicates that ideas occur together more frequently than would be expected if they were independent, while negative $\widehat{\mathrm{PMI}}$ indicates the opposite. Second, we compute the correlation between normalized document frequency of ideas to capture the relation between the relative prevalence of ideas across documents over time; see Eq. 2 in Fig. 2. Positive $\hat{r}$ indicates that two ideas have similar prevalence over time, while negative $\hat{r}$ suggests two anti-correlated ideas (i.e., when one goes up, the other goes down). The four types of relations in the introduction can now be obtained using $\widehat{\mathrm{PMI}}$ and $\hat{r}$, which capture cooccurrence and prevalence correlation respectively. We further define the strength of the relation between two ideas as the absolute value of the product of their $\widehat{\mathrm{PMI}}$ and $\hat{r}$ scores:

$$\forall x, y \in I,\quad \mathrm{strength}(x,y) = \left|\widehat{\mathrm{PMI}}(x,y) \times \hat{r}(x,y)\right|. \qquad (3)$$

2.2 Datasets and Representation of Ideas

We use two types of datasets to validate our framework: news articles and research papers. We choose these two domains because competition between ideas has received significant interest in history of science (Kuhn, 1996) and research on framing (Chong and Druckman, 2007; Entman, 1993; Gitlin, 1980; Lakoff, 2014). Furthermore, interesting differences may exist in these two domains as news evolves with external events and scientific research progresses through innovations.
• News articles. We follow the strategy in Card et al. (2015) to obtain news articles from LexisNexis on five issues: abortion, immigration, same-sex marriage, smoking, and terrorism. We search for relevant articles using LexisNexis subject terms in U.S. newspapers from 1980 to 2016. Each of these corpora contains more than 25,000 articles. Please refer to the supplementary material for details.
• Research papers. We consider full texts of papers from two communities: our own ACL community captured by papers from ACL, NAACL, EMNLP, and TACL from 1980 to 2014 (Radev et al., 2009); and the NIPS community from 1987 to 2016.3 There are 4.8K papers from the ACL community and 6.6K papers from the NIPS community. The processed datasets are available at https://chenhaot.com/pages/idea-relations.html.
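The three statistics above are straightforward to compute once each document is reduced to a set of ideas. The paper's own implementation is in the idea_relations repository linked above; the sketch below is an independent minimal version for illustration only, and the data layout (a list of (timestep, set-of-ideas) pairs) and the choice of the constant C as log N are assumptions of this sketch.

```python
import math
from collections import Counter, defaultdict

def pmi_and_correlation(docs):
    """docs: iterable of (timestep, set_of_ideas) pairs.

    Returns (pmi, corr) dictionaries keyed by idea pairs (x, y) with x < y,
    following Eqs. 1 and 2: add-one-smoothed PMI over document cooccurrence,
    and Pearson correlation of the ideas' normalized document frequencies
    per timestep.
    """
    docs = list(docs)
    timesteps = sorted({t for t, _ in docs})
    n_docs_at = Counter(t for t, _ in docs)
    log_n = math.log(len(docs))          # the constant C in Eq. 1, taken as log N (assumption)

    count, pair_count = Counter(), Counter()
    freq = defaultdict(lambda: defaultdict(float))   # normalized document frequency per timestep
    for t, ideas in docs:
        for x in ideas:
            count[x] += 1
            freq[x][t] += 1.0 / n_docs_at[t]
        for x in ideas:
            for y in ideas:
                if x < y:
                    pair_count[(x, y)] += 1

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        vx = math.sqrt(sum((a - mx) ** 2 for a in xs))
        vy = math.sqrt(sum((b - my) ** 2 for b in ys))
        return cov / (vx * vy) if vx > 0 and vy > 0 else 0.0   # constant series -> 0 (assumption)

    ideas = sorted(count)
    pmi, corr = {}, {}
    for i, x in enumerate(ideas):
        for y in ideas[i + 1:]:
            # Eq. 1 with add-one smoothing.
            pmi[(x, y)] = log_n + math.log((1 + pair_count[(x, y)]) /
                                           ((1 + count[x]) * (1 + count[y])))
            # Eq. 2: correlation of prevalence trajectories over timesteps.
            corr[(x, y)] = pearson([freq[x][t] for t in timesteps],
                                   [freq[y][t] for t in timesteps])
    return pmi, corr

def strength(pmi, corr):
    # Eq. 3: strength(x, y) = |PMI(x, y) * r(x, y)|.
    return {pair: abs(pmi[pair] * corr[pair]) for pair in pmi}
```

A pair is then placed into one of the four quadrants of Fig. 1 by the signs of its PMI and correlation values, and ranked within the quadrant by Eq. 3.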
In order to operationalize ideas in a text corpus, we consider two ways to represent ideas. • Topics. We extract topics from each document by running LDA (Blei et al., 2003) on each corpus C. In all datasets, we set the number of topics to 50.4 Formally, I is the 50 topics learned 3 http://papers.nips.cc/. 4We chose 50 topics based on past experience, though this could be tuned for particular applications. Recall that from the corpus, and each document is represented as the set of topics that are present with greater than 0.01 probability in the topic distribution for that document. • Keywords. We identify a list of distinguishing keywords for each corpus by comparing its word frequencies to the background frequencies found in other corpora using the informative Dirichlet prior model in Monroe et al. (2008). We set the number of keywords to 100 for all corpora. For news articles, the background corpus for each issue is comprised of all articles from the other four issues. For research papers, we use NIPS as the background corpus for ACL and vice versa to identify what are the core concepts for each of these research areas. Formally, I is the 100 top distinguishing keywords in the corpus and each document is represented as the set of keywords within I that are present in the document. Refer to the supplementary material for a list of example keywords in each corpus. In both procedures, we lemmatize all words and add common bigram phrases to the vocabulary following Mikolov et al. (2013). Note that in our analysis, ideas are only present or absent in a document, and a document can in principle be mapped to any subset of ideas in I. In our experiments 90% of documents are marked as containing between 7 and 14 ideas using topics, 8 and 33 ideas using keywords. 3 Characterizing the Space of Relations To provide an overview of the four relation types in Fig. 1, we first examine the empirical distributions of the two statistics of interest across pairs of ideas. In most exploratory studies, however, we are most interested in pairs that exemplify each type of relation, i.e., the most extreme points in each quadrant. We thus look at these pairs in each corpus to observe how the four types differ in salience across datasets. 3.1 Empirical Distribution Properties To the best of our knowledge, the distributions of pairwise cooccurrence and prevalence correlation have not been examined in previous literature. We thus first investigate the marginal distributions of cooccurrence and prevalence correlation and then our framework is to analyze relations between ideas, so this choice is not essential in this work. 776 -1 .0 -0 .5 0 .0 0 .5 1 .0 prev alence correlation -0 .6 -0 .4 -0 .2 0 .0 0 .2 0 .4 0 .6 0 .8 cooccurrence pearsonr = 0 .3 7 (a) Terrorism topics -1 .0 -0 .5 0 .0 0 .5 1 .0 prev alence correlation -0 .6 -0 .4 -0 .2 0 .0 0 .2 0 .4 0 .6 cooccurrence pearsonr = 0 .5 5 (b) Immigration topics -1 .0 -0 .5 0 .0 0 .5 1 .0 prev alence correlation -2 .0 -1 .5 -1 .0 -0 .5 0 .0 0 .5 1 .0 1 .5 cooccurrence pearsonr = 0 .5 (c) ACL topics Figure 3: Overall distributions of cooccurrence and prevalence correlation. In the main plot, each point represents a pair of ideas: color density shows the kernel density estimation of the joint distribution (Scott, 2015). The plots along the axes show the marginal distribution of the corresponding dimension. In each plot, we give the Pearson correlation, and all Pearson correlations’ p-values are less than 10−40. In these plots, we use topics to represent ideas. 
their joint distribution. Fig. 3 shows three examples: two from news articles and one from research papers. We will also focus our case studies on these three corpora in §4. The corresponding plots for keywords have been relegated to supplementary material due to space limitations. Cooccurrence tends to be unimodal but not normal. In all of our datasets, pairwise cooccurrence ( d PMI) presents a unimodal distribution that somewhat resembles a normal distribution, but it is rarely precisely normal. We cannot reject the hypothesis that it is unimodal for any dataset (using topics or keywords) using the dip test (Hartigan and Hartigan, 1985), though D’Agostino’s K2 test (D’Agostino et al., 1990) rejects normality in almost all cases. Prevalence correlation exhibits diverse distributions. Pairwise prevalence correlation follows different distributions in news articles compared to research papers: they are unimodal in news articles, but not in ACL or NIPS. The dip test only rejects the unimodality hypothesis in NIPS. None follow normal distributions based on D’Agostino’s K2 test. Cooccurrence is positively correlated with prevalence correlation. In all of our datasets, cooccurrence is positively correlated with prevalence correlation whether we use topics or keywords to represent ideas, although the Pearson correlation coefficients vary. This suggests that there are more friendship and head-to-head relations than tryst and arms-race relations. Based on the results of kernel density estimation, we also observe that this correlation is often loose, e.g., in ACL topics, cooccurrence spreads widely at each mode of prevalence correlation. 3.2 Relative Strength of Extreme Pairs We are interested in how our framework can identify intriguing relations between ideas. These potentially interesting pairs likely correspond to the extreme points in each quadrant instead of the ones around the origin, where PMI and prevalence correlation are both close to zero. Here we compare the relative strength of extreme pairs in each dataset. We will discuss how these extreme pairs confirm existing knowledge and suggest new hypotheses via case studies in §4. For each relation type, we average the strengths of the 25 pairs with the strongest relations in that type, with strength defined in Eq. 3. This heuristic (henceforth collective strength) allows us to collectively compare the strengths of the most prominent friendship, tryst, arms-race, and head-to-head relations. The results are not sensitive to the choice of 25. Fig. 4 shows the collective strength of the four types in all of our datasets. The most common ordering is: friendship > head-to-head > arms-race > tryst. The fact that friendship and head-to-head relations are strong is consistent with the positive correlation between cooccurrence and prevalence correlation. In news, friendship is the strongest relation type, but head-to-head is the strongest in ACL topics and NIPS topics. 
This suggests, unsurprisingly, that there are stronger head-to-head competitions 777 terrorism abortion marriage immigration tobacco A C L N IP S news research 0 .0 0 .1 0 .2 0 .3 0 .4 0 .5 0 .6 0 .7 0 .8 0 .9 collectiv e strength friends tryst head-to-head arms-race (a) Topics terrorism abortion marriage immigration tobacco A C L N IP S news research 0 .0 0 .2 0 .4 0 .6 0 .8 1 .0 1 .2 1 .4 1 .6 1 .8 collectiv e strength friends tryst head-to-head arms-race (b) Keywords Figure 4: Collective strength of the four relation types in each dataset (news is the average of the news corpora and research is for ACL and NIPS). Fig. 4a uses topics to represent ideas, while Fig. 4b uses keywords to represent ideas. Each bar presents the average strength of the top 25 pairs in a relation type in the corresponding dataset. Error bars represent standard errors calculated in the usual way, but note that since the top 25 pairs are not random samples, they cannot be interpreted in the usual way. (i.e., one idea takes over another) between ideas in scientific research than in news. We also see that topics show greater strength in our scientific article collections, while keywords dominate in news, especially in friendship. We conjecture that terms in scientific literature are often overloaded (e.g., a tree could be a parse tree or a decision tree), necessitating some abstraction when representing ideas. In contrast, news stories are more self-contained and seek to employ consistent usage. 4 Exploratory Studies We present case studies based on strongly related pairs of ideas in the four types of relation. Throughout this section, “rank” refers to the rank of the relation strength between a pair of ideas in its corresponding relation type. 4.1 International Relations in Terrorism Following a decade of declining violence in the 90s, the events of September 11, 2001 precipitated a dramatic increase in concern about terrorism, and a major shift in how it was framed (Kern et al., 2003). As a showcase, we consider a topic which encompasses much of the U.S. government’s response to terrorism: “federal, state”.5 We observe two topics engaging in an “arms race” with this one: “afghanistan, taliban” and “pakistan, india”. These correspond to two geopolitical regions closely linked to the U.S. government’s concern with terrorism, and both were sites of U.S. military action during the period of our dataset. Events abroad and the U.S. government’s responses follow the arms-race pattern, each holding increasing 5As in §1, we summarize each topic using a pair of strongly associated words, instead of assigning a name. 1 9 8 0 1 9 9 0 2 0 0 0 2 0 1 0 0 0 .1 0 .2 0 .3 freq uency arab islam Figure 6: Tryst relation between arab and islam using keywords to represent ideas (#2 in tryst): these two words tend to cooccur but are anti-correlated in prevalence over time. In particular, islam was rarely used in coverage of terrorism in the 1980s. attention with the other, likely because they share the same underlying cause. We also observe two head-to-head rivals to the “federal, state” topic: “iran, libya” and “israel, palestinian”. While these topics correspond to regions that are hotly debated in the U.S., their coverage in news tends not to correlate temporally with the U.S. government’s responses to terrorism, at least during the time period of our corpus. Discussion of these regions was more prevalent in the 80s and 90s, with declining media coverage since then (Kern et al., 2003). 
The relations between these topics are consistent with structural balance theory (Cartwright and Harary, 1956; Heider, 1946), which suggests that the enemy of an enemy is a friend. The “afghanistan, taliban” topic has the strongest friendship relation with the “pakistan, india” topic, i.e., they are likely to cooccur and are positively correlated in prevalence. Similarly, the “iran, libya” topic is a close “friend” with the “israel, palestinian” topic (ranked 8th in friendship). 778 pakistan, india federal, state afghanistan, taliban israel, palestinian iran, libya arms-race (#5) friends (#1) head-to-head (#2) friends (#8) arms-race (#2) head-to-head (#11) (a) Relations between a United States topic and international topics. 1 9 8 0 1 9 9 0 2 0 0 0 2 0 1 0 0 .0 0 .1 0 .2 0 .3 freq uency federal, state afghanistan, taliban (b) (“federal, state”, “afghanistan, taliban”) 1 9 8 0 1 9 9 0 2 0 0 0 2 0 1 0 0 .1 0 .2 0 .3 0 .4 0 .5 0 .6 0 .7 0 .8 0 .9 freq uency federal, state iran, libya (c) (“federal, state”, “iran, libya”) Figure 5: Fig. 5a shows the relations between the “federal, state” topic and four international topics. Edge colors indicate relation types and the number in an edge label presents the ranking of its strength in the corresponding relation type. Fig. 5b and Fig. 5c represent concrete examples in Fig. 5a: “federal, state” and “afghanistan, taliban” follow similar trends, although “afghanistan, taliban” fluctuates over time due to significant events such as the September 11 attacks in 2001 and the death of Bin Laden in 2011; while “iran, lybia” is negatively correlated with “federal, state”. In fact, more than 70% of terrorism news in the 80s contained the “iran, lybia” topic. When using keywords to represent ideas, we observe similar relations between the term homeland security and terms related to the above foreign countries. In addition, we highlight an interesting but unexpected tryst relation between arab and islam (Fig. 6). It is not surprising that these two words tend to cooccur in the same news articles, but the usage of islam in the news is increasing while arab is declining. The increasing prevalence of islam and decreasing prevalence of arab over this time period can also be seen, for example, using Google’s n-gram viewer, but it of course provides no information about cooccurrence. This trend has not been previously noted to the best of our knowledge, although an article in the Huffington Post called for news editors to distinguish Muslim from Arab.6 Our observation suggests a conjecture that the news media have increasingly linked terrorism to a religious group rather than an ethnic group, perhaps in part due to the tie between the events of 9/11 and Afghanistan, which is not an Arab or Arabic-speaking country. We leave it to further investigation to confirm or reject this hypothesis. To further demonstrate the effectiveness of our approach, we compare a pair’s rank using only cooccurrence or prevalence correlation with its rank in our framework. Table 1 shows the results for three pairs above. If we had looked at only cooccurrence or prevalence correlation, we would probably have missed these interesting pairs. 6http://www.huffingtonpost.com/ haroon-moghul/even-the-new-york-times-d_ b_766658.html PMI Corr “federal, state”, “afghanistan, taliban” (#2 in arms-race) 43 99 “federal, state”, “iran, lybia” (#2 in head-to-head) 36 56 arab, islam (#2 in tryst) 106 1,494 Table 1: Ranks of pairs by using the absolute value of only cooccurrence or prevalence correlation. 
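The comparison in Table 1 is easy to reproduce once the statistics of §2 are in hand: rank pairs by |PMI| alone, by |r| alone, and by the combined strength of Eq. 3, and look up where a given pair falls in each ranking. The sketch below builds on the pmi_and_correlation and strength helpers sketched after §2.2 (themselves assumptions of this illustration rather than the released code) and also computes the collective strength heuristic of §3.2; the zero boundaries for the quadrants and the choice to rank over all pairs are assumptions.

```python
def quadrant(pmi_value, corr_value):
    # Four relation types from Fig. 1, by the signs of PMI and correlation
    # (boundaries at exactly zero are an assumption of this sketch).
    if pmi_value >= 0:
        return "friendship" if corr_value >= 0 else "tryst"
    return "arms-race" if corr_value >= 0 else "head-to-head"

def ranks_for_pair(pair, pmi, corr, strengths):
    """Rank (1 = strongest) of `pair` under three orderings, as in Table 1."""
    def rank(scores):
        # Ranking over all pairs; the paper may restrict to one relation type.
        ordered = sorted(scores, key=lambda p: abs(scores[p]), reverse=True)
        return ordered.index(pair) + 1
    return {"pmi_only": rank(pmi), "corr_only": rank(corr), "strength": rank(strengths)}

def collective_strength(pmi, corr, strengths, top_k=25):
    """Average strength of the top_k strongest pairs of each relation type (§3.2)."""
    by_type = {}
    for pair, s in strengths.items():
        by_type.setdefault(quadrant(pmi[pair], corr[pair]), []).append(s)
    return {t: sum(sorted(vals, reverse=True)[:top_k]) / min(top_k, len(vals))
            for t, vals in by_type.items()}

# Toy usage with made-up statistics for three idea pairs.
pmi  = {("a", "b"): 0.8, ("a", "c"): -0.5, ("b", "c"): 0.6}
corr = {("a", "b"): 0.7, ("a", "c"): -0.6, ("b", "c"): -0.4}
strengths = {p: abs(pmi[p] * corr[p]) for p in pmi}
print(ranks_for_pair(("a", "b"), pmi, corr, strengths))
print(collective_strength(pmi, corr, strengths))
```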
4.2 Ethnicity Keywords in Immigration In addition to results on topics in §1, we observe unexpected patterns about ethnicity keywords in immigration news. Our observation starts with a top tryst relation between latino and asian. Although these words are likely to cooccur, their prevalence trajectories differ, with the discussion of Asian immigrants in the 1990s giving way to a focus on the word latino from 2000 onward. Possible theories to explain this observation include that undocumented immigrants are generally perceived as a Latino issue, or that Latino voters are increasingly influential in U.S. elections. Furthermore, latino holds head-to-head relations with two subgroups of Latin American immigrants: haitian and cuban. In particular, the strength of the relation with haitian is ranked #18 in headto-head relations. Meanwhile, haitian and cuban have a friendship relation, which is again consistent with structural balance theory. The decreasing prevalence of haitian and cuban perhaps speaks to the shifting geographical focus of recent immigration to the U.S., and issues of the Latino panethnicity. In fact, a majority of Latinos prefer to identify with their national origin relative to the 779 latino asian cuban haitian tryst (#8) HtH (#305) HtH (#18) friendship (#19) (a) Relations graph. 1 9 8 0 1 9 9 0 2 0 0 0 2 0 1 0 0 .0 0 .1 0 .2 0 .3 freq uency latino asian (b) (latino, asian) 1 9 8 0 1 9 9 0 2 0 0 0 2 0 1 0 0 .0 0 .1 0 .2 0 .3 freq uency latino haitian (c) (latino, haitian) 1 9 8 0 1 9 9 0 2 0 0 0 2 0 1 0 0 .0 0 .1 0 .2 0 .3 freq uency cuban haitian (d) (cuban, haitian) Figure 7: Relations between ethnicity keywords in immigration news (HtH for head-to-head): latino holds a tryst relation with asian and head-to-head relations with two subgroups from Latin America, haitian and cuban. We do not show the relations between asian and haitian, cuban, because their strength is close to 0. machine translation rule,forest methods word alignment sentiment analysis discourse (coherence) tryst (#5) friendship (#1) arms-race (#1) head-to-head (#1) head-to-head (#38) arms-race (#23) arms-race (#2) head-to-head (#7) Figure 8: Top relations between the topics in ACL Anthology. The top 10 words for the rule, forest methods topic are rule, grammar, derivation, span, algorithm, forest, parsing, figure, set, string. pan-ethnic terms (Taylor et al., 2012). However, we should also note that much of this coverage relates to a set of specific refugee crises, temporarily elevating the political importance of these nations in the U.S. Nevertheless, the underlying social and political reasons behind these head-to-head relations are worth further investigation. 4.3 Relations between Topics in ACL Finally, we analyze relations between topics in the ACL Anthology. It turns out that “machine translation” is at a central position among top ranked relations in all the four types (Fig. 8).7 It is part of the strongest relation in all four types except tryst (ranked #5). The full relation graph presents further patterns. Friendship demonstrates transitivity: both “machine translation” and “word alignment” have similar relations with other topics. 
But such transitivity does not hold for tryst: although the prevalence of “rule, forest methods” is anti-correlated with both “machine translation” and “sentiment analysis”, “sentiment analysis” seldom cooccurs with “rule, for7In the ranking, we filtered a topic where the top words are ion, ing, system, process, language, one, input, natural language, processing, grammar. For the purposes of this corpus, this is effectively a stopword topic. est methods” because “sentiment analysis” is seldom built on parsing algorithms. Similarly, “rule, forest methods” and “discourse (coherence)” hold an armsrace relation: they do not tend to cooccur and both decline in relative prevalence as “machine translation” rises. The prevalence of each of these ideas in comparison to machine translation is shown in in Fig. 9, which reveals additional detail. 5 Related Work We present two strands of related studies in addition to what we have discussed. Trends in ideas. Most studies have so far examined the trends of ideas individually (Michel et al., 2011; Hall et al., 2008; Rule et al., 2015). For instance, Hall et al. (2008) present various trends in our own computational linguistics community, including the rise of statistical machine translation. More recently, rhetorical framing has been used to predict these sorts of patterns (Prabhakaran et al., 2016). An exception is that Shi et al. (2010) use prevalence correlation to analyze lag relations between topics in publications and research grants. Anecdotally, Grudin (2009) observes a “head-tohead” relation between artificial intelligence and human-computer interaction in research funding. However, to our knowledge, our work is the first study to systematically characterize relations between ideas. Representation of ideas. In addition to topics and keywords, studies have also sought to operationalize the “memes” metaphor using quotes and text reuse in the media (Leskovec et al., 2009; Niculae et al., 2015; Smith et al., 2013; Wei et al., 2013). In topic modeling literature, Blei and Lafferty (2006) also point out that topics do not cooccur independently and explicitly model the cooccurrence within documents. 780 anti-correlated correlated likely to cooccur tryst friendship 1 9 8 0 1 9 9 0 2 0 0 0 2 0 1 0 machine translation rule,forest methods 1 9 8 0 1 9 9 0 2 0 0 0 2 0 1 0 machine translation word alignment unlikely to cooccur head-to-head arms-race 1 9 8 0 1 9 9 0 2 0 0 0 2 0 1 0 machine translation discourse (coherence) 1 9 8 0 1 9 9 0 2 0 0 0 2 0 1 0 machine translation sentiment analysis Figure 9: Relations between topics in ACL Anthology in the space of cooccurrence and prevalence correlation (prevalence correlation is shown explicitly and cooccurrence is encoded in row captions), color coded to match the text. The y-axis represents the relative proportion of papers in a year that contain the corresponding topic. The top 10 words for the rule, forest methods topic are rule, grammar, derivation, span, algorithm, forest, parsing, figure, set, string. 6 Concluding Discussion We proposed a method to characterize relations between ideas in texts through the lens of cooccurrence within documents and prevalence correlation over time. For the first time, we observe that the distribution of pairwise cooccurrence is unimodal, while the distribution of pairwise prevalence correlation is not always unimodal, and show that they are positively correlated. 
This combination suggests four types of relations between ideas, and these four types are all found to varying extents in our experiments. We illustrate our computational method by exploratory studies on news corpora and scientific research papers. We not only confirm existing knowledge but also suggest hypotheses around the usage of arab and islam in terrorism and latino and asian in immigration. It is important to note that the relations found using our approach depend on the nature of the representation of ideas and the source of texts. For instance, we cannot expect relations found in news articles to reflect shifts in public opinion if news articles do not effectively track public opinion. Our method is entirely observational. It remains as a further stage of analysis to understand the underlying reasons that lead to these relations between ideas. In scientific research, for example, it could simply be the progress of science, i.e., newer ideas overtake older ones deemed less valuable at a given time; on the other hand, history suggests that it is not always the correct ideas that are most expressed, and many other factors may be important. Similarly, in news coverage, underlying sociological and political situations have significant impact on which ideas are presented, and how. There are many potential directions to improve our method to account for complex relations between ideas. For instance, we assume that both ideas and relations are statically grounded in keywords or topics. In reality, ideas and relations both evolve over time: a tryst relation might appear as friendship if we focus on a narrower time period. Similarly, new ideas show up and even the same idea may change over time and be represented by different words. Acknowledgments. We thank Amber Boydstun, Justin Gross, Lillian Lee, anonymous reviewers, and all members of Noah’s ARK for helpful comments and discussions. This research was made possible by a Natural Sciences and Engineering Research Council of Canada Postgraduate Scholarship (to D.C.) and a University of Washington Innovation Award. 781 References David M. Blei and John Lafferty. 2006. Correlated topic models. In NIPS. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research 3:993–1022. Dallas Card, Amber E. Boydstun, Justin H. Gross, Philip Resnik, and Noah A. Smith. 2015. The Media Frames Corpus: Annotations of frames across issues. In Proceedings of ACL. Dorwin Cartwright and Frank Harary. 1956. Structural balance: A generalization of Heider’s theory. Psychological Review 63(5):277. Dennis Chong and James N. Druckman. 2007. A theory of framing and opinion formation in competitive elite environments. Journal of Communication 57(1):99–118. Kenneth W. Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics 16(1):22–29. Ralph B. D’Agostino, Albert Belanger, and Ralph B. D’Agostino Jr. 1990. A suggestion for using powerful and informative tests of normality. The American Statistician 44(4):316–321. Richard Dawkins. 1976. The Selfish Gene. Oxford University Press. Robert M. Entman. 1993. Framing: Toward clarification of a fractured paradigm. Journal of Communication 43(4):51–58. Todd Gitlin. 1980. The Whole World is Watching: Mass Media in the Making and Unmaking of the New Left. Berkeley: University of California Press. Jonathan Grudin. 2009. AI and HCI: Two fields divided by a common focus. AI Magazine 30(4):48. Emily Guskin. 
2013. ‘Illegal’, ‘undocumented’, ‘unauthorized’: News media shift language on immigration. Pew Research Center. David Hall, Daniel Jurafsky, and Christopher D. Manning. 2008. Studying the history of ideas using topic models. In Proceedings of EMNLP. John A. Hartigan and P. M. Hartigan. 1985. The dip test of unimodality. The Annals of Statistics pages 70–84. Chris Haynes, Jennifer L. Merolla, and S. Karthick Ramakrishnan. 2016. Framing Immigrants: News Coverage, Public Opinion, and Policy. Russell Sage Foundation. Fritz Heider. 1946. Attitudes and cognitive organization. The Journal of Psychology 21(1):107–112. Anil Kalhan. 2010. Rethinking immigration detention. Columbia Law Review Sidebar 110:42. Montague Kern, Marion Just, and Pippa Norris. 2003. The lessons of framing terrorism. In Pippa Norris, Montague Kern, and Marion Just, editors, Framing Terrorism: The News Media, the Government and the Public, Routledge. Thomas S. Kuhn. 1996. The Structure of Scientific Revolutions. University of Chicago Press. George Lakoff. 2014. The All New Don’t Think of an Elephant!: Know your Values and Frame the Debate. Chelsea Green Publishing. Jure Leskovec, Lars Backstrom, and Jon M. Kleinberg. 2009. Meme-tracking and the dynamics of the news cycle. In Proceedings of KDD. Jennifer Merolla, S. Karthick Ramakrishnan, and Chris Haynes. 2013. “Illegal,”, “undocumented,” or “unauthorized”: Equivalency frames, issue frames, and public opinion on immigration. Perspectives on Politics 11(03):789–807. Jean-Baptiste Michel, Yuan Kui Shen, Aviva Presser Aiden, Adrian Veres, Matthew K. Gray, The Google Books Team, Joseph P. Pickett, Dale Hoiberg, Dan Clancy, Peter Norvig, Jon Orwant, Steven Pinker, Martin A. Nowak, and Erez Lieberman Aiden. 2011. Quantitative analysis of culture using millions of digitized books. Science 331(6014):176–182. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In NIPS. John Milton. 1644. Areopagitica, A speech of Mr. John Milton for the Liberty of Unlicenc’d Printing to the Parliament of England. Burt L. Monroe, Michael P. Colaresi, and Kevin M. Quinn. 2008. Fightin’ words: Lexical feature selection and evaluation for identifying the content of political conflict. Political Analysis 16(4):372–403. Vlad Niculae, Caroline Suen, Justine Zhang, Cristian Danescu-Niculescu-Mizil, and Jure Leskovec. 2015. Quotus: The structure of political media coverage as revealed by quoting patterns. In Proceedings of WWW. Vinodkumar Prabhakaran, William L. Hamilton, Dan McFarland, and Dan Jurafsky. 2016. Predicting the rise and fall of scientific topics from trends in their rhetorical framing. In Proceedings of ACL. Dragomir R. Radev, Pradeep Muthukrishnan, and Vahed Qazvinian. 2009. The ACL anthology network corpus. In Proceedings of ACL Workshop on Natural Language Processing and Information Retrieval for Digital Libraries. Alix Rule, Jean-Philippe Cointet, and Peter S. Bearman. 2015. Lexical shifts, substantive changes, and 782 continuity in state of the union discourse, 17902014. Proceedings of the National Academy of Sciences 112(35):10837–10844. David W. Scott. 2015. Multivariate Density Estimation: Theory, Practice, and Visualization. John Wiley & Sons. Xiaolin Shi, Ramesh Nallapati, Jure Leskovec, Dan McFarland, and Dan Jurafsky. 2010. Who leads whom: Topical lead-lag analysis across corpora. In Proceedings of NIPS Workshop on Computational Social Science. David A. 
Smith, Ryan Cordell, and Elizabeth M. Dillon. 2013. Infectious texts: Modeling text reuse in nineteenth-century newspapers. In Proceedings of the Workshop on Big Humanities. Paul Taylor, Mark H. Lopez, Jessica Mart´ınez, and Gabriel Velasco. 2012. When labels don’t fit: Hispanics and their views of identity. Washington, DC: Pew Hispanic Center . Robert Warren. 2016. US undocumented population drops below 11 million in 2014, with continued declines in the Mexican undocumented population. Journal on Migration and Human Security 4(1):1– 15. Xuetao Wei, Nicholas Valler, B. Aditya Prakash, Iulian Neamtiu, Michalis Faloutsos, and Christos Faloutsos. 2013. Competing memes propagation on networks: A network science perspective. IEEE Journal on Selected Areas in Communications 31:1049– 1060. 783
2017
72
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 784–792 Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1073
Polish evaluation dataset for compositional distributional semantics models
Alina Wróblewska, Katarzyna Krasnowska-Kieraś
Institute of Computer Science, Polish Academy of Sciences
[email protected] [email protected]
Abstract
The paper presents a procedure for building an evaluation dataset1 for the validation of compositional distributional semantics models estimated for languages other than English. The procedure generally builds on the steps designed to assemble the SICK corpus, which contains pairs of English sentences annotated for semantic relatedness and entailment, because we aim at building a comparable dataset. However, the implementation of particular building steps differs significantly from the original SICK design assumptions, owing both to the lack of necessary external resources for the investigated language and to the need for language-specific transformation rules. The designed procedure is verified on Polish, a fusional language with a relatively free word order, and contributes to building a Polish evaluation dataset. The resource consists of 10K sentence pairs which are human-annotated for semantic relatedness and entailment. The dataset may be used for the evaluation of compositional distributional semantics models of Polish.
1 Introduction and related work
1.1 Distributional semantics
The basic idea of distributional semantics, i.e. determining the meaning of a word based on its co-occurrence with other words, is derived from the empiricists – Harris (1954) and Firth (1957). John R. Firth drew attention to the context-dependent nature of meaning, especially with his famous maxim “You shall know a word by the company it keeps” (Firth, 1957, p. 11). Nowadays, distributional semantics models are estimated with various methods, e.g. word embedding techniques (Bengio et al., 2003, 2006; Mikolov et al., 2013). To ascertain the meaning of a word, e.g. bath, one can use the context of other words that surround it. If we assume that the meaning of this word, as expressed by its lexical context, is associated with a distributional vector, the distance between the distributional vectors of two semantically similar words, e.g. bath and shower, should be smaller than between vectors representing semantically distinct words, e.g. bath and tree.
1The dataset is obtainable at: http://zil.ipipan.waw.pl/Scwad/CDSCorpus
1.2 Compositional distributional semantics
Based on the empirical observation that distributional vectors encode certain aspects of word meaning, it is expected that similar aspects of the meaning of phrases and sentences can also be represented with vectors obtained via composition of distributional word vectors. The idea of semantic composition is not new. It is well known as the principle of compositionality:2 “The meaning of a compound expression is a function of the meaning of its parts and of the way they are syntactically combined.” (Janssen, 2012, p. 19).
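To make the distributional and compositional intuitions above concrete, the following minimal Python sketch compares toy word vectors with cosine similarity and composes phrase vectors by element-wise addition, in the spirit of simple additive composition. The vectors, the vocabulary and the composition function are invented for illustration only; they are not part of the described dataset or of any trained model.

```python
import numpy as np

# Toy distributional vectors (invented; real vectors would come from a
# trained embedding model).
vectors = {
    "bath":   np.array([0.9, 0.1, 0.0]),
    "shower": np.array([0.8, 0.2, 0.1]),
    "tree":   np.array([0.0, 0.2, 0.9]),
    "hot":    np.array([0.5, 0.7, 0.0]),
}

def cosine(u, v):
    # Standard cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def compose_additive(words):
    # Simplest composition function: element-wise sum of word vectors.
    return np.sum([vectors[w] for w in words], axis=0)

# Similar words end up closer than dissimilar ones ...
print(cosine(vectors["bath"], vectors["shower"]))  # relatively high
print(cosine(vectors["bath"], vectors["tree"]))    # relatively low

# ... and the same comparison applies to composed phrase vectors.
print(cosine(compose_additive(["hot", "bath"]),
             compose_additive(["hot", "shower"])))
```

In a real evaluation, the composition function itself would be one of the CDS models validated against the dataset described in this paper.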
Modelling the meaning of textual units larger than words using compositional and distributional information is the main subject of compositional distributional semantics (Mitchell and Lapata, 2010; Baroni and Zamparelli, 2010; Grefenstette and Sadrzadeh, 2011; Socher et al., 2012, to name a few studies). The fundamental principles of compositional distributional semantics, henceforth referred to as CDS, are mainly disseminated through papers written on the topic. Apart from the papers, it was the SemEval-2014 Shared Task 1 (Marelli et al., 2014) that essentially contributed to the expansion of CDS and increased interest in this domain. The goal of the task was to evaluate CDS models of English in terms of semantic relatedness and entailment on full sentences from the SICK corpus.
2As the principle of compositionality is attributed to Gottlob Frege, it is often called Frege’s principle.
1.3 The SICK corpus
The SICK corpus (Bentivogli et al., 2014) consists of 10K pairs of English sentences containing multiple lexical, syntactic, and semantic phenomena. It builds on two external data sources – the 8K ImageFlickr dataset (Rashtchian et al., 2010) and the SemEval-2012 Semantic Textual Similarity dataset (Agirre et al., 2012). Each sentence pair is human-annotated for relatedness in meaning and entailment. The relatedness score corresponds to the degree of semantic relatedness between two sentences and is calculated as the average of ten human ratings collected for this sentence pair on the 5-point Likert scale. This score indicates the extent to which the meanings of two sentences are related. The entailment relation between two sentences, in turn, is labelled with entailment, contradiction, or neutral. According to the SICK guidelines, the label assigned by the majority of human annotators is selected as the valid entailment label.
1.4 Motivation and organisation of the paper
Studying approaches to various natural language processing (henceforth NLP) problems, we have observed that the availability of language resources (e.g. training or testing data) stimulates the development of NLP tools and the estimation of NLP models. English is undoubtedly the most prominent language in this regard and English resources are the most numerous. Therefore, NLP methods are mostly designed for English and tested on English data, even if there is no guarantee that they are universal. In order to verify whether an NLP algorithm is adequate, it is not enough to evaluate it solely for English. It is also valuable to have high-quality resources for languages typologically different from English. Hence, we aim at building datasets for the evaluation of CDS models in languages other than English, which are often under-resourced. We strongly believe that the availability of test data will encourage the development of CDS models in these languages and allow the universality of CDS methods to be tested more thoroughly. We start with a high-quality dataset for Polish, which is a completely different language from English in at least two dimensions. First, it is a rather under-resourced language in contrast to the resource-rich English. Second, it is a fusional language with a relatively free word order, in contrast to the isolating English with a relatively fixed word order. If some heuristic is tested on e.g. Polish, the evaluation results can be approximately generalised to other Slavic languages. We hope the Slavic NLP community will be interested in designing and evaluating methods of semantic modelling for Slavic languages.
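As a small, concrete illustration of the SICK-style aggregation described in Section 1.3 (relatedness as the average of per-pair Likert ratings, entailment as the majority label), the sketch below aggregates a handful of invented judgements. The function name and the example ratings are ours and are not taken from the SICK or Polish datasets.

```python
from collections import Counter
from statistics import mean

def aggregate_pair(relatedness_ratings, entailment_labels):
    """Aggregate per-pair human judgements, SICK-style.

    relatedness_ratings: Likert-scale scores from several judges
    entailment_labels:   labels from {"entailment", "contradiction", "neutral"}
    """
    relatedness = mean(relatedness_ratings)                   # average rating
    label, _ = Counter(entailment_labels).most_common(1)[0]   # majority label
    return relatedness, label

# Invented example: three judges rate one sentence pair.
print(aggregate_pair([4, 5, 4], ["entailment", "entailment", "neutral"]))
# -> (4.333..., 'entailment')
```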
The procedure of building an evaluation dataset for validating compositional distributional semantics models of Polish generally builds on steps designed to assemble the SICK corpus (described in Section 1.3) because we aim at building an evaluation dataset which is comparable to the SICK corpus. However, the implementation of particular building steps significantly differs from the original SICK design assumptions, which is caused by both lack of necessary extraneous resources for Polish (see Section 2.1) and the need for Polish-specific transformation rules (see Section 2.2). Furthermore, the rules of arranging sentences into pairs (see Section 2.3) are defined anew taking into account the characteristic of data and bidirectional entailment annotations, since an entailment relation between two sentences must not be symmetric. Even if our assumptions of annotating sentence pairs coincide with the SICK principles to a certain extent (see Section 3.1), the annotation process differs from the SICK procedure, in particular by introducing an element of human verification of correctness of automatically transformed sentences (see Section 3.2) and some additional post-corrections (see Section 3.3). Finally, a summary of the dataset is provided in Section 4.1 and the dataset evaluation is given in Section 4.2. 2 Procedure of collecting data 2.1 Selection and description of images The first step of building the SICK corpus consisted in the random selection of English sentence pairs from existing datasets (Rashtchian et al., 2010; Agirre et al., 2012). Since we are not aware of accessibility of analogous resources for Polish, we have to select images first and then describe the selected images. Images are selected from the 8K ImageFlickr 785 dataset (Rashtchian et al., 2010). At first we wanted to take only these images the descriptions of which were selected for the SICK corpus. However, a cursory check shows that these images are quite homogeneous, with a predominant number of dogs depictions. Therefore, we independently extract 1K images and split them into 46 thematic groups (e.g. children, musical instruments, motorbikes, football, dogs). The numbers of images within individual thematic groups vary from 6 images in the volleyball and telephoning groups to 94 images in the various people group. The second largest groups are children and dogs with 50 images each. The chosen images are given to two authors who independently of each other formulate their descriptions based on a short instruction. The authors are instructed to write one single sentence (with a sentence predicate) describing the action in a displayed image. They should not describe an imaginable context or an interpretation of what may lie behind the scene in the picture. If some details in the picture are not obvious, they should not be described either. Furthermore, the authors should avoid multiword expressions, such as idioms, metaphors, and named entities, because those are not compositional linguistic phenomena. Finally, descriptions should contain Polish diacritics and proper punctuation. 2.2 Transformation of descriptions The second step of building the SICK corpus consisted in pre-processing extracted sentences, i.e. normalisation and expansion (Bentivogli et al., 2014, p. 3–4). Since the authors of Polish descriptions are asked to follow the guidelines (presented in Section 2.1), the normalisation step is not essential for our data. 
The expansion step, in turn, is implemented and the sentences provided by the authors are lexically and syntactically transformed in order to obtain derivative sentences with similar, contrastive, or neutral meanings. The following transformations are implemented: 1. dropping conjunction concerns sentences with coordinated predicates sharing a subject, e.g. Rowerzysta odpoczywa i obserwuje morze. (Eng. ‘A cyclist is resting and watching the sea.’). The finite form of one of the coordinated predicates is transformed into: • an active adjectival participle, e.g. Odpoczywaj ˛acy rowerzysta obserwuje morze. (Eng. ‘A resting cyclist is watching the sea.’) or Obserwuj ˛acy morze rowerzysta odpoczywa. (Eng. ‘A cyclist, who is watching the sea, is resting.’), • a contemporary adverbial participle, e.g. Rowerzysta, odpoczywaj ˛ac, obserwuje morze. (Eng. ‘A cyclist is watching the sea, while resting.’) or Rowerzysta odpoczywa, obserwuj ˛ac morze. (Eng. ‘A cyclist is resting, while watching the sea.’). 2. removing conjunct in adjuncts, i.e. the deletion of one of coordinated elements of an adjunct, e.g. Mały, ale zwinny kot miauczy. (Eng. ‘A small but agile cat miaows.’) can be changed into either Mały kot miauczy. (Eng. ‘A small cat miaows.’) or Zwinny kot miauczy. (Eng. ‘An agile cat miaows.’). 3. passivisation, e.g. Człowiek uje˙zd˙za byka. (Eng. ‘A man is breaking a bull in.’) can be transformed into Byk jest uje˙zd˙zany przez człowieka. (Eng. ‘A bull is being broken in by a man.’). 4. removing adjuncts, e.g. Dwa białe króliki siedz ˛a na trawie. (Eng. ‘Two small rabbits are sitting on the grass.’) can be changed into Króliki siedz ˛a. (Eng. ‘The rabbits are sitting.’). 5. swapping relative clause for participles, i.e. a relative clause swaps with a participle (and vice versa), e.g. Kobieta przytula psa, którego trzyma na smyczy. (Eng. ‘A woman hugs a dog which she keeps on a leash.’). The relative clause is interchanged for a participle construction, e.g. Kobieta przytula trzymanego na smyczy psa. (Eng. ‘A woman hugs a dog kept on a leash.’). 6. negation, e.g. M˛e˙zczy´zni w turbanach na głowach siedz ˛a na słoniach. (Eng. ‘Men in turbans on their heads are sitting on elephants.’) can be transformed into Nikt nie siedzi na słoniach. (Eng. ‘Nobody is sitting on elephants.’), ˙Zadni m˛e˙zczy´zni w turbanach na głowach nie siedz ˛a na słoniach. (Eng. ‘No men in turbans on their heads are sitting on elephants.’), and M˛e˙zczy´zni w turbanach na głowach nie siedz ˛a na słoniach. (Eng. ‘Men in turbans on their heads are not sitting on elephants.’). 786 7. constrained mixing of dependents from various sentences, e.g. Dwoje dzieci siedzi na wielbł ˛adach w pobli˙zu wysokich gór. (Eng. ‘Two children are sitting on camels near high mountains.’) can be changed into Dwoje dzieci siedzi przy zastawionym stole w pobli˙zu wysokich gór. (Eng. ‘Two children are sitting at the table laid with food near high mountains.’). The first five transformations are designed to produce sentences with a similar meaning, the sixth transformation outputs sentences with a contradictory meaning, and the seventh transformation should generate sentences with a neutral (or unrelated) meaning. All transformations are performed on the dependency structures of input sentences (Wróblewska, 2014). Some of the transformations are very productive (e.g. mixing dependents). Other, in turn, are sparsely represented in the output (e.g. dropping conjunction). 
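All of these transformations operate on dependency structures. As a rough illustration of the general mechanism, the sketch below applies the simplest one, adjunct removal, to a toy token/head/label encoding of a parse. The label inventory, the encoding and the example parse are invented for illustration; the actual rules work on full automatically parsed Polish sentences and are considerably more elaborate.

```python
# Schematic sketch of the "removing adjuncts" transformation on a toy
# dependency representation: (token, head_index, dependency_label).
ADJUNCT_LABELS = {"adjunct"}  # assumed label for optional modifiers

def remove_adjuncts(parse):
    # Remove every adjunct together with everything that depends on it.
    removed = set()
    changed = True
    while changed:
        changed = False
        for i, (tok, head, label) in enumerate(parse):
            if i in removed:
                continue
            if label in ADJUNCT_LABELS or head in removed:
                removed.add(i)
                changed = True
    return " ".join(tok for i, (tok, _, _) in enumerate(parse)
                    if i not in removed)

# "Dwa białe króliki siedzą na trawie." -> roughly "Króliki siedzą."
parse = [
    ("Dwa", 2, "adjunct"), ("białe", 2, "adjunct"), ("króliki", 3, "subj"),
    ("siedzą", -1, "root"), ("na", 3, "adjunct"), ("trawie", 4, "comp"),
    (".", 3, "punct"),
]
print(remove_adjuncts(parse))  # -> "króliki siedzą ." (capitalisation and
                               # spacing are left untouched in this sketch)
```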
The number of transformed sentences randomly selected to build the dataset is in the second column of Table 1. transformation selected dropping conjunction 139 2.0% removing conjunct in adjunct 485 6.9% passivisation 893 12.8% removing adjuncts 1013 14.5% swapping rc↔ptcp 1291 18.4% negation 1304 18.6% mixing dependents 1878 26.8% Table 1: Numbers of transformed sentences selected for annotation. 2.3 Data ensemble The final step of building the SICK corpus consisted in arranging normalised and expanded sentences into pairs. Since our data diverges from SICK data, the process of arranging Polish sentences into pairs also differs from pairing in the SICK corpus. The general idea behind the pair-ensembling procedure was to introduce sentence pairs with different levels of relatedness into the dataset. Apart from pairs connecting two sentences originally written by humans (as described in Section 2.1), there are also pairs in which an original sentence is connected with a transformed sentence. For each of the 1K images, the following 10 pairs are constructed (for A being the set of all sentences originally written by the first author, B being the set of all sentences originally written by the second author, a ∈A and b ∈B being the original descriptions of the picture): 1. (a, b) 2. (a, a1), where a1 ∈t(a), and t(a) is the set of all transformations of the sentence a 3. (b, b1), where b1 ∈t(b) 4. (a, b2), where b2 ∈t(b) 5. (b, a2), where a2 ∈t(a) 6. (a, a3), where a3 ∈t(a′), a′ ∈A, T (a′) = T (a), a′ ̸= a, for T (a) being the thematic group3 of a 7. (b, b3), where b3 ∈t(b′), b′ ∈B, T (b′) = T (b), b′ ̸= b 8. (a, a4), where a4 ∈A, T (a4) ̸= T (a)4 9. (b, b4), where b4 ∈B, T (b4) ̸= T (b) 10. (a, a5), where a5 ∈ t(a), a5 ̸= a1 for 50% images, (b, b5) (analogously) for other 50%.5 For each sentence pair (a, b) created according to this procedure, its reverse (b, a) is also included in our corpus. As a result, the working set consists of 20K sentence pairs. 3 Corpus annotation 3.1 Annotation assumptions The degree of semantic relatedness between two sentences is calculated as the average of all human ratings on the Likert scale with the range from 0 to 5. Since we do not want to excessively influence 3The thematic group of a sentence a corresponds to the thematic group of an image being the source of a (as described in Section 2.1). 4The pairs (a, a4) of the same authors’ descriptions of two images from different thematic groups are expected to be unrelated. The same applies to (b, b4). 5A repetition of point 2 with a restriction that a different pair is created (pairs of very related sentences are expected). We alternate between authors A and B to obtain equal author proportions in the final ensemble of pairs. 787 the annotations, the guidelines given to annotators are mainly example-based:6 • 5 (very related): Kot siedzi na płocie. (Eng. ‘A cat is sitting on the fence.’) vs. Na płocie jest du˙zy kot. (Eng. ‘There is a large cat on the fence.’), • 1–4 (more or less related): Kot siedzi na płocie. (Eng. ‘A cat is sitting on the fence.’) vs. Kot nie siedzi na płocie. (Eng. ‘A cat is not sitting on the fence.’); Kot siedzi na płocie. (Eng. ‘A cat is sitting on the fence.’) vs. Wła´sciciel dał kotu chrupki. (Eng. ‘The owner gave kibble to his cat.’); Kot siedzi na płocie. (Eng. ‘A cat is sitting on the fence.’) vs. Kot miauczy pod płotem. (Eng. ‘A cat miaows by the fence.’). • 0 (unrelated): Kot siedzi na płocie. (Eng. ‘A cat is sitting on the fence.’) vs. Zacz ˛ał pada´c deszcz. (Eng. 
‘It started to rain.’). Apart from these examples, there is a note in the annotation guidelines indicating that the degree of semantic relatedness is not equivalent to the degree of semantic similarity. Semantic similarity is only a special case of semantic relatedness, semantic relatedness is thus a more general term than the other one. Polish entailment labels correspond directly to the SICK labels (i.e. entailment, contradiction, neutral). The entailment label assigned by the majority of human judges is selected as the gold label. The entailment labels are defined as follows: • a wynika z b (b entails a) – if a situation or an event described by sentence b occurs, it is recognised that a situation or an event described by a occurs as well, i.e. a and b refer to the same event or the same situation, • a jest zaprzeczeniem b (a is the negation of b) – if a situation or an event described by b occurs, it is recognised that a situation or an event described by a may not occur at the same time, 6We realise that the boundary between semantic perception of a sentence by various speakers is fuzzy (it depends on speakers’ education, origin, age, etc.). It was thus our wellthought-out decision to draw only general annotation frames and to enable annotators to rely on their feel for language. • a jest neutralne wobec b (a is neutral to b) – the truth of a situation described by a cannot be determined on the basis of b. 3.2 Annotation procedure Similar to the SICK corpus, each Polish sentence pair is human-annotated for semantic relatedness and entailment by 3 human judges experienced in Polish linguistics.7 Since for each annotated pair (a, b), its reverse (b, a) is also subject to annotation, the entailment relation is in practice determined ‘in both directions’ for 10K sentence pairs. For the task of relatedness annotation, the order of sentences within pairs seems to be irrelevant, we can thus assume to obtain 6 relatedness scores for 10K unique pairs. Since the transformation process is fully automatic and to a certain extent based on imperfect dependency parsing, we cannot ignore errors in the transformed sentences. In order to avoid annotating erroneous sentences, the annotation process is divided into two stages: 1. a sentence pair is sent to a judge with the leader role, who is expected to edit and to correct the transformed sentence from this pair before annotation, if necessary, 2. the verified and possibly enhanced sentence pair is sent to the other two judges, who can only annotate it. The leader judges should correct incomprehensible and ungrammatical sentences with a minimal number of necessary changes. Unusual sentences which could be accepted by Polish speakers should not be modified. Moreover, the modified sentence may not be identical with the other sentence in the pair. The classification and statistics of distinct corrections made by the leader judges are provided in Table 2. A strict classification of error types is quite hard to provide because some sentences contain more than one error. We thus order the error types from the most serious errors (i.e. ‘sense’ errors) to the redundant corrections (i.e. ‘other’ type). If a sentence contains several errors, it is qualified for the higher order error type. In the case of sentences with ‘sense’ errors, the need for correction is uncontroversial and 7Our annotators have relatively strong linguistic background. Five of them have PhD in linguistics, five are PhD students, one is a graduate, and one is an undergraduate. 
788 error type # of errors % of errors sense 171 12.3 semantic 407 29.2 grammatical 243 17.4 word order 141 10.1 punctuation 366 26.2 other 68 4.9 Table 2: Classification and statistics of corrections. arises from an internal logical contradiction.8 The sentences with ‘semantic’ changes are syntactically correct, but deemed unacceptable by the leader annotators from the semantic or pragmatic point of view.9 The ‘grammatical’ errors mostly concern missing agreement.10 The majority of ‘word order’ corrections are unnecessary, but we found some examples which can be classified as actual word or phrase order errors.11 The correction of punctuation consists in adding or deleting a comma.12 The sentences in the ‘other’ group, in turn, could as well have been left unchanged because they are proper Polish sentences, but were apparently considered odd by the leader annotators. 8An example of ‘sense’ error: the sentence Chłopak w zielonej bluzie i czapce zje˙zd˙za na rolkach na le˙z ˛aco. (Eng. ‘A boy in a green sweatshirt and a cap roller-skates downhill in a lying position.’) is corrected into Chłopak w zielonej bluzie i czapce zje˙zd˙za na rolkach. (Eng. ‘A boy in a green sweatshirt and a cap roller-skates downhill.’). 9An example of ‘semantic’ correction: the sentence Dziewczyna trzyma w pysku patyk. (Eng. ‘A girl holds a stick in her muzzle.’) is corrected into Dziewczyna trzyma w ustach patyk. (Eng. ‘A girl holds a stick in her mouth.’). 10An example of ‘grammatical’ error: the sentence Grupasg.nom u´smiechaj ˛acych si˛e ludzi ta´ncz ˛apl. (Eng. *‘A group of smiling people are dancing.’) is corrected into Grupasg.nom u´smiechaj ˛acych si˛e ludzi ta´nczysg. (Eng. ‘A group of smiling people is dancing.’). 11An example of word order error: the sentence Samochód, który jest uszkodzony, koloru białego stoi na lawecie du˙zego auta. (lit. ‘A car that is damaged, of the white color stands on the trailer of a large car.’, Eng. ‘A white car that is damaged is standing on the trailer of a large car.’) is corrected into Samochód koloru białego, który jest uszkodzony, stoi na lawecie du˙zego auta. 12An example of punctuation correction: the wrong comma in the sentence Nad brzegiem wody, stoj ˛a dwaj m˛e˙zczy´zni z w˛edkami. (lit. ‘On the water’s edge, two men are standing with rods.’; Eng. ‘Two men with rods are standing on the water’s edge.’) should be deleted, i.e. Nad brzegiem wody stoj ˛a dwaj m˛e˙zczy´zni z w˛edkami. 3.3 Impromptu post-corrections During the annotation process it came out that sentences accepted by some human annotators are unacceptable for other annotators. We thus decided to garner annotators’ comments and suggestions for improving sentences. After validation of these suggestions by an experienced linguist, it turns out that most of these proposals concern punctuation errors (e.g. missing comma) and typos in 312 distinct sentences. These errors are fixed directly in the corpus because they should not impact the annotations of sentence pairs. The other suggestions concern more significant changes in 29 distinct sentences (mostly minor grammatical or semantic problems overlooked by the leader annotators). The annotations of pairs with modified sentences are resent to the annotators so that they can verify and update them. 4 Corpus summary and evaluation 4.1 Corpus statistics Tables 3 and 4 summarise the annotations of the resulting 10K sentence pairs corpus. 
Table 3 aggregates the occurrences of the 6 possible relatedness scores, calculated as the mean of all 6 individual annotations, rounded to an integer.
relatedness   # of pairs
0             1978
1             1428
2             1082
3             2159
4             2387
5              966
Table 3: Final relatedness scores rounded to integers (total: 10K pairs).
Table 4 shows the number of the particular entailment labels in the corpus. Since each sentence pair is annotated for entailment in both directions, the final entailment label is actually a pair of two labels:
• entailment+neutral points to ‘one-way’ entailment,
• contradiction+neutral points to ‘one-way’ contradiction,
• entailment+entailment, contradiction+contradiction, and neutral+neutral point to equivalence.
While the actual corpus labels are ordered in the sense that there is a difference between e.g. entailment+neutral and neutral+entailment (the entailment occurs in different directions), we treat all labels as unordered for the purpose of this summary (e.g. entailment+neutral covers neutral+entailment as well, representing the same type of relation between two sentences).
entailment                      # of pairs
neutral+neutral                 6483
entailment+neutral              1748
entailment+entailment            933
contradiction+contradiction      721
contradiction+neutral            115
Table 4: Final entailment labels (total: 10K pairs).
4.2 Inter-annotator agreement
The standard measure of inter-annotator agreement in various natural language labelling tasks is Cohen’s kappa (Cohen, 1960). However, this coefficient is designed to measure agreement between two annotators only. Since there are three annotators of each pair of ordered sentences, we decided to apply Fleiss’ kappa13 (Fleiss, 1971), designed for measuring agreement between multiple raters who give categorical ratings to a fixed number of items. An additional advantage of this measure is that different items can be rated by different human judges, which does not affect the measurement. The normalised Fleiss’ measure of inter-annotator agreement is:
κ = (P̄ − P̄e) / (1 − P̄e)
where the quantity P̄ − P̄e measures the degree of agreement actually attained in excess of chance, while “[t]he quantity 1 − P̄e measures the degree of agreement attainable over and above what would be predicted by chance” (Fleiss, 1971, p. 379). We recognise Fleiss’ kappa as particularly useful for measuring inter-annotator agreement with respect to entailment labelling in our evaluation dataset. First, there are more than two raters. Second, the entailment labels are categorical. Measured with Fleiss’ kappa, there is an inter-annotator agreement of κ = 0.734 for entailment labels in the Polish evaluation dataset, which is quite satisfactory for a semantic labelling task.
13As Fleiss’ kappa is actually the generalisation of Scott’s π (Scott, 1955), it is sometimes referred to as Fleiss’ multi-π, cf. Artstein and Poesio (2008).
With respect to semantic relatedness, the distinction in meaning between two sentences made by human judges is often very subtle. This is also reflected in the inter-annotator agreement scores measured with Fleiss’ kappa. Inter-annotator agreement measured for the six semantic relatedness groups corresponding to points on the Likert scale is quite low: κ = 0.337. If we measure inter-annotator agreement for three classes corresponding to the three relatedness groups from the annotation guidelines (see Section 3.1), i.e. <0>, <1, 2, 3, 4>, and <5>, the Fleiss’ score is significantly higher: κ = 0.543. Hence, we conclude that Fleiss’ kappa is not a reliable measure of inter-annotator agreement in relation to relatedness scores.
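The Fleiss computation described above can be sketched in a few lines. The function below implements κ = (P̄ − P̄e) / (1 − P̄e) for a matrix of per-item category counts; the toy matrix is invented and does not reproduce the actual annotation data.

```python
import numpy as np

def fleiss_kappa(counts):
    """counts[i, j] = number of raters assigning item i to category j.
    Every item is assumed to be rated by the same number of raters n."""
    counts = np.asarray(counts, dtype=float)
    N, _ = counts.shape
    n = counts[0].sum()                       # raters per item
    p_j = counts.sum(axis=0) / (N * n)        # overall category proportions
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))  # per-item agreement
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)

# Invented example: 5 sentence pairs, 3 raters each, with the categories
# (entailment, contradiction, neutral).
toy = [[3, 0, 0],
       [2, 0, 1],
       [0, 3, 0],
       [1, 0, 2],
       [0, 0, 3]]
print(round(fleiss_kappa(toy), 3))
```

The same computation underlies the relatedness-group scores discussed above, only with relatedness classes in place of entailment labels.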
Therefore, we decided to use Krippendorff’s α instead. Krippendorff’s α (Krippendorff, 1980, 2013) is a coefficient appropriate for measuring the interannotator agreement of a dataset which is annotated with multiple judges and characterised by different magnitudes of disagreement and missing values. Krippendorff proposes distance metrics suitable for various scales: binary, nominal, interval, ordinal, and ratio. In ordinal measurement14 the attributes can be rank-ordered, but distances between them do not have any meaning. Measured with Krippendorff’s ordinal α, there is an inter-annotator agreement of α = 0.780 for relatedness scores in the Polish evaluation dataset, which is quite satisfactory as well. Hence, we conclude that our dataset is a reliable resource for the purpose of evaluating compositional distributional semantics model of Polish. 5 Conclusions The goal of this paper is to present the procedure of building a Polish evaluation dataset for the validation of compositional distributional semantics models. As we aim at building an evalua14Nominal measurement is useless for measuring agreement between relatedness scores (α = 0.340 is the identical value as Fleiss’ kappa, since all disagreements are considered equal). We also test interval measurement, in which the distance between the attributes does have meaning and an average of an interval variable is computed. The interval score measured for relatedness annotations is quite high α = 0.785, but we doubt whether the distance between relatedness scores is meaningful in this case. 790 tion dataset which is comparable to the SICK corpus, the general assumptions of our procedure correspond to the design principles of the SICK corpus. However, the procedure of building the SICK corpus cannot be adapted without modifications. First, the Polish seed-sentences have to be written based on the images which are selected from 8K ImageFlickr dataset and split into thematic groups, since usable datasets are not publicly available. Second, since the process of transforming sentences seems to be language-specific, the linguistic transformation rules appropriate for Polish have to be defined from scratch. Third, the process of arranging Polish sentences into pairs is defined anew taking into account the data characteristic and bidirectional entailment annotations. The discrepancies relative to the SICK procedure also concern the annotation process itself. Since an entailment relation between two sentences must not be symmetric, each sentence pair is annotated for entailment in both directions. Furthermore, we introduce an element of human verification of correctness of automatically transformed sentences and some additional post-corrections. The presented procedure of building a dataset was tested on Polish. However, it is very likely that the annotation framework will work for other Slavic languages (e.g. Czech with an excellent dependency parser). The presented procedure results in building the Polish test corpus of relatively high quality, confirmed by the inter-annotator agreement coefficients of κ = 0.734 (measured with Fleiss’ kappa) for entailment labels and of α = 0.780 (measured with Krippendorff’s ordinal alpha) for relatedness scores. Acknowledgments We would like to thank the reliable and tenacious annotators of our dataset: Alicja DziedzicRawska, Bo˙zena Itoya, Magdalena Król, Anna Latusek, Justyna Małek, Małgorzata Michalik, Agnieszka Norwa, Małgorzata Szajbel-Keck, Alicja Walichnowska, Konrad Zieli´nski, and some other. 
The research presented in this paper was supported by SONATA 8 grant no 2014/15/D/HS2/03486 from the National Science Centre Poland. References Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. SemEval-2012 Task 6: A Pilot on Semantic Textual Similarity. In Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM). pages 385–393. Ron Artstein and Massimo Poesio. 2008. Inter-Coder Agreement for Computational Linguistics. Computational Linguistics 34:557–596. Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. pages 1183–1193. Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A Neural Probabilistic Language Model. Journal of Machine Learning Research 3:1137–1155. Yoshua Bengio, Holger Schwenk, Jean-Sébastien Senécal, Fréderic Morin, and Jean-Luc Gauvain. 2006. Neural Probabilistic Language Models. In D.E. Holmes and L.C. Jain, editors, Innovations in Machine Learning. Theory and Applications, Springer-Verlag, Berlin Heidelberg, volume 194 of Studies in Fuzziness and Soft Computing, pages 137–186. Luisa Bentivogli, Raffaella Bernardi, Marco Marelli, Stefano Menini, Marco Baroni, and Roberto Zamparelli. 2014. SICK through the SemEval Glasses. Lesson learned from the evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. Journal of Language Resources and Evaluation 50:95–124. Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20:37–46. John Rupert Firth. 1957. A synopsis of linguistic theory, 1930-1955. Studies in Linguistic Analysis. Special volume of the Philological Society pages 1–32. Joseph L. Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin 75:378–382. Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011. Experimental Support for a Categorical Compositional Distributional Model of Meaning. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP 2011). pages 1394–1404. Zellig Harris. 1954. Distributional structure. Word 10:146–162. 791 Theo M. V. Janssen. 2012. Compositionality: its historic context. In Wolfram Hinzen, Edouard Machery, and Markus Werning, editors, The Oxford Handbook of Compositionality, Oxford University Press, Studies in Fuzziness and Soft Computing, pages 19– 46. Klaus Krippendorff. 1980. Content Analysis: An Introduction to Its Methodology. Sage Publications, Beverly Hills. Klaus Krippendorff. 2013. Content Analysis: An Introduction to Its Methodology. Sage Publication, Thousand Oaks, 3rd edition. Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. 2014. SemEval-2014 Task 1: Evaluation of Compositional Distributional Semantic Models on Full Sentences through Semantic Relatedness and Textual Entailment. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014). pages 1–8. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed Representations of Words and Phrases and their Compositionality. In Advances in Neural Information Processing Systems 26. Proceedings of Neural Information Processing Systems 2013. pages 3111–3119. Jeff Mitchell and Mirella Lapata. 2010. 
Composition in Distributional Models of Semantics. Cognitive Science 34:1388–1429. Cyrus Rashtchian, Peter Young, Micah Hodosh, and Julia Hockenmaier. 2010. Collecting Image Annotations Using Amazon’s Mechanical Turk. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk. pages 139–147. William A. Scott. 1955. Reliability of Content Analysis: The Case of Nominal Scale Coding. Public Opinion Quarterly 19:321–325. Richard Socher, Brody Huval, Christopher Manning, and Andrew Ng. 2012. Semantic Compositionality through Recursive Matrix-Vector Spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. pages 1201–1211. Alina Wróblewska. 2014. Polish Dependency Parser Trained on an Automatically Induced Dependency Bank. Ph.D. dissertation, Institute of Computer Science, Polish Academy of Sciences, Warsaw. 792
2017
73
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 793–805 Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1074
Automatic Annotation and Evaluation of Error Types for Grammatical Error Correction
Christopher Bryant, Mariano Felice, Ted Briscoe
ALTA Institute, Computer Laboratory, University of Cambridge, Cambridge, UK
{cjb255, mf501, ejb}@cl.cam.ac.uk
Abstract
Until now, error type performance for Grammatical Error Correction (GEC) systems could only be measured in terms of recall because system output is not annotated. To overcome this problem, we introduce ERRANT, a grammatical ERRor ANnotation Toolkit designed to automatically extract edits from parallel original and corrected sentences and classify them according to a new, dataset-agnostic, rule-based framework. This not only facilitates error type evaluation at different levels of granularity, but can also be used to reduce annotator workload and standardise existing GEC datasets. Human experts rated the automatic edits as “Good” or “Acceptable” in at least 95% of cases, so we applied ERRANT to the system output of the CoNLL-2014 shared task to carry out a detailed error type analysis for the first time.
1 Introduction
Grammatical Error Correction (GEC) systems are often only evaluated in terms of overall performance because system hypotheses are not annotated. This can be misleading, however, and a system that performs poorly overall may in fact outperform others at specific error types. This is significant because a robust specialised system is actually more desirable than a mediocre general system. Without an error type analysis, however, this information is completely unknown. The main aim of this paper is hence to rectify this situation and provide a method by which parallel error correction data can be automatically annotated with error type information. This not only facilitates error type evaluation, but can also be used to provide detailed error type feedback to non-native learners. Given that different corpora are also annotated according to different standards, we also attempted to standardise existing datasets under a common error type framework.
Our approach consists of two main steps. First, we automatically extract the edits between parallel original and corrected sentences by means of a linguistically-enhanced alignment algorithm (Felice et al., 2016) and second, we classify them according to a new, rule-based framework that relies solely on dataset-agnostic information such as lemma and part-of-speech. We demonstrate the value of our approach, which we call the ERRor ANnotation Toolkit (ERRANT)1, by carrying out a detailed error type analysis of each system in the CoNLL-2014 shared task on grammatical error correction (Ng et al., 2014). It is worth mentioning that despite an increased interest in GEC evaluation in recent years (Dahlmeier and Ng, 2012; Felice and Briscoe, 2015; Bryant and Ng, 2015; Napoles et al., 2015; Grundkiewicz et al., 2015; Sakaguchi et al., 2016), ERRANT is the only toolkit currently capable of producing error type scores.
2 Edit Extraction
The first stage of automatic annotation is edit extraction.
Specifically, given an original and corrected sentence pair, we need to determine the start and end boundaries of any edits. This is fundamentally an alignment problem: We took a guide tour on center city . We took a guided tour of the city center . Table 1: A sample alignment between an original and corrected sentence (Felice et al., 2016). 1https://github.com/chrisjbryant/errant 793 The first attempt at automatic edit extraction was made by Swanson and Yamangil (2012), who simply used the Levenshtein distance to align parallel original and corrected sentences. As the Levenshtein distance only aligns individual tokens however, they also merged all adjacent nonmatches in an effort to capture multi-token edits. Xue and Hwa (2014) subsequently improved on Swanson and Yamangil’s work by training a maximum entropy classifier to predict whether edits should be merged or not. Most recently, Felice et al. (2016) proposed a new method of edit extraction using a linguistically-enhanced alignment algorithm supported by a set of merging rules. More specifically, they incorporated various linguistic information, such as part-of-speech and lemma, into the cost function of the Damerau-Levenshtein2 algorithm to make it more likely that tokens with similar linguistic properties aligned. This approach ultimately proved most effective at approximating human edits in several datasets (80-85% F1), and so we use it in the present study. 3 Automatic Error Typing Having extracted the edits, the next step is to assign them error types. While Swanson and Yamangil (2012) did this by means of maximum entropy classifiers, one disadvantage of this approach is that such classifiers are biased towards their particular training corpora. For example, a classifier trained on the First Certificate in English (FCE) corpus (Yannakoudakis et al., 2011) is unlikely to perform as well on the National University of Singapore Corpus of Learner English (NUCLE) (Dahlmeier and Ng, 2012) or vice versa, because both corpora have been annotated according to different standards (cf. Xue and Hwa (2014)). Instead, a dataset-agnostic error type classifier is much more desirable. 3.1 A Rule-Based Error Type Framework To solve this problem, we took inspiration from Swanson and Yamangil’s (2012) observation that most error types are based on part-of-speech (POS) categories, and wrote a rule to classify an edit based only on its automatic POS tags. We then added another rule to similarly differentiate between Missing, Unnecessary and Replace2Damerau-Levenshtein is an extension of Levenshtein that also handles transpositions; e.g. AB→BA ment errors depending on whether tokens were inserted, deleted or substituted. Finally, we extended our approach to classify errors that are not well-characterised by POS, such as Spelling or Word Order, and ultimately assigned all error types based solely on automatically-obtained, objective properties of the data. In total, we wrote roughly 50 rules. While many of them are very straightforward, significant attention was paid to discriminating between different kinds of verb errors. For example, despite all having the same correction, the following sentences contain different types of common learner errors: (a) He IS asleep now. [IS →is]: orthography (b) He iss asleep now. [iss →is]: spelling (c) He has asleep now. [has →is]: verb (d) He being asleep now. [being →is]: form (e) He was asleep now. [was →is]: tense (f) He are asleep now. 
[are →is]: SVA To handle these cases, we hence wrote the following ordered rules: 1. Are the lower case forms of both sides of the edit the same? (a) 2. Is the original token a real word? (b) 3. Do both sides of the edit have the same lemma? (c) 4. Is one side of the edit a gerund (VBG) or participle (VBN)? (d) 5. Is one side of the edit in the past tense (VBD)? (e) 6. Is one side of the edit in the 3rd person present tense (VBZ)? (f) While the final three rules could certainly be reordered, we informally found the above sequence performed best during development. It is also worth mentioning that this is a somewhat simplified example and that there are additional rules to discriminate between auxiliary verbs, main verbs and multi verb expressions. Nevertheless, the above case exemplifies our approach, and a more complete description of all rules is provided with the software. 794 Code Meaning Description / Example ADJ Adjective big →wide ADJ:FORM Adjective Form Comparative or superlative adjective errors. goodest →best, bigger →biggest, more easy →easier ADV Adverb speedily →quickly CONJ Conjunction and →but CONTR Contraction n’t →not DET Determiner the →a MORPH Morphology Tokens have the same lemma but nothing else in common. quick (adj) →quickly (adv) NOUN Noun person →people NOUN:INFL Noun Inflection Count-mass noun errors. informations →information NOUN:NUM Noun Number cat →cats NOUN:POSS Noun Possessive friends →friend’s ORTH Orthography Case and/or whitespace errors. Bestfriend →best friend OTHER Other Errors that do not fall into any other category (e.g. paraphrasing). at his best →well, job →professional PART Particle (look) in →(look) at PREP Preposition of →at PRON Pronoun ours →ourselves PUNCT Punctuation ! →. SPELL Spelling genectic →genetic, color →colour UNK Unknown The annotator detected an error but was unable to correct it. VERB Verb ambulate →walk VERB:FORM Verb Form Infinitives (with or without “to”), gerunds (-ing) and participles. to eat →eating, dancing →danced VERB:INFL Verb Inflection Misapplication of tense morphology. getted →got, fliped →flipped VERB:SVA Subject-Verb Agreement (He) have →(He) has VERB:TENSE Verb Tense Includes inflectional and periphrastic tense, modal verbs and passivization. eats →ate, eats →has eaten, eats →can eat, eats →was eaten WO Word Order only can →can only Table 2: The list of 25 main error categories in our new framework with examples and explanations. 3.2 A Dataset-Agnostic Classifier One of the key strengths of a rule-based approach is that by being dependent only on automatic mark-up information, our classifier is entirely dataset independent and does not require labelled training data. This is in contrast with machine learning approaches which not only learn dataset specific biases, but also presuppose the existence of sufficient quantities of training data. A second significant advantage of our approach is that it is also always possible to determine precisely why an edit was assigned a particular error category. In contrast, human and machine learning classification decisions are often much less transparent. Finally, by being fully deterministic, our approach bypasses bias effects altogether and should hence be more consistent. 3.3 Automatic Markup The prerequisites for our rule-based classifier are that each token in both the original and corrected sentence is POS tagged, lemmatized, stemmed and dependency parsed. 
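Before turning to the concrete mark-up tools, the ordered verb rules of Section 3.1 can be illustrated with a short sketch. This is not ERRANT's actual implementation: the token attributes, the stand-in word list and the mapping from each question to a category reflect one plausible reading of the rules above.

```python
# Illustrative sketch only. Each edit side is a dict with invented fields
# "text", "lemma" and "pos" (a Penn-style tag); KNOWN_WORDS stands in for
# the Hunspell word list used for non-word detection.
KNOWN_WORDS = {"is", "was", "are", "has", "being"}

def classify_verb_edit(orig, corr):
    if orig["text"].lower() == corr["text"].lower():
        return "ORTH"        # (a) IS -> is: case difference only
    if orig["text"].lower() not in KNOWN_WORDS:
        return "SPELL"       # (b) iss -> is: original is a non-word
    if orig["lemma"] != corr["lemma"]:
        return "VERB"        # (c) has -> is: a different verb was chosen
    if "VBG" in (orig["pos"], corr["pos"]) or "VBN" in (orig["pos"], corr["pos"]):
        return "VERB:FORM"   # (d) being -> is: gerund/participle involved
    if "VBD" in (orig["pos"], corr["pos"]):
        return "VERB:TENSE"  # (e) was -> is: past tense involved
    if "VBZ" in (orig["pos"], corr["pos"]):
        return "VERB:SVA"    # (f) are -> is: 3rd person agreement
    return "VERB:TENSE"      # fallback; the real rules are far more detailed

print(classify_verb_edit({"text": "was", "lemma": "be", "pos": "VBD"},
                         {"text": "is",  "lemma": "be", "pos": "VBZ"}))
# -> VERB:TENSE
```

As noted above, the real classifier additionally distinguishes auxiliary verbs, main verbs and multi-verb expressions, which this sketch deliberately omits.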
We use spaCy3 v1.7.3 for all but the stemming, which is performed by the Lancaster Stemmer in NLTK.4 Since fine-grained POS tags are often too detailed for the purposes of error evaluation, we also map spaCy’s Penn Treebank style tags to the coarser set of Universal Dependency tags.5 We use the latest Hunspell GB-large word list6 to help classify non-word errors. The marked-up tokens in an edit span are then input to the classifier and an error type is returned. 3.4 Error Categories The complete list of 25 error types in our new framework is shown in Table 2. Note that most of them can be prefixed with ‘M:’, ‘R:’ or ‘U:’, depending on whether they describe a Missing, Replacement, or Unnecessary edit, to enable 3https://spacy.io/ 4http://www.nltk.org/ 5http://universaldependencies.org/tagset-conversion/ en-penn-uposf.html 6https://sourceforge.net/projects/wordlist/files/speller/ 2017.01.22/ 795 evaluation at different levels of granularity (see Appendix A for all valid combinations). This means we can choose to evaluate, for example, only replacement errors (anything prefixed by ‘R:’), only noun errors (anything suffixed with ‘NOUN’) or only replacement noun errors (‘R:NOUN’). This flexibility allows us to make more detailed observations about different aspects of system performance. One caveat concerning error scheme design is that it is always possible to add new categories for increasingly detailed error types; for instance, we currently label [could →should] a tense error, when it might otherwise be considered a modal error. The reason we do not call it a modal error, however, is because it would then become less clear how to handle other cases such as [can →should] and [has eaten →should eat], which might be considered a more complex combination of modal and tense error. As it is impractical to create new categories and rules to differentiate between such narrow distinctions however, our final framework aims to be a compromise between informativeness and practicality. 3.5 Classifier Evaluation As our new error scheme is based solely on automatically obtained properties of the data, there are no gold standard labels against which to evaluate classifier performance. For this reason, we instead carried out a small-scale manual evaluation, where we simply asked 5 GEC researchers to rate the appropriateness of the predicted error types for 200 randomly chosen edits in context (100 from FCE-test and 100 from CoNLL-2014) as “Good”, “Acceptable” or “Bad”. “Good’ meant the chosen type was the most appropriate for the given edit, “Acceptable” meant the chosen type was appropriate, but probably not optimum, while “Bad” meant the chosen type was not appropriate for the edit. Raters were warned that the edit boundaries had been determined automatically and hence might be unusual, but that they should focus on the appropriateness of the error type regardless of whether they agreed with the boundary or not. It is worth stating that the main purpose of this evaluation was not to evaluate the specific strengths and weaknesses of the classifier, but rather ascertain how well humans believed the predicted error types characterised each edit. GEC is known to be a highly subjective task (Bryant and Rater Good Acceptable Bad 1 92.0% 4.0% 4.0% 2 89.5% 6.5% 4.0% 3 83.0% 13.0% 4.0% 4 84.5% 11.0% 4.5% 5 82.5% 15.5% 2.0% OVERALL 86.3% 10.0% 3.7% Table 3: The percent distribution for how each expert rated the appropriateness of the predicted error types. E.g. 
Rater 3 considered 83% of all predicted types to be “Good”. Ng, 2015) and so we were more interested in overall judgements than specific disagreements. The results from this evaluation are shown in Table 3. Significantly, all 5 raters considered at least 95% of the predicted error types to be either “Good” or “Acceptable”, despite the degree of noise introduced by automatic edit extraction. Furthermore, whenever raters judged an edit as “Bad”, this could usually be traced back to a POS or parse error; e.g. [ring →rings] might be considered a NOUN:NUM or VERB:SVA error depending on whether the POS tagger considered both sides of the edit nouns or verbs. Interannotator agreement was also good at 0.724 κfree (Randolph, 2005). In contrast, although incomparable on account of the different metric and error scheme, the best results using machine learning were between 5070% F1 (Felice et al., 2016). Ultimately however, we believe the high scores awarded by the raters validates the efficacy of our rule-based approach. 4 Error Type Scoring Having described how to automatically annotate parallel sentences with ERRANT, we now also have a method to annotate system hypotheses; this is the first step towards an error type evaluation. Since no scorer is currently capable of calculating error type performance however (Dahlmeier and Ng, 2012; Felice and Briscoe, 2015; Napoles et al., 2015), we instead built our own. Fortunately, one benefit of explicitly annotating system hypotheses is that it makes evaluation much more straightforward. In particular, for each sentence, we only need to compare the edits in the hypothesis against the edits in each respective reference and measure the overlap. Any edit with the same span and correction in both files is hence a 796 true positive (TP), while unmatched edits in the hypothesis and references are false positives (FP) and false negatives (FN) respectively. These results can then be grouped by error type for the purposes of error type evaluation. Finally, it is worth noting that this scorer is much simpler than other scorers in GEC which typically incorporate edit extraction or alignment directly into their algorithms. Our approach, on the other hand, treats edit extraction and evaluation as separate tasks. 4.1 Gold Reference vs. Auto Reference Before evaluating an automatically annotated hypothesis against its reference, we must also address another mismatch: namely that hypothesis edits must be extracted and classified automatically, while reference edits are typically extracted and classified manually using a different framework. Since evaluation is now reduced to a straightforward comparison between two files however, it is especially important that the hypothesis and references are both processed in the same way. For instance, a hypothesis edit [have eating →has eaten] will not match the reference edits [have →has] and [eating →eaten] because the former is one edit while the latter is two edits, even though they equate to the same thing. To solve this problem, we can reprocess the references in the same way as the hypotheses. In other words, we can apply ERRANT to the references such that each reference edit is subject to the same automatic extraction and classification criteria as each hypothesis edit. While it may seem unorthodox to discard gold reference information in favour of automatic reference information, this is necessary to minimise the difference between hypothesis and reference edits and also standardise error type annotations. 
To show that automatic references are feasible alternatives to gold references, we evaluated each team in the CoNLL-2014 shared task using both types of reference with the M2 scorer (Dahlmeier and Ng, 2012), the de facto standard of GEC evaluation, and our own scorer. Table 4 hence shows that there is little difference between the overall scores for each team, and we formally validated this hypothesis for precision, recall and F0.5 by means of bootstrap significance testing (Efron and Tibshirani, 1993). Ultimately, we found no statistically significant difference M2 Scorer Our Scorer Team Gold Auto Gold Auto AMU 35.01 35.05 31.95 32.25 CAMB 37.33 37.34 33.39 34.01 CUUI 36.79 37.59 33.32 34.64 IITB 5.90 5.96 5.67 5.74 IPN 7.09 7.68 5.86 6.14 NTHU 29.92 29.77 25.62 25.66 PKU 25.32 25.38 23.40 23.60 POST 30.88 31.01 27.54 27.99 RAC 26.68 26.88 22.83 23.15 SJTU 15.19 15.22 14.85 14.89 UFC 7.84 7.89 7.84 7.89 UMC 25.37 25.45 23.08 23.52 Table 4: Overall scores for each team in CoNLL2014 using gold and auto references with both the M2 scorer and our simpler edit comparison approach. All scores are in terms of F0.5. between automatic and gold references (1,000 iterations, p > .05) which leads us to conclude that our automatic references are qualitatively as good as human references. 4.2 Comparison with the M2 Scorer Despite using the same metric, Table 4 also shows that the M2 scorer tends to produce slightly higher F0.5 scores than our own. This initially led us to believe that our scorer was underestimating performance, but we subsequently found that instead the M2 scorer tends to overestimate performance (cf. Felice and Briscoe (2015) and Napoles et al. (2015)). In particular, given a choice between matching [have eating →has eaten] from Annotator 1 or [have →has] and [eating →eaten] from Annotator 2, the M2 scorer will always choose Annotator 2 because two true positives (TP) are worth more than one. Similarly, whenever the scorer encounters two false positives (FP) within a certain distance of each other,7 it merges them and treats them as one false positive; e.g. [is a cat →are a cats] is selected over [is →are] and [cat →cats] even though these edits are best handled separately. In other words, the M2 scorer exploits its dynamic edit boundary prediction to artificially maximise true positives and minimise false positives and hence produce slightly inflated scores. 7The distance is controlled by the max unchanged words parameter which is set to 2 by default. 797 AMU CAMB CUUI IITB Type P R F0.5 P R F0.5 P R F0.5 P R F0.5 Missing 43.94 14.32 31.08 45.96 29.71 41.43 26.37 18.16 24.18 15.38 0.59 2.56 Replacement 37.22 26.92 34.57 37.53 28.12 35.18 45.90 22.98 38.27 29.85 1.49 6.22 Unnecessary 25.51 27.47 25.88 34.20 33.33 34.02 46.15 1.53 6.77 IPN NTHU PKU POST Type P R F0.5 P R F0.5 P R F0.5 P R F0.5 Missing 2.86 0.29 1.04 34.33 11.39 24.47 33.33 4.37 14.34 31.14 13.13 24.44 Replacement 9.87 3.86 7.53 27.61 19.15 25.37 29.62 18.33 26.37 33.16 19.33 29.01 Unnecessary 0.00 0.00 0.00 34.76 15.97 28.14 0.00 0.00 0.00 26.32 32.84 27.40 RAC SJTU UFC UMC Type P R F0.5 P R F0.5 P R F0.5 P R F0.5 Missing 1.52 0.27 0.79 62.50 4.44 17.28 40.08 23.57 35.16 Replacement 29.41 20.82 27.17 50.54 3.43 13.47 72.00 2.64 11.52 34.71 9.70 22.90 Unnecessary 0.00 0.00 0.00 17.65 11.36 15.89 16.86 17.17 16.92 Table 5: Precision, recall and F0.5 for Missing, Unnecessary, and Replacement errors for each team. A dash indicates the team’s system did not attempt to correct the given error type (TP+FP = 0). 
5 CoNLL-2014 Shared Task Analysis To demonstrate the value of ERRANT, we applied it to the data produced in the CoNLL-2014 shared task (Ng et al., 2014). Specifically, we automatically annotated all the system hypotheses and official reference files.8 Although ERRANT can be applied to any dataset of parallel sentences, we chose to evaluate on CoNLL-2014 because it represents the largest collection of publicly available GEC system output. For more information about the systems in CoNLL-2014, we refer the reader to the shared task paper. 5.1 Edit Operation In our first category experiment, we simply investigated the performance of each system in terms of Missing, Replacement and Unnecessary edits. The results are shown in Table 5 with additional information in Appendix B, Table 10. The most surprising result is that five teams (AMU, IPN, PKU, RAC, UFC) failed to correct any unnecessary token errors at all. This is noteworthy because unnecessary token errors account for roughly 25% of all errors in the CoNLL-2014 test data and so failing to address them significantly limits a system’s maximum performance. While the reason for this is clear in some cases, e.g. UFC’s rule-based system was never designed to tackle unnecessary tokens (Gupta, 2014), it is less clear in others, e.g. there is no obvious reason why AMU’s SMT system failed to learn when 8http://www.comp.nus.edu.sg/∼nlp/conll14st.html to delete tokens (Junczys-Dowmunt and Grundkiewicz, 2014). AMU’s result is especially remarkable given that their system still came 3rd overall despite this limitation. In contrast, CUUI’s classifier approach (Rozovskaya et al., 2014) was the most successful at correcting not only unnecessary token errors, but also replacement token errors, while CAMB’s hybrid MT approach (Felice et al., 2014) significantly outperformed all others in terms of missing token errors. It would hence make sense to combine these two approaches, and indeed recent research has shown this improves overall performance (Rozovskaya and Roth, 2016). 5.2 General Error Types Table 6 shows precision, recall and F0.5 for each of the error types in our proposed framework for each team in CoNLL-2014. As some error types are more common than others, we also provide the TP, FP and FN counts used to make this table in Appendix B, Table 11. Overall, CAMB was the most successful team in terms of error types, achieving the highest Fscore in 10 (out of 24) error categories, followed by AMU, who scored highest in 6 categories. All but 3 teams (IITB, IPN and POST) achieved the best score in at least 1 category, which suggests that different approaches to GEC complement different error types. Only CAMB attempted to correct at least 1 error from every category. 
Other interesting observations we can make from this table include: 798 AMU CAMB CUUI IITB IPN NTHU PKU POST RAC SJTU UFC UMC ADJ P 4.88 9.09 0.00 0.00 0.00 66.67 0.00 12.50 0.00 0.00 R 6.67 13.89 0.00 0.00 0.00 7.14 0.00 3.57 0.00 0.00 F0.5 5.15 9.77 0.00 0.00 0.00 25.00 0.00 8.33 0.00 0.00 ADJ:FORM P 55.56 75.00 100.00 100.00 0.00 33.33 100.00 50.00 8.00 100.00 R 62.50 60.00 33.33 40.00 0.00 37.50 28.57 14.29 40.00 60.00 F0.5 56.82 71.43 71.43 76.92 0.00 34.09 66.67 33.33 9.52 88.24 ADV P 6.67 11.54 0.00 0.00 0.00 0.00 0.00 0.00 4.76 8.77 R 2.94 20.45 0.00 0.00 0.00 0.00 0.00 0.00 3.03 12.50 F0.5 5.32 12.64 0.00 0.00 0.00 0.00 0.00 0.00 4.27 9.33 CONJ P 6.25 0.00 0.00 0.00 0.00 0.00 R 7.69 0.00 0.00 0.00 0.00 0.00 F0.5 6.49 0.00 0.00 0.00 0.00 0.00 CONTR P 29.17 40.00 46.15 0.00 33.33 0.00 66.67 28.57 R 100.00 33.33 85.71 0.00 57.14 0.00 40.00 33.33 F0.5 33.98 38.46 50.85 0.00 36.36 0.00 58.82 29.41 DET P 33.33 36.16 30.92 21.43 0.00 36.03 29.35 26.09 0.00 43.88 36.21 R 14.09 43.03 51.91 0.92 0.00 28.46 7.85 49.41 0.00 12.54 23.66 F0.5 26.18 37.35 33.64 3.92 0.00 34.21 18.96 28.81 0.00 29.25 32.74 MORPH P 55.56 59.15 55.88 28.57 1.16 27.87 20.80 27.78 32.69 100.00 40.00 43.75 R 48.91 47.73 20.88 5.41 1.39 21.52 30.59 12.50 21.25 2.74 5.00 15.91 F0.5 54.09 56.45 41.85 15.38 1.20 26.32 22.22 22.32 29.51 12.35 16.67 32.41 NOUN P 20.90 25.27 0.00 28.57 4.35 0.00 0.00 10.00 10.53 0.00 27.78 R 12.39 19.49 0.00 2.20 2.17 0.00 0.00 1.92 1.92 0.00 9.90 F0.5 18.37 23.86 0.00 8.40 3.62 0.00 0.00 5.43 5.56 0.00 20.41 NOUN:INFL P 60.00 60.00 50.00 25.00 100.00 62.50 66.67 66.67 0.00 R 85.71 66.67 71.43 16.67 33.33 62.50 57.14 66.67 0.00 F0.5 63.83 61.22 53.19 22.73 71.43 62.50 64.52 66.67 0.00 NOUN:NUM P 49.42 44.20 44.06 41.18 14.38 44.05 29.39 31.05 29.00 54.29 44.29 R 56.14 53.74 59.49 3.87 11.28 47.62 42.54 56.20 36.45 10.27 16.94 F0.5 50.63 45.83 46.47 14.06 13.63 44.72 31.33 34.10 30.23 29.23 33.48 NOUN:POSS P 20.00 66.67 14.29 0.00 0.00 25.00 50.00 R 14.29 10.53 5.26 0.00 0.00 4.55 5.00 F0.5 18.52 32.26 10.64 0.00 0.00 13.16 17.86 ORTH P 60.00 66.67 73.81 3.45 0.00 28.57 49.32 16.57 50.00 R 11.11 40.00 59.62 4.55 0.00 6.90 64.29 49.12 17.24 F0.5 31.91 58.82 70.45 3.62 0.00 17.54 51.72 19.10 36.23 OTHER P 20.34 23.60 10.34 0.00 2.33 1.37 14.29 10.00 0.00 0.00 11.58 R 6.92 10.03 0.83 0.00 0.31 0.58 0.58 1.13 0.00 0.00 3.15 F0.5 14.65 18.57 3.14 0.00 1.01 1.07 2.49 3.90 0.00 0.00 7.54 PART P 71.43 33.33 25.00 16.67 50.00 20.00 R 20.83 15.38 4.76 21.74 9.52 11.11 F0.5 48.08 27.03 13.51 17.48 27.03 17.24 PREP P 47.56 41.44 33.33 75.00 0.00 10.71 21.74 0.00 36.59 20.53 R 16.05 35.66 13.49 1.44 0.00 12.35 2.17 0.00 7.18 13.36 F0.5 34.15 40.14 25.76 6.70 0.00 11.01 7.76 0.00 20.11 18.54 PRON P 41.18 20.37 0.00 0.00 11.11 50.00 100.00 27.27 5.00 0.00 22.92 R 9.72 13.41 0.00 0.00 1.69 2.82 1.54 4.62 1.52 0.00 13.92 F0.5 25.00 18.46 0.00 0.00 5.26 11.49 7.25 13.76 3.42 0.00 20.30 PUNCT P 25.00 60.47 37.21 100.00 0.00 44.83 27.27 0.00 5.00 43.02 R 3.52 15.48 10.60 1.85 0.00 8.97 6.34 0.00 0.96 23.13 F0.5 11.26 38.24 24.77 8.62 0.00 24.90 16.42 0.00 2.72 36.71 SPELL P 76.92 77.55 0.00 0.00 25.00 0.00 44.17 68.63 73.98 100.00 R 63.83 41.76 0.00 0.00 4.23 0.00 71.29 71.43 85.85 1.37 F0.5 73.89 66.20 0.00 0.00 12.61 0.00 47.81 69.17 76.09 6.49 VERB P 18.84 15.12 0.00 7.69 0.00 14.29 0.00 0.00 0.00 16.33 R 8.23 8.33 0.00 0.74 0.00 0.70 0.00 0.00 0.00 5.37 F0.5 14.98 13.00 0.00 2.66 0.00 2.94 0.00 0.00 0.00 11.59 VERB:FORM P 34.92 36.36 68.75 0.00 8.77 35.11 30.77 25.00 34.41 28.57 31.11 R 23.40 25.00 24.18 
0.00 5.75 35.11 35.56 3.45 32.65 4.65 16.09 F0.5 31.79 33.33 50.23 0.00 7.94 35.11 31.62 11.11 34.04 14.08 26.22 VERB:INFL P 100.00 100.00 100.00 100.00 50.00 100.00 100.00 0.00 R 100.00 100.00 50.00 50.00 50.00 50.00 100.00 0.00 F0.5 100.00 100.00 83.33 83.33 50.00 83.33 100.00 0.00 VERB:SVA P 49.09 44.05 54.80 50.00 24.56 50.56 56.25 32.69 35.56 59.09 81.58 60.00 R 27.55 32.74 71.85 1.12 14.58 67.16 18.75 17.35 31.07 13.83 29.25 15.00 F0.5 42.45 41.20 57.53 5.15 21.60 53.19 40.18 27.78 34.56 35.71 60.08 37.50 VERB:TENSE P 20.55 26.27 70.00 66.67 3.70 31.25 9.38 20.00 22.78 14.81 100.00 31.25 R 8.72 17.51 4.12 1.25 0.61 2.98 3.66 2.31 20.57 2.45 0.63 12.05 F0.5 16.16 23.88 16.67 5.81 1.84 10.78 7.14 7.91 22.30 7.38 3.05 23.70 WO P 38.89 0.00 66.67 0.00 0.00 41.18 R 33.33 0.00 14.29 0.00 0.00 35.00 F0.5 37.63 0.00 38.46 0.00 0.00 39.77 Table 6: Precision, recall and F0.5 for each team and error type. A dash indicates the team’s system did not attempt to correct the given error type (TP+FP = 0). The highest F-score for each type is highlighted. 799 CAMB Type P R F0.5 M:DET 43.20 51.77 44.68 R:DET 19.33 35.37 21.26 U:DET 43.75 39.90 42.92 DET 36.16 43.03 37.35 CUUI Type P R F0.5 M:DET 23.86 45.00 26.34 R:DET 27.03 24.39 26.46 U:DET 36.19 66.37 39.81 DET 30.92 51.91 33.64 Table 7: Detailed breakdown of Determiner errors for two teams. • Despite the prevalence of spell checkers nowadays, many teams did not seem to employ them; this would have been an easy way to boost overall performance. • Although several teams built specialised classifiers for DET and PREP errors, CAMB’s hybrid MT approach still outperformed them. This might be because the classifiers were trained using a different error type framework however. • CUUI’s classifiers significantly outperformed all other approaches at ORTH and VERB:FORM errors. This suggests classifiers are well-suited to these error types. • Although UFC’s rule-based approach was the best at VERB:SVA errors, CUUI’s classifier was not very far behind. • Only AMU managed to correct any CONJ errors. • Content word errors (i.e. ADJ, ADV, NOUN and VERB) were unsurprisingly very difficult for all teams. 5.3 Detailed Error Types In addition to analysing general error types, the modular design of our framework also allows us to evaluate error type performance at an even greater level of detail. For example, Table 7 shows the breakdown of Determiner errors for two teams using different approaches in terms of edit operation. Note that this is a representative example of detailed error type performance, as an analysis of all error type combinations for all teams would take up too much space. Team P R F0.5 AMU 16.90 5.33 11.79 CAMB 27.22 17.06 24.32 CUUI 15.69 3.67 9.48 IITB 28.57 0.94 4.15 IPN 3.33 0.47 1.51 NTHU 0.00 0.00 0.00 PKU 25.00 1.40 5.73 POST 12.77 2.82 7.48 RAC 2.96 2.82 2.93 SJTU 10.00 0.47 1.99 UFC UMC 19.82 9.82 16.47 Table 8: Each team’s performance at correcting multi-token edits; i.e. there are at least two tokens on one side of the edit. While CAMB’s hybrid MT approach achieved a higher score than CUUI’s classifier overall, our more detailed evaluation reveals that CUUI actually outperformed CAMB at Replacement Determiner errors. We also learn that CAMB scored twice as highly on M:DET and U:DET than it did on R:DET and that CUUI’s significantly higher U:DET recall was offset by a lower precision. Ultimately, this shows that even though one approach might be better than another overall, different approaches may still have complementary strengths. 
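Because every edit carries a composite code such as R:DET, the counts produced by the scorer can be re-aggregated at whatever granularity is required, which is how operation-level, type-level and combined breakdowns can all be derived from the same underlying counts. A minimal sketch, assuming the per-code Counter objects from the scoring sketch given earlier:

```python
from collections import Counter

def regroup(per_code_counts, level="type"):
    """Re-aggregate counts keyed by codes such as 'R:DET'.

    level='op'   -> operation only       (M, R, U)
    level='type' -> error type only      (DET, PREP, NOUN:NUM, ...)
    level='full' -> keep operation:type  (M:DET, R:DET, U:DET, ...)
    """
    grouped = Counter()
    for code, n in per_code_counts.items():
        op, _, etype = code.partition(":")
        if level == "op":
            key = op
        elif level == "type":
            key = etype or code  # codes without an operation prefix stay as-is
        else:
            key = code
        grouped[key] += n
    return grouped

# Example: determiner edits collapse to a single DET row at type level.
tp = Counter({"M:DET": 3, "R:DET": 2, "U:DET": 4, "R:PREP": 1})
print(regroup(tp, level="type"))  # Counter({'DET': 9, 'PREP': 1})
```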
5.4 Multi Token Errors Another benefit of explicitly annotating all hypothesis edits is that edit spans become fixed; this means we can evaluate system performance in terms of edit size. Table 8 hence shows the overall performance for each team at correcting multitoken edits, where a multi-token edit is an edit that has at least two tokens on either side. In the CoNLL-2014 test set, there are roughly 220 such edits (about 10% of all edits). In general, teams did not do well at multi-token edits. In fact only three teams achieved scores greater than 10% F0.5 and all of them used MT (AMU, CAMB, UMC). This is significant because recent work has suggested that the main goal of GEC should be to produce fluent-sounding, rather than just grammatical sentences, even though this often requires complex multi-token edits (Sakaguchi et al., 2016). If no system is particularly adept at correcting multi-token errors however, robust fluency correction will likely require more sophisticated methods than are currently available. 800 AMU CAMB CUUI IITB IPN NTHU PKU POST RAC SJTU UFC UMC 0 10 20 30 40 50 F0.5 Detection Correction Figure 1: The difference between detection and correction scores for each team overall. 5.5 Detection vs. Correction Another important aspect of GEC that is seldom reported in the literature is that of error detection; i.e. the extent to which a system can identify erroneous tokens in text. This can be calculated by comparing the edit overlap between the hypothesis and reference files regardless of the proposed correction in a manner similar to Recognition evaluation in the HOO shared tasks for GEC (Dale and Kilgarriff, 2011). Figure 1 hence shows how each team’s score for detection differed in relation to their score for correction. While CAMB scored highest for detection overall, it is interesting to note that CUUI ultimately performed slightly better than CAMB at correction. This suggests CUUI was more successful at correcting the errors they detected than CAMB. In contrast, IPN and PKU are notable for detecting significantly more errors than they were able to correct. Nevertheless, a system’s ability to detect errors, even if it is unable to correct them, is still likely to be valuable information to a learner (Rei and Yannakoudakis, 2016). Finally, although we do not do so here, our scorer is also capable of providing a detailed error type breakdown for detection. 6 Conclusion In this paper, we described ERRANT, a grammatical ERRor ANnotation Toolkit designed to automatically annotate parallel error correction data with explicit edit spans and error type information. ERRANT can be used to not only facilitate a detailed error type evaluation in GEC, but also to standardise existing error correction corpora and reduce annotator workload. We release ERRANT with this paper. Our approach makes use of previous work to align sentences based on linguistic intuition and then introduces a new rule-based framework to classify edits. This framework is entirely dataset independent, and relies only on automatically obtained information such as POS tags and lemmas. A small-scale evaluation of our classifier found that each rater considered >95% of the predicted error types as either “Good” (85%) or “Acceptable” (10%). We demonstrated the value of ERRANT by carrying out a detailed evaluation of system error type performance for all teams in the CoNLL2014 shared task on Grammatical Error Correction. 
We found that different systems had different strengths and weaknesses which we hope researchers can exploit to further improve general performance. References Christopher Bryant and Hwee Tou Ng. 2015. How far are we from fully automatic high quality grammatical error correction? In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 697–707. http://www.aclweb.org/anthology/P15-1068. Daniel Dahlmeier and Hwee Tou Ng. 2012. Better evaluation for grammatical error correction. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Montr´eal, Canada, pages 568–572. http://www.aclweb.org/anthology/N12-1067. Robert Dale and Adam Kilgarriff. 2011. Helping Our Own: The HOO 2011 pilot shared task. In Proceedings of the 13th European Workshop on Natural Language Generation. Association for Computational Linguistics, Stroudsburg, PA, USA, ENLG ’11, pages 242–249. http://dl.acm.org/citation.cfm?id=2187681.2187725. Bradley Efron and Robert J. Tibshirani. 1993. An Introduction to the Bootstrap. Chapman & Hall, New York. 801 Mariano Felice and Ted Briscoe. 2015. Towards a standard evaluation method for grammatical error detection and correction. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Denver, Colorado, pages 578– 587. http://www.aclweb.org/anthology/N15-1060. Mariano Felice, Christopher Bryant, and Ted Briscoe. 2016. Automatic extraction of learner errors in ESL sentences using linguistically enhanced alignments. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, Osaka, Japan, pages 825–835. http://aclweb.org/anthology/C16-1079. Mariano Felice, Zheng Yuan, Øistein E. Andersen, Helen Yannakoudakis, and Ekaterina Kochmar. 2014. Grammatical error correction using hybrid systems and type filtering. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task. Association for Computational Linguistics, Baltimore, Maryland, pages 15– 24. http://www.aclweb.org/anthology/W14-1702. Roman Grundkiewicz, Marcin Junczys-Dowmunt, and Edward Gillian. 2015. Human evaluation of grammatical error correction systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 461–470. http://aclweb.org/anthology/D15-1052. Anubhav Gupta. 2014. Grammatical error detection using tagger disagreement. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task. Association for Computational Linguistics, Baltimore, Maryland, pages 49–52. http://www.aclweb.org/anthology/W14-1706. Marcin Junczys-Dowmunt and Roman Grundkiewicz. 2014. The AMU system in the CoNLL-2014 shared task: Grammatical error correction by dataintensive and feature-rich statistical machine translation. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task. Association for Computational Linguistics, Baltimore, Maryland, pages 25–33. 
http://www.aclweb.org/anthology/W14-1703. Courtney Napoles, Keisuke Sakaguchi, Matt Post, and Joel Tetreault. 2015. Ground truth for grammatical error correction metrics. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Association for Computational Linguistics, Beijing, China, pages 588–593. http://www.aclweb.org/anthology/P15-2097. Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2014. The CoNLL-2014 shared task on grammatical error correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task. ACL, Baltimore, Maryland, USA, pages 1–14. http://aclweb.org/anthology/W/W14/W141701.pdf. Justus J. Randolph. 2005. Free-marginal multirater kappa: An alternative to Fleiss’ fixedmarginal multirater kappa. Joensuu University Learning and Instruction Symposium http://files.eric.ed.gov/fulltext/ED490661.pdf. Marek Rei and Helen Yannakoudakis. 2016. Compositional sequence labeling models for error detection in learner writing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1181–1191. http://www.aclweb.org/anthology/P16-1112. Alla Rozovskaya, Kai-Wei Chang, Mark Sammons, Dan Roth, and Nizar Habash. 2014. The IllinoisColumbia system in the CoNLL-2014 shared task. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task. Association for Computational Linguistics, Baltimore, Maryland, pages 34–42. http://www.aclweb.org/anthology/W14-1704. Alla Rozovskaya and Dan Roth. 2016. Grammatical error correction: Machine translation and classifiers. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Berlin, Germany, pages 2205–2215. http://aclweb.org/anthology/P16-1208. Keisuke Sakaguchi, Courtney Napoles, Matt Post, and Joel Tetreault. 2016. Reassessing the goals of grammatical error correction: Fluency instead of grammaticality. Transactions of the Association for Computational Linguistics 4:169–182. https://tacl2013.cs.columbia.edu/ojs/index.php/tacl/ article/view/800. Ben Swanson and Elif Yamangil. 2012. Correction detection and error type selection as an ESL educational aid. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Montr´eal, Canada, pages 357– 361. http://www.aclweb.org/anthology/N12-1037. Huichao Xue and Rebecca Hwa. 2014. Improved correction detection in revised ESL sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Baltimore, Maryland, pages 599–604. http://www.aclweb.org/anthology/P14-2098. 802 Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading ESOL texts. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Portland, Oregon, USA, pages 180–189. http://www.aclweb.org/anthology/P11-1019. 
803 A Complete list of valid error code combinations Operation Tier Type Missing Unnecessary Replacement Token Tier Part Of Speech Adjective M:ADJ U:ADJ R:ADJ Adverb M:ADV U:ADV R:ADV Conjunction M:CONJ U:CONJ R:CONJ Determiner M:DET U:DET R:DET Noun M:NOUN U:NOUN R:NOUN Particle M:PART U:PART R:PART Preposition M:PREP U:PREP R:PREP Pronoun M:PRON U:PRON R:PRON Punctuation M:PUNCT U:PUNCT R:PUNCT Verb M:VERB U:VERB R:VERB Other Contraction M:CONTR U:CONTR R:CONTR Morphology R:MORPH Orthography R:ORTH Other M:OTHER U:OTHER R:OTHER Spelling R:SPELL Word Order R:WO Morphology Tier Adjective Form R:ADJ:FORM Noun Inflection R:NOUN:INFL Noun Number R:NOUN:NUM Noun Possessive M:NOUN:POSS U:NOUN:POSS R:NOUN:POSS Verb Form M:VERB:FORM U:VERB:FORM R:VERB:FORM Verb Inflection R:VERB:INFL Verb Agreement R:VERB:SVA Verb Tense M:VERB:TENSE U:VERB:TENSE R:VERB:TENSE Table 9: There are 55 total possible error types. This table shows all of them except UNK, which indicates an uncorrected error. A dash indicates an impossible combination. B TP, FP and FN counts for various CoNLL-2014 results AMU CAMB CUUI IITB Type TP FP FN TP FP FN TP FP FN TP FP FN Missing 58 74 347 131 154 310 77 215 347 2 11 336 Replacement 428 722 1162 477 794 1219 381 449 1277 20 47 1320 Unnecessary 0 0 412 125 365 330 158 304 316 6 7 385 IPN NTHU PKU POST Type TP FP FN TP FP FN TP FP FN TP FP FN Missing 1 34 339 46 88 358 16 32 350 52 115 344 Replacement 53 484 1319 299 784 1262 279 663 1243 312 629 1302 Unnecessary 0 2 389 65 122 342 0 1 397 155 434 317 RAC SJTU UFC UMC Type TP FP FN TP FP FN TP FP FN TP FP FN Missing 1 65 368 15 9 323 0 0 339 99 148 321 Replacement 325 780 1236 47 46 1325 36 14 1326 143 269 1331 Unnecessary 0 5 407 45 210 351 0 0 381 74 365 357 Table 10: True Positive, False Positive and False Negative counts for each team in terms of Missing, Replacement and Unnecessary edits. The total number of edits may vary for each system, as this depends on the individual references that are chosen during evaluation. These results were used to make Table 5. 
804 AMU CAMB CUUI IITB IPN NTHU PKU POST RAC SJTU UFC UMC ADJ TP 2 5 0 0 0 0 2 0 1 0 0 0 FP 39 50 0 3 2 3 1 4 7 8 0 21 FN 28 31 30 23 23 25 26 33 27 20 20 26 ADJ:FORM TP 5 6 3 2 0 3 2 1 2 0 0 3 FP 4 2 0 0 1 6 0 1 23 0 0 0 FN 3 4 6 3 5 5 5 6 3 5 5 2 ADV TP 1 9 0 0 0 0 0 0 0 1 0 5 FP 14 69 1 1 1 2 1 0 4 20 0 52 FN 33 35 36 32 33 35 37 41 37 32 33 35 CONJ TP 1 0 0 0 0 0 0 0 0 0 0 0 FP 15 18 0 0 1 1 0 0 0 6 0 26 FN 12 15 14 12 12 13 12 15 13 13 12 14 CONTR TP 7 2 6 0 0 0 0 4 0 2 0 2 FP 17 3 7 0 1 0 0 8 1 1 0 5 FN 0 4 1 5 5 5 5 3 5 3 5 4 DET TP 52 179 231 3 0 107 27 210 0 43 0 88 FP 104 316 516 11 13 190 65 595 9 55 0 155 FN 317 237 214 324 325 269 317 215 346 300 327 284 MORPH TP 45 42 19 4 1 17 26 10 17 2 4 14 FP 36 29 15 10 85 44 99 26 35 0 6 18 FN 47 46 72 70 71 62 59 70 63 71 76 74 NOUN TP 14 23 0 2 2 0 0 2 2 0 0 10 FP 53 68 5 5 44 9 29 18 17 16 0 26 FN 99 95 109 89 90 102 103 102 102 93 92 91 NOUN:INFL TP 6 6 5 0 1 2 5 4 4 0 0 0 FP 4 4 5 0 3 0 3 2 2 1 0 0 FN 1 3 2 6 5 4 3 3 2 6 6 6 NOUN:NUM TP 128 122 141 7 22 100 97 136 78 19 0 31 FP 131 154 179 10 131 127 233 302 191 16 0 39 FN 100 105 96 174 173 110 131 106 136 166 178 152 NOUN:POSS TP 3 2 0 0 0 0 1 0 0 1 0 1 FP 12 1 0 0 0 0 6 1 38 3 0 1 FN 18 17 20 18 19 22 18 21 20 21 20 19 ORTH TP 3 14 31 0 1 0 2 36 28 0 0 5 FP 2 7 11 0 28 1 5 37 141 0 0 5 FN 24 21 21 21 21 27 27 20 29 24 21 24 OTHER TP 24 38 3 0 1 2 2 4 0 0 0 11 FP 94 123 26 8 42 144 12 36 52 11 0 84 FN 323 341 358 329 322 345 343 349 346 323 327 338 PART TP 5 4 1 0 0 5 0 0 0 2 0 2 FP 2 8 3 0 0 25 0 0 0 2 0 8 FN 19 22 20 19 21 18 21 20 19 19 17 16 PREP TP 39 92 34 3 0 30 0 5 0 15 0 31 FP 43 130 68 1 2 250 0 18 3 26 0 120 FN 204 166 218 205 207 213 219 225 215 194 206 201 PRON TP 7 11 0 0 1 2 1 3 1 0 0 11 FP 10 43 1 5 8 2 0 8 19 22 0 37 FN 65 71 63 57 58 69 64 62 65 62 62 68 PUNCT TP 5 26 16 2 0 13 0 9 0 1 0 37 FP 15 17 27 0 16 16 0 24 29 19 0 49 FN 137 142 135 106 114 132 123 133 129 103 109 123 SPELL TP 60 38 0 0 3 0 72 70 91 0 0 1 FP 18 11 1 1 9 2 91 32 32 0 0 0 FN 34 53 74 68 68 74 29 28 15 70 70 72 VERB TP 13 13 0 0 1 0 1 0 0 0 0 8 FP 56 73 0 6 12 12 6 4 5 17 0 41 FN 145 143 165 133 135 152 141 164 151 139 131 141 VERB:FORM TP 22 24 22 0 5 33 32 3 32 4 0 14 FP 41 42 10 1 52 61 72 9 61 10 0 31 FN 72 72 69 87 82 61 58 84 66 82 82 73 VERB:INFL TP 2 2 0 0 1 1 1 1 2 0 0 0 FP 0 0 0 0 0 0 1 0 0 0 1 0 FN 0 0 2 2 1 1 1 1 0 2 2 2 VERB:SVA TP 27 37 97 1 14 90 18 17 32 13 31 15 FP 28 47 80 1 43 88 14 35 58 9 7 10 FN 71 76 38 88 82 44 78 81 71 81 75 85 VERB:TENSE TP 15 31 7 2 1 5 6 4 36 4 1 20 FP 58 87 3 1 26 11 58 16 122 23 0 44 FN 157 146 163 158 163 163 158 169 139 159 159 146 WO TP 0 7 0 2 0 0 0 0 0 0 0 7 FP 0 11 10 1 0 0 0 2 1 0 0 10 FN 12 14 14 12 12 11 12 12 12 11 11 13 Table 11: True Positive, False Positive and False Negative counts for each error type for each team. These results were used to make Table 6. 805
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 806–817 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1075 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 806–817 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1075 Evaluation Metrics for Machine Reading Comprehension: Prerequisite Skills and Readability Saku Sugawara♠, Yusuke Kido♠, Hikaru Yokono♣, and Akiko Aizawa♦♠ ♠The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan ♣Fujitsu Laboratories Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki, Japan ♦Natural Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo, Japan [email protected] [email protected] [email protected] [email protected] Abstract Knowing the quality of reading comprehension (RC) datasets is important for the development of natural-language understanding systems. In this study, two classes of metrics were adopted for evaluating RC datasets: prerequisite skills and readability. We applied these classes to six existing datasets, including MCTest and SQuAD, and highlighted the characteristics of the datasets according to each metric and the correlation between the two classes. Our dataset analysis suggests that the readability of RC datasets does not directly affect the question difficulty and that it is possible to create an RC dataset that is easy to read but difficult to answer. 1 Introduction A major goal of natural language processing (NLP) is to develop agents that can understand natural language. Such an ability can be tested with a reading comprehension (RC) task that requires the agent to read open-domain documents and answer questions about them. Constructing systems with RC competence is challenging because RC comprises multiple processes including parsing, understanding cohesion, and inference with linguistic and general knowledge. Clarifying what a system achieves is important in the development of RC systems. To achieve robust improvement, systems should be measured according to a variety of metrics beyond simple accuracy. However, a current problem is that most RC datasets are presented only with superficial categories, such as question types (e.g., what, where, and who) and answer types (e.g., numeric, location, and person). In addition, Chen et al. (2016) noted that some questions in datasets may not be suited to the testing of RC systems. In such ID: SQuAD, United Methodist Church Context: The United Methodist Church (UMC) practices infant and adult baptism. Baptized Members are those who have been baptized as an infant or child, but who have not subsequently professed their own faith. Question: What are members who have been baptized as an infant or child but who have not subsequently professed their own faith? Answer: Baptized Members ID: MCTest, mc160.dev.8 Context: Sara wanted to play on a baseball team. She had never tried to swing a bat and hit a baseball before. Her Dad gave her a bat and together they went to the park to practice. Question: Why was Sara practicing? Answer: She wanted to play on a team Figure 1: Examples of RC questions from SQuAD (Rajpurkar et al., 2016) and MCTest (Richardson et al., 2013) (the Contexts are excerpts). situations, it is difficult to obtain an accurate assessment of the RC system. 
Norvig (1989) argued that questions that are easy for humans to answer often turn out to be difficult for machines. For example, consider the two RC questions in Figure 1. The first example is from SQuAD (Rajpurkar et al., 2016), although the document is taken from a Wikipedia article and was therefore written for adults. The question is answerable simply by noticing one sentence, without needing to fully understand the content of the text. On the other hand, consider the second example from MCTest (Richardson et al., 2013), which was written for children and is easy to read. Here, answering the question involves gathering information from multiple sentences and utilizing a combination of several skills, such as understanding causal relations (Sara wanted... →they went to...), coreference resolution (Sara and Her Dad = they), and complementing ellipsis (baseball team = team). These two examples show that the readability of the text does not necessarily correlate with the difficulty of answering questions about it. 806 Furthermore, the accompanying categories of existing RC datasets cannot help with the analysis of this issue. In this study, our goal is to investigate how these two types of difficulty, namely “answering questions” and “reading text,” are correlated in RC. Corresponding to each type, we formalize two classes of evaluation metrics, prerequisite skills and readability, and analyze existing RC datasets. Our intention is to provide the basis of an evaluation methodology of RC systems to help their robust development. Our two classes of metrics are inspired by the analysis in McNamara and Magliano (2009) of human text comprehension in psychology. They considered two aspects of text comprehension, namely “strategic/skilled comprehension” and “text ease of processing.” Our first class defines metrics for “strategic/skilled comprehension,” namely the difficulty of comprehending the context when answering questions. We adopted the set of prerequisite skills that Sugawara et al. (2017) proposed for the finegrained analysis of RC capability. Their study also presented an important observation of the relation between the difficulty of an RC task and prerequisite skills: the more skills that are required to answer a question, the more difficult is the question. Based on this observation, in this work, we assume that the number of skills required to answer a question is a reasonable indication of the difficulty of the question. This is because each skill corresponds to one of the functions of an NLP system, which has to be capable of that functionality. Our second class defines metrics for “text ease of processing,” namely the difficulty of reading the text. We regard it as readability of the text in terms of syntactic and lexical complexity. From among readability studies in NLP, we adopt a wide range of linguistic features proposed by Vajjala and Meurers (2012), which can be used for texts with no available annotations. The contributions of this paper are as follows. 1. We adopt two classes of evaluation metrics to show the qualitative features of RC datasets. Through analyses of RC datasets, we demonstrate that there is only a weak correlation between the difficulty of questions and the readability of context texts in RC datasets. 2. We revise a previous classification of prerequisite skills for RC. Specifically, skills of knowledge reasoning are organized by using insights of entailment phenomena in NLP and human text comprehension in psychology. 3. 
We annotate six existing RC datasets, compared to the two datasets considered in Sugawara and Aizawa (2016), with our organized metrics being used in the comparison. We have made the results publicly available1 and report on the characteristics of the datasets and the differences between them. We should note that, in this study, RC datasets with different task formulations were annotated with prerequisite skills under the same conditions. Annotators first saw a context, a question, and its answer. They selected the sentences required to provide the answer, and then annotated them with appropriate prerequisite skills. That is, the datasets were annotated from the point of view of whether the context entailed the hypothesis constructed from the pair of the question and answer. This means that our methodology cannot quantify the systems’ competence in searching the context for necessary sentences and answer candidates. In other words, our methodology can be only used to evaluate the competence of understanding RC questions as contextual entailments. The remainder of this paper is divided into the following sections. First, we discuss related work in Section 2. Next, we specify our two classes of metrics in Section 3. In Section 4, we annotate existing RC datasets with the prerequisite skills. Section 5 gives the results of our dataset analysis and Section 6 discusses their implications. Section 7 presents our conclusions. 2 Related Work 2.1 Reading Comprehension Datasets In this section, we present a short history of RC datasets. To our knowledge, Hirschman et al. (1999) were the first to use NLP methods for RC. Their dataset comprised reading materials for grades 3–6 with simple 5W (wh-) questions. Subsequent investigations into questions of natural language understanding focused on other formulations, such as question answering (Yang et al., 2015; Wang et al., 2007; Voorhees et al., 1999) and 1http://www-al.nii.ac.jp/rc_dataset_ analysis 807 textual entailment (Bentivogli et al., 2010; Sammons et al., 2010; Dagan et al., 2006). One of the RC tasks of the time was QA4MRE (Sutcliffe et al., 2013). The highest accuracy achieved for this task was 59% and the size of the dataset was very limited: there were only 224 gold-standard questions, which is insufficient for machine learning methods. This means that an important issue for designing RC datasets is their scalability. Richardson et al. (2013) presented MCTest, which is an opendomain narrative dataset for gauging comprehension at a child’s level. This dataset was created by crowdsourcing and was based on a scalable methodology. Since then, additional large-scale datasets have been proposed with the development of machine learning methods in NLP. For example, the CNN/Daily Mail dataset (Hermann et al., 2015) and CBTest (Hill et al., 2016) have approximately 1.4M and 688K passages, respectively. These context texts and questions were automatically curated and generated from large corpora. However, Chen et al. (2016) indicated that approximately 25% of the questions in the CNN/Daily Mail dataset are either unsolvable or nonsensical. This dataset-quality issue highlights the demand for more stable and robust sourcing methods. Several additional RC datasets were presented in the last half of 2016, involving large documents and sensible queries that were guaranteed by crowdsourcing or other human testing. They were intended to provide large and high-quality content for machine learning models. 
Nonetheless, as shown in the examples of Figure 1, they were not offered with metrics that could evaluate NLP systems adequately with respect to the difficulty of questions and the surface features of texts. 2.2 Reading Comprehension in Psychology In psychology, there is a rich tradition of research on human text comprehension. The construction– integration (C–I) model (Kintsch, 1988) is one of the most basic and influential theories. This model assumes a connectional and computational architecture for text comprehension. It assumes that comprehension is the processing of information based on the following two steps.2 1. Construction: read sentences or clauses as inputs; form and elaborate concepts and propositions corresponding to the inputs. 2Note that this is a very simplified overview. 2. Integration: associate the contents to understand them consistently (e.g., coreference, discourse, and coherence). During these steps, three levels of representation are constructed (van Dijk and Kintsch, 1983): the surface code (i.e., wording and syntax), the textbase (i.e., text propositions with cohesion), and the situation model (i.e., mental representation). Based on these assumptions, McNamara and Magliano (2009) proposed two aspects of text comprehension, namely “strategic/skilled comprehension” and “text ease of processing.” We adopted these assumptions as the basis of our two classes of evaluation metrics (Section 3). In an alternative approach, Kintsch (1993) proposed two dichotomies for the classification of human inferences, including the knowledge-based inference assumed in the C–I model. The first dichotomy is between inferences that are automatic and those that are controlled. However, Graesser et al. (1994) indicated that this distinction is ambiguous, because there is a continuum between the two states that depends on individuals. Therefore, this dichotomy is unsuited to empirical evaluation, which is our focus. The second dichotomy is between inferences that are retrieved and those that are generated. Retrieved means that the information used for inference is retrieved entirely from the context. In contrast, when inferences are generated, the reader uses external knowledge that goes beyond the context. A similar distinction was proposed by McNamara and Magliano (2009), namely that between bridging and elaboration. A bridging inference connects current information to other information that has been encountered previously. Elaboration connects current information to external knowledge that is not included in the context. We use these two types of inference in the classification of knowledge reasoning. 3 Evaluation Metrics for Datasets Following the depiction of text comprehension by McNamara and Magliano (2009), we adopted two classes for the evaluation of RC datasets: prerequisite skills and readability. For the prerequisite skills class (Section 3.1), we refined RC skills that were proposed by Sugawara et al. (2017) and Sugawara and Aizawa (2016). However, a problem in these studies is that their categorization of knowledge reasoning 808 was provisional and with a weak theoretical background. Therefore, in this study, we reorganized the category of knowledge reasoning in terms of textual entailment in NLP and human text comprehension in psychology. In research on textual entailment, several methodologies have been proposed for the precise analysis of entailment phenomena (Dagan et al., 2013; LoBue and Yates, 2011). 
In psychology research, as described in Section 2.2, McNamara and Magliano (2009) proposed a similar distinction for inferences: bridging versus elaboration. We utilized these insights in developing a comprehensive but not overly specific classification of knowledge reasoning. Our prerequisite skills class includes the textbase and situation model (van Dijk and Kintsch, 1983). In our terminology, this means understanding each fact and associating multiple facts in a text, such as the relations of events, characters, or the topic of a story. The skills also involve knowledge reasoning, which is divided into several metrics according to the distinctions of human inferences. This point is discussed by Kintsch (1993) and McNamara and Magliano (2009). It also accords with the classification of entailment phenomena by Dagan et al. (2013) and LoBue and Yates (2011). Readability metrics (Section 3.2) are quantitative measures used to assess the difficulty of reading, with respect to vocabulary and the complexity of texts. In this study, they measure the competence in understanding the first basic representation of a text, called the surface code (van Dijk and Kintsch, 1983). 3.1 Prerequisite Skills Based on the 10 RC skills in Sugawara et al. (2017), we identified 13 prerequisite skills, which are presented below. (We use ∗and † to indicate skills that have been modified/elaborated from the original definition or have been newly introduced in this study, respectively.) 1. Object tracking∗: jointly tracking or grasping of multiple objects, including sets or memberships (Clark, 1975). This skill is a version of the list/enumeration used in the original classification, renamed to emphasize its scope with respect to multiple objects. 2. Mathematical reasoning∗: we merged statistical and quantitative reasoning with mathematical reasoning. This skill is a renamed version of mathematical operations. 3. Coreference resolution∗: this skill has a small modification to include an anaphora (Dagan et al., 2013). It is similar to direct reference (Clark, 1975). 4. Logical reasoning∗: we identified this skill as the understanding of predicate logic, e.g., conditionals, quantifiers, negation, and transitivity. Note that this skill, together with mathematical reasoning, is intended to align with the offline skills described by Graesser et al. (1994). 5. Analogy∗: understanding of metaphors including metonymy and synecdoche (see LoBue and Yates (2011) for examples of synecdoche.) 6. Causal relation: understanding of causality that is represented by explicit expressions such as “why,” “because,” and “the reason for” (only if they exist). 7. Spatiotemporal relation: understanding of spatial and/or temporal relationships between multiple entities, events, and states. In addition, we propose the following four categories by refining the “commonsense reasoning” category proposed originally in Sugawara et al. (2017). 8. Ellipsis†: recognizing implicit/omitted information (argument, predicate, quantifier, time, or place). This skill is inspired by Dagan et al. (2013) and the discussion in Sugawara et al. (2017). 9. Bridging†: inference supported by grammatical and lexical knowledge (e.g., synonymy, hypernymy, thematic role, part of events, idioms, and apposition). This skill is inspired by the concept of indirect reference in the literature (Clark, 1975). Note that we exclude direct reference because it is covered by coreference resolution (pronominalization) and elaboration (epithets). 10. 
Elaboration†: inference using known facts, general knowledge (e.g., kinship, exchange, typical event sequence, and naming), and implicit relations (e.g., noun compounds and possessives) (see Dagan et al. (2013) for details). Bridging and elaboration are distinguished by the knowledge used in inferences being grammatical/lexical or general/commonsense, respectively. 11. Meta-knowledge†: using knowledge that includes a reader, writer, or text genre (e.g., narratives and expository documents) from metaviewpoints (e.g., Who are the principal characters of the story? or What is the main subject of 809 this article?). Although this skill can be regarded as part of elaboration, we defined it as an independent skill because this knowledge is specific to RC. We were motivated by the discussion in Smith et al. (2015). Whereas the above 11 skills involve multiple items, the final pair of skills involve only a single sentence. 12. Schematic clause relation: understanding of complex sentences that have coordination or subordination, including relative clauses. 13. Punctuation∗: understanding of punctuation marks (e.g., parenthesis, dash, quotation, colon, or semicolon). This skill is a renamed version of special sentence structure. Concerning the original definition, we regarded “scheme” in figures of speech as ambiguous and excluded it. We defined ellipsis as a independent skill, and apposition was merged into bridging. Similarly, understanding of constructions was merged into the idioms in bridging. Note that we did not construct this classification to be dependent on particular RC systems in NLP. This was because our methodology is intended to be general and applicable to many kinds of architectures. For example, we did not consider the dichotomy between automatic and controlled inferences because the usage of knowledge is not necessarily the same for all RC systems. 3.2 Readability Metrics In this study, we evaluated the readability of texts based on metrics in NLP. Several studies have examined readability in various applications, such as second-language learning (Razon and Barnden, 2015) and text simplification (Aluisio et al., 2010), and from various aspects, such as development measures in second-language acquisition (Vajjala and Meurers, 2012) and discourse relations (Pitler and Nenkova, 2008). Of these, we adopted the classification of linguistic features proposed by Vajjala and Meurers (2012). This was because they presented a comparison of a wide range of linguistic features focusing on second-language acquisition and their method can be applied to plain text.3 We list the readability metrics in Table 1, which were reported by Vajjala and Meurers (2012) as 3The classification in Pitler and Nenkova (2008) is more suited to measuring text quality. However, we could not use their results because we could not use discourse annotations. - Ave. no. of characters per word (NumChar) - Ave. no. of syllables per word (NumSyll) - Ave. sentence length in words (MLS) - Proportion of words in AWL (AWL) - Modifier variation (ModVar) - No. of coordinate phrases per sentence (CoOrd) - Coleman–Liau index (Coleman) - Dependent clause-to-clause ratio (DC/C) - Complex nominals per clause (CN/C) - Adverb variation (AdvVar) Table 1: Readability metrics. AWL refers to the Academic Word List.4 the top 10 features that affect human readability. 
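Most of the surface features above can be computed directly from tokenized text. The following is a minimal sketch for three of them (NumChar, NumSyll and MLS); the vowel-group syllable counter is a simplifying assumption, and a full feature extractor would additionally need a parser for the syntactic features and the AWL word list for AWL.

```python
import re

def count_syllables(word):
    """Crude vowel-group heuristic; a real extractor would use a proper
    syllabifier or a pronunciation dictionary."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def surface_features(sentences):
    """NumChar, NumSyll and MLS for a list of tokenized sentences
    (each sentence is a list of word tokens)."""
    words = [w for sent in sentences for w in sent]
    n = len(words)
    return {
        "NumChar": sum(len(w) for w in words) / n,
        "NumSyll": sum(count_syllables(w) for w in words) / n,
        "MLS": n / len(sentences),
    }
```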
To classify these metrics, we can identify three classes: lexical features (NumChar, NumSyll, AWL, AdvVar, and ModVar), syntactic features (MLS, CoOrd, DC/C, and CN/C), and traditional features (Coleman). We applied these metrics only to sentences that needed to be read in answering questions. However, because these metrics were proposed for human readability, they do not necessarily correlate with those used in RC systems. Therefore, in any system analysis, ideally we would have to consult a variety of features. 4 Annotation of Reading Comprehension Datasets We annotated six existing RC datasets with the prerequisite skills. We explain the annotation procedure in Section 4.1 and the annotated RC datasets in Section 4.2. 4.1 Annotation Procedure We prepared annotation guidelines according to Sugawara et al. (2017). The guidelines include the definitions and examples of the skills and annotation instructions. Four annotators were asked to simulate the process of answering questions in RC datasets, using only the prerequisite skills, and to annotate questions with one or more skills required in answering. For each task in the datasets, the annotators saw simultaneously the context, question, and its answer. When a dataset contained multiple-choice questions, we showed all candidate answers and labeled the correct one with an asterisk. The an4http://en.wikipedia.org/wiki/ Academic_Word_List 810 RC dataset Genre Query sourcing Task formulation QA4MRE (2013) Technical documents Handcrafted by experts Multiple choice MCTest (2013) Narratives by crowd workers Crowdsourced Multiple choice SQuAD (2016) Wikipedia articles Crowdsourced Text span selection Who-did-What (2016) News articles Automated Cloze MS MARCO (2016) Segmented web pages Search engine queries Description NewsQA (2016) News articles Crowdsourced Text span selection Table 2: Analyzed RC datasets, their genres, query sourcing methods, and task formulations. notators then selected the sentences that needed to be read to be able to answer the question and decided on the set of prerequisite skills required. The annotators were allowed to select nonsense for unsolvable or unanswerable questions (e.g., the “coreference error” and “ambiguous” questions described in Chen et al. (2016)) to distinguish them from any solvable questions that required no skills. 4.2 Datasets As summarized in Table 2, the annotation was performed on six existing RC datasets: QA4MRE (Sutcliffe et al., 2013), MCTest (Richardson et al., 2013), SQuAD (Rajpurkar et al., 2016), Who-didWhat (Onishi et al., 2016), MS MARCO (Nguyen et al., 2016), and NewsQA (Trischler et al., 2016). We selected these datasets to enable coverage of a variety of genres, query sourcing methods, and task formulations. From each dataset, we randomly selected 100 questions. This number was considered sufficient for the degree of analysis of RC datasets performed by Chen et al. (2016). The questions were sampled from the gold-standard dataset of QA4MRE and the development sets of the other RC datasets. (We explain the method of choosing questions for the annotation in Appendix A.) For a variety of reasons, there were other datasets we did not annotate in this study. CNN/Daily Mail (Hermann et al., 2015) is anonymized and contains errors, according to Chen et al. (2016), making it unsuitable for annotation. We considered CBTest (Hill et al., 2016) to be devised as language-modeling tasks rather than RC-related tasks. LAMBADA (Paperno et al., Skills QA4MRE MCTest SQuAD WDW MARCO NewsQA 1. 
Tracking 11.0 6.0 3.0 8.0 6.0 2.0 2. Math. 4.0 4.0 0.0 3.0 0.0 1.0 3. Coref. resol. 32.0 49.0 13.0 19.0 15.0 24.0 4. Logical rsng. 15.0 2.0 0.0 8.0 1.0 2.0 5. Analogy 7.0 0.0 0.0 7.0 0.0 3.0 6. Causal rel. 1.0 6.0 0.0 2.0 0.0 4.0 7. Sptemp rel. 26.0 9.0 2.0 2.0 0.0 3.0 8. Ellipsis 13.0 4.0 3.0 16.0 2.0 15.0 9. Bridging 69.0 26.0 42.0 59.0 36.0 50.0 10. Elaboration 60.0 8.0 13.0 57.0 18.0 36.0 11. Meta 1.0 1.0 0.0 0.0 0.0 0.0 12. Clause rel. 52.0 40.0 28.0 42.0 27.0 34.0 13. Punctuation 34.0 1.0 24.0 20.0 14.0 25.0 Nonsense 10.0 1.0 3.0 27.0 14.0 1.0 Table 3: Frequencies (%) of prerequisite skills needed for the RC datasets. #Skills QA4MRE MCTest SQuAD WDW MARCO NewsQA 0 2.0 18.0 27.0 2.0 15.0 13.0 1 13.0 36.0 33.0 5.0 35.0 26.0 2 13.0 24.0 24.0 14.0 29.0 23.0 3 20.0 15.0 6.0 22.0 6.0 25.0 4 14.0 4.0 6.0 16.0 2.0 9.0 5 13.0 1.0 1.0 6.0 0.0 2.0 6 10.0 1.0 0.0 6.0 0.0 1.0 7 1.0 0.0 0.0 2.0 0.0 0.0 8 1.0 0.0 0.0 0.0 0.0 0.0 9 0.0 0.0 0.0 0.0 0.0 0.0 10 3.0 0.0 0.0 0.0 0.0 0.0 Ave. 3.25 1.56 1.28 2.43 1.19 1.99 Table 4: Frequencies (%) of the number of required prerequisite skills for the RC datasets. 2016) texts are formatted for machine reading, with all tokens in lower case, which would seem to disallow inferences based on proper nouns and render them unsuitable for human reading and annotation. 5 Results of the Dataset Analysis We now present the results of evaluating the RC datasets according to the two classes of metrics. In the annotation of prerequisite skills, the interannotator agreement was 90.1% for 62 randomly sampled questions. The evaluation was performed with respect to the following four aspects: (i) frequencies of prerequisite skills required for each RC dataset; (ii) number of prerequisite skills required per question; (iii) readability metrics for each RC dataset; and (iv) correlation between readability metrics and the number of required prerequisite skills. (i) Frequencies of prerequisite skills (see Table 3): QA4MRE had the highest scores for frequencies among the datasets. This seems to reflect 811 Metrics QA4MRE MCTest SQuAD WDW MARCO NewsQA NumChar 5.026 3.892 5.378 4.988 5.016 5.017 NumSyll 1.663 1.250 1.791 1.657 1.698 1.635 MLS 28.488 11.858 23.479 29.146 19.634 22.933 AWL 0.067 0.003 0.071 0.033 0.047 0.038 ModVar 0.174 0.114 0.188 0.150 0.186 0.138 CoOrd 0.922 0.309 0.722 0.467 0.651 0.507 Coleman 12.553 4.333 14.095 12.398 11.836 12.138 DC/C 0.343 0.223 0.243 0.254 0.220 0.264 CN/C 1.948 0.614 1.887 2.310 1.935 1.702 AdvVar 0.038 0.035 0.032 0.019 0.022 0.019 F–K 14.953 3.607 14.678 15.304 12.065 12.624 Words 1545.7 174.1 130.4 253.7 70.7 638.4 Table 5: Results of readability metrics for the RC datasets. F–K is the Flesch–Kincaid grade level (Kincaid et al., 1975). Words is the average word count of the context for each question. the fact that QA4MRE involves technical documents that contain a wide range of knowledge, multiple clauses, and punctuation. Moreover, the questions are devised by experts. MCTest achieved a high score for several skills (best for causal relation and meta-knowledge and second-best for coreference resolution and spatiotemporal relation), but a low score for punctuation. These scores seem to be because the MCTest dataset consists of narratives. Another dataset that achieved notable scores is Who-did-What. This dataset achieved the highest score for ellipsis. This is because the questions of Who-did-What are automatically generated from articles not used as context. 
This methodology tends to avoid textual overlap between a question and its context, thereby requiring frequently the skills of ellipsis, bridging, and elaboration. With regard to nonsense, MS MARCO and Who-did-What received relatively high scores. This appears to have been caused by the automated sourcing methods, which may generate a separation between the contents of the context and question (i.e., web segments and a search query in MS MARCO, and a context article and question article in Who-did-What). In contrast, NewsQA had no nonsense questions. Although this result was affected by our filtering (described in Appendix A), it is important to note that the NewsQA dataset includes annotations of meta-information whether or not a question makes sense (is question bad). (ii) Number of required prerequisite skills (see Table 4): QA4MRE had the highest score. On average, each question required 3.25 skills. There were few questions in QA4MRE that re1.0 1.5 2.0 2.5 3.0 3.5 Average number of required skills 2 4 6 8 10 12 14 16 F-K grade level QA4MRE MCTest SQuAD WDW MARCO NewsQA Figure 2: Flesch–Kincaid grade levels and average number of required prerequisite skills for the RC datasets. 0 2 4 6 8 10 Number of required skills 10 0 10 20 30 40 50 F-K grade level QA4MRE MCTest SQuAD Figure 3: Flesch–Kincaid grade levels and number of required prerequisite skills for all questions in the selected RC datasets. quired zero or one skill, whereas such questions were contained more frequently in other datasets. Table 4 also indicates that more than 90% of the MS MARCO questions required fewer than three skills according to the annotation. (iii) Readability metrics for each dataset (see Table 5): SQuAD and QA4MRE achieved the highest scores for most metrics. This reflects the fact that Wikipedia articles and technical documents usually require a high-grade level of understanding. In contrast, MCTest had the lowest scores, with its dataset consisting of narratives for children. (iv) Correlation between numbers of required prerequisite skills and readability metrics (see Figures 2 and 3, and Table 6): our main interest was in the correlation between prerequisite skills and readability. To investigate this, we examined the relation between the number of required prerequisite skills and readability metrics. 812 Metrics r p Metrics r p NumChar 0.068 0.095 CoOrd 0.166 0.000 NumSyll 0.057 0.161 Coleman 0.140 0.001 MLS 0.416 0.000 DC/C 0.188 0.000 AWL 0.114 0.005 CN/C 0.131 0.001 ModVar 0.025 0.545 AdvVar 0.026 0.515 F–K 0.343 0.000 Words 0.355 0.000 Table 6: Pearson’s correlation coefficients (r) with the p-values (p) for the readability metrics and number of required prerequisite skills for all questions in the RC datasets. We used the Flesch–Kincaid grade level (Kincaid et al., 1975) as an intuitive reference for readability. This value represents the typical number of years of education required to understand texts based on counts of syllables, words, and sentences. Figures 2 and 3 show the relation between two values for each dataset and for each question, respectively. Figure 2 shows the trends of the datasets. QA4MRE was relatively difficult both to read and to answer, whereas SQuAD was difficult to read but easy to answer. For further investigation, we selected three datasets (QA4MRE, MCTest, and SQuAD) and plotted all of their questions in Figure 3. Three separate domains can be seen. 
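For reference, the Flesch–Kincaid grade level is a fixed linear function of word, sentence and syllable counts, and the per-question values plotted in Figure 3 can be correlated with the number of required skills in a few lines. In the sketch below, the constants are the published coefficients of Kincaid et al. (1975), scipy is an assumed dependency, and the input counts are placeholders rather than dataset results.

```python
from scipy.stats import pearsonr

def flesch_kincaid_grade(num_words, num_sentences, num_syllables):
    """Flesch-Kincaid grade level (Kincaid et al., 1975)."""
    return (0.39 * num_words / num_sentences
            + 11.8 * num_syllables / num_words
            - 15.59)

# Placeholder per-question counts: (words, sentences, syllables, #skills).
questions = [(25, 1, 40, 4), (12, 1, 15, 1), (30, 2, 45, 3), (9, 1, 11, 2)]
grades = [flesch_kincaid_grade(w, s, syl) for w, s, syl, _ in questions]
skills = [k for *_, k in questions]
r, p = pearsonr(grades, skills)  # Pearson's r and its p-value
```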
Table 6 presents Pearson’s correlation coefficients between the number of required prerequisite skills and each readability metric for all questions in the RC datasets. Although there are weak correlations, from 0.025 to 0.416, these results demonstrate that there is not necessarily a strong correlation between the two values. This leads to the following two insights. First, the readability of RC datasets does not directly affect the difficulty of their questions. That is, RC datasets that are difficult to read are not necessarily difficult to answer. Second, it is possible to create difficult questions from the context that are easy to read. MCTest is a good example. The context texts in the MCTest dataset are easy to read, but the difficulty of its questions compares to that for the other datasets. To summarize our results in terms of each RC dataset, we can make the following observations. - QA4MRE is difficult both to read and to answer among the datasets analyzed. This would seem to follow its questions being devised by experts. - MCTest is a good example of an RC dataset that is easy to read but difficult to answer. We presume that this is because the corpus genre (i.e., narrative) reflects the trend in required skills for the questions. - SQuAD is difficult to read, along with QA4MRE, but relatively easy to answer compared with the other datasets. - Who-did-What performs well in terms of its query-sourcing method. Although its questions are created automatically, they are sophisticated in terms of knowledge reasoning. However, the automated sourcing method must be improved to exclude nonsense questions. - MS MARCO is a relatively easy dataset in terms of prerequisite skills. However, one problem is that the dataset contained nonsense questions. - NewsQA is advantageous in that it provides meta-information on the reliability of the questions. Such information enabled us to avoid using nonsense questions, as for the training of machine learning models. 6 Discussion In this section, we discuss several issues regarding the construction of RC datasets and the development of RC systems using our methodology. How to utilize the two classes of metrics for system development: one possible scenario for developing an RC system is that it is first built to solve an easy-to-read and easy-to-answer dataset. The next step would be to improve the system so that it can solve an easy-to-read but difficult-toanswer dataset (or its converse). Finally, only after it can solve such datasets should the system be applied to difficult-to-read and difficult-to-answer datasets. The metrics of this study may be useful in preparing appropriate datasets for each step by measuring their properties. The datasets can then be ordered according to the grades of the metrics and applied to each step of the development, as in curriculum learning (Bengio et al., 2009) and transfer learning (Pan and Yang, 2010). Corpus genre: attention should be paid to the genre of the corpus used to construct a dataset. Expository documents such as news articles tend to require factorial understanding. Most existing RC datasets use such texts because of their availability. On the other hand, narrative texts may have a 813 closer correspondence to our everyday experience, involving the emotions and intentions of characters (Graesser et al., 1994). To build agents that work in the real world, RC datasets may have to be constructed from narratives. 
Question type: in contrast to factorial understanding, comprehensive understanding of natural language texts needs a better grasp of global coherence (e.g., the main point or moral of the text, the goal of a story, or the intention of characters) from the broad context (Graesser et al., 1994). Most questions in current use require only local coherence (e.g., referential relations and thematic roles) within a narrow context. An example of a question based on global coherence would be to give a summary of the text, as used in Hermann et al. (2015). Such questions could be generated automatically by techniques of abstractive text summarization (Rush et al., 2015; Ganesan et al., 2010).

Annotation issues: we found questions for which there were disagreements regarding nonsense decisions. For example, some questions can be solved by external knowledge without even seeing their context. Therefore, we should clarify what constitutes a "solvable" or "reasonable" question for RC. In addition, annotators reported that the prerequisite skills did not readily handle questions whose answer was "none of the above" in QA4MRE. We considered these "no answer" questions difficult, in that systems have to decide not to select any of the candidate answers, and our methodology failed to specify them.

Competence in selecting necessary sentences: as mentioned in Section 1, our methodology cannot evaluate competence in selecting the sentences that need to be read to answer questions. In a brief analysis, we further investigated the sentences in the context of the datasets that were selected in the annotation. The analysis was performed in two ways: for each question, we counted the number of required sentences and their distance apart. The distance was calculated as follows: if a question required only one sentence to be read, its distance was zero; if a question required two adjacent sentences to be read, its distance was one; and if a question required more than two sentences to be read, its distance was the sum of the distances of any two sentences.

Sentence   QA4MRE  MCTest  SQuAD  WDW    MARCO  NewsQA
Number     1.120   1.180   1.040  1.110  1.080  1.170
Distance   1.880   0.930   0.090  0.730  0.280  0.540

Table 7: Average number and distance apart of sentences that need to be read to answer a question in the RC datasets.

The first row of Table 7 gives the average number of required sentences per question for each RC dataset. Although the scores are reasonably close, MCTest required multiple sentences to be read most frequently. The second row gives the average distance apart of the required sentences. QA4MRE required the longest distance because readers had to look for clues in the long context texts. In contrast, SQuAD and MS MARCO had lower scores; most of their questions seemed to be answerable by reading only a single sentence. Of course, the scores for distances will depend on the length of the context texts.

Metrics of RC for machines: our underlying assumption in this study is that, in the development of interactive agents such as dialogue systems, it is important to make the systems behave in a human-like way. This has also become a distinguishing feature of recent RC task design, and one that has never been explicitly considered in conventional NLP tasks. To date, the difference between human and machine RC has not attracted much research attention. We believe that our human-based evaluation metrics and analysis will help researchers to develop a method for the step-by-step construction of better RC datasets and improved RC systems.
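Returning to the sentence-selection analysis summarized in Table 7, the following is a minimal sketch of how the two statistics could be computed from hypothetical annotations (lists of required sentence indices). It assumes the pairwise-sum reading of the distance definition given above, which is one possible interpretation rather than the study's exact procedure.

```python
from itertools import combinations

def sentence_stats(required_indices):
    """Number of required sentences and their distance apart for one question.

    `required_indices` lists the (0-based) positions of the context sentences an
    annotator marked as necessary to answer the question.
    """
    idx = sorted(set(required_indices))
    if len(idx) == 1:
        distance = 0                  # a single required sentence has distance zero
    elif len(idx) == 2:
        distance = idx[1] - idx[0]    # two adjacent sentences give distance one
    else:
        # More than two sentences: sum the distances over every pair of sentences
        # (one reading of "the sum of the distances of any two sentences").
        distance = sum(b - a for a, b in combinations(idx, 2))
    return len(idx), distance

# Hypothetical annotations for three questions.
annotations = [[3], [5, 6], [1, 2, 4]]
counts, distances = zip(*(sentence_stats(a) for a in annotations))
print("avg number  :", sum(counts) / len(counts))
print("avg distance:", sum(distances) / len(distances))
```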
7 Conclusion In this study, we adopted evaluation metrics that comprise two classes, namely refined prerequisite skills and readability, for analyzing the quality of RC datasets. We applied these classes to six existing datasets and highlighted their characteristics according to each metric. Our dataset analysis suggests that the readability of RC datasets does not directly affect the difficulty of the questions and that it is possible to create an RC dataset that is easy to read but difficult to answer. In future work, we plan to use the analysis from the present study in constructing a system that can be applied to multiple datasets. Acknowledgments We would like to thank anonymous reviewers for their insightful comments. This work was supported by JSPS KAKENHI Grant Numbers 15H02754 and 16K16120. 814 References Sandra Aluisio, Lucia Specia, Caroline Gasperin, and Carolina Scarton. 2010. Readability assessment for text simplification. In Proceedings of the NAACL HLT 2010 Fifth Workshop on Innovative Use of NLP for Building Educational Applications. Association for Computational Linguistics, pages 1–9. https://aclweb.org/anthology/W10-1001. Yoshua Bengio, J´erˆome Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning. ACM, pages 41–48. https://doi.org/10.1145/1553374.1553380. Luisa Bentivogli, Elena Cabrio, Ido Dagan, Danilo Giampiccolo, Medea Lo Leggio, and Bernardo Magnini. 2010. Building textual entailment specialized data sets: a methodology for isolating linguistic phenomena relevant to inference. In Proceedings of the 7th International Conference on Language Resources and Evaluation. Citeseer. Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the CNN/Daily Mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 2358–2367. https://aclweb.org/anthology/P16-1223. Herbert H Clark. 1975. Bridging. In Proceedings of the 1975 workshop on Theoretical issues in natural language processing. Association for Computational Linguistics, pages 169–174. https://doi.org/10.3115/980190.980237. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In Machine learning challenges. evaluating predictive uncertainty, visual object classification, and recognising tectual entailment, Springer, pages 177–190. https://doi.org/10.1007/11736790 9. Ido Dagan, Dan Roth, Mark Sammons, and Fabio Massimo Zanzotto. 2013. Recognizing textual entailment: Models and applications. Synthesis Lectures on Human Language Technologies 6(4):1–220. Kavita Ganesan, ChengXiang Zhai, and Jiawei Han. 2010. Opinosis: a graph-based approach to abstractive summarization of highly redundant opinions. In Proceedings of the 23rd international conference on computational linguistics. Association for Computational Linguistics, pages 340–348. Arthur C Graesser, Murray Singer, and Tom Trabasso. 1994. Constructing inferences during narrative text comprehension. Psychological review 101(3):371. https://doi.org/10.1037/0033-295X.101.3.371. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. pages 1693– 1701. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. 
The goldilocks principle: Reading children’s books with explicit memory representations. In Proceedings of the International Conference on Learning Representations. Lynette Hirschman, Marc Light, Eric Breck, and John D Burger. 1999. Deep read: A reading comprehension system. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 325–332. https://doi.org/10.3115/1034678.1034731. J Peter Kincaid, Robert P Fishburne Jr, Richard L Rogers, and Brad S Chissom. 1975. Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel. Chief of Naval Technical Training, Research Branch Report 8-75. Walter Kintsch. 1988. The role of knowledge in discourse comprehension: A constructionintegration model. Psychological review 95(2):163. https://doi.org/10.1037/0033-295X.95.2.163. Walter Kintsch. 1993. Information accretion and reduction in text processing: Inferences. Discourse processes 16(1-2):193–202. Peter LoBue and Alexander Yates. 2011. Types of common-sense knowledge needed for recognizing textual entailment. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 329– 334. https://aclweb.org/anthology/P11-2057. Danielle S McNamara and Joe Magliano. 2009. Toward a comprehensive model of comprehension. Psychology of learning and motivation 51:297–384. https://doi.org/10.1016/S0079-7421(09)51009-2. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. CoRR abs/1611.09268. Peter Norvig. 1989. Marker passing as a weak method for text inferencing. Cognitive Science 13(4):569– 620. https://doi.org/10.1207/s15516709cog1304 4. Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. 2016. Who did What: A large-scale person-centered cloze dataset. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 2230–2235. https://aclweb.org/anthology/D16-1241. 815 Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. IEEE Transactions on knowledge and data engineering 22(10):1345–1359. https://doi.org/10.1109/TKDE.2009.191. Denis Paperno, Germ´an Kruszewski, Angeliki Lazaridou, Ngoc Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernandez. 2016. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 1525– 1534. https://aclweb.org/anthology/P16-1144. Emily Pitler and Ani Nenkova. 2008. Revisiting readability: A unified framework for predicting text quality. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 186–195. https://aclweb.org/anthology/D081020. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 2383–2392. https://aclweb.org/anthology/D16-1264. Abigail Razon and John Barnden. 2015. 
A new approach to automated text readability classification based on concept indexing with integrated part-of-speech n-gram features. In Proceedings of the International Conference Recent Advances in Natural Language Processing. pages 521–528. https://aclweb.org/anthology/R15-1068. Matthew Richardson, J.C. Christopher Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. pages 193–203. http://aclweb.org/anthology/D131020. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 379–389. https://aclweb.org/anthology/D15-1044. Mark Sammons, V.G.Vinod Vydiswaran, and Dan Roth. 2010. “ask not what textual entailment can do for you...”. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 1199–1208. https://aclweb.org/anthology/P10-1122. Ellery Smith, Nicola Greco, Matko Bosnjak, and Andreas Vlachos. 2015. A strong lexical matching method for the machine comprehension test. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1693–1698. Saku Sugawara and Akiko Aizawa. 2016. An analysis of prerequisite skills for reading comprehension. In Proceedings of the Workshop on Uphill Battles in Language Processing: Scaling Early Achievements to Robust Methods. Association for Computational Linguistics, pages 1–5. https://aclweb.org/anthology/W16-6001. Saku Sugawara, Hikaru Yokono, and Akiko Aizawa. 2017. Prerequisite skills for reading comprehension: Multi-perspective analysis of mctest datasets and systems. In AAAI Conference on Artificial Intelligence. pages 3089–3096. Richard Sutcliffe, Anselmo Pe˜nas, Eduard Hovy, Pamela Forner, ´Alvaro Rodrigo, Corina Forascu, Yassine Benajiba, and Petya Osenova. 2013. Overview of QA4MRE main task at CLEF 2013. Working Notes, CLEF . Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016. NewsQA: A machine comprehension dataset. CoRR abs/1611.09830. Sowmya Vajjala and Detmar Meurers. 2012. On improving the accuracy of readability classification using insights from second language acquisition. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP. Association for Computational Linguistics, pages 163–173. https://aclweb.org/anthology/W12-2019. Teun Adrianus van Dijk and Walter Kintsch. 1983. Strategies of discourse comprehension. Citeseer. Ellen M Voorhees et al. 1999. The TREC-8 question answering track report. In TREC. volume 99, pages 77–82. Mengqiu Wang, Noah A. Smith, and Teruko Mitamura. 2007. What is the Jeopardy model? a quasi-synchronous grammar for QA. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, pages 22–32. https://aclweb.org/anthology/D/D07/D07-1003. Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. WikiQA: A challenge dataset for opendomain question answering. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 2013–2018. 
https://aclweb.org/anthology/D15-1237.

A Sampling Methods for Questions

In this appendix, we explain the method of choosing questions for annotation.

QA4MRE (Sutcliffe et al., 2013): the gold-standard dataset comprised four different topics and four documents for each topic. We randomly selected 100 main and auxiliary questions so that at least one question for each document was included.

MCTest (Richardson et al., 2013): this dataset comprised two sets: MC160 and MC500. Their development sets had 80 tasks in total, with each containing context texts and four questions. We randomly chose 25 tasks (100 questions) from the development sets.

SQuAD (Rajpurkar et al., 2016): this dataset included Wikipedia articles involving various topics, with the articles being divided into paragraphs. We randomly chose 100 paragraphs from 15 articles and used only one question from each paragraph for the annotation.

Who-did-What (WDW) (Onishi et al., 2016): this dataset was constructed from the English Gigaword newswire corpus (v5). Its questions were automatically created using a different article from that used for the context. In addition, questions that could be solved by a simple baseline method were excluded from the dataset.

MS MARCO (MARCO) (Nguyen et al., 2016): each task in this dataset comprised several segments, one question, and its answer. We randomly chose 100 tasks (100 questions) and used only segments whose attribute was is_selected = 1 as context.

NewsQA (Trischler et al., 2016): we randomly chose questions that satisfied the following conditions: is_answer_absent = 0, is_question_bad = 0, and validated_answers does not include bad_question or none.
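A small sketch of the kind of filtering described for NewsQA above, applied to hypothetical records. The field names follow the flags listed in this appendix, but the record format itself is an assumption made for illustration.

```python
def keep_newsqa_question(record):
    """Return True if a NewsQA-style record passes the filtering conditions above."""
    bad_answers = {"bad_question", "none"}
    return (
        record.get("is_answer_absent") == 0
        and record.get("is_question_bad") == 0
        and not bad_answers & set(record.get("validated_answers", []))
    )

# Hypothetical records illustrating the three conditions.
records = [
    {"is_answer_absent": 0, "is_question_bad": 0, "validated_answers": ["span_a"]},
    {"is_answer_absent": 0, "is_question_bad": 1, "validated_answers": ["span_b"]},
    {"is_answer_absent": 0, "is_question_bad": 0, "validated_answers": ["bad_question"]},
]
print([keep_newsqa_question(r) for r in records])  # [True, False, False]
```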
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 818–827 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1076 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 818–827 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1076 A Minimal Span-Based Neural Constituency Parser Mitchell Stern Jacob Andreas Dan Klein Computer Science Division University of California, Berkeley {mitchell,jda,klein}@cs.berkeley.edu Abstract In this work, we present a minimal neural model for constituency parsing based on independent scoring of labels and spans. We show that this model is not only compatible with classical dynamic programming techniques, but also admits a novel greedy top-down inference algorithm based on recursive partitioning of the input. We demonstrate empirically that both prediction schemes are competitive with recent work, and when combined with basic extensions to the scoring model are capable of achieving state-of-the-art single-model performance on the Penn Treebank (91.79 F1) and strong performance on the French Treebank (82.23 F1). 1 Introduction This paper presents a minimal but surprisingly effective span-based neural model for constituency parsing. Recent years have seen a great deal of interest in parsing architectures that make use of recurrent neural network (RNN) representations of input sentences (Vinyals et al., 2015). Despite evidence that linear RNN decoders are implicitly able to respect some nontrivial well-formedness constraints on structured outputs (Graves, 2013), researchers have consistently found that the best performance is achieved by systems that explicitly require the decoder to generate well-formed tree structures (Chen and Manning, 2014). There are two general approaches to ensuring this structural consistency. The most common is to encode the output as a sequence of operations within a transition system which constructs trees incrementally. This transforms the parsing problem back into a sequence-to-sequence problem, while making it easy to force the decoder to take only actions guaranteed to produce well-formed outputs. However, transition-based models do not admit fast dynamic programs and require careful feature engineering to support exact search-based inference (Thang et al., 2015). Moreover, models with recurrent state require complex training procedures to benefit from anything other than greedy decoding (Wiseman and Rush, 2016). An alternative line of work focuses on chart parsers, which use log-linear or neural scoring potentials to parameterize a tree-structured dynamic program for maximization or marginalization (Finkel et al., 2008; Durrett and Klein, 2015). These models enjoy a number of appealing formal properties, including support for exact inference and structured loss functions. However, previous chart-based approaches have required considerable scaffolding beyond a simple well-formedness potential, e.g. pre-specification of a complete context-free grammar for generating output structures and initial pruning of the output space with a weaker model (Hall et al., 2014). Additionally, we are unaware of any recent chartbased models that achieve results competitive with the best transition-based models. 
In this work, we present an extremely simple chart-based neural parser based on independent scoring of labels and spans, and show how this model can be adapted to support a greedy topdown decoding procedure. Our goal is to preserve the basic algorithmic properties of span-oriented (rather than transition-oriented) parse representations, while exploring the extent to which neural representational machinery can replace the additional structure required by existing chart parsers. On the Penn Treebank, our approach outperforms a number of recent models for chart-based and transition-based parsing—including the state-ofthe-art models of Cross and Huang (2016) and Liu and Zhang (2016)—achieving an F1 score of 91.79. We additionally obtain a strong F1 score of 82.23 on the French Treebank. 818 2 Model A constituency tree can be regarded as a collection of labeled spans over a sentence. Taking this view as a guiding principle, we propose a model with two components, one which assigns scores to span labels and one which assigns scores directly to span existence. The former is used to determine the labeling of the output, and the latter provides its structure. At the core of both of these components is the issue of span representation. Given that a span’s correct label and its quality as a constituent depend heavily on the context in which it appears, we naturally turn to recurrent neural networks as a starting point, since they have previously been shown to capture contextual information suitable for use in a variety of natural language applications (Bahdanau et al., 2014; Wang et al., 2015) In particular, we run a bidirectional LSTM over the input to obtain context-sensitive forward and backward encodings for each position i, denoted by fi and bi, respectively. Our representation of the span (i, j) is then the concatenatation the vector differences fj −fi and bi −bj. This corresponds to a bidirectional version of the LSTMMinus features first proposed by Wang and Chang (2016). On top of this base, our label and span scoring functions are implemented as one-layer feedforward networks, taking as input the concatenated span difference and producing as output either a vector of label scores or a single span score. More formally, letting sij denote the vector representation of span (i, j), we define slabels(i, j) = Vℓg(Wℓsij + bℓ), sspan(i, j) = v⊤ s g(Wssij + bs), where g denotes an elementwise nonlinearity. For notational convenience, we also let the score of an individual label ℓbe denoted by slabel(i, j, ℓ) = [slabels(i, j)]ℓ, where the right-hand side is the corresponding element of the label score vector. One potential issue is the existence of unary chains, corresponding to nested labeled spans with the same endpoints. We take the common approach of treating these as additional atomic labels alongside all elementary nonterminals. To accommodate n-ary trees, our inventory additionally includes a special empty label ∅used for spans that are not themselves full constituents but arise during the course of implicit binarization. Our model shares several features in common with that of Cross and Huang (2016). In particular, our representation of spans and the form of our label scoring function were directly inspired by their work, as were our handling of unary chains and our use of an empty label. However, our approach differs in its treatment of structural decisions, and consequently, the inference algorithms we describe below diverge significantly from their transition-based framework. 
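To make the span representation and the two scoring heads concrete, here is a minimal numerical sketch. The randomly initialized matrices stand in for trained parameters, and the forward/backward vectors stand in for the outputs of the bidirectional LSTM; it illustrates the factorization only and is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, hidden, n_labels = 6, 8, 16, 5   # sentence length, LSTM size, FF size, label count

# Stand-ins for context-sensitive BiLSTM encodings f[i], b[i] at each fencepost i.
f = rng.normal(size=(n + 1, d))        # forward encodings
b = rng.normal(size=(n + 1, d))        # backward encodings

def span_repr(i, j):
    """Bidirectional LSTM-Minus span representation: [f_j - f_i ; b_i - b_j]."""
    return np.concatenate([f[j] - f[i], b[i] - b[j]])

# One-layer feedforward scorers with randomly initialized (untrained) parameters.
W_l = rng.normal(size=(hidden, 2 * d)); b_l = np.zeros(hidden)
V_l = rng.normal(size=(n_labels, hidden))
W_s = rng.normal(size=(hidden, 2 * d)); b_s = np.zeros(hidden)
v_s = rng.normal(size=hidden)

relu = lambda x: np.maximum(x, 0.0)

def s_labels(i, j):
    """Vector of label scores for span (i, j)."""
    return V_l @ relu(W_l @ span_repr(i, j) + b_l)

def s_span(i, j):
    """Single structural score for span (i, j)."""
    return float(v_s @ relu(W_s @ span_repr(i, j) + b_s))

print(s_labels(0, 3).shape, s_span(0, 3))
```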
3 Chart Parsing

Our basic model is compatible with traditional chart-based dynamic programming. Representing a constituency tree $T$ by its labeled spans, $T := \{(\ell_t, (i_t, j_t)) : t = 1, \dots, |T|\}$, we define the score of a tree to be the sum of its constituent label and span scores,
$$s_{\text{tree}}(T) = \sum_{(\ell,(i,j)) \in T} \left[ s_{\text{label}}(i, j, \ell) + s_{\text{span}}(i, j) \right].$$
To find the tree with the highest score for a given sentence, we use a modified CKY recursion. As with classical chart parsing, the running time of our procedure is $O(n^3)$ for a sentence of length $n$.

3.1 Dynamic Program for Inference

The base case is a span $(i, i+1)$ consisting of a single word. Since every valid tree must include all singleton spans, possibly with an empty label, we need not consider the span score in this case and perform only a single maximization over the choice of label:
$$s_{\text{best}}(i, i+1) = \max_\ell \left[ s_{\text{label}}(i, i+1, \ell) \right].$$
For a general span $(i, j)$, we define the score of the split $(i, k, j)$ as the sum of its subspan scores,
$$s_{\text{split}}(i, k, j) = s_{\text{span}}(i, k) + s_{\text{span}}(k, j). \quad (1)$$
For convenience, we also define an augmented split score incorporating the scores of the corresponding subtrees,
$$\tilde{s}_{\text{split}}(i, k, j) = s_{\text{split}}(i, k, j) + s_{\text{best}}(i, k) + s_{\text{best}}(k, j).$$
Using these quantities, we can then write the general joint label and split decision as
$$s_{\text{best}}(i, j) = \max_{\ell, k} \left[ s_{\text{label}}(i, j, \ell) + \tilde{s}_{\text{split}}(i, k, j) \right]. \quad (2)$$
Because our model assigns independent scores to labels and spans, this maximization decomposes into two disjoint subproblems, greatly reducing the size of the state space:
$$s_{\text{best}}(i, j) = \max_\ell \left[ s_{\text{label}}(i, j, \ell) \right] + \max_k \left[ \tilde{s}_{\text{split}}(i, k, j) \right].$$
We also note that the span scores $s_{\text{span}}(i, j)$ for each span $(i, j)$ in the sentence can be computed once at the beginning of the procedure and shared across different subproblems with common left or right endpoints, allowing for a quadratic rather than cubic number of span score computations.

3.2 Margin Training

Training the model under this inference scheme is accomplished using a margin-based approach. When presented with an example sentence and its corresponding parse tree $T^*$, we compute the best prediction under the current model using the above dynamic program,
$$\hat{T} = \operatorname*{argmax}_T \left[ s_{\text{tree}}(T) \right].$$
If $\hat{T} = T^*$, then our prediction was correct and no changes need to be made. Otherwise, we incur a hinge penalty of the form
$$\max\left(0,\; 1 - s_{\text{tree}}(T^*) + s_{\text{tree}}(\hat{T})\right)$$
to encourage the model to keep a margin of at least 1 between the gold tree and the best alternative. The loss to be minimized is then the sum of penalties across all training examples.

Prior work has found that it can be beneficial in a variety of applications to incorporate a structured loss function into this margin objective, replacing the hinge penalty above with one of the form
$$\max\left(0,\; \Delta(\hat{T}, T^*) - s_{\text{tree}}(T^*) + s_{\text{tree}}(\hat{T})\right)$$
for a loss function $\Delta$ that measures the similarity between the prediction $\hat{T}$ and the reference $T^*$. Here we take $\Delta$ to be a Hamming loss on labeled spans. To incorporate this loss into the training objective, we modify the dynamic program of Section 3.1 to support loss-augmented decoding (Taskar et al., 2005). Since the label decisions are isolated from the structural decisions, it suffices to replace every occurrence of the label scoring function $s_{\text{label}}(i, j, \ell)$ by $s_{\text{label}}(i, j, \ell) + \mathbf{1}(\ell \neq \ell^*_{ij})$, where $\ell^*_{ij}$ is the label of span $(i, j)$ in the gold tree $T^*$. This has the effect of requiring larger margins between the gold tree and predictions that contain more mistakes, offering a greater degree of robustness and better generalization.
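The decomposed recursion of Section 3.1 can be written down directly as a memoized dynamic program. The sketch below takes the scoring functions as callables and returns the best score together with backpointers; it is a schematic rendering of the procedure under those assumptions, not the released implementation, and the toy scorers in the usage example are arbitrary.

```python
import functools

def chart_parse(n, s_label, s_span, labels):
    """CKY-style search for the best labeled binarized tree over fenceposts 0..n.

    s_label(i, j, l) and s_span(i, j) are scoring callables; `labels` is the label
    inventory (including the empty label). Returns (score, back), where back maps a
    span to its chosen (label, split), with split=None for singleton spans.
    """
    back = {}

    @functools.lru_cache(maxsize=None)
    def s_best(i, j):
        best_label = max(labels, key=lambda l: s_label(i, j, l))
        label_score = s_label(i, j, best_label)
        if j == i + 1:                      # base case: single word, no split decision
            back[(i, j)] = (best_label, None)
            return label_score
        # Independent structural decision: best split under the augmented split score.
        best_k = max(range(i + 1, j),
                     key=lambda k: s_span(i, k) + s_span(k, j) + s_best(i, k) + s_best(k, j))
        split_score = (s_span(i, best_k) + s_span(best_k, j)
                       + s_best(i, best_k) + s_best(best_k, j))
        back[(i, j)] = (best_label, best_k)
        return label_score + split_score

    return s_best(0, n), back

# Toy usage with arbitrary stand-in scorers.
score, back = chart_parse(
    n=4,
    s_label=lambda i, j, l: 1.0 if (l == "NP" and j - i == 1) else 0.0,
    s_span=lambda i, j: -abs((j - i) - 2) * 0.1,
    labels=["<empty>", "NP", "VP", "S"],
)
print(round(score, 2), back[(0, 4)])
```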
4 Top-Down Parsing

While we have so far motivated our model from the perspective of classical chart parsing, it also allows for a novel inference algorithm in which trees are constructed greedily from the top down. At a high level, given a span, we independently assign it a label and pick a split point, then repeat this process for the left and right subspans; the recursion bottoms out with length-one spans that can no longer be split. Figure 1 gives an illustration of the process, which we describe in more detail below.

The base case is again a singleton span $(i, i+1)$, and follows the same form as the base case for the chart parser. In particular, we select the label $\hat{\ell}$ that satisfies
$$\hat{\ell} = \operatorname*{argmax}_\ell \left[ s_{\text{label}}(i, i+1, \ell) \right],$$
omitting span scores from consideration since singleton spans cannot be split.

To construct a tree over a general span $(i, j)$, we aim to solve the maximization problem
$$(\hat{\ell}, \hat{k}) = \operatorname*{argmax}_{\ell, k} \left[ s_{\text{label}}(i, j, \ell) + s_{\text{split}}(i, k, j) \right],$$
where $s_{\text{split}}(i, k, j)$ is defined as in Equation (1). The independence of our label and span scoring functions again yields the decomposed form
$$\hat{\ell} = \operatorname*{argmax}_\ell \left[ s_{\text{label}}(i, j, \ell) \right], \qquad \hat{k} = \operatorname*{argmax}_k \left[ s_{\text{split}}(i, k, j) \right], \quad (3)$$
leading to a significant reduction in the size of the state space.

Figure 1: An execution of our top-down parsing algorithm (a) and the resulting parse tree (b) for the sentence "She enjoys playing tennis." Part-of-speech tags, shown here together with the words, are predicted externally and are included as part of the input to our system. Beginning with the full sentence span (0, 5), the label S and the split point 1 are predicted, and recursive calls are made on the child spans (0, 1) and (1, 5). The left child span (0, 1) is assigned the label NP, and with no further splits to make, recursion terminates on this branch. The right child span (1, 5) is assigned the empty label ∅, indicating that it does not represent a constituent in the tree. A split point of 4 is selected, and further recursive calls are made on the grandchild spans (1, 4) and (4, 5). This process of labeling and splitting continues until every branch of recursion bottoms out in singleton spans, at which point the full parse tree can be returned. Note that the unary chain S–VP is produced in a single labeling step.

To generate a tree for the whole sentence, we call this procedure on the full sentence span $(0, n)$ and return the result. As there are $O(n)$ spans, each requiring one label evaluation and at most $n-1$ split point evaluations, the running time of the procedure is $O(n^2)$.

The algorithm outlined here bears a strong resemblance to the chart parsing dynamic program discussed in Section 3, but differs in one key aspect. When performing inference from the bottom up, we have already computed the scores of all of the subtrees below the current span, and we can take this knowledge into consideration when selecting a split point. In contrast, when producing a tree from the top down, we can only select a split point based on top-level evaluations of span quality, without knowing anything about the subtrees that will be generated below them.
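For comparison with the chart procedure, the following is a schematic sketch of the greedy top-down decoder just described, again with the scoring functions passed in as callables and with arbitrary stand-in scorers in the usage example; it is an illustration under those assumptions, not the authors' implementation.

```python
def top_down_parse(i, j, s_label, s_span, labels):
    """Greedily build a tree over the span (i, j); nodes are (label, span, children)."""
    label = max(labels, key=lambda l: s_label(i, j, l))
    if j == i + 1:                                   # singleton span: nothing to split
        return (label, (i, j), [])
    # The split decision uses only top-level span scores of the two children.
    k = max(range(i + 1, j), key=lambda m: s_span(i, m) + s_span(m, j))
    left = top_down_parse(i, k, s_label, s_span, labels)
    right = top_down_parse(k, j, s_label, s_span, labels)
    return (label, (i, j), [left, right])

# Toy usage with the same arbitrary stand-in scorers as before.
tree = top_down_parse(
    0, 4,
    s_label=lambda i, j, l: 1.0 if (l == "NP" and j - i == 1) else 0.0,
    s_span=lambda i, j: -abs((j - i) - 2) * 0.1,
    labels=["<empty>", "NP", "VP", "S"],
)
print(tree)
```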
This difference is manifested in the augmented split score ˜ssplit used in the definition of sbest in Equation (2), where the scores of the subtrees associated with a split point are included in the chart recursion but necessarily excluded from the top-down recursion. While this apparent deficiency may be a cause for concern, we demonstrate the surprising empirical result in Section 6 that there is no loss in performance when moving from the globally-optimal chart parser to the greedy top-down procedure. 4.1 Margin Training As with the chart parsing formulation, we also use a margin-based method for learning under the topdown model. However, rather than requiring separation between the scores of full trees, we instead enforce a local margin at every decision point. For a span (i, j) occurring in the gold tree, let ℓ∗ and k∗represent the correct label and split point, and let bℓand bk be the predictions made by computing the maximizations in Equation (3). If bℓ̸= ℓ∗, meaning the prediction is incorrect, we incur a hinge penalty of the form max  0, 1 −slabel(i, j, ℓ∗) + slabel(i, j, bℓ)  . Similarly, if bk ̸= k∗, we incur a hinge penalty of the form max  0, 1 −ssplit(i, k∗, j) + ssplit(i, bk, j)  . 821 To obtain the loss for a given training example, we trace out the actions corresponding to the gold tree and accumulate the above penalties over all decision points. As before, the total loss to be minimized is the sum of losses across all training examples. Loss augmentation is also beneficial for the local decisions made by the top-down model, and can be implemented in a manner akin to the one discussed in Section 3.2. 4.2 Training with Exploration The hinge penalties given above are only defined for spans (i, j) that appear in the example tree. The model must therefore be constrained at training time to follow decisions that exactly reproduce the gold tree, since supervision cannot be provided otherwise. As a result, the model is never exposed to its mistakes, which can lead to a lack of calibration and poor performance at test time. To circumvent this issue, a dynamic oracle can be defined to inform the model about correct behavior even after it has deviated from the gold tree. Cross and Huang (2016) propose such an oracle for a related transition-based parsing system, and prove its optimality for the F1 metric on labeled spans. We adapt their result here to obtain a dynamic oracle for the present model with similar guarantees. The oracle for labeling decisions carries over without modification: the correct label for a span is the label assigned to that span if it is part of the gold tree, or the empty label ∅otherwise. For split point decisions, the oracle can be broken down into two cases. If a span (i, j) appears as a constituent in the gold tree T, we let b(i, j) denote the collection of its interior boundary points. For example, if the constituent over (1, 7) has children spanning (1, 3), (3, 6), and (6, 7), then we would have the two interior boundary points, b(1, 7) = {3, 6}. The oracle for a span appearing in the gold tree is then precisely the output of this function. Otherwise, for spans (i, j) not corresponding to gold constituents, we must instead identify the smallest enclosing gold constituent: (i∗, j∗) = min{(i′, j′) ∈T : i′ ≤i < j ≤j′}, where the minimum is taken with respect to the partial ordering induced by span length. 
The output of the oracle is then the set of interior boundary points of this enclosing span that also lie inside the original, {k ∈b(i∗, j∗) : i < k < j}. The proof of correctness is similar to the proof in Cross and Huang (2016); we refer to the Dynamic Oracle section in their paper for a more detailed discussion. As presented, the dynamic oracle for split point decisions returns a collection of one or more splits rather than a single correct answer. Any of these is a valid choice, with different splits corresponding to different binarizations of the original n-ary tree. We choose to use the leftmost split point for consistency in our implementation, but remark that the oracle split with the highest score could also be chosen at training time to allow for additional flexibility. Having defined the dynamic oracle for our system, we note that training with exploration can be implemented by a single modification to the procedure described in Section 4.1. Local penalties are accumulated as before, but instead of tracing out the decisions required to produce the gold tree, we instead follow the decisions predicted by the model. In this way, supervision is provided at states within the prediction procedure that are more likely to arise at test time when greedy inference is performed. 5 Scoring and Loss Alternatives The model presented in Section 2 is designed to be as simple as possible. However, there are many variations of the label and span scoring functions that could be explored; we discuss some of the options here. 5.1 Top-Middle-Bottom Label Scoring Our basic model treats the empty label, elementary nonterminals, and unary chains each as atomic units, obscuring similarities between unary chains and their component nonterminals or between different unary chains with common prefixes or suffixes. To address this lack of structure, we consider an alternative scoring scheme in which labels are predicted in three parts: a top nonterminal, a middle unary chain, and a bottom nonterminal (each of which is possibly empty).1 This not only allows for parameter sharing across labels with common subcomponents, but also has the added benefit of allowing the model to produce novel unary chains at test time. 1In more detail, ∅decomposes as (∅, ∅, ∅), X decomposes as (X, ∅, ∅), X–Y decomposes as (X, ∅, Y ), and X–Z1– · · · –Zk–Y decomposes as (X, Z1– · · · –Zk, Y ). 822 More precisely, we introduce the decomposition slabel(i, j, (ℓt, ℓm, ℓb)) = stop(i, j, ℓt) + smiddle(i, j, ℓm) + sbottom(i, j, ℓb), where stop, smiddle, and sbottom are independent one-layer feedforward networks of the same form as slabel that output vectors of scores for all label tops, label middle chains, and label bottoms encountered in the training corpus, respectively. The best label for a span (i, j) is then computed by solving the maximization problem max ℓt,ℓm,ℓb [slabel(i, j, (ℓt, ℓm, ℓb))] , which decomposes into three independent subproblems corresponding to the three label components. The final label is obtained by concatenating ℓt, ℓm, and ℓb, with empty components being omitted from the concatenation. 5.2 Left and Right Span Scoring The basic model uses the same span scoring function sspan to assign a score to the left and right subspans of a given span. One simple extension is to replace this by a pair of distinct left and right feedforward networks of the same form, giving the decomposition ssplit(i, k, j) = sleft(i, k) + sright(k, j). 
5.3 Span Concatenation Scoring Since span scores are only used to score splits in our model, we also consider directly scoring a split by feeding the concatenation of the span representations of the left and right subspans through a single feedforward network, giving ssplit(i, k, j) = v⊤ s g (Ws[sik; skj] + bs) . This is similar to the structural scoring function used by Cross and Huang (2016), although whereas they additionally include features for the outside spans (0, i) and (j, n) in their concatenation, we omit these from our implementation, finding that they do not improve performance. 5.4 Deep Biaffine Span Scoring Inspired by the success of deep biaffine scoring in recent work by Dozat and Manning (2016) for dependency parsing, we also consider a split scoring function of a similar form for our model. Specifically, we let hik = fleft(sik) and hkj = fright(skj) be deep left and right span representations obtained by passing the child vectors through corresponding left and right feedforward networks. We then define the biaffine split scoring function ssplit(i, k, j) = h⊤ ikWshkj + v⊤ lefthik + v⊤ righthkj, which consists of the sum of a bilinear form between the two hidden representations together with two inner products. 5.5 Structured Label Loss The three-way label scoring scheme described in Section 5.1 offers one path towards the incorporation of label structure into the model. We additionally consider a structured Hamming loss on labels. More specifically, given two labels ℓ1 and ℓ2 consisting of zero or more nonterminals, we define the loss as |ℓ1 \ ℓ2| + |ℓ2 \ ℓ1|, treating each label as a multiset of nonterminals. This structured loss can be incorporated into the training process using the methods described in Sections 3.2 and 4.1. 6 Experiments We first describe the general setup used for our experiments. We use the Penn Treebank (Marcus et al., 1993) for our English experiments, with standard splits of sections 2-21 for training, section 22 for development, and section 23 for testing. We use the French Treebank from the SPMRL 2014 shared task (Seddah et al., 2014) with its provided splits for our French experiments. No token preprocessing is performed, and only a single <UNK> token is used for unknown words at test time. The inputs to our system are concatenations of 100-dimensional word embeddings and 50-dimensional part-of-speech embeddings. In the case of the French Treebank, we also include 50-dimensional embeddings of each morphological tag. We use automatically predicted tags for training and testing, obtaining predicted part-ofspeech tags for the Penn Treebank using the Stanford tagger (Toutanova et al., 2003) with 10-way jackknifing, and using the provided predicted partof-speech and morphological tags for the French Treebank. Words are replaced by <UNK> with probability 1/(1+freq(w)) during training, where freq(w) is the frequency of w in the training data. We use a two-layer bidirectional LSTM for our base span features. Dropout with a ratio selected from {0.2, 0.3, 0.4} is applied to all non-recurrent 823 WSJ Dev, Atomic Labels, Basic 0-1 Label Loss Parser Minimal Left-Right Concat. Biaffine Chart 91.95 92.09 92.15 91.96 Top-Down 92.16 92.25 92.24 92.14 (a) WSJ Dev, Atomic Labels, Structured Label Loss Parser Minimal Left-Right Concat. Biaffine Chart 91.86 92.12 92.09 91.95 Top-Down 92.12 92.31 92.26 92.20 (b) WSJ Dev, 3-Part Labels, Basic 0-1 Label Loss Parser Minimal Left-Right Concat. 
Biaffine Chart 92.08 92.05 91.94 91.79 Top-Down 92.12 92.18 92.14 92.02 (c) WSJ Dev, 3-Part Labels, Structured Label Loss Parser Minimal Left-Right Concat. Biaffine Chart 91.92 91.96 91.97 91.78 Top-Down 91.98 92.27 92.17 92.06 (d) Table 1: Development F1 scores on the Penn Treebank. Each table corresponds to a particular choice of label loss (either the basic 0-1 loss or the structured Hamming label loss of Section 5.5) and labeling scheme (either the basic atomic scheme or the top-middle-bottom labeling scheme of Section 5.1). The columns within each table correspond to different split scoring schemes: basic minimal scoring, the leftright scoring of Section 5.2, the concatenation scoring of Section 5.3, and the deep biaffine scoring of Section 5.4. connections of the LSTM, including its inputs and outputs. We tie the hidden dimension of the LSTM and all feedforward networks, selecting a size from {150, 200, 250}. All parameters (including word and tag embeddings) are randomly initialized using Glorot initialization (Glorot and Bengio, 2010), and are tuned on development set performance. We use the Adam optimizer (Kingma and Ba, 2014) with its default settings for optimization, with a batch size of 10. Our system is implemented in C++ using the DyNet neural network library (Neubig et al., 2017). We begin by training the minimal version of our proposed chart and top-down parsers on the Penn Treebank. Out of the box, we obtain test F1 scores of 91.69 for the chart parser and 91.58 for the topdown parser. The higher of these matches the recent state-of-the-art score of 91.7 reported by Liu and Zhang (2016), demonstrating that our simple neural parsing system is already capable of achieving strong results. Building on this, we explore the effects of different split scoring functions when using either the basic 0-1 label loss or the structured label loss discussed in Section 5.5. Our results are presented in Tables 1a and 1b. We observe that regardless of the label loss, the minimal and deep biaffine split scoring schemes perform a notch below the left-right and concatenation scoring schemes. That the minimal scoring scheme performs worse than the left-right scheme is unsurprising, since the latter is a strict generalization of the former. It is evident, however, that joint scoring of left and right subspans is not required for strong results—in fact, the left-right scheme which scores child subspans in isolation slightly outperforms the concatenation scheme in all but one case, and is stronger than the deep biaffine scoring function across the board. Comparing results across the choice of label loss, however, we find that fewer trends are apparent. The scores obtained by training with a 0-1 loss are all within 0.1 of those obtained using a structured Hamming loss, being slightly higher in four out of eight cases and slightly lower in the other half. This leads us to conclude that the more elementary approach is sufficient when selecting atomic labels from a fixed inventory. We also perform the same set of experiments under the setting where the top-middle-bottom label scoring function described in Section 5.1 is used in place of an atomic label scoring function. These results are shown in Tables 1c and 1d. A priori, we might expect that exposing additional structure would allow the model to make better predictions, but on the whole we find that the scores in this set of experiments are worse than those in the previous set. 
Trends similar to before hold across the different choices of scoring functions, though in this case the minimal setting has scores closer to those of the left-right setting, even exceeding its performance in the case of a chart parser with a 0-1 label loss. Our final test results are given in Table 2, along with the results of other recent single-model parsers trained without external parse data. We 824 Final Parsing Results on Penn Treebank Parser LR LP F1 Durrett and Klein (2015) – – 91.1 Vinyals et al. (2015) – – 88.3 Dyer et al. (2016) – – 89.8 Cross and Huang (2016) 90.5 92.1 91.3 Liu and Zhang (2016) 91.3 92.1 91.7 Best Chart Parser 90.63 92.98 91.79 Best Top-Down Parser 90.35 93.23 91.77 Table 2: Comparison of final test F1 scores on the Penn Treebank. Here we only include scores from single-model parsers trained without external parse data. Final Parsing Results on French Treebank Parser LR LP F1 Bj¨orkelund et al. (2014) – – 82.53 Durrett and Klein (2015) – – 81.25 Cross and Huang (2016) 81.90 84.77 83.11 Best Chart Parser 80.26 84.12 82.14 Best Top-Down Parser 79.60 85.05 82.23 Table 3: Comparison of final test F1 scores on the French Treebank. achieve a new state-of-the-art F1 score of 91.79 with our best model. Interestingly, we observe that our parsers have a noticeably higher gap between precision and recall than do other top parsers, likely owing to the structured label loss which penalizes mismatching nonterminals more heavily than it does a nonterminal and empty label mismatch. In addition, there is little difference between the best top-down model and the best chart model, indicating that global normalization is not required to achieve strong results. Processing one sentence at a time on a c4.4xlarge Amazon EC2 instance, our best chart and top-down parsers operate at speeds of 20.3 sentences per second and 75.5 sentences per second, respectively, as measured on the test set. We additionally train parsers on the French Treebank using the same settings from our English experiments, selecting the best model of each type based on development performance. We list our test results along with those of several other recent papers in Table 3. Although we fall short of the scores obtained by Cross and Huang (2016), we achieve competitive performance relative to the neural CRF parser of Durrett and Klein (2015). 7 Related Work Many early successful approaches to constituency parsing focused on rich modeling of correlations in the output space, typically by engineering proabilistic context-free grammars with state spaces enriched to capture long-distance dependencies and lexical phenomena (Collins, 2003; Klein and Manning, 2003; Petrov and Klein, 2007). By contrast, the approach we have described here continues a recent line of work on direct modeling of correlations in the input space, by using rich feature representations to parameterize local potentials that interact with a comparatively unconstrained structured decoder. As noted in the introduction, this class of feature-based tree scoring functions can be implemented with either a linear transition system (Chen and Manning, 2014) or a global decoder (Finkel et al., 2008). Kiperwasser and Goldberg (2016) describe an approach closely related to ours but targeted at dependency formalisms, and which easily accommodates both sparse log-linear scoring models (Hall et al., 2014) and deep neural potentials (Henderson, 2004; Ballesteros et al., 2016). 
The best-performing constituency parsers in the last two years have largely been transition-based rather than global; examples include the models of Dyer et al. (2016), Cross and Huang (2016) and Liu and Zhang (2016). The present work takes many of the insights developed in these models (e.g. the recurrent representation of spans (Kiperwasser and Goldberg, 2016), and the use of a dynamic oracle and exploration policy during training (Goldberg and Nivre, 2013)) and extends these insights to span-oriented models, which support a wider range of decoding procedures. Our approach differs from other recent chart-based neural models (e.g. Durrett and Klein (2015)) in the use of a recurrent input representation, structured loss function, and comparatively simple parameterization of the scoring function. In addition to the globally optimal decoding procedures for which these models were designed, and in contrast to the left-to-right decoder typically employed by transition-based models, our model admits an additional greedy top-to-bottom inference procedure. 8 Conclusion We have presented a minimal span-oriented parser that uses a recurrent input representation to score 825 trees with a sum of independent potentials on their constituent spans and labels. Our model supports both exact chart-based decoding and a novel top-down inference procedure. Both approaches achieve state-of-the-art performance on the Penn Treebank, and our best model achieves competitive performance on the French Treebank. Our experiments show that many of the key insights from recent neural transition-based approaches to parsing can be easily ported to the chart parsing setting, resulting in a pair of extremely simple models that nonetheless achieve excellent performance. Acknowledgments We would like to thank Nick Altieri and the anonymous reviewers for their valuable comments and suggestions. MS is supported by an NSF Graduate Research Fellowship. JA is supported by a Facebook graduate fellowship and a Berkeley AI / Huawei fellowship. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473. http://arxiv.org/abs/1409.0473. Miguel Ballesteros, Yoav Goldberg, Chris Dyer, and Noah A Smith. 2016. Training with exploration improves a greedy stack-lstm parser. arXiv preprint arXiv:1603.03793 . Anders Bj¨orkelund, Ozlem Cetinoglu, Agnieszka Falenska, Rich´ard Farkas, Thomas M¨uller, Wolfgang Seeker, and Zsolt Sz´ant´o. 2014. The imswrocław-szeged-cis entry at the spmrl 2014 shared task: Reranking and morphosyntax meet unlabeled data. Notes of the SPMRL . Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In EMNLP. pages 740–750. Michael Collins. 2003. Head-driven statistical models for natural language parsing. Computational linguistics 29(4):589–637. James Cross and Liang Huang. 2016. Span-based constituency parsing with a structure-label system and provably optimal dynamic oracles. In EMNLP. Timothy Dozat and Christopher D. Manning. 2016. Deep biaffine attention for neural dependency parsing. CoRR abs/1611.01734. http://arxiv.org/abs/1611.01734. Greg Durrett and Dan Klein. 2015. Neural crf parsing. arXiv preprint arXiv:1507.03641 . Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A Smith. 2016. Recurrent neural network grammars. arXiv preprint arXiv:1602.07776 . Jenny Rose Finkel, Alex Kleeman, and Christopher D Manning. 2008. 
Efficient, feature-based, conditional random field parsing. In ACL. volume 46, pages 959–967. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS10). Society for Artificial Intelligence and Statistics. Yoav Goldberg and Joakim Nivre. 2013. Training deterministic parsers with non-deterministic oracles. Transactions of the association for Computational Linguistics 1:403–414. Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850 . David Leo Wright Hall, Greg Durrett, and Dan Klein. 2014. Less grammar, more features. In ACL (1). pages 228–237. James Henderson. 2004. Discriminative training of a neural network statistical parser. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, page 95. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980. http://arxiv.org/abs/1412.6980. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional lstm feature representations. arXiv preprint arXiv:1603.04351 . Dan Klein and Christopher D Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. pages 423–430. Jiangming Liu and Yue Zhang. 2016. Shiftreduce constituent parsing with neural lookahead features. CoRR abs/1612.00567. http://arxiv.org/abs/1612.00567. Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Comput. Linguist. 19(2):313–330. http://dl.acm.org/citation.cfm?id=972470.972475. Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, 826 Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980 . Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics. Assocation for Computational Linguistics. Djam´e Seddah, Sandra K¨ubler, and Reut Tsarfaty. 2014. Introducing the spmrl 2014 shared task on parsing morphologically-rich languages. In Proceedings of the First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of Non-Canonical Languages. Dublin City University, Dublin, Ireland, pages 103– 109. http://www.aclweb.org/anthology/W14-6111. Ben Taskar, Vassil Chatalbashev, Daphne Koller, and Carlos Guestrin. 2005. Learning structured prediction models: A large margin approach. In Proceedings of the 22Nd International Conference on Machine Learning. ACM, New York, NY, USA, ICML ’05, pages 896–903. https://doi.org/10.1145/1102351.1102464. Le Quang Thang, Hiroshi Noji, and Yusuke Miyao. 2015. Optimal shift-reduce constituent parsing with structured perceptron. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. volume 1, pages 1534–1544. Kristina Toutanova, Dan Klein, Christopher D. 
Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1. Association for Computational Linguistics, Stroudsburg, PA, USA, NAACL ’03, pages 173–180. https://doi.org/10.3115/1073445.1073478. Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Advances in Neural Information Processing Systems. pages 2773–2781. Peilu Wang, Yao Qian, Frank K. Soong, Lei He, and Hai Zhao. 2015. A unified tagging solution: Bidirectional LSTM recurrent neural network with word embedding. CoRR abs/1511.00215. http://arxiv.org/abs/1511.00215. Wenhui Wang and Baobao Chang. 2016. Graphbased dependency parsing with bidirectional LSTM. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. http://aclweb.org/anthology/P/P16/P16-1218.pdf. Sam Wiseman and Alexander M Rush. 2016. Sequence-to-sequence learning as beam-search optimization. arXiv preprint arXiv:1606.02960 . 827
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 828–838 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1077 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 828–838 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1077 Semantic Dependency Parsing via Book Embedding Weiwei Sun, Junjie Cao and Xiaojun Wan Institute of Computer Science and Technology, Peking University The MOE Key Laboratory of Computational Linguistics, Peking University {ws,junjie.cao,wanxiaojun}@pku.edu.cn Abstract We model a dependency graph as a book, a particular kind of topological space, for semantic dependency parsing. The spine of the book is made up of a sequence of words, and each page contains a subset of noncrossing arcs. To build a semantic graph for a given sentence, we design new Maximum Subgraph algorithms to generate noncrossing graphs on each page, and a Lagrangian Relaxation-based algorithm to combine pages into a book. Experiments demonstrate the effectiveness of the book embedding framework across a wide range of conditions. Our parser obtains comparable results with a state-of-the-art transition-based parser. 1 Introduction Dependency analysis provides a lightweight and effective way to encode syntactic and semantic information of natural language sentences. One of its branches, syntactic dependency parsing (K¨ubler et al., 2009) has been an extremely active research area, with high-performance parsers being built and applied for practical use of NLP. Semantic dependency parsing, however, has only been addressed in the literature recently (Oepen et al., 2014, 2015; Du et al., 2015; Zhang et al., 2016; Cao et al., 2017). Semantic dependency parsing employs a graphstructured semantic representation. On the one hand, it is flexible enough to provide analysis for various semantic phenomena (Ivanova et al., 2012). This very flexibility, on the other hand, brings along new challenges for designing parsing algorithms. For graph-based parsing, no previously defined Maximum Subgraph algorithm has simultaneously a high coverage and a polynomial complexity to low degrees. For transition-based parsing, no principled decoding algorithms, e.g. dynamic programming (DP), has been developed for existing transition systems. In this paper, we borrow the idea of book embedding from graph theory, and propose a novel framework to build parsers for flexible dependency representations. In graph theory, a book is a kind of topological space that consists of a spine and a collection of one or more half-planes. In our “book model” of semantic dependency graph, the spine is made up of a sequence of words, and each half-plane contains a subset of dependency arcs. In particular, the arcs on each page compose a noncrossing dependency graph, a.k.a. planar graph. Though a dependency graph in general is very flexible, its subgraph on each page is rather regular. Under the new perspective, semantic dependency parsing can be cast as a two-step task: Each page is first analyzed separately, and then all the pages are bound coherently. Our work is motivated by the extant low-degree polynomial time algorithm for first-order Maximum Subgraph parsing for noncrossing dependency graphs (Kuhlmann and Jonsson, 2015). 
We enhance existing work with new exact second- and approximate higher-order algorithms. Our algorithms facilitate building with high accuracy the partial semantic dependency graphs on each page. To produce a full semantic analysis, we also need to integrate partial graphs on all pages into one coherent book. To this end, we formulate the problem as a combinatorial optimization problem, and propose a Lagrangian Relaxation-based algorithm for solutions. We implement a practical parser in the new framework with a statistical disambiguation model. We evaluate this parser on four data sets: those used in SemEval 2014 Task 8 (Oepen et al., 2014), and the dependency graphs extracted from 828 . . The . company . that . Mark . wants .to . buy . arg1 . arg1 . arg1 . arg1 . arg2 . arg1 . arg2 . arg2 . arg2 Figure 1: A fragment of a semantic dependency graph. CCGbank (Hockenmaier and Steedman, 2007). On all data sets, we find that our higher-order parsing models are more accurate than the first-order baseline. Experiments also demonstrate the effectiveness of our page binding algorithm. Our new parser can be taken as a graph-based parser extended for more general dependency graphs. It parallels the state-of-the-art transition-based system of Zhang et al. (2016) in performance. The implementation of our parser is available at http://www.icst.pku.edu.cn/ lcwm/grass. 2 Background 2.1 Semantic Dependency Graphs A dependency graph G = (V, A) is a labeled directed graph for a sentence s = w1, . . . , wn. The vertex set V consists of n vertices, each of which corresponds to a word and is indexed by an integer. The arc set A represents the labeled dependency relations of the particular analysis G. Specifically, an arc, viz. a(i,j,l), represents a dependency relation l from head wi to dependent wj. Semantic dependency parsing is the task of mapping a natural language sentence into a formal meaning representation in the form of a dependency graph. Figure 1 shows a graph fragment of a noun phrase. This semantic graph is grounded on Combinatory Categorial Grammar (CCG; Steedman, 2000), and can be taken as a proxy for predicate–argument structure. The graph includes most semantically relevant non-anaphoric local (e.g. from “wants” to “Mark”) and long-distance (e.g. from “buy” to “company”) dependencies. 2.2 Maximum Subgraph Parsing Usually, syntactic dependency analysis employs tree-shaped representations. Dependency parsing, thus, can be formulated as the search for a maximum spanning tree (MST) of an arc-weighted graph. For semantic dependency parsing, where the target representations are not necessarily trees, Kuhlmann and Jonsson (2015) proposed to generalize the MST model to other types of subgraphs. In general, dependency parsing is formulated as the search for Maximum Subgraph for graph class G: Given a graph G = (V, A), find a subset A′ ⊆A with maximum total weight such that the induced subgraph G′ = (V, A′) belongs to G. Formally, we have the following optimization problem: G′(s) = arg max H∈G(s,G) ∑ p∈H SCOREPART(s, p) Here, G(s, G) is the set of all graphs that belong to G and are compatible with s and G. For parsing, G is usually a complete graph. SCOREPART(s, p) evaluates the event that a small subgraph p of a candidate graph H is good. We define the order of a part according to the number of dependencies it contains, in analogy with tree parsing in terminology. Previous work only discussed the first-order case for Maximum Subgraph parsing (Kuhlmann and Jonsson, 2015). 
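The first-order ("arc-factored") case of this formulation is straightforward to make concrete. The Python sketch below is written for this edit rather than taken from the paper: the Arc type, the toy scores and the word indices are all invented for illustration, and a real SCOREPART would come from the trained disambiguation model described later in Section 5.2.

```python
from collections import namedtuple

# An arc a(i, j, l): a labeled dependency from head w_i to dependent w_j.
Arc = namedtuple("Arc", ["head", "dep", "label"])

def graph_score(arcs, score_part):
    """First-order (arc-factored) decomposition: every part p is a single
    arc, so a candidate subgraph scores the sum of its per-arc scores."""
    return sum(score_part(arc) for arc in arcs)

# Toy stand-in for SCOREPART(s, p); indices and labels are made up.
toy_scores = {Arc(5, 4, "arg1"): 2.0, Arc(7, 2, "arg2"): 1.5}

def toy_score_part(arc):
    return toy_scores.get(arc, -1.0)

candidate = [Arc(5, 4, "arg1"), Arc(7, 2, "arg2")]
print(graph_score(candidate, toy_score_part))  # 3.5
```

Maximum Subgraph parsing then searches, over all subgraphs in the target class G, for the arc set that maximizes this sum; the higher-order factorizations discussed next only change score_part so that it scores small tuples of neighboring arcs rather than single arcs.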
In this paper, we are also interested in higher-order parsing, with a special focus on factorizations utilizing the following parts: . dependency . single-side neighbors . both-side neighbors . both-side tri-neighbors If G is the set of projective trees or noncrossing graphs the first-order Maximum Subgraph problem can be solved in cubic-time (Eisner, 1996; Kuhlmann and Jonsson, 2015). Unfortunately, these two graph classes are not expressive enough to encode semantic dependency graphs. Moreover, this problem for several wellmotivated graph classes, including acyclic or 2planar graphs, is NP-hard, even if one only considers first-order factorization. The lack of appropriate decoding algorithms results in one major challenge for semantic dependency parsing. 2.3 Book Embedding This section introduces the basic idea about book embedding from a graph theoretical point of view. Definition 1. A book is a kind of topological space that consists of a line, called the spine, together with a collection of one or more halfplanes, called the pages, each having the spine as its boundary. Definition 2. A book embedding of a finite graph G onto a book B satisfies the following conditions. 829 . . The . company . that . Mark . wants .to . buy . arg1 . arg1 . arg1 . arg1 . arg2 . arg1 . arg2 . arg2 . arg2 Figure 2: Book embedding for the graph in Figure 1. Arcs are assigned to two pages. 1. Every vertex of G is depicted as a point on the spine of B. 2. Every edge of G is depicted as a curve that lies within a single page of B. 3. Every page of B does not have any edge crossings. A book embedding separates a graph into several subgraphs, each of which contains all vertices, but only a subset of arcs that are not crossed with each other. This kind of graph is named noncrossing dependency graph by Kuhlmann and Jonsson (2015) and planar by Titov et al. (2009), G´omezRodr´ıguez and Nivre (2010) and many others. We can formalize a semantic dependency graph as a book. Take the graph in Figure 1 for example. We can separate the edges into two sets and take each set as a single page, as shown in Figure 2. Empirically, a semantic dependency graph is sparse enough that it can be that it can be usually embedded onto a very thin book. To measure the thickness, we can use pagenumber that is defined as follows. Definition 3. The book pagenumber of G is the minimum number of pages required for a book embedding of G. We look into the pagenumber of graphs on four linguistic graph banks (as defined in Section 5). These corpora are also used for training and evaluating our data-driven parsers. The pagenumbers are calculated using sentences in the training sets. Table 1 lists the percentages of complete graphs that can be accounted with books of different thickness. The percentages of noncrossing graphs, i.e. graphs that have pagenumber 1, vary between 48.23% and 78.26%. The practical usefulness of the algorithms for computing maximum noncrossing graphs will be limited by the relatively low coverage. The class of graphs with pagenumber no more than two has a considerably satisfactory coverage. PN DM PAS CCD PSD 1 69.83% 60.07% 48.23% 78.26% 2 29.85% 39.46% 49.86% 20.12% 3 0.31% 0.46% 1.71% 1.39% 4 0 0.02% 0.18% 0.21% 5 0 0 0.02% 0.02% 6 0 0 0 0.01% Table 1: Coverage in terms of complete graphs with respect to different pagenumbers (“PN” for short). “DM,” “PAS,” “CCD” and “PSD” are short for DeepBank, Enju HPSGBank, CCGBank and Prague Dependency Treebank. 
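Because Table 1 is computed from per-sentence pagenumbers, it may help to see how membership in the two cheapest classes can be checked once the spine (the word order) is fixed. The Python sketch below is my own illustration, not the authors' tool: pagenumber 1 holds when no two arcs cross, and pagenumber at most 2 holds exactly when the conflict graph of crossing arc pairs is bipartite, which a breadth-first 2-coloring decides; as noted later in the paper, finding the pagenumber in general is NP-hard.

```python
from collections import deque

def crosses(a, b):
    """Two arcs drawn above a fixed spine cross iff their endpoints interleave."""
    i, j = sorted(a)
    k, l = sorted(b)
    return (i < k < j < l) or (k < i < l < j)

def pagenumber_at_most_2(arcs):
    """True iff the arcs admit a 2-page book embedding for this spine order,
    i.e. the conflict graph of crossing arc pairs is 2-colorable."""
    n = len(arcs)
    conflict = [[v for v in range(n) if v != u and crosses(arcs[u], arcs[v])]
                for u in range(n)]
    color = [None] * n
    for start in range(n):
        if color[start] is not None:
            continue
        color[start], queue = 0, deque([start])
        while queue:
            u = queue.popleft()
            for v in conflict[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False
    return True

# Unlabeled arcs as (head, dependent) word-index pairs, invented for illustration.
arcs = [(1, 4), (2, 6), (4, 7)]
print(any(crosses(a, b) for a in arcs for b in arcs if a != b))  # True: pagenumber > 1
print(pagenumber_at_most_2(arcs))                                # True: two pages suffice
```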
It can account for more than 98% of the graphs and sometimes close to 100% in each data set. Unfortunately, the power of Maximum Subgraph parsing is limited given that finding the maximum acyclic subgraph when pagenumber is at most k is NP-hard for k ≥2 (Kuhlmann and Jonsson, 2015). As an alternative, we propose to model a semantic graph as a book, in which the spine is made up of a sequence of words, and each halfplane contains a subset of dependency arcs. To build a semantic graph for a given sentence, we design new parsing algorithms to generate noncrossing graphs on each page (Section 3), and a Lagrangian Relaxation-based algorithm to integrate pages into a book (Section 4). 3 Maximum Subgraph for Noncrossing Graphs We introduce several DP algorithms for calculating the maximum noncrossing dependency graphs. Each algorithm visits all the spans from bottom to top, finding the best combination of smaller structures to form a new structure, according to the scores of first- or higher-order features. For sake of conciseness, we focus on undirected graphs and treat direction of linguistic dependencies as edge labels1. We will use e(i,j,l)(i < j) or simply e(i,j) to indicate an edge in either direction 1 The single-head property does not hold. We currently do not consider other constraints of directions. So prediction of the direction of one edge does not affect prediction of other edges as well as their directions. The directions can be assigned locally, and our parser builds directed rather than undirected graphs in this way. Undirected graphs are only used to conveniently illustrate our algorithms. All experimental results in Section 5 consider directed dependencies in a standard way. We use the official evaluation tool provided by SDP2014 shared task. The numberic results reported in this paper are directly comparable to results in other papers. 830 . O[s, e].s. e . C[s, e, l] . s . e .s. e . = . s + 1 . e . .s. e . = . s . k . + . k . e Figure 3: The sub-problems of first-order factorization and the decomposition for C[s, e, l]. between i and j. For sake of formal concision, we introduce the algorithm of which the goal is to calculate the maximum score of a subgraph. Extracting corresponding optimal graphs can be done in a number of ways. For example, we can maintain an auxiliary arc table which is populated parallel to the procedure of obtaining maximum scores. We define two score functions: (1) sfst(s, e, l) assigns a score to an individual edge e(s,e,l) and (2) sscd(s, e1, e2, l1, l2) assigns a score to a pair of neighboring edges e(s,e1,l1) and e(s,e2,l2). 3.1 First-Order Factorization Given a sentence, we define two DP tables, namely O[s, e] and C[s, e, l] which represents the value of the highest scoring noncrossing graphs that spans sequences of words of a sentence. The two tables are related to two sub-problems, as graphically shown in Figure 3. The following is their explaination. Open O[s, e] is intended to represent the highest weighted subgraph spanning ws to we. The subgraphs related to O[s, e] may or may not contain e(s,e). Closed C[s, e, l] represents the highest weighted subgraph spanning ws to we too. But the subgraphs related to C[s, e, l] must contain e(s,e,l). O[s, e] can be obtained by one of the following combinations: • C[s, e, l](l ∈L), if there is an edge between s and e with label l. • C[s, k, l] + O[k, e](l ∈L, s < k < e), if e(s,e) does not exist and there is an edge with .s. e . = . s + 1 . e −1 . .s. e . = . s . rs . + . rs . e −1 . .s. e . = . s + 1 . le . 
+ . le . e . .s. e . = . s . rs . + . rs . le . + . le . e Figure 4: The decomposition for C[s, e, l] in exact single-side second-order factorization. label l between s and some node in this span. k is the farthest node linked to s. • O[s + 1, e], if e(s,e) does not exist and there is no edge to its right in this span. C[s, e, l] can be obtained by one of the following combinations: • O[s + 1, e] + sfst(s, e, l), if s has no edge to its right; • C[s, k, l′] + O[k, e] + sfst(s, e, l)(l′ ∈L, s < k < e), if there is an edge from s to some node in the span. For each edge, there are two directions for the edge, we encode the directions into the label l, and treat it as undirected edge. We need to search for a best split and a best label for every span, so the time complexity of the algorithm is O(n3|L|) where n is the length of the sentence and L is the set of labels. 3.2 Second-Order Single Side Factorization We propose a new algorithm concerning singleside second-order factorization. The DP tables, as well as the decomposition for the open problem, are the same as in the first order factorization. The decomposition of C[s, e, l] is very different. In order to score second-order features from adjacent edges in the same side, which is similar to sibling features for tree parsing (McDonald and Pereira, 831 2006), we need to find the rightmost node adjacent to s, denoted as rs, and the leftmost node adjacent to e, denoted as le, and here we have s < rs ≤le < e. And, sometimes, we split C[s, e, l] into three parts to capture the neighbor factors on both endpoints. In summary, C[s, e, l] can be obtained by one of the following combination (as graphically shown in Figure 4): • O[s + 1, e − 1] + sfst(s, e, l) + sscd(s, nil, e, nil, l) + sscd(e, nil, s, nil, l), if there is no edge from s/e to any node in the span. • C[s, rs, l′] + O[rs, e −1] + sfst(s, e, l) + sscd(s, rs, e, l′, l) + sscd(e, nil, s, nil, l) (s < rs < e), if there is no edge from e to any node in the span. • O[s + 1, le] + C[le, e, l′] + sfst(s, e, l) + sscd(e, le, s, l′, l) + sscd(s, nil, e, nil, l) (s < le < e), if there is no edge from s to any node in the span. • C[s, rs, l′] + O[rs, le] + C[le, e, l′′] + sfst(s, e, l) + sscd(s, rs, e, l′, l) + sscd(e, le, s, l′′, l) (s < rs ≤ le < e), otherwise. For the last combination, we need to search for two best separating words, namely sr and le, and two best labels, namely l′ and l′, so the time complexity of this second-order algorithm is O(n4|L|2). 3.3 Generalized Higher-Order Parsing Both of the above two algorithms are exact decoding algorithms. Solutions allow for exact decoding with higher-order features typically at a high cost in terms of efficiency. A trade-off between rich features and exact decoding benefit tree parsing (McDonald and Nivre, 2011). In particular, Zhang and McDonald (2012) proposed a generalized higher-order model that abandons exact search in graph-based parsing in favor of freedom in feature scope. They kept intact Eisner’s algorithm for first-order parsing problems, while enhanced the scoring function in an approximate way by introducing higher-order features. We borrow Zhang and McDonald’s idea and develop a generalized parsing model for noncrossing dependency representations. The sub-problems and their decomposition are much like the firstorder algorithm. The difference is that we expand . O .s. rs . le . e . C . s . e .s. e . = . s + 1 . rs . le . e . .s. e . = . s . k . + . k . rk . le . 
e Figure 5: Sub-problems of generalized higherorder factorization and some of the combinations. the signature of each structure to include all the larger context required to compute higher-order features. For example, we can record the leftmost and the rightmost edges in the open structure to get the tri-neighbor features. The time complexity is thus always O(n3B2), no matter how complicatedly higher-order features are incorporated. We focus on five factors introduced in Section 2.2. Still consider single-side second-order factorization. We keep the closed structure the same but modify the open one to O[s, e; rs, le, ls,rs, lle,e]. During parsing, we only record the top-B combinations of label concerning e(s,e) and related rs, le, ls,rs and lle,e. The split of a structure is similar to the first-order algorithm, shown in Figure 5. Note that rs may be e and le may be s. In this way, we know exactly whether or not there is an edge from s to e in a refined open structure. This is different from the intuition of the design of the open structure when we consider first-order factorization. 4 Finding and Binding Pages Statistics presented in Table 1 indicate that the coverage of noncrossing dependency graphs is relatively low. If we treat semantic dependency parsing as Maximum Subgraph parsing, the practical usefulness of the algorithms introduced above is rather limited accordingly. To deal with this problem, we model a semantic graph as a book, and view semantic dependency parsing as finding a book with coherent optimal pages. Given the considerably high coverage of pagenumber at most 2, we only consider 2-page books. 832 . . The . company . that . Mark . wants .to . buy . arg1 . arg1 . arg2 . arg2 . arg2 . arg2 . arg1 . arg1 . arg1 . arg1 . arg1 . arg2 . arg1 Figure 6: Every non-crossing arc is repeatedly assigned to every page. 4.1 Finding Pages via Coloring In general, finding the pagenumber of a graph is NP-hard (G´omez-Rodr´ıguez and Nivre, 2010). However, it is easy to figure out that the problem is solvable if the pagenumber is at most 2. Fortunately, a semantic dependency graph is not so dense that it can be usually embedded onto a very thin book with only 2 pages. For a structured prediction problem, the structural information of the output produced by a parser is very important. The density of semantic dependency graphs therefore results in a defect: The output’s structural information is limited because only a half of arcs on average are included in one page. To enrich the structural information, we put into each page the arcs that do not cross with any other arcs. See Figure 6 for example. We utilize an algorithm based on coloring to decompose a graph G = (V, A) into two noncrossing subgraphs GA = (V, AB) and GB = (V, AB). A detailed description is included in the supplementary note. The key idea of our algorithm is to color each crossing arc in two colors using depthfirst search. When we color an arc ex, we examine all arcs crossing with ex. If one of them, say ey, has not been examined and can be colored in the other color (no crossing arc of ey has the same color with ey), we color ey and then recursively process ey. Otherwise, ey is marked as a bad arc and dropped from both AA and AB. After coloring all the crossing arcs, we add every arc in different color to different subgraphs. Specially, all noncrossing arcs are assigned to both AA and AB. 
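The coloring procedure is only sketched in the running text (the full description is left to the paper's supplementary note), so the following Python sketch shows one way the described steps could be realized; the traversal order and tie-breaking of the authors' actual algorithm may differ. Crossing arcs are 2-colored by depth-first search, arcs that cannot be colored consistently are dropped as bad arcs, and every noncrossing arc is copied onto both pages, so each returned page is noncrossing by construction.

```python
def crosses(a, b):
    (i, j), (k, l) = sorted(a), sorted(b)
    return (i < k < j < l) or (k < i < l < j)

def decompose_two_pages(arcs):
    """Split an arc set into two noncrossing pages: DFS 2-coloring of the
    crossing relation, "bad" arcs dropped, noncrossing arcs on both pages."""
    crossing = {a: [b for b in arcs if b != a and crosses(a, b)] for a in arcs}
    color, bad = {}, set()

    def colorable(a, c):
        return all(color.get(b) != c for b in crossing[a])

    def visit(a):
        for b in crossing[a]:
            if b in color or b in bad:
                continue
            other = 1 - color[a]
            if colorable(b, other):
                color[b] = other
                visit(b)        # recursion depth is bounded by the number of arcs
            else:
                bad.add(b)

    for a in arcs:
        if crossing[a] and a not in color and a not in bad:
            color[a] = 0        # seed a new component of the crossing graph
            visit(a)
    page_a = [a for a in arcs if not crossing[a] or color.get(a) == 0]
    page_b = [a for a in arcs if not crossing[a] or color.get(a) == 1]
    return page_a, page_b

print(decompose_two_pages([(1, 4), (2, 6), (4, 7), (5, 6)]))
# ([(1, 4), (4, 7), (5, 6)], [(2, 6), (5, 6)])
```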
4.2 Binding Pages via Lagrangian Relaxation Applying the above algorithm, we can obtain two corpora to train two noncrossing dependency parsing models. In other words, we can learn two score functions fA and fB to score noncrossing dependency graphs. Given the trained models and a sentence, we can find two optimal noncrossing graphs, i.e. find the solutions for arg maxg fA(g) and arg maxg fB(g), respectively. We can put all the arcs contained in gA = arg maxg fA(g) and gB = arg maxg fB(g) together as our parse for the sentence. This naive combination always gives a graph with a recall much higher than the precision. The problem is that a naive combination does not take the agreements of the graphs on the two pages into consideration, and thus loses some information. To combine the two pages in a principled way, we must do joint decoding to find two graphs gA and gB to maximize the score fA(gA) + fB(gB), under the following constraints. gA(i, j) ≤ ∑ cross((i,j),(i′,j′)) gB(i′, j′) + gB(i, j) gB(i, j) ≤ ∑ cross((i,j),(i′,j′)) gA(i′, j′) + gA(i, j) ∀i, j The functionality of cross is to figure out whether e(i,j) and e(i′,j′) cross. The meaning of the first constraint is: When there is an arc e(i,j) in the first graph, e(i,j) is also in the second graph, or there is an arc e(i′,j′) in the second graph which cross with e(i,j). So is the second one. All constraints are linear and can be written in a simplified way as, AgA + BgB ≤0 where A and B are matrices that can be constructed by checking all possible crossing arc pairs. In summary, we have the following constrained optimization problem, min. −fA(gA) −fB(gB) s.t. gA, gB are noncrossing graphs AgA + BgB ≤0 The Lagrangian of the optimization problem is L(gA, gB; u) = −fg(gA) −ft(gB) + u⊤(AgA + BgB) where u is the Lagrangian multiplier. Then the dual is L(u) = min gA,gB L(gA, gB; u) = max gA (fg(gA) −u⊤AgA) + max gB (fy(gB) −u⊤BgB) 833 BINDTWOPAGES(gA, gB) 1 u(0) ←0 2 for k ←0..T do 3 gA ←arg maxg fA(g) −u(k)⊤Ag 4 gB ←arg maxg fB(g) −u(k)⊤Bg 5 if AgA + BgB ≤0 then 6 return gA, gB 7 else 8 u(k+1) ←u(k) + α(k)(AgA + BgB) 9 return gA, gB Figure 7: The page binding algorithm. We instead try to find the solution for maxu L(u). By using a subgradient method to calculate maxu L(u), we have an algorithm for joint decoding (see Figure 7). L(u) is divided into two optimization problems which can be decoded easily. Each sub-problem is still a parsing problem for noncrossing graphs. Only the scores of factors are modified (see Line 3 and 4). Specifically, to modify the first order weights of edges, we take a subtraction of u⊤A in the first model and a substraction of u⊤B in the second one. In each iteration, after obtaining two new parsing results, we check whether the constraints are satisfied. If the answer is “yes,” we stop and return the merged graph. Otherwise, we update u in a way to increase L(u) (see Line 8). 5 Experiments 5.1 Data Sets To evaluate the effectiveness of book embedding in practice, we conduct experiments on unlabeled parsing using four corpora: CCGBank (Hockenmaier and Steedman, 2007), DeepBank (Flickinger et al., 2012), Enju HPSGBank (EnjuBank; Miyao et al., 2004) and Prague Dependency TreeBank (PCEDT; Hajic et al., 2012), We use “standard” training, validation, and test splits to facilitate comparisons. Following previous experimental setup for CCG parsing, we use section 02-21 as training data, section 00 as the development data, and section 23 for testing. 
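Returning to the page-binding step of Section 4.2, the subgradient loop of Figure 7 can be sketched as follows. This is a hedged illustration rather than the authors' implementation: the two page decoders are stand-ins for the noncrossing-graph parsers of Section 3 (they must accept additive per-arc score adjustments and return an arc set), and the candidate-arc list, step size and iteration cap are placeholders supplied for this sketch. The multipliers u_a and u_b correspond to the two constraint families, the per-arc violations double as the subgradient, and the update keeps the multipliers nonnegative because the constraints are inequalities (a projection that Figure 7 leaves implicit).

```python
def bind_two_pages(decode_page_a, decode_page_b, candidate_arcs, crosses,
                   max_iter=50, step=0.5):
    """Subgradient sketch of BINDTWOPAGES (Figure 7). decode_page_X(adjust)
    returns the best noncrossing arc set for its page when every arc e has
    adjust[e] added to its model score."""
    u_a = {e: 0.0 for e in candidate_arcs}   # for gA(e) <= gB(e) + crossing arcs of gB
    u_b = {e: 0.0 for e in candidate_arcs}   # for gB(e) <= gA(e) + crossing arcs of gA
    neigh = {e: [f for f in candidate_arcs if f != e and crosses(e, f)]
             for e in candidate_arcs}
    g_a, g_b = set(), set()
    for _ in range(max_iter):
        # Each sub-problem maximizes its own score under adjusted arc weights.
        adj_a = {e: -u_a[e] + u_b[e] + sum(u_b[f] for f in neigh[e])
                 for e in candidate_arcs}
        adj_b = {e: -u_b[e] + u_a[e] + sum(u_a[f] for f in neigh[e])
                 for e in candidate_arcs}
        g_a, g_b = decode_page_a(adj_a), decode_page_b(adj_b)
        # Constraint violations; they also serve as the subgradient of the dual.
        viol_a = {e: (e in g_a) - (e in g_b) - sum(f in g_b for f in neigh[e])
                  for e in candidate_arcs}
        viol_b = {e: (e in g_b) - (e in g_a) - sum(f in g_a for f in neigh[e])
                  for e in candidate_arcs}
        if all(v <= 0 for v in viol_a.values()) and all(v <= 0 for v in viol_b.values()):
            return g_a | g_b                 # all constraints hold: merge the pages
        for e in candidate_arcs:             # dual update, kept nonnegative
            u_a[e] = max(0.0, u_a[e] + step * viol_a[e])
            u_b[e] = max(0.0, u_b[e] + step * viol_b[e])
    return g_a | g_b                         # fall back to the last pair of pages
```

If no constraint is violated, the two pages already agree in the required sense and their union is returned, mirroring lines 5-6 of Figure 7.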
The other three data sets are from SemEval 2014 Task 8 (Oepen et al., 2014), and the data splitting policy follows the shared task. All the four data sets are publicly available from LDC (Oepen et al., 2016). Experiments for CCG analysis were performed using automatically assigned POS-tags generated by a symbol-refined HMM tagger (Huang et al., 2010). For the other three data sets we use POStags provided by the shared task. We also use features extracted from trees. We consider two types of trees: (1) syntactic trees provided as a companion analysis by the shared task and CCGBank, (2) pseudo trees (Zhang et al., 2016) automatically extracted from semantic dependency annotations. We utilize the Mate parser (Bohnet, 2010) to generate pseudo trees for all data sets and also syntactic trees for CCG analysis, and use the companion syntactic analysis provided by the shared task for the other three data sets. 5.2 Statistical Disambiguation Our parsing algorithms can be applied to scores originated from any source, but in our experiments we chose to use the framework of global linear models, deriving our scores as: SCOREPART(s, p) = w⊤ϕ(s, p) ϕ is a feature-vector mapping and w is a parameter vector. p may refer to a single arc, a pair of neighboring arcs, or a general tuple of arcs, according to the definition of a parsing model. For details we refer to the source code. We chose the averaged structured perceptron (Collins, 2002) for parameter estimation. 5.3 Results of Practical Parsing We evaluate five decoding algorithms: M1 first-order exact algorithm, M2 second-order exact algorithm with singleside factorization, M3 second-order approximate algorithm2 with single-side factorization, M4 second-order approximate algorithm with single- and both-side factorization, M5 third-order approximate algorithm with single- and both-side factorization. 5.3.1 Effectiveness of Higher-Order Features Table 2 lists the accuracy of Maximum Subgraph parsing. The output of our parser was evaluated against each dependency in the corpus. We report unlabeled precision (UP), recall (UR) and f-score (UF). We can see that the first-order model obtains a considerably good precision, with rich features. 2The beam size is set to 4 for all approximate algorithms. 834 DeepBank EnjuBank CCGBank PCEDT UP UR UF UP UR UF UP UR UF UP UR UF Syntax Tree M1 MS 90.97 86.11 88.47 92.92 89.71 91.29 94.21 88.70 91.37 91.49 86.39 88.87 M2 91.04 87.47 89.22 93.03 90.48 91.74 93.95 88.96 91.39 91.11 87.56 89.30 M3 90.94 87.65 89.27 93.27 90.62 91.93 93.93 89.11 91.46 91.25 87.66 89.42 M4 91.02 87.78 89.37 93.18 90.65 91.90 94.02 89.14 91.51 91.43 87.98 89.67 M5 90.91 87.51 89.18 93.15 90.57 91.84 93.91 89.19 91.49 91.29 87.96 89.59 M4 NC 88.17 90.46 89.30 91.42 93.42 92.41 92.36 93.10 92.73 89.25 90.34 89.79 LR 90.72 88.80 89.75 92.75 92.49 92.62 93.50 92.48 92.98 90.98 89.04 90.00 Pseudo Tree M1 MS 90.75 86.13 88.38 93.38 90.20 91.76 94.21 88.55 91.29 90.62 85.69 88.08 M2 90.13 87.01 88.54 93.18 90.63 91.89 93.96 88.54 91.17 89.92 86.55 88.20 M3 90.39 87.20 88.77 93.20 90.64 91.90 93.90 88.98 91.37 90.07 86.69 88.35 M4 90.31 87.25 88.76 93.18 90.67 91.91 94.01 89.04 91.46 90.03 86.84 88.40 M5 90.17 87.11 88.61 93.13 90.62 91.86 93.87 89.00 91.37 90.21 86.93 88.54 M4 NC 88.39 89.85 89.11 91.63 93.24 92.43 92.83 92.97 92.90 88.51 88.97 88.74 LR 90.01 88.55 89.27 92.79 92.59 92.69 93.78 92.28 93.02 90.04 87.92 88.97 Table 2: Parsing accuracy evaluated on the development sets. “MS” is short for Maximum Subgraph parsing. 
“NC” and “LR” are short for naive combination and Lagrangian Relaxation. 40 50 60 70 80 90 100 10 20 30 40 50 60 70 80 90 100 Percentage of decoding termination Iteration DeepBank EnjuBank CCGBank PCEDT 50 60 70 80 90 100 10 20 30 40 50 60 70 80 90 100 Percentage of decoding termination Iteration DeepBank EnjuBank CCGBank PCEDT Figure 8: The termination rate of page binding. The left and right diagrams show the results obtained when applying syntactic and pseudo tree features respectively. But due to the low coverage of the noncrossing dependency graphs, a set of dependencies can not be built. This property has a great impact on recall. Furthermore, we can see that the introduction of higher-order features improves parsing substantially for all data sets, as expected. When pseudo trees are utilized, the improvement is marginal. We think the reason is that we have already included many higher-order features at the stage of pseudo tree parsing. 5.3.2 Effectiveness of Approximate Parsing Perhaps surprisingly approximate parsing with single-side second order features and cube pruning is even slightly better than exact parsing. This result demonstrates the effectiveness of generalized dependency parsing. Further including third-order features does not improve parsing accuracy. 5.3.3 Effectiveness of Page Binding When arcs are assigned to two sets, we can separately train two parsers for producing two types of noncrossing dependency graphs. These two parsers can be integrated using a naive merger or a LR-based merger. Table 2 also shows the accuracy obtained by the second-order model M4. The effectivenss of the Lagrangian Relaxation-based algorithm for binding pages is confirmed. 5.3.4 Termination Rate of Page Binding Figure 8 presents the termination rate with respective to the number of iterations. Here we apply M4 with syntax and pseudo tree features. In practice the Lagrangian Relaxation-based algorithm finds solutions in a few iterations for a majority of sentences. This suggests that even though the joint decoding is an iterative procedure, satisfactory efficiency is still available. 835 DeepBank EnjuBank CCGBank PCEDT UP UR UF UP UR UF UP UR UF UP UR UF M4-LR Syn 89.99 87.77 88.87 92.87 92.04 92.46 93.45 92.51 92.98 89.58 87.73 88.65 Pse 90.01 88.16 89.08 93.17 92.48 92.83 93.66 92.06 92.85 89.27 87.37 88.31 ZDSW Pse 89.04 88.85 88.95 92.92 92.83 92.87 92.49 92.30 92.40 - - - Peking 91.72 89.92 90.81 94.46 91.61 93.02 - - - 91.79 86.02 88.81 Table 3: Parsing accuracy evaluated on the test sets. 5.4 Comparison with Other Parsers We show the parsing results on the test data together with some relevant results from related work. We compare our parser with two other systems: (1) ZDSW (Zhang et al., 2016) is a transition-based system that obtains state-of-theart accuracy; we present the results of their best single parsing model; (2) Peking (Du et al., 2014) is the best-performing system in the shared task; it is a hybrid system that integrate more than ten submodels to achieve high accuracy. Our parser can be taken as a graph-based parser. It reaches stateof-the-art performance produced by the transitionbased system. On DeepBank and EnjuBank, the accuracy of our parser is equivalent to ZDSW, while on CCGBank, our parser is significantly better. There is still a gap between our single parsing model and Peking hybrid model. For a majority of NLP tasks, e.g. 
parsing (Surdeanu and Manning, 2010), semantic role labeling (Koomen et al., 2005), hybrid systems that combines complementary strength of heterogeneous models perform better. But good individual system is the cornerstone of hybrid systems. Better design of single system almost always benefits system ensemble. 6 Conclusion We propose a new data-driven parsing framework, namely book embedding, for semantic dependency analysis, viz. mapping from natural language sentences to bilexical semantic dependency graphs. Our work includes two contributions: 1. new algorithms for maximum noncrossing dependency parsing. 2. a Lagrangian Relaxation based algorithm to combine noncrossing dependency subgraphs. Experiments demonstrate the effectiveness of the book embedding framework across a wide range of conditions. Our graph-based parser obtains state-of-the-art accuracy. Acknowledgments This work was supported by 863 Program of China (2015AA015403), NSFC (61331011), and Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). Xiaojun Wan is the corresponding author. References Bernd Bohnet. 2010. Top accuracy and fast dependency parsing is not a contradiction. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010). Coling 2010 Organizing Committee, Beijing, China, pages 89–97. http://www.aclweb.org/anthology/C10-1011. Junjie Cao, Sheng Huang, Weiwei Sun, and Xiaojun Wan. 2017. Parsing to 1-endpoint-crossing, pagenumber-2 graphs. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1–8. https://doi.org/10.3115/1118693.1118694. Yantao Du, Weiwei Sun, and Xiaojun Wan. 2015. A data-driven, factorization parser for CCG dependency structures. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 1545– 1555. http://www.aclweb.org/anthology/P15-1149. Yantao Du, Fan Zhang, Weiwei Sun, and Xiaojun Wan. 2014. Peking: Profiling syntactic tree parsing techniques for semantic graph parsing. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014). Association for Computational Linguistics and Dublin City University, Dublin, Ireland, pages 459–464. http://www.aclweb.org/anthology/S14-2080. 836 Jason M. Eisner. 1996. Three new probabilistic models for dependency parsing: an exploration. In Proceedings of the 16th conference on Computational linguistics - Volume 1. Association for Computational Linguistics, Stroudsburg, PA, USA, pages 340–345. Daniel Flickinger, Yi Zhang, and Valia Kordoni. 2012. Deepbank: A dynamically annotated treebank of the wall street journal. In Proceedings of the Eleventh International Workshop on Treebanks and Linguistic Theories. pages 85–96. Carlos G´omez-Rodr´ıguez and Joakim Nivre. 2010. A transition-based parser for 2-planar dependency structures. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. 
Association for Computational Linguistics, Uppsala, Sweden, pages 1492–1501. http://www.aclweb.org/anthology/P10-1151. Jan Hajic, Eva Hajicov´a, Jarmila Panevov´a, Petr Sgall, Ondej Bojar, Silvie Cinkov´a, Eva Fuc´ıkov´a, Marie Mikulov´a, Petr Pajas, Jan Popelka, Jir´ı Semeck´y, Jana Sindlerov´a, Jan Step´anek, Josef Toman, Zdenka Uresov´a, and Zdenek Zabokrtsk´y. 2012. Announcing prague czech-english dependency treebank 2.0. In Proceedings of the 8th International Conference on Language Resources and Evaluation. Istanbul, Turkey. Julia Hockenmaier and Mark Steedman. 2007. CCGbank: A corpus of CCG derivations and dependency structures extracted from the penn treebank. Computational Linguistics 33(3):355–396. Zhongqiang Huang, Mary Harper, and Slav Petrov. 2010. Self-training with products of latent variable grammars. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Cambridge, MA, pages 12–22. http://www.aclweb.org/anthology/D10-1002. Angelina Ivanova, Stephan Oepen, Lilja Øvrelid, and Dan Flickinger. 2012. Who did what to whom? A contrastive study of syntacto-semantic dependencies. In Proceedings of the Sixth Linguistic Annotation Workshop. Jeju, Republic of Korea, pages 2–11. Peter Koomen, Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2005. Generalized inference with multiple semantic role labeling systems. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005). Association for Computational Linguistics, Ann Arbor, Michigan, pages 181–184. Sandra K¨ubler, Ryan T. McDonald, and Joakim Nivre. 2009. Dependency Parsing. Synthesis Lectures on Human Language Technologies. Morgan & Claypool. Marco Kuhlmann and Peter Jonsson. 2015. Parsing to noncrossing dependency graphs. Transactions of the Association for Computational Linguistics 3:559– 570. Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL-2006)). volume 6, pages 81–88. Ryan T. McDonald and Joakim Nivre. 2011. Analyzing and integrating dependency parsers. Computational Linguistics 37(1):197–230. Yusuke Miyao, Takashi Ninomiya, and Jun ichi Tsujii. 2004. Corpus-oriented grammar development for acquiring a head-driven phrase structure grammar from the penn treebank. In IJCNLP. pages 684–693. Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkov´a, Dan Flickinger, Jan Hajiˇc, Angelina Ivanova, and Zdeˇnka Ureˇsov´a. 2016. Semantic Dependency Parsing (SDP) graph banks release 1.0 LDC2016T10. Web Download. Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkov´a, Dan Flickinger, Jan Hajic, and Zdenka Uresov´a. 2015. Semeval 2015 task 18: Broad-coverage semantic dependency parsing. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015). Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Dan Flickinger, Jan Hajic, Angelina Ivanova, and Yi Zhang. 2014. Semeval 2014 task 8: Broad-coverage semantic dependency parsing. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014). Association for Computational Linguistics and Dublin City University, Dublin, Ireland, pages 63–72. http://www.aclweb.org/anthology/S14-2008. Mark Steedman. 2000. The syntactic process. MIT Press, Cambridge, MA, USA. Mihai Surdeanu and Christopher D. Manning. 2010. 
Ensemble models for dependency parsing: Cheap and good? In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Los Angeles, California, pages 649–652. http://www.aclweb.org/anthology/N10-1091. Ivan Titov, James Henderson, Paola Merlo, and Gabriele Musillo. 2009. Online graph planarisation for synchronous parsing of semantic and syntactic dependencies. In Proceedings of the 21st international jont conference on Artifical intelligence. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, pages 1562–1567. http://dl.acm.org/citation.cfm?id=1661445.1661696. Hao Zhang and Ryan McDonald. 2012. Generalized higher-order dependency parsing with cube pruning. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational 837 Linguistics, Jeju Island, Korea, pages 320–331. http://www.aclweb.org/anthology/D12-1030. Xun Zhang, Yantao Du, Weiwei Sun, and Xiaojun Wan. 2016. Transition-based parsing for deep dependency structures. Computational Linguistics 42(3):353–389. http://aclweb.org/anthology/J163001. 838
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 839–849 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1078 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 839–849 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1078 Neural Word Segmentation with Rich Pretraining Jie Yang∗and Yue Zhang∗and Fei Dong Singapore University of Technology and Design {jie yang, fei dong}@mymail.sutd.edu.sg yue [email protected] Abstract Neural word segmentation research has benefited from large-scale raw texts by leveraging them for pretraining character and word embeddings. On the other hand, statistical segmentation research has exploited richer sources of external information, such as punctuation, automatic segmentation and POS. We investigate the effectiveness of a range of external training sources for neural word segmentation by building a modular segmentation model, pretraining the most important submodule using rich external sources. Results show that such pretraining significantly improves the model, leading to accuracies competitive to the best methods on six benchmarks. 1 Introduction There has been a recent shift of research attention in the word segmentation literature from statistical methods to deep learning (Zheng et al., 2013; Pei et al., 2014; Morita et al., 2015; Chen et al., 2015b; Cai and Zhao, 2016; Zhang et al., 2016b). Neural network models have been exploited due to their strength in non-sparse representation learning and non-linear power in feature combination, which have led to advances in many NLP tasks. So far, neural word segmentors have given comparable accuracies to the best statictical models. With respect to non-sparse representation, character embeddings have been exploited as a foundation of neural word segmentors. They serve to reduce sparsity of character ngrams, allowing, for example, “猫(cat) 躺(lie) 在(in) 墙角(corner)” to be connected with “狗(dog) 蹲(sit) 在(in) 墙 ∗Equal contribution. 角(corner)” (Zheng et al., 2013), which is infeasible by using sparse one-hot character features. In addition to character embeddings, distributed representations of character bigrams (Mansur et al., 2013; Pei et al., 2014) and words (Morita et al., 2015; Zhang et al., 2016b) have also been shown to improve segmentation accuracies. With respect to non-linear modeling power, various network structures have been exploited to represent contexts for segmentation disambiguation, including multi-layer perceptrons on fivecharacter windows (Zheng et al., 2013; Mansur et al., 2013; Pei et al., 2014; Chen et al., 2015a), as well as LSTMs on characters (Chen et al., 2015b; Xu and Sun, 2016) and words (Morita et al., 2015; Cai and Zhao, 2016; Zhang et al., 2016b). For structured learning and inference, CRF has been used for character sequence labelling models (Pei et al., 2014; Chen et al., 2015b) and structural beam search has been used for word-based segmentors (Cai and Zhao, 2016; Zhang et al., 2016b). Previous research has shown that segmentation accuracies can be improved by pretraining character and word embeddings over large Chinese texts, which is consistent with findings on other NLP tasks, such as parsing (Andor et al., 2016). 
Pretraining can be regarded as one way of leveraging external resources to improve accuracies, which is practically highly useful and has become a standard practice in neural NLP. On the other hand, statistical segmentation research has exploited raw texts for semi-supervised learning, by collecting clues from raw texts more thoroughly such as mutual information and punctuation (Li and Sun, 2009; Sun and Xu, 2011), and making use of selfpredictions (Wang et al., 2011; Liu and Zhang, 2012). It has also utilised heterogenous annotations such as POS (Ng and Low, 2004; Zhang and Clark, 2008) and segmentation under different 839 State Recognized words Partial word Incoming chars Next Action state0 [ ] φ [我去过火车站那边] SEP state1 [ ] 我 [去过火车站那边] SEP state2 [我] 去 [过火车站那边] SEP state3 [我,去] 过 [火车站那边] SEP state4 [我,去,过] 火 [车站那边] APP state5 [我,去,过] 火车 [站那边] APP state6 [我,去,过] 火车站 [那边] SEP state7 [我,去,过,火车站] 那 [边] APP state8 [我,去,过,火车站] 那边 [ ] FIN state9 [我,去,过,火车站,那边] φ [ ] - Table 1: A transition based word segmentation example. standards (Jiang et al., 2009). To our knowledge, such rich external information has not been systematically investigated for neural segmentation. We fill this gap by investigating rich external pretraining for neural segmentation. Following Cai and Zhao (2016) and Zhang et al. (2016b), we adopt a globally optimised beam-search framework for neural structured prediction (Andor et al., 2016; Zhou et al., 2015; Wiseman and Rush, 2016), which allows word information to be modelled explicitly. Different from previous work, we make our model conceptually simple and modular, so that the most important sub module, namely a five-character window context, can be pretrained using external data. We adopt a multi-task learning strategy (Collobert et al., 2011), casting each external source of information as a auxiliary classification task, sharing a five-character window network. After pretraining, the character window network is used to initialize the corresponding module in our segmentor. Results on 6 different benchmarks show that our method outperforms the best statistical and neural segmentation models consistently, giving the best reported results on 5 datasets in different domains and genres. Our implementation is based on LibN3L1 (Zhang et al., 2016a). Code and models can be downloaded from http://gitHub. com/jiesutd/RichWordSegmentor 2 Related Work Work on statistical word segmentation dates back to the 1990s (Sproat et al., 1996). State-of-the-art approaches include character sequence labeling models (Xue et al., 2003) using CRFs (Peng et al., 1https://github.com/SUTDNLP/LibN3L 2004; Zhao et al., 2006) and max-margin structured models leveraging word features (Zhang and Clark, 2007; Sun et al., 2009; Sun, 2010). Semisupervised methods have been applied to both character-based and word-based models, exploring external training data for better segmentation (Sun and Xu, 2011; Wang et al., 2011; Liu and Zhang, 2012; Zhang et al., 2013). Our work belongs to recent neural word segmentation. To our knowledge, there has been no work in the literature systematically investigating rich external resources for neural word segmentation training. Closest in spirit to our work, Sun and Xu (2011) empirically studied the use of various external resources for enhancing a statistical segmentor, including character mutual information, access variety information, punctuation and other statistical information. 
Their baseline is similar to ours in the sense that both character and word contexts are considered. On the other hand, their model is statistical while ours is neural. Consequently, they integrate external knowledge as features, while we integrate it by shared network parameters. Our results show a similar degree of error reduction compared to theirs by using external data. Our model inherits from previous findings on context representations, such as character windows (Mansur et al., 2013; Pei et al., 2014; Chen et al., 2015a) and LSTMs (Chen et al., 2015b; Xu and Sun, 2016). Similar to Zhang et al. (2016b) and Cai and Zhao (2016), we use word context on top of character context. However, words play a relatively less important role in our model, and we find that word LSTM, which has been used by all previous neural segmentation work, is unnecessary for our model. Our model is conceptually simpler and more modularised compared with 840 S A hidden layer output 车 站 那 边 w-k Recognized words Partial word Incoming chars w-2 我 之前 去 过 火 w-1 P c0 c1 . . . cm . . . . . . . . . . . . XW XP XC h Figure 1: Overall model. Zhang et al. (2016b) and Cai and Zhao (2016), allowing a central sub module, namely a fivecharacter context window, to be pretrained. 3 Model Our segmentor works incrementally from left to right, as the example shown in Table 1. At each step, the state consists of a sequence of words that have been fully recognized, denoted as W = [w−k, w−k+1, ..., w−1], a current partially recognized word P, and a sequence of next incoming characters, denoted as C = [c0, c1, ..., cm], as shown in Figure 1. Given an input sentence, W and P are initialized to [ ] and φ, respectively, and C contains all the input characters. At each step, a decision is made on c0, either appending it as a part of P, or seperating it as the beginning of a new word. The incremental process repeats until C is empty and P is null again (C = [ ], P = φ). Formally, the process can be regarded as a state-transition process, where a state is a tuple S = ⟨W, P, C⟩, and the transition actions include SEP (seperate) and APP (append), as shown by the deduction system in Figure 22. In the figure, V denotes the score of a state, given by a neural network model. The score of the initial state (i.e. axiom) is 0, and the score of a non-axiom state is the sum of scores of all incremental decisions resulting in the state. Similar to Zhang et al. (2016b) and Cai and Zhao (2016), our model is a global structural model, using the overall score to disambiguate states, which correspond to sequences of inter-dependent transition actions. Different from previous work, the structure of 2An end of sentence symbol ⟨/s⟩is added to the input so that the last partial word can be put onto W as a full word before segmentation finishes. Axiom: S = ⟨[ ], φ, C⟩, V = 0 Goal: S = ⟨W, φ, [ ]⟩, V = Vfinal SEP: S = ⟨W, P, c0|C⟩, V S ′ = ⟨W|P, c0, C⟩, V ′ = V + Score(S, SEP) APP: S = ⟨W, P, c0|C⟩, V S ′ = ⟨W, P ⊕c0, C⟩, V ′ = V + Score(S, APP) Figure 2: Deduction system, where ⊕denotes string concatenation. our scoring network is shown in Figure 1. It consists of three main layers. On the bottom is a representation layer, which derives dense representations XW , XP and XC for W, P and C, respectively. We compare various distributed representations and neural network structures for learning XW , XP and XC, detailed in Section 3.1. 
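The transition process just described (the example in Table 1 and the deduction system of Figure 2) is easy to pin down in code. The short Python sketch below is written for this edit rather than taken from the authors' release: it simply replays a SEP/APP action sequence over the running example, and the final FIN step of Table 1 is folded into a flush of the last partial word.

```python
def segment(chars, actions):
    """Replay SEP/APP decisions over the input characters: SEP starts a new
    word with the next character, APP appends it to the partial word."""
    words, partial = [], ""
    for ch, act in zip(chars, actions):
        if act == "SEP":
            if partial:
                words.append(partial)
            partial = ch
        else:  # "APP"
            partial += ch
    if partial:
        words.append(partial)   # flush the last word (the FIN step of Table 1)
    return words

chars = list("我去过火车站那边")
actions = ["SEP", "SEP", "SEP", "SEP", "APP", "APP", "SEP", "APP"]
print(segment(chars, actions))  # ['我', '去', '过', '火车站', '那边']
```

The scoring network described next assigns a score to each such decision in context, and the segmentor outputs the action sequence whose summed score is highest.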
On top of the representation layer, we use a hidden layer to merge XW , XP and XC into a single vector h = tanh(WhW ·XW +WhP ·XP +WhC·XC+bh) (1) The hidden feature vector h is used to represent the state S = ⟨W, P, C⟩, for calculating the scores of the next action. In particular, a linear output layer with two nodes is employed: o = Wo · h + bo (2) The first and second node of o represent the scores of SEP and APP given S, namely Score(S, SEP), Score(S, APP) respectively. 3.1 Representation Learning Characters. We investigate two different approaches to encode incoming characters, namely a window approach and an LSTM approach. For the former, we follow prior methods (Xue et al., 2003; Pei et al., 2014), using five-character window [c−2, c−1, c0, c1, c2] to represent incoming characters. Shown in Figure 3, a multi-layer perceptron (MLP) is employed to derive a five-character window vector DC from single-character vector representations Vc−2, Vc−1, Vc0, Vc1, Vc2. DC = MLP([Vc−2; Vc−1; Vc0; Vc1; Vc2]) (3) For the latter, we follow recent work (Chen et al., 2015b; Zhang et al., 2016b), using a bidirectional LSTM to encode input character sequence.3 In particular, the bi-directional LSTM 3The LSTM variation with coupled input and forget gate but without peephole connections is applied (Gers and Schmidhuber, 2000) 841 hidden vector [←− hC(c0); −→ hC(c0)] of the next incoming character c0 is used to represent the coming characters [c0, c1, ...] given a state. Intuitively, a five-character window provides a local context from which the meaning of the middle character can be better disambiguated. LSTM, on the other hand, captures larger contexts, which can contain more useful clues for dismbiguation but also irrelevant information. It is therefore interesting to investigate a combination of their strengths, by first deriving a locally-disambiguated version of c0, and then feed it to LSTM for a globally disambiguated representation. Now with regard to the single-character vector representation Vci(i ∈[−2, 2]), we follow previous work and consider both character embedding ec(ci) and character-bigram embedding eb(ci, ci+1) , investigating the effect of each on the accuracies. When both ec(ci) and eb(ci, ci+1) are utilized, the concatenated vector is taken as Vci. Partial Word. We take a very simple approach to representing the partial word P, by using the embedding vectors of its first and last characters, as well as the embedding of its length. Length embeddings are randomly initialized and then tuned in model training. XP has relatively less influence on the empirical segmentation accuracies. XP = [ec(P[0]); ec(P[−1]); el(LEN(P))] (4) Word. Similar to the character case, we investigate two different approaches to encoding incoming characters, namely a window approach and an LSTM approach. For the former, we follow prior methods (Zhang and Clark, 2007; Sun, 2010), using the two-word window [w−2, w−1] to represent recognized words. A hidden layer is employed to derive a two-word vector XW from single word embeddings ew(w−2) and ew(w−1). XW = tanh(Ww[ew(w−2); ew(w−1)] + bw) (5) For the latter, we follow Zhang et al. (2016b) and Cai and Zhao (2016), using an uni-directional LSTM on words that have been recognized. 3.2 Pretraining Neural network models for NLP benefit from pretraining of word/character embeddings, learning distributed sementic information from large raw texts for reducing sparsity. 
The three basic elements in our neural segmentor, namely characters, character bigrams and words, can all be pretrained . . . . . . . . . . . . .  .  . . . . MLP ... ... ... punc. silver hete. POS shared parameters main training pretraining Bi-LSTM S A hidden layer output ... ... ... ... XW XP XC h DC Vc-2 Vc-1 Vc0 Vc1 Vc2 Figure 3: Shared character representation. over large unsegmented data. We pretrain the fivecharacter window network in Figure 3 as an unit, learning the MLP parameter together with character and bigram embeddings. We consider four types of commonly explored external data to this end, all of which have been studied for statistical word segmentation, but not for neural network segmentors. Raw Text. Although raw texts do not contain explicit word boundary information, statistics such as mutual information between consecutive characters can be useful features for guiding segmentation (Sun and Xu, 2011). For neural segmentation, these distributional statistics can be implicitly learned by pretraining character embeddings. We therefore consider a more explicit clue for pretraining our character window network, namely punctuations (Li and Sun, 2009). Punctuation can serve as a type of explicit markup (Spitkovsky et al., 2010), indicating that the two characters on its left and right belong to two different words. We leverage this source of information by extracting character five-grams excluding punctuation from raw sentences, using them as inputs to classify whether there is punctuation before middle character. Denoting the resulting five character window as [c−2, c−1, c0, c1, c2], the MLP in Figure 3 is used to derive its representation DC, which is then fed to a softmax layer for binary classification: P(punc) = softmax(Wpunc · DC + bpunc) (6) Here P(punc) indicates the probability of a punctuation mark existing before c0. Standard backpropagation training of the MLP in Figure 3 can be done jointly with the training of Wpunc and bpunc. After such training, the embedding Vci and MLP values can be used to initialize the corresponding parameters for DC in the main segmentor, before 842 its training. Automatically Segmented Text. Large texts automatically segmented by a baseline segmentor can be used for self-training (Liu and Zhang, 2012) or deriving statistical features (Wang et al., 2011). We adopt a simple strategy, taking automatically segmented text as silver data to pretrain the five-character window network. Given [c−2, c−1, c0, c1.c2], DC is derived using the MLP in Figure 3, and then used to classify the segmentation of c0 into B(begining)/M(middle)/E(end)/S(single character word) labels. P(silver) = softmax(Wsilv · DC + bsilv) (7) Here Wsilv and bsilv are model parameters. Training can be done in the same way as training with punctuation. Heterogenous Training Data. Multiple segmentation corpora exist for Chinese, with different segmentation granularities. There has been investigation on leveraging two corpora under different annotation standards to improve statistical segmentation (Jiang et al., 2009). We try to utilize heterogenous treebanks by taking an external treebank as labeled data, training a B/M/E/S classifier for the character windows network. P(hete) = softmax(Whete · DC + bhete) (8) POS Data. Previous research has shown that POS information is closely related to segmentation (Ng and Low, 2004; Zhang and Clark, 2008). 
We verify the utility of POS information for our segmentor by pretraining a classifier that predicts the POS on each character, according to the character window representation DC. In particular, given [c−2, c−1, c0, c1, c2], the POS of the word that c0 belongs to is used as the output. P(pos) = softmax(Wpos · DC + bpos) (9) Multitask Learning. While each type of external training data can offer one source of segmentation information, different external data can be complimentary to each other. We aim to inject all sources of information into the character window representation DC by using it as a shared representation for different classification tasks. Neural model have been shown capable of doing multi-task learning via parameter sharing (Collobert et al., 2011). Shown in Figure 3, in our Algorithm 1: Training Input : (xi, yi) Parameters: Θ Process: agenda ←(S = ⟨[ ], φ, Xi⟩, V = 0) for j in [0:LEN(Xi)] do beam = [] for ˆy in agenda do ˆy′ = ACTION(ˆy, SEP) ADD(ˆy′, beam) ˆy′ = ACTION(ˆy, APP) ADD(ˆy′, beam) end agenda ←TOP(beam, B) if yi j /∈agenda then ˆyj = BESTIN(agenda) UPDATE(yi j, ˆyj,Θ) return end end ˆy = BESTIN(agenda) UPDATE(yi, ˆy,Θ) return case, the output layer for each task is independent, but the hidden layer DC and all layers below DC are shared. For training with all sources above, we randomly sample sentences from the Punc./Autoseg/Heter./POS sources with the ratio of 10/1/1/1, for each sentence in punctuation corpus we take only 2 characters (character before and after the punctuation) as input instances. 4 Decoding and Training To train the main segmentor, we adopt the global transition-based learning and beam-search strategy of Zhang and Clark (2011). For decoding, standard beam search is used, where the B best partial output hypotheses at each step are maintained in an agenda. Initially, the agenda contains only the start state. At each step, all hypotheses in the agenda are expanded, by applying all possible actions and B highest scored resulting hypotheses are used as the agenda for the next step. For training, the same decoding process is applied to each training example (xi, yi). At step j, if the gold-standard sequence of transition actions yi j falls out of the agenda, max-margin update is performed by taking the current best hypothesis ˆyj in the beam as a negative example, and yi j as 843 Paramater Value Paramater Value α 0.01 size(ec) 50 λ 10−8 size(eb) 50 p 0.2 size(ew) 50 η 0.2 size(el) 20 MLP layer 2 size(XC) 150 beam B 8 size(XP ) 50 size(h) 200 size(XW ) 100 Table 2: Hyper-parameter values. a positive example. The loss function is l(ˆyj, yi j) = max((score(ˆyj) + η · δ(ˆyj, yi j) −score(yi j)), 0), (10) where δ(ˆyj, yi j) is the number of incorrect local decisions in ˆyj, and η controls the score margin. The strategy above is early-update (Collins and Roark, 2004). On the other hand, if the goldstandard hypothesis does not fall out of the agenda until the full sentence has been segmented, a final update is made between the highest scored hypothesis ˆy (non-gold standard) in the agenda and the gold-standard yi, using exactly the same loss function. Pseudocode for the online learning algorithm is shown in Algorithm 1. We use Adagrad (Duchi et al., 2011) to optimize model parameters, with an initial learning rate α. L2 regularization and dropout (Srivastava et al., 2014) on input are used to reduce overfitting, with a L2 weight λ and a dropout rate p. 
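Algorithm 1 together with the max-margin loss of Equation 10 condenses into a short sketch. In the Python illustration below, which is mine rather than the released LibN3L code, a state is represented only by its action prefix; score_action stands in for the neural scorer of Section 3 and update for the margin-based parameter update, so their signatures are assumptions of this sketch, and the toy call at the end only demonstrates the calling convention.

```python
def train_one_sentence(chars, gold_actions, score_action, update, beam_size=8):
    """Beam-search training with early update (a sketch of Algorithm 1)."""
    agenda = [((), 0.0)]                            # (action prefix, score)
    for j in range(len(chars)):
        beam = []
        for prefix, score in agenda:
            for act in ("SEP", "APP"):
                beam.append((prefix + (act,),
                             score + score_action(chars, prefix, act)))
        beam.sort(key=lambda item: item[1], reverse=True)
        agenda = beam[:beam_size]                   # keep the B best hypotheses
        gold_prefix = tuple(gold_actions[:j + 1])
        if all(prefix != gold_prefix for prefix, _ in agenda):
            update(gold_prefix, agenda[0][0])       # early update, then stop
            return
    best = agenda[0][0]
    if best != tuple(gold_actions):
        update(tuple(gold_actions), best)           # final update

# Toy stand-ins, only to show the calling convention.
train_one_sentence(list("我去过"), ["SEP", "SEP", "SEP"],
                   score_action=lambda chars, prefix, act: 0.1 if act == "APP" else 0.0,
                   update=lambda gold, pred: print("update:", gold, "vs", pred))
```

In the full model, update would perform an Adagrad step on the margin loss of Equation 10, with the number of incorrect local decisions in the predicted prefix scaling the margin.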
All the parameters in our model are randomly initialized to a value (−r, r), where r = q 6.0 fanin+fanout (Bengio, 2012). We fine-tune character and character bigram embeddings, but not word embeddings, acccording to Zhang et al. (2016b). 5 Experiments 5.1 Experimental Settings Data. We use Chinese Treebank 6.0 (CTB6) (Xue et al., 2005) as our main dataset. Training, development and test set splits follow previous work (Zhang et al., 2014). In order to verify the robustness of our model, we additionally use SIGHAN 2005 bake-off (Emerson, 2005) and NLPCC 2016 shared task for Weibo segmentation (Qiu et al., 2016) as test datasets, where the standard splits are used. For pretraining embedding of Source #Chars #Words #Sents Raw data Gigaword 116.5m – – Auto seg Gigaword 398.2m 238.6m 12.04m Hete. People’s Daily 10.14m 6.17m 104k POS People’s Daily 10.14m 6.17m 104k Table 3: Statistics of external data. words, characters and character bigrams, we use Chinese Gigaword (simplified Chinese sections)4, automatically segmented using ZPar 0.6 off-theshelf (Zhang and Clark, 2007), the statictics of which are shown in Table 3. For pretraining character representations, we extract punctuation classification data from the Gigaword corpus, and use the word-based ZPar and a standard character-based CRF model (Tseng et al., 2005) to obtain automatic segmentation results. We compare pretraining using ZPar results only and using results that both segmentors agree on. For heterogenous segmentation corpus and POS data, we use a People’s Daily corpus of 5 months5. Statistics are listed in Table 3. Evaluation. The standard word precision, recall and F1 measure (Emerson, 2005) are used to evaluate segmentation performances. Hyper-parameter Values. We adopt commonly used values for most hyperparameters, but tuned the sizes of hidden layers on the development set. The values are summarized in Table 2. 5.2 Development Experiments We perform development experiments to verify the usefulness of various context representations, network configurations and different pretraining methods, respectively. 5.2.1 Context Representations The influence of character and word context representations are empirically studied by varying the network structures for XC and XW in Figure 1, respectively. All the experiments in this section are performed using a beam size of 8. Character Context. We fix the word representation XW to a 2-word window and compare different character context representations. The results are shown in Table 4, where “no char” represents our model without XC, “5-char window” represents a five-character window context, “char LSTM” represents character LSTM context and 4https://catalog.ldc.upenn.edu/LDC2011T13 5http://www.icl.pku.edu.cn/icl res 844 Character P R F No char 82.19 87.20 84.62 5-char window 95.33 95.50 95.41 char LSTM 95.21 95.82 95.51 5-char window+LSTM 95.77 95.95 95.86 -char emb 95.20 95.19 95.20 -bichar emb 93.87 94.67 94.27 Table 4: Influence of character contexts. “5-char window + LSTM” represents a combination, detailed in Section 3.1. “-char emb” and “bichar emb” represent the combined window and LSTM context without character and characterbigram information, respectively. As can be seen from the table, without character information, the F-score is 84.62%, demonstrating the necessity of character contexts. Using window and LSTM representations, the Fscores increase to 95.41% and 95.51%, respectively. 
A combination of the two lead to further improvement, showing that local and global character contexts are indeed complementary, as hypothesized in Section 3.1. Finally, by removing character and character-bigram embeddings, the F-score decreases to 95.20% and 94.27%, respectively, which suggests that character bigrams are more useful compared to character unigrams. This is likely because they contain more distinct tokens and hence offer a larger parameter space. Word Context. The influence of various word contexts are shown in Table 5. Without using word information, our segmentor gives an F-score of 95.66% on the development data. Using a context of only w−1 (1-word window), the F-measure increases to 95.78%. This shows that word contexts are far less important in our model compared to character contexts, and also compared to word contexts in previous word-based segmentors (Zhang et al., 2016b; Cai and Zhao, 2016). This is likely due to the difference in our neural network structures, and that we fine-tune both character and character bigram embeddings, which significantly enlarges the adjustable parameter space as compared with Zhang et al. (2016b). The fact that word contexts can contribute relatively less than characters in a word is also not surprising in the sense that word-based neural segmentors do not outperform the best character-based models by large margins. Given that character context is what we pretrain, our model relies more heavily Word P R F No word 95.50 95.83 95.66 1-word window 95.70 95.85 95.78 2-word window 95.77 95.95 95.86 3-word window 95.80 95.85 95.83 word LSTM 95.71 95.97 95.84 2-word window+LSTM 95.74 95.95 95.84 Table 5: Influence of word contexts. on them. With both w−2 and w−1 being used for the context, the F-score further increases to 95.86%, showing that a 2-word window is useful by offering more contextual information. On the other hand, when w−3 is also considered, the F-score does not improve further. This is consistent with previous findings of statistical word segmentation (Zhang and Clark, 2007), which adopt a 2-word context. Interestingly, using a word LSTM does not bring further improvements, even when it is combined with a window context. This suggests that global word contexts may not offer crucial additional information compared with local word contexts. Intuitively, words are significantly less polysemous compared with characters, and hence can serve as effective contexts even if used locally, to supplement a more crucial character context. 5.2.2 Stuctured Learning and Inference We verify the effectiveness of structured learning and inference by measuring the influence of beam size on the baseline segmentor. Figure 4 shows the F-scores against different numbers of training iterations with beam size 1,2,4,8 and 16, respectively. When the beam size is 1, the inference is local and greedy. As the size of the beam increases, more global structural ambiguities can be resolved since learning is designed to guide search. A contrast between beam sizes 1 and 2 demonstrates the usefulness of structured learning and inference. As the beam size increases, the gain by doubling the beam size decreases. We choose a beam size of 8 for the remaining experiments for a tradeoff between speed and accuracy. 5.2.3 Pretraining Results Table 6 shows the effectiveness of rich pretraining of Dc on the development set. In particular, by using punctuation information, the F-score increases from 95.86% to 96.25%, with a relative error reduction of 9.4%. 
This is consistent with the observation of Sun and Xu (2011), who show that punctuation is more effective than mutual information and access variety as semi-supervised data for a statistical word segmentation model. With automatically-segmented data6, heterogeneous segmentation and POS information, the F-score increases to 96.26%, 96.27% and 96.22%, respectively, showing the relevance of all information sources to neural segmentation, which is consistent with observations made for statistical word segmentation (Jiang et al., 2009; Wang et al., 2011; Zhang et al., 2013). Finally, by integrating all of the above information via multi-task learning, the F-score is further improved to 96.48%, a 15.0% relative error reduction.

Figure 4: F1 measure against the training epoch, for beam sizes 1, 2, 4, 8 and 16.

Table 6: Influence of pretraining.
Pretrain              P      R      F      ER%
Baseline              95.77  95.95  95.86  0
+Punc. pretrain       96.36  96.13  96.25  -9.4
+Auto-seg pretrain    96.23  96.29  96.26  -9.7
+Heter-seg pretrain   96.28  96.27  96.27  -9.9
+POS pretrain         96.16  96.28  96.22  -8.7
+Multitask pretrain   96.54  96.42  96.48  -15.0

5.2.4 Comparison with Zhang et al. (2016b)
Both our model and Zhang et al. (2016b) use global learning and beam search, but our networks differ. Zhang et al. (2016b) encode the action history with an LSTM, whereas we use partial-word information rather than action information. In addition, the character and character-bigram embeddings are fine-tuned in our model, while Zhang et al. (2016b) keep the embeddings fixed during training.

6By using ZPar alone, the auto-segmented result is 96.02%, lower than when using only the results on which ZPar and the CRF segmentor agree.

Figure 5: F1 measure against the sentence length, for the multitask pretraining model, the baseline, and Zhang et al. (2016b).

We study the F-measure distribution with respect to sentence length for our baseline model, our multitask pretraining model and Zhang et al. (2016b). In particular, we cluster the sentences in the development dataset into 6 categories based on their length and evaluate the F1-value of each category. As shown in Figure 5, the models give different error distributions, with our models being more robust to sentence length than Zhang et al. (2016b). Their model is better on very short sentences, but worse in all other cases. This shows the relative advantages of our model.

5.3 Final Results
Our final results on CTB6 are shown in Table 7, which lists the results of several current state-of-the-art methods. Without multitask pretraining, our model gives an F-score of 95.44%, which is higher than the neural segmentor of Zhang et al. (2016b), which gives the best accuracy among pure neural segmentors on this dataset. By using multitask pretraining, the result increases to 96.21%, with a relative error reduction of 16.9%. In comparison, Sun and Xu (2011) investigated heterogeneous semi-supervised learning on a state-of-the-art statistical model, obtaining a relative error reduction of 13.8%. Our findings show that external data can be as useful for neural segmentation as for statistical segmentation. Our final results compare favourably to the best statistical models, including those using semi-supervised learning (Sun and Xu, 2011; Wang et al., 2011), and those leveraging joint POS and syntactic information (Zhang et al., 2014). In addition, our model also outperforms the best neural models, in particular Zhang et al.
(2016b)*, which is a hybrid neural and statistical model, integrating man846 Models P R F Baseline 95.3 95.5 95.4 Punc. pretrain 96.0 95.6 95.8 Auto-seg pretrain 95.8 95.6 95.7 Multitask pretrain 96.4 96.0 96.2 Sun and Xu (2011) baseline 95.2 94.9 95.1 Sun and Xu (2011) multi-source semi 95.9 95.6 95.7 Zhang et al. (2016b) neural 95.3 94.7 95.0 Zhang et al. (2016b)* hybrid 96.1 95.8 96.0 Chen et al. (2015a) window 95.7 95.8 95.8 Chen et al. (2015b) char LSTM 96.2 95.8 96.0 Zhang et al. (2014) POS and syntax – – 95.7 Wang et al. (2011) statistical semi 95.8 95.8 95.8 Zhang and Clark (2011) statistical 95.5 94.8 95.1 Table 7: Main results on CTB6. ual discrete features into their word-based neural model. We achieve the best reported F-score on this dataset. To our knowledge, this is the first time a pure neural network model outperforms all existing methods on this dataset, allowing the use of external data 7. We also evaluate our model pretrained only on punctuation and auto-segmented data, which do not include additional manual labels. The results on CTB test data show the accuracy of 95.8% and 95.7%, respectivley, which are comparable with those statistical semi-supervised methods (Sun and Xu, 2011; Wang et al., 2011). They are also among the top performance methods in Table 7. Compared with discrete semisupervised methods (Sun and Xu, 2011; Wang et al., 2011), our semi-supervised model is free from hand-crafted features. In addition to CTB6, which has been the most commonly adopted by recent segmentation research, we additionally evaluate our results on the SIGHAN 2005 bakeoff and Weibo datasets, to examine cross domain robustness. Different stateof-the-art methods for which results are recorded on these datasets are listed in Table 8. Most neural models reported results only on the PKU 8 and MSR datasets of the bakeoff test sets, which are in simplified Chinese. The AS and CityU corpora are in traditional Chinese, sourced from Taiwan and 7 We did not investigate the use of lexicons (Chen et al., 2015a,b) in our research, since lexicons might cover different OOV in the training and test data, and hence directly affecting the accuracies, which makes it relatively difficult to compare different methods fairly unless a single lexicon is used for all methods, as observed by Cai and Zhao (2016). 8We notice that both PKU dataset and our heterogenous data are based on the news of People’s Daily. While the heterogenous data only collect news from Febuary 1998 to June 1998, it does not contain the sentences in the dev and test datasets of PKU. F1 measure PKU MSR AS CityU Weibo Multitask pretrain 96.3 97.5 95.7 96.9 95.5 Cai and Zhao (2016) 95.5 96.5 – – – Zhang et al. (2016b) 95.1 97.0 – – – Zhang et al. (2016b)* 95.7 97.7 – – – Pei et al. (2014) 95.2 97.2 – – – Sun et al. (2012) 95.4 97.4 – – – Zhang and Clark (2007) 94.5 97.2 94.6 95.1 – Zhang et al. (2006) 95.1 97.1 95.1 95.1 – Sun et al. (2009) 95.2 97.3 – 94.6 – Sun (2010) 95.2 96.9 95.2 95.6 – Wang et al. (2014) 95.3 97.4 95.4 94.7 – Xia et al. (2016) – – – – 95.4 Table 8: Main results on other test datasets. Hong Kong corpora, respectively. We map them into simplified Chinese before segmentation. The Weibo corpus is in a yet different genre, being social media text. Xia et al. (2016) achieved the best results on this dataset by using a statistical model with features learned using external lexicons, the CTB7 corpus and the People Daily corpus. 
Similar to Table 7, our method gives the best accuracies on all corpora except for MSR, where it underperforms the hybrid model of Zhang et al. (2016b) by 0.2%. To our knowledge, we are the first to report results for a neural segmentor on more than 3 datasets, with competitive results consistently. It verifies that knowledge learned from a certain set of resources can be used to enhance cross-domain robustness in training a neural segmentor for different datasets, which is of practical importance. 6 Conclusion We investigated rich external resources for enhancing neural word segmentation, by building a globally optimised beam-search model that leverages both character and word contexts. Taking each type of external resource as an auxiliary classification task, we use neural multi-task learning to pre-train a set of shared parameters for character contexts. Results show that rich pretraining leads to 15.4% relative error reduction, and our model gives results highly competitive to the best systems on six different benchmarks. Acknowledgments We thank the anonymous reviewers for their insightful comments and the support of NSFC 61572245. We would like to thank Meishan Zhang for his insightful discussion and assisting coding. Yue Zhang is the corresponding author. 847 References Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In ACL. Association for Computational Linguistics, pages 2442–2452. https://doi.org/10.18653/v1/P16-1231. Yoshua Bengio. 2012. Practical recommendations for gradient-based training of deep architectures. In Neural networks: Tricks of the trade, Springer, pages 437–478. Deng Cai and Hai Zhao. 2016. Neural word segmentation learning for chinese. In ACL. Association for Computational Linguistics, pages 409–420. https://doi.org/10.18653/v1/P16-1039. Xinchi Chen, Xipeng Qiu, Chenxi Zhu, and Xuanjing Huang. 2015a. Gated recursive neural network for chinese word segmentation. In ACL. Association for Computational Linguistics. https://doi.org/10.3115/v1/P15-1168. Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Pengfei Liu, and Xuanjing Huang. 2015b. Long shortterm memory neural networks for chinese word segmentation. In EMNLP. Association for Computational Linguistics, pages 1385–1394. https://doi.org/10.18653/v1/D15-1141. Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In ACL. Association for Computational Linguistics, page 111. http://aclweb.org/anthology/P04-1015. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12(Aug):2493–2537. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12(Jul):2121–2159. Thomas Emerson. 2005. The second international chinese word segmentation bakeoff. In Proceedings of the fourth SIGHAN workshop on Chinese language Processing. volume 133. Felix A Gers and J¨urgen Schmidhuber. 2000. Recurrent nets that time and count. In Neural Networks, 2000. IJCNN 2000, Proceedings of the IEEE-INNSENNS International Joint Conference on. IEEE, volume 3, pages 189–194. Wenbin Jiang, Liang Huang, and Qun Liu. 2009. Automatic adaptation of annotation standards: Chinese word segmentation and pos tagging: a case study. In ACL-IJCNLP. 
Association for Computational Linguistics, pages 522–530. http://aclweb.org/anthology/P09-1059. Zhongguo Li and Maosong Sun. 2009. Punctuation as implicit annotations for chinese word segmentation. Computational Linguistics 35(4):505–512. http://aclweb.org/anthology/J09-4006. Yang Liu and Yue Zhang. 2012. Unsupervised domain adaptation for joint segmentation and pos-tagging. In COLING. pages 745–754. http://aclweb.org/anthology/C12-2073. Mairgup Mansur, Wenzhe Pei, and Baobao Chang. 2013. Feature-based neural language model and chinese word segmentation. In IJCNLP. pages 1271– 1277. http://aclweb.org/anthology/I13-1181. Hajime Morita, Daisuke Kawahara, and Sadao Kurohashi. 2015. Morphological analysis for unsegmented languages using recurrent neural network language model. In EMNLP. Association for Computational Linguistics. https://doi.org/10.18653/v1/D15-1276. Hwee Tou Ng and Jin Kiat Low. 2004. Chinese part-ofspeech tagging: One-at-a-time or all-at-once? wordbased or character-based? In EMNLP. Association for Computational Linguistics, pages 277–284. http://aclweb.org/anthology/W04-3236. Wenzhe Pei, Tao Ge, and Baobao Chang. 2014. Max-margin tensor neural network for chinese word segmentation. In ACL. Association for Computational Linguistics, pages 293–303. https://doi.org/10.3115/v1/P14-1028. Fuchun Peng, Fangfang Feng, and Andrew McCallum. 2004. Chinese segmentation and new word detection using conditional random fields. In COLING. page 562. http://aclweb.org/anthology/C04-1081. Xipeng Qiu, Peng Qian, and Zhan Shi. 2016. Overview of the nlpcc-iccpol 2016 shared task: Chinese word segmentation for micro-blog texts. In International Conference on Computer Processing of Oriental Languages. Springer, pages 901–906. Valentin I Spitkovsky, Daniel Jurafsky, and Hiyan Alshawi. 2010. Profiting from mark-up: Hyper-text annotations for guided parsing. In ACL. Association for Computational Linguistics, pages 1278– 1287. http://aclweb.org/anthology/P10-1130. Richard Sproat, William Gale, Chilin Shih, and Nancy Chang. 1996. A stochastic finitestate word-segmentation algorithm for chinese. Computational linguistics 22(3):377–404. http://aclweb.org/anthology/J96-3004. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15(1):1929–1958. Weiwei Sun. 2010. Word-based and characterbased word segmentation models: Comparison and combination. In COLING. pages 1211–1219. http://aclweb.org/anthology/C10-2139. 848 Weiwei Sun and Jia Xu. 2011. Enhancing chinese word segmentation using unlabeled data. In EMNLP. Association for Computational Linguistics, pages 970– 979. http://aclweb.org/anthology/D11-1090. Xu Sun, Houfeng Wang, and Wenjie Li. 2012. Fast online training with frequency-adaptive learning rates for chinese word segmentation and new word detection. In ACL. Association for Computational Linguistics, pages 253–262. http://aclweb.org/anthology/P12-1027. Xu Sun, Yaozhong Zhang, Takuya Matsuzaki, Yoshimasa Tsuruoka, and Jun’ichi Tsujii. 2009. A discriminative latent variable chinese segmenter with hybrid word/character information. In NAACLHLT. Association for Computational Linguistics, pages 56–64. http://aclweb.org/anthology/N091007. Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning. 2005. A conditional random field word segmenter for sighan bakeoff 2005. 
In Proceedings of the fourth SIGHAN workshop on Chinese language Processing. Mengqiu Wang, Rob Voigt, and Christopher D Manning. 2014. Two knives cut better than one: Chinese word segmentation with dual decomposition. In ACL. Association for Computational Linguistics, pages 193–198. https://doi.org/10.3115/v1/P142032. Yiou Wang, Yoshimasa Tsuruoka Jun’ichi Kazama, Yoshimasa Tsuruoka, Wenliang Chen, Yujie Zhang, and Kentaro Torisawa. 2011. Improving chinese word segmentation and pos tagging with semi-supervised methods using large autoanalyzed data. In IJCNLP. pages 309–317. http://www.aclweb.org/anthology/I11-1035. Sam Wiseman and Alexander M Rush. 2016. Sequence-to-sequence learning as beam-search optimization. In EMNLP. Association for Computational Linguistics, pages 1296–1306. http://aclweb.org/anthology/D16-1137. Qingrong Xia, Zhenghua Li, Jiayuan Chao, and Min Zhang. 2016. Word segmentation on micro-blog texts with external lexicon and heterogeneous data. In International Conference on Computer Processing of Oriental Languages. Springer. Jingjing Xu and Xu Sun. 2016. Dependencybased gated recursive neural network for chinese word segmentation. In ACL. Association for Computational Linguistics, page 567. https://doi.org/10.18653/v1/P16-2092. Naiwen Xue, Fei Xia, Fu-Dong Chiou, and Marta Palmer. 2005. The penn chinese treebank: Phrase structure annotation of a large corpus. Natural language engineering 11(02):207–238. Nianwen Xue et al. 2003. Chinese word segmentation as character tagging. Computational Linguistics and Chinese Language Processing 8(1):29–48. Longkai Zhang, Houfeng Wang, Xu Sun, and Mairgup Mansur. 2013. Exploring representations from unlabeled data with co-training for chinese word segmentation. In EMNLP. Association for Computational Linguistics, pages 311–321. http://aclweb.org/anthology/D13-1031. Meishan Zhang, Jie Yang, Zhiyang Teng, and Yue Zhang. 2016a. Libn3l: a lightweight package for neural nlp. In Proceedings of the Tenth International Conference on Language Resources and Evaluation. https://doi.org/10.1145/322234.322243. Meishan Zhang, Yue Zhang, Wanxiang Che, and Ting Liu. 2014. Character-level chinese dependency parsing. In ACL. Association for Computational Linguistics, pages 1326–1336. https://doi.org/10.3115/v1/P14-1125. Meishan Zhang, Yue Zhang, and Guohong Fu. 2016b. Transition-based neural word segmentation. In ACL. Association for Computational Linguistics. https://doi.org/10.18653/v1/P16-1040. Ruiqiang Zhang, Genichiro Kikui, and Eiichiro Sumita. 2006. Subword-based tagging by conditional random fields for chinese word segmentation. In NAACL. Association for Computational Linguistics, pages 193–196. http://aclweb.org/anthology/N06-2049. Yue Zhang and Stephen Clark. 2007. Chinese segmentation with a word-based perceptron algorithm. In ACL. Association for Computational Linguistics, volume 45, page 840. http://aclweb.org/anthology/P07-1106. Yue Zhang and Stephen Clark. 2008. Joint word segmentation and pos tagging using a single perceptron. In ACL. Association for Computational Linguistics, pages 888–896. http://aclweb.org/anthology/P081101. Yue Zhang and Stephen Clark. 2011. Syntactic processing using the generalized perceptron and beam search. Computational linguistics 37(1):105–151. https://doi.org/10.1162/coli a 00037. Hai Zhao, Chang-Ning Huang, Mu Li, and Bao-Liang Lu. 2006. Effective tag set selection in chinese word segmentation via conditional random field modeling. In PACLIC. Citeseer, volume 20, pages 87–94. 
http://aclweb.org/anthology/Y06-1012. Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep learning for chinese word segmentation and pos tagging. In EMNLP. Association for Computational Linguistics, pages 647–657. http://aclweb.org/anthology/D13-1061. Hao Zhou, Yue Zhang, Shujian Huang, and Jiajun Chen. 2015. A neural probabilistic structured-prediction model for transition-based dependency parsing. In ACL. Association for Computational Linguistics, pages 1213–1222. https://doi.org/10.3115/v1/P15-1117. 849
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 850–860 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1079

Neural Machine Translation via Binary Code Prediction
Yusuke Oda† Philip Arthur† Graham Neubig‡† Koichiro Yoshino†§ Satoshi Nakamura†
† Nara Institute of Science and Technology, 8916-5 Takayama-cho, Ikoma, Nara 630-0192, Japan
‡ Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA
§ Japan Science and Technology Agency, 4-1-8 Hon-machi, Kawaguchi, Saitama 332-0012, Japan
{oda.yusuke.on9, philip.arthur.om0}@is.naist.jp, [email protected], {koichiro, s-nakamura}@is.naist.jp

Abstract
In this paper, we propose a new method for calculating the output layer in neural machine translation systems. The method is based on predicting a binary code for each word, and can reduce the computation time and memory requirements of the output layer to logarithmic in the vocabulary size in the best case. In addition, we introduce two advanced approaches to improve the robustness of the proposed model: using error-correcting codes and combining softmax and binary codes. Experiments on two English↔Japanese bidirectional translation tasks show that the proposed models achieve BLEU scores that approach the softmax, while reducing memory usage to less than 1/10 of the original and improving decoding speed on CPUs by x5 to x10.

1 Introduction
When handling broad or open domains, machine translation systems usually have to handle a large vocabulary as their input and output. This is particularly a problem in neural machine translation (NMT) models (Sutskever et al., 2014), such as the attention-based models (Bahdanau et al., 2014; Luong et al., 2015) shown in Figure 1. In these models, the output layer is required to generate a specific word from an internal vector, and a large vocabulary tends to require a large amount of computation to predict each of the candidate word probabilities. Because this is a significant problem for neural language and translation models, a number of methods have been proposed to resolve it, which we detail in Section 2.2. However, none of these previous methods simultaneously satisfies the following desiderata, all of which, we argue, are desirable for practical use in NMT systems:

Figure 1: Encoder-decoder-attention NMT model and computation amount of the output layer.

Memory efficiency: The method should not require large memory to store the parameters and calculated vectors, to maintain scalability in resource-constrained environments.
Time efficiency: The method should be able to train the parameters efficiently, and to perform decoding efficiently while choosing candidate words from the full probability distribution. In particular, the method should run fast on general-purpose CPUs to limit the cost of computational resources in actual production systems.
Compatibility with parallel computation: It should be easy for the method to be minibatched and optimized to run efficiently on GPUs, which are essential for training large NMT models.
In this paper, we propose a method that satisfies all of these conditions: requires significantly less memory, fast, and is easy to implement minibatched on GPUs. The method works by not predicting a softmax over the entire output vocab850 ulary, but instead by encoding each vocabulary word as a vector of binary variables, then independently predicting the bits of this binary representation. In order to represent a vocabulary size of 2n, the binary representation need only be at least n bits long, and thus the amount of computation and size of parameters required to select an output word is only O(log V ) in the size of the vocabulary V , a great reduction from the standard linear increase of O(V ) seen in the original softmax. While this idea is simple and intuitive, we found that it alone was not enough to achieve competitive accuracy with real NMT models. Thus we make two improvements: First, we propose a hybrid model, where the high frequency words are predicted by a standard softmax, and low frequency words are predicted by the proposed binary codes separately. Second, we propose the use of convolutional error correcting codes with Viterbi decoding (Viterbi, 1967), which add redundancy to the binary representation, and even in the face of localized mistakes in the calculation of the representation, are able to recover the correct word. In experiments on two translation tasks, we find that the proposed hybrid method with error correction is able to achieve results that are competitive with standard softmax-based models while reducing the output layer to a fraction of its original size. 2 Problem Description and Prior Work 2.1 Formulation and Standard Softmax Most of current NMT models use one-hot representations to represent the words in the output vocabulary – each word w is represented by a unique sparse vector eid(w) ∈RV , in which only one element at the position corresponding to the word ID id(w) ∈{x ∈N | 1 ≤x ≤V } is 1, while others are 0. V represents the vocabulary size of the target language. NMT models optimize network parameters by treating the one-hot representation eid(w) as the true probability distribution, and minimizing the cross entropy between it and the softmax probability v: LH(v, id(w)) := H(eid(w), v), (1) = log sum exp u −uid(w), (2) v := exp u/ sum exp u, (3) u := Whuh + βu, (4) where sum x represents the sum of all elements in x, xi represents the i-th element of x, Whu ∈ RV ×H and βu ∈RV are trainable parameters and H is the total size of hidden layers directly connected to the output layer. According to Equation (4), this model clearly requires time/space computation in proportion to O(HV ), and the actual load of the computation of the output layer is directly affected by the size of vocabulary V , which is typically set around tens of thousands (Sutskever et al., 2014). 2.2 Prior Work on Suppressing Complexity of NMT Models Several previous works have proposed methods to reduce computation in the output layer. The hierarchical softmax (Morin and Bengio, 2005) predicts each word based on binary decision and reduces computation time to O(H log V ). However, this method still requires O(HV ) space for the parameters, and requires calculation much more complicated than the standard softmax, particularly at test time. The differentiated softmax (Chen et al., 2016) divides words into clusters, and predicts words using separate part of the hidden layer for each word clusters. 
This method make the conversion matrix of the output layer sparser than a fully-connected softmax, and can reduce time/space computation amount by ignoring zero part of the matrix. However, this method restricts the usage of hidden layer, and the size of the matrix is still in proportion to V . Sampling-based approximations (Mnih and Teh, 2012; Mikolov et al., 2013) to the denominator of the softmax have also been proposed to reduce calculation at training. However, these methods are basically not able to be applied at test time, still require heavy computation like the standard softmax. Vocabulary selection approaches (Mi et al., 2016; L’Hostis et al., 2016) can also reduce the vocabulary size at testing, but these methods abandon full search over the target space and the quality of picked vocabularies directly affects the translation quality. Other methods using characters (Ling et al., 2015) or subwords (Sennrich et al., 2016; Chitnis and DeNero, 2015) can be applied to suppress the vocabulary size, but these methods also make for longer sequences, and thus are not a direct solution to problems of computational efficiency. 851 Figure 2: Designs of output layers. 3 Binary Code Prediction Models 3.1 Representing Words using Bit Arrays Figure 2(a) shows the conventional softmax prediction, and Figure 2(b) shows the binary code prediction model proposed in this study. Unlike the conventional softmax, the proposed method predicts each output word indirectly using dense bit arrays that correspond to each word. Let b(w) := [b1(w), b2(w), · · · , bB(w)] ∈{0, 1}B be the target bit array obtained for word w, where each bi(w) ∈{0, 1} is an independent binary function given w, and B is the number of bits in whole array. For convenience, we introduce some constraints on b. First, a word w is mapped to only one bit array b(w). Second, all unique words can be discriminated by b, i.e., all bit arrays satisfy that:1 id(w) ̸= id(w′) ⇒b(w) ̸= b(w′). (5) Third, multiple bit arrays can be mapped to the same word as described in Section 3.5. By considering second constraint, we can also constrain B ≥⌈log2 V ⌉, because b should have at least V unique representations to distinguish each word. The output layer of the network independently predicts B probability values q := [q1(h), q2(h), · · · , qB(h)] ∈ [0, 1]B using the 1We designed this injective condition using the id(·) function to ignore task-specific sensitivities between different word surfaces (e.g. cases, ligatures, etc.). current hidden values h by logistic regressions: q(h) = σ(Whqh + βq), (6) σ(x) := 1/(1 + exp(−x)), (7) where Whq ∈RB×H and βq ∈RB are trainable parameters. When we assume that each qi is the probability that “the i-th bit becomes 1,” the joint probability of generating word w can be represented as: Pr(b(w)|q(h)) := B Y i=1 biqi + ¯bi¯qi  , (8) where ¯x := 1 −x. We can easily obtain the maximum-probability bit array from q by simply assuming the i-th bit is 1 if qi ≥1/2, or 0 otherwise. However, this calculation may generate invalid bit arrays which do not correspond to actual words according to the mapping between words and bit arrays. For now, we simply assume that w = UNK (unknown) when such bit arrays are obtained, and discuss alternatives later in Section 3.5. The constraints described here are very general requirements for bit arrays, which still allows us to choose between a wide variety of mapping functions. However, designing the most appropriate mapping method for NMT models is not a trivial problem. 
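Before turning to the concrete mapping used in this study, the bit-level prediction of Equations (6)–(8) can be sketched in a few lines of NumPy; the array names and toy sizes below are assumptions for illustration, not the authors' DyNet implementation.

```python
import numpy as np

def predict_bits(h, W_hq, beta_q):
    # Eq. (6)-(7): q = sigmoid(W_hq h + beta_q), one independent logistic
    # regression per bit; W_hq has shape (B, H), h has shape (H,).
    u = W_hq @ h + beta_q
    return 1.0 / (1.0 + np.exp(-u))

def word_probability(bits, q):
    # Eq. (8): Pr(b(w) | q) = prod_i (b_i q_i + (1 - b_i)(1 - q_i)).
    bits = np.asarray(bits, dtype=float)
    return float(np.prod(bits * q + (1.0 - bits) * (1.0 - q)))

def best_bit_array(q):
    # The maximum-probability bit array simply thresholds each q_i at 1/2.
    return (q >= 0.5).astype(int)

# Toy usage with H = 4 hidden units and B = 3 bits (illustrative sizes only).
rng = np.random.default_rng(0)
h = rng.normal(size=4)
W_hq, beta_q = rng.normal(size=(3, 4)), np.zeros(3)
q = predict_bits(h, W_hq, beta_q)
print(best_bit_array(q), word_probability([1, 0, 1], q))
```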
In this study, we use a simple mapping method described in Algorithm 1, which was empirically effective in preliminary experiments.2 Here, V is the set of V target words including 3 extra markers: UNK, BOS (begin-of-sentence), and EOS (end-of-sentence), and rank(w) ∈N>0 is the rank of the word according to their frequencies in the training corpus. Algorithm 1 is one of the minimal mapping methods (i.e., satisfying B = ⌈log2 V ⌉), and generated bit arrays have the characteristics that their higher bits roughly represents the frequency of corresponding words (e.g., if w is frequently appeared in the training corpus, higher bits in b(w) tend to become 0). 3.2 Loss Functions For learning correct binary representations, we can use any loss functions that is (sub-)differentiable and satisfies a constraint that: LB(q, b)  = ϵL, if q = b, ≥ϵL, otherwise, (9) 2Other methods examined included random codes, Huffman codes (Huffman, 1952) and Brown clustering (Brown et al., 1992) with zero-padding to adjust code lengths, and some original allocation methods based on the word2vec embeddings (Mikolov et al., 2013). 852 Algorithm 1 Mapping words to bit arrays. Require: w ∈V Ensure: b ∈{0, 1}B = Bit array representing w x :=      0, if w = UNK 1, if w = BOS 2, if w = EOS 2 + rank(w), otherwise bi := ⌊x/2i−1⌋mod 2 b ←[b1, b2, · · · , bB] where ϵL is the minimum value of the loss function which typically does not affect the gradient descent methods. For example, the squareddistance: LB(q, b) := B X i=1 (qi −bi)2, (10) or the cross-entropy: LB(q, b) := − B X i=1 bi log qi + ¯bi log ¯qi  , (11) are candidates for the loss function. We also examined both loss functions in the preliminary experiments, and in this paper, we only used the squared-distance function (Equation (10)), because this function achieved higher translation accuracies than Equation (11).3 3.3 Efficiency of the Binary Code Prediction The computational complexity for the parameters Whq and βq is O(HB). This is equal to O(H log V ) when using a minimal mapping method like that shown in Algorithm 1, and is significantly smaller than O(HV ) when using standard softmax prediction. For example, if we chose V = 65536 = 216 and use Algorithm 1’s mapping method, then B = 16 and total amount of computation in the output layer could be suppressed to 1/4096 of its original size. On a different note, the binary code prediction model proposed in this study shares some ideas with the hierarchical softmax (Morin and Bengio, 2005) approach. Actually, when we used a binarytree based mapping function for b, our model can be interpreted as the hierarchical softmax with two 3In terms of learning probabilistic models, we should remind that using Eq. (10) is an approximation of Eq. (11). The output bit scores trained by Eq. (10) do not represent actual word perplexities, and this characteristics imposes some practical problems when comparing multiple hypotheses (e.g., reranking, beam search, etc.). We could ignore this problem in this paper because we only evaluated the one-best results in experiments. strong constraints for guaranteeing independence between all bits: all nodes in the same level of the hierarchy share their parameters, and all levels of the hierarchy are predicted independently of each other. By these constraints, all bits in b can be calculated in parallel. This is particularly important because it makes the model conducive to being calculated on parallel computation backends such as GPUs. 
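Concretely, the mapping of Algorithm 1 amounts to writing a frequency-based word ID in binary, least-significant bit first. A minimal Python sketch follows; the special-token strings and the rank dictionary are assumed conventions for the example.

```python
import math

def word_to_bits(word, rank, vocab_size):
    # Algorithm 1: UNK -> 0, BOS -> 1, EOS -> 2, other words -> 2 + frequency rank.
    specials = {"<unk>": 0, "<s>": 1, "</s>": 2}
    x = specials[word] if word in specials else 2 + rank[word]
    B = math.ceil(math.log2(vocab_size))   # minimal code length B = ceil(log2 V)
    # b_i = floor(x / 2^(i-1)) mod 2, i.e. the binary digits of x.
    return [(x >> (i - 1)) & 1 for i in range(1, B + 1)]

# Toy usage with V = 8: "the" is the most frequent word (rank 1), so x = 3.
rank = {"the": 1, "cat": 2, "sat": 3}
print(word_to_bits("the", rank, vocab_size=8))    # [1, 1, 0]
print(word_to_bits("<unk>", rank, vocab_size=8))  # [0, 0, 0]
```

Decoding reverses this mapping: the predicted bit array is read as an integer and mapped back to UNK, BOS, EOS, or the word of the corresponding rank, and integers with no corresponding word fall back to UNK, as described above.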
However, the binary code prediction model also introduces problems of robustness due to these strong constraints. As the experimental results show, the simplest prediction model which directly maps words into bit arrays seriously decreases translation quality. In Sections 3.4 and 3.5, we introduce two additional techniques to prevent reductions of translation quality and improve robustness of the binary code prediction model. 3.4 Hybrid Softmax/Binary Model According to the Zipf’s law (Zipf, 1949), the distribution of word appearances in an actual corpus is biased to a small subset of the vocabulary. As a result, the proposed model mostly learns characteristics for frequent words and cannot obtain enough opportunities to learn for rare words. To alleviate this problem, we introduce a hybrid model using both softmax prediction and binary code prediction as shown in Figure 2(c). In this model, the output layer calculates a standard softmax for the N −1 most frequent words and an OTHER marker which indicates all rare words. When the softmax layer predicts OTHER, then the binary code layer is used to predict the representation of rare words. In this case, the actual probability of generating a particular word can be separated into two equations according to the frequency of words: Pr(w|h) ≃ v′ id(w), if id(w) < N, v′ N · π(w, h), otherwise, (12) v′ := exp u′/ sum exp u′, (13) u′ := Whu′h + βu′, (14) π(w, h) := Pr(b(w)|q(h)), (15) where Whu′ ∈RN×H and βu′ ∈RN are trainable parameters, and id(w) assumes that the value corresponds to the rank of frequency of each word. We also define the loss function for the hybrid 853 Figure 3: Example of the classification problem using redundant bit array mapping. model using both softmax and binary code losses: L :=  lH(id(w)), if id(w) < N, lH(N) + lB, otherwise, (16) lH(i) := λHLH(v′, i), (17) lB := λBLB(q, b), (18) where λH and λB are hyper-parameters to determine strength of both softmax/binary code losses. These also can be adjusted according to the training data, but in this study, we only used λH = λB = 1 for simplicity. The computational complexity of the hybrid model is O(H(N + log V )), which is larger than the original binary code model O(H log V ). However, N can be chosen as N ≪V because the softmax prediction is only required for a few frequent words. As a result, we can control the actual computation for the hybrid model to be much smaller than the standard softmax complexity O(HV ), The idea of separated prediction of frequent words and rare words comes from the differentiated softmax (Chen et al., 2016) approach. However, our output layer can be configured as a fullyconnected network, unlike the differentiated softmax, because the actual size of the output layer is still small after applying the hybrid model. 3.5 Applying Error-correcting Codes The 2 methods proposed in previous sections impose constraints for all bits in q, and the value of each bit must be estimated correctly for the correct word to be chosen. As a result, these models may generate incorrect words due to even a single bit error. This problem is the result of dense mapping between words and bit arrays, and can be avoided by creating redundancy in the bit array. Figure 3 shows a simple example of how this idea works when discriminating 2 words using 3 bits. In this case, the actual words are obtained by Figure 4: Training and generation processes with error-correcting code. 
estimating the nearest centroid bit array according to the Hamming distance between each centroid and the predicted bit array. This approach can predict correct words as long as the predicted bit arrays are in the set of neighbors for the correct centroid (gray regions in the Figure 3), i.e., up to a 1-bit error in the predicted bits can be corrected. This ability to be robust to errors is a central idea behind error-correcting codes (Shannon, 1948). In general, an error-correcting code has the ability to correct up to ⌊(d−1)/2⌋bit errors when all centroids differ d bits from each other (Golay, 1949). d is known as the free distance determined by the design of error-correcting codes. Errorcorrecting codes have been examined in some previous work on multi-class classification tasks, and have reported advantages from the raw classification (Dietterich and Bakiri, 1995; Klautau et al., 2003; Liu, 2006; Kouzani and Nasireding, 2009; Kouzani, 2010; Ferng and Lin, 2011, 2013). In this study, we applied an error-correcting algorithm to the bit array obtained from Algorithm 1 to improve robustness of the output layer in an NMT system. A challenge in this study is trying a large classification (#classes > 10,000) with error-correction, unlike previous studies focused on solving comparatively small tasks (#classes < 100). And this study also tries to solve a generation task unlike previous studies. As shown in the experiments, we found that this approach is highly effective in these tasks. Figure 4 (a) and (b) illustrate the training and generation processes for the model with errorcorrecting codes. In the training, we first convert the original bit arrays b(w) to a center bit array b′ in the space of error-correcting code: b′(b) := [b′ 1(b), b′ 2(b), · · · , b′ B′(b)] ∈{0, 1}B′, where B′(B) ≥B is the number of bits in the error-correcting code. The NMT model learns its 854 Algorithm 2 Encoding into a convolutional code. Require: b ∈{0, 1}B Ensure: b′ ∈ {0, 1}2(B+6) = Redundant bit array x[t] :=  bt, if 1 ≤t ≤B 0, otherwise y1 t := x[t −6 .. t] · [1001111] mod 2 y2 t := x[t −6 .. t] · [1101101] mod 2 b′ ←[y1 1, y2 1, y1 2, y2 2, · · · , y1 B+6, y2 B+6] parameters based on the loss between predicted probabilities q and b′. Note that typical errorcorrecting codes satisfy O(B′/B) = O(1), and this characteristic efficiently suppresses the increase of actual computation cost in the output layer due to the application of the error-correcting code. In the generation of actual words, the decoding method of the error-correcting code converts the redundant predicted bits q into a dense representation ˜q := [˜q1(q), ˜q2(q), · · · , ˜qB(q)], and uses ˜q as the bits to restore the word, as is done in the method described in the previous sections. It should be noted that the method for performing error correction directly affects the quality of the whole NMT model. For example, the mapping shown in Figure 3 has only 3 bits and it is clear that these bits represent exactly the same information as each other. In this case, all bits can be estimated using exactly the same parameters, and we can not expect that we will benefit significantly from applying this redundant representation. Therefore, we need to choose an error correction method in which the characteristics of original bits should be distributed in various positions of the resulting bit arrays so that errors in bits are not highly correlated with each-other. 
In addition, it is desirable that the decoding method of the applied error-correcting code can directly utilize the probabilities of each bit, because q generated by the network will be a continuous probabilities between zero and one. In this study, we applied convolutional codes (Viterbi, 1967) to convert between original and redundant bits. Convolutional codes perform a set of bit-wise convolutions between original bits and weight bits (which are hyper-parameters). They are well-suited to our setting here because they distribute the information of original bits in different places in the resulting bits, work robustly for random bit errors, and can be decoded using Algorithm 3 Decoding from a convolutional code. Require: q ∈(0, 1)2(B+6) Ensure: ˜q ∈{0, 1}B = Restored bit array g(q, b) := b log q + (1 −b) log(1 −q) φ0[s | s ∈{0, 1}6] ← 0, if s = [000000] −∞, otherwise for t = 1 →B + 6 do for scur ∈{0, 1}6 do sprev(x) := [x] ◦scur[1 .. 5] o1(x) := ([x] ◦scur) · [1001111] mod 2 o2(x) := ([x] ◦scur) · [1101101] mod 2 g′(x) := g(q2t−1, o1(x)) + g(q2t, o2(x)) φ′(x) := φt−1[sprev(x)] + g′(x) ˆx ←arg maxx∈{0,1} φ′(x) rt[scur] ←sprev(ˆx) φt[scur] ←φ′(ˆx) end for end for s′ ←[000000] for t = B →1 do s′ ←rt+6[s′] ˜qt ←s′ 1 end for ˜q ←[˜q1, ˜q2, · · · , ˜qB] bit probabilities directly. Algorithm 2 describes the particular convolutional code that we applied in this study, with two convolution weights [1001111] and [1101101] as fixed hyper-parameters.4 Where x[i .. j] := [xi, · · · , xj] and x · y := P i xiyi. On the other hand, there are various algorithms to decode convolutional codes with the same format which are based on different criteria. In this study, we use the decoding method described in Algorithm 3, where x ◦y represents the concatenation of vectors x and y. This method is based on the Viterbi algorithm (Viterbi, 1967) and estimates original bits by directly using probability of redundant bits. Although Algorithm 3 looks complicated, this algorithm can be performed efficiently on CPUs at test time, and is not necessary at training time when we are simply performing calculation of Equation (6). Algorithm 2 increases the number of bits from B into B′ = 2(B+6), but does not restrict the actual value of B. 4We also examined many configurations of convolutional codes which have different robustness and computation costs, and finally chose this one. 855 Table 1: Details of the corpus. Name ASPEC BTEC Languages En ↔Ja #sentences Train 2.00 M 465. k Dev 1,790 510 Test 1,812 508 Vocabulary size V 65536 25000 4 Experiments 4.1 Experimental Settings We examined the performance of the proposed methods on two English-Japanese bidirectional translation tasks which have different translation difficulties: ASPEC (Nakazawa et al., 2016) and BTEC (Takezawa, 1999). Table 1 describes details of two corpora. To prepare inputs for training, we used tokenizer.perl in Moses (Koehn et al., 2007) and KyTea (Neubig et al., 2011) for English/Japanese tokenizations respectively, applied lowercase.perl from Moses, and replaced out-of-vocabulary words such that rank(w) > V −3 into the UNK marker. We implemented each NMT model using C++ in the DyNet framework (Neubig et al., 2017) and trained/tested on 1 GPU (GeForce GTX TITAN X). Each test is also performed on CPUs to compare its processing time. We used a bidirectional RNN-based encoder applied in Bahdanau et al. (2014), unidirectional decoder with the same style of (Luong et al., 2015), and the concat global attention model also proposed in Luong et al. (2015). 
Each recurrent unit is constructed using a 1-layer LSTM (input/forget/output gates and nonpeepholes) (Gers et al., 2000) with 30% dropout (Srivastava et al., 2014) for the input/output vectors of the LSTMs. All word embeddings, recurrent states and model-specific hidden states are designed with 512-dimentional vectors. Only output layers and loss functions are replaced, and other network architectures are identical for the conventional/proposed models. We used the Adam optimizer (Kingma and Ba, 2014) with fixed hyperparameters α = 0.001, β1 = 0.9 β2 = 0.999, ε = 10−8, and mini-batches with 64 sentences sorted according to their sequence lengths. For evaluating the quality of each model, we calculated case-insensitive BLEU (Papineni et al., 2002) every 1000 mini-batches. Table 2 lists summaries of all methods we examined in experiments. Table 2: Evaluated methods. Name Summary Softmax Softmax prediction (Fig. 2(a)) Binary Fig. 2(b) w/ raw bit array Hybrid-N Fig. 2(c) w/ softmax size N Binary-EC Binary w/ error-correction Hybrid-N-EC Hybrid-N w/ error-correction (a) ASPEC (En →Ja) (b) BTEC (En →Ja) Figure 5: Training curves over 180,000 epochs. 4.2 Results and Discussion Table 3 shows the BLEU on the test set (bold and italic faces indicate the best and second places in each task), number of bits B (or B′) for the binary code, actual size of the output layer #out, number of parameters in the output layer #W,β, as well as the ratio of #W,β or amount of whole parameters compared with Softmax, and averaged processing time at training (per mini-batch on GPUs) and test (per sentence on GPUs/CPUs), respectively. Figure 5(a) and 5(b) shows training curves up to 180,000 epochs about some English→Japanese settings. To relax instabilities of translation qualities while training (as shown in Figure 5(a) and 5(b)), each BLEU in Table 3 is calculated by averaging actual test BLEU of 5 consecutive results 856 Table 3: Comparison of BLEU, size of output layers, number of parameters and processing time. Corpus Method BLEU % B #out #W,β Ratio of #params Time (En→Ja) [ms] EnJa JaEn #W,β All Train Test: GPU / CPU ASPEC Softmax 31.13 21.14 — 65536 33.6 M 1/1 1 1026. 121.6 / 2539. Binary 13.78 6.953 16 16 8.21 k 1/4.10 k 0.698 711.2 73.08 / 122.3 Hybrid-512 22.81 13.95 16 528 271. k 1/124. 0.700 843.6 81.28 / 127.5 Hybrid-2048 27.73 16.92 16 2064 1.06 M 1/31.8 0.707 837.1 82.28 / 159.3 Binary-EC 25.95 18.02 44 44 22.6 k 1/1.49 k 0.698 712.0 78.75 / 164.0 Hybrid-512-EC 29.07 18.66 44 556 285. k 1/118. 0.700 850.3 80.30 / 180.2 Hybrid-2048-EC 30.05 19.66 44 2092 1.07 M 1/31.4 0.707 851.6 77.83 / 201.3 BTEC Softmax 47.72 45.22 — 25000 12.8 M 1/1 1 325.0 34.35 / 323.3 Binary 31.83 31.90 15 15 7.70 k 1/1.67 k 0.738 250.7 27.98 / 54.62 Hybrid-512 44.23 43.50 15 527 270. k 1/47.4 0.743 300.7 28.83 / 66.13 Hybrid-2048 46.13 45.76 15 2063 1.06 M 1/12.1 0.759 307.7 28.25 / 67.40 Binary-EC 44.48 41.21 42 42 21.5 k 1/595. 0.738 255.6 28.02 / 69.76 Hybrid-512-EC 47.20 46.52 42 554 284. k 1/45.1 0.744 307.8 28.44 / 56.98 Hybrid-2048-EC 48.17 46.58 42 2090 1.07 M 1/12.0 0.760 311.0 28.47 / 69.44 Figure 6: BLEU changes in the Hybrid-N methods according to the softmax size (En→Ja). around the epoch that has the highest dev BLEU. First, we can see that each proposed method largely suppresses the actual size of the output layer from ten to one thousand times compared with the standard softmax. 
By looking at the total number of parameters, we can see that the proposed models require only 70% of the actual memory, and the proposed model reduces the total number of parameters for the output layers to a practically negligible level. Note that most of remaining parameters are used for the embedding lookup at the input layer in both encoder/decoder. These still occupy O(EV ) memory, where E represents the size of each embedding layer and usually O(E/H) = O(1). These are not targets to be reduced in this study because these values rarely are accessed at test time because we only need to access them for input words, and do not need them to always be in the physical memory. It might be possible to apply a similar binary representation as that of output layers to the input layers as well, then express the word embedding by multiplying this binary vector by a word embedding matrix. This is one potential avenue of future work. Taking a look at the BLEU for the simple Binary method, we can see that it is far lower than other models for all tasks. This is expected, as described in Section 3, because using raw bit arrays causes many one-off estimation errors at the output layer due to the lack of robustness of the output representation. In contrast, Hybrid-N and Binary-EC models clearly improve BLEU from Binary, and they approach that of Softmax. This demonstrates that these two methods effectively improve the robustness of binary code prediction models. Especially, Binary-EC generally achieves higher quality than Hybrid-512 despite the fact that it suppress the number of parameters by about 1/10. These results show that introducing redundancy to target bit arrays is more effective than incremental prediction. In addition, the Hybrid-NEC model achieves the highest BLEU in all proposed methods, and in particular, comparative or higher BLEU than Softmax in BTEC. This behavior clearly demonstrates that these two methods are orthogonal, and combining them together can be effective. We hypothesize that the lower quality of Softmax in BTEC is caused by an over-fitting due to the large number of parameters required in the softmax prediction. The proposed methods also improve actual computation time in both training and test. In particular on CPU, where the computation speed is directly affected by the size of the output layer, the proposed methods translate significantly faster 857 than Softmax by x5 to x20. In addition, we can also see that applying error-correcting code is also effictive with respect to the decoding speed. Figure 6 shows the trade-off between the translation quality and the size of softmax layers in the hybrid prediction model (Figure 2(c)) without error-correction. According to the model definition in Section 3.4, the softmax prediction and raw binary code prediction can be assumed to be the upper/lower-bound of the hybrid prediction model. The curves in Figure 6 move between Softmax and Binary models, and this behavior intuitively explains the characteristics of the hybrid prediction. In addition, we can see that the BLEU score in BTEC quickly improves, and saturates at N = 1024 in contrast to the ASPEC model, which is still improving at N = 2048. We presume that the shape of curves in Figure 6 is also affected by the difficulty of the corpus, i.e., when we train the hybrid model for easy datasets (e.g., BTEC is easier than ASPEC), it is enough to use a small softmax layer (e.g. N ≤1024). 
5 Conclusion In this study, we proposed neural machine translation models which indirectly predict output words via binary codes, and two model improvements: a hybrid prediction model using both softmax and binary codes, and introducing error-correcting codes to introduce robustness of binary code prediction. Experiments show that the proposed model can achieve comparative translation qualities to standard softmax prediction, while significantly suppressing the amount of parameters in the output layer, and improving calculation speeds while training and especially testing. One interesting avenue of future work is to automatically learn encodings and error correcting codes that are well-suited for the type of binary code prediction we are performing here. In Algorithms 2 and 3 we use convolutions that were determined heuristically, and it is likely that learning these along with the model could result in improved accuracy or better compression capability. Acknowledgments Part of this work was supported by JSPS KAKENHI Grant Numbers JP16H05873 and JP17H00747, and Grant-in-Aid for JSPS Fellows Grant Number 15J10649. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 . Peter F Brown, Peter V Desouza, Robert L Mercer, Vincent J Della Pietra, and Jenifer C Lai. 1992. Class-based n-gram models of natural language. Computational linguistics 18(4):467–479. Wenlin Chen, David Grangier, and Michael Auli. 2016. Strategies for training large vocabulary neural language models. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1975–1985. http://www.aclweb.org/anthology/P161186. Rohan Chitnis and John DeNero. 2015. Variablelength word encodings for neural translation models. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 2088–2093. http://aclweb.org/anthology/D15-1249. Thomas G. Dietterich and Ghulum Bakiri. 1995. Solving multiclass learning problems via errorcorrecting output codes. Journal of Artificial Intelligence Research 2:263–286. Chun-Sung Ferng and Hsuan-Tien Lin. 2011. Multilabel classification with error-correcting codes. Journal of Machine Learning Research 20:281–295. Chun-Sung Ferng and Hsuan-Tien Lin. 2013. Multilabel classification using error-correcting codes of hard or soft bits. IEEE transactions on neural networks and learning systems 24(11):1888–1900. Felix A Gers, J¨urgen Schmidhuber, and Fred Cummins. 2000. Learning to forget: Continual prediction with LSTM. Neural computation 12(10):2451–2471. Marcel J. E. Golay. 1949. Notes on digital coding. Proceedings of the Institute of Radio Engineers 37:657. David A. Huffman. 1952. A method for the construction of minimum-redundancy codes. Proceedings of the Institute of Radio Engineers 40(9):1098–1101. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Aldebaro Klautau, Nikola Jevti´c, and Alon Orlitsky. 2003. On nearest-neighbor error-correcting output codes with application to all-pairs multiclass support vector machines. Journal of Machine Learning Research 4(April):1–15. 
858 Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions. Association for Computational Linguistics, Prague, Czech Republic, pages 177–180. http://www.aclweb.org/anthology/P07-2045. Abbas Z Kouzani. 2010. Multilabel classification using error correction codes. In International Symposium on Intelligence Computation and Applications. Springer, pages 444–454. Abbas Z Kouzani and Gulisong Nasireding. 2009. Multilabel classification by bch code and random forests. International journal of recent trends in engineering 2(1):113–116. Gurvan L’Hostis, David Grangier, and Michael Auli. 2016. Vocabulary selection strategies for neural machine translation. arXiv preprint arXiv:1610.00072 . Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W Black. 2015. Character-based neural machine translation. arXiv preprint arXiv:1511.04586 . Yang Liu. 2006. Using svm and error-correcting codes for multiclass dialog act classification in meeting corpus. In INTERSPEECH. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1412– 1421. http://aclweb.org/anthology/D15-1166. Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016. Vocabulary manipulation for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Berlin, Germany, pages 124–129. http://anthology.aclweb.org/P16-2021. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. pages 3111–3119. Andriy Mnih and Yee Whye Teh. 2012. A fast and simple algorithm for training neural probabilistic language models. In Proceedings of the 29th International Conference on Machine Learning. Frederic Morin and Yoshua Bengio. 2005. Hierarchical probabilistic neural network language model. In Proceedings of Tenth International Workshop on Artificial Intelligence and Statistics. volume 5, pages 246–252. Toshiaki Nakazawa, Manabu Yaguchi, Kiyotaka Uchimoto, Masao Utiyama, Eiichiro Sumita, Sadao Kurohashi, and Hitoshi Isahara. 2016. Aspec: Asian scientific paper excerpt corpus. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Marko Grobelnik, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, and Stelios Piperidis, editors, Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC 2016). European Language Resources Association (ELRA), Portoro, Slovenia, pages 2204–2208. 
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 77–89 Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1008 The State of the Art in Semantic Representation Omri Abend Ari Rappoport Department of Computer Science, The Hebrew University of Jerusalem {oabend|arir}@cs.huji.ac.il Abstract Semantic representation has been receiving growing attention in NLP in the past few years, and many proposals for semantic schemes (e.g., AMR, UCCA, GMB, UDS) have been put forth. Yet, little has been done to assess the achievements and the shortcomings of these new contenders, to compare them with syntactic schemes, and to clarify the general goals of research on semantic representation. We address these gaps by critically surveying the state of the art in the field. 1 Introduction Schemes for Semantic Representation of Text (SRT) aim to reflect the meaning of sentences and texts in a transparent way. There has recently been an influx of proposals for semantic representations and corpora, e.g., GMB (Basile et al., 2012), AMR (Banarescu et al., 2013), UCCA (Abend and Rappoport, 2013b) and Universal Decompositional Semantics (UDS; White et al., 2016). Nevertheless, no detailed assessment of the relative merits of the different schemes has been carried out, nor have they been compared to previous sentential analysis schemes, notably syntactic ones. An understanding of the achievements and gaps of semantic analysis in NLP is crucial to its future prospects. In this paper we begin to chart the various proposals for semantic schemes according to the content they support. As few semantic queries on texts can at present be answered with near human-like reliability without using manual symbolic annotation, we will mostly focus on schemes that represent semantic distinctions explicitly.1 We begin by discussing the goals of SRT in Section 2. Section 3 surveys major represented meaning components, including predicate-argument relations, discourse relations and logical structure. Section 4 details the various concrete proposals for SRT schemes and annotated resources, while Sections 5 and 6 discuss criteria for their evaluation and their relation to syntax, respectively. We find that despite major differences in formalism and in the interface with syntax, there is a great deal of convergence among SRT schemes in terms of their content. Principal differences between schemes are mostly related to their ability to abstract away from formal and syntactic variation, namely to assign similar structures to different constructions that have a similar meaning, and to assign different structures to constructions that have different meanings, despite their surface similarity. Other important differences lie in the level of training they require from their annotators (e.g., expert annotators vs. crowd-sourcing) and in their cross-linguistic generality. We discuss the complementary strengths of different schemes, and suggest paths for future integration. 2 Defining Semantic Representation The term semantics is used differently in different contexts.
For the purposes of this paper we define a semantic representation as one that reflects the meaning of the text as it is understood by a language speaker. A semantic representation should thus be paired with a method for extracting information from it that can be directly evaluated by humans. The extraction process should be reliable and computationally efficient. 1Note that even a string representation of text can be regarded as semantic given a reliable enough parser. We stipulate that a fundamental component of the content conveyed by SRTs is argument structure – who did what to whom, where, when and why, i.e., events, their participants and the relations between them. Indeed, the fundamental status of argument structure has been recognized by essentially all approaches to semantics both in theoretical linguistics (Levin and Hovav, 2005) and in NLP, through approaches such as Semantic Role Labeling (SRL; Gildea and Jurafsky, 2002), formal semantic analysis (e.g., Bos, 2008), and Abstract Meaning Representation (AMR; Banarescu et al., 2013). Many other useful meaning components have been proposed, and are discussed at greater depth in Section 3. Another approach to defining an SRT is through external (extra-textual) criteria or applications. For instance, a semantic representation can be defined to support inference, as in textual entailment (Dagan et al., 2006) or natural logic (Angeli and Manning, 2014). Other examples include defining a semantic representation in terms of supporting knowledge base querying (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005), or defining semantics through a different modality, for instance interpreting text in terms of images that correspond to it (Kiros et al., 2014), or in terms of embodied motor and perceptual schemas (Feldman et al., 2010). A different approach to SRT is taken by Vector Space Models (VSMs), which eschew the use of symbolic structures, instead modeling all linguistic elements as vectors, from the level of words to phrases and sentences. Proponents of this approach generally invoke neural network methods, obtaining impressive results on a variety of tasks including lexical tasks such as cross-linguistic word similarity (Ammar et al., 2016), machine translation (Bahdanau et al., 2015), and dependency parsing (Andor et al., 2016). VSMs are also attractive in being flexible enough to model non-local and gradient phenomena (e.g., Socher et al., 2013). However, more research is needed to clarify the scope of semantic phenomena that such models are able to reliably capture. We therefore only lightly touch on VSMs in this survey. Finally, a major consideration in semantic analysis, and one of its great potential advantages, is its cross-linguistic universality. While languages differ in terms of their form (e.g., in their phonology, lexicon, and syntax), they have often been assumed to be much closer in terms of their semantic content (Bar-Hillel, 1960; Fodor, 1975). See Section 5 for further discussion. A terminological note: within formal linguistics, semantics is often the study of the relation between symbols (e.g., words, syntactic constructions) and what they signify. In this sense, semantics is the study of the aspects of meaning that are overtly expressed by the lexicon and grammar of a language, and is thus tightly associated with a theory of the syntax-semantics interface. We note that this definition of semantics is somewhat different from the one intended here, which defines semantic schemes as theories of meaning.
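Before turning to the individual meaning components, it may help to picture the "who did what to whom" stipulation above as a simple data structure. The sketch below is scheme-neutral; its role names and layout are invented for illustration and are not taken from any of the inventories discussed later.

# Schematic, scheme-neutral rendering of argument structure for the
# sentence "Ann gave the present to John"; role names are illustrative only.
argument_structure = {
    "predicate": "give",
    "arguments": {
        "giver": "Ann",
        "thing_given": "the present",
        "recipient": "John",
    },
    "secondary": {},   # e.g., time, place, manner, if present
}

# "Who did what to whom" queries then reduce to lookups over this structure.
print(argument_structure["arguments"]["recipient"])   # -> "John"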
3 Semantic Content We turn to discussing the main content types encoded by semantic representation schemes. Due to space limitations, we focus only on text semantics, which studies the meaning relationships between lexical items, rather than the meaning of the lexical items themselves.2 We also defer discussion of more targeted semantic distinctions, such as sentiment, to future work. We will use the following as a running example: (1) Although Ann was leaving, she gave the present to John. Events. Events (sometimes called frames, propositions or scenes) are the basic building blocks of argument structure representations. An event includes a predicate (main relation, frame-evoking element), which is the main determinant of what the event is about. It also includes arguments (participants, core elements) and secondary relations (modifiers, non-core elements). Example 1 is usually viewed as having two events, evoked by "leaving" and "gave". Schemes commonly provide an ontology or a lexicon of event types (also a predicate lexicon), which categorizes semantically similar events evoked by different lexical items. For instance, FrameNet defines frames as schematized story fragments evoked by a set of conceptually similar predicates. In (1), the frames evoked by "leaving" and "gave" are DEPARTING and GIVING, but DEPARTING may also be evoked by "depart" and "exit", and GIVING by "donate" and "gift". 2 We use the term "Text Semantics", rather than the commonly used "Sentence Semantics" to include inter-sentence semantic relations as well. The events discussed here should not be confused with events as defined in Information Extraction and related tasks such as event coreference (Humphreys et al., 1997), which correspond more closely to the everyday notion of an event, such as a political or financial event, and generally consist of multiple events in the sense discussed here. The representation of such events is recently receiving considerable interest within NLP, e.g. the Richer Event Descriptions framework (RED; Ikuta et al., 2014). Predicates and Arguments. While predicate-argument relations are universally recognized as fundamental to semantic representation, the interpretation of the terms varies across schemes. Most SRL schemes cover a wide variety of verbal predicates, but differ in which nominal and adjectival predicates are covered. For example, PropBank (Palmer et al., 2005), one of the major resources for SRL, covers verbs, and in its recent versions also eventive nouns and multi-argument adjectives. FrameNet (Ruppenhofer et al., 2016) covers all these, but also covers relational nouns that do not evoke an event, such as "president". Other lines of work address semantic arguments that appear outside sentence boundaries, or that do not explicitly appear anywhere in the text (Gerber and Chai, 2010; Roth and Frank, 2015). Core and Non-core Arguments. Perhaps the most common distinction between argument types is between core and non-core arguments (Dowty, 2003). While it is possible to define the distinction distributionally as one between obligatory and optional arguments, here we focus on the semantic dimension, which distinguishes arguments whose meaning is predicate-specific and are necessary components of the described event (core), and those which are predicate-general (non-core).
For example, FrameNet defines core arguments as conceptually necessary components of a frame that make the frame unique and different from other frames, and peripheral arguments as those that introduce additional, independent or distinct relations from that of the frame such as time, place, manner, means and degree (Ruppenhofer et al., 2016, pp. 23-24). Semantic Roles. Semantic roles are categories of arguments. Many different semantic role inventories have been proposed and used in NLP over the years, the most prominent being FrameNet (where roles are shared across predicates that evoke the same frame type, such as "leave" and "depart"), and PropBank (where roles are verb-specific). PropBank's role sets were extended by subsequent projects such as AMR. Another prominent semantic role inventory is VerbNet (Kipper et al., 2008) and subsequent projects (Bonial et al., 2011; Schneider et al., 2015), which define a closed set of abstract semantic roles (such as AGENT, PATIENT and INSTRUMENT) that apply to all predicate arguments. Co-reference and Anaphora. Co-reference allows us to abstract away from the different ways to refer to the same entity, and is commonly included in semantic resources. Coreference interacts with argument structure annotation, as in its absence each argument is arbitrarily linked to one of its textual instances. Most SRL schemes would mark "Ann" in (1) as an argument of "leaving" and "she" as an argument of "gave", although on semantic grounds "Ann" is an argument of both. Some SRTs distinguish between the cases of argument sharing which is encoded by the syntax and is thus explicit (e.g., in "John went home and took a shower", "John" is both an argument of "went home" and of "took a shower"), and cases where the sharing of arguments is inferred (as in (1)). This distinction may be important for text understanding, as the inferred cases tend to be more ambiguous ("she" in (1) might not refer to "Ann"). Other schemes, such as AMR, eschew this distinction and use the same terms to represent all cases of coreference. Temporal Relations. Most temporal semantic work in NLP has focused on temporal relations between events, either by timestamping them according to time expressions found in the text, or by predicting their relative order in time. Important resources include TimeML, a specification language for temporal relations (Pustejovsky et al., 2003), and the TempEval series of shared tasks and annotated corpora (Verhagen et al., 2009, 2010; UzZaman et al., 2013). A different line of work explores scripts: schematic, temporally ordered sequences of events associated with a certain scenario (Chambers and Jurafsky, 2008, 2009; Regneri et al., 2010). For instance, going to a restaurant includes sitting at a table, ordering, eating and paying, generally in this order. Related to temporal relations are causal relations between events, which are ubiquitous in language and central for a variety of applications, including planning and entailment. See (Mirza et al., 2014) and (Dunietz et al., 2015) for recently proposed annotation schemes for causality and its sub-types. Mostafazadeh et al. (2016) integrated causal and TimeML-style temporal relations into a unified representation. The internal temporal structure of events has been less frequently tackled. Moens and Steedman (1988) defined an ontology for the temporal components of an event, such as its preparatory process (e.g., "climbing a mountain"), or its culmination ("reaching its top").
Statistical work on this topic is unfortunately scarce, and mostly focuses on lexical categories such as aspectual classes (Siegel and McKeown, 2000; Palmer et al., 2007; Friedrich et al., 2016; White et al., 2016), and tense distinctions (Elson and McKeown, 2010). Still, casting events in terms of their temporal components, characterizing an annotation scheme for doing so and rooting it in theoretical foundations, is an open challenge for NLP. Spatial Relations. The representation of spatial relations is pivotal in cognitive theories of meaning (e.g., Langacker, 2008), and in application domains such as geographical information systems or robotic navigation. Important tasks in this field include Spatial Role Labeling (Kordjamshidi et al., 2012) and the more recent SpaceEval (Pustejovsky et al., 2015). The tasks include the identification and classification of spatial elements and relations, such as places, paths, directions and motions, and their relative configuration. Discourse Relations encompass any semantic relation between events or larger semantic units. For example, in (1) the leaving and the giving events are sometimes related through a discourse relation of type CONCESSION, evoked by “although”. Such information is useful, often essential for a variety of NLP tasks such as summarization, machine translation and information extraction, but is commonly overlooked in the development of such systems (Webber and Joshi, 2012). The Penn Discourse Treebank (PeDT; Miltsakaki et al., 2004) annotates discourse units, and classifies the relations between them into a hierarchical, closed category set, including high-level relation types like TEMPORAL, COMPARISON and CONTINGENCY and finer-grained ones such as JUSTIFICATION and EXCEPTION. Another commonly used resource is the RST Discourse Treebank (Carlson et al., 2003), which places more focus on higher-order discourse structures, resulting in deeper hierarchical structures than the PeDT’s, which focuses on local discourse structure. Another discourse information type explored in NLP is discourse segmentation, where texts are partitioned into shallow structures of discourse units categorized either according to their topic or according to their function within the text. An example is the segmentation of scientific papers into functional segments and their labeling with categories such as BACKGROUND and DISCUSSION (Liakata et al., 2010). See (Webber et al., 2011) for a survey of discourse structure in NLP. Discourse relations beyond the scope of a single sentence are often represented by specialized semantic resources and not by general ones, despite the absence of a clear boundary line between them. This, however, is beginning to change with some schemes, e.g., GMB and UCCA, already supporting cross-sentence semantic relations.3 Logical Structure. Logical structure, including quantification, negation, coordination and their associated scope distinctions, is the cornerstone of semantic analysis in much of theoretical linguistics, and has attracted much attention in NLP as well. Common representations are often based on variants of predicate calculus, and are useful for applications that require mapping text into an external, often executable, formal language, such as a querying language (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005) or robot instructions (Artzi and Zettlemoyer, 2013). 
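As a toy illustration of mapping text into an executable formal language of the kind targeted by the database-query work cited above, the snippet below pairs a question with a logical-form-like predicate and evaluates it against a small database; the facts and predicate names are invented purely for illustration and are not drawn from any of the cited systems.

# Toy mapping of a question to an executable meaning representation,
# in the spirit of the database-query work cited above.
students = {"ann", "john", "mary"}
passed = {("ann", "exam1"), ("mary", "exam1")}

# "Which students passed exam1?"  ->  lambda x. student(x) & passed(x, exam1)
query = lambda x: (x in students) and ((x, "exam1") in passed)

print(sorted(e for e in students if query(e)))   # -> ['ann', 'mary']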
Logical structures are also useful for recognizing entailment relations between sentences, as some entailments can be computed from the text's logical structure by formal provers (Bos and Markert, 2005; Lewis and Steedman, 2013). Inference and Entailment. A primary motivation for many semantic schemes is their ability to support inference and entailment. Indeed, means for predicting logical entailment are built into many forms of semantic representations. A different approach was taken in the tasks of Recognizing Textual Entailment (Dagan et al., 2013), and Natural Logic (van Eijck, 2005), which consider an inference valid if a reasonable annotator would find the hypothesis likely to hold given the premise, even if it cannot be deduced from it. See (Manning, 2006) for a discussion of this point. Such inference relations are usually not included in semantic treebanks, but annotated in specialized resources (e.g., Dagan et al., 2006; Bowman et al., 2015). 3AMR will also support discourse structure in its future versions (N. Schneider; personal communication). 4 Semantic Schemes and Resources This section briefly surveys the different schemes and resources for SRT. We focus on design principles rather than specific features, as the latter are likely to change as the schemes undergo continuous development. In general, schemes discussed in Section 3 are not repeated here. Semantic Role Labeling. SRL schemes diverge in their event types, the type of predicates they cover, their granularity, their cross-linguistic applicability, their organizing principles and their relation with syntax. Most SRL schemes define their annotation relative to some syntactic structure, such as parse trees of the PTB in the case of PropBank, or specialized syntactic categories defined for SRL purposes in the case of FrameNet. Beyond PropBank, FrameNet and VerbNet, discussed above, notable resources include Semlink (Loper et al., 2007), which links corresponding entries in different resources such as PropBank, FrameNet, VerbNet and WordNet, and the Preposition Supersenses project (Schneider et al., 2015), which focuses on roles evoked by prepositions. See (Palmer et al., 2010, 2013) for a review of SRL schemes and resources. SRL schemes are often termed "shallow semantic analysis" due to their focus on argument structure, leaving out other relations such as discourse events, or how predicates and arguments are internally structured. AMR. AMR covers predicate-argument relations, including semantic roles (adapted from PropBank) that apply to a wide variety of predicates (including verbal, nominal and adjectival predicates), modifiers, co-reference, named entities and some time expressions. AMR does not currently support relations above the sentence level, and is admittedly English-centric, which results in an occasional conflation of semantic phenomena that happen to be similarly realized in English into a single semantic category. AMR thus faces difficulties when assessing the invariance of its structures across translations (Xue et al., 2014). As an example, consider the sentences "I happened to meet Jack in the office", and "I asked to meet Jack in the office". While the two have similar syntactic forms, the first describes a single "meeting" event, where "happened" is a modifier, while the second describes two distinct events: asking and meeting.
AMR annotates both in similar terms, which may be suitable for English, where aspectual relations are predominantly expressed as subordinating verbs (e.g., "begin", "want"), and are syntactically similar to primary verbs that take an infinitival complement (such as "ask to meet" or "learn to swim"). However, this approach is less suitable cross-linguistically. For instance, when translating the sentences to German, the divergence between the semantics of the two sentences is clear: in the first "happened" is translated to an adverb: "Ich habe Jack im Büro zufällig getroffen" (lit. "I have Jack in-the office by-chance met"), and in the second "asked" is translated to a verb: "Ich habe gebeten, Jack im Büro zu treffen" (lit. "I have asked, Jack in-the office to meet"). UCCA. UCCA (Universal Conceptual Cognitive Annotation) (Abend and Rappoport, 2013a,b) is a cross-linguistically applicable scheme for semantic annotation, building on typological theory, primarily on Basic Linguistic Theory (Dixon, 2010). UCCA's foundational layer of categories focuses on argument structures of various types and relations between them. In its current state, UCCA is considerably more coarse-grained than the above-mentioned schemes (e.g., it does not include semantic role information). However, its distinctions tend to generalize well across languages (Sulem et al., 2015). For example, unlike AMR, it distinguishes between primary and aspectual verbs, so cases such as "happened to meet" are annotated similarly to cases such as "met by chance", and differently from "asked to meet". Another design principle of UCCA is support for annotation by non-experts. To do so, the scheme reformulates some of the harder distinctions into more intuitive ones. For instance, the core/non-core distinction is replaced in UCCA with the distinction between pure relations (Adverbials) and those evoking an object (Participants), which has been found easier for annotators to apply. UDS. Universal Decompositional Semantics (White et al., 2016) is a multi-layered scheme, which currently includes semantic role annotation, word senses and aspectual classes (e.g., realis/irrealis). UDS emphasizes accessible distinctions, which can be collected through crowd-sourcing. However, the skeletal structure of UDS representations is derived from syntactic dependencies, and only includes verbal argument structures that can be so extracted. Notably, many of the distinctions in UDS are defined using feature bundles, rather than mutually exclusive categories. For instance, a semantic role may be represented as having the features +VOLITION and +AWARENESS, rather than as having the category AGENT. The Prague Dependency Treebank (PDT) Tectogrammatical Layer (PDT-TL) (Sgall, 1992; Böhmová et al., 2003) covers a rich variety of functional and semantic distinctions, such as argument structure (including semantic roles), tense, ellipsis, topic/focus, co-reference, word sense disambiguation and local discourse information. The PDT-TL results from an abstraction over PDT's syntactic layers, and its close relation with syntax is apparent. For instance, the PDT-TL encodes the distinction between a governing clause and a dependent clause, which is primarily syntactic in nature, so in the clauses "John came just as we were leaving" and "We were leaving just as John came" the governing and dependent clause are swapped, despite their semantic similarity. CCG-based Schemes.
CCG (Steedman, 2000) is a lexicalized grammar (i.e., nearly all semantic content is encoded in the lexicon), which defines a theory of how lexical information is composed to form the meaning of phrases and sentences (see Section 6.2), and has proven effective in a variety of semantic tasks (Zettlemoyer and Collins, 2005, 2007; Kwiatkowski et al., 2010; Artzi and Zettlemoyer, 2013, inter alia). Several projects have constructed logical representations by associating CCG with semantic forms (by assigning logical forms to the leaves). For example, Boxer (Bos, 2008) and GMB, which builds on Boxer, use Discourse Representation Structures (Kamp and Reyle, 1993), while Lewis and Steedman (2013) used Davidsonian-style λ-expressions, accompanied by lexical categorization of the predicates. These schemes encode events with their argument structures, and include an elaborate logical structure, as well as lexical and discourse information. HPSG-based Schemes. Related to CCG-based schemes are SRTs based on Head-driven Phrase Structure Grammar (HPSG; Pollard and Sag, 1994), where syntactic and semantic features are represented as feature bundles, which are iteratively composed through unification rules to form composite units. HPSG-based SRT schemes commonly use the Minimal Recursion Semantics (Copestake et al., 2005) formalism. Annotated corpora and manually crafted grammars exist for multiple languages (Flickinger, 2002; Oepen et al., 2004; Bender and Flickinger, 2005, inter alia), and generally focus on argument structural and logical semantic phenomena. The Broad-coverage Semantic Dependency Parsing shared task and corpora (Oepen et al., 2014, 2015) include corpora annotated with the PDT-TL, and dependencies extracted from the HPSG grammars Enju (Miyao, 2006) and the LinGO English Reference Grammar (ERG; Flickinger, 2002). Like the PDT-TL, projects based on CCG, HPSG, and other expressive grammars such as LTAG (Joshi and Vijay-Shanker, 1999) and LFG (Kaplan and Bresnan, 1982) (e.g., GlueTag (Frank and van Genabith, 2001)), yield semantic representations that are coupled with syntactic ones. While this approach provides powerful tools for inference, type checking, and mapping into external formal languages, it also often results in difficulties in abstracting away from some syntactic details. For instance, the dependencies derived from ERG in the SDP corpus use the same label for different senses of the English possessive construction, regardless of whether they correspond to ownership (e.g., "John's dog") or to a different meaning, such as marking an argument of a nominal predicate (e.g., "John's kick"). See Section 6. OntoNotes is a useful resource with multiple inter-linked layers of annotation, borrowed from different schemes. The layers include syntactic, SRL, co-reference and word sense disambiguation content. Some properties of the predicate, such as which nouns are eventive, are encoded as well. To summarize, while SRT schemes differ in the types of content they support, schemes evolve to continuously add new content types, making these differences less consequential. The fundamental difference between the schemes is the extent to which they abstract away from syntax. For instance, AMR and UCCA abstract away from syntax as part of their design, while in most other schemes syntax and semantics are more tightly coupled. Schemes also differ in other aspects discussed in Sections 5 and 6.
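To make this contrast concrete, the rough sketch below renders the "happened to meet" example from the AMR discussion above in two styles. The labels, nesting and role names are placeholders invented for illustration; they are not drawn from the AMR or UCCA guidelines and only serve to contrast the two designs.

# Rough, unofficial sketches of how "I happened to meet Jack in the office"
# might be analyzed; labels are placeholders, not taken from any guideline.

# AMR-style: "happened" heads its own relation, parallel to "asked to meet".
amr_like = {
    "concept": "happen",
    "arg": {
        "concept": "meet",
        "agent": "I",
        "co-participant": "Jack",
        "location": "office",
    },
}

# UCCA-style: "happened to" is treated as an aspectual modifier of a single
# meeting scene, so the structure parallels "met Jack by chance".
ucca_like = {
    "scene": "meet",
    "participants": ["I", "Jack"],
    "adverbial": "happened to",     # aspectual/chance modifier
    "location": "in the office",
}

def count_relations(node: dict) -> int:
    """Count nested relation/scene nodes (dicts) in a structure."""
    return 1 + sum(count_relations(v) for v in node.values() if isinstance(v, dict))

print(count_relations(amr_like), count_relations(ucca_like))   # -> 2 1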
5 Evaluation Human evaluation is the ultimate criterion for validating an SRT scheme given our definition of semantics as meaning as it is understood by a language speaker. Determining how well an SRT scheme corresponds to human interpretation of a text is ideally carried out by asking annotators to make some semantic prediction or annotation according to pre-specified guidelines, and to compare this to the information extracted from the SRT. Question Answering SRL (QASRL; He et al., 2015) is an SRL scheme which solicits non-experts to answer mostly wh-questions, converting their output to an SRL annotation. Hartshorne et al. (2013) and Reisinger et al. (2015) use crowdsourcing to elicit semantic role features, such as whether the argument was volitional in the described event, in order to evaluate proposals for semantic role sets. Another evaluation approach is task-based evaluation. Many semantic representations in NLP are defined with an application in mind, making this type of evaluation natural. For instance, a major motivation for AMR is its applicability to machine translation, making MT a natural (albeit hitherto unexplored) testbed for AMR evaluation. Another example is using question answering to evaluate semantic parsing into knowledge-base queries. Another common criterion for evaluating a semantic scheme is invariance, where semantic analysis should be similar across paraphrases or translation pairs (Xue et al., 2014; Sulem et al., 2015). For instance, most SRL schemes abstract away from the syntactic divergence between the sentences (1) "He gave a present to John" and (2) "It was John who was given a present" (although a complete analysis would reflect the difference of focus between them). Importantly, these evaluation criteria also apply in cases where the representation is automatically induced, rather than manually defined. For instance, vector space representations are generally evaluated either through task-based evaluation, or in terms of semantic features computed from them, whose validity is established by human annotators (e.g., Agirre et al., 2013, 2014). Finally, where semantic schemes are induced through manual annotation (and not through automated procedures), a common criterion for determining whether the guidelines are sufficiently clear and the categories well-defined is to measure agreement between annotators, by assigning them the same texts and measuring the similarity of the resulting structures. Measures include the SMATCH measure for AMR (Cai and Knight, 2013), and the PARSEVAL F-score (Black et al., 1991) adapted for DAGs for UCCA. SRT schemes diverge in the background and training they require from their annotators. Some schemes require extensive training (e.g., AMR), while others can be (at least partially) collected by crowdsourcing (e.g., UDS). Other examples include FrameNet, which requires expert annotators for creating new frames, but employs less trained in-house annotators for applying existing frames to texts; QASRL, which employs non-expert annotators remotely; and UCCA, which uses in-house non-experts, demonstrating no advantage to expert over non-expert annotators after an initial training period. Another approach is taken by GMB, which uses online collaboration where expert collaborators participate in manually correcting automatically created representations. They further employ gamification strategies for collecting some aspects of the annotation.
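The following is a simplified sketch of agreement scoring in the spirit of the structure-matching measures mentioned above: precision, recall and F-score over relation triples. Real SMATCH additionally searches for the best alignment of graph variables, which is omitted here, and the example triples are invented for illustration.

# Simplified triple-overlap F-score between two annotations of the same text.
def triple_f1(predicted: set, gold: set) -> float:
    matched = len(predicted & gold)
    if matched == 0:
        return 0.0
    precision = matched / len(predicted)
    recall = matched / len(gold)
    return 2 * precision * recall / (precision + recall)

annotator_a = {("give", "agent", "Ann"), ("give", "theme", "present"), ("give", "recipient", "John")}
annotator_b = {("give", "agent", "Ann"), ("give", "recipient", "John")}

print(round(triple_f1(annotator_a, annotator_b), 3))   # -> 0.8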
Universality. One of the great promises of semantic analysis (over more surface forms of analysis) is its cross-linguistic potential. However, while the theoretical and applicative importance of universality in semantics has long been recognized (Goddard, 2011), the nature of universal semantics remains unknown. Recently, projects such as BabelNet (Ehrmann et al., 2014), UBY (Gurevych et al., 2012) and Open Multilingual Wordnet4 constructed huge multi-lingual semantic nets by linking resources such as Wikipedia and WordNet and processing them using modern NLP. However, such projects currently focus on lexical semantic and encyclopedic information rather than on text semantics. Symbolic SRT schemes such as SRL schemes and AMR have also been studied for their cross-linguistic applicability (Padó and Lapata, 2009; Sun et al., 2010; Xue et al., 2014), indicating partial portability across languages. Translated versions of PropBank and FrameNet have been constructed for multiple languages (e.g., Akbik et al., 2016; Hartmann and Gurevych, 2013). However, as both PropBank and FrameNet are lexicalized schemes, and as lexicons diverge wildly across languages, these schemes require considerable adaptation when ported across languages (Kozhevnikov and Titov, 2013). 4http://compling.hss.ntu.edu.sg/omw/ Ongoing research tackles the generalization of VerbNet's unlexicalized roles to a universally applicable set (e.g., Schneider et al., 2015). Few SRT schemes place cross-linguistic applicability as one of their main criteria; examples include UCCA and the LinGO Grammar Matrix (Bender and Flickinger, 2005), both of which draw on typological theory. Vector space models, which embed words and sentences in a vector space, have also been applied to induce a shared cross-linguistic space (Klementiev et al., 2012; Rajendran et al., 2015; Wu et al., 2016). However, further evaluation is required in order to determine what aspects of meaning these representations reflect reliably. 6 Syntax and Semantics 6.1 Syntactic and Semantic Generalization Syntactic distinctions are generally guided by a combination of semantic and distributional considerations, where emphasis varies across schemes. Consider phrase-based syntactic structures, common examples of which, such as the Penn Treebank for English (Marcus et al., 1993) and the Penn Chinese Treebank (Xue et al., 2005), are adaptations of X-bar theory. Constituents are commonly defined in terms of distributional criteria, such as whether they can serve as conjuncts, be passivized, elided or fronted (Carnie, 2002, pp. 50-53). Moreover, phrase categories are defined according to the POS category of their headword, such as Noun Phrase, Verb Phrase or Preposition Phrase, which are also at least partly distributional, motivated by their similar morphological and syntactic distribution. In contrast, SRT schemes tend to abstract away from these realizational differences and directly reflect the argument structure of the sentence using the same set of categories, irrespective of the POS of the predicate, or the case marking of its arguments. Distributional considerations are also apparent with functional syntactic schemes (the most commonly used form of which in NLP are lexicalist dependency structures), albeit to a lesser extent.
A prominent example is Universal Dependencies (UD; Nivre et al., 2016), which aims at producing a cross-linguistically consistent dependency-based annotation, and whose categories are motivated by a combination of distributional and semantic considerations. For example, UD distinguishes the dependency type between "John" and "brother" in "John, my brother, arrived" from the one in "John, who is my brother, arrived", despite their similar semantics. This is due to the former invoking an apposition, and the latter a relative clause, which are different in their distribution. As an example of the different categorization employed by UD and by purely semantic schemes such as AMR and UCCA, consider (1) "founding of the school", (2) "president of the United States" and (3) "United States president". UD is faithful to the syntactic structure and represents (1) and (2) similarly, while assigning a different structure to (3). In contrast, AMR and UCCA perform a semantic generalization and represent examples (2) and (3) similarly and differently from (1). 6.2 The Syntax-Semantics Interface A common assumption on the interface between syntax and semantics is that the semantics of phrases and sentences is compositional – it is determined recursively by the meaning of its immediate constituents and their syntactic relationships, which are generally assumed to form a closed set (Montague, 1970, and much subsequent work). Thus, the interpretation of a sentence can be computed bottom-up, by establishing the meaning of individual words, and recursively composing them, to obtain the full sentential semantics. The order and type of these compositions are determined by the syntactic structure. Compositionality is employed by linguistically expressive grammars, such as those based on CCG and HPSG, and has proven to be a powerful method for various applications. See (Bender et al., 2015) for a recent discussion of the advantages of compositional SRTs. Nevertheless, a compositional account meets difficulties when faced with multi-word expressions and in accounting for cases like "he sneezed the napkin off the table", where it is difficult to determine whether "sneezed" or "off" accounts for the constructional meaning. Construction Grammar (Fillmore et al., 1988; Goldberg, 1995) answers these issues by using an open set of construction-specific compositional operators, and supporting lexical entries of varying lengths. Several ongoing projects address the implementation of the principles of Construction Grammar into explicit grammars, including Sign-based Construction Grammar (Fillmore et al., 2012), Embodied Construction Grammar (Feldman et al., 2010) and Fluid Construction Grammar (Steels and de Beules, 2006). The achievements of machine learning methods in many areas, and optimism as to their prospects, have enabled the approaches to semantics discussed in this paper. Machine learning allows us to define semantic structures on purely semantic grounds and to let algorithms identify how these distinctions are mapped to surface/distributional forms. Some of the schemes discussed in this paper take this approach in its pure form (e.g., AMR and UCCA). 7 Conclusion Semantic representation in NLP is undergoing rapid changes. Traditional semantic work has either used shallow methods that focus on specific semantic phenomena, or adopted formal semantic theories which are coupled with a syntactic scheme through a theory of the syntax-semantics interface.
Recent years have seen increasing interest in an alternative approach that defines semantic structures independently from any syntactic or distributional criteria, largely due to the availability of semantic treebanks that implement this approach. Semantic schemes diverge in whether they are anchored in the words and phrases of the text (e.g., all types of semantic dependencies and UCCA) or not (e.g., AMR and logic-based representations). We do not view this as a major difference, because most unanchored representations (including AMR) retain their close affinity with the words of the sentence, possibly because of the absence of a workable scheme for lexical decomposition, while dependency structures can be converted into logic-based representations (Reddy et al., 2016). In practice, anchoring facilitates parsing, while unanchored representations are more flexible to use where words and semantic components are not in a one-to-one correspondence. Our survey concludes that the main distinguishing factors between schemes are their relation to syntax, their degree of universality, and the expertise and training they require from annotators, an important factor in addressing the annotation bottleneck. We hope this survey of the state of the art in semantic representation will promote discussion, expose more researchers to the most pressing questions in semantic representation, and lead to the wide adoption of the best components from each scheme. Acknowledgements. We thank Nathan Schneider for his helpful comments. The work was supported by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI). References Omri Abend and Ari Rappoport. 2013a. UCCA: A semantic-based grammatical annotation scheme. In Proc. of IWCS. pages 1–12. Omri Abend and Ari Rappoport. 2013b. Universal Conceptual Cognitive Annotation (UCCA). In Proc. of ACL. pages 228–238. Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. SemEval-2014 task 10: Multilingual semantic textual similarity. In Proc. of SemEval. pages 81–91. Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, and Weiwei Guo. 2013. *SEM 2013 shared task: Semantic textual similarity. In Proc. of SemEval. pages 32–43. Alan Akbik, Vishwajeet Kumar, and Yunyao Li. 2016. Towards semi-automatic generation of proposition banks for low-resource languages. In Proc. of EMNLP. pages 993–998. Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A. Smith. 2016. Massively multilingual word embeddings. CoRR abs/1602.01925. Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proc. of ACL. pages 2442–2452. Gabor Angeli and Christopher D. Manning. 2014. NaturalLI: Natural logic inference for common sense reasoning. In Proc. of EMNLP. pages 534–545. Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. TACL 1:49–62. Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. of ICLR. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proc. of LAW. pages 178–186. Yehoshua Bar-Hillel.
1960. The present status of automatic translation of languages. In Advances in computers, Academic Press, New York, volume 1, pages 91–163. Valerio Basile, Johan Bos, Kilian Evang, and Noortje Venhuizen. 2012. Developing a large semantically annotated corpus. In Proc. of LREC. pages 3196– 3200. Emily Bender and Dan Flickinger. 2005. Rapid prototyping of scalable grammars: Towards modularity in extensions to a language-independent core. In Proc. of IJCNLP. pages 203–208. Emily M. Bender, Dan Flickinger, Stephan Oepen, Woodley Packard, and Ann Copestake. 2015. Layers of interpretation: On grammar and compositionality. In Proc. of IWCS. pages 239–249. Ezra Black, Steve Abney, Dan Flickinger, C. Gdaniec, Ralph Grishman, P. Harrison, Donald Hindle, Robert Ingria, Frederick Jelinek, Judith Klavans, Mark Liberman, Mitch Marcus, Salim Roukos, Beatrice Santorini, and Thomas Strzalkowski. 1991. A procedure for quantitatively comparing the syntactic coverage of English grammars. In Proc. of the DARPA Speech and Natural Language Workshop. pages 204–210. Alena B¨ohmov´a, Jan Hajiˇc, Eva Hajiˇcov´a, and Barbora Hladk´a. 2003. The Prague dependency treebank. In Treebanks, Springer, pages 103–127. Claire Bonial, William Corvey, Martha Palmer, Volha V Petukhova, and Harry Bunt. 2011. A hierarchical unification of lirics and verbnet semantic roles. In Semantic Computing (ICSC). pages 483– 489. Johan Bos. 2008. Wide-coverage semantic analysis with Boxer. In Johan Bos and Rodolfo Delmonte, editors, Proc. of the Conference on Semantics in Text Processing (STEP). College Publications, Research in Computational Semantics, pages 277–286. Johan Bos and Katja Markert. 2005. Recognising textual entailment with logical inference. In Proc. of EMNLP. pages 628–635. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proc. of EMNLP. pages 632–642. Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. In Proc. of ACL. pages 748–752. Lynn Carlson, Daniel Marcu, and Mary Ellen Okurowski. 2003. Building a discourse-tagged corpus in the framework of rhetorical structure theory. In Current and new directions in discourse and dialogue, Springer, pages 85–112. Andrew Carnie. 2002. Syntax: A Generative Introduction. Wiley-Blackwell. Nathanael Chambers and Dan Jurafsky. 2008. Unsupervised learning of narrative event chains. In Proc. of ACL-HLT. pages 789–797. Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised learning of narrative schemas and their participants. In Proc. of ACL-IJCNLP. pages 602–610. Ann Copestake, Dan Flickinger, Carl Pollard, and Ivan A. Sag. 2005. Minimal recursion semantics: An introduction. Research on Language and Computation 3:281–332. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising text entailment challenge. In Bernardo Magnini Joaquin Qui˜nonero Candela, Ido Dagan and Florence d’Alch´e Buc, editors, Machine Learning Challenges, Springer, Berlin, volume 3944 of Lecture Notes in Computer Science, pages 177–190. Ido Dagan, Dan Roth, and Mark Sammons. 2013. Recognizing textual entailment. Morgan & Claypool Publishers. Robert M.W. Dixon. 2010. Basic Linguistic Theory: Methodology, volume 1. Oxford University Press. David Dowty. 2003. The dual analysis of adjuncts/complements in categorial grammar. In Ewald Lang, Claudia Maienborn, and Cathry Fabricius-Hansen, editors, Modifying Adjuncts, Mouton de Gruyter, Berlin, pages 33–66. 
Jesse Dunietz, Lori Levin, and Jaime Carbonell. 2015. Annotating causal language using corpus lexicography of constructions. In Proc. of LAW. pages 188– 196. Maud Ehrmann, Francesco Cecconi, Daniele Vannella, John Philip McCrae, Philipp Cimiano, and Roberto Navigli. 2014. Representing multilingual data as linked data: the case of babelnet 2.0. In Proc. of LREC. pages 401–408. David K Elson and Kathleen R McKeown. 2010. Tense and aspect assignment in narrative discourse. In Proc. of the International Natural Language Generation Conference. pages 47–56. Jerome Feldman, Ellen Dodge, and John Bryant. 2010. Embodied construction grammar. In Bernd Heine and Heiko Narrog, editors, The Oxford Handbook of Linguistic Analysis, Oxford University Press, pages 111–158. Charles Fillmore, Russell Lee-Goldman, and Russell Rhodes. 2012. The FrameNet Constructicon. In Hans Boas and Ivan Sag, editors, Sign-based construction grammar, CSLI Publications, pages 309– 372. Charles J Fillmore, Paul Kay, and Mary C O’Connor. 1988. Regularity and idiomaticity in grammatical constructions: The case of let alone. Language 64(3):501–538. 86 Daniel Flickinger. 2002. On building a more efficient grammar by exploiting types. In Jun’ichi Tsujii, Stefan Oepen, Daniel Flickinger, and Hans Uszkoreit, editors, Collaborative Language Engineering, CLSI, Stanford, CA. Jerry A Fodor. 1975. The language of thought, volume 5. Harvard University Press. Anette Frank and Josef van Genabith. 2001. Gluetag: Linear logic based semantics construction for ltag and what it teaches us about the relation between LFG and LTAG. In Proc. of LFG. Annemarie Friedrich, Alexis Palmer, and Manfred Pinkal. 2016. Situation entity types: automatic classification of clause-level aspect. In Proceedings of ACL 2016. pages 1757–1768. Matthew Gerber and Joyce Y Chai. 2010. Beyond nombank: A study of implicit arguments for nominal predicates. In Proc. of ACL. pages 1583–1592. Daniel Gildea and Dan Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics 28(3):245–288. Cliff Goddard. 2011. Semantic analysis: A practical introduction. Oxford University Press, 2nd edition. Ad`ele Goldberg. 1995. Constructions: A Construction Grammar Approach to Argument Structure. Chicago University Press, Chicago. Iryna Gurevych, Judith Eckle-Kohler, Silvana Hartmann, Michael Matuschek, Christian M. Meyer, and Christian Wirth. 2012. UBY - a large-scale unified lexical-semantic resource based on lmf. In Proc. of EACL. pages 580–590. Silvana Hartmann and Iryna Gurevych. 2013. Framenet on the way to babel: Creating a bilingual framenet using wiktionary as interlingual connection. In Proc. of ACL. pages 1363–1373. Joshua K. Hartshorne, Claire Bonial, and Martha Palmer. 2013. The VerbCorner project: Toward an empirically-based semantic decomposition of verbs. In Proc. of EMNLP. pages 1438–1442. Luheng He, Mike Lewis, and Luke Zettlemoyer. 2015. Question-answer driven semantic role labeling: Using natural language to annotate natural language. In Proc. of EMNLP. pages 643–653. Kevin Humphreys, Robert Gaizauskas, and Saliha Azzam. 1997. Event coreference for information extraction. In Proc. of a Workshop on Operational Factors in Practical, Robust Anaphora Resolution for Unrestricted Texts. pages 75–81. Rei Ikuta, Will Styler, Mariah Hamang, Tim O’Gorman, and Martha Palmer. 2014. Challenges of adding causation to richer event descriptions. In Proc. of the Second Workshop on EVENTS: Definition, Detection, Coreference, and Representation. pages 12–20. 
Aravind Joshi and K. Vijay-Shanker. 1999. Compositional semantics with Lexicalized Tree-Adjoining Grammar (LTAG). In Proc. of IWCS. pages 131– 146. Hans Kamp and Uwe Reyle. 1993. From Discourse to Logic. Kluwer, Dordrecht. Ronald M Kaplan and Joan Bresnan. 1982. Lexicalfunctional grammar: A formal system for grammatical representation. Formal Issues in LexicalFunctional Grammar pages 29–130. Karen Kipper, Anna Korhonen, Neville Ryant, and Martha Palmer. 2008. A large-scale classification of English verbs. Language Resources and Evaluation 42:21–40. Ryan Kiros, Ruslan Salakhutdinov, and Richard S. Zemel. 2014. Unifying visual-semantic embeddings with multimodal neural language models. CoRR abs/1411.2539. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. In Proc. of COLING. pages 1459–1474. Parisa Kordjamshidi, Steven Bethard, and MarieFrancine Moens. 2012. Semeval-2012 task 3: Spatial role labeling. In In Proc. of *SEM. pages 365– 373. Mikhail Kozhevnikov and Ivan Titov. 2013. Crosslingual transfer of semantic role labeling models. In Proc. of ACL. pages 1190–1200. Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higherorder unification. In Proc. of EMNLP. pages 1223– 1233. Ronald Langacker. 2008. Cognitive Grammar: A Basic Introduction. Oxford University Press, Oxford. Beth Levin and Malka Rappaport Hovav. 2005. Argument realization. Cambridge University Press. Michael Lewis and Mark Steedman. 2013. Combined distributional and logical semantics. TACL 1:179– 192. Maria Liakata, Simone Teufel, Advaith Siddharthan, and Colin Batchelor. 2010. Corpora for the conceptualisation and zoning of scientific papers. In Proc. of LREC. pages 2054–2061. Edward Loper, Szu-Ting Yi, and Martha Palmer. 2007. Combining lexical resources: Mapping between PropBank and VerbNet. In Proc. of the 7th International Workshop on Computational Linguistics. Christopher Manning. 2006. Local textual inference: It’s hard to circumscribe, but you know it when you see it—and nlp needs it. unpublished ms. 87 Mitch Marcus, Beatrice Santorini, and M. Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics 19:313–330. Eleni Miltsakaki, Rashmi Prasad, Aravind K Joshi, and Bonnie L Webber. 2004. The penn discourse treebank. In LREC. pages 2237–2240. Paramita Mirza, Rachele Sprugnoli, Sara Tonelli, and Manuela Speranza. 2014. Annotating causality in the tempeval-3 corpus. In Proc. of the EACL Workshop on Computational Approaches to Causality in Language (CAtoCL). pages 10–19. Yusuke Miyao. 2006. Corpus-oriented grammar development and feature forest model. Ph.D. thesis, University of Tokyo. Marc Moens and Mark Steedman. 1988. Temporal ontology and temporal reference. Computational Linguistics 14:15–28. Reprinted in Inderjeet Mani, James Pustejovsky, and Robert Gaizauskas (eds.) The Language of Time: A Reader. Oxford University Press, 93-114. Richard Montague. 1970. English as a formal language. In Bruno Visentini, editor, Linguaggi nella Societ`a e nella Technica, Edizioni di Communit`a, Milan, pages 189–224. Reprinted as Thomason 1974:188-221. Nasrin Mostafazadeh, Alyson Grealish, Nathanael Chambers, James Allen, and Lucy Vanderwende. 2016. Caters: Causal and temporal relation scheme for semantic annotation of event structures. In Proc. of the Fourth Workshop on Events. pages 51–61. 
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 861–872 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1080 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 861–872 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1080 What do Neural Machine Translation Models Learn about Morphology? Yonatan Belinkov1 Nadir Durrani2 Fahim Dalvi2 Hassan Sajjad2 James Glass1 1MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, MA 02139, USA {belinkov, glass}@mit.edu 2Qatar Computing Research Institute, HBKU, Doha, Qatar {ndurrani, faimaduddin, hsajjad}@qf.org.qa Abstract Neural machine translation (MT) models obtain state-of-the-art performance while maintaining a simple, end-to-end architecture. However, little is known about what these models learn about source and target languages during the training process. In this work, we analyze the representations learned by neural MT models at various levels of granularity and empirically evaluate the quality of the representations for learning morphology through extrinsic part-of-speech and morphological tagging tasks. We conduct a thorough investigation along several parameters: word-based vs. character-based representations, depth of the encoding layer, the identity of the target language, and encoder vs. decoder representations. Our data-driven, quantitative evaluation sheds light on important aspects in the neural MT system and its ability to capture word structure.1 1 Introduction Neural network models are quickly becoming the predominant approach to machine translation (MT). Training neural MT (NMT) models can be done in an end-to-end fashion, which is simpler and more elegant than traditional MT systems. Moreover, NMT systems have become competitive with, or better than, the previous state-of-the-art, especially since the introduction of sequence-to-sequence models and the attention mechanism (Bahdanau et al., 2014; Sutskever et al., 2014). The improved translation quality is often attributed to better handling of non-local dependencies and morphology generation (Luong 1Our code is available at https://github.com/ boknilev/nmt-repr-analysis. and Manning, 2015; Bentivogli et al., 2016; Toral and S´anchez-Cartagena, 2017). However, little is known about what and how much these models learn about each language and its features. Recent work has started exploring the role of the NMT encoder in learning source syntax (Shi et al., 2016), but research studies are yet to answer important questions such as: (i) what do NMT models learn about word morphology? (ii) what is the effect on learning when translating into/from morphologically-rich languages? (iii) what impact do different representations (character vs. word) have on learning? and (iv) what do different modules learn about the syntactic and semantic structure of a language? Answering such questions is imperative for fully understanding the NMT architecture. In this paper, we strive towards exploring (i), (ii), and (iii) by providing quantitative, data-driven answers to the following specific questions: • Which parts of the NMT architecture capture word structure? • What is the division of labor between different components (e.g. different layers or encoder vs. decoder)? 
• How do different word representations help learn better morphology and modeling of infrequent words? • How does the target language affect the learning of word structure? To achieve this, we follow a simple but effective procedure with three steps: (i) train a neural MT system on a parallel corpus; (ii) use the trained model to extract feature representations for words in a language of interest; and (iii) train a classifier using extracted features to make predictions 861 for another task. We then evaluate the quality of the trained classifier on the given task as a proxy to the quality of the extracted representations. In this way, we obtain a quantitative measure of how well the original MT system learns features that are relevant to the given task. We focus on the tasks of part-of-speech (POS) and full morphological tagging. We investigate how different neural MT systems capture POS and morphology through a series of experiments along several parameters. For instance, we contrast word-based and character-based representations, use different encoding layers, vary source and target languages, and compare extracting features from the encoder vs. the decoder. We experiment with several languages with varying degrees of morphological richness: French, German, Czech, Arabic, and Hebrew. Our analysis reveals interesting insights such as: • Character-based representations are much better for learning morphology, especially for low-frequency words. This improvement is correlated with better BLEU scores. On the other hand, word-based models are sufficient for learning the structure of common words. • Lower layers of the encoder are better at capturing word structure, while deeper networks improve translation quality, suggesting that higher layers focus more on word meaning. • The target language impacts the kind of information learned by the MT system. Translating into morphologically-poorer languages leads to better source-side word representations. This is partly, but not completely, correlated with BLEU scores. • The neural decoder learns very little about word structure. The attention mechanism removes much of the burden of learning word representations from the decoder. 2 Methodology Given a source sentence s = {w1, w2, ..., wN} and a target sentence t = {u1, u2, ..., uM}, we first generate a vector representation for the source sentence using an encoder (Eqn. 1) and then map this vector to the target sentence using a decoder (Eqn. 2) (Sutskever et al., 2014): Figure 1: Illustration of our approach: (i) NMT system trained on parallel data; (ii) features extracted from pre-trained model; (iii) classifier trained using the extracted features. Here a POS tagging classifier is trained on features from the first hidden layer. ENC : s = {w1, w2, ..., wN} 7! s 2 Rk (1) DEC : s 2 Rk 7! t = {u1, u2, ..., uM} (2) In this work, we use long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) encoder-decoders with attention (Bahdanau et al., 2014), which we train on parallel data. After training the NMT system, we freeze the parameters of the encoder and use ENC as a feature extractor to generate vectors representing words in the sentence. Let ENCi(s) denote the encoded representation of word wi. For example, this may be the output of the LSTM after word wi. We feed ENCi(s) to a neural classifier that is trained to predict POS or morphological tags and evaluate the quality of the representation based on our ability to train a good classifier. 
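As a concrete illustration of this probing step, the sketch below trains such a classifier on frozen encoder states. This is a minimal sketch assuming PyTorch; the tensor names, dimensions, and training loop are illustrative and do not reproduce the authors' implementation (which builds on seq2seq-attn).

```python
# Minimal probing-classifier sketch (assumed PyTorch; illustrative names only).
# Inputs: enc_feats - frozen encoder states ENC_i(s), shape (num_words, k)
#         tag_ids   - gold POS/morphology tag ids,    shape (num_words,)
import torch
import torch.nn as nn

class TagProbe(nn.Module):
    """Feed-forward probe: one hidden layer + ReLU, scores over the tag set."""
    def __init__(self, feat_dim: int, num_tags: int):
        super().__init__()
        self.hidden = nn.Linear(feat_dim, feat_dim)  # hidden size = encoder state size
        self.out = nn.Linear(feat_dim, num_tags)

    def forward(self, feats):
        return self.out(torch.relu(self.hidden(feats)))  # unnormalized tag scores

def train_probe(enc_feats, tag_ids, num_tags, epochs=10, lr=1e-3):
    probe = TagProbe(enc_feats.size(1), num_tags)
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(probe(enc_feats), tag_ids)  # encoder features stay frozen:
        loss.backward()                            # gradients flow only into the probe
        opt.step()
    return probe
```

Held-out tagging accuracy of such a probe then serves as the proxy measure of how much POS and morphological information the frozen representations carry.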
By comparing the performance of classifiers trained with features from different instantiations of ENC, we can evaluate what MT encoders learn about word structure. Figure 1 illustrates this process. We follow a similar procedure for analyzing representation learning in DEC. The classifier itself can be modeled in different ways. For example, it may be an LSTM over outputs of the encoder. However, as we are interested in assessing the quality of the representations learned by the MT system, we choose to model the classifier as a simple feed-forward neural network with one hidden layer and a ReLU non-linearity. Arguably, if the learned representations are good, then a non-linear classifier should be able to extract useful information from them.2 We empha2We also experimented with a linear classifier and observed similar trends to the non-linear case, but overall lower results; Qian et al. (2016b) reported similar findings. 862 Ar De Fr Cz Gold/Pred Gold/Pred Pred Pred Train Tokens 0.5M/2.7M 0.9M/4M 5.2M 2M Dev Tokens 63K/114K 45K/50K 55K 35K Test Tokens 62K/16K 44K/25K 23K 20K POS Tags 42 54 33 368 Morph Tags 1969 214 – – Table 1: Statistics for annotated corpora in Arabic (Ar), German (De), French (Fr), and Czech (Cz). size that our goal is not to beat the state-of-the-art on a given task, but rather to analyze what NMT models learn about morphology. The classifier is trained with a cross-entropy loss; more details about its architecture are given in the supplementary material (appendix A.1). 3 Data Language pairs We experiment with several language pairs, including morphologically-rich languages, that have received relatively significant attention in the MT community. These include Arabic-, German-, French-, and Czech-English pairs. To broaden our analysis and study the effect of having morphologically-rich languages on both source and target sides, we also include ArabicHebrew, two languages with rich and similar morphological systems, and Arabic-German, two languages with rich but different morphologies. MT data Our translation models are trained on the WIT3 corpus of TED talks (Cettolo et al., 2012; Cettolo, 2016) made available for IWSLT 2016. This allows for comparable and crosslinguistic analysis. Statistics about each language pair are given in Table 1 (under Pred). We use official dev and test sets for tuning and testing. Reported figures are the averages over test sets. Annotated data We use two kinds of datasets to train POS and morphological classifiers: goldstandard and predicted tags. For predicted tags, we simply used freely available taggers to annotate the MT data. For gold tags, we use goldannotated datasets. Table 1 provides statistics for datasets with gold and predicted tags; see the supplementary material (appendix A.2) for more details about taggers and gold data. We train and test our classifiers on predicted annotations, and similarly on gold annotations, when we have them. We report both results wherever available. Gold Pred BLEU Word/Char Word/Char Word/Char Ar-En 80.31/93.66 89.62/95.35 24.7/28.4 Ar-He 78.20/92.48 88.33/94.66 9.9/10.7 De-En 87.68/94.57 93.54/94.63 29.6/30.4 Fr-En – 94.61/95.55 37.8/38.8 Cz-En – 75.71/79.10 23.2/25.4 Table 2: POS accuracy on gold and predicted tags using word-based and character-based representations, as well as corresponding BLEU scores. 4 Encoder Analysis Recall that after training the NMT system we freeze its parameters and use it only to generate features for the POS/morphology classifier. 
Given a trained encoder ENC and a sentence s with POS/morphology annotation, we generate word features ENCi(s) for every word in the sentence. We then train a classifier that uses the features ENCi(s) to predict POS or morphological tags. 4.1 Effect of word representation In this section, we compare different word representations extracted with different encoders. Our word-based model uses a word embedding matrix which is initialized randomly and learned with other NMT parameters. For a character-based model we adopt a convolutional neural network (CNN) over character embeddings that is also learned during training (Kim et al., 2015); see appendix A.1 for specific settings. In both cases we run the encoder over these representations and use its output ENCi(s) as features for the classifier. Table 2 shows POS tagging accuracy using features from different NMT encoders. Charbased models always generate better representations for POS tagging, especially in the case of morphologically-richer languages like Arabic and Czech. We observed a similar pattern in the full morphological tagging task. For example, we obtain morphological tagging accuracy of 65.2/79.66 and 67.66/81.66 using word/charbased representations from the Arabic-Hebrew and Arabic-English encoders, respectively.3 The superior morphological power of the char-based model also manifests in better translation quality (measured by BLEU), as shown in Table 2. 3The results are not far below dedicated taggers (e.g. 95.1/84.1 on Arabic POS/morphology (Pasha et al., 2014)), indicating that NMT models learn quite good representations. 863 Figure 2: POS and morphological tagging accuracy of word-based and character-based models per word frequency in the training data. Best viewed in color. Figure 3: Improvement in POS/morphology accuracy of character-based vs. word-based models for words unseen/seen in training, and for all words. Impact of word frequency Let us look more closely at an example case: Arabic POS and morphological tagging. Figure 3 shows the effect of using word-based vs. char-based feature representations, obtained from the encoder of the ArabicHebrew system (other language pairs exhibit similar trends). Clearly, the char-based model is superior to the word-based one. This is true for the overall accuracy (+14.3% in POS, +14.5% in morphology), but more so in OOV words (+37.6% in POS, +32.7% in morphology). Figure 2 shows that the gap between word-based and char-based representations increases as the frequency of the word in the training data decreases. In other words, the more frequent the word, the less need there is for character information. These findings make intuitive sense: the char-based model is able to learn character n-gram patterns that are important for identifying word structure, but as the word becomes more frequent the word-based model has seen enough examples to make a decision. Figure 4: Increase in POS accuracy with char- vs. word-based representations per tag frequency in the training set; larger bubbles reflect greater gaps. Analyzing specific tags In Figure 5 we plot confusion matrices for POS tagging using wordbased and char-based representations (from Arabic encoders). While the char-based representations are overall better, the two models still share similar misclassified tags. Much of the confusion comes from wrongly predicting nouns (NN, NNP). In the word-based case, relatively many tags with determiner (DT+NNP, DT+NNPS, DT+NNS, DT+VBG) are wrongly predicted as non-determined nouns (NN, NNP). 
In the charbased case, this hardly happens. This suggests that the char-based representations are predictive of the presence of a determiner, which in Arabic is expressed as the prefix “Al-” (the definite article), a pattern easily captured by a char-based model. In Figure 4 we plot the difference in POS accuracy when moving from word-based to char-based representations, per POS tag frequency in the training data. Tags closer to the upper-right corner occur more frequently in the training set and are 864 (a) Word-based representations. (b) Character-based representations. Figure 5: Confusion matrices for POS tagging using word-based and character-based representations. better predicted by char-based compared to wordbased representations. There are a few fairly frequent tags (in the middle-bottom part of the figure) whose accuracy does not improve much when moving from word- to char-based representations: mostly conjunctions, determiners, and certain particles (CC, DT, WP). But there are several very frequent tags (NN, DT+NN, DT+JJ, VBP, and even PUNC) whose accuracy improves quite a lot. Then there are plural nouns (NNS, DT+NNS) where the char-based model really shines, which makes sense linguistically as plurality in Arabic is usually expressed by certain suffixes (“-wn/yn” for masc. plural, “-At” for fem. plural). The charbased model is thus especially good with frequent tags and infrequent words, which is understandable given that infrequent words typically belong to frequent open categories like nouns and verbs. 4.2 Effect of encoder depth Modern NMT systems use very deep architectures with up to 8 or 16 layers (Wu et al., 2016; Zhou et al., 2016). We would like to understand what kind of information different layers capture. Given a trained NMT model with multiple layers, we extract feature representations from the different layers in the encoder. Let ENCl i(s) denote the encoded representation of word wi after the l-th layer. We can vary l and train different classifiers to predict POS or morphological tags. Here we focus on the case of a 2-layer encoder-decoder model for simplicity (l 2 {1, 2}). Figure 6: POS tagging accuracy using representations from layers 0 (word vectors), 1, and 2, taken from encoders of different language pairs. Figure 6 shows POS tagging results using representations from different encoding layers across five language pairs. The general trend is that passing word vectors through the NMT encoder improves POS tagging, which can be explained by the contextual information contained in the representations after one layer. However, it turns out that representations from the 1st layer are better than those from the 2nd layer, at least for the purpose of capturing word structure. Figure 7 demonstrates that the same pattern holds for both word-based and char-based representations, on Arabic POS and morphological tagging. In all cases, layer 1 representations are better than layer 2 representations.4 In contrast, BLEU scores ac4We found this result to be also true in French, German, and Czech experiments; see appendix A.3. 865 Figure 7: POS and morphological tagging accuracy across layers. Layer 0: word vectors or charbased representations before the encoder; layers 1 and 2: representations after the 1st and 2nd layers. tually increase when training 2-layer vs. 1-layer models (+1.11/+0.56 BLEU for Arabic-Hebrew word/char-based models). Thus translation quality improves when adding layers but morphology quality degrades. 
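For readers who want to reproduce the layer-wise analysis, the following sketch shows one way to expose per-layer states from a 2-layer LSTM encoder so that a separate probe can be trained on layer 0 (word vectors), layer 1, and layer 2. It is a hedged PyTorch sketch with illustrative sizes, not the seq2seq-attn code used in the experiments.

```python
# Layer-wise feature extraction sketch (assumed PyTorch; illustrative, not the paper's code).
import torch
import torch.nn as nn

class TwoLayerEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=500, hid_dim=500):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)             # layer 0: word vectors
        self.lstm1 = nn.LSTM(emb_dim, hid_dim, batch_first=True)   # layer 1
        self.lstm2 = nn.LSTM(hid_dim, hid_dim, batch_first=True)   # layer 2

    def layer_states(self, word_ids):
        """Return {layer: states}, each of shape (batch, seq_len, dim)."""
        h0 = self.embed(word_ids)
        h1, _ = self.lstm1(h0)
        h2, _ = self.lstm2(h1)
        return {0: h0, 1: h1, 2: h2}

# Usage: probe each layer separately, e.g.
# states = encoder.layer_states(word_ids)
# layer1_feats = states[1].detach()   # frozen layer-1 features for the classifier
```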
Intuitively, it seems that lower layers of the network learn to represent word structure while higher layers are more focused on word meaning. A similar pattern was recently observed in a joint language-vision deep recurrent network (Gelderloos and Chrupała, 2016). 4.3 Effect of target language While translating from morphologically-rich languages is challenging, translating into such languages is even harder. For instance, our basic system obtains BLEU scores of 24.69/23.2 on Arabic/Czech to English, but only 13.37/13.9 on English to Arabic/Czech. How does the target language affect the learned source language representations? Does translating into a morphologically-rich language require more knowledge about source language morphology? In order to investigate these questions, we fix the source language and train NMT models using different target languages. For example, given an Arabic source side, we train Arabic-toEnglish/Hebrew/German systems. These target languages represent a morphologically-poor language (English), a morphologically-rich language with similar morphology to the source language (Hebrew), and a morphologically-rich language with different morphology (German). To make a fair comparison, we train the models on the intersection of the training data based on the source language. In this way the experimental setup is Figure 8: Effect of target language on representation quality of the Arabic source. completely identical: the models are trained on the same Arabic sentences with different translations. Figure 8 shows POS and morphological tagging accuracy of word-based representations from the NMT encoders, as well as corresponding BLEU scores. As expected, translating into English is easier than translating into the morphologicallyricher Hebrew and German, resulting in higher BLEU scores. Despite their similar morphological systems, translating Arabic to Hebrew is worse than Arabic to German, which can be attributed to the richer Hebrew morphology compared to German. POS and morphology accuracies share an intriguing pattern: the representations that are learned when translating into English are better for predicting POS or morphology than those learned when translating into German, which are in turn better than those learned when translating into Hebrew. This is remarkable given that English is a morphologically-poor language that does not display many of the morphological properties that are found in the Arabic source. In contrast, German and Hebrew have richer morphologies, so one could expect that translating into them would make the model learn more about morphology. A possible explanation for this phenomenon is that the Arabic-English model is simply better than the Arabic-Hebrew and Arabic-German models, as hinted by the BLEU scores in Table 2. The inherent difficulty in translating Arabic to Hebrew/German may affect the ability to learn good representations of word structure. To probe this more, we trained an Arabic-Arabic autoencoder on the same training data. We found that it learns to recreate the test sentences extremely well, with very high BLEU scores (Figure 8). However, its 866 word representations are actually inferior for the purpose of POS/morphological tagging. This implies that higher BLEU does not necessarily entail better morphological representations. In other words, a better translation model learns more informative representations, but only when it is actually learning to translate rather than merely memorizing the data as in the autoencoder case. 
We found this to be consistently true also for charbased experiments, and in other language pairs. 5 Decoder Analysis So far we only looked at the encoder. However, the decoder DEC is a crucial part in an MT system with access to both source and target sentences. In order to examine what the decoder learns about morphology, we first train an NMT system on the parallel corpus. Then, we use the trained model to encode a source sentence and extract features for words in the target sentence. These features are used to train a classifier on POS or morphological tagging on the target side.5 Note that in this case the decoder is given the correct target words oneby-one, similar to the usual NMT training regime. Table 3 (1st row) shows the results of using representations extracted with ENC and DEC from the Arabic-English and English-Arabic models, respectively. There is clearly a huge drop in representation quality with the decoder.6 At first, this drop seems correlated with lower BLEU scores in English to Arabic vs. Arabic to English. However, we observed similar low POS tagging accuracy using decoder representations from high-quality NMT models. For instance, the French-to-English system obtains 37.8 BLEU, but its decoder representations give a mere 54.26% accuracy on English POS tagging. As an alternative explanation for the poor quality of the decoder representations, consider the fundamental tasks of the two NMT modules: encoder and decoder. The encoder’s task is to create a generic, close to language-independent representation of the source sentence, as shown by recent evidence from multilingual NMT (Johnson et al., 2016). The decoder’s task is to use this representation to generate the target sentence in a specific 5In this section we only experiment with predicted tags as there are no parallel data with gold POS/morphological tags that we are aware of. 6Note that the decoder results are above a majority baseline of 20%, so the decoder is still learning something about the target language. POS Accuracy BLEU Attn ENC DEC Ar-En En-Ar 3 89.62 43.93 24.69 13.37 7 74.10 50.38 11.88 5.04 Table 3: POS tagging accuracy using encoder and decoder representations with/without attention. language. Presumably, it is sufficient for the decoder to learn a strong language model in order to produce morphologically-correct output, without learning much about morphology, while the encoder needs to learn quite a lot about source language morphology in order to create a good generic representation. In the following section we show that the attention mechanism also plays an important role in the division of labor between encoder and decoder. 5.1 Effect of attention Consider the role of the attention mechanism in learning useful representations: during decoding, the attention weights are combined with the decoder’s hidden states to generate the current translation. These two sources of information need to jointly point to the most relevant source word(s) and predict the next most likely word. Thus, the decoder puts significant emphasis on mapping back to the source sentence, which may come at the expense of obtaining a meaningful representation of the current word. We hypothesize that the attention mechanism hurts the quality of the target word representations learned by the decoder. To test this hypothesis, we train NMT models with and without attention and compare the quality of their learned representations. 
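To make the object of this hypothesis concrete, the sketch below spells out a generic additive attention step of the kind described above: attention weights over the encoder states yield a context vector, and the next-word prediction conditions on both this context vector and the decoder state. The sketch is illustrative only; layer names and shapes are assumptions, not the configuration used in the experiments.

```python
# Additive-attention step sketch (generic; illustrative names, not the experimental code).
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    def __init__(self, enc_dim, dec_dim, att_dim):
        super().__init__()
        self.W_enc = nn.Linear(enc_dim, att_dim, bias=False)
        self.W_dec = nn.Linear(dec_dim, att_dim, bias=False)
        self.v = nn.Linear(att_dim, 1, bias=False)

    def forward(self, enc_states, dec_state):
        # enc_states: (src_len, enc_dim); dec_state: (dec_dim,)
        scores = self.v(torch.tanh(self.W_enc(enc_states) + self.W_dec(dec_state))).squeeze(-1)
        alpha = torch.softmax(scores, dim=0)                       # weights over source words
        context = (alpha.unsqueeze(-1) * enc_states).sum(dim=0)    # weighted source summary
        return context, alpha

# The next word is then predicted from [dec_state; context]; removing attention leaves
# only dec_state, which forces the decoder to encode more information on its own.
```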
As Table 3 shows (compare 1st and 2nd rows), removing the attention mechanism decreases the quality of the encoder representations, but improves the quality of the decoder representations. Without the attention mechanism, the decoder is forced to learn more informative representations of the target language. 5.2 Effect of word representation We also conducted experiments to verify our findings regarding word-based versus character-based representations on the decoder side. By character representation we mean a character CNN on the input words. The decoder predictions are still done at the word-level, which enables us to use its hidden states as word representations. 867 Table 4 shows POS accuracy of word-based vs. char-based representations in the encoder and decoder. While char-based representations improve the encoder, they do not help the decoder. BLEU scores behave similarly: the char-based model leads to better translations in Arabic-to-English, but not in English-to-Arabic. A possible explanation for this phenomenon is that the decoder’s predictions are still done at word level even with the char-based model (which encodes the target input but not the output). In practice, this can lead to generating unknown words. Indeed, in Arabicto-English the char-based model reduces the number of generated unknown words in the MT test set by 25%, while in English-to-Arabic the number of unknown words remains roughly the same between word-based and char-based models. 6 Related Work Analysis of neural models The opacity of neural networks has motivated researchers to analyze such models in different ways. One line of work visualizes hidden unit activations in recurrent neural networks that are trained for a given task (Elman, 1991; Karpathy et al., 2015; K´ad´ar et al., 2016; Qian et al., 2016a). While such visualizations illuminate the inner workings of the network, they are often qualitative in nature and somewhat anecdotal. A different approach tries to provide a quantitative analysis by correlating parts of the neural network with linguistic properties, for example by training a classifier to predict features of interest. Different units have been used, from word embeddings (K¨ohn, 2015; Qian et al., 2016b), through LSTM gates or states (Qian et al., 2016a), to sentence embeddings (Adi et al., 2016). Our work is most similar to Shi et al. (2016), who use hidden vectors from a neural MT encoder to predict syntactic properties on the English source side. In contrast, we focus on representations in morphologically-rich languages and evaluate both source and target sides across several criteria. Vylomova et al. (2016) also analyze different representations for morphologically-rich languages in MT, but do not directly measure the quality of the learned representations. Word representations in MT Machine translation systems that deal with morphologically-rich languages resort to various techniques for representing morphological knowledge, such as word segmentation (Nieflen and Ney, 2000; Koehn and POS Accuracy BLEU ENC DEC Ar-En En-Ar Word 89.62 43.93 24.69 13.37 Char 95.35 44.54 28.42 13.00 Table 4: POS tagging accuracy using word-based and char-based encoder/decoder representations. Knight, 2003; Badr et al., 2008) and factored translation and reordering models (Koehn and Hoang, 2007; Durrani et al., 2014). 
Characters and other sub-word units have become increasingly popular in neural MT, although they had also been used in phrase-based MT for handling morphologically-rich (Luong et al., 2010) or closely related language pairs (Durrani et al., 2010; Nakov and Tiedemann, 2012). In neural MT, such units are obtained in a pre-processing step—e.g. by byte-pair encoding (Sennrich et al., 2016) or the word-piece model (Wu et al., 2016)— or learned during training using a character-based convolutional/recurrent sub-network (Costa-juss`a and Fonollosa, 2016; Luong and Manning, 2016; Vylomova et al., 2016). The latter approach has the advantage of keeping the original word boundaries without requiring pre- and post-processing. Here we focus on a character CNN which has been used in language modeling and machine translation (Kim et al., 2015; Belinkov and Glass, 2016; Costa-juss`a and Fonollosa, 2016; Jozefowicz et al., 2016; Sajjad et al., 2017). We evaluate the quality of different representations learned by an MT system augmented with a character CNN in terms of POS and morphological tagging, and contrast them with a purely word-based system. 7 Conclusion Neural networks have become ubiquitous in machine translation due to their elegant architecture and good performance. The representations they use for linguistic units are crucial for obtaining high-quality translation. In this work, we investigated how neural MT models learn word structure. We evaluated their representation quality on POS and morphological tagging in a number of languages. Our results lead to the following conclusions: • Character-based representations are better than word-based ones for learning morphology, especially in rare and unseen words. 868 • Lower layers of the neural network are better at capturing morphology, while deeper networks improve translation performance. We hypothesize that lower layers are more focused on word structure, while higher ones are focused on word meaning. • Translating into morphologically-poorer languages leads to better source-side representations. This is partly, but not completely, correlated with BLEU scores. • The attentional decoder learns impoverished representations that do not carry much information about morphology. These insights can guide further development of neural MT systems. For instance, jointly learning translation and morphology can possibly lead to better representations and improved translation. Our analysis indicates that this kind of approach should take into account factors such as the encoding layer and the type of word representation. Another area for future work is to extend the analysis to other word representations (e.g. byte-pair encoding), deeper networks, and more semantically-oriented tasks such as semantic rolelabeling or semantic parsing. Acknowledgments We would like to thank Helmut Schmid for providing the Tiger corpus, members of the MIT Spoken Language Systems group for helpful comments, and the three anonymous reviewers for their useful suggestions. This research was carried out in collaboration between the HBKU Qatar Computing Research Institute (QCRI) and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). References Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2016. Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks. arXiv preprint arXiv:1608.04207 . Ibrahim Badr, Rabih Zbib, and James Glass. 2008. Segmentation for English-to-Arabic Statistical Machine Translation. 
In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers. Columbus, Ohio, HLT-Short ’08, pages 153–156. http://dl.acm.org/citation.cfm?id=1557690.1557732. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural Machine Translation by Jointly Learning to Align and Translate. arXiv preprint arXiv:1409.0473 . Yonatan Belinkov and James Glass. 2016. Large-Scale Machine Translation between Arabic and Hebrew: Available Corpora and Initial Results. In Proceedings of the Workshop on Semitic Machine Translation. Association for Computational Linguistics, Austin, Texas, pages 7–12. Luisa Bentivogli, Arianna Bisazza, Mauro Cettolo, and Marcello Federico. 2016. Neural versus PhraseBased Machine Translation Quality: a Case Study. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 257–267. https://aclweb.org/anthology/D161025. Mauro Cettolo. 2016. An Arabic-Hebrew parallel corpus of TED talks. In Proceedings of the Workshop on Semitic Machine Translation. Association for Computational Linguistics, Austin, Texas, pages 1–6. Mauro Cettolo, Christian Girardi, and Marcello Federico. 2012. WIT3: Web Inventory of Transcribed and Translated Talks. In Proceedings of the 16th Conference of the European Association for Machine Translation (EAMT). Trento, Italy, pages 261– 268. Marta R. Costa-juss`a and Jos´e A. R. Fonollosa. 2016. Character-based Neural Machine Translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Berlin, Germany, pages 357–361. http://anthology.aclweb.org/P16-2058. Nadir Durrani, Philipp Koehn, Helmut Schmid, and Alexander Fraser. 2014. Investigating the Usefulness of Generalized Word Representations in SMT. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers. Dublin City University and Association for Computational Linguistics, Dublin, Ireland, pages 421–432. http://www.aclweb.org/anthology/C14-1041. Nadir Durrani, Hassan Sajjad, Alexander Fraser, and Helmut Schmid. 2010. Hindi-to-Urdu Machine Translation through Transliteration. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Uppsala, Sweden, pages 465– 474. http://www.aclweb.org/anthology/P10-1048. Jeffrey L Elman. 1991. Distributed representations, simple recurrent networks, and grammatical structure. Machine learning 7(2-3):195–225. 869 Lieke Gelderloos and Grzegorz Chrupała. 2016. From phonemes to images: levels of representation in a recurrent neural model of visually-grounded language learning. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, Osaka, Japan, pages 1309– 1319. http://aclweb.org/anthology/C16-1124. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735–1780. Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi´egas, Martin Wattenberg, Greg Corrado, et al. 2016. Google’s Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation. arXiv preprint arXiv:1611.04558 . Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. 
Exploring the Limits of Language Modeling. arXiv preprint arXiv:1602.02410 . ´Akos K´ad´ar, Grzegorz Chrupała, and Afra Alishahi. 2016. Representation of linguistic form and function in recurrent neural networks. arXiv preprint arXiv:1602.08952 . Andrej Karpathy, Justin Johnson, and Fei-Fei Li. 2015. Visualizing and Understanding Recurrent Networks. arXiv preprint arXiv:1506.02078 . Yoon Kim. 2016. Seq2seq-attn. https:// github.com/harvardnlp/seq2seq-attn. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2015. Character-aware Neural Language Models. arXiv preprint arXiv:1508.06615 . Diederik Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980 . Philipp Koehn and Hieu Hoang. 2007. Factored Translation Models. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL). Association for Computational Linguistics, Prague, Czech Republic, pages 868–876. http://www.aclweb.org/anthology/D07-1091. Philipp Koehn and Kevin Knight. 2003. Empirical Methods for Compound Splitting. In 10th Conference of the European Chapter of the Association for Computational Linguistics. pages 187–194. http://www.aclweb.org/anthology/E03-1076. Arne K¨ohn. 2015. What’s in an Embedding? Analyzing Word Embeddings through Multilingual Evaluation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 2067–2073. http://aclweb.org/anthology/D15-1246. Minh-Thang Luong and Christopher D. Manning. 2015. Stanford Neural Machine Translation Systems for Spoken Language Domains. In Proceedings of the International Workshop on Spoken Language Translation. Da Nang, Vietnam. Minh-Thang Luong and D. Christopher Manning. 2016. Achieving Open Vocabulary Neural Machine Translation with Hybrid Word-Character Models. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1054–1063. https://doi.org/10.18653/v1/P16-1100. Minh-Thang Luong, Preslav Nakov, and Min-Yen Kan. 2010. A Hybrid Morpheme-Word Representation for Machine Translation of Morphologically Rich Languages. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 148–157. http://aclweb.org/anthology/D101015. Thomas Mueller, Helmut Schmid, and Hinrich Sch¨utze. 2013. Efficient Higher-Order CRFs for Morphological Tagging. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Seattle, Washington, USA, pages 322– 332. http://www.aclweb.org/anthology/D13-1032. Preslav Nakov and J¨org Tiedemann. 2012. Combining Word-Level and Character-Level Models for Machine Translation Between Closely-Related Languages. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Jeju, Korea, ACL ’12, pages 301–305. http://aclweb.org/anthology/P122059. Sonja Nieflen and Hermann Ney. 2000. Improving SMT quality with morpho-syntactic analysis. In COLING 2000 Volume 2: The 18th International Conference on Computational Linguistics. http://www.aclweb.org/anthology/C00-2162. Arfath Pasha, Mohamed Al-Badrashiny, Mona Diab, Ahmed El Kholy, Ramy Eskander, Nizar Habash, Manoj Pooleery, Owen Rambow, and Ryan Roth. 2014. 
MADAMIRA: A Fast, Comprehensive Tool for Morphological Analysis and Disambiguation of Arabic. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14). Reykjavik, Iceland, pages 1094–1101. Peng Qian, Xipeng Qiu, and Xuanjing Huang. 2016a. Analyzing Linguistic Knowledge in Sequential Model of Sentence. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 826–835. https://aclweb.org/anthology/D16-1079. 870 Peng Qian, Xipeng Qiu, and Xuanjing Huang. 2016b. Investigating Language Universal and Specific Properties in Word Embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1478–1488. http://www.aclweb.org/anthology/P16-1140. Adwait Ratnaparkhi. 1998. Maximum Entropy Models for Natural Language Ambiguity Resolution. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA. Hassan Sajjad, Fahim Dalvi, Nadir Durrani, Ahmed Abdelali, Yonatan Belinkov, and Stephan Vogel. 2017. Challenging Language-Dependent Segmentation for Arabic: An Application to Machine Translation and Part-of-Speech Tagging. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Vancouver, Canada. Helmut Schmid. 1994. Part-of-Speech Tagging with Neural Networks. In Proceedings of the 15th International Conference on Computational Linguistics (Coling 1994). Coling 1994 Organizing Committee, Kyoto, Japan, pages 172–176. Helmut Schmid. 2000. LoPar: Design and Implementation. Bericht des Sonderforschungsbereiches “Sprachtheoretische Grundlagen fr die Computerlinguistik” 149, Institute for Computational Linguistics, University of Stuttgart. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1715–1725. http://www.aclweb.org/anthology/P16-1162. Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does String-Based Neural MT Learn Source Syntax? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 1526–1534. https://aclweb.org/anthology/D16-1159. Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to Sequence Learning with Neural Networks. In Advances in neural information processing systems. pages 3104–3112. Antonio Toral and V´ıctor M. S´anchez-Cartagena. 2017. A Multifaceted Evaluation of Neural versus Phrase-Based Machine Translation for 9 Language Directions. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers. Association for Computational Linguistics, Valencia, Spain, pages 1063–1073. http://aclweb.org/anthology/E17-1100. Ekaterina Vylomova, Trevor Cohn, Xuanli He, and Gholamreza Haffari. 2016. Word Representation Models for Morphologically Rich Languages in Neural Machine Translation. arXiv preprint arXiv:1606.04217 . Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. 
Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. arXiv preprint arXiv:1609.08144 . Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. 2016. Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation. Transactions of the Association for Computational Linguistics 4:371–383. https://transacl.org/ojs/index.php/tacl/article/view/863. 871 A Supplementary Material A.1 Training Details POS/Morphological classifier The classifier used for all prediction tasks is a feed-forward network with one hidden layer, dropout (⇢= 0.5), a ReLU non-linearity, and an output layer mapping to the tag set (followed by a Softmax). The size of the hidden layer is set to be identical to the size of the encoder’s hidden state (typically 500 dimensions). We use Adam (Kingma and Ba, 2014) with default parameters to minimize the cross-entropy objective. Training is run with mini-batches of size 16 and stopped once the loss on the dev set stops improving; we allow a patience of 5 epochs. Neural MT system We train a 2-layer LSTM encoder-decoder with attention. We use the seq2seq-attn implementation (Kim, 2016) with the following default settings: word vectors and LSTM states have 500 dimensions, SGD with initial learning rate of 1.0 and rate decay of 0.5, and dropout rate of 0.3. The characterbased model is a CNN with a highway network over characters (Kim et al., 2015) with 1000 feature maps and a kernel width of 6 characters. This model was found to be useful for translating morphologically-rich languages (Costa-juss`a and Fonollosa, 2016). The MT system is trained for 20 epochs, and the model with the best dev loss is used for extracting features for the classifier. A.2 Data and Taggers Datasets All of the translation models are trained on the Ted talks corpus included in WIT3 (Cettolo et al., 2012; Cettolo, 2016). Statistics about each language pair are available on the WIT3 website: https://wit3.fbk.eu. For experiments using gold tags, we used the Arabic Treebank for Arabic (with the versions and splits described in the MADAMIRA manual (Pasha et al., 2014)) and the Tiger corpus for German.7 POS and morphological taggers We used the following tools to annotate the MT corpora: MADAMIRA (Pasha et al., 2014) for Arabic POS and morphological tags, Tree-Tagger (Schmid, 1994) for Czech and French POS tags, LoPar (Schmid, 2000) for German POS and morphological tags, and MXPOST (Ratnaparkhi, 1998) for English POS tags. These tools are recommended 7http://www.ims.uni-stuttgart.de/ forschung/ressourcen/korpora/tiger.html on the Moses website.8 As mentioned before, our goal is not to achieve state-of-the-art results, but rather to study what different components of the NMT architecture learn about word morphology. Please refer to Mueller et al. (2013) for representative POS and morphological tagging accuracies. A.3 Supplementary Results We report here results that were omitted from the paper due to the space limit. Table 5 shows encoder results using different layers, languages, and representations (word/char-based). As noted in the paper, all the results consistently show that i) layer 1 performs better than layers 0 and 2; and ii) charbased representations are better than word-based for learning morphology. Table 6 shows that translating into a morphologically-poor language (English) leads to better source representations, and Table 7 provides additional decoder results. 
                        Layer 0     Layer 1     Layer 2
Word/Char (POS)
  De                    91.1/92.0   93.6/95.2   93.5/94.6
  Fr                    92.1/92.9   95.1/95.9   94.6/95.6
  Cz                    76.3/78.3   77.0/79.1   75.7/80.6
Word/Char (Morphology)
  De                    87.6/88.8   89.5/91.2   88.7/90.5
Table 5: POS and morphology accuracy on predicted tags using word- and char-based representations from different layers of *-to-En systems.

Source    Target: English   Target: Arabic   Target: Self
German    93.5              92.7             89.3
Czech     75.7              75.2             71.8
Table 6: Impact of changing the target language on POS tagging accuracy. Self = German/Czech in rows 1/2 respectively.

        En-De   En-Cz   De-En   Fr-En
POS     53.6    36.3    53.3    54.1
BLEU    23.4    13.9    29.6    37.8
Table 7: POS accuracy and BLEU using decoder representations from different language pairs.

8 http://www.statmt.org/moses/?n=Moses.ExternalTools
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 873–883 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1081 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 873–883 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1081 Context-Dependent Sentiment Analysis in User-Generated Videos Soujanya Poria Temasek Laboratories NTU, Singapore [email protected] Erik Cambria School of Computer Science and Engineering, NTU, Singapore [email protected] Devamanyu Hazarika Computer Science and Engineering, NITW, India [email protected] Navonil Mazumder Centro de Investigacin en Computacin, IPN, Mexico [email protected] Amir Zadeh Language Technologies Institute, CMU, USA [email protected] Louis-Philippe Morency Language Technologies Institute, CMU, USA [email protected] Abstract Multimodal sentiment analysis is a developing area of research, which involves the identification of sentiments in videos. Current research considers utterances as independent entities, i.e., ignores the interdependencies and relations among the utterances of a video. In this paper, we propose a LSTM-based model that enables utterances to capture contextual information from their surroundings in the same video, thus aiding the classification process. Our method shows 5-10% performance improvement over the state of the art and high robustness to generalizability. 1 Introduction Sentiment analysis is a ‘suitcase’ research problem that requires tackling many NLP sub-tasks, e.g., aspect extraction (Poria et al., 2016a), named entity recognition (Ma et al., 2016), concept extraction (Rajagopal et al., 2013), sarcasm detection (Poria et al., 2016b), personality recognition (Majumder et al., 2017), and more. Sentiment analysis can be performed at different granularity levels, e.g., subjectivity detection simply classifies data as either subjective (opinionated) or objective (neutral), while polarity detection focuses on determining whether subjective data indicate positive or negative sentiment. Emotion recognition further breaks down the inferred polarity into a set of emotions conveyed by the subjective data, e.g., positive sentiment can be caused by joy or anticipation, while negative sentiment can be caused by fear or disgust. Even though the primary focus of this paper is to classify sentiment in videos, we also show the performance of the proposed method for the finergrained task of emotion recognition. Emotion recognition and sentiment analysis have become a new trend in social media, helping users and companies to automatically extract the opinions expressed in user-generated content, especially videos. Thanks to the high availability of computers and smartphones, and the rapid rise of social media, consumers tend to record their reviews and opinions about products or films and upload them on social media platforms, such as YouTube and Facebook. Such videos often contain comparisons, which can aid prospective buyers make an informed decision. The primary advantage of analyzing videos over text is the surplus of behavioral cues present in vocal and visual modalities. The vocal modulations and facial expressions in the visual data, along with textual data, provide important cues to better identify affective states of the opinion holder. 
Thus, a combination of text and video data helps to create a more robust emotion and sentiment analysis model (Poria et al., 2017a). An utterance (Olson, 1977) is a unit of speech bound by breathes or pauses. Utterance-level sentiment analysis focuses on tagging every utterance of a video with a sentiment label (instead of assigning a unique label to the whole video). In particular, utterance-level sentiment analysis is useful to understand the sentiment dynamics of different aspects of the topics covered by the speaker throughout his/her speech. Recently, a number of approaches to multimodal sentiment analysis, producing interesting results, have been proposed (P´erez-Rosas et al., 2013; Wollmer et al., 2013; Poria et al., 2015). However, there are major issues that remain unaddressed. Not considering the relation and dependencies among the utterances is one of such issues. State-of-the-art approaches in this area treat utterances independently and ignore the order of utterances in a video (Cambria et al., 2017b). 873 Every utterance in a video is spoken at a distinct time and in a particular order. Thus, a video can be treated as a sequence of utterances. Like any other sequence classification problem (Collobert et al., 2011), sequential utterances of a video may largely be contextually correlated and, hence, influence each other’s sentiment distribution. In our paper, we give importance to the order in which utterances appear in a video. We treat surrounding utterances as the context of the utterance that is aimed to be classified. For example, the MOSI dataset (Zadeh et al., 2016) contains a video, in which a girl reviews the movie ‘Green Hornet’. At one point, she says “The Green Hornet did something similar”. Normally, doing something similar, i.e., monotonous or repetitive might be perceived as negative. However, the nearby utterances “It engages the audience more”, “they took a new spin on it”, “and I just loved it” indicate a positive context. The hypothesis of the independence of tokens is quite popular in information retrieval and data mining, e.g., bag-of-words model, but it has a lot limitations (Cambria and White, 2014). In this paper, we discard such an oversimplifying hypothesis and develop a framework based on long shortterm memory (LSTM) that takes a sequence of utterances as input and extracts contextual utterancelevel features. The other uncovered major issues in the literature are the role of speaker-dependent versus speaker-independent models, the impact of each modality across the dataset, and generalization ability of a multimodal sentiment classifier. Leaving these issues unaddressed has presented difficulties in effective comparison of different multimodal sentiment analysis methods. In this work, we address all of these issues. Our model preserves the sequential order of utterances and enables consecutive utterances to share information, thus providing contextual information to the utterance-level sentiment classification process. Experimental results show that the proposed framework has outperformed the state of the art on three benchmark datasets by 5-10%. The paper is organized as follows: Section 2 provides a brief literature review on multimodal sentiment analysis; Section 3 describes the proposed method in detail; experimental results and discussion are shown in Section 4; finally, Section 5 concludes the paper. 
2 Related Work The opportunity to capture people’s opinions has raised growing interest both within the scientific community, for the new research challenges, and in the business world, due to the remarkable benefits to be had from financial market prediction. Text-based sentiment analysis systems can be broadly categorized into knowledge-based and statistics-based approaches (Cambria et al., 2017a). While the use of knowledge bases was initially more popular for the identification of polarity in text (Cambria et al., 2016; Poria et al., 2016c), sentiment analysis researchers have recently been using statistics-based approaches, with a special focus on supervised statistical methods (Socher et al., 2013; Oneto et al., 2016). In 1974, Ekman (Ekman, 1974) carried out extensive studies on facial expressions which showed that universal facial expressions are able to provide sufficient clues to detect emotions. Recent studies on speech-based emotion analysis (Datcu and Rothkrantz, 2008) have focused on identifying relevant acoustic features, such as fundamental frequency (pitch), intensity of utterance, bandwidth, and duration. As for fusing audio and visual modalities for emotion recognition, two of the early works were (De Silva et al., 1997) and (Chen et al., 1998). Both works showed that a bimodal system yielded a higher accuracy than any unimodal system. More recent research on audio-visual fusion for emotion recognition has been conducted at either feature level (Kessous et al., 2010) or decision level (Schuller, 2011). While there are many research papers on audio-visual fusion for emotion recognition, only a few have been devoted to multimodal emotion or sentiment analysis using textual clues along with visual and audio modalities. (Wollmer et al., 2013) and (Rozgic et al., 2012) fused information from audio, visual, and textual modalities to extract emotion and sentiment. Poria et al. (Poria et al., 2015, 2016d, 2017b) extracted audio, visual and textual features using convolutional neural network (CNN); concatenated those features and employed multiple kernel learning (MKL) for final sentiment classification. (Metallinou et al., 2008) and (Eyben et al., 2010a) fused audio and textual modalities for emotion recognition. Both approaches relied on a featurelevel fusion. (Wu and Liang, 2011) fused audio and textual clues at decision level. 874 3 Method In this work, we propose a LSTM network that takes as input the sequence of utterances in a video and extracts contextual unimodal and multimodal features by modeling the dependencies among the input utterances. M number of videos, comprising of its constituent utterances, serve as the input. We represent the dataset as U = u1,u2,u3...,uM and each ui = ui,1,ui,2,...,ui,Li where Li is the number of utterances in video ui. Below, we present an overview of the proposed method in two major steps. A. Context-Independent Unimodal UtteranceLevel Feature Extraction Firstly, the unimodal features are extracted without considering the contextual information of the utterances (Section 3.1). B. Contextual Unimodal and Multimodal Classification Secondly, the context-independent unimodal features (from Step A) are fed into a LSTM network (termed contextual LSTM) that allows consecutive utterances in a video to share information in the feature extraction process (Section 3.2). We experimentally show that this proposed framework improves the performance of utterance-level sentiment classification over traditional frameworks. 
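As a minimal sketch of this two-step data flow, the snippet below builds the per-video utterance matrices of Step A and hands them to Step B; the placeholder extractors and toy utterances are hypothetical stand-ins for the real extractors described in Sections 3.1 and 3.2.

```python
import numpy as np

# Hypothetical stand-ins for the Step-A extractors (text-CNN, openSMILE and
# 3D-CNN in the paper); here they just return fixed-size random vectors.
def text_features(utterance):   return np.random.randn(100)
def audio_features(utterance):  return np.random.randn(100)
def visual_features(utterance): return np.random.randn(100)

def step_a(video_utterances, extractor):
    """Step A: context-independent features, one L_i x k matrix per video."""
    return np.stack([extractor(u) for u in video_utterances])

# U = {u_1, ..., u_M}; each u_i is the ordered list of utterances of video i.
U = [["It engages the audience more", "and I just loved it"],
     ["The Green Hornet did something similar"]]
X_text = [step_a(u_i, text_features) for u_i in U]

# Step B (Section 3.2): each X_i is fed, in utterance order, to a contextual
# LSTM so that neighbouring utterances can share information.
print([x.shape for x in X_text])   # [(2, 100), (1, 100)]
```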
3.1 Extracting Context-Independent Unimodal Features Initially, the unimodal features are extracted from each utterance separately, i.e., we do not consider the contextual relation and dependency among the utterances. Below, we explain the textual, audio, and visual feature extraction methods. 3.1.1 text-CNN: Textual Features Extraction The source of textual modality is the transcription of the spoken words. For extracting features from the textual modality, we use a CNN (Karpathy et al., 2014). In particular, we first represent each utterance as the concatenation of vectors of the constituent words. These vectors are the publicly available 300-dimensional word2vec vectors trained on 100 billion words from Google News (Mikolov et al., 2013). The convolution kernels are thus applied to these concatenated word vectors instead of individual words. Each utterance is wrapped to a window of 50 words which serves as the input to the CNN. The CNN has two convolutional layers; the first layer has two kernels of size 3 and 4, with 50 feature maps each and the second layer has a kernel of size 2 with 100 feature maps. The convolution layers are interleaved with max-pooling layers of window 2 × 2. This is followed by a fully connected layer of size 500 and softmax output. We use a rectified linear unit (ReLU) (Teh and Hinton, 2001) as the activation function. The activation values of the fullyconnected layer are taken as the features of utterances for text modality. The convolution of the CNN over the utterance learns abstract representations of the phrases equipped with implicit semantic information, which with each successive layer spans over increasing number of words and ultimately the entire utterance. 3.1.2 openSMILE: Audio Feature Extraction Audio features are extracted at 30 Hz frame-rate and a sliding window of 100 ms. To compute the features, we use openSMILE (Eyben et al., 2010b), an open-source software that automatically extracts audio features such as pitch and voice intensity. Voice normalization is performed and voice intensity is thresholded to identify samples with and without voice. Z-standardization is used to perform voice normalization. The features extracted by openSMILE consist of several low-level descriptors (LLD), e.g., MFCC, voice intensity, pitch, and their statistics, e.g., mean, root quadratic mean, etc. Specifically, we use IS13-ComParE configuration file in openSMILE. Taking into account all functionals of each LLD, we obtained 6373 features. 3.1.3 3D-CNN: Visual Feature Extraction We use 3D-CNN (Ji et al., 2013) to obtain visual features from the video. We hypothesize that 3D-CNN will not only be able to learn relevant features from each frame, but will also learn the changes among given number of consecutive frames. In the past, 3D-CNN has been successfully applied to object classification on tridimensional data (Ji et al., 2013). Its ability to achieve stateof-the-art results motivated us to adopt it in our framework. 875 Let vid ∈Rc×f×h×w be a video, where c = number of channels in an image (in our case c = 3, since we consider only RGB images), f = number of frames, h = height of the frames, and w = width of the frames. Again, we consider the 3D convolutional filter filt ∈Rfm×c×fd×fh×fw, where fm = number of feature maps, c = number of channels, fd = number of frames (in other words depth of the filter), fh = height of the filter, and fw = width of the filter. 
Similar to 2D-CNN, the filter $filt$ slides across the video $vid$ and generates the output $conv_{out} \in \mathbb{R}^{f_m \times c \times (f-f_d+1) \times (h-f_h+1) \times (w-f_w+1)}$. Next, we apply max pooling to $conv_{out}$ to select only the relevant features. The pooling is applied only to the last three dimensions of the array $conv_{out}$. In our experiments, we obtained the best results with 32 feature maps ($f_m$) and a filter size of $5 \times 5 \times 5$ (i.e., $f_d \times f_h \times f_w$). In other words, the dimension of the filter is $32 \times 3 \times 5 \times 5 \times 5$ ($f_m \times c \times f_d \times f_h \times f_w$). Subsequently, we apply max pooling to the output of the convolution operation, with a window size of $3 \times 3 \times 3$. This is followed by a dense layer of size 300 and a softmax. The activation values of this dense layer are finally used as the video features for each utterance.

3.2 Context-Dependent Feature Extraction

In sequence classification, the classification of each member depends on the other members. Utterances in a video maintain a sequence. We hypothesize that, within a video, there is a high probability of inter-utterance dependency with respect to their sentimental clues. In particular, we claim that, when classifying one utterance, other utterances can provide important contextual information. This calls for a model that takes into account such inter-dependencies and the effect they might have on the target utterance. To capture this flow of informational triggers across utterances, we use an LSTM-based recurrent neural network (RNN) scheme (Gers, 2001).

3.2.1 Long Short-Term Memory

LSTM (Hochreiter and Schmidhuber, 1997) is a kind of RNN, an extension of the conventional feedforward neural network. Specifically, LSTM cells are capable of modeling long-range dependencies, which other traditional RNNs fail to do because of the vanishing gradient problem. Each LSTM cell consists of an input gate $i$, an output gate $o$, and a forget gate $f$, which control the flow of information. Current research (Zhou et al., 2016) indicates the benefit of using such networks to incorporate contextual information in the classification process. In our case, the LSTM network serves the purpose of context-dependent feature extraction by modeling relations among utterances. We term our architecture 'contextual LSTM'. We propose several architectural variants of it later in the paper.

3.2.2 Contextual LSTM Architecture

Let the unimodal features have dimension $k$; each utterance is thus represented by a feature vector $x_{i,t} \in \mathbb{R}^k$, where $t$ indexes the $t$-th utterance of video $i$. For a video, we collect the vectors of all its utterances to get $X_i = [x_{i,1}, x_{i,2}, \ldots, x_{i,L_i}] \in \mathbb{R}^{L_i \times k}$, where $L_i$ is the number of utterances in the video. This matrix $X_i$ serves as the input to the LSTM. Figure 1 demonstrates the functioning of this LSTM module. In the procedure getLSTMFeatures($X_i$) of Algorithm 1, each utterance $x_{i,t}$ is passed through an LSTM cell using the equations in lines 32 to 37. The output of the LSTM cell $h_{i,t}$ is then fed into a dense layer and finally into a softmax layer (lines 38 to 39). The activations of the dense layer $z_{i,t}$ are used as the context-dependent features of the contextual LSTM.

3.2.3 Training

The training of the LSTM network is performed using the categorical cross-entropy on each utterance's softmax output per video, i.e.,

$$\text{loss} = -\frac{1}{\sum_{i=1}^{M} L_i} \sum_{i=1}^{M} \sum_{j=1}^{L_i} \sum_{c=1}^{C} y_{i,c}^{j} \, \log_2\big(\hat{y}_{i,c}^{j}\big),$$

where $M$ = total number of videos, $L_i$ = number of utterances for the $i$-th video, $y_{i,c}^{j}$ = original output of class $c$, and $\hat{y}_{i,c}^{j}$ = predicted output for the $j$-th utterance of the $i$-th video.
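The paper does not name the toolkit used, so the following PyTorch sketch is only an illustration of the contextual LSTM and the loss above, with arbitrary layer sizes; padded utterances are removed from the loss with a mask, as described next.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextualLSTM(nn.Module):
    """sc-LSTM-style sketch: a unidirectional LSTM over the utterances of one
    video, a dense (ReLU) layer whose activations serve as the context-dependent
    features z_{i,t}, and a softmax classifier on top."""
    def __init__(self, feat_dim=100, hidden_dim=64, dense_dim=32, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.dense = nn.Linear(hidden_dim, dense_dim)
        self.out = nn.Linear(dense_dim, n_classes)

    def forward(self, x):                      # x: (batch, max_len, feat_dim)
        h, _ = self.lstm(x)                    # (batch, max_len, hidden_dim)
        z = torch.relu(self.dense(h))          # context-dependent features
        logits = self.out(z)                   # softmax is applied inside the loss
        return logits, z

def masked_cross_entropy(logits, labels, mask):
    """Categorical cross-entropy averaged over real (non-padded) utterances."""
    per_utt = F.cross_entropy(logits.flatten(0, 1), labels.flatten(),
                              reduction="none")
    per_utt = per_utt * mask.flatten()
    return per_utt.sum() / mask.sum()

# Toy batch: 3 videos padded to 10 utterances of 100-d features each.
x = torch.randn(3, 10, 100)
labels = torch.randint(0, 2, (3, 10))
mask = torch.ones(3, 10)                       # set to 0 for padded utterances
model = ContextualLSTM()
logits, features = model(x)
loss = masked_cross_entropy(logits, labels, mask)
loss.backward()
```

Note that the standard cross-entropy uses the natural logarithm rather than log base 2; the two differ only by a constant factor and therefore yield the same optimum.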
As a regularization method, dropout between the LSTM cell and dense layer is introduced to avoid overfitting. As the videos do not have the same number of utterances, padding is introduced to serve as neutral utterances. To avoid the proliferation of noise within the network, bit masking is done on these padded utterances to eliminate their effect in the network. Hyper-parameters tuning is done on the training set by splitting it into train and validation components with 80/20% split. 876 Softmax Output Dense Layer Output Contextual features sc-LSTM Utterance 1 Utterance 2 Utterance n LSTM LSTM LSTM Utterance 3 LSTM ... ... ... ... ... Figure 1: Contextual LSTM network: input features are passed through an unidirectional LSTM layer, followed by a dense and then a softmax layer. The dense layer activations serve as the output features. RMSprop has been used as the optimizer which is known to resolve Adagrad’s radically diminishing learning rates (Duchi et al., 2011). After feeding the training set to the network, the test set is passed through it to generate their contextdependent features. These features are finally passed through an SVM for the final classification. Different Network Architectures We consider the following variants of the contextual LSTM architecture in our experiments. sc-LSTM This variant of the contextual LSTM architecture consists of unidirectional LSTM cells. As this is the simple variant of the contextual LSTM, we termed it as simple contextual LSTM (sc-LSTM1). h-LSTM We also investigate an architecture where the dense layer after the LSTM cell is omitted. Thus, the output of the LSTM cell hi,t provides our context-dependent features and the softmax layer provides the classification. We call this architecture hidden-LSTM (h-LSTM). bc-LSTM Bi-directional LSTMs are two unidirectional LSTMs stacked together having opposite directions. Thus, an utterance can get information from utterances occurring before and after itself in the video. We replaced the regular LSTM with a bi-directional LSTM and named the resulting architecture as bi-directional contextual LSTM (bc-LSTM). The training process of this architecture is similar to sc-LSTM. 1http://github.com/senticnet/sc-lstm uni-SVM In this setting, we first obtain the unimodal features as explained in Section 3.1, concatenate them and then send to an SVM for the final classification. It should be noted that using a gated recurrent unit (GRU) instead of LSTM did not improve the performance. 3.3 Fusion of Modalities We accomplish multimodal fusion through two different frameworks, described below. 3.3.1 Non-hierarchical Framework In this framework, we concatenate contextindependent unimodal features (from Section 3.1) and feed that into the contextual LSTM networks, i.e., sc-LSTM, bc-LSTM, and h-LSTM. 3.3.2 Hierarchical Framework Contextual unimodal features can further improve performance of the multimodal fusion framework explained in Section 3.3.1. To accomplish this, we propose a hierarchical deep network which consists of two levels. Level-1 Context-independent unimodal features (from Section 3.1) are fed to the proposed LSTM network to get context-sensitive unimodal feature representations for each utterance. Individual LSTM networks are used for each modality. Level-2 This level consists of a contextual LSTM network similar to Level-1 but independent in training and computation. 
Output from each LSTM network in Level-1 are concatenated and fed into this LSTM network, thus providing an inherent fusion scheme (see Figure 2). 877 Figure 2: Hierarchical architecture for extracting contextdependent multimodal utterance features (see Figure 1 for the LSTM module). The performance of the second level banks on the quality of the features from the previous level, with better features aiding the fusion process. Algorithm 1 describes the overall computation for utterance classification. For the hierarchical framework, we train Level-1 and Level-2 successively but separately, i.e., the training is not performed “end-to-end”. Weight Bias Wi, Wf, Wc, Wo ∈Rd×k bi, bf, bc, bo ∈Rd Pi, Pf, Pc, PoVo ∈Rd×d bz ∈Rm Wz ∈Rm×d bsft ∈Rc Wsft ∈Rc×m Table 1: Summary of notations used in Algorithm 1. Legenda: d = dimension of hidden unit; k = dimension of input vectors to LSTM layer; c = number of classes. 4 Experiments 4.1 Dataset details Most of the research in multimodal sentiment analysis is performed on datasets with speaker overlap in train and test splits. Because each individual has a unique way of expressing emotions and sentiments, however, finding generic, personindependent features for sentiment analysis is very important. Algorithm 1 Proposed Architecture 1: procedure TRAINARCHITECTURE( U, V) 2: Train context-independent models with U 3: for i:[1,M] do ▷extract baseline features 4: for j:[1,Li] do 5: xi,j ←TextFeatures(ui,j) 6: x ′ i,j ←V ideoFeatures(ui,j) 7: x” i,j ←AudioFeatures(ui,j) 8: Unimodal: 9: Train LSTM at Level-1 with X, X ′andX”. 10: for i:[1,M] do ▷unimodal features 11: Zi ←getLSTMFeatures(Xi) 12: Z ′ i ←getLSTMFeatures(X ′ i) 13: Z” i ←getLSTMFeatures(X” i ) 14: Multimodal: 15: for i:[1,M] do 16: for j:[1,Li] do 17: if Non-hierarchical fusion then 18: x∗ i,j ←(xi,j∣∣x ′ i,j∣∣x” i,j) ▷ concatenation 19: else 20: if Hierarchical fusion then 21: x∗ i,j ←(zi,j∣∣z ′ i,j∣∣z” i,j) ▷ concatenation 22: Train LSTM at Level-2 with X∗. 23: for i:[1,M] do ▷multimodal features 24: Z∗ i ←getLSTMFeatures(X∗ i ) 25: testArchitecture( V) 26: return Z∗ 27: procedure TESTARCHITECTURE( V) 28: Similar to training phase. V is passed through the learnt models to get the features and classification outputs. Table 1 shows the trainable parameters. 29: procedure GETLSTMFEATURES(Xi) ▷for ith video 30: Zi ←φ 31: for t:[1,Li] do ▷Table 1 provides notation 32: it ←σ(Wixi,t + Pi.ht−1 + bi) 33: ̃ Ct ←tanh(Wcxi,t + Pcht−1 + bc) 34: ft ←σ(Wfxt + Pfht−1 + bf) 35: Ct ←it ∗̃ Ct + ft ∗Ct−1 36: ot ←σ(Woxt + Poht−1 + VoCt + bo) 37: ht ←ot ∗tanh(Ct) ▷output of lstm cell 38: zt ←ReLU(Wzht + bz) ▷dense layer 39: prediction ←softmax(Wsftzt + bsft) 40: Zi ←Zi ∪zt 41: return Zi In real-world applications, the model should be robust to person idiosyncrasy but it is very difficult to come up with a generalized model from the behavior of a limited number of individuals. To this end, we perform person-independent experiments to study generalization of our model, i.e., our train/test splits of the datasets are completely disjoint with respect to speakers. Multimodal Sentiment Analysis Datasets MOSI The MOSI dataset (Zadeh et al., 2016) is a dataset rich in sentimental expressions where 93 people review topics in English. The videos 878 are segmented with each segments sentiment label scored between +3 (strong positive) to -3 (strong negative) by 5 annotators. We took the average of these five annotations as the sentiment polarity and, hence, considered only two classes (positive and negative). 
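The recurrence in getLSTMFeatures (Algorithm 1, lines 29-41) can be transcribed almost literally into NumPy. The sketch below does so for a single video; the toy dimensions and randomly initialized parameters are stand-ins for the trained weights summarized in Table 1.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def get_lstm_features(X, P):
    """Transcription of getLSTMFeatures for one video.
    X: (L_i, k) utterance features; P: parameters named as in Table 1."""
    d = P["Wi"].shape[0]
    h = np.zeros(d)
    C = np.zeros(d)
    Z = []
    for x in X:
        i = sigmoid(P["Wi"] @ x + P["Pi"] @ h + P["bi"])                 # line 32
        C_tilde = np.tanh(P["Wc"] @ x + P["Pc"] @ h + P["bc"])           # line 33
        f = sigmoid(P["Wf"] @ x + P["Pf"] @ h + P["bf"])                 # line 34
        C = i * C_tilde + f * C                                          # line 35
        o = sigmoid(P["Wo"] @ x + P["Po"] @ h + P["Vo"] @ C + P["bo"])   # line 36
        h = o * np.tanh(C)                                               # line 37
        z = np.maximum(0.0, P["Wz"] @ h + P["bz"])                       # line 38
        pred = softmax(P["Wsft"] @ z + P["bsft"])                        # line 39
        Z.append(z)
    return np.stack(Z)

# Random parameters for a toy configuration (d=8 hidden, k=10 input, m=6, c=2).
rng = np.random.default_rng(0)
d, k, m, c = 8, 10, 6, 2
P = {n: rng.normal(size=(d, k)) for n in ["Wi", "Wf", "Wc", "Wo"]}
P.update({n: rng.normal(size=(d, d)) for n in ["Pi", "Pf", "Pc", "Po", "Vo"]})
P.update({n: np.zeros(d) for n in ["bi", "bf", "bc", "bo"]})
P.update({"Wz": rng.normal(size=(m, d)), "bz": np.zeros(m),
          "Wsft": rng.normal(size=(c, m)), "bsft": np.zeros(c)})
X = rng.normal(size=(5, k))          # a video with 5 utterances
Z = get_lstm_features(X, P)          # (5, m) context-dependent features
print(Z.shape)
```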
The train/validation set consists of the first 62 individuals in the dataset. The test set contains opinionated videos by the remaining 31 speakers. In particular, 1447 and 752 utterances are used in training and test, respectively.

MOUD This dataset (Pérez-Rosas et al., 2013) contains product review videos provided by 55 persons. The reviews are in Spanish (we used the Google Translate API² to obtain English transcripts). The utterances are labeled as either positive, negative, or neutral. However, we drop the neutral label to maintain consistency with previous work. Out of the 79 videos in the dataset, 59 videos are considered in the train/val set.

Multimodal Emotion Recognition Datasets

IEMOCAP The IEMOCAP dataset (Busso et al., 2008) contains the acts of 10 speakers in a two-way conversation, segmented into utterances. The medium of the conversations in all the videos is English. The database contains the following categorical labels: anger, happiness, sadness, neutral, excitement, frustration, fear, surprise, and other, but we take only the first four so as to compare with the state of the art (Rozgic et al., 2012). Videos by the first 8 speakers are considered in the training set.

Table 2 provides the train/test split details of all the datasets. It also provides the cross-dataset split, where MOSI and MOUD are used for training and testing, respectively. Applying the proposed model to reviews in different languages allows us to analyze its robustness and generalizability.

4.1.1 Characteristics of the Datasets

In order to evaluate the robustness of our proposed method, we employ it on multiple datasets of different kinds. Both MOSI and MOUD are used for the sentiment classification task, but they consist of review videos spoken in different languages, i.e., English and Spanish, respectively.

² http://translate.google.com

The IEMOCAP dataset is different from MOSI and MOUD since it is annotated with emotion labels. Apart from this, the IEMOCAP dataset was created using a different method than MOSI and MOUD. These two datasets were developed by crawling consumers' spontaneous online product review videos from popular social websites, which were later labeled with sentiment labels. To curate the IEMOCAP dataset, instead, subjects were provided affect-related scripts and asked to act. As pointed out by Poria et al. (2017a), acted datasets like IEMOCAP can suffer from biased labeling and incorrect acting, which can in turn hurt the generalizability of models trained on them.

Dataset        Train (utterances / videos)    Test (utterances / videos)
IEMOCAP        4290 / 120                     1208 / 31
MOSI           1447 / 62                      752 / 31
MOUD           322 / 59                       115 / 20
MOSI → MOUD    2199 / 93                      437 / 79

Table 2: Person-independent train/test split details of each dataset (≈70/30% split). Legenda: X → Y represents train: X and test: Y; validation sets are extracted from the shuffled training sets using an 80/20% train/val ratio.

It should be noted that the datasets' individual configurations and splits are the same throughout all the experiments (i.e., context-independent unimodal feature extraction, and LSTM-based context-dependent unimodal and multimodal feature extraction and classification).

4.2 Performance of Different Models

In this section, we present the unimodal and multimodal sentiment analysis performance of the different LSTM network variants described in Section 3.2.3, together with a comparison against the state of the art.
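Before turning to the results, note that the person-independent splits of Table 2 reduce to grouping videos by speaker prior to splitting. The sketch below shows one way to do this; the (speaker id, video) record format is a hypothetical stand-in for the datasets' actual metadata.

```python
def speaker_independent_split(videos, n_train_speakers):
    """Split videos so that no speaker appears in both train and test.
    `videos` is a list of (speaker_id, video_data) pairs, ordered as in the
    dataset (e.g., the first 62 MOSI speakers go to train/validation)."""
    speakers = []
    for speaker_id, _ in videos:
        if speaker_id not in speakers:       # preserve first-seen order
            speakers.append(speaker_id)
    train_speakers = set(speakers[:n_train_speakers])
    train = [v for s, v in videos if s in train_speakers]
    test = [v for s, v in videos if s not in train_speakers]
    return train, test

# Toy example; for MOSI one would use n_train_speakers=62.
toy = [("s1", "v1"), ("s1", "v2"), ("s2", "v3"), ("s3", "v4")]
train, test = speaker_independent_split(toy, n_train_speakers=2)
print(train, test)   # ['v1', 'v2', 'v3'] ['v4']
```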
Hierarchical vs. Non-hierarchical Fusion Framework

As expected, trained contextual unimodal features help the hierarchical fusion framework outperform the non-hierarchical framework. Table 3 demonstrates this by comparing the hierarchical and non-hierarchical frameworks using the bc-LSTM network. For this reason, the rest of the analysis relies only on the hierarchical framework. The non-hierarchical model still outperforms the baseline uni-SVM, which confirms that it is the context-sensitive learning paradigm that plays the key role in improving performance over the baseline.

Comparison of Different Network Variants

Both sc-LSTM and bc-LSTM perform quite well on the multimodal emotion recognition and sentiment analysis datasets. Since bc-LSTM has access to both the preceding and the following information in the utterance sequence, it performs consistently better than sc-LSTM on all datasets. The usefulness of the dense layer in increasing performance is evident from the experimental results shown in Table 3. The performance improvement is in the range of 0.3% to 1.5% on the MOSI and MOUD datasets. On the IEMOCAP dataset, the performance improvement of bc-LSTM and sc-LSTM over h-LSTM is in the range of 1% to 5%.

Comparison with the Baselines

Every LSTM network variant outperforms the baseline uni-SVM on all datasets by a margin of 2% to 5% (see Table 3). These results support our initial hypothesis that modeling the contextual dependencies among utterances (which uni-SVM cannot do) improves the classification. The larger performance improvement on the IEMOCAP dataset indicates the necessity of modeling long-range dependencies among the utterances, as continuous emotion recognition is a multi-class sequential problem in which a person does not frequently change emotions (Wöllmer et al., 2008). We have implemented and compared against the current state-of-the-art approach proposed by Poria et al. (2015). In their method, they extracted features from each modality and fed these to an MKL classifier. However, they did not conduct their experiments in a speaker-independent manner and did not consider the contextual relations among the utterances. In Table 3, the results in bold are statistically significant (p < 0.05) compared to uni-SVM. Experimental results in Table 4 show that the proposed method outperforms Poria et al. (2015) by a significant margin. For the emotion recognition task, we compared our method with the current state of the art (Rozgic et al., 2012), who extracted features in a similar fashion to Poria et al. (2015), although they used SVM trees (Yuan et al., 2006) for the fusion.

4.3 Importance of the Modalities

As expected, in all experiments, bimodal and trimodal models outperform the unimodal models. Overall, the audio modality performs better than the visual modality on all datasets. On the MOSI and IEMOCAP datasets, the textual classifier achieves the best performance among the unimodal classifiers. On the IEMOCAP dataset, the unimodal and multimodal classifiers perform poorly at classifying neutral utterances. The textual modality, combined with the non-textual modalities, boosts performance on IEMOCAP by a large margin; the margin is smaller on the other datasets. On the MOUD dataset, the textual modality performs worse than the audio modality due to the noise introduced by translating the Spanish utterances into English. Using Spanish word vectors³ in text-CNN results in an improvement of 10%.
Nonetheless, we report results using these translated utterances as opposed to utterances trained on Spanish word vectors, in order to make fair comparison with (Poria et al., 2015). 4.4 Generalization of the Models To test the generalizability of the models, we have trained our framework on complete MOSI dataset and tested on MOUD dataset (Table 5). The performance was poor for audio and textual modality as the MOUD dataset is in Spanish while the model is trained on MOSI dataset, which is in English language. However, notably the visual modality performs better than the other two modalities in this experiment, which means that in cross-lingual scenarios facial expressions carry more generalized, robust information than audio and textual modalities. We could not carry out a similar experiment for emotion recognition as no other utterance-level dataset apart from the IEMOCAP was available at the time of our experiments. 4.5 Qualitative Analysis The need for considering context dependency (see Section 1) is of prime importance for utterancelevel sentiment classification. For example, in the utterance “What would have been a better name for the movie”, the speaker is attempting to comment the quality of the movie by giving an appropriate name. However, the sentiment is expressed implicitly and requires the contextual knowledge about the mood of the speaker and his/her general opinion about the film. The baseline unimodalSVM and state of the art fail to classify this utterance correctly4. 3http://crscardellino.me/SBWCE 4RNTN classifies it as neutral. It can be seen here http://nlp.stanford.edu:8080/sentiment/rntnDemo.html 880 Modality MOSI MOUD IEMOCAP hierarchical (%) non-hier (%) hierarchical (%) non-hier (%) hierarchical (%) non-hier (%) uni-SVM h-LSTM sc-LSTM bc-LSTM uni-SVM h-LSTM sc-LSTM bc-LSTM uni-SVM h-LSTM sc-LSTM bc-LSTM T 75.5 77.4 77.6 78.1 49.5 50.1 51.3 52.1 65.5 68.9 71.4 73.6 V 53.1 55.2 55.6 55.8 46.3 48.0 48.2 48.5 47.0 52.0 52.6 53.2 A 58.5 59.6 59.9 60.3 51.5 56.3 57.5 59.9 52.9 54.4 55.2 57.1 T + V 76.7 78.9 79.9 80.2 78.5 50.2 50.6 51.3 52.2 50.9 68.5 70.3 72.3 75.4 73.2 T + A 75.8 78.3 78.8 79.3 78.2 53.1 56.9 57.4 60.4 55.5 70.1 74.1 75.2 75.6 74.5 V + A 58.6 61.5 61.8 62.1 60.3 62.8 62.9 64.4 65.3 64.2 67.6 67.8 68.2 68.9 67.3 T + V + A 77.9 78.1 78.6 80.3 78.1 66.1 66.4 67.3 68.1 67.0 72.5 73.3 74.2 76.1 73.5 Table 3: Comparison of models mentioned in Section 3.2.3. The table reports the accuracy of classification. Legenda: non-hier ←Non-hierarchical bc-lstm. For remaining fusion, hierarchical fusion framework is used (Section 3.3.2). Modality Sentiment (%) Emotion on IEMOCAP (%) MOSI MOUD angry happy sad neutral T 78.12 52.17 76.07 78.97 76.23 67.44 V 55.80 48.58 53.15 58.15 55.49 51.26 A 60.31 59.99 58.37 60.45 61.35 52.31 T + V 80.22 52.23 77.24 78.99 78.35 68.15 T + A 79.33 60.39 77.15 79.10 78.10 69.14 V + A 62.17 65.36 68.21 71.97 70.35 62.37 A + V + T 80.30 68.11 77.98 79.31 78.30 69.92 State-of 73.551 63.251 73.10 2 72.402 61.902 58.102 -the-art 1by (Poria et al., 2015),2by (Rozgic et al., 2012) Table 4: Accuracy % on textual (T), visual (V), audio (A) modality and comparison with the state of the art. For the fusion, the hierarchical fusion framework was used. 
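Table 3 marks in bold the results that are significantly better than uni-SVM at p < 0.05, but the paper does not state which significance test was used. A paired bootstrap over utterance-level correctness, as sketched below on entirely synthetic predictions, is one common way to perform such a comparison.

```python
import numpy as np

def paired_bootstrap_p(correct_a, correct_b, n_resamples=10_000, seed=0):
    """One-sided paired bootstrap p-value for 'system A is not better than B',
    given per-utterance 0/1 correctness arrays for the two systems."""
    rng = np.random.default_rng(seed)
    correct_a = np.asarray(correct_a, dtype=float)
    correct_b = np.asarray(correct_b, dtype=float)
    n = len(correct_a)
    observed = correct_a.mean() - correct_b.mean()
    losses = 0
    for _ in range(n_resamples):
        idx = rng.integers(0, n, size=n)             # resample utterances
        if correct_a[idx].mean() - correct_b[idx].mean() <= 0:
            losses += 1
    return losses / n_resamples, observed

# Synthetic example: bc-LSTM vs. uni-SVM correctness on 752 test utterances.
rng = np.random.default_rng(1)
bc = (rng.random(752) < 0.80).astype(int)
svm = (rng.random(752) < 0.755).astype(int)
p, delta = paired_bootstrap_p(bc, svm)
print(f"accuracy gain {delta:.3f}, bootstrap p {p:.3f}")
```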
Modality MOSI →MOUD uni-SVM h-LSTM sc-LSTM bc-LSTM T 46.5% 46.5% 46.6% 46.9% V 43.3% 45.5% 48.3% 49.6% A 42.9% 46.0% 46.4% 47.2% T + V 49.8% 49.8% 49.8% 49.8% T + A 50.4% 50.9% 51.1% 51.3% V + A 46.0% 47.1% 49.3% 49.6% T + V + A 51.1% 52.2% 52.5% 52.7% Table 5: Cross-dataset comparison in terms of classification accuracy. However, information from neighboring utterances, e.g., “And I really enjoyed it” and “The countryside which they showed while going through Ireland was astoundingly beautiful” indicate its positive context and help our contextual model to classify the target utterance correctly. Such contextual relationships are prevalent throughout the dataset. In order to have a better understanding of the roles of each modality for the overall classification, we have also done some qualitative analysis. For example, the utterance “who doesn’t have any presence or greatness at all” was classified as positive by the audio classifier (as “presence and greatness at all” was spoken with enthusiasm). However, the textual modality caught the negation induced by “doesn’t” and classified it correctly. The same happened to the utterance “amazing special effects”, which presented no jest of enthusiasm in the speaker’s voice nor face, but was correctly classified by the textual classifier. On other hand, the textual classifier classified the utterance “that like to see comic book characters treated responsibly” as positive (for the presence of “like to see” and “responsibly”) but the high pitch of anger in the person’s voice and the frowning face helps to identify this as a negative utterance. In some cases, the predictions of the proposed method are wrong because of face occlusion or noisy audio. Also, in cases where sentiment is very weak and non contextual, the proposed approach shows some bias towards its surrounding utterances, which further leads to wrong predictions. 5 Conclusion The contextual relationship among utterances in a video is mostly ignored in the literature. In this paper, we developed a LSTM-based network to extract contextual features from the utterances of a video for multimodal sentiment analysis. The proposed method has outperformed the state of the art and showed significant performance improvement over the baseline. As future work, we plan to develop a LSTMbased attention model to determine the importance of each utterance and its specific contribution to each modality for sentiment classification. 881 References Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N Chang, Sungbok Lee, and Shrikanth S Narayanan. 2008. Iemocap: Interactive emotional dyadic motion capture database. Language resources and evaluation 42(4):335–359. Erik Cambria, Dipankar Das, Sivaji Bandyopadhyay, and Antonio Feraco. 2017a. A Practical Guide to Sentiment Analysis. Springer, Cham, Switzerland. Erik Cambria, Devamanyu Hazarika, Soujanya Poria, Amir Hussain, and RBV Subramanyam. 2017b. Benchmarking multimodal sentiment aanlysis. In CICLing. Erik Cambria, Soujanya Poria, Rajiv Bajpai, and Bj¨orn Schuller. 2016. SenticNet 4: A semantic resource for sentiment analysis based on conceptual primitives. In COLING. pages 2666–2677. Erik Cambria and Bebo White. 2014. Jumping NLP curves: A review of natural language processing research. IEEE Computational Intelligence Magazine 9(2):48–57. Lawrence S Chen, Thomas S Huang, Tsutomu Miyasato, and Ryohei Nakatsu. 1998. Multimodal human emotion/expression recognition. 
In Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition. IEEE, pages 366–371. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12(Aug):2493–2537. Dragos Datcu and L Rothkrantz. 2008. Semantic audio-visual data fusion for automatic emotion recognition. Euromedia’2008 . Liyanage C De Silva, Tsutomu Miyasato, and Ryohei Nakatsu. 1997. Facial emotion recognition using multi-modal information. In Proceedings of ICICS. IEEE, volume 1, pages 397–401. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12(Jul):2121–2159. Paul Ekman. 1974. Universal facial expressions of emotion. Culture and Personality: Contemporary Readings/Chicago . Florian Eyben, Martin W¨ollmer, Alex Graves, Bj¨orn Schuller, Ellen Douglas-Cowie, and Roddy Cowie. 2010a. On-line emotion recognition in a 3-d activation-valence-time continuum using acoustic and linguistic cues. Journal on Multimodal User Interfaces 3(1-2):7–19. Florian Eyben, Martin W¨ollmer, and Bj¨orn Schuller. 2010b. Opensmile: the munich versatile and fast open-source audio feature extractor. In Proceedings of the 18th ACM international conference on Multimedia. ACM, pages 1459–1462. Felix Gers. 2001. Long Short-Term Memory in Recurrent Neural Networks. Ph.D. thesis, Universit¨at Hannover. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Shuiwang Ji, Wei Xu, Ming Yang, and Kai Yu. 2013. 3d convolutional neural networks for human action recognition. IEEE transactions on pattern analysis and machine intelligence 35(1):221–231. Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. 2014. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition. pages 1725–1732. Loic Kessous, Ginevra Castellano, and George Caridakis. 2010. Multimodal emotion recognition in speech-based interaction using facial expression, body gesture and acoustic analysis. Journal on Multimodal User Interfaces 3(1-2):33–48. Yukun Ma, Erik Cambria, and Sa Gao. 2016. Label embedding for zero-shot fine-grained named entity typing. In COLING. Osaka, pages 171–180. Navonil Majumder, Soujanya Poria, Alexander Gelbukh, and Erik Cambria. 2017. Deep learning based document modeling for personality detection from text. IEEE Intelligent Systems 32(2):74–79. Angeliki Metallinou, Sungbok Lee, and Shrikanth Narayanan. 2008. Audio-visual emotion recognition using gaussian mixture models for face and voice. In Tenth IEEE International Symposium on ISM 2008. IEEE, pages 250–257. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 . David Olson. 1977. From utterance to text: The bias of language in speech and writing. Harvard educational review 47(3):257–281. Luca Oneto, Federica Bisio, Erik Cambria, and Davide Anguita. 2016. Statistical learning theory and ELM for big social data analysis. IEEE Computational Intelligence Magazine 11(3):45–55. Ver´onica P´erez-Rosas, Rada Mihalcea, and LouisPhilippe Morency. 2013. Utterance-level multimodal sentiment analysis. In ACL (1). pages 973– 982. 
882 Soujanya Poria, Erik Cambria, Rajiv Bajpai, and Amir Hussain. 2017a. A review of affective computing: From unimodal analysis to multimodal fusion. Information Fusion . Soujanya Poria, Erik Cambria, and Alexander Gelbukh. 2015. Deep convolutional neural network textual features and multiple kernel learning for utterance-level multimodal sentiment analysis. In Proceedings of EMNLP. pages 2539–2544. Soujanya Poria, Erik Cambria, and Alexander Gelbukh. 2016a. Aspect extraction for opinion mining with a deep convolutional neural network. Knowledge-Based Systems 108:42–49. Soujanya Poria, Erik Cambria, D Hazarika, and Prateek Vij. 2016b. A deeper look into sarcastic tweets using deep convolutional neural networks. In COLING. pages 1601–1612. Soujanya Poria, Iti Chaturvedi, Erik Cambria, and Federica Bisio. 2016c. Sentic LDA: Improving on LDA with semantic similarity for aspect-based sentiment analysis. In IJCNN. pages 4465–4473. Soujanya Poria, Iti Chaturvedi, Erik Cambria, and Amir Hussain. 2016d. Convolutional mkl based multimodal emotion recognition and sentiment analysis. In Data Mining (ICDM), 2016 IEEE 16th International Conference on. IEEE, pages 439–448. Soujanya Poria, Haiyun Peng, Amir Hussain, Newton Howard, and Erik Cambria. 2017b. Ensemble application of convolutional neural networks and multiple kernel learning for multimodal sentiment analysis. Neurocomputing . Dheeraj Rajagopal, Erik Cambria, Daniel Olsher, and Kenneth Kwok. 2013. A graph-based approach to commonsense concept extraction and semantic similarity detection. In WWW. Rio De Janeiro, pages 565–570. Viktor Rozgic, Sankaranarayanan Ananthakrishnan, Shirin Saleem, Rohit Kumar, and Rohit Prasad. 2012. Ensemble of svm trees for multimodal emotion recognition. In Signal & Information Processing Association Annual Summit and Conference (APSIPA ASC), 2012 Asia-Pacific. IEEE, pages 1–4. Bj¨orn Schuller. 2011. Recognizing affect from linguistic information in 3d continuous space. IEEE Transactions on Affective Computing 2(4):192–205. Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP. pages 1631–1642. Vee Teh and Geoffrey E Hinton. 2001. Rate-coded restricted boltzmann machines for face recognition. In T Leen, T Dietterich, and V Tresp, editors, Advances in neural information processing system. volume 13, pages 908–914. Martin W¨ollmer, Florian Eyben, Stephan Reiter, Bj¨orn W Schuller, Cate Cox, Ellen Douglas-Cowie, Roddy Cowie, et al. 2008. Abandoning emotion classes-towards continuous emotion recognition with modelling of long-range dependencies. In Interspeech. volume 2008, pages 597–600. Martin Wollmer, Felix Weninger, Timo Knaup, Bjorn Schuller, Congkai Sun, Kenji Sagae, and LouisPhilippe Morency. 2013. Youtube movie reviews: Sentiment analysis in an audio-visual context. IEEE Intelligent Systems 28(3):46–53. Chung-Hsien Wu and Wei-Bin Liang. 2011. Emotion recognition of affective speech based on multiple classifiers using acoustic-prosodic information and semantic labels. IEEE Transactions on Affective Computing 2(1):10–21. Xun Yuan, Wei Lai, Tao Mei, Xian-Sheng Hua, XiuQing Wu, and Shipeng Li. 2006. Automatic video genre categorization using hierarchical svm. In Image Processing, 2006 IEEE International Conference on. IEEE, pages 2905–2908. Amir Zadeh, Rowan Zellers, Eli Pincus, and LouisPhilippe Morency. 2016. 
Multimodal sentiment intensity analysis in videos: Facial gestures and verbal messages. IEEE Intelligent Systems 31(6):82–88. Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attentionbased bidirectional long short-term memory networks for relation classification. In The 54th Annual Meeting of the Association for Computational Linguistics. pages 207–213. 883
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 884–895 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1082 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 884–895 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1082 A Multidimensional Lexicon for Interpersonal Stancetaking Umashanthi Pavalanathan Georgia Institute of Technology Atlanta, GA [email protected] Jim Fitzpatrick University of Pittsburgh Pittsburgh, PA [email protected] Scott F. Kiesling University of Pittsburgh Pittsburgh, PA [email protected] Jacob Eisenstein Georgia Institute of Technology Atlanta, GA [email protected] Abstract The sociolinguistic construct of stancetaking describes the activities through which discourse participants create and signal relationships to their interlocutors, to the topic of discussion, and to the talk itself. Stancetaking underlies a wide range of interactional phenomena, relating to formality, politeness, affect, and subjectivity. We present a computational approach to stancetaking, in which we build a theoretically-motivated lexicon of stance markers, and then use multidimensional analysis to identify a set of underlying stance dimensions. We validate these dimensions intrinsically and extrinsically, showing that they are internally coherent, match pre-registered hypotheses, and correlate with social phenomena. 1 Introduction What does it mean to be welcoming or standoffish, light-hearted or cynical? Such interactional styles are performed primarily with language, yet little is known about how linguistic resources are arrayed to create these social impressions. The sociolinguistic concept of interpersonal stancetaking attempts to answer this question, by providing a conceptual framework that accounts for a range of interpersonal phenomena, subsuming formality, politeness, and subjectivity (Du Bois, 2007).1 This 1Stancetaking is distinct from the notion of stance which corresponds to a position in a debate (Walker et al., 2012). Similarly, Freeman et al. (2014) correlate phonetic features with the strength of such argumentative stances. framework has been applied almost exclusively through qualitative methods, using close readings of individual texts or dialogs to uncover how language is used to position individuals with respect to their interlocutors and readers. We attempt the first large-scale operationalization of stancetaking through computational methods. Du Bois (2007) formalizes stancetaking as a multi-dimensional construct, reflecting the relationship of discourse participants to (a) the audience or interlocutor; (b) the topic of discourse; (c) the talk or text itself. However, the multidimensional nature of stancetaking poses problems for traditional computational approaches, in which labeled data is obtained by relying on annotator intuitions about scalar concepts such politeness (Danescu-Niculescu-Mizil et al., 2013) and formality (Pavlick and Tetreault, 2016). Instead, our approach is based on a theoretically-guided application of unsupervised learning, in the form of factor analysis, applied to lexical features. Stancetaking is characterized in large part by an array of linguistic features ranging from discourse markers such as actually to backchannels such as yep (Kiesling, 2009). 
We therefore first compile a lexicon of stance markers, combining prior lexicons from Biber and Finegan (1989) and the Switchboard Dialogue Act Corpus (Jurafsky et al., 1998). We then extend this lexicon to the social media domain using word embeddings. Finally, we apply multi-dimensional analysis of co-occurrence patterns to identify a small set of stance dimensions. To measure the internal coherence (construct validity) of the stance dimensions, we use a word 884 intrusion task (Chang et al., 2009) and a set of preregistered hypotheses. To measure the utility of the stance dimensions, we perform a series of extrinsic evaluations. A predictive evaluation shows that the membership of online communities is determined in part by the interactional stances that predominate in those communities. Furthermore, the induced stance dimensions are shown to align with annotations of politeness and formality. Contributions We operationalize the sociolinguistic concept of stancetaking as a multidimensional framework, making it possible to measure at scale. Specifically, • we contribute a lexicon of stance markers based on prior work and adapted to the genre of online interpersonal discourse; • we group stance markers into latent dimensions; • we show that these stance dimensions are internally coherent; • we demonstrate that the stance dimensions predict and correlate with social phenomena.2 2 Related Work From a theoretical perspective, we build on prior work on interactional meaning in language. Methodologically, our paper relates to prior work on lexicon-based analysis and contrastive studies of social media communities. 2.1 Linguistic Variation and Social Meaning In computational sociolinguistics (Nguyen et al., 2016), language variation has been studied primarily in connection with macro-scale social variables, such as age (Argamon et al., 2007; Nguyen et al., 2013), gender (Burger et al., 2011; Bamman et al., 2014), race (Eisenstein et al., 2011; Blodgett et al., 2016), and geography (Eisenstein et al., 2010). This parallels what Eckert (2012) has called the “first wave” of language variation studies in sociolinguistics, which also focused on macro-scale variables. More recently, sociolinguists have dedicated increased attention to situational and stylistic variation, and the interactional meaning that such variation can convey (Eckert and Rickford, 2001). This linguistic research can be aligned with computational efforts to quantify phenomena such 2Lexicons and stance dimensions are available at https://github.com/umashanthi-research/ multidimensional-stance-lexicon as subjectivity (Riloff and Wiebe, 2003), sentiment (Wiebe et al., 2005), politeness (DanescuNiculescu-Mizil et al., 2013), formality (Pavlick and Tetreault, 2016), and power dynamics (Prabhakaran et al., 2012). While linguistic research on interactional meaning has focused largely on qualitative methodologies such as discourse analysis (e.g., Bucholtz and Hall, 2005), these computational efforts have made use of crowdsourced annotations to build large datasets of, for example, polite and impolite text. These annotation efforts draw on the annotators’ intuitions about the meaning of these sociolinguistic constructs. Interpersonal stancetaking represents an attempt to unify concepts such as sentiment, politeness, formality, and subjectivity under a single theoretical framework (Jaffe, 2009; Kiesling, 2009). 
The key idea, as articulated by Du Bois (2007), is that stancetaking captures the speaker’s relationship to (a) the topic of discussion, (b) the interlocutor or audience, and (c) the talk (or writing) itself. Various configurations of these three legs of the “stance triangle” can account for a range of phenomena. For example, epistemic stance relates to the speaker’s certainty about what is being expressed, while affective stance indicates the speaker’s emotional position with respect to the content (Ochs, 1993). The framework of stancetaking has been widely adopted in linguistics, particularly in the discourse analytic tradition, which involves close reading of individual texts or conversations (K¨arkk¨ainen, 2006; Keisanen, 2007; Precht, 2003; White, 2003). But despite its strong theoretical foundation, we are aware of no prior efforts to operationalize stancetaking at scale. Since annotators may not have strong intuitions about stance — in the way that they do about formality and politeness — we cannot rely on the annotation methodologies employed in prior work. We take a different approach, performing a multidimensional analysis of the distribution of likely stance markers. 2.2 Lexicon-based Analysis Our operationalization of stancetaking is based on the induction of lexicons of stance markers. The lexicon-based methodology is related to earlier work from social psychology, such as the General Inquirer (Stone, 1966) and LIWC (Tausczik and Pennebaker, 2010). In LIWC, the basic categories were identified first, based on psychological 885 constructs (e.g., positive emotion, cognitive processes, drive to power) and syntactic groupings of words and phrases (e.g., pronouns, prepositions, quantifiers). The lexicon designers then manually contructed lexicons for each category, augmenting their intuitions by using distributional statistics to suggest words that may have been missed (Pennebaker et al., 2015). In contrast, we follow the approach of Biber (1991), using multidimensional analysis to identify latent groupings of markers based on co-occurrence statistics. We then use crowdsourcing and extrinsic comparisons to validate the coherence of these dimensions. 2.3 Multicommunity Studies Social media platforms such as Reddit, Stack Exchange, and Wikia can be considered multicommunity environments, in that they host multiple subcommunities with distinct social and linguistic properties. Such subcommunities can be contrasted in terms of topics (Adamic et al., 2008; Hessel et al., 2014) and social networks (Backstrom et al., 2006). Our work focuses on Reddit, emphasizing community-wide differences in norms for interpersonal interaction. In the same vein, Tan and Lee (2015) attempt to characterize stylistic differences across subreddits by focusing on very common words and parts-of-speech; Tran and Ostendorf (2016) use language models and topic models to measure similarity across threads within a subreddit. One distinction of our approach is that the use of multidimensional analysis gives us interpretable dimensions of variation. This makes it possible to identify the specific interpersonal features that vary across communities. 3 Data Reddit, one of the internet’s largest social media platforms, is a collection of subreddits organized around various topics of interest. 
As of January 2017, there were more than one million subreddits and nearly 250 million users, discussing topics ranging from politics (r/politics) to horror stories (r/nosleep).3 Although Reddit was originally designed for sharing hyperlinks, it also provides the ability to post original textual content, submit comments, and vote on content quality (Gilbert, 2013). Reddit’s conversation-like threads are therefore well suited for the study of interpersonal social and linguistic phenomena. 3http://redditmetrics.com/ Subreddits 126,789 Authors 6,401,699 Threads 52,888,024 Comments 531,804,658 Table 1: Dataset size For example, the following are two comments from the subreddit r/malefashionadvice, posted in response to a picture posted by a user asking for fashion advise. U1: “I think the beard looks pretty good. Definitely not the goatee. Clean shaven is always the safe option.” U2: “Definitely the beard. But keep it trimmed.” The phrases in bold face are markers of stance, indicating a evaluative stance. The following example is a part of a thread in the subreddit r/photoshopbattles where users discuss an edited image posted by the original poster OP. The phrases in bold face are markers of stance, indicating an involved and interactional stance. U3: “Ha ha awesome!” U4: ‘‘are those..... furries?” OP: “yes, sir. They are!” U4: “Oh cool. That makes sense!” We used an archive of 530 million comments posted on Reddit in 2014, retrieved from the public archive of Reddit comments.4 This dataset consists of each post’s textual content, along with metadata that identifies the subreddit, thread, author, and post creation time. More statistics about the full dataset are shown in Table 1. 4 Stance Lexicon Interpersonal stancetaking can be characterized in part by an array of linguistic features such as hedges (e.g., might, kind of), discourse markers (e.g., actually, I mean), and backchannels (e.g., yep, um). Our analysis focuses on these markers, which we collect into a lexicon. 4.1 Seed lexicon We began with a seed lexicon of stance markers from Biber and Finegan (1989), who compiled an 4https://archive.org/details/2015_ reddit_comments_corpus 886 extensive list by surveying dictionaries, previous studies on stance, and texts in several genres of English. This list includes certainty adverbs (e.g., actually, of course, in fact), affect markers (e.g., amazing, thankful, sadly), and hedges (e.g., kind of, maybe, something like) among other adverbial, adjectival, verbal, and modal markers of stance. In total, this list consists of 448 stance markers. The Biber and Finegan (1989) lexicon is primarily based on written genres from the pre-social media era. Our dataset — like much of the recent work in this domain — consists of online discussions, which differ significantly from printed texts (Eisenstein, 2013). One difference is that online discussions contain a number of dialog act markers that are characteristic of spoken language, such as oh yeah, nah, wow. We accounted for this by adding 74 dialog act markers from the Switchboard Dialog Act Corpus (Jurafsky et al., 1998). The final seed lexicon consists of 517 unique markers, from these two sources. Note that the seed lexicon also includes markers that contain multiple tokens (e.g. kind of, I know). 4.2 Lexicon expansion Online discussions differ not only from written texts, but also from spoken discussions, due to their use of non-standard vocabulary and spellings. To measure stance accurately, these genre differences must be accounted for. 
We therefore expanded the seed lexicon using automated techniques based on distributional statistics. This is similar to prior work on the expansion of sentiment lexicons (Hatzivassiloglou and McKeown, 1997; Hamilton et al., 2016). Our lexicon expansion approach used word embeddings to find words that are distributionally similar to those in the seed set. We trained word embeddings on a corpus of 25 million Reddit comments and a vocabulary of 100K most frequent words on Reddit using the structured skip-gram models of both WORD2VEC (Mikolov et al., 2013) and WANG2VEC (Ling et al., 2015) with default parameters. The WANG2VEC method augments WORD2VEC by accounting for word order information. We found the similarity judgments obtained from WANG2VEC to be qualitatively more meaningful, so we used these embeddings to construct the expanded lexicon.5 5We used the following default parameters: 100 dimensions, a window size of five, a negative sampling size of ten, five-epoch iterations, and a sub-sampling rate of 10−4. Seed term Expanded terms (Example seeds from Biber and Finegan (1989)) significantly considerably, substantially, dramatically certainly surely, frankly, definitely incredibly extremely, unbelievably, exceptionally (Example seeds from Jurafsky et al. (1998)) nope nah, yup, nevermind great fantastic, terrific, excellent Table 2: Stance lexicon: seed and expanded terms. To perform lexicon expansion, we constructed a dictionary of candidate terms, consisting of all unigrams that occur with a frequency rate of at least 10−7 in the Reddit comment corpus. Then, for each single-token marker in the seed lexicon, we identified all terms from the candidate set whose embedding has cosine similarity of at least 0.75 with respect to the seed marker.6 Table 2 shows examples of seed markers and related terms we extracted from word embeddings. Through this procedure, we identified 228 additional markers based on similarity to items in the seed list from Biber and Finegan (1989), and 112 additional markers based on the seed list of dialog acts. In total, our stance lexicon contains 812 unique markers. 5 Linguistic Dimensions of Stancetaking To summarize the main axes of variation across the lexicon of stance markers, we apply a multidimensional analysis (Biber, 1992) to the distributional statistics of stance markers across subreddit communities. Each dimension of variation can then be viewed as a spectrum, characterized by the stance markers and subreddits that are associated with the positive and negative extremes. Multidimensional analysis is based on singular value decomposition, which has been applied successfully to a wide range of problems in natural language processing and information retrieval (e.g., Landauer et al., 1998). While Bayesian topic models are an appealing alternative, singular value decomposition is fast and deterministic, with a minimal number of tuning parameters. 6We tried different thresholds on the similarity value and the corpus frequency, and the reported values were chosen based on the quality of the resulting related terms. This was done prior to any of the validations or extrinsic analyses described later in the paper. 887 5.1 Extracting Stance Dimensions Our analysis is based on the co-occurrence of stance markers and subreddits. This is motivated by our interest in comparisons of the interactional styles of online communities within Reddit, and by the premise that these distributional differences reflect socially meaningful communicative norms. 
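As a concrete illustration of the expansion step of Section 4.2, the sketch below adds every vocabulary word whose cosine similarity to a single-token seed marker reaches the 0.75 threshold; the word-to-vector dictionary is a hypothetical stand-in for the trained wang2vec embeddings.

```python
import numpy as np

def expand_lexicon(seed_markers, embeddings, threshold=0.75):
    """Return vocabulary words whose cosine similarity to any single-token
    seed marker is at least `threshold`. `embeddings` maps word -> vector."""
    vocab = list(embeddings)
    mat = np.stack([embeddings[w] for w in vocab])
    mat = mat / np.linalg.norm(mat, axis=1, keepdims=True)
    expanded = set()
    for seed in seed_markers:
        if " " in seed or seed not in embeddings:    # only single-token seeds
            continue
        v = embeddings[seed]
        sims = mat @ (v / np.linalg.norm(v))
        expanded.update(w for w, s in zip(vocab, sims)
                        if s >= threshold and w != seed)
    return expanded

# Toy embeddings; real ones would be the wang2vec vectors over the Reddit
# vocabulary. 'surely' is constructed to be a near-synonym of 'certainly',
# so it should be returned, while unrelated words should fall below 0.75.
rng = np.random.default_rng(0)
toy = {w: rng.normal(size=8) for w in ["certainly", "surely", "definitely", "table"]}
toy["surely"] = toy["certainly"] + 0.05 * rng.normal(size=8)
print(expand_lexicon(["certainly"], toy))
```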
A pilot study applied the same technique to the cooccurrence of stance markers and individual authors, and the resulting dimensions appeared to be less stylistically coherent. Singular value decomposition is often used in combination with a transformation of the cooccurrence counts by pointwise mutual information (Bullinaria and Levy, 2007). This transformation ensures that each cell in the matrix indicates how much more likely a stance marker is to cooccur with a given subreddit than would happen by chance under an independence assumption. Because negative PMI values tend to be unreliable, we use positive PMI (PPMI), which involves replacing all negative PMI values with zeros (Niwa and Nitta, 1994). Therefore, we obtain stance dimensions by applying singular value decomposition to the matrix constructed as follows: Xm,s =  log Pr(marker = m, subreddit = s) Pr(marker = m) Pr(subreddit = s)  + . Truncated singular value decomposition performs the approximate factorization X ≈UΣV ⊤, where each row of the matrix U is a k-dimensional description of each stance marker, and each row of V is a k-dimensional description of each subreddit. We included the 7,589 subreddits that received at least 1,000 comments in 2014. 5.2 Results: Stance Dimensions From the SVD analysis, we extracted the six principal latent dimensions that explain the most variation in our dataset.7 The decision to include only the first six dimensions was based on the strength of the singular values corresponding to the dimensions. Table 3 shows the top five stance markers for each extreme of the six dimensions. The stance dimensions convey a range of concepts, such as involved versus informational language, narrative 7Similar to factor analysis, the top few dimensions of SVD explain the most variation, and tend to be most interpretable. A scree plot (Cattell, 1966) showed that the amount of variation explained dropped after the top six dimensions, and qualitative interpretation showed that the remaining dimension were less interpretable. 0.04 0.03 0.02 0.01 0.00 0.01 0.02 0.03 0.02 0.01 0.00 0.01 0.02 gadgets nsfw politics funny space malefashionadvice food worldnews explainlikeimfive tattoos facepalm photoshopbattles aww askscience trees gonewild science programming personalfinance atheism 4chan history (+) Dim-2 (-) Dim-3 (+) (-) Figure 1: Mapping of subreddits in dimension two and dimension three, highlighting especially popular subreddits. Picture-oriented subreddits r/gonewild and r/aww map high on dimension two and low on dimension three, indicating involved and informal style of discourse. Subreddits dedicated for knowledge sharing discussions such as r/askscience and r/space map low on dimension two and high on dimension three indicating informational and formal style. versus dialogue-oriented writing, standard versus non-standard variation, and positive versus negative affect. Figure 1 shows the distribution of subreddits along two of these dimensions. 6 Construct Validity Evaluating model output against gold-standard annotations is appropriate when there is some notion of a correct answer. As stancetaking is a multidimensional concept, we have taken an unsupervised approach. Therefore, we use evaluation techniques based on the notion of validity, which is the extent to which the operationalization of a construct truly captures the intended quantity or concept. 
5.2 Results: Stance Dimensions

From the SVD analysis, we extracted the six principal latent dimensions that explain the most variation in our dataset. The decision to include only the first six dimensions was based on the strength of the singular values corresponding to the dimensions. (Footnote 7: Similar to factor analysis, the top few dimensions of SVD explain the most variation, and tend to be most interpretable. A scree plot (Cattell, 1966) showed that the amount of variation explained dropped after the top six dimensions, and qualitative interpretation showed that the remaining dimensions were less interpretable.) Table 3 shows the top five stance markers for each extreme of the six dimensions. The stance dimensions convey a range of concepts, such as involved versus informational language, narrative versus dialogue-oriented writing, standard versus non-standard variation, and positive versus negative affect. Figure 1 shows the distribution of subreddits along two of these dimensions.

[Figure 1: Mapping of subreddits in dimension two and dimension three, highlighting especially popular subreddits. Picture-oriented subreddits r/gonewild and r/aww map high on dimension two and low on dimension three, indicating an involved and informal style of discourse. Subreddits dedicated to knowledge-sharing discussions, such as r/askscience and r/space, map low on dimension two and high on dimension three, indicating an informational and formal style.]

Table 3: For each of the six dimensions extracted by our method, we show the five markers and three subreddits (among the 100 most popular subreddits) with the highest loadings.
         Stance markers                                                     Subreddits
  Dim-1 −  beautifully, pleased, thanks, spectacular, delightful            philosophy, history, science
        +  just, even, all, no, so                                          pcmasterrace, leagueoflegends, gaming
  Dim-2 −  suggests that, demonstrates, conclude, demonstrated, demonstrate philosophy, science, askscience
        +  lovely, awww, hehe, aww, haha                                    gonewild, nsfw, aww
  Dim-3 −  funnier, hilarious, disturbing, creepy, funny                    cringe, creepy, cringepics
        +  thanks, ideally, calculate, estimate, calculation                askscience, personalfinance, space
  Dim-4 −  phenomenal, bummed, enjoyed, fantastic, disappointing            movies, television, books
        +  hello, thx, hehe, aww, hi                                        philosophy, 4chan, atheism
  Dim-5 −  lovely, stunning, wonderful, delightful, beautifully             gonewild, aww, tattoos
        +  nvm, cmon, smh, lmao, disappointing                              nfl, soccer, cringe
  Dim-6 −  stunning, fantastic, incredible, amazing, spectacular            philosophy, gonewild, askscience
        +  anxious, stressed, exhausted, overwhelmed, relieved              relationships, sex, nosleep

6 Construct Validity

Evaluating model output against gold-standard annotations is appropriate when there is some notion of a correct answer. As stancetaking is a multidimensional concept, we have taken an unsupervised approach. Therefore, we use evaluation techniques based on the notion of validity, which is the extent to which the operationalization of a construct truly captures the intended quantity or concept. Validation techniques for unsupervised content analysis are widely found in the social science literature (Weber, 1990; Quinn et al., 2010) and have also been recently used in the NLP and machine learning communities (e.g., Chang et al., 2009; Murphy et al., 2012; Sim et al., 2013). We used several methods to validate the stance dimensions extracted from the corpus of Reddit comments. This section describes intrinsic evaluations, which test whether the extracted stance dimensions are linguistically coherent and meaningful, thereby testing the construct or content validity of the proposed stance dimensions (Quinn et al., 2010). Extrinsic evaluations are presented in section 7.

6.1 Word Intrusion Task

A word intrusion task is used to measure the coherence and interpretability of a group of words. Human raters are presented with a list of terms, all but one of which are selected from a target concept; their task is to identify the intruder. If the target concept is internally coherent, human raters should be able to perform this task accurately; if not, their selections should be random. Word intrusion tasks have previously been used to validate the interpretability of topic models (Chang et al., 2009) and vector space models (Murphy et al., 2012). We deployed a word intrusion task on Amazon Mechanical Turk (AMT), in which we presented the top four stance markers from one end of a dimension, along with an intruder marker selected from the top four markers of the opposite end of that dimension. In this way, we created four word intrusion tasks for each end of each dimension. The main reason for including only the top four words in each dimension is the expense of conducting crowd-sourced evaluations. In the most relevant prior work, Chang et al. (2009) used the top five words from each topic in their evaluation of topic models.

Worker selection: We required that the AMT workers ("turkers") have completed a minimum of 1,000 HITs and have at least a 95% approval rate. Furthermore, because our task is based on analysis of English language texts, we required the turkers to be native speakers of English living in one of the majority English-speaking countries. As a further requirement, we required the turkers to obtain a qualification which involves an English comprehension test similar to the questions in standardized English language tests. These requirements are based on best practices identified by Callison-Burch and Dredze (2010).
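The construction of the intrusion items described above (top four markers from one pole plus a single intruder drawn from the top four of the opposite pole) can be sketched as follows. The two ranked marker lists and the shuffling behavior are assumptions made for the example; batching and presentation details of the actual HITs may have differed.

```python
import random

def make_intrusion_items(pole_markers, opposite_markers, n_top=4, seed=0):
    """Build word-intrusion items for one end of a stance dimension.

    pole_markers: markers ranked by loading for the target pole
    opposite_markers: markers ranked by loading for the opposite pole
    Returns a list of (shuffled_terms, intruder) pairs, one per intruder.
    """
    rng = random.Random(seed)
    target = pole_markers[:n_top]
    items = []
    for intruder in opposite_markers[:n_top]:
        terms = target + [intruder]
        rng.shuffle(terms)
        items.append((terms, intruder))
    return items

# Example using dimension 2 from Table 3.
informational = ["suggests that", "demonstrates", "conclude", "demonstrated"]
involved = ["lovely", "awww", "hehe", "aww"]
for terms, intruder in make_intrusion_items(informational, involved):
    print(terms, "-> intruder:", intruder)
```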
Task specification: Each AMT human intelligence task (HIT) consists of twelve word intrusion tasks, one for each end of the six dimensions. We provided minimal instructions regarding the task, and did not provide any examples, to avoid introducing bias. (Footnote 8: The prompt for the word intrusion task was: "Select the intruder word/phrase: you will be given a list of five English words/phrases and asked to pick the word/phrase that is least similar to the other four words/phrases when used in online discussion forums.") As a further quality control, each HIT included three questions which ask the turkers to pick the best synonym for a given word from a list of five answers, where one answer was clearly correct; turkers who gave incorrect answers were to be excluded, but this situation did not arise in practice. Altogether, each HIT consists of 15 questions and was paid US$1.50. Five different turkers performed each HIT.

Results: We measured the interrater reliability using Krippendorff's α (Krippendorff, 2007) and the model precision metric of Chang et al. (2009). Results on both metrics were encouraging. We obtained a value of α = 0.73, on a scale where α = 0 indicates chance agreement and α = 1 indicates perfect agreement. The model precision was 0.82; chance precision is 0.20. To offer a sense of typical values for this metric, Chang et al. (2009) report model precisions in the range 0.7–0.83 in their analysis of topic models. Overall, these results indicate that the multi-dimensional analysis has succeeded at identifying dimensions that reflect natural groupings of stance markers.

6.2 Pre-registered Hypotheses

Content validity was also assessed using a set of pre-registered hypotheses. The practice of pre-registering hypotheses before an analysis and then testing them is widely used in the social sciences; it was adopted by Sim et al. (2013) to evaluate the induction of political ideological models from text. Before performing the multidimensional analysis, we identified two groups of hypotheses that are expected to hold with respect to the latent stancetaking dimensions, using our prior linguistic knowledge:

• Hypothesis I: Stance markers that are synonyms should not appear on the opposite ends of a stance dimension.
• Hypothesis II: If at least one stance marker from a predefined stance feature group (defined below) appears on one end of a stance dimension, then other markers from the same feature group will tend not to appear at the opposite end of the same dimension.

6.2.1 Synonym Pairs

For each marker in our stance lexicon, we extracted synonyms from Wordnet, focusing on markers that appear in only one Wordnet synset, and not including pairs in which one term was an inflection of the other. (Footnote 9: It is possible that inflections are semantically similar, because by definition they are changes in the form of a word to mark distinctions such as tense, person, or number. However, different inflections of a single word form might be used to mark different stances; for example, some stances might be associated with the past while others might be associated with the present or future.) Our final list contains 73 synonym pairs (e.g., eventually/finally, grateful/thankful, yea/yeah). Of these pairs, there were 59 cases in which both terms appeared in either the top or bottom 200 positions of a stance dimension. In 51 of these cases (86%), the two terms appeared on the same side of the dimension. The chance rate would be 50%, so this supports Hypothesis I and further validates the stance dimensions. More details of the results are shown in Table 4.

Table 4: Results for the pre-registered hypothesis that stance dimensions will not split synonym pairs (number of synonym pairs).
  Stance dimension   On same end   On opposite ends
  DIMENSION 1        6             3
  DIMENSION 2        12            2
  DIMENSION 3        2             1
  DIMENSION 4        11            0
  DIMENSION 5        10            2
  DIMENSION 6        10            0
  Total              51/59         8/59
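As a rough check of the kind of result reported above, the 51-of-59 same-side outcome can be compared against the 50% chance rate with a two-sided binomial test. This is an illustrative re-analysis rather than a test reported in the paper, and it assumes the 59 pairs can be treated as independent trials; scipy.stats.binomtest requires SciPy 1.7 or later.

```python
from scipy.stats import binomtest

# 51 of 59 eligible synonym pairs fell on the same side of a dimension;
# under the chance model, each pair lands on the same side with probability 0.5.
result = binomtest(k=51, n=59, p=0.5, alternative="two-sided")
print(f"same-side rate = {51 / 59:.2f}, p-value = {result.pvalue:.2e}")
```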
Note that synonym pairs may differ in aspects such as formality (e.g., said/informed, want/desire), which is one of the main dimensions of stancetaking. Therefore, perfect support for Hypothesis I is not expected.

6.2.2 Stance Feature Groups

Biber and Finegan (1989) group stance markers into twelve "feature groups", such as certainty adverbs, doubt adverbs, affect expressions, and hedges. Ideally, the stance dimensions should preserve these groupings. To test this, for each of the seven feature groups with at least ten stance markers in the lexicon, we counted the number of terms appearing among the top 200 positions at both ends (high/low) of each dimension. Under the null hypothesis, the stance dimensions are random with respect to the feature groups, so we would expect roughly an equal number of markers on both ends. As shown in Table 5, for five of the seven feature groups, it is possible to reject the null hypothesis at p < .007, which is the significance threshold at α = 0.05 after correcting for multiple comparisons using the Bonferroni correction. This indicates that the stance dimensions are aligned with predefined stance feature groups.

Table 5: Results for the pre-registered hypothesis that stance dimensions will align with the stance feature groups of Biber and Finegan (1989).
  Feature group     #Stance markers   χ2      p-value   Reject null?
  Certainty adv.    38                16.94   4.6e-03   ✓
  Doubt adv.        23                13.21   2.2e-02   ×
  Certainty verbs   36                48.99   2.2e-09   ✓
  Doubt verbs       55                30.45   1.2e-05   ✓
  Certainty adj.    28                29.73   1.7e-05   ✓
  Doubt adj.        12                14.80   1.1e-02   ×
  Affect exp.       227               97.17   2.1e-19   ✓

7 Extrinsic Evaluations

The evaluations in the previous section test internal validity; we now describe evaluations testing whether the stance dimensions are relevant to external social and interactional phenomena.

7.1 Predicting Cross-posting

Online communities can be considered as communities of practice (Eckert and McConnell-Ginet, 1992), where members come together to engage in shared linguistic practices. These practices evolve simultaneously with membership, coalescing into shared norms. The memberships of multiple subreddits on the same topic (e.g., r/science and r/askscience) often do not overlap considerably. Therefore we hypothesize that users of Reddit have preferred interactional styles, and that participation in subreddit communities is governed not only by topic interest, but also by these interactional preferences. The proposed stancetaking dimensions provide a simple measure of interactional style, allowing us to test whether it is predictive of community membership decisions.

Classification task: We design a classification task in which the goal is to determine whether a pair of subreddits is high-crossover or low-crossover. In high-crossover subreddit pairs, individuals are especially likely to participate in both. For the purpose of this evaluation, individuals are considered to participate in a subreddit if they contribute posts or comments.
We compute the pointwise mutual information (PMI) with respect to cross-participation among the 100 most popular subreddits. For each subreddit s, we identify the five highest and lowest PMI pairs ⟨s, t⟩, and add these to the high-crossover and low-crossover sets, respectively. Example pairs are shown in Table 6. After eliminating redundant pairs, we identify 437 unique high-crossover pairs and 465 unique low-crossover pairs. All evaluations are based on multiple random training/test splits over this dataset.

Table 6: Examples of subreddit pairs that have large and small amounts of overlap of contributing members (cross-community participation).
  High-scoring pairs                 Low-scoring pairs
  r/blog, r/announcements            r/gonewild, r/leagueoflegends
  r/pokemon, r/wheredidthesodago     r/soccer, r/nosleep
  r/politics, r/technology           r/programming, r/gonewild
  r/LifeProTips, r/dataisbeautiful   r/nfl, r/leagueoflegends
  r/Unexpected, r/JusticePorn        r/Minecraft, r/personalfinance

Classification approaches: A simple classification approach is to predict that subreddits with similar text will have high crossover. We measure similarity using TF-IDF weighted cosine similarity, using two possible lexicons: the 8,000 most frequent words on Reddit (BOW), and the stance lexicon (STANCE MARKERS). The similarity threshold between high-crossover and low-crossover pairs was estimated on the training data. We also tested the relevance of multi-dimensional analysis by applying SVD to both lexicons. For each pair of subreddits, we computed a feature set of the absolute differences across the top six latent dimensions, and applied a logistic regression classifier. Regularization was tuned by internal cross-validation.

Results: Table 7 shows average accuracies for these models. The stance-based SVD features are considerably more accurate than the BOW-based SVD features, indicating that interactional style does indeed predict cross-posting behavior. (Footnote 10: We use BOW+SVD as the most comparable content-based alternative to our stancetaking dimensions. While there may be more accurate discriminative approaches, our goal is a direct comparison of stance- and content-based features, not an exhaustive comparison of classification approaches.) Both are considerably more accurate than the bag-of-words models based on cosine similarity.

Table 7: Accuracy for prediction of subreddit cross-participation.
                   Cosine    SVD
  BOW              66.13%    77.48%
  STANCE MARKERS   64.31%    84.93%

7.2 Politeness and Formality

The utility of the induced stance dimensions depends on their correlation with social phenomena of interest. Prior work has used crowdsourcing to annotate texts for politeness and formality. We now evaluate the stancetaking properties of these annotated texts.

Data: We used the politeness corpus of Wikipedia edit requests from Danescu-Niculescu-Mizil et al. (2013), which includes the textual content of the edit requests, along with scalar annotations of politeness. Following the original authors, we compare the text for the messages ranked in the first and fourth quartiles of politeness scores. For formality, we used the corpus from Pavlick and Tetreault (2016), focusing on the blogs domain, which is most similar to our domain of Reddit. Each sentence in this corpus was annotated for formality levels from −3 to +3. We considered only the sentences with mean formality score greater than +1 (more formal) and less than −1 (less formal).
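Before turning to the stance-based analysis of these annotated corpora, here is a minimal sketch of the cross-posting classifier from Section 7.1 above: pair features are absolute differences of six-dimensional subreddit descriptions, fed to a logistic regression. The data structures and the regularization value are assumptions for illustration; in the paper, regularization is tuned by internal cross-validation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(subreddit_svd, pairs):
    """Absolute differences of 6-dimensional subreddit descriptions.

    subreddit_svd: dict mapping subreddit name -> length-6 numpy vector
                   (rows of V from the stance-marker SVD)
    pairs: list of (subreddit_a, subreddit_b) tuples
    """
    return np.array([np.abs(subreddit_svd[a] - subreddit_svd[b])
                     for a, b in pairs])

def train_crossover_classifier(subreddit_svd, train_pairs, train_labels, C=1.0):
    """train_labels: 1 for high-crossover pairs, 0 for low-crossover pairs."""
    X = pair_features(subreddit_svd, train_pairs)
    clf = LogisticRegression(C=C, max_iter=1000)
    return clf.fit(X, train_labels)
```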
Stance dimensions: For each document in the above datasets, we compute the stance properties as follows: for each dimension, we compute the total frequency of the hundred most positive terms and the hundred most negative terms, and then take the difference. Instances containing no terms from either list are excluded. We focus on stance dimensions two and five (summarized in Table 3), because they appeared to be most relevant to politeness and formality. Dimension two contrasts informational and argumentative language against emotional and non-standard language. Dimension five contrasts positive and formal language against non-standard and somewhat negative language.

Results: A kernel density plot of the resulting differences is shown in Figure 2. The effect sizes of the resulting differences are quantified using Cohen's d statistic (Cohen, 1988). Effect sizes for all differences are between 0.3 and 0.4, indicating small-to-medium effects, except for the evaluation of formality on dimension five, where the effect size is close to zero. The relatively modest effect sizes are unsurprising, given the short length of the texts. However, these differences lend insight into the relationship between formality and politeness, which may seem to be closely related concepts. On dimension two, it is possible to be polite while using non-standard language such as hehe and awww, so long as the sentiment expressed is positive; however, these markers are not consistent with formality. On dimension five, we see that positive sentiment terms such as lovely and stunning are consistent with politeness, but not with formality. Indeed, the distribution of dimension five indicates that both ends of dimension five are consistent only with informal texts. Overall, these results indicate that interactional phenomena such as politeness and formality are reflected in our stance dimensions, which are induced in an unsupervised manner. Future work may consider the utility of these stance dimensions to predict these social phenomena, particularly in cross-domain settings where lexical classifiers may overfit.

[Figure 2: Kernel density distributions for stance dimensions 2 and 5, plotted with respect to annotations of politeness and formality. Panel titles: "dimension 2: suggests that, demonstrates vs. lovely, awww, hehe" and "dimension 5: lovely, stunning, wonderful vs. nvm, cmon, smh".]

8 Conclusion

Stancetaking provides a general perspective on the various linguistic phenomena that structure social interactions. We have identified a set of several hundred stance markers, building on previously identified lexicons by using word embeddings to perform lexicon expansion. We then used multidimensional analysis to group these markers into stance dimensions, which we show to be internally coherent and extrinsically useful. Our hope is that these stance dimensions will be valuable as a convenient building block for future research on interactional meaning.

Acknowledgments

Thanks to the anonymous reviewers for their useful and constructive feedback on our submission. This research was supported by Air Force Office of Scientific Research award FA9550-14-1-0379, by National Institutes of Health award R01-GM112697, and by the National Science Foundation awards 1452443 and 1111142. We thank Tyler Schnoebelen for helpful discussions; C.J.
Hutto, Tanushree Mitra, and Sandeep Soni for assistance with Mechanical Turk experiments; and Ian Stewart for assistance with creating word embeddings. We also thank the Mechanical Turk workers for performing the word intrusion task, and for feedback on a pilot task. References Lada A. Adamic, Jun Zhang, Eytan Bakshy, and Mark S. Ackerman. 2008. Knowledge sharing and yahoo answers: Everyone knows something. In 892 Proceedings of the Conference on World-Wide Web (WWW). pages 665–674. Shlomo Argamon, Moshe Koppel, James W. Pennebaker, and Jonathan Schler. 2007. Mining the blogosphere: Age, gender and the varieties of selfexpression. First Monday 12(9). Lars Backstrom, Dan Huttenlocher, Jon Kleinberg, and Xiangyang Lan. 2006. Group formation in large social networks: Membership, growth, and evolution. In Proceedings of Knowledge Discovery and Data Mining (KDD). pages 44–54. David Bamman, Jacob Eisenstein, and Tyler Schnoebelen. 2014. Gender identity and lexical variation in social media. Journal of Sociolinguistics 18(2):135– 160. Douglas Biber. 1991. Variation across speech and writing. Cambridge University Press. Douglas Biber. 1992. The multi-dimensional approach to linguistic analyses of genre variation: An overview of methodology and findings. Computers and the Humanities 26(5-6):331–345. Douglas Biber and Edward Finegan. 1989. Styles of stance in english: Lexical and grammatical marking of evidentiality and affect. Text 9(1):93–124. Su Lin Blodgett, Lisa Green, and Brendan OConnor. 2016. Demographic dialectal variation in social media: A case study of african-american english. In Proceedings of Empirical Methods for Natural Language Processing (EMNLP). pages 1119–1130. M. Bucholtz and K. Hall. 2005. Identity and interaction: A sociocultural linguistic approach. Discourse studies 7(4-5):585–614. John A Bullinaria and Joseph P Levy. 2007. Extracting semantic representations from word co-occurrence statistics: A computational study. Behavior research methods 39(3):510–526. John D. Burger, John Henderson, George Kim, and Guido Zarrella. 2011. Discriminating gender on twitter. In Proceedings of Empirical Methods for Natural Language Processing (EMNLP). pages 1301–1309. Chris Callison-Burch and Mark Dredze. 2010. Creating speech and language data with amazon’s mechanical turk. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk. Association for Computational Linguistics, pages 1–12. Raymond B Cattell. 1966. The scree test for the number of factors. Multivariate behavioral research 1(2):245–276. Jonathan Chang, Sean Gerrish, Chong Wang, Jordan L Boyd-graber, and David M Blei. 2009. Reading tea leaves: How humans interpret topic models. In Neural Information Processing Systems (NIPS). Vancouver, pages 288–296. Jacob Cohen. 1988. Statistical power analysis for the behavioral sciences. Lawrence Earlbaum Associates, Hillsdale, NJ. Cristian Danescu-Niculescu-Mizil, Moritz Sudhof, Dan Jurafsky, Jure Leskovec, and Christopher Potts. 2013. A computational approach to politeness with application to social factors. In Proceedings of the Association for Computational Linguistics (ACL). Sophia, Bulgaria, pages 250–259. John W. Du Bois. 2007. The stance triangle. In Robert Engelbretson, editor, Stancetaking in discourse, John Benjamins Publishing Company, Amsterdam/Philadelphia, pages 139–182. Penelope Eckert. 2012. Three waves of variation study: the emergence of meaning in the study of sociolinguistic variation. 
Annual Review of Anthropology 41:87–100. Penelope Eckert and Sally McConnell-Ginet. 1992. Think practically and look locally: Language and gender as community-based practice. Annual review of anthropology 21:461–490. Penelope Eckert and John R Rickford. 2001. Style and sociolinguistic variation. Cambridge University Press. Jacob Eisenstein. 2013. What to do about bad language on the internet. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL). pages 359–369. Jacob Eisenstein, Amr Ahmed, and Eric P. Xing. 2011. Sparse additive generative models of text. In Proceedings of the International Conference on Machine Learning (ICML). pages 1041–1048. Jacob Eisenstein, Brendan O’Connor, Noah A. Smith, and Eric P. Xing. 2010. A latent variable model for geographic lexical variation. In Proceedings of Empirical Methods for Natural Language Processing (EMNLP). pages 1277–1287. Valerie Freeman, Richard Wright, Gina-Anne Levow, Yi Luan, Julian Chan, Trang Tran, Victoria Zayats, Maria Antoniak, and Mari Ostendorf. 2014. Phonetic correlates of stance-taking. The Journal of the Acoustical Society of America 136(4):2175–2175. Eric Gilbert. 2013. Widespread underprovision on reddit. In Proceedings of Computer-Supported Cooperative Work (CSCW). pages 803–808. William L. Hamilton, Kevin Clark, Jure Leskovec, and Dan Jurafsky. 2016. Inducing domain-specific sentiment lexicons from unlabeled corpora. In Proceedings of Empirical Methods for Natural Language Processing (EMNLP). pages 595–605. Vasileios Hatzivassiloglou and Kathleen R. McKeown. 1997. Predicting the semantic orientation of adjectives. In Proceedings of the Association for Computational Linguistics (ACL). Madrid, Spain, pages 174–181. 893 Jack Hessel, Chenhao Tan, and Lillian Lee. 2014. Science, askscience, and badscience: On the coexistence of highly related communities. In Proceedings of the International Conference on Web and Social Media (ICWSM). AAAI Publications, Menlo Park, California, pages 171–180. Alexandra Jaffe. 2009. Stance: Sociolinguistic Perspectives. Oxford University Press. Daniel Jurafsky, Elizabeth Shriberg, Barbara Fox, and Traci Curl. 1998. Lexical, prosodic, and syntactic cues for dialog acts. In Proceedings of ACL/COLING-98 Workshop on Discourse Relations and Discourse Markers. pages 114–120. Elise K¨arkk¨ainen. 2006. Stance taking in conversation: From subjectivity to intersubjectivity. Text & Talk-An Interdisciplinary Journal of Language, Discourse Communication Studies 26(6):699–731. Tiina Keisanen. 2007. Stancetaking as an interactional activity: Challenging the prior speaker. Stancetaking in discourse: Subjectivity, evaluation, interaction pages 253–81. Scott Fabius Kiesling. 2009. Style as stance. Stance: sociolinguistic perspectives pages 171–194. Klaus Krippendorff. 2007. Computing krippendorff’s alpha reliability. Departmental papers (ASC) page 43. Thomas Landauer, Peter W. Foltz, and Darrel Laham. 1998. Introduction to latent semantic analysis. Discource Processes 25:259–284. Wang Ling, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015. Two/too simple adaptations of word2vec for syntax problems. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL). Denver, CO, pages 1299–1304. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems. pages 3111–3119. 
Brian Murphy, Partha Pratim Talukdar, and Tom Mitchell. 2012. Learning effective and interpretable semantic models using non-negative sparse embedding. In Proceedings of the International Conference on Computational Linguistics (COLING). Mumbai, India, pages 1933–1949. Dong Nguyen, A Seza Do˘gru¨oz, Carolyn P Ros´e, and Franciska de Jong. 2016. Computational sociolinguistics: A survey. Computational Linguistics 42(3):537–593. Dong Nguyen, Rilana Gravel, Dolf Trieschnigg, and Theo Meder. 2013. ”How Old Do You Think I Am?” A Study of Language and Age in Twitter. In Proceedings of the International Conference on Web and Social Media (ICWSM). pages 439–448. Yoshiki Niwa and Yoshihiko Nitta. 1994. Cooccurrence vectors from corpora vs. distance vectors from dictionaries. In Proceedings of the International Conference on Computational Linguistics (COLING). Kyoto, Japan, pages 304–309. Elinor Ochs. 1993. Constructing social identity: A language socialization perspective. Research on language and social interaction 26(3):287–306. Ellie Pavlick and Joel Tetreault. 2016. An empirical analysis of formality in online communication. Transactions of the Association for Computational Linguistics (TACL) 4:61–74. James W Pennebaker, Ryan L Boyd, Kayla Jordan, and Kate Blackburn. 2015. The development and psychometric properties of LIWC2015. Technical report. Vinodkumar Prabhakaran, Owen Rambow, and Mona Diab. 2012. Predicting overt display of power in written dialogs. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL). pages 518–522. Kristen Precht. 2003. Stance moods in spoken english: Evidentiality and affect in british and american conversation. Text - Interdisciplinary Journal for the Study of Discourse 23(2):239–258. Kevin M Quinn, Burt L Monroe, Michael Colaresi, Michael H Crespin, and Dragomir R Radev. 2010. How to analyze political attention with minimal assumptions and costs. American Journal of Political Science 54(1):209–228. Ellen Riloff and Janyce Wiebe. 2003. Learning extraction patterns for subjective expressions. In Proceedings of Empirical Methods for Natural Language Processing (EMNLP). pages 105–112. Yanchuan Sim, Brice Acree, Justin H Gross, and Noah A Smith. 2013. Measuring ideological proportions in political speeches. In Proceedings of Empirical Methods for Natural Language Processing (EMNLP). Philip J. Stone. 1966. The General Inquirer: A Computer Approach to Content Analysis. The MIT Press. Chenhao Tan and Lillian Lee. 2015. All who wander: On the prevalence and characteristics of multicommunity engagement. In Proceedings of the Conference on World-Wide Web (WWW). pages 1056– 1066. Yla R Tausczik and James W Pennebaker. 2010. The psychological meaning of words: LIWC and computerized text analysis methods. Journal of Language and Social Psychology 29(1):24–54. Trang Tran and Mari Ostendorf. 2016. Characterizing the language of online communities and its relation to community reception. In Proceedings of 894 Empirical Methods for Natural Language Processing (EMNLP). Marilyn A Walker, Pranav Anand, Robert Abbott, and Ricky Grant. 2012. Stance classification using dialogic properties of persuasion. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL). pages 592– 596. Robert Philip Weber. 1990. Basic content analysis. 49. Sage. Peter RR White. 2003. Beyond modality and hedging: A dialogic view of the language of intersubjective stance. 
Text - Interdisciplinary Journal for the Study of Discourse 23(2):259–284. Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language resources and evaluation 39(2):165–210.
2017
82
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 896–905 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1083 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 896–905 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1083 Tandem Anchoring: a Multiword Anchor Approach for Interactive Topic Modeling Jeffrey Lund, Connor Cook, Kevin Seppi Computer Science Department Brigham Young University {jefflund,cojoco,kseppi}@byu.edu Jordan Boyd-Graber Computer Science Department University of Colorado Boulder [email protected] Abstract Interactive topic models are powerful tools for understanding large collections of text. However, existing sampling-based interactive topic modeling approaches scale poorly to large data sets. Anchor methods, which use a single word to uniquely identify a topic, offer the speed needed for interactive work but lack both a mechanism to inject prior knowledge and lack the intuitive semantics needed for userfacing applications. We propose combinations of words as anchors, going beyond existing single word anchor algorithms— an approach we call “Tandem Anchors”. We begin with a synthetic investigation of this approach then apply the approach to interactive topic modeling in a user study and compare it to interactive and noninteractive approaches. Tandem anchors are faster and more intuitive than existing interactive approaches. Topic models distill large collections of text into topics, giving a high-level summary of the thematic structure of the data without manual annotation. In addition to facilitating discovery of topical trends (Gardner et al., 2010), topic modeling is used for a wide variety of problems including document classification (Rubin et al., 2012), information retrieval (Wei and Croft, 2006), author identification (Rosen-Zvi et al., 2004), and sentiment analysis (Titov and McDonald, 2008). However, the most compelling use of topic models is to help users understand large datasets (Chuang et al., 2012). Interactive topic modeling (Hu et al., 2014) allows non-experts to refine automatically generated topics, making topic models less of a “take it or leave it” proposition. Including humans input during training improves the quality of the model and allows users to guide topics in a specific way, custom tailoring the model for a specific downstream task or analysis. The downside is that interactive topic modeling is slow—algorithms typically scale with the size of the corpus—and requires non-intuitive information from the user in the form of must-link and cannot-link constraints (Andrzejewski et al., 2009). We address these shortcomings of interactive topic modeling by using an interactive version of the anchor words algorithm for topic models. The anchor algorithm (Arora et al., 2013) is an alternative topic modeling algorithm which scales with the number of unique word types in the data rather than the number of documents or tokens (Section 1). This makes the anchor algorithm fast enough for interactive use, even in web-scale document collections. A drawback of the anchor method is that anchor words—words that have high probability of being in a single topic—are not intuitive. We extend the anchor algorithm to use multiple anchor words in tandem (Section 2). 
Tandem anchors not only improve interactive refinement, but also make the underlying anchor-based method more intuitive. For interactive topic modeling, tandem anchors produce higher quality topics than single word anchors (Section 3). Tandem anchors provide a framework for fast interactive topic modeling: users improve and refine an existing model through multiword anchors (Section 4). Compared to existing methods such as Interactive Topic Models (Hu et al., 2014), our method is much faster.

1 Vanilla Anchor Algorithm

The anchor algorithm computes the topic matrix A, where A_{v,k} is the conditional probability of observing word v given topic k, e.g., the probability of seeing the word "lens" given the camera topic in a corpus of Amazon product reviews. Arora et al. (2012a) find these probabilities by assuming that every topic contains at least one 'anchor' word which has a non-zero probability only in that topic. Anchor words make computing the topic matrix A tractable because the occurrence pattern of the anchor word mirrors the occurrence pattern of the topic itself. To recover the topic matrix A using anchor words, we first compute a V × V cooccurrence matrix Q, where Q_{i,j} is the conditional probability p(w_j | w_i) of seeing word type w_j after having seen w_i in the same document. A form of the Gram-Schmidt process on Q finds anchor words {g_1, ..., g_K} (Arora et al., 2013). Once we have the set of anchor words, we can compute the probability of a topic given a word (the inverse of the conditioning in A). This coefficient matrix C is defined row-wise for each word i as

C^*_{i,\cdot} = \operatorname{argmin}_{C_{i,\cdot}} D_{\mathrm{KL}}\!\left( Q_{i,\cdot} \,\Big\|\, \sum_{k=1}^{K} C_{i,k} Q_{g_k,\cdot} \right),   (1)

which gives the best reconstruction (based on Kullback-Leibler divergence D_{\mathrm{KL}}) of non-anchor words given anchor words' conditional probabilities. For example, in our product review data, a word such as "battery" is a convex combination of the anchor words' contexts (Q_{g_k,\cdot}) such as "camera", "phone", and "car". Solving each row of C is fast and is embarrassingly parallel. Finally, we apply Bayes' rule to recover the topic matrix A from the coefficient matrix C.

The anchor algorithm can be orders of magnitude faster than probabilistic inference (Arora et al., 2013). The construction of Q has a runtime of O(DN^2), where D is the number of documents and N is the average number of tokens per document. This computation requires only a single pass over the data and can be pre-computed for interactive use-cases. Once Q is constructed, topic recovery requires O(KV^2 + K^2VI), where K is the number of topics, V is the vocabulary size, and I is the average number of iterations (typically 100-1000). In contrast, traditional topic model inference typically requires multiple passes over the entire data. Techniques such as Online LDA (Hoffman et al., 2010) or Stochastic Variational Inference (Hoffman et al., 2013) improve this to a single pass over the entire data. However, from Heaps' law (Heaps, 1978) it follows that V^2 ≪ DN for large datasets, leading to much faster inference times for anchor methods compared to probabilistic topic modeling. Further, even if these online techniques were adapted to incorporate human guidance, a single pass is not tractable for interactive use.
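To illustrate the per-word recovery step in Equation 1, the sketch below minimizes the KL objective over the probability simplex with a simple exponentiated-gradient update. This is a generic illustration rather than the authors' implementation; the step size, iteration count, and stopping rule are arbitrary choices.

```python
import numpy as np

def recover_coefficients(q, anchor_rows, n_iter=500, step=1.0):
    """Approximately solve Equation 1 for one word (a sketch).

    q: length-V co-occurrence row Q_{i,.} (a probability distribution)
    anchor_rows: (K, V) matrix whose rows are Q_{g_k,.}
    Returns c on the K-simplex so that q is approximated by c @ anchor_rows.
    """
    K = anchor_rows.shape[0]
    c = np.full(K, 1.0 / K)                  # start at the center of the simplex
    eps = 1e-12
    for _ in range(n_iter):
        mix = c @ anchor_rows + eps          # current reconstruction of q
        # Gradient of KL(q || mix) with respect to c.
        grad = -(anchor_rows @ (q / mix))
        grad = grad - grad.min()             # shift for stability; cancels after normalization
        c = c * np.exp(-step * grad)         # multiplicative (exponentiated-gradient) step
        c = c / c.sum()                      # renormalize onto the simplex
    return c
```

In the full algorithm this routine would be run independently for every row of Q (the embarrassingly parallel step), and A is then obtained from C by Bayes' rule.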
2 Tandem Anchor Extension

Single word anchors can be opaque to users. For an example of bewildering anchor words, consider a camera bag topic from a collection of Amazon product reviews (Table 1). The anchor word "backpack" may seem strange. However, this dataset contains nothing about regular backpacks; thus, "backpack" is unique to camera bags. Bizarre, low-to-mid frequency words are often anchors because anchor words must be unique to a topic; intuitive or high-frequency words cannot be anchors if they have probability in any other topic.

Table 1: Three separate attempts to construct a topic concerning camera bags in Amazon product reviews with single word anchors. This example is drawn from preliminary experiments with an author as the user. The term "backpack" is a good anchor because it uniquely identifies the topic. However, both "camera" and "bag" are poor anchors for this topic.
  Anchor     Top words in topic
  backpack   backpack camera lens bag room carry fit cameras equipment comfortable
  camera     camera lens pictures canon digital lenses batteries filter mm photos
  bag        bag camera diaper lens bags genie smell room diapers odor

The anchor selection strategy can mitigate this problem to some degree. For example, rather than selecting anchors using an approximate convex hull in high-dimensional space, we can find an exact convex hull in a low-dimensional embedding (Lee and Mimno, 2014). This strategy will produce more salient topics but still makes it difficult for users to manually choose unique anchor words for interactive topic modeling. If we instead ask users to give us representative words for this topic, we would expect combinations of words like "camera" and "bag." However, with single word anchors we must choose a single word to anchor each topic. Unfortunately, because these words might appear in multiple topics, individually they are not suitable as anchor words. The anchor word "camera" generates a general camera topic instead of camera bags, and the topic anchored by "bag" includes bags for diaper pails (Table 1). Instead, we need to use sets of representative terms as an interpretable, parsimonious description of a topic. This section discusses strategies to build anchors from multiple words and the implications of using multiword anchors to recover topics. This extension not only makes anchors more interpretable but also enables users to manually construct effective anchors in interactive topic modeling settings.

2.1 Anchor Facets

We first need to turn words into an anchor. If we interpret the anchor algorithm geometrically, each row of Q represents a word as a point in V-dimensional space. We then model each point as a convex combination of anchor words to reconstruct the topic matrix A (Equation 1). Instead of individual anchor words (one anchor word per topic), we use anchor facets, or sets of words that describe a topic. The facets for each anchor form a new pseudoword, or an invented point in V-dimensional space (described in more detail in Section 2.2). While these new points do not correspond to words in the vocabulary, we can express non-anchor words as convex combinations of pseudowords. To construct these pseudowords from their facets, we combine the co-occurrence profiles of the facets. These pseudowords then augment the original cooccurrence matrix Q with K additional rows corresponding to the synthetic pseudowords forming each of the K multiword anchors. We refer to this augmented matrix as S. The rest of the anchor algorithm proceeds unmodified. Our augmented matrix S is therefore a (V + K) × V matrix. As before, V is the number of token types in the data and K is the number of topics. The first V rows of S correspond to the V token types observed in the data, while the additional K rows correspond to the pseudowords constructed from anchor facets.
Each entry of S encodes conditional probabilities so that S_{i,j} is equal to p(w_i | w_j). For the additional K rows, we invent a cooccurrence pattern that can effectively explain the other words' conditional probabilities. This modification is similar in spirit to supervised anchor words (Nguyen et al., 2015). This supervised extension of the anchor words algorithm adds columns corresponding to conditional probabilities of metadata values after having seen a particular word. By extending the vector-space representation of each word, anchor words corresponding to metadata values can be found. In contrast, our extension does not add dimensions to the representation, but simply places additional points corresponding to pseudowords in the vector-space representation.

2.2 Combining Facets into Pseudowords

We now describe more concretely how to combine anchor facets to describe the cooccurrence pattern of our new pseudoword anchor. In tandem anchors, we create vector representations that combine the information from anchor facets. Our anchor facets are G_1, ..., G_K, where G_k is the set of anchor facets which will form the kth pseudoword anchor. The pseudowords are g_1, ..., g_K, where g_k is the pseudoword formed from G_k. These pseudowords form the new rows of S. We give several candidates for combining anchor facets into a single multiword anchor; we compare their performance in Section 3.

Vector Average: An obvious function for computing the central tendency is the vector average. For each anchor facet,

S_{g_k,j} = \sum_{i \in G_k} \frac{S_{i,j}}{|G_k|},   (2)

where |G_k| is the cardinality of G_k. Vector average makes the pseudoword S_{g_k,j} more central, which is intuitive but inconsistent with the interpretation from Arora et al. (2013) that anchors should be extreme points whose linear combinations explain more central words.

Or-operator: An alternative approach is to consider a cooccurrence with any anchor facet in G_k. For word j, we use De Morgan's laws to set

S_{g_k,j} = 1 - \prod_{i \in G_k} (1 - S_{i,j}).   (3)

Unlike the average, which pulls the pseudoword inward, this or-operator pushes the word outward, increasing each of the dimensions. Increasing the volume of the simplex spanned by the anchors explains more words.

Element-wise Min: Vector average and or-operator are both sensitive to outliers and cannot account for polysemous anchor facets. Returning to our previous example, both "camera" and "bag" are bad anchors for camera bags because they appear in documents discussing other products. However, if both "camera" and "bag" are anchor facets, we can look at an intersection of their contexts: words that appear with both. Using the intersection, the cooccurrence pattern of our anchor facet will only include terms relevant to camera bags. Mathematically, this is an element-wise min operator,

S_{g_k,j} = \min_{i \in G_k} S_{i,j}.   (4)

This construction, while perhaps not as simple as the previous two, is robust to words which have cooccurrences which are not unique to a single topic.

Harmonic Mean: Leveraging the intuition that we should use a combination function which is both centralizing (like vector average) and ignores large outliers (like element-wise min), the final combination function is the element-wise harmonic mean. Thus, for each anchor facet,

S_{g_k,j} = \left( \sum_{i \in G_k} \frac{S_{i,j}^{-1}}{|G_k|} \right)^{-1}.   (5)

Since the harmonic mean tends towards the lowest values in the set, it is not sensitive to large outliers, giving us robustness to polysemous words.
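The four combination functions (Equations 2 to 5) are straightforward to express over rows of the co-occurrence matrix. The sketch below assumes a dense NumPy array and facet sets given as lists of row indices; it illustrates the formulas and is not the authors' released implementation.

```python
import numpy as np

def combine_facets(S, facet_rows, method="harmonic"):
    """Build one pseudoword row from the facet rows of S (Equations 2-5).

    S: (V, V) array of co-occurrence-based conditional probabilities
    facet_rows: list of row indices for the words in the anchor facet G_k
    """
    rows = S[facet_rows]                                  # |G_k| x V
    if method == "average":                               # Equation 2
        return rows.mean(axis=0)
    if method == "or":                                    # Equation 3
        return 1.0 - np.prod(1.0 - rows, axis=0)
    if method == "min":                                   # Equation 4
        return rows.min(axis=0)
    if method == "harmonic":                              # Equation 5
        # Small constant guards against division by zero for unseen cooccurrences.
        return len(facet_rows) / np.sum(1.0 / (rows + 1e-12), axis=0)
    raise ValueError(f"unknown combiner: {method}")

def augment_cooccurrence(Q, facets, method="harmonic"):
    """Stack one pseudoword row per multiword anchor under Q to form S."""
    pseudo = np.vstack([combine_facets(Q, g, method) for g in facets])
    return np.vstack([Q, pseudo])
```

With the augmented matrix in hand, the recovery step of the next subsection runs exactly as in the single-word case, but with the K pseudoword rows as anchors.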
2.3 Finding Topics

After constructing the pseudowords of S, we then need to find the coefficients C_{i,k} which describe each word in our vocabulary as a convex combination of the multiword anchors. Like standard anchor methods, we solve the following for each token type:

C^*_{i,\cdot} = \operatorname{argmin}_{C_{i,\cdot}} D_{\mathrm{KL}}\!\left( S_{i,\cdot} \,\Big\|\, \sum_{k=1}^{K} C_{i,k} S_{g_k,\cdot} \right).   (6)

Finally, appealing to Bayes' rule, we recover the topic-word matrix A from the coefficients of C. The correctness of the topic recovery algorithm hinges upon the assumption of separability. Separability means that the occurrence pattern across documents of the anchor words across the data mirrors that of the topics themselves. For single word anchors, this has been observed to hold for a wide variety of data (Arora et al., 2012b). With our tandem anchor extension, we make similar assumptions as the vanilla algorithm, except with pseudowords constructed from anchor facets. So long as the occurrence pattern of our tandem anchors mirrors that of the underlying topics, we can use the same reasoning as Arora et al. (2012a) to assert that we can provably recover the topic-word matrix A with all of the same theoretical guarantees of complexity and robustness. Furthermore, the runtime analysis given by Arora et al. (2013) applies to tandem anchors. If desired, we can also add further robustness and extensibility to tandem anchors by adding regularization to Equation 6. Regularization allows us to add something which is mathematically similar to priors, and has been shown to improve the vanilla anchor word algorithm (Nguyen et al., 2014). We leave the question of the best regularization for tandem anchors as future work, and focus our efforts on solving the problem of interactive topic modeling.

3 High Water Mark for Tandem Anchors

Before addressing interactivity, we apply tandem anchors to real world data, but with anchors gleaned from metadata. Our purpose is twofold. First, we determine which combiner from Section 2.2 to use in our interactive experiments in Section 4, and second, we confirm that well-chosen tandem anchors can improve topics. In addition, we examine the runtime of tandem anchors and compare it to traditional model-based interactive topic modeling techniques. We cannot assume that we will have metadata available to build tandem anchors, but we use them here because they provide a high water mark without the variance introduced by study participants.

3.1 Experimental Setup

We use the well-known 20 Newsgroups dataset (20NEWS) used in previous interactive topic modeling work: 18,846 Usenet postings from 20 different newsgroups in the early 1990s. (Footnote 1: http://qwone.com/~jason/20Newsgroups/) We remove the newsgroup headers from each message, which contain the newsgroup names, but otherwise leave messages intact with any footers or quotes. We then remove stopwords and words which appear in fewer than 100 documents or more than 1,500 documents. To seed the tandem anchors, we use the titles of newsgroups. To build each multiword anchor facet, we split the title on word boundaries and expand any abbreviations or acronyms. For example, the newsgroup title 'comp.os.ms-windows.misc' becomes {"computer", "operating", "system", "microsoft", "windows", "miscellaneous"}. We do not fully specify the topic; the title gives some intuition, but the topic modeling algorithm must still recover the complete topic-word distributions. This is akin to knowing the names of the categories used but nothing else.
Critically, the topic modeling algorithm has no knowledge of document-label relationships. 3.2 Experimental Results Our first evaluation is a classification task to predict documents’ newsgroup membership. Thus, we do not aim for state-of-the-art accuracy,2 but the experiment shows title-based tandem anchors yield topics closer to the underlying classes than Gram-Schmidt anchors. After randomly splitting the data into test and training sets we learn topics from the test data using both the title-based tandem anchors and the Gram-Schmidt single word anchors.3 For multiword anchors, we use each of the combiner functions from Section 2.2. The anchor algorithm only gives the topic-word distributions and not word-level topic assignments, so we infer token-level topic assignments using LDA Latent Dirichlet Allocation (Blei et al., 2003) with fixed topics discovered by the anchor method. We use our own implementation of Gibbs sampling with fixed topics and a symmetric documenttopic Dirichlet prior with concentration α = .01. Since the topics are fixed, this inference is very fast and can be parallelized on a per-document basis. We then train a hinge-loss linear classifier on the newsgroup labels using Vowpal Wabbit4 with topic-word pairs as features. Finally, we infer topic assignments in the test data and evaluate the classification using those topic-word features. For both training and test, we exclude words outside 2The best system would incorporate topic features with other features, making it harder to study and understand the topical trends in isolation. 3With fixed anchors and data the anchor algorithm is deterministic, so we use random splits instead of the standard train/test splits so that we can compute variance. 4http://hunch.net/˜vw/ the LDA vocabulary. The topics created from multiword anchor facets are more accurate than Gram-Schmidt topics (Figure 1). This is true regardless of the combiner function. However, harmonic mean is more accurate than the other functions.5 Since 20NEWS has twenty classes, accuracy alone does not capture confusion between closely related newsgroups. For example, accuracy penalizes a classifier just as much for labeling a document from ‘rec.sport.baseball’ with ‘rec.sport.hockey’ as with ‘alt.atheism’ despite the similarity between sports newsgroups. Consequently, after building a confusion matrix between the predicted and true classes, external clustering metrics reveal confusion between classes. The first clustering metric is the adjusted Rand index (Yeung and Ruzzo, 2001), which is akin to accuracy for clustering, as it gives the percentage of correct pairing decisions from a reference clustering. Adjusted Rand index (ARI) also accounts for chance groupings of documents. Next we use F-measure, which also considers pairwise groups, balancing the contribution of false negatives, but without the true negatives. Finally, we use variation of information (VI). This metric measures the amount of information lost by switching from the gold standard labels to the predicted labels (Meil˘a, 2003). Since we are measuring the amount of information lost, lower variation of information is better. Based on these clustering metrics, tandem anchors can yield superior topics to those created using single word anchors (Figure 1). As with accuracy, this is true regardless of which combination function we use. Furthermore, harmonic mean produces the least confusion between classes.5 The final evaluation is topic coherence by Newman et al. 
(2010), which measures whether the topics make sense, and correlates with human judgments of topic quality. Given V, the set of the n most probable words of a topic, coherence is

\sum_{v_1, v_2 \in V} \log \frac{D(v_1, v_2) + \epsilon}{D(v_2)},   (7)

where D(v_1, v_2) is the co-document frequency of word types v_1 and v_2, and D(v_2) is the document frequency of word type v_2. A smoothing parameter ϵ prevents zero logarithms. (Footnote 5: Significant at p < 0.01/4 when using two-tailed t-tests with a Bonferroni correction. For each of our evaluations, we verify the normality of our data (D'Agostino and Pearson, 1973) and use two-tailed t-tests with Bonferroni correction to determine whether the differences between the different methods are significant.)

[Figure 1 (plots of Accuracy, ARI, F-Measure, VI, and Coherence for Gram-Schmidt, Title+Average, Title+Or, Title+Min, and Title+HMean anchors): Using metadata can improve anchor-based topic models. For all metrics, the unsupervised Gram-Schmidt anchors do worse than creating anchors based on Newsgroup titles (for all metrics except VI, higher is better). For coherence, Gram-Schmidt does better than two functions for combining anchor words, but not the element-wise min or harmonic mean.]

Figure 1 also shows topic coherence. Although title-based anchor facets produce better classification features, topics from Gram-Schmidt anchors have better coherence than title-based anchors with the vector average or the or-operator. However, when using the harmonic mean combiner, title-based anchors produce the most human interpretable topics. (Footnote 6: Significant at p < 0.01/4 when using two-tailed t-tests with a Bonferroni correction. For each of our evaluations, we verify the normality of our data (D'Agostino and Pearson, 1973) and use two-tailed t-tests with Bonferroni correction to determine whether the differences between the different methods are significant.) Harmonic mean beats other combiner functions because it is robust to ambiguous or irrelevant term cooccurrences in an anchor facet. Both the vector average and the or-operator are swayed by large outliers, making them sensitive to ambiguous terms in an anchor facet. Element-wise min also has this robustness, but harmonic mean is also able to better characterize anchor facets as it has more centralizing tendency than the min.

3.3 Runtime Considerations

Tandem anchors will enable users to direct topic inference to improve topic quality. However, for the algorithm to be interactive we must also consider runtime. Cook and Thomas (2005) argue that for interactive applications with user-initiated actions like ours, the response time should be less than ten seconds. Longer waits can increase the cognitive load on the user and harm the user interaction.
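As a reference point for Equation 7, the snippet below computes the coherence of a topic's top-n words from raw document frequencies. Whether the sum ranges over ordered or unordered word pairs is left implicit in the text, so the sketch uses unordered pairs; the value of epsilon and the set-based tokenization are likewise illustrative assumptions.

```python
import math
from itertools import combinations

def topic_coherence(top_words, documents, epsilon=1e-2):
    """Equation 7: sum over word pairs of log((D(v1, v2) + eps) / D(v2)).

    top_words: the n most probable words of one topic
    documents: iterable of token lists; assumes every top word occurs
               in at least one document (so D(v2) > 0).
    """
    doc_sets = [set(doc) for doc in documents]

    def doc_freq(w):
        return sum(1 for d in doc_sets if w in d)

    def co_doc_freq(w1, w2):
        return sum(1 for d in doc_sets if w1 in d and w2 in d)

    score = 0.0
    for v1, v2 in combinations(top_words, 2):
        score += math.log((co_doc_freq(v1, v2) + epsilon) / doc_freq(v2))
    return score
```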
For example, using an optimized version of the sampler for the Interactive Topic Model described by Hu and Boyd-Graber (2012), and the recommended 30 iterations of sampling, the Interactive Topic Model updates with a median time of 24.8 seconds (Hu and Boyd-Graber, 2012), which is well beyond our desired update time for interactive use and an order of magnitude slower than tandem anchors. Another promising interactive topic modeling approach is Utopian (Choo et al., 2013), which uses non-negative factorization, albeit without the benefit of anchor words. Utopian is much slower than tandem anchors. Even on the small InfoVisVAST dataset which contains only 515 documents, Utopian takes 48 seconds to converge. While the times are not strictly comparable due to differing datasets, Utopian scales linearly with the size of the data, we can intuit that even for moderately sized datasets such as 20NEWS, Utopian is infeasible for interactive topic modeling due to run time. While each of these interactive topic modeling algorithms do achieve reasonable topics, only our algorithm fits the run time requirements for inter901 Figure 2: Interface for user study with multiword anchors applied to interactive topic modeling. activity. Furthermore, since tandem anchors scales with the size of the vocabulary rather than the size of the data, this trend will only become more pronounced as we increase the amount of data. 4 Interactive Anchor Words Given high quality anchor facets, the tandem anchor algorithm can produce high quality topic models (particularly when the harmonic mean combiner is used). Moreover, the tandem anchor algorithm is fast enough to be interactive (as opposed to model-based approaches such as the Interactive Topic Model). We now turn our attention to our main experiment: tandem anchors applied to the problem of interactive topic modeling. We compare both single word and tandem anchors in our study. We do not include the Interactive Topic Model or Utopian, as their run times are too slow for our users. 4.1 Interface and User Study To show that interactive tandem anchor words are fast, effective, and intuitive, we ask users to understand a dataset using the anchor word algorithm. For this user study, we recruit twenty participants drawn from a university student body. The student median age is twenty-two. Seven are female, and thirteen are male. None of the students had any prior familiarity with topic modeling or the 20NEWS dataset. Each participant sees a simple user interface (Figure 2) with topic given as a row with two columns. The left column allows users to view and edit topics’ anchor words; the right column lists the most probable words in each topic.7 The user can remove an anchor word or drag words from 7While we use topics generated using harmonic mean for our final analysis, users were shown topics generated using the min combiner. However, this does not change our result. the topic word lists (right column) to become an anchor word. Users can also add additional topics by clicking the “Add Anchor” to create additional anchors. If the user wants to add a word to a tandem anchor set that does not appear in the interface, they manually type the word (restricted to the model’s vocabulary). When the user wants to see the updated topics for their newly refined anchors, they click “Update Topics”. We give each a participant a high level overview of topic modeling. 
We also describe common problems with topic models, including intruding topic words, duplicate topics, and ambiguous topics. Users are instructed to use their best judgement to determine if topics are useful. The task is to edit the anchor words to improve the topics. We asked that users spend at least twenty minutes, but no more than thirty minutes. We repeat the task twice: once with tandem anchors, and once with single word anchors. (Footnote 8: The order in which users complete these tasks is counterbalanced.)

4.2 Quantitative Results

We now validate our main result that for interactive topic modeling, tandem anchors yield better topics than single word anchors. Like our title-based experiments in Section 3, topics generated from users become features to train and test a classifier for the 20NEWS dataset. We choose this dataset for easier comparison with the Interactive Topic Modeling result of Hu et al. (2014). Based on our results with title-based anchors, we use the harmonic mean combiner in our analysis. As before, we report not only accuracy, but also multiple clustering metrics using the confusion matrix from the classification task. Finally, we report topic coherence.

Figure 3 summarizes the results of our quantitative evaluation. While we only compare user-generated anchors in our analysis, we include the unsupervised Gram-Schmidt anchors as a baseline. Some of the data violate assumptions of normality. Therefore, we use Wilcoxon's signed-rank test (Wilcoxon, 1945) to determine if the differences between multiword anchors and single word anchors are significant.

[Figure 3 (plots of Accuracy, ARI, F-Measure, VI, and Coherence for Tandem, Singleword, and Gram-Schmidt anchors): Classification accuracy and coherence using topic features gleaned from user-provided multiword and single word anchors. Gram-Schmidt anchors are provided as a baseline. For all metrics except VI, higher is better. Except for coherence, multiword anchors are best.]

Topics from user-generated multiword anchors yield higher classification accuracy (Figure 3). Not only is our approach more scalable than the Interactive Topic Model, but we also achieve higher classification accuracy than Hu et al. (2014). (Footnote 9: However, the values are not strictly comparable, as Hu et al. (2014) use the standard chronological test/train fold, and we use random splits.) Tandem anchors also improve clustering metrics. (Footnote 10: Significant at p < 0.01 when using Wilcoxon's signed-rank test.) While user-selected tandem anchors produce better classification features than single word anchors, user-selected single word anchors produce topics with similar topic coherence scores. (Footnote 11: The difference between coherence scores was not statistically significant using Wilcoxon's signed-rank test.) To understand this phenomenon, we use quality metrics (AlSumait et al., 2009) for ranking topics by their correspondence to genuine themes in the data. Significant topics are likely skewed towards a few related words, so we measure the distance of each topic-word distribution from the uniform distribution over words. Topics which are close to the underlying word distribution of the entire data are likely to be vacuous, so we also measure the distance of each topic-word distribution from the underlying word distribution. Finally, background topics are likely to appear in a wide range of documents, while meaningful topics will appear in a smaller subset of the data. Figure 4 reports our topic significance findings. For all three significance metrics, multiword anchors produce more significant topics than single word anchors (see Footnote 10). Topic coherence is based solely on the top n words of a topic, while both accuracy and topic significance depend on the entire topic-word distributions.
With single word anchors, topics with good coherence may still be too general. Tandem anchors enable users to produce topics with more specific word distributions, which are better features for classification.

Table 2: Comparison of topics generated for 20NEWS using various types of anchor words. Users are able to combine words to create more specific topics with tandem anchors.

Anchor | Top Words in Topic
Automatic Gram-Schmidt:
love | love god evolution romans heard car
game | game games team hockey baseball heard
Interactive Single-word:
evolution | evolution theory science faith quote facts
religion | religion god government state jesus israel
baseball | baseball games players word teams car
hockey | hockey team play games season players
Interactive Tandem:
atheism god exists prove | god science evidence reason faith objective
christian jesus | jesus christian christ church bible christians
jew israel | israel jews jewish israeli state religion
baseball bat ball hit | baseball ball player games call hockey
nhl | team hockey player nhl win play

4.3 Qualitative Results

We examine the qualitative differences between how users select multiword anchor facets versus single word anchors. Table 2 gives examples of topics generated using different anchor strategies.

Figure 4: Topic significance for both single word and multiword anchors (panels: uniform, vacuous, background; series: Tandem, Single word, Gram-Schmidt). In all cases higher is better. Multiword anchors produce topics which are more significant than single word anchors.

In a follow-up survey with our users, 75% find it easier to effect individual changes in the topics using tandem anchors compared to single word anchors. Users who prefer editing multiword anchors over single word anchors often report that multiword anchors make it easier to merge similar topics into a single focused topic by combining anchors. For example, by combining multiple words related to Christianity, users were able to create a topic which is highly specific, and differentiated from general religion themes which included terms about Atheism and Judaism.

While users find that using tandem anchors is easier, only 55% of our users say that they prefer the final topics produced by tandem anchors compared to single word anchors. This is in harmony with our quantitative measurements of topic coherence, and may be the result of our stopping criterion: users stopped when they judged the topics to be useful. However, 100% of our users feel that the topics created through interaction were better than those generated from Gram-Schmidt anchors. This was true regardless of whether we used tandem anchors or single word anchors.

Our participants also produce fewer topics when using multiword anchors: the mean difference in the number of topics between single word anchors and multiword anchors is 9.35. In follow-up interviews, participants indicate that the easiest way to resolve an ambiguous topic with single word anchors was to create a new anchor for each of the ambiguous terms, thus explaining the proliferation of topics for single word anchors. In contrast, fixing an ambiguous tandem anchor is simple: users just add more terms to the anchor facet.
5 Conclusion Tandem anchors extend the anchor words algorithm to allow multiple words to be combined into anchor facets. For interactive topic modeling, using anchor facets in place of single word anchors produces higher quality topic models and are more intuitive to use. Furthermore, our approach scales much better than existing interactive topic modeling techniques, allowing interactivity on large datasets for which interactivity was previous impossible. Acknowledgements This work was supported by the collaborative NSF Grant IIS-1409287 (UMD) and IIS- 1409739 (BYU). Boyd-Graber is also supported by NSF grants IIS-1320538 and NCSE-1422492. References Loulwah AlSumait, Daniel Barbar´a, James Gentle, and Carlotta Domeniconi. 2009. Topic significance ranking of LDA generative models. In Proceedings of European Conference of Machine Learning. David Andrzejewski, Xiaojin Zhu, and Mark Craven. 2009. Incorporating domain knowledge into topic modeling via Dirichlet forest priors. In Proceedings of the International Conference of Machine Learning. Sanjeev Arora, Rong Ge, Yonatan Halpern, David Mimno, Ankur Moitra, David Sontag, Yichen Wu, and Michael Zhu. 2013. A practical algorithm for topic modeling with provable guarantees. In Proceedings of the International Conference of Machine Learning. Sanjeev Arora, Rong Ge, Ravindran Kannan, and Ankur Moitra. 2012a. Computing a nonnegative matrix factorization–provably. In Proceedings of the forty-fourth annual ACM symposium on Theory of computing. Sanjeev Arora, Rong Ge, and Ankur Moitra. 2012b. Learning topic models–going beyond svd. In FiftyThird IEEE Annual Symposium on Foundations of Computer Science. David M. Blei, Andrew Ng, and Michael Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research 3:993–1022. Jaegul Choo, Changhyun Lee, Chandan K Reddy, and Heejung Park. 2013. Utopian: User-driven topic modeling based on interactive nonnegative matrix factorization. Visualization and Computer Graphics, IEEE Transactions on 19(12):1992–2001. 904 Jason Chuang, Christopher D Manning, and Jeffrey Heer. 2012. Termite: Visualization techniques for assessing textual topic models. In Proceedings of the International Working Conference on Advanced Visual Interfaces. Kristin A. Cook and James J. Thomas. 2005. Illuminating the path: The research and development agenda for visual analytics. Technical report, Pacific Northwest National Laboratory (PNNL), Richland, WA (US). Ralph D’Agostino and Egon S Pearson. 1973. Tests for departure from normality. empirical results for the distributions of b2 and b1. Biometrika 60(3):613– 622. Matthew J Gardner, Joshua Lutes, Jeff Lund, Josh Hansen, Dan Walker, Eric Ringger, and Kevin Seppi. 2010. The topic browser: An interactive tool for browsing topic models. In NIPS Workshop on Challenges of Data Visualization. Harold Stanley Heaps. 1978. Information retrieval: Computational and theoretical aspects, Academic Press, Inc., pages 206–208. Matthew Hoffman, Francis R Bach, and David M Blei. 2010. Online learning for latent dirichlet allocation. In advances in neural information processing systems. Matthew D Hoffman, David M Blei, Chong Wang, and John William Paisley. 2013. Stochastic variational inference. Journal of Machine Learning Research 14(1):1303–1347. Yuening Hu and Jordan Boyd-Graber. 2012. Efficient tree-based topic modeling. In Proceedings of the Association for Computational Linguistics. Yuening Hu, Jordan Boyd-Graber, Brianna Satinoff, and Alison Smith. 2014. Interactive topic modeling. 
Machine Learning 95(3):423–469. Moontae Lee and David Mimno. 2014. Lowdimensional embeddings for interpretable anchorbased topic inference. In Proceedings of Empirical Methods in Natural Language Processing. Marina Meil˘a. 2003. Comparing clusterings by the variation of information. In Learning theory and kernel machines. David Newman, Jey Han Lau, Karl Grieser, and Timothy Baldwin. 2010. Automatic evaluation of topic coherence. In Proceedings of the Association for Computational Linguistics. Thang Nguyen, Jordan Boyd-Graber, Jeffrey Lund, Kevin Seppi, and Eric Ringger. 2015. Is your anchor going up or down? Fast and accurate supervised topic models. In Conference of the North American Chapter of the Association for Computational Linguistics. Thang Nguyen, Yuening Hu, and Jordan L BoydGraber. 2014. Anchors regularized: Adding robustness and extensibility to scalable topic-modeling algorithms. In Proceedings of the Association for Computational Linguistics. Michal Rosen-Zvi, Thomas Griffiths, Mark Steyvers, and Padhraic Smyth. 2004. The author-topic model for authors and documents. In Proceedings of Uncertainty in Artificial Intelligence. Timothy Rubin, America Chambers, Padhraic Smyth, and Mark Steyvers. 2012. Statistical topic models for multi-label document classification. Machine Learning 1(88):157–208. Ivan Titov and Ryan T McDonald. 2008. A joint model of text and aspect ratings for sentiment summarization. In Proceedings of the Association for Computational Linguistics. Xing Wei and W Bruce Croft. 2006. LDA-based document models for ad-hoc retrieval. In Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval. Frank Wilcoxon. 1945. Individual comparisons by ranking methods. Biometrics bulletin 1(6):80–83. Ka Yee Yeung and Walter L Ruzzo. 2001. Details of the adjusted rand index and clustering algorithms, supplement to the paper an empirical study on principal component analysis for clustering gene expression data. Bioinformatics 17(9):763–774. 905
2017
83
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 906–916 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1084 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 906–916 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1084 Apples to Apples: Learning Semantics of Common Entities Through a Novel Comprehension Task Omid Bakhshandeh University of Rochester [email protected] James F. Allen University of Rochester Institute for Human and Machine Cognition [email protected] Abstract Understanding common entities and their attributes is a primary requirement for any system that comprehends natural language. In order to enable learning about common entities, we introduce a novel machine comprehension task, GuessTwo: given a short paragraph comparing different aspects of two realworld semantically-similar entities, a system should guess what those entities are. Accomplishing this task requires deep language understanding which enables inference, connecting each comparison paragraph to different levels of knowledge about world entities and their attributes. So far we have crowdsourced a dataset of more than 14K comparison paragraphs comparing entities from a variety of categories such as fruits and animals. We have designed two schemes for evaluation: open-ended, and binary-choice prediction. For benchmarking further progress in the task, we have collected a set of paragraphs as the test set on which human can accomplish the task with an accuracy of 94.2% on open-ended prediction. We have implemented various models for tackling the task, ranging from semantic-driven to neural models. The semantic-driven approach outperforms the neural models, however, the results indicate that the task is very challenging across the models. 1 Introduction In the past few years, there has been great progress on core NLP tasks (e.g., parsing and part of speech tagging) which has renewed interest in primary language learning tasks which require text understanding and reasoning, such as machine comprehension (Schoenick et al., 2016; Hermann et al., 2015; Rajpurkar et al., 2016; Mostafazadeh et al., 2016). Our question is how far have we got in learning basic concepts of the world through language comprehension. If we look at the large body of work on extracting knowledge from unstructured corpora, we will see that they often lack some very basic pieces of information. For example, let us focus on the basic concept of apple, the fruit. What do the state-of-the-art systems and resources know about an apple? None of the state-of-the-art knowledge bases (Speer and Havasi, 2012; Carlson et al., 2010; Fader et al., 2011) include much precise information about the fact that apples have an edible skin, vary from sweet to sour, are round, and relatively the same size of a fist. Moreover, there is no clear approach on how to extract such information, if any, from trained word embeddings. This paper focuses on how we can automatically learn about various attributes of such generic entities in the world. A key observation motivating this work is that we can learn more detail about objects when they are compared to other similar objects. When we compare things we often contrast, that is, we count their similarities along with their dissimilarities. 
This results in covering the primary attributes and aspects of objects. As humans, we tend to recall and mention the difference between things (say green skin vs. red skin in apples) as opposed to absolute measures (say the existence of skin). Interestingly, there is evidence that human knowledge is structured by semantic similarity and the relations among objects are defined by their relative perceptual and conceptual properties, such as their form, function, behavior, and environment (Collins and Loftus, 1975; Tversky and Gati, 1978; Cree and Mcrae, 2003). Our idea is to leverage comparison as a way of naturally learning 906 about common world concepts and their specific attributes. Comparison, where we name the similarities and differences between things, is a unique cognitive ability in humans1 which requires memorizing facts, experiencing things and integration of concepts of the world (Hazlitt, 1933). It is clear that developing AI systems that are capable of comprehending comparison is crucial. In this paper, in order to enable learning through comparison, we introduce a new language comprehension task which requires understanding different attributes of basic entities that are being compared. The contributions of this paper are as follows: (1) To equip learning about common entities through comparison comprehension, we have crowdsourced a dataset of more than 14K comparison paragraphs comparing entities from nine broad categories (Section 2). This resource will be expanded over time and will be released to the public. (2) We introduce a novel task called GuessTwo, in which given a short paragraph comparing two entities, a system should guess what the two things are. (Section 3). To make systematic benchmarking on the task possible, we vet a collection of comparison paragraphs to obtain a test set on which human performs with an accuracy 94.2%. (3) We present a host of neural approaches and a novel semantic-driven model for tackling the GuessTwo task (Sections 4, 5). Our experiments show that the semantic approach outperforms the neural models. The results strongly suggest that closing the gap between system and human performances requires richer semantic processing (Section 6). We hope that this work will establish a new base for a machine comprehension test that requires systems to go beyond information extraction and towards levels of performing basic reasoning. 2 Data Collection To enable learning about common entities, we aimed to create a dataset which meets the following goals: 1. The dataset should be a collection of highquality documents which are rich in compar1It has been suggested (Hazlitt, 1933) that children under seven years old cannot name differences between simple things such as peach and apple. This further shows that the ability for comparison develops at a later age and is cognitively complex. ing and contrasting entities using their various attributes and aspects. 2. The comparisons in the dataset should involve everyday non-technical concepts, making their comprehension easy and commonsense for a human. After many experiments with scraping existing Web resources, we decided to crowdsource the comparison paragraphs using Amazon Mechanical Turk2 (Mturk). We prompt the crowd workers as follows: “Your task is to compare two given items in one simple language paragraph so that a knowledgeable person who reads it can guess what the two things are”. The workers were instructed to compare only the major and well-known aspects of the two entities. 
We also asked them to use X and Y for anonymously referring to the two entities. Table 1 shows three examples of our crowdsourced comparison paragraphs. As these examples show, the paragraphs are very contentful and rich in comparison which meets our initial goals in the dataset creation. Entity Pair Selection. The choice of the two entities which should be compared against each other plays a key role in the quality of the collected dataset. It is evident that naturally, we compare two things which are semantically similar, yet have some dissimilarities3, such as jam and jelly. Given the goals of our task, we experimented with concrete nouns which share a common taxonomy class. We choose semantic classes which have at least five well-known entities. So far, we have covered nine broad categories as shown in Figure 2, with 21 subcategories shown in Figure 3. We use Wikipedia item categories and the WordNet (Miller, 1995) ontology for identifying entities from each subcategory. Then, we choose the most common entities by looking up their frequency on Google Web 1T N-grams4. We manually inspected the frequency-filtered list to make sure that the entities are rather easy to describe without getting technical. Given the list of entities, we paired each entity with at most five and at least three other entities from the same subcategory. We also include inter-subcategory compar2www.mturk.com 3Tversky’s (1978) analysis of similarity suggests that similarity statements compare objects that belong to the same class of things. 4https://catalog.ldc.upenn.edu/ ldc2006t13 907 Comparison Paragraph Entity X Entity Y Both X and Y are fruits and a variety of apples. X and Y are generally similar in size. X are dark red in color when ripe, while Y are a bright green color. X is sweeter and softer than Y in taste and texture, sometimes starchy. Y are tart and somewhat stringy. Y is often used in cooking, whereas X is not. Red Delicious Apple Fruit Granny Smith Apple Fruit The X and Y are two types of vehicles. X is a smaller vehicle than Y. The X has two wheels while Y has none. The X travels on roadways and smooth surfaces, whereas Y is capable of flying. Only one or two people are able to ride on X at once, while Y can carry more people. Motorcycle Motor Vehicle Vehicle Helicopter Aircraft Vehicle X and Y are both types of world cuisines. X incorporates a lot of pasta dishes and sauces, with basil, tomato, and cheese being major ingredients. Y consists of many curries and stir fried dishes, with coconut and lemongrass being used often. Y is generally spicier and more aromatic than X. X is a European cuisine, while Y is an Asian cuisine. Italian Cuisine Cuisine Cuisine Thai Cuisine Cuisine Cuisine Table 1: Examples from the GuessTwo comprehension dataset. Also provided with the dataset is the subcategory and the broad category of the entities which are listed below the entity names in this Table. Figure 1: An example illustrating the entity pair matching process. ison for a handful of entities at the boundaries. Figure 1 illustrates our entity pair matching process with an example on subcategories ‘apple’ and ‘citrus’. Data Quality Control. Our task of free-form writing is trickier than many other tasks such as tagging on Mturk. To instruct the non-expert workers, we designed a qualification test on Mturk in which the workers had to judge whether or not a given paragraph is acceptable according to our criteria. We used three carefully selected paragraphs to be a part of the qualification test. 
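The entity pair selection described above can be approximated with a simple heuristic over (entity, subcategory, frequency) triples. The paper does not spell out exactly which three to five partners are chosen for each entity, so the frequency-ranked choice below is an assumption; `entities` is a hypothetical list assembled from Wikipedia categories, WordNet, and Google Web 1T counts.

```python
from collections import defaultdict

def select_entity_pairs(entities, min_partners=3, max_partners=5):
    """entities: iterable of (name, subcategory, frequency) triples."""
    by_subcat = defaultdict(list)
    for name, subcat, freq in entities:
        by_subcat[subcat].append((freq, name))

    pairs = set()
    for subcat, members in by_subcat.items():
        # Most frequent (i.e., most familiar) entities first.
        ranked = [name for _, name in sorted(members, reverse=True)]
        for ent in ranked:
            partners = [other for other in ranked if other != ent][:max_partners]
            if len(partners) < min_partners:
                continue  # subcategory too small to yield natural comparisons
            for other in partners:
                pairs.add(tuple(sorted((ent, other))))
    return sorted(pairs)
```

A handful of inter-subcategory pairs at category boundaries (as in Figure 1) would then be added on top of this within-subcategory pairing.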
Moreover, to further ensure the quality of the submissions, one of our team members qualitatively browsed through the submissions and gave the workers detailed feedback before approving their paragraphs. For each pair of entities, we collected eight comparison paragraphs from different workers. Given that different workers have different perspectives on what the major aspects to be compared are, collecting multiple paragraphs helps further enriching our dataset. We constrained the paragraphs to be at least 250 characters and at most 850 characters. Table 2 shows the basic statistics of our dataset. In this Table, we also included the median number of adjectives (including comparatives) per paragraph as a measure of descriptiveness of the comparison paragraphs. As a point of reference, the median number of adjectives in a random Wikipedia paragraph of the same length is 5. Figure 2: Distribution of broad category of the entities. Figure 3: Distribution of subcategory of the entities. Given the quality control we have in place, our data collection is going slowly. So far we have collected 14,142 paragraphs; however, we are aiming 908 Number of total approved paragraphs 14,142 Number of workers participated 649 Average number of paragraphs by one worker 21.7 Average work time among workers (minutes) 17.3 Median work time among workers (minutes) 6.4 Payment per paragraph (cents) 50 Number of broad entity categories 9 Number of entity sub-categories 24 Number of unique entities 920 Number of unique pairs compared 1974 Median number of sentences per paragraph 7 Median number of tokens per paragraph 70 Median number of adjectives per paragraph 7 Table 2: Statistics of the GuessTwo dataset as of April 2017. Figure 4: An example showing the entity pairs in the test and training sets. to expand the resource over time. Test Set Creation. In order to enable benchmarking on the task, we assessed the quality of a random sample of GuessTwo paragraphs as follows: we show the paragraph to three human workers on Mturk and ask them to guess what the two things are. Then, we choose 520 paragraphs for which all three workers have made exactly correct guesses for both entities. The test set will also be expanded along with the further data collection. We divided the rest of the GuessTwo dataset into training and validation sets, with a 90%/10% split. To ensure that the test set requires some level of basic reasoning, our training set does not share any exact entity pairs with the validation or test set. This further enforces systems to learn about entities indirectly by processing across paragraphs. For instance, as shown in Figure 4, at test time, a system should be able to guess a comparison involving the entities blood orange vs. lemon by having seen comparisons of blood orange vs. tangerine and tangerine vs. lemon. Our dataset will be released to the public through https://omidb.github.io/ guesstwo/. 3 The GuessTwo Task Definition We define the following two different schemes for the GuessTwo task: • Open-ended GuessTwo. Given a short paragraph P which compares two entities X and Y, guess what the two entities are. The scope of this prediction is the set of all entities appearing in the training dataset. • Binary Choice GuessTwo. Given a short paragraph P which compares two entities X and Y, and two nominals n1 and n2, choose 0 if n1 = X and n2 = Y, choose 1 otherwise. 
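The pair-disjoint split described above, where no exact entity pair seen in training also appears in validation or test, is straightforward to implement. This is a minimal sketch under the assumption that each paragraph record carries its gold entities `x` and `y`; the paper's actual 90%/10% split procedure may differ in detail.

```python
import random

def pair_disjoint_split(paragraphs, held_out_fraction=0.1, seed=13):
    """paragraphs: list of dicts with keys 'text', 'x', 'y'."""
    pairs = sorted({tuple(sorted((p['x'], p['y']))) for p in paragraphs})
    random.Random(seed).shuffle(pairs)
    n_held_out = max(1, int(len(pairs) * held_out_fraction))
    held_out_pairs = set(pairs[:n_held_out])

    train, held_out = [], []
    for p in paragraphs:
        key = tuple(sorted((p['x'], p['y'])))
        (held_out if key in held_out_pairs else train).append(p)
    return train, held_out
```

Because every paragraph comparing, say, blood orange vs. lemon is held out together, a system can only succeed by composing knowledge from other pairs, such as blood orange vs. tangerine and tangerine vs. lemon.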
We speculate that system which can successfully tackle the GuessTwo task, has achieved two major objectives: (1) Has successfully learned the knowledge about entities stored in any form (e.g., continuous-space representation or symbolic) (2) Has a basic natural language understanding capability, using which, it can comprehend a paragraph and access its knowledge. We predict that our training dataset has enough detailed information about entities for learning the required knowledge for tackling the task. Given the design of our dataset, at test time, a system should perform some level of reasoning to go beyond understanding only one paragraph. 4 Neural Models In this Section we present various end-to-end neural models for tackling the task of GuessTwo. Continuous Bag-of-words Language Model. This model computes the probability of a sequence of consecutive words in context. The premise is that the probability of a paragraph with the correct realization of X and Y should be higher than the a paragraph with incorrect realizations. In order to compute the probability of a word given a context we use Continuous Bag-of-words (CBOW) (Mikolov et al., 2013a) which models the following conditional probability: p(w|C(w), θ) (1) here, C(w) is the context of the word w and θ is the model parameters. Then, the probability of a sequence of words (in a paragraph) is computed as follows: n Y i=1 p(wi|C(wi), θ) (2) We define context to be a window of five words. Figure 5a summarizes this model. We train this 909 (a) The CBOW model. (b) The CNN open-ended model. (c) The CNN binary-choice model. Figure 5: Various neural models for tackling the task of GuessTwo. Figure 6: The Encoder-Decoder model. model on two datasets: (1) A collection5 of processed Wikipedia articles. Wikipedia articles often include definitions and descriptions of variety of items, which can provide a reasonable resource for our task. (2) the GuessTwo training dataset. We call these models CBOW-Wikipedia and CBOWGuessTwo respectively. At test time, for open-ended prediction we find the two nominals which maximize the following probability: argmax x,y n Y i=1 p(wi|C(wi)x,y, θ) (3) where C(wi)x,y indicates the context in which any occurrences of X have been replaced with x and Y’s have been replaced with y. For binary choice classification, we use the same modeling except that we only consider x = n1, y = n2 and x = n2, y = n1. Encoder-Decoder Recurrent Neural Net 5http://mattmahoney.net/dc/text8.zip (RNN). This model is a sequence-to-sequence generation model (Cho et al., 2014; Sutskever et al., 2014) that maps an input sequence to an output sequence using an encoder-decoder RNN with attention (Bahdanau et al., 2014). The encoder RNN processes the comparison paragraph and the decoder generates the first item followed by the second item (Figure 6). The paragraph is encoded into a state vector of size 512. This vector is then set as the initial recurrent state of the decoder. We tune the model parameters on the validation set, where we set the number of layers to 2. The model is trained end-to-end, using Stochastic Gradient Descent with early stopping. For open-ended prediction, we use beam search with beam-width = 25 and then output the two tokens with the highest probability. For binary choice classification, we use the same model where we set the encoder RNN inputs to the input paragraph tokens, then, we set the input of the decoder RNN once to [n1, n2] and next to [n2, n1]. 
After running the network forward, we take the probability of the decoder logits and choose the ordering which has the highest probability. Convolutional Neural Network (CNN) Encoder. As shown in the Figure 5b, this model first uses a Convolutional Neural Network (CNN) (LeCun and Bengio, 1998) for encoding the paragraph (Kim, 2014). We train a simple CNN with one layer of convolution on top of pre-trained word vectors. Here we use the word vectors trained by 910 Be - Be The set Physical object - X Physical object - Y Both Fruit - Apple Neutral1 Neutral Sequence1 Sequence Operator Figure 7: Semantic parsing for the sentence Both X and Y are apples. Skip-gram model (Mikolov et al., 2013b) on 100 billion words of Google News6. For open-ended prediction, the output of CNN is fed forward and transformed into a 300 dimension vector. Then, we use a softmax layer to get the probability of each of the possible nominals for X and Y. For binary choice classification, we use the same architecture and settings as above. Additionally, we encode each nominal into a 300-dimensional vector, which then gets concatenated with the paragraph vector. Figure 5c shows this model. 5 Semantic-driven Model In this Section we present a semantic-driven approach which models the comparison paragraph using semantic features and is capable of performing basic reasoning across paragraphs. 5.1 Representing Paragraphs The question is, given a comparison paragraph, what is the best representation which can enable further reasoning? The comparison paragraphs often have complex syntactic and semantic structures, which might be challenging for many offthe-shelf NLP tools to process. For instance, consider the sentence X is much sweeter in taste than Y. Although a dependency parser provides a lot of information regarding how the individual words relate grammatically, it does not give us any information regarding how Y’s sweetness (which is elided from the sentence and is implicit) relates to X’s. As another processing technique, if we use the standard information extraction methods for extracting and representing syntactic triplets (argument1, relation, argument2) (Fader et al., 2014; Etzioni et al., 2011), we will extract a triplet such 6https://code.google.com/archive/p/ word2vec/ as X is sweeter which shares the same shortcomings. Our approach for better representation of comparison paragraphs starts with a broad-coverage semantic parser (Banarescu et al., 2013; Bos, 2008; Allen et al., 2008). A semantic parser maps an input sentence to its formal meaning representation, operating at the generic natural language level. Here we use the TRIPS7 (Allen et al., 2008) broad-coverage semantic parser. TRIPS provides a very rich semantic structure; mainly it provides sense disambiguated deep structures augmented with semantic ontology types. Figure 7 shows an example TRIPS semantic parse. In this graph representation, each node specifies a word in bold along with its corresponding ontology type on its left. The edges in the graph are semantic roles8. As you can see, this semantic parse represents the sentence by decoupling the token ‘both’ and attributing the property of ‘be apple’ to both X and Y. In our comparison paragraphs there are two major types of sentences: • Sentences with Absolute Information. These sentences contain direct information about the entities, such as X is red or Both X and Y are very sweet. From each absolute sentence, we extract frames which describe the absolute attributes of the corresponding entity. 
We define a frame to be a subgraph of a semantic parse which involves exactly one entity and all of its semantic roles. Relying on the deep semantic features offered by the semantic parser, we perform negation propagation9 and sequence decoupling, among others features. For example, given a sentence which has a sequence, as the one depicted in Figure 7, we perform sequence decoupling and extract the two frames [X Be Apple] and [Y Be Apple]. • Sentences with Relative Information. These sentences contain relative information about the two entities, for instance, X is somewhat sweeter than Y. As opposed to the sentences with absolute information, we cannot extract frames from sentences with comparisons directly. Various properties of entities can be associated with an abstract scale, such as ‘size’ or ‘sweetness’, on which dif7http://trips.ihmc.us/parser/cgi/parse 8Refer to http://trips.ihmc.us/parser/ LFDocumentation.pdf for the full list of semantic roles in TRIPS parser. 9A common construction which needs negation propagation is Neither X nor Y are ... . 911 Comparative> X is sweet -er than Y. Scale/+ Figure Ground Figure 8: The comparison construction predicted for the sentence X is sweeter than Y. ferent entities can be compared. In order to extract such scales and the relative standing of items on them we use the structured prediction model presented in Bakhshandeh et al. (2016), which given a sentence predicts its comparison structures. Figure 8 shows an example predicate-argument structure that is predicted by this model. We use pretrained model on the annotated corpus (Bakhshandeh et al., 2016) of comparison structures. Given a comparison structure such as the one presented in Figure 8, we can extract the information that on the scale of ‘sweetness’ X is higher than Y. It is clear that one can build a large knowledge base of such relations by reading large collections of comparison paragraphs. We populate our knowledge base of relative information about entities as follows: First, we predict the comparison structure of each sentence and then extract a binary relation ≺s which shows the relation on the scale of s. Second, for any scale s, we apply transitivity on its entities. As shown in equation 4, the binary relation ≺s is transitive over the set of all entities, A. This process, called closure, enables us do basic reasoning and derives implicit relations on scales from explicit relations. ∀s ∈S ∀x, y, z ∈A : (x ≺s y ∧y ≺s z) =⇒x ≺s z (4) The product of this step is a structured knowledge base on entity ordering which we call the ordering lattice. Figure 9 shows an example partial ordering lattice inferred by our model, where the sweetness of Golden Delicious can be compared to Granny Smith through their direct link with Red Delicious. 5.2 Modeling Given a paragraph P, we first extract the set of all the absolute information frames for X and Y (as described above), called FX(P) and FY(P). Second, for the sentences with relative information, Figure 9: The inferred partial ordering lattice comparing the sweetness of different apples. we extract all the binary relations ≺s∈R(P) that should hold between X and Y. Then, our objective is to find two realizations for X and Y that maximize the following: argmax x,y p(x|FX(P)) + p(y|FY(P)) s.t. ∀≺s∈R(P) : x ≺s y (5) In order to compute the p(x|FX(P)) and p(y|FY(P)) scores we used Regularized Gradient Boosting (XGBoost) classifier (Friedman, 2000), which uses a regularized model formulation to limit overfitting. 
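A minimal sketch of the closure step in Equation 4, assuming the relative information has already been extracted as per-scale sets of (x, y) pairs meaning x sits lower than y on that scale. The fixed-point loop below is one obvious way to materialize the ordering lattice, not necessarily the paper's implementation, and the example orderings are illustrative.

```python
def transitive_closure(scale_relations):
    """scale_relations: dict mapping a scale name (e.g. 'sweetness')
    to a set of (x, y) pairs meaning x <_s y."""
    closed = {}
    for scale, edges in scale_relations.items():
        edges = set(edges)
        while True:
            derived = {(x, z)
                       for (x, y) in edges
                       for (y2, z) in edges
                       if y == y2 and x != z} - edges
            if not derived:
                break
            edges |= derived
        closed[scale] = edges
    return closed

# Example (orderings illustrative): an explicit chain lets us compare
# Golden Delicious with Granny Smith through their link to Red Delicious.
lattice = transitive_closure({
    'sweetness': {('granny smith', 'red delicious'),
                  ('red delicious', 'golden delicious')}})
assert ('granny smith', 'golden delicious') in lattice['sweetness']
```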
We directly use each frame in the FX(P) and FY(P) sets as the classifier features. We use Integer Linear Programming (ILP) for formulating the constraints as follows: for each relation r ∈R on the scale s, we lookup the scale s in the ordering lattice and make the blacklist B(P) containing each pair of entities which do not satisfy the relation r. Our ordering lattice does not have perfect complete information, hence, we have Open World Assumption and only prune our search space not to include the already observed pairs which violate the relation. our ILP objective function will be the following: argmax b,b′ X x∈N bx p(x|FX(P)) + X y∈N b′ y p(y|FY(P)) s.t. ∀(j, j′) ∈B(P) : bj + b′ j′ ≤1 (6) where N is the set of all possible realizations and b and b′ are the binary indicator variables, so bx = 1 indicates the realization of x for X. 912 In the case of open-ended prediction, the maximization presented in Equation 6 is carried out on the set N. In the case of binary choice classification, however, only the two choices of n1 and n2 are considered in the maximization. 6 Results We evaluate all the models presented in Sections 4 and 5 using the following accuracy measure: #correct predictions of both entities #test cases (7) As for the open-ended prediction we compute the nominator of the accuracy measure using three various matching methods on both entities: (1) exact-match, (2) subcategory match, (3) broad category match. As Table 3 shows, the semantic model outperforms all the neural models. Moreover, the ILP constraints have been very effective in directing the system in the correct search space. Among the neural models, the Encoder-Decoder RNN model performs noticeably better than other models when matching the subcategory and broad category. According to the exact-matching, neither of the CBOW models could guess any of the two test entities correctly. Overall, it is evident that the end-to-end neural models have not been able to generalize well and learn about the attributes of entities across various training paragraphs. This can be partly due to not being trained on large enough comparison training dataset. The semantic model, however, could outperform the neural models using the same amount of data. To a degree, this is because the semantic model leverages the basic language understanding capabilities offered by the semantic parser. It is also important to note that our semantic approach is not only capable of binary and open-ended prediction, but it also offers two byproducts that can be used as knowledge in a variety of other tasks: (1) a set of the most important absolute information frames which can be chosen based on feature importance in the classification, (2) the partial ordering lattice of entities. Overall, the results strongly suggest that the GuessTwo task is challenging, with the open-ended scheme being the most challenging. There is a wide gap between human and system performance on this task, which makes it a very promising task for the community to pursue. Model Binary Open-ended Exact. Subcat. Human 100.0 94.2 100.0 CBOW-Wikipedia 51.9 0.0 1.5 CBOW-GuessTwo 51.7 0.0 1.1 Encoder-Decoder RNN 58.8 2.9 6.8 CNN 57.6 1.9 2.5 Semantic (no constraints) 61.5 10.5 38.5 Semantic (with ILP constraints) 69.2 11.7 40.4 Table 3: System accuracy results on the GuessTwo test set. A random baseline on binary choice task achieves 51%. The open-ended evaluation has two columns: exact-match (exact) and subcategory match (subcat), respectively. 
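For small candidate sets, the constrained selection in Equation 6 can also be checked with a brute-force search instead of an ILP solver. The sketch below assumes the classifier scores and the blacklist derived from the ordering lattice are already available; it is an equivalent-but-simpler stand-in for the paper's ILP formulation, not the paper's code.

```python
import itertools

def constrained_best_pair(scores_x, scores_y, blacklist):
    """scores_x, scores_y: dicts mapping candidate entity -> p(x|F_X(P)) / p(y|F_Y(P)).
    blacklist: set of (x, y) pairs that violate an observed scale ordering."""
    best_pair, best_score = None, float('-inf')
    for x, y in itertools.product(scores_x, scores_y):
        # Skip degenerate x == y guesses (implied by the task, not stated in Eq. 6)
        # and any pair the ordering lattice rules out.
        if x == y or (x, y) in blacklist:
            continue
        score = scores_x[x] + scores_y[y]
        if score > best_score:
            best_pair, best_score = (x, y), score
    return best_pair
```

For the binary-choice scheme, the same routine is simply run over the two candidate orderings (n1, n2) and (n2, n1).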
7 Related Work The task of Machine Comprehension (MC) has gained a significant attention over the past few years. The major driver for MC has been the publicly available benchmarking datasets. A variety of MC tasks have been introduced in the community (Richardson et al.; Hermann et al., 2015; Rajpurkar et al., 2016; Hill et al., 2015), in which the system reads a short text and answers a few multiple-choice questions. The reading comprehension involved in these tests ranges from reading a short fictional story (Richardson et al.) to reading a short news article (Hermann et al., 2015). In comparison, in the GuessTwo task the reading comprehension involves reading a short comparison paragraph and one can say the multiple-choice question is the constant What are X and Y? The CNN/DailyMail dataset consists of more than 100K short news articles with the questions automatically created from the bullet-point summaries of the original article. This dataset uses fill-in-the-blank-style questions such as ‘Producer X will not press charges against Jeremy Clarkson’ where the system should choose among all the anonymized entities in the corresponding paragraph to fill in X. The Stanford Question Answering (SQuAD) dataset is another recent machine comprehension test with over 500 Wikipedia articles and +100,000 crowdsourced questions. The answer to every question in this dataset is a span of text from the corresponding reading passage. Human accuracy on CNN/DailyMail is estimated to be around 75% (Chen et al., 2016) with the current state-of-the-art at 76.1 on CNN (Sordoni et al., 2016), and 75.8 on DailyMail (Chen et al., 2016). The human F1 score on SQuAD 913 dataset is reported to be at 86.8%, with the current state-of-the-art achieving 82.9%. Given these statistics, neither of these datasets leave enough room for further research. Given that in both these tasks the answer to the question is directly found in the provided passage, we argue that the community requires a more challenging MC task which goes beyond matching and needs some level of inference across passages. The GuessTwo task requires basic reasoning and inference across paragraphs for comprehending various aspects of entities relative to one another. Another interesting task is MCTest (Richardson et al.), which is a reading comprehension test with 660 fictional stories as the passage and four questions per story. The human-level performance on MCTest is estimated to be around 90%, with the state-of-the-art achieving an accuracy of 70% (Wang et al., 2015). MCTest is also proven to be challenging, however, given its very limited training data, further progress on the task has been hindered. Yet another relevant QA task is the Allen AI Science Challenge (Clarke et al., 2010; Schoenick et al., 2016), which is a dataset of multiple-choice questions and answers from a standardized 8th grade science exam. The questions can range from simple fact lookup to complex ones which require extensive world knowledge and commonsense reasoning. This task requires machine reading of a variety of resources such as textbooks and goes beyond reading a couple of passages. 8 Conclusion We introduced the novel task of GuessTwo, in which given a short paragraph comparing two common entities, a system should guess what the two entities are. The comparison paragraphs often have complex semantic structures which make this comprehension task demanding. 
Furthermore, guessing the two entities requires a system to go beyond only understanding one given passage and requires reasoning across paragraphs, which is one of the most under-explored, yet crucial, capabilities of an intelligent agent. So far, we have crowdsourced a dataset of more than 14K comparison paragraphs comparing entities from nine major categories. For benchmarking the progress, we filter a collection of these paragraphs to create a test set, on which humans perform with an accuracy of 94.2%. For continuing our data collection, we would like to have a targeted entity pair selection where we particularly collect the missing relations in our partial ordering lattice. We believe that this process can help developing more effective systems. For the most recent statistics of the dataset and the best performing systems please check this website. We presented a host of neural models and a novel semantic-driven approach for tackling the task of GuessTwo. Our experiments show that the semantic approach outperforms the neural models by a large margin. The poor performance of the neural models we experimented with can motivate designing new architectures which are capable of performing basic reasoning across paragraphs. The results strongly suggest that bridging the gap between system and human performance on this task requires models with richer language representation and reasoning capabilities. As a future work, we would like to explore the feasibility of marrying our semantic and neural models to exploit the benefits that each of them has to offer. 9 Acknowledgments This work was supported in part by Grant W911NF-15-1-0542 with the US Defense Advanced Research Projects Agency (DARPA) and the Army Research Office (ARO). We would like to thank Linxiuzhi Yang for her help in the data collection and anonymous reviewers for their insightful comments on this work. We specially thank William de Beaumont for his invaluable feedback on this paper. We also thank the inputs from Steven Piantadosi, Brad Mahon, and Gregory Carlson on cognitive aspects of comparison. References James F. Allen, Mary Swift, and Will de Beaumont. 2008. Deep semantic analysis of text. In Proceedings of the 2008 Conference on Semantics in Text Processing. Association for Computational Linguistics, Stroudsburg, PA, USA, STEP ’08, pages 343–354. http://dl.acm.org/citation.cfm?id=1626481.1626508. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473. http://arxiv.org/abs/1409.0473. Omid Bakhshandeh, Alexis Cornelia Wellwood, and James Allen. 2016. Learning to jointly predict ellipsis and comparison structures. In Proceedings of The 20th SIGNLL Conference on Computational 914 Natural Language Learning. Association for Computational Linguistics, Berlin, Germany, pages 62– 74. http://www.aclweb.org/anthology/K16-1007. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse. Association for Computational Linguistics, Sofia, Bulgaria, pages 178–186. http://www.aclweb.org/anthology/W13-2322. Johan Bos. 2008. Wide-coverage semantic analysis with boxer. In Johan Bos and Rodolfo Delmonte, editors, Semantics in Text Processing. STEP 2008 Conference Proceedings. 
College Publications, Research in Computational Semantics, pages 277–286. Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka, and Tom M. Mitchell. 2010. Toward an architecture for never-ending language learning. In In AAAI. Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. In Association for Computational Linguistics (ACL). Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 . James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the world’s response. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning. Association for Computational Linguistics, Stroudsburg, PA, USA, CoNLL ’10, pages 18–27. http://dl.acm.org/citation.cfm?id=1870568.1870571. Allan M. Collins and Elizabeth F. Loftus. 1975. A spreading-activation theory of semantic processing. Psychological Review 82(6):407 – 428. George S. Cree and Ken Mcrae. 2003. Analyzing the factors underlying the structure and computation of the meaning of chipmunk, cherry, chisel, cheese, and cello (and many other such concrete nouns). Journal of Experimental Psychology: General 132(2):163– 201+. Oren Etzioni, Anthony Fader, Janara Christensen, Stephen Soderland, and Mausam Mausam. 2011. Open information extraction: The second generation. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence - Volume Volume One. AAAI Press, IJCAI’11, pages 3–10. https://doi.org/10.5591/978-1-57735516-8/IJCAI11-012. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Stroudsburg, PA, USA, EMNLP ’11, pages 1535–1545. http://dl.acm.org/citation.cfm?id=2145432.2145596. Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, New York, NY, USA, KDD ’14, pages 1156–1165. https://doi.org/10.1145/2623330.2623677. Jerome H. Friedman. 2000. Greedy function approximation: A gradient boosting machine. Annals of Statistics 29:1189–1232. V. Hazlitt. 1933. The psychology of infancy. E.P. Dutton and company, inc. https://books.google.com/books?id=I8svAAAAYAAJ. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. pages 1693– 1701. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children’s books with explicit memory representations. International Conference on Learning Representations (ICLR) . Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Alessandro Moschitti, Bo Pang, and Walter Daelemans, editors, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL. ACL, pages 1746–1751. 
http://aclweb.org/anthology/D/D14/D14-1181.pdf. Yann LeCun and Yoshua Bengio. 1998. The handbook of brain theory and neural networks. MIT Press, Cambridge, MA, USA, chapter Convolutional Networks for Images, Speech, and Time Series, pages 255–258. http://dl.acm.org/citation.cfm?id=303568.303704. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. CoRR abs/1301.3781. http://arxiv.org/abs/1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information 915 Processing Systems 26, Curran Associates, Inc., pages 3111–3119. http://papers.nips.cc/paper/5021distributed-representations-of-words-and-phrasesand-their-compositionality.pdf. George A. Miller. 1995. Wordnet: A lexical database for english. Commun. ACM 38(11):39–41. https://doi.org/10.1145/219717.219748. Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 839–849. http://www.aclweb.org/anthology/N161098. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text . Matthew Richardson, Christopher J. C. Burges, and Erin Renshaw. ???? Mctest: A challenge dataset for the open-domain machine comprehension of text. pages 193–203. Carissa Schoenick, Peter Clark, Oyvind Tafjord, Peter D. Turney, and Oren Etzioni. 2016. Moving beyond the turing test with the allen AI science challenge. CoRR abs/1604.04315. http://arxiv.org/abs/1604.04315. Alessandro Sordoni, Phillip Bachman, and Yoshua Bengio. 2016. Iterative alternating neural attention for machine reading. CoRR abs/1606.02245. http://arxiv.org/abs/1606.02245. Robert Speer and Catherine Havasi. 2012. Representing general relational knowledge in conceptnet 5. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Mehmet Uur Doan, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, and Stelios Piperidis, editors, Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC’12). European Language Resources Association (ELRA), Istanbul, Turkey. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 813 2014, Montreal, Quebec, Canada. pages 3104– 3112. http://papers.nips.cc/paper/5346-sequenceto-sequence-learning-with-neural-networks. Amos Tversky and Itamar Gati. 1978. Studies of similarity. Cognition and categorization 1(1978):79–98. Hai Wang, Mohit Bansal, Kevin Gimpel, and David A. McAllester. 2015. Machine comprehension with syntax, frames, and semantics. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 2: Short Papers. The Association for Computer Linguistics, pages 700–706. http://aclweb.org/anthology/P/P15/P15-2115.pdf. 916
2017
84
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 917–928 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1085 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 917–928 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1085 Going out on a limb: Joint Extraction of Entity Mentions and Relations without Dependency Trees Arzoo Katiyar and Claire Cardie Department of Computer Science Cornell University Ithaca, NY, 14853, USA arzoo, [email protected] Abstract We present a novel attention-based recurrent neural network for joint extraction of entity mentions and relations. We show that attention along with long short term memory (LSTM) network can extract semantic relations between entity mentions without having access to dependency trees. Experiments on Automatic Content Extraction (ACE) corpora show that our model significantly outperforms featurebased joint model by Li and Ji (2014). We also compare our model with an end-toend tree-based LSTM model (SPTree) by Miwa and Bansal (2016) and show that our model performs within 1% on entity mentions and 2% on relations. Our finegrained analysis also shows that our model performs significantly better on AGENTARTIFACT relations, while SPTree performs better on PHYSICAL and PARTWHOLE relations. 1 Introduction Extraction of entities and their relations from text belongs to a very well-studied family of structured prediction tasks in NLP. There are several NLP tasks such as fine-grained opinion mining (Choi et al., 2006), semantic role labeling (Gildea and Jurafsky, 2002), etc., which have a similar structure; thus making it an important and a challenging task. Several methods have been proposed for entity mention and relation extraction at the sentencelevel. These can be broadly categorized into – 1) pipeline models that treat the identification of entity mentions (Nadeau and Sekine, 2007) and relation classification (Zhou et al., 2005) as two separate tasks; and 2) joint models, also the more recent, which simultaneously identify the entity mention and relations (Li and Ji, 2014; Miwa and Sasaki, 2014). Joint models have been argued to perform better than the pipeline models as knowledge of the typed relation can increase the confidence of the model on entity extraction and vice versa. Recurrent networks (RNNs) (Elman, 1990) have recently become very popular for sequence tagging tasks such as entity extraction that involves a set of contiguous tokens. However, their ability to identify relations between non-adjacent tokens in a sequence, e.g., the head nouns of two entities, is less explored. For these tasks, RNNs that make use of tree structures have been deemed more suitable. Miwa and Bansal (2016), for example, propose an RNN comprised of a sequencebased long short term memory (LSTM) for entity identification and a separate tree-based dependency LSTM layer for relation classification using shared parameters between the two components. As a result, their model depends critically on access to dependency trees, restricting it to sentencelevel extraction and to languages for which (good) dependency parsers exist. Also, their model does not jointly extract entities and relations; they first extract all entities and then perform relation classification on all pairs of entities in a sentence. 
In our previous work (Katiyar and Cardie, 2016), we address the same task in an opinion extraction context. Our LSTM-based formulation explicitly encodes distance between the head of entities into opinion relation labels. The output space of our model is quadratic in size of the entity and relation label set and we do not specifically identify the relation type. Unfortunately, adding relation type makes the output label space very sparse, making it difficult for the model to learn. In this paper, we propose a novel RNN-based model for the joint extraction of entity mentions 917 and relations. Unlike other models, our model does not depend on any dependency tree information. Our RNN-based model is a multi-layer bidirectional LSTM over a sequence. We encode the output sequence from left-to-right. At each time step, we use an attention-like model on the previously decoded time steps, to identify the tokens in a specified relation with the current token. We also add an additional layer to our network to encode the output sequence from right-to-left and find significant improvement on the performance of relation identification using bi-directional encoding. Our model significantly outperforms the feature-based structured perceptron model of Li and Ji (2014), showing improvements on both entity and relation extraction on the ACE05 dataset. In comparison to the dependency treebased LSTM model of Miwa and Bansal (2016), our model performs within 1% on entities and 2% on relations on ACE05 dataset. We also find that our model performs significantly better than their tree-based model on the AGENT-ARTIFACT relation, while their tree-based model performs better on PHYSICAL and PART-WHOLE relations; the two models perform comparably on all other relation types. The very competitive performance of our non-tree-based model bodes well for relation extraction of non-adjacent entities in low-resource languages that lack good parsers. In the sections that follow, we describe related work (Section 2); our bi-directional LSTM model with attention (Section 3); the training (Section 4); the experiments on ACE dataset (Section 5); results (Section 6); error analysis (Section 7) and conclusion (Section 8). 2 Related Work RNNs (Hochreiter and Schmidhuber, 1997) have been recently applied to many sequential modeling and prediction tasks, such as machine translation (Bahdanau et al., 2015; Sutskever et al., 2014), named entity recognition (NER) (Hammerton, 2003), opinion mining (Irsoy and Cardie, 2014). Variants such as adding CRF-like objective on top of LSTMs have been found to produce state-of-the-art results on several sequence prediction NLP tasks (Collobert et al., 2011; Huang et al., 2015; Katiyar and Cardie, 2016). These models assume conditional independence at the output layer whereas the model we propose in this paper does not assume any conditional independence at the output layer, allowing it to model an arbitrary distribution over output sequences. Relation classification has been widely studied as a stand-alone task, assuming that the arguments of the relations are known in advance. There have been several models proposed including featurebased models (Bunescu and Mooney, 2005; Zelenko et al., 2003) and neural network based models (Socher et al., 2012; dos Santos et al., 2015; Hashimoto et al., 2015; Xu et al., 2015a,b). 
For joint-extraction of entities and relations, feature-based structured prediction models (Li and Ji, 2014; Miwa and Sasaki, 2014), joint inference integer linear programming models(Yih and Roth, 2007; Yang and Cardie, 2013), card-pyramid parsing (Kate and Mooney, 2010) and probabilistic graphical models (Yu and Lam, 2010; Singh et al., 2013) have been proposed. In contrast, we propose a neural network model which does not depend on the availability of any features such as part of speech (POS) tags, dependency trees, etc. Recently, Miwa and Bansal (2016) proposed an end-to-end LSTM based sequence and treestructured model. They extract entities via a sequence layer and relations between the entities via the shortest path dependency tree network. In this paper, we try to investigate recurrent neural networks with attention for extracting semantic relations between entity mentions without using any dependency parse tree features. We also present the first neural network based joint model that can extract entity mentions and relations along with the relation type. In our previous work (Katiyar and Cardie, 2016), as explained earlier, we proposed a LSTM-based model for joint extraction of opinion entities and relations, but no relation types. This model cannot be directly extended to include relation types as the output space becomes sparse making it difficult for the model to learn. Recent advances in recurrent neural network has seen the application of attention on recurrent neural networks to obtain a representation weighted by the importance of tokens in the sequence model. Such models have been very frequently used in question-answering tasks (for recent examples, see Chen et al. (2016) and Lee et al. (2016)), machine translation (Luong et al., 2015; Bahdanau et al., 2015), and many other NLP applications. Pointer networks (Vinyals et al., 2015), an adaptation of attention models, use these tokenlevel weights as pointers to the input elements. 918 Martin Geissler , ITV News , Safwan southern Iraq . Entity tags B PER L PER O B ORG L ORG O U GPE O U LOC O ORG-AFF PHYS PART-WHOLE Figure 1: Gold standard annotation for an example sentence from ACE05 dataset. Zhai et al. (2017), for example, have used these for neural chunking, and Nallapati et al. (2016) and Cheng and Lapata (2016), for summarization. However, to the best of our knowledge, these networks have not been used for joint extraction of entity mentions and relations. We present first such attempt to use these attention models with recurrent neural networks for joint extraction of entity mentions and relations. 3 Model Our model comprises of a multi-layer bidirectional recurrent network which learns a representation for each token in the sequence. We use the hidden representation from the top layer for joint entity and relation extraction. For each token in the sequence, we output an entity tag and a relation tag. The entity tag corresponds to the entity type, whereas the relation tag is a tuple of pointers to related entities and their respective relation types. Figure 1 shows the annotation for an example sentence from the dataset. We transform the relation tags from entity level to token level. For example, we separately model the relation “ORG-AFF” for each token in the entity “ITV News”. Thus, we model the relations between “ITV” and “Martin Geissler”, and “News” and “Martin Geissler” separately. We employ a pointer-like network on top of the sequence layer in order to find the relation tag for each token as shown in Figure 2. 
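To make the token-level reformulation above concrete, here is a minimal sketch in plain Python (the helper names and data layout are ours, not the authors'): it derives the BILOU entity tags for the Figure 1 sentence and expands each entity-level relation so that every token of the later entity points back to every token of the earlier entity it is related to.

```python
def bilou_tags(tokens, entities):
    """entities: list of (start, end_exclusive, type). Returns one BILOU tag per token."""
    tags = ["O"] * len(tokens)
    for start, end, etype in entities:
        if end - start == 1:
            tags[start] = "U-" + etype
        else:
            tags[start] = "B-" + etype
            for i in range(start + 1, end - 1):
                tags[i] = "I-" + etype
            tags[end - 1] = "L-" + etype
    return tags

def token_level_relations(entities, relations):
    """relations: (earlier_entity_idx, later_entity_idx, type). For every token of the
    later entity, emit pointers to every token of the earlier entity it relates to."""
    pointers = {}  # token index -> list of (pointed-to token index, relation type)
    for head, dep, rtype in relations:
        h_start, h_end, _ = entities[head]
        d_start, d_end, _ = entities[dep]
        for t in range(d_start, d_end):
            pointers.setdefault(t, []).extend((p, rtype) for p in range(h_start, h_end))
    return pointers

tokens = ["Martin", "Geissler", ",", "ITV", "News", ",", "Safwan", "southern", "Iraq", "."]
entities = [(0, 2, "PER"), (3, 5, "ORG"), (6, 7, "GPE"), (8, 9, "LOC")]
relations = [(0, 1, "ORG-AFF"), (0, 2, "PHYS"), (2, 3, "PART-WHOLE")]
print(bilou_tags(tokens, entities))       # B-PER L-PER O B-ORG L-ORG O U-GPE O U-LOC O
print(token_level_relations(entities, relations))
```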
At each time step, the network utilizes the information available about all output tags from the previous time steps in order to output the entity tag and relation tag jointly for the current token.

3.1 Multi-layer Bi-directional Recurrent Network

We use multi-layer bi-directional LSTMs for sequence tagging because LSTMs are better at capturing long-term dependencies between tokens, which makes them well suited for both entity mention and relation extraction. Using LSTMs, we can compute the hidden state \overrightarrow{h}_t in the forward direction and \overleftarrow{h}_t in the backward direction for every token:

\overrightarrow{h}_t = \mathrm{LSTM}(x_t, \overrightarrow{h}_{t-1})
\overleftarrow{h}_t = \mathrm{LSTM}(x_t, \overleftarrow{h}_{t+1})

For every token t in the subsequent layer l, we combine the representations \overrightarrow{h}^{l-1}_t and \overleftarrow{h}^{l-1}_t from the previous layer l-1 and feed them as input. In this paper, we only use the hidden state from the last layer L for the output layer, and compute the top hidden layer representation as:

z'_t = \overrightarrow{V}\,\overrightarrow{h}^{(L)}_t + \overleftarrow{V}\,\overleftarrow{h}^{(L)}_t + c

where \overrightarrow{V} and \overleftarrow{V} are weight matrices for combining the hidden representations from the two directions.

3.2 Entity detection

We formulate entity detection as a sequence labeling task using the BILOU scheme, similar to Li and Ji (2014) and Miwa and Bansal (2016). We assign each token in an entity the tag B appended with the entity type if it is the beginning of the entity, I for the inside of an entity, L for the end of the entity, or U if there is only one token in the entity. Figure 1 shows an example of the entity tag sequence assigned to the sentence. For each token in the sequence, we perform a softmax over all candidate tags to output the most likely tag:

y_t = \mathrm{softmax}(U z'_t + b)

Our network structure, as shown in Figure 2, also contains connections from the output y_{t-1} of the previous time step to the current top hidden layer; thus our outputs are not conditionally independent of each other. In order to add connections from y_{t-1}, we transform this output k into a label embedding b^k_{t-1}, representing each label type k with a dense representation b^k. (We could also add relation label embeddings using the relation tag output from the previous time step.) We compute the output layer representations as:

z_t = \mathrm{LSTM}([z'_t; b^k_{t-1}], h_{t-1})
y_t = \mathrm{softmax}(U z_t + b')

We decode the output sequence from left to right in a greedy manner.

[Figure 2: Our network structure based on bi-directional LSTMs for joint entity and relation extraction. This snapshot shows the network when encoding the relation tag for the word "Safwan" in the sentence. The dotted lines in the figure show that the top hidden layer and label embeddings for tokens are copied into the relation layer. The pointers at the attention layer indicate the probability distribution over tokens; the length of a pointer denotes the probability value.]
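The following is a minimal sketch of Sections 3.1–3.2 (PyTorch is used only for illustration; the label-embedding size and other details not stated in the paper are assumptions, not the released implementation): a multi-layer bi-directional LSTM, the combined top-layer representation z'_t, and a greedy left-to-right entity decoder that feeds back the embedding of the previously predicted label.

```python
import torch
import torch.nn as nn

class EntityTagger(nn.Module):
    """Sketch of Sections 3.1-3.2: multi-layer BiLSTM, combined top-layer z'_t, and a
    greedy entity decoder conditioned on the previous predicted label embedding."""
    def __init__(self, vocab_size, n_tags, emb_dim=300, hid=100, layers=3, label_dim=25):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hid, num_layers=layers,
                              bidirectional=True, batch_first=True)
        self.combine = nn.Linear(2 * hid, hid)            # z'_t = V_fwd h_fwd + V_bwd h_bwd + c
        self.label_emb = nn.Embedding(n_tags, label_dim)
        self.decoder = nn.LSTMCell(hid + label_dim, hid)  # output-layer LSTM producing z_t
        self.out = nn.Linear(hid, n_tags)                 # scores for y_t = softmax(U z_t + b')

    def forward(self, token_ids):                         # token_ids: (1, T)
        top, _ = self.bilstm(self.embed(token_ids))       # top-layer states, (1, T, 2*hid)
        z_prime = self.combine(top)[0]                    # (T, hid)
        h = z_prime.new_zeros(1, z_prime.size(1))
        c = torch.zeros_like(h)
        prev_label = z_prime.new_zeros(1, self.label_emb.embedding_dim)
        tags, states = [], []
        for t in range(z_prime.size(0)):
            inp = torch.cat([z_prime[t:t + 1], prev_label], dim=1)
            h, c = self.decoder(inp, (h, c))
            k = int(self.out(h).argmax())                 # greedy left-to-right decoding
            tags.append(k)
            states.append(h[0])                           # z_t, later reused by the relation layer
            prev_label = self.label_emb(torch.tensor([k]))
        return tags, torch.stack(states)

# toy usage: 10 tokens; 29 tags = 7 entity types x {B, I, L, U} plus O
tagger = EntityTagger(vocab_size=50000, n_tags=29)
tags, z = tagger(torch.randint(0, 50000, (1, 10)))
print(tags, z.shape)                                      # e.g. [...], torch.Size([10, 100])
```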
3.3 Attention Model

We use an attention model for relation extraction. Attention models, over an encoder sequence of representations z, can compute a soft probability distribution p over these learned representations, where d_i is the i-th token in the decoder sequence. These probabilities indicate the importance of the different tokens in the encoder sequence:

u^i_t = v^{T} \tanh(W_1 z + W_2 d_i)
p^i_t = \mathrm{softmax}(u^i_t)

Here v is a weight matrix for attention which transforms the hidden representations into attention scores. We use pointer networks (Vinyals et al., 2015) in our approach, which are a variation of these attention models. Pointer networks interpret these p^i_t as the probability distribution over the input encoding sequence and use u^i_t as pointers to the input elements. We can use these pointers to encode relations between the current token and previously predicted tokens, making them a good fit for relation extraction, as explained in Section 3.4.

3.4 Relation detection

We formulate relation extraction also as a sequence labeling task. For each token, we want to find the tokens in the past that the current token is related to, along with the relation type. In Figure 1, "Safwan" is related to the tokens "Martin" as well as "Geissler" by the relation type "PHYS". For simplicity, let us assume for now that there is only one previous token the current token is related to during training, i.e., "Safwan" is related to "Geissler" via the PHYS relation; we extend our approach to output multiple relations as explained in Section 4.

We use pointer networks as described in Section 3.3. At each time step, we stack the top hidden layer representations from the previous time steps z_{\le t} and the corresponding label embeddings b_{\le t}. (The notation \le t denotes the stacking of representations from the previous time steps: if z_t is a 2-dimensional matrix, then z_{\le t} is a 3-dimensional tensor whose first dimension corresponds to the number of stacked matrices.) We only stack the top hidden layer representations for the tokens which were predicted as non-O at previous time steps, as shown in Figure 2. Our decoding representation at time t is the concatenation of z_t and b_t. The attention probabilities can now be computed as:

u^t_{\le t} = v^{T} \tanh(W_1 [z_{\le t}; b_{\le t}] + W_2 [z_t; b_t])
p^t_{\le t} = \mathrm{softmax}(u^t_{\le t})

Thus, p^t_{\le t} corresponds to the probability of each token in the sequence so far being related to the current token at time step t. For NONE relations, the token at t is related to itself.

We also want to find the type of each relation. To achieve this, we add an extra dimension to v corresponding to the size R of the relation type space. Thus, u^i_t is no longer a scalar score but an R-dimensional vector. We then take a softmax over this vector of size O(|z_{\le t}| \times R) to find the most likely tuple of a pointer to the related entity and its relation type.

3.5 Bi-directional Encoding

Bi-directional LSTMs have been found to capture context better than plain left-to-right LSTMs, based on their performance on various NLP tasks (Irsoy and Cardie, 2014). Also, Sutskever et al. (2014) found that their performance on a machine translation task improved when reversing the input sentences during training. Inspired by these developments, we experiment with bi-directional encoding at the output layer. We add another top hidden layer on the Bi-LSTM in Figure 2 which encodes the output sequence from right to left. The two encodings share the same multi-layer bi-directional LSTM except for the top hidden layer. Thus, we have two output layers in our network which output the entity tags and relation tags separately. At inference time, we employ heuristics to combine the output from the two directions.
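A sketch of the relation layer of Section 3.4 (again our own PyTorch rendering with assumed dimensions, not the authors' code): the attention scores over previously decoded non-O tokens get an extra relation-type dimension R, and one softmax over all (previous token, type) pairs selects the pointer and the type jointly; including the current token's own representation among the candidates implements the NONE convention.

```python
import torch
import torch.nn as nn

class RelationPointer(nn.Module):
    def __init__(self, rep_dim, att_dim, n_rel_types):
        super().__init__()
        self.W1 = nn.Linear(rep_dim, att_dim, bias=False)
        self.W2 = nn.Linear(rep_dim, att_dim, bias=False)
        self.v = nn.Linear(att_dim, n_rel_types, bias=False)  # v with an extra R dimension

    def forward(self, prev_reps, cur_rep):
        # prev_reps: (t, rep_dim) stacked [z_<=t ; b_<=t]; cur_rep: (rep_dim,) = [z_t ; b_t].
        # Include cur_rep itself in prev_reps so that "points to itself" can encode NONE.
        scores = self.v(torch.tanh(self.W1(prev_reps) + self.W2(cur_rep)))   # (t, R)
        flat = torch.softmax(scores.reshape(-1), dim=0)                      # joint softmax
        return flat.reshape(scores.shape)  # [i, r] = P(token i relates to current token via r)

# toy usage: 4 candidate tokens; 125 = 100 (z_t) + 25 (label embedding);
# 7 "types" standing in for the ACE05 relation set plus NONE
layer = RelationPointer(rep_dim=125, att_dim=64, n_rel_types=7)
probs = layer(torch.randn(4, 125), torch.randn(125))
best = int(probs.reshape(-1).argmax())
print("pointer to token", best // 7, "with relation type", best % 7)
```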
4 Training

We train our network by maximizing the log-probability of the correct entity tag sequence E and relation tag sequence R jointly, given the sentence S:

\log p(E, R \mid S, \theta) = \frac{1}{|S|} \sum_{i \in |S|} \log p(e_i, r_i \mid e_{<i}, r_{<i}, S, \theta) = \frac{1}{|S|} \sum_{i \in |S|} \big[ \log p(e_i \mid e_{<i}, r_{<i}) + \log p(r_i \mid e_{\le i}, r_{<i}) \big]

Thus, we can decompose our objective into the sum of log-probabilities over the entity sequence and the relation sequence.

We use the gold entity tags while training. As shown in Figure 2, we input the label embedding from the previous time step to the top hidden layer at the current time step, along with the other recurrent inputs. During training, we pass the gold label embedding to the next time step, which enables better training of our model. At test time, when the gold label is not available, we use the predicted label at the previous time step as input to the current step.

At inference time, we can greedily decode the sequence to find the most likely entity and relation tag sequences \hat{E} and \hat{R}:

(\hat{E}, \hat{R}) = \arg\max_{E,R}\, p(E, R)

Since we add another top layer to encode tag sequences in the reverse order, as explained in Section 3.5, there may be conflicts in the output. We select the positive and more confident label, similar to Miwa and Bansal (2016).

Multiple Relations. Our approach to relation extraction is different from Miwa and Bansal (2016), who present each pair of entities to their model for relation classification. In our approach, we use pointer networks to identify the related entities. Thus, for the approach described so far, if we only compute the argmax of our objective, we limit our model to output only one relation label per token. However, from our analysis of the dataset, an entity may be related to more than one entity in the sentence. Hence, we modify our objective to include multiple relations. In Figure 2, the token "Safwan" is related to both tokens "Martin" and "Geissler" of the entity "Martin Geissler", so we assign a probability of 0.5 to both these tokens. This can easily be expanded to include tokens from other related entities, such that we assign equal probability 1/N to all related tokens, where N is the number of such tokens. (In this paper, we only identify mention heads, so the span is limited to a few tokens; we could also include only the last token of the gold entity span in the gold probability distribution.) The log-probability for the entity part remains the same as in the objective above, but we modify the relation log-probability as:

\sum_{j : r'_{i,j} > 0} r'_{i,j} \log p(r_{i,j} \mid e_{\le i}, r_{<i}, S, \theta)

where r'_i is the true distribution over the relation label space and r_i is the softmax output from our model. From empirical analysis, we find that r'_i is generally sparse, and hence a cross-entropy objective like this can be useful for finding multiple relations. We could also use Sparsemax (Martins and Astudillo, 2016) instead of softmax, which is more suitable for sparse distributions; we leave this for future work. At inference time, we output all the labels with probability value above a certain threshold, which we adapt based on the validation set.
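A short sketch (assumed tensor layout, illustrative threshold value) of the modified relation objective and thresholded decoding just described: the gold mass is spread uniformly over the N related tokens, the cross entropy sums only over tuples with non-zero gold probability, and at inference every tuple above a validation-tuned threshold is emitted, so one token can take part in several relations.

```python
import torch

def multi_relation_loss(log_probs, gold_tuple_sets):
    """log_probs: (T, K) log-softmax over (previous token, relation type) tuples per
    position; gold_tuple_sets: for each position, the list of gold tuple indices."""
    loss = 0.0
    for t, gold in enumerate(gold_tuple_sets):
        if not gold:
            continue
        weight = 1.0 / len(gold)          # r'_i spreads 1/N over the N related tokens
        loss = loss - weight * sum(log_probs[t, j] for j in gold)
    return loss / len(gold_tuple_sets)

def decode_relations(probs, threshold=0.3):
    """Output every (position, tuple) whose probability clears the threshold."""
    return [(t, int(j)) for t in range(probs.size(0))
            for j in (probs[t] > threshold).nonzero().flatten()]

logp = torch.log_softmax(torch.randn(3, 8), dim=1)
print(multi_relation_loss(logp, [[0, 2], [], [5]]))
print(decode_relations(logp.exp(), threshold=0.2))
```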
5 Experiments

5.1 Data

We evaluate our proposed model on the two datasets from the Automatic Content Extraction (ACE) program, ACE05 and ACE04. There are 7 main entity types, namely Person (PER), Organization (ORG), Geographical Entities (GPE), Location (LOC), Facility (FAC), Weapon (WEA) and Vehicle (VEH). For each entity, both the entity mention and its head phrase are annotated; for the scope of this paper, we only use the entity head phrase, similar to Li and Ji (2014) and Miwa and Bansal (2016). The relation types are Physical (PHYS), Person-Social (PER-SOC), Organization-Affiliation (ORG-AFF), Agent-Artifact (ART), and GPE-Affiliation (GPE-AFF). ACE05 has a total of 6 relation types, including PART-WHOLE. We use the same data splits as Li and Ji (2014) and Miwa and Bansal (2016): 351 documents for training, 80 for development, and the remaining 80 documents for the test set. ACE04 has 7 relation types, with an additional Discourse (DISC) type, and splits the ORG-AFF relation type into ORG-AFF and OTHER-AFF. We perform 5-fold cross validation, similar to Chan and Roth (2011), for a fair comparison with the state-of-the-art.

5.2 Evaluation Metrics

In order to compare our system with the previous systems, we report micro F1-scores, precision and recall on both entities and relations, similar to Li and Ji (2014) and Miwa and Bansal (2016). An entity is considered correct if we can identify its head and the entity type correctly. A relation is considered correct if we can identify the heads of the argument entities and also the relation type. We also report a combined score that requires both the argument entities and the relation to be correct.

5.3 Baselines and Previous Models

We compare our approach with two previous approaches. The model proposed by Li and Ji (2014) is a feature-based structured perceptron model with efficient beam search. They employ a segment-based decoder instead of token-based decoding. Their model outperformed previous state-of-the-art pipelined models. Miwa and Bansal (2016) (SPTree) recently proposed an LSTM-based model with a sequence layer for entity identification, and a tree-based dependency layer which identifies relations between pairs of candidate entities using the shortest dependency path between them. We also applied our previous approach (Katiyar and Cardie, 2016), developed for the extraction of opinion entities and relations, to this task. We found that its performance was not competitive with the two approaches mentioned above, performing up to 10 points lower on relations; hence, we do not include those results in Table 1. Also, Li and Ji (2014) showed that the joint model performs better than pipelined approaches, so we do not include any pipeline baselines.

5.4 Hyperparameters and Training Details

We train our model using Adadelta (Zeiler, 2012) with gradient clipping. We regularize our network using dropout (Srivastava et al., 2014), with the dropout rate tuned on the development set. We initialized our word embeddings with 300-dimensional word2vec (Mikolov et al., 2013) embeddings trained on the Google News dataset. We have 3 hidden layers in our network and the dimensionality of the hidden units is 100. All the weights in the network are initialized from small random uniform noise. We tune our hyperparameters on the ACE05 development set and use them for training on the ACE04 dataset.

Table 1: Performance on ACE05 test dataset. The dashed ("–") performance numbers were missing in the original paper (Miwa and Bansal, 2016). (*) We ran the system made publicly available by Miwa and Bansal (2016) on the ACE05 dataset to fill in the missing values and compare our system with theirs at a fine-grained level.

                              Entity              Relation            Entity+Relation
Method                        P     R     F1      P     R     F1      P     R     F1
Li and Ji (2014)             .852  .769  .808    .689  .419  .521    .654  .398  .495
SPTree                       .829  .839  .834     –     –     –      .572  .540  .556
SPTree (*)                   .823  .839  .831    .605  .553  .578    .578  .529  .553
Our Model                    .840  .813  .826    .579  .540  .559    .555  .518  .536

Table 2: Performance of different encoding methods on ACE05 dataset.

                              Entity              Relation            Entity+Relation
Encoding                      P     R     F1      P     R     F1      P     R     F1
Left-to-Right                .821  .812  .817    .622  .449  .522    .601  .434  .504
+Multiple Relations          .835  .811  .823    .560  .492  .524    .539  .473  .504
+Bi-directional (Our Model)  .840  .813  .826    .579  .540  .559    .555  .518  .536
6 Results

Table 1 compares the performance of our system with the baselines on the ACE05 dataset. We find that our joint model significantly outperforms the joint structured perceptron model (Li and Ji, 2014) on both entities and relations, despite not using features such as dependency trees, POS tags, etc. However, if we compare our model to the SPTree models, we find that their model has better recall on both entities and relations. In Section 7, we perform error analysis to understand the difference in the performance of the two models in detail.

We also compare the performance of various encoding schemes in Table 2, examining the benefit of introducing multiple relations into our objective and of bi-directional encoding compared to left-to-right encoding.

Multiple Relations. We find that modifying our objective to include multiple relations improves the recall of our system on relations, leading to a slight improvement in the overall performance on relations. However, careful tuning of the threshold may further improve precision.

Bi-directional Encoding. By adding bi-directional encoding to our system, we can significantly improve performance compared to left-to-right encoding. It also improves precision compared to left-to-right decoding combined with the multiple relations objective. We find that some relations are easier to detect with respect to one of the entities in the entity pair: the PHYS relation, for example, is more easily identified with respect to the GPE entity than the PER entity. Our bi-directional encoding of relations allows us to encode these relations with respect to both entities in the relation.

Table 3 shows the performance of our model on the ACE04 dataset. We believe that tuning the hyperparameters of our model can further improve the results on this dataset. Since, as pointed out by Li and Ji (2014), ACE05 has better annotation quality, we focused on the ACE05 dataset for this work.

Table 3: Performance on ACE04 test dataset. The dashed ("–") performance numbers were missing in the original paper (Miwa and Bansal, 2016).

                   Entity              Relation            Entity+Relation
Method             P     R     F1      P     R     F1      P     R     F1
Li and Ji (2014)  .835  .762  .797    .647  .385  .483    .608  .361  .453
SPTree            .808  .829  .818     –     –     –      .487  .481  .484
Our Model         .812  .781  .796    .502  .488  .493    .464  .453  .457

7 Error Analysis

In this section, we perform a fine-grained comparison of our model with the SPTree (Miwa and Bansal, 2016) model. We compare the performance of the two models with respect to entities, relation types and the distance between the relation arguments, and provide examples from the test set in Table 6.

7.1 Entities

We find that our model has lower recall on entity extraction than SPTree, as shown in Table 1. Miwa and Bansal (2016), in one of their ablation tests on the ACE05 development set, show that their model can gain up to 2% improvement in recall via entity pretraining. Since we propose a joint model, we cannot directly apply their pretraining trick to entities separately; we leave this for future work. Li and Ji (2014) mentioned in their analysis of the dataset that there were many "UNK" tokens in the test set which were never seen during training. We verified this, and we hypothesize that for this reason the performance on entities depends largely on the pretrained word embeddings being used. We found considerable improvements in entity recall when using pretrained word embeddings, where available, for these "UNK" tokens.
Miwa and Bansal (2016) also use additional features such as POS tags, in addition to pretrained word embeddings, at the input layer.

7.2 Relation Types

We evaluate our model on different relation types and compare its performance with the SPTree model in Table 4. Interestingly, we find that the performance of the two models varies across relation types. The dependency tree-based model significantly outperforms our joint model on PHYS and PART-WHOLE relations, whereas our model is significantly better than the tree-based model on the ART relation. We show an example sentence (S1) in Table 6, where the SPTree model identifies the entities in the ART relation correctly but fails to identify the ART relation itself. We compare the performance with respect to the PHYS relation in Section 7.3.

Table 4: Performance on different relation types in ACE05 test dataset. Numbers in brackets denote the number of relations of each relation type in the test set.

Relation Type   Method      R     P     F1
ART             SPTree     .363  .552  .438
                Our model  .431  .611  .505
PART-WHOLE      SPTree     .560  .538  .548
                Our model  .520  .538  .528
PER-SOC         SPTree     .671  .671  .671
                Our model  .657  .648  .652
PHYS            SPTree     .489  .513  .500
                Our model  .388  .426  .406
GEN-AFF         SPTree     .414  .640  .502
                Our model  .484  .516  .500
ORG-AFF         SPTree     .692  .704  .697
                Our model  .706  .700  .703

7.3 Distance-based Analysis

We also compare the performance of the two models on relations based on the distance between the entities in a relation, shown in Table 5. We find that the performance of both models is very low for distances greater than 7: the SPTree model identifies 36 out of 131 such relations correctly, while our model can only identify 20 relations in this category. We manually compared the output of the two systems on several of these cases to understand the gain of using dependency trees at longer distances. Interestingly, the majority of these relations belong to the PHYS type, thus resulting in lower performance on PHYS, as discussed in Section 7.2. We found a few instances of co-reference errors, as shown in S2 in Table 6: our model identifies a PHYS relation between "here" and "baghdad", whereas the gold annotation has a PHYS relation between "location" and "baghdad". We think that incorporating this co-reference information during both training and evaluation would further improve the performance of both systems. Another source of error that we found was the inability of our system to extract entities (lower recall), as in S3, where our model could not identify the FAC entity "residence". Hence, we think an improvement in entity performance via methods like pretraining might help identify more relations. For distances less than 7, we find that our model has better recall but lower precision, as expected.

Table 5: Performance based on the distance between entity arguments in relations for ACE05 test dataset.

Relation Distance   Method      R     P     F1
≤ 7                 SPTree     .589  .628  .608
                    Our model  .591  .605  .598
> 7                 SPTree     .275  .375  .267
                    Our model  .153  .259  .192

Table 6: Examples from the dataset with label annotations from SPTree and our model for comparison. The first row for each example is the gold standard.

S1 (gold): the [men]PER:ART-1 held on the sinking [vessel]VEH:ART-1 until the [passenger]PER:ART-2 [ship]VEH:ART-2 was able...
S1 (SPTree): the [men]PER held on the sinking [vessel]VEH until the [passenger]PER [ship]VEH was able to reach them.
S1 (Our Model): the [men]PER:ART-1 held on the sinking [vessel]VEH:ART-1 until the [passenger]PER:ART-2 [ship]VEH:ART-2 was able...

S2 (gold): [her]PER research was conducted [here]FAC at a [location]FAC:PHYS1 well-known to [u.n.]ORG:ORG-AFF1 [arms]WEA [inspectors]PER:ORG-AFF1. 300 miles west of [baghdad]GPE:PHYS1.
S2 (SPTree): [her]PER research was conducted [here]GPE at a [location]LOC:PHYS1 well-known to u.n. [arms]WEA [inspectors]PER:PHYS1,PHYS2. 300 miles west of [baghdad]GPE:PHYS2.
S2 (Our Model): [her]PER research was conducted [here]FAC:PHYS1 at a [location]GPE well-known to [u.n.]ORG:ORG-AFF1 [arms]WEA [inspectors]PER:ORG-AFF1. 300 miles west of [baghdad]GPE:PHYS1.

S3 (gold): ... [Abigail Fletcher]PER:PHYS1 , a [marcher]FAC:GEN-AFF2 from [Florida]FAC:GEN-AFF2, said outside the [president]PER:ART3 's [residence]FAC:ART3,PHYS1.
S3 (SPTree): ... [Abigail Fletcher]PER:PHYS1 , a [marcher]FAC:GEN-AFF2 from [Florida]FAC:GEN-AFF2, said outside the [president]PER:ART3 's [residence]FAC:ART3,PHYS1.
S3 (Our Model): ... [Abigail Fletcher]PER , a [marcher]FAC:GEN-AFF2 from [Florida]FAC:GEN-AFF2, said outside the [president]PER 's residence.

8 Conclusion

In this paper, we propose a novel attention-based LSTM model for joint extraction of entity mentions and relations. Experimentally, we found that our model significantly outperforms the feature-rich structured perceptron joint model by Li and Ji (2014). We also compared our model to an end-to-end LSTM model by Miwa and Bansal (2016), which comprises a sequence layer for entity extraction and a tree-based dependency layer for relation classification. We find that our model, without access to dependency trees, POS tags, etc., performs within 1% on entities and 2% on relations on the ACE05 dataset. We also find that our model performs significantly better than their tree-based model on the ART relation, while their tree-based model performs better on PHYS and PART-WHOLE relations; the two models perform comparably on all other relation types.

In the future, we plan to explore pretraining methods for our model, which were shown to improve recall on entity and relation performance by Miwa and Bansal (2016). We introduce bi-directional output encoding, as well as an objective to learn multiple relations, in this paper. However, this presents the challenge of combining predictions from the two directions; we use heuristics for this here, and probabilistic methods for combining model predictions from both directions may further improve performance. We also plan to use Sparsemax (Martins and Astudillo, 2016) instead of softmax for multiple relations, as the former is more suitable for multi-label classification with sparse labels. It would also be interesting to see the effect of reranking (Collins and Koo, 2005) on our joint model. We also plan to extend the identification of entities to the full entity mention span instead of only the head phrase, as in Lu and Roth (2015).

Acknowledgments

We thank Qi Li and Makoto Miwa for their help with the dataset and sharing their code for analysis. We also thank Xilun Chen, Xanda Schofield, Yiqing Hua, Vlad Niculae, Tianze Shi and the three anonymous reviewers for their helpful feedback and discussion.

References

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. ICLR.

Razvan C. Bunescu and Raymond J. Mooney. 2005. A shortest path dependency kernel for relation extraction. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Stroudsburg, PA, USA, HLT '05, pages 724–731.
https://doi.org/10.3115/1220575.1220666. Yee Seng Chan and Dan Roth. 2011. Exploiting syntactico-semantic structures for relation extraction. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1. Association for Computational Linguistics, Stroudsburg, PA, USA, HLT ’11, pages 551–560. http://dl.acm.org/citation.cfm?id=2002472.2002542. Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. http://aclweb.org/anthology/P/P16/P161223.pdf. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 484–494. http://www.aclweb.org/anthology/P16-1046. Yejin Choi, Eric Breck, and Claire Cardie. 2006. Joint extraction of entities and relations for opinion recognition. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Sydney, Australia, pages 431–439. http://www.aclweb.org/anthology/W/W06/W061651. Michael Collins and Terry Koo. 2005. Discriminative reranking for natural language parsing. Comput. Linguist. 31(1):25–70. https://doi.org/10.1162/0891201053630273. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res. 12:2493–2537. http://dl.acm.org/citation.cfm?id=1953048.2078186. C´ıcero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. CoRR abs/1504.06580. http://arxiv.org/abs/1504.06580. Jeffrey L. Elman. 1990. Finding structure in time. COGNITIVE SCIENCE 14(2):179–211. Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Comput. Linguist. 28(3):245–288. https://doi.org/10.1162/089120102760275983. James Hammerton. 2003. Named entity recognition with long short-term memory. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003 - Volume 4. Association for Computational Linguistics, Stroudsburg, PA, USA, CONLL ’03, pages 172–175. https://doi.org/10.3115/1119176.1119202. Kazuma Hashimoto, Pontus Stenetorp, Makoto Miwa, and Yoshimasa Tsuruoka. 2015. Task-oriented learning of word embeddings for semantic relation classification. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning. Association for Computational Linguistics, Beijing, China, pages 268–278. http://www.aclweb.org/anthology/K15-1027. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput. 9(8):1735– 1780. https://doi.org/10.1162/neco.1997.9.8.1735. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. CoRR abs/1508.01991. http://arxiv.org/abs/1508.01991. Ozan Irsoy and Claire Cardie. 2014. Opinion mining with deep recurrent neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL. pages 720–728. 
http://aclweb.org/anthology/D/D14/D141080.pdf. Rohit J. Kate and Raymond J. Mooney. 2010. Joint entity and relation extraction using card-pyramid parsing. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning. Association for Computational Linguistics, Stroudsburg, PA, USA, CoNLL ’10, pages 203–212. http://dl.acm.org/citation.cfm?id=1870568.1870592. Arzoo Katiyar and Claire Cardie. 2016. Investigating lstms for joint extraction of opinion entities and relations. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. http://aclweb.org/anthology/P/P16/P16-1087.pdf. Kenton Lee, Tom Kwiatkowski, Ankur P. Parikh, and Dipanjan Das. 2016. Learning recurrent span representations for extractive question answering. CoRR abs/1611.01436. http://arxiv.org/abs/1611.01436. 926 Qi Li and Heng Ji. 2014. Incremental joint extraction of entity mentions and relations. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, Volume 1: Long Papers. pages 402–412. http://aclweb.org/anthology/P/P14/P14-1038.pdf. Wei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 857–867. http://aclweb.org/anthology/D15-1102. Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attentionbased neural machine translation. In Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Lisbon, Portugal, pages 1412–1421. http://aclweb.org/anthology/D15-1166. Andr´e F. T. Martins and Ram´on F. Astudillo. 2016. From softmax to sparsemax: A sparse model of attention and multi-label classification. In Proceedings of the 33rd International Conference on International Conference on Machine Learning Volume 48. JMLR.org, ICML’16, pages 1614–1623. http://dl.acm.org/citation.cfm?id=3045390.3045561. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, Curran Associates, Inc., pages 3111–3119. http://papers.nips.cc/paper/5021distributed-representations-of-words-and-phrasesand-their-compositionality.pdf. Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1105– 1116. http://www.aclweb.org/anthology/P16-1105. Makoto Miwa and Yutaka Sasaki. 2014. Modeling joint entity and relation extraction with table representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL. pages 1858–1869. http://aclweb.org/anthology/D/D14/D14-1200.pdf. David Nadeau and Satoshi Sekine. 2007. A survey of named entity recognition and classification. Linguisticae Investigationes 30. Ramesh Nallapati, Bing Xiang, and Bowen Zhou. 2016. 
Sequence-to-sequence rnns for text summarization. CoRR abs/1602.06023. http://arxiv.org/abs/1602.06023. Sameer Singh, Sebastian Riedel, Brian Martin, Jiaping Zheng, and Andrew McCallum. 2013. Joint inference of entities, relations, and coreference. In Proceedings of the 2013 Workshop on Automated Knowledge Base Construction. ACM, New York, NY, USA, AKBC ’13, pages 1–6. https://doi.org/10.1145/2509558.2509559. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, Stroudsburg, PA, USA, EMNLP-CoNLL ’12, pages 1201–1211. http://dl.acm.org/citation.cfm?id=2390948.2391084. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15:1929–1958. http://jmlr.org/papers/v15/srivastava14a.html. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 813 2014, Montreal, Quebec, Canada. pages 3104– 3112. http://papers.nips.cc/paper/5346-sequenceto-sequence-learning-with-neural-networks. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada. pages 2692–2700. http://papers.nips.cc/paper/5866-pointer-networks. Kun Xu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2015a. Semantic relation classification via convolutional neural networks with simple negative sampling. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 536–540. http://aclweb.org/anthology/D15-1062. Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015b. Classifying relations via long short term memory networks along shortest dependency paths. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1785–1794. http://aclweb.org/anthology/D15-1206. Bishan Yang and Claire Cardie. 2013. Joint inference for fine-grained opinion extraction. In 927 Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL 2013, 4-9 August 2013, Sofia, Bulgaria, Volume 1: Long Papers. pages 1640–1649. http://aclweb.org/anthology/P/P13/P13-1161.pdf. Wen-Tau Yih and D. Roth. 2007. Global inference for entity and relation identification via a linear programming formulation. In L. Getoor and B. Taskar, editors, An Introduction to Statistical Relational Learning, MIT Press. Xiaofeng Yu and Wai Lam. 2010. Jointly identifying entities and extracting relations in encyclopedia text via a graphical model approach. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters. Association for Computational Linguistics, Stroudsburg, PA, USA, COLING ’10, pages 1399–1407. http://dl.acm.org/citation.cfm?id=1944566.1944726. Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR abs/1212.5701. 
http://arxiv.org/abs/1212.5701. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. J. Mach. Learn. Res. 3:1083–1106. http://dl.acm.org/citation.cfm?id=944919.944964. Feifei Zhai, Saloni Potdar, Bing Xiang, and Bowen Zhou. 2017. Neural models for sequence chunking. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA.. pages 3365–3371. http://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14776. GuoDong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation extraction. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05). Association for Computational Linguistics, Ann Arbor, Michigan, pages 427–434. https://doi.org/10.3115/1219840.1219893. 928
2017
85
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 929–938, Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1086

Naturalizing a Programming Language via Interactive Learning

Sida I. Wang, Samuel Ginn, Percy Liang, Christopher D. Manning
Computer Science Department, Stanford University
{sidaw, samginn, pliang, manning}@cs.stanford.edu

Abstract

Our goal is to create a convenient natural language interface for performing well-specified but complex actions such as analyzing data, manipulating text, and querying databases. However, existing natural language interfaces for such tasks are quite primitive compared to the power one wields with a programming language. To bridge this gap, we start with a core programming language and allow users to "naturalize" the core language incrementally by defining alternative, more natural syntax and increasingly complex concepts in terms of compositions of simpler ones. In a voxel world, we show that a community of users can simultaneously teach a common system a diverse language and use it to build hundreds of complex voxel structures. Over the course of three days, these users went from using only the core language to using the naturalized language in 85.9% of the last 10K utterances.

1 Introduction

In tasks such as analyzing and plotting data (Gulwani and Marron, 2014), querying databases (Zelle and Mooney, 1996; Berant et al., 2013), manipulating text (Kushman and Barzilay, 2013), or controlling the Internet of Things (Campagna et al., 2017) and robots (Tellex et al., 2011), people need computers to perform well-specified but complex actions. To accomplish this, one route is to use a programming language, but this is inaccessible to most and can be tedious even for experts because the syntax is uncompromising and all statements have to be precise. Another route is to convert natural language into a formal language, which has been the subject of work in semantic parsing (Zettlemoyer and Collins, 2005; Artzi and Zettlemoyer, 2011, 2013; Pasupat and Liang, 2015). However, the capability of semantic parsers is still quite primitive compared to the power one wields with a programming language. This gap is increasingly limiting the potential of both text and voice interfaces as they become more ubiquitous and desirable.

[Figure 1: Some examples of users building structures using a naturalized language in Voxelurn (http://www.voxelurn.com). The three panels, "Cubes", "Monsters, Inc", and "Deer", each show a structure together with the sequence of user commands that built it, e.g. "black 10x10x10 frame", "add green monster", "deer head; up; left 2; back 2; {left antler}", with some steps omitted.]
In this paper, we propose bridging this gap with an interactive language learning process which we call naturalization. Before any learning, we seed a system with a core programming language that is always available to the user. As users instruct the system to perform actions, they augment the language by defining new utterances — e.g., the user can explicitly tell the computer that ‘X’ means ‘Y’. Through this process, users gradually and interactively teach the system to understand the language that they want to use, rather than the core language that they are forced to use initially. While the first users have to learn the core language, later users can make use of everything that is already taught. This process accommodates both users’ preferences and the computer action space, where the final language is both interpretable by the computer and easier to produce by human users. Compared to interactive language learning with weak denotational supervision (Wang et al., 2016), definitions are critical for learning complex actions (Figure 1). Definitions equate a novel utterance to a sequence of utterances that the system already understands. For example, ‘go left 6 and go front’ might be defined as ‘repeat 6 [go left]; go front’, which eventually can be traced back to the expression ‘repeat 6 [select left of this]; select front of this’ in the core language. Unlike function definitions in programming languages, the user writes concrete values rather than explicitly declaring arguments. The system automatically extracts arguments and learns to produce the correct generalizations. For this, we propose a grammar induction algorithm tailored to the learning from definitions setting. Compared to standard machine learning, say from demonstrations, definitions provide a much more powerful learning signal: the system is told directly that ‘a 3 by 4 red square’ is ‘3 red columns of height 4’, and does not have to infer how to generalize from observing many structures of different sizes. We implemented a system called Voxelurn, which is a command language interface for a voxel world initially equipped with a programming language supporting conditionals, loops, and variable scoping etc. We recruited 70 users from Amazon Mechanical Turk to build 230 voxel structures using our system. All users teach the system at once, and what is learned from one user can be used by another user. Thus a community of users evolves the language to becomes more efficient over time, in a distributed way, through interaction. We show that the user community defined many new utterances—short forms, alternative syntax, and also complex concepts such as ‘add green monster, add yellow plate 3 x 3’. As the system learns, users increasingly prefer to use the naturalized language over the core language: 85.9% of the last 10K accepted utterances are in the naturalized language. Figure 2: Interface used by users to enter utterances and create definitions. 2 Voxelurn World. A world state in Voxelurn contains a set of voxels, where each voxel has relations ‘row’, ‘col’, ‘height’, and ‘color’. There are two domainspecific actions, ‘add’ and ‘move’, one domainspecific relation ‘direction’. In addition, the state contains a selection, which is a set of positions. While our focus is Voxelurn, we can think more generally about the world as a set of objects equiped with relations — events on a calendar, cells of a spreadsheet, or lines of text. Core language. 
The system is born understanding a core language called Dependency-based Action Language (DAL), which we created (see Table 1 for an overview). The language composes actions using the usual but expressive control primitives such as ‘if’, ‘foreach’, ‘repeat’, etc. Actions usually take sets as arguments, which are represented using lambda dependency-based compositional semantics (lambda DCS) expressions (Liang, 2013). Besides standard set operations like union, intersec930 Rule(s) Example(s) Description A→A; A select left; add red perform actions sequentially A→repeat N A repeat 3-1 add red top repeat action N times A→if S A if has color red [select origin] action if S is non-empty A→while S A while not has color red [select left of this] action while S is non-empty A→foreach S A foreach this [remove has row row of this] action for each item in S A→[A] [select left or right; add red; add red top] group actions for precedence A→{A} {select left; add red} scope only selection A→isolate A isolate [add red top; select has color red] scope voxels and selection A→select S select all and not origin set the selection A→remove S remove has color red remove voxels A→update R S update color [color of left of this] change property of selection S this current selection S all | none | origin all voxels, empty set, (0, 0) R of S | has R S has color red or yellow | has row [col of this] lambda DCS joins not S | S and S | S or S this or left and not has color red set operations N | N+N | N-N 1,. . . ,10 | 1+2 | row of this + 1 numbers and arithmetic argmax R S | argmin R S argmax col has color red superlatives R color | row | col | height | top | left | · · · voxel relations C red | orange | green | blue | black | · · · color values D top | bot | front | back | left | right direction values S→very D of S very top of very bot of has color green syntax sugar for argmax A→add C [D] | move D add red | add yellow bot | move left add voxel, move selection Table 1: Grammar of the core language (DAL), which includes actions (A), relations (R), and sets of values (S). The grammar rules are grouped into four categories. From top to bottom: domain-general action compositions, actions using sets, lambda DCS expressions for sets, and domain-specific relations and actions. tion and complement, lambda DCS leverages the tree dependency structure common in natural language: for the relation ‘color’, ‘has color red’ refers to the set of voxels that have color red, and its reverse ‘color of has row 1’ refers to the set of colors of voxels having row number 1. Treestructured joins can be chained without using any variables, e.g., ‘has color [yellow or color of has row 1]’. We protect the core language from being redefined so it is always precise and usable.1 In addition to expressivity, the core language interpolates well with natural language. We avoid explicit variables by using a selection, which serves as the default argument for most actions.2 For example, ‘select has color red; add yellow top; remove’ adds yellow on top of red voxels and then removes the red voxels. To enable the building of more complex struc1Not doing so resulted in ambiguities that propagated uncontrollably, e.g., once ‘red’ can mean many different colors. 2The selection is like the turtle in LOGO, but can be a set. tures in a more modular way, we introduce a notion of scoping. Suppose one is operating on one of the palm trees in Figure 2. 
The user might want to use ‘select all’ to select only the voxels in that tree rather than all of the voxels in the scene. In general, an action A can be viewed as taking a set of voxels v and a selection s, and producing an updated set of voxels v′ and a modified selection s′. The default scoping is ‘[A]’, which is the same as ‘A’ and returns (v′, s′). There are two constructs that alter the flow: First, ‘{A}’ takes (v, s) and returns (v′, s), thus restoring the selection. This allows A to use the selection as a temporary variable without affecting the rest of the program. Second, ‘isolate [A]’ takes (v, s), calls A with (s, s) (restricting the set of voxels to just the selection) and returns (v′′, s), where v′′ consists of voxels in v′ and voxels in v that occupy empty locations in v′. This allows A to focus only on the selection (e.g., one of the palm trees). Although scoping can be explicitly controlled via 931 ‘[ ]’, ‘isolate’, and ‘{ }’, it is an unnatural concept for non-programmers. Therefore when the choice is not explicit, the parser generates all three possible scoping interpretations, and the model learns which is intended based on the user, the rule, and potentially the context. 3 Learning interactively from definitions The goal of the user is to build a structure in Voxelurn. In Wang et al. (2016), the user provided interactive supervision to the system by selecting from a list of candidates. This is practical when there are less than tens of candidates, but is completely infeasible for a complex action space such as Voxelurn. Roughly, 10 possible colors over the 3 × 3 × 4 box containing the palm tree in Figure 2 yields 1036 distinct denotations, and many more programs. Obtaining the structures in Figure 1 by selecting candidates alone would be infeasible. This work thus uses definitions in addition to selecting candidates as the supervision signal. Each definition consists of a head utterance and a body, which is a sequence of utterances that the system understands. One use of definitions is paraphrasing and defining alternative syntax, which helps naturalize the core language (e.g., defining ‘add brown top 3 times’ as ‘repeat 3 add brown top’). The second use is building up complex concepts hierarchically. In Figure 2, ‘add yellow palm tree’ is defined as a sequence of steps for building the palm tree. Once the system understands an utterance, it can be used in the body of other definitions. For example, Figure 3 shows the full definition tree of ‘add palm tree’. Unlike function definitions in a programming language, our definitions do not specify the exact arguments; the system has to learn to extract arguments to achieve the correct generalization. The interactive definition process is described in Figure 4. When the user types an utterance x, the system parses x into a list of candidate programs. If the user selects one of them (based on its denotation), then the system executes the resulting program. If the utterance is unparsable or the user rejects all candidate programs, the user is asked to provide the definition body for x. Any utterances in the body not yet understood can be defined recursively. Alternatively, the user can first execute a sequence of commands X, and then provide a head utterance for body X. 
When constructing the definition body, users def: add palm tree def: brown trunk height 3 def: add brown top 3 times repeat 3 [add brown top] def: go to top of tree select very top of has color brown def: add leaves here def: select all sides select left or right or front or back add green Figure 3: Defining ‘add palm tree’, tracing back to the core language (utterances without def:). begin execute x: if x does not parse then define x; if user rejects all parses then define x; execute user choice begin define x: repeat starting with X ←[ ] user enters x′; if x′ does not parse then define x′; if user rejects all x′ then define x′; X ←[X; x′]; until user accepts X as the def’n of x; Figure 4: When the user enters an utterance, the system tries to parse and execute it, or requests that the user define it. can type utterances with multiple parses; e.g., ‘move forward’ could either modify the selection (‘select front’) or move the voxel (‘move front’). Rather than propagating this ambiguity to the head, we force the user to commit to one interpretation by selecting a particular candidate. Note that we are using interactivity to control the exploding ambiguity. 4 Model and learning Let us turn to how the system learns and predicts. This section contains prerequisites before we describe definitions and grammar induction in Section 5. Semantic parsing. Our system is based on a semantic parser that maps utterances x to programs z, which can be executed on the current state s (set of voxels and selection) to produce the next state s′ = JzKs. Our system is implemented as the interactive package in SEMPRE (Berant et al., 2013); 932 Feature Description Rule.ID ID of the rule Rule.Type core?, used?, used by others? Social.Author ID of author Social.Friends (ID of author, ID of user) Social.Self rule is authored by user? Span (left/right token(s), category) Scope type of scoping for each user Table 2: Summary of features. see Liang (2016) for a gentle exposition. A derivation d represents the process by which an utterance x turns into a program z = prog(d). More precisely, d is a tree where each node contains the corresponding span of the utterance (start(d), end(d)), the grammar rule rule(d), the grammar category cat(d), and a list of child derivations [d1, . . . , dn]. Following Zettlemoyer and Collins (2005), we define a log-linear model over derivations d given an utterance x produced by the user u: pθ(d | x, u) ∝exp(θTφ(d, x, u)), (1) where φ(d, x, u) ∈Rp is a feature vector and θ ∈Rp is a parameter vector. The user u does not appear in previous work on semantic parsing, but we use it to personalize the semantic parser trained on the community. We use a standard chart parser to construct a chart. For each chart cell, indexed by the start and end indices of a span, we construct a list of partial derivations recursively by selecting child derivations from subspans and applying a grammar rule. The resulting derivations are sorted by model score and only the top K are kept. We use chart(x) to denote the set of all partial derivations across all chart cells. The set of grammar rules starts with the set of rules for the core language (Table 1), but grows via grammar induction when users add definitions (Section 5). Rules in the grammar are stored in a trie based on the righthand side to enable better scalability to a large number of rules. Features. Derivations are scored using a weighted combination of features. There are three types of features, summarized in Table 2. 
Rule features fire on each rule used to construct a derivation. ID features fire on specific rules (by ID). Type features track whether a rule is part of the core language or induced, whether it has been used again after it was defined, if it was used by someone other than its author, and if the user and the author are the same (5 + #rules features). Social features fire on properties of rules that capture the unique linguistic styles of different users and their interaction with each other. Author features capture the fact that some users provide better, and more generalizable definitions that tend to be accepted. Friends features are cross products of author ID and user ID, which captures whether rules from a particular author are systematically preferred or not by the current user, due to stylistic similarities or differences (#users+#users×#users features). Span features include conjunctions of the category of the derivation and the leftmost/rightmost token on the border of the span. In addition, span features include conjunctions of the category of the derivation and the 1 or 2 adjacent tokens just outside of the left/right border of the span. These capture a weak form of context-dependence that is generally helpful (<≈V 4 × #cats features for a vocabulary of size V ). Scoping features track how the community, as well as individual users, prefer each of the 3 scoping choices (none, selection only ‘{A}’, and voxels+selection ‘isolate {A}’), as described in Section 2. 3 global indicators, and 3 indicators for each user fire every time a particular scoping choice is made (3 + 3 × #users features). Parameter estimation. When the user types an utterance, the system generates a list of candidate next states. When the user chooses a particular next state s′ from this list, the system performs an online AdaGrad update (Duchi et al., 2010) on the parameters θ according to the gradient of the following loss function: −log X d:Jprog(d)Ks=s′ pθ(d | x, u) + λ||θ||1, which attempts to increase the model probability on derivations whose programs produce the next state s′. 5 Grammar induction Recall that the main form of supervision is via user definitions, which allows creation of user-defined concepts. In this section, we show how to turn 933 these definitions into new grammar rules that can be used by the system to parse new utterances. Previous systems of grammar induction for semantic parsing were given utterance-program pairs (x, z). Both the GENLEX (Zettlemoyer and Collins, 2005) and higher-order unification (Kwiatkowski et al., 2010) algorithms overgenerate rules that liberally associate parts of x with parts of z. Though some rules are immediately pruned, many spurious rules are undoubtedly still kept. In the interactive setting, we must keep the number of candidates small to avoid a bad user experience, which means a higher precision bar for new rules. Fortunately, the structure of definitions makes the grammar induction task easier. Rather than being given an utterance-program (x, z) pair, we are given a definition, which consists of an utterance x (head) along with the body X = [x1, . . . , xn], which is a sequence of utterances. The body X is fully parsed into a derivation d, while the head x is likely only partially parsed. These partial derivations are denoted by chart(x). At a high-level, we find matches—partial derivations chart(x) of the head x that also occur in the full derivation d of the body X. 
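A small sketch (plain Python with our own data structures, not SEMPRE's) of this match-finding step: a partial derivation from the head's chart counts as a match if its program also appears among the descendants of the body's derivation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Derivation:
    cat: str                        # grammar category, e.g. "A", "C", "N"
    start: int                      # span over the head utterance (token indices)
    end: int                        # exclusive
    program: str                    # formula, compared by equality
    score: float = 0.0              # model score of this partial derivation
    children: List["Derivation"] = field(default_factory=list)

def descendants(d):
    """Yield d and every derivation nested inside it."""
    yield d
    for child in d.children:
        yield from descendants(child)

def find_matches(head_chart, body_derivation):
    """Keep head derivations whose program also occurs somewhere in the body derivation."""
    body_programs = {x.program for x in descendants(body_derivation)}
    return [d for d in head_chart if d.program in body_programs]
```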
A grammar rule is produced by substituting any set of nonoverlapping matches by their categories. As an example, suppose the user defines ‘add red top times 3’ as ‘repeat 3 [add red top]’. Then we would be able to induce the following two grammar rules: A →add C D times N : λCDN.repeat N [add C D] A →A times N : λAN.repeat N [A] The first rule substitutes primitive values (‘red’, ‘top’, and ‘3’) with their respective pre-terminal categories (C, D, N). The second rule contains compositional categories like actions (A), which require some care. One might expect that greedily substituting the largest matches or the match that covers the largest portion of the body would work, but the following example shows that this is not the case: A1 A1 A1 z }| { z }| { z }| { add red left and here = add red left; add red | {z } | {z } A2 A2 Here, both the highest coverage substitution (A1: ‘add red’, which covers 4 tokens of the body), and the largest substitution available (A2: ‘add red left’) would generalize incorrectly. The correct grammar rule only substitutes the primitive values (‘red’, ‘left’). 5.1 Highest scoring abstractions We now propose a grammar induction procedure that optimizes a more global objective and uses the learned semantic parsing model to choose substitutions. More formally, let M be the set of partial derivations in the head whose programs appear in the derivation dX of the body X: M def = {d ∈chart(x) : ∃d′ ∈desc(dX) ∧prog(d) = prog(d′)}, where desc(dX) are the descendant derivations of dX. Our goal is to find a packing P ⊆M, which is a set of derivations corresponding to nonoverlapping spans of the head. We say that a packing P is maximal if no other derivations may be added to it without creating an overlap. Let packings(M) denote the set of maximal packings, we can frame our problem as finding the maximal packing that has the highest score under our current semantic parsing model: P ∗ L = argmax P∈packings(M); X d∈P score(d). (2) Finding the highest scoring packing can be done using dynamic programming on P ∗ i for i = 0, 1, . . . , L, where L is the length of x and P ∗ 0 = ∅. Since d ∈M, start(d) and end(d) (exclusive) refer to span in the head x. To obtain this dynamic program, let Di be the highest scoring maximal packing containing a derivation ending exactly at position i (if it exists): Di = {di} ∪P ∗ start(di), (3) di = argmax d∈M;end(d)=i score(d ∪P ∗ start(d)). (4) Then the maximal packing of up to i can be defined recursively as P ∗ i = argmax D∈{Ds(i)+1,Ds(i)+2,...,Di} score(D) (5) s(i) = max d:end(d)≤i start(d), (6) 934 Input : x, dX, P ∗ Output: rule r ←x; f ←dX; for d ∈P ∗do r ←r[cat(d)/ span(d)] f ←λ cat(d).f[cat(d)/d] return rule (cat(dX)→r : f) Algorithm 1: Extract a rule r from a derivation dX of body X and a packing P ∗. Here, f[t/s] means substituting s by t in f, with the usual care about names of bound variables. where s(i) is the largest index such that Ds(i) is no longer maximal for the span (0, i) (i.e. there is a d ∈M on the span start(d) ≥s(i) ∧end(d) ≤i. Once we have a packing P ∗= P ∗ L, we can go through d ∈P ∗in order of start(d), as in Algorithm 1. This generates one high precision rule per packing per definition. In addition to the highest scoring packing, we also use a “simple packing”, which includes only primitive values (in Voxelurn, these are colors, numbers, and directions). Unlike the simple packing, the rule induced from the highest scoring packing does not always generalize correctly. 
However, a rule that often generalizes incorrectly should be down-weighted, along with the score of its packings. As a result, a different rule might be induced next time, even with the same definition. 5.2 Extending the chart via alignment Algorithm 1 yields high precision rules, but fails to generalize in some cases. Suppose that ‘move up’ is defined as ‘move top’, where ‘up’ does not parse, and does not match anything. We would like to infer that ‘up’ means ‘top’. To handle this, we leverage a property of definitions that we have not used thus far: the utterances themselves. If we align the head and body, then we would intuitively expect aligned phrases to correspond to the same derivations. Under this assumption, we can then transplant these derivations from dX to chart(x) to create new matches. This is more constrained than the usual alignment problem (e.g., in machine translation) since we only need to consider spans of X which corresponds to derivations in desc(dX). Algorithm 2 provides the algorithm for extending the chart via alignments. The aligned function is implemented using the following two heuristics: Input : x, X, dX for d ∈desc(dX), x′ ∈spans(x) do if aligned(x′, d, (x, X)) then d′ ←d; start(d′) ←start(x′); end(d′) ←end(x′); chart(x) ←chart(x) ∪d′ end end Algorithm 2: Extending the chart by alignment: If d is aligned with x′ based on the utterance, then we pretend that x′ should also parse to d, and d is transplanted to chart(x) as if it parsed from x′. • exclusion: if all but 1 pair of short spans (1 or 2 tokens) are matched, the unmatched pair is considered aligned. • projectivity: if d1, d2 ∈ desc(dX) ∩ chart(x), then ances(d1, d2) is aligned to the corresponding span in x. With the extended chart, we can run the algorithm from Section 5.1 to induce rules. The transplanted derivations (e.g., ‘up’) might now form new matches which allows the grammar induction to induce more generalizable rules. We only perform this extension when the body consists of one utterance, which tend to be a paraphrase. Bodies with multiple utterances tend to be new concepts (e.g., ‘add green monster’), for which alignment is impossible. Because users have to select from candidates parses in the interactive setting, inducing low precision rules that generate many parses degrade the user experience. Therefore, we induce alignment-based rules conservatively—only when all but 1 or 2 tokens of the head aligns to the body and vice versa. 6 Experiments Setup. Our ultimate goal is to create a community of users who can build interesting structures in Voxelurn while naturalizing the core language. We created this community using Amazon Mechanical Turk (AMT) in two stages. First, we had qualifier tasks, in which an AMT worker was instructed to replicate a fixed target exactly (Figure 5), ensuring that the initial users are familiar with at least some of the core language, which is the starting point of the naturalization process. 935 Figure 5: The target used for the qualifier. Next, we allowed the workers who qualified to enter the second freebuilding task, in which they were asked to build any structure they wanted in 30 minutes. This process was designed to give users freedom while ensuring quality. The analogy of this scheme in a real system is that early users (or a small portion of expert users) have to make some learning investment, so the system can learn and become easier for other users. Statistics. 
70 workers passed the qualifier task, and 42 workers participated in the final freebuilding experiment. They built 230 structures. There were over 103,000 queries consisting of 5,388 distinct token types. Of these, 64,075 utterances were tried and 36,589 were accepted (so an action was performed). There were 2,495 definitions combining over 15,000 body utterances with 6.5 body utterances per head on average (96 max). From these definitions, 2,817 grammar rules were induced, compared to less than 100 core rules. Over all queries, there were 8.73 parses per utterance on average (starting from 1 for core). Is naturalization happening? The answer is yes according to Figure 6, which plots the cummulative percentage of utterances that are core, induced, or unparsable. To rule out that more induced utterances are getting rejected, we consider only accepted utterances in the middle of Figure 6, which plots the percentage of induced rules among accepted utterances for the entire community, as well as for the 5 heaviest users. Since unparsable utterances cannot be accepted, accepted core (which is not shown) is the complement of accepted induced. At the conclusion of the experiment, 72.9% of all accepted utterances are induced—this becomes 85.9% if we only consider the final 10,000 accepted utterances. Three modes of naturalization are outlined in Table 3. For very common operations, like moving the selection, people found ‘select left’ too verbose and shorterned this to l, left, >, sel l. One user preferred ‘go down and right’ instead of ‘select bot; select right’ in core and defined it as ‘go down; go right’. Definitions for high-level Figure 6: Learning curves. Top: percentage of all utterances that are part of the core language, the induced language, or unparsable by the system. Middle: percentage of accepted utterances belonging to the induced language, overall and for the 5 heaviest users. Bottom: expressiveness measured by the ratio of the length of the program to the length of the corresponding utterance. concepts tend to be whole objects that are not parameterized (e.g., ‘dancer’). The bottom plot of Figure 6 suggests that users are defining and using higher level concepts, since programs become longer relative to utterances over time. As a result of the automatic but implicit grammar induction, some concepts do not generalize correctly. In definition head ‘3 tall 9 wide white tower centered here’, arguments do not match the body; for ‘black 10x10x10 frame’, we failed to tokenize. 936 Short forms left, l, mov left, go left, <, sel left br, black, blu, brn, orangeright, left3 add row brn left 5 := add row brown left 5 Alternative syntax go down and right := go down; go right select orange := select has color orange add red top 4 times := repeat 4 [add red top] l white := go left and add white mov up 2 := repeat 2 [select up] go up 3 := go up 2; go up Higher level add red plate 6 x 7, green cube size 4, add green monster, black 10x10x10 frame, flower petals, deer leg back, music box, dancer Table 3: Example definitions. See CodaLab worksheet for the full leaderboard. Learned parameters. Training using L1 regularization, we obtained 1713 features with nonzero parameters. One user defined many concepts consisting of a single short token, and the Social.Author feature for that user has the most negative weight overall. With user compatibility (Social.Friends), some pairs have large positive weights and others large negative weights. 
The ‘isolate’ scoping choice (which allows easier hierarchical building) received the most positive weights, both overall and for many users. The 2 highest scoring induced rules correspond to ‘add row red right 5’ and ‘select left 2’. Incentives. Having complex structures show that the actions in Voxelurn are expressive and that hierarchical definitions are useful. To incentivize this behavior, we created a leaderboard which ranked structures based on recency and upvotes (like Hacker News). Over the course of 3 days, we picked three prize categories to be released daily. The prize categories for each day were bridge, house, animal; tower, monster, flower; ship, dancer, and castle. To incentivize more definitions, we also track citations. When a rule is used in an accepted utterance by another user, the rule (and its author) receives a citation. We pay bonuses to top users according to their h-index. Most cited definitions are also displayed on the leaderboard. Our qualitative results should be robust to the incentives scheme, because the users do not overfit to the incentives—e.g., around 20% of the structures are not in the prize categories and users define complex concepts that are rarely cited. 7 Related work and discussion This work is an evolution of Wang et al. (2016), but differs crucially in several ways: While Wang et al. (2016) starts from scratch and relies on selecting candidates, this work starts with a programming language (PL) and additionally relies on definitions, allowing us to scale. Instead of having a private language for each user, the user community in this work shares one language. Azaria et al. (2016) presents Learning by Instruction Agent (LIA), which also advocates learning from users. They argue that developers cannot anticipate all the actions that users want, and that the system cannot understand the corresponding natural language even if the desired action is built-in. Like Jia et al. (2017), Azaria et al. (2016) starts with an ad-hoc set of initial slot-filling commands in natural language as the basis of further instructions—our approach starts with a more expressive core PL designed to interpolate with natural language. Compared to previous work, this work studied interactive learning in a shared community setting and hierarchical definitions resulting in more complex concepts. Allowing ambiguity and a flexible syntax is a key reason why natural language is easier to produce—this cannot be achieved by PLs such as Inform and COBOL which look like natural language. In this work, we use semantic parsing techniques that can handle ambiguity (Zettlemoyer and Collins, 2005, 2007; Kwiatkowski et al., 2010; Liang et al., 2011; Pasupat and Liang, 2015). In semantic parsing, the semantic representation and action space is usually designed to accommodate the natural language that is considered constant. In contrast, the action space is considered constant in the naturalizing PL approach, and the language adapts to be more natural while accommodating the action space. Our work demonstrates that interactive definitions is a strong and usable form of supervision. In the future, we wish to test these ideas in more domains, naturalize a real PL, and handle paraphrasing and implicit arguments. In the process of naturalization, both data and the semantic grammar have important roles in the evolution of a language that is easier for humans to produce while still parsable by computers. 937 Acknowledgments. 
We thank our reviewers, Panupong (Ice) Pasupat for helpful suggestions and discussions on lambda DCS, DARPA Communicating with Computers (CwC) program under ARO prime contract no. W911NF-15-1-0462, and NSF CAREER Award no. IIS-1552635. Reproducibility. All code, data, and experiments for this paper are available on the CodaLab platform: https://worksheets. codalab.org/worksheets/ 0xbf8f4f5b42e54eba9921f7654b3c5c5d and a demo: http://www.voxelurn.com References Y. Artzi and L. Zettlemoyer. 2011. Bootstrapping semantic parsers from conversations. In Empirical Methods in Natural Language Processing (EMNLP). pages 421–432. Y. Artzi and L. Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics (TACL) 1:49–62. A. Azaria, J. Krishnamurthy, and T. M. Mitchell. 2016. Instructable intelligent personal agent. In Association for the Advancement of Artificial Intelligence (AAAI). pages 2681–2689. J. Berant, A. Chou, R. Frostig, and P. Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP). G. Campagna, R. Ramesh, S. Xu, M. Fischer, and M. S. Lam. 2017. Almond: The architecture of an open, crowdsourced, privacy-preserving, programmable virtual assistant. In World Wide Web (WWW). pages 341–350. J. Duchi, E. Hazan, and Y. Singer. 2010. Adaptive subgradient methods for online learning and stochastic optimization. In Conference on Learning Theory (COLT). S. Gulwani and M. Marron. 2014. NLyze: interactive programming by natural language for spreadsheet data analysis and manipulation. In International Conference on Management of Data, SIGMOD. pages 803–814. R. Jia, L. Heck, D. Hakkani-Tür, and G. Nikolov. 2017. Learning concepts through conversations in spoken dialogue systems. In International Conference on Acoustics, Speech, and Signal Processing (ICASSP). N. Kushman and R. Barzilay. 2013. Using semantic unification to generate regular expressions from natural language. In Human Language Technology and North American Association for Computational Linguistics (HLT/NAACL). pages 826–836. T. Kwiatkowski, L. Zettlemoyer, S. Goldwater, and M. Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification. In Empirical Methods in Natural Language Processing (EMNLP). pages 1223–1233. P. Liang. 2013. Lambda dependency-based compositional semantics. arXiv preprint arXiv:1309.4408 . P. Liang. 2016. Learning executable semantic parsers for natural language understanding. Communications of the ACM 59. P. Liang, M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In Association for Computational Linguistics (ACL). pages 590–599. P. Pasupat and P. Liang. 2015. Compositional semantic parsing on semi-structured tables. In Association for Computational Linguistics (ACL). S. Tellex, T. Kollar, S. Dickerson, M. R. Walter, A. G. Banerjee, S. J. Teller, and N. Roy. 2011. Understanding natural language commands for robotic navigation and mobile manipulation. In Association for the Advancement of Artificial Intelligence (AAAI). S. I. Wang, P. Liang, and C. Manning. 2016. Learning language games through interaction. In Association for Computational Linguistics (ACL). M. Zelle and R. J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Association for the Advancement of Artificial Intelligence (AAAI). pages 1050–1055. L. S. Zettlemoyer and M. 
Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Uncertainty in Artificial Intelligence (UAI). pages 658– 666. L. S. Zettlemoyer and M. Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL). pages 678–687. 938
2017
86
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 939–949 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1087 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 939–949 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1087 Semantic Word Clusters Using Signed Spectral Clustering Jo˜ao Sedoc, Jean Gallier, Lyle Ungar Computer & Information Science University of Pennsylvania joao, jean, [email protected] Dean Foster Amazon LLC [email protected] Abstract Vector space representations of words capture many aspects of word similarity, but such methods tend to produce vector spaces in which antonyms (as well as synonyms) are close to each other. For spectral clustering using such word embeddings, words are points in a vector space where synonyms are linked with positive weights, while antonyms are linked with negative weights. We present a new signed spectral normalized graph cut algorithm, signed clustering, that overlays existing thesauri upon distributionally derived vector representations of words, so that antonym relationships between word pairs are represented by negative weights. Our signed clustering algorithm produces clusters of words that simultaneously capture distributional and synonym relations. By using randomized spectral decomposition (Halko et al., 2011) and sparse matrices, our method is both fast and scalable. We validate our clusters using datasets containing human judgments of word pair similarities and show the benefit of using our word clusters for sentiment prediction. 1 Introduction In distributional vector representations, opposite relations are not fully captured. Take, for example, words such as “great” and “awful” that can appear with similar frequency in the same sentence structure: “John had a great meeting” and “John had an awful day.” Word embeddings, which are successful in a wide array of NLP tasks (Turney et al., 2010; Dhillon et al., 2015), fail to capture this antonymy because they follow the distributional hypothesis that similar words are used in similar contexts (Harris, 1954), thus assigning small cosine or euclidean distances between the vector representations of “great” and “awful”. While vector space models (Turney et al., 2010) such as word2vec (Mikolov et al., 2013), Global vectors (GloVe) (Pennington et al., 2014), or Eigenwords (Dhillon et al., 2015) capture relatedness, they do not adequately encode synonymy and semantic similarity (Mohammad et al., 2013; Scheible et al., 2013). Our goal is to create clusters of synonyms or semantically equivalent words and linguistically motivated unified constructs. Signed graphs, which are graphs with negative edge weights, were first introduced by Cartwright and Harary (1956). However, signed graph clustering for multiclass normalized cuts (K-clusters) has been largely unexplored until recently. We present a novel theory and method that extends multiclass normalized cuts (K-cluster) of Yu and Shi (2003) to signed graphs (Gallier, 2016)1 and the work of Kunegis et al. (2010) to K-clustering. This extension allows the incorporation of knowledge base information, positive and negatively weighted links (see figure 2.1). Negative edges serve as repellent or opposite relationships between nodes. 
Our signed spectral normalized graph cut algorithm (henceforth, signed clustering) builds negative edge relations into graph embeddings using similarity structure in vector spaces. It takes as input an initial set of vectors and edge relations, and hence is easy to combine with any word embedding method. This paper formally improves on the discrete optimization problem of Yu and Shi (2003). Signed clustering gives better clusters than spectral clustering (Shi and Malik, 2000) of word embeddings, and it has better coverage and is more robust than thesaurus look-up. This is because the1Gallier (2016) is a full theoretical exposition of our methods with proofs on arXiv. 939 sauri erroneously give equal weight to rare senses of a word – for example, “rich” as a rarely used synonym of “absurd”. Also, the overlap between thesauri is small, due to their manual creation. Lin (1998) found 17.8397% overlap between synonym sets from Roget’s Thesaurus and WordNet 1.5. We find similarly small overlap between all three thesauri tested. We evaluate our clusters using SimLex-999 (Hill et al., 2014) and SimVerb-3500 (Gerz et al., 2016) as a ground truth for our cluster evaluation. Finally, we test our method on the sentiment analysis task. Overall, signed spectral clustering can augment methods using signed information and has broad application for many fields. Our main contributions are: the novel extension of signed clustering to the multiclass (K-cluster), and the application of this method to create semantic word clusters that are agnostic to vector space representations and thesauri. 1.1 Related Work Semantic word cluster and distributional thesauri have been well studied in the NLP literature (Lin, 1998; Curran, 2004). Recently there has been a line of research on incorporating synonyms and antonyms into word embeddings. Our approach is very much in the line of Vlachos et al. (2009). However, they explicitly made verb clusters using Dirichlet Process Mixture Models and must-link / cannot-link clustering. Furthermore, they note that cannot-link clustering does not improve performance whereas our signed clustering antonyms are key. Most recent models either attempt to make richer contexts, in order to find semantic similarity, or overlay thesaurus information in a supervised or semi-supervised manner. One line of active research is post processing the word vector embedding by transforming the space using a single or multi-relational objective (Yih et al., 2012; Tang et al., 2014; Chang et al., 2013; Tang et al., 2014; Zhang et al., 2014; Faruqui et al., 2015; Mrkˇsi´c et al., 2016). Alternatively, there are methods to modify the objective function for generating the word embeddings (Ono et al., 2015; Pham et al., 2015; Schwartz et al., 2015). Our approach differs from the aforementioned methods in that we created word clusters using the antonym relationships as negative links. Unlike the previous approaches using semi-supervised methods, we incorporated the thesauri as a knowledge base. Similar to word vector retrofitting and counter-fitting methods described in Faruqui et al. (2015) and Mrkˇsi´c et al. (2016), our signed clustering method uses existing vector representations to create word clusters. To our knowledge, this work is the first theoretical foundation of multiclass signed normalized cuts.2 Zass and Shashua (2005) solved multiclass cluster from another approach, by relaxing the orthogonality assumption and focusing instead on the non-negativity constraint. 
This led to a doubly stochastic optimization problem. Negative edges are handled by a constrained hyperparameter. Hou (2005) used positive degrees of nodes in the degree matrix of a signed graph with weights (-1, 0, 1), which was advanced by Kolluri et al. (2004) and Kunegis et al. (2010) using absolute values of weights in the degree matrix. Interestingly, Chiang et al. (2014) presented a theoretical foundation for edge sign prediction and a recursive clustering approach. Mercado et al. (2016) found that using the geometric mean of the graph Laplacian improves performance. Wang et al. (2016) used semi-supervised polarity induction (Rao and Ravichandran, 2009) to create clusters of words with similar valence and arousal. Must-link and cannot-link soft spectral clustering (Rangapuram and Hein, 2012) share similarities with our method, particularly in the limit where there are no must-link edges present. Both must-link and cannot-link clustering as well as polarity induction differ in optimization method. Our method is significantly faster due to the use of randomized SVD (Halko et al., 2011) and can thus be applied to large scale NLP problems. We developed a novel theory and algorithm that extends the clustering of Shi and Malik (2000) and Yu and Shi (2003) to the multiclass signed graph case. 2 Signed Graph Cluster Estimation 2.1 Signed Normalized Cut Weighted graphs for which the weight matrix is a symmetric matrix in which negative and positive entries are allowed are called signed graphs. 2The full exposition by Gallier (2016) is available on arXiv. 940 Such graphs (with weights (−1, 0, +1)) were introduced as early as 1953 by (Harary, 1953), to model social relations involving disliking, indifference, and liking. The problem of clustering the nodes of a signed graph arises naturally as a generalization of the clustering problem for weighted graphs. Figure 1 shows a signed graph of word similarities with a thesaurus overlay. Gallier Figure 1: Signed graph of words using a distance metric from the word embedding. The red dashed edges represent the antonym relation while solid edges represent synonymy relations. (2016) extends normalized cuts to signed graphs in order to incorporate antonym information into word clusters. Definition 2.1. A weighted graph is a pair G = (V, W), where V = {v1, . . . , vm} is a set of nodes or vertices, and W is a symmetric matrix called the weight matrix, such that wi j ≥0 for all i, j ∈{1, . . . , m}, and wi i = 0 for i = 1, . . . , m. We say that a set {vi, vj} is an edge iff wi j > 0. The corresponding (undirected) graph (V, E) with E = {{vi, vj} | wi j > 0}, is called the underlying graph of G. Given a signed graph G = (V, W) (where W is a symmetric matrix with zero diagonal entries), the underlying graph of G is the graph with node set V and set of (undirected) edges E = {{vi, vj} | wij ̸= 0}. If (V, W) is a signed graph, where W is an m × m symmetric matrix with zero diagonal entries and with the other entries wij ∈R arbitrary, for any node vi ∈V , the signed degree of vi is defined as di = d(vi) = m X j=1 |wij|, and the signed degree matrix D as D = diag(d(v1), . . . , d(vm)). For any subset A of the set of nodes V , let vol(A) = X vi∈A di = X vi∈A m X j=1 |wij|. For any two subsets A and B of V and AC which is the complement of A, define links+(A, B), links−(A, B), and cut(A, AC) by links+(A, B) = X vi∈A,vj∈B wij>0 wij links−(A, B) = X vi∈A,vj∈B wij<0 −wij cut(A, AC) = X vi∈A,vj∈AC wij̸=0 |wij|. 
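As a quick numerical illustration of these definitions, the following NumPy sketch computes the signed degrees, links+, links−, and the cut for a toy four-node signed graph; the weight values and the candidate cluster are made up for illustration. These are exactly the ingredients of the signed normalized cut defined next.

import numpy as np

# Toy signed weight matrix (symmetric, zero diagonal): positive entries play the
# role of synonym-like edges, negative entries of antonym-like edges.
W = np.array([[ 0.0,  0.8, -0.6,  0.0],
              [ 0.8,  0.0,  0.0, -0.5],
              [-0.6,  0.0,  0.0,  0.7],
              [ 0.0, -0.5,  0.7,  0.0]])

d = np.abs(W).sum(axis=1)       # signed degrees d_i = sum_j |w_ij|
D = np.diag(d)                  # signed degree matrix (enters L = D - W, defined next)

def links_pos(A, B):
    # links+(A, B): total positive weight between node sets A and B
    block = W[np.ix_(A, B)]
    return block[block > 0].sum()

def links_neg(A, B):
    # links-(A, B): total negated negative weight between node sets A and B
    block = W[np.ix_(A, B)]
    return -block[block < 0].sum()

def cut(A):
    # cut(A, A^c): total absolute weight crossing from A to its complement
    Ac = [i for i in range(W.shape[0]) if i not in A]
    return np.abs(W[np.ix_(A, Ac)]).sum()

A = [0, 1]                      # candidate cluster {v_1, v_2}
vol_A = d[A].sum()              # vol(A) = sum of signed degrees in A
print(d, cut(A), links_pos(A, A), links_neg(A, A), vol_A)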
Then, the signed Laplacian L is defined by L = D −W, and its normalized version Lsym by Lsym = D −1/2 L D −1/2 = I −D −1/2WD −1/2. Kunegis et al. (2010) showed that L is positive semidefinite. For a graph without isolated vertices, we have d(vi) > 0 for i = 1, . . . , m, so D −1/2 is well defined. Given a partition of V into K clusters (A1, . . . , AK), if we represent the jth block of this partition by a vector Xj such that Xj i = ( aj if vi ∈Aj 0 if vi /∈Aj, for some aj ̸= 0. For illustration, suppose m = 5 and A1 = {v1, v3} then (X1)⊤= [a1, 0, a1, 0, 0]. Definition 2.2. The signed normalized cut sNcut(A1, . . . , AK) of the partition (A1, ..., AK) is defined as sNcut(A1, . . . ,AK) = K X j=1 cut(Aj, AC j ) + 2links−(Aj, Aj) vol(Aj) . 941 It should be noted that this formulation differs significantly from Kunegis et al. (2010) and even more so from must-link / cannot-link clustering. Observe that minimizing sNcut(A1, . . . , AK) minimizes the number of positive and negative edges between clusters and also the number of negative edges within clusters. Removing the term links−(Aj, Aj) reduces sNcut to normalized cuts. A linear algebraic formulation is sNcut(A1, . . . , AK) = K X j=1 (Xj)⊤LXj (Xj)⊤DXj . where X is the N × K matrix whose jth column is Xj. 2.2 Optimization Problem We now formulate K-way clustering of a graph using normalized cuts. If we let X = n [X1 . . . XK] | Xj = aj(xj 1, . . . , xj N), xj i ∈{1, 0}, aj ∈R, Xj ̸= 0 o our solution set is K = n X ∈X | (Xi)⊤DXj = 0, 1 ≤i, j ≤K, i ̸= j o . The resulting optimization problem is minimize K X j=1 (Xj)⊤LXj (Xj)⊤DXj subject to (Xi)⊤DXj = 0, 1 ≤i, j ≤K, i ̸= j, X ∈X. The problem can be reformulated to an equivalent optimization problem: minimize tr(X⊤LX) subject to X⊤DX = I, X ∈X. We then form a relaxation of the above problem, dropping the condition that X ∈X, giving Relaxed Problem minimize tr(Y ⊤D −1/2LD −1/2Y ) subject to Y ⊤Y = I. The minimum of the relaxed problem is achieved by the K unit eigenvectors associated with the smallest eigenvalues of Lsym. 2.3 Finding an Approximate Discrete Solution Given a solution Z of the relaxed problem, we look for pairs (X, Q) with X ∈X and where Q is a K×K matrix with nonzero and pairwise orthogonal columns, with ∥X∥F = ∥Z∥F , that minimize ϕ(X, Q) = ∥X −ZQ∥F . Here, ∥A∥F is the Frobenius norm of A. This nonlinear optimization problem involves two unknown matrices X and Q. To solve the relaxed problem, we proceed by alternating between minimizing ϕ(X, Q) = ∥X −ZQ∥F with respect to X holding Q fixed (step 5 in algorithm 1), and minimizing ϕ(X, Q) with respect to Q holding X fixed (steps 6 and 7 in algorithm 1). This second stage in which X is held fixed has been studied, but it is still a hard problem for which no closed-form solution is known. Hence we divide the problem into steps 6 and 7 for which the solution is known. Since Q is of the form Q = RΛ where R ∈O(K) and Λ is a diagonal invertible matrix, we minimize ∥X −ZRΛ∥F . The matrix RΛ is not a minimizer of ∥X −ZRΛ∥F in general, but it is an improvement on R alone, and both stages can be solved quite easily. In step 6 the problem reduces to minimizing −2tr(Q⊤Z⊤X); that is, maximizing tr(Q⊤Z⊤X). Algorithm 1 Signed Clustering 1: Input: W the weight matrix (without isolated nodes), K the number of clusters, and termination threshold ϵ. 2: Using the D the degree matrix, and the signed Laplacian L, compute Lsym the signed normalized Laplacian. 
3: Initialize Λ = I, X = D −1 2 U where U is the matrix of the eigenvectors corresponding to the K smallest eigenvalues of Lsym. 3 4: while ∥X −ZRΛ∥F > ϵ do 5: Minimize ∥X −ZRΛ∥F with respect to X holding Q fixed. 6: Fix X, Z, and Λ, find R ∈O(K) that minimizes ∥X −ZRΛ∥F . 7: Fix X, Z, and R, find a diagonal invertible matrix Λ that minimizes ∥X −ZRΛ∥F . 8: end while 9: Find the discrete solution X∗by choosing the largest entry xij on row i set xij = 1 and all other xij = 0 for row i. 10: Output: X∗. Steps 3 through 10 may be replaced by standard Kmeans clustering. It should also be noted that by 942 removing the solution requirement that Xj ̸= 0, the algorithm can find k ≤K clusters. 3 Similarity Calculation The main input to the spectral signed clustering algorithm is the similarity matrix W, which overlays both the distributional properties and thesaurus information. Following Belkin and Niyogi (2003), we chose the heat kernel based on the Euclidean distance between word vector representations as our similarity metric, such that Wij =      0 if e−∥wi−wj∥ 2 σ < ϵ e−∥wi−wj∥ 2 σ otherwise . where σ and ϵ are hyperparameters found using grid search (see Supplemental material for more detail). We represented the thesaurus as two matrices where T syn ij = ( 1 if words i and j are synonyms 0 otherwise . and T ant ij = ( −1 if words i and j are antonyms 0 otherwise . T syn is the synonym graph and T ant is the antonym graph. The signed graph can then be written in matrix form as ˆW = γW + βantT ant ⊙ W +βsynT syn⊙W, where ⊙computes Hadamard product (element-wise multiplication). The parameters γ, βsyn, and βant are tuned to the data target dataset using cross validation. The reader should note that σ and ϵ are not found using a target dataset, but instead using cross validation and grid search to minimize the number of negative edges within clusters and the number of disconnected components in the cluster. 4 Evaluation Metrics We evaluated the clusters using both intrinsic and extrinsic methods. For intrinsic evaluation, we used thesaurus information for two novel metrics: 1) the number of negative edges (NNE) within the clusters, which in our semantic clusters is the number of antonyms in the same cluster, and 2) the number of disconnected components (NDC) in the synonym graph, so the number of groups of words that are not connected by a synonym relation in the thesaurus. The NDC thus has the disadvantage that it is a function of the thesaurus coverage. Our third intrinsic measure uses a gold standard designed to measure how well we capture word similarity: Semantically similar words should be in the same cluster and semantically dissimilar words should not. For extrinsic evaluation, as descibed below, we measure how much our clusters help to identify text polarity. We also compare multiple word embeddings and thesauri to demonstrate the stability of our method. 5 Experiments with Synthetic Data In order to evaluate our signed graph clustering method, we first focused on intrinsic measures of cluster quality in synthetic data. To do so, we created random signed graphs with the same proportion of positive and negative edges as in our real dataset. Figure 2 demonstrates that the number of Figure 2: The relation between disconnected component (NDC) and negative edge (NNE) using simulated signed graphs with 100 vertices. negative edges within a cluster is minimized using our clustering algorithm on simulated data. 
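The two intrinsic metrics compared in this simulation can be made concrete with a short sketch. The counting conventions below (negative edges with both endpoints in the same cluster for NNE; connected components of the positive subgraph inside each cluster, summed, for NDC) are one plausible reading of the metrics, not the authors' evaluation code, and the random-graph parameters are illustrative only.

import numpy as np
from collections import defaultdict, deque

def nne(W, labels):
    # Count negative edges whose two endpoints land in the same cluster (NNE)
    n = W.shape[0]
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if W[i, j] < 0 and labels[i] == labels[j])

def ndc(W, labels):
    # Sum, over clusters, the connected components of the positive (synonym) subgraph
    clusters = defaultdict(list)
    for node, lab in enumerate(labels):
        clusters[lab].append(node)
    total = 0
    for nodes in clusters.values():
        nodes_set, seen = set(nodes), set()
        for start in nodes:
            if start in seen:
                continue
            total += 1                       # new component found
            queue, _ = deque([start]), seen.add(start)
            while queue:
                u = queue.popleft()
                for v in nodes_set:
                    if v not in seen and W[u, v] > 0:
                        seen.add(v)
                        queue.append(v)
    return total

# Random signed graph with a fixed proportion of +/- edges (illustrative proportions)
rng = np.random.default_rng(0)
n = 100
W = np.triu(rng.choice([0.0, 1.0, -1.0], size=(n, n), p=[0.9, 0.07, 0.03]), k=1)
W = W + W.T
labels = rng.integers(0, 20, size=n)         # an assignment into 20 clusters
print(nne(W, labels), ndc(W, labels))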
As the number of clusters becomes large, the number of disconnected components, which includes clusters of size one, consistently increases. Determining the optimal cluster size and similarity parameters requires making a trade off between NDC and NNE. For example, in figure 2 the optimal cluster size is 20. One can see that as the number of clusters increases NNE goes to zero, but the number of disconnected components becomes the number of vertices. In the extreme case all clusters contain one vertex. K-means, also shown in figure 2, does not optimize NNE. 943 6 Experimental Setup 6.1 Word Embeddings We used four different word embedding methods for evaluation: Skip-gram vectors (word2vec) (Mikolov et al., 2013), Global vectors (GloVe) (Pennington et al., 2014), Eigenwords (Dhillon et al., 2015), and Global Context (GloCon) (Huang et al., 2012); however, we only report the results for word2vec, which is the most popular word embedding (see the supplemental material for other embeddings). We used word2vec 300 dimensional embeddings which were trained on several billion words of English: the Gigaword and the English discussion forum data gathered as part of BOLT. Tokenization was performed using CMU’s Twokenize.4 6.2 Thesauri Several thesauri were used in order to test the robustness including Roget’s Thesaurus (Roget, 1852), the Microsoft Word English (MS Word) thesaurus from Samsonovic et al. (2010) and WordNet 3.0 (Miller, 1995). We chose a subset of 5108 words for the training dataset, which had high overlap between various sources. Changes to the training dataset had minimal effects on the optimal parameters. Within the training dataset, each of the thesauri had roughly 3700 antonym pairs; combined they had 6680. However, the number of distinct connected components varied, with Roget’s Thesaurus having the fewest (629), and MS Word Thesaurus (1162) and WordNet (2449) having the most. These ratios were consistent across the full dataset. 6.3 Gold Standard SimLex-999 And SimVerb-3500 Following the analysis of Vlachos et al. (2009), we threshold the semantically similar datasets to find word pairs which should or should not belong to the same cluster. As ground truth, we extracted 120 semantically similar words from SimLex-999 with a similarity score greater than 8 out of 10. SimLex-999 is a gold standard resource for semantic similarity, not relatedness, based on ratings by human annotators. Our 120 pair subset of SimLex-999 has multiple parts-of-speech including Noun-Noun pairs, VerbVerb pairs and Adjective-Adjective pairs. Within 4https://github.com/brendano/ ark-tweet-nlp SimVerb-3500, we used a subset of 318 semantically similar verb pairs. The community is attempting to define better gold standards; however, currently these are the best datasets that we are aware of. We tried to use WordNet, Roget, and the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013) as a gold standard, but manual inspection as well as empirical results showed that none of the automatically generated datasets were a sufficient gold standard. Possibly the symmetric pattern of (Schwartz et al., 2015) would have been sufficient; we did not have time to validate this. 6.4 Stanford Sentiment Treebank We also evaluated our clusters by using them as features for predicting sentiment, using sentiment treebank 5 (Socher et al., 2013) with coarsegrained labels on phrases and sentences from movie review excerpts. This dataset is widely used for the evaluation of sentiment analysis. 
We used the standard partition of the treebank into training (6920), development (872), and test (1821) sets. 7 Cluster Evaluation Table 1 shows the four most-associated words with “accept” using different methods. We now turn to quantitative measures of word similarity and synonym cluster quality. 7.1 Comparison with K-means and Normalized Cuts In order to assess the model we tested (1) Kmeans, (2) normalized cuts without thesaurus, and (3) signed normalized cuts. As a baseline, we created clusters using K-means on the original word2vec vector representations where the number of K clusters was set to 750. Table 2 shows the relative ratios of the different clustering methods of with respect to antonym pair inclusion and the number of disconnected components within the clusters. For both methods, over twenty percent of the clusters contain antonym pairs even though the median cluster size is six. Signed clustering radically reduced the number of antonyms within clusters compared to the other methods. 5http://nlp.stanford.edu/sentiment/ treebank.html 944 Ref word Roget WordNet MS Word W2V SC W2V accept adopt agree take accepts grant accept your fate get swallow reject permit be fooled by fancy consent agree let acquiesce hold assume accepting okay Table 1: Qualitative comparison of clusters. Method Antonym Ratio DC Ratio K-Means 0.24 0.95 NC 0.21 0.97 SC 0.06 0.49 Table 2: Clustering evaluation of K-means, normalized cuts, and signed normalized cuts with 750 clusters. Ratio of clusters with containing one or more antonym pair and ratio of clusters with disconnected components. 8 Empirical Results Tables 3 and 5 present our main result. When using our signed clustering method with similar words, as labeled by SimLex-999 and SimVerb3500, our clustering accuracy increased by 5% on both SimLex-999 and SimVerb-3000. Furthermore, by combining the thesauri lookup with our clustering, we achieved almost perfect accuracy (96%). Table 5 shows the sentiment analysis task performance. Our method outperforms all methods with similar complexity; however, we did not reach state-of-the-art results when compared to much more complex models which also use a richer dataset. 8.1 Evaluation Using Word Similarity Datasets In a perfect setting, all word pairs rated highly similar by human annotators would be in the same cluster, and all words which were rated dissimilar would be in different clusters. Since our clustering algorithm produced sets of words, we used this evaluation instead of the more commonly reported correlations. In table 3 we show the results of the evaluation with SimLex-999. Combining thesaurus lookup and word2vec+CombThes clusters, labeled as Lookup + SC(W2V), yielded an accuracy of 0.96 (5 errors). Note that clusters using word2vec with normalized cuts does not improve accuracy. The MSW thesaurus has much lower coverage, but 100 % accuracy, which is why when Method Acc SimLex Err MSW Lookup 0.70 0 Roget Lookup 0.63 0 WordNet Lookup 0.43 0 Combined Lookup 0.90 0 NC(W2V) 0.36 0.05 SC (W2V) 0.67 0 Lookup + NC(W2V) 0.91 0.05 Lookup + SC(W2V) 0.96 0 MSW + SC(W2V) 0.95 0 Table 3: Clustering evaluation using SimLex-999 with 120 word pairs having similarity score over 8. SC stands for our signed clustering and NC is standard normalized cuts. SC(W2V) are the word clusters from signed clustering using word2vec and the combined thesauri. Err is the proportion of dissimilar words (with score < 2) present in the same cluster. combined with the signed clustering the performance is 0.95. 
In table 3 we state the proportion of clusters containing dissimilar words as a sanity check for cluster size. (See supplemental material for full cluster size optimization information.) Another important result is that the verb accuracy yielded the largest accuracy gains, consistent with the results of Schwartz et al. (2015). Table 4 clearly shows that the overall performance of all methods is lower for verb similarity. However, the improvement using both signed clustering as well as thesaurus look is also larger. 8.2 Sentiment Analysis We trained an l2-norm regularized logistic regression (Friedman et al., 2001) and simultaneously γ, βsyn, and βant using our word clusters in order to predict the coarse-grained sentiment at the sentence level. The γ and β parameters were found using a portion of the data where we iteratively switch between the logistic regression and the parameters, holding each fixed. However, hyperparameters σ and ϵ, and the number of clusters 945 Method Acc SimVerb MSW Lookup 0.45 Roget Lookup 0.59 WordNet Lookup 0.43 Combined Lookup 0.83 NC(W2V) 0.24 SC (W2V) 0.56 Lookup + NC(W2V) 0.83 Lookup + SC(W2V) 0.88 Table 4: Clustering evaluation using SimVerb3500 with 317 word pairs having similarity score over 8. SC stands for our signed clustering and NC is standard normalized cuts. SC(W2V) are the word clusters from signed clustering using word2vec and the combined thesauri. K were optimized minimizing error using grid search. We compared our model against existing models: Naive Bayes with bag of words (NB) (Socher et al., 2013), sentence word embedding averages (VecAvg), retrofitted sentence word embeddings (RVecAvg) (Faruqui et al., 2015) that incorporate thesaurus information, simple recurrent neural networks (RNN), and two baselines of normalized cuts and signed normalized cuts using only thesaurus information. While the state-of-the art Convolutional Neural Network (CNN) (Kim, 2014) is at 0.881, our model performs quite well with much less information and complexity. Table 5 shows that signed clustering outperforms the baselines of Naive Bayes, normalized cuts, and signed cuts using just thesaurus information. Furthermore, we outperform comparable models, including retrofitting, which has thesaurus information, and the recurrent neural network, which has access to domain specific context information. Signed clustering using only thesaurus information (SC(Thes)) performed significantly worse than all other methods. This was largely due to low coverage; rare words such as “WOW” and “???” are not covered. As expected, because normalized cut clusters include antonyms, the method performs worse than others. Nonetheless the improvement from 0.79 to 0.836 is quite drastic. 9 Conclusion We developed a novel theory for signed normalized cuts and an algorithm for finding their discrete solution. We showed that we can find suModel Accuracy NB (Socher et al., 2013) 0.818 VecAvg (W2V) 0.812 (Faruqui et al., 2015) RVecAvg (W2V) 0.821 (Faruqui et al., 2015) RNN(Socher et al., 2013) 0.824 NC(W2V) 0.79 SC(Thes) 0.752 SC(W2V) 0.836 Table 5: Sentiment analysis accuracy for binary predictions of signed clustering algorithm (SC) versus other models. SC(W2V) are the signed clusters using word2vec word representations. perior semantically similar clusters which do not require new word embeddings but simply overlay thesaurus information on preexisting ones. The clusters are general and can be used with many out-of-the-box word embeddings. 
By accounting for antonym relationships, our algorithm greatly outperforms simple normalized cuts. Finally, we examined our clustering method on the sentiment analysis task from Socher et al. (2013) sentiment treebank dataset and showed that it improved performance versus comparable models. Our automatically generated clusters give better coverage than manually constructed thesauri. Our signed spectral clustering method allows us to incorporate the knowledge contained in these thesauri without modifying the word embeddings themselves. We further showed that use of the thesauri can be tuned to the task at hand. Our signed spectral clustering method could be applied to a broad range of NLP tasks, such as prediction of social group clustering, identification of personal versus non-personal verbs, and analyses of clusters which capture positive, negative, and objective emotional content. It could also be used to explore multi-view relationships, such as aligning synonym clusters across multiple languages. Another possibility is to use thesauri and word vector representations together with word sense disambiguation to generate semantically similar clusters for multiple senses of words. Furthermore, signed spectral clustering has broader applications such as cellular biology, social networking, and electricity networks. Finally, we plan to extend the hard signed clustering presented here to probabilistic soft clustering. 946 References Mikhail Belkin and Partha Niyogi. 2003. Laplacian eigenmaps for dimensionality reduction and data representation. Neural computation 15(6):1373– 1396. Dorwin Cartwright and Frank Harary. 1956. Structural balance: a generalization of heider’s theory. Psychological review 63(5):277. Kai-Wei Chang, Wen-tau Yih, and Christopher Meek. 2013. Multi-relational latent semantic analysis. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1602– 1612. http://aclweb.org/anthology/D13-1167. Kai-Yang Chiang, Cho-Jui Hsieh, Nagarajan Natarajan, Inderjit S. Dhillon, and Ambuj Tewari. 2014. Prediction and clustering in signed networks: A local to global perspective. Journal of Machine Learning Research 15:1177–1213. http://jmlr.org/papers/v15/chiang14a.html. James Richard Curran. 2004. From distributional to semantic similarity . Paramveer S. Dhillon, Dean P. Foster, and Lyle H. Ungar. 2015. Eigenwords: Spectral word embeddings. Journal of Machine Learning Research 16:3035– 3078. http://jmlr.org/papers/v16/dhillon15a.html. Manaal Faruqui, Jesse Dodge, Kumar Sujay Jauhar, Chris Dyer, Eduard Hovy, and A. Noah Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 1606–1615. https://doi.org/10.3115/v1/N15-1184. Jerome Friedman, Trevor Hastie, and Robert Tibshirani. 2001. The elements of statistical learning, volume 1. Springer series in statistics Springer, Berlin. Jean Gallier. 2016. Spectral theory of unsigned and signed graphs applications to graph clustering: a survey. arXiv preprint arXiv:1601.04692 . Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. Ppdb: The paraphrase database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 
Association for Computational Linguistics, pages 758–764. http://aclweb.org/anthology/N13-1092. Daniela Gerz, Ivan Vuli´c, Felix Hill, Roi Reichart, and Anna Korhonen. 2016. Simverb-3500: A largescale evaluation set of verb similarity. arXiv preprint arXiv:1608.00869 . Nathan Halko, Per-Gunnar Martinsson, and Joel A Tropp. 2011. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM review 53(2):217–288. Frank Harary. 1953. On the notion of balance of a signed graph. The Michigan Mathematical Journal 2(2):143–146. Zellig S Harris. 1954. Distributional structure. Word . Felix Hill, Roi Reichart, and Anna Korhonen. 2014. Simlex-999: Evaluating semantic models with (genuine) similarity estimation. arXiv preprint arXiv:1408.3456 . Yao Ping Hou. 2005. Bounds for the least laplacian eigenvalue of a signed graph. Acta Mathematica Sinica 21(4):955–960. Eric Huang, Richard Socher, Christopher Manning, and Andrew Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 873–882. http://aclweb.org/anthology/P12-1092. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 1746–1751. https://doi.org/10.3115/v1/D14-1181. Ravikrishna Kolluri, Jonathan Richard Shewchuk, and James F O’Brien. 2004. Spectral surface reconstruction from noisy point clouds. In Proceedings of the 2004 Eurographics/ACM SIGGRAPH symposium on Geometry processing. ACM, pages 11–21. J´erˆome Kunegis, Stephan Schmidt, Andreas Lommatzsch, J¨urgen Lerner, Ernesto William De Luca, and Sahin Albayrak. 2010. Spectral analysis of signed graphs for clustering, prediction and visualization. In SDM. SIAM, volume 10, pages 559–559. Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 2. http://aclweb.org/anthology/P982127. Pedro Mercado, Francesco Tudisco, and Matthias Hein. 2016. Clustering signed networks with the geometric mean of laplacians. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29. Curran Associates, Inc., pages 4421–4429. http://papers.nips.cc/paper/6164clustering-signed-networks-with-the-geometricmean-of-laplacians.pdf. 947 Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 . George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM 38(11):39– 41. M. Saif Mohammad, J. Bonnie Dorr, Graeme Hirst, and D. Peter Turney. 2013. Computing lexical contrast. Computational Linguistics 39(3). https://doi.org/10.1162/COLI a 00143. Nikola Mrkˇsi´c, Diarmuid ´O S´eaghdha, Blaise Thomson, Milica Gaˇsi´c, M. Lina Rojas-Barahona, PeiHao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 142–148. 
https://doi.org/10.18653/v1/N16-1018. Masataka Ono, Makoto Miwa, and Yutaka Sasaki. 2015. Word embedding-based antonym detection using thesauri and distributional information. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 984–989. https://doi.org/10.3115/v1/N15-1100. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 1532–1543. https://doi.org/10.3115/v1/D14-1162. The Nghia Pham, Angeliki Lazaridou, and Marco Baroni. 2015. A multitask objective to inject lexical contrast into distributional semantics. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Association for Computational Linguistics, pages 21–26. https://doi.org/10.3115/v1/P15-2004. Syama Sundar Rangapuram and Matthias Hein. 2012. Constrained 1-spectral clustering. International conference on Artificial Intelligence and Statistics (AISTATS) 22:1143—1151. Delip Rao and Deepak Ravichandran. 2009. Semisupervised polarity lexicon induction. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009). Association for Computational Linguistics, pages 675–682. http://aclweb.org/anthology/E09-1077. Peter Mark Roget. 1852. Roget’s Thesaurus of English Words and Phrases.... Longman Group Ltd. Alexei V Samsonovic, Giorgio A Ascoli, and Jeffrey Krichmar. 2010. Principal semantic components of language and the measurement of meaning. PloS one 5(6):e10921. Silke Scheible, Sabine Schulte im Walde, and Sylvia Springorum. 2013. Uncovering distributional differences between synonyms and antonyms in a word space model. In Proceedings of the Sixth International Joint Conference on Natural Language Processing. Asian Federation of Natural Language Processing, pages 489–497. http://aclweb.org/anthology/I13-1056. Roy Schwartz, Roi Reichart, and Ari Rappoport. 2015. Symmetric pattern based word embeddings for improved word similarity prediction. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning. Association for Computational Linguistics, pages 258–267. https://doi.org/10.18653/v1/K15-1026. Jianbo Shi and Jitendra Malik. 2000. Normalized cuts and image segmentation. Pattern Analysis and Machine Intelligence, IEEE Transactions on 22(8):888–905. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, D. Christopher Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1631–1642. http://aclweb.org/anthology/D13-1170. Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentimentspecific word embedding for twitter sentiment classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1555–1565. https://doi.org/10.3115/v1/P14-1146. Peter D Turney, Patrick Pantel, et al. 2010. From frequency to meaning: Vector space models of semantics. 
Journal of artificial intelligence research 37(1):141–188. Andreas Vlachos, Anna Korhonen, and Zoubin Ghahramani. 2009. Proceedings of the Workshop on Geometrical Models of Natural Language Semantics, Association for Computational Linguistics, chapter Unsupervised and Constrained Dirichlet Process Mixture Models for Verb Clustering, pages 74–82. http://aclweb.org/anthology/W09-0210. Jin Wang, Liang-Chih Yu, K Robert Lai, and Xuejie Zhang. 2016. Community-based weighted graph model for valence-arousal prediction of affective words. IEEE/ACM Transactions on Audio, Speech, and Language Processing 24(11):1957–1968. 948 Wen-tau Yih, Geoffrey Zweig, and John Platt. 2012. Polarity inducing latent semantic analysis. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, pages 1212– 1222. http://aclweb.org/anthology/D12-1111. Stella X Yu and Jianbo Shi. 2003. Multiclass spectral clustering. In Computer Vision, 2003. Proceedings. Ninth IEEE International Conference on. IEEE, pages 313–319. Ron Zass and Amnon Shashua. 2005. A unifying approach to hard and probabilistic clustering. In Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on. IEEE, volume 1, pages 294– 301. Jingwei Zhang, Jeremy Salwen, Michael Glass, and Alfio Gliozzo. 2014. Word semantic representations using bayesian probabilistic tensor factorization. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 1522–1531. https://doi.org/10.3115/v1/D14-1161. 949
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 950–962 Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1088

An Interpretable Knowledge Transfer Model for Knowledge Base Completion
Qizhe Xie, Xuezhe Ma, Zihang Dai, Eduard Hovy
Language Technologies Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA
{qzxie, xuezhem, dzihang, hovy}@cs.cmu.edu

Abstract
Knowledge bases are important resources for a variety of natural language processing tasks but suffer from incompleteness. We propose a novel embedding model, ITransF, to perform knowledge base completion. Equipped with a sparse attention mechanism, ITransF discovers hidden concepts of relations and transfers statistical strength through the sharing of concepts. Moreover, the learned associations between relations and concepts, which are represented by sparse attention vectors, can be interpreted easily. We evaluate ITransF on two benchmark datasets, WN18 and FB15k, for knowledge base completion and obtain improvements on both the mean rank and Hits@10 metrics over all baselines that do not use additional information.

1 Introduction
Knowledge bases (KB), such as WordNet (Fellbaum, 1998), Freebase (Bollacker et al., 2008), YAGO (Suchanek et al., 2007) and DBpedia (Lehmann et al., 2015), are useful resources for many applications such as question answering (Berant et al., 2013; Yih et al., 2015; Dai et al., 2016) and information extraction (Mintz et al., 2009). However, knowledge bases suffer from incompleteness despite their formidable sizes (Socher et al., 2013; West et al., 2014), leading to a number of studies on automatic knowledge base completion (KBC) (Nickel et al., 2015) or link prediction. The fundamental motivation behind these studies is that there exist statistical regularities underlying the intertwined facts stored in the multi-relational knowledge base. By discovering generalizable regularities in known facts, missing ones may be recovered in a faithful way. Due to their excellent generalization capability, distributed representations, a.k.a. embeddings, have been popularized to address the KBC task (Nickel et al., 2011; Bordes et al., 2011, 2014, 2013; Socher et al., 2013; Wang et al., 2014; Guu et al., 2015; Nguyen et al., 2016b). As a seminal work, Bordes et al. (2013) propose TransE, which models the statistical regularities with linear translations between entity embeddings operated by a relation embedding. Implicitly, TransE assumes that both entity embeddings and relation embeddings dwell in the same vector space, posing an unnecessarily strong prior. To relax this requirement, a variety of models first project the entity embeddings to a relation-dependent space (Bordes et al., 2014; Ji et al., 2015; Lin et al., 2015b; Nguyen et al., 2016b), and then model the translation property in the projected space. Typically, these relation-dependent spaces are characterized by projection matrices unique to each relation. As a benefit, different aspects of the same entity can be temporarily emphasized or depressed as an effect of the projection.
For instance, STransE (Nguyen et al., 2016b) utilizes two projection matrices per relation, one for the head entity and the other for the tail entity. Despite the superior performance of STransE compared to TransE, it is more prone to the data sparsity problem. Concretely, since the projection spaces are unique to each relation, projection matrices associated with rare relations can only be exposed to very few facts during training, resulting in poor generalization. For common relations, a similar issue exists. Without any restrictions on the number of projection matrices, logically related or conceptually similar relations may have distinct projection spaces, hindering the discovery, sharing, and generalization of statistical regularities. 950 Previously, a line of research makes use of external information such as textual relations from web-scale corpus or node features (Toutanova et al., 2015; Toutanova and Chen, 2015; Nguyen et al., 2016a), alleviating the sparsity problem. In parallel, recent work has proposed to model regularities beyond local facts by considering multirelation paths (Garc´ıa-Dur´an et al., 2015; Lin et al., 2015a; Shen et al., 2016). Since the number of paths grows exponentially with its length, as a side effect, path-based models enjoy much more training cases, suffering less from the problem. In this paper, we propose an interpretable knowledge transfer model (ITransF), which encourages the sharing of statistic regularities between the projection matrices of relations and alleviates the data sparsity problem. At the core of ITransF is a sparse attention mechanism, which learns to compose shared concept matrices into relation-specific projection matrices, leading to a better generalization property. Without any external resources, ITransF improves mean rank and Hits@10 on two benchmark datasets, over all previous approaches of the same kind. In addition, the parameter sharing is clearly indicated by the learned sparse attention vectors, enabling us to interpret how knowledge transfer is carried out. To induce the desired sparsity during optimization, we further introduce a block iterative optimization algorithm. In summary, the contributions of this work are: (i) proposing a novel knowledge embedding model which enables knowledge transfer by learning to discover shared regularities; (ii) introducing a learning algorithm to directly optimize a sparse representation from which the knowledge transferring procedure is interpretable; (iii) showing the effectiveness of our model by outperforming baselines on two benchmark datasets for knowledge base completion task. 2 Notation and Previous Models Let E denote the set of entities and R denote the set of relations. In knowledge base completion, given a training set P of triples (h, r, t) where h, t ∈E are the head and tail entities having a relation r ∈R, e.g., (Steve Jobs, FounderOf, Apple), we want to predict missing facts such as (Steve Jobs, Profession, Businessperson). Most of the embedding models for knowledge base completion define an energy function fr(h, t) according to the fact’s plausibility (Bordes et al., 2011, 2014, 2013; Socher et al., 2013; Wang et al., 2014; Yang et al., 2015; Guu et al., 2015; Nguyen et al., 2016b). The models are learned to minimize energy fr(h, t) of a plausible triple (h, r, t) and to maximize energy fr(h′, t′) of an implausible triple (h′, r, t′). 
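To make the training scheme described above concrete, the margin-based objective over plausible and corrupted triples can be written in a few lines of Python. This is only an illustrative sketch: the translation-style energy function, the toy embeddings, and all names are assumptions for exposition, not the authors' implementation.

```python
import numpy as np

def margin_ranking_loss(energy_fn, pos_triple, neg_triple, gamma=1.0):
    """Hinge loss: push the energy of a plausible triple below that of a
    corrupted triple by at least the margin gamma."""
    h, r, t = pos_triple
    h_n, r_n, t_n = neg_triple
    return max(gamma + energy_fn(h, r, t) - energy_fn(h_n, r_n, t_n), 0.0)

# One common choice of energy: a translation-style score ||h + r - t||_1.
def l1_translation_energy(h, r, t):
    return float(np.abs(h + r - t).sum())

# Toy usage with random 50-dimensional embeddings.
rng = np.random.RandomState(0)
h, r, t, t_bad = rng.randn(4, 50) * 0.1
loss = margin_ranking_loss(l1_translation_energy, (h, r, t), (h, r, t_bad))
```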
Motivated by the linear translation phenomenon observed in well trained word embeddings (Mikolov et al., 2013), TransE (Bordes et al., 2013) represents the head entity h, the relation r and the tail entity t with vectors h, r and t ∈Rn respectively, which were trained so that h+r ≈t. They define the energy function as fr(h, t) = ∥h + r −t∥ℓ where ℓ= 1 or 2, which means either the ℓ1 or the ℓ2 norm of the vector h + r −t will be used depending on the performance on the validation set. To better model relation-specific aspects of the same entity, TransR (Lin et al., 2015b) uses projection matrices and projects the head entity and the tail entity to a relation-dependent space. STransE (Nguyen et al., 2016b) extends TransR by employing different matrices for mapping the head and the tail entity. The energy function is fr(h, t) = ∥Wr,1h + r −Wr,2t∥ℓ However, not all relations have abundant data to estimate the relation specific matrices as most of the training samples are associated with only a few relations, leading to the data sparsity problem for rare relations. 3 Interpretable Knowledge Transfer 3.1 Model As discussed above, a fundamental weakness in TransR and STransE is that they equip each relation with a set of unique projection matrices, which not only introduces more parameters but also hinders knowledge sharing. Intuitively, many relations share some concepts with each other, although they are stored as independent symbols in KB. For example, the relation “(somebody) won award for (some work)” and “(somebody) was nominated for (some work)” both describe a person’s high-quality work which wins an award or a nomination respectively. This phenomenon suggests that one relation actually represents a collection of real-world concepts, and one concept 951 can be shared by several relations. Inspired by the existence of such lower-level concepts, instead of defining a unique set of projection matrices for every relation, we can alternatively define a small set of concept projection matrices and then compose them into customized projection matrices. Effectively, the relation-dependent translation space is then reduced to the smaller concept spaces. However, in general, we do not have prior knowledge about what concepts exist out there and how they are composed to form relations. Therefore, in ITransF, we propose to learn this information simultaneously from data, together with all knowledge embeddings. Following this idea, we first present the model details, then discuss the optimization techniques for training. Energy function Specifically, we stack all the concept projection matrices to a 3-dimensional tensor D ∈Rm×n×n, where m is the pre-specified number of concept projection matrices and n is the dimensionality of entity embeddings and relation embeddings. We let each relation select the most useful projection matrices from the tensor, where the selection is represented by an attention vector. The energy function of ITransF is defined as: fr(h, t) = ∥αααH r · D · h + r −αααT r · D · t∥ℓ (1) where αααH r ,αααT r ∈[0, 1]m, satisfying P i αααH r,i = P i αααT r,i = 1, are normalized attention vectors used to compose all concept projection matrices in D by a convex combination. It is obvious that STransE can be expressed as a special case of our model when we use m = 2|R| concept matrices and set attention vectors to disjoint one-hot vectors. Hence our model space is a generalization of STransE. 
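A minimal NumPy sketch of the energy function in Eq. (1) may help make the attention-weighted combination of concept projection matrices concrete. The dimensions, initialization, and variable names below are illustrative assumptions, not the released code.

```python
import numpy as np

def itransf_energy(h, r, t, D, alpha_h, alpha_t, ell=1):
    """Eq. (1): || alpha_h . D . h + r - alpha_t . D . t ||_ell, where D is an
    (m, n, n) stack of concept projection matrices and alpha_h, alpha_t are
    normalized attention vectors of length m."""
    W_h = np.tensordot(alpha_h, D, axes=1)   # convex combination -> (n, n)
    W_t = np.tensordot(alpha_t, D, axes=1)
    diff = W_h @ h + r - W_t @ t
    return float(np.abs(diff).sum()) if ell == 1 else float(np.linalg.norm(diff))

# Toy usage: m = 4 concept matrices, n = 50-dimensional embeddings.
rng = np.random.RandomState(0)
m, n = 4, 50
D = np.stack([np.eye(n) + 0.005 * rng.randn(n, n) for _ in range(m)])
h, r, t = rng.randn(3, n) * 0.1
alpha_h = np.array([0.7, 0.3, 0.0, 0.0])   # sparse attention, sums to 1
alpha_t = np.array([0.0, 0.0, 1.0, 0.0])
energy = itransf_energy(h, r, t, D, alpha_h, alpha_t)
```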
Note that we can safely use fewer concept matrices in ITransF and obtain better performance (see section 4.3), though STransE always requires 2|R| projection matrices. We follow previous work to minimize the following hinge loss function: L = X (h,r,t)∼P, (h′,r,t′)∼N  γ + fr(h, t) −fr(h′, t′)  + (2) where P is the training set consisting of correct triples, N is the distribution of corrupted triples defined in section 3.3, and [·]+ = max(·, 0). Note that we have omitted the dependence of N on (h, r, t) to avoid clutter. We normalize the entity vectors h, t, and the projected entity vectors αααH r · D · h and αααT r · D · t to have unit length after each update, which is an effective regularization method that benefits all models. Sparse attention vectors In Eq. (1), we have defined αααH r ,αααT r to be some normalized vectors used for composition. With a dense attention vector, it is computationally expensive to perform the convex combination of m matrices in each iteration. Moreover, a relation usually does not consist of all existing concepts in practice. Furthermore, when the attention vectors are sparse, it is often easier to interpret their behaviors and understand how concepts are shared by different relations. Motivated by these potential benefits, we further hope to learn sparse attention vectors in ITransF. However, directly posing ℓ1 regularization (Tibshirani, 1996) on the attention vectors fails to produce sparse representations in our preliminary experiment, which motivates us to enforce ℓ0 constraints on αααT r ,αααH r . In order to satisfy both the normalization condition and the ℓ0 constraints, we reparameterize the attention vectors in the following way: αααH r = SparseSoftmax(vH r , IH r ) αααT r = SparseSoftmax(vT r , IT r ) where vH r , vT r ∈Rm are the pre-softmax scores, IH r , IT r ∈{0, 1}m are the sparse assignment vectors, indicating the non-zero entries of attention vectors, and the SparseSoftmax is defined as SparseSoftmax(v, I)i = exp(vi/τ)Ii P j exp(vj/τ)Ij with τ being the temperature of Softmax. With this reparameterization, vH r , vT r and IH r , IT r replace αααT r ,αααH r to become the real parameters of the model. Also, note that it is equivalent to pose the ℓ0 constraints on IH r , IT r instead of αααT r ,αααH r . Putting these modifications together, we can rewrite the optimization problem as minimize L subject to ∥IH r ∥0 ≤k, ∥IT r ∥0 ≤k (3) where L is the loss function defined in Eq. (2). 3.2 Block Iterative Optimization Though sparseness is favorable in practice, it is generally NP-hard to find the optimal solution under ℓ0 constraints. Thus, we resort to an approximated algorithm in this work. 952 For convenience, we refer to the parameters with and without the sparse constraints as the sparse partition and the dense partition, respectively. Based on this notion, the high-level idea of the approximated algorithm is to iteratively optimize one of the two partitions while holding the other one fixed. Since all parameters in the dense partition, including the embeddings, the projection matrices, and the pre-softmax scores, are fully differentiable with the sparse partition fixed, we can simply utilize SGD to optimize the dense partition. Then, the core difficulty lies in the step of optimizing the sparse partition (i.e. the sparse assignment vectors), during which we want the following two properties to hold 1. the sparsity required by the ℓ0 constaint is maintained, and 2. the cost define by Eq. (2) is decreased. 
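Before turning to how the assignment vectors satisfying these constraints are found, the SparseSoftmax reparameterization introduced above can be sketched as follows; this is a hypothetical helper that assumes a fixed temperature τ.

```python
import numpy as np

def sparse_softmax(v, I, tau=0.25):
    """Softmax over the pre-softmax scores v, restricted to the entries selected
    by the binary assignment vector I; all other entries stay exactly zero."""
    mask = np.asarray(I, dtype=float)
    scores = np.exp(np.asarray(v) / tau) * mask
    return scores / scores.sum()

v = np.array([2.0, 0.5, -1.0, 0.3])   # pre-softmax scores for m = 4 concepts
I = np.array([1, 0, 0, 1])            # l0 constraint: at most k = 2 concepts
alpha = sparse_softmax(v, I)          # -> roughly [0.999, 0, 0, 0.001]
```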
Satisfying the two criterion seems to highly resemble the original problem defined in Eq. (3). However, the dramatic difference here is that with parameters in the dense partition regarded as constant, the cost function is decoupled w.r.t. each relation r. In other words, the optimal choice of IH r , IT r is independent of IH r′ , IT r′ for any r′ ̸= r. Therefore, we only need to consider the optimization for a single relation r, which is essentially an assignment problem. Note that, however, IH r and IT r are still coupled, without which we basically reach the situation in a backpack problem. In principle, one can explore combinatorial optimization techniques to optimize IH r′ , IT r′ jointly, which usually involve some iterative procedure. To avoid adding another inner loop to our algorithm, we turn to a simple but fast approximation method based on the following single-matrix cost. Specifically, for each relation r, we consider the induced cost LH r,i where only a single projection matrix i is used for the head entity: LH r,i = X (h,r,t)∼Pr, (h′,r,t′)∼Nr  γ + fH r,i(h, t) −fH r,i(h′, t′)  + where fH r,i(h, t) = ∥Di · h + r −αααT r · D · t∥is the corresponding energy function, and the subscript in Pr and Nr denotes the subsets with relation r. Intuitively, LH r,i measures, given the current tail attention vector αααT r , if only one project matrix could be chosen for the head entity, how implausible Di would be. Hence, i∗= arg mini LH r,i gives us the best single projection matrix on the head side given αααT r . Now, in order to choose the best k matrices, we basically ignore the interaction among projection matrices, and update IH r in the following way: IH r,i ← ( 1, i ∈argpartitioni(LH r,i, k) 0, otherwise where the function argpartitioni(xi, k) produces the index set of the lowest-k values of xi. Analogously, we can define the single-matrix cost LT r,i and the energy function fT r,i(h, t) on the tail side in a symmetric way. Then, the update rule for IH r follows the same derivation. Admittedly, the approximation described here is relatively crude. But as we will show in section 4, the proposed algorithm yields good performance empirically. We leave the further improvement of the optimization method as future work. 3.3 Corrupted Sample Generating Method Recall that we need to sample a negative triple (h′, r, t′) to compute hinge loss shown in Eq. 2, given a positive triple (h, r, t) ∈P. The distribution of negative triple is denoted by N(h, r, t). Previous work (Bordes et al., 2013; Lin et al., 2015b; Yang et al., 2015; Nguyen et al., 2016b) generally constructs a set of corrupted triples by replacing the head entity or tail entity with a random entity uniformly sampled from the KB. However, uniformly sampling corrupted entities may not be optimal. Often, the head and tail entities associated a relation can only belong to a specific domain. When the corrupted entity comes from other domains, it is very easy for the model to induce a large energy gap between true triple and corrupted one. As the energy gap exceeds γ, there will be no training signal from this corrupted triple. In comparison, if the corrupted entity comes from the same domain, the task becomes harder for the model, leading to more consistent training signal. Motivated by this observation, we propose to sample corrupted head or tail from entities in the same domain with a probability pr and from the whole entity set with probability 1 −pr. The choice of relation-dependent probability pr is specified in Appendix A.1. 
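A small sketch of this corrupted-sample generator is given below. The relation-dependent probability p_r is treated as a precomputed input (its definition is deferred to Appendix A.1), and all names and data are illustrative; a full implementation would also skip corrupted triples that already appear in the training set.

```python
import random

def corrupt_triple(h, r, t, entities, head_domain, tail_domain, p_r):
    """Corrupt (h, r, t) by replacing its head or tail with a random entity,
    drawn from the relation's own domain with probability p_r and from the
    whole entity set with probability 1 - p_r."""
    corrupt_head = random.random() < 0.5
    in_domain = random.random() < p_r
    if in_domain:
        pool = head_domain[r] if corrupt_head else tail_domain[r]
    else:
        pool = entities
    e = random.choice(sorted(pool))
    return (e, r, t) if corrupt_head else (h, r, e)

# Hypothetical usage with a tiny knowledge base.
entities = ["Obama", "Honolulu", "USA", "Springfield"]
head_domain = {"born_in": {"Obama"}}
tail_domain = {"born_in": {"Honolulu", "Springfield"}}
negative = corrupt_triple("Obama", "born_in", "Honolulu",
                          entities, head_domain, tail_domain, p_r=0.3)
```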
In the rest of the paper, we refer to the new proposed sampling method as ”domain sampling”. 953 4 Experiments 4.1 Setup To evaluate link prediction, we conduct experiments on the WN18 (WordNet) and FB15k (Freebase) introduced by Bordes et al. (2013) and use the same training/validation/test split as in (Bordes et al., 2013). The information of the two datasets is given in Table 1. Dataset #E #R #Train #Valid #Test WN18 40,943 18 141,442 5,000 5,000 FB15k 14,951 1,345 483,142 50,000 59,071 Table 1: Statistics of FB15k and WN18 used in experiments. #E, #R denote the number of entities and relation types respectively. #Train, #Valid and #Test are the numbers of triples in the training, validation and test sets respectively. In knowledge base completion task, we evaluate model’s performance of predicting the head entity or the tail entity given the relation and the other entity. For example, to predict head given relation r and tail t in triple (h, r, t), we compute the energy function fr(h′, t) for each entity h′ in the knowledge base and rank all the entities according to the energy. We follow Bordes et al. (2013) to report the filter results, i.e., removing all other correct candidates h′ in ranking. The rank of the correct entity is then obtained and we report the mean rank (mean of the predicted ranks) and Hits@10 (top 10 accuracy). Lower mean rank or higher Hits@10 mean better performance. 4.2 Implementation Details We initialize the projection matrices with identity matrices added with a small noise sampled from normal distribution N(0, 0.0052). The entity and relation vectors of ITransF are initialized by TransE (Bordes et al., 2013), following Lin et al. (2015b); Ji et al. (2015); Garc´ıa-Dur´an et al. (2016, 2015); Lin et al. (2015a). We ran minibatch SGD until convergence. We employ the “Bernoulli” sampling method to generate incorrect triples as used in Wang et al. (2014), Lin et al. (2015b), He et al. (2015), Ji et al. (2015) and Lin et al. (2015a). STransE (Nguyen et al., 2016b) is the most similar knowledge embedding model to ours except that they use distinct projection matrices for each relation. We use the same hyperparameters as used in STransE and no significant improvement is observed when we alter hyperparameters. We set the margin γ to 5 and dimension of embedding n to 50 for WN18, and γ = 1, n = 100 for FB15k. We set the batch size to 20 for WN18 and 1000 for FB15k. The learning rate is 0.01 on WN18 and 0.1 on FB15k. We use 30 matrices on WN18 and 300 matrices on FB15k. All the models are implemented with Theano (Bergstra et al., 2010). The Softmax temperature is set to 1/4. 4.3 Results & Analysis The overall link prediction results1 are reported in Table 2. Our model consistently outperforms previous models without external information on both the metrics of WN18 and FB15k. On WN18, we even achieve a much better mean rank with comparable Hits@10 than current state-of-the-art model IRN employing external information. We can see that path information is very helpful on FB15k and models taking advantage of path information outperform intrinsic models by a significant margin. Indeed, a lot of facts are easier to recover with the help of multi-step inference. For example, if we know Barack Obama is born in Honolulu, a city in the United States, then we easily know the nationality of Obama is the United States. 
An straightforward way of extending our proposed model to k-step path P = {ri}k i=1 is to define a path energy function ∥αααH P · D · h + P ri∈P ri −αααT P · D · t∥ℓ, αααH P is a concept association related to the path. We plan to extend our model to multi-step path in the future. To provide a detailed understanding why the proposed model achieves better performance, we present some further analysis in the sequel. Performance on Rare Relations In the proposed ITransF, we design an attention mechanism to encourage knowledge sharing across different relations. Naturally, facts associated with rare relations should benefit most from such sharing, boosting the overall performance. To verify this hypothesis, we investigate our model’s performance on relations with different frequency. The overall distribution of relation frequencies resembles that of word frequencies, subject to the zipf’s law. Since the frequencies of relations approximately follow a power distribution, their log 1Note that although IRN (Shen et al., 2016) does not explicitly exploit path information, it performs multi-step inference through the multiple usages of external memory. When IRN is allowed to access memory once for each prediction, its Hits@10 is 80.7, similar to models without path information. 954 Model Additional Information WN18 FB15k Mean Rank Hits@10 Mean Rank Hits@10 SE (Bordes et al., 2011) No 985 80.5 162 39.8 Unstructured (Bordes et al., 2014) No 304 38.2 979 6.3 TransE (Bordes et al., 2013) No 251 89.2 125 47.1 TransH (Wang et al., 2014) No 303 86.7 87 64.4 TransR (Lin et al., 2015b) No 225 92.0 77 68.7 CTransR (Lin et al., 2015b) No 218 92.3 75 70.2 KG2E (He et al., 2015) No 348 93.2 59 74.0 TransD (Ji et al., 2015) No 212 92.2 91 77.3 TATEC (Garc´ıa-Dur´an et al., 2016) No 58 76.7 NTN (Socher et al., 2013) No 66.1 41.4 DISTMULT (Yang et al., 2015) No 94.2 57.7 STransE (Nguyen et al., 2016b) No 206 (244) 93.4 (94.7) 69 79.7 ITransF No 205 94.2 65 81.0 ITransF (domain sampling) No 223 95.2 77 81.4 RTransE (Garc´ıa-Dur´an et al., 2015) Path 50 76.2 PTransE (Lin et al., 2015a) Path 58 84.6 NLFeat (Toutanova and Chen, 2015) Node + Link Features 94.3 87.0 Random Walk (Wei et al., 2016) Path 94.8 74.7 IRN (Shen et al., 2016) External Memory 249 95.3 38 92.7 Table 2: Link prediction results on two datasets. Higher Hits@10 or lower Mean Rank indicates better performance. Following Nguyen et al. (2016b) and Shen et al. (2016), we divide the models into two groups. The first group contains intrinsic models without using extra information. The second group make use of additional information. Results in the brackets are another set of results STransE reported. frequencies are linear. The statistics of relations on FB15k and WN18 are shown in Figure 1. We can clearly see that the distributions exhibit long tails, just like the Zipf’s law for word frequency. In order to study the performance of relations with different frequencies, we sort all relations by their frequency in the training set, and split them into 3 buckets evenly so that each bucket has a similar interval length of log frequency. Within each bucket, we compare our model with STransE, as shown in Figure 2.2 As we can see, on WN18, ITransF outperforms STransE by a significant margin on rare relations. In particular, in the last bin (rarest relations), the average Hits@10 increases from 55.2 to 93.8, showing the great benefits of transferring statistical strength from common relations to rare ones. The comparison on each relation is shown in Appendix A.2. 
On FB15k, we can also observe a similar pattern, although the degree of improvement is less significant. We conjecture the difference roots in the fact that many rare relations on FB15k have disjoint domains, knowledge transfer through common concepts is harder. Interpretability In addition to the quantitative evidence supporting the effectiveness of knowledge sharing, we provide some intuitive examples to show how knowledge is shared in our model. As 2Domain sampling is not employed. we mentioned earlier, the sparse attention vectors fully capture the association between relations and concepts and hence the knowledge transfer among relations. Thus, we visualize the attention vectors for several relations on both WN18 and FB15K in Figure 3. For WN18, the words “hyponym” and “hypernym” refer to words with more specific or general meaning respectively. For example, PhD is a hyponym of student and student is a hypernym of PhD. As we can see, concepts associated with the head entities in one relation are also associated with the tail entities in its reverse relation. Further, “instance hypernym” is a special hypernym with the head entity being an instance, and the tail entity being an abstract notion. A typical example is (New York, instance hypernym, city). This connection has also been discovered by our model, indicated by the fact that “instance hypernym(T)” and “hypernym(T)” share a common concept matrix. Finally, for symmetric relations like “similar to”, we see the head attention is identical to the tail attention, which well matches our intuition. On FB15k, we also see the sharing between reverse relations, as in “(somebody) won award for (some work)” and “(some work) award winning work (somebody)”. What’s more, although relation “won award for” and “was nominated for” share the same concepts, 955 Log(Frequency) 0 2.75 5.5 8.25 11 Frequency 0 10000 20000 30000 40000 Relation Frequency Log(Frequency) (a) WN18 Log(Frequency) 0 2.5 5 7.5 10 Frequency 0 4000 8000 12000 16000 Relation Frequency Log(Frequency) (b) FB15k Figure 1: Frequencies and log frequencies of relations on two datasets. The X-axis are relations sorted by frequency. Hits@10 0 25 50 75 100 Relation Bin 1 2 3 ITransF STransE (a) WN18 Hits@10 0 25 50 75 100 Relation Bin 1 2 3 ITransF STransE (b) FB15k Figure 2: Hits@10 on relations with different amount of data. We give each relation the equal weight and report the average Hits@10 of each relation in a bin instead of reporting the average Hits@10 of each sample in a bin. Bins with smaller index corresponding to high-frequency relations. their attention distributions are different, suggesting distinct emphasis. Finally, symmetric relations like spouse behave similarly as mentioned before. Model Compression A byproduct of parameter sharing mechanism employed by ITransF is a much more compact model with equal performance. Figure 5 plots the average performance of ITransF against the number of projection matrices m, together with two baseline models. On FB15k, when we reduce the number of matrices from 2200 to 30 (∼90× compression), our model performance decreases by only 0.09% on Hits@10, still outperforming STransE. Similarly, on WN18, ITransF continues to achieve the best performance when we reduce the number of concept project matrices to 18. 5 Analysis on Sparseness Sparseness is desirable since it contribute to interpretability and computational efficiency of our model. 
We investigate whether enforcing sparseness would deteriorate the model performance and compare our method with another sparse encoding methods in this section. Dense Attention w/o ℓ1 regularization Although ℓ0 constrained model usually enjoys many practical advantages, it may deteriorate the model performance when applied improperly. Here, we show that our model employing sparse attention can achieve similar results with dense attention with a significantly less computational burden. We also compare dense attention with ℓ1 regularization. We set the ℓ1 coefficient to 0.001 in our experiments and does not apply Softmax since the ℓ1 of a vector after Softmax is always 1. We compare models in a setting where the computation time of 956 (a) WN18 (b) FB15k Figure 3: Heatmap visualization of attention vectors for ITransF on WN18 and FB15k. Each row is an attention vector αααH r or αααT r for a relation’s head or tail concepts. (a) WN18 (b) FB15k Figure 4: Heatmap visualization of ℓ1 regularized dense attention vectors, which are not sparse. Note that the colorscale is not from 0 to 1 since Softmax is not applied. Hits@10 70 73.25 76.5 79.75 83 # matrices 15 30 75 300 600 1200 1345 2200 2690 ITransF STransE CTransR (a) FB15k Hits@10 90 91.25 92.5 93.75 95 # matrices 18 22 26 30 36 45 ITransF STransE CTransR (b) WN18 Figure 5: Performance with different number of projection matrices. Note that the X-axis denoting the number of matrices is not linearly scaled. dense attention model is acceptable3. We use 22 weight matrices on WN18 and 15 weight matrices on FB15k and train both the models for 2000 epochs. The results are reported in Table 3. Generally, ITransF with sparse attention has slightly better or comparable performance comparing to dense attention. Further, we show the attention vectors of 3With 300 projection matrices, it takes 1h1m to run one epoch for a model with dense attention. model with ℓ1 regularized dense attention in Figure 4. We see that ℓ1 regularization does not produce a sparse attention, especially on FB15k. Nonnegative Sparse Encoding In the proposed model, we induce the sparsity by a carefully designed iterative optimization procedure. Apart from this approach, one may utilize sparse encoding techniques to obtain sparseness based on the pretrained projection matrices from STransE. Concretely, stacking |2R| pretrained projection 957 Method WN18 FB15k MR H10 Time MR H10 Time Dense 199 94.0 4m34s 69 79.4 4m30s Dense + ℓ1 228 94.2 4m25s 131 78.9 5m47s Sparse 207 94.1 2m32s 67 79.6 1m52s Table 3: Performance of model with dense attention vectors or sparse attention vectors. MR, H10 and Time denotes mean rank, Hits@10 and training time per epoch respectively matrices into a 3-dimensional tensor X ∈ R2|R|×n×n, similar sparsity can be induced by solving an ℓ1-regularized tensor completion problem minA,D ||X −DA||2 2 + λ∥A∥ℓ1. Basically, A plays the same role as the attention vectors in our model. For more details, we refer readers to (Faruqui et al., 2015). For completeness, we compare our model with the aforementioned approach4. The comparison is summarized in table 4. On both benchmarks, ITransF achieves significant improvement against sparse encoding on pretrained model. This performance gap should be expected since the objective function of sparse encoding methods is to minimize the reconstruction loss rather than optimize the criterion for link prediction. 
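For concreteness, the sparse-encoding baseline optimizes a reconstruction objective over the stacked pretrained projection matrices; a rough sketch of that objective (omitting the actual solver used by the toolkit of Faruqui et al., with toy shapes as assumptions) is:

```python
import numpy as np

def sparse_encoding_objective(X, D, A, lam):
    """||X - A.D||^2 + lam * ||A||_1, where X stacks the 2|R| pretrained
    projection matrices, D holds the m concept matrices, and A (2|R| x m)
    plays the role of the attention vectors."""
    recon = np.tensordot(A, D, axes=1)        # (2|R|, n, n) reconstruction
    return float(((X - recon) ** 2).sum() + lam * np.abs(A).sum())

# Toy usage with 6 "pretrained" 10x10 matrices and 3 concept matrices.
rng = np.random.RandomState(0)
X = rng.randn(6, 10, 10)
D = rng.randn(3, 10, 10)
A = np.abs(rng.randn(6, 3)) * 0.1
value = sparse_encoding_objective(X, D, A, lam=0.01)
```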
Method WN18 FB15k MR H10 MR H10 Sparse Encoding 211 86.6 66 79.1 ITransF 205 94.2 65 81.0 Table 4: Different methods to obtain sparse representations 6 Related Work In KBC, CTransR (Lin et al., 2015b) enables relation embedding sharing across similar relations, but they cluster relations before training rather than learning it in a principled way. Further, they do not solve the data sparsity problem because there is no sharing of projection matrices which have a lot more parameters. Learning the association between semantic relations has been used in related problems such as relational similarity measurement (Turney, 2012) and relation adaptation (Bollegala et al., 2015). Data sparsity is a common problem in many fields. Transfer learning (Pan and Yang, 2010) has been shown to be promising to transfer knowl4We use the toolkit provided by (Faruqui et al., 2015). edge and statistical strengths across similar models or languages. For example, Bharadwaj et al. (2016) transfers models on resource-rich languages to low resource languages by parameter sharing through common phonological features in name entity recognition. Zoph et al. (2016) initialize from models trained by resource-rich languages to translate low-resource languages. Several works on obtaining a sparse attention (Martins and Astudillo, 2016; Makhzani and Frey, 2014; Shazeer et al., 2017) share a similar idea of sorting the values before softmax and only keeping the K largest values. However, the sorting operation in these works is not GPU-friendly. The block iterative optimization algorithm in our work is inspired by LightRNN (Li et al., 2016). They allocate every word in the vocabulary in a table. A word is represented by a row vector and a column vector depending on its position in the table. They iteratively optimize embeddings and allocation of words in tables. 7 Conclusion and Future Work In summary, we propose a knowledge embedding model which can discover shared hidden concepts, and design a learning algorithm to induce the interpretable sparse representation. Empirically, we show our model can improve the performance on two benchmark datasets without external resources, over all previous models of the same kind. In the future, we plan to enable ITransF to perform multi-step inference, and extend the sharing mechanism to entity and relation embeddings, further enhancing the statistical binding across parameters. In addition, our framework can also be applied to multi-task learning, promoting a finer sharing among different tasks. Acknowledgments We thank anonymous reviewers and Graham Neubig for valuable comments. We thank Yulun Du, Paul Mitchell, Abhilasha Ravichander, Pengcheng Yin and Chunting Zhou for suggestions on the draft. We are also appreciative for the great working environment provided by staff in LTI. This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program. 958 References Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Seattle, Washington, USA, pages 1533– 1544. James Bergstra, Olivier Breuleux, Fr´ed´eric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. 2010. Theano: a cpu and gpu math expression compiler. In Proceedings of the Python for scientific computing conference (SciPy). 
Austin, TX, volume 4, page 3. Akash Bharadwaj, David Mortensen, Chris Dyer, and Jaime Carbonell. 2016. Phonologically aware neural model for named entity recognition in low resource transfer settings. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1462–1472. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A Collaboratively Created Graph Database for Structuring Human Knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data. pages 1247–1250. Danushka Bollegala, Takanori Maehara, and Ken-ichi Kawarabayashi. 2015. Embedding semantic relations into word representations. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence. Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. 2014. A Semantic Matching Energy Function for Learning with Multi-relational Data. Machine Learning 94(2):233–259. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating Embeddings for Modeling Multirelational Data. In Advances in Neural Information Processing Systems 26, pages 2787–2795. Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio. 2011. Learning Structured Embeddings of Knowledge Bases. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence. pages 301–306. Zihang Dai, Lei Li, and Wei Xu. 2016. Cfo: Conditional focused neural question answering with largescale knowledge bases. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 800–810. Manaal Faruqui, Yulia Tsvetkov, Dani Yogatama, Chris Dyer, and Noah A. Smith. 2015. Sparse overcomplete word vector representations. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 1491– 1500. Christiane D. Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press. Alberto Garc´ıa-Dur´an, Antoine Bordes, and Nicolas Usunier. 2015. Composing Relationships with Translations. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pages 286–290. Alberto Garc´ıa-Dur´an, Antoine Bordes, Nicolas Usunier, and Yves Grandvalet. 2016. Combining Two and Three-Way Embedding Models for Link Prediction in Knowledge Bases. Journal of Artificial Intelligence Research 55:715–742. Kelvin Guu, John Miller, and Percy Liang. 2015. Traversing Knowledge Graphs in Vector Space. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pages 318–327. Shizhu He, Kang Liu, Guoliang Ji, and Jun Zhao. 2015. Learning to Represent Knowledge Graphs with Gaussian Embedding. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management. pages 623–632. Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Knowledge Graph Embedding via Dynamic Mapping Matrix. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). pages 687–696. Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. 
Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, S¨oren Auer, and Christian Bizer. 2015. DBpedia - A Large-scale, Multilingual Knowledge Base Extracted from Wikipedia. Semantic Web 6(2):167– 195. Xiang Li, Tao Qin, Jian Yang, and Tieyan Liu. 2016. LightRNN: Memory and Computation-Efficient Recurrent Neural Networks. In Advances in Neural Information Processing Systems 29. Yankai Lin, Zhiyuan Liu, Huanbo Luan, Maosong Sun, Siwei Rao, and Song Liu. 2015a. Modeling Relation Paths for Representation Learning of Knowledge Bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pages 705–714. 959 Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015b. Learning Entity and Relation Embeddings for Knowledge Graph Completion. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence Learning, pages 2181–2187. Alireza Makhzani and Brendan Frey. 2014. K-sparse autoencoders. In Proceedings of the International Conference on Learning Representations. Andr´e FT Martins and Ram´on Fernandez Astudillo. 2016. From softmax to sparsemax: A sparse model of attention and multi-label classification. In Proceedings of the 33th International Conference on Machine Learning. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. pages 3111–3119. Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. Association for Computational Linguistics, Suntec, Singapore, pages 1003–1011. Dat Quoc Nguyen, Kairit Sirts, Lizhen Qu, and Mark Johnson. 2016a. Neighborhood mixture model for knowledge base completion. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning (CoNLL). Association for Computational Linguistics, page 4050. Dat Quoc Nguyen, Kairit Sirts, Lizhen Qu, and Mark Johnson. 2016b. STransE: a novel embedding model of entities and relationships in knowledge bases. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 460–466. Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. 2015. A Review of Relational Machine Learning for Knowledge Graphs. Proceedings of the IEEE, to appear . Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A Three-Way Model for Collective Learning on Multi-Relational Data. In Proceedings of the 28th International Conference on Machine Learning. pages 809–816. Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. IEEE Transactions on knowledge and data engineering 22(10):1345–1359. Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In Proceedings of the International Conference on Learning Representations. Yelong Shen, Po-Sen Huang, Ming-Wei Chang, and Jianfeng Gao. 2016. Implicit reasonet: Modeling large-scale structured relationships with shared memory. arXiv preprint arXiv:1611.04642 . Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. 
Reasoning With Neural Tensor Networks for Knowledge Base Completion. In Advances in Neural Information Processing Systems 26, pages 926–934. Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. YAGO: A Core of Semantic Knowledge. In Proceedings of the 16th International Conference on World Wide Web. pages 697–706. Robert Tibshirani. 1996. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological) pages 267–288. Kristina Toutanova and Danqi Chen. 2015. Observed Versus Latent Features for Knowledge Base and Text Inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality. pages 57–66. Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. 2015. Representing Text for Joint Embedding of Text and Knowledge Bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pages 1499–1509. Peter D Turney. 2012. Domain and function: A dualspace model of semantic relations and compositions. Journal of Artificial Intelligence Research 44:533– 585. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge Graph Embedding by Translating on Hyperplanes. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, pages 1112–1119. Zhuoyu Wei, Jun Zhao, and Kang Liu. 2016. Mining inference formulas by goal-directed random walks. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 1379–1388. Robert West, Evgeniy Gabrilovich, Kevin Murphy, Shaohua Sun, Rahul Gupta, and Dekang Lin. 2014. Knowledge Base Completion via Searchbased Question Answering. In Proceedings of the 23rd International Conference on World Wide Web. pages 515–526. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding Entities and Relations for Learning and Inference in Knowledge Bases. In Proceedings of the International Conference on Learning Representations. 960 Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 1321–1331. Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 1568–1575. 961 A Appendix A.1 Domain Sampling Probability In this section, we define the probability pr to generate a negative sample from the same domain mentioned in Section 3.3. The probability cannot be too high to avoid generating negative samples that are actually correct, since there are generally a lot of facts missing in KBs. Specifically, let MH r = {h | ∃t(h, r, t) ∈P} and MT r = {t | ∃h(h, r, t) ∈P} denote the head or tail domain of relation r. Suppose Nr = {(h, r, t) ∈P} is the induced set of edges with relation r. 
We define the probability p_r as

p_r = min( λ |M_r^T| |M_r^H| / |N_r| , 0.5 )    (4)

Our motivation for this formulation is as follows. Suppose O_r is the set that contains all truthful fact triples on relation r, i.e., all triples in the training set and all other missing correct triples. If we assume all fact triples within the domain have a uniform probability of being true, the probability of a random triple being correct is

Pr((h, r, t) ∈ O_r | h ∈ M_r^H, t ∈ M_r^T) = |O_r| / (|M_r^H| |M_r^T|)

Assume that each true fact is observed in the training set with probability λ; then |N_r| = λ|O_r|, and the above probability can be approximated by |N_r| / (λ|M_r^H||M_r^T|). We want the probability of generating a negative sample from the domain to be inversely proportional to the probability of the sample being true, so we define the probability as in Eq. 4. The results in Section 4 are obtained with λ set to 0.001. We compare how different values of λ influence our model's performance in Table 5. With larger λ and a higher domain sampling probability, our model's Hits@10 increases while the mean rank also increases. The rise in mean rank is due to the higher probability of generating a valid triple as a negative sample, which raises the energy of valid triples and leads to a higher overall rank for the correct entity. However, the reasoning capability is boosted, as the higher Hits@10 in the table shows.

Table 5: Effect of different values of λ on model performance. The compared models are trained for 2000 epochs.
Method        WN18 MR   WN18 Hits@10   FB15k MR   FB15k Hits@10
λ = 0.0003    217       95.0           68         80.4
λ = 0.001     223       95.2           73         80.6
λ = 0.003     239       95.2           82         80.9

A.2 Performance on individual relations of WN18
We plot the performance of ITransF and STransE on each relation (Figure 6). We see that the improvement is greater on rare relations.

Figure 6: Hits@10 on each relation in WN18. The relations are sorted according to their frequency.
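As a companion to Eq. (4), a minimal helper for computing p_r might look as follows; all counts are assumed to be collected from the training triples, and the names are illustrative rather than taken from the authors' code.

```python
def domain_sampling_prob(head_domain_size, tail_domain_size, n_edges, lam=0.001):
    """Eq. (4): p_r = min(lam * |M_r^T| * |M_r^H| / |N_r|, 0.5)."""
    return min(lam * tail_domain_size * head_domain_size / n_edges, 0.5)

# e.g. a relation with 1,200 distinct heads, 900 distinct tails, 15,000 edges:
p_r = domain_sampling_prob(1200, 900, 15000)   # = min(0.072, 0.5) = 0.072
```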
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 963–973 Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1089

Learning a Neural Semantic Parser from User Feedback
Srinivasan Iyer†⋄, Ioannis Konstas†, Alvin Cheung†, Jayant Krishnamurthy‡ and Luke Zettlemoyer†‡
†Paul G. Allen School of Computer Science & Engineering, Univ. of Washington, Seattle, WA
{sviyer,ikonstas,akcheung,lsz}@cs.washington.edu
‡Allen Institute for Artificial Intelligence, Seattle, WA
{jayantk,lukez}@allenai.org

Abstract
We present an approach to rapidly and easily build natural language interfaces to databases for new domains, whose performance improves over time based on user feedback, and requires minimal intervention. To achieve this, we adapt neural sequence models to map utterances directly to SQL with its full expressivity, bypassing any intermediate meaning representations. These models are immediately deployed online to solicit feedback from real users to flag incorrect queries. Finally, the popularity of SQL facilitates gathering annotations for incorrect predictions using the crowd, which is directly used to improve our models. This complete feedback loop, without intermediate representations or database specific engineering, opens up new ways of building high quality semantic parsers. Experiments suggest that this approach can be deployed quickly for any new target domain, as we show by learning a semantic parser for an online academic database from scratch.

1 Introduction
Existing semantic parsing approaches for building natural language interfaces to databases (NLIDBs) either use special-purpose intermediate meaning representations that lack the full expressivity of database query languages or require extensive feature engineering, making it difficult to deploy them in new domains. We present a robust approach to quickly and easily learn and deploy semantic parsers from scratch, whose performance improves over time based on user feedback, and requires very little expert intervention.

⋄Work done partly during an internship at the Allen Institute for Artificial Intelligence.

Most recent papers of Michael I. Jordan
SELECT paper.paperId , paper.year
FROM paper , writes , author
WHERE paper.paperId = writes.paperId
  AND writes.authorId = author.authorId
  AND author.authorName = "michael i. jordan"
  AND paper.year = (SELECT max(paper.year)
    FROM paper , writes , author
    WHERE paper.paperId = writes.paperId
      AND writes.authorId = author.authorId
      AND author.authorName = "michael i. jordan");

I'd like to book a flight from San Diego to Toronto
SELECT DISTINCT f1.flight_id
FROM flight f1 , airport_service a1 , city c1 , airport_service a2 , city c2
WHERE f1.from_airport = a1.airport_code
  AND a1.city_code = c1.city_code
  AND c1.city_name = 'san diego'
  AND f1.to_airport = a2.airport_code
  AND a2.city_code = c2.city_code
  AND c2.city_name = 'toronto';

Figure 1: Utterances with corresponding SQL queries to answer them for two domains, an academic database and a flight reservation database.
To learn these semantic parsers, we (1) adapt neural sequence models to map utterances directly to SQL thereby bypassing intermediate representations and taking full advantage of SQL’s querying capabilities, (2) immediately deploy the model online to solicit questions and user feedback on results to reduce SQL annotation efforts, and (3) use crowd workers from skilled markets to provide SQL annotations that can directly be used for model improvement, in addition to being easier and cheaper to obtain than logical meaning representations. We demonstrate the effectiveness of the complete approach by successfully learning a semantic parser for an academic domain by simply deploying it online for three days. This type of interactive learning is related to a number of recent ideas in semantic parsing, in963 cluding batch learning of models that directly produce programs (e.g., regular expressions (Locascio et al., 2016)), learning from paraphrases (often gathered through crowdsourcing (Wang et al., 2015)), data augmentation (e.g. based on manually engineered semantic grammars (Jia and Liang, 2016)) and learning through direct interaction with users (e.g., where a single user teaches the model new concepts (Wang et al., 2016)). However, there are unique advantages to our approach, including showing (1) that non-linguists can write SQL to encode complex, compositional computations (see Fig 1 for an example), (2) that external paraphrase resources and the structure of facts from the target database itself can be used for effective data augmentation, and (3) that actual database users can effectively drive the overall learning by simply providing feedback about what the model is currently getting correct. Our experiments measure the performance of these learning advances, both in batch on existing datasets and through a simple online experiment for the full interactive setting. For the batch evaluation, we use sentences from the benchmark GeoQuery and ATIS domains, converted to contain SQL meaning representations. Our neural learning with data augmentation achieves reasonably high accuracies, despite the extra complexities of mapping directly to SQL. We also perform simulated interactive learning on this data, showing that with perfect user feedback our full approach could learn high quality parsers with only 55% of the data. Finally, we do a small scale online experiment for a new domain, academic paper metadata search, demonstrating that actual users can provide useful feedback and our full approach is an effective method for learning a high quality parser that continues to improve over time as it is used. 2 Related Work Although diverse meaning representation languages have been used with semantic parsers – such as regular expressions (Kushman and Barzilay, 2013; Locascio et al., 2016), Abstract Meaning Representations (AMR) (Artzi et al., 2015; Misra and Artzi, 2016), and systems of equations (Kushman et al., 2014; Roy et al., 2016) – parsers for querying databases have typically used either logic programs (Zelle and Mooney, 1996), lambda calculus (Zettlemoyer and Collins, 2005), or λDCS (Liang et al., 2013) as the meaning representation language. All three of these languages are modeled after natural language to simplify parsing. However, none of them is used to query databases outside of the semantic parsing literature; therefore, they are understood by few people and not supported by standard database implementations. 
In contrast, we parse directly to SQL, which is a popular database query language with wide usage and support. Learning parsers directly from SQL queries has the added benefit that we can potentially hire programmers on skilled-labor crowd markets to provide labeled examples, such as UpWork1, which we demonstrate in this work. A few systems have been developed to directly generate SQL queries from natural language (Popescu et al., 2003; Giordani and Moschitti, 2012; Poon, 2013). However, all of these systems make strong assumptions on the structure of queries: they use manually engineered rules that can only generate a subset of SQL, require lexical matches between question tokens and table/column names, or require questions to have a certain syntactic structure. In contrast, our approach can generate arbitrary SQL queries, only uses lexical matching for entity names, and does not depend on syntactic parsing. We use a neural sequence-to-sequence model to directly generate SQL queries from natural language questions. This approach builds on recent work demonstrating that such models are effective for tasks such as machine translation (Bahdanau et al., 2015) and natural language generation (Kiddon et al., 2016). Recently, neural models have been successfully applied to semantic parsing with simpler meaning representation languages (Dong and Lapata, 2016; Jia and Liang, 2016) and short regular expressions (Locascio et al., 2016). Our work extends these results to the task of SQL generation. Finally, Ling et al. (2016) generate Java/Python code for trading cards given a natural language description; however, this system suffers from low overall accuracy. A final direction of related work studies methods for reducing the annotation effort required to train a semantic parser. Semantic parsers have been trained from various kinds of annotations, including labeled queries (Zelle and Mooney, 1996; Wong and Mooney, 2007; Zettlemoyer and Collins, 2005), question/answer pairs (Liang et al., 2013; Kwiatkowski et al., 2013; Berant et al., 1http://www.upwork.com 964 2013), distant supervision (Krishnamurthy and Mitchell, 2012; Choi et al., 2015), and binary correct/incorrect feedback signals (Clarke et al., 2010; Artzi and Zettlemoyer, 2013). Each of these schemes presents a particular trade-off between annotation effort and parser accuracy; however, recent work has suggested that labeled queries are the most effective (Yih et al., 2016). Our approach trains on fully labeled SQL queries to maximize accuracy, but uses binary feedback from users to reduce the number of queries that need to be labeled. Annotation effort can also be reduced by using crowd workers to paraphrase automatically generated questions (Wang et al., 2015); however, this approach may not generate the questions that users actually want to ask the database – an experiment in this paper demonstrated that 48% of users’ questions in a calendar domain could not be generated. 3 Feedback-based Learning Our feedback-based learning approach can be used to quickly deploy semantic parsers to create NLIDBs for any new domain. It is a simple interactive learning algorithm that deploys a preliminary semantic parser, then iteratively improves this parser using user feedback and selective query annotation. A key requirement of this algorithm is the ability to cheaply and efficiently annotate queries for chosen user utterances. 
We address this requirement by developing a model that directly outputs SQL queries (Section 4), which can also be produced by crowd workers. Our algorithm alternates between stages of training the model and making predictions to gather user feedback, with the goal of improving performance in each successive stage. The procedure is described in Algorithm 1. Our neural model N is initially trained on synthetic data T generated by domain-independent schema templates (see Section 4), and is then ready to answer new user questions, n. The results R of executing the predicted SQL query q are presented to the user, who provides a binary correct/incorrect feedback signal. If the user marks the result correct, the pair (n, q) is added to the training set. If the user marks the result incorrect, the algorithm asks a crowd worker to annotate the utterance with the correct query, q̂, and adds (n, q̂) to the training set. This procedure can be repeated indefinitely, ideally increasing parser accuracy and requesting fewer annotations in each successive stage.

    Procedure LEARN(schema)
        T ← initial_data(schema)
        while true do
            T ← T ∪ paraphrase(T)
            N ← train_model(T)
            for n ∈ new utterances do
                q ← predict(N, n)
                R ← execute(q)
                f ← feedback(R)
                if f = correct then
                    T ← T ∪ {(n, q)}
                else if f = wrong then
                    q̂ ← annotate(n)
                    T ← T ∪ {(n, q̂)}
                end
            end
        end
    end
Algorithm 1: Feedback-based learning.

4 Semantic Parsing to SQL

We use a neural sequence-to-sequence model to map natural language questions directly to SQL queries; this allows us to scale our feedback-based learning approach by easily crowdsourcing labels when necessary. We further present two data augmentation techniques which use content from the database schema and external paraphrase resources.

4.1 Model

We use an encoder-decoder model with global attention, similar to Luong et al. (2015), where the anonymized utterance (see Section 4.2) is encoded using a bidirectional LSTM network, then decoded to directly predict SQL query tokens. Fixed pre-trained word embeddings from word2vec (Mikolov et al., 2013) are concatenated to the embeddings that are learned for source tokens from the training data. The decoder predicts a conditional probability distribution over possible values for the next SQL token given the previous tokens, using a combination of the previous SQL token embedding, attention over the hidden states of the encoder network, and an attention signal from the previous time step.

Formally, if $\mathbf{q}_i$ represents an embedding for the $i$th SQL token $q_i$, the decoder distribution is

$$p(q_i \mid q_1, \ldots, q_{i-1}) \propto \exp\big(W \tanh(\hat{W}[h_i : c_i])\big)$$

where $h_i$ is the hidden state output of the decoder LSTM at the $i$th timestep, $c_i$ is the context vector generated using an attention-weighted sum of encoder hidden states based on $h_i$, and $W$ and $\hat{W}$ are linear transformations. If $s_j$ is the hidden representation generated by the encoder for the $j$th word in the utterance ($k$ words long), then the context vectors are defined to be:

$$c_i = \sum_{j=1}^{k} \alpha_{i,j} \cdot s_j$$

The attention weights $\alpha_{i,j}$ are computed using an inner product between the decoder hidden state for the current timestep, $h_i$, and the hidden representation of the $j$th source token, $s_j$:

$$\alpha_{i,j} = \frac{\exp(h_i^\top F s_j)}{\sum_{j'=1}^{k} \exp(h_i^\top F s_{j'})}$$

where $F$ is a linear transformation.
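To make the attention computation concrete, the following is a minimal NumPy sketch of a single decoding step under the equations above; the matrices W, W_hat, and F are random stand-ins for learned parameters, and all dimensions are illustrative only.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def decoder_step(h_i, S, W, W_hat, F):
    """One attention-based decoding step.

    h_i   : decoder hidden state at step i, shape (d,)
    S     : encoder hidden states s_1..s_k, shape (k, d)
    W     : output projection to SQL-vocabulary scores, shape (V, 2d)
    W_hat : projection applied to [h_i : c_i], shape (2d, 2d)
    F     : bilinear attention transformation, shape (d, d)
    """
    # Attention weights alpha_{i,j} proportional to exp(h_i^T F s_j).
    alpha = softmax(S @ (F @ h_i))            # shape (k,)
    # Context vector c_i = sum_j alpha_{i,j} * s_j.
    c_i = alpha @ S                           # shape (d,)
    # Distribution over the next SQL token: softmax(W tanh(W_hat [h_i : c_i])).
    p_next = softmax(W @ np.tanh(W_hat @ np.concatenate([h_i, c_i])))
    return p_next, alpha, c_i

# Toy usage with random parameters.
d, k, V = 4, 6, 10
rng = np.random.default_rng(0)
p_next, alpha, c_i = decoder_step(
    rng.normal(size=d), rng.normal(size=(k, d)),
    rng.normal(size=(V, 2 * d)), rng.normal(size=(2 * d, 2 * d)),
    rng.normal(size=(d, d)))
```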
The decoder LSTM cell $f$ computes the next hidden state $h_i$ and cell state $m_i$ based on the previous hidden and cell states $h_{i-1}, m_{i-1}$, the embedding of the previous SQL token $\mathbf{q}_{i-1}$, and the context vector of the previous timestep $c_{i-1}$:

$$h_i, m_i = f(h_{i-1}, m_{i-1}, \mathbf{q}_{i-1}, c_{i-1})$$

We apply dropout on non-recurrent connections for regularization, as suggested by Pham et al. (2014). Beam search is used for decoding the SQL queries after learning.

4.2 Entity Anonymization

We handle entities in the utterances and SQL by replacing them with their types, using incremental numbering to model multiple entities of the same type (e.g., CITY_NAME_1). During training, when the SQL is available, we infer the type from the associated column name; for example, Boston is a city in city.city_name = ’Boston’. To recognize entities in the utterances at test time, we build a search engine on all entities from the target database. For every span of words (starting with a high span size and progressively reducing it), we query the search engine using a TF-IDF scheme to retrieve the entity that most closely matches the span, then replace the span with the entity’s type. We store these mappings and apply them to the generated SQL to fill in the entity names. TF-IDF matching allows some flexibility in matching entity names in utterances; for example, a user could say Donald Knuth instead of Donald E. Knuth.

4.3 Data Augmentation

We present two data augmentation strategies that either (1) provide the initial training data to start the interactive learning, before more labeled examples become available, or (2) use external paraphrase resources to improve generalization.

Schema Templates. To bootstrap the model to answer simple questions initially, we defined 22 language/SQL templates that are schema-agnostic, so they can be applied to any database. These templates contain slots whose values are populated given a database schema. An example template is shown in Figure 2a. The <ENT> types represent tables in the database schema, <ENT>.<COL> represents a column in the particular table, and <ENT>.<COL>.<TYPE> represents the type associated with the particular column. A template is instantiated by first choosing the entities and attributes. Next, join conditions, i.e., JOIN_FROM and JOIN_WHERE clauses, are generated from the tables on the shortest path between the chosen tables in the database schema graph, which connects tables (graph nodes) using foreign key constraints. Figure 2b shows an instantiation of a template using the path author - writes - paper - paperDataset - dataset. SQL queries generated in this manner are guaranteed to be executable on the target database. On the language side, an English name of each entity is plugged into the template to generate an utterance for the query.

Paraphrasing. The second data augmentation strategy uses the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013) to automatically generate paraphrases of training utterances. Such methods have been recently used to improve performance for parsing to logical forms (Chen et al., 2016). PPDB contains over 220 million paraphrase pairs divided into 6 sets (small to XXXL) based on precision of the paraphrases. We use the one-one and one-many paraphrases from the large version of PPDB. To paraphrase a training utterance, we pick a random word in the utterance that is not a stop word or entity and replace it with a random paraphrase.
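As a concrete illustration of this paraphrase-based augmentation, here is a minimal sketch; the tiny in-memory paraphrase table and stop-word list are hypothetical stand-ins for PPDB and a real stop-word list, and the entity positions are assumed to be known from the anonymization step.

```python
import random

# Hypothetical stand-in for the one-one/one-many paraphrases from PPDB (large).
PARAPHRASES = {
    "papers": ["articles", "publications"],
    "wrote": ["authored"],
    "show": ["list", "display"],
}
STOP_WORDS = {"the", "a", "of", "by", "in", "all"}

def paraphrase_utterance(tokens, entity_positions, rng=random):
    """Replace one random non-stop-word, non-entity token with a random paraphrase."""
    candidates = [i for i, tok in enumerate(tokens)
                  if i not in entity_positions
                  and tok.lower() not in STOP_WORDS
                  and tok.lower() in PARAPHRASES]
    if not candidates:
        return None  # nothing paraphrasable in this utterance
    i = rng.choice(candidates)
    new_tokens = list(tokens)
    new_tokens[i] = rng.choice(PARAPHRASES[tokens[i].lower()])
    return new_tokens

print(paraphrase_utterance("show all papers by AUTHOR_NAME_1".split(),
                           entity_positions={4}))
```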
We perform paraphrase expansion on all examples labeled during learning, as well as the initial seed examples from schema templates.

Figure 2: (a) Example schema template consisting of a question and SQL query with slots to be filled with database entities, columns, and values; (b) Entity-anonymized training example generated by applying the template to an academic database.

(a) Schema template:
    Get all <ENT1>.<NAME> having <ENT2>.<COL1>.<NAME> as <ENT2>.<COL1>.<TYPE>
    SELECT <ENT1>.<DEF>
    FROM JOIN_FROM(<ENT1>, <ENT2>)
    WHERE JOIN_WHERE(<ENT1>, <ENT2>)
      AND <ENT2>.<COL1> = <ENT2>.<COL1>.<TYPE>

(b) Generated utterance-SQL pair:
    Get all author having dataset as DATASET_TYPE
    SELECT author.authorId
    FROM author, writes, paper, paperDataset, dataset
    WHERE author.authorId = writes.authorId
      AND writes.paperId = paper.paperId
      AND paper.paperId = paperDataset.paperId
      AND paperDataset.datasetId = dataset.datasetId
      AND dataset.datasetName = DATASET_TYPE

5 Benchmark Experiments

Our first set of experiments demonstrates that our semantic parsing model has comparable accuracy to previous work, despite the increased difficulty of directly producing SQL. We demonstrate this result by running our model on two benchmark datasets for semantic parsing, GEO880 and ATIS.

5.1 Data sets

GEO880 is a collection of 880 utterances issued to a database of US geographical facts (Geobase), originally in Prolog format. Popescu et al. (2003) created a relational database schema for Geobase together with SQL queries for a subset of 700 utterances. To compare against prior work on the full corpus, we annotated the remaining utterances and used the standard 600/280 training/test split (Zettlemoyer and Collins, 2005). ATIS is a collection of 5,418 utterances to a flight booking system, accompanied by a relational database and SQL queries to answer the questions. We use 4,473 utterances for training, 497 for development and 448 for test, following Kwiatkowski et al. (2011). The original SQL queries were very inefficient to execute due to the use of IN clauses, so we converted them to joins (Ramakrishnan and Gehrke, 2003) while verifying that the output of the queries was unchanged.

Table 1 shows characteristics of both data sets. GEO880 has shorter queries but is more compositional: almost 40% of the SQL queries have at least one nested subquery. ATIS has the longest utterances and queries, with an average utterance length of 11 words and an average SQL query length of 67 tokens. They also operate on approximately 6 tables per query on average. We will release our processed versions of both datasets.

Table 1: Utterance and SQL query statistics for each dataset. Vocabulary sizes are counted after entity anonymization.
                                    Geo880     ATIS    SCHOLAR
    Avg. NL length                    7.56    10.97       6.69
    NL vocab size                      151      808        303
    Avg. SQL length                  16.06    67.01      28.85
    SQL vocab size                      89      605        163
    Queries with > 1 subquery (%)     39.8    12.42       2.58
    Avg. # tables per query           1.19     5.88       3.33

5.2 Experimental Methodology

We follow a standard train/dev/test methodology for our experiments. The training set is augmented using schema templates and 3 paraphrases per training example, as described in Section 4. Utterances were anonymized by replacing entities with their corresponding types, and all words that occur only once were replaced by UNK symbols. The development set is used for hyperparameter tuning and early stopping. For GEO880, we use cross validation on the training set to tune hyperparameters.
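To make the preprocessing just described concrete, the sketch below anonymizes entity spans to typed placeholders and replaces singleton words with UNK; the exact-match lexicon lookup is a simplified stand-in for the TF-IDF search engine of Section 4.2, and the placeholder naming (e.g., CITY_NAME_1) follows the convention described there.

```python
from collections import Counter

def anonymize(tokens, entity_lexicon):
    """Replace known entity spans with typed placeholders (e.g., CITY_NAME_1),
    trying longer spans first, and remember the mapping for de-anonymization.
    `entity_lexicon` maps an entity string to its type; exact matching here
    stands in for the TF-IDF search engine of Section 4.2."""
    mapping, counts, out, i = {}, Counter(), [], 0
    while i < len(tokens):
        for span in range(min(5, len(tokens) - i), 0, -1):  # high span size first
            phrase = " ".join(tokens[i:i + span])
            if phrase in entity_lexicon:
                etype = entity_lexicon[phrase]
                counts[etype] += 1
                placeholder = f"{etype}_{counts[etype]}"
                mapping[placeholder] = phrase
                out.append(placeholder)
                i += span
                break
        else:  # no entity span starts here
            out.append(tokens[i])
            i += 1
    return out, mapping

def replace_singletons(utterances, min_count=2):
    """Replace words occurring only once in the training utterances with UNK."""
    freq = Counter(tok for utt in utterances for tok in utt)
    return [[tok if freq[tok] >= min_count else "UNK" for tok in utt]
            for utt in utterances]

toks, mapping = anonymize("how many people live in new york".split(),
                          {"new york": "CITY_NAME"})
# toks -> [... "in", "CITY_NAME_1"], mapping -> {"CITY_NAME_1": "new york"}
```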
We used a minibatch size of 100 and used Adam (Kingma and Ba, 2015) with a learning rate of 0.001 for 70 epochs for all our experiments. We used a beam size of 5 for decoding. We report test set accuracy of our SQL query predictions by executing them on the target database and comparing the result with the true result.

5.3 Results

Tables 2 and 3 show test accuracies based on denotations for our model on GEO880 and ATIS respectively, compared with previous work (note that 2.8% of GEO880 and 5% of ATIS gold test set SQL queries, before any processing, produced empty results). To our knowledge, this is the first result on directly parsing to SQL to achieve comparable performance to prior work without using any database-specific feature engineering.

Table 2: Accuracy of SQL query results on the Geo880 corpus. * evaluated on the Geo700 subset; ⋄ converts to logical forms instead of SQL; † measures accuracy in terms of obtaining the correct logical form, while other systems, including ours, use denotations.
    System                                   Acc.
    Ours (SQL)                               82.5
    Popescu et al. (2003) (SQL)              77.5*
    Giordani and Moschitti (2012) (SQL)      87.2*
    Dong and Lapata (2016)                   84.6⋄†
    Jia and Liang (2016)                     89.3⋄
    Liang et al. (2013)                      91.1⋄

Table 3: Accuracy of SQL query results on ATIS. ⋄ converts to logical forms instead of SQL; † measures accuracy in terms of obtaining the correct logical form, while other systems, including ours, use denotations.
    System                                   Acc.
    Ours (SQL)                               79.24
    GUSP (Poon, 2013) (SQL)                  74.8
    GUSP++ (Poon, 2013) (SQL)                83.5
    Zettlemoyer and Collins (2007)           84.6⋄†
    Dong and Lapata (2016)                   84.2⋄†
    Jia and Liang (2016)                     83.3⋄
    Wang et al. (2014)                       91.3⋄†

Popescu et al. (2003) and Giordani and Moschitti (2012) also directly produce SQL queries, but on a subset of 700 examples from GEO880. The former only works on semantically tractable utterances where words can be unambiguously mapped to schema elements, while the latter uses a reranking approach that also limits the complexity of SQL queries that can be handled. GUSP (Poon, 2013) creates an intermediate representation that is then deterministically converted to SQL to obtain an accuracy of 74.8% on ATIS, which is boosted to 83.5% using manually introduced disambiguation rules. However, it requires a lot of SQL-specific engineering (for example, special nodes for argmax) and is hard to extend to more complex SQL queries.

On both datasets, our SQL model achieves reasonably high accuracies approaching those of the best non-SQL results. Most relevant to this work are the neural sequence-based approaches of Dong and Lapata (2016) and Jia and Liang (2016). We note that Jia and Liang (2016) use a data recombination technique that boosts accuracy from 85.0 to 89.3 on GEO880 and from 76.3 to 83.3 on ATIS; this technique is also compatible with our model and we hope to experiment with this in future work. Our results demonstrate that these models are powerful enough to directly produce SQL queries. Thus, our methods enable us to utilize the full expressivity of the SQL language without any extensions that certain logical representations require to answer more complex queries.

Table 4: Addition of paraphrases to the training set helps performance, but template-based data augmentation does not significantly help in the fully supervised setting. Accuracies are reported on the standard dev set for ATIS and on the training set, using cross-validation, for Geo880.
    System           GEO880    ATIS
    Ours               84.8    86.2
    - paraphrases      81.8    84.3
    - templates        84.7    85.7
More importantly, it can be immediately deployed for users in new domains, with a large programming community available for annotation, and thus fits effectively into a framework for interactive learning. We perform ablation studies on the development sets (see Table 4) and find that paraphrasing using PPDB consistently helps boost performance. However, unlike in the interactive experiments (Section 6), data augmentation using schema templates does not improve performance in the fully supervised setting.

6 Interactive Learning Experiments

In this section, we learn a semantic parser for an academic domain from scratch by deploying an online system using our interactive learning algorithm (Section 3). After three train-deploy cycles, the system correctly answered 63.51% of users’ questions. To our knowledge, this is the first effort to learn a semantic parser using a live system, and it is enabled by our models that can directly parse language to SQL without manual intervention.

6.1 User Interface

We developed a web interface for accepting natural language questions to an academic database from users, using our model to generate a SQL query, and displaying the results after execution. Several example utterances are also displayed to help users understand the domain. Together with the results of the generated SQL query, users are prompted to provide feedback, which is used for interactive learning. Screenshots of our interface are included in our Supplementary Materials.

Collecting accurate user feedback on predicted queries is a key challenge in the interactive learning setting for two reasons. First, the system’s results can be incorrect due to poor entity identification or incompleteness in the database, neither of which is under the semantic parser’s control. Second, it can be difficult for users to determine whether the presented results are in fact correct. This determination is especially challenging if the system responds with the correct type of result, for example, if the user requests “papers at ACL 2016” and the system responds with all ACL papers.

We address this challenge by providing users with two assists for understanding the system’s behavior, and by allowing users to provide more granular feedback than simply correct/incorrect. The first assist is type highlighting, which highlights entities identified in the utterance, for example, “paper by Michael I. Jordan (AUTHOR) in ICRA (VENUE) in 2016 (YEAR).” This assist is especially helpful because the academic database contains noisy keyword and dataset tables that were automatically extracted from the papers. The second assist is utterance paraphrasing, which shows the user another utterance that maps to the same SQL query. For example, for the above query, the system may show “what papers does Michael I. Jordan (AUTHOR) have in ICRA (VENUE) in 2016 (YEAR).” This assist only appears if a matching query (after entity anonymization) exists in the model’s training set.

Using these assists and the predicted results, users are asked to select from five feedback options: Correct, Wrong Types, Incomplete Result, Wrong Result, and Can’t Tell. The Correct and Wrong Result options represent scenarios where the user is satisfied with the result or the result is identifiably wrong, respectively. Wrong Types indicates incorrect entity identification, which can be determined from type highlighting. Incomplete Result indicates that the query is correct but the result is not; this outcome can occur because the database is incomplete.
Can’t Tell indicates that the user is unsure about the feedback to provide.

6.2 Three-Stage Online Experiment

In this experiment, using our developed user interface, we use Algorithm 1 to learn a semantic parser from scratch. The experiment had three stages; in each stage, we recruited 10 new users (computer science graduate students) and asked them to issue at least 10 utterances each to the system and to provide feedback on the results. We considered results marked as either Correct or Incomplete Result as correct queries for learning. The remaining incorrect utterances were sent to a crowd worker for annotation and were used to retrain the system for the next stage. The crowd worker had prior experience in writing SQL queries and was hired from Upwork after completing a short SQL test. The worker was also given access to the database to be able to execute the queries and ensure that they were correct. For the first stage, the system was trained using 640 examples generated using templates, which were augmented to 1746 examples using paraphrasing (see Section 4.3). The complexity of the utterances issued in each of the three stages was comparable, in that the average length of the correct SQL query for the utterances and the number of tables required to be queried were similar.

Table 5 shows the percent of utterances judged by users as either Correct or Incomplete Result in each stage. In the first stage, we do not have any labeled examples, and the model is trained using only synthetically generated data from schema templates and paraphrases (see Section 4.3). Despite the lack of real examples, the system correctly answers 25% of questions. The system’s accuracy increases and annotation effort decreases in each successive stage as additional utterances are contributed and incorrect utterances are labeled. This result demonstrates that we can successfully build semantic parsers for new domains by using neural models to generate SQL with crowdsourced annotations driven by user feedback.

Table 5: Percentage of utterances marked as Correct or Incomplete by users, in each stage of our online experiment.
                     Stage 1    Stage 2    Stage 3
    Accuracy (%)          25       53.7       63.5

We analyzed the feedback signals provided by the users in the final stage of the experiment to measure the quality of feedback. We found that 22.3% of the generated queries did not execute (and hence were incorrect). 6.1% of correctly generated queries were marked wrong by users (see Table 6). This erroneous feedback results in redundant annotation of already correct examples. The main cause of this erroneous feedback was incomplete data for aggregation queries, where users chose Wrong instead of Incomplete. 6.3% of incorrect queries were erroneously deemed correct by users. It is important that this fraction be low, as these queries become incorrectly-labeled examples in the training set that may contribute to the deterioration of model accuracy over time. This quality of feedback is already sufficient for our neural models to improve with usage, and creating better interfaces to make feedback more accurate is an important task for future work.

Table 6: Error rates of user feedback when the SQL is correct and incorrect. The Correct and Incomplete Result options are erroneous if the SQL query is incorrect, and vice versa for correct queries.
    Feedback         Error Rate (%)
    Correct SQL                 6.1
    Incorrect SQL               6.3

6.3 SCHOLAR dataset

We release a new semantic parsing dataset for academic database search using the utterances gathered in the user study.
We augment these labeled utterances with additional utterances labeled by crowd workers (these additional utterances were not used in the online experiment). The final dataset comprises 816 natural language utterances labeled with SQL, divided into a 600/216 train/test split. We also provide a database on which to execute these queries, containing academic papers with their authors, citations, journals, keywords, and datasets used. Table 1 shows statistics of this dataset. Our parser achieves an accuracy of 67% on this train/test split in the fully supervised setting. In comparison, a nearest-neighbor strategy that uses cosine similarity over a TF-IDF representation of the utterances yields an accuracy of 52.75%. We found that 15% of the predicted queries did not execute, predominantly owing to (1) accessing table columns without joining with those tables, and (2) generating incorrect types that could not be deanonymized using the utterance. The main types of errors in the remaining well-formed queries that produced incorrect results were that (1) portions of the utterance (such as ‘top’ and ‘cited by both’) were ignored, and (2) some types from the utterance were not transferred to the SQL query.

[Figure 3: Accuracy as a function of batch number in simulated interactive learning experiments on Geo880 (top) and ATIS (bottom); x-axis: stages, y-axis: fraction correct; curves compare our full approach with variants without templates and without paraphrasing.]

6.4 Simulated Interactive Experiments

We conducted additional simulated interactive learning experiments using GEO880 and ATIS to better understand the behavior of our train-deploy feedback loop, the effects of our data augmentation approaches, and the annotation effort required. We randomly divide each training set into K batches and present these batches sequentially to our interactive learning algorithm. Correctness feedback is provided by comparing the result of the predicted query to that of the gold query, i.e., we assume that users are able to perfectly distinguish correct results from incorrect ones. Figure 3 shows accuracies on GEO880 and ATIS, respectively, for each batch when the model is trained on all previous batches. As in the live experiment, accuracy improves with successive batches. Data augmentation using templates helps in the initial stages of GEO880, but its advantage is reduced as more labeled data is obtained. Templates did not improve accuracy on ATIS, possibly because most ATIS queries involve two entities, i.e., a source city and a destination city, whereas our templates only generate questions with a single entity type. Nevertheless, templates are important in a live system to motivate users to interact with it in the early stages. As observed before, paraphrasing improves performance at all stages.

Table 7 shows the percent of examples that require annotation using various batch sizes for GEO880. Smaller batch sizes reduce annotation effort, with a batch size of 50 requiring only 54.3% of the examples to be annotated. This result demonstrates that more frequent deployments of improved models lead to fewer mistakes.

Table 7: Percentage of examples that required annotation (i.e., where the model initially made an incorrect prediction) on GEO880 vs. batch size.
    Batch Size     150     100      50
    % Wrong       70.2    60.4    54.3
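A minimal sketch of this simulated train-deploy loop is shown below; train, predict, and execute are placeholders for the model-training, decoding, and query-execution routines, and "perfect" feedback is simulated by comparing denotations of the predicted and gold SQL, as described above.

```python
def simulate_interactive_learning(batches, initial_data, train, predict, execute):
    """Simulated version of Algorithm 1: gold examples arrive in batches, and
    user feedback is simulated by comparing the denotations of the predicted
    and gold SQL queries. `train`, `predict`, and `execute` are placeholders
    for the actual model-training, decoding, and query-execution routines."""
    labeled = list(initial_data)          # synthetic template/paraphrase seed data
    needed_annotation = 0
    for batch in batches:                 # each batch: list of (utterance, gold_sql)
        model = train(labeled)            # retrain before "deploying" on the batch
        for utterance, gold_sql in batch:
            pred_sql = predict(model, utterance)
            feedback_correct = execute(pred_sql) == execute(gold_sql)
            if feedback_correct:
                labeled.append((utterance, pred_sql))  # user-confirmed parse
            else:
                labeled.append((utterance, gold_sql))  # simulated crowd annotation
                needed_annotation += 1
    return labeled, needed_annotation
```

Smaller batches correspond to more frequent retraining, which is what drives down the fraction of examples requiring annotation in Table 7.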
7 Conclusion We describe an approach to rapidly train a semantic parser as a NLIDB that iteratively improves parser accuracy over time while requiring minimal intervention. Our approach uses an attentionbased neural sequence-to-sequence model, with data augmentation from the target database and paraphrasing, to parse utterances to SQL. This model is deployed in an online system, where user feedback on its predictions is used to select utterances to send for crowd worker annotation. We find that the semantic parsing model is comparable in performance to previous systems that either map from utterances to logical forms, or generate SQL, on two benchmark datasets, GEO880 and ATIS. We further demonstrate the effectiveness of our online system by learning a semantic parser from scratch for an academic domain. A key advantage of our approach is that it is not language-specific, and can easily be ported to other commonly used query languages, such as SPARQL or ElasticSearch. Finally, we also release a new dataset of utterances and SQL queries for an academic domain. Acknowledgments The research was supported in part by DARPA, under the DEFT program through the AFRL (FA8750-13-2-0019), the ARO (W911NF-16-10121), the NSF (IIS-1252835, IIS-1562364, IIS1546083, IIS-1651489, CNS-1563788), the DOE (DE-SC0016260), an Allen Distinguished Investigator Award, and gifts from NVIDIA, Adobe, and Google. The authors thank Rik Koncel-Kedziorski and the anonymous reviewers for their helpful comments. References Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage CCG semantic parsing with AMR. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1699– 1710. https://doi.org/10.18653/v1/D15-1198. Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics 1(1):49–62. http://aclweb.org/anthology/Q13-1005. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 2015 International Conference on Learning Representations. CBLS, San Diego, California. http://arxiv.org/abs/1409.0473. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1533–1544. http://aclweb.org/anthology/D13-1160. Bo Chen, Le Sun, Xianpei Han, and Bo An. 2016. Sentence rewriting for semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 766–777. http://www.aclweb.org/anthology/P16-1073. Eunsol Choi, Tom Kwiatkowski, and Luke Zettlemoyer. 2015. Scalable semantic parsing with partial ontologies. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 1311–1320. https://doi.org/10.3115/v1/P151127. 971 James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the world’s response. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning. 
Association for Computational Linguistics, pages 18–27. http://aclweb.org/anthology/W10-2903. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 33–43. https://doi.org/10.18653/v1/P16-1004. Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 758–764. http://aclweb.org/anthology/N13-1092. Alessandra Giordani and Alessandro Moschitti. 2012. Translating questions to SQL queries with generative parsers discriminatively reranked. In Proceedings of COLING 2012: Posters. The COLING 2012 Organizing Committee, pages 401–410. http://aclweb.org/anthology/C12-2040. Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 12–22. https://doi.org/10.18653/v1/P16-1002. Chlo´e Kiddon, Luke Zettlemoyer, and Yejin Choi. 2016. Globally coherent text generation with neural checklist models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 329–339. http://aclweb.org/anthology/D16-1032. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. Jayant Krishnamurthy and Tom Mitchell. 2012. Weakly supervised training of semantic parsers. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, pages 754–765. http://aclweb.org/anthology/D12-1069. Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Baltimore, Maryland, pages 271–281. http://www.aclweb.org/anthology/P14-1026. Nate Kushman and Regina Barzilay. 2013. Using semantic unification to generate regular expressions from natural language. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Seattle, Washington, USA, pages 1545–1556. http://www.aclweb.org/anthology/D131161. Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2011. Lexical generalization in CCG grammar induction for semantic parsing. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1512–1523. http://aclweb.org/anthology/D11-1140. Percy Liang, I. Michael Jordan, and Dan Klein. 2013. Learning dependency-based compositional semantics. Computational Linguistics 39(2). https://doi.org/10.1162/COLI a 00127. 
Wang Ling, Phil Blunsom, Edward Grefenstette, Moritz Karl Hermann, Tom´aˇs Koˇcisk´y, Fumin Wang, and Andrew Senior. 2016. Latent predictor networks for code generation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 599– 609. https://doi.org/10.18653/v1/P16-1057. Nicholas Locascio, Karthik Narasimhan, Eduardo De Leon, Nate Kushman, and Regina Barzilay. 2016. Neural generation of regular expressions from natural language with minimal domain knowledge. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 1918–1923. https://aclweb.org/anthology/D16-1197. Thang Luong, Hieu Pham, and D. Christopher Manning. 2015. Effective approaches to attentionbased neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1412–1421. https://doi.org/10.18653/v1/D15-1166. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. pages 3111–3119. Kumar Dipendra Misra and Yoav Artzi. 2016. Neural shift-reduce CCG semantic parsing. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association 972 for Computational Linguistics, pages 1775–1786. http://aclweb.org/anthology/D16-1183. V. Pham, T. Bluche, C. Kermorvant, and J. Louradour. 2014. Dropout improves recurrent neural networks for handwriting recognition. In 2014 14th International Conference on Frontiers in Handwriting Recognition. pages 285–290. https://doi.org/10.1109/ICFHR.2014.55. Hoifung Poon. 2013. Grounded unsupervised semantic parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 933–943. http://aclweb.org/anthology/P13-1092. Ana-Maria Popescu, Oren Etzioni, and Henry Kautz. 2003. Towards a theory of natural language interfaces to databases. In Proceedings of the 8th international conference on Intelligent user interfaces. ACM, pages 149–157. Raghu Ramakrishnan and Johannes Gehrke. 2003. Database Management Systems. McGraw-Hill, Inc., New York, NY, USA, 3 edition. Subhro Roy, Shyam Upadhyay, and Dan Roth. 2016. Equation parsing : Mapping sentences to grounded equations. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1088–1097. http://aclweb.org/anthology/D16-1117. Adrienne Wang, Tom Kwiatkowski, and Luke Zettlemoyer. 2014. Morpho-syntactic lexical generalization for CCG semantic parsing. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 1284–1295. https://doi.org/10.3115/v1/D14-1135. I. Sida Wang, Percy Liang, and D. Christopher Manning. 2016. Learning language games through interaction. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 2368–2378. https://doi.org/10.18653/v1/P16-1224. Yushi Wang, Jonathan Berant, and Percy Liang. 2015. Building a semantic parser overnight. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 1332–1342. https://doi.org/10.3115/v1/P15-1129. Wah Yuk Wong and Raymond Mooney. 2007. Generation by inverting a semantic parser that uses statistical machine translation. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference. Association for Computational Linguistics, pages 172–179. http://aclweb.org/anthology/N071022. Wen-tau Yih, Matthew Richardson, Chris Meek, MingWei Chang, and Jina Suh. 2016. The value of semantic parse labeling for knowledge base question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, pages 201–206. https://doi.org/10.18653/v1/P16-2033. John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of the Thirteenth National Conference on Artificial Intelligence. Luke Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL). http://aclweb.org/anthology/D07-1071. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: structured classification with probabilistic categorial grammars. In UAI ’05, Proceedings of the 21st Conference in Uncertainty in Artificial Intelligence. 973
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 90–101 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1009 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 90–101 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1009 Joint Learning for Event Coreference Resolution Jing Lu and Vincent Ng Human Language Technology Research Institute University of Texas at Dallas Richardson, TX 75083-0688 {ljwinnie,vince}@hlt.utdallas.edu Abstract While joint models have been developed for many NLP tasks, the vast majority of event coreference resolvers, including the top-performing resolvers competing in the recent TAC KBP 2016 Event Nugget Detection and Coreference task, are pipelinebased, where the propagation of errors from the trigger detection component to the event coreference component is a major performance limiting factor. To address this problem, we propose a model for jointly learning event coreference, trigger detection, and event anaphoricity. Our joint model is novel in its choice of tasks and its features for capturing cross-task interactions. To our knowledge, this is the first attempt to train a mention-ranking model and employ event anaphoricity for event coreference. Our model achieves the best results to date on the KBP 2016 English and Chinese datasets. 1 Introduction Within-document event coreference resolution is the task of determining which event mentions in a text refer to the same real-world event. Compared to entity coreference resolution, event coreference resolution is not only much less studied, but it is arguably more challenging. The challenge stems in part from the fact that an event coreference resolver typically lies towards the end of the standard information extraction pipeline, assuming as input the noisy outputs of its upstream components. One such component is the trigger detection system, which is responsible for identifying event triggers and determining their event subtypes. As is commonly known, trigger detection is another challenging task that is far from being solved. In fact, in the recent TAC KBP 2016 Event Nugget Detection and Coreference task, trigger detection (a.k.a. event nugget detection in KBP) is deliberately made more challenging by focusing only on detecting the 18 subtypes of triggers on which the KBP 2015 participating systems’ performances were the poorest (Mitamura et al., 2016). The best-performing KBP 2016 system on English trigger detection achieved only an F-score of 47 (Lu and Ng, 2016).1 Given the difficulty of trigger detection, it is conceivable that many errors will propagate from the trigger detection component to the event coreference component in any pipeline architecture where trigger detection precedes event coreference resolution. These trigger detection errors could severely harm event coreference performance. For instance, two event mentions could be wrongly posited as coreferent if the underlying triggers were wrongly predicted to have the same subtype. Nevertheless, the top-performing systems in the KBP 2016 event coreference task all adopted the aforementioned pipeline architecture (Liu et al., 2016; Lu and Ng, 2016; Nguyen et al., 2016). 
Their performances are not particularly impressive, however: the best English event coreference F-score (averaged over four scoring metrics) is only around 30%. To address this error propagation problem, we describe a joint model of trigger detection, event coreference, and event anaphoricity in this paper. Our choice of these three tasks is motivated in part by their inter-dependencies. As mentioned above, it is well-known that trigger detection performance has a huge impact on event coreference performance. Though largely underinvestigated, event coreference could also improve 1This is the best English nugget type result in KBP 2016. In this paper, we will not be concerned with realis classification, as it does not play any role in event coreference. 90 trigger detection. For instance, if two event mentions are posited as coreferent, then the underlying triggers must have the same event subtype. While the use of anaphoricity information for entity coreference has been extensively studied (see Ng (2010)), to our knowledge there has thus far been no attempt to explicitly model event anaphoricity for event coreference.2 Although the mention-ranking model we employ for event coreference also allows an event mention to be posited as non-anaphoric (by resolving it to a null candidate antecedent), our decision to train a separate anaphoricity model and integrate it into our joint model is motivated in part by the recent successes of Wiseman et al. (2015), who showed that there are benefits in jointly training a noun phrase anaphoricity model and a mention-ranking model for entity coreference resolution. Finally, event anaphoricity and trigger detection can also mutually benefit each other. For instance, any verb posited as a non-trigger cannot be anaphoric, and any verb posited as anaphoric must be a trigger. Note that in our joint model, anaphoricity serves as an auxiliary task: its intended use is to improve trigger detection and event coreference, potentially mediating the interaction between trigger detection and event coreference. Being a structured conditional random field, our model encompasses two types of factors. Unary factors encode the features specific for each task. Binary and ternary factors capture the interaction between each pair of tasks in a soft manner, enabling the learner to learn which combinations of values of the output variables are more probable. For instance, the learner should learn that it is not a good idea to classify a verb both as anaphoric and as a non-trigger. Our model is similar in spirit to Durrett and Klein’s (2014) joint model for entity analysis, which performs joint learning for entity coreference, entity linking and semantic typing via the use of interaction features. Our contributions are two-fold. First, we present a joint model of event coreference, trigger detection, and anaphoricity that is novel in terms of the choice of tasks and the features used to capture cross-task interactions. Second, our model achieves the best results to date on the KBP 2016 English and Chinese event coreference tasks. 2Following the entity coreference literature, we overload the term anaphoricity, saying that an event mention is anaphoric if it is coreferent with a preceding mention in the associated text. 
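To illustrate the kind of scoring such a factor-based model performs, here is a minimal, heavily simplified sketch; the feature templates and weights below are hypothetical, and the actual model, its features, and its inference procedure are defined in Section 3.

```python
def joint_score(mentions, weights):
    """Unnormalized log-linear score of one joint assignment over the three tasks.
    Each mention is a dict with hypothetical keys: lemma, subtype (or "NONE"),
    anaphoric (bool), and antecedent ("NEW" or the id of a preceding mention)."""
    score = 0.0
    for m in mentions:
        # Unary, task-specific factors.
        score += weights.get(("subtype", m["lemma"], m["subtype"]), 0.0)
        score += weights.get(("anaphoric", m["lemma"], m["anaphoric"]), 0.0)
        # Cross-task factors, e.g., an anaphoric mention should also be a trigger,
        # and a non-anaphoric mention should start a NEW coreference cluster.
        score += weights.get(("ana+trigger", m["anaphoric"], m["subtype"] != "NONE"), 0.0)
        score += weights.get(("ana+coref", m["anaphoric"], m["antecedent"] == "NEW"), 0.0)
    return score  # the model's probability is proportional to exp(score)

# A weight that discourages "anaphoric but not a trigger" assignments.
w = {("ana+trigger", True, False): -2.0}
print(joint_score([{"lemma": "leave", "subtype": "NONE",
                    "anaphoric": True, "antecedent": "NEW"}], w))  # -2.0
```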
2 Definitions, Task, and Corpora 2.1 Definitions We employ the following definitions in our discussion of trigger detection and event coreference: • An event mention is an explicit occurrence of an event consisting of a textual trigger, arguments or participants (if any), and the event type/subtype. • An event trigger is a string of text that most clearly expresses the occurrence of an event, usually a word or a multi-word phrase • An event argument is an argument filler that plays a certain role in an event. • An event coreference chain (a.k.a. an event hopper) is a group of event mentions that refer to the same real-world event. They must have the same event (sub)type. To understand these definitions, consider the example in Table 1, which contains two coreferent event mentions, ev1 and ev2. left is the trigger for ev1 and departed is the trigger for ev2. Both triggers have subtype Movement.TransportPerson. ev1 has three arguments, Georges Cipriani, prison, and Wednesday with roles Person, Origin, and Time respectively. ev2 also has three arguments, He, Ensisheim, and police vehicle with roles Person, Origin, and Instrument respectively. 2.2 Task The version of the event coreference task we focus on in this paper is the Event Nugget Detection and Coreference task in the TAC KBP 2016 Event Track. While we discuss the role played by event arguments in event coreference in the previous subsection, KBP 2016 addresses event argument detection as a separate shared task. In other words, the KBP 2016 Event Nugget Detection and Coreference task focuses solely on trigger detection and event coreference. It is worth mentioning that the KBP Event Nugget Detection and Coreference task, which started in 2015, aims to address a major weakness of the ACE 2005 event coreference task. Specifically, ACE 2005 adopts a strict notion of event identity, with which two event mentions were annotated as coreferent if and only if “they had the same agent(s), patient(s), time, and location” (Song et al., 2015), and their event attributes (polarity, modality, genericity, and tense) were not incompatible. In contrast, KBP adopts a more relaxed definition of event coreference, allowing two 91 Georges Cipriani[P erson], {left}ev1 the prison[Origin] in Ensisheim in northern France on parole on Wednesday[T ime]. He[P erson] {departed}ev2 Ensisheim[Origin] in a police vehicle[Instrument] bound for an open prison near Strasbourg. Table 1: Event coreference resolution example. event mentions to be coreferent as long as they intuitively refer to the same real-world event. Under this definition, two event mentions can be coreferent even if their time and location arguments are not coreferent. In our example in Table 1, ev1 and ev2 are coreferent in KBP because they both refer to the same event of Cipriani leaving the prison. However, they are not coreferent in ACE because their Origin arguments are not coreferent (one Origin argument involves a prison in Ensisheim while the other involves the city Ensisheim). 2.3 Corpora Given our focus on the KBP 2016 Event Nugget Detection and Coreference task, we employ the English and Chinese corpora used in this task for evaluation, referring to these corpora as the KBP 2016 English and Chinese corpora for brevity. There are no official training sets: the task organizers simply made available a number of event coreference-annotated corpora for training. For English, we use LDC2015E29, E68, E73, and E94 for training. 
These corpora are composed of two types of documents, newswire documents and discussion forum documents. Together they contain 648 documents with 18739 event mentions distributed over 9955 event coreference chains. For Chinese, we use LDC2015E78, E105, and E112 for training. These corpora are composed of discussion forum documents only. Together they contain 383 documents with 4870 event mentions distributed over 3614 event coreference chains. The test set for English consists of 169 newswire and discussion forum documents with 4155 event mentions distributed over 3191 event coreference chains. The test set for Chinese consists of 167 newswire and discussion forum documents with 2518 event mentions distributed over 1912 event coreference chains. Note that these test sets contain only annotations for event triggers and event coreference (i.e., there are no event argument annotations). While some of the training sets additionally contain event argument annotations, we do not make use of event argument annotations in model training to ensure a fairer comparison to the teams participating in the KBP 2016 Event Nugget Detection and Coreference task. 3 Model 3.1 Overview Our model, which is a structured conditional random field, operates at the document level. Specifically, given a test document, we first extract from it (1) all single-word nouns and verbs and (2) all words and phrases that have appeared at least once as a trigger in the training data. We treat each of these extracted words and phrases as a candidate event mention.3 The goal of the model is to make joint predictions for the candidate event mentions in a document. Three predictions will be made for each candidate event mention that correspond to the three tasks in the model: its trigger subtype, its anaphoricity, and its antecedent. Given this formulation, we define three types of output variables: • Event subtype variables t = (t1, . . . , tn). Each ti takes a value in the set of 18 event subtypes defined in KBP 2016 or NONE, which indicates that the event mention is not a trigger. • Anaphoricity variables a = (a1, . . . , an). Each ai is either ANAPHORIC or NOT ANAPHORIC. • Coreference variables c = (c1, . . . , cn), where ci ∈{1, . . . , i −1, NEW}. In other words, the value of each ci is the id of its antecedent, which can be one of the preceding event mentions or NEW (if the event mention underlying ci starts a new cluster). Each candidate event mention is associated with exactly one coreference variable, one event subtype variable, and one anaphoricity variable. Our model induces the following log-linear probability distribution over these variables: p(t, a, c|x; Θ) ∝exp( X i θifi(t, a, c,x)) 3According to the KBP annotation guidelines, each word may trigger multiple event mentions (e.g., murder can trigger two event mentions with subtypes Life.Die and Conflict.Attack). Hence, our treating each extracted word as a candidate event mention effectively prevents a word from triggering multiple event mentions. Rather than complicate model design by relaxing this simplifying assumption, we present an alternative, though partial, solution to this problem wherein we allow each event mention to be associated with multiple event subtypes. See the Appendix for details. 92 Figure 1: Unary factors for the three tasks, the variables they are connected to, and the possible values of the variables. Unary factors encode taskspecific features. Each factor is connected to the corresponding output node. 
The features associated with a factor are used to predict the value of the output node it is connected to when a model is run independently of other models. where θi ∈Θ is the weight associated with feature function fi and x is the input document. 3.2 Features Given that our model is a structured conditional random field, the features can be divided into two types: (1) task-specific features, and (2) crosstask features, which capture the interactions between a pair of tasks. We express these two types of features in factor graph notation. The taskspecific features are encoded in unary factors, each of which is connected to the corresponding variable (Figure 1). The cross-task features are encoded in binary or ternary factors, each of which couples the output variables from two tasks (Figure 2). Next, we describe these two types of features. Each feature is used to train models for both English and Chinese unless otherwise stated. 3.2.1 Task-Specific Features We begin by describing the task-specific features, which are encoded in unary factors, as well as each of the three independent models. 3.2.1.1 Trigger Detection When applied in isolation, our trigger detection model returns a distribution over possible subtypes given a candidate trigger. Each candidate trigger t is represented using t’s word, t’s lemma, word bigrams formed with a window size of three from t, as well as feature conjunctions created by pairing t’s lemma with each of the following features: Figure 2: Binary and ternary factors. These higherorder factors capture cross-task interactions. The binary anaphoricity and trigger factors encourage anaphoric mentions to be triggers. The binary anaphoricity and coreference factors encourage non-anaphoric mentions to start a NEW coreference cluster. The ternary trigger and coreference factors encourage coreferent mentions to be triggers. the head word of the entity syntactically closest to t, the head word of the entity textually closest to t, the entity type of the entity that is syntactically closest to t, and the entity type of the entity that is textually closest to t.4 In addition, for event mentions with verb triggers, we use the head words and the entity types of their subjects and objects as features, where the subjects and objects are extracted from the dependency parse trees obtained using Stanford CoreNLP (Manning et al., 2014). For event mentions with noun triggers, we create the same features that we did for verb triggers, except that we replace the subjects and verbs with heuristically extracted agents and patients. Finally, for the Chinese trigger detector, we additionally create two features from each character in t, one encoding the character itself and the other encoding the entry number of the corresponding character in a Chinese synonym dictionary.5 3.2.1.2 Event Coreference We employ a mention-ranking model for event coreference that selects the most probable antecedent for a mention to be resolved (or NEW if the mention is non-anaphoric) from its set of candidate antecedents. When applied in isolation, the model is trained to maximize the condi4We train a CRF-based entity extraction model for jointly identifying the entity mentions and their types. Details can be found in Lu et al. (2016). 5The dictionary is available from http://ir.hit.edu.cn/. An entry number in this dictionary conceptually resembles a synset id in WordNet (Fellbaum, 1998). 
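As an illustration of the unary trigger-detection features described above, here is a minimal sketch; the lemmas and closest-entity information are assumed to come from upstream preprocessing (lemmatizer, entity extractor), and only a simplified subset of the feature templates is shown.

```python
def trigger_features(tokens, lemmas, idx, closest_entity_head, closest_entity_type):
    """Simplified unary features for the candidate trigger at position idx.
    `lemmas`, `closest_entity_head`, and `closest_entity_type` are assumed
    inputs produced by upstream preprocessing."""
    word, lemma = tokens[idx], lemmas[idx]
    feats = {f"word={word}", f"lemma={lemma}"}
    # Word bigrams formed within a window of three around the candidate trigger.
    lo, hi = max(0, idx - 3), min(len(tokens), idx + 4)
    for j in range(lo, hi - 1):
        feats.add(f"bigram={tokens[j]}_{tokens[j + 1]}")
    # Conjunctions of the trigger lemma with nearby entity information.
    feats.add(f"lemma={lemma}&closest_ent_head={closest_entity_head}")
    feats.add(f"lemma={lemma}&closest_ent_type={closest_entity_type}")
    return feats

feats = trigger_features("He departed Ensisheim in a police vehicle".split(),
                         "he depart Ensisheim in a police vehicle".split(),
                         idx=1, closest_entity_head="He", closest_entity_type="PER")
```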
93 tional likelihood of collectively resolving the mentions to their correct antecedents in the training texts (Durrett and Klein, 2013). Below we describe the features used to represent the candidate antecedents for the mention to be resolved, mj. Features representing the NULL candidate antecedent: Besides mj’s word and mj’s lemma, we employ feature conjunctions given their usefulness in entity coreference (Fernandes et al., 2014). Specifically, we create a conjunction between mj’s lemma and the number of sentences preceding mj, as well as a conjunction between mj’s lemma and the number of mentions preceding mj in the document. Features representing a non-NULL candidate antecedent, mi: mi’s word, mi’s lemma, whether mi and mj have the same lemma, and feature conjunctions including: (1) mi’s word paired with mj’s word, (2) mi’s lemma paired with mj’s lemma, (3) the sentence distance between mi and mj paired with mi’s lemma and mj’s lemma, (4) the mention distance between mi and mj paired with mi’s lemma and mj’s lemma, (5) a quadruple consisting of mi and mj’s subjects and their lemmas, and (6) a quadruple consisting of mi and mj’s objects and their lemmas. 3.2.1.3 Anaphoricity Determination When used in isolation, the anaphoricity model returns the probability that the given event mention is anaphoric. To train the model, we represent each event mention mj using the following features: (1) the head word of each candidate antecedent paired with mj’s word, (2) whether at least one candidate antecedent has the same lemma as that of mj, and (3) the probability that mj is anaphoric in the training data (if mj never appears in the training data, this probability is set to 0.5). 3.2.2 Cross-Task Interaction Features Cross-task interaction features are associated with the binary and ternary factors. 3.2.2.1 Trigger Detection and Anaphoricity We fire features that conjoin each candidate event mention’s event subtype, the lemma of its trigger and its anaphoricity. 3.2.2.2 Trigger Detection and Coreference We define our joint coreference and trigger detection factors such that the features defined on subtype variables ti and tj are fired only if current mention mj is coreferent with preceding mention mi. These features are: (1) the pair of mi and mj’s subtypes, (2) the pair of mj’s subtype and mi’s word, and (3) the pair of mi’s subtype and mj’s word. 3.2.2.3 Coreference and Anaphoricity We fire a feature that conjoins event mention mj’s anaphoricity and whether or not a non-NULL antecedent is selected for mj. 3.3 Training We learn the model parameters Θ from a set of d training documents, where document i contains content xi, gold triggers t∗ i and gold event coreference partition C∗ i . Before learning, there are a couple of issues we need to address. First, we need to derive gold anaphoricity labels a∗ i from C∗ i . This is straightforward: the first mention of each coreference chain is NOT ANAPHORIC, whereas the rest are ANAPHORIC. Second, we employ gold event mentions for model training, but training models only on gold mentions is not sufficient: for instance, a trigger detector trained solely on gold mentions will not be able to classify a candidate event mention as NONE during testing. To address this issue, we additionally train the models on candidate event mentions corresponding to non-triggers. We create these candidate event mentions as follows. 
For each word w that appears as a true trigger at least once in the training data, we create a candidate event mention from each occurrence of w in the training data that is not annotated as a true trigger.

Third, since our model produces event coreference output in the form of an antecedent vector (with one antecedent per event mention), it needs to be trained on antecedent vectors. However, since the coreference annotation for each document i is provided in the form of a clustering C*_i, we follow previous work on entity coreference resolution (Durrett and Klein, 2013): we sum over all antecedent structures A(C*_i) that are consistent with C*_i (i.e., the first mention of a cluster has antecedent NEW, whereas each subsequent mention can select any of the preceding mentions in the same cluster as its antecedent). Next, we learn the model parameters to maximize the following conditional likelihood of the training data with L1 regularization:

L(\Theta) = \sum_{i=1}^{d} \log \sum_{c^* \in A(C^*_i)} p'(t^*_i, a^*_i, c^* \mid x_i; \Theta) + \lambda \lVert \Theta \rVert_1

In this objective, p' is obtained by augmenting the distribution p (defined in Section 3.1) with task-specific parameterized loss functions:

p'(t, a, c \mid x_i; \Theta) \propto p(t, a, c \mid x_i; \Theta) \exp[\alpha_t\, l_t(t, t^*) + \alpha_a\, l_a(a, a^*) + \alpha_c\, l_c(c, C^*)]

where l_t, l_a, and l_c are task-specific loss functions, and \alpha_t, \alpha_a, and \alpha_c are the associated weight parameters that specify the relative importance of the three tasks in the objective function. Softmax-margin, the technique of integrating task-specific loss functions into the objective function, was introduced by Gimpel and Smith (2010) and subsequently used by Durrett and Klein (2013, 2014). By encoding task-specific knowledge, these loss functions help train a model that places less probability mass on less desirable output configurations.

Our loss function for event coreference, l_c, is motivated by the one Durrett and Klein (2013) developed for entity coreference. It is a weighted sum of the counts of three error types:

l_c(c, C^*) = \alpha_{c,FA}\, FA(c, C^*) + \alpha_{c,FN}\, FN(c, C^*) + \alpha_{c,WL}\, WL(c, C^*)

where FA(c, C^*) is the number of non-anaphoric mentions misclassified as anaphoric, FN(c, C^*) is the number of anaphoric mentions misclassified as non-anaphoric, and WL(c, C^*) is the number of incorrectly resolved anaphoric mentions. Our loss function for trigger detection, l_t, is parameterized in a similar way, with three parameters associated with three error types: \alpha_{t,FT} is associated with the number of non-triggers misclassified as triggers, \alpha_{t,FN} with the number of triggers misclassified as non-triggers, and \alpha_{t,WL} with the number of triggers labeled with the wrong subtype. Finally, our loss function for anaphoricity determination, l_a, is parameterized analogously, with two parameters: \alpha_{a,FA} and \alpha_{a,FN} are associated with the number of false anaphors and the number of false non-anaphors, respectively. Following Durrett and Klein (2014), we use AdaGrad (Duchi et al., 2011) to optimize our objective with \lambda = 0.001 in our experiments.

3.4 Inference

Inference, which is performed during both training and decoding, involves computing the marginals of a variable or a set of variables to which a factor connects. For efficiency, we perform approximate inference using belief propagation rather than exact inference. Since convergence is typically reached within five iterations of belief propagation, we use five iterations in all experiments.
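To make the coreference term of the softmax-margin loss concrete, the sketch below counts the three error types FA, FN, and WL for a predicted antecedent vector against a gold clustering. It is an illustrative re-implementation rather than the authors' code; the dict-and-set data layout, the assumption that mention indices follow document order, and the default weights are assumptions of this sketch.

```python
def coref_loss(pred_antecedents, gold_clusters, a_fa=1.0, a_fn=1.0, a_wl=1.0):
    """Count-based coreference loss l_c used inside softmax-margin training.

    pred_antecedents[j] is the predicted antecedent index of mention j
    (None means NEW); gold_clusters is a list of sets of mention indices
    (singletons included).  The weights play the role of alpha_{c,FA},
    alpha_{c,FN} and alpha_{c,WL}.
    """
    gold_cluster_of = {}
    gold_anaphoric = set()
    for cluster in gold_clusters:
        for rank, m in enumerate(sorted(cluster)):
            gold_cluster_of[m] = frozenset(cluster)
            if rank > 0:                 # every mention after the first is anaphoric
                gold_anaphoric.add(m)

    fa = fn = wl = 0
    for j, ante in enumerate(pred_antecedents):
        if j not in gold_anaphoric:
            if ante is not None:         # false anaphor (FA)
                fa += 1
        elif ante is None:               # false non-anaphor (FN)
            fn += 1
        elif ante not in gold_cluster_of[j]:
            wl += 1                      # wrong link (WL)
    return a_fa * fa + a_fn * fn + a_wl * wl
```

During training, this count-based penalty is what shifts probability mass away from configurations with many false anaphors, false NEW decisions, or wrong links.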
Performing inference using belief propagation on the full factor graph defined in Section 3.1 can still be computationally expensive, however. One reason is that the number of ternary factors grows quadratically with the number of event mentions in a document. To improve scalability, we restrict the domains of the coreference variables. Rather than allow the domain of coreference variable cj to be of size j, we allow a preceding mention mi to be a candidate antecedent of mention mj if (1) the sentence distance between the two mentions is less than an empirically determined threshold and (2) either they are coreferent at least once in the training data or their head words have the same lemma. Doing so effectively enables us to prune the unlikely candidate antecedents for each event mention. As Durrett and Klein (2014) point out, such pruning has the additional benefit of reducing “the memory footprint and time needed to build a factor graph”, as we do not need to create any factor between mi and mj and its associated features if mi is pruned. To further reduce the memory footprint, we additionally restrict the domains of the event subtype variables. Given a candidate event mention created from word w, we allow the domain of its subtype variable to include only NONE as well as those subtypes that w is labeled with at least once in the training data. For decoding, we employ minimum Bayes risk, which computes the marginals of each variable w.r.t. the joint model and derives the most probable assignment to each variable. 4 Evaluation 4.1 Experimental Setup We perform training and evaluation on the KBP 2016 English and Chinese corpora. For English, we train models on 509 of the training documents, tune parameters on 139 training documents, and report results on the official KBP 2016 English test set.6 For Chinese, we train models on 302 of the training documents, tune parameters on 81 training documents, and report results on the official 6The parameters to be tuned are the α’s multiplying the loss functions and those inside the loss functions. 95 English MUC B3 CEAFe BLANC AVG-F Trigger Anaphoricity KBP2016 26.37 37.49 34.21 22.25 30.08 46.99 − INDEP. 22.71 40.72 39.00 22.71 31.28 48.82 27.35 JOINT 27.41 40.90 39.00 25.00 33.08 49.30 31.94 ∆over INDEP. +4.70 +0.18 0.00 +2.29 +1.80 +0.48 +4.59 Chinese MUC B3 CEAFe BLANC AVG-F Trigger Anaphoricity KBP2016 24.27 32.83 30.82 17.80 26.43 40.01 − INDEP. 22.68 32.97 29.96 17.74 25.84 39.82 19.31 JOINT 27.94 33.01 29.96 20.24 27.79 40.53 23.33 ∆over INDEP. +5.26 +0.04 0.00 +2.50 +1.95 +0.71 +4.02 Table 2: Results of all three tasks on the KBP 2016 evaluation sets. The KBP2016 results are those achieved by the best-performing coreference resolver in the official KBP 2016 evaluation. ∆is the performance difference between the JOINT model and the corresponding INDEP. model. All results are expressed in terms of F-score. KBP 2016 Chinese test set. Results of event coreference and trigger detection are obtained using version 1.7.2 of the official scorer provided by the KBP 2016 organizers. To evaluate event coreference performance, the scorer employs four scoring measures, namely MUC (Vilain et al., 1995), B3 (Bagga and Baldwin, 1998), CEAFe (Luo, 2005) and BLANC (Recasens and Hovy, 2011), as well as the unweighted average of their F-scores (AVG-F). 
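For reference, the AVG-F figure reported in Table 2 is simply the unweighted mean of the four coreference F-scores produced by the official scorer. A minimal sketch, assuming the four per-metric F-scores have already been computed:

```python
def f1(prec, rec):
    """Harmonic mean of precision and recall."""
    return 0.0 if prec + rec == 0 else 2 * prec * rec / (prec + rec)

def avg_f(muc_f, b3_f, ceafe_f, blanc_f):
    """AVG-F: unweighted average of the MUC, B3, CEAFe and BLANC F-scores."""
    return (muc_f + b3_f + ceafe_f + blanc_f) / 4.0

# Sanity check against Table 2 (JOINT, English):
# avg_f(27.41, 40.90, 39.00, 25.00) == 33.0775, which rounds to the reported 33.08.
```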
The scorer reports event mention detection performance in terms of F-score, considering a mention correctly detected if it has an exact match with a gold mention in terms of boundary, event type, and event subtype. In addition, we report anaphoricity determination performance in terms of the F-score computed over anaphoric mentions, counting an extracted anaphoric mention as a true positive if it has an exact match with a gold anaphoric mention in terms of boundary. 4.2 Results and Discussion Results are shown in Table 2 where performance on all three tasks (event coreference, trigger detection, and anaphoricity determination) is expressed in terms of F-score. The top half of the table shows the results on the English evaluation set. Specifically, row 1 shows the performance of the best event coreference system participating in KBP 2016 (Lu and Ng, 2016). This system adopts a pipeline architecture. It first uses an ensemble of one-nearest-neighbor classifiers for trigger detection. Using the extracted triggers, it then applies a pipeline of three sieves, each of which is a onenearest-neighbor classifier, for event coreference. As we can see, this system achieves an AVG-F of 30.08 for event coreference and an F-score of 46.99 for trigger detection. Row 2 shows the performance of the independent models, each of which is trained independently of the other models. Specifically, each independent model is trained using only the unary factors associated with it. As we can see, the independent models outperform the top KBP 2016 system by 1.2 points in AVG-F for event coreference and 1.83 points for trigger detection. Results of our joint model are shown in row 3. The absolute performance differences between the joint model and the independent models are shown in row 4. As we can see, the joint model outperforms the independent models for all three tasks: by 1.80 points for event coreference, 0.48 points for trigger detection and 4.59 points for anaphoricity determination. Most encouragingly, the joint model outperforms the top KBP 2016 system for both event coreference and trigger detection. For event coreference, it outperforms the top KBP system w.r.t. all scoring metrics, yielding an improvement of 3 points in AVG-F. For trigger detection, it outperforms the top KBP system by 2.31 points. The bottom half of Table 2 shows the results on the Chinese evaluation set. The top KBP 2016 event coreference system on Chinese is also the Lu and Ng (2016) system. While the top KBP system outperforms the independent models for both tasks (by 0.59 points in AVG-F for event coreference and 0.19 points for trigger detection), our joint model outperforms the independent models 96 English Chinese Coref Trigger Anaph Coref Trigger Anaph INDEP. 31.28 48.82 27.35 25.84 39.82 19.31 INDEP.+CorefTrigger +0.39 +0.42 −0.05 +0.95 +0.56 −0.37 INDEP.+CorefAnaph +0.40 −0.08 +3.45 +0.37 +0.04 −0.11 INDEP.+TriggerAnaph +0.11 +0.38 +1.35 +0.14 +0.52 +0.02 JOINT−CorefTrigger +0.56 −0.06 +4.41 +0.19 +0.35 +3.34 JOINT−CorefAnaph +0.63 +0.66 +1.46 +1.50 +0.88 +0.28 JOINT−TriggerAnaph +1.89 +0.50 +4.01 +1.65 +0.50 +1.79 JOINT +1.80 +0.48 +4.59 +1.95 +0.71 +4.02 Table 3: Results of model ablations on the KBP 2016 evaluation sets. Each row of ablation results is obtained by either adding one type of interaction factor to the INDEP. model or deleting one type of interaction factor from the JOINT model. For each column, the results are expressed in terms of changes to the INDEP. model’s F-score shown in row 1. 
for all three tasks: by 1.95 points for event coreference, 4.02 points for anaphoricity determination, and 0.71 points for trigger detection. Like its English counterpart, our Chinese joint model outperforms the top KBP system for both event coreference and trigger detection. For event coreference, it outperforms the top KBP system w.r.t. all but the CEAFe metric, yielding an absolute improvement of 1.36 points in AVG-F. For trigger detection, it outperforms the top KBP system by 0.52 points. For both datasets, the joint model’s superior performance to the independent coreference model stems primarily from considerable improvements in MUC F-score. As MUC is a link-based measure, these results provide suggestive evidence that joint modeling has enabled more event coreference links to be discovered. 4.3 Model Ablations To evaluate the importance of each of the three types of joint factors in the joint model, we perform ablation experiments.7 Table 3 shows the results on the English and Chinese datasets when we add each type of joint factors to the independent model and remove each type of joint factors from the full joint model. The results of each task are expressed in terms of changes to the corresponding independent model’s F-score. 7Chen and Ng (2013) also performed ablation on their ACE-style Chinese event coreference resolver. However, given the differences in the tasks involved (e.g., they did not model event anaphoricity, but included tasks such as event argument extraction and role classification, entity coreference, and event mention attribute value computation) and the ablation setup (e.g., they ablated individual tasks/components in their pipeline-based system in an incremental fashion, whereas we ablate interaction factors rather than tasks), a direct comparison of their observations and ours is difficult. Coref-Trigger interactions. Among the three types of factors, this one contributes the most to coreference performance, regardless of whether it is applied in isolation or in combination with the other two types of factors to the independent coreference model. In addition, it is the most effective type of factor for improving trigger detection. When applied in combination, it also improves anaphoricity determination, although less effectively than the other two types of factors. Coref-Anaphoricity interactions. When applied in isolation to the independent models, this type of factor improves coreference resolution but has a mixed impact on anaphoricity determination. When applied in combination with other types of factors, it improves both tasks, particularly anaphoricity determination. Its impact on trigger detection, however, is generally negative. Trigger-Anaphoricity interactions. When applied in isolation to the independent models, this type of factor improves both trigger detection and anaphoricity determination. When applied in combination with other types of factors, it still improves anaphoricity determination (particularly on Chinese), but has a mixed effect on trigger detection. Among the three types of factors, it has the least impact on coreference resolution. 4.4 Error Analysis Next, we conduct an analysis of the major sources of error made by our joint coreference model. 4.4.1 Two Major Types of Precision Error Erroneous and mistyped triggers. Our trigger model tends to assign the same subtype to event mentions triggered by the same word. As a result, it often assigns the wrong subtype to triggers that 97 possess different subtypes in different contexts. 
For the same reason, words that are only sometimes used as triggers are often wrongly posited as triggers when they are not. These two types of triggers have in turn led to the establishment of incorrect coreference links.8 Failure to extract arguments. In the absence of an annotated corpus for training an argument classifier, we exploit dependency relations for argument extraction. Doing so proves inadequate, particularly for noun triggers, owing to the absence of dependency relations that can be used to reliably extract their arguments. Moreover, using dependency relations does not allow the extraction of arguments that do not appear in the same sentence as their trigger. Since the presence of incompatible arguments is an important indicator of noncoreference, our model’s failure to extract arguments has resulted in incorrect coreference links. 4.4.2 Three Major Types of Recall Error Missing triggers. Our trigger model fails to identify trigger words that are unseen or rarelyoccurring in the training data. As a result, many coreference links cannot be established. Lack of entity coreference information. Entity coreference information is useful for event coreference because the corresponding arguments of two event mentions are typically coreferent. Since our model does not exploit entity coreference information, it treats two lexically different event arguments as non-coreferent/unrelated. This in turn weakens its ability to determine whether two event mentions are coreferent. This issue is particularly serious in discussion forum documents, where it is not uncommon to see pronouns serve as subjects and objects of event mentions. The situation is further aggravated in Chinese documents, where zero pronouns are prevalent. Lack of contextual understanding. Our model only extracts features from the sentence in which an event mention appears. However, additional contextual information present in neighboring sentences may be needed for correct coreference resolution. This is particularly true in discussion forum documents, where the same event may be described differently by different people. For exam8In our joint model, mentions that are posited as coreferent are encouraged to have the same subtype. While it can potentially fix the errors involving coreferent mentions that have different subtypes, it cannot fix the errors in which the two mentions involved have the same erroneous subtype. ple, when describing the fact that Tim Cook will attend Apple’s Istanbul store opening, one person said “Cook is expected to return to Turkey for the store opening”, and another person described this event as “Tim travels abroad YET AGAIN to be feted by the not-so-high-and-mighty”. It is by no means easy to determine that return and travel trigger two coreferent mentions in these sentences. 5 Related Work Existing event coreference resolvers have been evaluated on different corpora, such as MUC (e.g., Humphreys et al. (1997)), ACE (e.g., Ahn (2006), Chen and Ji (2009), McConky et al. (2012), Sangeetha and Arock (2012), Chen and Ng (2015, 2016), Krause et al. (2016)), OntoNotes (e.g., Chen et al. (2011)), the Intelligence Community corpus (e.g., Cybulska and Vossen (2012), Araki et al. (2014), Liu et al. (2014)), the ECB corpus (e.g., Lee et al. (2012), Bejan and Harabagiu (2014)) and its extension ECB+ (e.g., Yang et al. (2015)), and ProcessBank (e.g., Araki and Mitamura (2015)). 
The newest event coreference corpora are the ones used in the KBP 2015 and 2016 Event Nugget Detection and Coreference shared tasks, in which the best performers in 2015 and 2016 are RPI’s system (Hong et al., 2015) and UTD’s system (Lu and Ng, 2016), respectively. The KBP 2015 corpus has recently been used to evaluate Peng et al.’s (2016) minimally supervised approach and Lu et al.’s (2016) joint inference approach to event coreference. With the rarest exceptions (e.g., Lu et al. (2016)), existing resolvers have adopted a pipeline architecture in which trigger detection is performed prior to coreference resolution. 6 Conclusion We proposed a joint model of event coreference resolution, trigger detection, and event anaphoricity determination. The model is novel in its choice of tasks and the cross-task interaction features. When evaluated on the KBP 2016 English and Chinese corpora, our model not only outperforms the independent models but also achieves the best results to date on these corpora. Acknowledgments We thank the three anonymous reviewers for their detailed comments. This work was supported in part by NSF Grants IIS-1219142 and IIS-1528037. 98 References David Ahn. 2006. The stages of event extraction. In Proceedings of the COLING/ACL Workshop on Annotating and Reasoning about Time and Events. pages 1–8. Jun Araki, Zhengzhong Liu, Eduard Hovy, and Teruko Mitamura. 2014. Detecting subevent structure for event coreference resolution. In Proceedings of the Ninth International Conference on Language Resources and Evaluation, pages 4553–4558. Jun Araki and Teruko Mitamura. 2015. Joint event trigger identification and event coreference resolution with structured perceptron. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2074–2080. Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In Proceedings of the Linguistic Coreference Workshop at The First International Conference on Language Resources and Evaluation, pages 563–566. Cosmin Adrian Bejan and Sanda Harabagiu. 2014. Unsupervised event coreference resolution. Computational Linguistics 40(2):311–347. Bin Chen, Jian Su, Sinno Jialin Pan, and Chew Lim Tan. 2011. A unified event coreference resolution by integrating multiple resolvers. In Proceedings of the Fifth International Conference on Natural Language Processing. pages 102–110. Chen Chen and Vincent Ng. 2013. Chinese event coreference resolution: Understanding the state of the art. In Proceedings of the 6th International Joint Conference on Natural Language Processing. pages 822–828. Chen Chen and Vincent Ng. 2015. Chinese event coreference resolution: An unsupervised probabilistic model rivaling supervised resolvers. In Proceedings of Human Language Technologies: The 2015 Annual Conference of the North American Chapter of the Association for Computational Linguistics. pages 1097–1107. Chen Chen and Vincent Ng. 2016. Joint inference over a lightly supervised information extraction pipeline: Towards event coreference resolution for resourcescarce languages. In Proceedings of the 30th AAAI Conference on Artificial Intelligence. pages 2913– 2920. Zheng Chen and Heng Ji. 2009. Graph-based event coreference resolution. In Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing (TextGraphs-4), pages 54–57. Agata Cybulska and Piek Vossen. 2012. Using semantic relations to solve event coreference in text. 
In Proceedings of the LREC Workshop on Semantic Relations-II Enhancing Resources and Applications, pages 60–67. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12:2121–2159. Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1971–1982. Greg Durrett and Dan Klein. 2014. A joint model for entity analysis: Coreference, typing, and linking. Transactions of the Association for Computational Linguistics 2:477–490. Christiane Fellbaum. 1998. WordNet: An Electronical Lexical Database. MIT Press, Cambridge, MA. Eraldo Rezende Fernandes, C´ıcero Nogueira dos Santos, and Ruy Luiz Milidiu. 2014. Latent trees for coreference resolution. Computational Linguistics 40(4):801–835. Kevin Gimpel and Noah A Smith. 2010. Softmaxmargin CRFs: Training log-linear models with cost functions. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 733–736. Yu Hong, Di Lu, Dian Yu, Xiaoman Pan, Xiaobin Wang, Yadong Chen, Lifu Huang, and Heng Ji. 2015. RPI BLENDER TAC-KBP2015 system description. In Proceedings of the Eighth Text Analysis Conference. Kevin Humphreys, Robert Gaizauskas, and Saliha Azzam. 1997. Event coreference for information extraction. In Proceedings of the ACL/EACL Workshop on Operational Factors in Practical, Robust Anaphora Resolution for Unrestricted Texts, pages 75–81. Sebastian Krause, Feiyu Xu, Hans Uszkoreit, and Dirk Weissenborn. 2016. Event linking with sentential features from convolutional neural networks. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pages 239–249. Heeyoung Lee, Marta Recasens, Angel Chang, Mihai Surdeanu, and Dan Jurafsky. 2012. Joint entity and event coreference resolution across documents. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 489–500. Zhengzhong Liu, Jun Araki, Eduard Hovy, and Teruko Mitamura. 2014. Supervised within-document event coreference using information propagation. In Proceedings of the Ninth International Conference on Language Resources and Evaluation, pages 4539– 4544. 99 Zhengzhong Liu, Jun Araki, Teruko Mitamura, and Eduard Hovy. 2016. CMU-LTI at KBP 2016 event nugget track. In Proceedings of the Ninth Text Analysis Conference. Jing Lu and Vincent Ng. 2016. UTD’s event nugget detection and coreference system at KBP 2016. In Proceedings of the Ninth Text Analysis Conference. Jing Lu, Deepak Venugopal, Vibhav Gogate, and Vincent Ng. 2016. Joint inference for event coreference resolution. In Proceedings of the 26th International Conference on Computational Linguistics, pages 3264–3275. Xiaoqiang Luo. 2005. On coreference resolution performance metrics. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 25–32. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60. Katie McConky, Rakesh Nagi, Moises Sudit, and William Hughes. 2012. 
Improving event coreference by context extraction and dynamic feature weighting. In Proceedings of the 2012 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support, pages 38–43. Teruko Mitamura, Zhengzhong Liu, and Eduard Hovy. 2016. Overview of TAC-KBP 2016 event nugget track. In Proceedings of the Ninth Text Analysis Conference. Vincent Ng. 2010. Supervised noun phrase coreference research: The first fifteen years. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. pages 1396–1411. Thien Huu Nguyen, Adam Meyers, and Ralph Grishman. 2016. New York University 2016 system for KBP event nugget: A deep learning approach. In Proceedings of Ninth Text Analysis Conference. Haoruo Peng, Yangqiu Song, and Dan Roth. 2016. Event detection and co-reference with minimal supervision. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. pages 392–402. Marta Recasens and Eduard Hovy. 2011. BLANC: Implementing the Rand Index for coreference evaluation. Natural Language Engineering 17(4):485– 510. S. Sangeetha and Michael Arock. 2012. Event coreference resolution using mincut based graph clustering. In Proceedings of the Fourth International Workshop on Computer Networks & Communications pages 253–260. Zhiyi Song, Ann Bies, Stephanie Strassel, Tom Riese, Justin Mott, Joe Ellis, Jonathan Wright, Seth Kulick, Neville Ryant, and Xiaoyi Ma. 2015. From light to rich ERE: Annotation of entities, relations, and events. In Proceedings of the 3rd Workshop on EVENTS, pages 89–98. Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A modeltheoretic coreference scoring scheme. In Proceedings of the Sixth Message Understanding Conference, pages 45–52. Sam Wiseman, Alexander M. Rush, Stuart Shieber, and Jason Weston. 2015. Learning anaphoricity and antecedent ranking features for coreference resolution. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1416–1426. Bishan Yang, Claire Cardie, and Peter Frazier. 2015. A hierarchical distance-dependent Bayesian model for event coreference resolution. Transactions of the Association for Computational Linguistics 3:517– 528. Appendix: Handling Words that Trigger Multiple Event Mentions In KBP, a word can trigger multiple event mentions. However, since we create exactly one candidate event mention from each extracted word in each test document, our model effectively prevents a word from triggering multiple event mentions. This poses a problem: each word cannot be associated with more than one event subtype. This appendix describes how we (partially) address this issue that involves allowing each event mention to be associated with multiple event subtypes. To address this problem, we preprocess the gold trigger annotations in the training data as follows. First, for each word triggering multiple event mentions (with different event subtypes), we merge their event mentions into one event mention having the combined subtype. In principle, we can add each of these combined subtypes into our event subtype inventory and allow our model to make predictions using them. However, to avoid over-complicating the prediction task (by having a large subtype inventory), we only add the three most frequently occurring combined subtypes in the training data to the inventory. 
Merged mentions whose combined subtype is not among the most frequent three will be unmerged in order to recover the original mentions so that the model can still be trained on them. 100 To train our joint model, however, the trigger annotations and the event coreference annotations in the training data must be consistent. Since we modified the trigger annotations (by merging event mentions and allowing combined subtypes), we make two modifications to the event coreference annotations to ensure consistency between the two sets of annotations. First, let C1 and C2 be two event coreference chains in a training document such that the set of words triggering the event mentions in C1 (with subtype t1) is the same as that triggering the event mentions in C2 (with subtype t2). If each of the event mentions in C1 was merged with the corresponding event mention in C2 during the aforementioned preprocessing of the trigger annotations (because combining t1 and t2 results in one of the three most frequent combined subtypes), then we delete one of the two coreference chains, and assign the combined subtype to the remaining chain. Finally, we remove any remaining event mentions that were merged during the preprocessing of trigger annotations from their respective coreference chains and create a singleton cluster for each of the merged mentions. 101
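The trigger-merging preprocessing described in this appendix can be sketched as follows. The record layout (each document as a list of (trigger_span, subtypes) pairs), the '+'-joined names for combined subtypes, and the helper names are assumptions of this illustration, not the authors' implementation.

```python
from collections import Counter

def merge_multi_event_triggers(docs, top_k=3):
    """Sketch of the appendix preprocessing for words that trigger several
    event mentions.  subtypes lists the gold subtypes annotated on a span;
    subtype names are assumed not to contain '+'.  Mentions sharing a
    trigger are merged into one mention with a combined subtype; only the
    top_k most frequent combined subtypes are kept, and remaining merged
    mentions are unmerged back into their original mentions.
    """
    combined_counts = Counter()
    merged_docs = []
    for doc in docs:
        merged = []
        for span, subtypes in doc:
            if len(subtypes) > 1:
                combo = "+".join(sorted(subtypes))
                combined_counts[combo] += 1
                merged.append((span, combo, subtypes))
            else:
                merged.append((span, subtypes[0], subtypes))
        merged_docs.append(merged)

    keep = {combo for combo, _ in combined_counts.most_common(top_k)}
    final_docs = []
    for doc in merged_docs:
        out = []
        for span, label, originals in doc:
            if "+" in label and label not in keep:
                out.extend((span, s) for s in originals)   # unmerge
            else:
                out.append((span, label))
        final_docs.append(out)
    return final_docs
```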
2017
9
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 974–984 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1090 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 974–984 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1090 Joint Modeling of Content and Discourse Relations in Dialogues Kechen Qin1 Lu Wang1 Joseph Kim2 1College of Computer and Information Science, Northeastern University 2Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology [email protected], [email protected] 2joseph [email protected] Abstract We present a joint modeling approach to identify salient discussion points in spoken meetings as well as to label the discourse relations between speaker turns. A variation of our model is also discussed when discourse relations are treated as latent variables. Experimental results on two popular meeting corpora show that our joint model can outperform state-of-the-art approaches for both phrasebased content selection and discourse relation prediction tasks. We also evaluate our model on predicting the consistency among team members’ understanding of their group decisions. Classifiers trained with features constructed from our model achieve significant better predictive performance than the state-of-the-art. 1 Introduction Goal-oriented dialogues, such as meetings, negotiations, or customer service transcripts, play an important role in our daily life. Automatically extracting the critical points and important outcomes from dialogues would facilitate generating summaries for complicated conversations, understanding the decision-making process of meetings, or analyzing the effectiveness of collaborations. We are interested in a specific type of dialogues — spoken meetings, which is a common way for collaboration and idea sharing. Previous work (Kirschner et al., 2012) has shown that discourse structure can be used to capture the main discussion points and arguments put forward during problem-solving and decision-making processes in meetings. Indeed, content of different speaker turns do not occur in isolation, and should be interpreted within the context of discourse. Meanwhile, content can also reflect the purpose of speaker turns, thus facilitate with discourse relation understanding. Take the meeting snippet from D: Three different types of batteries. Um can either use a hand dynamo, or the kinetic type ones, you know that they use in watches, or else uh a solar powered one. B: Um the bat uh the battery for a a watch wouldn't require a lot of power, would be my one query. Is a kinetic one going to be able to supply enough power? D: Yeah, I don't think it would. C: Yeah. D: We should probably just use conventional batteries. B: Which I suppose as well would allow us to go off the shelf again, you'd say ? D: Yeah. Uncertain Option Figure 1: A sample clip from AMI meeting corpus. B, C, and D denotes different speakers. Here we highlight salient phrases (in italics) that are relevant to the major topic discussed, i.e., “which type of battery to use for the remote control”. Arrows indicate discourse structure between speaker turns. We also show some of the discourse relations for illustration. AMI corpus (Carletta et al., 2006) in Figure 1 as an example. 
This discussion is annotated with discourse structure based on the Twente Argumentation Schema (TAS) by Rienks et al. (2005), which focuses on argumentative discourse information. As can be seen, meeting participants evaluate different options by showing doubt (UNCERTAIN), bringing up alternative solution (OPTION), or giving feedback. The discourse information helps with the identification of the key discussion point, i.e., “which type of battery to use”, by revealing the discussion flow. To date, most efforts to leverage discourse information to detect salient content from dialogues have focused on encoding gold-standard discourse relations as features for use in classifier training (Murray et al., 2006; Galley, 2006; McKeown et al., 2007; Bui et al., 2009). However, automatic discourse parsing in dialogues is still a challenging problem (Perret et al., 2016). Moreover, acquiring human annotation on discourse relations is a timeconsuming and expensive process, and does not 974 scale for large datasets. In this paper, we propose a joint modeling approach to select salient phrases reflecting key discussion points as well as label the discourse relations between speaker turns in spoken meetings. We hypothesize that leveraging the interaction between content and discourse has the potential to yield better prediction performance on both phrase-based content selection and discourse relation prediction. Specifically, we utilize argumentative discourse relations as defined in Twente Argument Schema (TAS) (Rienks et al., 2005), where discussions are organized into tree structures with discourse relations labeled between nodes (as shown in Figure 1). Algorithms for joint learning and joint inference are proposed for our model. We also present a variation of our model to treat discourse relations as latent variables when true labels are not available for learning. We envision that the extracted salient phrases by our model can be used as input to abstractive meeting summarization systems (Wang and Cardie, 2013; Mehdad et al., 2014). Combined with the predicted discourse structure, a visualization tool can be exploited to display conversation flow to support intelligent meeting assistant systems. To the best of our knowledge, our work is the first to jointly model content and discourse relations in meetings. We test our model with two meeting corpora — the AMI corpus (Carletta et al., 2006) and the ICSI corpus (Janin et al., 2003). Experimental results show that our model yields an accuracy of 63.2 on phrase selection, which is significantly better than a classifier based on Support Vector Machines (SVM). Our discourse prediction component also obtains better accuracy than a state-of-the-art neural networkbased approach (59.2 vs. 54.2). Moreover, our model trained with latent discourse outperforms SVMs on both AMI and ICSI corpora for phrase selection. We further evaluate the usage of selected phrases as extractive meeting summaries. Results evaluated by ROUGE (Lin and Hovy, 2003) demonstrate that our system summaries obtain a ROUGE-SU4 F1 score of 21.3 on AMI corpus, which outperforms non-trivial extractive summarization baselines and a keyword selection algorithm proposed in Liu et al. (2009). Moreover, since both content and discourse structure are critical for building shared understanding among participants (Mulder et al., 2002; Mercer, 2004), we further investigate whether our learned model can be utilized to predict the consistency among team members’ understanding of their group decisions. 
This task is first defined as consistency of understanding (COU) prediction by Kim and Shah (2016), who have labeled a portion of AMI discussions with consistency or inconsistency labels. We construct features from our model predictions to capture different discourse patterns and word entrainment scores for discussion with different COU level. Results on AMI discussions show that SVM classifiers trained with our features significantly outperform the state-ofthe-art results (Kim and Shah, 2016) (F1: 63.1 vs. 50.5) and non-trivial baselines. The rest of the paper is structured as follows: we first summarize related work in Section 2. The joint model is presented in Section 3. Datasets and experimental setup are described in Section 4, which is followed by experimental results (Section 5). We then study the usage of our model for predicting consistency of understanding in groups in Section 6. We finally conclude in Section 7. 2 Related Work Our model is inspired by research work that leverages discourse structure for identifying salient content in conversations, which is still largely reliant on features derived from gold-standard discourse labels (McKeown et al., 2007; Murray et al., 2010; Bokaei et al., 2016). For instance, adjacency pairs, which are paired utterances with question-answer or offer-accept relations, are found to frequently appear in meeting summaries together and thus are utilized to extract summary-worthy utterances by Galley (2006). There is much less work that jointly predicts the importance of content along with the discourse structure in dialogus. Oya and Carenini (2014) employs Dynamic Conditional Random Field to recognize sentences in email threads for use in summary as well as their dialogue acts. Only local discourse structures from adjacent utterances are considered. Our model is built on tree structures, which captures more global information. Our work is also in line with keyphrase identification or phrase-based summarization for conversations. Due to the noisy nature of dialogues, recent work focuses on identifying summary-worthy phrases from meetings (Fern´andez et al., 2008; Riedhammer et al., 2010) or email threads (Loza 975 et al., 2014). For instance, Wang and Cardie (2012) treat the problem as an information extraction task, where summary-worthy content represented as indicator and argument pairs is identified by an unsupervised latent variable model. Our work also targets at detecting salient phrases from meetings, but focuses on the joint modeling of critical discussion points and discourse relations held between them. For the area of discourse analysis in dialogues, a significant amount of work has been done in predicting local discourse structures, such as recognizing dialogue acts or social acts of adjacent utterances from phone conversations (Stolcke et al., 2000; Kalchbrenner and Blunsom, 2013; Ji et al., 2016), spoken meetings (Dielmann and Renals, 2008), or emails (Cohen et al., 2004). Although discourse information from non-adjacent turns has been studied in the context of online discussion forums (Ghosh et al., 2014) and meetings (HakkaniTur, 2009), none of them models the effect of discourse structure on content selection, which is a gap that this work fills in. 3 The Joint Model of Content and Discourse Relations In this section, we first present our joint model in Section 3.1. The algorithms for learning and inference are described in Sections 3.2 and 3.3, followed by feature description (Section 3.4). 
3.1 Model Description

Our proposed model learns to jointly perform phrase-based content selection and discourse relation prediction by making use of the interaction between the two sources of information. Assume that a meeting discussion is denoted as x, where x consists of a sequence of discourse units x = {x_1, x_2, ..., x_n}. Each discourse unit can be a complete speaker turn or a part of it. As demonstrated in Figure 1, a tree-structured discourse diagram is constructed for each discussion, with each discourse unit x_i as a node of the tree. In this work, we consider the argumentative discourse structure of the Twente Argument Schema (TAS) (Rienks et al., 2005). Each node x_i is attached to another node x_{i'} (i' < i) in the discussion, and a discourse relation d_i holds on the link <x_i, x_{i'}> (d_i is empty if x_i is the root). Let t denote the set of links <x_i, x_{i'}> in x. Following previous work on discourse analysis in meetings (Rienks et al., 2005; Hakkani-Tur, 2009), we assume that the attachment structure between discourse units is given during both training and testing.

A set of candidate phrases is extracted from each discourse unit x_i, from which salient phrases that contain gist information will be identified (a small sketch of this extraction step is given below). We obtain constituent and dependency parses for utterances using the Stanford parser (Klein and Manning, 2003). We restrict eligible candidates to noun phrases (NP), verb phrases (VP), prepositional phrases (PP), or adjective phrases (ADJP) with at most 5 words whose head word is not a stop word. (Other methods for mining candidate phrases, such as frequency-based methods (Liu et al., 2015), are left for future work.) If a candidate is a parent of another candidate in the constituent parse tree, we only keep the parent. We further merge a verb and a candidate noun phrase into one candidate if the latter is the direct object or subject of the verb. For example, from the utterance "let's use a rubber case as well as rubber buttons", we can identify the candidates "use a rubber case" and "rubber buttons". For x_i, the set of candidate phrases is denoted as c_i = {c_{i,1}, c_{i,2}, ..., c_{i,m_i}}, where m_i is the number of candidates. c_{i,j} takes the value 1 if the corresponding candidate is selected as a salient phrase, and 0 otherwise. All candidate phrases in discussion x are represented as c.

We then define a log-linear model with feature parameters w over the candidate phrases c and discourse relations d in x:

p(c, d \mid x, w) \propto \exp[w \cdot \Phi(c, d, x)]
    \propto \exp\Big[w \cdot \sum_{i=1,\, \langle x_i, x_{i'} \rangle \in t}^{n} \phi(c_i, d_i, d_{i'}, x)\Big]
    \propto \exp\Big[\sum_{i=1,\, \langle x_i, x_{i'} \rangle \in t}^{n} \Big(w_c \cdot \sum_{j=1}^{m_i} \phi_c(c_{i,j}, x) + w_d \cdot \phi_d(d_i, d_{i'}, x) + w_{cd} \cdot \sum_{j=1}^{m_i} \phi_{cd}(c_{i,j}, d_i, x)\Big)\Big]    (1)

Here \Phi(\cdot) and \phi(\cdot) denote feature vectors. We utilize three types of feature functions: (1) content-only features \phi_c(\cdot), which capture the importance of phrases, (2) discourse-only features \phi_d(\cdot), which characterize the (potentially higher-order) discourse relations, and (3) joint features of content and discourse \phi_{cd}(\cdot), which model the interaction between the two. w_c, w_d, and w_{cd} are the corresponding feature parameters. Detailed feature descriptions can be found in Section 3.4.

Discourse Relations as Latent Variables. As we mentioned in the introduction, acquiring labeled training data for discourse relations is a time-consuming process since it would require human annotators to inspect the full discussions. Therefore, we further propose a variation of our model that treats the discourse relations as latent variables, so that p(c \mid x, w) = \sum_{d} p(c, d \mid x, w).
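The candidate-phrase extraction step referenced above can be illustrated with the following sketch. It operates on a simplified (label, children) constituent-tree representation, uses a tiny stop-word list, approximates the head word by the last token, and omits the verb-object merging; all of these are simplifications relative to the Stanford-parse-based pipeline described in the paper.

```python
STOPWORDS = {"the", "a", "an", "it", "this", "that", "of", "to", "be", "is"}
ELIGIBLE = {"NP", "VP", "PP", "ADJP"}

def leaves(tree):
    """Yield the word leaves of a (label, children) constituent tree."""
    _, children = tree
    for child in children:
        if isinstance(child, str):
            yield child
        else:
            yield from leaves(child)

def candidate_phrases(tree):
    """Extract candidate phrases from one constituent parse: NP/VP/PP/ADJP
    nodes with at most 5 words whose head is not a stop word, keeping only
    the outermost candidate when candidates are nested."""
    cands = []

    def visit(node, inside_candidate):
        label, children = node
        words = list(leaves(node))
        head = words[-1].lower() if words else ""
        ok = (label in ELIGIBLE and len(words) <= 5
              and head not in STOPWORDS and not inside_candidate)
        if ok:
            cands.append(" ".join(words))
        for child in children:
            if not isinstance(child, str):
                visit(child, inside_candidate or ok)

    visit(tree, False)
    return cands

# Toy example for "use a rubber case":
# tree = ("VP", [("VB", ["use"]),
#                ("NP", [("DT", ["a"]), ("NN", ["rubber"]), ("NN", ["case"])])])
# candidate_phrases(tree) -> ["use a rubber case"]
```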
Its learning algorithm is slightly different as described in the next section. 3.2 Joint Learning for Parameter Estimation For learning the model parameters w, we employ an algorithm based on SampleRank (Rohanimanesh et al., 2011), which is a stochastic structure learning method. In general, the learning algorithm constructs a sequence of configurations for sample labels as a Markov chain Monte Carlo (MCMC) chain based on a task-specific loss function, where stochastic gradients are distributed across the chain. The full learning procedure is described in Algorithm 1. To start with, the feature weights w is initialized with each value randomly drawn from [−1, 1]. Multiple epochs are run through all samples. For each sample, we randomly initialize the assignment of candidate phrases labels c and discourse relations d. Then an MCMC chain is constructed with a series of configurations σ = (c, d): at each step, it first samples a discourse structure d based on the proposal distribution q(d′|d, x), and then samples phrase labels conditional on the new discourse relations and previous phrase labels based on q(c′|c, d′, x). Local search is used for both proposal distributions.2 The new configuration is accepted if it improves on the score by ω(σ′). The parameters w are updated accordingly. For the scorer ω, we use a weighted combination of F1 scores of phrase selection (F1c) and discourse relation prediction (F1d): ω(σ) = α · F1c + (1 −α) · F1d. We fix α to 0.1. When discourse relations are treated as latent, we initialize discourse relations for each sample with a label in {1, 2, . . . , K} if there are K relations indicated, and we only use F1c as the scorer. 2For future work, we can explore other proposal distributions that utilize the conditional distribution of salient phrases given sampled discourse relations. Input : X = {x}: discussions in the training set, η: learning rate, ϵ: number of epochs, δ: number of sampling rounds, ω(·): scoring function, Φ(·): feature functions Output: feature weights 1 |W| P w∈W w Initialize w; W ←{w}; for e = 1 to ϵ do for x in X do // Initialize configuration for x Initialize c and d; σ = (c, d); for s = 1 to δ do // New configuration via local search d′ ∼qd(·|x, d); c′ ∼qd(·|x, c, d′); σ′ = (c′, d′); σ+ = arg max˜σ∈{σ,σ′} ω(˜σ); σ−= arg min˜σ∈{σ,σ′} ω(˜σ); ˆ∇= Φ(σ+) −Φ(σ−); ∆ω = ω(σ+) −ω(σ−); // Update parameters if w · ˆ∇< ∆ω & ∆ω ̸= 0 then w ←w + η · ˆ∇; Add w in W; end // Accept or reject new configuration if σ+ == σ′ then σ = σ′ end end end end Algorithm 1: SampleRank-based joint learning. 3.3 Joint Inference for Prediction Given a new sample x and learned parameters w, we predict phrase labels and discourse relations as arg maxc,d p(c, d|x, w). Dynamic programming can be employed to carry out joint inference, however, it would be time-consuming since our objective function has a large search space for both content and discourse labels. Hence we propose an alternating optimizing algorithm to search for c and d iteratively. Concretely, for each iteration, we first optimize on d by maximizing Pn i=1,<xi,x′ i>∈t(wd · φd(di, di′, x) + wcd · Pmi j=1 φcd(ci,j, di, x)). Message-passing (Smith and Eisner, 2008) is used to find the best d. In the second step, we search for c that maximizes Pn i=1,<xi,x′ i>∈t(wc · Pmi j=1 φc(ci,j, x) + wcd · Pmi j=1 φcd(ci,j, di, x)). We believe that candidate phrases based on the same concepts should have the same predicted label. 
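Stepping back to the learning procedure of Section 3.2: Algorithm 1 is garbled by the extraction above, so the compact Python sketch below restates its main loop. The callables feats, score (the scorer omega, i.e. 0.1 * phrase F1 + 0.9 * discourse F1 in this paper), propose_d, and propose_c, as well as the plain-list weight vectors, are assumptions of this sketch rather than the authors' implementation.

```python
import random

def samplerank_train(train_data, feats, score, propose_d, propose_c,
                     dim, epochs=10, rounds=50, lr=0.01):
    """Compact restatement of the SampleRank-based learner (Algorithm 1).

    train_data yields (x, init_c, init_d) triples; feats(x, c, d) returns a
    feature vector as a list of floats of length dim; score(x, c, d) is the
    scorer omega; propose_d and propose_c implement the local-search
    proposal distributions q(d'|d, x) and q(c'|c, d', x).
    """
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w = [random.uniform(-1.0, 1.0) for _ in range(dim)]
    history = [list(w)]                       # the set W in Algorithm 1

    for _ in range(epochs):
        for x, init_c, init_d in train_data:
            c, d = init_c, init_d
            for _ in range(rounds):
                d_new = propose_d(x, d)
                c_new = propose_c(x, c, d_new)
                s_old, s_new = score(x, c, d), score(x, c_new, d_new)
                better = (c_new, d_new) if s_new >= s_old else (c, d)
                worse = (c, d) if s_new >= s_old else (c_new, d_new)
                grad = [a - b for a, b in
                        zip(feats(x, *better), feats(x, *worse))]
                delta = abs(s_new - s_old)
                # Update only when the model mis-ranks the two configurations.
                if delta != 0 and dot(w, grad) < delta:
                    w = [wi + lr * g for wi, g in zip(w, grad)]
                    history.append(list(w))
                # Accept the proposal only if it does not hurt the score.
                if s_new >= s_old:
                    c, d = c_new, d_new
    # Return the average of all stored weight vectors, as in Algorithm 1.
    return [sum(col) / len(history) for col in zip(*history)]
```

Averaging every stored weight vector at the end corresponds to the 1/|W| sum over W returned by Algorithm 1.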
Therefore, candidates of the same phrase type and sharing the same head word are grouped into one cluster. We then cast our task as an integer linear programming 977 problem.3 We optimize our objective function under constraints: (1) ci,j = ci′,j′ if ci,j and ci′,j′ are in the same cluster, and (2) ci,j ∈{0, 1}, ∀i, j. The inference process is the same for models trained with latent discourse relations. 3.4 Features We use features that characterize content, discourse relations, and the combination of both. Content Features. For modeling the salience of content, we calculate the minimum, maximum, and average of TF-IDF scores of words and number of content words in each phrase based on the intuition that important phrases tend to have more content words with high TF-IDF scores (Fern´andez et al., 2008). We also consider whether the head word of the phrase has been mentioned in preceding turn, which implies the focus of a discussion. The size of the cluster each phrase belongs to is also included. Number of POS tags and phrase types are counted to characterize the syntactic structure. Previous work (Wang and Cardie, 2012) has found that a discussion usually ends with decision-relevant information. We thus identify the absolute and relative positions of the turn containing the candidate phrase in the discussion. Finally, we record whether the candidate phrase is uttered by the main speaker, who speakers the most words in the discussion. Discourse Features. For each discourse unit, we collect the dialogue act types of the current unit and its parent node in discourse tree, whether there is any adjacency pair held between the two nodes (Hakkani-Tur, 2009), and the Jaccard similarity between them. We record whether two turns are uttered by the same speaker, for example, ELABORATION is commonly observed between the turns from the same participant. We also calculate the number of candidate phrases based on the observation that OPTION and SPECIALIZATION tend to contain more informative words than POSITIVE feedback. Length of the discourse unit is also relevant. Therefore, we compute the time span and number of words. To incorporate global structure features, we encode the depth of the node in the discourse tree and the 3We use lpsolve: http://lpsolve. sourceforge.net/5.5/. number of its siblings. Finally, we include an order-2 discourse relation feature that encodes the relation between current discourse unit and its parent, and the relation between the parent and its grandparent if it exists. Joint Features. For modeling the interaction between content and discourse, the discourse relation is added to each content feature to compose a joint feature. For example, if candidate c in discussion x has a content feature φ[avg−TFIDF](c, x) with a value of 0.5, and its discourse relation d is POSITIVE, then the joint feature takes the form of φ[avg−TFIDF,Positive](c, d, x) = 0.5. 4 Datasets and Experimental Setup Meeting Corpora. We evaluate our joint model on two meeting corpora with rich annotations: the AMI meeting corpus (Carletta et al., 2006) and the ICSI meeting corpus (Janin et al., 2003). AMI corpus consists of 139 scenario-driven meetings, and ICSI corpus contains 75 naturally occurring meetings. Both of the corpora are annotated with dialogue acts, adjacency pairs, and topic segmentation. We treat each topic segment as one discussion, and remove discussions with less than 10 turns or labeled as “opening” and “chitchat”. 
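Returning briefly to the inference step of Section 3.3: because the phrase-selection ILP only couples candidates within the same cluster (same phrase type and head word), its optimum can also be computed cluster by cluster, which the sketch below does in place of calling an ILP solver (the paper itself uses lpsolve). The phrase_score callable is assumed to return the net gain in model score from labeling a candidate 1 rather than 0 under the current discourse assignment; that scoring interface is an assumption of this sketch.

```python
from collections import defaultdict

def select_phrases(candidates, cluster_of, phrase_score):
    """Phrase-selection step of the alternating inference in Section 3.3.

    The ILP maximizes the summed scores of selected candidates subject to
    (1) candidates in the same cluster sharing a label and (2) binary
    labels.  Since the objective decomposes over clusters, the optimum is
    reached by selecting a whole cluster iff the total score of its
    members is positive.
    """
    totals = defaultdict(float)
    members = defaultdict(list)
    for cand in candidates:
        cl = cluster_of(cand)
        totals[cl] += phrase_score(cand)
        members[cl].append(cand)

    labels = {}
    for cl, total in totals.items():
        keep = 1 if total > 0 else 0
        for cand in members[cl]:
            labels[cand] = keep
    return labels
```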
694 discussions from AMI and 1139 discussions from ICSI are extracted, and these two datasets are henceforth referred as AMI-FULL and ICSIFULL. Acquiring Gold-Standard Labels. Both corpora contain human constructed abstractive summaries and extractive summaries on meeting level. Short abstracts, usually in one sentence, are constructed by meeting participants — participant summaries, and external annotators — abstractive summaries. Dialogue acts that contribute to important output of the meeting, e.g. decisions, are identified and used as extractive summaries, and some of them are also linked to the corresponding abstracts. Since the corpora do not contain phrase-level importance annotation, we induce gold-standard labels for candidate phrases based on the following rule. A candidate phrase is considered as a positive sample if its head word is contained in any abstractive summary or participant summary. On average, 71.9 candidate phrases are identified per discussion for AMI-FULL with 31.3% labeled as positive, and 73.4 for ICSI-FULL with 24.0% of them as positive samples. Furthermore, a subset of discussions in AMI978 FULL are annotated with discourse structure and relations based on Twente Argumentation Schema (TAS) by Rienks et al. (2005)4. A tree-structured argument diagram (as shown in Figure 1) is created for each discussion or a part of the discussion. The nodes of the tree contain partial or complete speaker turns, and discourse relation types are labeled on the links between the nodes. In total, we have 129 discussions annotated with discourse labels. This dataset is called AMI-SUB hereafter. Experimental Setup. 5-fold cross validation is used for all experiments. All real-valued features are uniformly normalized to [0,1]. For the joint learning algorithm, we use 10 epochs and carry out 50 sampling for MCMC for each training sample. The learning rate is set to 0.01. We run the learning algorithm for 20 times, and use the average of the learned weights as the final parameter values. For models trained with latent discourse relations, we fix the number of relations to 9. Baselines and Comparisons. For both phrasebased content selection and discourse relation prediction tasks, we consider a baseline that always predicts the majority label (Majority). Previous work has shown that Support Vector Machines (SVMs)-based classifiers achieve state-of-the-art performance for keyphrase selection in meetings (Fern´andez et al., 2008; Wang and Cardie, 2013) and discourse parsing for formal text (Hernault et al., 2010). Therefore, we compare with linear SVM-based classifiers, trained with the same feature set of content features or discourse features. We fix the trade-off parameter to 1.0 for all SVM-based experiments. For discourse relation prediction, we use one-vs-rest strategy to build multiple binary classifiers.5 We also compare with a state-of-the-art discourse parser (Ji et al., 2016), which employs neural language model to predict discourse relations. 5 Experimental Results 5.1 Phrase Selection and Discourse Labeling Here we present the experimental results on phrase-based content selection and discourse relation prediction. We experiment with two variations of our joint model: one is trained on goldstandard discourse relations, the other is trained by 4There are 9 types of relations in TAS: POSITIVE, NEGATIVE, UNCERTAIN, REQUEST, SPECIALIZATION, ELABORATION, OPTION, OPTION EXCLUSION, and SUBJECT-TO. 5Multi-class classifier was also experimented with, but gave inferior performance. 
Acc F1 Comparisons Baseline (Majority) 60.1 37.5 SVM (w content features in § 3.4) 57.8 54.6 Our Models Joint-Learn + Joint-Inference 63.2∗ 62.6∗ Joint-Learn + Separate-Inference 57.9 57.8 Separate-Learn 53.4 52.6 Our Models (Latent Discourse) w/ True Attachment Structure Joint-Learn + Joint-Inference 60.3∗ 60.3∗ Joint-Learn + Separate-Inference 56.4 56.2 w/o True Attachment Structure Joint-Learn + Joint-Inference 56.4 56.4 Joint-Learn + Separate-Inference 52.7 52.3 Table 1: Phrase-based content selection performance on AMI-SUB with accuracy (acc) and F1. We display results of our models trained with gold-standard discourse relation labels and with latent discourse relations. For the later, we also show results based on True Attachment Structure, where the gold-standard attachments are known, and without the True Attachment Structure. Our models that significantly outperform SVM-based model are highlighted with ∗(p < 0.05, paired t-test). Best result for each column is in bold. Acc F1 Comparisons Baseline (Majority) 51.2 7.5 SVM (w discourse features in § 3.4) 51.2 22.8 Ji et al. (2016) 54.2 21.4 Our Models Joint-Learn + Joint-Inference 58.0∗ 21.7 Joint-Learn + Separate-Inference 59.2∗ 23.4 Separate-Learn 58.2∗ 25.1 Table 2: Discourse relation prediction performance on AMI-SUB. Our models that significantly outperform SVM-based model and Ji et al. (2016) are highlighted with ∗(p < 0.05, paired t-test). Best result for each column is in bold. treating discourse relations as latent models as described in Section 3.1. Remember that we have gold-standard argument diagrams on the AMISUB dataset, we can thus conduct experiments by assuming the True Attachment Structure is given for latent versions. When argument diagrams are not available, we build a tree among the turns in each discussion as follows. Two turns are attached if there is any adjacency pair between them. If one turn is attached to more than one previous turns, the closest one is considered. For the rest of the turns, they are attached to the preceding turn. This construction is applied on AMI-FULL and ICSIFULL. We also investigate whether joint learning and joint inference can produce better prediction per979 AMI-FULL ICSI-FULL Acc F1 Acc F1 Comparisons Baseline (Majority) 61.8 38.2 75.3 43.0 SVM (with content features in § 3.4) 58.6 56.7 66.2 53.1 Our Models (Latent Discourse) Joint-Learn + Joint-Inference 63.4∗63.0∗73.5∗61.4∗ Joint-Learn + Separate-Inference 57.7 57.5 70.0∗62.7∗ Table 3: Phrase-based content selection performance on AMI-FULL and ICSI-FULL. We display results of our models trained with latent discourse relations. Results that are significantly better than SVM-based model are highlighted with ∗(p < 0.05, paired t-test). formance. We consider joint learning with separate inference, where only content features or discourse features are used for prediction (SeparateInference). We further study learning separate classifiers for content selection and discourse relations without joint features (Separate-Learn). We first show the phrase selection and discourse relation prediction results on AMI-SUB in Tables 1 and 2. As shown in Table 1, our models, trained with gold-standard discourse relations or latent ones with true attachment structure, yield significant better accuracy and F1 scores than SVM-based classifiers trained with the same feature sets for phrase selection (paired t-test, p < 0.05). 
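The heuristic attachment structure described above (used when gold argument diagrams are unavailable) is simple enough to state directly in code; the (earlier, later) pair encoding of adjacency pairs and the parent dictionary returned here are assumptions of this sketch.

```python
def build_attachment_tree(turns, adjacency_pairs):
    """Heuristic attachment structure for a discussion (Section 5.1):
    attach turn j to the closest earlier turn with which it forms an
    adjacency pair, otherwise to the immediately preceding turn.
    adjacency_pairs is a set of (earlier, later) turn-index pairs; the
    return value maps each turn index to its parent (None = root)."""
    parent = {0: None}
    for j in range(1, len(turns)):
        linked = [i for i in range(j) if (i, j) in adjacency_pairs]
        parent[j] = max(linked) if linked else j - 1   # closest earlier turn
    return parent

# build_attachment_tree(['t0', 't1', 't2', 't3'], {(0, 2), (1, 3)})
# -> {0: None, 1: 0, 2: 0, 3: 1}
```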
Our joint learning model with separate inference also outperforms neural network-based discourse parsing model (Ji et al., 2016) in Table 2. Moreover, Tables 1 and 2 demonstrate that joint learning usually produces superior performance for both tasks than separate learning. Combined with joint inference, our model obtains the best accuracy and F1 on phrase selection. This indicates that leveraging the interplay between content and discourse boost the prediction performance. Similar results are achieved on AMI-FULL and ICSIFULL in Table 3, where latent discourse relations without true attachment structure are employed for training. 5.2 Phrase-Based Extractive Summarization We further evaluate whether the prediction of the content selection component can be used for summarizing the key points on discussion level. For each discussion, salient phrases identified by our model are concatenated in sequence for use as the summary. We consider two types of gold-standard summaries. One is utterance-level extractive summary, which consists of human labeled summaryworthy utterances. The other is abstractive sumExtractive Summaries as Gold-Standard ROUGE-1 ROUGE-SU4 Len Prec Rec F1 Prec Rec F1 Longest DA 30.9 64.4 15.0 23.1 58.6 9.3 15.3 Centroid DA 17.5 73.9 13.4 20.8 62.5 6.9 11.3 SVM 49.8 47.1 24.1 27.5 22.7 10.7 11.8 Liu et al. (2009) 62.4 40.4 39.2 36.2 15.5 15.2 13.5 Our Model 66.6 45.4 44.7 41.1∗24.1∗23.4∗20.9∗ Our Model-latent 85.9 42.9 49.3 42.4∗21.6 25.7∗21.3∗ Abstractive Summaries as Gold-Standard ROUGE1 ROUGE-SU4 Len Prec Rec F1 Prec Rec F1 Longest DA 30.9 14.8 5.5 7.4 4.8 1.4 1.9 Centroid DA 17.5 24.9 5.6 8.5 11.6 1.4 2.2 SVM 49.8 13.3 9.7 9.5 4.4 2.4 2.4 Liu et al. (2009) 62.4 10.3 16.7 11.3 2.7 4.5 2.8 Our Model 66.6 12.6 18.9 13.1∗3.8 5.5∗ 3.7∗ Our Model-latent 85.9 11.4 20.0 12.4∗3.3 6.1∗ 3.5∗ Table 4: ROUGE scores for phrase-based extractive summarization evaluated against human-constructed utterance-level extractive summaries and abstractive summaries. Our models that statistically significantly outperform SVM and Liu et al. (2009) are highlighted with ∗(p < 0.05, paired t-test). Best ROUGE score for each column is in bold. mary, where we collect human abstract with at least one link from summary-worthy utterances. We calculate scores based on ROUGE (Lin and Hovy, 2003), which is a popular tool for evaluating text summarization (Gillick et al., 2009; Liu and Liu, 2010). ROUGE-1 (unigrams) and ROUGE-SU4 (skip-bigrams with at most 4 words in between) are used. Following previous work on meeting summarization (Riedhammer et al., 2010; Wang and Cardie, 2013), we consider two dialogue act-level summarization baselines: (1) LONGEST DA in each discussion is selected as the summary, and (2) CENTROID DA, the one with the highest TF-IDF similarity with all DAs in the discussion. We also compare with an unsupervised keyword extraction approach by Liu et al. (2009), where word importance is estimated by its TF-IDF score, POS tag, and the salience of its corresponding sentence. With the same candidate phrases as in our model, we extend Liu et al. (2009) by scoring each phrase based on its average score of the words. Top phrases, with the same number of phrases output by our model, are included into the summaries. Finally, we compare with summaries consisting of salient phrases predicted by an SVM classifier trained with our content features. 
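For reference, the two dialogue-act baselines and the extended Liu et al. (2009) phrase scoring can be sketched as follows, assuming scikit-learn's TF-IDF implementation; the toy dialogue acts and the exact weighting scheme are illustrative assumptions rather than the setup used to produce the reported numbers.

```python
# Illustrative sketch of the LONGEST DA and CENTROID DA baselines and the
# average-word-score phrase ranking, assuming scikit-learn. Toy inputs only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

dialogue_acts = [
    "can we uh power a light in this",
    "i think we could because the lcd panel requires power",
    "illuminate the buttons yeah it glows",
]

# LONGEST DA: the dialogue act with the most tokens.
longest_da = max(dialogue_acts, key=lambda da: len(da.split()))

# CENTROID DA: highest total TF-IDF cosine similarity with all DAs.
vec = TfidfVectorizer()
tfidf = vec.fit_transform(dialogue_acts)
sims = cosine_similarity(tfidf)
centroid_da = dialogue_acts[int(np.argmax(sims.sum(axis=1)))]

# Extended Liu et al. (2009)-style scoring: a candidate phrase is scored by
# the average TF-IDF score of its words (the POS and sentence-salience terms
# of the original method are omitted in this sketch).
word_scores = dict(zip(vec.get_feature_names_out(),
                       np.asarray(tfidf.mean(axis=0)).ravel()))

def phrase_score(phrase):
    return float(np.mean([word_scores.get(w, 0.0)
                          for w in phrase.lower().split()]))

print(longest_da)
print(centroid_da)
print(round(phrase_score("power a light"), 3))
```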
From the results in Table 4, we can see that phrase-based extractive summarization methods can yield better ROUGE scores for recall and F1 than baselines that extract the whole sentences. Meanwhile, our system significantly out980 Meeting Clip: D: can we uh power a light in this? can we get a strong enough battery to power a light? A: um i think we could because the lcd panel requires power, and the lcd is a form of a light so that. . . D: . . .it’s gonna have to have something high-tech about it and that’s gonna take battery power. . . D: illuminate the buttons. yeah it glows. D: well m i’m thinking along the lines of you’re you’re in the dark watching a dvd and you um you find the thing in the dark and you go like this . . . oh where’s the volume button in the dark, and uh y you just touch it . . . and it lights up or something. Abstract by Human: What sort of battery to use. The industrial designer presented options for materials, components, and batteries and discussed the restrictions involved in using certain materials. Longest DA: well m i’m thinking along the lines of you’re you’re in the dark watching a dvd and you um you find the thing in the dark and you go like this. Centroid DA: can we uh power a light in this? Our Method: - power a light, a strong enough battery, - requires power, a form, - a really good battery, battery power, - illuminate the buttons, glows, - watching a dvd, the volume button, lights up or something Figure 2: Sample summaries output by different systems for a meeting clip from AMI corpus (less relevant utterances in between are removed). Salient phrases by our system output are displayed for each turn of the clip, with duplicated phrases removed for brevity. performs the SVM-based classifiers when evaluated on ROUGE recall and F1, while achieving comparable precision. Compared to Liu et al. (2009), our system also yields better results on all metrics. Sample summaries by our model along with two baselines are displayed in Figure 2. Utterancelevel extract-based baselines unavoidably contain disfluency and unnecessary details. Our phrasebased extractive summary is able to capture the key points from both the argumentation process and important outcomes of the conversation. This implies that our model output can be used as input for an abstractive summarization system. It can also facilitate the visualization of decision-making processes. 5.3 Further Analysis and Discussions Features Analysis. We first discuss salient features with top weights learned by our joint model. For content features, main speaker tends to utter more salient content. Higher TF-IDF scores also indicate important phrases. If a phrase is mentioned in previous turn and repeated in the current turn, it is likely to be a key point. For discourse features, structure features matter the most. For instance, jointly modeling the discourse relation of the parent node along with the current node can lead to better inference. An example is that giving more details on the proposal (ELABORATION) tends to lead to POSITIVE feedback. Moreover, REQUEST usually appears close to the root of the argument diagram tree, while POSITIVE feedback is usually observed on leaves. Adjacency pairs also play an important role for discourse prediction. For joint features, features that composite “phrase mentioned in previous turn” and relation POSITIVE feedback or REQUEST yield higher weight, which are indicators for both key phrases and discourse relations. 
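As an illustration, a joint feature of this kind is simply the conjunction of a binary content indicator and a binary discourse-relation indicator, as in the sketch below; the indicator names are placeholders rather than the exact feature set used in the model.

```python
# Sketch of composite (joint) features: conjunctions of binary content
# indicators with binary discourse-relation indicators, so the model can
# reward configurations such as "phrase mentioned in previous turn" together
# with POSITIVE feedback. Indicator names are illustrative placeholders.
from itertools import product

content_indicators = {"mentioned_in_prev_turn": 1, "uttered_by_main_speaker": 0}
relation_indicators = {"POSITIVE": 1, "REQUEST": 0, "ELABORATION": 0}

joint_features = {
    f"{c}&{r}": cv * rv
    for (c, cv), (r, rv) in product(content_indicators.items(),
                                    relation_indicators.items())
}
print(joint_features["mentioned_in_prev_turn&POSITIVE"])  # 1
```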
We also find that main speaker information composite with ELABORATION and UNCERTAIN are associated with high weights. Error Analysis and Potential Directions. Taking a closer look at our prediction results, one major source of incorrect prediction for phrase selection is based on the fact that similar concepts might be expressed in different ways, and our model predicts inconsistently for different variations. For example, participants use both “thick” and “two centimeters” to talk about the desired shape of a remote control. However, our model does not group them into the same cluster and later makes different predictions. For future work, semantic similarity with context information can be leveraged to produce better clustering results. Furthermore, identifying discourse relations in dialogues is still a challenging task. For instance, “I wouldn’t choose a plastic case” should be labeled as OPTION EXCLUSION, if the previous turns talk about different options. Otherwise, it can be labeled as NEGATIVE. Therefore, models that better handle semantics and context need to be considered. 6 Predicting Consistency of Understanding As discussed in previous work (Mulder et al., 2002; Mercer, 2004), both content and discourse structure are critical for building shared understanding among discussants. In this section, we test whether our joint model can be utilized to predict the consistency among team members’ under981 standing of their group decisions, which is defined as consistency of understanding (COU) in Kim and Shah (2016). Kim and Shah (2016) establish gold-standard COU labels on a portion of AMI discussions, by comparing participant summaries to determine whether participants report the same decisions. If all decision points are consistent, the associated topic discussion is labeled as consistent; otherwise, the discussion is identified as inconsistent. Their annotation covers the AMI-SUB dataset. Therefore, we run the prediction experiments on AMI-SUB by using the same annotation. Out of total 129 discussions in AMI-SUB, 86 discussions are labeled as consistent and 43 are inconsistent. We construct three types of features by using our model’s predicted labels. Firstly, we learn two versions of our model based on the “consistent” discussions and the “inconsistent” ones in the training set, with learned parameters wcon and wincon. For a discussion in the test set, these two models output two probabilities pcon = maxc,d P(c, d|x, wcon) and pincon = maxc,d P(c, d|x, wincon). We use pcon −pincon as a feature. Furthermore, we consider discourse relations of length one and two from the discourse structure tree. Intuitively, some discourse relations, e.g., ELABORATION followed by multiple POSITIVE feedback, imply consistent understanding. The third feature is based on word entrainment, which has been shown to correlate with task success for groups (Nenkova et al., 2008). Using the formula in Nenkova et al. (2008), we compute the average word entrainment between the main speaker who utters the most words and all the other participants. The content words in the salient phrases predicted by our model is considered for entrainment computation. Results. Leave-one-out is used for experiments. For training, our features are constructed from gold-standard phrase and discourse labels. Predicted labels by our model is used for constructing features during testing. SVM-based classifier is used for experimenting with different sets of features output by our model. 
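The word entrainment feature can be sketched as follows. This follows one common reading of the Nenkova et al. (2008) measure, the negative summed absolute difference of relative frequencies over a set of frequent words; the exact word set and normalization here are assumptions rather than the authors' precise implementation.

```python
# Hedged sketch of the word entrainment feature, computed between the main
# speaker and the pooled other participants over the most frequent words of
# the predicted salient-phrase content. Toy token lists for illustration.
from collections import Counter

def entrainment(tokens_a, tokens_b, top_k=25):
    counts_a, counts_b = Counter(tokens_a), Counter(tokens_b)
    total_a, total_b = max(len(tokens_a), 1), max(len(tokens_b), 1)
    # restrict to the top_k most frequent words overall (an assumption)
    frequent = [w for w, _ in (counts_a + counts_b).most_common(top_k)]
    return -sum(abs(counts_a[w] / total_a - counts_b[w] / total_b)
                for w in frequent)

main_speaker = "power light battery power light buttons".split()
others = "battery power lcd panel light".split()
print(round(entrainment(main_speaker, others), 3))
```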
A majority class baseline is constructed as well. We also consider an SVM classifier trained with ngram features (unigrams and bigrams). Finally, we compare with the state-of-the-art method in Kim and Shah (2016), where discourse-relevant features and head gesAcc F1 Comparisons Baseline (Majority) 66.7 40.0 Ngrams (SVM) 51.2 50.6 Kim and Shah (2016) 60.5 50.5 Features from Our Model Consistency Probability (Prob) 52.7 52.1 Discourse Relation (Disc) 63.6 57.1∗ Word Entrainment (Ent) 60.5∗ 57.1∗ Prob + Disc+ Ent 68.2∗ 63.1∗ Oracles Discourse Relation 69.8 62.7 Word Entrainment 61.2 57.8 Table 5: Consistency of Understanding (COU) prediction results on AMI-SUB. Results that statistically significantly outperform ngrams-based baseline and Kim and Shah (2016) are highlighted with ∗(p < 0.05, paired t-test). For reference, we also show the prediction performance based on gold-standard discourse relations and phrase selection labels. ture features are utilized in Hidden Markov Models to predict the consistency label. The results are displayed in Table 5. All SVMs trained with our features surpass the ngrams-based baseline. Especially, the discourse features, word entrainment feature, and the combination of the three, all significantly outperform the state-of-theart system by Kim and Shah (2016).6 7 Conclusion We presented a joint model for performing phraselevel content selection and discourse relation prediction in spoken meetings. Experimental results on AMI and ICSI meeting corpora showed that our model can outperform state-of-the-art methods for both tasks. Further evaluation on the task of predicting consistency-of-understanding in meetings demonstrated that classifiers trained with features constructed from our model output produced superior performance compared to the state-of-the-art model. This provides an evidence of our model being successfully applied in other prediction tasks in spoken meetings. Acknowledgments This work was supported in part by National Science Foundation Grant IIS-1566382 and a GPU gift from Nvidia. We thank three anonymous reviewers for their valuable suggestions on various aspects of this work. 6We also experiment with other popular classifiers, e.g. logistic regression or decision tree, and similar trend is respected. 982 References Mohammad Hadi Bokaei, Hossein Sameti, and Yang Liu. 2016. Extractive Summarization of Multi-party Meetings Through Discourse Segmentation. Natural Language Engineering 22(01):41–72. Trung H. Bui, Matthew Frampton, John Dowding, and Stanley Peters. 2009. Extracting Decisions from Multi-party Dialogue Using Directed Graphical Models and Semantic Similarity. In Proceedings of the SIGDIAL 2009 Conference: The 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Association for Computational Linguistics, Stroudsburg, PA, USA, SIGDIAL ’09, pages 235–243. Jean Carletta, Simone Ashby, Sebastien Bourban, Mike Flynn, Mael Guillemot, Thomas Hain, Jaroslav Kadlec, Vasilis Karaiskos, Wessel Kraaij, Melissa Kronenthal, Guillaume Lathoud, Mike Lincoln, Agnes Lisowska, Iain McCowan, Wilfried Post, Dennis Reidsma, and Pierre Wellner. 2006. The AMI Meeting Corpus: A Pre-announcement. In Proceedings of the Second International Conference on Machine Learning for Multimodal Interaction. Springer-Verlag, Berlin, Heidelberg, MLMI’05, pages 28–39. William W. Cohen, Vitor R. Carvalho, and Tom M. Mitchell. 2004. Learning to Classify Email into “Speech Acts” . 
In Dekang Lin and Dekai Wu, editors, Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Barcelona, Spain, pages 309–316. Alfred Dielmann and Steve Renals. 2008. Recognition of Dialogue Acts in Multiparty Meetings Using a Switching DBN. IEEE transactions on audio, speech, and language processing 16(7):1303–1314. Raquel Fern´andez, Matthew Frampton, John Dowding, Anish Adukuzhiyil, Patrick Ehlen, and Stanley Peters. 2008. Identifying Relevant Phrases to Summarize Decisions in Spoken Meetings. In INTERSPEECH. pages 78–81. Michel Galley. 2006. A Skip-chain Conditional Random Field for Ranking Meeting Utterances by Importance. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Stroudsburg, PA, USA, EMNLP ’06, pages 364– 372. Debanjan Ghosh, Smaranda Muresan, Nina Wacholder, Mark Aakhus, and Matthew Mitsui. 2014. Analyzing Argumentative Discourse Units in Online Interactions. In Proceedings of the First Workshop on Argumentation Mining. pages 39–48. Dan Gillick, Korbinian Riedhammer, Benoit Favre, and Dilek Hakkani-Tur. 2009. A Global Optimization Framework for Meeting Summarization. In Acoustics, Speech and Signal Processing, 2009. ICASSP 2009. IEEE International Conference on. IEEE, pages 4769–4772. Dilek Hakkani-Tur. 2009. Towards Automatic Argument Diagramming of Multiparity Meetings. In Acoustics, Speech and Signal Processing, 2009. ICASSP 2009. IEEE International Conference on. IEEE, pages 4753–4756. Hugo Hernault, Helmut Prendinger, David A. duVerle, and Mitsuru Ishizuka. 2010. HILDA: A Discourse Parser Using Support Vector Machine Classification. Dialogue & Discourse 1(3):1–33. Adam Janin, Don Baron, Jane Edwards, Dan Ellis, David Gelbart, Nelson Morgan, Barbara Peskin, Thilo Pfau, Elizabeth Shriberg, Andreas Stolcke, et al. 2003. The ICSI Meeting Corpus. In Acoustics, Speech, and Signal Processing, 2003. Proceedings.(ICASSP’03). 2003 IEEE International Conference on. IEEE, volume 1, pages I–I. Yangfeng Ji, Gholamreza Haffari, and Jacob Eisenstein. 2016. A Latent Variable Recurrent Neural Network for Discourse-Driven Language Models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 332–342. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent Convolutional Neural Networks for Discourse Compositionality. In Proceedings of the Workshop on Continuous Vector Space Models and their Compositionality. Association for Computational Linguistics, Sofia, Bulgaria, pages 119–126. Joseph Kim and Julie A Shah. 2016. Improving Team’s Consistency of Understanding in Meetings. IEEE Transactions on Human-Machine Systems 46(5):625–637. Paul A Kirschner, Simon J Buckingham-Shum, and Chad S Carr. 2012. Visualizing Argumentation: Software Tools for Collaborative and Educational Sense-making. Springer Science & Business Media. Dan Klein and Christopher D. Manning. 2003. Accurate Unlexicalized Parsing. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1. Association for Computational Linguistics, Stroudsburg, PA, USA, ACL ’03, pages 423–430. Chin-Yew Lin and Eduard Hovy. 2003. Automatic Evaluation of Summaries Using N-gram Cooccurrence Statistics. 
In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1. pages 71–78. Fei Liu and Yang Liu. 2010. Using Spoken Utterance Compression for Meeting Summarization: A Pilot Study. In Spoken Language Technology Workshop (SLT), 2010 IEEE. IEEE, pages 37–42. 983 Feifan Liu, Deana Pennell, Fei Liu, and Yang Liu. 2009. Unsupervised Approaches for Automatic Keyword Extraction Using Meeting Transcripts. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Boulder, Colorado, pages 620–628. Jialu Liu, Jingbo Shang, Chi Wang, Xiang Ren, and Jiawei Han. 2015. Mining quality phrases from massive text corpora. In Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data. ACM, pages 1729–1744. Vanessa Loza, Shibamouli Lahiri, Rada Mihalcea, and Po-Hsiang Lai. 2014. Building a Dataset for Summarization and Keyword Extraction from Emails. In LREC. pages 2441–2446. Kathleen McKeown, Lokesh Shrestha, and Owen Rambow. 2007. Using Question-answer Pairs in Extractive Summarization of Email Conversations. In International Conference on Intelligent Text Processing and Computational Linguistics. Springer, pages 542–550. Yashar Mehdad, Giuseppe Carenini, and Raymond T. Ng. 2014. Abstractive Summarization of Spoken and Written Conversations Based on Phrasal Queries. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Baltimore, Maryland, pages 1220–1230. Neil Mercer. 2004. Sociocultural Discourse Analysis. Journal of applied linguistics 1(2):137–168. Ingrid Mulder, Janine Swaak, and Joseph Kessels. 2002. Assessing Group Learning and Shared Understanding in Technology-mediated Interaction. Educational Technology & Society 5(1):35–47. Gabriel Murray, Giuseppe Carenini, and Raymond Ng. 2010. Generating and Validating Abstracts of Meeting Conversations: A User Study. In Proceedings of the 6th International Natural Language Generation Conference. Association for Computational Linguistics, Stroudsburg, PA, USA, INLG ’10, pages 105– 113. Gabriel Murray, Steve Renals, Jean Carletta, and Johanna Moore. 2006. Incorporating Speaker and Discourse Features into Speech Summarization. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics. Association for Computational Linguistics, pages 367–374. Ani Nenkova, Agustin Gravano, and Julia Hirschberg. 2008. High Frequency Word Entrainment in Spoken Dialogue. In Proceedings of the 46th annual meeting of the association for computational linguistics on human language technologies: Short papers. Association for Computational Linguistics, pages 169– 172. Tatsuro Oya and Giuseppe Carenini. 2014. Extractive Summarization and Dialogue Act Modeling on Email Threads: An Integrated Probabilistic Approach. In 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue. page 133. J´er´emy Perret, Stergos Afantenos, Nicholas Asher, and Mathieu Morey. 2016. Integer Linear Programming for Discourse Parsing. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 
Association for Computational Linguistics, San Diego, California, pages 99– 109. Korbinian Riedhammer, Benoit Favre, and Dilek Hakkani-T¨ur. 2010. Long Story Short - Global Unsupervised Models for Keyphrase Based Meeting Summarization. Speech Commun. 52(10):801–815. Rutger Rienks, Dirk Heylen, and E. van der Weijden. 2005. Argument Diagramming of Meeting Conversations. In A. Vinciarelli and J-M. Odobez, editors, International Workshop on Multimodal Multiparty Meeting Processing, MMMP 2005, part of the 7th International Conference on Multimodal Interfaces, ICMI 2005. Khashayar Rohanimanesh, Kedar Bellare, Aron Culotta, Andrew McCallum, and Michael L Wick. 2011. Samplerank: Training Factor Graphs with Atomic Gradients. In Proceedings of the 28th International Conference on Machine Learning (ICML11). pages 777–784. David A Smith and Jason Eisner. 2008. Dependency Parsing by Belief Propagation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 145–156. Andreas Stolcke, Klaus Ries, Noah Coccaro, Elizabeth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. 2000. Dialogue Act Modeling for Automatic Tagging and Recognition of Conversational Speech. Computational linguistics 26(3):339–373. Lu Wang and Claire Cardie. 2012. Focused Meeting Summarization via Unsupervised Relation Extraction. In Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Association for Computational Linguistics, Seoul, South Korea. Lu Wang and Claire Cardie. 2013. DomainIndependent Abstract Generation for Focused Meeting Summarization. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Sofia, Bulgaria, pages 1395–1405. 984
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 985–995 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1091 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 985–995 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1091 Argument Mining with Structured SVMs and RNNs Vlad Niculae Cornell University [email protected] Joonsuk Park Williams College [email protected] Claire Cardie Cornell University [email protected] Abstract We propose a novel factor graph model for argument mining, designed for settings in which the argumentative relations in a document do not necessarily form a tree structure. (This is the case in over 20% of the web comments dataset we release.) Our model jointly learns elementary unit type classification and argumentative relation prediction. Moreover, our model supports SVM and RNN parametrizations, can enforce structure constraints (e.g., transitivity), and can express dependencies between adjacent relations and propositions. Our approaches outperform unstructured baselines in both web comments and argumentative essay datasets. 1 Introduction Argument mining consists of the automatic identification of argumentative structures in documents, a valuable task with applications in policy making, summarization, and education, among others. The argument mining task includes the tightly-knit subproblems of classifying propositions into elementary unit types and detecting argumentative relations between the elementary units. The desired output is a document argumentation graph structure, such as the one in Figure 1, where propositions are denoted by letter subscripts, and the associated argumentation graph shows their types and support relations between them. Most annotation and prediction efforts in argument mining have focused on tree or forest structures (Peldszus and Stede, 2015; Stab and Gurevych, 2016), constraining argument structures to form one or more trees. This makes the problem computationally easier by enabling the use of maximum spanning tree–style parsing ap[ Calling a debtor at work is counter-intuitive; ]a [ if collectors are continuously calling someone at work, other employees may report it to the debtor’s supervisor. ]b [ Most companies have established rules about receiving or making personal calls during working hours. ]c [ If a collector or creditor calls a debtor on his/her cell phone and is informed that the debtor is at work, the call should be terminated. ]d [ No calls to employers should be allowed, ]e [ as this jeopardizes the debtor’s job. ]f b (VALUE) a (VALUE) d (POLICY) c (FACT) f (VALUE) e (POLICY) Figure 1: Example annotated CDCP comment.1 proaches. However, argumentation in the wild can be less well-formed. The argument put forth in Figure 1, for instance, consists of two components: a simple tree structure and a more complex graph structure (c jointly supports b and d). In this work, we design a flexible and highly expressive structured prediction model for argument mining, jointly learning to classify elementary units (henceforth propositions) and to identify the argumentative relations between them (henceforth links). 
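To make the target output concrete, the abridged argument graph of Figure 1 can be written down as plain data structures, as in the sketch below. The encoding is purely illustrative (it is not the format used by the released Marseille code), and the f→e link is inferred from the comment text rather than stated explicitly in the running text.

```python
# Illustrative encoding of the Figure 1 argument graph: proposition types plus
# directed support links as (source, target) pairs. The links c -> b, b -> a,
# and c -> d are stated in the running text; f -> e is an assumption based on
# the comment wording. Not the representation used by the released code.
proposition_types = {
    "a": "VALUE", "b": "VALUE", "c": "FACT",
    "d": "POLICY", "e": "POLICY", "f": "VALUE",
}
support_links = [("b", "a"), ("c", "b"), ("c", "d"), ("f", "e")]

# c supports two different propositions, so the structure is not a tree.
out_degree = {p: sum(1 for s, _ in support_links if s == p)
              for p in proposition_types}
print(out_degree["c"])  # 2
```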
By formulating argument mining as inference in a factor graph (Kschischang et al., 2001), our model (described in Section 4) can account for correlations between the two tasks, can consider second order link structures (e.g., in Figure 1, c →b →a), and can impose arbitrary constraints (e.g., transitivity). To parametrize our models, we evaluate two alternative directions: linear structured SVMs 1We describe proposition types (FACT, etc.) in Section 3. 985 (Tsochantaridis et al., 2005), and recurrent neural networks with structured loss, extending (Kiperwasser and Goldberg, 2016). Interestingly, RNNs perform poorly when trained with classification losses, but become competitive with the featureengineered structured SVMs when trained within our proposed structured learning model. We evaluate our approach on two argument mining datasets. Firstly, on our new Cornell eRulemaking Corpus – CDCP,2 consisting of argument annotations on comments from an eRulemaking discussion forum, where links don’t always form trees (Figure 1 shows an abridged example comment, and Section 3 describes the dataset in more detail). Secondly, on the UKP argumentative essays v2 (henceforth UKP), where argument graphs are annotated strictly as multiple trees (Stab and Gurevych, 2016). In both cases, the results presented in Section 5 confirm that our models outperform unstructured baselines. On UKP, we improve link prediction over the best reported result in (Stab and Gurevych, 2016), which is based on integer linear programming postprocessing. For insight into the strengths and weaknesses of the proposed models, as well as into the differences between SVM and RNN parameterizations, we perform an error analysis in Section 5.1. To support argument mining research, we also release our Python implementation, Marseille.3 2 Related work Our factor graph formulation draws from ideas previously used independently in parsing and argument mining. In particular, maximum spanning tree (MST) methods for arc-factored dependency parsing have been successfully used by McDonald et al. (2005) and applied to argument mining with mixed results by Peldszus and Stede (2015). As they are not designed for the task, MST parsers cannot directly handle proposition classification or model the correlation between proposition and link prediction—a limitation our model addresses. Using RNN features in an MST parser with a structured loss was proposed by Kiperwasser and Goldberg (2016); their model can be seen as a particular case of our factor graph approach, limited to link prediction with a tree structure constraint. Our models support multi-task learning for proposition classification, parameter2Dataset available at http://joonsuk.org. 3Available at https://github.com/vene/marseille. izing adjacent links with higher-order structures (e.g., c →b →a) and enforcing arbitrary constraints on the link structure, not limited to trees. Such higher-order structures and logic constraints have been successfully used for dependency and semantic parsing by Martins et al. (2013) and Martins and Almeida (2014); to our knowledge we are the first to apply them to argument mining, as well as the first to parametrize them with neural networks. Stab and Gurevych (2016) used an integer linear program to combine the output of independent proposition and link classifiers using a hand-crafted scoring formula, an approach similar to our baseline. 
Our factor graph method can combine the two tasks in a more principled way, as it fully learns the correlation between the two tasks without relying on hand-crafted scoring, and therefore can readily be applied to other argumentation datasets. Furthermore, our model can enforce the tree structure constraint, required on the UKP dataset, using MST cycle constraints used by Stab and Gurevych (2016), thanks to the AD3 inference algorithm (Martins et al., 2015). Sequence tagging has been applied to the related structured tasks of proposition identification and classification (Stab and Gurevych, 2016; Habernal and Gurevych, 2016; Park et al., 2015b); integrating such models is an important next step. Meanwhile, a new direction in argument mining explores pointer networks (Potash et al., 2016); a promising method, currently lacking support for tree structures and domain-specific constraints. 3 Data We release a new argument mining dataset consisting of user comments about rule proposals regarding Consumer Debt Collection Practices (CDCP) by the Consumer Financial Protection Bureau collected from an eRulemaking website, http:// regulationroom.org. Argumentation structures found in web discussion forums, such as the eRulemaking one we use, can be more free-form than the ones encountered in controlled, elicited writing such as (Peldszus and Stede, 2015). For this reason, we adopt the model proposed by Park et al. (2015a), which does not constrain links to form tree structures, but unrestricted directed graphs. Indeed, over 20% of the comments in our dataset exhibit local structures that would not be allowable in a tree. Possible link types are reason and evidence, and propo986 sition types are split into five fine-grained categories: POLICY and VALUE contain subjective judgements/interpretations, where only the former specifies a specific course of action to be taken. On the other hand, TESTIMONY and FACT do not contain subjective expressions, the former being about personal experience, or “anecdotal.” Lastly, REFERENCE covers URLs and citations, which are used to point to objective evidence in an online setting. In comparison, the UKP dataset (Stab and Gurevych, 2016) only makes the syntactic distinction between CLAIM, MAJOR CLAIM, and PREMISE types, but it also includes attack links. The permissible link structure is stricter in UKP, with links constrained in annotation to form one or more disjoint directed trees within each paragraph. Also, since web arguments are not necessarily fully developed, our dataset has many argumentative propositions that are not in any argumentation relations. In fact, it isn’t unusual for comments to have no argumentative links at all: 28% of CDCP comments have no links, unlike UKP, where all essays have complete argument structures. Such comments with no links make the problem harder, emphasizing the importance of capturing the lack of argumentative support, not only its presence. 3.1 Annotation results Each user comment was annotated by two annotators, who independently annotated the boundaries and types of propositions, as well as the links among them.4 To produce the final corpus, a third annotator manually resolved the conflicts,5 and two automatic preprocessing steps were applied: we take the link transitive closure, and we remove a small number of nested propositions.6 The resulting dataset contains 731 comments, consisting of about 3800 sentences (≈4700 propositions) and 88k words. 
Out of the 43k possible pairs of propositions, links are present between only 1300 (roughly 3%). In comparison, UKP has fewer documents (402), but they are longer, with a total of 7100 sentences (6100 propositions) and 147k 4The annotators used the GATE annotation tool (Cunningham et al., 2011). 5Inter-annotator agreement is measured with Krippendorf’s α (Krippendorff, 1980) with respect to elementary unit type (α=64.8%) and links (α=44.1%). A separate paper describing the dataset is under preparation. 6When two propositions overlap, we keep the one that results in losing the fewest links. For generality, we release the dataset without this preprocessing, and include code to reproduce it; we believe that handling nested argumentative units is an important direction for further research. words. Since UKP links only occur within the same paragraph and propositions not connected to the argument are removed in a preprocessing step, link prediction is less imbalanced in UKP, with 3800 pairs of propositions being linked out of a total of 22k (17%). We reserve a test set of 150 documents (973 propositions, 272 links) from CDCP, and use the provided 80-document test split from UKP (1266 propositions, 809 links). 4 Structured learning for argument mining 4.1 Preliminaries Binary and multi-class classification have been applied with some success to proposition and link prediction separately, but we seek a way to jointly learn the argument mining problem at the document level, to better model contextual dependencies and constraints. We therefore turn to structured learning, a framework that provides the desired level of expressivity. In general, learning from a dataset of documents xi ∈X and their associated labels yi ∈Y involves seeking model parameters w that can “pick out” the best label under a scoring function f: ˆy := arg maxy∈Y f(x, y; w). (1) Unlike classification or regression, where X is usually a feature space Rd and Y ⊆R (e.g., we predict an integer class index or a probability), in structured learning, more complex inputs and outputs are allowed. This makes the arg max in Equation 1 impossible to evaluate by enumeration, so it is desirable to find models that decompose over smaller units and dependencies between them; for instance, as factor graphs. In this section, we give a factor graph description of our proposed structured model for argument mining. 4.2 Model description An input document is a string of words with proposition offsets delimited. We denote the propositions in a document by {a, b, c, ...} and the possible directed link between a and b as a →b. The argument structure we seek to predict consists of the type of each proposition ya ∈P and a binary label for each link ya→b ∈R = {on, off}.7 7For simplicity and comparability, we follow Stab and Gurevych (2016) in using binary link labels even if links could be of different types. This can be addressed in our model by incorporating “labeled link” factors. 987 a b c a →b b →c a →c a ←b b ←c a ←c (a) CDCP a b c a →b b →c a →c a ←b b ←c a ←c (b) UKP Figure 2: Factor graphs for a document with three propositions (a, b, c) and the six possible edges between them, and some of the factors used, illustrating differences and similarities between our models for the two datasets. Unary factors are light gray; compatibility factors are black. Factors not part of the basic model have curved edges: higher-order factors are orange and on the right; link structure factors are hollow, as that they don’t have any parameters. 
Strict constraint factors are omitted for simplicity. The possible proposition types P differ for the two datasets; such differences are documented in Table 1. As we describe the variables and factors constituting a document’s factor graph, we shall refer to Figure 2 for illustration. Unary potentials. Each proposition a and each link a →b has a corresponding random variable in the factor graph (the circles in Figure 2). To encode the model’s belief in each possible value for these variables, we parametrize the unary factors (gray boxes in Figure 2) with unary potentials: φ(a) ∈R|P| is a score of ya for each possible proposition type. Similarly, link unary potentials φ(a →b) ∈R|R| are scores for ya→b being on/off. Without any other factors, this would amount to independent classifiers for each task. Compatibility factors. For every possible link a →b, the variables (a, b, a →b) are bound by a dense factor scoring their joint assignment (the black boxes in Figure 2). Such a factor could automatically learn to encourage links from compatible types (e.g., from TESTIMONY to POLICY) or discourage links between less compatible ones (e.g., from FACT to TESTIMONY). In the simplest form, this factor would be parametrized as a tensor T ∈R|P|×|P|×|R|, with tijk retaining the score of a source proposition of type i to be (k = on) or not to be (k = off) in a link with a proposition of type j. For more flexibility, we parametrize this factor with compatibility features depending only on simple structure: tijk becomes a vector, and the score of configuration (i, j, k) is given by v⊤ abtijk where vab consists of three binary features: • bias: a constant value of 1, allowing T to learn a base score for a label configuration (i, j, k), as in the simple form above, • adjacency: when there are no other propositions between the source and the target, • order: when the source precedes the target. Second order factors. Local argumentation graph structures such as a →b →c might be modeled better together rather than through separate link factors for a →b and b →c. As in higher-order structured models for semantic and dependency parsing (Martins et al., 2013; Martins and Almeida, 2014), we implement three types of second order factors: grandparent (a →b →c), sibling (a ←b →c), and co-parent (a →b ← c). Not all of these types of factors make sense on all datasets: as sibling structures cannot exist in directed trees, we don’t use sibling factors on UKP. On CDCP, by transitivity, every grandparent structure implies a corresponding sibling, so it is sufficient to parametrize siblings. This difference between datasets is emphasized in Figure 2, where one example of each type of factor is pictured on the right side of the graphs (orange boxes with curved edges): on CDCP we illustrate a coparent factor (top right) and a sibling factor (bot988 tom right), while on UKP we show a co-parent factor (top right) and a grandparent factor (bottom right). We call these factors second order because they involve two link variables, scoring the joint assignment of both links being on. Valid link structure. The global structure of argument links can be further constrained using domain knowledge. We implement this using constraint factors; these have no parameters and are denoted by empty boxes in Figure 2. In general, well-formed arguments should be cycle-free. In the UKP dataset, links form a directed forest and can never cross paragraphs. 
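These well-formedness conditions can be checked post hoc on a predicted link set, as in the sketch below (acyclicity, and, for the UKP-style setting, that no link crosses a paragraph boundary). This is an illustration only: in the model itself the conditions are enforced during inference by the constraint factors described next.

```python
# Sketch of post-hoc validity checks matching the structural conditions just
# described: the link graph should be cycle-free, and in the UKP-style setting
# no link may cross a paragraph boundary. Toy propositions and links.
def has_cycle(props, links):
    children = {p: [t for s, t in links if s == p] for p in props}
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {p: WHITE for p in props}

    def visit(p):
        color[p] = GRAY
        for t in children[p]:
            if color[t] == GRAY or (color[t] == WHITE and visit(t)):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in props)

def crosses_paragraph(links, paragraph_of):
    return any(paragraph_of[s] != paragraph_of[t] for s, t in links)

props = ["a", "b", "c", "d"]
links = [("c", "b"), ("b", "a"), ("c", "d")]
paragraph_of = {"a": 0, "b": 0, "c": 0, "d": 1}
print(has_cycle(props, links))                 # False
print(crosses_paragraph(links, paragraph_of))  # True: c -> d crosses
```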
This particular constraint can be expressed as a series of tree factors,8 one for each paragraph (the factor connected to all link variables in Figure 2). In CDCP, links do not form a tree, but we use logic constraints to enforce transitivity (top left factor in Figure 2) and to prevent symmetry (bottom left); the logic formulas implemented by these factors are described in Table 1. Together, the two constraints have the desirable side effect of preventing cycles. Strict constraints. We may include further domain-specific constraints into the model, to express certain disallowed configurations. For instance, proposition types that appear in CDCP data can be ordered by the level of objectivity (Park et al., 2015a), as shown in Table 1. In a wellformed argument, we would want to see links from more objective to equally or less objective propositions: it’s fine to provide FACT as reason for VALUE, but not the other way around. While the training data sometimes violates this constraint, enforcing it might provide a useful inductive bias. Inference. The arg max in Equation 1 is a MAP over a factor graph with cycles and many overlapping factors, including logic factors. While exact inference methods are generally unavailable, our setting is perfectly suited for the Alternating Directions Dual Decomposition (AD3) algorithm: approximate inference on expressive factor graphs with overlapping factors, logic constraints, and generic factors (e.g., directed tree factors) defined through maximization oracles (Martins et al., 2015). When AD3 returns an integral solution, it is globally optimal, but when solutions are frac8A tree factor regards each bound variable as an edge in a graph and assigns −∞scores to configurations that are not valid trees. For inference, we can use maximum spanning arborescence algorithms such as Chu-Liu/Edmonds. tional, several options are available. At test time, for analysis, we retrieve exact solutions using the branch-and-bound method. At training time, however, fractional solutions can be used as-is; this makes better use of each iteration and actually increases the ratio of integral solutions in future iterations, as well as at test time, as proven by Meshi et al. (2016). We also find that after around 15 training iterations with fractional solutions, over 99% of inference calls are integral. Learning. We train the models by minimizing the structured hinge loss (Taskar et al., 2004): X (x,y)∈D max y′∈Y(f(x, y′; w) + ρ(y, y′)) −f(x, y; w) (2) where ρ is a configurable misclassification cost. The max in Equation 2 is not the same as the one used for prediction, in Equation 1. However, when the cost function ρ decomposes over the variables, cost-augmented inference amounts to regular inference after augmenting the potentials accordingly. We use a weighted Hamming cost: ρ(y, ˆy) := X v ρ(yv)I[yv = ˆyv] where v is summed over all variables in a document {a} ∪{a →b}, and ρ(yv) is a misclassification cost. We assign uniform costs ρ to 1 for all mistakes except false-negative links, where we use higher cost proportional to the class imbalance in the training split, effectively giving more weight to positive links during training. 4.3 Argument structure SVM One option for parameterizing the potentials of the unary and higher-order factors is with linear models, using proposition, link, and higher-order features. 
This gives birth to a linear structured SVM (Tsochantaridis et al., 2005), which, when using l2 regularization, can be trained efficiently in the dual using the online block-coordinate FrankWolfe algorithm of Lacoste-Julien et al. (2013), as implemented in the pystruct library (M¨uller and Behnke, 2014). This algorithm is more convenient than subgradient methods, as it does not require tuning a learning rate parameter. Features. For unary proposition and link features, we faithfully follow Stab and Gurevych (2016, Tables 9 and 10): proposition features are 989 Model part CDCP dataset UKP dataset proposition types REFERENCE ≻TESTIMONY ≻FACT ≻VALUE ≻POLICY CLAIM, MAJOR CLAIM, PREMISE links all possible within each paragraph 2nd order factors siblings, co-parents grandparents, co-parents link structure transitive acyclic: • a →b & b →c =⇒a →c • ATMOSTONE(a →b, b →a) directed forest: • TREEFACTOR over each paragraph • zero-potential “root” links a →∗ strict constraints link source must be as least as objective as the target: a →b =⇒a ⪰b link source must be premise: a →b =⇒a = PREMISE Table 1: Instantiation of model design choices for each dataset. lexical (unigrams and dependency tuples), structural (token statistics and proposition location), indicators (from hand-crafted lexicons), contextual, syntactic (subclauses, depth, tense, modal, and POS), probability, discourse (Lin et al., 2014), and average GloVe embeddings (Pennington et al., 2014). Link features are lexical (unigrams), syntactic (POS and productions), structural (token statistics, proposition statistics and location features), hand-crafted indicators, discourse triples, PMI, and shared noun counts. Our proposed higher-order factors for grandparent, co-parent, and sibling structures require features extracted from a proposition triplet a, b, c. In dependency and semantic parsing, higher-order factors capture relationships between words, so sparse indicator features can be efficiently used. In our case, since propositions consist of many words, BOW features may be too noisy and too dense; so for simplicity we again take a cue from the link-specific features used by Stab and Gurevych (2016). Our higher-order factor features are: same sentence indicators (for all 3 and for each pair), proposition order (one for each of the 6 possible orderings), Jaccard similarity (between all 3 and between each pair), presence of any shared nouns (between all 3 and between each pair), and shared noun ratios: nouns shared by all 3 divided by total nouns in each proposition and each pair, and shared nouns between each pair with respect to each proposition. Up to vocabulary size difference, our total feature dimensionality is approximately 7000 for propositions and 2100 for links. The number of second order features is 35. Hyperparameters. We pick the SVM regularization parameter C ∈{0.001, 0.003, 0.01, 0.03, 0.1, 0.3} by k-fold cross validation at document level, optimizing for the average between link and proposition F1 scores. 4.4 Argument structure RNN Neural network methods have proven effective for natural language problems even with minimalto-no feature engineering. Inspired by the use of LSTMs (Hochreiter and Schmidhuber, 1997) for MST dependency parsing by Kiperwasser and Goldberg (2016), we parametrize the potentials in our factor graph with an LSTM-based neural network,9 replacing MST inference with the more general AD3 algorithm, and using relaxed solutions for training when inference is inexact. 
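Both parametrizations are trained with the structured hinge loss of Equation 2. Because the weighted Hamming cost decomposes over variables, the loss-augmented arg max reduces to ordinary MAP inference over potentials augmented with the per-variable costs; the sketch below illustrates only that augmentation step, with toy potentials and the AD3 call itself omitted.

```python
# Sketch of cost-augmented inference for the structured hinge loss (Eq. 2):
# since the weighted Hamming cost decomposes over variables, we add each
# variable's misclassification cost rho(y_v) to all of its non-gold labels
# and then run ordinary MAP inference. Potentials and costs are toy values.
import numpy as np

def cost_augment(potentials, gold, costs):
    """potentials: (n_vars, n_labels) scores; gold: gold label index per
    variable; costs: misclassification cost per variable."""
    augmented = potentials.copy()
    for v, (g, c) in enumerate(zip(gold, costs)):
        augmented[v] += c      # add the cost to every label of variable v...
        augmented[v, g] -= c   # ...then take it back off the gold label
    return augmented

# two link variables with label order [off, on]
link_potentials = np.array([[0.2, 0.1],
                            [0.0, 0.4]])
gold_labels = [0, 1]   # first link is off, second link is on
# false-negative links are penalized more heavily: a gold "on" link gets a
# larger cost (4.0 here is a stand-in for the class-imbalance ratio)
costs = [4.0 if g == 1 else 1.0 for g in gold_labels]
print(cost_augment(link_potentials, gold_labels, costs))
# [[0.2 1.1]
#  [4.  0.4]]
```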
We extract embeddings of all words with a corpus frequency > 1, initialized with GloVe word vectors. We use a deep bidirectional LSTM to encode contextual information, representing a proposition a as the average of the LSTM outputs of its words, henceforth denoted ↔ a. Proposition potentials. We apply a multi-layer perceptron (MLP) with rectified linear activations to each proposition, with all layer dimensions equal except the final output layer, which has size |P| and is not passed through any nonlinearities. Link potentials. To score a dependency a →b, Kiperwasser and Goldberg (2016) pass the concatenation [ ↔ a; ↔ b] through an MLP. After trying this, we found slightly better performance by first passing each proposition through a slot-specific dense layer a := σsrc( ↔ a), b := σtrg( ↔ b)  followed by a bilinear transformation: φon(a →b) := a ⊤W b + w⊤ srca + w⊤ trgb + w(on) 0 . Since the bilinear expression returns a scalar, but the link potentials must have a value for both the on and off states, we set the full potential to φ(a →b) := [φon(a →b), w(off) 0 ] where w(off) 0 is a learned scalar bias. We initialize W to the diagonal identity matrix. 9We use the dynet library (Neubig et al., 2017). 990 Second order potentials. Grandparent potentials φ(a →b →c) score two adjacent directed edges, in other words three propositions. We again first pass each proposition representation through a slot-specific dense layer. We implement a multilinear scorer analogously to the link potentials: φ(a →b →c) := X i,j,k aibjckwijk where W = (w)ijk is a third-order cube tensor. To reduce the large numbers of parameters, we implicitly represent W as a rank r tensor: wijk = Pr s=1 u(1) is u(2) js u(3) ks . Notably, this model captures only third-order interactions between the representation of the three propositions. To capture first-order “bias” terms, we could include slotspecific linear terms, e.g., w⊤ a a; but to further capture quadratic backoff effects (for instance, if two propositions carry a strong signal of being siblings regardless of their parent), we would require quadratically many parameters. Instead of explicit lower-order terms, we propose augmenting a, b, and c with a constant feature of 1, which has approximately the same effect, while benefiting from the parameter sharing in the low-rank factorization; an effect described by Blondel et al. (2016). Siblings and co-parents factors are similarly parametrized with their own tensors. Hyperparameters. We perform grid search using k-fold document-level cross-validation, tuning the dropout probability in the dense MLP layers over {0.05, 0.1, 0.15, 0.2, 0.25} and the optimal number of passes over the training data over {10, 25, 50, 75, 100}. We use 2 layers for the LSTM and the proposition classifier, 128 hidden units in all layers, and a multilinear decomposition with rank r = 16, after preliminary CV runs. 4.5 Baseline models We compare our proposed models to equivalent independent unary classifiers. The unary-only version of a structured SVM is an l2-regularized linear SVM.10 For the RNN, we compute unary potentials in the same way as in the structured model, but apply independent hinge losses at each variable, instead of the global structured hinge loss. Since the RNN weights are shared, this is a form of multi-task learning. The baseline predictions can 10We train our SVM using SAGA (Defazio et al., 2014) in lightning (Blondel and Pedregosa, 2016). 
be interpreted as unary potentials, therefore we can simply round their output to the highest scoring labels, or we can, alternatively, perform testtime inference, imposing the desired structure. 5 Results We evaluate our proposed models on both datasets. For model selection and development we used kfold cross-validation at document level: on CDCP we set k = 3 to avoid small validation folds, while on UKP we follow Stab and Gurevych (2016) setting k = 5. We compare our proposed structured learning systems (the linear structured SVM and the structured RNN) to the corresponding baseline versions. We organize our experiments in three incremental variants of our factor graph: basic, full, and strict, each with the following components:11 component basic full strict (baseline) unaries ✓ ✓ ✓ ✓ compat. factors ✓ ✓ ✓ compat. features ✓ ✓ higher-order ✓ ✓ link structure ✓ ✓ ✓ strict constraints ✓ ✓ Following Stab and Gurevych (2016), we compute F1 scores at proposition and link level, and also report their average as a summary of overall performance.12 The results of a single prediction run on the test set are displayed in Table 2. The overall trend is that training using a structured objective is better than the baseline models, even when structured inference is applied on the baseline predictions. On UKP, for link prediction, the linear baseline can reach good performance when using inference, similar to the approach of Stab and Gurevych (2016), but the improvement in proposition prediction leads to higher overall F1 for the structured models. Meanwhile, on the more difficult CDCP setting, performing inference on the baseline output is not competitive. While feature engineering still outperforms our RNN model, we find that RNNs shine on proposition classification, especially on UKP, and that structured training can make them competitive, reducing their observed lag on link prediction (Katiyar and Cardie, 2016), possibly through mitigating class imbalance. 11Components are described in Section 4. The baselines with inference support only unaries and factors with no parameters, as indicated in the last column. 12For link F1 scores, however, we find it more intuitive to only consider retrieval of positive links rather than macroaveraged two-class scores. 
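The link-evaluation convention of footnote 12 can be made concrete with the short sketch below: precision, recall, and F1 are computed as retrieval of the positive links only; the gold and predicted link sets are toy examples.

```python
# Sketch of link evaluation as retrieval of positive (on) links, rather than
# macro-averaging over the on/off classes. Toy gold and predicted link sets.
def link_prf(gold_links, pred_links):
    gold, pred = set(gold_links), set(pred_links)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = [("c", "b"), ("b", "a"), ("c", "d")]
pred = [("c", "b"), ("b", "a"), ("f", "e")]
print(link_prf(gold, pred))  # (0.667, 0.667, 0.667) up to rounding
```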
991 Baseline Structured SVM RNN SVM RNN Metric basic full strict basic full strict basic full strict basic full strict CDCP dataset Average 47.4 47.3 47.9 40.8 38.0 38.0 48.1 49.3 50.0 43.5 33.5 38.2 Link (272) 22.0 21.9 23.8 9.9 12.8 12.8 24.7 25.1 26.7 14.4 14.6 10.5 Proposition 72.7 72.7 72.0 71.8 63.2 63.2 71.6 73.5 73.2 72.7 52.4 65.9 VALUE (491) 75.3 75.3 74.4 74.1 74.8 74.8 73.4 75.7 76.4 73.7 73.1 69.7 POLICY (153) 78.7 78.7 78.5 74.3 72.2 72.2 72.3 77.3 76.8 73.9 74.4 76.8 TESTIMONY (204) 70.3 70.3 68.6 74.6 71.8 71.8 69.8 71.7 71.5 74.2 72.3 75.8 FACT (124) 39.2 39.2 38.3 35.8 30.5 30.5 42.4 42.5 41.3 41.5 42.2 40.5 REFERENCE (1) 100.0 100.0 100.0 100.0 66.7 66.7 100.0 100.0 100.0 100.0 0.0 66.7 UKP dataset Average 64.7 66.6 66.5 58.7 57.4 58.7 67.1 68.9 67.1 59.0 63.6 64.7 Link (809) 55.8 59.7 60.3 44.8 43.8 44.0 56.9 60.1 56.9 44.1 50.4 50.1 Proposition 73.5 73.5 72.6 72.6 70.9 73.3 77.2 77.6 77.3 74.0 76.9 79.3 MAJOR CLAIM (153) 76.7 76.7 77.6 81.4 75.1 81.3 77.0 78.2 80.0 83.6 84.6 88.3 CLAIM (304) 55.4 55.4 52.0 51.7 52.7 53.5 64.3 64.5 62.8 53.2 60.2 62.0 PREMISE (809) 88.4 88.4 88.3 84.8 84.8 85.2 90.3 90.2 89.2 85.0 85.9 87.6 Table 2: Test set F1 scores for link and proposition classification, as well as their average, on the two datasets. The number of test instances is shown in parentheses; best scores on overall tasks are in bold. 5.1 Discussion and analysis Contribution of compatibility features. The compatibility factor in our model can be visualized as conditional odds ratios given the source and target proposition types. Since there are only four possible configurations of the compatibility features, we can plot all cases in Figure 3, alongside the basic model. Not using compatibility features, the basic model can only learn whether certain configurations are more likely than others (e.g. a REFERENCE supporting another REFERENCE is unlikely, while a REFERENCE supporting a FACT is more likely; essentially a soft version of our domain-specific strict constraints. The full model with compatibility features is finer grained, capturing, for example, that links from REFERENCE to FACT are more likely when the reference comes after, or that links from VALUE to POLICY are extremely likely only when the two are adjacent. Proposition errors. The confusion matrices in Figure 4 reveal that the most common confusion is misclassifying FACT as VALUE. The strongest difference between the various models tested is that the RNN-based models make this error less often. For instance, in the proposition: And the single most frequently used excuse of any debtor is “I didn’t receive the letter/invoice/statement” the pronouns in the nested quote may be mistaken for subjectivity, leading to the structured SVMs predictions of VALUE or TESTIMONY, while the basic structured RNN correctly classifies it as FACT. Link errors. While structured inference certainly helps baselines by preventing invalid structures such as cycles, it still depends on local decisions, losing to fully structured training in cases where joint proposition and link decisions are needed. For instance, in the following conclusion of an UKP essay, the annotators found no links: In short, [ the individual should finance his or her education ]a because [ it is a personal choice. ]b Otherwise, [ it would cause too much cost from taxpayers and the government. ]c Indeed, no reasons are provided, but baseline are misled by the connectives: the SVM baseline outputs that b and c are PREMISEs supporting the CLAIM a. 
The full structured SVM combines the two tasks and correctly recognizes the link structure. Linear SVMs are still a very good baseline, but they tend to overgenerate links due to class imbalance, even if we use class weights during training. Surprisingly, RNNs are at the opposite end, being extremely conservative, and getting the highest precision among the models. On CDCP, where the number of true links is 272, the linear baseline with strict inference predicts 796 links with a precision of only 16%, while the strict structured RNN only predicts 52 links, with 33% precision; the example in Figure 5 illustrates this. In terms of higher-order structures, we find that using higherorder factors increases precision, at a cost in recall. 992 P V F T R Target Policy Value Fact Testimony Reference Source -0.3 -0.1 -0.1 -0.2 -0.1 +0.1 -0.0 -0.1 -0.1 -0.2 -0.0 +0.0 -0.1 -0.2 -0.1 -0.2 -0.1 -0.1 -0.3 +0.1 -0.3 +0.0 +0.6 +0.1 -0.4 Non-adjacent, trg precedes src P V F T R Target -0.2 -0.3 -0.0 -0.2 -0.2 -0.4 -0.3 -0.2 -0.1 -0.3 -0.3 -0.1 +0.0 -0.0 -0.3 -0.3 -0.0 -0.1 +0.1 -0.2 -0.2 -0.0 +0.4 +0.1 -0.4 Non-adjacent, src precedes trg P V F T R Target +0.6 +0.9 +0.3 +0.1 -0.1 +2.2 +1.7 +1.0 +0.9 -0.1 +2.0 +1.7 +1.0 +0.6 -0.1 +1.5 +1.5 +0.9 +0.9 +0.1 -0.2 +0.1 +0.5 +0.1 -0.8 Adjacent, trg precedes src P V F T R Target +0.7 +0.7 +0.3 +0.1 -0.2 +1.7 +1.4 +0.9 +0.9 -0.2 +1.7 +1.5 +1.1 +0.7 -0.3 +1.4 +1.5 +0.9 +1.4 -0.1 -0.1 +0.1 +0.3 +0.1 -0.9 Adjacent, src precedes trg P V F T R Target -0.8 -0.7 -1.1 -1.0 -0.4 +0.9 +0.3 -0.5 -0.5 -0.5 +0.6 +0.6 -0.2 -0.5 -0.3 +0.1 +0.3 -0.3 -0.2 -0.1 -0.7 -0.0 +1.3 -0.0 -1.0 Basic (no compatibility features) Figure 3: Learned conditional log-odds log p(on|·) p(off|·), given the source and target proposition types and compatibility feature settings. First four figures correspond to the four possible settings of the compatibility features in the full structured SVM model. For comparison, the rightmost figure shows the same parameters in the basic structured SVM model, which does not use compatibility features. P V F T R Predicted P V F T R True 0.77 0.10 0.06 0.07 0.00 0.05 0.75 0.10 0.10 0.00 0.02 0.50 0.39 0.09 0.00 0.01 0.20 0.06 0.73 0.00 0.00 0.00 0.00 0.00 1.00 Baseline SVM basic P V F T R Predicted 0.76 0.16 0.05 0.04 0.00 0.05 0.76 0.11 0.08 0.00 0.04 0.42 0.44 0.10 0.00 0.01 0.21 0.06 0.72 0.00 0.00 0.00 0.00 0.00 1.00 Structured SVM full P V F T R Predicted P V F T R True 0.72 0.14 0.12 0.02 0.00 0.04 0.74 0.15 0.07 0.00 0.06 0.48 0.40 0.06 0.00 0.02 0.20 0.06 0.72 0.00 0.00 0.00 0.00 0.00 1.00 Baseline RNN basic P V F T R Predicted 0.73 0.17 0.10 0.00 0.00 0.05 0.71 0.15 0.08 0.00 0.07 0.38 0.48 0.06 0.00 0.01 0.19 0.08 0.73 0.00 0.00 0.00 0.00 0.00 1.00 Structured RNN basic Figure 4: Normalized confusion matrices for proposition type classification. This is most beneficial for the 856 co-parent structures in the UKP test set: the full structured SVM has 53% F1, while the basic structured SVM and the basic baseline get 47% and 45% respectively. On CDCP, while higher-order factors help, performance on siblings and co-parents is below 10% F1 score. This is likely due to link sparsity and suggests plenty of room for further development. 6 Conclusions and future work We introduce an argumentation parsing model based on AD3 relaxed inference in expressive factor graphs, experimenting with both linear struc[ I think the cost of education needs to be reduced (...) or repayment plans need to be income based. 
]a [ As far as consumer protection, legal aid needs to be made available, affordable and effective, ]b [ and consumers need to take time to really know their rights and stop complaining about harassment ]c [ because that’s a completely different cause of action than restitution. ]d a (P) c (P) b (P) d (V) (a) Ground truth a (V) c (V) b (P) d (V) (b) Baseline linear strict a (P) c (V) b (P) d (V) (c) Structured linear full a (P) c (P) b (P) d (F) (d) Structured RNN strict Figure 5: Predictions on a CDCP comment where the structured RNN outperforms the other models. tured SVMs and structured RNNs, parametrized with higher-order factors and link structure constraints. We demonstrate our model on a new argumentation mining dataset with more permissive argument structure annotation. Our model also achieves state-of-the-art link prediction performance on the UKP essays dataset. Future work. Stab and Gurevych (2016) found polynomial kernels useful for modeling feature interactions, but kernel structured SVMs scale poorly, we intend to investigate alternate ways to capture feature interactions. While we focus on monological argumentation, our model could be extended to dialogs, for which argumentation theory thoroughly motivates non-tree structures (Afantenos and Asher, 2014). 993 Acknowledgements We are grateful to Andr´e Martins, Andreas M¨uller, Arzoo Katyiar, Chenhao Tan, Felix Wu, Jack Hessel, Justine Zhang, Mathieu Blondel, Tianze Shi, Tobias Schnabel, and the rest of the Cornell NLP seminar for extremely helpful discussions. We thank the anonymous reviewers for their thorough and well-argued feedback. References Stergos Afantenos and Nicholas Asher. 2014. Counterargumentation and discourse: A case study. In Proceedings of ArgNLP. Mathieu Blondel, Masakazu Ishihata, Akinori Fujino, and Naonori Ueda. 2016. Polynomial networks and factorization machines: New insights and efficient training algorithms. In Proceedings of ICML. Mathieu Blondel and Fabian Pedregosa. 2016. Lightning: large-scale linear classification, regression and ranking in Python. https://doi.org/10.5281/zenodo.200504. Hamish Cunningham, Diana Maynard, Kalina Bontcheva, Valentin Tablan, Niraj Aswani, Ian Roberts, Genevieve Gorrell, Adam Funk, Angus Roberts, Danica Damljanovic, Thomas Heitz, Mark A. Greenwood, Horacio Saggion, Johann Petrak, Yaoyong Li, and Wim Peters. 2011. Text Processing with GATE (Version 6). Aaron Defazio, Francis Bach, and Simon LacosteJulien. 2014. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Proceedings of NIPS. Ivan Habernal and Iryna Gurevych. 2016. Argumentation mining in user-generated web discourse. Computational Linguistics . Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735–1780. Arzoo Katiyar and Claire Cardie. 2016. Investigating LSTMs for joint extraction of opinion entities and relations. In Proceedings of ACL. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. arXiv:1603.04351 preprint. Klaus Krippendorff. 1980. Content Analysis: An Introduction to Its Methodology. Commtext. Sage. Frank R Kschischang, Brendan J Frey, and H-A Loeliger. 2001. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory 47(2):498–519. Simon Lacoste-Julien, Martin Jaggi, Mark Schmidt, and Patrick Pletscher. 2013. Block-coordinate Frank-Wolfe optimization for structural SVMs. In Proceedings of ICML. 
Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2014. A PDTB-styled end-to-end discourse parser. Natural Language Engineering 20(02):151–184. Andr´e FT Martins and Mariana SC Almeida. 2014. Priberam: A Turbo Semantic Parser with second order features. In Proceedings of SemEval. Andr´e FT Martins, Miguel B Almeida, and Noah A Smith. 2013. Turning on the Turbo: Fast thirdorder non-projective Turbo Parsers. In Proceedings of ACL. Andr´e FT Martins, M´ario AT Figueiredo, Pedro MQ Aguiar, Noah A Smith, and Eric P Xing. 2015. AD3: Alternating directions dual decomposition for MAP inference in graphical models. Journal of Machine Learning Research 16:495–545. Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajiˇc. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of EMNLP. Ofer Meshi, Mehrdad Mahdavi, Adrian Weller, and David Sontag. 2016. Train and test tightness of LP relaxations in structured prediction. In Proceedings of ICML. Andreas C M¨uller and Sven Behnke. 2014. PyStruct: learning structured prediction in Python. Journal of Machine Learning Research 15(1):2055–2060. Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. DyNet: The dynamic neural network toolkit. arXiv:1701.03980 preprint. Joonsuk Park, Cheryl Blake, and Claire Cardie. 2015a. Toward machine-assisted participation in eRulemaking: An argumentation model of evaluability. In Proceedings of ICAIL. Joonsuk Park, Arzoo Katiyar, and Bishan Yang. 2015b. Conditional random fields for identifying appropriate types of support for propositions in online user comments. In Proceedings of the 2nd Workshop on Argumentation Mining. Association for Computational Linguistics, Denver, CO, pages 39–44. Andreas Peldszus and Manfred Stede. 2015. Joint prediction in MST-style discourse parsing for argumentation mining. In Proceedings of EMNLP. 994 Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP. Peter Potash, Alexey Romanov, and Anna Rumshisky. 2016. Here’s my point: Argumentation mining with pointer networks. arXiv:1612.08994 preprint. Christian Stab and Iryna Gurevych. 2016. Parsing argumentation structures in persuasive essays. arXiv:1604.07370 preprint. Ben Taskar, Carlos Guestrin, and Daphne Koller. 2004. Max-margin Markov networks. In Proceedings of NIPS. Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann, and Yasemin Altun. 2005. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research 6(Sep):1453–1484. 995
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 996–1005 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1092 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 996–1005 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1092 Neural Discourse Structure for Text Categorization Yangfeng Ji and Noah A. Smith Paul G. Allen School of Computer Science & Engineering University of Washington Seattle, WA 98195, USA {yangfeng,nasmith}@cs.washington.edu Abstract We show that discourse structure, as defined by Rhetorical Structure Theory and provided by an existing discourse parser, benefits text categorization. Our approach uses a recursive neural network and a newly proposed attention mechanism to compute a representation of the text that focuses on salient content, from the perspective of both RST and the task. Experiments consider variants of the approach and illustrate its strengths and weaknesses. 1 Introduction Advances in text categorization have the potential to improve systems for analyzing sentiment, inferring authorship or author attributes, making predictions, and many more. Several past researchers have noticed that methods that reason about the relative salience or importance of passages within a text can lead to improvements (Ko et al., 2004). Latent variables (Yessenalina et al., 2010), structured-sparse regularizers (Yogatama and Smith, 2014), and neural attention models (Yang et al., 2016) have all been explored. Discourse structure, which represents the organization of a text as a tree (for an example, see Figure 1), might provide cues for the importance of different parts of a text. Some promising results on sentiment classification tasks support this idea: Bhatia et al. (2015) and Hogenboom et al. (2015) applied hand-crafted weighting schemes to the sentences in a document, based on its discourse structure, and showed benefit to sentiment polarity classification. In this paper, we investigate the value of discourse structure for text categorization more broadly, considering five tasks, through the use of a recursive neural network built on an R Contrast Elaboration A B Explanation C Joint D Constrast E F [Although the food was amazing]A [and I was in love with the spicy pork burrito,]B [the service was really awful.]C [We watched our waiter serve himself many drinks.]D [He kept running into the bathroom]E [instead of grabbing our bill.]F Figure 1: A manually constructed example of the RST (Mann and Thompson, 1988) discourse structure on a text. automatically-derived document parse from a topperforming, open-source discourse parser, DPLP (Ji and Eisenstein, 2014). Our models learn to weight the importance of a document’s sentences, based on their positions and relations in the discourse tree. We introduce a new, unnormalized attention mechanism to this end. Experimental results show that variants of our model outperform prior work on four out of five tasks considered. Our method unsurprisingly underperforms on the fifth task, making predictions about legislative bills—a genre in which discourse conventions are quite different from those in the discourse parser’s training data. 
Further experiments show the effect of discourse parse quality on text categorization performance, suggesting that future improvements to discourse parsing will pay off for text categorization, and validate our new attention mechanism. 996 Our implementation is available at https:// github.com/jiyfeng/disco4textcat. 2 Background: Rhetorical Structure Theory Rhetorical Structure Theory (RST; Mann and Thompson, 1988) is a theory of discourse that has enjoyed popularity in NLP. RST posits that a document can be represented by a tree whose leaves are elementary discourse units (EDUs, typically clauses or sentences). Internal nodes in the tree correspond to spans of sentences that are connected via discourse relations such as CONTRAST and ELABORATION. In most cases, a discourse relation links adjacent spans denoted “nucleus” and “satellite,” with the former more essential to the writer’s purpose than the latter.1 An example of a manually constructed RST parse for a restaurant review is shown in Figure 1. The six EDUs are indexed from A to F; the discourse tree organizes them hierarchically into increasingly larger spans, with the last CONTRAST relation resulting in a span that covers the whole review. Within each relation, the RST tree indicates the nucleus pointed by an arrow from its satellite (e.g., in the ELABORATION relation, A is the nucleus and B is the satellite). The information embedded in RST trees has motivated many applications in NLP research, including document summarization (Marcu, 1999), argumentation mining (Azar, 1999), and sentiment analysis (Bhatia et al., 2015). In most applications, RST trees are built by automatic discourse parsing, due to the expensive cost of manual annotation. In this work, we use a state-of-the-art open-source RST-style discourse parser, DPLP (Ji and Eisenstein, 2014).2 We follow recent work that suggests transforming the RST tree into a dependency structure (Yoshida et al., 2014).3 Figure 2(a) shows the corresponding dependency structure of the RST tree in Figure 1. It is clear that C is the root of the tree, and in fact this clause summarizes the review and suffices to categorize it as negative. This dependency representation of the RST tree offers a 1There are also a few exceptions in which a relation can be realized with multiple nuclei. 2https://github.com/jiyfeng/DPLP 3The transformation is trivial and deterministic given the nucleus-satellite mapping for each relation. The procedure is analogous to the transformation of a headed phrase-structure parse in syntax into a dependency tree (e.g., Yamada and Matsumoto, 2003). form of inductive bias for our neural model, helping it to discern the most salient parts of a text in order to assign it a label. 3 Model Our model is a recursive neural network built on a discourse dependency tree. It includes a distributed representation computed for each EDU, and a composition function that combines EDUs and partial trees into larger trees. At the top of the tree, the representation of the complete document is used to make a categorization decision. Our approach is analogous to (and inspired by) the use of recursive neural networks on syntactic dependency trees, with word embeddings at the leaves (Socher et al., 2014). 3.1 Representation of Sentences Let e be the distributed representation of an EDU. 
We use a bidirectional LSTM over the word embeddings within each EDU (details of word embeddings are given in section 4), concatenating the last hidden state vector of the forward LSTM ($\overrightarrow{e}$) with that of the backward LSTM ($\overleftarrow{e}$) to get $e$. There is extensive recent work on architectures for embedding representations of sentences and other short pieces of text, including, for example, (bi)recursive neural networks (Paulus et al., 2014) and convolutional neural networks (Kalchbrenner et al., 2014). Future work might consider alternatives; we chose the bidirectional LSTM due to its effectiveness in many settings.

3.2 Full Recursive Model

Given the discourse dependency tree for an input text, our recursive model builds a vector representation through composition at each arc in the tree. Let $v_i$ denote the vector representation of EDU $i$ and its descendants. For the base case where EDU $i$ is a leaf in the tree, we let $v_i = \tanh(e_i)$, where $\tanh$ is the elementwise hyperbolic tangent function. For an internal node $i$, the composition function considers the parent and all of its children, whose indices are denoted by children($i$). In defining this composition function, we seek (i) for the contribution of the parent node $e_i$ to be central; and (ii) for the contribution of each child node $e_j$ to be determined by its content as well as the discourse relation it holds with the parent. We therefore define

$$v_i = \tanh\Big(e_i + \sum_{j \in \mathrm{children}(i)} \alpha_{i,j}\, W_{r_{i,j}}\, v_j\Big), \quad (1)$$

where $W_{r_{i,j}}$ is a relation-specific composition matrix indexed by $r_{i,j}$, the relation between $i$ and $j$, and $\alpha_{i,j}$ is an “attention” weight, defined as

$$\alpha_{i,j} = \sigma\big(e_i^\top W_\alpha\, v_j\big), \quad (2)$$

where $\sigma$ is the elementwise sigmoid and $W_\alpha$ contains attention parameters (these are relation-independent).

[Figure 2 shows (a) the dependency discourse tree over EDUs A–F, with relations Elaboration, Contrast, and Explanation, and (b) the corresponding recursive neural network built on that tree.]

Figure 2: The dependency discourse tree derived from the example RST tree in Figure 1 (a) and the corresponding recursive neural network model on the tree (b).

Our attention mechanism differs from prior work (Bahdanau et al., 2015), in which attention weights are normalized to sum to one across competing candidates for attention. Here, $\alpha_{i,j}$ does not depend on node $i$’s other children. This is motivated by RST, in which the presence of a node does not signify lesser importance to its siblings. Consider, for example, EDU D and text span E–F in Figure 1, which in parallel provide EXPLANATION for EDU C. This scenario differs from machine translation, where attention is used to implicitly and softly align output-language words to relatively few input-language words. It also differs from attention in composition functions used in syntactic parsing (Kuncoro et al., 2017), where attention can mimic head rules that follow from an endocentricity hypothesis of syntactic phrase representation.

Our recursive composition function, through the attention mechanism and the relation-specific weight matrices, is designed to learn how to differently weight EDUs for the categorization task. The idea of using a weighting scheme along with discourse structure has been explored in prior work (Bhatia et al., 2015; Hogenboom et al., 2015), although those schemes are manually designed rather than learned from training data.

Once we have $v_{\mathrm{root}}$ of a text, the prediction of its category is given by $\mathrm{softmax}(W_o v_{\mathrm{root}} + b)$.
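To make Equations (1) and (2) concrete, the following minimal NumPy sketch composes a document vector bottom-up over the toy dependency tree of Figure 2(a). The dimensionality, random parameters, and relation names are illustrative assumptions; the actual system learns these parameters end-to-end in DyNet.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy EDU-vector dimensionality (assumed)

# Toy discourse dependency tree from Figure 2(a): node -> (head, relation); C is the root.
tree = {
    "C": (None, None),
    "A": ("C", "Elaboration"),
    "B": ("A", "Contrast"),
    "D": ("C", "Explanation"),
    "E": ("C", "Explanation"),
    "F": ("E", "Contrast"),
}

# Stand-in EDU encodings e_i; in the paper these come from the bidirectional LSTM.
e = {node: rng.normal(size=d) for node in tree}

# Relation-specific composition matrices W_r and attention parameters W_alpha.
relations = {rel for _, rel in tree.values() if rel is not None}
W_rel = {rel: rng.normal(scale=0.1, size=(d, d)) for rel in relations}
W_alpha = rng.normal(scale=0.1, size=(d, d))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def compose(node):
    """Eq. (1): v_i = tanh(e_i + sum_j alpha_ij W_{r_ij} v_j), with attention from Eq. (2)."""
    total = e[node].copy()
    for child, (head, rel) in tree.items():
        if head == node:
            v_child = compose(child)                      # build each subtree bottom-up
            alpha = sigmoid(e[node] @ W_alpha @ v_child)  # unnormalized sigmoid attention
            total += alpha * (W_rel[rel] @ v_child)
    return np.tanh(total)

v_root = compose("C")   # document vector, fed to softmax(W_o v_root + b) for classification
print(v_root.shape)     # (8,)
```

Replacing W_rel[rel] with the identity matrix recovers the UNLABELED variant described next, and swapping the per-child sigmoid for a softmax over a node’s children gives the normalized-attention variant compared against in the experiments.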
We refer to this model as the FULL model, since it makes use of the entire discourse dependency tree. 3.3 Unlabeled Model The FULL model based on Equation 1 uses a dependency discourse tree with relations. Because alternate discourse relation labels have been proposed (e.g., Prasad et al., 2008), we seek to measure the effect of these labels. We therefore consider an UNLABELED model based only on the tree structure, without the relations: vi = tanh  ei + X j∈children(i) αi,jvj  . (3) Here, only attention weights are used to compose the children nodes’ representations, significantly reducing the number of model parameters. This UNLABELED model is similar to the depth weighting scheme introduced by Bhatia et al. (2015), which also uses an unlabeled discourse dependency tree, but our attention weights are computed by a function whose parameters are learned. This approach sits squarely between Bhatia et al. (2015) and the flat document structure used by Yang et al. (2016); the UNLABELED model still uses discourse to bias the model toward some content (that which is closer to the tree’s root). 3.4 Simpler Variants We consider two additional baselines that are even simpler. The first, ROOT, uses the discourse dependency structure only to select the root EDU, which is used to represent the entire text: vroot = eroot. No composition function is needed. This model variant is motivated by work on document summarization (Yoshida et al., 2014), where the 998 most central EDU is used to represent the whole text. The second variant, ADDITIVE, uses all the EDUs with a simple composition function, and does not depend on discourse structure at all: vroot = 1 N PN i=1 ei, where N is the total number of EDUs. This serves as a baseline to test the benefits of discourse, controlling for other design decisions and implementation choices. Although sentence representations ei are built in a different way from the work of Yang et al. (2016), this model is quite similar to their HN-AVE model on building document representations. 4 Implementation Details The parameters of all components of our model (top-level classification, composition, and EDU representation) are learned end-to-end using standard methods. We implement our learning procedure with the DyNet package (Neubig et al., 2017). Preprocessing. For all datasets, we use the same preprocessing steps, mostly following recent work on language modeling (e.g., Mikolov et al., 2010). We lowercased all the tokens and removed tokens that contain only punctuation symbols. We replaced numbers in the documents with a special number token. Low-frequency word types were replaced by UNK; we reduce the vocabulary for each dataset until approximately 5% of tokens are mapped to UNK. The vocabulary sizes after preprocessing are also shown in Table 1. Discourse parsing. Our model requires the discourse structure for each document. We used DPLP, the RST parser from Ji and Eisenstein (2014), which is one of the best discourse parsers on the RST discourse treebank benchmark (Carlson et al., 2001). It employs a greedy decoding algorithm for parsing, producing 2,000 parses per minute on average on a single CPU. DPLP provides discourse segmentation, breaking a text into EDUs, typically clauses or sentences, based on syntactic parses provided by Stanford CoreNLP. RST trees are converted to dependencies following the method of Yoshida et al. (2014). DPLP as distributed is trained on 347 Wall Street Journal articles from the Penn Treebank (Marcus et al., 1993). Word embeddings. 
In cases where there are 10,000 or fewer training examples, we used pretrained GloVe word embeddings (Pennington et al., 2014), following previous work on neural discourse processing (Ji and Eisenstein, 2015). For larger datasets, we randomly initialized word embeddings and trained them alongside other model parameters. Learning and hyperparameters. Online learning was performed with the optimization method and initial learning rate as hyperparameters. To avoid the exploding gradient problem, we used the norm clipping trick with a threshold of τ = 5.0. In addition, dropout rate 0.3 was used on both input and hidden layers to avoid overfitting. We performed grid search over the word vector representation dimensionality, the LSTM hidden state dimensionality (both {32, 48, 64, 128, 256}), the initial learning rate ({0.1, 0.01, 0.001}), and the update method (SGD and Adam, Kingma and Ba, 2015). For each corpus, the highest-accuracy combination of these hyperparameters is selected using development data or ten-fold cross validation, which will be specified in section 5. 5 Datasets We selected five datasets of different sizes and corresponding to varying categorization tasks. Some information about these datasets is summarized in Table 1. Sentiment analysis on Yelp reviews. Originally from the Yelp Dataset Challenge in 2015, this dataset contains 1.5 million examples. We used the preprocessed dataset from Zhang et al. (2015), which has 650,000 training and 50,000 test examples. The task is to predict an ordinal rating (1–5) from the text of the review. To select the best combination of hyperparameters, we randomly sampled 10% training examples as the development data. We compared with hierarchical attention networks (Yang et al., 2016), which use the normalized attention mechanism on both word and sentence layers with a flat document structure, and provide the state-of-the-art result on this corpus. Framing dimensions in news articles. The Media Frames Corpus (MFC; Card et al., 2015) includes around 4,200 news articles about immigration from 13 U.S. newspapers over the years 1980–2012. The annotations of these articles are in terms of a set of 15 general-purpose labels, such as ECONOMICS and MORALITY, designed to categorize the emphasis framing applied to the 999 Number of docs. Dataset Task Classes Total Training Development Test Vocab. size Yelp Sentiment 5 700K 650K – 50K 10K MFC Frames 15 4.2K – – – 7.5K Debates Vote 2 1.6K 1,135 105 403 5K Movies Sentiment 2 2.0K – – – 5K Bills Survival 2 52K 46K – 6K 10K Table 1: Information about the five datasets used in our experiments. To compare with prior work, we use different experimental settings. For Yelp and Bill corpora, we use 10% of the training examples as development data. For MFC and Movies corpora, we use 10-fold cross validation and report averages across all folds. immigration issue within the articles. We focused on predicting the single primary frame of each article. The state-of-the-art result on this corpus is from Card et al. (2016), where they used logistic regression together with unigrams, bigrams and Bamman-style personas (Bamman et al., 2014) as features. The best feature combination in their model alongside other hyperparameters was identified by a Bayesian optimization method (Bergstra et al., 2015). To select hyperparameters, we used a small set of examples from the corpus as a development set. Then, we report average accuracy across 10-fold cross validation as in (Card et al., 2016). Congressional floor debates. 
The corpus was originally collected by Thomas et al. (2006), and the data split we used was constructed by Yessenalina et al. (2010). The goal is to predict the vote (“yea” or “nay”) for the speaker of each speech segment. The most recent work on this corpus is from Yogatama and Smith (2014), which proposed structured regularization methods based on linguistic components, e.g., sentences, topics, and syntactic parses. Each regularization method induces a linguistic bias to improve text classification accuracy, where the best result we repeated here is from the model with sentence regularizers. Movie reviews. This classic movie review corpus was constructed by Pang and Lee (2004) and includes 1,000 positive and 1,000 negative reviews. On this corpus, we used the standard tenfold data split for cross validation and reported the average accuracy across folds. We compared with the work from both Bhatia et al. (2015) and Hogenboom et al. (2015), which are two recent works on discourse for sentiment analysis. Bhatia et al. (2015) used a hand-crafted weighting scheme to bias the bag-of-word representations on sentences. Hogenboom et al. (2015) also considered manually-designed weighting schemes and a lexicon-based model as classifier, achieving performance inferior to fully-supervised methods like Bhatia et al. (2015) and ours. Congressional bill corpus. This corpus, collected by Yano et al. (2012), includes 51,762 legislative bills from the 103rd to 111th U.S. Congresses. The task is to predict whether a bill will survive based on its content. We randomly sampled 10% training examples as development data to search for the best hyperparameters. To our knowledge, the best published results are due to Yogatama and Smith (2014), which is the same baseline as for the congressional floor debates corpus. 6 Experiments We evaluated all variants of our model on the five datasets presented in section 5, comparing in each case to the published state of the art as well as the most relevant works. Results. See Table 2. On four out of five datasets, our UNLABELED model (line 8) outperforms past methods. In the case of the very large Yelp dataset, our FULL model (line 9) gives even stronger performance, but not elsewhere, suggesting that it is overparameterized for the smaller datasets. Indeed, on the MFC and Movies tasks, the discourse-ignorant ADDITIVE outperforms the FULL model. On these datasets, the selected FULL model had nearly 20 times as many parameters as the UNLABELED model, which in turn had twice as many parameters as the ADDITIVE. 1000 Method Yelp MFC Debates Movies Bills Prior work 1. Yang et al. (2016) 71.0 — — — — 2. Card et al. (2016) — 56.8 — — — 3. Yogatama and Smith (2014) — — 74.0 — 88.5 4. Bhatia et al. (2015) — — — 82.9 — 5. Hogenboom et al. (2015) — — — 71.9 — Variants of our model 6. ADDITIVE 68.5 57.6 69.0 82.7 80.1 7. ROOT 54.3 51.2 60.3 68.7 70.5 8. UNLABELED 71.3 58.4 75.7 83.1 78.4 9. FULL 71.8 56.3 74.2 79.5 77.0 Table 2: Test-set accuracy across five datasets. Results from prior work are reprinted from the corresponding publications. Boldface marks performance stronger than the previous state of the art. This finding demonstrates the benefit of explicit discourse structure—even the output from an imperfect parser—for text categorization in some genres. This benefit is supported by both UNLABELED and FULL, since both of them use discourse structures of texts. The advantage of using discourse information varies on different genres and different corpus sizes. 
Even though the discourse parser is trained on news text, it still offers benefit to restaurant and movie reviews and to the genre of congressional debates. Even for news text, if the training dataset is small (e.g., MFC), a lighter-weight variant of discourse (UNLABELED) is preferred. Legislative bills, which have technical legal content and highly specialized conventions (see the supplementary material for an example), are arguably the most distant genre from news among those we considered. On that task, we see discourse working against accuracy. Note that the corpus of bills is more than ten times larger than three cases where our UNLABELED model outperformed past methods, suggesting that the drop in performance is not due to lack of data. It is also important to notice that the ROOT model performs quite poorly in all cases. This implies that discourse structure is not simply helping by finding a single EDU upon which to make the categorization decision. Qualitative analysis. Figure 3 shows some example texts from the Yelp Review corpus with their discourse structures produced by DPLP, where the weights were generated with the FULL model. Figures 3(a) and 3(b) are two successful examples of the FULL model. Figure 3(a) shows a simple case with respect to the discourse structure. Figure 3(b) is slightly different—the text in this example may have more than one reasonable discourse structure, e.g., 2D could be a child of 2C instead of 2A. In both cases, discourse structures help the FULL model bias to the important sentences. Figure 3(c), on the other hand, presents a negative example, where DPLP failed to identify the most salient sentence 3F. In addition, the weights produced by the FULL model do not make much sense, which we suspect the model was confused by the structure. Figure 3(c) also presents a manually-constructed discourse structure on the same text for reference. A more accurate prediction is expected if we use this manuallyconstructed discourse structure, because it has the appropriate dependency between sentences. In addition, the annotated discourse relations are able to select the right relation-specific composition matrices in FULL model, which are consistent with the training examples. Effect of parsing performance. A natural question is whether further improvements to RST discourse parsing would lead to even greater gains in text categorization. While advances in discourse parsing are beyond the scope of this paper, we can gain some insight by exploring degradation to the DPLP parser. An easy way to do this is to train it on subsets of the RST discourse treebank. 
We repeated the conditions described above for our FULL model, training DPLP on 25%, 50%, and 75% of the training set (randomly selected in 1001 From DPLP: 1A 1B 1C 0.66 Elaboration 0.67 Cause [This store is somewhat convenient but I can never find any workers,]1A [it drives me crazy.]1B [I never come here on the weekends or around holidays anymore.]1C (a) true label: 2, predicted label: 2 From DPLP: 2A 2B 2C 2D 0.87 Evaluation 0.61 Elaboration 0.70 Evaluation [I love these people.]2A [They are very friendly and always ask about my life.]2B [They remember things I tell them then ask about it the next time I’m in.]2C [Patrick and Lily are the best but everyone there is wonderful in their own ways.]2D (b) true label: 5, predicted label: 5 From DPLP: 3B 3A 3C 3D 3E 3F 0.47 Elaboration 0.32 Elaboration 0.62 Elaboration 0.16 Elaboration 0.32 Attribution Manually constructed: 3F 3B 3A 3C 3D 3E Cause Background Explanation Explanation Explanation [We use to visit this pub 10 years ago because they had a nice english waitress and excellent fish and chips for the price.]3A [However we went back a few weeks ago and were disappointed.]3B [The price of the fish and chip dinner went up and they cut the portion in half.]3C [No one assisted us in putting two tables together we had to do it ourselves.]3D [Two guests wanted a good English hot tea and they didn’t brew it in advance.]3E [So we’ve decided there are newer and better places to eat fish and chips especially up in north phoenix.]3F (c) true label: 1, predicted label: 3 Figure 3: Some example texts (with light revision for readability) from the Yelp Review corpus and their corresponding dependency discourse parses from DPLP (Ji and Eisenstein, 2014). The numbers on dependency edges are attention weights produced by the FULL model. 1002 50 52 54 56 58 60 62 F1 on RST Discourse Treebank 70.8 71.0 71.2 71.4 71.6 71.8 Accuracy on Yelp Reviews Figure 4: Varying the amount of training data for the discourse parser, we can see how parsing F1 performance affects accuracy on the Yelp review task. each case) before re-parsing the data for the sentiment analysis task. We did not repeat the hyperparameter search. In Figure 4, we plot accuracy of the classifier (y-axis) against the F1 performance of the discourse parser (x-axis). Unsurprisingly, lower parsing performance implies lower classification accuracy. Notably, if the RST discourse treebank were reduced to 25% of its size, our method would underperform the discourseignorant model of Yang et al. (2016). While we cannot extrapolate with certainty, these findings suggest that further improvements to discourse parsing, through larger annotated datasets or improved models, could lead to greater gains. Attention mechanism. In section 3, we contrasted our new attention mechanism (Equation 2), which is inspired by RST’s lack of “competition” for salience among satellites, with the attention mechanism used in machine translation (Bahdanau et al., 2015). We consider here a variant of our model with normalized attention: α′ i = softmax       ... v⊤ j...   j∈children(i) Wα · ei    . (4) The result here is a vector α′ i, with one element for each child node j ∈children(i), and which sums to one. On Yelp dateset, this variant of the FULL model achieves 70.3% accuracy (1.5% absolute behind our FULL model), giving empirical support to our theoretically-motivated design decision not to normalize attention. Of course, further architecture improvements may yet be possible. Discussion. 
Our findings in this work show the benefit of using discourse structure for text categorization. Although discourse structure strongly improves the performance on most of corpora in our experiments, its benefit is limited particularly by two factors: (1) the state-of-the-art performance on RST discourse parsing; and (2) domain mismatch between the training corpus for a discourse parser and the domain where the discourse parser is used. For the first factor, discourse parsing is still an active research topic in NLP, and may yet improve. The second factor suggests exploring domain adaptation methods or even direct discourse annotation for genres of interest. 7 Related Work Early work on text categorization often treated text as a bag of words (e.g., Joachims, 1998; Yang and Pedersen, 1997). Representation learning, for example through matrix decomposition (Deerwester et al., 1990) or latent topic variables (Ramage et al., 2009), has been considered to avoid overfitting in the face of sparse data. The assumption that all parts of a text should influence categorization equally persists even as more powerful representation learners are considered. Zhang et al. (2015) treat a text as a sequence of characters, proposing to a deep convolutional neural network to build text representation. Xiao and Cho (2016) extended that architecture by inserting a recurrent neural network layer between the convolutional layer and the classification layer. In contrast, our contributions follow Ko et al. (2004), who sought to weight the influence of different parts of an input text on the task. Two works that sought to learn the importance of sentences in a document are Yessenalina et al. (2010) and Yang et al. (2016). The former used a latent variable for the informativeness of each sentence, and the latter used a neural network to learn an attention function. Neither used any linguistic bias, relying only on task supervision to discover the latent variable distribution or attention function. Our work builds the neural network directly on a discourse dependency tree, favoring the most central EDUs over the others but giving the model the ability to overcome this bias. Another way to use linguistic information was 1003 presented by Yogatama and Smith (2014), who used a bag-of-words model. The novelty in their approach was a data-driven regularization method that encouraged the model to collectively ignore groups of features found to coocur. Most related to our work is their “sentence regularizer,” which encouraged the model to try to ignore training-set sentences that were not informative for the task. Discourse structure was not considered. Discourse for sentiment analysis. Recently, discourse structure has been considered for sentiment analysis, which can be cast as a text categorization problem. Bhatia et al. (2015) proposed two discourse-motivated models for sentiment polarity prediction. One of the models is also based on discourse dependency trees, but using a handcrafted weighting scheme. Our method’s attention mechanism automates the weighting. 8 Conclusion We conclude that automatically-derived discourse structure can be helpful to text categorization, and the benefit increases with the accuracy of discourse parsing. We did not see a benefit for categorizing legislative bills, a text genre whose discourse structure diverges from that of news. These findings motivate further improvements to discourse parsing, especially for new genres. 
Acknowledgments We thank anonymous reviewers and members of Noah’s ARK for helpful feedback on this work. We thank Dallas Card and Jesse Dodge for helping prepare the Media Frames Corpus and the Congressional bill corpus. This work was made possible by a University of Washington Innovation Award. References Moshe Azar. 1999. Argumentative text as rhetorical structure: An application of rhetorical structure theory. Argumentation 13(1):97–114. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. David Bamman, Brendan O’Connor, and Noah A Smith. 2014. Learning latent personas of film characters. In ACL. James Bergstra, Brent Komer, Chris Eliasmith, Dan Yamins, and David D. Cox. 2015. Hyperopt: a Python library for model selection and hyperparameter optimization. Computational Science & Discovery 8(1). Parminder Bhatia, Yangfeng Ji, and Jacob Eisenstein. 2015. Better document-level sentiment analysis from RST discourse parsing. In EMNLP. Dallas Card, Amber E. Boydstun, Justin H. Gross, Philip Resnik, and Noah A. Smith. 2015. The Media Frames Corpus: Annotations of frames across issues. In ACL. Dallas Card, Justin Gross, Amber E. Boydstun, and Noah A. Smith. 2016. Analyzing framing through the casts of characters in the news. In EMNLP. Lynn Carlson, Daniel Marcu, and Mary Ellen Okurowski. 2001. Building a discourse-tagged corpus in the framework of Rhetorical Structure Theory. In Proceedings of Second SIGdial Workshop on Discourse and Dialogue. Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science 41(6):391. Alexander Hogenboom, Flavius Frasincar, Franciska de Jong, and Uzay Kaymak. 2015. Using rhetorical structure in sentiment analysis. Communications of the ACM 58(7):69–77. Yangfeng Ji and Jacob Eisenstein. 2014. Representation learning for document-level discourse parsing. In ACL. Yangfeng Ji and Jacob Eisenstein. 2015. One vector is not enough: Entity-augmented distributed semantics for discourse relations. Transactions of the Association of Computational Linguistics 3:329–344. Thorsten Joachims. 1998. Text categorization with support vector machines: Learning with many relevant features. In ECML. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. ArXiv:1404.2188. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. Youngjoong Ko, Jinwoo Park, and Jungyun Seo. 2004. Improving text categorization using the importance of sentences. Information Processing & Management 40(1):65–79. Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, Graham Neubig, and Noah A. Smith. 2017. What do recurrent neural network grammars learn about syntax? In EACL. William Mann and Sandra Thompson. 1988. Rhetorical Structure Theory: Toward a functional theory of text organization. Text 8(3):243–281. 1004 Daniel Marcu. 1999. Discourse trees are good indicators of importance in text. In Inderjeet Mani and Mark T. Maybury, editors, Advances in Automatic Text Summarization, pages 123–136. Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics 19(2):313–330. Tomas Mikolov, Martin Karafi´at, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. 2010. 
Recurrent neural network based language model. In INTERSPEECH. Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, et al. 2017. Dynet: The dynamic neural network toolkit. ArXiv:1701.03980. Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd annual meeting on Association for Computational Linguistics. Association for Computational Linguistics, page 271. Romain Paulus, Richard Socher, and Christopher D Manning. 2014. Global belief recursive neural networks. In NIPS. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In EMNLP. Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The Penn Discourse Treebank 2.0. In LREC. Daniel Ramage, David Hall, Ramesh Nallapati, and Christopher D. Manning. 2009. Labeled lda: A supervised topic model for credit attribution in multilabeled corpora. In EMNLP. Richard Socher, Andrej Karpathy, Quoc V Le, Christopher D. Manning, and Andrew Y. Ng. 2014. Grounded compositional semantics for finding and describing images with sentences. Transactions of the Association for Computational Linguistics 2:207–218. Matt Thomas, Bo Pang, and Lillian Lee. 2006. Get out the vote: Determining support or opposition from Congressional floor-debate transcripts. In EMNLP. Yijun Xiao and Kyunghyun Cho. 2016. Efficient character-level document classification by combining convolution and recurrent layers. ArXiv:1602.00367. H. Yamada and Y. Matsumoto. 2003. Statistical dependency analysis with support vector machines. In IWPT. Yiming Yang and Jan O. Pedersen. 1997. A comparative study on feature selection in text categorization. In ICML. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In NAACL. Tae Yano, Noah A. Smith, and John D. Wilkerson. 2012. Textual predictors of bill survival in congressional committees. In NAACL. Ainur Yessenalina, Yisong Yue, and Claire Cardie. 2010. Multi-level structured models for document sentiment classification. In EMNLP. Dani Yogatama and Noah A. Smith. 2014. Linguistic structured sparsity in text categorization. In ACL. Yasuhisa Yoshida, Jun Suzuki, Tsutomu Hirao, and Masaaki Nagata. 2014. Dependency-based discourse parser for single-document summarization. In EMNLP. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In NIPS. 1005
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1006–1017 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1093 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1006–1017 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1093 Adversarial Connective-exploiting Networks for Implicit Discourse Relation Classification Lianhui Qin1,2, Zhisong Zhang1,2, Hai Zhao1,2,∗, Zhiting Hu3, Eric P. Xing3 1Department of Computer Science and Engineering, Shanghai Jiao Tong University 2Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China 3Carnegie Mellon University {qinlianhui, zzs2011}@sjtu.edu.cn, [email protected], {zhitingh, epxing}@cs.cmu.edu Abstract Implicit discourse relation classification is of great challenge due to the lack of connectives as strong linguistic cues, which motivates the use of annotated implicit connectives to improve the recognition. We propose a feature imitation framework in which an implicit relation network is driven to learn from another neural network with access to connectives, and thus encouraged to extract similarly salient features for accurate classification. We develop an adversarial model to enable an adaptive imitation scheme through competition between the implicit network and a rival feature discriminator. Our method effectively transfers discriminability of connectives to the implicit features, and achieves state-of-the-art performance on the PDTB benchmark. 1 Introduction Discourse relations connect linguistic units such as clauses and sentences to form coherent semantics. Identification of discourse relations can benefit a variety of downstream applications including question answering (Liakata et al., 2013), machine translation (Li et al., 2014), text summarization (Gerani et al., 2014), opinion spam detection (Chen and Zhao, 2015), and so forth. ∗Corresponding authors. This paper was partially supported by Cai Yuanpei Program (CSC No. 201304490199 and No. 201304490171), National Natural Science Foundation of China (No. 61170114, No. 61672343 and No. 61272248), National Basic Research Program of China (No. 2013CB329401), Major Basic Research Program of Shanghai Science and Technology Committee (No. 15JC1400103), Art and Science Interdisciplinary Funds of Shanghai Jiao Tong University (No. 14JCRZ04), and Key Project of National Society Science Foundation of China (No. 15ZDA041). Connectives (e.g., but, so, etc) are one of the most critical linguistic cues for identifying discourse relations. When explicit connectives are present in the text, a simple frequency-based mapping is sufficient to achieve over 85% classification accuracy (Xue et al., 2016; Li et al., 2016). In contrast, implicit discourse relation recognition has long been seen as a challenging problem, with the best accuracy so far still lower than 50% (Chen et al., 2015). In the implicit case, discourse relations are not lexicalized by connectives, but to be inferred from relevant sentences (i.e., arguments). For example, the following two adjacent sentences Arg1 and Arg2 imply relation Cause (i.e., Arg2 is the cause of Arg1). [Arg1]: Never mind. [Arg2]: You already know the answer. 
[Implicit connective]: Because [Discourse relation]: Cause Various attempts have been made to directly infer underlying relations by modeling the semantics of the arguments, ranging from feature-based methods (Lin et al., 2009; Pitler et al., 2009) to the very recent end-to-end neural models (Chen et al., 2016a; Qin et al., 2016c). Despite impressive performance, the absence of strong explicit connective cues has made the inference extremely hard and hindered further improvement. In fact, even the human annotators would make use of connectives to aid relation annotation. For instance, the popular Penn Discourse Treebank (PDTB) benchmark data (Prasad et al., 2008) was annotated by first inserting a connective expression (i.e., implicit connective, as shown in the above example) manually, and determining the abstract relation by combining both the implicit connective and contextual semantics. 1006 Therefore, the huge performance gap between explicit and implicit parsing (namely, 85% vs 50%), as well as the human annotation practice, strongly motivates to incorporate connective information to guide the reasoning process. This paper aims to advance implicit parsing by making use of annotated implicit connectives available in training data. Few recent work has explored such combination. Zhou et al. (2010) developed a two-step approach by first predicting implicit connectives whose sense is then disambiguated to obtain the relation. However, the pipeline approach usually suffers from error propagation, and the method itself has relied on hand-crafted features which do not necessarily generalize well. Other research leveraged explicit connective examples for data augmentation (Rutherford and Xue, 2015; Braud and Denis, 2015; Ji et al., 2015; Braud and Denis, 2016). Our work is orthogonal and complementary to this line. In this paper, we propose a novel neural method that incorporates implicit connectives in a principled adversarial framework. We use deep neural models for relation classification, and take the intuition that, sentence arguments integrated with connectives would enable highly discriminative neural features for accurate relation inference, and an ideal implicit relation classifier, even though without access to connectives, should mimic the connective-augmented reasoning behavior by extracting similarly salient features. We therefore setup a secondary network in addition to the implicit relation classifier, building upon connectiveaugmented inputs and serving as a feature learning model for the implicit classifier to emulate. Methodologically, however, feature imitation in our problem is challenging due to the semantic gap induced by adding the connective cues. It is necessary to develop an adaptive scheme to flexibly drive learning and transfer discriminability. We devise a novel adversarial approach which enables a self-calibrated imitation mechanism. Specifically, we build a discriminator which distinguishes between the features by the two counterpart networks. The implicit relation network is then trained to correctly classify relations and simultaneously to fool the discriminator, resulting in an adversarial framework. The adversarial mechanism has been an emerging method in different context, especially for image generation (Goodfellow et al., 2014) and domain adaptation (Ganin et al., 2016; Chen et al., 2016c). Our adversarial framework is unique to address neural feature emulation between two models. 
Besides, to the best of our knowledge, this is the first adversarial approach in the context of discourse parsing. Compared to previous connective exploiting work (Zhou et al., 2010; Xu et al., 2012), our method provides a new integration paradigm and an end-to-end procedure that avoids inefficient feature engineering and error propagation. Our method is evaluated on the PDTB 2.0 benchmark in a variety of experimental settings. The proposed adversarial model greatly improves over standalone neural models and previous bestperforming approaches. We also demonstrate that our implicit recognition network successfully imitates and extracts crucial hidden representations. We begin by briefly reviewing related work in section 2. Section 3 presents the proposed adversarial model. Section 4 shows substantially improved experimental results over previous methods. Section 5 discusses extensions and future work. 2 Related Work 2.1 Implicit Discourse Relation Recognition There has been a surge of interest in implicit discourse parsing since the release of PDTB (Prasad et al., 2008), the first large discourse corpus distinguishing implicit examples from explicit ones. A large set of work has focused on direct classification based on observed sentences, including structured methods with linguistically-informed features (Lin et al., 2009; Pitler et al., 2009; Zhou et al., 2010), end-to-end neural models (Qin et al., 2016b,c; Chen et al., 2016a; Liu and Li, 2016), and combined approaches (Ji and Eisenstein, 2015; Ji et al., 2016). However, the lacking of connective cues makes learning purely from contextual semantics full of challenges. Prior work has attempted to leverage connective information. Zhou et al. (2010) also incorporate implicit connectives, but in a pipeline manner by first predicting the implicit connective with a language model and determining discourse relation accordingly. Instead of treating implicit connectives as intermediate prediction targets which can suffer from error propagation, we use the connectives to induce highly discriminative features to guide the learning of an implicit network, serving as an adaptive regularization mechanism for en1007 hanced robustness and generalization. Our framework is also end-to-end, avoiding costly feature engineering. Another notable line aims at adapting explicit examples for data synthesis (Biran and McKeown, 2013; Rutherford and Xue, 2015; Braud and Denis, 2015; Ji et al., 2015), multi-task learning (Lan et al., 2013; Liu et al., 2016), and word representation (Braud and Denis, 2016). Our work is orthogonal and complementary to these methods, as we use implicit connectives which have been annotated for implicit examples. 2.2 Adversarial Networks Deep neural networks have gained impressive success in various natural language processing tasks (Wang et al., 2016; Zhang et al., 2016b; Cai et al., 2017), in which adversarial networks have been shown especially effective in deep generative modeling (Goodfellow et al., 2014) and domain adaptation (Ganin et al., 2016). Generative adversarial nets (Goodfellow et al., 2014) learn to produce realistic samples through competition between a generator and a real/fake discriminator. Professor forcing (Lamb et al., 2016) applies a similar idea to improve long-term generation of a recurrent neural language model. Other approaches (Chen et al., 2016b; Hu et al., 2017; Liang et al., 2017) extend the framework for controllable image/text generation. Li et al. (2015); Salimans et al. 
(2016) propose feature matching which trains generators to match the statistics of real/fake examples. Their features are extracted by the discriminator rather than the classifier networks as in our case. Our work differs from the above since we consider the context of discriminative modeling. Adversarial domain adaptation forces a neural network to learn domain-invariant features using a classifier that distinguishes the domain of the network’s input data based on the hidden feature. Our adversarial framework is distinct in that besides the implicit relation network we construct a second neural network serving as a teacher model for feature emulation. To the best of our knowledge, this is the first to employ the idea of adversarial learning in the context of discourse parsing. We propose a novel connective exploiting scheme based on feature imitation, and to this end derive a new adversarial framework, achieving substantial performance gain over existing methods. The proposed approach is generally applicable to other tasks for utilizing any indicative side information. We give more discussions in section 5. 3 Adversarial Method Discourse connectives are key indicators for discourse relation. In the annotation procedure of the PDTB implicit relation benchmark, annotators inserted implicit connective expressions between adjacent sentences to lexicalize abstract relations and help with final decisions. Our model aims at making full use of the provided implicit connectives at training time to regulate learning of implicit relation recognizer, encouraging extraction of highly discriminative semantics from raw arguments, and improving generalization at test time. Our method provides a novel adversarial framework that leverages connective information in a flexible adaptive manner, and is efficiently trained end-to-end through standard back-propagation. The basic idea of the proposed approach is simple. We want our implicit relation recognizer, which predicts the underlying relation of sentence arguments without discourse connective, to have prediction behaviors close to a connectiveaugmented relation recognizer which is provided with a discourse connective in addition to the arguments. The connective-augmented recognizer is in analogy to an annotator with the help of connectives as in the human annotation process, and the implicit recognizer would be improved by learning from such an “informed” annotator. Specifically, we want the latent features extracted by the two models to match as closely as possible, which explicitly transfers the discriminability of the connective-augmented representations to implicit ones. To this end, instead of manually selecting a closeness metric, we take advantage of the adversarial framework by constructing a two-player zero-sum game between the implicit recognizer and a rival discriminator. The discriminator attempts to distinguish between the features extracted by the two relation models, while the implicit relation model is trained to maximize the accuracy on implicit data, and at the same time to confuse the discriminator. In the next we first present the overall architecture of the proposed approach (section 3.1), then develop the training procedure (section 3.2). The components are realized as deep (convolutional) neural networks, with detailed modeling choices 1008 x1: Never mind. x2: You Know the answer. i-CNN a-CNN +implicit connective c: Because Discriminator D Classifier C x1: Never mind. x2: Because You Know the answer. 
HI HA Figure 1: Architecture of the proposed method. The framework contains three main components: 1) an implicit relation network i-CNN over raw sentence arguments, 2) a connective-augmented relation network a-CNN whose inputs are augmented with implicit connectives, and 3) a discriminator distinguishing between the features by the two networks. The features are fed to the final classifier for relation classification. The discriminator and i-CNN form an adversarial pair for feature imitation. At test time, the implicit network i-CNN with the classifier is used for prediction. discussed in section 3.3. 3.1 Model Architecture Let (x, y) be a pair of input and output of implicit relation classification, where x = (x1, x2) is a pair of sentence arguments, and y is the underlying discourse relation. Each training example also includes an annotated implicit connective c that best expresses the relation. Figure 1 shows the architecture of our framework. The neural model for implicit relation classification (i-CNN in the figure) extracts latent representation from the arguments, denoted as HI(x1, x2), and feeds the feature into a classifier C for final prediction C(HI(x1, x2)). For ease of notation, we will also use HI(x) to denote the latent feature on data x. The second relation network (a-CNN) takes as inputs the sentence arguments along with an implicit connective, to induce the connectiveaugmented representation HA(x1, x2, c), and obtains relation prediction C(HA(x1, x2, c)). Note that the same final classifier C is used for both networks, so that the feature representations by the two networks are ensured to be within the same semantic space, enabling feature emulation as presented shortly. We further pair the implicit network with a rival discriminator D to form our adversarial game. The discriminator is to differentiate between the reasoning behaviors of the implicit network i-CNN and the augmented network a-CNN. Specifically, D is a binary classifier that takes as inputs a latent feature H derived from either i-CNN or aCNN given appropriate data (where implicit connectives is either missing or present, respectively). The output D(H) estimates the probability that H comes from the connective-augmented a-CNN rather than i-CNN. 3.2 Training Procedure The system is trained through an alternating optimization procedure that updates the components in an interleaved manner. In this section, we first present the training objective for each component, and then give the overall training algorithm. Let θD denote the parameters of the discriminator. The training objective of D is straightforward, i.e., to maximize the probability of correctly distinguishing the input features: max θD LD = E(x,c,y)∼data h log D(HA(x, c); θD)+ log(1 −D(HI(x); θD)) i , (1) where E(x,c,y)∼data[·] denotes the expectation in terms of the data distribution. We denote the parameters of the implicit network i-CNN and the classifier C as θI and θC, respectively. The model is then trained to (a) correctly classify relations in training data and (b) produce salient features close to connectiveaugmented ones. 
The first objective can be fulfilled by minimizing the usual cross-entropy loss: LI,C(θI, θC) = E(x,y)∼data h J C(HI(x; θI); θC), y i , (2) 1009 Algorithm 1 Adversarial Model for Implicit Recognition Input: Training data {(x, c, y)n} Parameters: λ1, λ2 – balancing parameters 1: Initialize {θI, θC} and {θA} by minimizing Eq.(2) and Eq.(4), respectively 2: repeat 3: Train the discriminator through Eq.(1) 4: Train the relation models through Eq.(5) 5: until convergence Output: Adversarially enhanced implicit relation network i-CNN with classifier C for prediction where J(p, y) = −P k I(y = k) log pk is the cross-entropy loss between predictive distribution p and ground-truth label y. We achieve objective (b) by minimizing the discriminator’s chance of correctly telling apart the features: LI(θI) = Ex∼data h log 1 −D(HI(x; θI)) i . (3) The parameters of the augmented network aCNN, denoted as θA, can be learned by simply fitting to the data, i.e., minimizing the cross-entropy loss as follows: LA(θA) = E(x,c,y)∼data h J C(HA(x, c; θA)), y i . (4) As mentioned above, here we use the same classifier C as for the implicit network, forcing a unified feature space of both networks. We combine the above objectives Eqs.(2)-(4) of the relation classifiers and minimize the joint loss: min θI,θA,θC LI,A,C = LI,C(θI, θC) + λ1LI(θI) + λ2LA(θA), (5) where λ1 and λ2 are two balancing parameters calibrating the weights of the classification losses and the feature-regulating loss. In practice, we pretrain the implicit and augmented networks independently by minimizing Eq.(2) and Eq.(4), respectively. In the adversarial training process, we found setting λ2 = 0 gives stable convergence. That is, the connective-augmented features are fixed after the pre-training stage. Algorithm 1 summarizes the training procedure, where we interleave the optimization of Eq.(1) and Eq.(5) at each iteration. More practical details are provided in section 4. We instantiate all modules as neural networks (section 3.3) which are differentiable, and perform the optimization efficiently through standard stochastic gradient descent and back-propagation. Concat Max-pooling Convolution Embedding Arg1: Never mind. Arg2: You know the answer. H(x) Classifcation: Cause Discrimination Figure 2: Neural structure of i-CNN. Two sets of convolutional filters are shown, with the corresponding features in red and blue, respectively. The weights of the filters on two input arguments are tied. Through Eq.(1) and Eq.(3), the discriminator and the implicit relation network follow a minimax competition, which drives both to improve until the implicit feature representations are close to the connective-augmented latent representations, encouraging the implicit network to extract highly discriminative features from raw sentence arguments for relation classification. Alternatively, we can see Eq.(3) as an adaptive regularization on the implicit model, which, compared to pre-fixed regularizors such as ℓ2-regularization, provides a more flexible, self-calibrated mechanism to improve generalization ability. 0 1 Input Gate Output / HI HA / Gate Figure 3: Neural structure of the discriminator D. 1010 3.3 Component Structures We have presented our adversarial framework for implicit relation classification. We now discuss the model realization of each component. All components of the framework are parameterized with neural networks. Distinct roles of the modules in the framework lead to different modeling choices. 
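Before turning to the individual architectures, the alternating optimization of section 3.2 (Algorithm 1 with Eqs. (1)–(5) and λ2 = 0) can be illustrated with a minimal sketch. This is not the authors' TensorFlow implementation: the PyTorch stand-ins below, the feature dimensions, the plain-MLP discriminator, and the toy batch are all illustrative assumptions.

```python
# Hypothetical minimal sketch of the alternating optimization in Algorithm 1.
# Stand-ins: linear layers replace i-CNN / a-CNN, and a plain MLP replaces the
# gated discriminator of Figure 3. Dimensions and toy data are assumptions.
import torch
import torch.nn as nn

FEAT_DIM, NUM_REL, LAMBDA1 = 128, 11, 0.1      # lambda2 = 0: a-CNN frozen after pre-training

i_cnn = nn.Sequential(nn.Linear(300, FEAT_DIM), nn.Tanh())        # stands in for i-CNN
a_cnn = nn.Sequential(nn.Linear(300 + 50, FEAT_DIM), nn.Tanh())   # stands in for a-CNN (+connective)
clf   = nn.Linear(FEAT_DIM, NUM_REL)                              # shared classifier C
disc  = nn.Sequential(nn.Linear(FEAT_DIM, 64), nn.ReLU(), nn.Linear(64, 1))  # discriminator D

opt_d = torch.optim.Adagrad(disc.parameters(), lr=1e-3)
opt_i = torch.optim.Adagrad(list(i_cnn.parameters()) + list(clf.parameters()), lr=1e-3)
bce, xent = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()

def train_step(x, x_aug, y):
    """One interleaved update on a mini-batch (x: argument features,
    x_aug: arguments plus implicit connective, y: gold relation)."""
    # --- discriminator update, Eq.(1): tell H_A (label 1) from H_I (label 0)
    h_a, h_i = a_cnn(x_aug).detach(), i_cnn(x).detach()
    d_loss = bce(disc(h_a), torch.ones(h_a.size(0), 1)) + \
             bce(disc(h_i), torch.zeros(h_i.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- i-CNN + C update, Eq.(5) with lambda2 = 0:
    #     cross-entropy (Eq.(2)) plus the discriminator-confusion term (Eq.(3))
    h_i = i_cnn(x)
    ce = xent(clf(h_i), y)
    fool = torch.log(1.0 - torch.sigmoid(disc(h_i)) + 1e-8).mean()
    loss = ce + LAMBDA1 * fool
    opt_i.zero_grad(); loss.backward(); opt_i.step()
    return d_loss.item(), loss.item()

# toy batch, purely to show the call signature
x, x_aug = torch.randn(8, 300), torch.randn(8, 350)
y = torch.randint(0, NUM_REL, (8,))
print(train_step(x, x_aug, y))
```

The design point mirrored here is that a-CNN is fixed after pre-training (λ2 = 0), so only the discriminator and the implicit network compete during adversarial training.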
Relation Classification Networks Figure 2 illustrates the structure of the implicit relation network i-CNN. We use a convolutional network as it is a common architectural choice for discourse parsing. The network takes as inputs the word vectors of the tokens in each sentence argument, and maps each argument to intermediate features through a shared convolutional layer. The resulting representations are then concatenated and fed into a max pooling layer to select most salient features as the final representation. The final classifier C is a simple fully-connected layer followed by a softmax classifier. The connective-augmented network a-CNN has a similar structure as i-CNN, wherein implicit connective is appended to the second sentence as input. The key difference from i-CNN is that here we adopt average k-max pooling, which takes the average of the top-k maximum values in each pooling window. The reason is to prevent the network from solely selecting the connective induced features (which are typically the most salient features) which would be the case when using max pooling, but instead force it to also attend to contextual features derived from the arguments. This facilitates more homogeneous output features of the two networks, and thus facilitates feature imitation. In all the experiments we fixed k = 2. Discriminator The discriminator is a binary classifier to identify the correct source of an input feature vector. To make it a strong rival to the feature imitating network (i-CNN), we model the discriminator as a multi-layer perceptron (MLP) enhanced with gated mechanism for efficient information flow (Srivastava et al., 2015; Qin et al., 2016c), as shown in Figure 3. 4 Experiments We demonstrate the effectiveness of our approach both quantitatively and qualitatively with extensive experiments. We evaluate prediction performance on the PDTB benchmark in different settings. Our method substantially improves over a diverse set of previous models, especially in the practical multi-class classification task. We perform in-depth analysis of the model behaviors, and show our adversarial framework successfully enables the implicit relation model to imitate and learn discriminative features. 4.1 Experiment Setup We use PDTB 2.01, one of the largest manually annotated discourse relation corpus. The dataset contains 16,224 implicit relation instances in total, with three levels of senses: Level-1 Class, Level-2 Type, and Level-3 Subtypes. The 1st level consists of four major relation Classes: COMPARISON, CONTINGENCY, EXPANSION and TEMPORAL. The 2nd level contains 16 Types. To make extensive comparison with prior work of implicit discourse relation classification, we evaluate on two popular experimental settings: 1) multi-class classification for 2nd-level types (Lin et al., 2009; Ji and Eisenstein, 2015), and 2) oneversus-others binary classifications for 1st-level classes (Pitler et al., 2009). We describe the detailed configurations in the following respective sections. We will focus our analysis on the multiclass classification setting, which is most realistic in practice and serves as a building block for a complete discourse parser such as that for the shared tasks of CoNLL-2015 and 2016 (Xue et al., 2015, 2016). Model Training Here we provide the detailed architecture configurations of each component we used in the experiments. • Throughout the experiments i-CNN and aCNN contain 3 sets of convolutional filters with the filter sizes selected on the dev set. 
Table 1 lists the filter configurations of the convolutional layer in i-CNN and a-CNN in different tasks. As described in section 3.3, following the convolutional layer is a max pooling layer in i-CNN, and an average kmax pooling layer with k = 2 in a-CNN. • The final single-layer classifier C contains 512 neurons with tanh activation function. • The discriminator D consists of 4 fullyconnected layers, with 2 gated pathways from layer 1 to layer 3 and layer 4 (see Figure 3). 1http://www.seas.upenn.edu/∼pdtb/ 1011 Task Filter sizes Filter number PDTB-Lin 2, 4, 8 3×256 PDTB-Ji 2, 5, 10 3×256 One-vs-all 2, 5, 10 3×1024 Table 1: The convolutional architectures of i-CNN and a-CNN in different tasks (section 4). For example, in PDTB-Lin, we use 3 sets of filters, each of which is of size 2, 4, and 8, respectively; and each set has 256 filters. The size of each layer is set to 1024 and is fixed in all the experiments. • We set the dimension of the input word vectors to 300 and initialize with pre-trained word2vec (Mikolov et al., 2013). The maximum length of sentence argument is set to 80. Truncation and zero-padding are applied when necessary. All experiments were performed on a TITAN-X GPU and 128GB RAM, with neural implementation based on Tensorflow2. For adversarial model training, it is critical to keep balance between the progress of the two players. We use a simple strategy which at each iteration optimizes the discriminator and the implicit relation network on a randomly-sampled minibatch. We found this is enough to stabilize the training. The neural parameters are trained using AdaGrad (Duchi et al., 2011) with an initial learning rate of 0.001. For the balancing parameters in Eq.(5), we set λ1 = 0.1, while λ2 = 0. That is, after the initialization stage the weights of the connective-augmented network a-CNN are fixed. This has been shown capable of giving stable and good predictive performance for our system. 4.2 Implicit Relation Classification We will mainly focus on the general multi-class classification problem in two alternative settings adopted in prior work, showing the superiority of our model over previous state of the arts. We perform in-depth comparison with carefully designed baselines, providing empirical insights into the working mechanism of the proposed framework. For broader comparisons we also report the performance in the one-versus-all setting. 2https://www.tensorflow.org Model PDTB-Lin PDTB-Ji 1 Word-vector 34.07 36.86 2 CNN 43.12 44.51 3 Ensemble 42.17 44.27 4 Multi-task 43.73 44.75 5 ℓ2-reg 44.12 45.33 6 Lin et al. (2009) 40.20 7 Lin et al. (2009) 40.66 +Brown clusters 8 Ji and Eisenstein (2015) 44.59 9 Qin et al. (2016a) 43.81 45.04 10 Ours 44.65 46.23 Table 2: Accuracy (%) on the test sets of the PDTB-Lin and PDTB-Ji settings for multi-class classification. Please see the text for more details. Multi-class Classifications We first adopt the standard PDTB splitting convention following (Lin et al., 2009), denoted as PDTB-Lin, where sections 2-21, 22, and 23 are used as training, dev, and test sets, respectively. The most frequent 11 types of relations are selected in the task. During training, instances with more than one annotated relation types are considered as multiple instances, each of which has one of the annotations. At test time, a prediction that matches one of the gold types is considered as correct. The test set contains 766 examples. More details are in (Lin et al., 2009). 
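To make the evaluation convention just described concrete, the toy sketch below expands multiply-annotated instances for training and counts a prediction as correct if it matches any of the gold types. It is a sketch of the protocol only, not the official evaluation code.

```python
# Sketch of the multi-class protocol: duplicate multiply-annotated instances at
# training time; at test time, accept a prediction matching any gold type.
def expand_for_training(instances):
    """instances: list of (arg_pair, [gold_types]) -> one training copy per gold type."""
    return [(x, t) for x, gold in instances for t in gold]

def accuracy(predictions, gold_sets):
    """predictions: list of predicted types; gold_sets: list of sets of gold types."""
    correct = sum(1 for p, gold in zip(predictions, gold_sets) if p in gold)
    return 100.0 * correct / len(gold_sets)

print(accuracy(["Cause", "Contrast"], [{"Cause"}, {"Conjunction", "Contrast"}]))  # 100.0
```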
An alternative, slightly different multi-class setting is used in (Ji and Eisenstein, 2015), denoted as PDTB-Ji, where sections 2-20, 0-1, and 21-22 are used as training, dev, and test sets, respectively. The resulting test set contains 1039 examples. We also evaluate in this setting for thorough comparisons. Table 2 shows the classification accuracy in both of the settings. We see that our model (Row 10) achieves state-of-the-art performance, greatly outperforming previous methods (Rows 69) with various modeling paradigms, including the linguistic feature-based model (Lin et al., 2009), pure neural methods (Qin et al., 2016c), and combined approach (Ji and Eisenstein, 2015). To obtain better insights into the working mechanism of our method, we further compare with a set of carefully selected baselines as shown in Rows 1-5. 1) “Word-vector” sums over the word vectors for sentence representation, showing the base effect of word embeddings. 2) “CNN” is a standalone convolutional net having the exact same architecture with our implicit rela1012 tion network. Our model trained within the proposed framework provides significant improvement, showing the benefits of utilizing implicit connectives at training time. 3) “Ensemble” has the same neural architecture with the proposed framework except that the input of a-CNN is not augmented with implicit connectives. This essentially is an ensemble of two implicit recognition networks. We see that the method performs even inferior to the single CNN model. This further confirms the necessity of exploiting connective information. 4) “Multi-task” is the convolutional net augmented with an additional task of simultaneously predicting the implicit connectives based on the network features. As a straightforward way of incorporating connectives, we see that the method slightly improves over the stand-alone CNN, while falling behind our approach with a large margin. This indicates that our proposed feature imitation is a more effective scheme for making use of implicit connectives. 5) At last, “ℓ2-reg” also implements feature mimicking by imposing an ℓ2 distance penalty between the implicit relation features and connective-augmented features. We see that the simple model has obtained improvement over previous best-performing systems in both settings, further validating the idea of imitation. However, in contrast to the fixed ℓ2 regularization, our adversarial framework provides an adaptive mechanism, which is more flexible and performs better as shown in the table. Model COMP. CONT. EXP. TEMP. Pitler et al. (2009) 21.96 47.13 16.76 Qin et al. (2016c) 41.55 57.32 71.50 35.43 Zhang et al. (2016a) 35.88 50.56 71.48 29.54 Zhou et al. (2010) 31.79 47.16 70.11 20.30 Liu and Li (2016) 36.70 54.48 70.43 38.84 Chen et al. (2016a) 40.17 54.76 31.32 Ours 40.87 54.56 72.38 36.20 Table 3: Comparisons of F1 scores (%) for binary classification. One-versus-all Classifications We also report the results of four one-versus-all binary classifications for more comparisons with prior work. We follow the conventional experimental setting (Pitler et al., 2009) by selecting sections 2-20, 21-22, and 0-1 as training, dev, and test sets. Table 4 lists the statistics of the data. 
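Stepping back to the baselines of Table 2, the "ℓ2-reg" variant (Row 5) can be sketched as a fixed feature-mimicking penalty, in contrast to the adaptive adversarial term of Eq.(3); the tensor shapes and the weight `mu` below are illustrative assumptions.

```python
# Hedged sketch of the fixed l2 feature-mimicking baseline: cross-entropy on the
# implicit features plus an l2 penalty pulling H_I towards the frozen H_A.
import torch
import torch.nn.functional as F

def l2_mimic_loss(logits, y, h_i, h_a, mu=0.1):
    return F.cross_entropy(logits, y) + mu * ((h_i - h_a.detach()) ** 2).sum(dim=1).mean()

logits, y = torch.randn(4, 11), torch.randint(0, 11, (4,))
h_i, h_a = torch.randn(4, 128), torch.randn(4, 128)
print(l2_mimic_loss(logits, y, h_i, h_a).item())
```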
Following previous work, Table 3 reports the F1 Relation Train Dev Test Comparison 1942/1942 197/986 152/894 Contigency 3342/3342 295/888 279/767 Expansion 7004/7004 671/512 574/472 Temporal 760/760 64/1119 85/961 Table 4: Distributions of positive and negative instances from the train/dev/test sets in four binary relation classification tasks. scores. Our method outperforms most of the prior systems in all the tasks. We achieve state-of-theart performance in recognition of the Expansion relation, and obtain comparable scores with the best-performing methods in each of the other relations, respectively. Notably, our feature imitation scheme greatly improves over (Zhou et al., 2010) which leverages implicit connectives as an intermediate prediction task. This provides additional evidence for the effectiveness of our approach. 4.3 Qualitative Analysis We now take a closer look into the modeling behavior of our framework, by investigating the process of the adversarial game during training, as well as the feature imitation effects. Figure 4 demonstrates the training progress of different components. The a-CNN network keeps high predictive accuracy as implicit connectives are given, showing the importance of connective cues. The rise-and-fall patterns in the accuracy of the discriminator clearly show its competition with the implicit relation network i-CNN as training goes. At first few iterations the accuracy of the discriminator increases quickly to over 0.9, while at late stage the accuracy drops to around 0.6, showing that the discriminator is getting confused by i-CNN (an accuracy of 0.5 indicates full confusion). The i-CNN network keeps improving in terms of implicit relation classification accuracy, as it is gradually fitting to the data and simultaneously learning increasingly discriminative features by mimicking a-CNN. The system exhibits similar learning patterns in the two different settings, showing the stability of the training strategy. We finally visualize the output feature vectors of i-CNN and a-CNN using the t-SNE method (Maaten and Hinton, 2008) in Figure 5. Without feature imitation, the extracted features by the two networks are clearly separated (Figure 5(a)). In contrast, as shown in Figures 5(b)(c), the feature vectors are increasingly mixed as training proceeds. Thus our framework has suc1013 0 5 10 15 Training epochs 0.4 0.6 0.8 Accuracy a-CNN i-CNN Discr 0 5 10 15 20 Training epochs 0.4 0.6 0.8 Accuracy a-CNN i-CNN Discr Figure 4: (Best viewed in colors.) Test-set performance of three components over training epochs. Relation networks a-CNN and i-CNN are measured with multi-class classification accuracy (with or without implicit connectives, respectively), while the discriminator is evaluated with binary classification accuracy. Top: the PDTB-Lin setting (Lin et al., 2009), where first 8 epochs are for initialization stage (thus the discriminator is fixed and not shown); Bottom: the PDTB-Ji setting (Ji and Eisenstein, 2015), where first 3 epochs are for initialization. (a) (b) (c) Figure 5: (Best viewed in colors.) Visualizations of the extracted hidden features by the implicit relation network i-CNN (blue) and connective-augmented relation network a-CNN (orange), in the multi-class classification setting (Lin et al., 2009). (a) Two networks are trained without adversary (with shared classifier); (b) Two networks are trained within our framework at epoch 10; (c) at epoch 20. 
The implicit relation network successfully imitates the connective-augmented features through the adversarial game. Visualization is conducted with the t-SNE algorithm (Maaten and Hinton, 2008). cessfully driven i-CNN to induce similar representations with a-CNN, even though connectives are not present. 5 Discussions We have developed an adversarial neural framework that facilitates an implicit relation network to extract highly discriminative features by mimicking a connective-augmented network. Our method achieved state-of-the-art performance for implicit discourse relation classification. Besides implicit connective examples, our model can naturally exploit enormous explicit connective data to further improve discourse parsing. The proposed adversarial feature imitation scheme is also generally applicable to other context to incorporate indicative side information available at training time for enhanced inference. Our framework shares a similar spirit of the iterative knowledge distillation method (Hu et al., 2016a,b) which train a “student” network to mimic the classification behavior of a knowledgeinformed “teacher” network. Our approach encourages imitation on the feature level instead of the final prediction level. This allows our approach to apply to regression tasks, and more interestingly, the context in which the student and teacher networks have different prediction outputs, e.g., performing different tasks, while transferring knowledge between each other can be beneficial. Besides, our adversarial mechanism provides an adaptive metric to measure and drive the imitation procedure. 1014 References Or Biran and Kathleen McKeown. 2013. Aggregated word pair features for implicit discourse relation disambiguation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL, Volume 2: Short Papers). Sofia, Bulgaria, pages 69–73. Chlo´e Braud and Pascal Denis. 2015. Comparing word representations for implicit discourse relation classification. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Lisbon, Portugal, pages 2201–2211. Chlo´e Braud and Pascal Denis. 2016. Learning connective-based word representations for implicit discourse relation identification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP). Austin, Texas, pages 203–213. Deng Cai, Hai Zhao, Zhisong Zhang, Yuan Xin, Yongjian Wu, and Feiyue Huang. 2017. Fast and accurate neural word segmentation for Chinese. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL). Vancouver, Canada. Changge Chen, Peilu Wang, and Hai Zhao. 2015. Shallow discourse parsing using constituent parsing tree. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning - Shared Task (CONLL). Beijing, China, pages 37– 41. Changge Chen and Hai Zhao. 2015. Deceptive opinion spam detection using deep level linguistic feature. In The 4th CCF Conference on Natural Language Processing and Chinese Computing (NLPCC 2015), LNCS. Nanchang, China, volume 9362, pages 465– 474. Jifan Chen, Qi Zhang, Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016a. Implicit discourse relation detection via a deep architecture with gated relevance network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL Volume 1: Long Papers). Berlin, Germany, pages 1726–1735. 
Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. 2016b. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems. pages 2172–2180. Xilun Chen, Ben Athiwaratkun, Yu Sun, Kilian Weinberger, and Claire Cardie. 2016c. Adversarial deep averaging networks for cross-lingual sentiment classification. arXiv preprint arXiv:1606.01614 . John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12(Jul):2121–2159. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Franc¸ois Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. Journal of Machine Learning Research 17(59):1–35. Shima Gerani, Yashar Mehdad, Giuseppe Carenini, T. Raymond Ng, and Bita Nejat. 2014. Abstractive summarization of product reviews using discourse structure. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 1602–1613. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems. pages 2672–2680. Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric P Xing. 2016a. Harnessing deep neural networks with logic rules. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). Berlin, Germany, pages 2410–2420. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Controllable text generation. arXiv preprint arXiv:1703.00955 . Zhiting Hu, Zichao Yang, Ruslan Salakhutdinov, and Eric P Xing. 2016b. Deep neural networks with massive learned knowledge. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP). Austin, USA, pages 1670–1679. Yangfeng Ji and Jacob Eisenstein. 2015. One vector is not enough: Entity-augmented distributed semantics for discourse relations. Transactions of the Association for Computational Linguistics (TACL) 3:329– 344. Yangfeng Ji, Gholamreza Haffari, and Jacob Eisenstein. 2016. A latent variable recurrent neural network for discourse-driven language models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL). San Diego, California, pages 332–342. Yangfeng Ji, Gongbo Zhang, and Jacob Eisenstein. 2015. Closing the gap: Domain adaptation from explicit to implicit discourse relations. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Lisbon, Portugal, pages 2219–2224. Alex M Lamb, Anirudh Goyal, Ying Zhang, Saizheng Zhang, Aaron C Courville, and Yoshua Bengio. 2016. Professor forcing: A new algorithm for training recurrent networks. In Advances In Neural Information Processing Systems. pages 4601–4609. 1015 Man Lan, Yu Xu, and Zhengyu Niu. 2013. Leveraging synthetic discourse data via multi-task learning for implicit discourse relation recognition. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL, Volume 1: Long Papers). Sofia, Bulgaria, pages 476–485. Junyi Jessy Li, Marine Carpuat, and Ani Nenkova. 2014. Assessing the discourse factors that influence the quality of machine translation. 
In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL, Volume 2: Short Papers). Baltimore, Maryland, pages 283–288. Yujia Li, Kevin Swersky, and Richard S Zemel. 2015. Generative moment matching networks. In Proceedings of the 32nd International Conference on Machine Learning (ICML). Lille, France, pages 1718–1727. Zhongyi Li, Hai Zhao, Chenxi Pang, Lili Wang, and Huan Wang. 2016. A constituent syntactic parse tree based discourse parser. In Proceedings of the Twentieth Conference on Computational Natural Language Learning - Shared Task (CONLL). Berlin, Germany, pages 60–64. Maria Liakata, Simon Dobnik, Shyamasree Saha, Colin Batchelor, and Dietrich Rebholz-Schuhmann. 2013. A discourse-driven content model for summarising scientific articles evaluated in a complex question answering task. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP). Seattle, Washington, USA, pages 747–757. Xiaodan Liang, Zhiting Hu, Hao Zhang, Chuang Gan, and Eric P Xing. 2017. Recurrent topictransition GAN for visual paragraph generation. arXiv preprint arXiv:1703.07022 . Ziheng Lin, Min-Yen Kan, and Hwee Tou Ng. 2009. Recognizing implicit discourse relations in the Penn Discourse Treebank. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP). Singapore, pages 343– 351. Yang Liu and Sujian Li. 2016. Recognizing implicit discourse relations via repeated reading: Neural networks with multi-level attention. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP). Austin, Texas, pages 1224–1233. Yang Liu, Sujian Li, Xiaodong Zhang, and Zhifang Sui. 2016. Implicit discourse relation classification via multi-task neural networks. arXiv preprint arXiv:1603.02776 . Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research 9:2579–2605. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems (3). South Lake Tahoe, Nevada, USA, pages 3111–3119. Emily Pitler, Annie Louis, and Ani Nenkova. 2009. Automatic sense prediction for implicit discourse relations in text. In Proceedings of the Joint Conference of the 47th Annual Meeting of he Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing (ACL-IJCNLP). Suntec, Singapore, pages 683–691. Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind K Joshi, and Bonnie L Webber. 2008. The Penn discourse treebank 2.0. In The sixth international conference on Language Resources and Evaluation (LREC). Marrakech, Morocco, pages 2961–2968. Lianhui Qin, Zhisong Zhang, and Hai Zhao. 2016a. Implicit discourse relation recognition with contextaware character-enhanced embeddings. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. Osaka, Japan, pages 1914–1924. Lianhui Qin, Zhisong Zhang, and Hai Zhao. 2016b. Shallow discourse parsing using convolutional neural network. In Proceedings of the CoNLL-16 shared task. Berlin, Germany, pages 70–77. Lianhui Qin, Zhisong Zhang, and Hai Zhao. 2016c. A stacking gated neural architecture for implicit discourse relation classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP). 
Austin, Texas, pages 2263–2270. Attapol Rutherford and Nianwen Xue. 2015. Improving the inference of implicit discourse relations via classifying explicit discourse connectives. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL: HLT). Denver, Colorado, pages 799–808. Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. 2016. Improved techniques for training gans. In Advances in Neural Information Processing Systems. pages 2226–2234. Rupesh Kumar Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Highway networks. arXiv preprint arXiv:1505.00387 . Peilu Wang, Yao Qian, Frank K. Soong, Lei He, and Hai Zhao. 2016. Learning distributed word representations for bidirectional LSTM recurrent neural network. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). San Diego, California, pages 527–533. 1016 Yu Xu, Man Lan, Yue Lu, Zheng Yu Niu, and Chew Lim Tan. 2012. Connective prediction using machine learning for implicit discourse relation classification. In The 2012 International Joint Conference on Neural Networks (IJCNN). Brisbane, Australia, pages 1–8. Nianwen Xue, Hwee Tou Ng, Sameer Pradhan, Rashmi Prasad, Christopher Bryant, and Attapol Rutherford. 2015. The CoNLL-2015 shared task on shallow discourse parsing. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning - Shared Task (CoNLL). Beijing, China, pages 1–16. Nianwen Xue, Hwee Tou Ng, Sameer Pradhan, Bonnie Webber, Attapol Rutherford, Chuan Wang, and Hongmin Wang. 2016. The CoNLL-2016 shared task on shallow discourse parsing. In Proceedings of the Twentieth Conference on Computational Natural Language Learning - Shared Task (CoNLL). Berlin, Germany, pages 1–19. Biao Zhang, Deyi Xiong, jinsong su, Qun Liu, Rongrong Ji, Hong Duan, and Min Zhang. 2016a. Variational neural discourse relation recognizer. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP). Austin, Texas, pages 382–391. Zhisong Zhang, Hai Zhao, and Lianhui Qin. 2016b. Probabilistic graph-based dependency parsing with convolutional neural network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). Berlin, Germany, pages 1382–1392. Zhi-Min Zhou, Yu Xu, Zheng-Yu Niu, Man Lan, Jian Su, and Chew Lim Tan. 2010. Predicting discourse connectives for implicit discourse relation recognition. In Proceedings of the 23rd International Conference on Computational Linguistics (CoLING 2010). Beijing, China, pages 1507–1514. 1017

Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1018–1028 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1094 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1018–1028 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1094 Don’t understand a measure? Learn it: Structured Prediction for Coreference Resolution optimizing its measures Iryna Haponchyk∗and Alessandro Moschitti ∗DISI, University of Trento, 38123 Povo (TN), Italy Qatar Computing Research Institute, HBKU, 34110, Doha, Qatar {gaponchik.irina,amoschitti}@gmail.com Abstract An assential aspect of structured prediction is the evaluation of an output structure against the gold standard. Especially in the loss-augmented setting, the need of finding the max-violating constraint has severely limited the expressivity of effective loss functions. In this paper, we trade off exact computation for enabling the use of more complex loss functions for coreference resolution (CR). Most noteworthily, we show that such functions can be (i) automatically learned also from controversial but commonly accepted CR measures, e.g., MELA, and (ii) successfully used in learning algorithms. The accurate model comparison on the standard CoNLL–2012 setting shows the benefit of more expressive loss for Arabic and English data. 1 Introduction In recent years, interesting structured prediction methods have been developed for coreference resolution (CR), e.g., (Fernandes et al., 2014; Bj¨orkelund and Kuhn, 2014; Martschat and Strube, 2015). These models are supposed to output clusters but, to better control the exponential nature of the problem, the clusters are converted into tree structures. Although this simplifies the problem, optimal solutions are associated with an exponential set of trees, requiring to maximize over such a set. This originated latent models (Yu and Joachims, 2009) optimizing the so-called lossaugmented objective functions. In this setting, loss functions need to be factorizable together with the feature representations for finding the max-violating constraints. The consequence is that only simple loss functions, basically just counting incorrect edges, were applied in previous work, giving up expressivity for simplicity. This is a critical limitation as domain experts consider more information than just counting edges. In this paper, we study the use of more expressive loss functions in the structured prediction framework for CR, although some findings are clearly applicable to more general settings. We attempted to optimize the complicated official MELA measure1 (Pradhan et al., 2012) of CR within the learning algorithm. Unfortunately, MELA is the average of measures, among which CEAFe has an excessive computational complexity preventing its direct use. To solve this problem, we defined a model for learning MELA from data using a fast linear regressor, which can be then effectively used in structured prediction algorithms. We defined features to learn such a loss function, e.g., different link counts or aggregations such as Precision and Recall. Moreover, we designed methods for generating training data from which our regression loss algorithm (RL) can generalize well and accurately predict MELA values on unseen data. 
Since RL is not factorizable2 over a mention graph, we designed a latent structured perceptron (LSP) that can optimize non-factorizable loss functions on CR graphs. We tested LSP using RL and other traditional loss functions using the same setting of the CoNLL–2012 Shared Task, thus enabling an exact comparison with previous work. The results confirmed that RL can be effectively learned and used in LSP, although the improvement was smaller than expected, considering that our RL provides the algorithm with a more accurate feedback. Thus, we analyzed the theory behind this pro1Received most consensus in the NLP community. 2We have not found yet a possible factorization. 1018 cess by also contributing to the definition of the properties of loss optimality. These show that the available loss functions, e.g., by Fernandes et al.; Yu and Joachims, are enough for optimizing MELA on the training set, at least when the data is separable. Thus, in such conditions, we cannot expect a very large improvement from RL. To confirm such a conjecture, we tested the models in a more difficult setting, in terms of separability. We used different feature sets of a smaller size and found out that in such conditions, RL requires less epochs for converging and produces better results than the other simpler loss functions. The accuracy of RL-based model, using 16 times less features, decreases by just 0.3 points, still improving the state of the art in structured prediction. Accordingly, in the Arabic setting, where the available features are less discriminative, our approach highly improves the standard LSP. 2 Related Work There is a number of works attempting to directly optimize coreference metrics. The solution proposed by Zhao and Ng (2010) consists in finding an optimal weighting (by beam search) of training instances, which would maximize the target coreference metric. Their models, optimizing MUC and B3, deliver a significant improvement on the MUC and ACE corpora. Uryupina et al. (2011) benefited from applying genetic algorithms for the selection of features and architecture configuration by multi-objective optimization of MUC and the two CEAF variants. Our approach is different in that the evaluation measure (its approximation) is injected directly into the learning algorithm. Clark and Manning (2016) optimize B3 directly as well within a mention-ranking model. For the efficiency reasons, they omit optimization of CEAF, which we enable in this work. SVMcluster – a structured output approach by Finley and Joachims (2005) – enables optimization to any clustering loss function (including nondecomposable ones). The authors experimentally show that optimizing particular loss functions results into a better classification accuracy in terms of the same functions. However, these are in general fast to compute, which is not the MELA case. While Finley and Joachims are compelled to perform approximate inference to overcome the intractability of finding an optimal clustering, the latent variable structural approaches – SVM of Yu and Joachims (2009) and perceptron of FernanFigure 1: Latent tree used for structural learning des et al. (2014) – render exact inference possible by introducing auxiliary graph structures. The modeling of Fernandes et al. (also referred to as the antecedent tree approach) is exploited in the works of Bj¨orkelund and Kuhn (2014), Martschat and Strube (2015), and Lassalle and Denis (2015). 
Like us, the first couples such approach with approximate inference but for enabling the use of non-local features. The current state-of-the-art model of Wiseman et al. (2016) also employs a greedy inference procedure as it has global features from an RNN as a non-decomposable term in the inference objective. 3 Structure Output Learning for CR We consider online learning algorithms for linking structured input and output patterns. More formally, such algorithms find a linear mapping f(x, y) = ⟨w, Φ(x, y)⟩, where f : X × Y →R, w is a linear model, Φ(x, y) is a combined feature vector of input variables X and output variables Y . The predicted structure is derived with the argmax y∈Y f(x, y). In the next sections, we show how to learn w for CR using structured perceptron. Additionally, we provide a characterization of effective loss functions for separable cases. 3.1 Modeling CR In this framework, CR is essentially modeled as a clustering problem, where an input-output example is described by a tuple (x, y, h), x is a set of entity mentions contained in a text document, y is set of the corresponding mention clusters, and h is a latent variable, i.e., an auxiliary structure that can represent the clusters of y. For example, given the following text: Although (she)m1 was supported by (President Obama)m2, (Mrs. Clinton)m3 missed (her)m4 (chance)m5, (which)m6 looked very good before counting votes. the clusters of the entity mentions are represented by the latent tree in Figure 1, where its nodes are 1019 Algorithm 1 Latent Structured Perceptron 1: Input: X = {(xi, yi)}n i=1, w0, C, T 2: w ←w0; t ←0 3: repeat 4: for i = 1, ..., n do 5: h∗ i ←argmax h∈H(xi,yi) ⟨wt, Φ(xi, h)⟩ 6: ˆhi ←argmax h∈H(xi) ⟨wt, Φ(xi, h)⟩+C×∆(yi, h∗ i , h) 7: if ∆(yi, h∗ i, ˆhi) > 0 then 8: wt+1 ←wt + Φ(xi, h∗ i ) −Φ(xi, ˆhi) 9: end if 10: end for 11: t ←t + 1 12: until t < nT 13: w ←1 t tP i=1 wi return w mentions and the subtrees connected to the additional root node form distinct clusters. The tree h is called a latent variable as it is consistent with y, i.e., it contains only links between mention nodes that corefer or fall into the same cluster according to y. Clearly, an exponential set of trees, H, can be associated with one and the same clustering y. Using only one tree to represent a clustering makes the search for optimal mention clusters tractable. In particular, structured prediction algorithms select h that maximizes the model learned at time t as shown in the next section. 3.2 Latent Structured Perceptron (LSP) The LSP model proposed by Sun et al. (2009) and specialized for solving CR tasks by Fernandes et al. (2012) is described by Alg. 1. Given a training set {(xi, yi)}n i=1, initial w03, a trade off parameter C, and the maximum number of epochs T, LSP iterates the following operations: Line 5 finds a latent tree h∗ i that maximizes ⟨wt, Φ(xi, h)⟩for the current example (xi, yi). It basically finds the max ground truth tree with respect to the current wt. Finding such max requires an exploration over the tree set H(xi, yi), which only contains arcs between mentions that corefer according to the gold standard clustering yi. Line 6 seeks for the max-violating tree ˆhi in H(xi), which is the set of all candidate trees using any possible combination of arcs. Line 7 tests if the produced tree ˆhi has some mistakes with respect to the gold clustering yi, using loss function ∆(yi, h∗ i , ˆhi). Note that some models define a loss exploiting also the current best latent tree h∗ i . 
If the test is verified, the model is updated with the vector Φ(xi, h∗ i ) −Φ(xi, ˆhi). 3Either 0 or a random vector. Fernandes et al. (2012) used exactly the directed trees we showed as latent structures and applied Edmonds’ spanning tree algorithm (Edmonds, 1967) for finding the max. Their model achieved the best results in the CoNLL–2012 Shared Task, a challenge for CR systems (Pradhan et al., 2012). Their selected loss function also plays an important role as shown in the following. 3.3 Loss functions When defining a loss, it is very important to preserve the factorization of the model components along the latent tree edges since this leads to efficient maximization algorithms (see Section 5). Fernandes et al. uses a loss function that (i) compares a predicted tree ˆh against the gold tree h∗and (ii) factorizes over the edges in the way the model does. Its equation is: ∆F (h∗, ˆh) = M X i=1 1ˆh(i)̸=h∗(i)(1+r·1h∗(i)=0), (1) where h∗(i) and ˆh(i) output the parent of the mention node i in the gold and predicted tree, respectively, whereas 1h∗(i)̸=ˆh(i) just checks if the parents are different, and if yes, penalty of 1 (or 1 + r if the gold parent is the root) is added. Yu and Joachims’s loss is based on undirected tree without a root and on the gold clustering y. It is computed as: ∆Y J(y, ˆh) = n(y) −k(y) + X e∈ˆh l(y, e), (2) where n(y) is the number of graph nodes, k(y) is the number of clusters in y, and l(y, e) assigns −1 to any edge e that connects nodes from the same cluster in y, and r otherwise. In our experiments, we adopt both loss functions, however, in contrast to Fernandes et al., we always measure ∆F against the gold label y and not against the current h∗, i.e., in the way it is done by Martschat and Strube (2015), who employ an equivalent LSP model in their work. 3.4 On optimality of simple loss functions The above loss functions are rather simple and mainly based on counting the number of mistaken edges. Below, we show that such simple loss functions achieve training data separation (if it exists) of a general task measure reaching its max on their 0 mistakes. The latter is a desirable characteristic of many measures used in CR and NLP research. 1020 Proposition 1 (Sufficient condition for optimality of loss functions for learning graphs). Let ∆(y, h∗, ˆh) ≥0 be a simple, edge-factorizable loss function, which is also monotone in the number of edge errors, and let µ(y, ˆh) be any graphbased measure maximized by no edge errors. Then, if the training set is linearly separable LSP optimizing ∆converges to the µ optimum. Proof. If the data is linearly separable the perceptron converges ⇒∆(yi, h∗i, ˆhi) = 0, ∀xi. The loss is factorizable, i.e., ∆(yi, h∗i, ˆhi) = X e∈ˆhi l(yi, h∗i, e), (3) where l(·) is an edge loss function. Thus, P e∈ˆhi l(yi, h∗i, e) = 0. The latter equation and monotonicity imply l(yi, h∗i, e) = 0, ∀e ∈ˆhi, i.e., there are no edge mistakes, otherwise by fixing such edges, we would have a smaller ∆, i.e., negative, contradicting the initial positiveness hypothesis. Thus, no edge mistake in any xi implies that µ(y, ˆh) is maximized on the training set. Corollary 1. ∆F (h∗, ˆh) and ∆Y J(y, ˆh) are both optimal loss functions for graphs. Proof. Equations 1 and 2 show that both are 0 when applied to a clustering with no mistake on the edges. Additionally, for each edge mistake more, both loss functions increase, implying monotonicity. Thus, they satisfy all the assumptions of Proposition 1. 
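To make the two edge-factorizable losses of section 3.3 concrete, a small sketch follows. The conventions here (a gold tree h* given as a parent array, index 0 for the artificial root, a cluster map for y) are assumptions of the sketch, not the authors' code.

```python
# Hedged sketch of the edge-factorized losses. Mentions are 1..M, index 0 is the
# artificial root; `gold_parent`/`pred_parent` give the parent in h*/h-hat, and
# `cluster_of` maps each mention to its gold cluster id in y.
def delta_F(gold_parent, pred_parent, r=1.0):
    """Eq.(1): 1 per wrong parent, 1 + r if the missed gold parent is the root."""
    loss = 0.0
    for m in gold_parent:
        if pred_parent[m] != gold_parent[m]:
            loss += 1.0 + (r if gold_parent[m] == 0 else 0.0)
    return loss

def delta_YJ(cluster_of, pred_edges, r=0.1):
    """Eq.(2): n(y) - k(y), plus -1 per correct predicted edge and +r per wrong one."""
    n, k = len(cluster_of), len(set(cluster_of.values()))
    l = sum(-1.0 if cluster_of[a] == cluster_of[b] else r for a, b in pred_edges)
    return n - k + l

gold_parent = {1: 0, 2: 0, 3: 1, 4: 3}              # gold clusters: {1,3,4} and {2}
pred_parent = {1: 0, 2: 1, 3: 1, 4: 0}
print(delta_F(gold_parent, pred_parent))            # 3.0: two wrong parents, one of them the root
cluster_of = {1: "A", 2: "B", 3: "A", 4: "A"}
pred_edges = [(1, 3), (2, 1), (3, 4)]               # predicted non-root links
print(delta_YJ(cluster_of, pred_edges))             # 4 - 2 - 1 + 0.1 - 1 = 0.1
```

Here r is the tunable penalty parameter referred to later in section 6.1; the values used above are only examples.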
The above characteristic suggests that ∆F and ∆Y J can optimize any measure that reasonably targets no mistakes as its best outcome. Clearly, this property does not guarantee loss functions to be suitable for a given task measure, e.g., the latter may have different max points and behave rather discontinuously. However, a common practice in NLP is to optimize the maximum of a measure, e.g., in case of Precision and Recall, or Accuracy, therefore, loss functions able to at least achieve such an optimum are preferable. 4 Automatically learning a loss function How to measure a complex task such as CR has generated a long and controversial discussion in the research community. While such a debate is progressing, the most accepted and used measure is the so-called Mention, Entity, and Link Average (MELA) score. As it will be clear from the description below, MELA is not easily interpretable and not robust to the mention identification effect (Moosavi and Strube, 2016). Thus, loss functions showing the optimality property may not be enough to optimize it. Our proposal is to use a version of MELA transformed in a loss function optimized by an LSP algorithm with inexact inference. However, the computational complexity of the measure prevents to carry out an effective learning. Our solution is thus to learn MELA with a fast linear regressor, which also produces a continuos version of the measure. 4.1 Measures for CR MELA is the unweighted average of MUC (Vilain et al., 1995), B3 (Bagga and Baldwin, 1998) and CEAFe (CEAF variant with entity-based similarity) (Luo, 2005; Cai and Strube, 2010) scores, having heterogeneous nature. MUC is based on the number of correctly predicted links between mentions. The number of links required for obtaining the key entity set K is P ki∈K(|ki|−1), where ki are key entities in K (cardinality of each entity minus one). MUC recall computes what fraction of these were predicted, and the predicted were as many as P ki∈K(|ki| − |p(ki)|) = P ki∈K(|ki|−1−(|p(ki)|−1)), where p(ki) is a partition of the key entity ki formed by intersecting it with the corresponding response entities rj ∈R, s.t., ki ∩rj ̸= ∅. This number equals to the number of the key links minus the number of missing links, required to unite the parts of the partition p(ki) to obtain ki. B3 computes Precision and Recall individually for each mention. For mention m: Recallm = |km i ∩rm j | |km i | , where km i and rm j , subscripted with m, denote, correspondingly, the key and response entities into which m falls. The over-document Recall is then an average of these taken with respect to the number of the key mentions. The MUC and B3 Precision is computed by interchanging the roles of the key and response entities. CEAFe computes similarity between key and system entities after finding an optimal alignment between them. Using ψ(ki, rj) = 2|ki∩rj| |ki|+|rj| as the entity similarity measure, it finds an optimal oneto-one map g∗: K →R, which maps every key entity to a response entity, maximazing an overall similarity Ψ(g) = P ki∈K ψ(ki, g(ki)) of the example. This is solved as a bipartite matching problem by the Kuhn-Munkres algorithm. 
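As an illustration of the alignment step just described, a minimal sketch using SciPy's implementation of the Kuhn-Munkres algorithm is given below; the toy entity sets are assumptions and this is not the official scorer.

```python
# Sketch of the CEAF_e alignment: build the psi similarity matrix between key and
# response entities and solve the one-to-one assignment (Kuhn-Munkres via SciPy).
import numpy as np
from scipy.optimize import linear_sum_assignment

def psi(a, b):                                   # psi(k_i, r_j) = 2|k ∩ r| / (|k| + |r|)
    return 2.0 * len(a & b) / (len(a) + len(b))

def ceafe_alignment(key, response):
    """Optimal one-to-one map g* between key and response entities, and Psi(g*)."""
    sim = np.array([[psi(k, r) for r in response] for k in key])
    rows, cols = linear_sum_assignment(-sim)     # maximize total similarity
    return list(zip(rows.tolist(), cols.tolist())), float(sim[rows, cols].sum())

key = [{"m1", "m3", "m4"}, {"m2"}]
response = [{"m1", "m3"}, {"m2", "m4"}]
print(ceafe_alignment(key, response))            # Psi(g*) = 0.8 + 2/3
```

Precision and Recall then follow from Ψ(g∗) exactly as the text describes next.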
Then Preci1021 Algorithm 2 Finding a Max-violating Spanning Tree 1: Input: training example (x, y); graph G(x) with vertices V denoting mentions; set of the incoming candidate edges, E(v), v ∈V ; weight vector w 2: h∗←∅ 3: for v ∈V do 4: e∗= argmax e∈E(v) ⟨w, e⟩+ C × l(y, e) 5: h∗= h∗∪e∗ 6: end for 7: return max-violating tree h∗ 8: (clustering y∗is induced by the tree h∗) sion and Recall are Ψ(g∗) P rj∈R ψ(rj,rj) and Ψ(g∗) P ki∈K ψ(ki,ki), respectively. MELA computation is rather expensive mostly because of CEAFe. Its complexity is bounded by O(Ml2 log l) (Luo, 2005), where M and l are, correspondingly, the maximum and minimum number of entities in y and ˆy. Computing CEAFe is especially slow for the candidate outputs ˆy with a low quality of prediction, i.e, when l is big, and the coherence with the gold y is scarse. Finally, B3 and CEAFe are strongly influenced by the mention identification effect (Moosavi and Strube, 2016). Thus, ∆F and ∆Y J may output identical values for different clusterings that can have a big gap in terms of MELA. 4.2 Features for learning measures As computational reasons prevent to use MELA in LSP (see our inexact search algorithm in Section 5), we study methods for approximating it with a linear regressor. For this purpose, we define nine features, which count either exact or simplified versions of Precision, Recall and F1 of each of the three metric-components of MELA. Clearly, neither ∆F nor ∆Y J provide the same values. Apart from the computational complexity, the difficulty of evaluating the quality of the predicted clustering ˆy during training is also due to the fact that CR is carried out on automatically detected mentions, while it needs to be compared against a gold standard clustering of a gold mention set. However, we can use simple information about automatic mentions and how they relate to gold mentions and gold clusters. In particular, we use four numbers: (i) correctly detected automatic mentions, (ii) links they have in the gold standard, (iii) gold mentions, and (iv) gold links. The last one enables the precise computation of Precision, Recall and F1-measure values of MUC; the required partitions p(ki) of key entities are also available at training time as they contain only automatic mentions. These are the first three features that we design. Likewise for B3, the feature values can be derived using (ii) and (iii). For computing CEAFe heuristics, we do not perform cluster alignment to find an optimal Ψ(g∗). Instead of Ψ(g∗), which can be rewritten as P m∈K∩R 2 |km i |+|g∗(km i )| if summing up over the mentions not the entities, we simply use ˜Ψ = P m∈K∩R 2 |km i |+|rm j |, pretending that for each m its key km i and response rm j entities are aligned. P rj∈R ψ(rj, rj) and P ki∈K ψ(ki, ki) in the denominators of the Precision and Recall are the number of predicted and gold clusters, correspondingly. The imprecision of the CEAFe related features is expected to be leveraged when put together with the exact B3 and MUC values into the regression learning using the exact MELA values (implicitly exact CEAFe values as well). 4.3 Generating training and test data The features described above can be used to characterize the clustering variables ˆy. For generating training data, we collected all the maxviolating ˆy produced during LSPF (using ∆F ) learning and associate them with their correct MELA scores from the scorer. This way, we can have both training and test data for our regressor. 
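As a toy illustration of this regression step, the sketch below fits a linear SVR on synthetic nine-dimensional feature vectors and turns its prediction into the loss of Eq.(4). In the actual setup the features are the MUC/B3/CEAFe approximations of section 4.2 and the targets come from the official MELA scorer; the synthetic data here is purely a stand-in.

```python
# Toy sketch of learning w_rho: synthetic 9-dim features and synthetic MELA-like
# targets replace the real section-4.2 features and the official scorer output.
import numpy as np
from sklearn.svm import LinearSVR

rng = np.random.default_rng(0)
X = rng.random((6000, 9))                    # stand-in for the nine P/R/F1-style features
w_true = rng.random(9)
t = 100.0 * X @ w_true / w_true.sum()        # stand-in for exact MELA scores in [0, 100]

reg = LinearSVR(C=1.0, max_iter=5000).fit(X, t)

def delta_rho(features):
    """Eq.(4): learned loss = 100 - w_rho . phi(y, y_hat)."""
    return 100.0 - float(reg.predict(features.reshape(1, -1))[0])

print(round(delta_rho(X[0]), 2), round(100.0 - t[0], 2))
```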
In our experiments, for the generation purpose, we decided to run LSPF on each document separately to obtain more variability in ˆy’s. We use a simple linear SVM to learn a model wρ. Considering that MELA(y, ˆy) score lies in the interval [100, 0], a simple approximation of the loss could be: ∆ρ(y, ˆy) = 100 −wρ · φ(y, ˆy). (4) Below, we show its improved version and an LSP for learning with it based on inexact search. 5 Learning with learned loss functions Our experiments will demonstrate that ∆ρ can be accurately learned from data. However, the features we used for this are not factorizable over the edges of the latent trees. Thus, we design a new LSP algorithm that can use our learned loss in an approximated max search. 5.1 A general inexact algorithm for CR If the loss function can be factorized over tree edges (see Equation 3) the max-violating constraint in Line 6 of Alg. 1 can be efficiently found by exact decoding, e.g., using Edmonds’ algorithm as in Fernandes et al. (2014) or Kruskal’s as 1022 Algorithm 3 Inexact Inference of a Max-violating Spanning Tree with a Global Loss 1: Input: training example (x, y); graph G(x) with vertices V denoting mentions; set of the incoming candidate edges, E(v), v ∈V ; w, ground truth tree h∗ 2: ˆh ←∅ 3: score ←0 4: repeat 5: prev score = score 6: score = 0 7: for v ∈V do 8: h = ˆh \ e(v) 9: ˆe = argmax e∈E(v) ⟨w, e⟩+ C × ∆(y, h∗, h ∪e) 10: ˆh = h ∪ˆe 11: score = score + ⟨w,ˆe⟩ 12: end for 13: score = score + ∆(y, h∗, ˆh) 14: until score = prev score 15: return max-violating tree ˆh in Yu and Joachims (2009). The candidate graph, by construction, does not contain cycles, and the inference by Edmonds’ algorithm does technically the same as the ”best-left-link” inference algorithm by Chang et al. (2012). This can be schematically represented in Alg. 2. When we deal with ∆ρ, Alg. 2 cannot be longer applied as our new loss function is nonfactorizable. Thus, we designed a greedy solution, Alg. 3, which still uses the spanning tree algorithm, though, it is not guaranteed to deliver the max-violating constraint. However, finding even a suboptimal solution optimizing a more accurate loss function may achieve better performance both in terms of speed and accuracy. We reformulate Step 4 of Alg. 2, where a maxviolating incoming edge ˆe is identified for a vertex v. The new max-violating inference objective contains now a global loss measured on the partial structure ˆh built up to now plus a candidate edge e for a vertex v in consideration (Line 10 of Alg. 3). On a high level, this resembles the inference procedure of Wiseman et al. (2016), who use it for optimizing global features coming from an RNN. Differently though, after processing all the vertices, we repeat the procedure until the score of ˆh no longer improves. Note that Bj¨orkelund and Kuhn (2014) perform inexact search on the same latent tree structures to extend the model to non-local features. In contrast to our approach, they use beam search and accumulate the early updates. In addition to the design of an algorithm enabling the use of our ∆ρ, there are other intricacies Samples # examples MSE SCC Train Test S1 S2 6, 011 2.650 99.68 S2 S1 5, 496 2.483 99.70 Table 1: Accuracy of the loss regressor on two different sets of examples generated from different documents samples. caused by the lack of factorization that need to be taken into account (see the next section). 5.2 Approaching factorization properties The ∆ρ defined by Equation 4 approximately falls into the interval [0, 100]. 
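A hedged sketch of the greedy search in Algorithm 3 follows. The `score`, `candidate_edges`, and `global_loss` callables, together with the toy example, are assumptions standing in for ⟨w, Φ(e)⟩, the candidate incoming-edge sets E(v), and the non-factorizable loss ∆ρ.

```python
# Sketch of Algorithm 3: greedily re-pick each vertex's incoming edge against a
# global (non-factorizable) loss, and repeat until the objective stops changing.
def max_violating_tree(vertices, candidate_edges, score, global_loss, C=1.0):
    tree = {}                                   # vertex -> chosen incoming edge (u, v)
    prev = None
    while True:
        total = 0.0
        for v in vertices:
            partial = {n: e for n, e in tree.items() if n != v}
            best = max(candidate_edges[v],
                       key=lambda e: score(e) + C * global_loss(list(partial.values()) + [e]))
            tree[v] = best
            total += score(best)
        total += C * global_loss(list(tree.values()))
        if prev is not None and abs(total - prev) < 1e-9:   # score no longer improves
            return tree
        prev = total

# toy usage: 3 mentions, root = 0; the "global loss" just counts edges that
# disagree with a toy gold parent map (a crude stand-in for Delta_rho)
gold = {1: 0, 2: 1, 3: 0}
edges = {v: [(u, v) for u in range(v)] for v in (1, 2, 3)}
score = lambda e: {(0, 1): 1.0, (0, 2): 0.2, (1, 2): 0.5,
                   (0, 3): 0.4, (1, 3): 0.1, (2, 3): 0.3}[e]
loss = lambda t: float(sum(1 for (u, v) in t if gold[v] != u))
print(max_violating_tree([1, 2, 3], edges, score, loss, C=1.0))
```

As in the text, nothing guarantees that the returned tree is the true max-violating constraint; the loop only stabilizes when a full pass over the vertices leaves the objective unchanged.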
5.2 Approaching factorization properties
The ∆ρ defined by Equation 4 approximately falls into the interval [0, 100]. However, the simple optimal loss functions ∆F and ∆YJ output a value that depends on the size of the input training document in terms of edges (as they factorize over edges). Since this property cannot be learned from MELA by our regression algorithm, we calibrate our loss with respect to the number of correctly predicted mentions, c, in that document, obtaining ∆′ρ = (c/100) ∆ρ. Finally, another important issue is that, as we incrementally construct a max-violating tree according to Alg. 3, ∆ρ decreases (and MELA grows) as we add more mentions to the output while traversing the tree nodes v. Thus, to equalize the contribution of the loss among the candidate edges of different nodes, we also scale the loss of the candidate edges of the node v having order i in the document, according to the formula ∆″ρ = (i/|V|) ∆′ρ. This can be interpreted as giving more weight to hard-to-classify instances, an issue also addressed by Zhao and Ng (2010). Towards the end of the document, the probability of correctly predicting an incoming edge for a node generally decreases, as the number of hypotheses increases.

6 Experiments
In our experiments, we first show that our regressor for learning MELA approximates it rather accurately. Then, we examine the impact of our ∆ρ on state-of-the-art systems in comparison with other loss functions. Finally, we show that the impact of our model is amplified when learning in smaller feature spaces.

6.1 Setup
Data. We conducted our experiments on the English and Arabic parts of the corpus from the CoNLL-2012 Shared Task (conll.cemantix.org/2012/data.html). The English data contains 2,802, 343, and 348 documents in the training, dev. and test parts, respectively. The Arabic data includes 359, 44, and 44 documents for the training, dev. and test sets, respectively.

Figure 2: Regressor learning curves (MSE, left, and SCC, right, as a function of the number of training examples).

Models. We implement our version of LSP, where LSPF, LSPYJ, and LSPρ use the loss functions ∆F, ∆YJ, and ∆ρ, defined in Sections 3.3 and 5.2, respectively. We used cort (http://smartschat.de/software), the coreference toolkit by Martschat and Strube (2015), both to preprocess the English data and to extract candidate mentions and features (the basic set). For Arabic, we used mentions and features from BART (http://www.bart-coref.org/) (Uryupina et al., 2012). We extended the initial feature set for Arabic with the feature combinations proposed by Durrett and Klein (2013), i.e., those permitted by the available initial features.

Parametrization. All the perceptron models require tuning of a regularization parameter C; LSPF and LSPYJ also require tuning of a loss-specific parameter r. We select the parameters on the entire dev. set by training on 100 random documents from the training set. We pick C ∈ {1.0, 100.0, 1000.0, 2000.0}, the r values for LSPF from the interval [0.5, 2.5] with step 0.5, and the r values for LSPYJ from {0.05, 0.1, 0.5}. Ultimately, for English, we used C = 1000.0 in all the models, r = 1.0 in LSPF and r = 0.1 in LSPYJ. Wider ranges of parameter values were considered for Arabic, due to the lower mention detection rate: C = 1000.0 and r = 6.0 for LSPF, C = 1000.0 and r = 0.01 for LSPYJ, and C = 5000.0 for LSPρ. A standard setting in previous work for the number of LSP epochs T is 5 (Martschat and Strube, 2015); Fernandes et al. (2014) noted that T = 50 was sufficient for convergence. We selected the best T from 1 to 50 on the dev. set.
Evaluation measure. We used MUC, B3, CEAFe and their average MELA for evaluation, computed with version 8 of the official CoNLL scorer.

Table 2: Results of our models and previous work evaluated on the dev. and test sets following the exact CoNLL-2012 English setting, using all training documents, with all features (All) and 1M selected features (Selected). Tbest is evaluated on the dev. set.
               Selected (N = 1M)        All (N ∼ 16.8M)
Model          Dev.   Test   Tbest      Dev.   Test   Tbest
LSPF           63.72  62.19  49         64.05  63.05  41
LSPYJ          63.72  62.44  29         64.32  62.76  13
LSPρ           64.12  63.09  27         64.30  63.37  18
M&S AT         –      –      –          62.31  61.24  5
M&S MR         –      –      –          63.52  62.47  5
B&K            –      –      –          62.52  61.63  –
Fer            –      –      –          60.57  60.65  –

6.2 Learning loss functions
For learning MELA, we generated training and test examples from LSPF according to the procedure described in Section 4.3. In the first experiment, we trained the wρ model on a set of examples S1, generated from a sample of 100 English documents, and tested on a set of examples S2, generated from another sample of the same size, and vice versa. The results in Table 1 show that with just 5,000/6,000 examples, the Mean Squared Error (MSE) is roughly between ∼2.4 and 2.7: these are rather small numbers considering that the regression output values lie in the interval [0, 100]. The Squared Correlation Coefficient (SCC) reaches about 99.7%, demonstrating that our regression approach is effective in estimating MELA. Additionally, Figure 2 shows the regression learning curves evaluated with MSE and SCC. The former rapidly decreases and, with about 1,000 examples, reaches a plateau of around 2.3. The latter shows a similar behaviour, approaching a correlation of about 99.8% with the real MELA.

6.3 State of the art and model comparison
We first experimented with the standard CoNLL setting to compare the LSP accuracy in terms of MELA using the three different loss functions, i.e., LSPF, LSPYJ and LSPρ. In particular, we used all the documents of the training set and all N ∼ 16.8M features from cort, and tested on both the dev. and test sets. The results are reported in the All columns of Table 2. We note first that our ∆ρ is effective, as it stays on a par with ∆F and ∆YJ on the dev. set. This is interesting, as Corollary 1 shows that such functions can optimize MELA; the reported values refer to the optimal epoch numbers. Also, LSPρ improves over the other models on the test set by 0.3 percent points (statistically significant at the 93% level of confidence).
Secondly, all three models improve the state of the art on CR using LSP, i.e., over Martschat and Strube (2015) using antecedent trees (M&S AT) or mention ranking (M&S MR), Björkelund and Kuhn (2014) using a global feature model (B&K), and Fernandes et al. (2014) (Fer). Note that all the LSP models were trained on the training set only, without retraining on the training and dev. sets together; thus our scores could be further improved. Thirdly, Table 3 shows the breakdown of the MELA results into its components on the test set. Interestingly, LSPρ is noticeably better in terms of B3 and CEAFe, while LSP with the simple losses, as expected, delivers a higher MUC score. Finally, the overall improvement of ∆ρ is not impressive. This mainly depends on the optimality of the competing loss functions, which, in a setting of ∼16.8M features, satisfy the separability condition of Proposition 1.

Figure 3: Results of LSP models on the dev. set using different numbers of features N (MELA vs. number of epochs T, for N = 10K, 100K, 300K, 500K, 1M, 1.5M and All ∼16.8M). The last plot reports the MELA score on the test set of the models using the optimal number of epochs tuned on the dev. set. Curves: LSPF, LSPYJ, LSPρ.

Table 3: Results on the test set using the same setting as Table 2, broken down by the measures composing MELA.
#Feat.  Model   MUC    B3     CEAFe  MELA
All     LSPF    72.66  59.94  56.54  63.05
        LSPYJ   72.18  59.31  55.82  62.76
        LSPρ    72.34  60.36  57.40  63.37
1M      LSPF    71.95  59.03  55.59  62.19
        LSPYJ   72.35  59.54  56.38  62.44
        LSPρ    72.09  60.11  57.07  63.09

6.4 Learning in more challenging conditions
In these experiments, we verify the hypothesis that, when the optimality property is partially or totally missing, ∆ρ is more visibly superior to ∆F and ∆YJ. As we do not want to degrade the effectiveness of the latter, the only condition we vary is the separability of the data, making it inseparable or at least harder to separate. Such conditions can be obtained by reducing the size of the feature space. However, since we aim at testing conditions where ∆ρ is practically useful, we filter out the less important features while preserving the model accuracy (at least when the selection is not extremely harsh). For this purpose, we use a feature selection approach based on a basic binary classifier trained to discriminate between correct and incorrect mention pairs. Such a classifier is typically used in non-structured CR methods and has the nice property of using the same features as LSP (we do not use global features in our study). We carried out the selection by ranking the features according to the absolute values of the classifier's model weights and keeping those with higher rank (Haponchyk and Moschitti, 2017); a sketch of this selection step is given at the end of this subsection.

The MELA produced by our models using all the training data is presented in Figure 3. The first 7 plots show learning curves in terms of LSP epochs for feature sets of increasing size N, evaluated on the dev. set. We note the following. Firstly, the fewer features are available, the better the LSPρ curves are than those of LSPF and LSPYJ in terms of both accuracy and convergence speed. The intuition is that finding a separation of the training set (that generalizes well) becomes more challenging (e.g., with 10K features, the data is not linearly separable), and thus a loss function that is closer to the real measure provides some advantage. Secondly, when using all features, LSPρ is still overall better than the other models, but the latter can clearly achieve the same MELA on the dev. set. Thirdly, the last plot shows the MELA produced by the LSP models on the test set, when trained with the best epoch derived from the dev. set (previous plots). We observe that LSPρ is consistently better than the other models, though its advantage decreases as the number of features increases. Next, in the Selected columns of Table 2, we report the model MELA using 1 million features. We note that LSPρ improves over the other models by at least 0.6 percent points, achieving the same accuracy as the best of its competitors, i.e., LSPF, using all the features. Finally, ∆ρ does not satisfy Proposition 1; therefore, in general, we do not know whether it can optimize any µ-type measure over graphs. However, being learned to optimize MELA, it clearly separates the data while maximizing such a measure. We empirically verified this by checking the MELA score obtained on the training set: we found that LSPρ always optimizes MELA, iterating for fewer epochs than the other loss functions.
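The following is a minimal sketch of the feature selection step described above, using a linear SVM over mention-pair examples as a stand-in for the basic binary classifier; the data format, parameter values, and function names are illustrative.

```python
import numpy as np
from sklearn.svm import LinearSVC

def select_features_by_weight(pair_feats, pair_labels, n_keep):
    """Train a binary mention-pair classifier on the same feature space as LSP,
    rank features by the absolute value of their learned weights, and keep
    the n_keep highest-ranked ones.

    pair_feats  : (n_pairs, n_features) array of mention-pair feature vectors
    pair_labels : array of {0, 1}, 1 for correct (coreferent) pairs
    """
    clf = LinearSVC(C=1.0, max_iter=10000)
    clf.fit(pair_feats, pair_labels)
    ranking = np.argsort(-np.abs(clf.coef_.ravel()))
    return ranking[:n_keep]   # indices of the selected feature columns

# Usage (illustrative): keep, e.g., the top 1M features and restrict the LSP
# feature vectors to these columns before training LSP_F, LSP_YJ and LSP_rho.
```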
6.5 Generalization to other languages
Here, we test the effectiveness of the proposed method on Arabic, using all available data and features. The results in Table 4 reveal an indisputable superiority of LSPρ over the counterparts optimizing simple loss functions. They support the results of the previous section, as we had to deal with the insufficiency of the expert-based features for Arabic. In this harder setting, LSPρ improved over LSPF by more than 4.7 points. We also tested, on Arabic, the loss model wρ trained for the experiments on the English data (i.e., the All setting of Section 6.3) within LSPρ. This corresponds to the LSP^EN_ρ model. Notably, it performs even better, by 1.5 points, than LSPρ using a loss learned from Arabic examples. This suggests that ∆ρ has a useful data-invariance property. The improvement delivered by the "English" wρ is due to the fact that it was trained on richer data: (i) quantitatively, since it comes from almost 8 times more training documents than Arabic, and (ii) qualitatively, in the sense of greater diversity with respect to the RL target value. Indeed, the Arabic data is much less separable than the English data, and this prevents the generation of examples with higher MELA values.

Table 4: Results of our and baseline models evaluated on the dev. and test sets following the exact CoNLL-2012 Arabic setting, using all training documents and all features (N ∼ 395K). Tbest is evaluated on the dev. set.
Model                   Dev.   Test   Tbest
LSPF                    31.20  33.19  10
LSPYJ                   27.70  28.51  13
LSPρ                    36.91  37.91  6
LSP^EN_ρ                38.47  39.56  12
Uryupina et al., 2012   –      37.54  –
B&K                     46.67  48.72  –
Fer                     –      45.18  –

7 Conclusions
In this paper, we studied the use of complex loss functions in structured prediction for CR. Given the scale of our investigation, we limited our study to LSP, which is in any case considered state of the art. We derived several findings: (i) for the first time, to our knowledge, we showed that a complex measure, such as MELA, can be learned by a linear regressor (RL) with high accuracy and effective generalization. (ii) The latter was essential for designing a new LSP based on inexact search and RL. (iii) We showed that an automatically learned loss can be optimized and provides state-of-the-art performance in a real setting, including thousands of documents and millions of features, such as the CoNLL-2012 Shared Task. (iv) We defined a property of optimal loss functions for CR, which shows that in separable cases such losses are enough to reach the state of the art; however, as soon as separability becomes harder, simple loss functions lose optimality and RL becomes more accurate and faster. (v) Our MELA approximation provides a loss that is data invariant: once learned, it can be optimized in LSP on different datasets and in different languages. Our study opens several future directions, ranging from defining algorithms based on automatically learned loss functions to learning more effective measures from expert examples.

Acknowledgements
We would like to thank Olga Uryupina for providing us with the preprocessed data from BART for Arabic. This work has been supported by the EC project CogNet, 671625 (H2020-ICT-2014-2, Research and Innovation action).
Many thanks to the anonymous reviewers for their valuable suggestions. 1026 References Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In Proceedings of the Linguistic Coreference Workshop at the First International Conference on Language Resources and Evaluation. Granada, Spain, pages 563–566. Anders Bj¨orkelund and Jonas Kuhn. 2014. Learning structured perceptrons for coreference resolution with latent antecedents and non-local features. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Baltimore, Maryland, pages 47–57. http://www.aclweb.org/anthology/P/P14/P14-1005. Jie Cai and Michael Strube. 2010. Evaluation metrics for end-to-end coreference resolution systems. In Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Association for Computational Linguistics, Stroudsburg, PA, USA, SIGDIAL ’10, pages 28–36. http://dl.acm.org/citation.cfm?id=1944506.1944511. Kai-Wei Chang, Rajhans Samdani, Alla Rozovskaya, Mark Sammons, and Dan Roth. 2012. Illinoiscoref: The ui system in the conll-2012 shared task. In Joint Conference on EMNLP and CoNLL - Shared Task. Association for Computational Linguistics, Jeju Island, Korea, pages 113– 117. http://www.aclweb.org/anthology/W12-4513. Kevin Clark and Christopher D. Manning. 2016. Improving coreference resolution by learning entitylevel distributed representations. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 643–653. http://www.aclweb.org/anthology/P16-1061. Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Jack Edmonds. 1967. Optimum branchings. Journal of research of National Bureau of standards pages 233–240. Eraldo Rezende Fernandes, C´ıcero Nogueira dos Santos, and Ruy Luiz Milidi´u. 2012. Latent structure perceptron with feature induction for unrestricted coreference resolution. In Joint Conference on EMNLP and CoNLL Shared Task. Association for Computational Linguistics, Jeju Island, Korea, pages 41–48. http://www.aclweb.org/anthology/W12-4502. Eraldo Rezende Fernandes, C´ıcero Nogueira dos Santos, and Ruy Luiz Milidi´u. 2014. Latent trees for coreference resolution. Computational Linguistics 40(4):801–835. Thomas Finley and Thorsten Joachims. 2005. Supervised clustering with support vector machines. In ICML ’05: Proceedings of the 22nd international conference on Machine learning. ACM, New York, NY, USA, pages 217–224. https://doi.org/10.1145/1102351.1102379. Iryna Haponchyk and Alessandro Moschitti. 2017. A practical perspective on latent structured prediction for coreference resolution. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers. Association for Computational Linguistics, Valencia, Spain, pages 143–149. http://www.aclweb.org/anthology/E17-2023. Emmanuel Lassalle and Pascal Denis. 2015. Joint anaphoricity detection and coreference resolution with constrained latent structures. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence. AAAI Press, AAAI’15, pages 2274–2280. http://dl.acm.org/citation.cfm?id=2886521.2886637. Xiaoqiang Luo. 2005. 
On coreference resolution performance metrics. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Stroudsburg, PA, USA, HLT ’05, pages 25–32. https://doi.org/10.3115/1220575.1220579. Sebastian Martschat and Michael Strube. 2015. Latent structures for coreference resolution. Transactions of the Association for Computational Linguistics 3:405–418. Nafise Sadat Moosavi and Michael Strube. 2016. Which coreference evaluation metric do you trust? a proposal for a link-based entity aware metric. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 632–642. http://www.aclweb.org/anthology/P16-1060. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. Conll-2012 shared task: Modeling multilingual unrestricted coreference in ontonotes. In Joint Conference on EMNLP and CoNLL - Shared Task. Association for Computational Linguistics, Jeju Island, Korea, page 1–40. http://www.aclweb.org/anthology/W12-4501. Xu Sun, Takuya Matsuzaki, Daisuke Okanohara, and Jun’ichi Tsujii. 2009. Latent variable perceptron algorithm for structured classification. In Proceedings of the 21st International Jont Conference on Artifical Intelligence. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, IJCAI’09, pages 1236–1242. http://dl.acm.org/citation.cfm?id=1661445.1661643. Olga Uryupina, Alessandro Moschitti, and Massimo Poesio. 2012. Bart goes multilingual: 1027 The unitn/essex submission to the conll2012 shared task. In Joint Conference on EMNLP and CoNLL - Shared Task. Association for Computational Linguistics, Stroudsburg, PA, USA, CoNLL ’12, pages 122–128. http://dl.acm.org/citation.cfm?id=2391181.2391198. Olga Uryupina, Sriparna Saha, Asif Ekbal, and Massimo Poesio. 2011. Multi-metric optimization for coreference: The unitn/iitp/essex submission to the 2011 conll shared task. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task. Association for Computational Linguistics, Stroudsburg, PA, USA, CONLL Shared Task ’11, pages 61–65. http://dl.acm.org/citation.cfm?id=2132936.2132944. Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A modeltheoretic coreference scoring scheme. In Proceedings of the 6th Message Understanding Conference. pages 45–52. Sam Wiseman, Alexander M. Rush, and Stuart M. Shieber. 2016. Learning global features for coreference resolution. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016. pages 994–1004. http://aclweb.org/anthology/N/N16/N16-1114.pdf. Chun-Nam John Yu and Thorsten Joachims. 2009. Learning structural svms with latent variables. In Proceedings of the 26th Annual International Conference on Machine Learning. ACM, New York, NY, USA, ICML ’09, pages 1169–1176. https://doi.org/10.1145/1553374.1553523. Shanheng Zhao and Hwee Tou Ng. 2010. Maximum metric score training for coreference resolution. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010). Coling 2010 Organizing Committee, Beijing, China, pages 1308–1316. http://www.aclweb.org/anthology/C101147. 1028
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1029–1039 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1095 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1029–1039 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1095 Bayesian Modeling of Lexical Resources for Low-Resource Settings Nicholas Andrews and Mark Dredze and Benjamin Van Durme and Jason Eisner Department of Computer Science and Human Language Technology Center of Excellence Johns Hopkins University 3400 N. Charles St., Baltimore, MD 21218 USA {noa,eisner,mdredze,vandurme}@jhu.edu Abstract Lexical resources such as dictionaries and gazetteers are often used as auxiliary data for tasks such as part-of-speech induction and named-entity recognition. However, discriminative training with lexical features requires annotated data to reliably estimate the lexical feature weights and may result in overfitting the lexical features at the expense of features which generalize better. In this paper, we investigate a more robust approach: we stipulate that the lexicon is the result of an assumed generative process. Practically, this means that we may treat the lexical resources as observations under the proposed generative model. The lexical resources provide training data for the generative model without requiring separate data to estimate lexical feature weights. We evaluate the proposed approach in two settings: part-of-speech induction and lowresource named-entity recognition. 1 Introduction Dictionaries and gazetteers are useful in many natural language processing tasks. These lexical resources may be derived from freely available sources (such as Wikidata and Wiktionary) or constructed for a particular domain. Lexical resources are typically used to complement existing annotations for a given task (Ando and Zhang, 2005; Collobert et al., 2011). In this paper, we focus instead on low-resource settings where task annotations are unavailable or scarce. Specifically, we use lexical resources to guide part-of-speech induction (§4) and to bootstrap named-entity recognizers in low-resource languages (§5). Given their success, it is perhaps surprising that incorporating gazetteers or dictionaries into discriminative models (e.g. conditional random fields) may sometimes hurt performance. This phenomena is called weight under-training, in which lexical features—which detect whether a name is listed in the dictionary or gazetteer—are given excessive weight at the expense of other useful features such as spelling features that would generalize to unlisted names (Smith et al., 2005; Sutton et al., 2006; Smith and Osborne, 2006). Furthermore, discriminative training with lexical features requires sufficient annotated training data, which poses challenges for the unsupervised and low-resource settings we consider here. Our observation is that Bayesian modeling provides a principled solution. The lexicon is itself a dataset that was generated by some process. Practically, this means that lexicon entries (words or phrases) may be treated as additional observations. As a result, these entries provide information about how names are spelled. 
The presence of the lexicon therefore now improves training of the spelling features, rather than competing with the spelling features to help explain the labeled corpus. A downside is that generative models are typically less feature-rich than their globally normalized discriminative counterparts (e.g. conditional random fields). In designing our approach—the hierarchical sequence memoizer (HSM)—we aim to be reasonably expressive while retaining practically useful inference algorithms. We propose a Bayesian nonparametric model to serve as a generative distribution responsible for both lexicon and corpus data. The proposed model memoizes previously used lexical entries (words or phrases) but backs off to a character-level distribution when generating novel types (Teh, 2006; Mochihashi et al., 2009). We propose an efficient inference algorithm for the proposed model using particle Gibbs sampling (§3). Our code is available at https://github.com/noa/bayesner. 1029 2 Model Our goal is to fit a model that can automatically annotate text. We observe a supervised or unsupervised training corpus. For each label y in the annotation scheme, we also observe a lexicon of strings of type y. For example, in our tagging task (§4), a dictionary provides us with a list of words for each part-of-speech tag y. (These lists need not be disjoint.) For named-entity recognition (NER, §5), we use a list of words or phrases for each named-entity type y (PER, LOC, ORG, etc.).1 2.1 Modeling the lexicon We may treat the lexicon for type y, of size my, as having been produced by a set of my IID draws from an unknown distribution Py over the words or named entities of type y. It therefore provides some evidence about Py. We will later assume that Py is also used when generating mentions of these words or entities in text. Thanks to this sharing of Py, if x = Washington is listed in the gazetteer of locations (y = LOC), we can draw the same conclusions as if we had seen a LOC-labeled instance of Washington in a supervised corpus. Generalizing this a bit, we may suppose that one observation of string x in the lexicon is equivalent to c labeled tokens of x in a corpus, where the constant c > 0 is known as a pseudocount. In other words, observing a lexicon of my distinct types {x1, . . . , xmy} is equivalent to observing a labeled pseudocorpus of cmy tokens. Notice that given such an observation, the prior probability of any candidate distribution Py is reweighted by the likelihood (cmy)! (c!)my · (Py(x1)Py(x2) · · · Py(xmy))c. Therefore, this choice of Py can have relatively high posterior probability only to the extent that it assigns high probability to all of the lexicon types. 2.2 Discussion We employ the above model because it has reasonable qualitative behavior and because computationally, it allows us to condition on observed lexicons as easily as we condition on observed corpora. However, we caution that as a generative model of the lexicon, it is deficient, in the sense that it 1Dictionaries and knowledge bases provide more information than we use in this paper. For instance, Wikidata also provides a wealth of attributes and other metadata for each entity s. In principle, this additional information could also be helpful in estimating Py(s); we leave this intriguing possibility for future work. allocates probability mass to events that cannot actually correspond to any lexicon. 
After all, drawing cmy IID tokens from Py is highly unlikely to result in exactly c tokens of each of my different types, and yet a run of our system will always assume that precisely this happened to produce each observed lexicon! To avoid the deficiency, one could assume that the lexicon was generated by rejection sampling: that is, the gazetteer author repeatedly drew samples of size cmy from Py until one was obtained that had this property, and then returned the set of distinct types in that sample as the lexicon for y. But this is hardly a realistic description of how gazetteers are actually constructed. Rather, one imagines that the gazetteer author simply harvested a lexicon of frequent types from Py or from a corpus of tokens generated from Py. For example, a much better generative story is that the lexicon was constructed as the first my distinct types to appear ≥c times in an unbounded sequence of IID draws from Py. When c = 1, this is equivalent to modeling the lexicon as my draws without replacement from Py.2 Unfortunately, draws without replacement are no longer IID or exchangeable: order matters. It would therefore become difficult to condition inference and learning on an observed lexicon, because we would need to explicitly sum or sample over the possibilities for the latent sequence of tokens (or stick segments). We therefore adopt the simpler deficient model. A version of our lexicon model (with c = 1) was previously used by Dreyer and Eisner (2011, Appendix C), who observed a list of verb paradigm types rather than word or entity-name types. 2.3 Prior distribution over Py We assume a priori that Py was drawn from a Pitman-Yor process (PYP) (Pitman and Yor, 1997). Both the lexicon and the ordinary corpus are observations that provide information about Py. The PYP is defined by three parameters: a concentration parameter α, a discount parameter d, and a base distribution Hy. In our case, Hy is a distribution over X = Σ∗, the set of possible strings over a finite character alphabet Σ. For example, HLOC is used to choose new place names, so it describes what place names tend to 2If we assume that Py was drawn from a Pitman-Yor process prior (as in §2.3) using the stick-breaking method (Pitman, 1996), it is also equivalent to modeling the lexicon as the set of labels of the first my stick segments (which tend to have high probability). 1030 look like in the language. The draw PLOC ∼ PYP(d, α, HLOC) is an “adapted” version of HLOC. It is PLOC that determines how often each name is mentioned in text (and whether it is mentioned in the lexicon). Some names such as Washington that are merely plausible under HLOC are far more frequent under PLOC, presumably because they were chosen as the names of actual, significant places. These place names were randomly drawn from HLOC as part of the procedure for drawing Py. The expected value of Py is H (i.e., H is the mean of the PYP distribution), but if α and d are small, then a typical draw of Py will be rather different from H, with much of the probability mass falling on a subset of the strings. At training or test time, when deciding whether to label a corpus token of x = Washington as a place or person, we will be interested in the relative values of PLOC(x) and PPER(x). In practice, we do not have to represent the unknown infinite object Py, but can integrate over its possible values. When Py ∼PYP(d, α, Hy), then a sequence of draws X1, X2, . . . ∼Py is distributed according to a Chinese restaurant process, via Py(Xi+1 = x | X1, . . 
. , Xi) (1) = customers(x) −d · tables(x) α + i + α + d · P x′ tables(x′) α + i Hy(x) where customers(x) ≤ i is the number of times that x appeared among X1, . . . , Xi, and tables(x) ≤customers(x) is the number of those times that x was drawn from Hy (where each Py(Xi | · · · ) defined by (1) is interpreted as a mixture distribution that sometimes uses Hy). 2.4 Form of the base distribution Hy By fitting Hy on corpus and lexicon data, we learn what place names or noun strings tend to look like in the language. By simultaneously fitting Py, we learn which ones are commonly mentioned. Recall that under our model, tokens are drawn from Py but the underlying types are drawn from Hy, e.g., Hy is responsible for (at least) the first token of each type. A simple choice for Hy is a Markov process that emits characters in Σ ∪{$}, where $ is a distinguished stop symbol that indicates the end of the string. Thus, the probability of producing $ controls the typical string length under Hy. We use a more sophisticated model of strings—a sequence memoizer (SM), which is a (hierarchical) Bayesian treatment of variable-order Markov modeling (Wood et al., 2009). The SM allows dependence on an unbounded history, and the probability of a given sequence (string) can be found efficiently much as in equation (1). Given a string x = a1 · · · aJ ∈Σ∗, the SM assigns a probability to it via Hy(a1:J) =  J Y j=1 Hy(aj | a1:j−1)  Hy($ | a1:J) =  J Y j=1 Hy,a1:j−1(aj)  Hy,a1:J($) (2) where Hy,u(a) denotes the conditional probability of character a given the left context u ∈Σ∗. Each Hy,u is a distribution over Σ, defined recursively as Hy,ϵ ∼PYP(dϵ, αϵ, U Σ) (3) Hy,u ∼PYP(d|u|, α|u|, Hy,σ(u)) where ϵ is the empty sequence, U Σ is the uniform distribution over Σ ∪{$}, and σ(u) drops the first symbol from u. The discount and concentration parameters (d|u|, α|u|) are associated with the lengths of the contexts |u|, and should generally be larger for longer (more specific) contexts, implying stronger backoff from those contexts.3 Our inference procedure is largely indifferent to the form of Hy, so the SM is not the only option. It would be possible to inject more assumptions into Hy, for instance via structured priors for morphology or a grammar of name structure. Another possibility is to use a parametric model such as a neural language model (e.g., Jozefowicz et al. (2016)), although this would require an inner-loop of gradient optimization. 2.5 Modeling the sequence of tags y We now turn to modeling the corpus. We assume that each sentence is generated via a sequence of latent labels y = y1:T ∈Y∗.4 The observations 3We fix these hyperparameters using the values suggested in (Wood et al., 2009; Gasthaus and Teh, 2010), which we find to be quite robust in practice. One could also resample their values (Blunsom and Cohn, 2010); we experimented with this but did not observe any consistent advantage to doing so in our setting. 4The label sequence is terminated by a distinguished endof-sequence label, again written as $. 1031 x1:T are then generated conditioned on the label sequence via the corresponding Py distribution (defined in §2.3). All observations with the same label y are drawn from the same Py, and thus this subsequence of observations is distributed according to the Chinese restaurant process (1). We model y using another sequence memoizer model. 
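Equation (1) above is the workhorse of the model: the same Chinese-restaurant predictive rule is used for the per-label emission distributions Py and, recursively, for the hierarchical Pitman-Yor processes of the sequence memoizers. A minimal sketch of this rule is given below; it assumes the sampler maintains the usual seating statistics (per-string customer and table counts), and the function and argument names are illustrative rather than taken from the released code.

```python
def pyp_predictive(x, customers, tables, total_customers, d, alpha, base_prob):
    """Predictive probability of drawing string x next, as in equation (1):
    a 'reuse' term discounted by d, plus a 'new table' term that backs off
    to the base distribution H_y.

    customers, tables : dicts mapping a string to its customer / table counts
    total_customers   : i, the number of draws made so far
    base_prob         : callable giving H_y(x)
    """
    total_tables = sum(tables.values())
    reuse = (customers.get(x, 0) - d * tables.get(x, 0)) / (alpha + total_customers)
    new = (alpha + d * total_tables) / (alpha + total_customers) * base_prob(x)
    return reuse + new

# With no previous draws the predictive reduces to the base distribution:
# pyp_predictive("Washington", {}, {}, 0, d=0.5, alpha=1.0,
#                base_prob=lambda s: 1e-6) == 1e-6
```

Applied recursively over progressively shorter contexts, the same rule yields the sequence-memoizer probabilities of equations (2)-(5).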
This is similar to other hierarchical Bayesian models of latent sequences (Goldwater and Griffiths, 2007; Blunsom and Cohn, 2010), but again, it does not limit the Markov order (the number of preceding labels that are conditioned on). Thus, the probability of a sequence of latent types is computed in the same way as the base distribution in §2.4, that is, p(y1:T ) :=  T Y t=1 Gy1:t−1(yt)  Gy1:T ($) (4) where Gv(y) denotes the conditional probability of latent label y ∈Y given the left context v ∈Y∗. Each Gv is a distribution over Y, defined recursively as Gϵ ∼PYP(dϵ, αϵ, UY) (5) Gv ∼PYP(d|v|, α|v|, Gσ(v)) The probability of transitioning to label yt depends on the assignments of all previous labels y1 . . . yt−1. For part-of-speech induction, each label yt is the part-of-speech associated with the corresponding word xt. For named-entity recognition, we say that each word token is labeled with a named entity type (LOC, PER, ...),5 or with itself if it is not a named entity but rather a “context word.” For example, the word token xt = Washington could have been emitted from the label yt = LOC, or from yt = PER, or from yt = Washington itself (in which case p(xt | yt) = 1). This uses a much larger set of labels Y than in the traditional setup where all context words are emitted from the same latent label type O. Of course, most labels are impossible at most positions (e.g., yt cannot be Washington unless xt = Washington). This scheme makes our generative model sensitive to specific contexts (which is accomplished in discriminative NER systems by contextual features). For example, the SM for y can learn that spoke to P E R yesterday is a common 4-gram 5In §3.2, we will generalize this labeling scheme to allow multi-word named entities such as New York. in the label sequence y, and thus we are more likely to label Washington as a person if x = . . . spoke to Washington yesterday . . .. We need one change to make this work, since now Y must include not only the standard NER labels Y′ = {PER, LOC, ORG, GPE} but also words like Washington. Indeed, now Y = Y′ ∪Σ∗. But no uniform distribution exists over the infinite set Σ∗, so how should we replace the base distribution UY over labels in equation (5)? Answer: To draw from the new base distribution, sample y ∼ UY′ ∪{CONTEXT}. If y = CONTEXT, however, then “expand” it by resampling y ∼HCONTEXT. Here HCONTEXT is the base distribution over spellings of context words, and is learned just like the other Hy distributions in §2.4. 3 Inference via particle Markov chain Monte Carlo 3.1 Sequential sampler Taking Y to be a random variable, we are interested in the posterior distribution p(Y = y | x) over label sequences y given the emitted word sequence x. Our model does not admit an efficient dynamic programming algorithm, owing to the dependencies introduced among the Yt when we marginalize over the unknown G and P distributions that govern transitions and emissions, respectively. In contrast to tagging with a hidden Markov model tagging, the distribution of each label Yt depends on all previous labels y1:t−1, for two reasons: x The transition distribution p(Yt = y | y1:t−1) has unbounded dependence because of the PYP prior (4). y The emission distribution p(xt | Yt = y) depends on the emissions observed from any earlier tokens of y, because of the Chinese restaurant process (1). When y is the only complication, block Metropolis-Hastings samplers have proven effective (Johnson et al., 2007). 
However, this approach uses dynamic programming to sample from a proposal distribution efficiently, which x precludes in our case. Instead, we use sequential Monte Carlo (SMC)—sometimes called particle filtering—as a proposal distribution. Particle filtering is typically used in online settings, including word segmentation (Borschinger and Johnson, 2011), to make decisions before all of x has been observed. However, we are interested in the inference (or smoothing) problem that conditions on all of x (Dubbin and Blunsom, 2012; Tripuraneni et al., 2015). SMC employs a proposal distribution q(y | x) 1032 whose definition decomposes as follows: q(y1 | x1) T Y t=2 q(yt | y1:t−1, x1:t) (6) for T = |x|. To sample a sequence of latent labels, first sample an initial label y1 from q1, then proceed incrementally by sampling yt from qt(· | y1:t−1, x1:t) for t = 2, . . . , T. The final sampled sequence y is called a particle, and is given an unnormalized importance weight of ˜w = ˜wT · p($ | y1:T ) where ˜wT was built up via ˜wt := ˜wt−1 · p(y1:t, x1:t) p(y1:t−1, x1:t−1) q(yt | y1:t−1, x1:t) (7) The SMC procedure consists of generating a system of M weighted particles whose unnormalized importance weights ˜w(m) : 1 ≤m ≤M are normalized into w(m) := ˜w(m)/ PM m=1 ˜w(m). As M →∞, SMC provides a consistent estimate of the marginal likelihood p(x) as 1 M PM m=1 ˜w(m), and samples from the weighted particle system are distributed as samples from the desired posterior p(y | x) (Doucet and Johansen, 2009). Particle Gibbs. We employ SMC as a kernel in an MCMC sampler (Andrieu et al., 2010). In particular, we use a block Gibbs sampler in which we iteratively resample the hidden labeling y of a sentence x conditioned on the current labelings for all other sentences in the corpus. In this context, the algorithm is called conditional SMC since one particle is always fixed to the previous sampler state for the sentence being resampled, which ensures that the MCMC procedure is ergodic. At a high level, this procedure is analogous to other Gibbs samplers (e.g. for topic models), except that the conditional SMC (CSMC) kernel uses auxiliary variables (particles) in order to generate the new block variable assignments. The procedure is outlined in Algorithm 1. Given a previous latent state assignment y′ 1:T and observations x1:T , the CSMC kernel produces a new latent state assignment via M auxiliary particles where one particle is fixed to the previous assignment. For ergodicity, M ≥2, where larger values of M may improve mixing rate at the expense of increased computation per step. Proposal distribution. The choice of proposal distribution q is crucial to the performance of SMC methods. In the case of continuous latent variables, it is common to propose yt from the transition probability p(Yt | y1:t−1) because this distribution usually has a simple form that permits efficient sampling. However, it is possible to do better in the case of discrete latent variables. The optimal proposal distribution is the one which minimizes the variance of the importance weights, and is given by q(yt | y1:t−1, x1:t) := p(yt | y1:t−1, x1:t) (8) = p(yt | y1:t−1)p(xt | yt) p(xt | y1:t−1) where p(xt | y1:t−1)= X yt∈Y p(yt | y1:t−1)p(xt | yt) (9) Substituting this expression in equation (7) and simplifying yields the incremental weight update: ˜wt := ˜wt−1 · p(xt | y1:t−1) (10) Resampling. In filtering applications, it is common to use resampling operations to prevent weight degeneracy. 
We do not find resampling necessary here for three reasons. First, note that we resample hidden label sequences that are only as long as the number of words in a given sentence. Second, we use a proposal which minimizes the variance of the weights. Finally, we use SMC as a kernel embedded in an MCMC sampler; asymptotically, this procedure yields samples from the desired posterior regardless of degeneracy (which only affects the mixing rate). Practically speaking, one can diagnose the need for resampling via the effective sample size (ESS) of the particle system: ESS := 1 PM m=1( ˜w(m))2 = (PM m=1 w(m))2 PM m=1(w(m))2 In our experiments, we find that ESS remains high (a significant fraction of M) even for long sentences, suggesting that resampling is not necessary to enable mixing of the the Gibbs sampler. Decoding. In order to obtain a single latent variable assignment for evaluation purposes, we simply take the state of the Markov chain after a fixed number of iterations of particle Gibbs. In principle, one could collect many samples during particle Gibbs and use them to perform minimum Bayes risk decoding under a given loss function. However, this approach is somewhat slower and did not appear to improve performance in preliminary experiments 1033 Algorithm 1 Conditional SMC 1: procedure CSMC(x1:T , y′ 1:T , M) 2: Draw y(m) 1 (eqn. 8) for m ∈[1, M −1] 3: Set y(M) 1 = y′ 1 4: Set ˜w(m) 1 (eqn. 10) for m ∈[1, M] 5: for t = 2 to T do 6: Draw y(m) t (eqn. 8) for m ∈[1, M −1] 7: Set yM t = y′ t 8: Set ˜w(m) t (eqn. 10) for m ∈[1, M] 9: Set ˜w(m) = ˜w(m) T p($|y1:T ) for m ∈[1, M] 10: Draw index k where p(k = m) ∝˜w(m) 11: return y(k) 1:T 3.2 Segmental sampler We now present an sampler for settings such as NER where each latent label emits a segment consisting of 1 or more words. We make use of the same transition distribution p(yt | y1:t−1), which determines the probability of a label in a given context, and an emission distribution p(xt | yt) (namely Pyt); these are assumed to be drawn from hierarchical Pitman-Yor processes described in §2.5 and §2.1, respectively. To allow the xt to be a multi-word string, we simply augment the character set with a distinguished space symbol ∈Σ that separates words within a string. For instance, New York would be generated as the 9-symbol sequence New York$. Although the model emits New York all at once, we still formulate our inference procedure as a particle filter that proposes one tag for each word. Thus, for a given segment label type y, we allow two tag types for its words: • I-y corresponds to a non-final word in a segment of type y (in effect, a word with a following attached). • E-y corresponds to the final word in a segment of type y. For instance, x1:2 = New York would be annotated as a location segment by defining y1:2 = I-LOC E-LOC. This says that y1:2 has jointly emitted x1:2, an event with prior probability PLOC(New York). Each word that is not part of a named entity is considered to be a singleword segment. For example, if the next word were x3 = hosted then it should be tagged with y3 = hosted as in §2.5, in which case x3 was emitted with probability 1. To adapt the sampler described in §3.1 for the segmental case, we need only to define the transition and emission probabilities used in equation (8) and its denominator (9). For the transition probabilities, we want to model the sequence of segment labels. If yt−1 is an I- tag, we take p(yt | y1:t−1) = 1 , since then yt merely continues an existing segment. 
Otherwise yt starts a new segment, and we take p(yt | y1:t−1) = 1 to be defined by the PYP’s probability Gy1:t−1(yt) as usual, but where we interpret the subscript y1:t−1 to refer to the possibly shorter sequence of segment labels implied by those t −1 tags. For the emission probabilities, if yt has the form I-y or E-y, then its associated emission probability no longer has the form p(xt | yt), since the choice of xt also depends on any words emitted earlier in the segment. Let s ≤t be the starting position of the segment that contains t. If yt = E-y, then the emission probability is proportional to Py(xs xs+1 . . . xt). If yt = I-y then the emission probability is proportional to the prefix probability P x Py(x) where x ranges over all strings in Σ∗that have xs xs+1 . . . xt as a proper prefix. Prefix probabilities in Hy are easy to compute because Hy has the form of a language model, and prefix probabilities in Py are therefore also easy to compute (using a prefix tree for efficiency). This concludes the description of the segmental sampler. Note that the particle Gibbs procedure is unchanged. 4 Inducing parts-of-speech with type-level supervision Automatically inducing parts-of-speech from raw text is a challenging problem (Goldwater et al., 2005). Our focus here is on the easier problem of type-supervised part-of-speech induction, in which (partial) dictionaries are used to guide inference (Garrette and Baldridge, 2012; Li et al., 2012). Conditioned on the unlabeled corpus and dictionary, we use the MCMC procedure described in §3.1 to impute the latent parts-of-speech. Since dictionaries are freely available for hundreds of languages,6 we see this as a mild additional requirement in practice over the purely unsupervised setting. In prior work, dictionaries have been used as constraints on possible parts-of-speech: words appearing in the dictionary take one of their known parts6https://www.wiktionary.org/ 1034 of-speech. In our setting, however, the dictionaries are not constraints but evidence. If monthly is listed in (only) the adjective lexicon, this tells us that PADJ sometimes generates monthly and therefore that HADJ may also tend to generate other words that end with -ly. However, for us, PADV(monthly) > 0 as well, allowing us to still correctly treat monthly as a possible adverb if we later encounter it in a training or test corpus. 4.1 Experiments We follow the experimental procedure described in Li et al. (2012), and use their released code and data to compare to their best model: a second-order maximum entropy Markov model parametrized with log-linear features (SHMM-ME). This model uses hand-crafted features designed to distinguish between different parts-of-speech, and it has special handling for rare words. This approach is surprisingly effective and outperforms alternate approaches such as cross-lingual transfer (Das and Petrov, 2011). However, it also has limitations, since words that do not appear in the dictionary will be unconstrained, and spurious or incorrect lexical entries may lead to propagation of errors. The lexicons are taken from the Wiktionary project; their size and coverage are documented by (Li et al., 2012). We evaluate our model on multi-lingual data released as part of the CoNLL 2007 and CoNLL-X shared tasks. In particular, we use the same set of languages as Li et al. (2012).7 For our method, we impute the parts-of-speech by running particle Gibbs for 100 epochs, where one epoch consists of resampling the states for a each sentence in the corpus. 
The final sampler state is then taken as a 1-best tagging of the unlabeled data. Results. The results are reported in Table 1. We find that our hierarchical sequence memoizer (HSM) matches or exceeds the performance of the baseline (SHMM-ME) for nearly all the tested languages, particularly for morphologically rich languages such as German where the spelling distributions Hy may capture regularities. It is interesting to note that our model performs worse relative to the baseline for English; one possible explanation is that the baseline uses hand-engineered features whereas ours does not, and these features may have been tuned using English data for validation. 7With the exception of Dutch. Unlike the other CoNLL languages, Dutch includes phrases, and the procedure by which these were split into tokens was not fully documented. Our generative model is supposed to exploit lexicons well. To see what is lost from using a generative model, we also compared with Li et al. (2012) on standard supervised tagging without any lexicons. Even here our generative model is very competive, losing only on English and Swedish. 5 Boostrapping NER with type-level supervision Name lists and dictionaries are useful for NER particularly when in-domain annotations are scarce. However, with little annotated data, discriminative training may be unable to reliably estimate lexical feature weights and may overfit. In this section, we are interested in evaluating our proposed Bayesian model in the context of low-resource NER. 5.1 Data Most languages do not have corpora annotated for parts-of-speech, named-entities, syntactic parses, or other linguistic annotations. Therefore, rapidly deploying natural language technologies in a new language may be challenging. In the context of facilitating relief responses in emergencies such as natural disasters, the DARPA LORELEI (Low Resource Languages for Emergent Incidents) program has sponsored the development and release of representative “language packs” for Turkish and Uzbek with more languages planned (Strassel and Tracey, 2016). We use the named-entity annotations as part of these language packs which include persons, locations, organizations, and geo-political entities, in order to explore bootstrapping named-entity recognition from small amounts of data. We consider two types of data: x in-context annotations, where sentences are fully annotated for named-entities, and y lexical resources. The LORELEI language packs lack adequate indomain lexical resources for our purposes. Therefore, we simulate in-domain lexical resources by holding out portions of the annotated development data and deriving dictionaries and name lists from them. For each label y ∈ {PER, LOC, ORG, GPE, CONTEXT}, our lexicon for y lists all distinct y-labeled strings that appear in the held-out data. This setup ensures that the labels associated with lexicon entries correspond to the annotation guidelines used in the data we use for evaluation. It avoids possible problems that might arise when leveraging noisy out-of-domain knowledge bases, which we may explore in future. 1035 Model Danish German Greek English Italian Portuguese Spanish Swedish Mean Wiktionary SHMM-ME 83.3 85.8 79.2 87.1 86.5 84.5 86.4 86.1 84.9 HSM 83.7 90.7 81.7 84.0 86.7 85.5 87.6 86.8 85.8 Supervised SHMM-ME 93.9 97.4 95.1 95.8 93.8 95.5 93.8 95.5 95.1 HSM 95.2 97.4 97.4 95.2 94.5 96.0 95.6 92.2 95.3 Table 1: Part-of-speech induction results in multiple languages. 
5.2 Evaluation In this section we report supervised NER experiments on two low-resource languages: Turkish and Uzbek. We vary both the amount of supervision as well as the size of the lexical resources. A challenge when evaluating the performance of a model with small amounts of training data is that there may be high-variance in the results. In order to have more confidence in our results, we perform bootstrap resampling experiments in which the training set, evaluation set, and lexical resources are randomized across several replications of the same experiment (for each of the data conditions). We use 10 replications for each of the data conditions reported in Figures 1–2, and report both the mean performance and 95% confidence intervals. Baseline. We use the Stanford NER system with a standard set of language-independent features (Finkel et al., 2005).8. This model is a conditional random field (CRF) with feature templates which include character n-grams as well as word shape features. Crucially, we also incorporate lexical features. The CRF parameters are regularized using an L1 penalty and optimized via Orthant-wise limited-memory quasi-Newton optimization (Andrew and Gao, 2007). For both our proposed method and the discriminative baseline, we use a fixed set of hyperparameters (i.e. we do not use a separate validation set for tuning each data condition). In order to make a fair comparison to the CRF, we use our sampler for forward inference only, without resampling on the test data. Results. We show learning curves as a function of supervised training corpus size. Figure 1 shows that our generative model strongly beats the baseline in this low-data regime. In particular, when there is little annotated training data, our proposed generative model can compensate by exploiting the lexicon, while the discriminative baseline scores terribly. The performance gap decreases with larger 8We also experimented with neural models, but found that the CRF outperformed them in low-data conditions. supervised corpora, which is consistent with prior results comparing generative and discriminative training (Ng and Jordan, 2002). In Figure 2, we show the effect of the lexicon’s size: as expected, larger lexicons are better. The generative approach significantly outperforms the discriminative baseline at any lexicon size, although its advantage drops for smaller lexicons or larger training corpora. In Figure 1 we found that increasing the pseudocount c consistently decreases performance, so we used c = 1 in our other experiments.9 6 Conclusion This paper has described a generative model for low-resource sequence labeling and segmentation tasks using lexical resources. Experiments in semisupervised and low-resource settings have demonstrated its applicability to part-of-speech induction and low-resource named-entity recognition. There are many potential avenues for future work. Our model may be useful in the context of active learning where efficient re-estimation and performance in low-data conditions are important. It would also be interesting to explore more expressive parameterizations, such recurrent neural networks for Hy. In the space of neural methods, differentiable memory (Santoro et al., 2016) may be more flexible than the PYP prior, while retaining the ability of the model to cache strings observed in the gazetteer. Acknowledgments This work was supported by the JHU Human Language Technology Center of Excellence, DARPA LORELEI, and NSF grant IIS-1423276. 
Thanks to Jay Feldman for early discussions. 9Why? Even a pseudocount of c = 1 is enough to ensure that Py(s) ≫Hy(s), since the prior probability Hy(s) is rather small for most strings in the lexicon. Indeed, perhaps c < 1 would have increased performance, particularly if the lexicon reflects out-of-domain data. This could be arranged, in effect, by using a hierarchical Bayesian model in which the lexicon and corpus emissions are not drawn from the identical distribution Py but only from similar (coupled) distributions. 1036 100 200 300 400 500 # sentence 0 10 20 30 40 50 60 F1 Model baseline c=1 c=10 c=100 Figure 1: Absolute NER performance for Turkish (y-axis) as a function of corpus size (x-axis). The y-axis gives the F1 score on a held-out evaluation set (averaged over 10 bootstrap replicates, with error bars showing 95% confidence intervals). Our generative approach is compared to a baseline discriminative model with lexicon features (lowest curve). 500 held-out sentences were used to create the lexicon for both methods. Note that increasing the pseudocount c for lexicon entries (upper curves) tends to decrease performance for the generative model; we therefore take c = 1 in all other experiments. This graph shows Turkish; the corresponding Uzbek figure is available as supplementary material. 100 200 300 400 500 # sentence 0 10 20 30 40 F1 model - F1 baseline Gazetteer size 1000 100 10 Figure 2: Relative NER performance for Turkish (y-axis) as a function of corpus size (x-axis). In this graph, c = 1 is constant and the curves instead compare different lexicon sizes derived from 10, 100, and 1000 held-out sentences. The y-axis now gives the difference F1model −F1baseline, so positive values indicate improvement over the baseline due to the proposed model. Gains are highest for large lexicons and for small corpora. Again, the corresponding Uzbek figure is available as supplementary material. 1037 References Rie Kubota Ando and Tong Zhang. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research 6:1817–1853. Galen Andrew and Jianfeng Gao. 2007. Scalable training of L1-regularized log-linear models. In Proceedings of the 24th International Conference on Machine Learning. pages 33–40. Christophe Andrieu, Arnaud Doucet, and Roman Holenstein. 2010. Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 72(3):269–342. Phil Blunsom and Trevor Cohn. 2010. A hierarchical Pitman-Yor process HMM for unsupervised partof-speech induction. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics. Benjamin Borschinger and Mark Johnson. 2011. A particle filter algorithm for Bayesian wordsegmentation. In Proceedings of the Australasian Language Technology Association Workshop 2011. Canberra, Australia, pages 10–18. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12:2493–2537. Dipanjan Das and Slav Petrov. 2011. Unsupervised part-of-speech tagging with bilingual graph-based projections. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. pages 600–609. Arnaud Doucet and Adam M. Johansen. 2009. A tutorial on particle filtering and smoothing: Fifteen years later. Handbook of Nonlinear Filtering 12:656–704. 
Markus Dreyer and Jason Eisner. 2011. Discovering morphological paradigms from plain text using a Dirichlet process mixture model. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Edinburgh, pages 616–627. Gregory Dubbin and Phil Blunsom. 2012. Unsupervised Bayesian part of speech inference with particle Gibbs. In Proceedings of the 2012 European Conference on Machine Learning and Knowledge Discovery in Databases - Volume Part I. SpringerVerlag, Berlin, Heidelberg, ECML PKDD’12, pages 760–773. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics. Stroudsburg, PA, USA, ACL ’05, pages 363–370. Dan Garrette and Jason Baldridge. 2012. Typesupervised hidden Markov models for part-ofspeech tagging with incomplete tag dictionaries. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. pages 821–831. Jan Gasthaus and Yee Whye Teh. 2010. Improvements to the sequence memoizer. In NIPS. pages 685–693. Sharon Goldwater and Thomas L. Griffiths. 2007. A fully Bayesian approach to unsupervised part-ofspeech tagging. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. Prague, Czech Republic, pages 744–751. Sharon Goldwater, Mark Johnson, and Thomas L. Griffiths. 2005. Interpolating between types and tokens by estimating power-law generators. In Advances in Neural Information Processing Systems. pages 459–466. Mark Johnson, Thomas L. Griffiths, and Sharon Goldwater. 2007. Bayesian inference for PCFGs via Markov chain Monte Carlo. In HLT-NAACL. pages 139–146. Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. Computing Research Repository arXiv:1602.02410. Shen Li, Joao V Grac¸a, and Ben Taskar. 2012. Wiki-ly supervised part-of-speech tagging. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. pages 1389– 1398. Daichi Mochihashi, Takeshi Yamada, and Naonori Ueda. 2009. Bayesian unsupervised word segmentation with nested Pitman-Yor language modeling. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1. pages 100–108. Andrew Y. Ng and Michael I. Jordan. 2002. On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. Advances in Neural Information Processing Systems 2:841–848. Jim Pitman. 1996. Some developments of the Blackwell-MacQueen urn scheme. In T. S. Ferguson, L. S. Shapley, and J. B. MacQueen, editors, Statistics, Probability and Game Theory: Papers in Honor of David Blackwell, Institute of Mathematical Statistics, volume 30 of IMS Lecture NotesMonograph series, pages 245–267. 1038 Jim Pitman and Marc Yor. 1997. The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. The Annals of Probability pages 855– 900. Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy P. Lillicrap. 2016. One-shot learning with memory-augmented neural networks. Computing Research Repository arXiv:1605.06065. Andrew Smith, Trevor Cohn, and Miles Osborne. 2005. 
Logarithmic opinion pools for conditional random fields. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics. pages 18–25. Andrew Smith and Miles Osborne. 2006. Using gazetteers in discriminative information extraction. In Proceedings of the Tenth Conference on Computational Natural Language Learning. pages 133–140. Stephanie Strassel and Jennifer Tracey. 2016. Lorelei language packs: Data, tools, and resources for technology development in low resource languages. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016). European Language Resources Association (ELRA), Paris, France. Charles Sutton, Michael Sindelar, and Andrew McCallum. 2006. Reducing weight undertraining in structured discriminative learning. In Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics. Stroudsburg, PA, USA, HLT-NAACL ’06, pages 89–95. Yee Whye Teh. 2006. A hierarchical Bayesian language model based on Pitman-Yor processes. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics. pages 985–992. Nilesh Tripuraneni, Shixiang Gu, Hong Ge, and Zoubin Ghahramani. 2015. Particle Gibbs for infinite hidden Markov models. In Proceedings of the 28th International Conference on Neural Information Processing Systems. MIT Press, Cambridge, MA, USA, NIPS’15, pages 2395–2403. Frank Wood, C´edric Archambeau, Jan Gasthaus, Lancelot James, and Yee Whye Teh. 2009. A stochastic memoizer for sequence data. In Proceedings of the 26th Annual International Conference on Machine Learning. pages 1129–1136. 1039
2017
95
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1040–1050 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1096 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1040–1050 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1096 Semi-Supervised QA with Generative Domain-Adaptive Nets Zhilin Yang Junjie Hu Ruslan Salakhutdinov William W. Cohen School of Computer Science Carnegie Mellon University {zhiliny,junjieh,rsalakhu,wcohen}@cs.cmu.edu Abstract We study the problem of semi-supervised question answering—-utilizing unlabeled text to boost the performance of question answering models. We propose a novel training framework, the Generative Domain-Adaptive Nets. In this framework, we train a generative model to generate questions based on the unlabeled text, and combine model-generated questions with human-generated questions for training question answering models. We develop novel domain adaptation algorithms, based on reinforcement learning, to alleviate the discrepancy between the modelgenerated data distribution and the humangenerated data distribution. Experiments show that our proposed framework obtains substantial improvement from unlabeled text. 1 Introduction Recently, various neural network models were proposed and successfully applied to the tasks of questions answering (QA) and/or reading comprehension (Xiong et al., 2016; Dhingra et al., 2016; Yang et al., 2017). While achieving stateof-the-art performance, these models rely on a large amount of labeled data. However, it is extremely difficult to collect large-scale question answering datasets. Historically, many of the question answering datasets have only thousands of question answering pairs, such as WebQuestions (Berant et al., 2013), MCTest (Richardson et al., 2013), WikiQA (Yang et al., 2015), and TREC-QA (Voorhees and Tice, 2000). Although larger question answering datasets with hundreds of thousands of question-answer pairs have been collected, including SQuAD (Rajpurkar et al., 2016), MSMARCO (Nguyen et al., 2016), and NewsQA (Trischler et al., 2016a), the data collection process is expensive and time-consuming in practice. This hinders real-world applications for domain-specific question answering. Compared to obtaining labeled question answer pairs, it is trivial to obtain unlabeled text data. In this work, we study the following problem of semi-supervised question answering: is it possible to leverage unlabeled text to boost the performance of question answering models, especially when only a small amount of labeled data is available? The problem is challenging because conventional manifold-based semi-supervised learning algorithms (Zhu and Ghahramani, 2002; Yang et al., 2016a) cannot be straightforwardly applied. Moreover, since the main foci of most question answering tasks are extraction rather than generation, it is also not sensible to use unlabeled text to improve language modeling as in machine translation (Gulcehre et al., 2015). To better leverage the unlabeled text, we propose a novel neural framework called Generative Domain-Adaptive Nets (GDANs). The starting point of our framework is to use linguistic tags to extract possible answer chunks in the unlabeled text, and then train a generative model to generate questions given the answer chunks and their contexts. 
The model-generated questionanswer pairs and the human-generated questionanswer pairs can then be combined to train a question answering model, referred to as a discriminative model in the following text. However, there is discrepancy between the model-generated data distribution and the human-generated data distribution, which leads to suboptimal discriminative models. To address this issue, we further propose two domain adaptation techniques that treat the model-generated data distribution as a different domain. First, we use an additional domain tag to 1040 indicate whether a question-answer pair is modelgenerated or human-generated. We condition the discriminative model on the domain tags so that the discriminative model can learn to factor out domain-specific and domain-invariant representations. Second, we employ a reinforcement learning algorithm to fine-tune the generative model to minimize the loss of the discriminative model in an adversarial way. In addition, we present a simple and effective baseline method for semi-supervised question answering. Although the baseline method performs worse than our GDAN approach, it is extremely easy to implement and can still lead to substantial improvement when only limited labeled data is available. We experiment on the SQuAD dataset (Rajpurkar et al., 2016) with various labeling rates and various amounts of unlabeled data. Experimental results show that our GDAN framework consistently improves over both the supervised learning setting and the baseline methods, including adversarial domain adaptation (Ganin and Lempitsky, 2014) and dual learning (Xia et al., 2016). More specifically, the GDAN model improves the F1 score by 9.87 points in F1 over the supervised learning setting when 8K labeled question-answer pairs are used. Our contribution is four-fold. First, different from most of the previous neural network studies on question answering, we study a critical but challenging problem, semi-supervised question answering. Second, we propose the Generative Domain-Adaptive Nets that employ domain adaptation techniques on generative models with reinforcement learning algorithms. Third, we introduce a simple and effective baseline method. Fourth, we empirically show that our framework leads to substantial improvements. 2 Semi-Supervised Question Answering Let us first introduce the problem of semisupervised question answering. Let L = {q(i), a(i), p(i)}N i=1 denote a question answering dataset of N instances, where q(i), a(i), and p(i) are the question, answer, and paragraph of the i-th instance respectively. The goal of question answering is to produce the answer a(i) given the question q(i) along with the paragraph p(i). We will drop the superscript ·(i) when the context is unambiguous. In our formulation, following the setting in SQuAD (Rajpurkar et al., 2016), we specifically focus on extractive question answering, where a is always a consecutive chunk of text in p. More formally, let p = (p1, p2, · · · , pT ) be a sequence of word tokens with T being the length, then a can always be represented as a = (pj, pj+1, · · · , pk−1, pk), where j and k are the start and end token indices respectively. The questions can also be represented as a sequence of word tokens q = (q1, q2, · · · , qT ′) with length T ′. In addition to the labeled dataset L, in the semisupervised setting, we are also given a set of unlabeled data, denoted as U = {a(i), p(i)}M i=1, where M is the number of unlabeled instances. 
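As a concrete illustration of this setup, the following minimal sketch (not taken from the paper's code; all class and field names are hypothetical) represents a labeled instance from L and an unlabeled instance from U, with the answer stored as a contiguous token span (j, k):

```python
# Minimal sketch of the extractive QA data representation described above:
# an answer is always a contiguous span of the paragraph, identified by
# start/end token indices (j, k). All names are illustrative.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Instance:
    paragraph: List[str]                   # p = (p_1, ..., p_T)
    answer_start: int                      # j (0-based here)
    answer_end: int                        # k, inclusive
    question: Optional[List[str]] = None   # q; None for unlabeled data U

    def answer_tokens(self) -> List[str]:
        # a = (p_j, ..., p_k): recover the answer text from the span indices.
        return self.paragraph[self.answer_start:self.answer_end + 1]

# A labeled instance from L and an unlabeled instance from U share the same
# (paragraph, answer-span) structure; only the question is missing in U.
labeled = Instance(
    paragraph="the treaty was signed in 1648 in westphalia".split(),
    answer_start=5, answer_end=5,
    question="when was the treaty signed ?".split(),
)
unlabeled = Instance(paragraph=labeled.paragraph, answer_start=7, answer_end=7)
assert labeled.answer_tokens() == ["1648"]
```

Because the span indices fully determine the answer, the unlabeled set U only needs paragraphs and candidate spans; the missing questions are what the generative model must supply.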
Note that it is usually trivial to have access to an almost infinite number of paragraphs p from sources such as Wikipedia articles and other web pages. And since the answer a is always a consecutive chunk in p, we argue that it is also sensible to extract possible answer chunks from the unlabeled text using linguistic tags. We will discuss the technical details of answer chunk extraction in Section 4.1, and in the formulation of our framework, we assume that the answer chunks a are available. Given both the labeled data L and the unlabeled data U, the goal of semi-supervised question answering is to learn a question answering model D that captures the probability distribution P(a|p, q). We refer to this question answering model D as the discriminative model, in contrast to the generative model that we will present in Section 3.2. 2.1 A Simple Baseline We now present a simple baseline for semisupervised question answering. Given a paragraph p = (p1, p2, · · · , pT ) and the answer a = (pj, pj+1, · · · , pk−1, pk), we extract (pj−W , pj−W+1, · · · , pj−1, pk+1, pk+2, pk+W ) from the paragraph and treat it as the question. Here W is the window size and is set at 5 in our experiments so that the lengths of the questions are similar to human-generated questions. The context-based question-answer pairs on U are combined with human-generated pairs on L for training the discriminative model. Intuitively, this method extracts the contexts around the answer chunks to serve as hints for the question answering model. Surprisingly, this simple baseline method leads to substantial improvements when labeled data is limited. 1041 3 Generative Domain-Adaptive Nets Though the simple method described in Section 2.1 can lead to substantial improvement, we aim to design a learning-based model to move even further. In this section, we will describe the model architecture and the training algorithms for the GDANs. We will use a notation in the context of question answering following Section 2, but one should be able to extend the notion of GDANs to other applications as well. The GDAN framework consists of two models, a discriminative model and a generative model. We will first discuss the two models in detail in the context of question answering, and then present an algorithm based on reinforcement learning to combine the two models. 3.1 Discriminative Model The discriminative model learns the conditional probability of an answer chunk given the paragraph and the question, i.e., P(a|p, q). We employ a gated-attention (GA) reader (Dhingra et al., 2016) as our base model in this work, but our framework does not make any assumptions about the base models being used. The discriminative model is referred to as D. The GA model consists of K layers with K being a hyper-parameter. Let Hk p be the intermediate paragraph representation at layer k, and Hq be the question representation. The paragraph representation Hk p is a T × d matrix, and the question representation Hq is a T ′ × d matrix, where d is the dimensionality of the representations. Given the paragraph p, we apply a bidirectional Gated Recurrent Unit (GRU) network (Chung et al., 2014) on top of the embeddings of the sequence (p1, p2, · · · , pT ), and obtain the initial paragraph representation H0 p. Given the question q, we also apply another bidirectional GRU to obtain the question representation Hq. The question and paragraph representations are combined with the gated-attention (GA) mechanism (Dhingra et al., 2016). 
More specifically, for each paragraph token pi, we compute αj = exp hT q,jhk−1 p,i PT ′ j′=1 exp hT q,j′hk−1 p,i hk p,i = T ′ X j=1 αjhq,j ⊙hk−1 p,i where hk p,i is the i-th row of Hk p and hq,j is the j-th row of Hq. Since the answer a is a sequence of consecutive word tokens in the paragraph p, we apply two softmax layers on top of HK p to predict the start and end indices of a, following Yang et al. (2017). 3.1.1 Domain Adaptation with Tags We will train our discriminative model on both model-generated question-answer pairs and human-generated pairs. However, even a welltrained generative model will produce questions somewhat different from human-generated ones. Learning from both human-generated data and model-generated data can thus lead to a biased model. To alleviate this issue, we propose to view the model-generated data distribution and the human-generated data distribution as two different data domains and explicitly incorporate domain adaptation into the discriminative model. More specifically, we use a domain tag as an additional input to the discriminative model. We use the tag “d true” to represent the domain of human-generated data (i.e., the true data), and “d gen” for the domain of model-generated data. Following a practice in domain adaptation (Johnson et al., 2016; Chu et al., 2017), we append the domain tag to the end of both the questions and the paragraphs. By introducing the domain tags, we expect the discriminative model to factor out domain-specific and domain-invariant representations. At test time, the tag “d true” is appended. 3.2 Generative Model The generative model learns the conditional probability of generating a question given the paragraph and the answer, i.e., P(q|p, a). We implement the generative model as a sequence-tosequence model (Sutskever et al., 2014) with a copy mechanism (Gu et al., 2016; Gulcehre et al., 2016). The generative model consists of an encoder and a decoder. An encoder is a GRU that encodes the input paragraph into a sequence of hidden states H. We inject the answer information by appending an additional zero/one feature to the word embeddings of the paragraph tokens; i.e., if a word token appears in the answer, the feature is set at one, otherwise zero. The decoder is another GRU with an attention mechanism over the encoder hidden states H. At each time step, the generation probabilities over all 1042 Algorithm 1 Training Generative DomainAdaptive Nets Input: labeled data L, unlabeled data U, #iterations TG and TD Initialize G by MLE training on L Randomly initialize D while not stopping do for t ←1 to TD do Update D to maximize J(L, d true, D) + J(UG, d gen, D) with SGD end for for t ←1 to TG do Update G to maximize J(UG, d true, D) with Reinforce and SGD end for end while return model D word types are defined with a copy mechanism: poverall = gtpvocab + (1 −gt)pcopy (1) where gt is the probability of generating the token from the vocabulary, while (1 −gt) is the probability of copying a token from the paragraph. The probability gt is computed based on the current hidden state ht: gt = σ(wT g ht) where σ denotes the logistic function and wg is a vector of model parameters. The generation probabilities pvocab are defined as a softmax function over the word types in the vocabulary, and the copying probabilities pcopy are defined as a softmax function over the word types in the paragraph. Both pvocab and pcopy are defined as a function of the current hidden state ht and the attention results (Gu et al., 2016). 
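The copy mechanism of Eq. (1) can be made concrete with a small numerical sketch (toy dimensions and random weights, not the paper's parameters); here `copy_scores` stands in for the attention-derived scores over paragraph positions, and copy mass that lands on repeated word ids is accumulated:

```python
# Illustrative sketch of the copy mechanism in Eq. (1): the output distribution
# mixes a vocabulary softmax and a copy softmax over paragraph tokens, gated by
# g_t = sigmoid(w_g^T h_t). Shapes and weights here are toy values.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def copy_mechanism_step(h_t, w_g, W_vocab, copy_scores, paragraph_ids, vocab_size):
    g_t = 1.0 / (1.0 + np.exp(-(w_g @ h_t)))     # generate-vs-copy gate
    p_vocab = softmax(W_vocab @ h_t)             # distribution over the vocabulary
    p_copy_pos = softmax(copy_scores)            # distribution over paragraph positions
    # Scatter copy probabilities onto the word ids of the paragraph tokens.
    p_copy = np.zeros(vocab_size)
    np.add.at(p_copy, paragraph_ids, p_copy_pos)
    return g_t * p_vocab + (1.0 - g_t) * p_copy  # p_overall, Eq. (1)

rng = np.random.default_rng(0)
d, V = 8, 20
p_overall = copy_mechanism_step(
    h_t=rng.normal(size=d), w_g=rng.normal(size=d), W_vocab=rng.normal(size=(V, d)),
    copy_scores=rng.normal(size=5), paragraph_ids=np.array([3, 7, 7, 11, 2]), vocab_size=V)
assert abs(p_overall.sum() - 1.0) < 1e-6
```

This design lets the decoder emit rare or out-of-vocabulary paragraph tokens (e.g., entity names) without enlarging the output vocabulary.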
3.3 Training Algorithm We first define the objective function of the GDANs, and then present an algorithm to optimize the given objective function. Similar to the Generative Adversarial Nets (GANs) (Goodfellow et al., 2014) and adversarial domain adaptation (Ganin and Lempitsky, 2014), the discriminative model and the generative model have different objectives in our framework. However, rather than formulating the objective as an adversarial game between the two models (Goodfellow et al., 2014; Ganin and Lempitsky, 2014), in our framework, the discriminative model relies on the data generated by the generative model, while the generative model aims to match the model-generated data distribution with the human-generated data distribution using the signals from the discriminative model. Given a labeled dataset L = {p(i), q(i), a(i)}N i=1, the objective function of a discriminative model D for a supervised learning setting can be written as P p(i),q(i),a(i)∈L log PD(a(i)|p(i), q(i)), where PD is a probability distribution defined by the model D. Since we also incorporate domain tags into the model D, we denote the objective function as J(L, tag, D) = 1 |L| X p(i),q(i),a(i)∈L log PD,tag(a(i)|p(i), q(i)) meaning that the domain tag, “tag”, is appended to the dataset L. We use |L| = N to denote the number of the instances in the dataset L. The objective function is averaged over all instances such that we can balance labeled and unlabeled data. Let UG denote the dataset obtained by generating questions on the unlabeled dataset U with the generative model G. The objective of the discriminative model is then to maximize J for both labeled and unlabeled data under the domain adaptation notions, i.e., J(L, d true, D) + J(UG, d gen, D). Now we discuss the objective of the generative model. Similar to the dual learning (Xia et al., 2016) framework, one can define an autoencoder objective. In this case, the generative model aims to generate questions that can be reconstructed by the discriminative model, i.e., maximizing J(UG, d gen, D). However, this objective function can lead to degenerate solutions because the questions can be thought of as an overcomplete representation of the answers (Vincent et al., 2010). For example, given p and a, the generative model might learn to generate trivial questions such as copying the answers, which does not contributed to learning a better D. Instead, we leverage the discriminative model to better match the model-generated data distribution with the human-generated data distribution. We propose to define an adversarial training objective J(UG, d true, D). We append the tag “d true” instead of “d gen” for the model-generated data to “fool” the discriminative model. Intuitively, the goal of G is to generate ”useful” questions where the usefulness is measured by the probability that the generated questions can be answered correctly by D. 1043 (a) Training the discriminative model on labeled data. (b) Training the discriminative model on unlabeled data. (c) Training the generative model on unlabeled data. Figure 1: Model architecture and training. Red boxes denote the modules being updated. “d true” and “d gen” are two domain tags. D is the discriminative model and G is the generative model. The objectives for the three cases are all to minimize the cross entropy loss of the answer chunks. 
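Concretely, the domain-tag device of Section 3.1.1, which underlies the three training cases in Figure 1, amounts to appending a reserved token to both the question and the paragraph; the sketch below (illustrative token strings, not the paper's code) shows the intended behavior, including the use of "d true" at test time:

```python
# Minimal illustration of the domain-tag trick from Section 3.1.1: a tag
# ("d_true" for human-generated data, "d_gen" for model-generated data) is
# appended to both the question and the paragraph before they are fed to the
# discriminator D. Token strings here are illustrative.
def append_domain_tag(question_tokens, paragraph_tokens, model_generated):
    tag = "d_gen" if model_generated else "d_true"
    return question_tokens + [tag], paragraph_tokens + [tag]

q, p = append_domain_tag(["when", "was", "it", "signed", "?"],
                         ["signed", "in", "1648"], model_generated=True)
assert q[-1] == p[-1] == "d_gen"
# At test time the "d_true" tag is always appended:
q_test, p_test = append_domain_tag(["when", "?"], ["in", "1648"], model_generated=False)
assert q_test[-1] == "d_true"
```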
The overall objective function now can be written as maxD J(L, d true, D) + J(UG, d gen, D) maxG J(UG, d true, D) With the above objective function in mind, we present a training algorithm in Algorithm 1 to train a GDAN. We first pretrain the generative model on the labeled data L with maximum likelihood estimation (MLE): max G N X i=1 T ′ X t=1 log PG(q(i) t |q(i) <t, p(i), a(i)) where PG is the probability defined by Eq. 1. We then alternatively update D and G based on their objectives. To update D, we sample one batch from the labeled data L and one batch from the unlabeled data UG, and combine the two batches to perform a gradient update step. Since the output of G is discrete and non-differentiable, we use the Reinforce algorithm (Williams, 1992) to update G. The action space is all possible questions with length T ′ (possibly with padding) and the reward is the objective function J(UG, d true, D). Let θG be the parameters of G. The gradient can be written as ∂J(UG, d true, D) ∂θG = EPG(q|p,a)(log PD,d true(a|p, q) −b)∂log PG(q|p, a) ∂θG where we use an average reward from samples as the baseline b. We approximate the expectation EPG(q|p,a) by sampling one instance at a time from PG(q|p, a) and then do an update step. This training algorithm is referred to as reinforcement learning (RL) training in the following sections. The overall architecture and training algorithm are illustrated in Figure 1. MLE vs RL. The generator G has two training phases–MLE training and RL training, which are different in that: 1) RL training does not require labels, so G can explore a broader data domain of p using unlabeled data, while MLE training requires labels; 2) MLE maximizes log P(q|p, a), while RL maximizes log PD(a|q, p). Since log P(q|a, p) is the sum of log P(q|p) and log P(a|q, p) (plus a constant), maximizing log P(a|q, p) does not require modeling log P(q|p) that is irrelevant to QA, which makes optimization easier. Moreover, maximizing log P(a|q, p) is consistent with the goal of QA. 4 Experiments 4.1 Answer Extraction As discussed in Section 2, our model assumes that answers are available for unlabeled data. In this section, we introduce how we use linguistic tags and rules to extract answer chunks from unlabeled text. To extract answers from massive unlabelled Wikipedia articles, we first sample 205,511 Wikipedia articles that are not used in the training, development and test sets in the SQuAD dataset. We extract the paragraphs from each article, and limit the length of each paragraph at the word level to be less than 850. In total, we obtain 950,612 1044 Table 1: Sampled generated questions given the paragraphs and the answers. P means paragraphs, A means answers, GQ means groundtruth questions, and Q means questions generated by our models. MLE refers to maximum likelihood training, and RL refers to reinforcement learning so as to maximize J(UG, d true, D). We truncate the paragraphs to only show tokens around the answer spans with a window size of 20. P1: is mediated by ige , which triggers degranulation of mast cells and basophils when cross - linked by antigen . type ii hypersensitivity occurs when antibodies bind to antigens on the patient ’ s own cells , marking them for destruction . this A: type ii hypersensitivity GQ: antibody - dependent hypersensitivity belongs to what class of hypersensitivity ? Q (MLE): what was the UNK of the patient ’ s own cells ? 
Q (RL): what occurs when antibodies bind to antigens on the patient ’ s own cells by antigen when cross P2: an additional warming of the earth ’ s surface . they calculate with confidence that co0 has been responsible for over half the enhanced greenhouse effect . they predict that under a “ business as usual ” ( bau ) scenario , A: over half GQ: how much of the greenhouse effect is due to carbon dioxide ? Q (MLE): what is the enhanced greenhouse effect ? Q (RL): what the enhanced greenhouse effect that co0 been responsible for P3: ) narrow gauge lines , which are the remnants of five formerly government - owned lines which were built in mountainous areas . A: mountainous areas GQ: where were the narrow gauge rail lines built in victoria ? Q (MLE): what is the government government government - owned lines built ? Q (RL): what were the remnants of government - owned lines built in P4: but not both ). in 0000 , bankamericard was renamed and spun off into a separate company known today as visa inc . A: visa inc . GQ: what present - day company did bankamericard turn into ? Q (MLE): what was the separate company bankamericard ? Q (RL): what today as bankamericard off into a separate company known today as spun off into a separate company known today P5: legrande writes that ” the formulation of a single all - encompassing definition of the term is extremely difficult , if A: legrande GQ: who wrote that it is difficult to produce an all inclusive definition of civil disobedience ? Q (MLE): what is the term of a single all all all all encompassing definition of a single all Q (RL): what writes ” the formulation of a single all - encompassing definition of the term all encompassing encompassing encompassing encompassing paragraphs from unlabelled articles. Answers in the SQuAD dataset can be categorized into ten types, i.e., “Date”, “Other Numeric”, “Person”, “Location”, “Other Entity”, “Common Noun Phrase”, “Adjective Phrase”, “Verb Phrase”, “Clause” and “Other” (Rajpurkar et al., 2016). For each paragraph from the unlabeled articles, we utilize Stanford Part-Of-Speech (POS) tagger (Toutanova et al., 2003) to label each word with the corresponding POS tag, and implement a simple constituency parser to extract the noun phrase, verb phrase, adjective and clause based on a small set of constituency grammars. Next, we use Stanford Named Entity Recognizer (NER) (Finkel et al., 2005) to assign each word with one of the seven labels, i.e., “Date”, “Money”, “Percent”, “location”, “Organization” and “Time”. We then categorize a span of consecutive words with the same NER tags of either “Money” or “Percent” as the answer of the type “Other Numeric”. Similarly, we categorize a span of consecutive words with the same NER tags of “Organization” as the answer of the type “Other Entity”. Finally, we subsample five answers from all the extracted answers for each paragraph according to the percentage of answer types in the SQuAD dataset. We obtain 4,753,060 answers in total, which is about 50 times larger than the number of answers in the SQuAD dataset. 4.2 Settings and Comparison Methods The original SQuAD dataset consists of 87,636 training instances and 10,600 development instances. Since the test set is not published, we split 10% of the training set as the test set, and the remaining 90% serves as the actual training set. Instances are split based on articles; i.e., paragraphs in one article always appear in only one set. 
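Returning briefly to the answer extraction of Section 4.1, the heuristic can be sketched as follows. To avoid depending on a particular tagger, the sketch assumes tokens are already paired with NER labels (the paper obtains these from the Stanford POS and NER tools) and omits the step that matches the SQuAD answer-type distribution; the type map and all names are illustrative.

```python
# Sketch of the answer-chunk extraction heuristic from Section 4.1: group maximal
# runs of tokens sharing an NER label, map the label to an answer type, and keep
# a few candidates per paragraph. Assumes tokens are pre-tagged; the paper
# additionally subsamples to match the SQuAD answer-type distribution.
import random
from itertools import groupby

NER_TO_ANSWER_TYPE = {
    "DATE": "Date", "MONEY": "Other Numeric", "PERCENT": "Other Numeric",
    "PERSON": "Person", "LOCATION": "Location", "ORGANIZATION": "Other Entity",
}

def extract_answer_chunks(tagged_tokens, max_per_paragraph=5, seed=0):
    """tagged_tokens: list of (token, ner_label) pairs; 'O' marks non-entities."""
    chunks, i = [], 0
    for label, group in groupby(tagged_tokens, key=lambda t: t[1]):
        tokens = [tok for tok, _ in group]
        if label in NER_TO_ANSWER_TYPE:
            chunks.append((" ".join(tokens), NER_TO_ANSWER_TYPE[label],
                           (i, i + len(tokens) - 1)))   # answer span indices
        i += len(tokens)
    random.Random(seed).shuffle(chunks)
    return chunks[:max_per_paragraph]

tagged = [("Visa", "ORGANIZATION"), ("Inc", "ORGANIZATION"), ("was", "O"),
          ("renamed", "O"), ("in", "O"), ("1976", "DATE"), (".", "O")]
print(extract_answer_chunks(tagged))
# e.g. [('1976', 'Date', (5, 5)), ('Visa Inc', 'Other Entity', (0, 1))]
```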
We tune the hyper-parameters and perform early stopping on the development set using the F1 scores, and the performance is evaluated on the test set using both F1 scores and exact matching (EM) scores (Rajpurkar et al., 2016). We compare the following methods. SL is 1045 the supervised learning setting where we train the model D solely on the labeled data L. Context is the simple context-based method described in Section 2.1. Context + domain is the “Context” method with domain tags as described in Section 3.1.1. Gen is to train a generative model and use the generated questions as additional training data. Gen + GAN refers to the domain adaptation method using GANs (Ganin and Lempitsky, 2014); in contrast to the original work, the generative model is updated using Reinforce. Gen + dual refers to the dual learning method (Xia et al., 2016). Gen + domain is “Gen” with domain tags, while the generative model is trained with MLE and fixed. Gen + domain + adv is the approach we propose (Cf. Figure 1 and Algorithm 1), with “adv” meaning adversarial training based on Reinforce. We use our own implementation of “Gen + GAN” and “Gen + dual”, since the GAN model (Ganin and Lempitsky, 2014) does not handle discrete features and the dual learning model (Xia et al., 2016) cannot be directly applied to question answering. When implementing these two baselines, we adopt the learning schedule introduced by Ganin and Lempitsky (2014), i.e., gradually increasing the weights of the gradients for the generative model G. 4.3 Results and Analysis We study the performance of different models with varying labeling rates and unlabeled dataset sizes. Labeling rates are the percentage of training instances that are used to train D. The results are reported in Table 2. Though the unlabeled dataset we collect consists of around 5 million instances, we also sample a subset of around 50,000 instances to evaluate the effects of the size of unlabeled data. The highest labeling rate in Table 2 is 0.9 because 10% of the training instances are used for testing. Since we do early stopping on the development set using the F1 scores, we also report the development F1. We report two metrics, the F1 scores and the exact matching (EM) scores (Rajpurkar et al., 2016), on the test set. All metrics are computed using the official evaluation scripts. SL v.s. SSL. We observe that semi-supervised learning leads to consistent improvements over supervised learning in all cases. Such improvements are substantial when labeled data is limited. For example, the GDANs improve over supervised learning by 9.87 points in F1 and 7.26 points in EM when the labeling rate is 0.1. With our semisupervised learning approach, we can use only 0.1 training instances to obtain even better performance than a supervised learning approach with 0.2 training instances, saving more than half of the labeling costs. Comparison with Baselines. By comparing “Gen + domain + adv” with “Gen + GAN” and “Gen + Dual”, it is clear that the GDANs perform substantially better than GANs and dual learning. With labeling rate 0.1, GDANs outperform dual learning and GANs by 2.47 and 4.29 points respectively in terms of F1. Ablation Study. We also perform an ablation study by examining the effects of “domain” and “adv” when added to “gen”. It can be seen that both the domain tags and the adversarial training contribute to the performance of the GDANs when the labeling rate is equal to or less than 0.5. 
With labeling rate 0.9, adding domain tags still leads to better performance but adversarial training does not seem to improve the performance by much. Unlabeled Data Size. Moreover, we observe that the performance can be further improved when a larger unlabeled dataset is used, though the gain is relatively less significant compared to changing the model architectures. For example, increasing the unlabeled dataset size from 50K to 5M, the performance of GDANs increases by 0.38 points in F1 and 0.52 points in EM. Context-Based Method. Surprisingly, the simple context-based method, though performing worse than GDANs, still leads to substantial gains; e.g., 7.00 points in F1 with labeling rate 0.1. Adding domain tags can improve the performance of the context-based method as well. MLE vs RL. We plot the loss curve of −J(UG, d gen, D) for both the MLE-trained generator (“Gen + domain”) and the RL-trained generator (“Gen + domain + adv”) in Figure 2. We observe that the training loss for D on RLgenerated questions is lower than MLE-generated questions, which confirms that RL training maximizes log P(a|p, q). Samples of Generated Questions. We present some questions generated by our model in Table 1. The generated questions are post-processed by removing repeated subs-sequences. Compared to MLE-generated questions, RL-generated questions are more informative (Cf., P1, P2, and P4), and contain less “UNK” (unknown) tokens (Cf., 1046 Figure 2: Comparison of discriminator training loss −J(UG, d gen, D) on generated QA pairs. The lower the better. MLE refers to questions generated by maximum likelihood training, and RL refers to questions generated by reinforcement learning. P1). Moreover, both semantically and syntactically, RL-generated questions are more accurate (Cf., P3 and P5). 5 Related Work Semi-Supervised Learning. Semi-supervised learning has been extensively studied in literature (Zhu, 2005). A batch of novel models have been recently proposed for semi-supervised learning based on representation learning techniques, such as generative models (Kingma et al., 2014), ladder networks (Rasmus et al., 2015) and graph embeddings (Yang et al., 2016a). However, most of the semi-supervised learning methods are based on combinations of the supervised loss p(y|x) and an unsupervised loss p(x). In the context of reading comprehension, directly modeling the likelihood of a paragraph would not possibly improve the supervised task of question answering. Moreover, traditional graph-based semisupervised learning (Zhu and Ghahramani, 2002) cannot be easily extended to modeling the unlabeled answer chunks. Domain Adaptation. Domain adaptation has been successfully applied to various tasks, such as classification (Ganin and Lempitsky, 2014) and machine translation (Johnson et al., 2016; Chu et al., 2017). Several techniques on domain adaptation (Glorot et al., 2011) focus on learning distribution invariant features by sharing the intermediate representations for downstream tasks. Another line of research on domain adaptation attempt to match the distance between different domain distributions in a low dimensional space (Long et al., 2015; Baktashmotlagh et al., 2013). There are also methods seeking a domain transition from the source domain to the target domain (Gong et al., 2012; Gopalan et al., 2011; Pan et al., 2011). Our work gets inspiration from a practice in Johnson et al. (2016) and Chu et al. (2017) based on appending domain tags. 
However, our method is different from the above methods in that we apply domain adaptation techniques to the outputs of a generative model rather than a natural data domain. Question Answering. Various neural models based on attention mechanisms (Wang and Jiang, 2016; Seo et al., 2016; Xiong et al., 2016; Wang et al., 2016; Dhingra et al., 2016; Kadlec et al., 2016; Trischler et al., 2016b; Sordoni et al., 2016; Cui et al., 2016; Chen et al., 2016) have been proposed to tackle the tasks of question answering and reading comprehension. However, the performance of these neural models largely relies on a large amount of labeled data available for training. Learning with Multiple Models. GANs (Goodfellow et al., 2014) formulated a adversarial game between a discriminative model and a generative model for generating realistic images. Ganin and Lempitsky (Ganin and Lempitsky, 2014) employed a similar idea to use two models for domain adaptation. Review networks (Yang et al., 2016b) employ a discriminative model as a regularizer for training a generative model. In the context of machine translation, given a language pair, various recent work studied jointly training models to learn the mappings in both directions (Tu et al., 2016; Xia et al., 2016). 6 Conclusions We study a critical and challenging problem, semi-supervised question answering. We propose a novel neural framework called Generative Domain-Adaptive Nets, which incorporate domain adaptation techniques in combination with generative models for semi-supervised learning. Empirically, we show that our approach leads to substantial improvements over supervised learning models and outperforms several strong baselines including GANs and dual learning. In the future, we plan to apply our approach to more question answering datasets in different domains. It will also be intriguing to generalize GDANs to other applications. Acknowledgements. This work was funded by the Office of Naval Research grants N000141512791 and N000141310721 and NVIDIA. 1047 Table 2: Performance with various labeling rates, unlabeled data sizes |U|, and methods. “Dev” denotes the development set, and “test” denotes the test set. F1 and EM are two metrics. 
Labeling rate |U| Method Dev F1 Test F1 Test EM 0.1 50K SL 0.4262 0.3815 0.2492 0.1 50K Context 0.5046 0.4515 0.2966 0.1 50K Context + domain 0.5139 0.4575 0.3036 0.1 50K Gen 0.5049 0.4553 0.3018 0.1 50K Gen + GAN 0.4897 0.4373 0.2885 0.1 50K Gen + dual 0.5036 0.4555 0.3005 0.1 50K Gen + domain 0.5234 0.4703 0.3145 0.1 50K Gen + domain + adv 0.5313 0.4802 0.3218 0.2 50K SL 0.5134 0.4674 0.3163 0.2 50K Context 0.5652 0.5132 0.3573 0.2 50K Context + domain 0.5672 0.5200 0.3581 0.2 50K Gen 0.5643 0.5159 0.3618 0.2 50K Gen + GAN 0.5525 0.5037 0.3470 0.2 50K Gen + dual 0.5720 0.5192 0.3612 0.2 50K Gen + domain 0.5749 0.5216 0.3658 0.2 50K Gen + domain + adv 0.5867 0.5394 0.3781 0.5 50K SL 0.6280 0.5722 0.4187 0.5 50K Context 0.6300 0.5740 0.4195 0.5 50K Context + domain 0.6307 0.5791 0.4237 0.5 50K Gen 0.6237 0.5717 0.4155 0.5 50K Gen + GAN 0.6110 0.5590 0.4044 0.5 50K Gen + dual 0.6368 0.5746 0.4163 0.5 50K Gen + domain 0.6378 0.5826 0.4261 0.5 50K Gen + domain + adv 0.6375 0.5831 0.4267 0.9 50K SL 0.6611 0.6070 0.4534 0.9 50K Context 0.6560 0.6028 0.4507 0.9 50K Context + domain 0.6553 0.6105 0.4557 0.9 50K Gen 0.6464 0.5970 0.4445 0.9 50K Gen + GAN 0.6396 0.5874 0.4317 0.9 50K Gen + dual 0.6511 0.5892 0.4340 0.9 50K Gen + domain 0.6611 0.6102 0.4573 0.9 50K Gen + domain + adv 0.6585 0.6043 0.4497 0.1 5M SL 0.4262 0.3815 0.2492 0.1 5M Context 0.5140 0.4641 0.3014 0.1 5M Context + domain 0.5166 0.4599 0.3083 0.1 5M Gen 0.5099 0.4619 0.3103 0.1 5M Gen + domain 0.5301 0.4703 0.3227 0.1 5M Gen + domain + adv 0.5442 0.4840 0.3270 0.9 5M SL 0.6611 0.6070 0.4534 0.9 5M Context 0.6605 0.6026 0.4473 0.9 5M Context + domain 0.6642 0.6066 0.4548 0.9 5M Gen 0.6647 0.6065 0.4600 0.9 5M Gen + domain 0.6726 0.6092 0.4599 0.9 5M Gen + domain + adv 0.6670 0.6102 0.4531 1048 References Mahsa Baktashmotlagh, Mehrtash T Harandi, Brian C Lovell, and Mathieu Salzmann. 2013. Unsupervised domain adaptation by domain invariant projection. In ICCV. pages 769–776. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In EMNLP. Danqi Chen, Jason Bolton, and Christopher D Manning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. arXiv preprint arXiv:1606.02858 . Chenhui Chu, Raj Dabre, and Sadao Kurohashi. 2017. An empirical comparison of simple domain adaptation methods for neural machine translation. arXiv preprint arXiv:1701.03214 . Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 . Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. 2016. Attention-overattention neural networks for reading comprehension. arXiv preprint arXiv:1607.04423 . Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. 2016. Gated-attention readers for text comprehension. arXiv preprint arXiv:1606.01549 . Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In ACL. Association for Computational Linguistics, pages 363–370. Yaroslav Ganin and Victor Lempitsky. 2014. Unsupervised domain adaptation by backpropagation. arXiv preprint arXiv:1409.7495 . Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In ICML. pages 513–520. Boqing Gong, Yuan Shi, Fei Sha, and Kristen Grauman. 
2012. Geodesic flow kernel for unsupervised domain adaptation. In CVPR. IEEE, pages 2066– 2073. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In NIPS. pages 2672–2680. Raghuraman Gopalan, Ruonan Li, and Rama Chellappa. 2011. Domain adaptation for object recognition: An unsupervised approach. In ICCV. IEEE, pages 999–1006. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. arXiv preprint arXiv:1603.06393 . Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. arXiv preprint arXiv:1603.08148 . Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On using monolingual corpora in neural machine translation. arXiv preprint arXiv:1503.03535 . Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi´egas, Martin Wattenberg, Greg Corrado, et al. 2016. Google’s multilingual neural machine translation system: Enabling zero-shot translation. arXiv preprint arXiv:1611.04558 . Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. 2016. Text understanding with the attention sum reader network. arXiv preprint arXiv:1603.01547 . Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. 2014. Semi-supervised learning with deep generative models. In NIPS. pages 3581–3589. Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I Jordan. 2015. Learning transferable features with deep adaptation networks. In ICML. pages 97–105. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268 . Sinno Jialin Pan, Ivor W Tsang, James T Kwok, and Qiang Yang. 2011. Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks 22(2):199–210. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In EMNLP. Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. 2015. Semisupervised learning with ladder networks. In NIPS. pages 3546–3554. Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP. volume 3. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603 . 1049 Alessandro Sordoni, Philip Bachman, Adam Trischler, and Yoshua Bengio. 2016. Iterative alternating neural attention for machine reading. arXiv preprint arXiv:1606.02245 . Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In NIPS. pages 3104–3112. Kristina Toutanova, Dan Klein, Christopher D Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In NAACL. Association for Computational Linguistics, pages 173–180. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016a. Newsqa: A machine comprehension dataset. arXiv preprint arXiv:1611.09830 . Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. 2016b. 
Natural language comprehension with the epireader. arXiv preprint arXiv:1606.02270 . Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In ACL. Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. 2010. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. JMLR 11(Dec):3371–3408. Ellen M Voorhees and Dawn M Tice. 2000. Building a question answering test collection. In SIGIR. ACM, pages 200–207. Shuohang Wang and Jing Jiang. 2016. Machine comprehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905 . Zhiguo Wang, Haitao Mi, Wael Hamza, and Radu Florian. 2016. Multi-perspective context matching for machine comprehension. arXiv preprint arXiv:1612.04211 . Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning 8(3-4):229–256. Yingce Xia, Di He, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. arXiv preprint arXiv:1611.00179 . Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604 . Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. Wikiqa: A challenge dataset for open-domain question answering. In EMNLP. Citeseer, pages 2013– 2018. Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. 2016a. Revisiting semi-supervised learning with graph embeddings. In ICML. Zhilin Yang, Bhuwan Dhingra, Ye Yuan, Junjie Hu, William W Cohen, and Ruslan Salakhutdinov. 2017. Words or characters? fine-grained gating for reading comprehension. In ICLR. Zhilin Yang, Ye Yuan, Yuexin Wu, William W Cohen, and Ruslan R Salakhutdinov. 2016b. Review networks for caption generation. In NIPS. pages 2361– 2369. Xiaojin Zhu. 2005. Semi-supervised learning literature survey . Xiaojin Zhu and Zoubin Ghahramani. 2002. Learning from labeled and unlabeled data with label propagation . 1050
2017
96
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1051–1062 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1097 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1051–1062 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1097 From Language to Programs: Bridging Reinforcement Learning and Maximum Marginal Likelihood Kelvin Guu Statistics Stanford University [email protected] Panupong Pasupat Computer Science Stanford University [email protected] Evan Zheran Liu Computer Science Stanford University [email protected] Percy Liang Computer Science Stanford University [email protected] Abstract Our goal is to learn a semantic parser that maps natural language utterances into executable programs when only indirect supervision is available: examples are labeled with the correct execution result, but not the program itself. Consequently, we must search the space of programs for those that output the correct result, while not being misled by spurious programs: incorrect programs that coincidentally output the correct result. We connect two common learning paradigms, reinforcement learning (RL) and maximum marginal likelihood (MML), and then present a new learning algorithm that combines the strengths of both. The new algorithm guards against spurious programs by combining the systematic search traditionally employed in MML with the randomized exploration of RL, and by updating parameters such that probability is spread more evenly across consistent programs. We apply our learning algorithm to a new neural semantic parser and show significant gains over existing state-of-theart results on a recent context-dependent semantic parsing task. 1 Introduction We are interested in learning a semantic parser that maps natural language utterances into executable programs (e.g., logical forms). For example, in Figure 1, a program corresponding to the utterance transforms an initial world state into a new world state. We would like to learn from indirect supervision, where each training example is only labeled with the correct output (e.g. a target world state), but not the program that produced that out"The man in the yellow hat moves to the left of the woman in blue.” Spurious: move(hasShirt(red), 1) Correct: move(hasHat(yellow), leftOf(hasShirt(blue))) 1 2 3 1 2 3 BEFORE AFTER z* z' 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 p(z') = 10-4 p(z*) = 10-6 red yellow hasHat blue hasShirt leftOf move move 1 hasShirt Figure 1: The task is to map natural language utterances to a program that manipulates the world state. The correct program captures the true meaning of the utterances, while spurious programs arrive at the correct output for the wrong reasons. We develop methods to prevent the model from being drawn to spurious programs. put (Clarke et al., 2010; Liang et al., 2011; Krishnamurthy and Mitchell, 2012; Artzi and Zettlemoyer, 2013; Liang et al., 2017). The process of constructing a program can be formulated as a sequential decision-making process, where feedback is only received at the end of the sequence when the completed program is executed. 
In the natural language processing literature, there are two common approaches for handling this situation: 1) reinforcement learning (RL), particularly the REINFORCE algorithm (Williams, 1992; Sutton et al., 1999), which maximizes the expected reward of a sequence of actions; and 2) maximum marginal likelihood (MML), which treats the sequence of actions as a latent variable, and then maximizes the marginal likelihood of observing the correct program output (Dempster et al., 1977). While the two approaches have enjoyed success on many tasks, we found them to work poorly out of the box for our task. This is because in addition to the sparsity of correct programs, our task also requires weeding out spurious programs (Pasupat and Liang, 2016): incorrect interpretations 1051 of the utterances that accidentally produce the correct output, as illustrated in Figure 1. We show that MML and RL optimize closely related objectives. Furthermore, both MML and RL methods have a mechanism for exploring program space in search of programs that generate the correct output. We explain why this exploration tends to quickly concentrate around short spurious programs, causing the model to sometimes overlook the correct program. To address this problem, we propose RANDOMER, a new learning algorithm with two parts: First, we propose randomized beam search, an exploration strategy which combines the systematic beam search traditionally employed in MML with the randomized off-policy exploration of RL. This increases the chance of finding correct programs even when the beam size is small or the parameters are not pre-trained. Second, we observe that even with good exploration, the gradients of both the RL and MML objectives may still upweight entrenched spurious programs more strongly than correct programs with low probability under the current model. We propose a meritocratic parameter update rule, a modification to the MML gradient update, which more equally upweights all programs that produce the correct output. This makes the model less likely to overfit spurious programs. We apply RANDOMER to train a new neural semantic parser, which outputs programs in a stackbased programming language. We evaluate our resulting system on SCONE, the context-dependent semantic parsing dataset of Long et al. (2016). Our approach outperforms standard RL and MML methods in a direct comparison, and achieves new state-of-the-art results, improving over Long et al. (2016) in all three domains of SCONE, and by over 30% accuracy on the most challenging one. 2 Task We consider the semantic parsing task in the SCONE dataset1 (Long et al., 2016). As illustrated in Figure 1, each example consists of a world containing several objects (e.g., people), each with certain properties (e.g., shirt color and hat color). Given the initial world state w0 and a sequence of M natural language utterances u = (u1, . . . , uM), the task is to generate a program that manipulates the world state according to the utterances. Each 1 https://nlp.stanford.edu/projects/scone utterance um describes a single action that transforms the world state wm−1 into a new world state wm. For training, the system receives weakly supervised examples with input x = (u, w0) and the target final world state y = wM. The dataset includes 3 domains: ALCHEMY, TANGRAMS, and SCENE. The description of each domain can be found in Appendix B. 
The domains highlight different linguistic phenomena: ALCHEMY features ellipsis (e.g., “throw the rest out”, “mix”); TANGRAMS features anaphora on actions (e.g., “repeat step 3”, “bring it back”); and SCENE features anaphora on entities (e.g., “he moves back”, “...to his left”). Each domain contains roughly 3,700 training and 900 test examples. Each example contains 5 utterances and is labeled with the target world state after each utterance, but not the target program. Spurious programs. Given a training example (u, w0, wM), our goal is to find the true underlying program z∗which reflects the meaning of u. The constraint that z∗must transform w0 into wM, i.e. z(w0) = wM, is not enough to uniquely identify the true z∗, as there are often many z satisfying z(w0) = wM: in our experiments, we found at least 1600 on average for each example. Almost all do not capture the meaning of u (see Figure 1). We refer to these incorrect z’s as spurious programs. Such programs encourage the model to learn an incorrect mapping from language to program operations: e.g., the spurious program in Figure 1 would cause the model to learn that “man in the yellow hat” maps to hasShirt(red). Spurious programs in SCONE. In this dataset, utterances often reference objects in different ways (e.g. a person can be referenced by shirt color, hat color, or position). Hence, any target programming language must also support these different reference strategies. As a result, even a single action such as moving a person to a target destination can be achieved by many different programs, each selecting the person and destination in a different way. Across multiple actions, the number of programs grows combinatorially.2 Only a few programs actually implement the correct reference strategy as defined by the utterance. This problem would be more severe in any more general-purpose language (e.g. Python). 2The number of well-formed programs in SCENE exceeds 1015 1052 3 Model We formulate program generation as a sequence prediction problem. We represent a program as a sequence of program tokens in postfix notation; for example, move(hasHat(yellow), leftOf(hasShirt(blue))) is linearized as yellow hasHat blue hasShirt leftOf move. This representation also allows us to incrementally execute programs from left to right using a stack: constants (e.g., yellow) are pushed onto the stack, while functions (e.g., hasHat) pop appropriate arguments from the stack and push back the computed result (e.g., the list of people with yellow hats). Appendix B lists the full set of program tokens, Z, and how they are executed. Note that each action always ends with an action token (e.g., move). Given an input x = (u, w0), the model generates program tokens z1, z2, . . . from left to right using a neural encoder-decoder model with attention (Bahdanau et al., 2015). Throughout the generation process, the model maintains an utterance pointer, m, initialized to 1. To generate zt, the model’s encoder first encodes the utterance um into a vector em. Then, based on em and previously generated tokens z1:t−1, the model’s decoder defines a distribution p(zt | x, z1:t−1) over the possible values of zt ∈Z. The next token zt is sampled from this distribution. If an action token (e.g., move) is generated, the model increments the utterance pointer m. The process terminates when all M utterances are processed. The final probability of generating a particular program z = (z1, . . . , zT ) is p(z | x) = QT t=1 p(zt | x, z1:t−1). Encoder. 
Encoder. The utterance $u_m$ under the pointer is encoded using a bidirectional LSTM:

$h^F_i = \mathrm{LSTM}(h^F_{i-1}, \Phi_u(u_{m,i}))$
$h^B_i = \mathrm{LSTM}(h^B_{i+1}, \Phi_u(u_{m,i}))$
$h_i = [h^F_i; h^B_i]$,

where $\Phi_u(u_{m,i})$ is the fixed GloVe word embedding (Pennington et al., 2014) of the $i$th word in $u_m$. The final utterance embedding is the concatenation $e_m = [h^F_{|u_m|}; h^B_1]$.

Decoder. Unlike Bahdanau et al. (2015), which used a recurrent network for the decoder, we opt for a feed-forward network for simplicity. We use $e_m$ and an embedding $f(z_{1:t-1})$ of the previous execution history (described later) as inputs to compute an attention vector $c_t$:

$q_t = \mathrm{ReLU}(W_q [e_m; f(z_{1:t-1})])$
$\alpha_i \propto \exp(q_t^\top W_a h_i) \quad (i = 1, \ldots, |u_m|)$
$c_t = \sum_i \alpha_i h_i$.

Finally, after concatenating $q_t$ with $c_t$, the distribution over the set $Z$ of possible program tokens is computed via a softmax:

$p(z_t \mid x, z_{1:t-1}) \propto \exp(\Phi_z(z_t)^\top W_s [q_t; c_t])$,

where $\Phi_z(z_t)$ is the embedding for token $z_t$.

Execution history embedding. We compare two options for $f(z_{1:t-1})$, our embedding of the execution history. A standard approach is to simply take the $k$ most recent tokens $z_{t-k:t-1}$ and concatenate their embeddings. We will refer to this as TOKENS and use $k = 4$ in our experiments. We also consider a new approach which leverages our ability to incrementally execute programs using a stack. We summarize the execution history by embedding the state of the stack at time $t-1$, achieved by concatenating the embeddings of all values on the stack. (We limit the maximum stack size to 3.) We refer to this as STACK.

4 Reinforcement learning versus maximum marginal likelihood

Having formulated our task as a sequence prediction problem, we must still choose a learning algorithm. We first compare two standard paradigms: reinforcement learning (RL) and maximum marginal likelihood (MML). In the next section, we propose a better alternative.

4.1 Comparing objective functions

Reinforcement learning. From an RL perspective, given a training example $(x, y)$, a policy makes a sequence of decisions $z = (z_1, \ldots, z_T)$, and then receives a reward at the end of the episode: $R(z) = 1$ if $z$ executes to $y$ and 0 otherwise (dependence on $x$ and $y$ has been omitted from the notation). We focus on policy gradient methods, in which a stochastic policy function is trained to maximize the expected reward. In our setup, $p_\theta(z \mid x)$ is the policy (with parameters $\theta$), and its expected reward on a given example $(x, y)$ is

$G(x, y) = \sum_z R(z)\, p_\theta(z \mid x)$,  (1)

where the sum is over all possible programs. The overall RL objective, $J_{RL}$, is the expected reward across examples:

$J_{RL} = \sum_{(x,y)} G(x, y)$.  (2)

Maximum marginal likelihood. The MML perspective assumes that $y$ is generated by a partially-observed random process: conditioned on $x$, a latent program $z$ is generated, and conditioned on $z$, the observation $y$ is generated. This implies the marginal likelihood:

$p_\theta(y \mid x) = \sum_z p(y \mid z)\, p_\theta(z \mid x)$.  (3)

Note that since the execution of $z$ is deterministic, $p(y \mid z) = 1$ if $z$ executes to $y$ and 0 otherwise. The log marginal likelihood of the data is then

$J_{MML} = \log L_{MML}$,  (4)
where $L_{MML} = \prod_{(x,y)} p_\theta(y \mid x)$.  (5)

To estimate our model parameters $\theta$, we maximize $J_{MML}$ with respect to $\theta$. With our choice of reward, the RL expected reward (1) is equal to the MML marginal probability (3). Hence the only difference between the two formulations is that in RL we optimize the sum of expected rewards (2), whereas in MML we optimize the product (5) (note that the log of the product in (5) does not equal the sum in (2)).
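As a small numerical reference for the two objectives just defined, the sketch below computes (1) and the log of (3) over an explicitly enumerated candidate set with assumed probabilities; in practice the sum over all programs is intractable and must be approximated, as described next.

```python
# Illustrative-only computation of (1) and (3)-(4) for a single example, assuming
# we could enumerate candidate programs with their rewards R(z) and policy
# probabilities p_theta(z | x). This is not the paper's implementation.
import math

def expected_reward(candidates):
    """G(x, y) in (1): sum over programs of R(z) * p_theta(z | x)."""
    return sum(r * p for r, p in candidates)

def log_marginal_likelihood(candidates):
    """log p_theta(y | x): with 0/1 reward, the marginal in (3) equals (1)."""
    return math.log(max(expected_reward(candidates), 1e-300))

# (reward, probability) pairs: a high-probability spurious program, a
# low-probability correct program, and an incorrect program.
candidates = [(1.0, 1e-4), (1.0, 1e-6), (0.0, 0.9)]
print(expected_reward(candidates))           # ~1.01e-04
print(log_marginal_likelihood(candidates))   # ~ -9.2
```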
4.2 Comparing gradients

In both policy gradient and MML, the objectives are typically optimized via (stochastic) gradient ascent. The gradients of $J_{RL}$ and $J_{MML}$ are closely related. They both have the form:

$\nabla_\theta J = \sum_{(x,y)} \mathbb{E}_{z \sim q}\left[R(z)\,\nabla \log p_\theta(z \mid x)\right] = \sum_{(x,y)} \sum_z q(z)\, R(z)\, \nabla \log p_\theta(z \mid x)$,  (6)

where $q(z)$ equals

$q_{RL}(z) = p_\theta(z \mid x)$ for $J_{RL}$,  (7)
$q_{MML}(z) = \frac{R(z)\, p_\theta(z \mid x)}{\sum_{\tilde{z}} R(\tilde{z})\, p_\theta(\tilde{z} \mid x)} = p_\theta(z \mid x, R(z) \neq 0)$ for $J_{MML}$.  (8)

Taking a step in the direction of $\nabla \log p_\theta(z \mid x)$ upweights the probability of $z$, so we can heuristically think of the gradient as attempting to upweight each reward-earning program $z$ by a gradient weight $q(z)$. In Subsection 5.2, we argue why $q_{MML}$ is better at guarding against spurious programs, and propose an even better alternative.

4.3 Comparing gradient approximation strategies

It is often intractable to compute the gradient (6) because it involves taking an expectation over all possible programs. So in practice, the expectation is approximated. In the policy gradient literature, Monte Carlo integration (MC) is the typical approximation strategy. For example, the popular REINFORCE algorithm (Williams, 1992) uses Monte Carlo sampling to compute an unbiased estimate of the gradient:

$\Delta_{MC} = \frac{1}{B} \sum_{z \in S} [R(z) - c]\, \nabla \log p_\theta(z \mid x)$,  (9)

where $S$ is a collection of $B$ samples $z^{(b)} \sim q(z)$, and $c$ is a baseline (Williams, 1992) used to reduce the variance of the estimate without altering its expectation. In the MML literature for latent sequences, the expectation is typically approximated via numerical integration (NUM) instead:

$\Delta_{NUM} = \sum_{z \in S} q(z)\, R(z)\, \nabla \log p_\theta(z \mid x)$,  (10)

where the programs in $S$ come from beam search.

Beam search. Beam search generates a set of programs via the following process. At step $t$ of beam search, we maintain a beam $B_t$ of at most $B$ search states. Each state $s \in B_t$ represents a partially constructed program, $s = (z_1, \ldots, z_t)$ (the first $t$ tokens of the program). For each state $s$ in the beam, we generate all possible continuations, $\mathrm{cont}(s) = \mathrm{cont}((z_1, \ldots, z_t)) = \{(z_1, \ldots, z_t, z_{t+1}) \mid z_{t+1} \in Z\}$. We then take the union of these continuations, $\mathrm{cont}(B_t) = \bigcup_{s \in B_t} \mathrm{cont}(s)$. The new beam $B_{t+1}$ is simply the highest scoring $B$ continuations in $\mathrm{cont}(B_t)$, as scored by the policy, $p_\theta(s \mid x)$. Search is halted after a fixed number of iterations or when there are no continuations possible. $S$ is then the set of all complete programs discovered during beam search. We will refer to this as beam search MML (BS-MML).

In both policy gradient and MML, we think of the procedure used to produce the set of programs $S$ as an exploration strategy which searches for programs that produce reward. One advantage of numerical integration is that it allows us to decouple the exploration strategy from the gradient weights assigned to each program.
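A schematic rendering of the beam search just described is given below; the randomized variant of Section 5 differs only in how continuations are selected from the pool. Here `continuations`, `score`, and `is_complete` are placeholder callables standing in for token-level expansion, the policy $p_\theta(s \mid x)$, and program termination, not functions from the authors' code.

```python
# Schematic beam search for BS-MML. `continuations(s)` is assumed to return every
# one-token extension of a partial program, `score(s)` its policy probability
# p_theta(s | x), and `is_complete(s)` whether s is a finished program; these are
# placeholders, not the paper's actual implementation.
def beam_search(initial_state, continuations, score, is_complete, beam_size, max_steps):
    beam = [initial_state]
    complete = []                                            # S: complete programs found
    for _ in range(max_steps):
        pool = [c for s in beam for c in continuations(s)]   # cont(B_t)
        if not pool:
            break
        pool.sort(key=score, reverse=True)
        beam = pool[:beam_size]                              # B highest-scoring continuations
        complete.extend(s for s in beam if is_complete(s))
        beam = [s for s in beam if not is_complete(s)]       # only partial programs continue
    return complete
```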
5 Tackling spurious programs

In this section, we illustrate why spurious programs are problematic for the most commonly used methods in RL (REINFORCE) and MML (beam search MML). We describe two key problems and propose a solution to each, based on insights gained from our comparison of RL and MML in Section 4.

5.1 Spurious programs bias exploration

As mentioned in Section 4, REINFORCE and BS-MML both employ an exploration strategy to approximate their respective gradients. In both methods, exploration is guided by the current model policy, whereby programs with high probability under the current policy are more likely to be explored. A troubling implication is that programs with low probability under the current policy are likely to be overlooked by exploration. If the current policy incorrectly assigns low probability to the correct program $z^*$, it will likely fail to discover $z^*$ during exploration, and will consequently fail to upweight the probability of $z^*$. This repeats on every gradient step, keeping the probability of $z^*$ perpetually low. The same feedback loop can also cause already high-probability spurious programs to gain even more probability. From this, we see that exploration is sensitive to initial conditions: the rich get richer, and the poor get poorer.

Since there are often thousands of spurious programs and only a few correct programs, spurious programs are usually found first. Once spurious programs get a head start, exploration increasingly biases towards them. As a remedy, one could try initializing parameters such that the model puts a uniform distribution over all possible programs. A seemingly reasonable tactic is to initialize parameters such that the model policy puts near-uniform probability over the decisions at each time step. However, this causes shorter programs to have orders of magnitude higher probability than longer programs, as illustrated in Figure 2 and as we empirically observe.

[Figure 2: Two possible paths in the tree of all possible programs. One path leads to the spurious program $z'$ = move(hasShirt(red), 1) (red), while the longer path leads to the correct program $z^*$ = move(hasHat(yellow), leftOf(hasShirt(blue))) (gold). Each edge represents a decision and shows the probability of that decision under a uniform policy; the shorter program has two orders of magnitude higher probability ($p(z') = 10^{-4}$ vs. $p(z^*) = 10^{-6}$).]

A more sophisticated approach might involve approximating the total number of programs reachable from each point in the program-generating decision tree. However, we instead propose to reduce sensitivity to the initial distribution over programs.

Solution: randomized beam search. One solution to biased exploration is to simply rely less on the untrustworthy current policy. We can do this by injecting random noise into exploration. In REINFORCE, a common solution is to sample from an $\epsilon$-greedy variant of the current policy. On the other hand, MML exploration with beam search is deterministic. However, it has a key advantage over REINFORCE-style sampling: even if one program occupies almost all probability under the current policy (a peaky distribution), beam search will still use its remaining beam capacity to explore at least $B - 1$ other programs. In contrast, sampling methods will repeatedly visit the mode of the distribution.

To get the best of both worlds, we propose a simple $\epsilon$-greedy randomized beam search. Like regular beam search, at iteration $t$ we compute the set of all continuations $\mathrm{cont}(B_t)$ and sort them by their model probability $p_\theta(s \mid x)$. But instead of selecting the $B$ highest-scoring continuations, we choose $B$ continuations one by one without replacement from $\mathrm{cont}(B_t)$. When choosing a continuation from the remaining pool, we either uniformly sample a random continuation with probability $\epsilon$, or pick the highest-scoring continuation in the pool with probability $1 - \epsilon$. Empirically, we find that this performs much better than both classic beam search and $\epsilon$-greedy sampling (Table 3).
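A sketch of the $\epsilon$-greedy selection step described above follows; `score` again stands in for $p_\theta(s \mid x)$, and setting $\epsilon = 0$ recovers the classic beam search selection.

```python
# Sketch of epsilon-greedy continuation selection for randomized beam search:
# choose B continuations one by one without replacement, taking a uniformly
# random one with probability epsilon and the highest-scoring remaining one
# otherwise. `score` is a placeholder for p_theta(s | x).
import random

def randomized_beam_step(pool, score, beam_size, epsilon, rng=random):
    pool = sorted(pool, key=score, reverse=True)
    chosen = []
    while pool and len(chosen) < beam_size:
        if rng.random() < epsilon:
            pick = rng.randrange(len(pool))   # explore: uniform over the remaining pool
        else:
            pick = 0                          # exploit: best remaining continuation
        chosen.append(pool.pop(pick))
    return chosen
```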
5.2 Spurious programs dominate gradients

In both RL and MML, even if exploration is perfect and the gradient is exactly computed, spurious programs can still be problematic. Even if perfect exploration visits every program, we see from the gradient weights $q(z)$ in (7) and (8) that programs are weighted proportional to their current policy probability. If a spurious program $z'$ has 100 times higher probability than $z^*$ as in Figure 2, the gradient will spend roughly 99% of its magnitude upweighting towards $z'$ and only 1% towards $z^*$ even though the two programs get the same reward. This implies that it would take many updates for $z^*$ to catch up. In fact, $z^*$ may never catch up, depending on the gradient updates for other training examples. Simply increasing the learning rate is inadequate, as it would cause the model to take overly large steps towards $z'$, potentially causing optimization to diverge.

Solution: the meritocratic update rule. To solve this problem, we want the upweighting to be more “meritocratic”: any program that obtains reward should be upweighted roughly equally. We first observe that $J_{MML}$ already improves over $J_{RL}$ in this regard. From (8), we see that the gradient weight $q_{MML}(z)$ is the policy distribution restricted to and renormalized over only reward-earning programs. This renormalization makes the gradient weight uniform across examples: even if all reward-earning programs for a particular example have very low model probability, their combined gradient weight $\sum_z q_{MML}(z)$ is always 1. In our experiments, $J_{MML}$ performs significantly better than $J_{RL}$ (Table 4). However, while $J_{MML}$ assigns uniform weight across examples, it is still not uniform over the programs within each example. Hence we propose a new update rule which goes one step further in pursuing uniform updates. Extending $q_{MML}(z)$, we define a $\beta$-smoothed version:

$q_\beta(z) = \frac{q_{MML}(z)^\beta}{\sum_{\tilde{z}} q_{MML}(\tilde{z})^\beta}$.  (11)

When $\beta = 0$, our weighting is completely uniform across all reward-earning programs within an example, while $\beta = 1$ recovers the original MML weighting. Our new update rule is to simply take a modified gradient step where $q = q_\beta$ (note that if exploration were exhaustive, $\beta = 0$ would be equivalent to supervised learning using the set of all reward-earning programs as targets). We will refer to this as the $\beta$-meritocratic update rule.

5.3 Summary of the proposed approach

We described two problems and their solutions: we reduce exploration bias using $\epsilon$-greedy randomized beam search and perform more balanced optimization using the $\beta$-meritocratic parameter update rule. (These problems concern the gradient with respect to a single example. The full gradient averages over multiple examples, which helps separate correct from spurious: if multiple examples all mention “yellow hat”, we will find a correct program parsing this as hasHat(yellow) for each example, whereas the spurious programs we find will follow no consistent pattern. Consequently, spurious gradient contributions may cancel out while correct program gradients will all “vote” in the same direction.) We call our resulting approach RANDOMER. Table 1 summarizes how RANDOMER combines desirable qualities from both REINFORCE and BS-MML.

Method | Approximation of $\mathbb{E}_q[\cdot]$ | Exploration strategy | Gradient weight $q(z)$
REINFORCE | Monte Carlo integration | independent sampling | $p_\theta(z \mid x)$
BS-MML | numerical integration | beam search | $p_\theta(z \mid x, R(z) \neq 0)$
RANDOMER | numerical integration | randomized beam search | $q_\beta(z)$

Table 1: RANDOMER combines qualities of both REINFORCE (RL) and BS-MML. For approximating the expectation over $q$ in the gradient, we use numerical integration as in BS-MML. Our exploration strategy is a hybrid of search (MML) and off-policy sampling (RL). Our gradient weighting is equivalent to MML when $\beta = 1$ and more “meritocratic” than both MML and REINFORCE for lower values of $\beta$.
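As a concrete illustration of the weighting in (11), here is a small numpy sketch over an explicit candidate set of programs; this is illustrative code, not the authors' TensorFlow implementation.

```python
# Beta-meritocratic gradient weights from (11), computed over an explicit set of
# candidate programs given their policy probabilities and 0/1 rewards.
# Illustrative sketch only.
import numpy as np

def meritocratic_weights(probs, rewards, beta):
    probs = np.asarray(probs, dtype=float)
    rewards = np.asarray(rewards, dtype=float)
    q_mml = rewards * probs
    q_mml = q_mml / max(q_mml.sum(), 1e-12)              # eq. (8): restrict and renormalize
    q_beta = np.where(q_mml > 0, q_mml ** beta, 0.0)     # smooth; zero-reward programs stay 0
    return q_beta / max(q_beta.sum(), 1e-12)             # eq. (11)

# Spurious program with p = 1e-4 and correct program with p = 1e-6 (cf. Figure 2):
print(meritocratic_weights([1e-4, 1e-6, 0.9], [1, 1, 0], beta=1.0))  # ~[0.99, 0.01, 0.]
print(meritocratic_weights([1e-4, 1e-6, 0.9], [1, 1, 0], beta=0.0))  # [0.5, 0.5, 0.]
```

With $\beta = 1$ nearly all of the weight goes to the higher-probability spurious program, while $\beta = 0$ splits the weight evenly over the reward-earning programs, which is exactly the behavior the update rule is designed to provide.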
6 Experiments

Evaluation. We evaluate our proposed methods on all three domains of the SCONE dataset. Accuracy is defined as the percentage of test examples where the model produces the correct final world state $w_M$. All test examples have $M = 5$ (5utts), but we also report accuracy after processing the first 3 utterances (3utts). To control for the effects of randomness, we train 5 instances of each model with different random seeds. We report the median accuracy of the instances unless otherwise noted.

Training. Following Long et al. (2016), we decompose each training example into smaller examples. Given an example with 5 utterances, $u = [u_1, \ldots, u_5]$, we consider all length-1 and length-2 substrings of $u$: $[u_1], [u_2], \ldots, [u_3, u_4], [u_4, u_5]$ (9 total). We form a new training example from each substring, e.g., $(u', w'_0, w'_M)$ where $u' = [u_4, u_5]$, $w'_0 = w_3$ and $w'_M = w_5$. All models are implemented in TensorFlow (Abadi et al., 2015). Model parameters are randomly initialized (Glorot and Bengio, 2010), with no pre-training. We use the Adam optimizer (Kingma and Ba, 2014) (which is applied to the gradient in (6)), a learning rate of 0.001, a minibatch size of 8 examples (different from the beam size), and train until accuracy on the validation set converges (on average about 13,000 steps). We use fixed GloVe vectors (Pennington et al., 2014) to embed the words in each utterance.

Hyperparameters. For all models, we performed a grid search over hyperparameters to maximize accuracy on the validation set. Hyperparameters include the learning rate, the baseline in REINFORCE, $\epsilon$-greediness and $\beta$-meritocraticness. For REINFORCE, we also experimented with a regression-estimated baseline (Ranzato et al., 2015), but found it to perform worse than a constant baseline.

6.1 Main results

Comparison to prior work. Table 2 compares RANDOMER to results from Long et al. (2016) as well as two baselines, REINFORCE and BS-MML (using the same neural model but different learning algorithms). Our approach achieves new state-of-the-art results by a significant margin, especially on the SCENE domain, which features the most complex program syntax. We report the results for REINFORCE, BS-MML, and RANDOMER on the seed and hyperparameters that achieve the best validation accuracy. We note that REINFORCE performs very well on TANGRAMS but worse on ALCHEMY and very poorly on SCENE. This might be because the program syntax for TANGRAMS is simpler than the other two: there is no other way to refer to objects except by index. We also found that REINFORCE required $\epsilon$-greedy exploration to make any progress. Using $\epsilon$-greedy greatly skews the Monte Carlo approximation of $\nabla J_{RL}$, making it more uniformly weighted over programs in a similar spirit to using $\beta$-meritocratic gradient weights $q_\beta$. However, $q_\beta$ increases uniformity over reward-earning programs only, rather than over all programs.

Effect of randomized beam search. Table 3 shows that $\epsilon$-greedy randomized beam search consistently outperforms classic beam search.
Even when we increase the beam size of classic beam ALCHEMY TANGRAMS SCENE system 3utts 5utts 3utts 5utts 3utts 5utts LONG+16 56.8 52.3 64.9 27.6 23.2 14.7 REINFORCE 58.3 44.6 68.5 37.3 47.8 33.9 BS-MML 58.7 47.3 62.6 32.2 53.5 32.5 RANDOMER 66.9 52.9 65.8 37.1 64.8 46.2 Table 2: Comparison to prior work. LONG+16 results are directly from Long et al. (2016). Hyperparameters are chosen by best performance on validation set (see Appendix A). ALCHEMY TANGRAMS SCENE random beam 3utts 5utts 3utts 5utts 3utts 5utts classic beam search None 32 30.3 23.2 0.0 0.0 33.4 20.1 None 128 59.0 46.4 60.9 28.6 24.5 13.9 randomized beam search ϵ = 0.05 32 58.7 45.5 61.1 32.5 33.4 23.0 ϵ = 0.15 32 61.3 48.3 65.2 34.3 50.8 33.5 ϵ = 0.25 32 60.5 48.6 60.0 27.3 54.1 35.7 Table 3: Randomized beam search. All listed models use gradient weight qMML and TOKENS to represent execution history. search to 128, it still does not surpass randomized beam search with a beam of 32, and further increases yield no additional improvement. Effect of β-meritocratic updates. Table 4 evaluates the impact of β-meritocratic parameter updates (gradient weight qβ). More uniform upweighting across reward-earning programs leads to higher accuracy and fewer spurious programs, especially in SCENE. However, no single value of β performs best over all domains. Choosing the right value of β in RANDOMER significantly accelerates training. Figure 3 illustrates that while β = 0 and β = 1 ultimately achieve similar accuracy on ALCHEMY, β = 0 reaches good performance in half the time. Since lowering β reduces trust in the model policy, β < 1 helps in early training when the current policy is untrustworthy. However, as it grows more trustworthy, β < 1 begins to pay a price for ignoring it. Hence, it may be worthwhile to anneal β towards 1 over time. 1057 ALCHEMY TANGRAMS SCENE q(z) 3utts 5utts 3utts 5utts 3utts 5utts qRL 0.2 0.0 0.9 0.6 0.0 0.0 qMML (qβ=1) 61.3 48.3 65.2 34.3 50.8 33.5 qβ=0.25 64.4 48.9 60.6 29.0 42.4 29.7 qβ=0 63.6 46.3 54.0 23.5 61.0 42.4 Table 4: β-meritocratic updates. All listed models use randomized beam search, ϵ = 0.15 and TOKENS to represent execution history. ALCHEMY TANGRAMS SCENE 3utts 5utts 3utts 5utts 3utts 5utts HISTORY 61.3 48.3 65.2 34.3 50.8 33.5 STACK 64.2 53.2 63.0 32.4 59.5 43.1 Table 5: TOKENS vs STACK embedding. Both models use ϵ = 0.15 and gradient weight qMML. Effect of execution history embedding. Table 5 compares our two proposals for embedding the execution history: TOKENS and STACK. STACK performs better in the two domains where an object can be referenced in multiple ways (SCENE and ALCHEMY). STACK directly embeds objects on the stack, invariant to the way in which they were pushed onto the stack, unlike TOKENS. We hypothesize that this invariance increases robustness to spurious behavior: if a program accidentally pushes the right object onto the stack via spurious means, the model can still learn the remaining steps of the program without conditioning on a spurious history. Fitting vs overfitting the training data. Table 6 reveals that BS-MML and RANDOMER use different strategies to fit the training data. On the depicted training example, BS-MML actually achieves higher expected reward / marginal probability than RANDOMER, but it does so by putting most of its probability on a spurious program— a form of overfitting. In contrast, RANDOMER spreads probability mass over multiple rewardearning programs, including the correct ones. 
As a consequence of overfitting, we observed at test time that BS-MML only references people by positional indices instead of by shirt or hat color, whereas RANDOMER successfully learns to use multiple reference strategies. 7 Related work and discussion Semantic parsing from indirect supervision. Our work is motivated by the classic problem of learning semantic parsers from indirect supervision (Clarke et al., 2010; Liang et al., 2011; Artzi 0 5000 10000 15000 20000 25000 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 RANDOMER beta = 0 RANDOMER beta = 0.25 RANDOMER beta = 1 BS-MML REINFORCE Figure 3: Validation set accuracy (y-axis) across training iterations (x-axis) on ALCHEMY. We compare RANDOMER, BS-MML and REINFORCE. Vertical lines mark the first time each model surpasses 60% accuracy. RANDOMER with β = 0 reaches this point twice as fast as β = 1. REINFORCE plateaus for a long time, then begins to climb after 40k iterations (not shown). Training runs are averaged over 5 seeds. and Zettlemoyer, 2011, 2013; Reddy et al., 2014; Pasupat and Liang, 2015). We are interested in the initial stages of training from scratch, where getting any training signal is difficult due to the combinatorially large search space. We also highlighted the problem of spurious programs which capture reward but give incorrect generalizations. Maximum marginal likelihood with beam search (BS-MML) is traditionally used to learn semantic parsers from indirect supervision. Reinforcement learning. Concurrently, there has been a recent surge of interest in reinforcement learning, along with the wide application of the classic REINFORCE algorithm (Williams, 1992)—to troubleshooting (Branavan et al., 2009), dialog generation (Li et al., 2016), game playing (Narasimhan et al., 2015), coreference resolution (Clark and Manning, 2016), machine translation (Norouzi et al., 2016), and even semantic parsing (Liang et al., 2017). Indeed, the challenge of training semantic parsers from indirect supervision is perhaps better captured by the notion of sparse rewards in reinforcement learning. The RL answer would be better exploration, which can take many forms including simple action-dithering such as ϵ-greedy, entropy regularization (Williams and Peng, 1991), Monte Carlo tree search (Coulom, 2006), randomized value functions (Osband et al., 2014, 2016), and methods which prioritize learning environment dynamics (Duff, 2002) or under-explored states (Kearns and Singh, 2002; Bellemare et al., 2016; Nachum et al., 2016). The majority of these methods employ Monte Carlo sampling for exploration. In 1058 Utterance: the man in the purple shirt and red hat moves just to the right of the man in the red shirt and yellow hat program prob RANDOMER (ϵ = 0.15, β = 0) * move(hasHat(red), rightOf(hasHat(red))) 0.122 * move(hasShirt(purple), rightOf(hasShirt(red))) 0.061 o move(hasHat(red), rightOf(index(allPeople, 1))) 0.059 * move(hasHat(red), rightOf(hasHat(yellow))) 0.019 o move(index(allPeople, 2), rightOf(hasShirt(red))) 0.018 x move(hasHat(red), 8) 0.018 BS-MML o move(index(allPeople, 2), 2) 0.887 x move(index(allPeople, 2), 6) 0.041 x move(index(allPeople, 2), 5) 0.020 x move(index(allPeople, 2), 8) 0.016 x move(index(allPeople, 2), 7) 0.009 x move(index(allPeople, 2), 3) 0.008 Table 6: Top-scoring predictions for a training example from SCENE (* = correct, o = spurious, x = incorrect). 
RANDOMER distributes probability mass over numerous reward-earning programs (including the correct ones), while classic beam search MML overfits to one spurious program, giving it very high probability. contrast, we find randomized beam search to be more suitable in our setting, because it explores low-probability states even when the policy distribution is peaky. Our β-meritocratic update also depends on the fact that beam search returns an entire set of reward-earning programs rather than one, since it renormalizes over the reward-earning set. While similar to entropy regularization, βmeritocratic update is more targeted as it only increases uniformity of the gradient among rewardearning programs, rather than across all programs. Our strategy of using randomized beam search and meritocratic updates lies closer to MML than RL, but this does not imply that RL has nothing to offer in our setting. With the simple connection between RL and MML we established, much of the literature on exploration and variance reduction in RL can be directly applied to MML problems. Of special interest are methods which incorporate a value function such as actor-critic. Maximum likelihood and RL. It is tempting to group our approach with sequence learning methods which interpolate between supervised learning and reinforcement learning (Ranzato et al., 2015; Venkatraman et al., 2015; Ross et al., 2011; Norouzi et al., 2016; Bengio et al., 2015; Levine, 2014). These methods generally seek to make RL training easier by pre-training or “warm-starting” with fully supervised learning. This requires each training example to be labeled with a reasonably correct output sequence. In our setting, this would amount to labeling each example with the correct program, which is not known. Hence, these methods cannot be directly applied. Without access to correct output sequences, we cannot directly maximize likelihood, and instead resort to maximizing the marginal likelihood (MML). Rather than proposing MML as a form of pre-training, we argue that MML is a superior substitute for the standard RL objective, and that the β-meritocratic update is even better. Simulated annealing. Our β-meritocratic update employs exponential smoothing, which bears resemblance to the simulated annealing strategy of Och (2003); Smith and Eisner (2006); Shen et al. (2015). However, a key difference is that these methods smooth the objective function whereas we smooth an expectation in the gradient. To underscore the difference, we note that fixing β = 0 in our method (total smoothing) is quite effective, whereas total smoothing in the simulated annealing methods would correspond to a completely flat objective function, and an uninformative gradient of zero everywhere. Neural semantic parsing. There has been recent interest in using recurrent neural networks for semantic parsing, both for modeling logical forms (Dong and Lapata, 2016; Jia and Liang, 2016; Liang et al., 2017) and for end-to-end execution (Yin et al., 2015; Neelakantan et al., 2016). We develop a neural model for the context-dependent setting, which is made possible by a new stackbased language similar to Riedel et al. (2016). Acknowledgments. This work was supported by the NSF Graduate Research Fellowship under No. DGE-114747 and the NSF CAREER Award under No. IIS-1552635. Reproducibility. Our code is made available at https://github.com/kelvinguu/lang2program. Reproducible experiments are available at https://worksheets.codalab.org/worksheets/ 0x88c914ee1d4b4a4587a07f36f090f3e5/. 
References M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, 1059 M. Devin, S. Ghemawat, I. J. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. J´ozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Man´e, R. Monga, S. Moore, D. G. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. A. Tucker, V. Vanhoucke, V. Vasudevan, F. B. Vi´egas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. 2015. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467 . Y. Artzi and L. Zettlemoyer. 2011. Bootstrapping semantic parsers from conversations. In Empirical Methods in Natural Language Processing (EMNLP). pages 421–432. Y. Artzi and L. Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics (TACL) 1:49–62. D. Bahdanau, K. Cho, and Y. Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR). M. Bellemare, S. Srinivasan, G. Ostrovski, T. Schaul, D. Saxton, and R. Munos. 2016. Unifying countbased exploration and intrinsic motivation. In Advances in Neural Information Processing Systems (NIPS). pages 1471–1479. S. Bengio, O. Vinyals, N. Jaitly, and N. Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems (NIPS). pages 1171– 1179. S. Branavan, H. Chen, L. S. Zettlemoyer, and R. Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACLIJCNLP). pages 82–90. K. Clark and C. D. Manning. 2016. Deep reinforcement learning for mention-ranking coreference models. arXiv preprint arXiv:1609.08667 . J. Clarke, D. Goldwasser, M. Chang, and D. Roth. 2010. Driving semantic parsing from the world’s response. In Computational Natural Language Learning (CoNLL). pages 18–27. R. Coulom. 2006. Efficient selectivity and backup operators in Monte-Carlo tree search. In International Conference on Computers and Games. pages 72–83. A. P. Dempster, L. N. M., and R. D. B. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B 39(1):1–38. L. Dong and M. Lapata. 2016. Language to logical form with neural attention. In Association for Computational Linguistics (ACL). M. O. Duff. 2002. Optimal Learning: Computational procedures for Bayes-adaptive Markov decision processes. Ph.D. thesis, University of Massachusetts Amherst. X. Glorot and Y. Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In International Conference on Artificial Intelligence and Statistics. R. Jia and P. Liang. 2016. Data recombination for neural semantic parsing. In Association for Computational Linguistics (ACL). M. Kearns and S. Singh. 2002. Near-optimal reinforcement learning in polynomial time. Machine Learning 49(2):209–232. D. Kingma and J. Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . J. Krishnamurthy and T. Mitchell. 2012. Weakly supervised training of semantic parsers. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL). pages 754–765. S. Levine. 2014. 
Motor Skill Learning with Local Trajectory Methods. Ph.D. thesis, Stanford University. J. Li, W. Monroe, A. Ritter, D. Jurafsky, M. Galley, and J. Gao. 2016. Deep reinforcement learning for dialogue generation. In Empirical Methods in Natural Language Processing (EMNLP). C. Liang, J. Berant, Q. Le, and K. D. F. N. Lao. 2017. Neural symbolic machines: Learning semantic parsers on Freebase with weak supervision. In Association for Computational Linguistics (ACL). P. Liang, M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In Association for Computational Linguistics (ACL). pages 590–599. R. Long, P. Pasupat, and P. Liang. 2016. Simpler context-dependent logical forms via model projections. In Association for Computational Linguistics (ACL). O. Nachum, M. Norouzi, and D. Schuurmans. 2016. Improving policy gradient by exploring under-appreciated rewards. arXiv preprint arXiv:1611.09321 . K. Narasimhan, T. Kulkarni, and R. Barzilay. 2015. Language understanding for text-based games using deep reinforcement learning. arXiv preprint arXiv:1506.08941 . A. Neelakantan, Q. V. Le, and I. Sutskever. 2016. Neural programmer: Inducing latent programs with gradient descent. In International Conference on Learning Representations (ICLR). 1060 M. Norouzi, S. Bengio, N. Jaitly, M. Schuster, Y. Wu, D. Schuurmans, et al. 2016. Reward augmented maximum likelihood for neural structured prediction. In Advances In Neural Information Processing Systems. pages 1723–1731. F. J. Och. 2003. Minimum error rate training in statistical machine translation. In Association for Computational Linguistics (ACL). pages 160–167. I. Osband, C. Blundell, A. Pritzel, and B. V. Roy. 2016. Deep exploration via bootstrapped DQN. In Advances In Neural Information Processing Systems. pages 4026–4034. I. Osband, B. V. Roy, and Z. Wen. 2014. Generalization and exploration via randomized value functions. arXiv preprint arXiv:1402.0635 . P. Pasupat and P. Liang. 2015. Compositional semantic parsing on semi-structured tables. In Association for Computational Linguistics (ACL). P. Pasupat and P. Liang. 2016. Inferring logical forms from denotations. In Association for Computational Linguistics (ACL). J. Pennington, R. Socher, and C. D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP). M. Ranzato, S. Chopra, M. Auli, and W. Zaremba. 2015. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732 . S. Reddy, M. Lapata, and M. Steedman. 2014. Largescale semantic parsing without question-answer pairs. Transactions of the Association for Computational Linguistics (TACL) 2(10):377–392. S. Riedel, M. Bosnjak, and T. Rockt¨aschel. 2016. Programming with a differentiable forth interpreter. CoRR, abs/1605.06640 . S. Ross, G. Gordon, and A. Bagnell. 2011. A reduction of imitation learning and structured prediction to noregret online learning. In Artificial Intelligence and Statistics (AISTATS). S. Shen, Y. Cheng, Z. He, W. He, H. Wu, M. Sun, and Y. Liu. 2015. Minimum risk training for neural machine translation. arXiv preprint arXiv:1512.02433 . D. A. Smith and J. Eisner. 2006. Minimum risk annealing for training log-linear models. In International Conference on Computational Linguistics and Association for Computational Linguistics (COLING/ACL). pages 787–794. R. Sutton, D. McAllester, S. Singh, and Y. Mansour. 1999. Policy gradient methods for reinforcement learning with function approximation. 
In Advances in Neural Information Processing Systems (NIPS). A. Venkatraman, M. Hebert, and J. A. Bagnell. 2015. Improving multi-step prediction of learned time series models. In Association for the Advancement of Artificial Intelligence (AAAI). pages 3024–3030. R. J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning 8(3):229–256. R. J. Williams and J. Peng. 1991. Function optimization using connectionist reinforcement learning algorithms. Connection Science 3(3):241–268. P. Yin, Z. Lu, H. Li, and B. Kao. 2015. Neural enquirer: Learning to query tables. arXiv preprint arXiv:1512.00965 . A Hyperparameters in Table 2 System ALCHEMY TANGRAMS SCENE REINFORCE Sample size 32 Baseline 10−2 ϵ = 0.15 embed TOKENS Sample size 32 Baseline 10−2 ϵ = 0.15 embed TOKENS Sample size 32 Baseline 10−4 ϵ = 0.15 embed TOKENS BS-MML Beam size 128 embed TOKENS Beam size 128 embed TOKENS Beam size 128 embed TOKENS RANDOMER β = 1 ϵ = 0.05 embed TOKENS β = 1 ϵ = 0.15 embed TOKENS β = 0 ϵ = 0.15 embed STACK 1061 B SCONE domains and program tokens token type semantics Shared across ALCHEMY, TANGRAMS, SCENE 1, 2, 3, . . . constant push: number -1, -2, -3, . .. red, yellow, green, constant push: color orange, purple, brown allObjects constant push: the list of all objects index function pop: a list L and a number i push: the object L[i] (the index starts from 1; negative indices are allowed) prevArgj (j = 1, 2) function pop: a number i push: the j argument from the ith action prevAction action pop: a number i perform: fetch the ith action and execute it using the arguments on the stack Additional tokens for the ALCHEMY domain An ALCHEMY world contains 7 beakers. Each beaker may contain up to 4 units of colored chemical. 1/1 constant push: fraction (used in the drain action) hasColor function pop: a color c push: list of beakers with chemical color c drain action pop: a beaker b and a number or fraction a perform: remove a units of chemical (or all chemical if a = 1/1) from b pour action pop: two beakers b1 and b2 perform: transfer all chemical from b1 to b2 mix action pop: a beaker b perform: turn the color of the chemical in b to brown Additional tokens for the TANGRAMS domain A TANGRAMS world contains a row of tangram pieces with different shapes. The shapes are anonymized; a tangram can be referred to by an index or a history reference, but not by shape. swap action pop: two tangrams t1 and t2 perform: exchange the positions of t1 and t2 remove action pop: a tangram t perform: remove t from the stage add action pop: a number i and a previously removed tangram t perform: insert t to position i Additional tokens for the SCENE domain A SCENE world is a linear stage with 10 positions. Each position may be occupied by a person with a colored shirt and optionally a colored hat. There are usually 1-5 people on the stage. 
noHat constant push: pseudo-color (indicating that the person is not wearing a hat) hasShirt, hasHat function pop: a color c push: the list of all people with shirt or hat color c hasShirtHat function pop: two colors c1 and c2 push: the list of all people with shirt color c1 and hat color c2 leftOf, rightOf function pop: a person p push: the location index left or right of p create action pop: a number i and two colors c1, c2 perform: add a new person at position i with shirt color c1 and hat color c2 move action pop: a person p and a number i perform: move p to position i swapHats action pop: two people p1 and p2 perform: have p1 and p2 exchange their hats leave action pop: a person p perform: remove p from the stage 1062
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1063–1072 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1098 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1063–1072 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1098 Diversity driven attention model for query-based abstractive summarization Preksha Nema† Mitesh M. Khapra† Anirban Laha∗† Balaraman Ravindran† †Indian Institute of Technology Madras, India ∗IBM Research India {preksha,miteshk}@cse.iitm.ac.in [email protected] [email protected] Abstract Abstractive summarization aims to generate a shorter version of the document covering all the salient points in a compact and coherent fashion. On the other hand, query-based summarization highlights those points that are relevant in the context of a given query. The encodeattend-decode paradigm has achieved notable success in machine translation, extractive summarization, dialog systems, etc. But it suffers from the drawback of generation of repeated phrases. In this work we propose a model for the query-based summarization task based on the encode-attend-decode paradigm with two key additions (i) a query attention model (in addition to document attention model) which learns to focus on different portions of the query at different time steps (instead of using a static representation for the query) and (ii) a new diversity based attention model which aims to alleviate the problem of repeating phrases in the summary. In order to enable the testing of this model we introduce a new query-based summarization dataset building on debatepedia. Our experiments show that with these two additions the proposed model clearly outperforms vanilla encode-attend-decode models with a gain of 28% (absolute) in ROUGE-L scores. 1 Introduction Over the past few years neural models based on the encode-attend-decode (Bahdanau et al., 2014) paradigm have shown great success in various natural language generation (NLG) tasks such as machine translation (Bahdanau et al., 2014), abstractive summarization ((Rush et al., 2015),(Nallapati et al., 2016)) dialog (Li et al., 2016), etc. One such NLG problem which has not received enough attention in the past is query based abstractive text summarization where the aim is to generate the summary of a document in the context of a query. In general, abstractive summarization, aims to cover all the salient points of a document in a compact and coherent fashion. On the other hand, query focused summarization highlights those points that are relevant in the context of the query. Thus given a document on “the super bowl”, the query “How was the half-time show?”, would result in a summary that would not cover the actual game itself. Note that there has been some work on query based extractive summarization in the past where the aim is to simply extract the most salient sentence(s) from a document and treat these as a summary. There is no natural language generation involved. Since, we were interested in abstractive (as opposed to extractive) summarization we created a new dataset based on debatepedia. This dataset contains triplets of the form (query, document, summary). 
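For concreteness, a single triple (using the euthanasia example shown in Table 1 below) could be represented as follows; the field names here are illustrative, not the released file format.

```python
# An illustrative (query, document, summary) triple from the dataset; the field
# names are assumptions for illustration, not the actual data schema.
example = {
    "query": "Is euthanasia better than withdrawing life support (non-treatment)?",
    "document": ('The "natural death" alternative to euthanasia is not keeping '
                 "someone alive via life support until they die on life support. ..."),
    "summary": "The alternative to euthanasia is a natural death without life support.",
}
```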
Further, each summary is abstractive and not extractive in the sense that the summary does not necessarily comprise of a sentence which is simply copied from the original document. Using this dataset as a testbed, we focus on a recurring problem in models based on the encode-attend-decode paradigm. Specifically, it is observed that the summaries produced by such models contain repeated phrases. Table 1 shows a few such examples of summaries gener1063 Document Snippet: The “natural death” alternative to euthanasia is not keeping someone alive via life support until they die on life support. That would, indeed, be unnatural. The natural alternative is, instead, to allow them to die off of life support. Query: Is euthanasia better than withdrawing life support (non-treatment)? Ground Truth Summary: The alternative to euthanasia is a natural death without life support. Predicted Summary: the large to euthanasia is a natural death life life use Document Snippet: Legalizing same-sex marriage would also be a recognition of basic American principles, and would represent the culmination of our nation’s commitment to equal rights. It is, some have said, the last major civil-rights milestone yet to be surpassed in our two-century struggle to attain the goals we set for this nation at its formation. Query: Is gay marriage a civil right? Ground Truth Summary: Gay marriage is a fundamental equal right. Predicted Summary: gay marriage is a appropriate right right Table 1: Examples showing repeated words in the output of encoder-decoder models ated by such a model when trained on this new dataset. This problem has also been reported by (Chen et al., 2016) in the context of summarization and by (Sankaran et al., 2016) in the context of machine translation. We first provide an intuitive explanation for this problem and then propose a solution for alleviating it. A typical encode-attend-decode model first computes a vectorial representation for the document and the query and then produces a contextual summary one word at a time. Each word is produced by feeding a new context vector to the decoder at each time step by attending to different parts of the document and query. If the decoder produces the same word or phrase repeatedly then it could mean that the context vectors fed to the decoder at these time steps are very similar. We propose a model which explicitly prevents this by ensuring that successive context vectors are orthogonal to each other. Specifically, we subtract out any component that the current context vector has in the direction of the previous context vector. Notice that, we do not require the current context vector to be orthogonal to all previous context vectors but just its immediate predecessor. This enables the model to attend to words repeatedly if required later in the process. To account for the complete history (or all previous context vectors) we also propose an extension of this idea where we pass the sequence of context vectors through a LSTM (Hochreiter and Schmidhuber, 1997) and ensure that the current state produced by the LSTM is orthogonal to the history. At each time step, the state of the LSTM is then fed to the decoder to produce one word in the summary. Our contributions can be summarized as follows: (i) We propose a new dataset for query based abstractive summarization and evaluate encode-attend-decode models on this dataset (ii) We study the problem of repeating phrases in NLG in the context of this dataset and propose two solutions for countering this problem. 
We show that our method outperforms a vanilla encoder-decoder model with a gain of 28% (absolute) in ROUGE-L score (iii) We also demonstrate that our method clearly outperforms a recent state of the art method proposed for handling the problem of repeating phrases with a gain of 7% (absolute) in ROUGE-L scores (iv) We do a qualitative analysis of the results and show that our model indeed produces outputs with fewer repetitions. 2 Related Work Summarization has been studied in the context of text ((Mani, 2001), (Das and Martins, 2007), (Nenkova and McKeown, 2012)) as well as speech ((Zhu and Penn, 2006), (Zhu et al., 2009)). A vast majority of this work has focused on extractive summarization where the idea is to construct a summary by selecting the most relevant sentences from the document ((Neto et al., 2002), (Erkan and Radev, 2004), (Filippova and Altun, 2013), (Colmenares et al., 2015), (Riedhammer et al., 2010), (Ribeiro et al., 2013)). There has been some work on abstractive summarization in the context of DUC-2003 and DUC-2004 contests (Zajic et al.). We refer the reader to (Das and Martins, 2007) and (Nenkova and McKeown, 2012) for an excellent survey of 1064 the field. Recent research in abstractive summarization has focused on data driven neural models based on the encode-attend-decode paradigm (Bahdanau et al., 2014). For example, (Rush et al., 2015), report state of the art results on the GigaWord and DUC corpus using such a model. Similarly, the work of Lopyrev (2015) uses neural networks to generate news headline from short news stories. Chopra et al. (2016) extend the work of Rush et al. (2015) and report further improvements on the two datasets. Hu et al. (2015) introduced a dataset for Chinese short text summarization and evaluated a similar RNN encoder-decoder model on it. One recurring problem in encoder-decoder models for NLG is that they often repeat the same phrase/word multiple times in the summary (at the cost of both coherency and fluency). Sankaran et al. (2016) study this problem in the context of MT and propose a temporal attention model which enforces the attention weights for successive time steps to be different from each other. Similarly, and more relevant to this work, Chen et al. (2016) propose a distraction based attention model which maintains a history of attention vectors and context vectors. It then subtracts this history from the current attention and context vector. When evaluated on our dataset their method performs poorly. This could be because their method is very aggressive in dealing with the history (as explained later in the Experiments section). On the other hand, our method has a better way of handling history (by passing context vectors through an LSTM recurrent network) which gives us the flexibility to forget/retain some portions of the history and at the same time produce diverse context vectors at successive time steps. We evaluate our method in the context of query based abstractive summarization - a problem which has received almost no attention in the past due to unavailability of datasets. We create a new dataset for this task and show that our method indeed produces better output by reducing the number of repeated phrases produced by encoder decoder models. Average number of words per Document Summary Query 66.4 11.16 9.97 Table 2: Average length of documents/queries/summaries in the dataset 3 Dataset As mentioned earlier, there are no existing datasets for query based abstractive summarization. 
We create such a dataset from Debatepedia an encyclopedia of pro and con arguments and quotes on critical debate topics. There are 663 debates in the corpus (we have considered only those debates which have at least one query with one document). These 663 debates belong to 53 overlapping categories such as Politics, Law, Crime, Environment, Health, Morality, Religion, etc. A given topic can belong to more than one category. For example, the topic “Eye for an Eye philosophy” belongs to both “Law” as well as “Morality”. The average number of queries per debate is 5 and the average number of documents per query is 4. Please refer to the dataset url1 for more details about number of debates per category. For example, Figure 1 shows the queries associated with the topic “Algae Biofuel”. It also lists the set of documents and an abstractive summary associated with each query. As is obvious from the example, the summary is an abstractive summary and not extracted directly from the document. We crawled 12695 such {query, document, summary} triples from debatepedia (these were all the triples that were available). Table 2 reports the average length of the query, summary and documents in this dataset. We used 10 fold cross validation for all our experiments. Each fold uses 80% of the documents for training, 10% for validation and 10% for testing. 4 Proposed model Given a query q = q1, q2, ..., qk containing k words, a document d = d1, d2, ..., dn containing n words, the task is to generate a contextual summary y = y1, y2, ..., ym containing 1http://www.cse.iitm.ac.in/˜miteshk/ datasets/qbas.html 1065 Figure 1: Queries associated with the topic “algae biofuel” Figure 2: Documents and summaries for a given query m words. This can be modeled as the problem of finding a y∗that maximizes the probability p(y|q, d) which can be further decomposed as: y∗= arg max y m Y t=1 p(yt|y1, ..., yt−1, q, d) (1) We now describe a way of modeling p(yt|y1, ..., yt−1, q, d) using the neural encoderattention-decoder paradigm. The proposed model contains the following components: (i) an encoder RNN for the query (ii) an encoder RNN for the document (iii) attention mechanism for the query (iv) attention mechanism for the document and (v) a decoder RNN. All the RNNs use a GRU cell. Encoder for the query: We use a recurrent neural network with Gated Recurrent Units (GRU) for encoding the query. It reads the query q = q1, q2, ..., qk from left to right and computes a hidden representation for each time-step as: hq i = GRUq(hq i−1, e(qi)) (2) where e(qi) ∈Rd is the d-dimensional embedding of the query word qi. Encoder for the document: This is similar to the query encoder and reads the document d = d1, d2, ..., dn from left to right and computes a hidden representation for each time-step as: hd i = GRUd(hd i−1, e(di)) (3) where e(di) ∈Rd is the d-dimensional embedding of the document word di. Attention mechanism for the query : At each time step, the decoder produces an output word by focusing on different portions of the query (document) with the help of a query (document) attention model. We first describe the query attention model which assigns weights αq t,i to each word in the query at each decoder timestep using the following equations. aq t,i = vT q tanh(Wqst + Uqhq i) (4) αq t,i = exp(aq t,i) Pk j=1 exp(aq t,j) (5) where st is the current state of the decoder at time step t (we will see an exact formula for this soon). 
Wq ∈Rl2×l1, Uq ∈Rl2×l2, vq ∈Rl2, l1 is the size of the decoder’s hidden state, l2 is both the size of hq i and also the size of the final query representation at time step t, which is computed as: qt = k X i=1 αq t,ihq i (6) Attention mechanism for the document : We now describe the document attention model which assigns weights to each word in the document using the following equations. ad t,i = vT d tanh(Wdst + Udhd i + Zqt) (7) αd t,i = exp(ad t,i) Pn j=1 exp(ad t,j) where st is the current state of the decoder at time step t (we will see an exact formula for this 1066 soon). Wd ∈Rl4×l1, Ud ∈Rl4×l4, Z ∈Rl4×l2, vd ∈Rl2, l4 is the size of hd i and also the size of the final document representation dt which is passed to the decoder at time step t as: dt = n X i=1 αd t,ihd i (8) Note that dt now encodes the relevant information from the document as well as the query (see Equation (7)) at time step t. We refer to this as the context vector for the decoder. Decoder: The hidden state of the decoder st at each time t is again computed using a GRU as follows: st = GRUdec(st−1, [e(yt−1), dt−1]) (9) where, yt−1 gives a distribution over the vocabulary words at timestep t −1 and is computed as: yt = softmax(Wof(Wdecst + Vdecdt)) (10) where Wo ∈RN×l1, Wdec ∈Rl1×l1, Vdec ∈ Rl1×l4, N is the vocabulary size, yt is the final output of the model which defines a probability distribution over the output vocabulary. This is exactly the quantity defined in Equation (1) that we wanted to model (p(yt|y1, ..., yt−1, q, d)). Further, note that, e(yt−1) is the d-dimensional embedding of the word which has the highest probability under the distribution yt−1. Also [e(yt−1), dt−1] means a concatenation of the vectors e(yt−1), dt−1. We chose f to be the identity function. The model as described above is an instantiation of the encoder-attention-decoder idea applied to query based abstractive summarization. As mentioned earlier (and demonstrated later through experiments), this model suffers from the problem of repeating the same phrase/word in the output. We now propose a new attention model which we refer to as diversity based attention model to address this problem. 4.1 Diversity based attention model As hypothesized earlier, if the decoder produces the same phrase/word multiple times then it is possible that the context vectors being fed to the decoder at consecutive time steps are Document Encoder support . . . same Legalizing Gay marriage is a fundamental equal right Decoder Is gay marriage a civil right? Query Encoder Document Attention Diversity Cell Query Attention Figure 3: Proposed model for Query based Abstractive Summarization with (i) query encoder (ii) document encoder (iii) query attention model (iv) diversity based document attention model and (v) decoder. The green and red arrows show the connections for timestep 3 of the decoder. very similar. We propose four models (D1, D2, SD1, SD2) to directly address this problem. D1: In this model, after computing dt as described in Equation (8), we make it orthogonal to the context vector at time t −1: d ′ t = dt −dT t d ′ t−1 d ′T t−1d ′ t−1 d ′ t−1 (11) SD1: The above model imposes a hard orthogonality constraint on the context vector(d ′ t). We also propose a relaxed version of the above model which uses a gating parameter. 
This gating parameter decides what fraction of the previous context vector should be subtracted from the current context vector using the following equations: γt = Wgdt−1 + bg d ′ t = dt −γt dT t d ′ t−1 d ′T t−1d ′ t−1 d ′ t−1 where Wg ∈Rl4×l4, bg ∈Rl4, l4 is the dimension of dt as defined in equation (8). D2: The above model only ensures that the current context vector is diverse w.r.t the previous context vector. It ignores all history before time step t −1. To account for the history, we treat successive context vectors as a sequence and use 1067 a modified LSTM cell to compute the new state at each time step. Specifically, we use the following set of equations to compute a diverse context at time t: it = σ(Widt + Uiht−1 + bi) ft = σ(Wfdt + Ufht−1 + bf) ot = σ(Wodt + Uoht−1 + bo) ˆct = tanh(Wcdt + Ucht−1 + bc) ct = it ⊙ˆct + ft ⊙ct−1 cdiverse t = ct −ctTct−1 cT t−1ct−1 ct−1 (12) ht = ot ⊙tanh(cdiverse t ) d ′ t = ht (13) where Wi, Wf, Wo, Wc ∈ Rl5×l4, Ui, Uf, Uo, Uc ∈ Rl5×l4, dt is the l4dimensional output of Equation (8); l5 is number of hidden units in the LSTM cell. This final d ′ t from Equation (13) is then used in Equation (9). Note that Equation (12) ensures that state of the LSTM at time step t is orthogonal to the previous history. Figure 3 shows a pictorial representation of the model with a diversity LSTM cell. SD2: This model again uses a relaxed version of the orthogonality constraint used in D2. Specifically, we define a gating parameter gt and replace (12) above by (14) as define below: gt = σ(Wgdt + Ught−1 + bo) cdiverse t = ct −gt ctTct−1 cT t−1ct−1 ct−1 (14) where Wg ∈Rl5×l4, Ug ∈Rl5×l4 5 Baseline Methods We compare with two recently proposed baseline diversity methods (Chen et al., 2016) as described below. Note that these methods were proposed in the context of abstractive summarization (not query based abstractive summarization) and we adapt them for the task of query based abstractive summarization. Below we just highlight the key differences from our model in computing the context vector d ′ t passed to the decoder. M1: This model accumulates all the previous context vectors as Pt−1 j=1 d ′ j and incorporates this history while computing a diverse context vector: d ′ t = tanh(Wcdt −Uc t−1 X j=1 d ′ j) (15) where Wc, Uc ∈Rl4×l4 are diagonal matrices. We then use this diversity driven context d ′ t in Equation (9) and (10). M2: In this model, in addition to computing a diverse context as described in Equation (15), the attention weights at each time step are also forced to be diverse from the attention weights at the previous time step. α ′ t,i = vT a tanh(Was ′ t + Uadt −ba t−1 X j=1 α ′ j,i) where Wa ∈Rl1×l1, Ua ∈Rl1×l4, ba, va ∈Rl1, l1 is the number of hidden units in the decoder GRU. Once again, they maintain a history of attention weights and compute a diverse attention vector by subtracting the history from the current attention vector. 6 Experimental Setup We evaluate our models on the dataset described in section 3. Note that there are no prior baselines on query based abstractive summarization so we could only compare with different variations of the encoder decoder models as described above. Further, we compare our diversity based attention models with existing models for diversity by suitably adapting them to this problem as described earlier. Specifically, we compare the performance of the following models: • Vanilla e-a-d: This is the vanilla encoderattention-decoder model adapted to the problem of abstractive summarization. 
It contains the following components (i) document encoder (ii) document attention model (iii) decoder. It does not contain an encoder or attention model for the query. This helps us understand the importance of the query. • Queryenc: This model contains the query encoder in addition to the three components used in the vanilla model above. It does not contain any attention model for the query. 1068 • Queryatt: This model contains the query attention model in addition to all the components in Queryenc. • D1: The diversity attention model as described in Section 4.1. • D2: The LSTM based diversity attention model as described in Section 4.1. • SD1: The soft diversity attention model as described in Section 4.1 • SD2: The soft LSTM based diversity attention model as described in Section 4.1 • B1: Diversity cell in Figure3 is replaced by the basic LSTM cell (i.e. cdiverse t = ct instead of using Equation (12). This helps us understand whether simply using an LSTM to track the history of context vectors (without imposing a diversity constraint) is sufficient. • M1: The baseline model which operates on the context vector as described in Section 5. • M2: The baseline model which operates on the attention weights in addition to the context vector as described in Section 5. We used 80% of the data for training, 10% for validation and 10% for testing. We create 10 such folds and report the average Rouge-1, Rouge-2, Rouge-L scores across the 10 folds. The hyperparameters (batch size and GRU cell sizes) of all the models are tuned on the validation set. We tried the following batch sizes : 32, 64 and the following GRU cell sizes 200, 300, 400. We used Adam (Kingma and Ba, 2014) as the optimization algorithm with the initial learning rate set to 0.0004, β1 = 0.9, β2 = 0.999. We used pre-trained publicly available Glove word embeddings2 and fine-tuned them during training. The same word embeddings are used for the query words and the document words. Table 3 summarizes the results of our experiments. 2http://nlp.stanford.edu/projects/glove/ Models ROUGE-1 ROUGE-2 ROUGE-L Vanilla e-a-d 13.73 2.06 12.84 Queryenc 20.87 3.39 19.38 Queryatt 29.28 10.24 28.21 B1 23.18 6.46 22.03 M1 33.06 13.35 32.17 M2 18.42 4.47 17.45 D1 33.85 13.65 32.99 SD1 31.36 11.23 30.5 D2 38.12 16.76 37.31 SD2 41.26 18.75 40.43 Table 3: Performance on various models using fulllength ROUGE metrics 7 Discussions In this section, we discuss the results of the experiments reported in Table 3. 1. Effect of Query: Comparing rows 1 and 2 we observe that adding an encoder for the query and allowing it to influence the outputs of the decoder indeed improves the performance. This is expected as the query contains some keywords which could help in sharpening the focus of the summary. 2. Effect of Query attention model: Comparing rows 2 and 3 we observe that using an attention model to dynamically compute the query representation at each time step improves the results. This suggests that the attention model indeed learns to focus on relevant portions of the query at different time steps. 3. Effect of Diversity models: All the diversity models introduced in the paper (rows 7, 8, 9, 10) give significant improvement over the nondiversity models. In particular, the modified LSTM based diversity model gives the best results. This is indeed very encouraging and Table 4 shows some sample summaries comparing the performance of different models. 4. 
Comparison with baseline diversity models: The baseline diversity model M1 performs at par with our models D1 and SD1 but not as good as D2 and SD2. However, the model M2 performs very poorly. We believe that simultaneously adding a constraint on the context vectors as well as attention weights (as is indeed the case with M2) is a bit too aggressive and leads to poor performance (although this needs further investigation). 5. Quantitative Analysis: In addition to the qualitative analysis reported in Table 4 we also did a quantitative analysis by counting the num1069 Source:Although cannabis does indeed have some harmful effects, it is no more harmful than legal substances like alcohol and tobacco. As a matter of fact, research by the British Medical Association shows that nicotine is far more addictive than cannabis. Furthermore, the consumption of alcohol and the smoking of cigarettes cause more deaths per year than does the use of cannabis (e.g. through lung cancer, stomach ulcers, accidents caused by drunk driving etc.). The legalization of cannabis will remove an anomaly in the law whereby substances that are more dangerous than cannabis are legal whilst the possession and use of cannabis remains unlawful. Query: is marijuana harmless enough to be considered a medicine G: marijuana is no more harmful than tobacco and alcohol Queryattn: marijuana is no the drug drug for tobacco and tobacco D1: marijuana is no more harmful than tobacco and tobacco SD1: marijuana is more for evidence than tobacco and health D2: marijuana is no more harmful than tobacco and use SD2: marijuana is no more harmful than tobacco and alcohol Source:Fuel cell critics point out that hydrogen is flammable, but so is gasoline. Unlike gasoline, which can pool up and burn for a long time, hydrogen dissipates rapidly. Gas tanks tend to be easily punctured, thin-walled containers, while the latest hydrogen tanks are made from Kevlar. Also, gaseous hydrogen isn’t the only method of storage under consideration–BMW is looking at liquid storage while other researchers are looking at chemical compound storage, such as boron pellets. Query: safety are hydrogen fuel cell vehicles safe G: hydrogen in cars is less dangerous than gasoline Queryattn: hydrogen is hydrogen hydrogen hydrogen fuel energy D1:hydrogen in cars is less natural than gasoline SD1: hydrogen in cars is reduce risk than fuel D2: hydrogen in waste is less effective than gasoline SD2:hydrogen in cars is less dangerous than gasoline Source:The basis of all animal rights should be the Golden Rule: we should treat them as we would wish them to treat us, were any other species in our dominant position. Query: do animals have rights that makes eating them inappropriate G: animals should be treated as we would want to be treated Queryatt: animals should be treated as we would protect to be treated D1: animals should be treated as we most individual to be treated SD1: animals should be treated as we would physically to be treated D2: animals should be treated as we would illegal to be treated SD2: animals should be treated as those would want to be treated Table 4: Summaries generated by different models. In general, we observed that the baseline models which do not use a diversity based attention model tend to produce more repetitions. Notice that the last example shows that our model is not very aggressive in dealing with the history and is able to produce valid repetitions (treated ... 
treated) when needed ber of sentences containing repeated words generated by different models. Specifically for the 1268 test instances we counted the number of sentences containing repeated words as generated by different modes. Table 5 summarizes this analysis. 8 Conclusion In this work we proposed a query-based summarization method. The unique feature of Model Number Queryattn 498 SD1 352 SD2 344 D1 191 D2 179 Table 5: Average number of sentences with repeating words across 10 folds 1070 the model is a novel diversification mechanism based on successive orthogonalization. This gives us the flexibility to: (i) provide diverse context vectors at successive time steps and (ii) pay attention to words repeatedly if need be later in the summary (as opposed to existing models which aggressively delete the history). We also introduced a new data set and empirically verified we perform significantly better (gain of 28% (absolute) in ROUGE-L score) than applying a plain encode-attend-decode mechanism to this problem. We observe that adding an attention mechanism on the query string gives significant improvements. We also compare with a state of the art diversity model and outperform it by a good margin (gain of 7% (absolute) in ROUGE-L score). The diversification model proposed is general enough to apply to other NLG tasks with suitable modifications and we are currently working on extending this to dialog systems and general summarization. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 . Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, and Hui Jiang. 2016. Distraction-based neural networks for modeling documents. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16). pages 2754–2760. Sumit Chopra, Michael Auli, Alexander M Rush, and SEAS Harvard. 2016. Abstractive sentence summarization with attentive recurrent neural networks. Proceedings of NAACL-HLT16 pages 93–98. Carlos A Colmenares, Marina Litvak, Amin Mantrach, and Fabrizio Silvestri. 2015. Heads: Headline generation as sequence prediction using an abstract feature-rich space. In HLT-NAACL. pages 133–142. Dipanjan Das and Andr´e FT Martins. 2007. A survey on automatic text summarization. Literature Survey for the Language and Statistics II course at CMU 4:192–195. G¨unes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research 22:457–479. Katja Filippova and Yasemin Altun. 2013. Overcoming the lack of parallel data in sentence compression. In EMNLP. Citeseer, pages 1481–1491. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735– 1780. Baotian Hu, Qingcai Chen, and Fangze Zhu. 2015. Lcsts: A large scale chinese short text summarization dataset. arXiv preprint arXiv:1506.05865 . Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Jiwei Li, Michel Galley, Chris Brockett, Georgios P Spithourakis, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural conversation model. arXiv preprint arXiv:1603.06155 . Konstantin Lopyrev. 2015. Generating news headlines with recurrent neural networks. arXiv preprint arXiv:1512.01712 . Inderjeet Mani. 2001. Automatic summarization, volume 3. John Benjamins Publishing. Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. 2016. 
Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023 . Ani Nenkova and Kathleen McKeown. 2012. A survey of text summarization techniques. In Mining text data, Springer, pages 43–76. Joel Larocca Neto, Alex A Freitas, and Celso AA Kaestner. 2002. Automatic text summarization using a machine learning approach. In Brazilian Symposium on Artificial Intelligence. Springer, pages 205–215. Ricardo Ribeiro, Lu´ıs Marujo, David Martins de Matos, Joao P Neto, Anatole Gershman, and Jaime Carbonell. 2013. Self reinforcement for important passage retrieval. In Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval. ACM, pages 845–848. Korbinian Riedhammer, Benoit Favre, and Dilek Hakkani-T¨ur. 2010. Long story short–global unsupervised models for keyphrase based meeting summarization. Speech Communication 52(10):801– 815. Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685 . Baskaran Sankaran, Haitao Mi, Yaser Al-Onaizan, and Abe Ittycheriah. 2016. Temporal attention model for neural machine translation. arXiv preprint arXiv:1608.02927 . David Zajic, Bonnie Dorr, and Richard Schwartz. ???? Bbn/umd at duc-2004: Topiary. 1071 Xiaodan Zhu and Gerald Penn. 2006. Comparing the roles of textual, acoustic and spoken-language features on spontaneous-conversation summarization. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers. Association for Computational Linguistics, pages 197–200. Xiaodan Zhu, Gerald Penn, and Frank Rudzicz. 2009. Summarizing multiple spoken documents: finding evidence from untranscribed audio. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2. Association for Computational Linguistics, pages 549–557. 1072
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1073–1083 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1099 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1073–1083 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1099 Get To The Point: Summarization with Pointer-Generator Networks Abigail See Stanford University [email protected] Peter J. Liu Google Brain [email protected] Christopher D. Manning Stanford University [email protected] Abstract Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points. 1 Introduction Summarization is the task of condensing a piece of text to a shorter version that contains the main information from the original. There are two broad approaches to summarization: extractive and abstractive. Extractive methods assemble summaries exclusively from passages (usually whole sentences) taken directly from the source text, while abstractive methods may generate novel words and phrases not featured in the source text – as a human-written abstract usually does. The extractive approach is easier, because copying large Original Text (truncated): lagos, nigeria (cnn) a day after winning nigeria’s presidency, muhammadu buhari told cnn’s christiane amanpour that he plans to aggressively fight corruption that has long plagued nigeria and go after the root of the nation’s unrest. buhari said he’ll “rapidly give attention” to curbing violence in the northeast part of nigeria, where the terrorist group boko haram operates. by cooperating with neighboring nations chad, cameroon and niger, he said his administration is confident it will be able to thwart criminals and others contributing to nigeria’s instability. for the first time in nigeria’s history, the opposition defeated the ruling party in democratic elections. buhari defeated incumbent goodluck jonathan by about 2 million votes, according to nigeria’s independent national electoral commission. the win comes after a long history of military rule, coups and botched attempts at democracy in africa’s most populous nation. Baseline Seq2Seq + Attention: UNK UNK says his administration is confident it will be able to destabilize nigeria’s economy. UNK says his administration is confident it will be able to thwart criminals and other nigerians. he says the country has long nigeria and nigeria’s economy. 
Pointer-Gen: muhammadu buhari says he plans to aggressively fight corruption in the northeast part of nigeria. he says he’ll “rapidly give attention” to curbing violence in the northeast part of nigeria. he says his administration is confident it will be able to thwart criminals. Pointer-Gen + Coverage: muhammadu buhari says he plans to aggressively fight corruption that has long plagued nigeria. he says his administration is confident it will be able to thwart criminals. the win comes after a long history of military rule, coups and botched attempts at democracy in africa’s most populous nation. Figure 1: Comparison of output of 3 abstractive summarization models on a news article. The baseline model makes factual errors, a nonsensical sentence and struggles with OOV words muhammadu buhari. The pointer-generator model is accurate but repeats itself. Coverage eliminates repetition. The final summary is composed from several fragments. chunks of text from the source document ensures baseline levels of grammaticality and accuracy. On the other hand, sophisticated abilities that are crucial to high-quality summarization, such as paraphrasing, generalization, or the incorporation of real-world knowledge, are possible only in an abstractive framework (see Figure 5). Due to the difficulty of abstractive summarization, the great majority of past work has been extractive (Kupiec et al., 1995; Paice, 1990; Saggion and Poibeau, 2013). However, the recent success of sequence-to-sequence models (Sutskever 1073 ... Attention Distribution <START> Vocabulary Distribution Context Vector Germany a zoo Partial Summary "beat" Germany emerge victorious in 2-0 win against Argentina on Saturday ... Encoder Hidden States Decoder Hidden States Source Text Figure 2: Baseline sequence-to-sequence model with attention. The model may attend to relevant words in the source text to generate novel words, e.g., to produce the novel word beat in the abstractive summary Germany beat Argentina 2-0 the model may attend to the words victorious and win in the source text. et al., 2014), in which recurrent neural networks (RNNs) both read and freely generate text, has made abstractive summarization viable (Chopra et al., 2016; Nallapati et al., 2016; Rush et al., 2015; Zeng et al., 2016). Though these systems are promising, they exhibit undesirable behavior such as inaccurately reproducing factual details, an inability to deal with out-of-vocabulary (OOV) words, and repeating themselves (see Figure 1). In this paper we present an architecture that addresses these three issues in the context of multi-sentence summaries. While most recent abstractive work has focused on headline generation tasks (reducing one or two sentences to a single headline), we believe that longer-text summarization is both more challenging (requiring higher levels of abstraction while avoiding repetition) and ultimately more useful. Therefore we apply our model to the recently-introduced CNN/ Daily Mail dataset (Hermann et al., 2015; Nallapati et al., 2016), which contains news articles (39 sentences on average) paired with multi-sentence summaries, and show that we outperform the stateof-the-art abstractive system by at least 2 ROUGE points. Our hybrid pointer-generator network facilitates copying words from the source text via pointing (Vinyals et al., 2015), which improves accuracy and handling of OOV words, while retaining the ability to generate new words. 
The network, which can be viewed as a balance between extractive and abstractive approaches, is similar to Gu et al.’s (2016) CopyNet and Miao and Blunsom’s (2016) Forced-Attention Sentence Compression, that were applied to short-text summarization. We propose a novel variant of the coverage vector (Tu et al., 2016) from Neural Machine Translation, which we use to track and control coverage of the source document. We show that coverage is remarkably effective for eliminating repetition. 2 Our Models In this section we describe (1) our baseline sequence-to-sequence model, (2) our pointergenerator model, and (3) our coverage mechanism that can be added to either of the first two models. The code for our models is available online.1 2.1 Sequence-to-sequence attentional model Our baseline model is similar to that of Nallapati et al. (2016), and is depicted in Figure 2. The tokens of the article wi are fed one-by-one into the encoder (a single-layer bidirectional LSTM), producing a sequence of encoder hidden states hi. On each step t, the decoder (a single-layer unidirectional LSTM) receives the word embedding of the previous word (while training, this is the previous word of the reference summary; at test time it is the previous word emitted by the decoder), and has decoder state st. The attention distribution at is calculated as in Bahdanau et al. (2015): et i = vT tanh(Whhi +Wsst +battn) (1) at = softmax(et) (2) where v, Wh, Ws and battn are learnable parameters. The attention distribution can be viewed as 1www.github.com/abisee/pointer-generator 1074 Source Text Germany emerge victorious in 2-0 win against Argentina on Saturday ... ... <START> Vocabulary Distribution Context Vector Germany a zoo beat a zoo Partial Summary Final Distribution "Argentina" "2-0" Attention Distribution Encoder Hidden States Decoder Hidden States Figure 3: Pointer-generator model. For each decoder timestep a generation probability pgen ∈[0,1] is calculated, which weights the probability of generating words from the vocabulary, versus copying words from the source text. The vocabulary distribution and the attention distribution are weighted and summed to obtain the final distribution, from which we make our prediction. Note that out-of-vocabulary article words such as 2-0 are included in the final distribution. Best viewed in color. a probability distribution over the source words, that tells the decoder where to look to produce the next word. Next, the attention distribution is used to produce a weighted sum of the encoder hidden states, known as the context vector h∗ t : h∗ t = ∑i at ihi (3) The context vector, which can be seen as a fixedsize representation of what has been read from the source for this step, is concatenated with the decoder state st and fed through two linear layers to produce the vocabulary distribution Pvocab: Pvocab = softmax(V ′(V[st,h∗ t ]+b)+b′) (4) where V, V ′, b and b′ are learnable parameters. 
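To make Equations (1)–(4) concrete, the attention distribution, context vector, and vocabulary distribution can be sketched in a few lines of NumPy. This is an illustrative sketch only: the variable names, the row-wise stacking of encoder states in h, and the parameter shapes are our assumptions rather than the released implementation (V1, b1, V2, b2 stand in for V, b, V′, b′).

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_step(h, s_t, v, W_h, W_s, b_attn, V1, b1, V2, b2):
    """h: (src_len, 2*hidden) encoder states; s_t: decoder state at step t."""
    scores = np.tanh(h @ W_h.T + s_t @ W_s.T + b_attn) @ v   # Eq. (1)
    a_t = softmax(scores)                                    # Eq. (2)
    h_star = a_t @ h                                         # Eq. (3): context vector
    concat = np.concatenate([s_t, h_star])
    p_vocab = softmax(V2 @ (V1 @ concat + b1) + b2)          # Eq. (4)
    return a_t, h_star, p_vocab
```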
Pvocab is a probability distribution over all words in the vocabulary, and provides us with our final distribution from which to predict words w: P(w) = Pvocab(w) (5) During training, the loss for timestep t is the negative log likelihood of the target word w∗ t for that timestep: losst = −logP(w∗ t ) (6) and the overall loss for the whole sequence is: loss = 1 T ∑ T t=0 losst (7) 2.2 Pointer-generator network Our pointer-generator network is a hybrid between our baseline and a pointer network (Vinyals et al., 2015), as it allows both copying words via pointing, and generating words from a fixed vocabulary. In the pointer-generator model (depicted in Figure 3) the attention distribution at and context vector h∗ t are calculated as in section 2.1. In addition, the generation probability pgen ∈[0,1] for timestep t is calculated from the context vector h∗ t , the decoder state st and the decoder input xt: pgen = σ(wT h∗h∗ t +wT s st +wT x xt +bptr) (8) where vectors wh∗, ws, wx and scalar bptr are learnable parameters and σ is the sigmoid function. Next, pgen is used as a soft switch to choose between generating a word from the vocabulary by sampling from Pvocab, or copying a word from the input sequence by sampling from the attention distribution at. For each document let the extended vocabulary denote the union of the vocabulary, and all words appearing in the source document. We obtain the following probability distribution over the extended vocabulary: P(w) = pgenPvocab(w)+(1−pgen)∑i:wi=w at i (9) Note that if w is an out-of-vocabulary (OOV) word, then Pvocab(w) is zero; similarly if w does 1075 not appear in the source document, then ∑i:wi=w at i is zero. The ability to produce OOV words is one of the primary advantages of pointer-generator models; by contrast models such as our baseline are restricted to their pre-set vocabulary. The loss function is as described in equations (6) and (7), but with respect to our modified probability distribution P(w) given in equation (9). 2.3 Coverage mechanism Repetition is a common problem for sequenceto-sequence models (Tu et al., 2016; Mi et al., 2016; Sankaran et al., 2016; Suzuki and Nagata, 2016), and is especially pronounced when generating multi-sentence text (see Figure 1). We adapt the coverage model of Tu et al. (2016) to solve the problem. In our coverage model, we maintain a coverage vector ct, which is the sum of attention distributions over all previous decoder timesteps: ct = ∑ t−1 t′=0 at′ (10) Intuitively, ct is a (unnormalized) distribution over the source document words that represents the degree of coverage that those words have received from the attention mechanism so far. Note that c0 is a zero vector, because on the first timestep, none of the source document has been covered. The coverage vector is used as extra input to the attention mechanism, changing equation (1) to: et i = vT tanh(Whhi +Wsst +wcct i +battn) (11) where wc is a learnable parameter vector of same length as v. This ensures that the attention mechanism’s current decision (choosing where to attend next) is informed by a reminder of its previous decisions (summarized in ct). This should make it easier for the attention mechanism to avoid repeatedly attending to the same locations, and thus avoid generating repetitive text. We find it necessary (see section 5) to additionally define a coverage loss to penalize repeatedly attending to the same locations: covlosst = ∑i min(at i,ct i) (12) Note that the coverage loss is bounded; in particular covlosst ≤∑i at i = 1. 
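The two computations above that differ most from a standard attentional decoder, the extended-vocabulary distribution of Equation (9) and the coverage bookkeeping of Equations (10) and (12), can be sketched in a few lines. The sketch below assumes source_ids maps each source position to its index in the extended vocabulary; the names and the zero-initialization convention are illustrative, not taken from the authors' code.

```python
import numpy as np

def final_distribution(p_vocab, a_t, p_gen, source_ids, ext_vocab_size):
    """P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum_{i: w_i = w} a^t_i  (Eq. 9).
    OOV source words occupy the slots beyond the fixed vocabulary."""
    p = np.zeros(ext_vocab_size)
    p[: len(p_vocab)] = p_gen * p_vocab
    np.add.at(p, source_ids, (1.0 - p_gen) * a_t)  # scatter-add copy probabilities
    return p

def coverage_and_loss(attention_history, a_t):
    """c^t is the sum of all previous attention distributions (Eq. 10);
    covloss_t = sum_i min(a^t_i, c^t_i)             (Eq. 12)."""
    if len(attention_history) == 0:
        c_t = np.zeros_like(a_t)   # c^0 is a zero vector
    else:
        c_t = np.sum(attention_history, axis=0)
    return c_t, np.minimum(a_t, c_t).sum()
```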
Equation (12) differs from the coverage loss used in Machine Translation. In MT, we assume that there should be a roughly oneto-one translation ratio; accordingly the final coverage vector is penalized if it is more or less than 1. Our loss function is more flexible: because summarization should not require uniform coverage, we only penalize the overlap between each attention distribution and the coverage so far – preventing repeated attention. Finally, the coverage loss, reweighted by some hyperparameter λ, is added to the primary loss function to yield a new composite loss function: losst = −logP(w∗ t )+λ ∑i min(at i,ct i) (13) 3 Related Work Neural abstractive summarization. Rush et al. (2015) were the first to apply modern neural networks to abstractive text summarization, achieving state-of-the-art performance on DUC-2004 and Gigaword, two sentence-level summarization datasets. Their approach, which is centered on the attention mechanism, has been augmented with recurrent decoders (Chopra et al., 2016), Abstract Meaning Representations (Takase et al., 2016), hierarchical networks (Nallapati et al., 2016), variational autoencoders (Miao and Blunsom, 2016), and direct optimization of the performance metric (Ranzato et al., 2016), further improving performance on those datasets. However, large-scale datasets for summarization of longer text are rare. Nallapati et al. (2016) adapted the DeepMind question-answering dataset (Hermann et al., 2015) for summarization, resulting in the CNN/Daily Mail dataset, and provided the first abstractive baselines. The same authors then published a neural extractive approach (Nallapati et al., 2017), which uses hierarchical RNNs to select sentences, and found that it significantly outperformed their abstractive result with respect to the ROUGE metric. To our knowledge, these are the only two published results on the full dataset. Prior to modern neural methods, abstractive summarization received less attention than extractive summarization, but Jing (2000) explored cutting unimportant parts of sentences to create summaries, and Cheung and Penn (2014) explore sentence fusion using dependency trees. Pointer-generator networks. The pointer network (Vinyals et al., 2015) is a sequence-tosequence model that uses the soft attention distribution of Bahdanau et al. (2015) to produce an output sequence consisting of elements from 1076 the input sequence. The pointer network has been used to create hybrid approaches for NMT (Gulcehre et al., 2016), language modeling (Merity et al., 2016), and summarization (Gu et al., 2016; Gulcehre et al., 2016; Miao and Blunsom, 2016; Nallapati et al., 2016; Zeng et al., 2016). Our approach is close to the Forced-Attention Sentence Compression model of Miao and Blunsom (2016) and the CopyNet model of Gu et al. (2016), with some small differences: (i) We calculate an explicit switch probability pgen, whereas Gu et al. induce competition through a shared softmax function. (ii) We recycle the attention distribution to serve as the copy distribution, but Gu et al. use two separate distributions. (iii) When a word appears multiple times in the source text, we sum probability mass from all corresponding parts of the attention distribution, whereas Miao and Blunsom do not. 
Our reasoning is that (i) calculating an explicit pgen usefully enables us to raise or lower the probability of all generated words or all copy words at once, rather than individually, (ii) the two distributions serve such similar purposes that we find our simpler approach suffices, and (iii) we observe that the pointer mechanism often copies a word while attending to multiple occurrences of it in the source text. Our approach is considerably different from that of Gulcehre et al. (2016) and Nallapati et al. (2016). Those works train their pointer components to activate only for out-of-vocabulary words or named entities (whereas we allow our model to freely learn when to use the pointer), and they do not mix the probabilities from the copy distribution and the vocabulary distribution. We believe the mixture approach described here is better for abstractive summarization – in section 6 we show that the copy mechanism is vital for accurately reproducing rare but in-vocabulary words, and in section 7.2 we observe that the mixture model enables the language model and copy mechanism to work together to perform abstractive copying. Coverage. Originating from Statistical Machine Translation (Koehn, 2009), coverage was adapted for NMT by Tu et al. (2016) and Mi et al. (2016), who both use a GRU to update the coverage vector each step. We find that a simpler approach – summing the attention distributions to obtain the coverage vector – suffices. In this respect our approach is similar to Xu et al. (2015), who apply a coverage-like method to image captioning, and Chen et al. (2016), who also incorporate a coverage mechanism (which they call ‘distraction’) as described in equation (11) into neural summarization of longer text. Temporal attention is a related technique that has been applied to NMT (Sankaran et al., 2016) and summarization (Nallapati et al., 2016). In this approach, each attention distribution is divided by the sum of the previous, which effectively dampens repeated attention. We tried this method but found it too destructive, distorting the signal from the attention mechanism and reducing performance. We hypothesize that an early intervention method such as coverage is preferable to a post hoc method such as temporal attention – it is better to inform the attention mechanism to help it make better decisions, than to override its decisions altogether. This theory is supported by the large boost that coverage gives our ROUGE scores (see Table 1), compared to the smaller boost given by temporal attention for the same task (Nallapati et al., 2016). 4 Dataset We use the CNN/Daily Mail dataset (Hermann et al., 2015; Nallapati et al., 2016), which contains online news articles (781 tokens on average) paired with multi-sentence summaries (3.75 sentences or 56 tokens on average). We used scripts supplied by Nallapati et al. (2016) to obtain the same version of the the data, which has 287,226 training pairs, 13,368 validation pairs and 11,490 test pairs. Both the dataset’s published results (Nallapati et al., 2016, 2017) use the anonymized version of the data, which has been pre-processed to replace each named entity, e.g., The United Nations, with its own unique identifier for the example pair, e.g., @entity5. By contrast, we operate directly on the original text (or non-anonymized version of the data),2 which we believe is the favorable problem to solve because it requires no pre-processing. 
5 Experiments For all experiments, our model has 256dimensional hidden states and 128-dimensional word embeddings. For the pointer-generator models, we use a vocabulary of 50k words for both source and target – note that due to the pointer network’s ability to handle OOV words, we can use 2at www.github.com/abisee/pointer-generator 1077 ROUGE METEOR 1 2 L exact match + stem/syn/para abstractive model (Nallapati et al., 2016)* 35.46 13.30 32.65 seq-to-seq + attn baseline (150k vocab) 30.49 11.17 28.08 11.65 12.86 seq-to-seq + attn baseline (50k vocab) 31.33 11.81 28.83 12.03 13.20 pointer-generator 36.44 15.66 33.42 15.35 16.65 pointer-generator + coverage 39.53 17.28 36.38 17.32 18.72 lead-3 baseline (ours) 40.34 17.70 36.57 20.48 22.21 lead-3 baseline (Nallapati et al., 2017)* 39.2 15.7 35.5 extractive model (Nallapati et al., 2017)* 39.6 16.2 35.3 Table 1: ROUGE F1 and METEOR scores on the test set. Models and baselines in the top half are abstractive, while those in the bottom half are extractive. Those marked with * were trained and evaluated on the anonymized dataset, and so are not strictly comparable to our results on the original text. All our ROUGE scores have a 95% confidence interval of at most ±0.25 as reported by the official ROUGE script. The METEOR improvement from the 50k baseline to the pointer-generator model, and from the pointer-generator to the pointer-generator+coverage model, were both found to be statistically significant using an approximate randomization test with p < 0.01. a smaller vocabulary size than Nallapati et al.’s (2016) 150k source and 60k target vocabularies. For the baseline model, we also try a larger vocabulary size of 150k. Note that the pointer and the coverage mechanism introduce very few additional parameters to the network: for the models with vocabulary size 50k, the baseline model has 21,499,600 parameters, the pointer-generator adds 1153 extra parameters (wh∗, ws, wx and bptr in equation 8), and coverage adds 512 extra parameters (wc in equation 11). Unlike Nallapati et al. (2016), we do not pretrain the word embeddings – they are learned from scratch during training. We train using Adagrad (Duchi et al., 2011) with learning rate 0.15 and an initial accumulator value of 0.1. (This was found to work best of Stochastic Gradient Descent, Adadelta, Momentum, Adam and RMSProp). We use gradient clipping with a maximum gradient norm of 2, but do not use any form of regularization. We use loss on the validation set to implement early stopping. During training and at test time we truncate the article to 400 tokens and limit the length of the summary to 100 tokens for training and 120 tokens at test time.3 This is done to expedite training and testing, but we also found that truncating the article can raise the performance of the model 3The upper limit of 120 is mostly invisible: the beam search algorithm is self-stopping and almost never reaches the 120th step. (see section 7.1 for more details). For training, we found it efficient to start with highly-truncated sequences, then raise the maximum length once converged. We train on a single Tesla K40m GPU with a batch size of 16. At test time our summaries are produced using beam search with beam size 4. We trained both our baseline models for about 600,000 iterations (33 epochs) – this is similar to the 35 epochs required by Nallapati et al.’s (2016) best model. Training took 4 days and 14 hours for the 50k vocabulary model, and 8 days 21 hours for the 150k vocabulary model. 
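For reference, the training setup just described can be gathered into a single configuration sketch. The key names below are hypothetical (they are not taken from the released code); only values stated in the text are filled in.

```python
# Hypothetical configuration mirroring the reported hyperparameters.
config = {
    "hidden_dim": 256,
    "emb_dim": 128,
    "vocab_size": 50_000,          # 150k also tried for the baseline
    "optimizer": "Adagrad",
    "learning_rate": 0.15,
    "adagrad_init_acc": 0.1,
    "max_grad_norm": 2.0,
    "batch_size": 16,
    "max_enc_steps": 400,          # article truncation
    "max_dec_steps_train": 100,
    "max_dec_steps_test": 120,
    "beam_size": 4,
    "coverage_loss_weight": 1.0,   # lambda in Eq. (13)
}
```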
We found the pointer-generator model quicker to train, requiring less than 230,000 training iterations (12.8 epochs); a total of 3 days and 4 hours. In particular, the pointer-generator model makes much quicker progress in the early phases of training. To obtain our final coverage model, we added the coverage mechanism with coverage loss weighted to λ = 1 (as described in equation 13), and trained for a further 3000 iterations (about 2 hours). In this time the coverage loss converged to about 0.2, down from an initial value of about 0.5. We also tried a more aggressive value of λ = 2; this reduced coverage loss but increased the primary loss function, thus we did not use it. We tried training the coverage model without the loss function, hoping that the attention mechanism may learn by itself not to attend repeatedly to the same locations, but we found this to be ineffective, with no discernible reduction in repetition. We also tried training with coverage from the first 1078 iteration rather than as a separate training phase, but found that in the early phase of training, the coverage objective interfered with the main objective, reducing overall performance. 6 Results 6.1 Preliminaries Our results are given in Table 1. We evaluate our models with the standard ROUGE metric (Lin, 2004b), reporting the F1 scores for ROUGE1, ROUGE-2 and ROUGE-L (which respectively measure the word-overlap, bigram-overlap, and longest common sequence between the reference summary and the summary to be evaluated). We obtain our ROUGE scores using the pyrouge package.4 We also evaluate with the METEOR metric (Denkowski and Lavie, 2014), both in exact match mode (rewarding only exact matches between words) and full mode (which additionally rewards matching stems, synonyms and paraphrases).5 In addition to our own models, we also report the lead-3 baseline (which uses the first three sentences of the article as a summary), and compare to the only existing abstractive (Nallapati et al., 2016) and extractive (Nallapati et al., 2017) models on the full dataset. The output of our models is available online.6 Given that we generate plain-text summaries but Nallapati et al. (2016; 2017) generate anonymized summaries (see Section 4), our ROUGE scores are not strictly comparable. There is evidence to suggest that the original-text dataset may result in higher ROUGE scores in general than the anonymized dataset – the lead-3 baseline is higher on the former than the latter. One possible explanation is that multi-word named entities lead to a higher rate of n-gram overlap. Unfortunately, ROUGE is the only available means of comparison with Nallapati et al.’s work. Nevertheless, given that the disparity in the lead-3 scores is (+1.1 ROUGE-1, +2.0 ROUGE-2, +1.1 ROUGEL) points respectively, and our best model scores exceed Nallapati et al. (2016) by (+4.07 ROUGE1, +3.98 ROUGE-2, +3.73 ROUGE-L) points, we may estimate that we outperform the only previous abstractive system by at least 2 ROUGE points allround. 4pypi.python.org/pypi/pyrouge/0.1.3 5www.cs.cmu.edu/~alavie/METEOR 6www.github.com/abisee/pointer-generator 1-grams 2-grams 3-grams 4-grams sentences 0 10 20 30 % that are duplicates pointer-generator, no coverage pointer-generator + coverage reference summaries Figure 4: Coverage eliminates undesirable repetition. Summaries from our non-coverage model contain many duplicated n-grams while our coverage model produces a similar number as the reference summaries. 
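The repetition statistic plotted in Figure 4 can be approximated with a short helper. The exact counting procedure is not spelled out in the text, so the following is one plausible reading: the fraction of n-grams in a summary that occur more than once within that summary.

```python
from collections import Counter

def duplicate_ngram_fraction(tokens, n):
    """Fraction of n-grams in `tokens` that are duplicated within the summary."""
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    duplicated = sum(c for c in counts.values() if c > 1)
    return duplicated / len(ngrams)
```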
6.2 Observations We find that both our baseline models perform poorly with respect to ROUGE and METEOR, and in fact the larger vocabulary size (150k) does not seem to help. Even the better-performing baseline (with 50k vocabulary) produces summaries with several common problems. Factual details are frequently reproduced incorrectly, often replacing an uncommon (but in-vocabulary) word with a morecommon alternative. For example in Figure 1, the baseline model appears to struggle with the rare word thwart, producing destabilize instead, which leads to the fabricated phrase destabilize nigeria’s economy. Even more catastrophically, the summaries sometimes devolve into repetitive nonsense, such as the third sentence produced by the baseline model in Figure 1. In addition, the baseline model can’t reproduce out-of-vocabulary words (such as muhammadu buhari in Figure 1). Further examples of all these problems are provided in the supplementary material. Our pointer-generator model achieves much better ROUGE and METEOR scores than the baseline, despite many fewer training epochs. The difference in the summaries is also marked: outof-vocabulary words are handled easily, factual details are almost always copied correctly, and there are no fabrications (see Figure 1). However, repetition is still very common. Our pointer-generator model with coverage improves the ROUGE and METEOR scores further, convincingly surpassing the best abstractive model 1079 Article: smugglers lure arab and african migrants by offering discounts to get onto overcrowded ships if people bring more potential passengers, a cnn investigation has revealed. (...) Summary: cnn investigation uncovers the business inside a human smuggling ring. Article: eyewitness video showing white north charleston police officer michael slager shooting to death an unarmed black man has exposed discrepancies in the reports of the first officers on the scene. (...) Summary: more questions than answers emerge in controversial s.c. police shooting. Figure 5: Examples of highly abstractive reference summaries (bold denotes novel words). of Nallapati et al. (2016) by several ROUGE points. Despite the brevity of the coverage training phase (about 1% of the total training time), the repetition problem is almost completely eliminated, which can be seen both qualitatively (Figure 1) and quantitatively (Figure 4). However, our best model does not quite surpass the ROUGE scores of the lead-3 baseline, nor the current best extractive model (Nallapati et al., 2017). We discuss this issue in section 7.1. 7 Discussion 7.1 Comparison with extractive systems It is clear from Table 1 that extractive systems tend to achieve higher ROUGE scores than abstractive, and that the extractive lead-3 baseline is extremely strong (even the best extractive system beats it by only a small margin). We offer two possible explanations for these observations. Firstly, news articles tend to be structured with the most important information at the start; this partially explains the strength of the lead-3 baseline. Indeed, we found that using only the first 400 tokens (about 20 sentences) of the article yielded significantly higher ROUGE scores than using the first 800 tokens. Secondly, the nature of the task and the ROUGE metric make extractive approaches and the lead3 baseline difficult to beat. 
The choice of content for the reference summaries is quite subjective – sometimes the sentences form a self-contained summary; other times they simply showcase a few interesting details from the article. Given that the articles contain 39 sentences on average, there are many equally valid ways to choose 3 or 4 highlights in this style. Abstraction introduces even more options (choice of phrasing), further decreasing the likelihood of matching the reference summary. For example, smugglers profit from desperate migrants is a valid alternative abstractive summary for the first example in Figure 5, but it scores 0 ROUGE with respect to the reference summary. This inflexibility of ROUGE is exacerbated by only having one reference summary, which has been shown to lower ROUGE’s reliability compared to multiple reference summaries (Lin, 2004a). Due to the subjectivity of the task and thus the diversity of valid summaries, it seems that ROUGE rewards safe strategies such as selecting the first-appearing content, or preserving original phrasing. While the reference summaries do sometimes deviate from these techniques, those deviations are unpredictable enough that the safer strategy obtains higher ROUGE scores on average. This may explain why extractive systems tend to obtain higher ROUGE scores than abstractive, and even extractive systems do not significantly exceed the lead-3 baseline. To explore this issue further, we evaluated our systems with the METEOR metric, which rewards not only exact word matches, but also matching stems, synonyms and paraphrases (from a predefined list). We observe that all our models receive over 1 METEOR point boost by the inclusion of stem, synonym and paraphrase matching, indicating that they may be performing some abstraction. However, we again observe that the lead-3 baseline is not surpassed by our models. It may be that news article style makes the lead3 baseline very strong with respect to any metric. We believe that investigating this issue further is an important direction for future work. 7.2 How abstractive is our model? We have shown that our pointer mechanism makes our abstractive system more reliable, copying factual details correctly more often. But does the ease of copying make our system any less abstractive? Figure 6 shows that our final model’s summaries contain a much lower rate of novel n-grams (i.e., those that don’t appear in the article) than the reference summaries, indicating a lower degree of abstraction. Note that the baseline model produces novel n-grams more frequently – however, this statistic includes all the incorrectly copied words, UNK tokens and fabrications alongside the good instances of abstraction. 1080 1-grams 2-grams 3-grams 4-grams sentences 0 20 40 60 80 100 % that are novel pointer-generator + coverage sequence-to-sequence + attention baseline reference summaries Figure 6: Although our best model is abstractive, it does not produce novel n-grams (i.e., n-grams that don’t appear in the source text) as often as the reference summaries. The baseline model produces more novel n-grams, but many of these are erroneous (see section 7.2). Article: andy murray (...) is into the semi-finals of the miami open , but not before getting a scare from 21 year-old austrian dominic thiem, who pushed him to 4-4 in the second set before going down 3-6 6-4, 6-1 in an hour and three quarters. (...) Summary: andy murray defeated dominic thiem 3-6 6-4, 6-1 in an hour and three quarters. Article: (...) 
wayne rooney smashes home during manchester united ’s 3-1 win over aston villa on saturday. (...) Summary: manchester united beat aston villa 3-1 at old trafford on saturday. Figure 7: Examples of abstractive summaries produced by our model (bold denotes novel words). In particular, Figure 6 shows that our final model copies whole article sentences 35% of the time; by comparison the reference summaries do so only 1.3% of the time. This is a main area for improvement, as we would like our model to move beyond simple sentence extraction. However, we observe that the other 65% encompasses a range of abstractive techniques. Article sentences are truncated to form grammatically-correct shorter versions, and new sentences are composed by stitching together fragments. Unnecessary interjections, clauses and parenthesized phrases are sometimes omitted from copied passages. Some of these abilities are demonstrated in Figure 1, and the supplementary material contains more examples. Figure 7 shows two examples of more impressive abstraction – both with similar structure. The dataset contains many sports stories whose summaries follow the X beat Y ⟨score⟩on ⟨day⟩template, which may explain why our model is most confidently abstractive on these examples. In general however, our model does not routinely produce summaries like those in Figure 7, and is not close to producing summaries like in Figure 5. The value of the generation probability pgen also gives a measure of the abstractiveness of our model. During training, pgen starts with a value of about 0.30 then increases, converging to about 0.53 by the end of training. This indicates that the model first learns to mostly copy, then learns to generate about half the time. However at test time, pgen is heavily skewed towards copying, with a mean value of 0.17. The disparity is likely due to the fact that during training, the model receives word-by-word supervision in the form of the reference summary, but at test time it does not. Nonetheless, the generator module is useful even when the model is copying. We find that pgen is highest at times of uncertainty such as the beginning of sentences, the join between stitched-together fragments, and when producing periods that truncate a copied sentence. Our mixture model allows the network to copy while simultaneously consulting the language model – enabling operations like stitching and truncation to be performed with grammaticality. In any case, encouraging the pointer-generator model to write more abstractively, while retaining the accuracy advantages of the pointer module, is an exciting direction for future work. 8 Conclusion In this work we presented a hybrid pointergenerator architecture with coverage, and showed that it reduces inaccuracies and repetition. We applied our model to a new and challenging longtext dataset, and significantly outperformed the abstractive state-of-the-art result. Our model exhibits many abstractive abilities, but attaining higher levels of abstraction remains an open research question. 9 Acknowledgment We thank the ACL reviewers for their helpful comments. This work was begun while the first author was an intern at Google Brain and continued at Stanford. Stanford University gratefully acknowledges the support of the DARPA DEFT Program AFRL contract no. FA8750-13-2-0040. Any opinions in this material are those of the authors alone. 1081 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. 
In International Conference on Learning Representations. Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, and Hui Jiang. 2016. Distraction-based neural networks for modeling documents. In International Joint Conference on Artificial Intelligence. Jackie Chi Kit Cheung and Gerald Penn. 2014. Unsupervised sentence enhancement for automatic summarization. In Empirical Methods in Natural Language Processing. Sumit Chopra, Michael Auli, and Alexander M Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In North American Chapter of the Association for Computational Linguistics. Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In EACL 2014 Workshop on Statistical Machine Translation. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12:2121–2159. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Association for Computational Linguistics. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Association for Computational Linguistics. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Neural Information Processing Systems. Hongyan Jing. 2000. Sentence reduction for automatic text summarization. In Applied natural language processing. Philipp Koehn. 2009. Statistical machine translation. Cambridge University Press. Julian Kupiec, Jan Pedersen, and Francine Chen. 1995. A trainable document summarizer. In International ACM SIGIR conference on Research and development in information retrieval. Chin-Yew Lin. 2004a. Looking for a few good metrics: Automatic summarization evaluation-how many samples are enough? In NACSIS/NII Test Collection for Information Retrieval (NTCIR) Workshop. Chin-Yew Lin. 2004b. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out: ACL workshop. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. In NIPS 2016 Workshop on Multi-class and Multi-label Learning in Extremely Large Label Spaces. Haitao Mi, Baskaran Sankaran, Zhiguo Wang, and Abe Ittycheriah. 2016. Coverage embedding models for neural machine translation. In Empirical Methods in Natural Language Processing. Yishu Miao and Phil Blunsom. 2016. Language as a latent variable: Discrete generative models for sentence compression. In Empirical Methods in Natural Language Processing. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents. In Association for the Advancement of Artificial Intelligence. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, C¸ aglar Gulc¸ehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Computational Natural Language Learning. Chris D Paice. 1990. Constructing literature abstracts by computer: techniques and prospects. Information Processing & Management 26(1):171–186. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In International Conference on Learning Representations. 
Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Empirical Methods in Natural Language Processing. Horacio Saggion and Thierry Poibeau. 2013. Automatic text summarization: Past, present and future. In Multi-source, Multilingual Information Extraction and Summarization, Springer, pages 3–21. Baskaran Sankaran, Haitao Mi, Yaser Al-Onaizan, and Abe Ittycheriah. 2016. Temporal attention model for neural machine translation. arXiv preprint arXiv:1608.02927 . Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Neural Information Processing Systems. Jun Suzuki and Masaaki Nagata. 2016. RNN-based encoder-decoder approach with word frequency estimation. arXiv preprint arXiv:1701.00138 . 1082 Sho Takase, Jun Suzuki, Naoaki Okazaki, Tsutomu Hirao, and Masaaki Nagata. 2016. Neural headline generation on abstract meaning representation. In Empirical Methods in Natural Language Processing. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Association for Computational Linguistics. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Neural Information Processing Systems. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C Courville, Ruslan Salakhutdinov, Richard S Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning. Wenyuan Zeng, Wenjie Luo, Sanja Fidler, and Raquel Urtasun. 2016. Efficient summarization with read-again and copy mechanism. arXiv preprint arXiv:1611.03382 . 1083
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1–11 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1 Probabilistic FastText for Multi-Sense Word Embeddings Ben Athiwaratkun∗ Cornell University [email protected] Andrew Gordon Wilson Cornell University [email protected] Anima Anandkumar AWS & Caltech [email protected] Abstract We introduce Probabilistic FastText, a new model for word embeddings that can capture multiple word senses, sub-word structure, and uncertainty information. In particular, we represent each word with a Gaussian mixture density, where the mean of a mixture component is given by the sum of n-grams. This representation allows the model to share statistical strength across sub-word structures (e.g. Latin roots), producing accurate representations of rare, misspelt, or even unseen words. Moreover, each component of the mixture can capture a different word sense. Probabilistic FastText outperforms both FASTTEXT, which has no probabilistic model, and dictionary-level probabilistic embeddings, which do not incorporate subword structures, on several word-similarity benchmarks, including English RareWord and foreign language datasets. We also achieve state-ofart performance on benchmarks that measure ability to discern different meanings. Thus, the proposed model is the first to achieve multi-sense representations while having enriched semantics on rare words. 1 Introduction Word embeddings are foundational to natural language processing. In order to model language, we need word representations to contain as much semantic information as possible. Most research has focused on vector word embeddings, such as WORD2VEC (Mikolov et al., 2013a), where words with similar meanings are mapped to nearby points in a vector space. Following the ∗Work done partly during internship at Amazon. seminal work of Mikolov et al. (2013a), there have been numerous works looking to learn efficient word embeddings. One shortcoming with the above approaches to word embedding that are based on a predefined dictionary (termed as dictionary-based embeddings) is their inability to learn representations of rare words. To overcome this limitation, character-level word embeddings have been proposed. FASTTEXT (Bojanowski et al., 2016) is the state-of-the-art character-level approach to embeddings. In FASTTEXT, each word is modeled by a sum of vectors, with each vector representing an n-gram. The benefit of this approach is that the training process can then share strength across words composed of common roots. For example, with individual representations for “circum” and “navigation”, we can construct an informative representation for “circumnavigation”, which would otherwise appear too infrequently to learn a dictionary-level embedding. In addition to effectively modelling rare words, character-level embeddings can also represent slang or misspelled words, such as “dogz”, and can share strength across different languages that share roots, e.g. Romance languages share latent roots. A different promising direction involves representing words with probability distributions, instead of point vectors. For example, Vilnis and McCallum (2014) represents words with Gaussian distributions, which can capture uncertainty information. Athiwaratkun and Wilson (2017) generalizes this approach to multimodal probability distributions, which can naturally represent words with different meanings. 
For example, the distribution for “rock” could have mass near the word “jazz” and “pop”, but also “stone” and “basalt”. Athiwaratkun and Wilson (2018) further developed this approach to learn hierarchical word representations: for example, the word “music” can 2 be learned to have a broad distribution, which encapsulates the distributions for “jazz” and “rock”. In this paper, we propose Probabilistic FastText (PFT), which provides probabilistic characterlevel representations of words. The resulting word embeddings are highly expressive, yet straightforward and interpretable, with simple, efficient, and intuitive training procedures. PFT can model rare words, uncertainty information, hierarchical representations, and multiple word senses. In particular, we represent each word with a Gaussian or a Gaussian mixture density, which we name PFT-G and PFT-GM respectively. Each component of the mixture can represent different word senses, and the mean vectors of each component decompose into vectors of n-grams, to capture character-level information. We also derive an efficient energybased max-margin training procedure for PFT. We perform comparison with FASTTEXT as well as existing density word embeddings W2G (Gaussian) and W2GM (Gaussian mixture). Our models extract high-quality semantics based on multiple word-similarity benchmarks, including the rare word dataset. We obtain an average weighted improvement of 3.7% over FASTTEXT (Bojanowski et al., 2016) and 3.1% over the dictionary-level density-based models. We also observe meaningful nearest neighbors, particularly in the multimodal density case, where each mode captures a distinct meaning. Our models are also directly portable to foreign languages without any hyperparameter modification, where we observe strong performance, outperforming FASTTEXT on many foreign word similarity datasets. Our multimodal word representation can also disentangle meanings, and is able to separate different senses in foreign polysemies. In particular, our models attain state-of-the-art performance on SCWS, a benchmark to measure the ability to separate different word meanings, achieving 1.0% improvement over a recent density embedding model W2GM (Athiwaratkun and Wilson, 2017). To the best of our knowledge, we are the first to develop multi-sense embeddings with high semantic quality for rare words. Our code and embeddings are publicly available. 1 2 Related Work Early word embeddings which capture semantic information include Bengio et al. (2003), Col1https://github.com/benathi/multisense-prob-fasttext lobert and Weston (2008), and Mikolov et al. (2011). Later, Mikolov et al. (2013a) developed the popular WORD2VEC method, which proposes a log-linear model and negative sampling approach that efficiently extracts rich semantics from text. Another popular approach GLOVE learns word embeddings by factorizing co-occurrence matrices (Pennington et al., 2014). Recently there has been a surge of interest in making dictionary-based word embeddings more flexible. This flexibility has valuable applications in many end-tasks such as language modeling (Kim et al., 2016), named entity recognition (Kuru et al., 2016), and machine translation (Zhao and Zhang, 2016; Lee et al., 2017), where unseen words are frequent and proper handling of these words can greatly improve the performance. These works focus on modeling subword information in neural networks for tasks such as language modeling. 
Besides vector embeddings, there is recent work on multi-prototype embeddings where each word is represented by multiple vectors. The learning approach involves using a cluster centroid of context vectors (Huang et al., 2012), or adapting the skip-gram model to learn multiple latent representations (Tian et al., 2014). Neelakantan et al. (2014) furthers adapts skip-gram with a non-parametric approach to learn the embeddings with an arbitrary number of senses per word. Chen et al. (2014) incorporates an external dataset WORDNET to learn sense vectors. We compare these models with our multimodal embeddings in Section 4. 3 Probabilistic FastText We introduce Probabilistic FastText, which combines a probabilistic word representation with the ability to capture subword structure. We describe the probabilistic subword representation in Section 3.1. We then describe the similarity measure and the loss function used to train the embeddings in Sections 3.2 and 3.3. We conclude by briefly presenting a simplified version of the energy function for isotropic Gaussian representations (Section 3.4), and the negative sampling scheme we use in training (Section 3.5). 3.1 Probabilistic Subword Representation We represent each word with a Gaussian mixture with K Gaussian components. That is, a word 3 beau iful <bea beautiful ful> (a) bank river cash atm (b) bank river bank cash atm (c) Figure 1: (1a) a Gaussian component and its subword structure. The bold arrow represents the final mean vector, estimated from averaging the grey n-gram vectors. (1b) PFT-G model: Each Gaussian component’s mean vector is a subword vector. (1c) PFT-GM model: For each Gaussian mixture distribution, one component’s mean vector is estimated by a subword structure whereas other components are dictionary-based vectors. w is associated with a density function f(x) = PK i=1 pw,iN(x; ⃗µw,i, Σw,i) where {µw,i}K k=1 are the mean vectors and {Σw,i} are the covariance matrices, and {pw,i}K k=1 are the component probabilities which sum to 1. The mean vectors of Gaussian components hold much of the semantic information in density embeddings. While these models are successful based on word similarity and entailment benchmarks (Vilnis and McCallum, 2014; Athiwaratkun and Wilson, 2017), the mean vectors are often dictionary-level, which can lead to poor semantic estimates for rare words, or the inability to handle words outside the training corpus. We propose using subword structures to estimate the mean vectors. We outline the formulation below. For word w, we estimate the mean vector µw with the average over n-gram vectors and its dictionary-level vector. That is, µw = 1 |NGw| + 1  vw + X g∈NGw zg   (1) where zg is a vector associated with an n-gram g, vw is the dictionary representation of word w, and NGw is a set of n-grams of word w. Examples of 3,4-grams for a word “beautiful”, including the beginning-of-word character ‘⟨’ and end-of-word character ‘⟩’, are: • 3-grams: ⟨be, bea, eau, aut, uti, tif, ful, ul⟩ • 4-grams: ⟨bea, beau .., iful ,ful⟩ This structure is similar to that of FASTTEXT (Bojanowski et al., 2016); however, we note that FASTTEXT uses single-prototype deterministic embeddings as well as a training approach that maximizes the negative log-likelihood, whereas we use a multi-prototype probabilistic embedding and for training we maximize the similarity between the words’ probability densities, as described in Sections 3.2 and 3.3 Figure 1a depicts the subword structure for the mean vector. 
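To make this construction concrete, the sketch below illustrates Equation 1: it extracts the boundary-marked 3–6 grams of a word and averages their vectors together with the dictionary-level vector. This is an illustrative, hypothetical implementation rather than the released code; in particular, hashing n-grams into a fixed number of buckets, the bucket count, and the embedding dimensionality are assumptions made here for brevity.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Boundary-marked character n-grams, e.g. '<be', 'bea', ..., 'ul>' for 'beautiful'."""
    marked = "<" + word + ">"
    grams = []
    for n in range(n_min, n_max + 1):
        grams.extend(marked[i:i + n] for i in range(len(marked) - n + 1))
    return grams

class SubwordMean:
    """Mean vector mu_w as the average of the dictionary vector v_w and the
    n-gram vectors z_g, as in Equation 1."""
    def __init__(self, dim=50, n_buckets=100_000, seed=0):
        rng = np.random.default_rng(seed)
        self.dict_vecs = {}                              # v_w for in-vocabulary words
        # n-gram vectors are stored in a hashed table; Python's built-in hash is
        # used only for brevity, a stable hash would be used in practice
        self.ngram_vecs = rng.normal(scale=0.1, size=(n_buckets, dim))
        self.dim, self.n_buckets = dim, n_buckets

    def mean_vector(self, word):
        grams = char_ngrams(word)
        v_w = self.dict_vecs.get(word, np.zeros(self.dim))   # zero vector for unseen words
        z_sum = sum(self.ngram_vecs[hash(g) % self.n_buckets] for g in grams)
        return (v_w + z_sum) / (len(grams) + 1)

emb = SubwordMean()
print(emb.mean_vector("beautiful").shape)   # (50,)
```

Because the n-gram vectors are shared across words, rare or misspelt forms still receive informative mean vectors in this scheme.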
Figure 1b and 1c depict our models, Gaussian probabilistic FASTTEXT (PFTG) and Gaussian mixture probabilistic FASTTEXT (PFT-GM). In the Gaussian case, we represent each mean vector with a subword estimation. For the Gaussian mixture case, we represent one Gaussian component’s mean vector with the subword structure whereas other components’ mean vectors are dictionary-based. This model choice to use dictionary-based mean vectors for other components is to reduce to constraint imposed by the subword structure and promote independence for meaning discovery. 3.2 Similarity Measure between Words Traditionally, if words are represented by vectors, a common similarity metric is a dot product. In the case where words are represented by distribution functions, we use the generalized dot product in Hilbert space ⟨·, ·⟩L2, which is called the expected likelihood kernel (Jebara et al., 2004). We define the energy E(f, g) between two words f and g to be E(f, g) = log⟨f, g⟩L2 = log R f(x)g(x) dx. With Gaussian mixtures f(x) = PK i=1 piN(x; ⃗µf,i, Σf,i) and g(x) = PK i=1 qiN(x; ⃗µg,i, Σg,i), PK i=1 pi = 1, and PK i=1 qi = 1, the energy has a closed form: E(f, g) = log K X j=1 K X i=1 piqjeξi,j (2) where ξj,j is the partial energy which corresponds to the similarity between component i of the first 4 word f and component j of the second word g.2 ξi,j ≡log N(0; ⃗µf,i −⃗µg,j, Σf,i + Σg,j) = −1 2 log det(Σf,i + Σg,j) −D 2 log(2π) −1 2(⃗µf,i −⃗µg,j)⊤(Σf,i + Σg,j)−1(⃗µf,i −⃗µg,j) (3) Figure 2 demonstrates the partial energies among the Gaussian components of two words. Interaction between GM components rock:0 pop:0 pop:1 rock:1 ⇠0,1 ⇠0,0 ⇠1,1 ⇠1,0 bang, crack, snap basalt, boulder, sand jazz, punk, indie funk, pop-rock, band Figure 2: The interactions among Gaussian components of word rock and word pop. The partial energy is the highest for the pair rock:0 (the zeroth component of rock) and pop:1 (the first component of pop), reflecting the similarity in meanings. 3.3 Loss Function The model parameters that we seek to learn are vw for each word w and zg for each n-gram g. We train the model by pushing the energy of a true context pair w and c to be higher than the negative context pair w and n by a margin m. We use Adagrad (Duchi et al., 2011) to minimize the following loss to achieve this outcome: L(f, g) = max [0, m −E(f, g) + E(f, n)] . (4) We describe how to sample words as well as its positive and negative contexts in Section 3.5. This loss function together with the Gaussian mixture model with K > 1 has the ability to extract multiple senses of words. That is, for a word with multiple meanings, we can observe each mode to represent a distinct meaning. For instance, one density mode of “star” is close to the densities of “celebrity” and “hollywood” whereas another mode of “star” is near the densities of “constellation” and “galaxy”. 2The orderings of indices of the components for each word are arbitrary. 3.4 Energy Simplification In theory, it can be beneficial to have covariance matrices as learnable parameters. In practice, Athiwaratkun and Wilson (2017) observe that spherical covariances often perform on par with diagonal covariances with much less computational resources. Using spherical covariances for each component, we can further simplify the energy function as follows: ξi,j = −α 2 · ||µf,i −µg,j||2 , (5) where the hyperparameter α is the scale of the inverse covariance term in Equation 3. 
We note that Equation 5 is equivalent to Equation 3 up to an additive constant given that the covariance matrices are spherical and the same for all components. 3.5 Word Sampling To generate a context word c of a given word w, we pick a nearby word within a context window of a fixed length ℓ. We also use a word sampling technique similar to Mikolov et al. (2013b). This subsampling procedure selects words for training with lower probabilities if they appear frequently. This technique has an effect of reducing the importance of words such as ‘the’, ‘a’, ‘to’ which can be predominant in a text corpus but are not as meaningful as other less frequent words such as ‘city’, ‘capital’, ‘animal’, etc. In particular, word w has probability P(w) = 1 − p t/f(w) where f(w) is the frequency of word w in the corpus and t is the frequency threshold. A negative context word is selected using a distribution Pn(w) ∝U(w)3/4 where U(w) is a unigram probability of word w. The exponent 3/4 also diminishes the importance of frequent words and shifts the training focus to other less frequent words. 4 Experiments We have proposed a probabilistic FASTTEXT model which combines the flexibility of subword structure with the density embedding approach. In this section, we show that our probabilistic representation with subword mean vectors with the simplified energy function outperforms many word similarity baselines and provides disentangled meanings for polysemies. First, we describe the training details in Section 4.1. We provide qualitative evaluation in Section 5 4.2, showing meaningful nearest neighbors for the Gaussian embeddings, as well as the ability to capture multiple meanings by Gaussian mixtures. Our quantitative evaluation in Section 4.3 demonstrates strong performance against the baseline models FASTTEXT (Bojanowski et al., 2016) and the dictionary-level Gaussian (W2G) (Vilnis and McCallum, 2014) and Gaussian mixture embeddings (Athiwaratkun and Wilson, 2017) (W2GM). We train our models on foreign language corpuses and show competitive results on foreign word similarity benchmarks in Section 4.4. Finally, we explain the importance of the n-gram structures for semantic sharing in Section 4.5. 4.1 Training Details We train our models on both English and foreign language datasets. For English, we use the concatenation of UKWAC and WACKYPEDIA (Baroni et al., 2009) which consists of 3.376 billion words. We filter out word types that occur fewer than 5 times which results in a vocabulary size of 2,677,466. For foreign languages, we demonstrate the training of our model on French, German, and Italian text corpuses. We note that our model should be applicable for other languages as well. We use FRWAC (French), DEWAC (German), ITWAC (Italian) datasets (Baroni et al., 2009) for text corpuses, consisting of 1.634, 1.716 and 1.955 billion words respectively. We use the same threshold, filtering out words that occur less than 5 times in each corpus. We have dictionary sizes of 1.3, 2.7, and 1.4 million words for FRWAC, DEWAC, and ITWAC. We adjust the hyperparameters on the English corpus and use them for foreign languages. Note that the adjustable parameters for our models are the loss margin m in Equation 4 and the scale α in Equation 5. We search for the optimal hyperparameters in a grid m ∈{0.01, 0.1, 1, 10, 100} and α ∈{ 1 5×10−3 , 1 10−3 , 1 2×10−4 , 1 1×10−4 } on our English corpus. The hyperpameter α affects the scale of the loss function; therefore, we adjust the learning rate appropriately for each α. 
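For reference, the sketch below spells out how α enters the objective: it evaluates the spherical-covariance partial energies of Equation 5, combines them into the log expected-likelihood kernel of Equation 2, and computes the max-margin loss of Equation 4 for a single (word, context, negative) triple. This is an illustrative NumPy implementation rather than the training code; the uniform mixture weights and the toy dimensions are assumptions.

```python
import numpy as np

def partial_energy(mu_f, mu_g, alpha):
    """xi_{i,j} = -(alpha / 2) * ||mu_{f,i} - mu_{g,j}||^2  (Equation 5)."""
    diff = mu_f[:, None, :] - mu_g[None, :, :]            # (K, K, d)
    return -0.5 * alpha * np.sum(diff ** 2, axis=-1)      # (K, K)

def energy(mu_f, mu_g, alpha, p=None, q=None):
    """Log expected-likelihood kernel of Equation 2, with uniform mixture
    weights assumed unless p and q are supplied."""
    K = mu_f.shape[0]
    p = np.full(K, 1.0 / K) if p is None else p
    q = np.full(K, 1.0 / K) if q is None else q
    xi = partial_energy(mu_f, mu_g, alpha)
    m = xi.max()                                          # stabilise the log-sum-exp
    return m + np.log(np.sum(p[:, None] * q[None, :] * np.exp(xi - m)))

def margin_loss(mu_w, mu_c, mu_n, alpha, margin):
    """Equation 4: require E(w, c) to exceed E(w, n) by at least `margin`."""
    return max(0.0, margin - energy(mu_w, mu_c, alpha) + energy(mu_w, mu_n, alpha))

# toy usage: K = 2 components, d = 50 dimensions, alpha taken from the grid above
rng = np.random.default_rng(0)
w, c, n = (rng.normal(size=(2, 50)) for _ in range(3))
print(margin_loss(w, c, n, alpha=1.0 / 5e-3, margin=1.0))
```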
In particular, the learning rates used are γ = {10−4, 10−5, 10−6} for the respective α values. Other fixed hyperparameters include the number of Gaussian components K = 2, the context window length ℓ= 10 and the subsampling threshold t = 10−5. Similar to the setup in FASTTEXT, we use n-grams where n = 3, 4, 5, 6 to estimate the mean vectors. 4.2 Qualitative Evaluation - Nearest neighbors We show that our embeddings learn the word semantics well by demonstrating meaningful nearest neighbors. Table 1 shows examples of polysemous words such as rock, star, and cell. Table 1 shows the nearest neighbors of polysemous words. We note that subword embeddings prefer words with overlapping characters as nearest neighbors. For instance, “rock-y”, “rockn”, and “rock” are both close to the word “rock”. For the purpose of demonstration, we only show words with meaningful variations and omit words with small character-based variations previously mentioned. However, all words shown are in the top100 nearest words. We observe the separation in meanings for the multi-component case; for instance, one component of the word “bank” corresponds to a financial bank whereas the other component corresponds to a river bank. The single-component case also has interesting behavior. We observe that the subword embeddings of polysemous words can represent both meanings. For instance, both “lava-rock” and “rock-pop” are among the closest words to “rock”. 4.3 Word Similarity Evaluation We evaluate our embeddings on several standard word similarity datasets, namely, SL-999 (Hill et al., 2014), WS-353 (Finkelstein et al., 2002), MEN-3k (Bruni et al., 2014), MC-30 (Miller and Charles, 1991), RG-65 (Rubenstein and Goodenough, 1965), YP-130 (Yang and Powers, 2006), MTurk(-287,-771) (Radinsky et al., 2011; Halawi et al., 2012), and RW-2k (Luong et al., 2013). Each dataset contains a list of word pairs with a human score of how related or similar the two words are. We use the notation DATASET-NUM to denote the number of word pairs NUM in each evaluation set. We note that the dataset RW focuses more on infrequent words and SimLex-999 focuses on the similarity of words rather than relatedness. We also compare PFT-GM with other multi-prototype embeddings in the literature using SCWS (Huang et al., 2012), a word similarity dataset that is aimed to measure the ability of embeddings to discern multiple meanings. We calculate the Spearman correlation (Spearman, 1904) between the labels and our scores gen6 Word Co. 
Nearest Neighbors rock 0 rock:0, rocks:0, rocky:0, mudrock:0, rockscape:0, boulders:0 , coutcrops:0, rock 1 rock:1, punk:0, punk-rock:0, indie:0, pop-rock:0, pop-punk:0, indie-rock:0, band:1 bank 0 bank:0, banks:0, banker:0, bankers:0, bankcard:0, Citibank:0, debits:0 bank 1 bank:1, banks:1, river:0, riverbank:0, embanking:0, banks:0, confluence:1 star 0 stars:0, stellar:0, nebula:0, starspot:0, stars.:0, stellas:0, constellation:1 star 1 star:1, stars:1, star-star:0, 5-stars:0, movie-star:0, mega-star:0, super-star:0 cell 0 cell:0, cellular:0, acellular:0, lymphocytes:0, T-cells:0, cytes:0, leukocytes:0 cell 1 cell:1, cells:1, cellular:0, cellular-phone:0, cellphone:0, transcellular:0 left 0 left:0, right:1, left-hand:0, right-left:0, left-right-left:0, right-hand:0, leftwards:0 left 1 left:1, leaving:0, leavings:0, remained:0, leave:1, enmained:0, leaving-age:0, sadly-departed:0 Word Nearest Neighbors rock rock, rock-y, rockn, rock-, rock-funk, rock/, lava-rock, nu-rock, rock-pop, rock/ice, coral-rock bank bank-, bank/, bank-account, bank., banky, bank-to-bank, banking, Bank, bank/cash, banks.** star movie-stars, star-planet, G-star, star-dust, big-star, starsailor, 31-star, star-lit, Star, starsign, pop-stars cell cellular, tumour-cell, in-cell, cell/tumour, 11-cell, T-cell, sperm-cell, 2-cells, Cell-to-cell left left, left/joined, leaving, left,right, right, left)and, leftsided, lefted, leftside Table 1: Nearest neighbors of PFT-GM (top) and PFT-G (bottom). The notation w:i denotes the ith mixture component of the word w. D 50 300 W2G W2GM PFT-G PFT-GM FASTTEXT W2G W2GM PFT-G PFT-GM SL-999 29.35 29.31 27.34 34.13 38.03 38.84 39.62 35.85 39.60 WS-353 71.53 73.47 67.17 71.10 73.88 78.25 79.38 73.75 76.11 MEN-3K 72.58 73.55 70.61 73.90 76.37 78.40 78.76 77.78 79.65 MC-30 76.48 79.08 73.54 79.75 81.20 82.42 84.58 81.90 80.93 RG-65 73.30 74.51 70.43 78.19 79.98 80.34 80.95 77.57 79.81 YP-130 41.96 45.07 37.10 40.91 53.33 46.40 47.12 48.52 54.93 MT-287 64.79 66.60 63.96 67.65 67.93 67.74 69.65 66.41 69.44 MT-771 60.86 60.82 60.40 63.86 66.89 70.10 70.36 67.18 69.68 RW-2K 28.78 28.62 44.05 42.78 48.09 35.49 42.73 50.37 49.36 AVG. 42.32 42.76 44.35 46.47 49.28 47.71 49.54 49.86 51.10 Table 2: Spearman’s Correlation ρ × 100 on Word Similarity Datasets. erated by the embeddings. The Spearman correlation is a rank-based correlation measure that assesses how well the scores describe the true labels. The scores we use are cosine-similarity scores between the mean vectors. In the case of Gaussian mixtures, we use the pairwise maximum score: s(f, g) = max i∈1,...,K max j∈1,...,K µf,i · µg,j ||µf,i|| · ||µg,j||. (6) The pair (i, j) that achieves the maximum cosine similarity corresponds to the Gaussian component pair that is the closest in meanings. Therefore, this similarity score yields the most related senses of a given word pair. This score reduces to a cosine similarity in the Gaussian case (K = 1). 4.3.1 Comparison Against Dictionary-Level Density Embeddings and FASTTEXT We compare our models against the dictionarylevel Gaussian and Gaussian mixture embeddings in Table 2, with 50-dimensional and 300dimensional mean vectors. The 50-dimensional results for W2G and W2GM are obtained directly from Athiwaratkun and Wilson (2017). For comparison, we use the public code3 to train the 300dimensional W2G and W2GM models and the publicly available FASTTEXT model4. We calculate Spearman’s correlations for each of the word similarity datasets. 
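The scoring procedure is summarized in the sketch below, which computes the pairwise maximum cosine similarity of Equation 6 between component means and then the Spearman correlation against human judgments. The word-to-mean lookup and the toy data are assumptions of this illustration rather than part of the actual evaluation pipeline.

```python
import numpy as np
from scipy.stats import spearmanr

def max_sim(mu_f, mu_g):
    """Equation 6: maximum cosine similarity over all component pairs.
    mu_f and mu_g have shape (K, d); with K = 1 this is plain cosine similarity."""
    f = mu_f / np.linalg.norm(mu_f, axis=1, keepdims=True)
    g = mu_g / np.linalg.norm(mu_g, axis=1, keepdims=True)
    return float(np.max(f @ g.T))

def evaluate(word_pairs, human_scores, means):
    """Spearman correlation between MAXSIM scores and human similarity judgments.
    `means` is assumed to map each word to its (K, d) array of component means."""
    model_scores = [max_sim(means[w1], means[w2]) for w1, w2 in word_pairs]
    rho, _ = spearmanr(model_scores, human_scores)
    return rho

# toy example with random 2-component, 50-dimensional means
rng = np.random.default_rng(0)
means = {w: rng.normal(size=(2, 50)) for w in ["rock", "jazz", "stone", "car"]}
pairs = [("rock", "jazz"), ("rock", "stone"), ("rock", "car")]
print(evaluate(pairs, human_scores=[7.0, 8.5, 1.0], means=means))
```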
These datasets vary greatly in the number of word pairs; therefore, we mark each dataset with its size for visibil3https://github.com/benathi/word2gm 4https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki. en.zip 7 ity. For a fair and objective comparison, we calculate a weighted average of the correlation scores for each model. Our PFT-GM achieves the highest average score among all competing models, outperforming both FASTTEXT and the dictionary-level embeddings W2G and W2GM. Our unimodal model PFT-G also outperforms the dictionary-level counterpart W2G and FASTTEXT. We note that the model W2GM appears quite strong according to Table 2, beating PFT-GM on many word similarity datasets. However, the datasets that W2GM performs better than PFT-GM often have small sizes such as MC-30 or RG-65, where the Spearman’s correlations are more subject to noise. Overall, PFT-GM outperforms W2GM by 3.1% and 8.7% in 300 and 50 dimensional models. In addition, PFT-G and PFT-GM also outperform FASTTEXT by 1.2% and 3.7% respectively. 4.3.2 Comparison Against Multi-Prototype Models In Table 3, we compare 50 and 300 dimensional PFT-GM models against the multi-prototype embeddings described in Section 2 and the existing multimodal density embeddings W2GM. We use the word similarity dataset SCWS (Huang et al., 2012) which contains words with potentially many meanings, and is a benchmark for distinguishing senses. We use the maximum similarity score (Equation 6), denoted as MAXSIM. AVESIM denotes the average of the similarity scores, rather than the maximum. We outperform the dictionary-based density embeddings W2GM in both 50 and 300 dimensions, demonstrating the benefits of subword information. Our model achieves state-of-the-art results, similar to that of Neelakantan et al. (2014). 4.4 Evaluation on Foreign Language Embeddings We evaluate the foreign-language embeddings on word similarity datasets in respective languages. We use Italian WORDSIM353 and Italian SIMLEX-999 (Leviant and Reichart, 2015) for Italian models, GUR350 and GUR65 (Gurevych, 2005) for German models, and French WORDSIM353 (Finkelstein et al., 2002) for French models. For datasets GUR350 and GUR65, we use the results reported in the FASTTEXT publication (Bojanowski et al., 2016). For other datasets, we train FASTTEXT models for comparison using the Model Dim ρ × 100 HUANG AVGSIM 50 62.8 TIAN MAXSIM 50 63.6 W2GM MAXSIM 50 62.7 NEELAKANTAN AVGSIM 50 64.2 PFT-GM MAXSIM 50 63.7 CHEN-M AVGSIM 200 66.2 W2GM MAXSIM 200 65.5 NEELAKANTAN AVGSIM 300 67.2 W2GM MAXSIM 300 66.5 PFT-GM MAXSIM 300 67.2 Table 3: Spearman’s Correlation ρ × 100 on word similarity dataset SCWS. public code5 on our text corpuses. We also train dictionary-level models W2G, and W2GM for comparison. Table 4 shows the Spearman’s correlation results of our models. We outperform FASTTEXT on many word similarity benchmarks. Our results are also significantly better than the dictionary-based models, W2G and W2GM. We hypothesize that W2G and W2GM can perform better than the current reported results given proper pre-processing of words due to special characters such as accents. We investigate the nearest neighbors of polysemies in foreign languages and also observe clear sense separation. For example, piano in Italian can mean “floor” or “slow”. 
These two meanings are reflected in the nearest neighbors where one component is close to piano-piano, pianod which mean “slowly” whereas the other component is close to piani (floors), istrutturazione (renovation) or infrastruttre (infrastructure). Table 5 shows additional results, demonstrating that the disentangled semantics can be observed in multiple languages. 4.5 Qualitative Evaluation - Subword Decomposition One of the motivations for using subword information is the ability to handle out-of-vocabulary words. Another benefit is the ability to help improve the semantics of rare words via subword sharing. Due to an observation that text corpuses follow Zipf’s power law (Zipf, 1949), words at the tail of the occurrence distribution appears much 5https://github.com/facebookresearch/fastText.git 8 Lang. Evaluation FASTTEXT w2g w2gm pft-g pft-gm FR WS353 38.2 16.73 20.09 41.0 41.3 DE GUR350 70 65.01 69.26 77.6 78.2 GUR65 81 74.94 76.89 81.8 85.2 IT WS353 57.1 56.02 61.09 60.2 62.5 SL-999 29.3 29.44 34.91 29.3 33.7 Table 4: Word similarity evaluation on foreign languages. Word Meaning Nearest Neighbors (IT) secondo 2nd Secondo (2nd), terzo (3rd) , quinto (5th), primo (first), quarto (4th), ultimo (last) (IT) secondo according to conformit (compliance), attenendosi (following), cui (which), conformemente (accordance with) (IT) porta lead, bring portano (lead), conduce (leads), portano, porter, portando (bring), costringe (forces) (IT) porta door porte (doors), finestrella (window), finestra (window), portone (doorway), serratura (door lock) (FR) voile veil voiles (veil), voiler (veil), voilent (veil), voilement, foulard (scarf), voils (veils), voilant (veiling) (FR) voile sail catamaran (catamaran), driveur (driver), nautiques (water), Voile (sail), driveurs (drivers) (FR) temps weather brouillard (fog), orageuses (stormy), nuageux (cloudy) (FR) temps time mi-temps (half-time), partiel (partial), Temps (time), annualis (annualized), horaires (schedule) (FR) voler steal envoler (fly), voleuse (thief), cambrioler (burgle), voleur (thief), violer (violate), picoler (tipple) (FR) voler fly airs (air), vol (flight), volent (fly), envoler (flying), atterrir (land) Table 5: Nearest neighbors of polysemies based on our foreign language PFT-GM models. less frequently. Training these words to have a good semantic representation is challenging if done at the word level alone. However, an ngram such as ‘abnorm’ is trained during both occurrences of “abnormal” and “abnormality” in the corpus, hence further augments both words’s semantics. Figure 3 shows the contribution of n-grams to the final representation. We filter out to show only the n-grams with the top-5 and bottom-5 similarity scores. We observe that the final representations of both words align with n-grams “abno”, “bnor”, “abnorm”, “anbnor”, “<abn”. In fact, both “abnormal” and “abnormality” share the same top-5 n-grams. Due to the fact that many rare words such as “autobiographer”, “circumnavigations”, or “hypersensitivity” are composed from many common sub-words, the n-gram structure can help improve the representation quality. 5 Numbers of Components It is possible to train our approach with K > 2 mixture components; however, Athiwaratkun and Wilson (2017) observe that dictionary-level Gaussian mixtures with K = 3 do not overall improve word similarity results, even though these mixtures can discover 3 distinct senses for certain words. 
Indeed, while K > 2 in principle allows for greater flexibility than K = 2, most words can be very flexibly modelled with a mixture of two Figure 3: Contribution of each n-gram vector to the final representation for word “abnormal” (top) and “abnormality” (bottom). The x-axis is the cosine similarity between each n-gram vector z(w) g and the final vector µw. Gaussians, leading to K = 2 representing a good balance between flexibility and Occam’s razor. Even for words with single meanings, our PFT model with K = 2 often learns richer representations than a K = 1 model. For example, the two mixture components can learn to cluster to9 gether to form a more heavy tailed unimodal distribution which captures a word with one dominant meaning but with close relationships to a wide range of other words. In addition, we observe that our model with K components can capture more than K meanings. For instance, in K = 1 model, the word pairs (“cell”, “jail”) and (“cell”, “biology”) and (“cell”, “phone”) will all have positive similarity scores based on K = 1 model. In general, if a word has multiple meanings, these meanings are usually compressed into the linear substructure of the embeddings (Arora et al., 2016). However, the pairs of non-dominant words often have lower similarity scores, which might not accurately reflect their true similarities. 6 Conclusion and Future Work We have proposed models for probabilistic word representations equipped with flexible sub-word structures, suitable for rare and out-of-vocabulary words. The proposed probabilistic formulation incorporates uncertainty information and naturally allows one to uncover multiple meanings with multimodal density representations. Our models offer better semantic quality, outperforming competing models on word similarity benchmarks. Moreover, our multimodal density models can provide interpretable and disentangled representations, and are the first multi-prototype embeddings that can handle rare words. Future work includes an investigation into the trade-off between learning full covariance matrices for each word distribution, computational complexity, and performance. This direction can potentially have a great impact on tasks where the variance information is crucial, such as for hierarchical modeling with probability distributions (Athiwaratkun and Wilson, 2018). Other future work involves co-training PFT on many languages. Currently, existing work on multi-lingual embeddings align the word semantics on pre-trained vectors (Smith et al., 2017), which can be suboptimal due to polysemies. We envision that the multi-prototype nature can help disambiguate words with multiple meanings and facilitate semantic alignment. References Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2016. Linear algebraic structure of word senses, with applications to polysemy. CoRR abs/1601.03764. http://arxiv.org/abs/1601.03764. Ben Athiwaratkun and Andrew Gordon Wilson. 2017. Multimodal word distributions. In ACL. https://arxiv.org/abs/1704.08424. Ben Athiwaratkun and Andrew Gordon Wilson. 2018. On modeling hierarchical data via probabilistic order embeddings. ICLR . Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The wacky wide web: a collection of very large linguistically processed web-crawled corpora. Language Resources and Evaluation 43(3):209–226. https://doi.org/10.1007/s10579-009-9081-4. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. 
A neural probabilistic language model. Journal of Machine Learning Research 3:1137–1155. http://www.jmlr.org/papers/v3/bengio03a.html. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. CoRR abs/1607.04606. http://arxiv.org/abs/1607.04606. Elia Bruni, Nam Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. J. Artif. Int. Res. 49(1):1–47. http://dl.acm.org/citation.cfm?id=2655713.2655714. Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense representation and disambiguation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 2529, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL. pages 1025–1035. http://aclweb.org/anthology/D/D14/D14-1110.pdf. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: deep neural networks with multitask learning. In Machine Learning, Proceedings of the TwentyFifth International Conference (ICML 2008), Helsinki, Finland, June 5-9, 2008. pages 160–167. http://doi.acm.org/10.1145/1390156.1390177. John C. Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12:2121–2159. http://dl.acm.org/citation.cfm?id=2021068. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2002. Placing search in context: the concept revisited. ACM Trans. Inf. Syst. 20(1):116–131. http://doi.acm.org/10.1145/503104.503110. 10 Iryna Gurevych. 2005. Using the structure of a conceptual network in computing semantic relatedness. In Natural Language Processing - IJCNLP 2005, Second International Joint Conference, Jeju Island, Korea, October 11-13, 2005, Proceedings. pages 767– 778. Guy Halawi, Gideon Dror, Evgeniy Gabrilovich, and Yehuda Koren. 2012. Large-scale learning of word relatedness with constraints. In The 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’12, Beijing, China, August 12-16, 2012. pages 1406–1414. http://doi.acm.org/10.1145/2339530.2339751. Felix Hill, Roi Reichart, and Anna Korhonen. 2014. Simlex-999: Evaluating semantic models with (genuine) similarity estimation. CoRR abs/1408.3456. http://arxiv.org/abs/1408.3456. Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In The 50th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, July 8-14, 2012, Jeju Island, Korea - Volume 1: Long Papers. pages 873–882. http://www.aclweb.org/anthology/P12-1092. Tony Jebara, Risi Kondor, and Andrew Howard. 2004. Probability product kernels. Journal of Machine Learning Research 5:819–844. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2016. Character-aware neural language models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 1217, 2016, Phoenix, Arizona, USA.. pages 2741– 2749. Onur Kuru, Ozan Arkan Can, and Deniz Yuret. 2016. Charner: Character-level named entity recognition. In COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan. pages 911–921. http://aclweb.org/anthology/C/C16/C16-1087.pdf. Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2017. 
Fully character-level neural machine translation without explicit segmentation. TACL 5:365–378. https://transacl.org/ojs/index.php/tacl/article/view/1051. Ira Leviant and Roi Reichart. 2015. Judgment language matters: Multilingual vector space models for judgment language aware lexical semantics. CoRR abs/1508.00106. http://arxiv.org/abs/1508.00106. Minh-Thang Luong, Richard Socher, and Christopher D. Manning. 2013. Better word representations with recursive neural networks for morphology. In CoNLL. Sofia, Bulgaria. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. CoRR abs/1301.3781. http://arxiv.org/abs/1301.3781. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013b. Efficient estimation of word representations in vector space. CoRR abs/1301.3781. http://arxiv.org/abs/1301.3781. Tomas Mikolov, Stefan Kombrink, Luk´as Burget, Jan Cernock´y, and Sanjeev Khudanpur. 2011. Extensions of recurrent neural network language model. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2011, May 22-27, 2011, Prague Congress Center, Prague, Czech Republic. pages 5528–5531. https://doi.org/10.1109/ICASSP.2011.5947611. George A. Miller and Walter G. Charles. 1991. Contextual Correlates of Semantic Similarity. Language & Cognitive Processes 6(1):1–28. https://doi.org/10.1080/01690969108406936. Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2014. Efficient nonparametric estimation of multiple embeddings per word in vector space. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL. pages 1059–1069. http://aclweb.org/anthology/D/D14/D14-1113.pdf. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL. pages 1532–1543. http://aclweb.org/anthology/D/D14/D14-1162.pdf. Kira Radinsky, Eugene Agichtein, Evgeniy Gabrilovich, and Shaul Markovitch. 2011. A word at a time: Computing word relatedness using temporal semantic analysis. In Proceedings of the 20th International Conference on World Wide Web. WWW ’11, pages 337–346. http://doi.acm.org/10.1145/1963405.1963455. Herbert Rubenstein and John B. Goodenough. 1965. Contextual correlates of synonymy. Commun. ACM 8(10):627–633. http://doi.acm.org/10.1145/365628.365657. Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. CoRR abs/1702.03859. http://arxiv.org/abs/1702.03859. C. Spearman. 1904. The proof and measurement of association between two things. American Journal of Psychology 15:88–103. 11 Fei Tian, Hanjun Dai, Jiang Bian, Bin Gao, Rui Zhang, Enhong Chen, and Tie-Yan Liu. 2014. A probabilistic model for learning multi-prototype word embeddings. In COLING 2014, 25th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, August 23-29, 2014, Dublin, Ireland. pages 151–160. http://aclweb.org/anthology/C/C14/C14-1016.pdf. Luke Vilnis and Andrew McCallum. 2014. Word representations via gaussian embedding. CoRR abs/1412.6623. 
http://arxiv.org/abs/1412.6623. Dongqiang Yang and David M. W. Powers. 2006. Verb similarity on the taxonomy of WordNet. In the 3rd International WordNet Conference (GWC-06), Jeju Island, Korea. Shenjian Zhao and Zhihua Zhang. 2016. An efficient character-level neural machine translation. CoRR abs/1608.04738. http://arxiv.org/abs/1608.04738. G.K. Zipf. 1949. Human behavior and the principle of least effort: an introduction to human ecology. Addison-Wesley Press. https://books.google.com/books?id=1tx9AAAAIAAJ.
2018
1
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 97–109 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 97 Hierarchical Losses and New Resources for Fine-grained Entity Typing and Linking Shikhar Murty* UMass Amherst [email protected] Patrick Verga* UMass Amherst [email protected] Luke Vilnis UMass Amherst [email protected] Irena Radovanovic Chan Zuckerberg Initiative [email protected] Andrew McCallum UMass Amherst [email protected] Abstract Extraction from raw text to a knowledge base of entities and fine-grained types is often cast as prediction into a flat set of entity and type labels, neglecting the rich hierarchies over types and entities contained in curated ontologies. Previous attempts to incorporate hierarchical structure have yielded little benefit and are restricted to shallow ontologies. This paper presents new methods using real and complex bilinear mappings for integrating hierarchical information, yielding substantial improvement over flat predictions in entity linking and fine-grained entity typing, and achieving new state-of-the-art results for end-to-end models on the benchmark FIGER dataset. We also present two new human-annotated datasets containing wide and deep hierarchies which we will release to the community to encourage further research in this direction: MedMentions, a collection of PubMed abstracts in which 246k mentions have been mapped to the massive UMLS ontology; and TypeNet, which aligns Freebase types with the WordNet hierarchy to obtain nearly 2k entity types. In experiments on all three datasets we show substantial gains from hierarchy-aware training. 1 Introduction Identifying and understanding entities is a central component in knowledge base construction (Roth et al., 2015) and essential for enhancing downstream tasks such as relation extraction *equal contribution Data and code for experiments: https://github. com/MurtyShikhar/Hierarchical-Typing (Yaghoobzadeh et al., 2017b), question answering (Das et al., 2017; Welbl et al., 2017) and search (Dalton et al., 2014). This has led to considerable research in automatically identifying entities in text, predicting their types, and linking them to existing structured knowledge sources. Current state-of-the-art models encode a textual mention with a neural network and classify the mention as being an instance of a fine grained type or entity in a knowledge base. Although in many cases the types and their entities are arranged in a hierarchical ontology, most approaches ignore this structure, and previous attempts to incorporate hierarchical information yielded little improvement in performance (Shimaoka et al., 2017). Additionally, existing benchmark entity typing datasets only consider small label sets arranged in very shallow hierarchies. For example, FIGER (Ling and Weld, 2012), the de facto standard fine grained entity type dataset, contains only 113 types in a hierarchy only two levels deep. In this paper we investigate models that explicitly integrate hierarchical information into the embedding space of entities and types, using a hierarchy-aware loss on top of a deep neural network classifier over textual mentions. By using this additional information, we learn a richer, more robust representation, gaining statistical efficiency when predicting similar concepts and aiding the classification of rarer types. 
We first validate our methods on the narrow, shallow type system of FIGER, out-performing state-of-the-art methods not incorporating hand-crafted features and matching those that do. To evaluate on richer datasets and stimulate further research into hierarchical entity/typing prediction with larger and deeper ontologies, we introduce two new human annotated datasets. The first is MedMentions, a collection of PubMed ab98 stracts in which 246k concept mentions have been annotated with links to the Unified Medical Language System (UMLS) ontology (Bodenreider, 2004), an order of magnitude more annotations than comparable datasets. UMLS contains over 3.5 million concepts in a hierarchy having average depth 14.4. Interestingly, UMLS does not distinguish between types and entities (an approach we heartily endorse), and the technical details of linking to such a massive ontology lead us to refer to our MedMentions experiments as entity linking. Second, we present TypeNet, a curated mapping from the Freebase type system into the WordNet hierarchy. TypeNet contains over 1900 types with an average depth of 7.8. In experimental results, we show improvements with a hierarchically-aware training loss on each of the three datasets. In entity-linking MedMentions to UMLS, we observe a 6% relative increase in accuracy over the base model. In experiments on entity-typing from Wikipedia into TypeNet, we show that incorporating the hierarchy of types and including a hierarchical loss provides a dramatic 29% relative increase in MAP. Our models even provide benefits for shallow hierarchies allowing us to match the state-of-art results of Shimaoka et al. (2017) on the FIGER (GOLD) dataset without requiring hand-crafted features. We will publicly release the TypeNet and MedMentions datasets to the community to encourage further research in truly fine-grained, hierarchical entity-typing and linking. 2 New Corpora and Ontologies 2.1 MedMentions Over the years researchers have constructed many large knowledge bases in the biomedical domain (Apweiler et al., 2004; Davis et al., 2008; Chatraryamontri et al., 2017). Many of these knowledge bases are specific to a particular sub-domain encompassing a few particular types such as genes and diseases (Pi˜nero et al., 2017). UMLS (Bodenreider, 2004) is particularly comprehensive, containing over 3.5 million concepts (UMLS does not distinguish between entities and types) defining their relationships and a curated hierarchical ontology. For example LETM1 Protein IS-A Calcium Binding Protein IS-A Binding Protein IS-A Protein IS-A Genome Encoded Entity. This fact makes UMLS particularly well suited for methods explicitly exploiting hierarchical structure. Accurately linking textual biological entity mentions to an existing knowledge base is extremely important but few richly annotated resources are available. Even when resources do exist, they often contain no more than a few thousand annotated entity mentions which is insufficient for training state-of-the-art neural network entity linkers. State-of-the-art methods must instead rely on string matching between entity mentions and canonical entity names (Leaman et al., 2013; Wei et al., 2015; Leaman and Lu, 2016). To address this, we constructed MedMentions, a new, large dataset identifying and linking entity mentions in PubMed abstracts to specific UMLS concepts. Professional annotators exhaustively annotated UMLS entity mentions from 3704 PubMed abstracts, resulting in 246,000 linked mention spans. 
The average depth in the hierarchy of a concept from our annotated set is 14.4 and the maximum depth is 43. MedMentions contains an order of magnitude more annotations than similar biological entity linking PubMed datasets (Do˘gan et al., 2014; Wei et al., 2015; Li et al., 2016). Additionally, these datasets contain annotations for only one or two entity types (genes or chemicals and disease etc.). MedMentions instead contains annotations for a wide diversity of entities linking to UMLS. Statistics for several other datasets are in Table 1 and further statistics are in 2. Dataset mentions unique entities MedMentions 246,144 25,507 BCV-CDR 28,797 2,356 NCBI Disease 6,892 753 BCII-GN Train 6,252 1,411 NLM Citation GIA 1,205 310 Table 1: Statistics from various biological entity linking data sets from scientific articles. NCBI Disease (Do˘gan et al., 2014) focuses exclusively on disease entities. BCV-CDR (Li et al., 2016) contains both chemicals and diseases. BCII-GN and NLM (Wei et al., 2015) both contain genes. Statistic Train Dev Test #Abstracts 2,964 370 370 #Sentences 28,457 3,497 3,268 #Mentions 199,977 24,026 22,141 #Entities 22,416 5,934 5,521 Table 2: MedMentions statistics. 99 2.2 TypeNet TypeNet is a new dataset of hierarchical entity types for extremely fine-grained entity typing. TypeNet was created by manually aligning Freebase types (Bollacker et al., 2008) to noun synsets from the WordNet hierarchy (Fellbaum, 1998), naturally producing a hierarchical type set. To construct TypeNet, we first consider all Freebase types that were linked to more than 20 entities. This is done to eliminate types that are either very specific or very rare. We also remove all Freebase API types, e.g. the [/freebase, /dataworld, /schema, /atom, /scheme, and /topics] domains. For each remaining Freebase type, we generate a list of candidate WordNet synsets through a substring match. An expert annotator then attempted to map the Freebase type to one or more synsets in the candidate list with a parent-of, child-of or equivalence link by comparing the definitions of each synset with example entities of the Freebase type. If no match was found, the annotator manually formulated queries for the online WordNet API until an appropriate synset was found. See Table 9 for an example annotation. Two expert annotators independently aligned each Freebase type before meeting to resolve any conflicts. The annotators were conservative with assigning equivalence links resulting in a greater number of child-of links. The final dataset contained 13 parent-of, 727 child-of, and 380 equivalence links. Note that some Freebase types have multiple child-of links to WordNet, making TypeNet, like WordNet, a directed acyclic graph. We then took the union of each of our annotated Freebase types, the synset that they linked to, and any ancestors of that synset. Typeset Count Depth Gold KB links CoNLL-YAGO 4 1 Yes OntoNotes 5.0 19 1 No Gillick et al. (2014) 88 3 Yes Figer 112 2 Yes Hyena 505 9 No Freebase 2k 2 Yes WordNet 16k 14 No TypeNet* 1,941 14 Yes Table 3: Statistics from various type sets. TypeNet is the largest type hierarchy with a gold mapping to KB entities. *The entire WordNet could be added to TypeNet increasing the total size to 17k types. We also added an additional set of 614 FB →FB links 4. This was done by computing conditional probabilities of Freebase types given other Freebase types from a collection of 5 million randomly chosen Freebase entities. 
The conditional probability P(t2 | t1) of a Freebase type t2 given another Freebase type t1 was calculated as #(t1,t2) #t1 . Links with a conditional probability less than or equal to 0.7 were discarded. The remaining links were manually verified by an expert annotator and valid links were added to the final dataset, preserving acyclicity. Freebase Types 1081 WordNet Synsets 860 child-of links 727 equivalence links 380 parent-of links 13 Freebase-Freebase links 614 Table 4: Stats for the final TypeNet dataset. childof, parent-of, and equivalence links are from Freebase types →WordNet synsets. 3 Model 3.1 Background: Entity Typing and Linking We define a textual mention m as a sentence with an identified entity. The goal is then to classify m with one or more labels. For example, we could take the sentence m = “Barack Obama is the President of the United States.” with the identified entity string Barack Obama. In the task of entity linking, we want to map m to a specific entity in a knowledge base such as “m/02mjmr” in Freebase. In mention-level typing, we label m with one or more types from our type system T such as tm = {president, leader, politician} (Ling and Weld, 2012; Gillick et al., 2014; Shimaoka et al., 2017). In entity-level typing, we instead consider a bag of mentions Be which are all linked to the same entity. We label Be with te, the set of all types expressed in all m ∈Be (Yao et al., 2013; Neelakantan and Chang, 2015; Verga et al., 2017; Yaghoobzadeh et al., 2017a). 3.2 Mention Encoder Our model converts each mention m to a d dimensional vector. This vector is used to classify the type or entity of the mention. The basic model depicted in Figure 1 concatenates the averaged word embeddings of the mention string with the output of a convolutional neural network (CNN). The 100 Barack Obama is the president of the USA Mean Max Pool MLP CNN Figure 1: Sentence encoder for all our models. The input to the CNN consists of the concatenation of position embeddings with word embeddings. The output of the CNN is concatenated with the mean of mention surface form embeddings, and then passed through a 2 layer MLP. word embeddings of the mention string capture global, context independent semantics while the CNN encodes a context dependent representation. 3.2.1 Token Representation Each sentence is made up of s tokens which are mapped to dw dimensional word embeddings. Because sentences may contain mentions of more than one entity, we explicitly encode a distinguished mention in the text using position embeddings which have been shown to be useful in state of the art relation extraction models (dos Santos et al., 2015; Lin et al., 2016) and machine translation (Vaswani et al., 2017). Each word embedding is concatenated with a dp dimensional learned position embedding encoding the token’s relative distance to the target entity. Each token within the distinguished mention span has position 0, tokens to the left have a negative distance from [−s, 0), and tokens to the right of the mention span have a positive distance from (0, s]. We denote the final sequence of token representations as M. 3.2.2 Sentence Representation The embedded sequence M is then fed into our context encoder. Our context encoder is a single layer CNN followed by a tanh non-linearity to produce C. The outputs are max pooled across time to get a final context embedding, mCNN. 
ci = tanh(b + w X j=0 W[j]M[i −⌊w 2 ⌋+ j]) mCNN = max 0≤i≤n−w+1 ci Each W[j] ∈Rd×d is a CNN filter, the bias b ∈ Rd, M[i] ∈Rd is a token representation, and the max is taken pointwise. In all of our experiments we set w = 5. In addition to the contextually encoded mention, we create a global mention encoding, mG, by averaging the word embeddings of the tokens within the mention span. The final mention representation mF is constructed by concatenating mCNN and mG and applying a two layer feed-forward network with tanh non-linearity (see Figure 1): mF = W2 tanh(W1 mSFM mCNN  + b1) + b2 4 Training 4.1 Mention-Level Typing Mention level entity typing is treated as multilabel prediction. Given the sentence vector mF, we compute a score for each type in typeset T as: yj = tj⊤mF where tj is the embedding for the jth type in T and yj is its corresponding score. The mention is labeled with tm, a binary vector of all types where tm j = 1 if the jth type is in the set of gold types for m and 0 otherwise. We optimize a multi-label binary cross entropy objective: Ltype(m) = − X j tm j log yj + (1 −tm j ) log(1 −yj) 4.2 Entity-Level Typing In the absence of mention-level annotations, we instead must rely on distant supervision (Mintz et al., 2009) to noisily label all mentions of entity e with all types belonging to e. This procedure inevitably leads to noise as not all mentions of an entity express each of its known types. To alleviate this noise, we use multi-instance multi-label learning (MIML) (Surdeanu et al., 2012) which operates over bags rather than mentions. A bag of mentions Be = {m1, m2, . . . , mn} is the set of 101 all mentions belonging to entity e. The bag is labeled with te, a binary vector of all types where te j = 1 if the jth type is in the set of gold types for e and 0 otherwise. For every entity, we subsample k mentions from its bag of mentions. Each mention is then encoded independently using the model described in Section 3.2 resulting in a bag of vectors. Each of the k sentence vectors mi F is used to compute a score for each type in te: yi j = tj⊤mi F where tj is the embedding for the jth type in te and yi is a vector of logits corresponding to the ith mention. The final bag predictions are obtained using element-wise LogSumExp pooling across the k logit vectors in the bag to produce entity level logits y: y = log X i exp(yi) We use these final bag level predictions to optimize a multi-label binary cross entropy objective: Ltype(Be) = − X j te j log yj + (1 −te j) log(1 −yj) 4.3 Entity Linking Entity linking is similar to mention-level entity typing with a single correct class per mention. Because the set of possible entities is in the millions, linking models typically integrate an alias table mapping entity mentions to a set of possible candidate entities. Given a large corpus of entity linked data, one can compute conditional probabilities from mention strings to entities (Spitkovsky and Chang, 2012). In many scenarios this data is unavailable. However, knowledge bases such as UMLS contain a canonical string name for each of its curated entities. State-of-the-art biological entity linking systems tend to operate on various string edit metrics between the entity mention string and the set of canonical entity strings in the existing structured knowledge base (Leaman et al., 2013; Wei et al., 2015). For each mention in our dataset, we generate 100 candidate entities ec = (e1, e2, . . . , e100) each with an associated string similarity score csim. 
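As a brief illustration of the typing objectives above (Sections 4.1–4.2), the sketch below pools a bag of pre-computed mention encodings with LogSumExp and applies the multi-label binary cross entropy. It is a simplified reading rather than the actual training code; applying a sigmoid to the pooled logits, as well as the toy bag and type counts, are assumptions made here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def entity_level_typing_loss(mention_vecs, type_embs, gold):
    """Multi-instance multi-label objective of Section 4.2, assuming the k mention
    vectors m_F have already been produced by the sentence encoder.
    mention_vecs: (k, d) bag of encoded mentions for one entity
    type_embs:    (T, d) type embeddings
    gold:         (T,)   binary vector of the entity's distantly supervised types"""
    logits = mention_vecs @ type_embs.T                   # (k, T); y^i_j = t_j . m^i_F
    m = logits.max(axis=0)                                # stable LogSumExp over the bag
    bag_logits = m + np.log(np.sum(np.exp(logits - m), axis=0))
    probs = sigmoid(bag_logits)                           # sigmoid is an assumption here
    eps = 1e-12
    return -np.sum(gold * np.log(probs + eps) + (1.0 - gold) * np.log(1.0 - probs + eps))

# toy bag: k = 3 mentions, d = 8, T = 5 types, entity labelled with types 0 and 3
rng = np.random.default_rng(0)
loss = entity_level_typing_loss(
    mention_vecs=rng.normal(size=(3, 8)),
    type_embs=rng.normal(size=(5, 8)),
    gold=np.array([1, 0, 0, 1, 0], dtype=float),
)
print(loss)
```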
See Appendix A.5.1 for more details on candidate generation. We generate the sentence representation mF using our encoder and compute a similarity score between mF and the learned embedding e of each of the candidate entities. This score and string cosine similarity csim are combined via a learned linear combination to generate our final score. The final prediction at test time ˆe is the maximally similar entity to the mention. φ(m, e) = α e⊤mF + β csim(m, e) ˆe = argmax e∈ec φ(m, e) We optimize this model by multinomial cross entropy over the set of candidate entities and correct entity e. Llink(m, ec) = −φ(m, e) + log X e′∈ec exp φ(m, e′) 5 Encoding Hierarchies Both entity typing and entity linking treat the label space as prediction into a flat set. To explicitly incorporate the structure between types/entities into our training, we add an additional loss. We consider two methods for modeling the hierarchy of the embedding space: real and complex bilinear maps, which are two of the state-of-the-art knowledge graph embedding models. 5.1 Hierarchical Structure Models Bilinear: Our standard bilinear model scores a hypernym link between (c1, c2) as: s(c1, c2) = c1⊤Ac2 where A ∈Rd×d is a learned real-valued nondiagonal matrix and c1 is the child of c2 in the hierarchy. This model is equivalent to RESCAL (Nickel et al., 2011) with a single IS-A relation type. The type embeddings are the same whether used on the left or right side of the relation. We merge this with the base model by using the parameter A as an additional map before type/entity scoring. Complex Bilinear: We also experiment with a complex bilinear map based on the ComplEx model (Trouillon et al., 2016), which was shown to have strong performance predicting the hypernym relation in WordNet, suggesting suitability for asymmetric, transitive relations such as those in our type hierarchy. ComplEx uses complex valued vectors for types, and diagonal complex matrices for relations, using Hermitian inner products (taking the complex conjugate of the second argument, equivalent to treating the right-hand-side 102 type embedding to be the complex conjugate of the left hand side), and finally taking the real part of the score1. The score of a hypernym link between (c1, c2) in the ComplEx model is defined as: s(c1, c2) = Re(< c1, rIS-A, c2 >) = Re( X k c1krk¯c2k) = ⟨Re(c1), Re(rIS-A), Re(c2)⟩ + ⟨Re(c1), Im(rIS-A), Im(c2)⟩ + ⟨Im(c1), Re(rIS-A), Im(c2)⟩ −⟨Im(c1), Im(rIS-A), Re(c2)⟩ where c1, c2 and rIS-A are complex valued vectors representing c1, c2 and the IS-A relation respectively. Re(z) represents the real component of z and Im(z) is the imaginary component. As noted in Trouillon et al. (2016), the above function is antisymmetric when rIS-A is purely imaginary. Since entity/type embeddings are complex vectors, in order to combine it with our base model, we also need to represent mentions with complex vectors for scoring. To do this, we pass the output of the mention encoder through two different affine transformations to generate a real and imaginary component: Re(mF) = WrealmF + breal Im(mF) = WimgmF + bimg where mF is the output of the mention encoder, and Wreal, Wimg ∈Rd×d and breal, bimg ∈Rd . 5.2 Training with Hierarchies Learning a hierarchy is analogous to learning embeddings for nodes of a knowledge graph with a single hypernym/IS-A relation. To train these embeddings, we sample (c1, c2) pairs, where each pair is a positive link in our hierarchy. For each positive link, we sample a set N of n negative links. 
We encourage the model to output high scores for positive links and low scores for negative links via a binary cross entropy (BCE) loss:

$$\mathcal{L}_{struct} = -\Big(\log \sigma(s(c_{1i}, c_{2i})) + \sum_{N} \log\big(1 - \sigma(s(c_{1i}, c'_{2i}))\big)\Big) \qquad \mathcal{L} = \mathcal{L}_{type/link} + \gamma \mathcal{L}_{struct}$$

where $s(c_1, c_2)$ is the score of a link $(c_1, c_2)$, and $\sigma(\cdot)$ is the logistic sigmoid. The weighting parameter $\gamma \in \{0.1, 0.5, 0.8, 1, 2.0, 4.0\}$. The final loss function that we optimize is $\mathcal{L}$.

¹This step makes the scoring function technically not bilinear, as it commutes with addition but not complex multiplication, but we term it bilinear for ease of exposition.

6 Experiments

We perform three sets of experiments: mention-level entity typing on the benchmark dataset FIGER, entity-level typing using Wikipedia and TypeNet, and entity linking using MedMentions.

6.1 Models

CNN: Each mention is encoded using the model described in Section 3.2. The resulting embedding is used for classification into a flat set of labels. Specific implementation details can be found in Appendix A.2.

CNN+Complex: The CNN+Complex model is equivalent to the CNN model but uses complex embeddings and Hermitian dot products.

Transitive: This model does not add an additional hierarchical loss to the training objective (unless otherwise stated). We add additional labels to each entity corresponding to the transitive closure, or the union of all ancestors of its known types. This provides a rich additional learning signal that greatly improves classification of specific types.

Hierarchy: These models add an explicit hierarchical loss to the training objective, as described in Section 5, using either complex or real-valued bilinear mappings, and the associated parameter sharing.

6.2 Mention-Level Typing in FIGER

To evaluate the efficacy of our methods, we first compare against the current state-of-the-art models of Shimaoka et al. (2017). The most widely used type system for fine-grained entity typing is FIGER, which consists of 113 types organized in a 2-level hierarchy. For training, we use the publicly available W2M data (Ren et al., 2016) and optimize the mention typing loss function defined in Section 4.1, with the additional hierarchical loss where specified. For evaluation, we use the manually annotated FIGER (GOLD) data by Ling and Weld (2012). See Appendix A.2 and A.3 for specific implementation details.

6.2.1 Results

In Table 5 we see that our base CNN models (CNN and CNN+Complex) match the LSTM models of Shimaoka et al. (2017) and Gupta et al. (2017), the previous state of the art for models without hand-crafted features. When incorporating structure into our models, we gain 2.5 points of accuracy in our CNN+Complex model, matching the overall state-of-the-art attentive LSTM that relied on hand-crafted features from syntactic parses, topic models, and character n-grams. The structure helps our model predict lower-frequency types, a role similar to that played by hand-crafted features.

Model                     Acc    Macro F1   Micro F1
Ling and Weld (2012)      47.4   69.2       65.5
Shimaoka et al. (2017)†   55.6   75.1       71.7
Gupta et al. (2017)†      57.7   72.8       72.1
Shimaoka et al. (2017)‡   59.6   78.9       75.3
CNN                       57.0   75.0       72.2
  + hierarchy             58.4   76.3       73.6
CNN+Complex               57.2   75.3       72.9
  + hierarchy             59.7   78.3       75.4

Table 5: Accuracy and Macro/Micro F1 on FIGER (GOLD). † is an LSTM model. ‡ is an attentive LSTM along with additional hand-crafted features.

6.3 Entity-Level Typing in TypeNet

Next we evaluate our models on entity-level typing in TypeNet using Wikipedia. For each entity, we follow the procedure outlined in Section 4.2.
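A minimal sketch of this structural loss with sampled negatives is shown below, under the assumption (consistent with the prose) that both the positive and the negative terms are negated in the BCE; score_fn stands for either the real or the complex bilinear scorer, and gamma is the weighting parameter described above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def struct_loss(score_fn, pos_pair, neg_pairs):
    """BCE over one positive hypernym link and its sampled negative links."""
    c1, c2 = pos_pair
    loss = -np.log(sigmoid(score_fn(c1, c2)))
    for c1n, c2n in neg_pairs:
        loss -= np.log(1.0 - sigmoid(score_fn(c1n, c2n)))
    return loss

def total_loss(task_loss, struct, gamma=0.5):
    """L = L_type/link + gamma * L_struct; gamma is tuned on the development set."""
    return task_loss + gamma * struct
```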
We predict labels for each instance in the entity's bag and aggregate them into entity-level predictions using LogSumExp pooling. Each type is assigned a predicted score by the model. We then rank these scores and calculate average precision for each of the types in the test set, and use these scores to calculate mean average precision (MAP). We evaluate using MAP instead of accuracy, which is standard in large knowledge base link prediction tasks (Verga et al., 2017; Trouillon et al., 2016). These scores are calculated only over Freebase types, which tend to be lower in the hierarchy. This is to avoid artificial score inflation caused by trivial predictions such as 'entity.' See Appendix A.4 for more implementation details.

6.3.1 Results

Table 6 shows the results for entity-level typing on our Wikipedia TypeNet dataset. We see that both the basic CNN and the CNN+Complex models perform similarly, with the CNN+Complex model doing slightly better in the full data regime. We also see that both models get an improvement when adding an explicit hierarchy loss, even before adding in the transitive closure. The transitive closure itself gives an additional increase in performance to both models. In both of these cases, the basic CNN model improves by a greater amount than CNN+Complex. This could be a result of the complex embeddings being more difficult to optimize and therefore more susceptible to variations in hyperparameters. When adding in both the transitive closure and the explicit hierarchy loss, the performance improves further. We observe similar trends when training our models in a lower data regime with ~150,000 examples, or about 5% of the total data. In all cases, we note that the baseline models that do not incorporate any hierarchical information (neither the transitive closure nor the hierarchy loss) perform ~9 MAP worse, demonstrating the benefits of incorporating structure information.

Model                        Low Data   Full Data
CNN                          51.72      68.15
  + hierarchy                54.82      75.56
  + transitive               57.68      77.21
  + hierarchy + transitive   58.74      78.59
CNN+Complex                  50.51      69.83
  + hierarchy                55.30      72.86
  + transitive               53.71      72.18
  + hierarchy + transitive   58.81      77.21

Table 6: MAP of entity-level typing in Wikipedia data using TypeNet. The second column shows results using 5% of the total data. The last column shows results using the full set of 344,246 entities.
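For reference, the MAP values reported in Table 6 can be computed with a sketch like the following (our own illustration, not the authors' evaluation script): one average-precision value per Freebase type, averaged over types.

```python
import numpy as np

def average_precision(scores, labels):
    """AP of one type: rank entities by score and average precision at each relevant hit."""
    order = np.argsort(-np.asarray(scores))
    hits, precisions = 0, []
    for rank, idx in enumerate(order, start=1):
        if labels[idx]:
            hits += 1
            precisions.append(hits / rank)
    return float(np.mean(precisions)) if precisions else 0.0

def mean_average_precision(score_matrix, label_matrix):
    """MAP: score_matrix and label_matrix are (num_entities x num_types); average AP over type columns."""
    aps = [average_precision(score_matrix[:, t], label_matrix[:, t])
           for t in range(score_matrix.shape[1])]
    return float(np.mean(aps))
```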
6.4 MedMentions Entity Linking with UMLS

In addition to entity typing, we evaluate our model's performance on an entity linking task using MedMentions, our new PubMed / UMLS dataset described in Section 2.1.

6.4.1 Results

Model            original   normalized
mention tfidf    61.09      74.66
CNN              67.42      82.40
  + hierarchy    67.73      82.77
CNN+Complex      67.23      82.17
  + hierarchy    68.34      83.52

Table 7: Accuracy on entity linking in MedMentions. Maximum recall is 81.82% because we use an imperfect alias table to generate candidates. Normalized scores consider only mentions which contain the gold entity in the candidate set. Mention tfidf is csim from Section 4.3.

Table 7 shows results for baselines and our proposed variant with an additional hierarchical loss. None of these models incorporate transitive closure information, due to the difficulty of incorporating it in our candidate generation, which we leave to future work. The Normalized metric considers performance only on mentions with an alias table hit; all models have 0 accuracy for the remaining mentions. We also report the overall score for comparison in future work with improved candidate generation. We see that incorporating structure information results in a 1.1% reduction in absolute error, corresponding to a ~6% reduction in relative error on this large-scale dataset.

Table 8 shows qualitative predictions for models with and without hierarchy information incorporated. Each example contains the sentence (with the target entity in bold), predictions for the baseline and hierarchy-aware models, and the ancestors of the predicted entity. In the first and second examples, the baseline model becomes extremely dependent on TFIDF string similarities when the gold candidate is rare (≤10 occurrences). This shows that modeling the structure of the entity hierarchy helps the model disambiguate rare entities. In the third example, structure helps the model understand the hierarchical nature of the labels and prevents it from predicting an entity that is overly specific (e.g., predicting Interleukin-27 rather than the correct and more general entity IL2 Gene).

Tips and Pitfalls in Direct Ligation of Large Spontaneous Splenorenal Shunt during Liver Transplantation Patients with large spontaneous splenorenal shunt . . .
  baseline: Direct [Direct → General Modifier → Qualifier → Property or Attribute]
  +hierarchy: Ligature (correct) [Ligature → Surgical Procedures → medical treatment approach]
A novel approach for selective chemical functionalization and localized assembly of one-dimensional nanostructures.
  baseline: Structure [Structure → order or structure → general epistemology]
  +hierarchy: Nanomaterials (correct) [Nanomaterials → Nanoparticle Complex → Drug or Chemical by Structure]
Gcn5 is recruited onto the il-2 promoter by interacting with the NFAT in T cells upon TCR stimulation.
  baseline: Interleukin-27 [Interleukin-27 → IL2 → Interleukin Gene]
  +hierarchy: IL2 Gene (correct) [IL2 Gene → Interleukin Gene]

Table 8: Example predictions from MedMentions. Each example shows the sentence with the entity mention span in bold. Baseline shows the predicted entity and its ancestors for a model not incorporating structure. Finally, +hierarchy shows the prediction and ancestors for a model which explicitly incorporates the hierarchical structure information.

Note that, in contrast with the previous tasks, the complex hierarchical loss provides a significant boost, while the real-valued bilinear model does not. A possible explanation is that UMLS is a far larger/deeper ontology than even TypeNet, and the additional ability of complex embeddings to model intricate graph structure is key to realizing gains from hierarchical modeling.

7 Related Work

By directly linking a large set of mentions and typing a large set of entities with respect to a new ontology and corpus, and through our incorporation of structural learning between the many entities and types in our ontologies of interest, our work draws on many different but complementary threads of research in information extraction, knowledge base population, and completion. Our structural, hierarchy-aware loss between types and entities draws on research in knowledge base inference such as Jain et al. (2018), Trouillon et al. (2016) and Nickel et al. (2011). Combining KB completion with hierarchical structure in knowledge bases has been explored in (Dalvi et al., 2015; Xie et al., 2016). Recently, Wu et al. (2017) proposed a hierarchical loss for text classification.
Linking mentions to a flat set of entities, often in Freebase or Wikipedia, is a long-standing task in NLP (Bunescu and Pasca, 2006; Cucerzan, 2007; Durrett and Klein, 2014; Francis-Landau et al., 2016). Typing of mentions at varying levels of granularity, from CoNLL-style named entity recognition (Tjong Kim Sang and De Meulder, 2003) to the more fine-grained recent approaches (Ling and Weld, 2012; Gillick et al., 2014; Shimaoka et al., 2017), is also related to our task. A few prior attempts to incorporate a very shallow hierarchy into fine-grained entity typing have not led to significant or consistent improvements (Gillick et al., 2014; Shimaoka et al., 2017). The knowledge base Yago (Suchanek et al., 2007) includes integration with WordNet, and type hierarchies have been derived from its type system (Yosef et al., 2012). Del Corro et al. (2015) use manually crafted rules and patterns (Hearst patterns (Hearst, 1992), appositives, etc.) to automatically match entity types to WordNet synsets. Recent work has moved towards unifying these two highly related tasks by improving entity linking while simultaneously learning a fine-grained entity type predictor (Gupta et al., 2017). Learning hierarchical structures or transitive relations between concepts has been the subject of much recent work (Vilnis and McCallum, 2015; Vendrov et al., 2016; Nickel and Kiela, 2017). We draw inspiration from all of this prior work, and contribute datasets and models to address previous challenges in jointly modeling the structure of large-scale hierarchical ontologies and mapping textual mentions into an extremely fine-grained space of entities and types.

8 Conclusion

We demonstrate that explicitly incorporating and modeling hierarchical information leads to increased performance in experiments on entity typing and linking across three challenging datasets. Additionally, we introduce two new human-annotated datasets: MedMentions, a corpus of 246k mentions from PubMed abstracts linked to the UMLS knowledge base, and TypeNet, a new hierarchical fine-grained entity type set an order of magnitude larger and deeper than previous datasets. While this work already demonstrates considerable improvement over non-hierarchical modeling, future work will explore techniques such as Box embeddings (Vilnis et al., 2018) and Poincaré embeddings (Nickel and Kiela, 2017) to represent the hierarchical embedding space, as well as methods to improve recall in the candidate generation process for entity linking. Most of all, we are excited to see new techniques from the NLP community using the resources we have presented.

9 Acknowledgements

We thank Nicholas Monath, Haw-Shiuan Chang and Emma Strubell for helpful comments on early drafts of the paper. Creation of the MedMentions corpus is supported and managed by the Meta team at the Chan Zuckerberg Initiative. A pre-release of the dataset is available at http://github.com/chanzuckerberg/MedMentions. This work was supported in part by the Center for Intelligent Information Retrieval and the Center for Data Science, in part by the Chan Zuckerberg Initiative under the project Scientific Knowledge Base Construction, and in part by the National Science Foundation under Grant No. IIS-1514053. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.
References Rolf Apweiler, Amos Bairoch, Cathy H Wu, Winona C Barker, Brigitte Boeckmann, Serenella Ferro, Elisabeth Gasteiger, Hongzhan Huang, Rodrigo Lopez, Michele Magrane, et al. 2004. Uniprot: the universal protein knowledgebase. Nucleic acids research, 32(suppl 1):D115–D119. Olivier Bodenreider. 2004. The unified medical language system (umls): integrating biomedical terminology. Nucleic acids research, 32(suppl 1):D267– D270. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247–1250. AcM. Razvan C Bunescu and Marius Pasca. 2006. Using encyclopedic knowledge for named entity disambiguation. In Eacl, volume 6, pages 9–16. Andrew Chatr-aryamontri, Rose Oughtred, Lorrie Boucher, Jennifer Rust, Christie Chang, Nadine K Kolas, Lara O’Donnell, Sara Oster, Chandra Theesfeld, Adnane Sellam, et al. 2017. The biogrid interaction database: 2017 update. Nucleic acids research, 45(D1):D369–D379. Silviu Cucerzan. 2007. Large-scale named entity disambiguation based on wikipedia data. In Proceedings of the 2007 joint conference on empirical methods in natural language processing and computational natural language learning (EMNLP-CoNLL). Jeffrey Dalton, Laura Dietz, and James Allan. 2014. Entity query feature expansion using knowledge base links. In Proceedings of the 37th international ACM SIGIR conference on Research & development in information retrieval, pages 365–374. ACM. Bhavana Dalvi, Einat Minkov, Partha P Talukdar, and William W Cohen. 2015. Automatic gloss finding for a knowledge base using ontological constraints. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, pages 369–378. ACM. Rajarshi Das, Manzil Zaheer, Siva Reddy, and Andrew McCallum. 2017. Question answering on knowledge bases and text using universal schema and memory networks. In Proceedings of the 55th Annual Meeting of the Association for Computational 106 Linguistics (Volume 2: Short Papers), pages 358– 365, Vancouver, Canada. Association for Computational Linguistics. Allan Peter Davis, Cynthia G Murphy, Cynthia A Saraceni-Richards, Michael C Rosenstein, Thomas C Wiegers, and Carolyn J Mattingly. 2008. Comparative toxicogenomics database: a knowledgebase and discovery tool for chemical– gene–disease networks. Nucleic acids research, 37(suppl 1):D786–D792. Luciano Del Corro, Abdalghani Abujabal, Rainer Gemulla, and Gerhard Weikum. 2015. Finet: Context-aware fine-grained named entity typing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Rezarta Islamaj Do˘gan, Robert Leaman, and Zhiyong Lu. 2014. Ncbi disease corpus: a resource for disease name recognition and concept normalization. Journal of biomedical informatics, 47:1–10. Greg Durrett and Dan Klein. 2014. A joint model for entity analysis: Coreference, typing, and linking. Transactions of the Association for Computational Linguistics, 2:477–490. Christiane Fellbaum. 1998. WordNet. Wiley Online Library. Matthew Francis-Landau, Greg Durrett, and Dan Klein. 2016. Capturing semantic similarity for entity linking with convolutional neural networks. In Proceedings of NAACL-HLT, pages 1256–1261. Dan Gillick, Nevena Lazic, Kuzman Ganchev, Jesse Kirchner, and David Huynh. 2014. Contextdependent fine-grained entity type tagging. CoRR, abs/1412.1820. Xavier Glorot and Yoshua Bengio. 2010. 
Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS). Nitish Gupta, Sameer Singh, and Dan Roth. 2017. Entity linking via joint encoding of types, descriptions, and context. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2671–2680, Copenhagen, Denmark. Association for Computational Linguistics. Marti A Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the International Conference on Computational Linguistics (COLING). Prachi Jain, Shikhar Murty, Mausam, and Soumen Chakrabarti. 2018. Mitigating the effect of out-ofvocabulary entity pairs in matrix factorization for knowledge base inference. In The 27th International Joint Conference on Artificial Intelligence (IJCAI), Stockholm, Sweden. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Robert Leaman, Rezarta Islamaj Do˘gan, and Zhiyong Lu. 2013. Dnorm: disease name normalization with pairwise learning to rank. Bioinformatics, 29(22):2909–2917. Robert Leaman and Zhiyong Lu. 2016. Taggerone: joint named entity recognition and normalization with semi-markov models. Bioinformatics, 32(18):2839–2846. Jiao Li, Yueping Sun, Robin J Johnson, Daniela Sciaky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J Mattingly, Thomas C Wiegers, and Zhiyong Lu. 2016. Biocreative v cdr task corpus: a resource for chemical disease relation extraction. Database, 2016. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2124–2133, Berlin, Germany. Association for Computational Linguistics. Xiao Ling and Daniel S Weld. 2012. Fine-grained entity recognition. In Twenty-Sixth AAAI Conference on Artificial Intelligence. Edward Loper and Steven Bird. 2002. Nltk: The natural language toolkit. In Proceedings of the ACL-02 Workshop on Effective tools and methodologies for teaching natural language processing and computational linguistics-Volume 1, pages 63–70. Association for Computational Linguistics. Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003–1011, Suntec, Singapore. Association for Computational Linguistics. Arvind Neelakantan and Ming-Wei Chang. 2015. Inferring missing entity type instances for knowledge base completion: New dataset and methods. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 515–525, Denver, Colorado. Association for Computational Linguistics. Maximilian Nickel and Douwe Kiela. 2017. Poincar\’e embeddings for learning hierarchical representations. arXiv preprint arXiv:1705.08039. Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the International Conference on Machine Learning (ICML). 107 Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Janet Pi˜nero, `Alex Bravo, N´uria Queralt-Rosinach, Alba Guti´errez-Sacrist´an, Jordi Deu-Pons, Emilio Centeno, Javier Garc´ıa-Garc´ıa, Ferran Sanz, and Laura I Furlong. 2017. Disgenet: a comprehensive platform integrating information on human diseaseassociated genes and variants. Nucleic acids research, 45(D1):D833–D839. Xiang Ren, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, and Jiawei Han. 2016. Label noise reduction in entity typing by heterogeneous partial-label embedding. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pages 1825–1834. Benjamin Roth, Nicholas Monath, David Belanger, Emma Strubell, Patrick Verga, and Andrew McCallum. 2015. Building knowledge bases with universal schema: Cold start and slot-filling approaches. C´ıcero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. In Proceedings of the Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing ACL. Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and Sebastian Riedel. 2017. Neural architectures for fine-grained entity type classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1271–1280, Valencia, Spain. Association for Computational Linguistics. Valentin I Spitkovsky and Angel X Chang. 2012. A cross-lingual dictionary for english wikipedia concepts. Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research. Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In Proceedings of the International Conference on World Wide Web (WWW). Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning, pages 455–465. Association for Computational Linguistics. Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003-Volume 4, pages 142–147. Association for Computational Linguistics. Th´eo Trouillon, Johannes Welbl, Sebastian Riedel, ´Eric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In Proceedings of the International Conference on Machine Learning (ICML). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Conference on Advances in Neural Information Processing (NIPS). Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2016. Order-embeddings of images and language. ICLR. Patrick Verga, Arvind Neelakantan, and Andrew McCallum. 2017. Generalizing to unseen entities and entity pairs with row-less universal schema. 
In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 613–622, Valencia, Spain. Association for Computational Linguistics. Luke Vilnis, Xiang Li, Shikhar Murty, and Andrew McCallum. 2018. Probabilistic embedding of knowledge graphs with box lattice measures. In The 56th Annual Meeting of the Association for Computational Linguistics (ACL), Melbourne, Australia. Luke Vilnis and Andrew McCallum. 2015. Word representations via gaussian embedding. ICLR. Chih-Hsuan Wei, Hung-Yu Kao, and Zhiyong Lu. 2015. Gnormplus: an integrative approach for tagging genes, gene families, and protein domains. BioMed research international, 2015. Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2017. Constructing datasets for multi-hop reading comprehension across documents. arXiv preprint arXiv:1710.06481. Cinna Wu, Mark Tygert, and Yann LeCun. 2017. Hierarchical loss for classification. CoRR, abs/1709.01062. Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2016. Representation learning of knowledge graphs with hierarchical types. In IJCAI, pages 2965–2971. Yadollah Yaghoobzadeh, Heike Adel, and Hinrich Sch¨utze. 2017a. Corpus-level fine-grained entity typing. arXiv preprint arXiv:1708.02275. 108 Yadollah Yaghoobzadeh, Heike Adel, and Hinrich Sch¨utze. 2017b. Noise mitigation for neural entity typing and relation extraction. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1183–1194, Valencia, Spain. Association for Computational Linguistics. Limin Yao, Sebastian Riedel, and Andrew McCallum. 2013. Universal schema for entity type prediction. In Proceedings of the 2013 workshop on Automated knowledge base construction, pages 79–84. ACM. Mohamed Amir Yosef, Sandro Bauer, Johannes Hoffart, Marc Spaniol, and Gerhard Weikum. 2012. Hyena: Hierarchical type classification for entity names. In Proceedings of the International Conference on Computational Linguistics (COLING). 109 A Supplementary Materials A.1 TypeNet Construction Freebase type: musical chord Example entities: psalms chord, power chord harmonic seventh chord chord.n.01: a straight line connecting two points on a curve chord.n.02: a combination of three or more notes that blend harmoniously when sounded together musical.n.01: a play or film whose action and dialogue is interspersed with singing and dancing Table 9: Example given to TypeNet annotators. Here, the Freebase type to be linked is musical chord. This type is annotated in Freebase belonging to the entities psalms chord, harmonic seventh chord, and power chord. Below the list of example entities are candidate WordNet synsets obtained by substring matching between the Freebase type and all WordNet synsets. The correctly aligned synset is chord.n.02 shown in bold. A.2 Model Implementation Details For all of our experiments, we use pretrained 300 dimensional word vectors from Pennington et al. (2014). These embeddings are fixed during training. The type vectors and entity vectors are all 300 dimensional vectors initialized using Glorot initialization (Glorot and Bengio, 2010). The number of negative links for hierarchical training n ∈ {16, 32, 64, 128, 256}. For regularization, we use dropout (Srivastava et al., 2014) with p ∈{0.5, 0.75, 0.8} on the sentence encoder output and L2 regularize all learned parameters with λ ∈{1e-5, 5e-5, 1e-4}. 
All our parameters are optimized using Adam (Kingma and Ba, 2014) with a learning rate of 0.001. We tune our hyper-parameters via grid search and early stopping on the development set.

A.3 FIGER Implementation Details

To train our models, we use the mention typing loss function defined in Section 5. For models with structure training, we additionally add in the hierarchical loss, along with a weight that is obtained by tuning on the dev set. We follow the same inference-time procedure as Shimaoka et al. (2017). For each mention, we first assign the type with the largest probability according to the logits, and then assign additional types based on the condition that their corresponding probability be greater than 0.5.

A.4 Wikipedia Data and Implementation Details

At train time, each training example randomly samples an entity bag of 10 mentions. At test time we classify bags of 20 mentions of an entity. The dataset contains a total of 344,246 entities mapped to the 1081 Freebase types from TypeNet. We consider all sentences in Wikipedia between 10 and 50 tokens long. Tokenization and sentence splitting were performed using NLTK (Loper and Bird, 2002). From these sentences, we considered all entities annotated with a cross-link in Wikipedia that we could link to Freebase and assign types in TypeNet. We then split the data by entities into a 90-5-5 train, dev, test split.

A.5 UMLS Implementation Details

We pre-process each string by lowercasing and removing stop words. We consider n-grams from size 1 to 5, keep the top 100,000 features, and L2-normalize the final vectors. For each mention, we consider the top 100 most similar entities as the candidate set.

A.5.1 Candidate Generation Details

Each mention and each canonical entity string in UMLS are mapped to TFIDF character n-gram vectors. We pre-process each string by lowercasing and removing stop words. We consider n-grams from size 1 to 5, keep the top 100,000 features, and L2-normalize the final vectors. For each mention, we calculate the cosine similarity, csim, between the mention string and each canonical entity string. In our experiments we consider the top 100 most similar entities as the candidate set.
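The candidate generation in A.5.1 can be approximated with scikit-learn as in the sketch below; this is our own illustration rather than the released pipeline, and the small stop-word list and helper names are placeholders.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

STOP_WORDS = {"the", "of", "and", "in", "for"}   # placeholder stop-word list (assumption)

def preprocess(s):
    # lowercase and drop stop words before character n-gram extraction
    return " ".join(w for w in s.lower().split() if w not in STOP_WORDS)

def build_candidates(mentions, entity_names, k=100):
    """Map strings to L2-normalized TF-IDF character n-gram vectors (sizes 1-5,
    top 100,000 features) and return the top-k entities per mention by cosine similarity."""
    vec = TfidfVectorizer(analyzer="char", ngram_range=(1, 5),
                          max_features=100000, norm="l2")
    ent_mat = vec.fit_transform(preprocess(e) for e in entity_names)
    men_mat = vec.transform(preprocess(m) for m in mentions)
    sims = men_mat.dot(ent_mat.T).toarray()   # cosine similarity, since rows are L2-normalized
    topk = np.argsort(-sims, axis=1)[:, :k]
    return topk, sims                          # candidate indices and csim scores
```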
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1088–1097 Melbourne, Australia, July 15 - 20, 2018. ©2018 Association for Computational Linguistics

TDNN: A Two-stage Deep Neural Network for Prompt-independent Automated Essay Scoring

Cancan Jin1 Ben He1,3 Kai Hui2 Le Sun3,4
1School of Computer & Control Engineering, University of Chinese Academy of Sciences, Beijing, China
2 SAP SE, Berlin, Germany
3 Institute of Software, Chinese Academy of Sciences, Beijing, China
4 Beijing Advanced Innovation Center for Language Resources, Beijing, China
[email protected], [email protected], [email protected], [email protected]

Abstract

Existing automated essay scoring (AES) models rely on rated essays for the target prompt as training data. Despite their successes in prompt-dependent AES, how to effectively predict essay ratings under a prompt-independent setting remains a challenge, where the rated essays for the target prompt are not available. To close this gap, a two-stage deep neural network (TDNN) is proposed. In particular, in the first stage, using the rated essays for non-target prompts as the training data, a shallow model is learned to select essays with an extreme quality for the target prompt, serving as pseudo training data; in the second stage, an end-to-end hybrid deep model is proposed to learn a prompt-dependent rating model consuming the pseudo training data from the first step. Evaluation of the proposed TDNN on the standard ASAP dataset demonstrates a promising improvement for the prompt-independent AES task.

1 Introduction

Automated essay scoring (AES) utilizes natural language processing and machine learning techniques to automatically rate essays written for a target prompt (Dikli, 2006). Currently, AES systems have been widely used in large-scale English writing tests, e.g., the Graduate Record Examination (GRE), to reduce the human effort in writing assessments (Attali and Burstein, 2006). Existing AES approaches are prompt-dependent, where, given a target prompt, rated essays for this particular prompt are required for training (Dikli, 2006; Williamson, 2009; Foltz et al., 1999). While the established models are effective (Chen and He, 2013; Taghipour and Ng, 2016; Alikaniotis et al., 2016; Cummins et al., 2016; Dong et al., 2017), we argue that models for prompt-independent AES are also desirable to allow for better feasibility and flexibility of AES systems, especially when the rated essays for a target prompt are difficult to obtain or even inaccessible. For example, in a writing test within a small class, students are asked to write essays for a target prompt without any rated examples, where the prompt-dependent methods are unlikely to provide effective AES due to the lack of training data. Prompt-independent AES, however, has drawn little attention in the literature, where there exist only unrated essays written for the target prompt, as well as the rated essays for several non-target prompts. We argue that it is not straightforward, if at all possible, to apply the established prompt-dependent AES methods to the mentioned prompt-independent scenario.
On one hand, essays for different prompts may differ a lot in the uses of vocabulary, the structure, and the grammatical characteristics; on the other hand, however, established prompt-dependent AES models are designed to learn from these prompt-specific features, including the on/off-topic degree, the tf-idf weights of topical terms (Attali and Burstein, 2006; Dikli, 2006), and the n-gram features extracted from word semantic embeddings (Dong and Zhang, 2016; Alikaniotis et al., 2016). Consequently, the prompt-dependent models can hardly learn generalized rules from rated essays for non-target prompts, and are not suitable for prompt-independent AES.

Being aware of this difficulty, a two-stage deep neural network, coined TDNN, is proposed to tackle the prompt-independent AES problem. In particular, to mitigate the lack of prompt-dependent labeled data, at the first stage, a shallow model is trained on a number of rated essays for several non-target prompts; given a target prompt and a set of essays to rate, the trained model is employed to generate pseudo training data by selecting essays with extreme quality. At the second stage, a novel end-to-end hybrid deep neural network learns prompt-dependent features from these selected training data, by considering semantic, part-of-speech, and syntactic features.

The contributions in this paper are threefold: 1) a two-stage learning framework is proposed to bridge the gap between the target and non-target prompts, by only consuming rated essays for non-target prompts as training data; 2) a novel deep model is proposed to learn from pseudo labels by considering semantic, part-of-speech, and syntactic features; and most importantly, 3) to the best of our knowledge, the proposed TDNN is the first approach dedicated to addressing prompt-independent AES. Evaluation on the standard ASAP dataset demonstrates the effectiveness of the proposed method.

The rest of this paper is organized as follows. In Section 2, we describe our novel TDNN model, including the two-stage framework and the proposed deep model. Following that, we describe the setup of our empirical study in Section 3, and thereafter present the results and provide analyses in Section 4. Section 5 recaps existing literature and puts our work in context, before drawing final conclusions in Section 6.

2 Two-stage Deep Neural Network for AES

In this section, the proposed two-stage deep neural network (TDNN) for prompt-independent AES is described. To accurately rate an essay, on one hand, we need to consider its pertinence to the given prompt; on the other hand, the organization, the analyses, as well as the uses of the vocabulary are all crucial for the assessment. Hence, both prompt-dependent and -independent factors should be considered, but the latter ones actually do not require prompt-dependent training data. Accordingly, in the proposed framework, a supervised ranking model is first trained to learn from prompt-independent data, hoping to roughly assess essays without considering the prompt; subsequently, given the test dataset, namely, a set of essays for a target prompt, a subset of essays is selected as positive and negative training data based on the prediction of the trained model from the first stage; ultimately, a novel deep model is proposed to learn both prompt-dependent and -independent factors on this selected subset. As indicated in Figure 1, the proposed framework includes two stages.
2.1 Overview

Figure 1: The architecture of the TDNN framework for prompt-independent AES.

Prompt-independent stage. Only the prompt-independent factors are considered to train a shallow model, aiming to recognize the essays with extreme quality in the test dataset, where the rated essays for non-target prompts are used for training. Intuitively, one could recognize essays with the highest and the lowest scores correctly by solely examining their quality of writing, e.g., the number of typos, without even understanding them, and prompt-independent features such as the number of grammatical and spelling errors should be sufficient to fulfill this screening procedure. Accordingly, a supervised model trained solely on prompt-independent features is employed to identify the essays with the highest and lowest scores in a given set of essays for the target prompt, which are used as the positive and negative training data in the follow-up prompt-dependent learning phase.

Prompt-dependent stage. Intuitively, most essays are of a quality in between the extremes, requiring a good understanding of their meaning to make an accurate assessment, e.g., whether the examples from the essay are convincing or whether the analyses are insightful, making the consideration of prompt-dependent features crucial. To achieve that, a model is trained to learn from the comparison between essays with the highest and lowest scores for the target prompt according to the predictions from the first step. Akin to the settings in transductive transfer learning (Pan and Yang, 2010), given essays for a particular prompt, quite a few confident essays at the two extremes are selected and used to train another model for a fine-grained content-based prompt-dependent assessment. To enable this, a powerful deep model is proposed to consider the content of the essays from different perspectives using semantic, part-of-speech (POS) and syntactic networks. After being trained with the selected essays, the deep model is expected to memorize the properties of a good essay in response to the target prompt, thereafter accurately assessing all essays for it. In Section 2.2, the building blocks for the selection of the training data and the proposed deep model are described in detail.

2.2 Building Blocks

Select confident essays as training data. The identification of the extremes is relatively simple, where a RankSVM (Joachims, 2002) is trained on essays for different non-target prompts, avoiding the risks of over-fitting some particular prompts. A set of established prompt-independent features is employed, which are listed in Table 2. Given a prompt and a set of essays for evaluation, to begin with, the trained RankSVM is used to assign prediction scores to individual prompt-essay pairs, which are uniformly transformed into a 10-point scale. Thereafter, the essays with predicted scores in [0, 4] and [8, 10] are selected as negative and positive examples respectively, serving as the bad and good templates for training in the next stage. Intuitively, an essay with a score beyond eight out of a 10-point scale is considered good, while one receiving less than or equal to four is considered to be of poor quality.
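The first-stage selection just described can be summarized in a short sketch (our own illustration; rank_svm_scores stands for whatever first-stage prediction scores are available, and the rescaling to a 10-point scale is done here by simple min-max normalization, which is an assumption).

```python
import numpy as np

def select_pseudo_examples(rank_svm_scores, low=4.0, high=8.0):
    """Rescale first-stage scores to [0, 10] and pick extreme essays as pseudo labels."""
    s = np.asarray(rank_svm_scores, dtype=float)
    scaled = 10.0 * (s - s.min()) / (s.max() - s.min())   # uniform transform to a 10-point scale
    negatives = np.where(scaled <= low)[0]                 # predicted in [0, 4]: "bad" templates
    positives = np.where(scaled >= high)[0]                # predicted in [8, 10]: "good" templates
    return positives, negatives, scaled
```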
A hybrid deep model for fine-grained assessment. To enable a prompt-dependent assessment, a model is desired to comprehensively capture the ways in which a prompt is described or discussed in an essay. In this paper, semantic meaning, part-of-speech (POS), and the syntactic taggings of the token sequence from an essay are considered, grasping the quality of an essay for a target prompt. The model architecture is summarized in Figure 2.

Figure 2: The model architecture of the proposed hybrid deep learning model.

Intuitively, the model learns the semantic meaning of an essay by encoding it in terms of a sequence of word embeddings, denoted as $\vec{e}_{sem}$, hoping to understand what the essay is about; in addition, the part-of-speech information is encoded as a sequence of POS taggings, coined $\vec{e}_{pos}$; ultimately, the structural connections between different components in an essay (e.g., terms or phrases) are further captured via a syntactic network, leading to $\vec{e}_{synt}$, where the model learns the organization of the essay. Akin to (Li et al., 2015) and (Zhou and Xu, 2015), bi-LSTM is employed as a basic component to encode a sequence. The three features are separately captured using stacked bi-LSTM layers as building blocks to encode the different embeddings, whose outputs are subsequently concatenated and fed into several dense layers, generating the ultimate rating. In the following, the architecture of the model is described in detail.

- Semantic embedding. Akin to existing works (Alikaniotis et al., 2016; Taghipour and Ng, 2016), semantic word embeddings, namely, the pre-trained 50-dimension GloVe (Pennington et al., 2014), are employed. On top of the word embeddings, two bi-LSTM layers are stacked, namely, the essay layer is constructed on top of the sentence layer, ending up with the semantic representation of the whole essay, which is denoted as $\vec{e}_{sem}$ in Figure 2.

- Part-Of-Speech (POS) embeddings for individual terms are first generated by the Stanford Tagger (Toutanova et al., 2003), where 36 different POS tags are present. Accordingly, individual words are embedded with a 36-dimensional one-hot representation, which is transformed to a 50-dimensional vector through a lookup layer. After that, two bi-LSTM layers are stacked, leading to $\vec{e}_{pos}$. Take Figure 3 for example: given a sentence "Attention please, here is an example.", it is first converted into a POS sequence using the tagger, namely, VB, VBP, RB, VBZ, DT, NN; thereafter it is further mapped to vector space through one-hot embedding and a lookup layer.

- Syntactic embedding aims at encoding an essay in terms of the syntactic relationships among different syntactic components, by encoding an essay recursively. The Stanford Parser (Socher et al., 2013) is employed to label the syntactic structure of words and phrases in sentences, accounting for 59 different types in total. Similar to (Tai et al., 2015), we opt for three stacked bi-LSTMs, aiming at encoding individual phrases, sentences, and ultimately the whole essay in sequence. In particular, according to the hierarchical structure from a parsing tree, the phrase-level bi-LSTM first encodes different phrases by consuming syntactic embeddings ($\vec{St}_i$ in Figure 2) from a lookup table of individual syntactic units in the tree; thereafter, the encoded dense layers in individual sentences are further consumed by a sentence-level bi-LSTM, ending up with sentence-level syntactic representations, which are ultimately combined by the essay-level bi-LSTM, resulting in $\vec{e}_{synt}$. For example, the parsed tree for the sentence "Attention please, here is an example." is displayed in Figure 3.
To start with, the sentence is parsed into ((NP VP)(NP VP NP)), and the dense embeddings are fetched from a lookup table for all tokens, namely, NP and VP; thereafter, the phrase-level bi-LSTM encodes (NP VP) and (NP VP NP) separately, which are further consumed by the sentence-level bi-LSTM. Afterward, the essay-level bi-LSTM further combines the representations of different sentences into $\vec{e}_{synt}$.

(ROOT (S (S (NP (VB Attention)) (VP (VBP please))) (, ,) (NP (RB here)) (VP (VBZ is) (NP (DT an) (NN example))) (. .)))

Figure 3: An example of the context-free phrase structure grammar tree.

- Combination. A feed-forward network linearly transforms the concatenated representations of an essay from the mentioned three perspectives into a scalar, which is further normalized into [0, 1] with a sigmoid function.

2.3 Objective and Training

Objective. Mean square error (MSE) is optimized, which is widely used as a loss function in regression tasks. Given $N$ pairs of a target prompt $p_i$ and an essay $e_i$, MSE measures the average value of the square error between the normalized gold standard rating $r^*(p_i, e_i)$ and the predicted rating $r(p_i, e_i)$ assigned by the AES model, as summarized in Equation 1.

$$\frac{1}{N} \sum_{i=1}^{N} \big( r(p_i, e_i) - r^*(p_i, e_i) \big)^2 \qquad (1)$$

Optimization. Adam (Kingma and Ba, 2014) is employed to minimize the loss over the training data. The initial learning rate $\eta$ is set to 0.01 and the gradient is clipped between [−10, 10] during training. In addition, dropout (Srivastava et al., 2014) is introduced for regularization with a dropout rate of 0.5, and 64 samples are used in each batch with batch normalization (Ioffe and Szegedy, 2015). 30% of the training data are reserved for validation. In addition, early stopping (Yao et al., 2007) is employed according to the validation loss, namely, the training is terminated if no decrease of the loss is observed for ten consecutive epochs. Once training is finished, akin to (Dong et al., 2017), the model with the best quadratic weighted kappa on the validation set is selected.

3 Experimental Setup

Dataset. The Automated Student Assessment Prize (ASAP) dataset has been widely used for AES (Alikaniotis et al., 2016; Chen and He, 2013; Dong et al., 2017), and is also employed as the prime evaluation instrument herein. In total, ASAP consists of eight sets of essays, each of which is associated with one prompt and was originally written by students between Grade 7 and Grade 10. As summarized in Table 1, essays from different sets differ in their rating criteria, length, as well as the rating distribution¹.

Prompt   #Essays   Avg Length   Score Range
1        1783      350          2-12
2        1800      350          1-6
3        1726      150          0-3
4        1772      150          0-3
5        1805      150          0-4
6        1800      150          0-4
7        1569      250          0-30
8        723       650          0-60

Table 1: Statistics for the ASAP dataset.

Cross-validation. To fully employ the rated data, a prompt-wise eight-fold cross validation on ASAP is used for evaluation. In each fold, the essays corresponding to one prompt are reserved for testing, and the remaining essays are used as training data.

Evaluation metric. The model outputs are first uniformly re-scaled into [0, 10], mirroring the range of ratings in practice. Thereafter, akin to (Yannakoudakis et al., 2011; Chen and He, 2013; Alikaniotis et al., 2016), we report our results primarily based on the quadratic weighted kappa (QWK), examining the agreement between the predicted ratings and the ground truth. The Pearson correlation coefficient (PCC) and Spearman rank-order correlation coefficient (SCC) are also reported.
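The quadratic weighted kappa just mentioned can be computed as in the following sketch; this is a standard formulation written for illustration, not the official evaluation script.

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, min_rating, max_rating):
    """Quadratic weighted kappa between two integer rating vectors on the same scale."""
    rater_a = np.asarray(rater_a, dtype=int)
    rater_b = np.asarray(rater_b, dtype=int)
    n = max_rating - min_rating + 1
    # observed rating co-occurrence matrix
    O = np.zeros((n, n))
    for a, b in zip(rater_a, rater_b):
        O[a - min_rating, b - min_rating] += 1
    # expected matrix from the outer product of the two marginal histograms
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    # quadratic disagreement weights
    idx = np.arange(n)
    W = (idx[:, None] - idx[None, :]) ** 2 / (n - 1) ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()
```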
The correlations obtained from individual folds, as well as the average over all eight folds, are reported as the ultimate results.

Competing models. Since prompt-independent AES is of interest in this work, the existing AES models are adapted for prompt-independent rating prediction, serving as baselines. This is due to the fact that prompt-dependent and -independent models differ a lot in terms of problem settings and model designs, especially in their requirements for the training data, where the latter relax the prompt-dependent requirements and can thereby make use of more data.

No.   Feature
1     Mean & variance of word length in characters
2     Mean & variance of sentence length in words
3     Essay length in characters and words
4     Number of prepositions and commas
5     Number of unique words in an essay
6     Mean number of clauses per sentence
7     Mean length of clauses
8     Maximum number of clauses of a sentence in an essay
9     Number of spelling errors
10    Average depth of the parser tree of each sentence in an essay
11    Average depth of each leaf node in the parser tree of each sentence

Table 2: Handcrafted features used in learning the prompt-independent RankSVM.

- RankSVM, using handcrafted features for AES (Yannakoudakis et al., 2011; Chen et al., 2014), is trained on a set of pre-defined prompt-independent features as listed in Table 2, where the features are standardized beforehand to remove the mean and variance. The RankSVM is also used for the prompt-independent stage in our proposed TDNN model. In particular, the linear kernel RankSVM² is employed, where C is set to 5 according to our pilot experiments.

- 2L-LSTM. A two-layer bi-LSTM with GloVe for AES (Alikaniotis et al., 2016) is employed as another baseline. Regularized word embeddings are dropped to avoid over-fitting the prompt-specific features.

- CNN-LSTM. This model (Taghipour and Ng, 2016) employs a convolutional (CNN) layer over one-hot representations of words, followed by an LSTM layer to encode word sequences in a given essay. A linear layer with a sigmoid activation function is then employed to predict the essay rating.

- CNN-LSTM-ATT. This model (Dong et al., 2017) employs a CNN layer to encode word sequences into sentences, followed by an LSTM layer to generate the essay representation. An attention mechanism is added to model the influence of individual sentences on the final essay representation.

For the proposed TDNN model, as introduced in Section 2.2, different variants of TDNN are examined by using one or multiple components out of the semantic, POS and syntactic networks. The combinations being considered are listed in the following. In particular, the dimensions of the POS tags and the syntactic network are fixed to 50, whereas the sizes of the hidden units in the LSTMs, as well as the output units of the linear layers, are tuned by grid search.

- TDNN(Sem) only includes the semantic building block, which is similar to the two-layer LSTM neural network from (Alikaniotis et al., 2016) but without regularizing the word embeddings;
- TDNN(Sem+POS) employs the semantic and the POS building blocks;
- TDNN(Sem+Synt) uses the semantic and the syntactic network building blocks;
- TDNN(POS+Synt) includes the POS and the syntactic network building blocks;
- TDNN(ALL) employs all three building blocks.

¹Details of this dataset can be found at https://www.kaggle.com/c/asap-aes.
²http://svmlight.joachims.org/
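Returning to the RankSVM component used both as a baseline and in the first stage of TDNN, a common pairwise reduction with standardized features can be sketched as follows; this is our own illustration using scikit-learn's LinearSVC rather than the SVMrank tool cited above, and the helper names are placeholders.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def pairwise_rank_data(X, y):
    """Turn rated essays into pairwise difference examples for a linear RankSVM."""
    diffs, labels = [], []
    for i in range(len(y)):
        for j in range(i + 1, len(y)):
            if y[i] == y[j]:
                continue
            diffs.append(X[i] - X[j])
            labels.append(1 if y[i] > y[j] else -1)
    return np.array(diffs), np.array(labels)

def train_rank_svm(X, y, C=5.0):
    """X: handcrafted prompt-independent features (Table 2); y: essay ratings."""
    scaler = StandardScaler()              # standardize features to zero mean / unit variance
    Xs = scaler.fit_transform(X)
    D, L = pairwise_rank_data(Xs, y)
    clf = LinearSVC(C=C)                   # linear kernel, C = 5 as in the pilot experiments
    clf.fit(D, L)
    # scoring an essay = projecting its standardized features onto the learned weight vector
    return lambda x: float(clf.coef_[0] @ scaler.transform([x])[0])
```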
The use of POS or syntactic network alone is not presented for brevity given the facts that they perform no better than TDNN(POS+Synt) in our pilot experiments. Source code of the TDNN model is publicly available to enable further comparison3. 4 Results and Analyzes In this section, the evaluation results for different competing methods are compared and analyzed in terms of their agreements with the manual ratings using three correlation metrics, namely, QWK, PCC and SCC, where the best results for each prompt is highlighted in bold in Table 3. It can be seen that, for seven out of all eight prompts, the proposed TDNN variants outperform the baselines by a margin in terms of QWK, and the TDNN variant with semantic and syntactic features, namely, TDNN(Sem+Synt), consistently performs the best among different competing methods. More precisely, as indicated in the bottom right corner in Table 3, on average, TDNN(Sem+Synt) outperforms the baselines by at least 25.52% under QWK, by 10.28% under PCC, and by 15.66% under SCC, demonstrating that the proposed model not only correlates better with the manual ratings in terms of QWK, but also linearly (PCC) and monotonically (SCC) correlates better with the manual ratings. As for the 3https://github.com/ucasir/TDNN4AES four baselines, note that, the relatively underperformed deep models suffer from larger variances of performance under different prompts, e.g., for prompts two and eight, 2L-LSTM’s QWK is lower than 0.3. This actually confirms our choice of RankSVM for the first stage in TDNN, since a more complicated model (like 2L-LSTM) may end up with learning prompt-dependent signals, making it unsuitable for the prompt-independent rating prediction. As a comparison, RankSVM performs more stable among different prompts. As for the different TDNN variants, it turns out that the joint uses of syntactic network with semantic or POS features can lead to better performances. This indicates that, when learning the prompt-dependent signals, apart from the widelyused semantic features, POS features and the sentence structure taggings (syntactic network) are also essential in learning the structure and the arrangement of an essay in response to a particular prompt, thereby being able to improve the results. It is also worth mentioning, however, when using all three features, the TDNN actually performs worse than when only using (any) two features. One possible explanation is that the uses of all three features result in a more complicated model, which over-fits the training data. In addition, recall that the prompt-independent RankSVM model from the first stage enables the proposed TDNN in learning prompt-dependent information without manual ratings for the target prompt. Therefore, one would like to understand how good the trained RankSVM is in feeding training data for the model in the second stage. In particular, the precision, recall and F-score (P/R/F) of the essays selected by RanknSVM, namely, the negative ones rated between [0, 4], and the positive ones rated between [8, 10], are displayed in Figure 4. It can be seen that the P/R/F scores of both positive and negative classes differ a lot among different prompts. Moreover, it turns out that the P/R/F scores do not necessarily correlate with the performance of the TDNN model. Take TDNN(Sem+Synt), the best TDNN variant, as an example: as indicated in Table 4, the performance and the P/R/F scores of the pseudo examples are only weakly correlated in most cases. 
To gain a better understanding in how the quality of pseudo examples affects the performance of TDNN, the sanctity of the selected essays are examined. In Figure 5, the relative precision of 1094 Eval. Metric QWK PCC SCC QWK PCC SCC QWK PCC SCC Method Prompt 1 Prompt 2 Prompt 3 RankSVM .7371 .6915 .6726 .4666 .4956 .4993 .4637 .5584 .5357 2L-LSTM .4687 .6570 .4213 .2788 .6202 .6337 .5018 .6410 .6197 CNN-LSTM .4320 .6933 .5108 .3230 .6513 .6395 .5454 .6844 .6541 CNN-LSTM-ATT .6256 .7430 .6612 .4348 .7200 .6724 .4219 .5927 .6327 TDNN(Sem) .7292 .7366 .7190 .6220 .7138 .7372 .6038 .6613 .6714 TDNN(Sem+POS) .7305 .7413 .7209 .6551 .7276 .7469 .6112 .6706 .6809 TDNN(Sem+Synt) .7688 .7759 .7318 .6859 .7292 .7593 .6281 .6759 .7028 TDNN(POS+Synt) .7663 .7700 .7310 .6808 .7225 .7581 .6219 .6803 .6984 TDNN(All) .7310 .7584 .7300 .6596 .7210 .7496 .6146 .6772 .6943 Method Prompt 4 Prompt 5 Prompt 6 RankSVM .5112 .6250 .6325 .6690 .7103 .6651 .5285 .5443 .5239 2L-LSTM .5754 .6527 .6354 .5128 .7375 .7360 .4951 .6528 .6669 CNN-LSTM .7065 .7564 .7346 .6594 .6722 .6536 .5810 .6460 .6447 CNN-LSTM-ATT .4665 .7224 .7383 .5348 .6531 .6505 .5149 .6291 .6637 TDNN(Sem) .7398 .7412 .6934 .6874 .7585 .7323 .6278 .6524 .7205 TDNN(Sem+POS) .7450 .7601 .7119 .6943 .7716 .7341 .6540 .6780 .7239 TDNN(Sem+Synt) .7578 .7616 .7492 .7366 .7993 .7960 .6752 .6903 .7434 TDNN(POS+Synt) .7561 .7591 .7440 .7332 .7983 .7866 .6593 .6759 .7354 TDNN(All) .7527 .7609 .7251 .7302 .7974 .7794 .6557 .6874 .7350 Method Prompt 7 Prompt 8 Average RankSVM .5858 .6436 .6429 .4075 .5889 .6087 .5462 .6072 .5976 2L-LSTM .6690 .7637 .7607 .2486 .5137 .4979 .4687 .6548 .6214 CNN-LSTM .6609 .6849 .6865 .3812 .4666 .3872 .5362 .6569 .6139 CNN-LSTM-ATT .6002 .6314 .6223 .4468 .5358 .4536 .5057 .6535 .6368 TDNN(Sem) .5482 .6957 .6902 .5003 .6083 .6545 .5875 .6779 .6795 TDNN(Sem+POS) .6239 .7111 .7243 .5519 .6219 .6614 .6582 .7103 .7130 TDNN(Sem+Synt) .6587 .7201 .7380 .5741 .6324 .6713 .6856 .7244 .7365 TDNN(POS+Synt) .6464 .7172 .7349 .5631 .6281 .6698 .6784 .7189 .7322 TDNN(All) .6396 .7114 .7300 .5622 .6267 .6631 .6682 .7176 .7258 Table 3: Correlations between AES and manual ratings for different competing methods are reported for individual prompts. The average results among different prompts are summarized in the bottom right. The best results are highlighted in bold for individual prompts. Neg/Pos Metric QWK PCC SCC [0, 4] Precision +0.5151 +0.4286 +0.4471 Recall - 0.2362 - 0.1363 - 0.3491 F-score +0.4135 +0.4062 +0.1703 [8, 10] Precision +0.3526 +0.3224 +0.3885 Recall +0.0063 - 0.0415 - 0.2112 F-score +0.8339 +0.6905 +0.4221 Table 4: Linear correlations between the performance of TDNN(Sem+Synt) and the precision, recall, and F-score of the selected pseudo examples. Prpt 1 2 3 4 5 6 7 8 Neg 191 245 847 428 501 209 454 60 Pos 623 470 65 295 277 426 267 418 Table 5: The numbers of the selected positive and negative essays for each prompt. the selected positive and negative training data by RankSVM are displayed for all eight prompts in terms of their concordance with the manual ratings, by computing the number of positive (negative) essays that are better (worse) than all negative (positive) essays. It can be seen that, such relative precision is at least 80% and mostly beyond 90% on different prompts, indicating that the overlap of the selected positive and negative essays are fairly small, guaranteeing that the deep model in the second stage at least learns from correct labels, which are crucial for the success of our TDNN model. 
Beyond that, we further investigate the class balance of the selected training data from the first 1095 (a) Negative (b) Positive Figure 4: The precision, recall and F-score of the pseudo negative or positive examples, which are rated within [0, 4] or [8, 10] by RankSVM. Figure 5: The sanctity of the selected positive and negative essays by RankSVM. The x-axis indicates different prompts and the y-axis is the relative precision. stage, which could also influence the ultimate results. The number of selected positive and negative essays are reported in Table 5, where for prompts three and eight the training data suffers from serious imbalanced problem, which may explain their lower performance (namely, the two lowest QWKs among different prompts). On one hand, this is actually determined by real distribution of ratings for a particular prompt, e.g., how many essays are with an extreme quality for a given prompt in the target data. On the other hand, a fine-grained tuning of the RankSVM (e.g., tuning C+ and C−for positive and negative examples separately) may partially resolve the problem, which is left for the future work. 5 Related Work Classical regression and classification algorithms are widely used for learning the rating model based on a variety of text features including lexical, syntactic, discourse and semantic features (Larkey, 1998; Rudner, 2002; Attali and Burstein, 2006; Mcnamara et al., 2015; Phandi et al., 2015). There are also approaches that see AES as a preference ranking problem by applying learning to ranking algorithms to learn the rating model. Results show improvement of learning to rank approaches over classical regression and classification algorithms (Chen et al., 2014; Yannakoudakis et al., 2011). In addition, Chen & He propose to incorporate the evaluation metric into the loss function of listwise learning to rank for AES (Chen and He, 2013). Recently, there have been efforts in developing AES approaches based on deep neural networks (DNN), for which feature engineering is not required. Taghipour & Ng explore a variety of neural network model architectures based on recurrent neural networks which can effectively encode the information required for essay scoring and learn the complex connections in the data through the non-linear neural layers (Taghipour and Ng, 2016). Alikaniotis et al. introduce a neural network model to learn the extent to which specific words contribute to the text’s score, which 1096 is embedded in the word representations. Then a two-layer bi-directional Long-Short Term Memory networks (bi-LSTM) is used to learn the meaning of texts, and finally the essay score is predicted through a mutli-layer feed-forward network (Alikaniotis et al., 2016). Dong & Zhang employ a hierarchical convolutional neural network (CNN) model, with a lower layer representing sentence structure and an upper layer representing essay structure based on sentence representations, to learn features automatically (Dong and Zhang, 2016). This model is later improved by employing attention layers. Specifically, the model learns text representation with LSTMs which can model the coherence and co-reference among sequences of words and sentences, and uses attention pooling to capture more relevant words and sentences that contribute to the final quality of essays (Dong et al., 2017). Song et al. propose a deep model for identifying discourse modes in an essay (Song et al., 2017). 
While the literature has shown satisfactory performance of prompt-dependent AES, how to achieve effective essay scoring in a promptindependent setting remains to be explored. Chen & He studied the usefulness of promptindependent text features and achieved a humanmachine rating agreement slightly lower than the use of all text features (Chen and He, 2013) for prompt-dependent essay scoring prediction. A constrained multi-task pairwise preference learning approach was proposed in (Cummins et al., 2016) to combine essays from multiple prompts for training. However, as shown by (Dong and Zhang, 2016; Zesch et al., 2015; Phandi et al., 2015), straightforward applications of existing AES methods for prompt-independent AES lead to a poor performance. 6 Conclusions & Future Work This study aims at addressing the promptindependent automated essay scoring (AES), where no rated essay for the target prompt is available. As demonstrated in the experiments, two kinds of established prompt-dependent AES models, namely, RankSVM for AES (Yannakoudakis et al., 2011; Chen et al., 2014) and the deep models for AES (Alikaniotis et al., 2016; Taghipour and Ng, 2016; Dong et al., 2017), fail to provide satisfactory performances, justifying our arguments in Section 1 that the application of established prompt-dependent AES models on promptindependent AES is not straightforward. Therefore, a two-stage TDNN learning framework was proposed to utilize the prompt-independent features to generate pseudo training data for the target prompt, on which a hybrid deep neural network model is proposed to learn a rating model consuming semantic, part-of-speech, and syntactic signals. Through the experiments on the ASAP dataset, the proposed TDNN model outperforms the baselines, and leads to promising improvement in the human-machine agreement. Given that our approach in this paper is similar to the methods for transductive transfer learning (Pan and Yang, 2010), we argue that the proposed TDNN could be further improved by migrating the non-target training data to the target prompt (Busto and Gall, 2017). Further study of the uses of transfer learning algorithms on promptindependent AES needs to be undertaken. Acknowledgments This work is supported in part by the National Natural Science Foundation of China (61472391), and the Project of Beijing Advanced Innovation Center for Language Resources (451122512). References Dimitrios Alikaniotis, Helen Yannakoudakis, and Marek Rei. 2016. Automatic text scoring using neural networks. In ACL (1). The Association for Computer Linguistics. Y. Attali and J. Burstein. 2006. Automated essay scoring with e-rater R⃝v. 2. The Journal of Technology, Learning and Assessment 4(3). Pau Panareda Busto and Juergen Gall. 2017. Open set domain adaptation. In ICCV. IEEE Computer Society, pages 754–763. Hongbo Chen and Ben He. 2013. Automated essay scoring by maximizing human-machine agreement. In EMNLP. ACL, pages 1741–1752. Hongbo Chen, Jungang Xu, and Ben He. 2014. Automated essay scoring by capturing relative writing quality. Comput. J. 57(9):1318–1330. Ronan Cummins, Meng Zhang, and Ted Briscoe. 2016. Constrained multi-task learning for automated essay scoring. In ACL (1). The Association for Computer Linguistics. S. Dikli. 2006. An overview of automated scoring of essays. The Journal of Technology, Learning and Assessment 5(1). 1097 Fei Dong and Yue Zhang. 2016. Automatic features for essay scoring - an empirical study. In EMNLP. The Association for Computational Linguistics, pages 1072–1077. 
Fei Dong, Yue Zhang, and Jie Yang. 2017. Attentionbased recurrent convolutional neural network for automatic essay scoring. In CoNLL. Association for Computational Linguistics, pages 153–162. Peter W Foltz, Darrell Laham, and Thomas K Landauer. 1999. Automated essay scoring: Applications to educational technology. In World Conference on Educational Multimedia, Hypermedia and Telecommunications. volume 1999, pages 939–944. Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML. JMLR.org, volume 37 of JMLR Workshop and Conference Proceedings, pages 448–456. Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In KDD. ACM, pages 133– 142. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980. http://arxiv.org/abs/1412.6980. Leah S. Larkey. 1998. Automatic essay grading using text categorization techniques. In SIGIR. ACM, pages 90–95. Jiwei Li, Thang Luong, Dan Jurafsky, and Eduard H. Hovy. 2015. When are tree structures necessary for deep learning of representations? In EMNLP. The Association for Computational Linguistics, pages 2304–2314. Danielle S. Mcnamara, Scott A. Crossley, Rod D. Roscoe, Laura K. Allen, and Jianmin Dai. 2015. A hierarchical classification approach to automated essay scoring. Assessing Writing 23:35–59. Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22(10):1345–1359. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP. ACL, pages 1532– 1543. Peter Phandi, Kian Ming Adam Chai, and Hwee Tou Ng. 2015. Flexible domain adaptation for automated essay scoring using correlated linear regression. In EMNLP. The Association for Computational Linguistics, pages 431–439. L. M Rudner. 2002. Automated essay scoring using bayes’ theorem. National Council on Measurement in Education New Orleans La 1(2):3–21. Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. 2013. Parsing with compositional vector grammars. In ACL (1). The Association for Computer Linguistics, pages 455–465. Wei Song, Dong Wang, Ruiji Fu, Lizhen Liu, Ting Liu, and Guoping Hu. 2017. Discourse mode identification in essays. In ACL (1). Association for Computational Linguistics, pages 112–122. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15:1929–1958. http://jmlr.org/papers/v15/srivastava14a.html. Kaveh Taghipour and Hwee Tou Ng. 2016. A neural approach to automated essay scoring. In EMNLP. The Association for Computational Linguistics, pages 1882–1891. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In ACL (1). The Association for Computer Linguistics, pages 1556–1566. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In HLT-NAACL. The Association for Computational Linguistics. D.M. Williamson. 2009. A framework for implementing automated scoring. In Annual Meeting of the American Educational Research Association and the National Council on Measurement in Education, San Diego, CA. Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. 
A new dataset and method for automatically grading ESOL texts. In ACL. The Association for Computer Linguistics, pages 180–189. Yuan Yao, Lorenzo Rosasco, and Andrea Caponnetto. 2007. On early stopping in gradient descent learning. Constructive Approximation 26(2):289–315. Torsten Zesch, Michael Wojatzki, and Dirk ScholtenAkoun. 2015. Task-independent features for automated essay grading. In BEA@NAACL-HLT. The Association for Computer Linguistics, pages 224– 232. Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural networks. In ACL (1). The Association for Computer Linguistics, pages 1127–1137.
2018
100
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1098–1107 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1098 Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation Tiancheng Zhao, Kyusong Lee and Maxine Eskenazi Language Technologies Institute Carnegie Mellon University Pittsburgh, Pennsylvania, USA {tianchez, kyusongl, max+}@cs.cmu.edu Abstract The encoder-decoder dialog model is one of the most prominent methods used to build dialog systems in complex domains. Yet it is limited because it cannot output interpretable actions as in traditional systems, which hinders humans from understanding its generation process. We present an unsupervised discrete sentence representation learning method that can integrate with any existing encoderdecoder dialog models for interpretable response generation. Building upon variational autoencoders (VAEs), we present two novel models, DI-VAE and DI-VST that improve VAEs and can discover interpretable semantics via either auto encoding or context predicting. Our methods have been validated on real-world dialog datasets to discover semantic representations and enhance encoder-decoder models with interpretable generation.1 1 Introduction Classic dialog systems rely on developing a meaning representation to represent the utterances from both the machine and human users (Larsson and Traum, 2000; Bohus et al., 2007). The dialog manager of a conventional dialog system outputs the system’s next action in a semantic frame that usually contains hand-crafted dialog acts and slot values (Williams and Young, 2007). Then a natural language generation module is used to generate the system’s output in natural language based on the given semantic frame. This approach suffers from generalization to more complex domains because it soon become intractable to man1Data and code are available at https://github. com/snakeztc/NeuralDialog-LAED. ually design a frame representation that covers all of the fine-grained system actions. The recently developed neural dialog system is one of the most prominent frameworks for developing dialog agents in complex domains. The basic model is based on encoder-decoder networks (Cho et al., 2014) and can learn to generate system responses without the need for hand-crafted meaning representations and other annotations. Figure 1: Our proposed models learn a set of discrete variables to represent sentences by either autoencoding or context prediction. Although generative dialog models have advanced rapidly (Serban et al., 2016; Li et al., 2016; Zhao et al., 2017), they cannot provide interpretable system actions as in the conventional dialog systems. This inability limits the effectiveness of generative dialog models in several ways. First, having interpretable system actions enables human to understand the behavior of a dialog system and better interpret the system intentions. Also, modeling the high-level decision-making policy in dialogs enables useful generalization and dataefficient domain adaptation (Gaˇsi´c et al., 2010). Therefore, the motivation of this paper is to develop an unsupervised neural recognition model that can discover interpretable meaning representations of utterances (denoted as latent actions) as a set of discrete latent variables from a large unlabelled corpus as shown in Figure 1. 
The discovered meaning representations will then be integrated with encoder decoder networks to achieve interpretable dialog generation while preserving 1099 all the merit of neural dialog systems. We focus on learning discrete latent representations instead of dense continuous ones because discrete variables are easier to interpret (van den Oord et al., 2017) and can naturally correspond to categories in natural languages, e.g. topics, dialog acts and etc. Despite the difficulty of learning discrete latent variables in neural networks, the recently proposed Gumbel-Softmax offers a reliable way to back-propagate through discrete variables (Maddison et al., 2016; Jang et al., 2016). However, we found a simple combination of sentence variational autoencoders (VAEs) (Bowman et al., 2015) and Gumbel-Softmax fails to learn meaningful discrete representations. We then highlight the anti-information limitation of the evidence lowerbound objective (ELBO) in VAEs and improve it by proposing Discrete Information VAE (DI-VAE) that maximizes the mutual information between data and latent actions. We further enrich the learning signals beyond auto encoding by extending Skip Thought (Kiros et al., 2015) to Discrete Information Variational Skip Thought (DI-VST) that learns sentence-level distributional semantics. Finally, an integration mechanism is presented that combines the learned latent actions with encoder decoder models. The proposed systems are tested on several realworld dialog datasets. Experiments show that the proposed methods significantly outperform the standard VAEs and can discover meaningful latent actions from these datasets. Also, experiments confirm the effectiveness of the proposed integration mechanism and show that the learned latent actions can control the sentence-level attributes of the generated responses and provide humaninterpretable meaning representations. 2 Related Work Our work is closely related to research in latent variable dialog models. The majority of models are based on Conditional Variational Autoencoders (CVAEs) (Serban et al., 2016; Cao and Clark, 2017) with continuous latent variables to better model the response distribution and encourage diverse responses. Zhao et al., (2017) further introduced dialog acts to guide the learning of the CVAEs. Discrete latent variables have also been used for task-oriented dialog systems (Wen et al., 2017), where the latent space is used to represent intention. The second line of related work is enriching the dialog context encoder with more fine-grained information than the dialog history. Li et al., (2016) captured speakers’ characteristics by encoding background information and speaking style into the distributed embeddings. Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses. The proposed method also relates to sentence representation learning using neural networks. Most work learns continuous distributed representations of sentences from various learning signals (Hill et al., 2016), e.g. the Skip Thought learns representations by predicting the previous and next sentences (Kiros et al., 2015). Another area of work focused on learning regularized continuous sentence representation, which enables sentence generation by sampling the latent space (Bowman et al., 2015; Kim et al., 2017). 
There is less work on discrete sentence representations due to the difficulty of passing gradients through discrete outputs. The recently developed Gumbel-Softmax (Jang et al., 2016; Maddison et al., 2016) and vector quantization (van den Oord et al., 2017) enable us to train discrete variables. Notably, discrete variable models have been proposed to discover document topics (Miao et al., 2016) and for semi-supervised sequence transduction (Zhou and Neubig, 2017). Our work differs from these as follows: (1) we focus on learning interpretable variables; in prior research the semantics of latent variables are mostly ignored in the dialog generation setting; (2) we improve the learning objective for discrete VAEs and overcome the well-known posterior collapsing issue (Bowman et al., 2015; Chen et al., 2016); (3) we focus on unsupervised learning of salient features in dialog responses instead of hand-crafted features.

3 Proposed Methods

Our formulation contains three random variables: the dialog context c, the response x and the latent action z. The context often contains the discourse history in the format of a list of utterances. The response is an utterance that contains a list of word tokens. The latent action is a set of discrete variables that define high-level attributes of x. Before introducing the proposed framework, we first identify two key properties that are essential in order for z to be interpretable:
1. z should capture salient sentence-level features about the response x.
2. The meaning of latent symbols z should be independent of the context c.
The first property is self-evident. The second can be explained as follows: assume z contains a single discrete variable with K classes. Since the context c can be any dialog history, if the meaning of each class changes given a different context, then it is difficult to extract an intuitive interpretation by only looking at all responses with class k ∈ [1, K]. Therefore, the second property looks for latent actions that have context-independent semantics, so that each assignment of z conveys the same meaning in all dialog contexts. With the above definition of interpretable latent actions, we first introduce a recognition network R: $q_R(z|x)$ and a generation network G. The role of R is to map a sentence to the latent variable z, and the generator G defines the learning signals that will be used to train z's representation. Notably, our recognition network R does not depend on the context c, as has been the case in prior work (Serban et al., 2016). The motivation of this design is to encourage z to capture context-independent semantics, which is further elaborated in Section 3.4. With the z learned by R and G, we then introduce an encoder decoder network F: $p_F(x|z, c)$ and a policy network π: $p_\pi(z|c)$. At test time, given a context c, the policy network and the encoder decoder work together to generate the next response via $\tilde{x} = p_F(x \mid z \sim p_\pi(z|c), c)$. In short, R, G, F and π are the four components that comprise our proposed framework. The next section will first focus on developing R and G for learning interpretable z, and then move on to integrating R with F and π in Section 3.3.

3.1 Learning Sentence Representations from Auto-Encoding

Our baseline model is a sentence VAE with a discrete latent space. We use an RNN as the recognition network to encode the response x. Its last hidden state $h^R_{|x|}$ is used to represent x. We define z to be a set of K-way categorical variables $z = \{z_1 \dots z_m \dots z_M\}$, where M is the number of variables.
For each $z_m$, its posterior distribution is defined as $q_R(z_m|x) = \text{Softmax}(W_q h^R_{|x|} + b_q)$. During training, we use the Gumbel-Softmax trick to sample from this distribution and obtain low-variance gradients. To map the latent samples to the initial state of the decoder RNN, we define $\{e_1 \dots e_m \dots e_M\}$, where $e_m \in \mathbb{R}^{K \times D}$ and $D$ is the generator cell size. Thus the initial state of the generator is $h^G_0 = \sum_{m=1}^{M} e_m(z_m)$. Finally, the generator RNN is used to reconstruct the response given $h^G_0$. The VAE is trained to maximize the evidence lowerbound objective (ELBO) (Kingma and Welling, 2013). For simplicity, the later discussion drops the subscript $m$ in $z_m$ and assumes a single latent $z$. Since each $z_m$ is independent, we can easily extend the results below to multiple variables.

3.1.1 Anti-Information Limitation of ELBO

It is well known that sentence VAEs are hard to train because of the posterior collapse issue. Many empirical solutions have been proposed: weakening the decoder, adding an auxiliary loss, etc. (Bowman et al., 2015; Chen et al., 2016; Zhao et al., 2017). We argue that the posterior collapse issue lies in the ELBO itself and offer a novel decomposition to understand its behavior. First, instead of writing the ELBO for a single data point, we write it as an expectation over a dataset:

$$\mathcal{L}_{\text{VAE}} = \mathbb{E}_x\big[\mathbb{E}_{q_R(z|x)}[\log p_G(x|z)] - KL(q_R(z|x) \| p(z))\big] \quad (1)$$

We can expand the KL term as Eq. 2 (derivations in Appendix A.1) and rewrite the ELBO as:

$$\mathbb{E}_x[KL(q_R(z|x) \| p(z))] = I(Z, X) + KL(q(z) \| p(z)) \quad (2)$$

$$\mathcal{L}_{\text{VAE}} = \mathbb{E}_{q(z|x)p(x)}[\log p(x|z)] - I(Z, X) - KL(q(z) \| p(z)) \quad (3)$$

where $q(z) = \mathbb{E}_x[q_R(z|x)]$ and $I(Z, X)$ is the mutual information between $Z$ and $X$. This expansion shows that the KL term in the ELBO is trying to reduce the mutual information between the latent variables and the input data, which explains why VAEs often ignore the latent variable, especially when equipped with powerful decoders.

3.1.2 VAE with Information Maximization and Batch Prior Regularization

A natural solution to correct the anti-information issue in Eq. 3 is to maximize both the data likelihood lowerbound and the mutual information between $z$ and the input data:

$$\mathcal{L}_{\text{VAE}} + I(Z, X) = \mathbb{E}_{q_R(z|x)p(x)}[\log p_G(x|z)] - KL(q(z) \| p(z)) \quad (4)$$

Therefore, jointly optimizing the ELBO and the mutual information simply cancels out the information-discouraging term. Also, we can still sample from the prior distribution for generation because of the $KL(q(z) \| p(z))$ term. Eq. 4 is similar to the objectives used in adversarial autoencoders (Makhzani et al., 2015; Kim et al., 2017); our derivation provides a theoretical justification for their superior performance. Notably, Eq. 4 arrives at the same loss function proposed in infoVAE (Zhao S et al., 2017). However, our derivation is different, offering a new way to understand ELBO behavior. The remaining challenge is how to minimize $KL(q(z) \| p(z))$, since $q(z)$ is an expectation over $q(z|x)$. When $z$ is continuous, prior work has used adversarial training (Makhzani et al., 2015; Kim et al., 2017) or Maximum Mean Discrepancy (MMD) (Zhao S et al., 2017) to regularize $q(z)$. It turns out that minimizing $KL(q(z) \| p(z))$ for discrete $z$ is much simpler than for its continuous counterparts. Let $x_n$ be a sample from a batch of $N$ data points. Then we have:

$$q(z) \approx \frac{1}{N} \sum_{n=1}^{N} q(z|x_n) = q'(z) \quad (5)$$

where $q'(z)$ is a mixture of softmaxes from the posteriors $q(z|x_n)$ of each $x_n$. We can approximate $KL(q(z) \| p(z))$ by:

$$KL(q'(z) \| p(z)) = \sum_{k=1}^{K} q'(z=k) \log \frac{q'(z=k)}{p(z=k)} \quad (6)$$

We refer to Eq. 6 as Batch Prior Regularization (BPR). As $N$ approaches infinity, $q'(z)$ approaches the true marginal distribution $q(z)$.
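Putting the pieces of this subsection together, the sketch below shows how the K-way posteriors, the Gumbel-Softmax samples, the decoder initial state $h^G_0 = \sum_m e_m(z_m)$, and the mini-batch BPR term of Eq. 6 might be wired up in PyTorch. It is a minimal reading of the text, not the authors' released implementation; the layer sizes, the single-layer GRU recognition encoder, and the use of torch.nn.functional.gumbel_softmax are assumptions consistent with the description above.

```python
# Minimal sketch (not the authors' code) of the discrete recognition network:
# an RNN encodes the response x, M independent K-way posteriors are produced,
# Gumbel-Softmax yields low-variance samples, the samples give the decoder's
# initial state h0 = sum_m e_m(z_m), and BPR (Eq. 6) is computed per batch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscreteRecognition(nn.Module):
    def __init__(self, vocab_size, emb_dim=200, hid_dim=512,
                 M=20, K=10, dec_dim=512):
        super().__init__()
        self.M, self.K = M, K
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.to_logits = nn.Linear(hid_dim, M * K)                  # W_q, b_q
        self.latent_emb = nn.Parameter(torch.randn(M, K, dec_dim))  # e_m

    def forward(self, x, tau=1.0):
        # x: [batch, T] word ids of the response
        _, h_last = self.rnn(self.embedding(x))            # [1, batch, hid_dim]
        logits = self.to_logits(h_last.squeeze(0)).view(-1, self.M, self.K)
        q_z = F.softmax(logits, dim=-1)                    # posteriors q_R(z_m|x)
        z = F.gumbel_softmax(logits, tau=tau, hard=True, dim=-1)   # samples
        h0 = torch.einsum('bmk,mkd->bd', z, self.latent_emb)       # decoder init
        return q_z, z, h0

def batch_prior_regularization(q_z, eps=1e-12):
    """BPR of Eq. 6 under a uniform prior p(z) = 1/K, summed over the M variables.

    q_z: [batch, M, K] posteriors for a (randomized) mini-batch.
    """
    q_prime = q_z.mean(dim=0)                              # q'(z), Eq. 5
    K = q_z.size(-1)
    log_prior = torch.log(torch.tensor(1.0 / K))
    return (q_prime * ((q_prime + eps).log() - log_prior)).sum()

# Usage (toy):
# model = DiscreteRecognition(vocab_size=1000)
# q_z, z, h0 = model(torch.randint(0, 1000, (30, 12)))
# loss_reg = batch_prior_regularization(q_z)
```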
In practice, we only need to use the data from each mini-batch assuming that the mini batches are randomized. Last, BPR is fundamentally different from multiplying a coefficient < 1 to anneal the KL term in VAE (Bowman et al., 2015). This is because BPR is a non-linear operation log sum exp. For later discussion, we denote our discrete infoVAE with BPR as DI-VAE. 3.2 Learning Sentence Representations from the Context DI-VAE infers sentence representations by reconstruction of the input sentence. Past research in distributional semantics has suggested the meaning of language can be inferred from the adjacent context (Harris, 1954; Hill et al., 2016). The distributional hypothesis is especially applicable to dialog since the utterance meaning is highly contextual. For example, the dialog act is a wellknown utterance feature and depends on dialog state (Austin, 1975; Stolcke et al., 2000). Thus, we introduce a second type of latent action based on sentence-level distributional semantics. Skip thought (ST) is a powerful sentence representation that captures contextual information (Kiros et al., 2015). ST uses an RNN to encode a sentence, and then uses the resulting sentence representation to predict the previous and next sentences. Inspired by ST’s robust performance across multiple tasks (Hill et al., 2016), we adapt our DI-VAE to Discrete Information Variational Skip Thought (DI-VST) to learn discrete latent actions that model distributional semantics of sentences. We use the same recognition network from DI-VAE to output z’s posterior distribution qR(z|x). Given the samples from qR(z|x), two RNN generators are used to predict the previous sentence xp and the next sentences xn. Finally, the learning objective is to maximize: LDI-VST = EqR(z|x)p(x))[log(pn G(xn|z)pp G(xp|z))] −KL(q(z)∥p(z)) (7) 3.3 Integration with Encoder Decoders We now describe how to integrate a given qR(z|x) with an encoder decoder and a policy network. Let the dialog context c be a sequence of utterances. Then a dialog context encoder network can encode the dialog context into a distributed representation he = Fe(c). The decoder Fd can generate the responses ˜x = Fd(he, z) using samples from qR(z|x). Meanwhile, we train π to predict the aggregated posterior Ep(x|c)[qR(z|x)] from c via maximum likelihood training. This model is referred as Latent Action Encoder Decoder (LAED) with the following objective. LLAED(θF, θπ) = EqR(z|x)p(x,c)[logpπ(z|c) + log pF(x|z, c)] (8) 1102 Also simply augmenting the inputs of the decoders with latent action does not guarantee that the generated response exhibits the attributes of the give action. Thus we use the controllable text generation framework (Hu et al., 2017) by introducing LAttr, which reuses the same recognition network qR(z|x) as a fixed discriminator to penalize the decoder if its generated responses do not reflect the attributes in z. LAttr(θF) = EqR(z|x)p(c,x)[log qR(z|F(c, z))] (9) Since it is not possible to propagate gradients through the discrete outputs at Fd at each word step, we use a deterministic continuous relaxation (Hu et al., 2017) by replacing output of Fd with the probability of each word. Let ot be the normalized probability at step t ∈[1, |x|], the inputs to qR at time t are then the sum of word embeddings weighted by ot, i.e. hR t = RNN(hR t−1, Eot) and E is the word embedding matrix. Finally this loss is combined with LLAED and a hyperparameter λ to have Attribute Forcing LAED. 
LattrLAED = LLAED + λLAttr (10) 3.4 Relationship with Conditional VAEs It is not hard to see LLAED is closely related to the objective of CVAEs for dialog generation (Serban et al., 2016; Zhao et al., 2017), which is: LCVAE = Eq[log p(x|z, c)]−KL(q(z|x, c)∥p(z|c)) (11) Despite their similarities, we highlight the key differences that prohibit CVAE from achieving interpretable dialog generation. First LCVAE encourages I(x, z|c) (Agakov, 2005), which learns z that capture context-dependent semantics. More intuitively, z in CVAE is trained to generate x via p(x|z, c) so the meaning of learned z can only be interpreted along with its context c. Therefore this violates our goal of learning context-independent semantics. Our methods learn qR(z|x) that only depends on x and trains qR separately to ensure the semantics of z are interpretable standalone. 4 Experiments and Results The proposed methods are evaluated on four datasets. The first corpus is Penn Treebank (PTB) (Marcus et al., 1993) used to evaluate sentence VAEs (Bowman et al., 2015). We used the version pre-processed by Mikolov (Mikolov et al., 2010). The second dataset is the Stanford Multi-Domain Dialog (SMD) dataset that contains 3,031 human-Woz, task-oriented dialogs collected from 3 different domains (navigation, weather and scheduling) (Eric and Manning, 2017). The other two datasets are chat-oriented data: Daily Dialog (DD) and Switchboard (SW) (Godfrey and Holliman, 1997), which are used to test whether our methods can generalize beyond task-oriented dialogs but also to to open-domain chatting. DD contains 13,118 multi-turn human-human dialogs annotated with dialog acts and emotions. (Li et al., 2017). SW has 2,400 human-human telephone conversations that are annotated with topics and dialog acts. SW is a more challenging dataset because it is transcribed from speech which contains complex spoken language phenomenon, e.g. hesitation, self-repair etc. 4.1 Comparing Discrete Sentence Representation Models The first experiment used PTB and DD to evaluate the performance of the proposed methods in learning discrete sentence representations. We implemented DI-VAE and DI-VST using GRURNN (Chung et al., 2014) and trained them using Adam (Kingma and Ba, 2014). Besides the proposed methods, the following baselines are compared. Unregularized models: removing the KL(q|p) term from DI-VAE and DI-VST leads to a simple discrete autoencoder (DAE) and discrete skip thought (DST) with stochastic discrete hidden units. ELBO models: the basic discrete sentence VAE (DVAE) or variational skip thought (DVST) that optimizes ELBO with regularization term KL(q(z|x)∥p(z)). We found that standard training failed to learn informative latent actions for either DVAE or DVST because of the posterior collapse. Therefore, KL-annealing (Bowman et al., 2015) and bag-of-word loss (Zhao et al., 2017) are used to force these two models learn meaningful representations. We also include the results for VAE with continuous latent variables reported on the same PTB (Zhao et al., 2017). Additionally, we report the perplexity from a standard GRU-RNN language model (Zaremba et al., 2014). The evaluation metrics include reconstruction perplexity (PPL), KL(q(z)∥p(z)) and the mutual information between input data and latent vari1103 ables I(x, z). Intuitively a good model should achieve low perplexity and KL distance, and simultaneously achieve high I(x, z). The discrete latent space for all models are M=20 and K=10. Mini-batch size is 30. 
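Of the evaluation quantities just listed, PPL comes directly from the decoder, while KL(q(z)∥p(z)) and I(x, z) can both be estimated from the batch posteriors; one convenient route for the latter is the identity I(Z, X) = H(q(z)) − E_x[H(q(z|x))], with q(z) approximated by the batch average as in Eq. 5. The snippet below sketches such an estimator for a single K-way variable (the paper's simplification); it is illustrative and not taken from the released code.

```python
# Sketch: estimating the reported I(x, z) from a batch of posteriors q(z|x),
# using I(Z, X) = H(q(z)) - E_x[H(q(z|x))] with q(z) approximated by the batch
# average q'(z). Single K-way variable for simplicity; with M variables the
# per-variable estimates would be summed. Not the authors' evaluation code.
import torch

def entropy(p, eps=1e-12):
    return -(p * (p + eps).log()).sum(dim=-1)

def mutual_information(q_zx):
    """q_zx: [N, K] posteriors q(z|x_n) over a batch of N responses."""
    q_z = q_zx.mean(dim=0)                    # aggregated posterior q'(z)
    return entropy(q_z) - entropy(q_zx).mean()

# Toy check: confident, evenly spread posteriors give I(x, z) close to log K.
print(mutual_information(torch.eye(10)))      # ≈ log(10) ≈ 2.30
```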
Dom Model PPL KL(q∥p) I(x, z) PTB RNNLM 116.22 VAE 73.49 15.94* DAE 66.49 2.20 0.349 DVAE 70.84 0.315 0.286 DI-VAE 52.53 0.133 1.18 DD RNNLM 31.15 DST xp:28.23 xn:28.16 0.588 1.359 DVST xp:30.36 xn:30.71 0.007 0.081 DI-VST xp:28.04 xn:27.94 0.088 1.028 Table 1: Results for various discrete sentence representations. The KL for VAE is KL(q(z|x)∥p(z)) instead of KL(q(z)∥p(z)) (Zhao et al., 2017) Table 1 shows that all models achieve better perplexity than an RNNLM, which shows they manage to learn meaningful q(z|x). First, for autoencoding models, DI-VAE is able to achieve the best results in all metrics compared other methods. We found DAEs quickly learn to reconstruct the input but they are prone to overfitting during training, which leads to lower performance on the test data compared to DI-VAE. Also, since there is no regularization term in the latent space, q(z) is very different from the p(z) which prohibits us from generating sentences from the latent space. In fact, DI-VAE enjoys the same linear interpolation properties reported in (Bowman et al., 2015) (See Appendix A.2). As for DVAEs, it achieves zero I(x, z) in standard training and only manages to learn some information when training with KL-annealing and bag-of-word loss. On the other hand, our methods achieve robust performance without the need for additional processing. Similarly, the proposed DI-VST is able to achieve the lowest PPL and similar KL compared to the strongly regularized DVST. Interestingly, although DST is able to achieve the highest I(x, z), but PPL is not further improved. These results confirm the effectiveness of the proposed BPR in terms of regularizing q(z) while learning meaningful posterior q(z|x). In order to understand BPR’s sensitivity to batch size N, a follow-up experiment varied the batch size from 2 to 60 (If N=1, DI-VAE is equivalent to DVAE). Figure 2 show that as N increases, Figure 2: Perplexity and I(x, z) on PTB by varying batch size N. BPR works better for larger N. perplexity, I(x, z) monotonically improves, while KL(q∥p) only increases from 0 to 0.159. After N > 30, the performance plateaus. Therefore, using mini-batch is an efficient trade-off between q(z) estimation and computation speed. The last experiment in this section investigates the relation between representation learning and the dimension of the latent space. We set a fixed budget by restricting the maximum number of modes to be about 1000, i.e. KM ≈1000. We then vary the latent space size and report the same evaluation metrics. Table 2 shows that models with multiple small latent variables perform significantly better than those with large and few latent variables. K, M KM PPL KL(q∥p) I(x, z) 1000, 1 1000 75.61 0.032 0.335 10, 3 1000 71.42 0.071 0.607 4, 5 1024 68.43 0.088 0.809 Table 2: DI-VAE on PTB with different latent dimensions under the same budget. 4.2 Interpreting Latent Actions The next question is to interpret the meaning of the learned latent action symbols. To achieve this, the latent action of an utterance xn is obtained from a greedy mapping: an = argmaxk qR(z = k|xn). We set M=3 and K=5, so that there are at most 125 different latent actions, and each xn can now be represented by a1-a2-a3, e.g. “How are you?” →1-4-2. Assuming that we have access to manually clustered data according to certain classes 1104 (e.g. dialog acts), it is unfair to use classic cluster measures (Vinh et al., 2010) to evaluate the clusters from latent actions. 
This is because the uniform prior p(z) evenly distribute the data to all possible latent actions, so that it is expected that frequent classes will be assigned to several latent actions. Thus we utilize the homogeneity metric (Rosenberg and Hirschberg, 2007) that measures if each latent action contains only members of a single class. We tested this on the SW and DD, which contain human annotated features and we report the latent actions’ homogeneity w.r.t these features in Table 3. On DD, results show DI-VST SW DD Act Topic Act Emotion DI-VAE 0.48 0.08 0.18 0.09 DI-VST 0.33 0.13 0.34 0.12 Table 3: Homogeneity results (bounded [0, 1]). works better than DI-VAE in terms of creating actions that are more coherent for emotion and dialog acts. The results are interesting on SW since DI-VST performs worse on dialog acts than DIVAE. One reason is that the dialog acts in SW are more fine-grained (42 acts) than the ones in DD (5 acts) so that distinguishing utterances based on words in x is more important than the information in the neighbouring utterances. We then apply the proposed methods to SMD which has no manual annotation and contains taskoriented dialogs. Two experts are shown 5 randomly selected utterances from each latent action and are asked to give an action name that can describe as many of the utterances as possible. Then an Amazon Mechanical Turk study is conducted to evaluate whether other utterances from the same latent action match these titles. 5 workers see the action name and a different group of 5 utterances from that latent action. They are asked to select all utterances that belong to the given actions, which tests the homogeneity of the utterances falling in the same cluster. Negative samples are included to prevent random selection. Table 4 shows that both methods work well and DI-VST achieved better homogeneity than DI-VAE. Since DI-VAE is trained to reconstruct its input and DI-VST is trained to model the context, they group utterances in different ways. For example, DI-VST would group “Can I get a restaurant”, “I am looking for a restaurant” into one action where Model Exp Agree Worker κ Match Rate DI-VAE 85.6% 0.52 71.3% DI-VST 93.3% 0.48 74.9% Table 4: Human evaluation results on judging the homogeneity of latent actions in SMD. DI-VAE may denote two actions for them. Finally, Table 4.2 shows sample annotation results, which show cases of the different types of latent actions discovered by our models. Model Action Sample utterance DI-VAE scheduling - sys: okay, scheduling a yoga activity with Tom for the 8th at 2pm. - sys: okay, scheduling a meeting for 6 pm on Tuesday with your boss to go over the quarterly report. requests - usr: find out if it ’s supposed to rain - usr: find nearest coffee shop DI-VST ask schedule info - usr: when is my football activity and who is going with me? - usr: tell me when my dentist appointment is? requests - usr: how about other coffee? - usr: 11 am please Table 5: Example latent actions discovered in SMD using our methods. 4.3 Dialog Response Generation with Latent Actions Finally we implement an LAED as follows. The encoder is a hierarchical recurrent encoder (Serban et al., 2016) with bi-directional GRU-RNNs as the utterance encoder and a second GRU-RNN as the discourse encoder. The discourse encoder output its last hidden state he |x|. The decoder is another GRU-RNN and its initial state of the decoder is obtained by hd 0 = he |x| + PM m=1 em(zm), where z comes from the recognition network of the proposed methods. 
The policy network π is a 2-layer multi-layer perceptron (MLP) that models pπ(z|he |x|). We use up to the previous 10 utterances as the dialog context and denote the LAED using DI-VAE latent actions as AE-ED and the one uses DI-VST as ST-ED. First we need to confirm whether an LAED can generate responses that are consistent with the semantics of a given z. To answer this, we use a pre-trained recognition network R to check if a generated response carries the attributes in 1105 the given action. We generate dialog responses on a test dataset via ˜x = F(z ∼ π(c), c) with greedy RNN decoding. The generated responses are passed into the R and we measure attribute accuracy by counting ˜x as correct if z = argmaxk qR(k|˜x). Table 4.3 shows our generated Domain AE-ED +Lattr ST-ED +Lattr SMD 93.5% 94.8% 91.9% 93.8% DD 88.4% 93.6% 78.5% 86.1% SW 84.7% 94.6% 57.3% 61.3% Table 6: Results for attribute accuracy with and without attribute loss. responses are highly consistent with the given latent actions. Also, latent actions from DI-VAE achieve higher attribute accuracy than the ones from DI-VST, because z from auto-encoding is explicitly trained for x reconstruction. Adding Lattr is effective in forcing the decoder to take z into account during its generation, which helps the most in more challenging open-domain chatting data, e.g. SW and DD. The accuracy of ST-ED on SW is worse than the other two datasets. The reason is that SW contains many short utterances that can be either a continuation of the same speaker or a new turn from the other speaker, whereas the responses in the other two domains are always followed by a different speaker. The more complex context pattern in SW may require special treatment. We leave it for future work. The second experiment checks if the policy network π is able to predict the right latent action given just the dialog context. We report both accuracy, i.e. argmaxk qR(k|x) = argmaxk′ pπ(k′|c) and perplexity of pπ(z|c). The perplexity measure is more useful for open domain dialogs because decision-making in complex dialogs is often one-to-many given a similar context (Zhao et al., 2017). Table 7 shows the prediction scores on SMD AE-ED 3.045 (51.5% sys 52.4% usr 50.5%) ST-ED 1.695 (75.5% sys 82.1% usr 69.2%) DD SW AE-ED 4.47 (35.8%) 4.46 (31.68%) ST-ED 3.89 (47.5%) 3.68 (33.2%) Table 7: Performance of policy network. Lattr is included in training. the three dialog datasets. These scores provide useful insights to understand the complexity of a dialog dataset. For example, accuracy on opendomain chatting is harder than the task-oriented SMD data. Also, it is intuitive that predicting system actions is easier than predicting user actions on SMD. Also, in general the prediction scores for ST-ED are higher the ones for AE-ED. The reason is related to our previous discussion about the granularity of the latent actions. Since latent actions from DI-VST mainly model the the type of utterances used in certain types of context, it is easier for the policy network to predict latent actions from DI-VST. Therefore, choosing the type of latent actions is a design choice and depends on the type of interpretability that is needed. We finish with an example generated from the two variants of LAED on SMD as shown in Table 8. Given a dialog context, our systems are able to output a probability distribution over different latent actions that have interpretable meaning along with their natural language realizations. c usr: Where does my friend live? 
Model Action Generated Responses AEED give loc info - Tom is at 753 University Ave, and a road block. p(z|c)=0.34 - Comfort Inn is at 7 miles away. give user info - Your home address is 5671 barringer street. p(z|c)=0.22 - Your home is at 10 ames street. STED give loc info - Jill’s house is 8 miles away at 347 Alta Mesa Ave. p(z|c)=0.93 - Jill lives at 347 Alta Mesa Ave. Table 8: Interpretable dialog generation on SMD with top probable latent actions. AE-ED predicts more fine-grained but more error-prone actions. 5 Conclusion and Future Work This paper presents a novel unsupervised framework that enables the discovery of discrete latent actions and interpretable dialog response generation. Our main contributions reside in the two sentence representation models DI-VAE and DIVST, and their integration with the encoder decoder models. Experiments show the proposed methods outperform strong baselines in learning discrete latent variables and showcase the effectiveness of interpretable dialog response generation. Our findings also suggest promising future research directions, including learning better context-based latent actions and using reinforce1106 ment learning to adapt policy networks. We believe that this work is an important step forward towards creating generative dialog models that can not only generalize to large unlabelled datasets in complex domains but also be explainable to human users. References Felix Vsevolodovich Agakov. 2005. Variational Information Maximization in Stochastic Environments. Ph.D. thesis, University of Edinburgh. John Langshaw Austin. 1975. How to do things with words. Oxford university press. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of machine Learning research 3(Jan):993–1022. Dan Bohus, Antoine Raux, Thomas K Harris, Maxine Eskenazi, and Alexander I Rudnicky. 2007. Olympus: an open-source framework for conversational spoken language interface research. In Proceedings of the workshop on bridging the gap: Academic and industrial research in dialog technologies. Association for Computational Linguistics, pages 32–39. Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. 2015. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349 . Kris Cao and Stephen Clark. 2017. Latent variable dialogue models and their diversity. arXiv preprint arXiv:1702.05962 . Xi Chen, Diederik P Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. 2016. Variational lossy autoencoder. arXiv preprint arXiv:1611.02731 . Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 . Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 . Mihail Eric and Christopher D Manning. 2017. Keyvalue retrieval networks for task-oriented dialogue. arXiv preprint arXiv:1705.05414 . Milica Gaˇsi´c, Filip Jurˇc´ıˇcek, Simon Keizer, Franc¸ois Mairesse, Blaise Thomson, Kai Yu, and Steve Young. 2010. Gaussian processes for fast policy optimisation of pomdp-based dialogue managers. In Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue. 
Association for Computational Linguistics, pages 201– 204. John J Godfrey and Edward Holliman. 1997. Switchboard-1 release 2. Linguistic Data Consortium, Philadelphia . Zellig S Harris. 1954. Distributional structure. Word 10(2-3):146–162. Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. arXiv preprint arXiv:1602.03483 . Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward controlled generation of text. In International Conference on Machine Learning. pages 1587–1596. Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144 . Yoon Kim, Kelly Zhang, Alexander M Rush, Yann LeCun, et al. 2017. Adversarially regularized autoencoders for generating discrete structures. arXiv preprint arXiv:1706.04223 . Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Diederik P Kingma and Max Welling. 2013. Autoencoding variational bayes. arXiv preprint arXiv:1312.6114 . Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in neural information processing systems. pages 3294–3302. Staffan Larsson and David R Traum. 2000. Information state and dialogue management in the trindi dialogue move engine toolkit. Natural language engineering 6(3-4):323–340. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural conversation model. arXiv preprint arXiv:1603.06155 . Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. Dailydialog: A manually labelled multi-turn dialogue dataset. arXiv preprint arXiv:1710.03957 . Chris J Maddison, Andriy Mnih, and Yee Whye Teh. 2016. The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712 . 1107 Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. 2015. Adversarial autoencoders. arXiv preprint arXiv:1511.05644 . Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Computational linguistics 19(2):313–330. Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In International Conference on Machine Learning. pages 1727–1736. Tomas Mikolov, Martin Karafi´at, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Interspeech. volume 2, page 3. Andrew Rosenberg and Julia Hirschberg. 2007. Vmeasure: A conditional entropy-based external cluster evaluation measure. In Proceedings of the 2007 joint conference on empirical methods in natural language processing and computational natural language learning (EMNLP-CoNLL). Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2016. A hierarchical latent variable encoder-decoder model for generating dialogues. arXiv preprint arXiv:1605.06069 . Andreas Stolcke, Noah Coccaro, Rebecca Bates, Paul Taylor, Carol Van Ess-Dykema, Klaus Ries, Elizabeth Shriberg, Daniel Jurafsky, Rachel Martin, and Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational linguistics 26(3):339–373. Aaron van den Oord, Oriol Vinyals, et al. 2017. 
Neural discrete representation learning. In Advances in Neural Information Processing Systems. pages 6309–6318. Nguyen Xuan Vinh, Julien Epps, and James Bailey. 2010. Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. Journal of Machine Learning Research 11(Oct):2837–2854. Tsung-Hsien Wen, Yishu Miao, Phil Blunsom, and Steve Young. 2017. Latent intention dialogue models. arXiv preprint arXiv:1705.10229 . Jason D Williams and Steve Young. 2007. Partially observable markov decision processes for spoken dialog systems. Computer Speech & Language 21(2):393–422. Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2016. Topic augmented neural response generation with a joint attention mechanism. arXiv preprint arXiv:1606.08340 . Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329 . Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. arXiv preprint arXiv:1703.10960 . Shengjia Zhao S, Jiaming Song, and Stefano Ermon. 2017. Infovae: Information maximizing variational autoencoders. arXiv preprint arXiv:1706.02262 . Chunting Zhou and Graham Neubig. 2017. Multispace variational encoder-decoders for semisupervised labeled sequence transduction. arXiv preprint arXiv:1704.01691 .
2018
101
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1108–1117 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1108 Learning to Control the Specificity in Neural Response Generation Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Jun Xu and Xueqi Cheng University of Chinese Academy of Sciences, Beijing, China CAS Key Lab of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China {zhangruqing,fanyixing}@software.ict.ac.cn {guojiafeng,lanyanyan,junxu,cxq}@ict.ac.cn Abstract In conversation, a general response (e.g., “I don’t know”) could correspond to a large variety of input utterances. Previous generative conversational models usually employ a single model to learn the relationship between different utteranceresponse pairs, thus tend to favor general and trivial responses which appear frequently. To address this problem, we propose a novel controlled response generation mechanism to handle different utterance-response relationships in terms of specificity. Specifically, we introduce an explicit specificity control variable into a sequence-to-sequence model, which interacts with the usage representation of words through a Gaussian Kernel layer, to guide the model to generate responses at different specificity levels. We describe two ways to acquire distant labels for the specificity control variable in learning. Empirical studies show that our model can significantly outperform the state-of-theart response generation models under both automatic and human evaluations. 1 Introduction Human-computer conversation is a critical and challenging task in AI and NLP. There have been two major streams of research in this direction, namely task oriented dialog and general purpose dialog (i.e., chit-chat). Task oriented dialog aims to help people complete specific tasks such as buying tickets or shopping, while general purpose dialog attempts to produce natural and meaningful conversations with people regarding a wide range of topics in open domains (Perez-Marin, 2011; Sordoni et al.). In recent years, the latter has atMust support! Cheer! Support! It’s good. My friends and I are shocked! Figure 1: Rank-frequency distribution of the responses in the chit-chat corpus, with x and y axes being lg(rank order) and lg(frequency) respectively. tracted much attention in both academia and industry as a way to explore the possibility in developing a general purpose AI system in language (e.g., chatbots). A widely adopted approach to general purpose dialog is learning a generative conversational model from large scale social conversation data. Most methods in this line are constructed within the statistical machine translation (SMT) framework, where a sequence-to-sequence (Seq2Seq) model is learned to “translate” an input utterance into a response. However, general purpose dialog is intrinsically different from machine translation. In machine translation, since every sentence and its translation are semantically equivalent, there exists a 1-to-1 relationship between them. However, in general purpose dialog, a general response (e.g., “I don’t know”) could correspond to a large variety of input utterances. For example, in the chit-chat corpus used in this study (as shown in Figure 1), the top three most frequently appeared responses are “Must support! Cheer!”, “Support! It’s good.”, and “My friends and I are shocked!”, where the response “Must support! 
Cheer!” is used for 1216 different input utterances. Previous Seq2Seq models, which treat all the utteranceresponse pairs uniformly and employ a single 1109 model to learn the relationship between them, will inevitably favor such general responses with high frequency. Although these responses are safe for replying different utterances, they are boring and trivial since they carry little information, and may quickly lead to an end of the conversation. There have been a few efforts attempting to address this issue in literature. Li et al. (2016a) proposed to use the Maximum Mutual Information (MMI) as the objective to penalize general responses. It could be viewed as a post-processing approach which did not solve the generation of trivial responses fundamentally. Xing et al. (2017) pre-defined a set of topics from an external corpus to guide the generation of the Seq2Seq model. However, it is difficult to ensure that the topics learned from the external corpus are consistent with that in the conversation corpus, leading to the introduction of additional noises. Zhou et al. (2017) introduced latent responding factors to model multiple responding mechanisms. However, these latent factors are usually difficult in interpretation and it is hard to decide the number of the latent factors. In our work, we propose a novel controlled response generation mechanism to handle different utterance-response relationships in terms of specificity. The key idea is inspired by our observation on everyday conversation between humans. In human-human conversation, people often actively control the specificity of responses depending on their own response purpose (which might be affected by a variety of underlying factors like their current mood, knowledge state and so on). For example, they may provide some interesting and specific responses if they like the conversation, or some general responses if they want to end it. They may provide very detailed responses if they are familiar with the topic, or just “I don’t know” otherwise. Therefore, we propose to simulate the way people actively control the specificity of the response. We employ a Seq2Seq framework and further introduce an explicit specificity control variable to represent the response purpose of the agent. Meanwhile, we assume that each word, beyond the semantic representation which relates to its meaning, also has another representation which relates to the usage preference under different response purpose. We name this representation as the usage representation of words. The specificity control variable then interacts with the usage representation of words through a Gaussian Kernel layer, and guides the Seq2Seq model to generate responses at different specificity levels. We refer to our model as Specificity Controlled Seq2Seq model (SC-Seq2Seq). Note that unlike the work by (Xing et al., 2017), we do not rely on any external corpus to learn our model. All the model parameters are learned on the same conversation corpus in an end-to-end way. We employ distant supervision to train our SCSeq2Seq model since the specificity control variable is unknown in the raw data. We describe two ways to acquire distant labels for the specificity control variable, namely Normalized Inverse Response Frequency (NIRF) and Normalized Inverse Word Frequency (NIWF). By using normalized values, we restrict the specificity control variable to be within a pre-defined continuous value range with each end has very clear meaning on the specificity. 
This is significantly different from the discrete latent factors in (Zhou et al., 2017) which are difficult in interpretation. We conduct an empirical study on a large public dataset, and compare our model with several state-of-the-art response generation methods. Empirical results show that our model can generate either general or specific responses, and significantly outperform existing methods under both automatic and human evaluations. 2 Related Work In this section, we briefly review the related work on conversational models and response specificity. 2.1 Conversational Models Automatic conversation has attracted increasing attention over the past few years. At the very beginning, people started the research using handcrafted rules and templates (Walker et al., 2001; Williams et al., 2013; Henderson et al., 2014). These approaches required little data for training but huge manual effort to build the model, which is very time-consuming. For now, conversational models fall into two major categories: retrieval-based and generation-based. Retrievalbased conversational models search the most suitable response from candidate responses using different schemas (Kearns, 2000; Wang et al., 2013; Yan et al., 2016). These methods rely on preexisting responses, thus are difficult to be exten1110 ded to open domains (Zhou et al., 2017). With the large amount of conversation data available on the Internet, generation-based conversational models developed within a SMT framework (Ritter et al., 2011; Cho et al., 2014; Bahdanau et al., 2015) show promising results. Shang et al. (2015) generated replies for short-text conversation by encoder-decoder-based neural network with local and global attentions. Serban et al. (2016) built an end-to-end dialogue system using generative hierarchical neural network. Gu et al. (2016) introduced copynet to simulate the repeating behavior of humans in conversation. Similarly, our model is also based on the encoder-decoder framework. 2.2 Response Specificity Some recent studies began to focus on generating more specific or informative responses in conversation. It is also called a diversity problem since if each response is more specific, it would be more diverse between responses of different utterances. As an early work, Li et al. (2016a) used Maximum Mutual Information (MMI) as the objective to penalize general responses. Later, Li et al. (2017) proposed a data distillation method, which trains a series of generative models at different levels of specificity and uses a reinforcement learning model to choose the model best suited for decoding depending on the conversation context. These methods circumvented the general response issue by using either a post-processing approach or a data selection approach. Besides, Li et al. (2016b) tried to build a personalized conversation engine by adding extra personal information. Xing et al. (2017) incorporated the topic information from an external corpus into the Seq2Seq framework to guide the generation. However, external dataset may not be always available or consistent with the conversation dataset in topics. Zhou et al. (2017) introduced latent responding factors to the Seq2Seq model to avoid generating safe responses. However, these latent factors are usually difficult in interpretation and hard to decide the number. Moreover, Mou et al. (2016) proposed a content-introducing approach to generate a response based on a predicted keyword. Yao et al. 
(2016) attempted to improve the specificity within the reinforcement learning framework by using the averaged IDF score of the words in the response as a reward. Shen et al. (2017) presented a conditional variational framework for generating specific responses based on specific attributes. Unlike these existing methods, we introduce an explicit specificity control variable into a Seq2Seq model to handle different utterance-response relationships in terms of specificity.
3 Specificity Controlled Seq2Seq Model In this section, we present the Specificity Controlled Seq2Seq model (SC-Seq2Seq), a novel Seq2Seq model designed for actively controlling the generated responses in terms of specificity.
3.1 Model Overview The basic idea of a generative conversational model is to learn the mapping from an input utterance to its response, typically using an encoder-decoder framework. Formally, given an input utterance sequence X = (x_1, x_2, ..., x_T) and a target response sequence Y = (y_1, y_2, ..., y_{T'}), a neural Seq2Seq model is employed to learn p(Y|X) based on the training corpus D = {(X, Y) | Y is the response of X}. By maximizing the likelihood of all the utterance-response pairs with a single mapping mechanism, the learned Seq2Seq model will inevitably favor those general responses that can correspond to a large variety of input utterances. To address this issue, we assume that there are different mapping mechanisms between utterance-response pairs with respect to their specificity relation. Rather than involving some latent factors, we propose to introduce an explicit variable s into a Seq2Seq model to handle different utterance-response mappings in terms of specificity. By doing so, we hope that (1) s would have an explicit meaning with respect to specificity, and (2) s could not only interpret but also actively control the generation of the response Y given the input utterance X. The goal of our model thus becomes to learn p(Y|X, s) over the corpus D, where we acquire distant labels for s from the same corpus for learning. The overall architecture of SC-Seq2Seq is depicted in Figure 2, and we detail our model as follows.
Figure 2: The overall architecture of the SC-Seq2Seq model.
3.1.1 Encoder The encoder maps the input utterance X into a compact vector that captures its essential topics. Specifically, we use a bi-directional GRU (Cho et al., 2014) as the utterance encoder, and each word x_i is first represented by its semantic representation e_i mapped by the semantic embedding matrix E as the input of the encoder. Then, the encoder represents the utterance X as a series of hidden vectors {h_t}_{t=1}^T, modeling the sequence from both the forward and backward directions. Finally, we use the final backward hidden state as the initial hidden state of the decoder.
3.1.2 Decoder The decoder generates a response Y given the hidden representations of the input utterance X under some specificity level denoted by the control variable s.
Specifically, at step t, we define the probability of generating any target word y_t by a “mixture” of probabilities:
p(y_t) = \beta p_M(y_t) + \gamma p_S(y_t),   (1)
where p_M(y_t) denotes the semantic-based generation probability, p_S(y_t) denotes the specificity-based generation probability, and \beta and \gamma are the mixture coefficients. Specifically, p_M(y_t) is defined in the same way as in the traditional Seq2Seq model (Sutskever et al., 2014):
p_M(y_t = w) = w^T (W_M^h \cdot h_{y_t} + W_M^e \cdot e_{t-1} + b_M),   (2)
where w is a one-hot indicator vector of the word w and e_{t-1} is the semantic representation of the (t-1)-th generated word in the decoder. W_M^h, W_M^e and b_M are parameters. h_{y_t} is the t-th hidden state in the decoder, which is computed by:
h_{y_t} = f(y_{t-1}, h_{y_{t-1}}, c_t),   (3)
where f is a GRU unit and c_t is the context vector that allows the decoder to pay different attention to different parts of the input at different steps (Bahdanau et al., 2015). p_S(y_t) denotes the generation probability of the target word given the specificity control variable s. Here we introduce a Gaussian Kernel layer to define this probability. Specifically, we assume that each word, beyond its semantic representation e, also has a usage representation u mapped by the usage embedding matrix U. The usage representation of a word denotes its usage preference under different specificity. The specificity control variable s then interacts with the usage representations through the Gaussian Kernel layer to produce the specificity-based generation probability p_S(y_t):
p_S(y_t = w) = \frac{1}{\sqrt{2\pi}\sigma} \exp\left( -\frac{(\Psi_S(U, w) - s)^2}{2\sigma^2} \right), \quad \Psi_S(U, w) = \sigma(w^T (U \cdot W_U + b_U)),   (4)
where \sigma^2 is the variance, and \Psi_S(\cdot) maps the word usage representation into a real value, with the specificity control variable s serving as the mean of the Gaussian distribution. W_U and b_U are parameters to be learned. Note that, in general, we can use any real-valued function to define \Psi_S(U, w). In this work, we use the sigmoid function \sigma(\cdot) for \Psi_S(U, w) since we want to define s within the range [0,1] so that each end has a very clear meaning in terms of specificity, i.e., 0 denotes the most general response while 1 denotes the most specific response. In the next section, we keep this property when we define the distant label for the control variable.
3.2 Distant Supervision We train our SC-Seq2Seq model by maximizing the log likelihood of generating responses over the training set D:
L = \sum_{(X,Y) \in D} \log P(Y | X, s; \theta),   (5)
where \theta denotes all the model parameters. Note that since s is an explicit control variable in our model, we need the triples (X, Y, s) for training. However, s is not directly available in the raw conversation corpus, thus we acquire distant labels for s to learn our model. We introduce two ways of distant supervision on the specificity control variable s, namely Normalized Inverse Response Frequency (NIRF) and Normalized Inverse Word Frequency (NIWF).
3.2.1 Normalized Inverse Response Frequency Normalized Inverse Response Frequency (NIRF) is based on the assumption that a response is more general if it corresponds to more input utterances in the corpus. Therefore, we use the inverse frequency of a response in a conversation corpus to indicate its specificity level. Specifically, we first build the response collection R by extracting all the responses from D. For a response Y \in R, let f_Y denote its corpus frequency in R; we compute its Inverse Response Frequency (IRF) as:
IRF_Y = log(1 + |R|) / f_Y,   (6)
where |R| denotes the size of the response collection R.
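To make the decoder of Section 3.1.2 concrete, the following is a minimal NumPy sketch of the mixture in Eqs. (1)-(4). The toy dimensions, the parameter names, and the use of raw (unnormalized) scores that mirror the equations literally are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def specificity_mixture_scores(h_t, e_prev, s, params, sigma=1.0, beta=0.5, gamma=0.5):
    """Sketch of Eqs. (1)-(4): mix a semantic-based score with a
    Gaussian-kernel specificity-based score over the whole vocabulary."""
    W_h, W_e, b_M, U, W_U, b_U = (params[k] for k in
                                  ("W_h", "W_e", "b_M", "U", "W_U", "b_U"))
    # Eq. (2): semantic-based score for every word in the vocabulary.
    p_M = W_h @ h_t + W_e @ e_prev + b_M            # shape: [vocab_size]
    # Eq. (4): map each word's usage embedding to a scalar in [0, 1] via a sigmoid ...
    psi = 1.0 / (1.0 + np.exp(-(U @ W_U + b_U)))    # shape: [vocab_size]
    # ... and score it with a Gaussian kernel centred at the control variable s.
    p_S = np.exp(-(psi - s) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    # Eq. (1): mixture of the two scores.
    return beta * p_M + gamma * p_S

# Toy shapes: vocabulary of 5 words, hidden size 4, semantic dim 4, usage dim 3.
rng = np.random.default_rng(0)
params = {
    "W_h": rng.normal(size=(5, 4)), "W_e": rng.normal(size=(5, 4)), "b_M": np.zeros(5),
    "U": rng.normal(size=(5, 3)), "W_U": rng.normal(size=3), "b_U": 0.0,
}
scores = specificity_mixture_scores(rng.normal(size=4), rng.normal(size=4),
                                    s=0.8, params=params)
print(scores.shape)  # (5,)
```

In practice one would normalize the mixed scores over the vocabulary before sampling the next word; the sketch keeps the raw values to stay close to the equations as written.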
Next, we use the min-max normalization method (Jain et al., 2005) to obtain the NIRF value. Namely,
NIRF_Y = \frac{IRF_Y - \min_{Y' \in R}(IRF_{Y'})}{\max_{Y' \in R}(IRF_{Y'}) - \min_{Y' \in R}(IRF_{Y'})},   (7)
where \max_{Y' \in R}(IRF_{Y'}) and \min_{Y' \in R}(IRF_{Y'}) denote the maximum and minimum IRF values in R, respectively. The NIRF value is then used as the distant label of s in training. Note that by using normalized values, we aim to constrain the specificity control variable s to be within the pre-defined continuous value range [0,1].
3.2.2 Normalized Inverse Word Frequency Normalized Inverse Word Frequency (NIWF) is based on the assumption that the specificity level of a response depends on the collection of words it contains, and that a sentence is more specific if it contains more specific words. Hence, we can use the inverse corpus frequency of the words to indicate the specificity level of a response. Specifically, for a word y in the response Y, we first obtain its Inverse Word Frequency (IWF) by:
IWF_y = log(1 + |R|) / f_y,   (8)
where f_y denotes the number of responses in R containing the word y. Since a response usually contains a collection of words, there are multiple ways to define the response-level IWF value, e.g., the sum, average, minimum or maximum of the IWF values of all the words. In our work, we find that the best performance is achieved by using the maximum of the IWF values of all the words in Y to represent the response-level IWF:
IWF_Y = \max_{y \in Y}(IWF_y).   (9)
This is reasonable since a response is specific as long as it contains some specific words. We do not require all the words in a response to be specific, thus sum, average, and minimum would not be appropriate operators for computing the response-level IWF. Again, we use min-max normalization to obtain the NIWF value for the response Y.
3.3 Specificity Controlled Response Generation Given a new input utterance, we can employ the learned SC-Seq2Seq model to generate responses at different specificity levels by varying the control variable s. In this way, we can simulate human conversations in which one actively controls the response specificity depending on his/her own mind. When we apply our model to a chatbot, there are different ways to use the control variable in practice. If we want the agent to always generate informative responses, we can set s to 1 or some value close to 1. If we want the agent to be more dynamic, we can sample s within the range [0,1] to enrich the styles of the responses. We may further employ some reinforcement learning technique to learn to adjust the control variable depending on users' feedback. This would make the agent even more vivid, and we leave it as future work.
4 Experiment In this section, we conduct experiments to verify the effectiveness of our proposed model.
4.1 Dataset Description We conduct our experiments on the public Short Text Conversation (STC) dataset released in NTCIR-13 (http://ntcirstc.noahlab.com.hk/STC2/stc-cn.htm). STC maintains a large repository of post-comment pairs from Sina Weibo, which is one of the most popular Chinese social sites.
Table 1: Short Text Conversation (STC) data statistics: #w denotes the number of Chinese words.
Utterance-response pairs: 3,788,571
Utterance vocabulary #w: 120,930
Response vocabulary #w: 524,791
Utterance max #w: 38
Utterance avg #w: 13
Response max #w: 74
Response avg #w: 10
The STC dataset contains roughly 3.8 million post-comment pairs, which can be used to simulate the utterance-response pairs in conversation.
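As a concrete illustration of the NIWF distant label of Section 3.2.2 above, here is a small Python sketch; the toy corpus, tokenization and variable names are assumptions for illustration, while the actual labels are computed over the full training set.

```python
import math
from collections import Counter

def niwf_labels(responses):
    """Compute the NIWF distant label (Eqs. 8-9 plus min-max normalization)
    for every response in a corpus, where each response is a list of words."""
    num_responses = len(responses)
    # f_y: number of responses containing word y (document frequency over responses).
    doc_freq = Counter(word for resp in responses for word in set(resp))
    # Eq. (9): the response-level IWF is the maximum word-level IWF of Eq. (8).
    iwf = [max(math.log(1 + num_responses) / doc_freq[w] for w in resp)
           for resp in responses]
    # Min-max normalization maps the labels into the range [0, 1].
    lo, hi = min(iwf), max(iwf)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in iwf]

# Toy corpus of tokenized responses.
toy = [["i", "don't", "know"], ["must", "support", "cheer"],
       ["the", "samoyed", "is", "so", "lovely"]]
print(niwf_labels(toy))
```

The NIRF label is obtained analogously, applying Eq. (6) to whole-response frequencies instead of word frequencies before the same min-max normalization.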
We employ the Jieba Chinese word segmenter2 to tokenize the utterances and responses into sequences of Chinese words, and the detailed dataset statistics are shown in Table 1. We randomly selected two subsets as the development and test dataset, each containing 10k pairs. The left pairs are used for training. 4.2 Baselines Methods We compare our proposed SC-Seq2Seq model against several state-of-the-art baselines: (1) Seq2Seq-att: the standard Seq2Seq model with the attention mechanism (Bahdanau et al., 2015); (2) MMI-bidi: the Seq2Seq model using Maximum Mutual Information (MMI) as the objective function to reorder the generated responses (Li et al., 2016a); (3) MARM: the Seq2Seq model with a probabilistic framework to model the latent responding mechanisms (Zhou et al., 2017); (4) Seq2Seq+IDF: an extension of Seq2Seq-att by optimizing specificity under the reinforcement learning framework, where the reward is calculated as the sentence level IDF score of the generated response (Yao et al., 2016). We refer to our model trained using NIRF and NIWF as SCSeq2SeqNIRF and SC-Seq2SeqNIWF respectively. 4.3 Implementation Details As suggested in (Shang et al., 2015), we construct two separate vocabularies for utterances and responses by using 40,000 most frequent words on each side in the training data, covering 97.7% words in utterances and 96.1% words in responses respectively. All the remaining words are replaced by a special token <UNK> symbol. We implemented our model in Tensorflow3. We 2https://pypi.python.org/pypi/jieba 3https://www.tensorflow.org/ tuned the hyper-parameters via the development set. Specifically, we use one layer of bi-directional GRU for encoder and another uni-directional GRU for decoder, with the GRU hidden unit size set as 300 in both the encoder and decoder. The dimension of semantic word embeddings in both utterances and responses is 300, while the dimension of usage word embeddings in responses is 50. We apply the Adam algorithm (Kingma and Ba, 2015) for optimization, where the parameters of Adam are set as in (Kingma and Ba, 2015). The variance σ2 of the Gaussian Kernel layer is set as 1, and all other trainable parameters are randomly initialized by uniform distribution within [-0.08,0.08]. The mini-batch size for the update is set as 128. We clip the gradient when its norm exceeds 5. Our model is trained on a Tesla K80 GPU card, and we run the training for up to 12 epochs, which takes approximately five days. We select the model that achieves the lowest perplexity on the development dataset, and we report results on the test dataset. 4.4 Evaluation Methodologies For evaluation, we follow the existing work and employ both automatic and human evaluations: (1) distinct-1 & distinct-2 (Li et al., 2016a): we count numbers of distinct unigrams and bigrams in the generated responses, and divide the numbers by total number of generated unigrams and bigrams. Distinct metrics (both the numbers and the ratios) can be used to evaluate the specificity/diversity of the responses. (2) BLEU (Papineni et al., 2002): BLEU has been proved strongly correlated with human evaluations. BLEU-n measures the average n-gram precision on a set of reference sentences. (3) Average & Extrema (Serban et al., 2017): Average and Extrema projects the generated response and the ground truth response into two separate vectors by taking the mean over the word embeddings or taking the extremum of each dimension respectively, and then computes the cosine similarity between them. 
(4) Human evaluation: Three labelers with rich Weibo experience were recruited to conduct evaluation. Responses from different models are randomly mixed for labeling. Labelers refer to 300 random sampled test utterances and score the quality of the responses with the following criteria: 1) +2: the response is not only semantically relevant and grammatical, but also informat1114 Models distinct-1 distinct-2 BLEU-1 BLEU-2 Average Extrema SC-Seq2SeqNIRF s = 1 5258/0.064 16195/0.269 15.109 7.023 0.578 0.380 s = 0.8 5337/0.065 16105/0.271 15.112 7.003 0.578 0.381 s = 0.5 5318/0.065 16183/0.269 15.054 7.001 0.578 0.380 s = 0.2 5323/0.065 16087/0.270 15.168 7.032 0.580 0.380 s = 0 5397/0.066 16319/0.271 15.093 7.011 0.577 0.380 SC-Seq2SeqNIWF s = 1 11588/0.116 27144/0.347 12.392 5.869 0.554 0.353 s = 0.8 6006/0.051 17843/0.257 11.492 5.703 0.553 0.350 s = 0.5 2835/0.050 9537/0.235 16.122 7.674 0.609 0.399 s = 0.2 1534/0.048 5117/0.218 8.313 4.058 0.542 0.335 s = 0 1038/0.046 3154/0.211 4.417 3.283 0.549 0.334 Table 2: Model analysis of our SC-Seq2Seq under the automatic evaluation. Models distinct-1 distinct-2 BLEU-1 BLEU-2 Average Extrema Seq2Seq-att 5048/0.060 15976/0.168 15.062 6.964 0.575 0.376 MMI-bidi 5074/0.082 12162/0.287 15.772 7.215 0.586 0.381 MARM 2566/0.096 3294/0.312 7.321 3.774 0.512 0.336 Seq2Seq+IDF 4722/0.052 15384/0.229 14.423 6.743 0.572 0.369 SC-Seq2SeqNIWF,s=1 11588/0.116 27144/0.347 12.392 5.869 0.554 0.353 SC-Seq2SeqNIWF,s=0.5 2835/0.050 9537/0.235 16.122 7.674 0.609 0.399 Table 3: Comparisons between our SC-Seq2Seq and the baselines under the automatic evaluation. ive and interesting; 2) +1: the response is grammatical and can be used as a response to the utterance, but is too trivial (e.g., “I don’t know”); 3) +0: the response is semantically irrelevant or ungrammatical (e.g., grammatical errors or UNK). Agreements to measure inter-rater consistency among three labelers are calculated with the Fleiss’ kappa (Fleiss and Cohen, 1973). 4.5 Evaluation Results Model Analysis: We first analyze our models trained with different distant supervision information. For each model, given a test utterance, we vary the control variable s by setting it to five different values (i.e., 0, 0.2, 0.5, 0.8, 1) to check whether the learned model can actually achieve different specificity levels. As shown in Table 2, we find that: (1) The SC-Seq2Seq model trained with NIRF cannot work well. The test performances are almost the same with different s value. This is surprising since the NIRF definition seems to be directly corresponding to the specificity of a response. By conducting further analysis, we find that even though the conversation dataset is large, it is still limited and a general response could appear very few times in this corpus. In other words, the inverse frequency of a response is very weakly correlated with its response specificity. (2) The SC-Seq2Seq model trained with NIWF can achieve our purpose. By varying the control variable s from 0 to 1, the generated responses turn from general to specific as measured by the distinct metrics. The results indicate that the max inverse word frequency in a response is a good distant label for the response specificity. (3) When we compare the generated responses against ground truth data, we find the SC-Seq2SeqNIWF model with the control variable s set to 0.5 can achieve the best performances. 
The results indicate that there are diverse responses in real data in terms of specificity, and it is necessary to take a balanced setting if we want to fit the ground truth. Baseline Comparison: The performance comparisons between our model and the baselines are shown in Table 3. We have the following observations: (1) By using MMI as the objective, MMI-bidi can improve the specificity (in terms of distinct ratios) over the traditional Seq2Seq-att model. (2) MARM can achieve the best distinct ratios among the baseline methods, but the worst in terms of the distinct numbers. The results indicate that MARM tends to generate specific but very short responses. Meanwhile, its low BLEU scores also show that the responses generated by MARM deviate from the ground truth significantly. (3) By using the IDF information as the reward to train 1115 +2 +1 +0 kappa Seq2Seq-att 29.32% 25.27% 45.41% 0.448 MMI-bidi 30.40% 24.85% 44.75% 0.471 MARM 20.11% 27.96% 51.93% 0.404 Seq2Seq+IDF 28.81% 23.87% 47.33% 0.418 SC-Seq2SeqNIWF,s=1 42.47% 14.29% 43.24% 0.507 SC-Seq2SeqNIWF,s=0.5 20.62% 40.16% 39.22% 0.451 SC-Seq2SeqNIWF,s=0 14.34% 46.38% 39.28% 0.526 Table 4: Results on the human evaluation. the Seq2Seq model, the Seq2Seq+IDF does not show much advantages, but only achieves comparable results as MMI-bidi. (4) By setting the control variable s to 1, our SC-Seq2SeqNIWF model can achieve the best specificity performance as evaluated by the distinct metrics. By setting the control variable s to 0.5, our SC-Seq2SeqNIWF model can best fit the ground truth data as evaluated by the BLEU scores, Average and Extrema. All the improvements over the baseline models are statistically significant (p-value < 0.01). These results demonstrate the effectiveness as well as the flexibility of our controlled generation model. Table 4 shows the human evaluation results. We can observe that: (1) SC-Seq2SeqNIWF,s=1 generates the most informative responses and interesting (labeled as “+2”) and the least general responses than all the baseline models. Meanwhile, SC-Seq2SeqNIWF,s=0 generates the most general responses (labeled as “+1”); (2) MARM generates the most bad responses (labeled as “+0”), which indicates the drawbacks of the unknown latent responding mechanisms; (3) The kappa values of our models are all larger than 0.4, considered as “moderate agreement” regarding quality of responses. The largest kappa value is achieved by SC-Seq2SeqNIWF,s=0, which seems reasonable since it is easy to reach an agreement on general responses. Sign tests demonstrate the improvements of SC-Seq2SeqNIWF,s=1 to the baseline models are statistically significant (p-value < 0.01). All the human judgement results again demonstrate the effectiveness of our controlled generation mechanism. 4.6 Case Study To better understand how different models perform, we conduct some case studies. We randomly sample three utterances from the test dataset, and show the responses generated by different models. Utterance: 三室一厅的帐篷,你值得拥有!A tent with three rooms and one hall, you’re worth it! Seq2Seq-att: 这个不错,很好!It looks good, very good! MMI-bidi: 这个不错,我喜欢。It looks good, I like it. MARM: 不错。Good. Seq2Seq+IDF: 帐篷不错。The tent is good. s=1: 我想知道这帐篷是什么牌子的?I want to know what the brand of this tent is? s=0.8: 我想知道这是在哪买的?I want to know where to buy it? s=0.5: 我也想买一个,可惜买不起!I want one too, but it’s a pity that I can’t afford it. s=0.2: 不错不错!Good, good. s=0: 真的吗?Really? Utterance: 哎哟! 被发现了! 汤唯有必要那么美吗? Oh! Got caught! Why is Tang Wei so beautiful? 
Seq2Seq-att: 我超喜欢她!I really like her! MMI-bidi: 好美。我超喜欢她!So beautiful. I really like her! MARM: 我不喜欢汤唯。I don’t like Tang Wei. Seq2Seq+IDF: 我超喜欢她的,我喜欢。I really like her, I like. s=1: 我超喜欢她的眼妆,她的气质很美!I really like her eye make-up, and her temperament is so beautiful! s=0.8: 我超喜欢她的,她的眼睛很雪亮!I really like her, and her eyes are so bright! s=0.5: 我觉得她很美,很喜欢她!I think she is very beautiful, and I really like her! s=0.2: 我觉得她很美!I think she is very beautiful. s=0: 是的。Yes. Utterance: 将来也要养一只这么乖的的萨摩,让它陪着孩子一起长大。I will keep such a sweet samoyed, and let it accompany my children to grow up. Seq2Seq-att: 这不是萨摩吗?Is it a samoyed? MMI-bidi: 好可爱的萨摩。What a lovely samoyed. MARM: 好可爱啊。So lovely. Seq2Seq+IDF: 好可爱的狗狗。What a lovely dog. s=1: 萨摩耶好可爱,真想养一只! The samoyed is so lovely, and I really want to keep one! s=0.8: 萨摩好可爱,好想掐掐。The samoyed is so lovely, and I really want to pinch it. s=0.5: 好可爱的狗狗,好可爱的狗狗。What a lovely dog, what a lovely dog. s=0.2: 好可爱!好可爱!So lovely, so lovely! s=0: 好可爱!So lovely! Table 5: Examples of response generation from the STC test data. s = 1, 0.8, 0.5, 0.2, 0 are the outputs of our SC-Seq2SeqNIWF with different s values. As shown in Table 5, we can find that: (1) The responses generated by the four baselines are often quite general and short, which may quickly lead to an end of the conversation. (2) SC-Seq2SeqNIWF with large control variable values (i.e., s > 0.5) can generate very long and specific responses. In these responses, we can find many informative words. For example, in case 2 with s as 1 and 0.8, we can find words like “眼妆(eye make-up)”, “气 质(temperament)” and “雪亮(bright)” which are quite specific and strongly related to the conversation topic of “beauty”. (3) When we decrease the control variable value, the generated responses become more and more general and shorter from our SC-Seq2SeqNIWF model. 1116 爸爸(dad) 水果(fruits) 脂肪肝(fatty liver) 单反相机(DSLR) Usage Semantic Usage Semantic Usage Semantic Usage Semantic 更好(better) 妈妈(mother) 尝试(attempt) 蔬菜(vegetables) 坐久(outsit) 胖(fat) 亚洲杯(Asian Cup) 照相机(camera) 睡觉(sleep) 哥哥(brother) 诱惑(tempt) 牛奶(milk) 素食主义(vegetarian) 减肥(diet) 读取(read) 摄影(photography) 快乐(happy) 老公(husband) 表现(express) 西瓜(watermelon) 散步(walk) 高血压(hypertension) 半球(hemispherical) 镜头(shot) 无聊(boring) 爷爷(grandfather) 拥有(own) 米饭(rice) 因果关系(causality) 亚健康(sub-health) 防辐射(anti-radiation) 影楼(studio) 电影(movie) 姑娘(girl) 梦想(dream) 巧克力(chocolate) 哑铃(dumbbell) 呕吐(emesis) 无人机(UAV) 写真(image) Table 6: Target words and their top-5 similar words under usage and semantic representations respectively. fatty liver outsit fat fatty liver fat outsit (a) usage (b) semantic Figure 3: t-SNE embeddings of usage and semantic vectors. 4.7 Analysis on Usage Representations We also conduct some analysis to understand the usage representations of words introduced in our model. We randomly sample 500 words from our SC-Seq2SeqNIWF and apply t-SNE (Maaten and Hinton, 2008) to visualize both usage and semantic embeddings. As shown in Figure 3, we can see that the two distributions are significantly different. In the usage space, words like “脂 肪肝(fatty liver)” and “久坐(outsit)” lie closely which are both specific words, and both are far from the general words like “胖(fat)”. On the contrary, in the semantic space, “脂肪肝(fatty liver)” is close to “胖(fat)” since they are semantically related, and both are far from the word “久坐(outsit)”. Furthermore, given some sampled target words, we also show the top-5 similar words based on cosine similarity under both representations in Table 6. 
Again, we can see that the nearest neighbors of a same word are quite different under two representations. Neighbors based on semantic representations are semantically related, while neighbors based on usage representations are not so related but with similar specificity levels. 5 Conclusion We propose a novel controlled response generation mechanism to handle different utteranceresponse relationships in terms of specificity. We introduce an explicit specificity control variable into the Seq2Seq model, which interacts with the usage representation of words to generate responses at different specificity levels. Empirical results showed that our model can generate either general or specific responses, and significantly outperform state-of-the-art generation methods. 6 Acknowledgments This work was funded by the 973 Program of China under Grant No. 2014CB340401, the National Natural Science Foundation of China (NSFC) under Grants No. 61425016, 61472401, 61722211, and 20180290, the Youth Innovation Promotion Association CAS under Grants No. 20144310, and 2016102, and the National Key R&D Program of China under Grants No. 2016QY02D0405. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of the conference on empirical methods in natural language processing. Joseph L Fleiss and Jacob Cohen. 1973. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and psychological measurement, 33(3):613– 619. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th annual meeting of the Association for Computational Linguistics. Association for Computational Linguistics. Matthew Henderson, Blaise Thomson, and Jason D Williams. 2014. The second dialog state tracking 1117 challenge. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 263–272. Anil Jain, Karthik Nandakumar, and Arun Ross. 2005. Score normalization in multimodal biometric systems. Pattern recognition, 38(12):2270–2285. Michael Kearns. 2000. Cobot in lambdamoo: A social statistics agent. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In NAACL. Jiwei Li, Michel Galley, Chris Brockett, Georgios P Spithourakis, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural conversation model. In Proceedings of the 54th annual meeting of the Association for Computational Linguistics. Jiwei Li, Will Monroe, and Dan Jurafsky. 2017. Data distillation for controlling specificity in dialogue generation. arXiv preprint arXiv:1702.06703. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of Machine Learning Research, 9(Nov):2579–2605. Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. 2016. Sequence to backward and forward sequences: A content-introducing approach to generative short-text conversation. In COLING. 
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Diana Perez-Marin. 2011. Conversational Agents and Natural Language Interaction: Techniques and Effective Practices: Techniques and Effective Practices. IGI Global. Alan Ritter, Colin Cherry, and William B Dolan. 2011. Data-driven response generation in social media. In Proceedings of the conference on empirical methods in natural language processing, pages 583–593. Association for Computational Linguistics. Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In AAAI, pages 3776–3784. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In AAAI. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of the 53th annual meeting of the Association for Computational Linguistics. Association for Computational Linguistics. Xiaoyu Shen, Hui Su, Yanran Li, Wenjie Li, Shuzi Niu, Yang Zhao, Akiko Aizawa, and Guoping Long. 2017. A conditional variational framework for dialog generation. In Proceedings of the 55th annual meeting of the Association for Computational Linguistics. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. A neural network approach to context-sensitive generation of conversational responses. In NAACL-HLT, pages 196–205. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In NIPS, pages 3104–3112. Marilyn A Walker, Rebecca Passonneau, and Julie E Boland. 2001. Quantitative and qualitative evaluation of darpa communicator spoken dialogue systems. In Proceedings of the 39th Annual Meeting on Association for Computational Linguistics, pages 515–522. Association for Computational Linguistics. Hao Wang, Zhengdong Lu, Hang Li, and Enhong Chen. 2013. A dataset for research on short-text conversations. In Proceedings of the conference on empirical methods in natural language processing. Jason Williams, Antoine Raux, Deepak Ramachandran, and Alan Black. 2013. The dialog state tracking challenge. In Proceedings of the SIGDIAL 2013 Conference, pages 404–413. Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In AAAI, pages 3351– 3357. Rui Yan, Yiping Song, and Hua Wu. 2016. Learning to respond with deep neural networks for retrievalbased human-computer conversation system. In Proceedings of the 39st annual international ACM SIGIR conference on Research and development in information retrieval, pages 55–64. ACM. Kaisheng Yao, Baolin Peng, Geoffrey Zweig, and Kam-Fai Wong. 2016. An attentional neural conversation model with improved specificity. arXiv preprint arXiv:1606.01292. Ganbin Zhou, Ping Luo, Rongyu Cao, Fen Lin, Bo Chen, and Qing He. 2017. Mechanism-aware neural machine for dialogue response generation. In AAAI, pages 3400–3407.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1118–1127 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1118 Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network Xiangyang Zhou∗, Lu Li∗, Daxiang Dong, Yi Liu, Ying Chen, Wayne Xin Zhao†, Dianhai Yu and Hua Wu Baidu Inc., Beijing, China nzhouxiangyang, lilu12, dongdaxiang, liuyi05, chenying04, v zhaoxin, yudianhai, wu hua o @baidu.com Abstract Human generates responses relying on semantic and functional dependencies, including coreference relation, among dialogue elements and their context. In this paper, we investigate matching a response with its multi-turn context using dependency information based entirely on attention. Our solution is inspired by the recently proposed Transformer in machine translation (Vaswani et al., 2017) and we extend the attention mechanism in two ways. First, we construct representations of text segments at different granularities solely with stacked self-attention. Second, we try to extract the truly matched segment pairs with attention across the context and response. We jointly introduce those two kinds of attention in one uniform neural network. Experiments on two large-scale multi-turn response selection tasks show that our proposed model significantly outperforms the state-of-the-art models. 1 Introduction Building a chatbot that can naturally and consistently converse with human-beings on opendomain topics draws increasing research interests in past years. One important task in chatbots is response selection, which aims to select the bestmatched response from a set of candidates given the context of a conversation. Besides playing a critical role in retrieval-based chatbots (Ji et al., 2014), response selection models have been used in automatic evaluation of dialogue generation ∗Equally contributed. † Work done as a visiting scholar at Baidu. Wayne Xin Zhao is an associate professor of Renmin University of China and can be reached at batmanfl[email protected]. (Lowe et al., 2017) and the discriminator of GANbased (Generative Adversarial Networks) neural dialogue generation (Li et al., 2017). Conversation Context Speaker A: Hi I am looking to see what packages are installed on my system, I don’t see a path, is the list being held somewhere else? Speaker B: Try dpkg - get-selections Speaker A: What is that like? A database for packages instead of a flat file structure? Speaker B: dpkg is the debian package manager - get-selections simply shows you what packages are handed by it Response of Speaker A: No clue what do you need it for, its just reassurance as I don’t know the debian package manager Figure 1: Example of human conversation on Ubuntu system troubleshooting. Speaker A is seeking for a solution of package management in his/her system and speaker B recommend using, the debian package manager, dpkg. But speaker A does not know dpkg, and asks a backchannel-question (Stolcke et al., 2000), i.e., “no clue what do you need it for?”, aiming to double-check if dpkg could solve his/her problem. Text segments with underlines in the same color across context and response can be seen as matched pairs. Early studies on response selection only use the last utterance in context for matching a reply, which is referred to as single-turn response selection (Wang et al., 2013). 
Recent works show that the consideration of a multi-turn context can facilitate selecting the next utterance (Zhou et al., 2016; Wu et al., 2017). The reason why richer contextual information works is that human generated responses are heavily dependent on the previous dialogue segments at different granularities (words, phrases, sentences, etc), both semantically and functionally, over multiple turns rather than one turn (Lee et al., 2006; Traum and Heeman, 1996). Figure 1 illustrates semantic connectivities between segment pairs across context and response. As demonstrated, generally there are two kinds of matched segment pairs at different granularities across context and response: (1) surface text relevance, for example the lexical overlap of words “packages”-“package” and phrases “debian package manager”-“debian pack1119 age manager”. (2) latent dependencies upon which segments are semantically/functionally related to each other. Such as the word “it” in the response, which refers to “dpkg” in the context, as well as the phrase “its just reassurance” in the response, which latently points to “what packages are installed on my system”, the question that speaker A wants to double-check. Previous studies show that capturing those matched segment pairs at different granularities across context and response is the key to multiturn response selection (Wu et al., 2017). However, existing models only consider the textual relevance, which suffers from matching response that latently depends on previous turns. Moreover, Recurrent Neural Networks (RNN) are conveniently used for encoding texts, which is too costly to use for capturing multi-grained semantic representations (Lowe et al., 2015; Zhou et al., 2016; Wu et al., 2017). As an alternative, we propose to match a response with multi-turn context using dependency information based entirely on attention mechanism. Our solution is inspired by the recently proposed Transformer in machine translation (Vaswani et al., 2017), which addresses the issue of sequence-to-sequence generation only using attention, and we extend the key attention mechanism of Transformer in two ways: self-attention By making a sentence attend to itself, we can capture its intra word-level dependencies. Phrases, such as “debian package manager”, can be modeled with wordlevel self-attention over word-embeddings, and sentence-level representations can be constructed in a similar way with phraselevel self-attention. By hierarchically stacking self-attention from word embeddings, we can gradually construct semantic representations at different granularities. cross-attention By making context and response attend to each other, we can generally capture dependencies between those latently matched segment pairs, which is able to provide complementary information to textual relevance for matching response with multi-turn context. We jointly introduce self-attention and crossattention in one uniform neural matching network, namely the Deep Attention Matching Network (DAM), for multi-turn response selection. In practice, DAM takes each single word of an utterance in context or response as the centric-meaning of an abstractive semantic segment, and hierarchically enriches its representation with stacked self-attention, gradually producing more and more sophisticated segment representations surrounding the centric-word. Each utterance in context and response are matched based on segment pairs at different granularities, considering both textual relevance and dependency information. 
In this way, DAM generally captures matching information between the context and the response from word-level to sentence-level, important matching features are then distilled with convolution & maxpooling operations, and finally fused into one single matching score via a single-layer perceptron. We test DAM on two large-scale public multiturn response selection datasets, the Ubuntu Corpus v1 and Douban Conversation Corpus. Experimental results show that our model significantly outperforms the state-of-the-art models, and the improvement to the best baseline model on R10@1 is over 4%. What is more, DAM is expected to be convenient to deploy in practice because most attention computation can be fully parallelized (Vaswani et al., 2017). Our contributions are two-folds: (1) we propose a new matching model for multi-turn response selection with selfattention and cross-attention. (2) empirical results show that our proposed model significantly outperforms the state-of-the-art baselines on public datasets, demonstrating the effectiveness of selfattention and cross-attention. 2 Related Work 2.1 Conversational System To build an automatic conversational agent is a long cherished goal in Artificial Intelligence (AI) (Turing, 1950). Previous researches include taskoriented dialogue system, which focuses on completing tasks in vertical domain, and chatbots, which aims to consistently and naturally converse with human-beings on open-domain topics. Most modern chatbots are data-driven, either in a fashion of information-retrieval (Ji et al., 2014; Banchs and Li, 2012; Nio et al., 2014; Ameixa et al., 2014) or sequence-generation (Ritter et al., 2011). The retrieval-based systems enjoy the advantage of informative and fluent responses because it searches a large dialogue repository and selects 1120 Input Representation Matching Aggregation Word Embedding Representation Module ! Word-word Matching with Cross-Attention "# "$ "% Matching Score g(c,r) 3D Matching Image Q Ui R Multi-grained Representations Mui,r self Mui,r cross Figure 2: Overview of Deep Attention Matching Network. candidate that best matches the current context. The generation-based models, on the other hand, learn patterns of responding from dialogues and can directly generalize new responses. 2.2 Response Selection Researches on response selection can be generally categorized into single-turn and multi-turn. Most early studies are single-turn that only consider the last utterance for matching response (Wang et al., 2013, 2015). Recent works extend it to multiturn conversation scenario, Lowe et al.,(2015) and Zhou et al.,(2016) use RNN to read context and response, use the last hidden states to represent context and response as two semantic vectors, and measure their relevance. Instead of only considering the last states of RNN, Wu et al.,(2017) take hidden state at each time step as a text segment representation, and measure the distance between context and response via segment-segment matching matrixes. Nevertheless, matching with dependency information is generally ignored in previous works. 2.3 Attention Attention has been proven to be very effective in Natural Language Processing (NLP) (Bahdanau et al., 2015; Yin et al., 2016; Lin et al., 2017) and other research areas (Xu et al., 2015). Recently, Vaswani et al.,(2017) propose a novel sequenceto-sequence generation network, the Transformer, which is entirely based on attention. 
Not only Transformer can achieve better translation results than convenient RNN-based models, but also it is very fast in training/predicting as the computation of attention can be fully parallelized. Previous works on attention mechanism show the superior ability of attention to capture semantic dependencies, which inspires us to improve multi-turn response selection with attention mechanism. 3 Deep Attention Matching Network 3.1 Problem Formalization Given a dialogue data set D = {(c, r, y)Z}N Z=1, where c = {u0, ..., un−1} represents a conversation context with {ui}n−1 i=0 as utterances and r as a response candidate. y ∈{0, 1} is a binary label, indicating whether r is a proper response for c. Our goal is to learn a matching model g(c, r) with D, which can measure the relevance between any context c and candidate response r. 3.2 Model Overview Figure 2 gives an overview of DAM, which generally follows the representation-matchingaggregation framework to match response with multi-turn context. For each utterance ui = [wui,k] nui−1 k=0 in a context and its response candidate r = [wr,t]nr−1 t=0 , where nui and nr stand for the numbers of words, DAM first looks up a shared word embedding table and represents ui and r as sequences of word embeddings, namely U0 i = 1121 [e0 ui,0, ..., e0 ui,nui−1] and R0 = [e0 r,0, ..., e0 r,nr−1] respectively, where e ∈Rd denotes a d-dimension word embedding. A representation module then starts to construct semantic representations at different granularities for ui and r. Practically, L identical layers of self-attention are hierarchically stacked, each lth self-attention layer takes the output of the l −1th layer as its input, and composites the input semantic vectors into more sophisticated representations based on self-attention. In this way, multigrained representations of ui and r are gradually constructed, denoted as [Ul i]L l=0 and [Rl]L l=0 respectively. Given [U0 i , ..., UL i ] and [R0, ..., RL], utterance ui and response r are then matched with each other in a manner of segment-segment similarity matrix. Practically, for each granularity l ∈ [0...L], two kinds of matching matrixes are constructed, i.e., the self-attention-match Mui,r,l self and cross-attention-match Mui,r,l cross, measuring the relevance between utterance and response with textual information and dependency information respectively. Those matching scores are finally merged into a 3D matching image Q1. Each dimension of Q represents each utterance in context, each word in utterance and each word in response respectively. Important matching information between segment pairs across multi-turn context and candidate response is then extracted via convolution with max-pooling operations, and further fused into one matching score via a single-layer perceptron, representing the matching degree between the response candidate and the whole context. Specifically, we use a shared component, the Attentive Module, to implement both selfattention in representation and cross-attention in matching. We will discuss in detail the implementation of Attentive Module and how we used it to implement both self-attention and cross-attention in following sections. 3.3 Attentive Module Figure 3 shows the structure of Attentive Module, which is similar to that used in Transformer (Vaswani et al., 2017). 
Attentive Module has three input sentences: the query sentence, the key sentence and the value sentence, namely Q = [ei]nQ−1 i=0 , K = [ei]nK−1 i=0 , V = [ei]nV−1 i=0 respec1We refer to it as Q because it is like a cube. query Attention Weighted Sum key value Sum & Norm Feed-Forward Sum & Norm Figure 3: Attentive Module. tively, where nQ, nK and nV denote the number of words in each sentence and ei stands for a ddimension embedding, nK is equal to nV. The Attentive Module first takes each word in the query sentence to attend to words in the key sentence via Scaled Dot-Product Attention (Vaswani et al., 2017), then applies those attention results upon the value sentence, which is defined as: Att(Q, K) =  softmax(Q[i] · KT √ d ) nQ−1 i=0 (1) Vatt = Att(Q, K) · V ∈RnQ×d (2) where Q[i] is the ith embedding in the query sentence Q. Each row of Vatt, denoted as Vatt[i], stores the fused semantic information of words in the value sentence that possibly have dependencies to the ith word in query sentence. For each i, Vatt[i] and Q[i] are then added up together, compositing them into a new representation that contains their joint meanings. A layer normalization operation (Ba et al., 2016) is then applied, which prevents vanishing or exploding of gradients. A feed-forward network FFN with RELU (LeCun et al., 2015) activation is then applied upon the normalization result, in order to further process the fused embeddings, defined as: FFN(x) = max(0, xW1 + b1)W2 + b2 (3) where, x is a 2D-tensor in the same shape of query sentence Q and W1, b1, W2, b2 are learnt parameters. This kind of activation is empirically useful in other works, and we also adapt it in our model. The result FFN(x) is a 2D-tensor that has the same shape as x, FFN(x) is then residually added (He et al., 2016) to x, and the fusion result is then normalized as the final outputs. We refer to the whole Attentive Module as: AttentiveModule(Q, K, V) (4) 1122 As described, Attentive Module can capture dependencies across query sentence and key sentence, and further use the dependency information to composite elements in the query sentence and the value sentence into compositional representations. We exploit this property of the Attentive Module to construct multi-grained semantic representations as well as match with dependency information. 3.4 Representation Given U0 i or R0, the word-level embedding representations for utterance ui or response r, DAM takes U0 i ro R0 as inputs and hierarchically stacks the Attentive Module to construct multi-grained representations of ui and r, which is formulated as: Ul+1 i = AttentiveModule(Ul i, Ul i, Ul i) (5) Rl+1 = AttentiveModule(Rl, Rl, Rl) (6) where l ranges from 0 to L −1, denoting the different levels of granularity. By this means, words in each utterance or response repeatedly function together to composite more and more holistic representations, we refer to those multi-grained representations as [U0 i , ..., UL i ] and [R0, ..., RL] hereafter. 3.5 Utterance-Response Matching Given [Ul i]L l=0 and [Rl]L l=0, two kinds of segmentsegment matching matrixes are constructed at each level of granularity l, i.e., the self-attention-match Mui,r,l self and cross-attention-match Mui,r,l cross. Mui,r,l self is defined as: Mui,r,l self = {Ul i[k] T · Rl[t]}nui×nr (7) in which, each element in the matrix is the dotproduct of Ul i[k] and Rl[t], the kth embedding in Ul i and the tth embedding in Rl, reflecting the textual relevance between the kth segment in ui and tth segment in r at the lth granularity. 
The cross-attention-match matrix is based on cross-attention, which is defined as:
\tilde{U}^l_i = AttentiveModule(U^l_i, R^l, R^l),   (8)
\tilde{R}^l = AttentiveModule(R^l, U^l_i, U^l_i),   (9)
M^{u_i,r,l}_{cross} = \{ \tilde{U}^l_i[k]^T \cdot \tilde{R}^l[t] \}_{n_{u_i} \times n_r},   (10)
where we use the Attentive Module to make U^l_i and R^l attend to each other, constructing two new representations for both of them, written as \tilde{U}^l_i and \tilde{R}^l respectively. Both \tilde{U}^l_i and \tilde{R}^l implicitly capture semantic structures that cross the utterance and the response. In this way, those inter-dependent segment pairs are close to each other in representation, and the dot-products between those latently inter-dependent pairs get increased, providing dependency-aware matching information.
3.6 Aggregation DAM finally aggregates all the segmental matching degrees across each utterance and the response into a 3D matching image Q, which is defined as:
Q = \{ Q_{i,k,t} \}_{n \times n_{u_i} \times n_r},   (11)
where each pixel Q_{i,k,t} is formulated as:
Q_{i,k,t} = [ M^{u_i,r,l}_{self}[k, t] ]_{l=0}^{L} \oplus [ M^{u_i,r,l}_{cross}[k, t] ]_{l=0}^{L},   (12)
where \oplus is the concatenation operation, and each pixel has 2(L + 1) channels, storing the matching degrees between one certain segment pair at different levels of granularity. DAM then leverages two-layered 3D convolution with max-pooling operations to distill important matching features from the whole image. The operation of 3D convolution with max-pooling is the extension of typical 2D convolution, whose filters and strides are 3D cubes (https://www.tensorflow.org/api_docs/python/tf/nn/conv3d). We finally compute the matching score g(c, r) based on the extracted matching features f_match(c, r) via a single-layer perceptron, which is formulated as:
g(c, r) = \sigma(W_3 f_match(c, r) + b_3),   (13)
where W_3 and b_3 are learnt parameters, and \sigma is the sigmoid function that gives the probability that r is a proper candidate for c. The loss function of DAM is the negative log likelihood, defined as:
p(y|c, r) = g(c, r) \cdot y + (1 - g(c, r)) \cdot (1 - y),   (14)
L(\cdot) = - \sum_{(c,r,y) \in D} \log(p(y|c, r)).   (15)
4 Experiment
Table 1: Experimental results of DAM and other comparison approaches on Ubuntu Corpus V1 and Douban Conversation Corpus. The first four columns are on the Ubuntu Corpus (R2@1, R10@1, R10@2, R10@5); the last six are on the Douban Conversation Corpus (MAP, MRR, P@1, R10@1, R10@2, R10@5).
DualEncoderlstm 0.901 0.638 0.784 0.949 | 0.485 0.527 0.320 0.187 0.343 0.720
DualEncoderbilstm 0.895 0.630 0.780 0.944 | 0.479 0.514 0.313 0.184 0.330 0.716
MV-LSTM 0.906 0.653 0.804 0.946 | 0.498 0.538 0.348 0.202 0.351 0.710
Match-LSTM 0.904 0.653 0.799 0.944 | 0.500 0.537 0.345 0.202 0.348 0.720
Multiview 0.908 0.662 0.801 0.951 | 0.505 0.543 0.342 0.202 0.350 0.729
DL2R 0.899 0.626 0.783 0.944 | 0.488 0.527 0.330 0.193 0.342 0.705
SMNdynamic 0.926 0.726 0.847 0.961 | 0.529 0.569 0.397 0.233 0.396 0.724
DAM 0.938 0.767 0.874 0.969 | 0.550 0.601 0.427 0.254 0.410 0.757
DAMfirst 0.927 0.736 0.854 0.962 | 0.528 0.579 0.400 0.229 0.396 0.741
DAMlast 0.932 0.752 0.861 0.965 | 0.539 0.583 0.408 0.242 0.407 0.748
DAMself 0.931 0.741 0.859 0.964 | 0.527 0.574 0.382 0.221 0.403 0.750
DAMcross 0.932 0.749 0.863 0.966 | 0.535 0.585 0.400 0.234 0.411 0.733
4.1 Dataset We test DAM on two public multi-turn response selection datasets, the Ubuntu Corpus V1 (Lowe et al., 2015) and the Douban Conversation Corpus (Wu et al., 2017). The former contains multi-turn dialogues about Ubuntu system troubleshooting in English, and the latter is crawled from a Chinese social networking site on open-domain topics.
The Ubuntu training set contains 0.5 million multi-turn contexts, and each context has one positive response generated by a human and one negative response which is randomly sampled. Both the validation and testing sets of the Ubuntu Corpus have 50k contexts, where each context is provided with one positive response and nine negative replies. The Douban corpus is constructed in a similar way to the Ubuntu Corpus, except that its validation set contains 50k instances with a 1:1 positive-negative ratio and its testing set consists of 10k instances, where each context has 10 candidate responses, collected via a tiny inverted-index system (Lucene, https://lucene.apache.org/), and labels are manually annotated.
4.2 Evaluation Metric We use the same evaluation metrics as in previous works (Wu et al., 2017). Each comparison model is asked to select the k best-matched responses from n available candidates for the given conversation context c, and we calculate the recall of the true positive replies among the k selected ones as the main evaluation metric, denoted as
R_n@k = \frac{\sum_{i=1}^{k} y_i}{\sum_{i=1}^{n} y_i},
where y_i is the binary label for each candidate. In addition to Rn@k, we use MAP (Mean Average Precision) (Baeza-Yates et al., 1999), MRR (Mean Reciprocal Rank) (Voorhees et al., 1999), and Precision-at-one P@1, especially for the Douban corpus, following the setting of previous works (Wu et al., 2017).
4.3 Comparison Methods RNN-based models: Previous best performing models are based on RNNs; we choose representative models as baselines, including SMNdynamic (Wu et al., 2017), Multiview (Zhou et al., 2016), DualEncoderlstm and DualEncoderbilstm (Lowe et al., 2015), DL2R (Yan et al., 2016), Match-LSTM (Wang and Jiang, 2017) and MV-LSTM (Pang et al., 2016), where SMNdynamic achieves the best scores among all the other published works, and we take it as our state-of-the-art baseline. Ablation: To verify the effects of multi-grained representation, we set up two comparison models, i.e., DAMfirst and DAMlast, which dispense with the multi-grained representations in DAM and use the representation results from the 0th layer and the Lth layer of self-attention instead. Moreover, we set up DAMself and DAMcross, which only use self-attention-match or cross-attention-match respectively, in order to examine the effectiveness of both self-attention-match and cross-attention-match.
4.4 Model Training We copy the reported evaluation results of all baselines for comparison. DAM is implemented in TensorFlow (https://www.tensorflow.org; our code and data will be available at https://github.com/baidu/Dialogue/DAM), and the vocabularies and word embedding sizes for the Ubuntu corpus and the Douban corpus are all set the same as in SMN (Wu et al., 2017). We consider at most 9 turns and 50 words for each utterance (response) in our experiments; word embeddings are pre-trained on the training sets via word2vec (Mikolov et al., 2013), similar to previous works. We use zero-padding to handle the variable-sized input, and the parameters in FFN are set to 200, the same as the word-embedding size. We test stacking 1-7 self-attention layers, and report our results with 5 stacks of self-attention because it gains the best scores on the validation set. The 1st convolution layer has 32 [3,3,3] filters with [1,1,1] stride, and its max-pooling size is [3,3,3] with [3,3,3] stride. The 2nd convolution layer has 16 [3,3,3] filters with [1,1,1] stride, and its max-pooling size is also [3,3,3] with [3,3,3] stride.
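As a rough illustration of the two-layered 3D convolution and max-pooling configuration just described, the following Keras-style snippet mirrors the stated filter and stride sizes. The input shape (9 turns, 50 x 50 word pairs, 2(L+1) = 12 channels for L = 5), the "same" padding, and the final sigmoid layer of Eq. (13) are assumptions for the sketch, not the released code.

```python
import tensorflow as tf

# 3D matching image Q: 9 utterances x 50 x 50 word pairs, 2*(L+1) = 12 channels for L = 5.
aggregation = tf.keras.Sequential([
    # 1st convolution layer: 32 filters of size [3,3,3] with stride [1,1,1].
    tf.keras.layers.Conv3D(32, kernel_size=(3, 3, 3), strides=(1, 1, 1),
                           padding="same", activation="relu",
                           input_shape=(9, 50, 50, 12)),
    tf.keras.layers.MaxPooling3D(pool_size=(3, 3, 3), strides=(3, 3, 3)),
    # 2nd convolution layer: 16 filters of size [3,3,3] with stride [1,1,1].
    tf.keras.layers.Conv3D(16, kernel_size=(3, 3, 3), strides=(1, 1, 1),
                           padding="same", activation="relu"),
    tf.keras.layers.MaxPooling3D(pool_size=(3, 3, 3), strides=(3, 3, 3)),
    tf.keras.layers.Flatten(),
    # Single-layer perceptron over the distilled matching features, as in Eq. (13).
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
aggregation.summary()
```

The flattened output of the second pooling layer plays the role of f_match(c, r), and the sigmoid unit produces the matching score g(c, r).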
We tune DAM and the other ablation models with the Adam optimizer (Le et al., 2011) to minimize the loss function defined in Eq. 15. The learning rate is initialized to 1e-3 and gradually decreased during training, and the batch size is 256. We use the validation sets to select the best models and report their performance on the test sets.

4.5 Experiment Result

Table 1 shows the evaluation results of DAM as well as all comparison models. As demonstrated, DAM significantly outperforms the other competitors on both the Ubuntu Corpus and the Douban Conversation Corpus, including SMN_dynamic, the state-of-the-art baseline, demonstrating the superior power of the attention mechanism in matching responses with multi-turn contexts. Moreover, the performance of both DAM_first and DAM_self drops considerably compared with DAM, which shows the effectiveness of self-attention and cross-attention. Both DAM_first and DAM_last underperform DAM, which demonstrates the benefit of using multi-grained representations. The absence of self-attention-match also brings down precision, as shown by DAM_cross, exhibiting the necessity of jointly considering textual relevance and dependency information in response selection.

One notable point is that, while DAM_first achieves performance close to SMN_dynamic, it is about 2.3 times faster than SMN_dynamic in our implementation because it is computationally very simple. We believe that DAM_first is more suitable for scenarios with limited computation time or memory that still require high precision, such as industrial applications or serving as a component in other neural networks like GANs.

5 Analysis

We use the Ubuntu Corpus to analyze how self-attention and cross-attention work in DAM, through both quantitative analysis and visualization.

5.1 Quantitative Analysis

We first study how DAM performs on contexts with different numbers of utterances. The left part of Figure 4 shows how R10@1 on the Ubuntu Corpus changes across contexts with different numbers of utterances. As demonstrated, while being good at matching responses with long contexts of more than 4 utterances, DAM still deals stably with short contexts of only 2 turns.

Figure 4: DAM's performance on the Ubuntu Corpus across different contexts. The left part shows performance for different numbers of utterances per context. The right part shows performance for different average utterance lengths as well as different self-attention stack depths.

Moreover, the right part of Figure 4 compares performance across contexts with different average utterance lengths and different self-attention stack depths. As demonstrated, stacking self-attention consistently improves matching performance for contexts with different average utterance lengths, implying a stability advantage of using multi-grained semantic representations. Matching performance on short utterances, those with fewer than 10 words, is clearly lower than on longer ones. This is because the shorter an utterance is, the less information it contains and the more difficult it is to select the next utterance; stacking self-attention still helps in this case. For long utterances containing more than 30 words, however, stacking self-attention significantly improves matching performance, which means that the more information an utterance contains, the more stacked self-attention it needs to capture its internal semantic structures.
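Since the analysis above varies the number of stacked self-attention layers, the following sketch shows one such layer as we understand the AttentiveModule from the description available here (Transformer-style scaled dot-product attention with a feed-forward network of size 200). This is a hypothetical reading, not the authors' code; residual connections and layer normalization are omitted, and all weights are toy values.

import numpy as np

rng = np.random.default_rng(0)
d = 200  # hidden size, matching the FFN size reported in Section 4.4

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_module(query, key, value, W1, b1, W2, b2):
    # Scaled dot-product attention followed by a position-wise feed-forward net.
    scores = query @ key.T / np.sqrt(d)            # (n_query, n_key)
    attended = softmax(scores) @ value             # (n_query, d)
    return np.maximum(attended @ W1 + b1, 0.0) @ W2 + b2

# Toy self-attention of a 10-word utterance over itself; repeating the call
# with fresh parameters corresponds to the stacked self-attention analyzed above.
U = rng.standard_normal((10, d))
params = [rng.standard_normal(s) * 0.01 for s in [(d, d), (d,), (d, d), (d,)]]
U_next = attentive_module(U, U, U, *params)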
[Figure 5 contains heatmap panels for the example in Figure 1: self-attention-match in stacks 0, 2, and 4; cross-attention-match in stack 4; the self-attention of the response and of turn 0 in stack 3; and the attention of the response over turn 0 and of turn 0 over the response in stack 4.]

Figure 5: Visualization of self-attention-match and cross-attention-match, as well as the distribution of self-attention and cross-attention, in matching the response with the first utterance in Figure 1. Each colored grid represents the matching degree or attention score between two words. The deeper the color, the more important the grid.

5.2 Visualization

We study the case in Figure 1 to analyze in detail how self-attention and cross-attention work. In practice, we apply a softmax operation over self-attention-match and cross-attention-match to examine how the dominating matching pairs vary as self-attention is stacked or cross-attention is applied.
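The softmax mentioned above is only used to normalize the matching matrices for display. A minimal sketch, assuming the softmax is taken row-wise (the axis is not specified in the text):

import numpy as np

def to_heatmap(M):
    # Row-wise softmax over a matching matrix: each utterance word's row is
    # turned into a distribution over response words, so dominating pairs
    # stand out in the heatmap.
    e = np.exp(M - M.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

heat = to_heatmap(np.random.default_rng(0).standard_normal((7, 9)))  # toy matrix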
Due to space limitations, Figure 5 gives the visualization results for the 0th, 2nd, and 4th self-attention-match matrices, the 4th cross-attention-match matrix, and the distribution of self-attention and cross-attention in the 4th layer, in matching the response with the first utterance (turn 0). As demonstrated, important matching pairs in self-attention-match in stack 0 are nouns and verbs, such as "package" and "packages", which are similar in topic. However, matching scores between preposition or pronoun pairs, such as "do" and "what", become more important in self-attention-match in stack 4. The visualization of self-attention shows why matching between prepositions or pronouns matters: self-attention generally captures the semantic structure of "no clue what do you need package manager" for "do" in the response and "what packages are installed" for "what" in the utterance, making the segments surrounding "do" and "what" close to each other in representation space and thus increasing their dot-products. As also shown in Figure 5, self-attention-match and cross-attention-match capture complementary information in matching the utterance with the response. Words like "reassurance" and "its" in the response get significantly larger matching scores in cross-attention-match than in self-attention-match. According to the visualization of cross-attention, "reassurance" generally depends on "system", "don't", and "held" in the utterance, which makes it close to words like "list", "installed", and "held" of the utterance. Scores of cross-attention-match tend to concentrate on a few segments, which probably means that those segments in the response capture structural-semantic information across the utterance and the response, amplifying their matching scores relative to the others.

5.3 Error Analysis

To understand the limitations of DAM and where future improvements might lie, we analyze 100 strong bad cases from the test set that fail in R10@5. We find two major kinds of bad cases: (1) fuzzy-candidate, where response candidates are basically proper for the conversation context except for a few improper details; and (2) logical-error, where response candidates are wrong due to logical mismatch. For example, given a conversation context A: "I just want to stay at home tomorrow.", B: "Why not go hiking? I can go with you.", a response candidate like "Sure, I was planning to go out tomorrow." is logically wrong because it contradicts the first utterance of speaker A. We believe that generating adversarial examples during training, rather than random sampling, may be a good way to address both fuzzy-candidate and logical-error cases, and that capturing the logic-level information hidden behind conversation text is also worth studying in the future.

6 Conclusion

In this paper, we investigate matching a response with its multi-turn context using dependency information, based entirely on attention. Our solution extends the attention mechanism of the Transformer in two ways: (1) using stacked self-attention to harvest multi-grained semantic representations, and (2) utilizing cross-attention to match with dependency information. Empirical results on two large-scale datasets demonstrate the effectiveness of self-attention and cross-attention in multi-turn response selection. We believe that both self-attention and cross-attention could benefit other research areas, including spoken language understanding, dialogue state tracking, and seq2seq dialogue generation.
We would like to explore in depth how attention can help improve neural dialogue modeling for both chatbots and taskoriented dialogue systems in our future work. Acknowledgement We gratefully thank the anonymous reviewers for their insightful comments. This work is supported by the National Basic Research Program of China (973 program, No. 2014CB340505). References David Ameixa, Luisa Coheur, Pedro Fialho, and Paulo Quaresma. 2014. Luke, i am your father: dealing with out-of-domain requests by using movies subtitles. In International Conference on Intelligent Virtual Agents, pages 13–21. Springer. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Ricardo Baeza-Yates, Berthier Ribeiro-Neto, et al. 1999. Modern information retrieval, volume 463. ACM press New York. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. international conference on learning representations. Rafael E Banchs and Haizhou Li. 2012. Iris: a chatoriented dialogue system based on the vector space model. In Proceedings of the ACL 2012 System Demonstrations, pages 37–42. Association for Computational Linguistics. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. Zongcheng Ji, Zhengdong Lu, and Hang Li. 2014. An information retrieval approach to short text conversation. arXiv preprint arXiv:1408.6988. Quoc V Le, Jiquan Ngiam, Adam Coates, Abhik Lahiri, Bobby Prochnow, and Andrew Y Ng. 2011. On optimization methods for deep learning. In Proceedings of the 28th International Conference on International Conference on Machine Learning, pages 265–272. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature, 521(7553):436–444. Alan Lee, Rashmi Prasad, Aravind Joshi, Nikhil Dinesh, and Bonnie Webber. 2006. Complexity of dependencies in discourse: Are dependencies in discourse more complex than in syntax. In Proceedings of the 5th International Workshop on Treebanks and Linguistic Theories, Prague, Czech Republic, page 12. Jiwei Li, Will Monroe, Tianlin Shi, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. In EMNLP, pages 372–381. Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. international conference on learning representations. Ryan Lowe, Michael Noseworthy, Iulian V Serban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an automatic turing test: Learning to evaluate dialogue responses. In ACL, pages 372–381. Ryan Lowe, Nissan Pow, Iulian Vlad Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multiturn dialogue systems. annual meeting of the special interest group on discourse and dialogue, pages 285–294. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. 1127 Lasguido Nio, Sakriani Sakti, Graham Neubig, Tomoki Toda, Mirna Adriani, and Satoshi Nakamura. 2014. Developing non-goal dialog system based on examples of drama television. In Natural Interaction with Robots, Knowbots and Smartphones, pages 355– 361. Springer. 
Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Shengxian Wan, and Xueqi Cheng. 2016. Text matching as image recognition. In AAAI, pages 2793–2799. Alan Ritter, Colin Cherry, and William B Dolan. 2011. Data-driven response generation in social media. In In Proc. EMNLP, pages 583–593. Association for Computational Linguistics. Andreas Stolcke, Klaus Ries, Noah Coccaro, Elizabeth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational linguistics, 26(3):339–373. David R Traum and Peter A Heeman. 1996. Utterance units in spoken dialogue. In Workshop on Dialogue Processing in Spoken Language Systems, pages 125–140. Alan M Turing. 1950. Computing machinery and intelligence. Mind, 59(236):433–460. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000–6010. Ellen M Voorhees et al. 1999. The trec-8 question answering track report. In Trec, pages 77–82. Hao Wang, Zhengdong Lu, Hang Li, and Enhong Chen. 2013. A dataset for research on short-text conversations. In EMNLP, pages 935–945. Mingxuan Wang, Zhengdong Lu, Hang Li, and Qun Liu. 2015. Syntax-based deep matching of short texts. International Joint Conferences on Artificial Intelligence. Shuohang Wang and Jing Jiang. 2017. Machine comprehension using match-lstm and answer pointer. international conference on learning representations. Yu Wu, Wei Wu, Ming Zhou, and Zhoujun Li. 2017. Sequential match network: A new architecture for multi-turn response selection in retrieval-based chatbots. In ACL, pages 372–381. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, pages 2048–2057. Rui Yan, Yiping Song, and Hua Wu. 2016. Learning to respond with deep neural networks for retrievalbased human-computer conversation system. In Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval, pages 55–64. Wenpeng Yin, Hinrich Sch¨utze, Bing Xiang, and Bowen Zhou. 2016. Abcnn: Attention-based convolutional neural network for modeling sentence pairs. Transactions of the Association of Computational Linguistics, 4(1):259–272. Xiangyang Zhou, Daxiang Dong, Hua Wu, Shiqi Zhao, Dianhai Yu, Hao Tian, Xuan Liu, and Rui Yan. 2016. Multi-view response selection for human-computer conversation. In EMNLP, pages 372–381.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1128–1137 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1128 MOJITALK: Generating Emotional Responses at Scale Xianda Zhou Dept. of Computer Science and Technology Tsinghua University Beijing, 100084 China [email protected] William Yang Wang Department of Computer Science University of California, Santa Barbara Santa Barbara, CA 93106 USA [email protected] Abstract Generating emotional language is a key step towards building empathetic natural language processing agents. However, a major challenge for this line of research is the lack of large-scale labeled training data, and previous studies are limited to only small sets of human annotated sentiment labels. Additionally, explicitly controlling the emotion and sentiment of generated text is also difficult. In this paper, we take a more radical approach: we exploit the idea of leveraging Twitter data that are naturally labeled with emojis. We collect a large corpus of Twitter conversations that include emojis in the response and assume the emojis convey the underlying emotions of the sentence. We investigate several conditional variational autoencoders training on these conversations, which allow us to use emojis to control the emotion of the generated text. Experimentally, we show in our quantitative and qualitative analyses that the proposed models can successfully generate highquality abstractive conversation responses in accordance with designated emotions. 1 Introduction A critical research problem for artificial intelligence is to design intelligent agents that can perceive and generate human emotions. In the past decade, there has been significant progress in sentiment analysis (Pang et al., 2002, 2008; Liu, 2012) and natural language understanding—e.g., classifying the sentiment of online reviews. To build empathetic conversational agents, machines must also have the ability to learn to generate emotional sentences. Figure 1: An example Twitter conversation with emoji in the response (top). We collected a large amount of these conversations, and trained a reinforced conditional variational autoencoder model to automatically generate abstractive emotional responses given any emoji. One of the major challenges is the lack of largescale, manually labeled emotional text datasets. Due to the cost and complexity of manual annotation, most prior research studies primarily focus on small-sized labeled datasets (Pang et al., 2002; Maas et al., 2011; Socher et al., 2013), which are not ideal for training deep learning models with a large number of parameters. In recent years, a handful of medium to large scale, emotional corpora in the area of emotion analysis (Go et al., 2016) and dialog (Li et al., 2017b) are proposed. However, all of them are limited to a traditional, small set of labels, for example, “happiness,” “sadness,” “anger,” etc. or simply binary “positive” and “negative.” Such coarse-grained classification labels make it difficult to capture the nuances of human emotion. To avoid the cost of human annotation, we propose the use of naturally-occurring emoji-rich Twitter data. We construct a dataset using Twitter conversations with emojis in the response. The fine-grained emojis chosen by the users in the response can be seen as the natural label for the emotion of the response. 
We assume that the emotions and nuances of emojis are established through the extensive usage by Twitter users. If we can create agents that 1129 are able to imitate Twitter users’ language style when using those emojis, we claim that, to some extent, we have captured those emotions. Using a large collection of Twitter conversations, we then trained a conditional generative model to automatically generate the emotional responses. Figure 1 shows an example. To generate emotional responses in dialogs, another technical challenge is to control the target emotion labels. In contrast to existing work (Huang et al., 2017) that uses information retrieval to generate emotional responses, the research question we are pursuing in this paper, is to design novel techniques that can generate abstractive responses of any given arbitrary emotions, without having human annotators to label a huge amount of training data. To control the target emotion of the response, we investigate several encoder-decoder generation models, including a standard attention-based SEQ2SEQ model as the base model, and a more sophisticated CVAE model (Kingma and Welling, 2013; Sohn et al., 2015), as VAE is recently found convenient in dialog generation (Zhao et al., 2017). To explicitly improve emotion expression, we then experiment with several extensions to the CVAE model, including a hybrid objective with policy gradient. The performance in emotion expression is automatically evaluated by a separate sentence-to-emoji classifier (Felbo et al., 2017). Additionally, we conducted a human evaluation to assess the quality of the generated emotional text. Results suggest that our method is capable of generating state-of-the-art emotional text at scale. Our main contributions are three-fold: • We provide a publicly available, large-scale dataset of Twitter conversation-pairs naturally labeled with fine-grained emojis. • We are the first to use naturally labeled emojis for conducting large-scale emotional response generation for dialog. • We apply several state-of-the-art generative models to train an emotional response generation system, and analysis confirms that our models deliver strong performance. In the next section, we outline related work on sentiment analysis and emoji on Twitter data, as well as neural generative models. Then, we will introduce our new emotional research dataset and formalize the task. Next, we will describe the neural models we applied for the task. Finally, we will show automatic evaluation and human evaluation results, and some generated examples. Experiment details can be found in supplementary materials. 2 Related Work In natural language processing, sentiment analysis (Pang et al., 2002) is an area that involves designing algorithms for understanding emotional text. Our work is aligned with some recent studies on using emoji-rich Twitter data for sentiment classification. Eisner et al. (2016) proposes a method for training emoji embedding EMOJI2VEC, and combined with word2vec (Mikolov et al., 2013), they apply the embeddings for sentiment classification. DeepMoji (Felbo et al., 2017) is closely related to our study: It makes use of a large, naturally labeled Twitter emoji dataset, and train an attentive bi-directional long short-term memory network (Hochreiter and Schmidhuber, 1997) model for sentiment analysis. Instead of building a sentiment classifier, our work focuses on generating emotional responses, given the context and the target emoji. 
Our work is also in line with the recent progress of the application of Variational Autoencoder (VAE) (Kingma and Welling, 2013) in dialog generation. VAE (Kingma and Welling, 2013) encodes data in a probability distribution, and then samples from the distribution to generate examples. However, the original frameworks do not support end-to-end generation. Conditional VAE (CVAE) (Sohn et al., 2015; Larsen et al., 2015) was proposed to incorporate conditioning option in the generative process. Recent research in dialog generation shows that language generated by VAE models enjoy significantly greater diversity than traditional SEQ2SEQ models (Zhao et al., 2017), which is a preferable property toward building a true-to-life dialog agents. In dialog research, our work aligns with recent advances in sequence-to-sequence models (Sutskever et al., 2014) using long shortterm memory networks (Hochreiter and Schmidhuber, 1997). A slightly altered version of this model serves as our base model. Our modification enabled it to condition on single emojis. Li 1130 184,500 9,505 5,558 2,771 38,479 9,455 5,114 2,532 30,447 9,298 5,026 2,332 25,018 8,385 4,738 2,293 19,832 8,341 4,623 1,698 16,934 8,293 4,531 1,534 17,009 8,144 4,287 1,403 15,563 7,101 4,205 1,258 15,046 6,939 4,066 1,091 14,121 6,769 3,973 698 13,887 6,625 3,841 627 13,741 6,558 3,863 423 13,147 6,374 3,236 250 10,927 6,031 3,072 243 10,104 5,849 3,088 154 9,546 5,624 2,969 130 Table 1: All 64 emoji labels, and number of conversations labeled by each emoji. et al. (2016) use a reinforcement learning algorithm to improve the vanilla sequence-to-sequence model for non-task-oriented dialog systems, but their reinforced and its follow-up adversarial models (Li et al., 2017a) also do not model emotions or conditional labels. Zhao et al. (2017) recently introduced conditional VAE for dialog modeling, but neither did they model emotions in the conversations, nor explore reinforcement learning to improve results. Given a dialog history, Xie et. al.’s work recommends suitable emojis for current conversation. Xie et. al. (2016)compress the dialog history to vector representation through a hierarchical RNN and then map it to a emoji by a classifier, while in our model, the representation for original tweet, combined with the emoji embedding, is used to generate a response. 3 Dataset We start by describing our dataset and approaches to collecting and processing the data. Social media is a natural source of conversations, and people use emojis extensively within their posts. However, not all emojis are used to express emotion and frequency of emojis are unevenly distributed. Inspired by DeepMoji (Felbo et al., 2017), we use 64 common emojis as labels (see Table 1), and collect a large corpus of Twitter conversations conBefore: @amy miss you soooo much!!! After: miss you soo much! Label: Figure 2: An artificial example illustrating preprocess procedure and choice of emoji label. Note that emoji occurrences in responses are counted before the deduplication process. taining those emojis. Note that emojis with the difference only in skin tone are considered the same emoji. 3.1 Data Collection We crawled conversation pairs consisting of an original post and a response on Twitter from 12th to 14th of August, 2017. The response to a conversation must include at least one of the 64 emoji labels. Due to the limit of Twitter streaming API, tweets are filtered on the basis of words. 
In our case, a tweet can be reached only if at least one of the 64 emojis is used as a word, meaning it has to be a single character separated by blank space. However, this kind of tweets is arguably cleaner, as it is often the case that this emoji is used to wrap up the whole post and clusters of repeated emojis are less likely to appear in such tweets. For both original tweets and responses, only English tweets without multimedia contents (such as URL, image or video) are allowed, since we assume that those contents are as important as the text itself for the machine to understand the conversation. If a tweet contains less than three alphabetical words, the conversation is not included in the dataset. 3.2 Emoji Labeling Then we label responses with emojis. If there are multiple types of emoji in a response, we use the emoji with most occurrences inside the response. Among those emojis with same occurrences, we choose the least frequent one across the whole corpus, on the hypothesis that less frequent tokens better represent what the user wants to express. See Figure 2 for example. 1131 3.3 Data Preprocessing During preprocessing, all mentions and hashtags are removed, and punctuation1 and emojis are separated if they are adjacent to words. Words with digits are all treated as the same special token. In some cases, users use emojis and symbols in a cluster to express emotion extensively. To normalize the data, words with more than two repeated letters, symbol strings of more than one repeated punctuation symbols or emojis are shortened, for example, ‘!!!!’ is shortened to ‘!’, and ‘yessss’ to ‘yess’. Note that we do not reduce duplicate letters completely and convert the word to the ‘correct’ spelling (‘yes’ in the example) since the length of repeated letters represents the intensity of emotion. By distinguishing ‘yess’ from ‘yes’, the emotional intensity is partially preserved in our dataset. Then all symbols, emojis, and words are tokenized. Finally, we build a vocabulary of size 20K according to token frequency. Any tokens outside the vocabulary are replaced by the same special token. We randomly split the corpus into 596,959 /32,600/32,600 conversation pairs for train /validation/test set2. Distribution of emoji labels within the corpus is presented in Table 1. 4 Generative Models In this work, our goal is to generate emotional responses to tweets with the emotion specified by an emoji label. We assembled several generative models and trained them on our dataset. 4.1 Base: Attention-Based Sequence-to-Sequence Model Traditional studies use deep recurrent architecture and encoder-decoder models to generate conversation responses, mapping original texts to target responses. Here we use a sequence-to-sequence (SEQ2SEQ) model (Sutskever et al., 2014) with global attention mechanism (Luong et al., 2015) as our base model (See Figure 3). We use randomly initialized embedding vectors to represent each word. To specifically model the 1Emoticons (e.g. ‘:)’, ‘(-:’) are made of mostly punctuation marks. They are not examined in this paper. Common emoticons are treated as words during preprocessing. 2We will release the dataset with all tweets in its original form before preprocessing. To comply with Twitter’s policy, we will include the tweet IDs in our release, and provide a script for downloading the tweets using the official API. No information of the tweet posters is collected. Figure 3: From bottom to top is a forward pass of data during training. 
Left: the base model encodes the original tweet into vo and generates responses by decoding from the concatenation of vo and the embedded emoji ve. Right: in the CVAE model, all additional components (outlined in gray) can be added incrementally to the base model. A separate encoder encodes the response into x. The recognition network takes x as input and produces the latent variable z via the reparameterization trick. During training, the latent variable z is concatenated with vo and ve and fed to the decoder.

emotion, we compute the embedding vector of the emoji label in the same way as the word embeddings. The emoji embedding is further reduced to a smaller vector ve through a dense layer. We pass the embeddings of the original tweet through a bidirectional RNN encoder of GRU cells (Schuster and Paliwal, 1997; Chung et al., 2014). The encoder outputs a vector vo that represents the original tweet. Then vo and ve are concatenated and fed to a 1-layer RNN decoder of GRU cells. A response is then generated from the decoder.

4.2 Conditional Variational Autoencoder (CVAE)

Having a similar encoder-decoder structure, SEQ2SEQ can easily be extended to a Conditional Variational Autoencoder (CVAE) (Sohn et al., 2015). Figure 3 illustrates the model: a response encoder, a recognition network, and a prior network are added on top of the SEQ2SEQ model. The response encoder has the same structure as the original-tweet encoder but separate parameters. We use embeddings to represent Twitter responses and pass them through the response encoder. Mathematically, CVAE is trained by maximizing a variational lower bound on the conditional likelihood of x given c, according to:

p(x|c) = \int p(x|z, c)\, p(z|c)\, dz \quad (1)

Here z, c, and x are random variables, and z is the latent variable. In our case, the condition is c = [vo; ve], and the target x represents the response. The decoder is used to approximate p(x|z, c), denoted p_D(x|z, c). The prior network is introduced to approximate p(z|c), denoted p_P(z|c). The recognition network q_R(z|x, c) is introduced to approximate the true posterior p(z|x, c) and is absent during the generation phase. Assuming that the latent variable has a multivariate Gaussian distribution with a diagonal covariance matrix, the lower bound on log p(x|c) can then be written as:

-\mathcal{L}(\theta_D, \theta_P, \theta_R; x, c) = \mathrm{KL}(q_R(z|x, c) \,\|\, p_P(z|c)) - \mathbb{E}_{q_R(z|x, c)}[\log p_D(x|z, c)] \quad (2)

where \theta_D, \theta_P, and \theta_R are the parameters of those networks. In the recognition/prior network, we first pass the inputs through an MLP to obtain the mean and log variance of z's distribution. Then we apply the reparameterization trick (Kingma and Welling, 2013) to sample latent variables. During training, z from the recognition network is passed to the decoder and trained to approximate z' from the prior network. During testing, the target response is absent, and z' from the prior network is passed to the decoder instead. Our CVAE inherits the attention mechanism of the base model connecting the original-tweet encoder to the decoder, which makes our model deviate from previous CVAE work on text data. Based on the attention memory as well as c and z, a response is finally generated from the decoder.

When handling text data, VAE models that use recurrent neural networks as their encoders/decoders may first learn to ignore the latent variable and explain the data with the more easily optimized decoder. The latent variable then loses its functionality, and the VAE deteriorates to a plain SEQ2SEQ model mathematically (Bowman et al., 2015). Some previous methods effectively alleviate this problem.
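The two building blocks named above, the reparameterization trick and the KL term of Eq. 2, can be sketched as follows for diagonal Gaussians. The latent dimension and the toy prior/recognition outputs are illustrative assumptions; in the model they come from the MLPs on top of the prior and recognition networks.

import numpy as np

def reparameterize(mu, logvar, rng):
    # Sample z = mu + sigma * eps with eps ~ N(0, I) (the reparameterization trick).
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def diagonal_gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # Closed-form KL(q || p) for diagonal Gaussians, the first term of Eq. 2.
    return 0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0
    )

rng = np.random.default_rng(0)
mu_q, logvar_q = np.zeros(16), np.zeros(16)       # from the recognition network (toy values)
mu_p, logvar_p = 0.1 * np.ones(16), np.zeros(16)  # from the prior network (toy values)
z = reparameterize(mu_q, logvar_q, rng)
kl = diagonal_gaussian_kl(mu_q, logvar_q, mu_p, logvar_p)

When the recognition and prior distributions coincide, the KL term vanishes, which is exactly the degenerate situation described above in which the decoder ignores z.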
Such methods are also important for keeping a balance between the two terms of the loss, namely the KL loss and the reconstruction loss. We use KL annealing, early stopping (Bowman et al., 2015), and a bag-of-word loss (Zhao et al., 2017) in our models. The general loss with the bag-of-word loss (see the supplementary materials for details) is rewritten as:

\mathcal{L}' = \mathcal{L} + \mathcal{L}_{bow} \quad (3)

4.3 Reinforced CVAE

To control the emotion of the generation even more explicitly, we combine policy-gradient techniques with the CVAE above and propose the Reinforced CVAE model for our task. We first train an emoji classifier on our dataset separately and fix its parameters thereafter. The classifier, a skip-connected model of bidirectional GRU-RNN layers (Felbo et al., 2017), is used to produce the reward for policy training. During policy training, we first obtain the generated response x' by passing x and c through the CVAE, then feed x' to the classifier and take the probability of the emoji label as the reward R. Let \theta be the parameters of our network; the REINFORCE algorithm (Williams, 1992) is used to maximize the expected reward of generated responses:

\mathcal{J}(\theta) = \mathbb{E}_{p(x|c)}[R_\theta(x, c)] \quad (4)

The gradient of Equation 4 is approximated using the likelihood-ratio trick (Glynn, 1990; Williams, 1992):

\nabla\mathcal{J}(\theta) = (R - r)\, \nabla \sum_{t}^{|x|} \log p(x_t | c, x_{1:t-1}) \quad (5)

Here r is a baseline value that keeps the estimate unbiased and reduces its variance. In our case, we directly pass x through the emoji classifier and take the probability of the emoji label as r. The model thus encourages response generation with R > r.

Since the REINFORCE objective is unrelated to response generation, it may cause the generation model to quickly deteriorate toward generic responses. To stabilize the training process, we propose two straightforward techniques to constrain the policy training:

1. Adjust rewards according to the position of the emoji label when all labels are ranked from high to low by the probability given by the emoji classifier. When the emoji label ranks high among all possible emojis, we assume that the model has succeeded in expressing the emotion, so there is no need to push the parameters toward a higher probability for this response. The modified policy gradient is written as:

\nabla\mathcal{J}'(\theta) = \alpha (R - r)\, \nabla \sum_{t}^{|x|} \log p(x_t | c, x_{1:t-1}) \quad (6)

where \alpha \in [0, 1] is a varying coefficient: the higher R ranks among all emoji labels, the closer \alpha is to 0.

2. Train Reinforced CVAE with a hybrid objective of REINFORCE and the variational lower bound, learning toward both emotion accuracy and response appropriateness:

\min_\theta \mathcal{L}'' = \mathcal{L}' - \lambda \mathcal{J}' \quad (7)

where \lambda is a balancing coefficient, set to 1 in our experiments. The algorithm outlining the training process of Reinforced CVAE can be found in the supplementary materials.

5 Experimental Results and Analyses

We conducted several experiments to finalize the hyper-parameters of our models (Table 2). During training, a fully converged base SEQ2SEQ model is used to initialize its counterparts in the CVAE models. Pretraining is vital to the success of our models, since it is hard for them to learn a latent-variable space from total randomness. For more details, please refer to the supplementary materials.

In this section, we first report and analyze the general results of our models, including perplexity, loss, and emotion accuracy. Then we take a closer look at the generation quality as well as our models' capability of expressing emotion.
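Before turning to the results, the hybrid objective of Eqs. 6-7 can be summarized in a short sketch. The REINFORCE term is written as a surrogate loss whose gradient matches Eq. 6 when R, r, and alpha are treated as constants; the exact form of alpha is not specified in the text, so the linear mapping from rank to [0, 1] below is our own illustrative choice.

import numpy as np

def hybrid_loss(elbo_bow_loss, token_log_probs, R, r, label_rank, n_labels, lam=1.0):
    # Surrogate loss whose gradient matches Eq. 6; minimizing it implements Eq. 7.
    # label_rank: rank of the target emoji among all labels for the generated
    # response (0 = highest classifier probability).
    # The linear mapping of the rank onto [0, 1] is an assumption for illustration.
    alpha = label_rank / (n_labels - 1)        # close to 0 when the rank is high
    j = alpha * (R - r) * np.sum(token_log_probs)
    return elbo_bow_loss - lam * j

# toy call with made-up numbers
loss = hybrid_loss(69.2, np.array([-2.1, -0.7, -1.3]), R=0.40, r=0.25,
                   label_rank=3, n_labels=64)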
5.1 General

To generally evaluate the performance of our models, we use generation perplexity and top-1/top-5 emoji accuracy on the test set. Perplexity indicates how much difficulty the model is having when generating responses. We also use top-5 emoji accuracy, since the meanings of different emojis may overlap with only a subtle difference; the machine may learn that similarity and give multiple possible labels as the answer.

                             Emoji Accuracy
Model              Perplexity   Top1    Top5
Development
  Base             127.0        34.2%   57.6%
  CVAE             37.1         40.7%   75.3%
  Reinforced CVAE  38.1         42.2%   76.9%
Test
  Base             130.6        33.9%   58.1%
  CVAE             36.9         41.4%   75.1%
  Reinforced CVAE  38.3         42.1%   77.3%

Table 2: Generation perplexity and emoji accuracy of the three models.

Note that we use the same emoji classifier for evaluation. Its accuracy (see the supplementary materials) may not seem perfect, but it is the state-of-the-art emoji classifier given so many classes. It is also reasonable to use the same classifier as in training for automated evaluation, as in (Hu et al., 2017). We can obtain meaningful results as long as the classifier is able to capture the semantic relationship between emojis (Felbo et al., 2017).

As shown in Table 2, CVAE significantly reduces the perplexity and increases the emoji accuracy over the base model. Reinforced CVAE further adds to the emoji accuracy at the cost of a slight increase in perplexity. These results confirm that the proposed methods are effective for the generation of emotional responses. When converged, the KL loss is 27.0/25.5 for CVAE/Reinforced CVAE respectively, and the reconstruction loss is 42.2/40.0. The models achieved a balance between the two terms of the loss, confirming that they have successfully learned a meaningful latent variable.

5.2 Generation Diversity

SEQ2SEQ generates in a monotonous way, as several generic responses occur repeatedly, while the generation of the CVAE models is much more diverse. To showcase this disparity, we calculated the type-token ratios of unigrams/bigrams/trigrams in the generated responses as the diversity score. As shown in Table 3, the CVAE models beat the base model by a large margin. The diversity scores of Reinforced CVAE are reasonably compromised since it generates more emotional responses.

Figure 4: Top-5 emoji accuracy of the first 32 emoji labels. Each bar represents an emoji, and its length represents how many of all responses to the original tweets are top-5 accurate. Different colors represent different models. Emojis are numbered in order of frequency in the dataset. Top: CVAE v. Base. Bottom: Reinforced CVAE v. CVAE. If Reinforced CVAE scores higher, the margin is marked in orange; if lower, in black.

5.3 Controllability of Emotions

There are potentially multiple types of emotion in reaction to an utterance. Our work makes it possible to generate a response with an arbitrary emotion by conditioning the generation on a specific type of emoji. In this section, we generate one response in reply to each original tweet in the dataset and condition on each emoji of the selected 64 emo-

Model              Unigram   Bigram   Trigram
Base               0.0061    0.0199   0.0362
CVAE               0.0191    0.131    0.365
Reinforced CVAE    0.0160    0.118    0.337
Target responses   0.0353    0.370    0.757

Table 3: Type-token ratios of the generation by the three models. Scores of tokenized human-generated target responses are given for reference.

Setting Model v.
Base Win Lose Tie reply CVAE 42.4% 43.0% 14.6% reply Reinforced CVAE 40.6% 39.6% 19.8% emoji CVAE 48.4% 26.2% 25.4% emoji Reinforced CVAE 50.0% 19.6% 30.4% Table 4: Results of human evaluation. Tests are conducted pairwise between CVAE models and the base model. jis. We may have recorded some original tweets with different replies in the dataset, but an original tweet only need to be used once for each emoji, so we eliminate duplicate original tweets in the dataset. There are 30,299 unique original tweets in the test set. Figure 4 shows the top-5 accuracy of each type of the first 32 emoji labels when the models generates responses from the test set conditioning on the same emoji. The results show that CVAE models increase the accuracy over every type of emoji label. Reinforced CVAE model sees a bigger increase on the less common emojis, confirming the effect of the emoji-specified policy training. 5.4 Human Evaluation We employed crowdsourced judges to evaluate a random sample of 100 items (Table 4), each being assigned to 5 judges on the Amazon Mechanical Turk. We present judges original tweets and generated responses. In the first setting of human evaluation, judges are asked to decide which one of the two generated responses better reply the original tweet. In the second setting, the emoji label is presented with the item discription, and judges are asked to pick one of the two generated responses that they decide better fits this emoji. (These two settings of evaluation are conducted separately so that it will not affect judges’ verdicts.) Order of two generated responses under one item is permuted. Ties are permitted for an1135 Content sorry guys , was gunna stream tonight but i ’m still feeling like crap and my voice disappeared . i will make it up to you Target Emotion Base i ’m sorry you ’re going to be missed it i ’m sorry for your loss i ’m sorry you ’re going to be able to get it CVAE hope you are okay hun ! hi jason , i ’ll be praying for you im sorry u better suck u off Reinforced CVAE hope you ’re feeling it hope you had a speedy recovery man ! hope you feel better soon , please get well soon dude i ’m so sorry for that i wanna hear it and i ’m sorry i can ’t go to canada with you but i wanna be away from canada Content add me in there my bro Target Emotion Base i ’m not sure you ’ll be there i ’m here for you i ’m not ready for you CVAE you know , you need to tell me in your hometown ! you will be fine bro , i ’ll be in the gym for you i can ’t wait Reinforced CVAE you might have to get me hip hop off . good luck bro ! this is about to be healthy i ’m still undecided and i ’m still waiting Content don ’t tell me match of the day is delayed because of this shit Target Emotion Base i ’m not even a fan of the game i ’m not sure if you ever have any chance to talk to someone else i ’m sorry i ’m not doubting you CVAE you can ’t do it bc you ’re in my mentions see now a good point hiya , unfortunately , it ’s not Reinforced CVAE oh my god i ’m saying this as long as i remember my twitter fab mate , you ’ll enjoy the game and you ’ll get a win it ’s the worst Content g i needed that laugh lmfaoo Target Emotion Base i ’m glad you enjoyed it i ’m not gonna lie i ’m sorry i ’m not laughing CVAE good ! have a good time i don ’t plan on that me too . but it ’s a lot of me . 
Reinforced CVAE thank you for your tweet , you didn ’t know how much i guess that ’s a bad idea , u gotta hit me up on my phone i feel bad at this and i hope you can make a joke Table 5: Some examples from our generated emotional responses. Context is the original tweet, and target emotion is specified by the emoji. Following are the responses generated by each of the three models based on the context and the target emotion. swers. We batch five items as one assignment and insert an item with two identical outputs as the sanity check. Anyone who failed to choose “tie” for that item is considered as a careless judge and is therefore rejected from our test. We then conducted a simplified Turing test. Each item we present judges an original tweet, its reply by a human, and its response generated from Reinforced CVAE model. We ask judges to decide which of the two given responses is written by a human. Other parts of the setting are similar to above-mentioned tests. It turned out 18% of the test subjects mistakenly chose machine-generated responses as human written, and 27% stated that they were not able to distinguish between the two responses. In regard of the inter-rater agreement, there are four cases. The ideal situation is that all five judges choose the same answer for a item, and in the worst-case scenario, at most two judges choose the same answer. In light of this, we have counted that 32%/33%/31%/5% of all items have 5/4/3/2 judges in agreement, showing that our experiment has a reasonably reliable inter-rater agreement. 5.5 Case Study We sampled some generated responses from all three models, and list them in Figure 5. Given 1136 an original tweet, we would like to generate responses with three different target emotions. SEQ2SEQ only chooses to generate most frequent expressions, forming a predictable pattern for its generation (See how every sampled response by the base model starts with “I’m”). On the contrary, generation from the CVAE model is diverse, which is in line with previous quantitative analysis. However, the generated responses are sometimes too diversified and unlikely to reply to the original tweet. Reinforced CVAE somtetimes tends to generate a lengthy response by stacking up sentences (See the responses to the first tweet when conditioning on the ‘folded hands’ emoji and the ‘sad face’ emoji). It learns to break the length limit of sequence generation during hybrid training, since the variational lower bound objective is competing with REINFORCE objective. The situation would be more serious is λ in Equation 7 is set higher. However, this phenomenon does not impair the fluency of generated sentences, as can be seen in Figure 5. 6 Conclusion and Future Work In this paper, we investigate the possibility of using naturally annotated emoji-rich Twitter data for emotional response generation. More specifically, we collected more than half a million Twitter conversations with emoji in the response and assumed that the fine-grained emoji label chosen by the user expresses the emotion of the tweet. We applied several state-of-the-art neural models to learn a generation system that is capable of giving a response with an arbitrarily designated emotion. We performed automatic and human evaluations to understand the quality of generated responses. We trained a large scale emoji classifier and ran the classifier on the generated responses to evaluate the emotion accuracy of the generated response. 
We performed an Amazon Mechanical Turk experiment, by which we compared our models with a baseline sequence-to-sequence model on metrics of relevance and emotion. Experimentally, it is shown that our model is capable of generating high-quality emotional responses, without the need of laborious human annotations. Our work is a crucial step towards building intelligent dialog agents. We are also looking forward to transferring the idea of naturally-labeled emojis to task-oriented dialog and multi-turn dialog generation problems. Due to the nature of social media text, some emotions, such as fear and disgust, are underrepresented in the dataset, and the distribution of emojis is unbalanced to some extent. We will keep accumulating data and increase the ratio of underrepresented emojis, and advance toward more sophisticated abstractive generation methods. 1137 References Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal J´ozefowicz, and Samy Bengio. 2015. Generating sentences from a continuous space. CONLL. Junyoung Chung, C¸ aglar G¨ulc¸ehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. NIPS 2014 Deep Learning and Representation Learning Workshop. Ben Eisner, Tim Rockt¨aschel, Isabelle Augenstein, Matko Boˇsnjak, and Sebastian Riedel. 2016. emoji2vec: Learning emoji representations from their description. SocialNLP at EMNLP. Bjarke Felbo, Alan Mislove, Anders Søgaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. EMNLP. Peter W Glynn. 1990. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 33(10):75–84. Alec Go, Richa Bhayani, and Lei Huang. 2016. Sentiment140. http://help.sentiment140. com/. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward controlled generation of text. In International Conference on Machine Learning, pages 1587–1596. Chieh-Yang Huang, Tristan Labetoulle, Ting-Hao Kenneth Huang, Yi-Pei Chen, Hung-Chen Chen, Vallari Srivastava, and Lun-Wei Ku. 2017. Moodswipe: A soft keyboard that suggests messages based on userspecified emotions. EMNLP Demo. Diederik P Kingma and Max Welling. 2013. Autoencoding variational bayes. ICLR. Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, Hugo Larochelle, and Ole Winther. 2015. Autoencoding beyond pixels using a learned similarity metric. ICML. Jiwei Li, Will Monroe, Alan Ritter, and Dan Jurafsky. 2016. Deep reinforcement learning for dialogue generation. EMNLP. Jiwei Li, Will Monroe, Tianlin Shi, Alan Ritter, and Dan Jurafsky. 2017a. Adversarial learning for neural dialogue generation. EMNLP. Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017b. Dailydialog: A manually labelled multi-turn dialogue dataset. IJCNLP. Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis lectures on human language technologies, 5(1):1–167. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. EMNLP. Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. 
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 142–150. Association for Computational Linguistics. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. ICLR. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10, pages 79–86. Association for Computational Linguistics. Bo Pang, Lillian Lee, et al. 2008. Opinion mining and sentiment analysis. Foundations and Trends R⃝in Information Retrieval, 2(1–2):1–135. Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631–1642. Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems, pages 3483–3491. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256. Ruobing Xie, Zhiyuan Liu, Rui Yan, and Maosong Sun. 2016. Neural emoji recommendation in dialogue systems. arXiv preprint arXiv:1612.04609. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. ACL.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1138–1148 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1138 Taylor’s Law for Human Linguistic Sequences Tatsuru Kobayashi∗ Graduate School of Information Science and Technology, University of Tokyo 7-3-1 Hongo, Bunkyo-ku Tokyo 113-8656 Japan Kumiko Tanaka-Ishii† Research Center for Advanced Science and Technology, University of Tokyo 4-6-1 Komaba, Meguro-ku Tokyo 153-8904 Japan Abstract Taylor’s law describes the fluctuation characteristics underlying a system in which the variance of an event within a time span grows by a power law with respect to the mean. Although Taylor’s law has been applied in many natural and social systems, its application for language has been scarce. This article describes a new quantification of Taylor’s law in natural language and reports an analysis of over 1100 texts across 14 languages. The Taylor exponents of written natural language texts were found to exhibit almost the same value. The exponent was also compared for other language-related data, such as the child-directed speech, music, and programming language code. The results show how the Taylor exponent serves to quantify the fundamental structural complexity underlying linguistic time series. The article also shows the applicability of these findings in evaluating language models. 1 Introduction Taylor’s law characterizes how the variance of the number of events for a given time and space grows with respect to the mean, forming a power law. It is a quantification method for the clustering behavior of a system. Since the pioneering studies of this concept (Smith, 1938; Taylor, 1961), a substantial number of studies have been conducted across various domains, including ecology, life science, physics, finance, and human dynamics, as well summarized in (Eisler, Bartos, and Kertész, 2007). ∗[email protected][email protected] More recently, Cohen and Xu (2015) reported Taylor exponents for random sampling from various distributions, and Calif and Schmitt (2015) reported Taylor’s law in wind energy data using a non-parametric regression. Those two papers also refer to research about Taylor’s law in a wide range of fields. Despite such diverse application across domains, there has been little analysis based on Taylor’s law in studying natural language. The only such report, to the best of our knowledge, is Gerlach and Altmann (2014), but they measured the mean and variance by means of the vocabulary size within a document. This approach essentially differs from the original concept of Taylor analysis, which fundamentally counts the number of events, and thus the theoretical background of Taylor’s law as presented in Eisler, Bartos, and Kertész (2007) cannot be applied to interpret the results. For the work described in this article, we applied Taylor’s law for texts, in a manner close to the original concept. We considered lexical fluctuation within texts, which involves the cooccurrence and burstiness of word alignment. The results can thus be interpreted according to the analytical results of Taylor’s law, as described later. We found that the Taylor exponent is indeed a characteristic of texts and is universal across various kinds of texts and languages. These results are shown here for data including over 1100 singleauthor texts across 14 languages and large-scale newspaper data. 
Moreover, we found that the Taylor exponents for other symbolic sequential data, including child-directed speech, programming language code, and music, differ from those for written natural language texts, thus distinguishing different kinds of data sources. The Taylor exponent in this sense could categorize and quantify the structural 1139 complexity of language. The Chomsky hierarchy (Chomsky, 1956) is, of course, the most important framework for such categorization. The Taylor exponent is another way to quantify the complexity of natural language: it allows for continuous quantification based on lexical fluctuation. Since the Taylor exponent can quantify and characterize one aspect of natural language, our findings are applicable in computational linguistics to assess language models. At the end of this article, in §5, we report how the most basic character-based long short-term memory (LSTM) unit produces texts with a Taylor exponent of 0.50, equal to that of a sequence of independent and identically distributed random variables (an i.i.d. sequence). This shows how such models are limited in producing consistent co-occurrence among words, as compared with a real text. Taylor analysis thus provides a possible direction to reconsider the limitations of language models. 2 Related Work This work can be situated as a study to quantify the complexity underlying texts. As summarized in (Tanaka-Ishii and Aihara, 2015), measures for this purpose include the entropy rate (Takahira, Tanaka-Ishii, and Lukasz, 2016; Bentz et al., 2017) and those related to the scaling behaviors of natural language. Regarding the latter, certain power laws are known to hold universally in linguistic data. The most famous among these are Zipf’s law (Zipf, 1965) and Heaps’ law (Heaps, 1978). Other, different kinds of power laws from Zipf’s law are obtained through various methods of fluctuation analysis, but the question of how to quantify the fluctuation existing in language data has been controversial. Our work is situated as one such case of fluctuation analysis. In real data, the occurrence timing of a particular event is often biased in a bursty, clustered manner, and fluctuation analysis quantifies the degree of this bias. Originally, this was motivated by a study of how floods of the Nile River occur in clusters (i.e., many floods coming after an initial flood) (Hurst, 1951). Such clustering phenomena have been widely reported in both natural and social domains (Eisler, Bartos, and Kertész, 2007). Fluctuation analysis for language originates in (Ebeling and Pöeschel, 1994), which applied the approach to characters. That work corresponds to observing the average of the variances of each character’s number of occurrences within a time span. Their method is strongly related to ours but different from two viewpoints: (1) Taylor analysis considers the variance with respect to the mean, rather than time; and (2) Taylor analysis does not average results over all elements. Because of these differences, the method in (Ebeling and Pöeschel, 1994) cannot distinguish real texts from an i.i.d. process when applied to word sequences (Takahashi and Tanaka-Ishii, 2018). Event clustering phenomena cause a sequence to resemble itself in a self-similar manner. Therefore, studies of the fluctuation underlying a sequence can take another form of long-range correlation analysis, to consider the similarity between two subsequences underlying a time series. 
This approach requires a function to calculate the similarity of two sequences, and the autocorrelation function (ACF) is the main function considered. Since the ACF only applies to numerical data, both Altmann, Pierrehumbert, and Motter (2009) and Tanaka-Ishii and Bunde (2016) applied long-range correlation analysis by transforming text into intervals and showed how natural language texts are long-range correlated. Another recent work (Lin and Tegmark, 2016) proposed using mutual information instead of the ACF. Mutual information, however, cannot detect the long-range correlation underlying texts. All these works studied correlation phenomena via only a few texts and did not show any underlying universality with respect to data and language types. One reason is that analysis methods for long-range correlation are nontrivial to apply to texts. Overall, the analysis based on Taylor’s law in the present work belongs to the former approach of fluctuation analysis and shows the law’s vast applicability and stability for written texts and even beyond, quantifying universal complexity underlying human linguistic sequences. 3 Measuring the Taylor Exponent 3.1 Proposed method Given a set of elements W (words), let X = X1, X2, . . . , XN be a discrete time series of length N, where Xi ∈W for all i = 1, 2, . . . , N, i.e., each Xi represents a word. For a given segment length ∆t ∈N (a positive integer), a data sample X is segmented by the length ∆t. The number of occurrences of a specific word wk ∈W is counted for every segment, and the mean µk and standard 1140 deviation σk across segments are obtained. Doing this for all word kinds w1, . . . , w|W| ∈W gives the distribution of σ with respect to µ. Following a previous work (Eisler, Bartos, and Kertész, 2007), in this article Taylor’s law is defined to hold when µ and σ are correlated by a power law in the following way: σ ∝µα. (1) Experimentally, the Taylor exponent α is known to take a value within the range of 0.5 ≤α ≤ 1.0 across a wide variety of domains as reported in (Eisler, Bartos, and Kertész, 2007), including finance, meteorology, agriculture, and biology. Mathematically, it is analytically proven that α = 0.5 for an i.i.d process, and the proof is included as Supplementary Material. On the other hand, α = 1.0 when all segments always contain the same proportion of the elements of W. For example, suppose that W = {a, b}. If b always occurs twice as often as a in all segments (e.g., three a and six b in one segment, two a and four b in another, etc.), then both the mean and standard deviation for b are twice those for a, so the exponent is 1.0. In a real text, this cannot occur for all W, so α < 1.0 for natural language text. Nevertheless, for a subset of words in W, this could happen, especially for a template-like sequence. For instance, consider a programming statement: while (i < 1000) do i-. Here, the words while and do always occur once, whereas i always occurs twice. This example shows that the exponent indicates how consistently words depend on each other in W, i.e., how words co-occur systematically in a coherent manner, thus indicating that the Taylor exponent is partly related to grammaticality. To measure the Taylor exponent α, the mean and standard deviation are computed for every word kind1 and then plotted in log-log coordinates. The number of points in this work was the number of different words. We fitted the points to a linear function in log-log coordinates by the least-squares method. 
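Concretely, the measurement just described (segment the word sequence by ∆t, collect (µ_k, σ_k) for every word kind, and fit a line in log-log coordinates by least squares) can be sketched in a few lines of Python. The authors point to their own released implementation in a footnote below; the helper here, including its name, its default ∆t, and its choice to drop zero-variance words that cannot be placed in log-log coordinates, is only an illustration under those assumptions, not that code.

```python
import math
from collections import defaultdict

def taylor_exponent(tokens, delta_t=5620):
    """Estimate the Taylor exponent alpha in sigma ~ c * mu**alpha.

    tokens  : sequence of word tokens (unlemmatized, as in the paper)
    delta_t : segment length in words (the paper uses 5620)
    Returns (alpha_hat, c_hat).
    """
    n_seg = len(tokens) // delta_t
    if n_seg < 2:
        raise ValueError("text too short for this segment size")

    # counts[w][s] = frequency of word w in segment s
    counts = defaultdict(lambda: [0] * n_seg)
    for s in range(n_seg):
        for w in tokens[s * delta_t:(s + 1) * delta_t]:
            counts[w][s] += 1

    # One (mu, sigma) point per word kind; words with sigma == 0 are dropped
    # here because they have no log-log coordinate (our choice, not the paper's).
    xs, ys = [], []
    for freqs in counts.values():
        mu = sum(freqs) / n_seg
        var = sum((f - mu) ** 2 for f in freqs) / n_seg
        sigma = math.sqrt(var)
        if sigma > 0:
            xs.append(math.log(mu))
            ys.append(math.log(sigma))

    # Least-squares line in log-log space: log sigma = log c + alpha * log mu
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    alpha = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    log_c = mean_y - alpha * mean_x
    return alpha, math.exp(log_c)
```

On a long novel an estimator of this kind should land near the value of roughly 0.58 reported below for written natural language, and near 0.5 for the randomized baselines of §5.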
We naturally took the logarithm of both cµ^α and σ to estimate the exponent, because Taylor's law is a power law. The coefficient ĉ and exponent α̂ are then estimated as the following:

\hat{c}, \hat{\alpha} = \arg\min_{c,\alpha} \epsilon(c, \alpha), \qquad \epsilon(c, \alpha) = \sqrt{\frac{1}{|W|} \sum_{k=1}^{|W|} \left( \log \sigma_k - \log c\mu_k^{\alpha} \right)^2 }.

Footnote 1: In this work, words are not lemmatized, e.g. "say," "said," and "says" are all considered different words. This was chosen so in this work because the Taylor exponent considers systematic co-occurrence of words, and idiomatic phrases should thus be considered in their original forms.

This fit function could be a problem depending on the distribution of errors between the data points and the regression line. As seen later, the error distribution seems to differ with the kind of data: for a random source the error seems Gaussian, and so the above formula is relevant, whereas for real data, the distribution is biased. Changing the fit function according to the data source, however, would cause other essential problems for fair comparison. Here, because Cohen and Xu (2015) reported that most empirical works on Taylor's law used least-squares regression (including their own), this work also uses the above scheme,[2] with the error defined as ε(ĉ, α̂).

Footnote 2: The code for estimating the exponent is available from https://github.com/Group-TanakaIshii/word_taylor.

3.2 Data

Table 1 lists all the data used for this article. The data consisted of natural language texts, language-related sequences, and randomized data, listed as different blocks in the table. The natural language texts consisted of 1142 single-author long texts (first block, extracted from Project Gutenberg and Aozora Bunko across 14 languages,[3] with the second block listing individual samples taken from Project Gutenberg together with the complete works of Shakespeare), and newspapers (third block, from the Gigaword corpus, available from the Linguistic Data Consortium in English, Chinese, and other major languages). Other sequences appear in the fourth block: the enwiki8 100-MB dump dataset (consisting of tag-annotated text from English Wikipedia), the 10 longest child-directed speech utterances in CHILDES data[4] (preprocessed by extracting only children's utterances), four program sources (in Lisp, Haskell, C++, and Python, crawled from large representative archives, parsed, and stripped of natural language comments), and 12 pieces of musical data (long symphonies and so forth, transformed from MIDI into text with the software SMF2MML,[5] with annotations removed). As for the randomized data listed in the last block, we took the text of Moby Dick and generated 10 different shuffled samples and bigram-generated sequences. We also introduced LSTM-generated texts to consider the utility of our findings, as explained in §5.

Footnote 3: All texts above a size threshold (1 megabyte) were extracted from the two archives, resulting in 1142 texts.
Footnote 4: Child Language Data Exchange System (MacWhinney, 2000; Bol, 1995; Lieven, Salomo, and Tomasello, 2009; Rondal, 1985; Behrens, 2006; Gil and Tadmor, 2007; Oshima-Takane et al., 1995; Smoczynska, 1985; Anđelković, Ševa, and Moskovljević, 2001; Benedet et al., 2004; Plunkett and Strömqvist, 1992)
Footnote 5: http://shaw.la.coocan.jp/smf2mml/

Table 1: Data we used in this article. For each dataset, length is the number of words and vocabulary is the number of different words. For detail of the data kinds, see §3.2. Where only one sample exists, the minimum and maximum equal the mean and a single value is shown.

Texts | Language | mean α̂ | Samples | Length (mean / min / max) | Vocabulary (mean / min / max)
Gutenberg | English | 0.58 | 910 | 313127.4 / 185939 / 2488933 | 17237.7 / 7321 / 69812
Gutenberg | French | 0.57 | 66 | 197519.3 / 169415 / 1528177 | 22098.3 / 14106 / 57193
Gutenberg | Finnish | 0.55 | 33 | 197519.3 / 149488 / 396920 | 33597.1 / 26275 / 47263
Gutenberg | Chinese | 0.61 | 32 | 629916.8 / 315099 / 4145117 | 15352.9 / 9153 / 60950
Gutenberg | Dutch | 0.57 | 27 | 256859.2 / 198924 / 435683 | 19159.1 / 13880 / 31595
Gutenberg | German | 0.59 | 20 | 236175.0 / 184321 / 331322 | 24242.3 / 11079 / 37228
Gutenberg | Italian | 0.57 | 14 | 266809.0 / 196961 / 369326 | 29103.5 / 18641 / 45032
Gutenberg | Spanish | 0.58 | 12 | 363837.2 / 219787 / 903051 | 26111.1 / 18111 / 36507
Gutenberg | Greek | 0.58 | 10 | 159969.2 / 119196 / 243953 | 22805.7 / 15877 / 31386
Gutenberg | Latin | 0.57 | 2 | 505743.5 / 205228 / 806259 | 59667.5 / 28739 / 90596
Gutenberg | Portuguese | 0.56 | 1 | 261382.0 | 24719.0
Gutenberg | Hungarian | 0.57 | 1 | 198303.0 | 38384.0
Gutenberg | Tagalog | 0.59 | 1 | 208455.0 | 26335.0
Aozora | Japanese | 0.59 | 13 | 616677.2 / 105343 / 2951320 | 19760.0 / 6620 / 49100
Moby Dick | English | 0.58 | 1 | 254655.0 | 20473.0
Hong Lou Meng | Chinese | 0.59 | 1 | 701256.0 | 18451.0
Les Miserables | French | 0.57 | 1 | 691407.0 / 690417 / 690417 | 31956.0
Shakespeare (All) | English | 0.59 | 1 | 1000238.0 | 40840.0
WSJ | English | 0.56 | 1 | 22679513.0 | 137467.0
NYT | English | 0.58 | 1 | 1528137194.0 | 3155495.0
People's Daily | Chinese | 0.58 | 1 | 19420853.0 | 172140.0
Mainichi | Japanese | 0.56 | 24 (yrs) | 31321594.3 / 24483331 / 40270706 | 145534.5 / 127290 / 169270
enwiki8 | tag-annotated | 0.63 | 1 | 14647848.0 | 1430791.0
CHILDES | various | 0.68 | 10 | 193434.0 / 48952 / 448772 | 9908.0 / 5619 / 17893
Programs | — | 0.79 | 4 | 34161018.8 / 3697199 / 68622162 | 838907.8 / 127653 / 1545127
Music | — | 0.79 | 12 | 135993.4 / 76629 / 215480 | 9187.9 / 907 / 27043
Moby Dick (shuffled) | — | 0.50 | 10 | 254655.0 | 20473.0
Moby Dick (bigram) | — | 0.50 | 10 | 300001.0 | 16963.8 / 16893 / 17056
3-layer stacked LSTM (character-based) | English | 0.50 | 1 | 256425.0 | 50115.0
Neural MT | English | 0.57 | 1 | 623235.0 | 27370.0

4 Taylor Exponents for Real Data

Figure 1 shows typical distributions for natural language texts, with two single-author texts ((a) and (b)) and two multiple-author texts (newspapers, (c) and (d)), in English and Chinese, respectively. The segment size was ∆t = 5620 words,[6] i.e., each segment had 5620 words and the horizontal axis indicates the averaged frequency of a specific word within a segment of 5620 words. The points at the upper right represent the most frequent words, whereas those at the lower left represent the least frequent. Although the plots exhibited different distributions, they could globally be considered roughly aligned in a power-law
Footnote 6: In comparison, Figure 6 shows the effect on the exponent of varying ∆t. As seen in that figure, larger ∆t increased the differences in exponent among different data sets, making the differences more distinguishable.
Thus, ∆t had better be as large as possible while keeping µ and σ computable. For this article, we chose ∆t = 5620, which was one of the ∆t values used in Figure 6. 1142 (a) Moby Dick (b) Hong Lou Meng (c) Wall Street Journal (d) People’s Daily Figure 1: Examples of Taylor’s law for natural language texts. Moby Dick and Hong Lou Meng are representative of single-author texts, and the two newspapers are representative of multipleauthor texts, in English and Chinese, respectively. Each point represents a kind of word. The values of σ and µ for each word kind are plotted across texts within segments of size ∆t = 5620. The Taylor exponents obtained by the least-squares method were all around 0.58. manner. This finding is non-trivial, as seen in other analyses based on Taylor’s law (Eisler, Bartos, and Kertész, 2007). The exponent α was almost the same even though English and Chinese are different languages using different kinds of script. As explained in §3.1, the Taylor exponent indicates the degree of consistent co-occurrence among words. The value of 0.58 obtained here suggests that the words of natural language texts are not strongly or consistently coherent with respect to each other. Nevertheless, the value is well above 0.5, and for the real data listed in Table 1 (first to third blocks), not a single sample gave an exponent as low as 0.5. Although the overall global tendencies in Figure 1 followed power laws, many points deviated significantly from the regression lines. The words with the greatest fluctuation were often keywords. For example, among words in Moby Dick with large µ, those with the largest σ included whale, captain, and sailor, whereas those with the smallest σ included functional words such as to, that, and with. The Taylor exponent depended only slightly on the data size. Figure 2 shows this dependency Figure 2: Taylor exponent ˆα (vertical axis) calculated for the two largest texts: The New York Times and The Mainichi newspapers. To evaluate the exponent’s dependence on the text size, parts of each text were taken and the exponents were calculated for those parts, with points taken logarithmically. The window size was ∆t = 5620. As the text size grew, the Taylor exponent slightly decreased. for the two largest data sets used, The New York Times (NYT, 1.5 billion words) and The Mainichi (24 years) newspapers. When the data size was increased, the exponent exhibited a slight tendency to decrease. For the NYT, the decrease seemed to have a lower limit, as the figure shows that the exponent stabilized at around 107 words. The reason for this decrease can be explained as follows. The Taylor exponent becomes larger when some words occur in a clustered manner. Making the text size larger increases the number of segments (since ∆t was fixed in this experiment). If the number of clusters does not increase as fast as the increase in the number of segments, then the number of clusters per segment becomes smaller, leading to a smaller exponent. In other words, the influence of each consecutive co-occurrence of a particular word decays slightly as the overall text size grows. Analysis of different kinds of data showed how the Taylor exponent differed according to the data source. Figure 3 shows plots for samples from enwiki8 (tagged Wikipedia), the child-directed speech of Thomas (taken from CHILDES), programming language data sets, and music. The distributions appear different from those for the natural language texts, and the exponents were significantly larger. 
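The comparisons behind Figures 1 and 3, and the size-dependence check in Figure 2, all amount to running one estimator over many corpora. A minimal driver, assuming the hypothetical taylor_exponent helper sketched in §3.1 and a deliberately crude tokenizer; the file paths and the tokenization are illustrative stand-ins, not the paper's preprocessing:

```python
import re

def tokenize(path, encoding="utf-8"):
    """Rough whitespace/punctuation tokenization; the paper's handling of
    Chinese and Japanese is necessarily more involved than this."""
    with open(path, encoding=encoding) as f:
        return re.findall(r"\w+|[^\w\s]", f.read())

corpora = {                      # hypothetical local file paths
    "moby_dick": "data/moby_dick.txt",
    "enwiki8": "data/enwiki8.txt",
    "lisp_source": "data/lisp_corpus.txt",
}

for name, path in corpora.items():
    alpha, _ = taylor_exponent(tokenize(path), delta_t=5620)
    print(f"{name}\talpha = {alpha:.2f}")

# The same driver can probe the text-size dependence of Figure 2 by passing
# logarithmically spaced prefixes, e.g. taylor_exponent(tokens[:k], 5620).
```

In the paper's measurements, the Wikipedia dump, child-directed speech, program, and music sources come out well above the roughly 0.58 typical of written natural language, which is the contrast interpreted next.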
This means that these data sets contained expressions with fixed forms much more frequently than did the natural language texts. 1143 (a) enwiki8 (Wikipedia, tagged) (b) Thomas (CHILDES) (c) Lisp (d) Bach’s St Matthew Passion Figure 3: Examples of Taylor’s law for alternative data sets listed in Table 1: enwiki8 (tag-annotated Wikipedia), Thomas (longest in CHILDES), Lisp source code, and the music of Bach’s St Matthew Passion. These examples exhibited larger Taylor exponents than did typical natural language texts. Figure 4 summarizes the overall picture among the different data sources. The median and quantiles of the Taylor exponent were calculated for the different kinds of data listed in Table 1. The first two boxes show results with an exponent of 0.50. These results were each obtained from 10 random samples of the randomized sequences. We will return to these results in the next section. The remaining boxes show results for real data. The exponents for texts from Project Gutenberg ranged from 0.53 to 0.68. Figure 5 shows a histogram of these texts with respect to the value of ˆα. The number of texts decreased significantly at a value of 0.63, showing that the distribution of the Taylor exponent was rather tight. The kinds of texts at the upper limit of exponents for Project Gutenberg included structured texts of fixed style, such as dictionaries, lists of histories, and Bibles. The majority of texts were in English, followed by French and then other languages, as listed in Table 1. Whether α distinguishes languages is a difficult question. The histogram suggests that Chinese texts exhibited larger values than did texts in Indo-European languages. We conducted a statistical test to evaluate whether this difference was significant as compared to English. Since the numbers of texts were very different, we used the non-parametric statistical test of the BrunnerMunzel method, among various possible methods, to test a null hypothesis of whether α was equal for the two distributions (Brunner and Munzel, 2000). The p-value for Chinese was p = 1.24 × 10−16, thus rejecting the null hypothesis at the significance level of 0.01. This confirms that α was generally larger for Chinese texts than for English texts. Similarly, the null hypothesis was rejected for Finnish and French, but it was accepted for German and Japanese at the 0.01 significance level. Since Japanese was accepted despite its large difference from English, we could not conclude whether the Taylor exponent distinguishes languages. Turning to the last four columns of Figure 4, representing the enwiki8, child-directed speech (CHILDES), programming language, and music data, the Taylor exponents clearly differed from those of the natural language texts. Given the template-like nature of these four data sources, the results were somewhat expected. The kind of data thus might be distinguishable using the Taylor exponent. To confirm this, however, would require assembling a larger data set. Applying this approach with Twitter data and adult utterances would produce interesting results and remains for our future work. The Taylor exponent also differed according to ∆t, and Figure 6 shows the dependence of ˆα on ∆t. For each kind of data shown in Figure 4, the mean exponent is plotted for various ∆t. As reported in (Eisler, Bartos, and Kertész, 2007), the exponent is known to grow when the segment size gets larger. 
The reason is that words occur in a bursty, clustered manner at all length scales: no matter how large the segment size becomes, a segment will include either many or few instances of a given word, leading to larger variance growth. This phenomenon suggests how word cooccurrences in natural language are self-similar. The Taylor exponent is initially 0.5 when the segment size is very small. This can be analytically explained as follows (Eisler, Bartos, and Kertész, 2007). Consider the case of ∆t=1. Let n be the frequency of a particular word in a segment. We have ⟨n⟩≪1.0, because the possibility of a specific word appearing in a segment becomes very small. Because ⟨n⟩2 ≈0, σ2 = ⟨n2⟩−⟨n⟩2 ≈ ⟨n2⟩. Because n = 1 or 0 (with ∆t=1), ⟨n2⟩= ⟨n⟩= µ. Thus, σ2 ≈µ. Overall, the results show the possibility of ap1144 Figure 4: Box plots of the Taylor exponents for different kinds of data. Each point represents one sample, and samples from the same kind of data are contained in each box plot. The first two boxes are for the randomized data, while the remaining boxes are for real data, including both the natural language texts and language-related sequences. Each box ranges between the quantiles, with the middle line indicating the median, the whiskers showing the maximum and minimum, and some extreme values lying beyond. Figure 5: Histogram of Taylor exponents for long texts in Project Gutenberg (1129 texts). The legend indicates the languages, in frequency order. Each bar shows the number of texts with that value of ˆα. Because of the skew of languages in the original conception of Project Gutenberg, the majority of the texts are in English, shown in blue, whereas texts in other languages are shown in other colors. The histogram shows how the Taylor exponent ranged fairly tightly around the mean, and natural language texts with an exponent larger than 0.63 were rare. plying Taylor’s exponent to quantify the complexity underlying coherence among words. Grammatical complexity was formalized by Chomsky via the Chomsky hierarchy (Chomsky, 1956), which describes grammar via rewriting rules. The constraints placed on the rules distinguish four different levels of grammar: regular, context-free, context-sensitive, and phrase structure. As indicated in (Badii and Politi, 1997), however, this does not quantify the complexity on a continuous scale. For example, we might want to quantify the complexity of child-directed speech as compared to that of adults, and this could be addressed in only a limited way through the Chomsky hierarchy. Another point is that the hierarchy is sentence-based and does not consider fluctuation in the kinds of words appearing. 5 Evaluation of Machine-Generated Text by the Taylor Exponent The main contribution of this paper is the findings of Taylor’s law behavior for real texts as presented thus far. This section explains the applicability of these findings, through results obtained with baseline language models. As mentioned previously, i.i.d. mathematical processes have a Taylor exponent of 0.50. We show here that, even if a process is not trivially i.i.d., the exponent often takes a value of 0.50 1145 Figure 6: Growth of ˆα with respect to ∆t, averaged across data sets within each data kind. The plot labeled “random” shows the average for the two datasets of randomized text from Moby Dick (shuffled and bigrams, as explained in §5). 
Since this analysis required a large amount of computation, for the large data sets (such as newspaper and programming language data), 4 million words were taken from each kind of data and used here. When ∆t was small, the Taylor exponent was close to 0.5, as theoretically described in the main text. As ∆t was increased, the value of ˆα grew. The maximum ∆t was about 10,000, or about one-tenth of the length of one long literary text. For the kinds of data investigated here, ˆα grew almost linearly. The results show that, at a given ∆t, the Taylor exponent has some capability to distinguish different kinds of text data. (a) Moby Dick (shuffled) (b) Moby Dick (bigram) Figure 7: Taylor analysis of a shuffled text of Moby Dick and a randomized text generated by a bigram model. Both exhibited an exponent of 0.50. for random processes, including texts produced by standard language models such as n-gram based models. A more complete work in this direction is reported in (Takahashi and Tanaka-Ishii, 2018). Figure 7 shows samples from each of two simple random processes. Figure 7a shows the behavior of a shuffled text of Moby Dick. Obviously, (a) Text produced by LSTM (3-layer stacked character-based) (b) Machine-translated text using neural language model Figure 8: Taylor analysis for two texts produced by standard neural language models: (a) a stacked LSTM model that learned the complete works of Shakespeare; and (b) a machine translation of Les Misérables (originally in French, translated into English), from a neural language model. since the sequence was almost i.i.d. following Zipf distribution, the Taylor exponent was 0.50. Given that the Taylor exponent becomes larger for a sequence with words dependent on each other, as explained in §3, we would expect that a sequence generated by an n-gram model would exhibit an exponent larger than 0.50. The simplest such model is the bigram model, so a sequence of 300,000 words was probabilistically generated using a bigram model of Moby Dick. Figure 7b shows the Taylor analysis, revealing that the exponent remained 0.50. This result does not depend much on the quality of the individual samples. The first and second box plots in Figure 4 show the distribution of exponents for 10 different samples for the shuffled and bigram-generated texts, respectively. The exponents were all around 0.50, with small variance. State-of-the-art language models are based on neural models, and they are mainly evaluated by perplexity and in terms of the performance of individual applications. Since their architecture is complex, quality evaluation has become an issue. One possible improvement would be to use an evaluation method that qualitatively differs from judging application performance. One such method is to verify whether the properties underlying natural language hold for texts generated by language models. The Taylor exponent is one such possibility, among various properties of natural language texts. As a step toward this approach, Figure 8 shows two results produced by neural language models. Figure 8a shows the result for a sample of 2 million characters produced by a stan1146 dard (three-layer) stacked character-based LSTM unit that learned the complete works of Shakespeare. The model was optimized to minimize the cross-entropy with a stochastic gradient algorithm to predict the next character from the previous 128 characters. See (Takahashi and TanakaIshii, 2017) for the details of the experimental settings. The Taylor exponent of the generated text was 0.50. 
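The two randomized baselines are straightforward to reproduce in outline: shuffle the token sequence, or sample from a bigram model fit to it, and score the result with the same estimator. A sketch, again assuming the hypothetical taylor_exponent helper from §3.1; the unsmoothed sampler and the 300,000-token target length are simplifications of the setup described above:

```python
import random
from collections import defaultdict

def shuffled_baseline(tokens):
    """Shuffle word order, destroying all co-occurrence structure."""
    out = list(tokens)
    random.shuffle(out)
    return out

def bigram_baseline(tokens, length=300_000):
    """Sample a sequence from an unsmoothed bigram model of the text."""
    successors = defaultdict(list)
    for prev, nxt in zip(tokens, tokens[1:]):
        successors[prev].append(nxt)
    seq = [random.choice(tokens)]
    while len(seq) < length:
        cands = successors.get(seq[-1])
        seq.append(random.choice(cands) if cands else random.choice(tokens))
    return seq

# Both baselines are expected to come out near alpha = 0.50:
# alpha_shuf, _ = taylor_exponent(shuffled_baseline(tokens))
# alpha_bi, _   = taylor_exponent(bigram_baseline(tokens))
```

Both baselines are expected to measure close to 0.50, and, as reported above, the character-based LSTM sample measures 0.50 as well.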
This indicates that the character-level language model could not capture or reproduce the word-level clustering behavior in text. This analysis sheds light on the quality of the language model, separate from the prediction accuracy. The application of Taylor’s law for a wider range of language models appears in (Takahashi and Tanaka-Ishii, 2018). Briefly, state-of-theart word-level language models can generate text whose Taylor exponent is larger than 0.50 but smaller than that of the dataset used for training. This indicates both the capability of modeling burstiness in text and the room for improvement. Also, the perplexity values correlate well with the Taylor exponents. Therefore, Taylor exponent can reasonably serve for evaluating machinegenerated text. In contrast to character-level neural language models, neural-network-based machine translation (NMT) models are, in fact, capable of maintaining the burstiness of the original text. Figure 8b shows the Taylor analysis for a machinetranslated text of Les Misérables (from French to English), obtained from Google NMT (Wu et al., 2016). We split the text into 5000-character portions because of the API’s limitation (See (Takahashi and Tanaka-Ishii, 2017) for the details). As is expected and desirable, the translated text retains the clustering behavior of the original text, as the Taylor exponent of 0.57 is equivalent to that of the original text. 6 Conclusion We have proposed a method to analyze whether a natural language text follows Taylor’s law, a scaling property quantifying the degree of consistent co-occurrence among words. In our method, a sequence of words is divided into given segments, and the mean and standard deviation of the frequency of every kind of word are measured. The law is considered to hold when the standard deviation varies with the mean according to a power law, thus giving the Taylor exponent. Theoretically, an i.i.d. process has a Taylor exponent of 0.5, whereas larger exponents indicate sequences in which words co-occur systematically. Using over 1100 texts across 14 languages, we showed that written natural language texts follow Taylor’s law, with the exponent distributed around 0.58. This value differed greatly from the exponents for other data sources: enwiki8 (tagged Wikipedia, 0.63), child-directed speech (CHILDES, around 0.68), and programming language and music data (around 0.79). These Taylor exponents imply that a written text is more complex than programming source code or music with regard to fluctuation of its components. None of the real data exhibited an exponent equal to 0.5. We conducted more detailed analysis varying the data size and the segment size. Taylor’s law and its exponent can also be applied to evaluate machine-generated text. We showed that a character-based LSTM language model generated text with a Taylor exponent of 0.5. This indicates one limitation of that model. Our future work will include an analysis using other kinds of data, such as Twitter data and adult utterances, and a study of how Taylor’s law relates to grammatical complexity for different sequences. Another direction will be to apply fluctuation analysis in formulating a statistical test to evaluate the structural complexity underlying a sequence. Acknowledgments This work was supported by JST Presto Grant Number JPMJPR14E5 and HITE funding. We thank Shuntaro Takahashi for offering his comments and providing the machine-generated data reported in §5. References Altmann, Eduardo G., Janet B. 
Pierrehumbert, and Adilson E. Motter. 2009. Beyond word frequency: Bursts, lulls, and scaling in the temporal distributions of words. PLOS ONE, 4(11):1–7. An ¯delkovi´c, Darinka, Nada Ševa, and Jasmina Moskovljevi´c. 2001. Serbian Corpus of Early Child Language. Laboratory for Experimental Psychology, Faculty of Philosophy, and Department of General Linguistics, Faculty of Philology, University of Belgrade. Badii, Remo and Antonio Politi. 1997. Complexity: Hierarchical structures and scaling in physics. Cambridge University Press. 1147 Behrens, Heike. 2006. The input-output relationship in first language acquisition. Language and Cognitive Processes, 21:2–24. Benedet, Maria, Celis Cruz, Maria Carrasco, and Catherine Snow. 2004. Spanish BecaCESNo Corpus. TalkBank. Bentz, Christian, Dimitrios Alikaniotis, Michael Cysouw, and Ramon Ferrer i Cancho. 2017. The entropy of words—learnability and edxpressibvity across more than 1000 langauges. Entropy, (6). Bol, Gerard W. 1995. Implicational scaling in child language acquisition: the order of production of Dutch verb constructions, Amsterdam Series in Child Language Development, chapter 3. Amsterdam: Institute for General Linguistics. Brunner, Edgar and Ullrich Munzel. 2000. The nonparametric behrens-fisher problem: Asymtotic theory and a small-sample approximation. Biometrical Journal, 42:17–25. Calif, Rudy and François G. Schmitt. 2015. Taylor law in wind energy data. Resources, 4(4):787–795. Chomsky, Noam. 1956. Three models for the description of language. IRE Transactions on Information Theory, 2:113–124. Cohen, Joel E. and Meng Xu. 2015. Random sampling of skewed distributions implies taylor’s power law of fluctuation scaling. Proceedings of the National Academy of Sciences, 112(25):7749–7754. Ebeling, Werner and Thorsten Pöeschel. 1994. Entropy and long-range correlations in literary english. Europhys. Letters, 26:241–246. Eisler, Zoltán, Imre Bartos, and János Kertész. 2007. Fluctuation scaling in complex systems: Taylor’s law and beyond. Advances in Physics, pages 89– 142. Gerlach, Martin and Eduardo G. Altmann. 2014. Scaling laws and fluctuations in the statistics of word frequencies. New Journal of Physics, 16(11):113010. Gil, David and Uri Tadmor. 2007. The MPI-EVA Jakarta Child Language Database. A joint project of the Department of Linguistics, Max Planck Institute for Evolutionary Anthropology and the Center for Language and Culture Studies, Atma Jaya Catholic University. Heaps, Harold S. 1978. Information Retrieval: Computational and Theoretical Aspects. Academic Press, Inc., Orlando, FL, USA. Hurst, Harold E. 1951. Long-term storage capacity of reservoirs. Transactions of the American Society of Civil Engineers, 116:770–808. Lieven, Elena, Dorothé Salomo, and Michael Tomasello. 2009. Two-year-old children’s production of multiword utterances: A usage-based analysis. Cognitive Linguistics, 20(3):481–508. Lin, Henry W. and Max Tegmark. 2016. Critical behavior from deep dynamics: A hidden dimension in natural language. arXiv preprint, abs/1606.06737. MacWhinney, Brian. 2000. The Childes Project. New York: Psychology Press. Montemurro, Marcelo A. and Pedro A. Pury. 2002. Long-range fractal correlations in literary corpora. Fractals, 10:451–461. Oshima-Takane, Yuriko, Brian MacWhinney, Hidetosi Sirai, Susanne Miyata, and Norio Naka. 1995. CHILDES manual for Japanese. Montreal: McGill University. Plunkett, Kim and Sven Strömqvist. 1992. The acquisition of scandinavian languages. In D. I. 
Slobin, editor, The crosslinguistic study of language acquisition, volume 3. Lawrence Erlbaum Associates, pages 457–556. Rondal, Jean A. 1985. Adult-child interaction and the process of language acquisition. Praeger Publishers. Smith, H. Fairfield. 1938. An empirical law describing hetero-geneity in the yields of agricultural crops. Journal of Agriculture Science, 28(1). Smoczynska, Magdalena. 1985. The acquisition of polish. In D. I. Slobin, editor, The crosslinguistic study of language acquisition. Lawrence Erlbaum Associates, pages 595–686. Takahashi, Shuntaro and Kumiko Tanaka-Ishii. 2017. Do neural nets learn statistical laws behind natural langauge? PLoS One. In press. Takahashi, Shuntaro and Kumiko Tanaka-Ishii. 2018. Assesing language models with scaling properties. arXiv preprint, arXiv:1804.08881. Takahira, Ryosuke, Kumiko Tanaka-Ishii, and Debowski Lukasz. 2016. Entropy rate estimates for natural language : A new extrapolation of compressed large-scale corpora. Entropy, 18(10). Tanaka-Ishii, Kumiko and Shunsuke Aihara. 2015. Text constancy measures. Computational Linguistics, 41:481–502. Tanaka-Ishii, Kumiko and Armin Bunde. 2016. Longrange memory in literary texts: On the universal clustering of the rare words. PLOS One. Online journal. Taylor, L. Roy. 1961. Aggregation, variance and the mean. Nature, 732:189–190. Wu, Yonghui, Mike Schuster, Zhifeng Chen, Quoc Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, and et al. 2016. Google ’s neural machine translation system: Bridging the gap between human and machine translation. arXiv. 1148 Zipf, George K. 1965. Human behavior and the principle of least effort: An introduction to human ecology. Hafner.
2018
105
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1149–1159 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1149 A Framework for Representing Language Acquisition in a Population Setting Jordan Kodner University of Pennsylvania Department of Linguistics, Dept. of Computer and Info. Science [email protected] Christopher M. Cerezo Falco University of Pennsylvania Department. of Electrical and Systems Engineering [email protected] Abstract Language variation and change are driven both by individuals’ internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models’ ability to capture realistic social structure with practically and more elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications for the framework: a test of practical concerns that arise when modeling acquisition in a population setting and an application of the framework to recent work on phonological mergers in progress. 1 Introduction The process of language change should be thought of as a two-step cycle in which 1) individuals acquire their native languages from their predecessors then 2) pass them on to their successors. Small changes accrue over time this way and create both small-scale interpersonal variation and large-scale typological differences. It is easy to draw a strong analogy here between linguistic evolution and biological evolution. Both feature classic descent with modification, except while phenotypes are transmitted through genes and acted on by natural selection, language is both transmitted through and constrained by the individual (Cavalli-Sforza and Feldman, 1981; Ritt, 2004, etc.). But while evolution, linguistic or otherwise, is driven by forces acting on the individual, it unfolds on the level of populations (Cavalli-Sforza and Feldman, 1981). The influence of communitylevel social factors on the path of language change is a major focus of sociolinguistics (Labov, 2001; Milroy and Milroy, 1985; Rogers Everett, 1995). Ideally, one could observe population-level variation unfold in real time while testing out individual factors, but this is impossible because nobody can travel back in time or fit entire natural environments into a lab. Change that has already happened is out of reach, and change in progress is buried in a world of confounds. The classic sociolinguistic method instead approaches the problem by inferring causal factors from patterns discovered in field interviews and corpora (Labov, 1994; Labov et al., 2005, etc.). This is the primary source of empirical data in the field and the only way to look at language change in a naturalistic setting, but it is limited in that it cannot test cause and effect directly. More recently, controlled experimental studies have emerged as a complementary line of research which manipulate causal factors directly (Johnson et al., 1999; Campbell-Kibler, 2009, etc.), but are inherently removed natural time and scale. 
A third approach, the one we build upon here, relies on computational modeling to simulate how sociolinguistic factors might work together in larger populations (Klein, 1966; Blythe and Croft, 2012; Kauhanen, 2016, etc.). It has long been argued that language acquisition is the primary cause of language change (Sweet, 1899; Lightfoot, 1979; Niyogi, 1998, etc.). In the last few decades, this connection has been modeled computationally (Gibson and Wexler, 1994; Kirby et al., 2000; Yang, 2000, 1150 etc.), leading to the strong conclusion that change is the inevitable consequence of mixed linguistic input or finite learning periods (Niyogi and Berwick, 1996), even if children are “perfect” learners. An important result connecting the learner and population emphasizes the need for this line of work: the space of paths of change available in populations is formally larger than the paths available to linear chains of iterated learners. Niyogi and Berwick (2009) prove formally that even perfectly-mixed (i.e., uniform and homogeneous social network) populations admit phase transitions in the path of change unavailable to chains of single learners commonly implemented in iterated learning (Kirby et al., 2000). This suggests that small-population experimental studies in sociolinguistics and in child language acquisition do not paint the full picture of language change. We introduce a new framework for modeling language change in populations. It has an outer loop to represent generational progression, but it replaces the inner loop which calculates randomized interactions between agents with a single formula that is defined generally enough to allow the simulation of a wide range of scenarios. It builds upon the principled formalism described by Niyogi and Berwick (1996, et seq.), privileging the acquisition model and separating it from the population model. The resulting modular framework is described in the following sections. First, Section 1.1 presents a survey of previous simulation work followed by a description of the new population model in Section 2. Next, Section 3 addresses practical concerns relating population size to assumptions about language acquisition. Finally, Section 4 introduces a case study on phonological change which demonstrates the need for appropriate models both of acquisition and populations. 1.1 Related Work Computational models for the propagation of linguistic variation have been employed with a variety of research goals in mind. Every paper implements its own framework with few exceptions, so comparison across studies is difficult. Additionally, since each model is essentially ‘boutique,’ it is always possible that models are designed consciously or unconsciously to achieve a specific outcome rather driven by underlying principles. We group these frameworks into three classes according to their implementation, swarm, network, and algebraic, and discusses their strengths and weaknesses. The first class, called swarm here, models populations as collections of agents placed on a grid. They “swarm” around randomly according to some movement function, and “interact” when they occupy adjacent grid spaces (Satterfield, 2001; Harrison et al., 2002; Ke et al., 2008; Stanford and Kenny, 2013). This tends toward concrete interpretation, for example, more mobile populations are expressed directly by more mobile agents. 
They capture Bloomfield (1933)’s “principle of density” which describes the observation that geographically or socially close individuals interact more frequently than those farther away. On the other hand, they provide little control over network structure, relying on series of explicit movement constraints in order to direct their agents, and since each one moves randomly at each iteration, these models have potentially thousands of degrees of freedom. Such simulations should be run many times if any sort of statistically expected results are to be computed. The second class, network frameworks, model speakers as nodes and interaction probabilities as weighted edges on network graphs (Minett and Wang, 2008; Baxter et al., 2009; Fagyal et al., 2010; Blythe and Croft, 2012; Kauhanen, 2016). These frameworks offer precise control over social network structure and can test specific community models from within sociolinguistics. However, implementations usually proceed by some kind of iterative probabilistic node-pair selection process, and in this way suffer from the same statistical pitfalls as swarm frameworks. In contrast to swarm models, interaction is rigidly restricted to immediately connected nodes, so to achieve gradient interaction probabilities, edges must be frequently updated or nearly fully-connected graphs with carefully assigned edge weights would need to be constructed and motivated. The third class, algebraic frameworks, present analytic methods for determining the state of the network at the end of each iteration rather than relying on stochastic simulation of individual agents (Niyogi and Berwick, 1996, 1997; Yang, 2000; Baxter et al., 2006; Minett and Wang, 2008; Niyogi and Berwick, 2009). Removing that inner loop is a more mathematically elegant approach and avoids dealing unnecessarily with statistics behind random trials. Removing that loop speeds 1151 up calculation as well, making larger simulations more tractable than with network or swarm frameworks. But this power is achieved by sacrificing the social network. Up to this point, such models have, to our knowledge, only been defined over perfectly-mixed (i.e., no network effects) populations. That assumption is useful for reasoning about the mathematical theory behind language change, but it hinders such models’ utility in empirical studies. For example, though Baxter et al. (2006) and Minett and Wang (2008) implement algebraic models for perfectly mixed populations, they fall back on network models to model network effects. 2 Framework for Transmission in Social Networks Algebraic frameworks have their mathematical advantage, but network frameworks provide a richer model for representing real-world population structures and swarm models capture density effects by default. An ideal framework would combine the benefits of all three of these. Here we do just that. We introduce a framework that instantiates Niyogi and Berwick (1996)’s acquisitiondriven formalism where change is handled explicitly as a two-step alternation between individual learners learning and populations interacting. It provides an analytic solution to the state of a network structure over which swarm-like behavior can be modeled. We begin by conceptualizing the framework in terms of agents traveling probabilistically over a network structure as in Algo. 1 before introducing the analytic solution. 
There is an individual standing at every node in the graph, and at every iteration, each individual begins at some location and travels along the network’s edges, at each step deciding to continue on or to stop and interact with the agent at that node. Any two agents with a nonzero weight path between them could potentially interact, so the overall probability of an interaction is a function of the shape of the network and the decay rate of the step probability. The shorter and higher weighted the path between two agents, the more likely they are to interact. This corresponds to the gradient interaction probabilities of swarm frameworks. Algorithm 1: One iteration of the propagation model conceptualized on the level of an individual agent for each individual node do Begin traveling; while traveling do Randomly select an outgoing edge by weight and follow it OR stop travel; increase chance of stopping next time; end Interact with the individual at the current node; end 2.1 Representing the Network Social networks are typically conceived of as graph structures with individuals as vertices and the social or geographical connections between individuals as edges, and this allows for a great deal of flexibility. If edges are undirected, then all interactions are equal and bidirectional, but if edges are directed, interactions may or may not be. Edges can be weighted to represent likelihood of interaction or some measure of social valuation, and this too can vary over time. Lastly, it is possible to add and remove nodes themselves to capture births, deaths, or migration. The network structure is represented computationally here as an adjacency matrix A. In a population of n individuals, this is n × n where each element aij is the weight of the connection from individual j to individual i. The matrix must be column stochastic (all columns sum to 1 and contain only positive elements) so that edge weights can be interpreted as probabilities. The special case where the matrix is symmetric (every aij = aji) models undirected edges, and more strongly, the model reduces to perfectly-mixed populations when each aij = 1 n. We define a notion of communities over the nodes of the network in order to add the option to categorize groups of individuals. Membership among c communities is identified with an n × c indicator matrix C. Depending on the problem at hand, it is possible to calculate the average behavior of the learners within each community directly without having to calculate the behavior of each individual member. 1152 2.2 Propagation in the Network In a typical network model, the edge weights between nodes in A are interpreted directly as interaction probabilities, meaning that individuals only ever interact with their immediate graph neighbors. We take a different approach by allowing the agents to “travel” and potentially interact with any other agent whose node is connected by a path of non-zero edges. If the number of traveling steps were fixed at k, the probability of each pair interacting would be defined as Ak. It is more complicated for us since the number of steps traveled is a random variable. The probability of j interacting with i (p(ij)) is the probability of them interacting after k steps times the probability of k for all values of k as in Eqn. 1. Combining this intuition with A yields the interaction probabilities for all i, j pairs. 
p(ij) = \sum_{k} p(ij \mid k \text{ steps})\, p(k \text{ steps})    (1)

The pattern of linguistic variants or grammars (in the formal sense where grammar g is the intensional equivalent of language L_g) within a network unfolds as a dynamical system over the course of many iterations, and learners' positions within the network mediate which ones they eventually acquire. In a system with g grammars and n individuals, an n × g row-stochastic matrix G specifies the probability with which each community expresses each grammar. Given this notion of interaction and the specification of grammars expressed within a network, it is possible to compute the distribution of grammars presented to each learner. This is the learners' linguistic environment and is represented by a matrix E in the same form as G⊤. An environment function En(Gt, A) = Et+1, shown in Eqn. 2, calculates E by first calculating all the interaction probabilities in the network and then multiplying those by the grammars which every agent expresses to get the environment E. The α parameter from the geometric distribution[1] defines the travel decay rate. A lower α defines conceptually more mobile agents. More generally, En is a special case of E(Gt, Ct, At) = Et+1 where the number of communities equals the number of individuals (c = n). C becomes the identity matrix without loss of generality, so the network's initial condition does not have to be defined explicitly. For any other community definition, an initial condition has to be defined as in Eqn. 3, which specifies the starting point in the network that each agent conceptually begins traveling from. The output of E is a g × c matrix giving the environment of the average agent in each community.[2]

Footnote 1: In this paper, jump probabilities decay according to a geometric distribution, but other distributions including the Poisson have been implemented as well.
Footnote 2: (I − (1 − α)A)^{-1} and C(C⊤C)^{-1} can be precomputed if network structure does not change over time.

E_n(G_t, A) = G_t^{\top} \alpha (I - (1 - \alpha)A)^{-1}    (2)
E(G_t, C, A) = E_n(G_t, A)\, C (C^{\top} C)^{-1}    (3)

The output of E must be broadcast to g × n, which would result in the loss of some information unless the assumption can be made that each community is internally uniform. However, when that assumption can be made, the n × n adjacency matrix admits a c × c equitable partition A^π (Eqn. 4) (Schaub et al., 2016), which permits an alternate environment function E_EP(Gt, C, A), shown in Eqn. 5, that is equivalent to the lossless En if A. If n ≫ c, E_EP is much faster to calculate because it only inverts a small c × c matrix rather than a large n × n one. This makes it feasible to run much larger simulations than what has been done in the past.

A^{\pi} = (C^{\top} C)^{-1} C^{\top} A C    (4)
E_{EP} = \alpha G^{\top} C (I - (1 - \alpha)A^{\pi})^{-1} (C^{\top} C)^{-1}    (5)

2.3 Learning in the Network

The environment function describes what inputs E_{t+1} are available to learners given the language expressed by the mature speakers of the previous age cohort with grammars G_t. The second component of the framework describes the learning algorithm A(E_{t+1}) = G_{t+1}: how individuals respond to their input environment. The resulting G_{t+1} describes which grammars those learners will eventually contribute to the subsequent generation's environment E_{t+2}. This back-and-forth between adults' grammars G and children's environment E is the two-step cycle of language change (Fig. 1). In neutral change, learners would acquire grammars at the rates that they are expressed in their environments, but there is good reason to believe
. . . G_t → E_{t+1} → G_{t+1} . . . G_{t+i} → E_{t+i+1} . . .
Figure 1: Language change as an alternation between G and E matrices that most language change involves differential fitness between competing variants, and most nontrivial learning algorithms yield some kind of fitness (Kroch, 1989; Yang, 2000; Blythe and Croft, 2012, etc.), so A is rarely neutral. A neutral and simple advantaged model are both considered in Section 3, and a more complex learning algorithm is described for Section 4. 3 Application: Testing Assumptions The general nature of the framework described here renders it suitable for reproducing the results of previous works and evaluating their assumptions. To demonstrate this, we reproduce the major result from Kauhanen (2016), which tested the behavior of neutral change in networks of singlegrammar learners, in order to dissect two of its primary assumptions. Implemented in a typical network framework, the original setup contains n = 200 individuals in probabilistically generated centralized networks in which individuals mature categorically to the single most frequent grammar in their input. The author found that categorical neutral change produced chaotic paths of change regardless of network shape and that periodically “rewiring” some of the network edges smoothed this out. Without commenting on rewiring, we find that the combination of n and choice of categorical learners conspire to create the chaotic results. We create two communities, both centralized along the lines of the single cluster in Kauhanen (2016), initialize all members of cluster 1 with grammar g1 and all members of cluster 2 with grammar g2, and additional edges are added between members of clusters 1 and 2 to allow interaction. G is converted to an indicator matrix at the end of each learning iteration by rounding values to 0 and 1 in order to model categorical learners who only internalize the most common grammar in their inputs as in the original model. In a pair of infinitely large clusters or two clusters where individuals are permitted to learn a probabilistic distribution of grammars, each cluster should homogenize to a 50/50 distribution of g1 and g2 after some number of iterations depending on the specifics of the network shape and setting for α creating the red curves in Fig. 2. At n = 20000, each of 10 trials roughly follows the path of the predicted curve, but when run at the original n = 200 for 10 trials, this produces the type of chaotic behavior which Kauhanen (2016) attempts to repair. The outcome appears to be the result of an assumption made out of convenience (n = 200) rather than a principled decision. Figure 2: Predicted curve (red); neutral change at n = 200 (left; Kauhanen (2016)); neutral change at n = 20000 (right) To further explore the impact of the population size assumption, we experiment on a model of advantaged change, which is typically contrasted with neutral change because of its tendency to produce “well-behaved” S-curve change (Blythe and Croft, 2012; Kauhanen, 2016). This time, only a single cluster is created, and the advantaged grammar is initially assigned to 1% of the population. As seen in Figure 3, results are chaotic for n = 200 once again and near predicted for n = 20000. This is important because at n = 200, advantaged change is chaotic, and most simulations both rise and fall. An experimenter who only studied advantaged change in small population might concluded that it is as ill-behaved as neutral change. 
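For concreteness, the two-cluster test just described can be sketched with the environment function of Eqn. 2 plus a hard rounding step for the categorical learners. Everything below is an illustrative reconstruction rather than the authors' code: the star-shaped cluster builder, the edge weights, the random cross ties, the 50 iterations, and the choice of α = 0.45 (a value the paper reports only for the later merger experiment) are all our assumptions.

```python
import numpy as np

def environment(G, A, alpha=0.45):
    """Eqn. 2: E = G^T * alpha * (I - (1 - alpha) A)^(-1).
    G is n x g row-stochastic, A is n x n column-stochastic; the result is
    the g x n matrix of input distributions facing each learner."""
    n = A.shape[0]
    return G.T @ (alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * A))

def categorical_learners(E):
    """Each learner internalizes only the most frequent grammar in its input
    (the rounding-to-an-indicator-matrix step described above)."""
    G = np.zeros(E.T.shape)
    G[np.arange(G.shape[0]), E.argmax(axis=0)] = 1.0
    return G

def star_cluster(size):
    """A centralized (hub-and-spoke) cluster as a weighted adjacency block."""
    A = np.full((size, size), 0.01)
    A[0, :] = A[:, 0] = 1.0          # everyone is strongly tied to the hub
    np.fill_diagonal(A, 0.0)
    return A

def two_cluster_network(size, seed=0):
    """Two centralized clusters joined by a handful of cross edges."""
    n = 2 * size
    A = np.zeros((n, n))
    A[:size, :size] = star_cluster(size)
    A[size:, size:] = star_cluster(size)
    rng = np.random.default_rng(seed)
    for _ in range(size // 10):          # sparse inter-cluster ties
        i, j = rng.integers(0, size), rng.integers(size, n)
        A[i, j] = A[j, i] = 0.5
    return A / A.sum(axis=0, keepdims=True)   # make columns stochastic

size = 100                                # 2 * size individuals in total
A = two_cluster_network(size)
G = np.zeros((2 * size, 2))
G[:size, 0] = 1.0                         # cluster 1 starts with g1
G[size:, 1] = 1.0                         # cluster 2 starts with g2
for t in range(50):                       # one iteration = one learning cohort
    G = categorical_learners(environment(G, A))
```

Because A is column stochastic and α > 0, the spectral radius of (1 − α)A stays below 1, so I − (1 − α)A is always invertible and the environment is defined for any network shape; for a condition like n = 20000, the equitable-partition form of Eqn. 5 could keep the matrix inversion small.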
While the conclusions that Kauhanen (2016) draws appear valid for n = 200, it is not clear to what extent they can be projected onto larger populations. This demonstrates the need for carefully choosing one’s modeling assumptions and testing them out when possible. 4 Application: Mergers in Progress The acquisition of phonological mergers in mixed input settings presents an interesting problem. It appears that mergers have an inherent advantage because they tend to spread at the expense of distinctions, and once they begin, they are rarely reversed (Labov, 1994). Yang (2009)’s acquisition model quantifies this advantage as the relatively 1154 Figure 3: predicted curve (red); advantaged change at n = 200 (left; cf. Kauhanen (2016)); advantaged change at n = 20000 (right) lower chance of misinterpretation if a listener assumes the merged grammar instead of the nonmerged grammar once a sufficient proportion of the environment is merged. Applied to Johnson (2007)’s detailed population study of the frontier of the COT-CAUGHT merger in the small towns along the border between Rhode Island and Massachusetts, this accurately predicts the ratio of merged input for a child to acquire the merged grammar, however when applied to a perfectly mixed population of learners, it fails to model the spread of the merged grammar in the population. Yang’s model is input-driven, so it is conducive to simulation with minimal assumptions past those drawn from the empirical data. We test the behavior of this learning model in a typical population network and demonstrate that it produces a reasonable path of change. 4.1 Background The COT-CAUGHT merger, also called the low back merger describes the phenomenon present in varieties of North American English spoken in eastern New England, western Pennsylvania, the American West, and Canada among others where the vowel in words like cot and the vowel in words like caught have come to be pronounced the same (Labov et al., 2005, pp. 58-65). The geographical extent of the merger is currently expanding, which might be expected if the merger has a cognitive or social advantage associated with it. Johnson (2007)’s study of the merger’s frontier on the border Rhode Island and Massachusetts uncovered an interesting social dynamic that illustrates the merger’s speed: there are families where the parents and older siblings non-merged, but the younger siblings are. The merger has swept through in only a few years and passed between the siblings. Yang (2009) seeks to understand why mergers have an advantage from a cognitive perspective, and his model treats the acquisition of mergers as an evolutionary process. Learners who receive both merged (M+) and non-merged (M−) input entertain both a merged (g+) and non-merged (g−) grammar and reward whichever grammar successfully parses the input. This kind of variational learner (Yang, 2000) is essentially an adaptation of the classic evolutionary Linear Reward Punishment model (Bush and Mosteller, 1953). The fitness of each grammar is the probability in the limit that it will fail to parse any given input, and since it is virtually always the case that this probability is different for both grammars, fitness is virtually always asymmetric. The variational learner is characterized as follows. Given two grammars and an input token s, The learner parses s with g1 with probability p and with g2 with probability q = 1 −p. 
p is rewarded according to whether the choice of g successfully parses s (g → s) or it fails to (g ↛ s), where γ is some small constant:

$$p' = \begin{cases} p + \gamma q, & g \to s \\ (1 - \gamma)\,p, & g \not\to s \end{cases}$$

Given a specific problem, one can calculate a penalty probability C for each g, the proportion of input that would cause g ↛ s. The grammar with the lower C has the advantage, so the other one will be driven down in the long run. C can be estimated from type frequencies in a corpus, and the model is non-parametric because these values do not depend on γ:

$$\lim_{t \to \infty} p_t = \frac{C_2}{C_1 + C_2} \qquad \lim_{t \to \infty} q_t = \frac{C_1}{C_1 + C_2}$$

To understand the COT-CAUGHT merger empirically, one must reason about what kind of input would trigger a penalty and then calculate the penalty probabilities of the merged grammar C+ and non-merged grammar C− from a corpus. This model considers parsing failure to be the rate of initial misinterpretation, and for a vowel merger, the only inputs that could create an initial misinterpretation are minimal pairs because they become homophones. Examples of COT-CAUGHT minimal pairs include cot-caught, Don-Dawn, stock-stalk, odd-awed, collar-caller, and so on.

The merged g+ grammar collapses would-be minimal pairs into homophones, so the penalty rate C+ comes down to lexical access. Under the observation that more frequent homophones are retrieved first regardless of syntactic context (Caramazza et al., 2001), g+ listeners only suffer initial misinterpretation when the less frequent member of a pair is uttered, regardless of the rate of M+. If H is the sum token frequency of all minimal pairs and $h^i_o$, $h^i_{oh}$ are the frequencies of the ith pair's members, then C+ is calculated by Eqn. 6. In contrast, g− listeners are sensitive to the phonemic distinction, so they misinterpret M− input at the rate of mishearing one vowel for the other, ϵ (Peterson and Barney, 1952) (second half of Eqn. 7). And given M+ input, they misinterpret whenever they hear the phoneme which g− does not expect (e.g., a merged speaker pronouncing cot with the CAUGHT vowel) times the probability of not mishearing that vowel (1 − ϵ), plus ϵ times the probability of hearing the right vowel (i.e., the merged speaker pronounces cot with the COT vowel but it is misheard anyway) (first half of Eqn. 7). Since g− misinterpretation rates are a function of the rate of M+ (p) in the environment, there is a threshold of M+ speakers above which the merged grammar has a fitness advantage over the non-merged one.

$$C_+ = \frac{1}{H} \sum_i \min\!\left(h^i_o,\, h^i_{oh}\right) \qquad (6)$$

$$C_- = \frac{1}{H} \sum_i \Big[\, p_0\big((1 - \epsilon_{oh})\,h^i_o + \epsilon_{oh}\,h^i_{oh}\big) + q_0\big(\epsilon_{oh}\,h^i_o + \epsilon_{oh}\,h^i_{oh}\big) \Big] \qquad (7)$$

Calculating this threshold for the frequent minimal pairs that Yang extracts from the Wortschatz project (Biemann et al., 2004) corpus³ and mishearing rates from Peterson and Barney (1952), the Yang model predicts that a learner exposed to at least ∼17% COT-CAUGHT-merged input will acquire the merger. This threshold represents a strong advantage for M+ because it is well under the 50% threshold expected for neutral (non-advantaged) change and it is very close to what was found in Johnson (2007)'s sociolinguistic study.
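As a concrete check of how such a threshold is obtained, the sketch below evaluates Eqns. 6 and 7 over a list of minimal-pair frequencies (the Wortschatz counts reported in footnote 3 below) and scans for the smallest proportion of merged input p0 at which C+ ≤ C−. The mishearing rate ϵ is left as a free parameter: the paper takes its values from Peterson and Barney (1952), which are not reproduced in this text, so the printed thresholds are illustrative only and should not be read as recovering the ∼17% figure.

```python
# (h_o, h_oh) token frequencies per minimal pair, taken from the Wortschatz
# counts listed in footnote 3 below.
PAIRS = [(1052, 736), (403, 23), (25, 195), (830, 80), (67, 260), (9, 1327),
         (39, 2444), (258, 31), (25, 127), (180, 53), (30, 37)]

def penalties(p0, eps):
    """C+ and C- as in Eqns. 6-7.  p0 = proportion of merged (M+) input,
    eps = probability of mishearing one low back vowel as the other."""
    H = sum(o + oh for o, oh in PAIRS)
    q0 = 1.0 - p0
    c_plus = sum(min(o, oh) for o, oh in PAIRS) / H
    c_minus = sum(p0 * ((1 - eps) * o + eps * oh) + q0 * (eps * o + eps * oh)
                  for o, oh in PAIRS) / H
    return c_plus, c_minus

def threshold(eps, step=0.001):
    """Smallest p0 at which g+ is at least as fit as g- (C+ <= C-)."""
    p0 = 0.0
    while p0 <= 1.0:
        c_plus, c_minus = penalties(p0, eps)
        if c_plus <= c_minus:
            return round(p0, 3)
        p0 += step
    return None

for eps in (0.01, 0.05, 0.10):          # placeholder mishearing rates
    print(eps, threshold(eps))
```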
The model predicts that younger children may have g+ while their parents and even older siblings have g− if the 17% threshold was crossed in E after the acquisition period of the older sibling but before that of the younger sibling.³

³ Don (1052) – Dawn (736); collar (403) – caller (23); knotty (25) – naughty (195); odd (830) – awed (80); Otto (67) – auto (260); tot (9) – taught (1327); cot (39) – caught (2444); pond (258) – pawned (31); hock (25) – hawk (127); nod (180) – gnawed (53); sod (30) – sawed (37)

4.2 Model Setup

All the mechanics behind the learning model reduce to a simple statement: learners acquire g+ iff more than 17% of their input is M+, and they acquire g− otherwise. However, this kind of categorical learner in a perfectly-mixed population leads to immediate fixation at either g− or g+ in a single iteration, since the proportion of g+ speakers in the population is equivalent to the proportion of M+ input in every learner's environment. This is not realistic change. Clearly, social network structure is at least as important as the learning algorithm in modeling the spread of the merger.

We model the change in a non-uniform social network of 100 centralized clusters of 75 individuals each. 75 was chosen as half Dunbar's number, the maximum number of reliable social connections that an adult can maintain (Dunbar, 2010). There are two grammars, g+ and g−, and learners internalize one or the other according to the 17% threshold of M+ in their input. One cluster represents the source of the merger and is initialized at 100% g+, while the rest begin 100% g−. Inter-cluster connections are chosen randomly so that some connections are between central members of the clusters and some are between peripheral members. The one merged cluster is connected to half the other clusters, representing those at the frontier of the change, and each other cluster is connected to five randomly chosen ones.⁴ This network structure echoes work in sociolinguistics, in particular, Milroy and Milroy (1985)'s notion of strong and weak connections in language change, where weak connections between social clusters are particularly important for propagation of a change.

⁴ Originally, the clusters were set up as a "stepping-stone" chain with the merged community at one end, and that produced a similar S-curve. The structure presented here is more geographically plausible but not crucial for the results.

Propagation of the merged grammar is calculated by E_n because we are interested in the behavior of individuals without loss of precision and because it cannot be assumed that each cluster is internally uniform.⁵ Since the spread of the merger has been rapid enough to detect over a period of a few years, iterations are modeled as short age cohorts rather than full generations in the first experiments by updating only a randomly chosen 10% of nodes at each iteration, because only a fraction of the population is learning at any given time. A model where every node is updated is investigated as well.

⁵ α = 0.45.

4.3 Results

The behavior of this simulation is shown graphically in Figure 4. The fine/colored lines indicate the rate of M+ within each initially non-merged cluster, and the bold/black line shows the average rate across all initially non-merged clusters. The merger spreads from cluster to cluster in succession over the "weak" inter-cluster connections and through each cluster over the "strong" connections before moving on to the next ones.
Figure 4: Spread of merger across communities (fine/colored) and population average (bold/black)

Most individual clusters exhibit a period of time in which only a few early adopter (Rogers Everett, 1995) members have the merger, a period of rapid diffusion of the merger, then some time where a few laggards resist the merger. As a result, most clusters exhibit an S-like shape. A few clusters change rapidly because of their especially well-connected positions in the network, and some lag behind the rest because they are poorly connected to the rest of the network. More interestingly, the population-wide average, the population-level data at the kind of granularity that is often studied, yields a smooth S-curve with a shallower slope than the individual clusters. The fact that it arises naturally here in a network that conforms with typical network shapes but was otherwise randomly generated is encouraging because the experiment was not set up so that it would produce such a curve, and the steep rate of change in individual clusters is what is expected for a change that is rapid enough to affect siblings differently.

In the above simulation, only a fraction of nodes were updated at each iteration in order to model a rapid change. In order to confirm that this choice is not affecting the results and to test a purer implementation of the framework presented here, we remove that constraint and update every node at each iteration. Figure 5 shows what happens over 20 iterations in a network that is otherwise identical but with 2/5 as many inter-cluster connections as the original. A qualitatively similar pattern arises, so the choice to update only a fraction of the population is not crucially affecting the results.

Figure 5: Spread of merger across communities (fine/colored) and population average (bold/black)

In all experiments so far, social connections were fixed at the first iteration even though connections in real populations tend to change over time. To investigate that modeling assumption, we perform another simulation in which connections are randomly updated both within and across clusters at each iteration, akin to Kauhanen (2016)'s rewiring. The result, as shown in Figure 6, is similar to before, with one major difference. The individual clusters transition more closely in time because no individual cluster remains poorly connected or especially well connected throughout the entire simulation.

Figure 6: Spread of merger within communities (fine/colored) and as population average (bold/black). Network updated.

Finally, we test our assumptions about population size by repeating the experiments on a smaller network of 40 clusters of 18 individuals. The results are qualitatively similar, but the S-curve appears to be more sensitive to probabilistic connections in the network. To explore this, we present the average network-wide rate of M+ across 10 trials, revealing that an S-like curve is formed each time but that its slope varies. A few trials never reach 100% because some of the clusters are not connected to the innovative one. The slope varies between trials, indicating that the rate of change is a function of both the population structure and the learning algorithm, but the network size does not substantially affect these results.
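The aggregation effect noted above (steep cluster-level transitions averaging into a shallower population-level S-curve) can be illustrated independently of the learning model. The sketch below uses synthetic logistic curves with staggered onsets rather than the simulation's actual output, so it only demonstrates the arithmetic of averaging many staggered S-curves, not the model itself.

```python
import math, random

random.seed(0)

def cluster_curve(t, onset, rate=1.5):
    """A steep S-curve for one cluster: proportion of g+ speakers at iteration t."""
    return 1.0 / (1.0 + math.exp(-rate * (t - onset)))

onsets = [random.uniform(5, 30) for _ in range(99)]   # staggered per-cluster transitions

for t in range(0, 40, 5):
    population = sum(cluster_curve(t, o) for o in onsets) / len(onsets)
    print(t, round(population, 3))
# Each cluster moves from ~0 to ~1 within a few iterations of its own onset,
# while the population average climbs gradually across the whole 5-30 window:
# a smooth S-curve with a much shallower slope than any individual cluster.
```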
Figure 7: Single small network trial (left); average curves from 10 trials (right)

5 Discussion

The algebraic-network framework for modeling population-level language change presented here has substantial practical and theoretical advantages over previous ones. It is much simpler computationally than previous frameworks because it calculates the statistically expected behavior of each generation analytically and therefore removes the entire inner loop of calculating stochastic inter-agent interactions from the simulation. It follows the Niyogi and Berwick (1996) formalism for language change, which presents a clean and modular way of reasoning about the problem and promotes the centrality of language acquisition. In addition to the core algorithm, the framework offers enough flexibility to represent a wide variety of processes, from the highly abstract (e.g., Kauhanen (2016)) to those grounded in sociolinguistic and acquisition research (e.g., Yang (2009)).

In our investigation of Kauhanen's basic assumptions, we discover how seemingly innocuous decisions about population size and learning conspire to drive simulation results. If learners are conceived as categorical learners, population size becomes a deciding factor in the path of change. So while the original results are interesting and meaningful, they may only be valid for small (on the order of 10²) populations.

In our simulation of the spread of the COT-CAUGHT merger, we show how a cognitively-motivated model of acquisition requires a network model in order to represent population-level language change. The population is represented as a collection of individual clusters based on sociological work, but the clusters themselves are connected randomly. The fact that S-curves arise naturally from these networks underscores their centrality to language change.

One problem that this line of simulation work has always faced has been the lack of viable comparison between models, because every study implements its own learning, network, and interaction models. The modular nature of our framework pushes against this trend, since it is now possible to hold the population model constant while slotting in various learning models to test them against one another, and vice versa. Finally, since this framework reduces to Niyogi & Berwick's models in perfectly-mixed populations, it can be used to reason about the formal dynamics of language change as well.

Without simulation, it would be difficult or impossible to uncover the interplay between acquisition and social structure in the propagation of language change. Neither factor alone can account for the theoretical or empirically observed patterns. Simulations of this kind, which explicitly model both simultaneously, are well equipped to provide insights that fieldwork and laboratory work cannot. As such, they are an invaluable complement to those more traditional methodologies.

Acknowledgments

We thank Charles Yang for his input and audiences at FWAV 4 and DiGS 19 for comments on earlier versions of this work. This research was funded by an NDSEG fellowship awarded to the first author by the US Dept. of Defense.

References

Gareth J Baxter, Richard A Blythe, William Croft, and Alan J McKane. 2006. Utterance selection model of language change. Physical Review E, 73(4):046118.

Gareth J Baxter, Richard A Blythe, William Croft, and Alan J McKane. 2009. Modeling language change: an evaluation of Trudgill's theory of the emergence of New Zealand English. Language Variation and Change, 21(02):257–296.
Christian Biemann, Stefan Bordag, Gerhard Heyer, Uwe Quasthoff, and Christian Wolff. 2004. Language-independent methods for compiling monolingual lexical data. In International Conference on Intelligent Text Processing and Computational Linguistics, pages 217–228. Springer. Leonard Bloomfield. 1933. Language history: from Language (1933 ed.). Holt, Rinehart and Winston. Richard A Blythe and William Croft. 2012. S-curves and the mechanisms of propagation in language change. Language, 88(2):269–304. Robert R Bush and Frederick Mosteller. 1953. A mathematical model for simple learning. In Selected Papers of Frederick Mosteller, pages 221– 234. Springer. Kathryn Campbell-Kibler. 2009. The nature of sociolinguistic perception. Language Variation and Change, 21(1):135–156. Alfonso Caramazza, Albert Costa, Michele Miozzo, and Yanchao Bi. 2001. The specific-word frequency effect: implications for the representation of homophones in speech production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 27(6):1430. Luigi Luca Cavalli-Sforza and Marcus W Feldman. 1981. Cultural transmission and evolution: a quantitative approach. 16. Princeton University Press. Robin Dunbar. 2010. How many friends does one person need?: Dunbar’s number and other evolutionary quirks. Faber & Faber. Zsuzsanna Fagyal, Samarth Swarup, Anna Mar´ıa Escobar, Les Gasser, and Kiran Lakkaraju. 2010. Centers and peripheries: Network roles in language change. Lingua, 120(8):2061–2079. Edward Gibson and Kenneth Wexler. 1994. Triggers. Linguistic inquiry, 25(3):407–454. K David Harrison, Mark Dras, Berk Kapicioglu, et al. 2002. Agent-based modeling of the evolution of vowel harmony. In PROCEEDINGS-NELS, 32; VOL 1, pages 217–236. Daniel E Johnson. 2007. Stability and change along a dialect boundary: The low vowel mergers of southeastern new england. University of Pennsylvania Working Papers in Linguistics, 13(1):7. Keith Johnson, Elizabeth A Strand, and Mariapaola D’Imperio. 1999. Auditory–visual integration of talker gender in vowel perception. Journal of Phonetics, 27(4):359–384. Henri Kauhanen. 2016. Neutral change. Journal of Linguistics, pages 1–32. Jinyun Ke, Tao Gong, and William SY Wang. 2008. Language change and social networks. Communications in Computational Physics, 3(4):935–949. ed. Knight Chris Kirby, Simon, Michael StuddertKennedy, and James Hurford. 2000. The evolutionary emergence of language: social function and the origins of linguistic form. Cambridge University Press. Sheldon Klein. 1966. Historical change in language using monte carlo techniques. Mechanical Translation and Computational Linguistics, 9(3):67–81. Anthony S Kroch. 1989. Reflexes of grammar in patterns of language change. Language variation and change, 1(03):199–244. William Labov. 1994. Principles of language change: Internal factors. William Labov. 2001. Principles of language change: Social factors. Malden, MA: Blackwell. William Labov, Sharon Ash, and Charles Boberg. 2005. The atlas of North American English: Phonetics, phonology and sound change. Walter de Gruyter. David W Lightfoot. 1979. Principles of diachronic syntax. Cambridge Studies in Linguistics London, 23. James Milroy and Lesley Milroy. 1985. Linguistic change, social network and speaker innovation. Journal of linguistics, 21(02):339–384. James W Minett and William SY Wang. 2008. Modelling endangered languages: The effects of bilingualism and social structure. Lingua, 118(1):19–45. Partha Niyogi. 1998. The logical problem of language change. 
In The Informational Complexity of Learning, pages 173–205. Springer. Partha Niyogi and Robert C Berwick. 1996. A language learning model for finite parameter spaces. Cognition, 61(1):161–193. Partha Niyogi and Robert C Berwick. 1997. A dynamical systems model for language change. Complex Systems, 11(3):161–204. Partha Niyogi and Robert C Berwick. 2009. The proper treatment of language acquisition and change in a population setting. Proceedings of the National Academy of Sciences, 106(25):10124–10129. 1159 Gordon E Peterson and Harold L Barney. 1952. Control methods used in a study of the vowels. The Journal of the acoustical society of America, 24(2):175– 184. Nikolaus Ritt. 2004. Selfish sounds and linguistic evolution: A Darwinian approach to language change. Cambridge University Press. M Rogers Everett. 1995. Diffusion of innovations. New York, 12. Teresa Satterfield. 2001. Toward a sociogenetic solution: Examining language formation processes through swarm modeling. Social Science Computer Review, 19(3):281–295. Michael T Schaub, Neave O’Clery, Yazan N Billeh, Jean-Charles Delvenne, Renaud Lambiotte, and Mauricio Barahona. 2016. Graph partitions and cluster synchronization in networks of oscillators. Chaos: An Interdisciplinary Journal of Nonlinear Science. James N Stanford and Laurence A Kenny. 2013. Revisiting transmission and diffusion: An agent-based model of vowel chain shifts across large communities. Language Variation and Change, 25(2):119. Henry Sweet. 1899. The practical study of languages. London: Dent. Charles Yang. 2009. Population structure and language change. Ms., University of Pennsylvania. Charles D Yang. 2000. Internal and external forces in language change. Language variation and change, 12(03):231–250.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1160–1170 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1160 Prefix Lexicalization of Synchronous CFGs using Synchronous TAG Logan Born and Anoop Sarkar Simon Fraser University School of Computing Science {loborn,anoop}@sfu.ca Abstract We show that an ε-free, chain-free synchronous context-free grammar (SCFG) can be converted into a weakly equivalent synchronous tree-adjoining grammar (STAG) which is prefix lexicalized. This transformation at most doubles the grammar’s rank and cubes its size, but we show that in practice the size increase is only quadratic. Our results extend Greibach normal form from CFGs to SCFGs and prove new formal properties about SCFG, a formalism with many applications in natural language processing. 1 Introduction Greibach normal form (GNF; Greibach, 1965) is an important construction in formal language theory which allows every context-free grammar (CFG) to be rewritten so that the first character of each rule is a terminal symbol. A grammar in GNF is said to be prefix lexicalized, because the prefix of every production is a lexical item. GNF has a variety of theoretical and practical applications, including for example the proofs of the famous theorems due to Shamir and Chomsky-Sch¨utzenberger (Shamir, 1967; Chomsky and Sch¨utzenberger, 1963; Autebert et al., 1997). Other applications of prefix lexicalization include proving coverage of parsing algorithms (Gray and Harrison, 1972) and decidability of equivalence problems (Christensen et al., 1995). By using prefix lexicalized synchronous context-free grammars (SCFGs), Watanabe et al. (2006) and Siahbani et al. (2013) obtain asymptotic and empirical speed improvements on a machine translation task. Using a prefix lexicalized grammar ensures that target sentences can be generated from left to right, which allows the use of beam search to constrain their decoder’s search space as it performs a left-to-right traversal of translation hypotheses. To achieve these results, new grammars had to be heuristically constrained to include only prefix lexicalized productions, as there is at present no way to automatically convert an existing SCFG to a prefix lexicalized form. This work investigates the formal properties of prefix lexicalized synchronous grammars as employed by Watanabe et al. (2006) and Siahbani et al. (2013), which have received little theoretical attention compared to non-synchronous prefix lexicalized grammars. To this end, we first prove that SCFG is not closed under prefix lexicalization. Our main result is that there is a method for prefix lexicalizing an SCFG by converting it to an equivalent grammar in a different formalism, namely synchronous tree-adjoining grammar (STAG) in regular form. Like the GNF transformation for CFGs, our method at most cubes the grammar size, but we show empirically that the size increase is only quadratic for grammars used in existing NLP tasks. The rank is at most doubled, and we maintain O(n3k) parsing complexity for grammars of rank k. We conclude that although SCFG does not have a prefix lexicalized normal form like GNF, our conversion to prefix lexicalized STAG offers a practical alternative. 
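As a small, self-contained illustration of the property at stake (not an example from this paper): the left-recursive CFG S → S a | b and its Greibach normal form counterpart S → b A | b, A → a A | a generate the same language b a*, but only the latter begins every production with a terminal. The sketch below, whose grammar encoding is its own assumption, checks the prefix-lexicalization property and compares the strings generated up to a bounded number of rewrites.

```python
# Grammars as {nonterminal: productions}; nonterminals are uppercase, terminals lowercase.
LEFT_REC = {"S": [("S", "a"), ("b",)]}                       # S -> S a | b
GNF      = {"S": [("b", "A"), ("b",)], "A": [("a", "A"), ("a",)]}

def prefix_lexicalized(grammar):
    """True iff every production starts with a terminal symbol."""
    return all(prod[0].islower() for prods in grammar.values() for prod in prods)

def generate(grammar, start="S", max_steps=8):
    """Terminal strings derivable from `start` within a bounded number of rewrites."""
    strings, frontier = set(), {(start,)}
    for _ in range(max_steps):
        nxt = set()
        for form in frontier:
            idx = next((i for i, sym in enumerate(form) if sym.isupper()), None)
            if idx is None:
                strings.add("".join(form))       # fully terminal
                continue
            for prod in grammar[form[idx]]:      # expand the leftmost nonterminal
                nxt.add(form[:idx] + prod + form[idx + 1:])
        frontier = nxt
    return strings

print(prefix_lexicalized(LEFT_REC), prefix_lexicalized(GNF))  # False True
print(generate(LEFT_REC) == generate(GNF))                    # same strings up to the bound
```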
2 Background 2.1 SCFG An SCFG is a tuple G = (N, Σ, P, S) where N is a finite nonterminal alphabet, Σ is a finite terminal alphabet, S ∈N is a distinguished nonterminal called the start symbol, and P is a finite set of synchronous rules of the form (1) ⟨A1 →α1, A2 →α2⟩ for some A1, A2 ∈N and strings α1, α2 ∈(N ∪ Σ)∗.1 Every nonterminal which appears in α1 1A variant formalism exists which requires that A1 = A2; this is called syntax-directed transduction grammar (Lewis and Stearns, 1968) or syntax-directed translation schemata (Aho and Ullman, 1969). This variant is weakly equivalent to SCFG, but SCFG has greater strong generative capacity (Crescenzi et al., 2015). 1161 A 1 A↓2 a B 2 B↓1 b A c A∗ B d ⟨ ⟩⟨ ⟩  A A a A ↓1 c , B 1 b B d  Figure 1: An example of synchronous rewriting in an STAG (left) and the resulting tree pair (right). must be linked to exactly one nonterminal in α2, and vice versa. We write these links using numerical annotations, as in (2). (2) ⟨A →A 1 B 2 , B →B 2 A 1 ⟩ An SCFG has rank k if no rule in the grammar contains more than k pairs of linked nodes. In every step of an SCFG derivation, we rewrite one pair of linked nonterminals with a rule from P, in essentially the same way we would rewrite a single nonterminal in a non-synchronous CFG. For example, (3) shows linked A and B nodes being rewritten using (2): (3) ⟨X 1 A 2 , B 2 Y 1 ⟩⇒⟨X 1 A 2 B 3 , B 3 A 2 Y 1 ⟩ Note how the 1 and 2 in rule (2) are renumbered to 2 and 3 during rewriting, to avoid an ambiguity with the 1 already present in the derivation. An SCFG derivation is complete when it contains no more nonterminals to rewrite. A completed derivation represents a string pair generated by the grammar. 2.2 STAG An STAG (Shieber, 1994) is a tuple G = (N, Σ, T, S) where N is a finite nonterminal alphabet, Σ is a finite terminal alphabet, S ∈N is a distinguished nonterminal called the start symbol, and T is a finite set of synchronous tree pairs of the form (4) ⟨t1, t2⟩ where t1 and t2 are elementary trees as defined in Joshi et al. (1975). A substitution site is a leaf node marked by ↓which may be rewritten by another tree; a foot node is a leaf marked by ∗that may be used to rewrite a tree-internal node. Every substitution site in t1 must be linked to exactly one nonterminal in t2, and vice versa. As in SCFG, we write these links using numbered annotations; rank is defined for STAG the same way as for SCFG. In every step of an STAG derivation, we rewrite one pair of linked nonterminals with a tree pair from T, using the same substitution and adjunction operations defined for non-synchronous TAG. For example, Figure 1 shows linked A and B nodes being rewritten and the tree pair resulting from this operation. See Joshi et al. (1975) for details about the underlying TAG formalism. 2.3 Terminology We use synchronous production as a cover term for either a synchronous rule in an SCFG or a synchronous tree pair in an STAG. Following Siahbani et al. (2013), we refer to the left half of a synchronous production as the source side, and the right half as the target side; this terminology captures the intuition that synchronous grammars model translational equivalence between a source phrase and its translation into a target language. Other authors refer to the two halves as the left and right components (Crescenzi et al., 2015) or, viewing the grammar as a transducer, the input and the output (Engelfriet et al., 2017). 
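To make the link renumbering in derivation (3) concrete, here is a minimal sketch of a single synchronous rewriting step. The representation (linked nonterminals as (symbol, index) tuples, terminals as plain strings) is this sketch's own convention rather than anything prescribed by the formalism, and no checking of rule applicability is attempted.

```python
def rewrite(form, link, rule):
    """Rewrite the pair of nonterminals carrying index `link` in the sentential
    form `form` using `rule`, giving the rule's links fresh indices (the
    smallest ones not already in use, as in derivation (3))."""
    used = {x[1] for side in form for x in side
            if isinstance(x, tuple) and x[1] != link}
    rule_links = []
    for side in rule:
        for x in side:
            if isinstance(x, tuple) and x[1] not in rule_links:
                rule_links.append(x[1])
    fresh = [i for i in range(1, len(used) + len(rule_links) + 2) if i not in used]
    rename = dict(zip(rule_links, fresh))
    def instantiate(side):
        return [(x[0], rename[x[1]]) if isinstance(x, tuple) else x for x in side]
    def splice(side, rhs):
        i = next(j for j, x in enumerate(side)
                 if isinstance(x, tuple) and x[1] == link)
        return side[:i] + rhs + side[i + 1:]
    return splice(form[0], instantiate(rule[0])), splice(form[1], instantiate(rule[1]))

# Derivation (3): rewrite the A/B pair linked as 2 using rule (2).
form = ([("X", 1), ("A", 2)], [("B", 2), ("Y", 1)])
rule = ([("A", 1), ("B", 2)], [("B", 2), ("A", 1)])          # rule (2)
print(rewrite(form, 2, rule))
# ([('X', 1), ('A', 2), ('B', 3)], [('B', 3), ('A', 2), ('Y', 1)])
```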
We call a grammar ε-free if it contains no productions whose source or target side produces only the empty string ε. 2.4 Synchronous Prefix Lexicalization Previous work (Watanabe et al., 2006; Siahbani et al., 2013) has shown that it is useful for the target side of a synchronous grammar to start with a terminal symbol. For this reason, we define a synchronous grammar to be prefix lexicalized when the leftmost character of the target side2 of every synchronous production in that grammar is a terminal symbol. Formally, this means that every synchronous rule in a prefix lexicalized SCFG (PL-SCFG) is 2All of the proofs in this work admit a symmetrical variant which prefix lexicalizes the source side instead of the target. We are not aware of any applications in NLP where sourceside prefix lexicalization is useful, so we do not address this case. 1162 of the form (5) ⟨A1 →α1, A2 →aα2⟩ where A1, A2 ∈N, α1, α2 ∈(N∪Σ)∗and a ∈Σ. Every synchronous tree pair in a prefix lexicalized STAG (PL-STAG) is of the form (6)  A1 α1 , A2 aα2  where A1, A2 ∈N, α1, α2 ∈(N∪Σ)∗and a ∈Σ. 3 Closure under Prefix Lexicalization We now prove that the class SCFG is not closed under prefix lexicalization. Theorem 1. There exists an SCFG which cannot be converted to an equivalent PL-SCFG. Proof. The SCFG in (7) generates the language L = {⟨aibjci, bjai⟩| i ≥0, j ≥1}, but this language cannot be generated by any PL-SCFG: (7) ⟨S →A 1 , S →A 1 ⟩ ⟨A →aA 1 c, A →A 1 a⟩ ⟨A →bB 1 , A →bB 1 ⟩ ⟨A →b, A →b⟩ ⟨B →bB 1 , B →bB 1 ⟩ ⟨B →b, B →b⟩ Suppose, for the purpose of contradiction, that some PL-SCFG does generate L; call this grammar G. Then the following derivations must all be possible in G for some nontermials U, V, X, Y : i) ⟨U 1 , V 1 ⟩⇒∗⟨bkU 1 bm, bnV 1 bp⟩, where k + m = n + p and n ≥1 ii) ⟨X 1 , Y 1 ⟩⇒∗⟨aqX 1 cq, arY 1 as⟩, where q = r + s and r ≥1 iii) ⟨S 1 , S 1 ⟩⇒∗⟨α1X 1 α2, bα3Y 1 α4⟩, where α1, ..., α4 ∈(N ∪Σ)∗ iv) ⟨X 1 , Y 1 ⟩⇒∗⟨α5U 1 α6, α7V 1 α8⟩, where α5, α6, α8 ∈(N ∪Σ)∗, α7 ∈Σ(N ∪ Σ)∗ i and ii follow from the same arguments used in the pumping lemma for (non-synchronous) context free languages (Bar-Hillel et al., 1961): strings in L can contain arbitrarily many as, bs, and cs, so there must exist some pumpable cycles which generate these characters. In i, k + m = n + p because the final derived strings must contain an equal number of bs, and n ≥1 because G is prefix lexicalized; in ii the constraints on q, r and s follow likewise from L. iii follows from the fact that, in order to pump on the cycle in ii, this cycle must be reachable from the start symbol. iv follows from the fact that a context-free production cannot generate a discontinuous span. Once the cycle in i has generated a b, it is impossible for ii to generate an a on one side of the b and a c on the other. Therefore i must always be derived strictly later than ii, as shown in iv. Now we obtain a contradiction. Given that G can derive all of i through iv, the following derivation is also possible: (8) ⟨S 1 , S 1 ⟩ ⇒∗ ⟨α1X 1 α2, bα3Y 1 α4⟩ ⇒∗ ⟨α1aqX 1 cqα2, bα3arY 1 asα4⟩ ⇒∗ ⟨α1aqα5U 1 α6cqα2, bα3arα7V 1 α8asα4⟩ ⇒∗ ⟨α1aqα5bkU 1 bmα6cqα2, bα3arα7bnV 1 bpα8asα4⟩ But since n, r ≥1, the target string derived this way contains an a before a b and does not belong to L. This is a contradiction: if G is a PL-SCFG then it must generate i through iv, but if so then it also generates strings which do not belong to L. Thus no PL-SCFG can generate L, and SCFG must not be closed under prefix lexicalization. 
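As a companion to the proof, the sketch below enumerates string pairs from grammar (7) breadth-first up to a rewrite bound and checks that every generated pair has the form ⟨a^i b^j c^i, b^j a^i⟩ with i ≥ 0 and j ≥ 1. The encoding exploits properties specific to this grammar (at most one linked nonterminal per rule, identical labels on both sides); it is an assumption of the sketch, not a general SCFG implementation, and it illustrates only the language L, not the impossibility argument.

```python
import re

# Grammar (7): every rule links at most one nonterminal pair, and the linked
# nonterminals share a label, so a rule can be stored as a pair of strings.
RULES = {
    "S": [("A", "A")],
    "A": [("aAc", "Aa"), ("bB", "bB"), ("b", "b")],
    "B": [("bB", "bB"), ("b", "b")],
}

def derive(max_steps=9):
    """Breadth-first enumeration of terminal string pairs derivable from <S, S>."""
    frontier, finished = {("S", "S")}, set()
    for _ in range(max_steps):
        nxt = set()
        for src, tgt in frontier:
            m = re.search(r"[SAB]", src)
            if m is None:                        # no nonterminals left in either string
                finished.add((src, tgt))
                continue
            for s_rhs, t_rhs in RULES[m.group()]:
                nxt.add((src.replace(m.group(), s_rhs, 1),
                         tgt.replace(m.group(), t_rhs, 1)))
        frontier = nxt
    return finished

def in_L(src, tgt):
    m = re.fullmatch(r"(a*)(b+)(c*)", src)
    return bool(m) and len(m.group(1)) == len(m.group(3)) and tgt == m.group(2) + m.group(1)

pairs = derive()
print(len(pairs), all(in_L(s, t) for s, t in pairs))
print(sorted(pairs)[:4])
```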
■ There also exist grammars which cannot be prefix lexicalized because they contain cyclic chain rules. If an SCFG can derive something of the form ⟨X 1 , Y 1 ⟩⇒∗⟨xX 1 , Y 1 ⟩, then it can generate arbitrarily many symbols in the source string without adding anything to the target string. Prefix lexicalizing the grammar would force it to generate some terminal symbol in the target string at each step of the derivation, making it unable to generate the original language where a source string may be unboundedly longer than its corresponding target. We call an SCFG chain-free if it does not contain a cycle of chain rules of this form. The remainder of this paper focuses on chain-free grammars, like (7), which cannot be converted to PL-SCFG despite containing no such cycles. 4 Prefix Lexicalization using STAG We now present a method for prefix lexicalizing an SCFG by converting it to an STAG. 1163 ⟨X 1 , A 1 ⟩⇒⟨α1Y1 1 β1, B1 1 γ1⟩⇒⟨α1α2Y2 1 β2β1, B2 1 γ2γ1⟩ ⇒∗⟨α1 · · · αtYt 1 βt · · · β1, Bt 1 γt · · · γ1⟩⇒⟨α1 · · · αtαt+1βt · · · β1, aγt+1γt · · · γ1⟩ Figure 2: A target-side terminal leftmost derivation. a ∈Σ; X, A, Yi, Bi ∈N; and αi, βi, γi ∈(N ∪Σ)∗.  SXA α1 , SXA aα2  (a) ⟨X →α1, A →aα2⟩  SXA YXA 1 α1 , SXA aα2BXA ↓1  (b) ⟨Y →α1, B →aα2⟩  ZXA YXA 1 α1 ZXA∗β1 , CXA α2 BXA ↓1  (c) ⟨Y →α1Z 1 β1, B →C 1 α2⟩  YXA α1 YXA∗β1 , CXA α2  (d) ⟨X →α1Y 1 β1, A →C 1 α2⟩ Figure 3: Tree-pairs in GXA and the rules in G from which they derive. Theorem 2. Given a rank-k SCFG G which is εfree and chain-free, an STAG H exists such that H is prefix lexicalized and L(G) = L(H). The rank of H is at most 2k, and |H| = O(|G|3). Proof. Let G = (N, Σ, P, S) be an ε-free, chainfree SCFG. We provide a constructive method for prefix lexicalizing the target side of G. We begin by constructing an intermediate grammar GXA for each pair of nonterminals X, A ∈N \ {S}. For each pair X, A ∈N \ {S}, GXA will be constructed to generate the language of sentential forms derivable from ⟨X 1 , A 1 ⟩ via a target-side terminal leftmost derivation (TTLD). A TTLD is a derivation of the form in Figure 2, where the leftmost nonterminal in the target string is expanded until it produces a terminal symbol as the first character. We write ⟨X 1 , A 1 ⟩ ⇒∗ TTLD ⟨u, v⟩ to mean that ⟨X 1 , A 1 ⟩ derives ⟨u, v⟩ by way of a TTLD; in this notation, LXA = {⟨u, v⟩|⟨X 1 , A 1 ⟩⇒∗ TTLD ⟨u, v⟩} is the language of sentential forms derivable from ⟨X 1 , A 1 ⟩via a TTLD. Given X, A ∈ N \ {S} we formally define GXA as an STAG over the terminal alphabet ΣXA = N ∪Σ and nonterminal alphabet NXA = {YXA|Y ∈N}, with start symbol SXA. NXA contains nonterminals indexed by XA to ensure that two intermediate grammars GXA and GY B do not interact as long as ⟨X, A⟩̸= ⟨Y, B⟩. GXA contains four kinds of tree pairs: 3 • For each rule in G of the form ⟨X →α1, A →aα2⟩, a ∈Σ, αi ∈(N∪Σ)∗, we add a tree pair of the form in Figure 3(a). • For each rule in G of the form ⟨Y →α1, B →aα2⟩, a ∈Σ, αi ∈(N∪Σ)∗, Y, B ∈N \ {S}, we add a tree pair of the form in Figure 3(b). • For each rule in G of the form ⟨Y →α1Z 1 β1, B →C 1 α2⟩, Y, Z, B, C ∈ N \ {S}, αi, βi ∈(N ∪Σ)∗, we add a tree pair of the form in Figure 3(c). As a special case, if Y = Z we collapse the root node and adjunction site to produce a tree pair of the following form: (9)  ZXA 1 α1ZXA ∗β1 , CXA α2BXA ↓1  • For each rule in G of the form ⟨X →α1Y 1 β1, A →C 1 α2⟩, Y, C ∈N, αi, βi ∈(N ∪Σ)∗, we add a tree pair of the form in Figure 3(d). 
3In all cases, we assume that symbols in N (not NXA) retain any links they bore in the original grammar, even though they belong to the terminal alphabet in GXA and therefore do not participate in rewriting operations. In the final constructed grammar, these symbols will belong to the nonterminal alphabet again, and the links will function normally. 1164 ⟨A →B 2 cA 1 , A →A 1 cB 2 ⟩  AAA 1 B ↓2 c AAA∗ , AAA c B ↓2 AAA ↓1  Figure 4: An SCFG rule and a tree pair based off that rule, taken from an intermediate grammar GAA. The tree pair is formed according to the pattern illustrated in Figure 3(c). Observe that the B nodes retain the link they bore in the original rule. This link is not functional in the intermediate grammar (that is, it cannot be used for synchronous rewriting) because B /∈NAA, but it will be functional when this tree pair is added to the final grammar H. Figure 4 gives a concrete example of constructing an intermediate grammar tree pair on the basis of an SCFG rule. Lemma 1. GXA generates the language LXA. Proof. This can be shown by induction over derivations of increasing length. The proof is straightforward but very long, so we provide only a sketch; the complete proof is provided in the supplementary material. As a base case, observe that a tree of the shape in Figure 3(a) corresponds straightforwardly to the derivation (10) ⟨X 1 , A 1 ⟩⇒⟨α1, aα2⟩ which is a TTLD starting from ⟨X, A⟩. By construction, therefore, every TTLD of the shape in (10) corresponds to some tree in GXA of shape 3(a); likewise every derivation in GXA comprising a single tree of shape 3(a) corresponds to a TTLD of the shape in (10). As a second base case, note that a tree of the shape in Figure 3(b) corresponds to the last step of a TTLD like (11): (11) ⟨X 1 , A 1 ⟩⇒∗ TTLD ⟨uY 1 v, B 1 w⟩ ⇒⟨uα1v, aα2w⟩ In the other direction, the last step of any TTLD of the shape in (11) will involve some rule of the shape ⟨Y →α1, B →aα2⟩; by construction GXA must contain a corresponding tree pair of shape 3(b). Together, these base cases establish a one-toone correspondence between single-tree derivations in GXA and the last step of a TTLD starting from ⟨X, A⟩. Now, assume that the last n steps of every TTLD starting from ⟨X, A⟩correspond to some derivation over n trees in GXA, and vice versa. Then the last n + 1 steps of that TTLD will also correspond to some n + 1 tree derivation in GXA, and vice versa. To see this, consider the step n + 1 steps before the end of the TTLD. This step may be in the middle of the derivation, or it may be the first step of the derivation. If it is in the middle, then this step must involve a rule of the shape (12) ⟨Y →α1Z 1 β1, B →C 1 α2⟩ The existence of such a rule in G implies the existence of a corresponding tree in GXA of the shape in Figure 3(c). Adding this tree to the existing n-tree derivation yields a new n + 1 tree derivation corresponding to the last n + 1 steps of the TTLD.4 In the other direction, if the n + 1th tree5 of a derivation in GXA is of the shape in Figure 3(c), then this implies the existence of a production in G of the shape in (12). By assumption the first n trees of the derivation in GXA correspond to some TTLD in G; by prepending the rule from (12) to this TTLD we obtain a new TTLD of length n + 1 which corresponds to the entire n + 1 tree derivation in GXA. Finally, consider the case where the TTLD is only n + 1 steps long. 
The first step must involve a rule of the form (13) ⟨X →α1Y 1 β1, A →C 1 α2⟩ The existence of such a rule implies the existence of a corresponding tree in GXA of the shape in Figure 3(d). Adding this tree to the derivation which corresponds to the last n steps of the TTLD yields a new n+1 tree derivation corresponding to the entire n + 1 step TTLD. In the other direction, if the last tree of an n + 1 tree derivation in GA is of the shape in Figure 3(d), then this implies the 4It is easy to verify by inspection of Figure 3 that whenever one rule from G can be applied to the output of another rule, then the tree pairs in GXA which correspond to these rules can compose with one another. Thus we can add the new tree to the existing derivation and be assured that it will compose with one of the trees that is already present. 5Although trees in GXA may contain symbols from the nonterminal alphabet of G, these symbols belong to the terminal alphabet in GXA. Only nonterminals in NXA will be involved in this derivation, and by construction there is at most one such nonterminal per tree. Thus a well-formed derivation structure in GXA will never branch, and we can refer to the n + 1th tree pair as the one which is at depth n in the derivation structure. 1165 existence of a production in G of the shape in (13). By assumption the first n trees of the derivation in GXA correspond to some TTLD in G; by prepending the rule from (13) to this TTLD we obtain a new TTLD of length n + 1 which corresponds to the entire n + 1 tree derivation in GXA. Taken together, these cases establish a one-toone correspondence between derivations in GXA and TTLDs which start from ⟨X, A⟩; in turn they confirm that GXA generates the desired language LXA. Once we have constructed an intermediate grammar GXA for each X, A ∈N \ {S}, we obtain the final STAG H as follows: 1. Convert the input SCFG G to an equivalent STAG. For each rule ⟨A1 →α1, A2 →α2⟩, where Ai ∈N, αi ∈(N ∪Σ)∗, create a tree pair of the form (14)  A1 α1 , A2 α2  where each pair of linked nonterminals in the original rule become a pair of linked substitution sites in the tree pair. The terminal and nonterminal alphabets and start symbol are unchanged. Call the resulting STAG H. 2. For all X, A ∈N \ {S}, add all of the tree pairs from the intermediate grammar GXA to the new grammar H. Expand N to include the new nonterminal symbols in NXA. 3. For every X, A ∈N, in all tree pairs where the target tree’s leftmost leaf is labeled with A and this node is linked to an X, replace this occurrence of A with SXA. Also replace the linked node in the source tree. 4. For every X, A ∈N, let RXA be the set of all tree pairs rooted in SXA, and let TXA be the set of all tree pairs whose target tree’s leftmost leaf is labeled with SXA. For every ⟨s, t⟩∈TXA and every ⟨s′, t′⟩∈RXA, substitute or adjoin s′ and t′ into the linked SXA nodes in s and t, respectively. Add the derived trees to H. 5. For all X, A ∈N, let TXA be defined as above. Remove all tree pairs in TXA from H. 6. For all X, A ∈N, let RXA be defined as above. Remove all tree pairs in RXA from H. We now claim that H generates the same language as the original grammar G, and all of the target trees in H are prefix lexicalized. The first claim follows directly from the construction. Step 1 merely rewrites the grammar in a new formalism. 
From Lemma 1 it is clear that steps 2–3 do not change the generated language: the set of string pairs generable from a pair of SXA nodes is identical to the set generable from ⟨X, A⟩ in the original grammar. Step 4 replaces some nonterminals by all possible alternatives; steps 5– 6 then remove the trees which were used in step 4, but since all possible combinations of these trees have already been added to the grammar, removing them will not alter the language. The second claim follows from inspection of the tree pairs generated in Figure 3. Observe that, by construction, for all X, A ∈N every target tree rooted in SXA is prefix lexicalized. Thus the trees created in step 4 are all prefix lexicalized variants of non-lexicalized tree pairs; steps 5–6 then remove the non-lexicalized trees from the grammar. ■ Figure 5 gives an example of this transformation applied to a small grammar. Note how A nodes at the left edge of the target trees end up rewritten as SAA nodes, as per step 4 of the transformation. 5 Complexity & Formal Properties Our conversion generates a subset of the class of prefix lexicalized STAGs in regular form, which we abbreviate to PL-RSTAG (regular form for TAG is defined in Rogers 1994). This section discusses some formal properties of PL-RSTAG. Generative Capacity PL-RSTAG is weakly equivalent to the class of ε-free, chain-free SCFGs: this follows immediately from the proof that our transformation does not change the language generated by the input SCFG. Note that every TAG in regular form generates a context-free language (Rogers, 1994). Alignments and Reordering PL-RSTAG generates the same set of reorderings (alignments) as SCFG. Observe that our transformation does not cause nonterminals which were linked in the original grammar to become unlinked, as noted for example in Figure 4. Thus subtrees which are gener1166 ⟨S →B 2 cA 1 , S →A 1 cB 2 ⟩ ⟨A →B 2 cA 1 , A →A 1 cB 2 ⟩ ⟨A →a, A →a⟩ ⟨B →b, B →b⟩  S B ↓1 c SAA a , S SAA a c B ↓1   A B ↓1 c SAA AAA 2 a , A SAA a AAA ↓2 c B ↓1   S B ↓1 c SAA AAA 2 a , S SAA a AAA ↓2 c B ↓1   A B ↓1 c SAA a , A SAA a c B ↓1   AAA 1 B ↓2 c AAA∗ , AAA c B ↓2 AAA ↓1   AAA B ↓1 c AAA∗ , AAA c B ↓1   B b , B b   A a , A a  Figure 5: An SCFG and the STAG which prefix lexicalizes it. Non-productive trees have been omitted. Grammar |G| |H| % of G prefix lexicalized log|G|(|H|) Siahbani and Sarkar (2014a) (Zh-En) 18.5M 23.6T 63% 1.84 Example (7) 6 14 66% 1.47 ITG (10000 translation pairs) 10,003 170,000 99.97% 1.31 Table 1: Grammar sizes before and after prefix lexicalization, showing O(n2) size increase instead of the worst case O(n3). |G| and |H| give the grammar size before and after prefix lexicalization; log|G| |H| is the increase as a power of the initial size. We also show the percentage of productions which are already prefix lexicalized in G. ated by linked nonterminals in the original grammar will still be generated by linked nonterminals in the final grammar, so no reordering information is lost or added.6 This result holds despite the fact that our transformation is only applicable to chainfree grammars: chain rules cannot introduce any reorderings, since by definition they involve only a single pair of linked nonterminals. Grammar Rank If the input SCFG G has rank k, then the STAG H produced by our transformation has rank at most 2k. To see this, observe that the construction of the intermediate grammars increases the rank by at most 1 (see Figure 3(b)). 
When a prefix lexicalized tree is substituted at the left edge of a non-lexicalized tree, the link on the substitution site will be consumed, but up to k + 1 new links will be introduced by the substituting tree, so that the final tree will have rank at most 2k. In the general case, rank-k STAG is more powerful than rank-k SCFG; for example, a rank-4 SCFG is required to generate the reordering in ⟨S →A 1 B 2 C 3 D 4 , S →C 3 A 1 D 4 B 2 ⟩ (Wu, 1997), but this reordering is captured by the 6Although we consume one link whenever we substitute a prefix lexicalized tree at the left edge of an unlexicalized tree, that link can still be remembered and used to reconstruct the reorderings which occurred between the two sentences. following rank-3 STAG:  S X A ↓1 X 2 C ↓3 , S C ↓3 A ↓1 X ↓2   X B ↓1 X∗ D ↓2 , X D ↓2 B ↓1  For this reason, we speculate that it is possible to further transform the grammars produced by our lexicalization in order to reduce their rank, but the details of this transformation remain as future work. This potentially poses a solution to an issue raised by Siahbani and Sarkar (2014b). On a Chinese-English translation task, they find that sentences like (15) involve reorderings which cannot be captured by a rank-2 prefix lexicalized SCFG: (15) T¯a bˇuch¯ong shu¯o , li´anh´e zh`engfˇu m`uqi´an zhu`angku`ang wˇend`ıng ... He added that the coalition government is now in stable condition ... If rank-k PL-RSTAG is more powerful than rank-k 1167 SCFG, using a PL-RSTAG here would permit capturing more reorderings without using grammars of higher rank. Parse Complexity Because the grammar produced is in regular form, each side can be parsed in time O(n3) (Rogers, 1994), for an overall parse complexity of O(n3k), where n is sentence length and k is the grammar rank. Grammar Size and Experiments If H is the PL-RSTAG produced by applying our transformation to an SCFG G, then H contains O(|G|3) elementary tree pairs, where |G| is the number of synchronous productions in G. When the set of nonterminals N is small compared to |G|, a tighter bound is given by O(|G|2|N|2). Table 1 shows the actual size increase on a variety of grammars: here |G| is the size of the initial grammar, |H| is the size after applying our transformation, and the increase is expressed as a power of the original grammar size. We apply our transformation to the grammar from Siahbani and Sarkar (2014a), which was created for a ChineseEnglish translation task known to involve complex reorderings that cannot be captured by PL-SCFG (Siahbani and Sarkar, 2014b). We also consider the grammar in (7) and an ITG (Wu, 1997) containing 10,000 translation pairs, which is a grammar of the sort that has previously been used for word alignment tasks (cf Zhang and Gildea 2005). We always observe an increase within O(|G|2) rather than the worst-case O(|G|3), because |N| is small relative to |G| in most grammars used for NLP tasks. We also investigated how the proportion of prefix lexicalized rules in the original grammar affects the overall size increase. We sampled grammars with varying proportions of prefix lexicalized rules from the grammar in Siahbani and Sarkar (2014a); Table 2 shows the result of lexicalizing these samples. We find that the worst case size increase occurs when 50% of the original grammar is already prefix lexicalized. 
This is because the size increase depends on both the number of prefix lexicalized trees in the intermediate grammars (which grows with the proportion of lexicalized rules) and the number of productions which need to be lexicalized (which shrinks as the proportion of prefix lexicalized rules increases). At 50%, both factors contribute appreciably to the grammar size, analogous to how the function f(x) = x(1 −x) takes its maximum at x = 0.5. |G| |H| % of G prefix lexicalized log|G|(|H|) 15k 42.4M 10% 1.83 15k 74.9M 20% 1.89 15k 97.8M 30% 1.91 15k 112M 40% 1.93 15k 118M 50% 1.93 15k 114M 60% 1.93 15k 102M 70% 1.92 15k 78.2M 80% 1.89 15k 43.6M 90% 1.83 Table 2: Effect of prefix lexicalized rules in G on final grammar size. 6 Applications The LR decoding algorithm from Watanabe et al. (2006) relies on prefix lexicalized rules to generate a prefix of the target sentence during machine translation. At each step, a translation hypothesis is expanded by rewriting the leftmost nonterminal in its target string using some grammar rule; the prefix of this rule is appended to the existing translation and the remainder of the rule is pushed onto a stack, in reverse order, to be processed later. Translation hypotheses are stored in stacks according to the length of their translated prefix, and beam search is used to traverse these hypotheses and find a complete translation. During decoding, the source side is processed by an Earley-style parser, with the dot moving around to process nonterminals in the order they appear on the target side. Since the trees on the target side of our transformed grammar are all of depth 1, and none of these trees can compose via the adjunction operation, they can be treated like context-free rules and used as-is in this decoding algorithm. The only change required to adapt LR decoding to use a PL-RSTAG is to make the source side use a TAG parser instead of a CFG parser; an Earley-style parser for TAG already exists (Joshi and Schabes, 1997), so this is a minor adjustment. Combined with the transformation in Section 4, this suggests a method for using LR decoding without sacrificing translation quality. Previously, LR decoding required the use of heuristically generated PL-SCFGs, which cannot model some reorderings (Siahbani and Sarkar, 2014a). Now, an SCFG tailored for a translation task can be transformed directly to PL-RSTAG and used for decod1168 ing; unlike a heuristically induced PL-SCFG, the transformed PL-RSTAG will generate the same language as the original SCFG which is known to handle more reorderings. Note that, since applying our transformation may double the rank of a grammar, this method may prove prohibitively slow. This highlights the need for future work to examine the generative power of rank-k PL-RSTAG relative to rankk SCFG in the interest of reducing the rank of the transformed grammar. 7 Related Work Our work continues the study of TAGs and lexicalization (e.g. Joshi et al. 1975; Schabes and Waters 1993). Schabes and Waters (1995) show that TAG can strongly lexicalize CFG, whereas CFG only weakly lexicalizes itself; we show a similar result for SCFGs. Kuhlmann and Satta (2012) show that TAG is not closed under strong lexicalization, and Maletti and Engelfriet (2012) show how to strongly lexicalize TAG using simple context-free tree grammars (CFTGs). Other extensions of GNF to new grammar formalisms include Dymetman (1992) for definite clause grammars, Fernau and Stiebe (2002) for CF valence grammars, and Engelfriet et al. (2017) for multiple CFTGs. 
Although multiple CFTG subsumes SCFG (and STAG), Engelfriet et al.’s result appears to guarantee only that some side of every synchronous production will be lexicalized, whereas our result guarantees that it is always the target side that will be prefix lexicalized. Lexicalization of synchronous grammars was addressed by Zhang and Gildea (2005), but they consider lexicalization rather than prefix lexicalization, and they only consider SCFGs of rank 2. They motivate their results using a word alignment task, which may be another possible application for our lexicalization. Analogous to our closure result, Aho and Ullman (1969) prove that SCFG does not admit a normal form with bounded rank like Chomsky normal form. Blum and Koch (1999) use intermediate grammars like our GXAs to transform a CFG to GNF. Another GNF transformation (Rosenkrantz, 1967) is used by Schabes and Waters (1995) to define Tree Insertion Grammars (which are also weakly equivalent to CFG). We rely on Rogers (1994) for the claim that our transformed grammars generate context-free languages despite allowing wrapping adjunction; an alternative proof could employ the results of Swanson et al. (2013), who develop their own context-free TAG variant known as osTAG. Kaeshammer (2013) introduces the class of synchronous linear context-free rewriting systems to model reorderings which cannot be captured by a rank-2 SCFG. In the event that rank-k PL-RSTAG is more powerful than rank-k SCFG, our work can be seen as an alternative approach to the same problem. Finally, Nesson et al. (2008) present an algorithm for reducing the rank of an STAG on-the-fly during parsing; this presents a promising avenue for proving a smaller upper bound on the rank increase caused by our transformation. 8 Conclusion and Future Work We have demonstrated a method for prefix lexicalizing an SCFG by converting it to an equivalent STAG. This process is applicable to any SCFG which is ε- and chain-free. Like the original GNF transformation for CFGs our construction at most cubes the grammar size, though when applied to the kinds of synchronous grammars used in machine translation the size is merely squared. Our transformation preserves all of the alignments generated by SCFG, and retains properties such as O(n3k) parsing complexity for grammars of rank k. We plan to verify whether rank-k PL-RSTAG is more powerful than rank-k SCFG in future work, and to reduce the rank of the transformed grammar if possible. We further plan to empirically evaluate our lexicalization on an alignment task and to offer a comparison against the lexicalization due to Zhang and Gildea (2005). Acknowledgements The authors wish to thank the anonymous reviewers for their helpful comments. The research was also partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC RGPIN-2018-06437 and RGPAS-2018-522574) to the second author. We dedicate this paper to the memory of Prof. Aravind Joshi; a short hallway conversation with him at ACL 2014 was the seed for this paper. 1169 References Alfred V. Aho and Jeffrey D. Ullman. 1969. Syntax directed translations and the pushdown assembler. Journal of Computer and System Sciences 3(1):37– 56. Jean-Michel Autebert, Jean Berstel, and Luc Boasson. 1997. Context-free languages and pushdown automata. In Grzegorz Rozenberg and Arto Salomaa, editors, Handbook of Formal Languages, Vol. 1, Springer-Verlag New York, Inc., New York, NY, USA, pages 111–174. http://dl.acm.org/ citation.cfm?id=267846.267849. 
2018
107
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1171–1180 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1171 Straight to the Tree: Constituency Parsing with Neural Syntactic Distance Yikang Shen∗† MILA University of Montréal Zhouhan Lin∗† MILA University of Montréal AdeptMind Scholar Athul Paul Jacob† MILA University of Waterloo Alessandro Sordoni Microsoft Research Montréal, Canada Aaron Courville and Yoshua Bengio MILA University of Montréal, CIFAR Abstract In this work, we propose a novel constituency parsing scheme. The model predicts a vector of real-valued scalars, named syntactic distances, for each split position in the input sentence. The syntactic distances specify the order in which the split points will be selected, recursively partitioning the input, in a top-down fashion. Compared to traditional shiftreduce parsing schemes, our approach is free from the potential problem of compounding errors, while being faster and easier to parallelize. Our model achieves competitive performance amongst single model, discriminative parsers in the PTB dataset and outperforms previous models in the CTB dataset. 1 Introduction Devising fast and accurate constituency parsing algorithms is an important, long-standing problem in natural language processing. Parsing has been useful for incorporating linguistic prior in several related tasks, such as relation extraction, paraphrase detection (Callison-Burch, 2008), and more recently, natural language inference (Bowman et al., 2016) and machine translation (Eriguchi et al., 2017). Neural network-based approaches relying on dense input representations have recently achieved competitive results for constituency parsing (Vinyals et al., 2015; Cross and Huang, 2016; Liu and Zhang, 2017b; Stern et al., 2017a). Generally speaking, either these approaches produce the parse tree sequentially, by governing ∗Equal contribution. Corresponding authors: [email protected], [email protected]. † Work done while at Microsoft Research, Montreal. Figure 1: An example of how syntactic distances (d1 and d2) describe the structure of a parse tree: consecutive words with larger predicted distance are split earlier than those with smaller distances, in a process akin to divisive clustering. the sequence of transitions in a transition-based parser (Nivre, 2004; Zhu et al., 2013; Chen and Manning, 2014; Cross and Huang, 2016), or use a chart-based approach by estimating non-linear potentials and performing exact structured inference by dynamic programming (Finkel et al., 2008; Durrett and Klein, 2015; Stern et al., 2017a). Transition-based models decompose the structured prediction problem into a sequence of local decisions. This enables fast greedy decoding but also leads to compounding errors because the model is never exposed to its own mistakes during training (Daumé et al., 2009). Solutions to this problem usually complexify the training procedure by using structured training through beamsearch (Weiss et al., 2015; Andor et al., 2016) and dynamic oracles (Goldberg and Nivre, 2012; Cross and Huang, 2016). On the other hand, chartbased models can incorporate structured loss functions during training and benefit from exact inference via the CYK algorithm but suffer from higher computational cost during decoding (Durrett and Klein, 2015; Stern et al., 2017a). 
In this paper, we propose a novel, fully-parallel 1172 model for constituency parsing, based on the concept of “syntactic distance”, recently introduced by (Shen et al., 2017) for language modeling. To construct a parse tree from a sentence, one can proceed in a top-down manner, recursively splitting larger constituents into smaller constituents, where the order of the splits defines the hierarchical structure. The syntactic distances are defined for each possible split point in the sentence. The order induced by the syntactic distances fully specifies the order in which the sentence needs to be recursively split into smaller constituents (Figure 1): in case of a binary tree, there exists a oneto-one correspondence between the ordering and the tree. Therefore, our model is trained to reproduce the ordering between split points induced by the ground-truth distances by means of a margin rank loss (Weston et al., 2011). Crucially, our model works in parallel: the estimated distance for each split point is produced independently from the others, which allows for an easy parallelization in modern parallel computing architectures for deep learning, such as GPUs. Along with the distances, we also train the model to produce the constituent labels, which are used to build the fully labeled tree. Our model is fully parallel and thus does not require computationally expensive structured inference during training. Mapping from syntactic distances to a tree can be efficiently done in O(n log n), which makes the decoding computationally attractive. Despite our strong conditional independence assumption on the output predictions, we achieve good performance for single model discriminative parsing in PTB (91.8 F1) and CTB (86.5 F1) matching, and sometimes outperforming, recent chart-based and transition-based parsing models. 2 Syntactic Distances of a Parse Tree In this section, we start from the concept of syntactic distance introduced in Shen et al. (2017) for unsupervised parsing via language modeling and we extend it to the supervised setting. We propose two algorithms, one to convert a parse tree into a compact representation based on distances between consecutive words, and another to map the inferred representation back to a complete parse tree. The representation will later be used for supervised training. We formally define the syntactic distances of a parse tree as follows: Algorithm 1 Binary Parse Tree to Distance (∪represents the concatenation operator of lists) 1: function DISTANCE(node) 2: if node is leaf then 3: d ←[] 4: c ←[] 5: t ←[node.tag] 6: h ←0 7: else 8: childl, childr ←children of node 9: dl, cl, tl, hl ←Distance(childl) 10: dr, cr, tr, hr ←Distance(childr) 11: h ←max(hl, hr) + 1 12: d ←dl ∪[h] ∪dr 13: c ←cl ∪[node.label] ∪cr 14: t ←tl ∪tr 15: end if 16: return d, c, t, h 17: end function Definition 2.1. Let T be a parse tree that contains a set of leaves (w0, ..., wn). The height of the lowest common ancestor for two leaves (wi, wj) is noted as ˜di j. The syntactic distances of T can be any vector of scalars d = (d1, ..., dn) that satisfy: sign(di −dj) = sign( ˜di−1 i −˜dj−1 j ) (1) In other words, d induces the same ranking order as the quantities ˜dj i computed between pairs of consecutive words in the sequence, i.e. ( ˜d0 1, ..., ˜dn−1 n ). Note that there are n −1 syntactic distances for a sentence of length n. Example 2.1. Consider the tree in Fig. 1 for which ˜d0 1 = 2, ˜d1 2 = 1. An example of valid syntactic distances for this tree is any d = (d1, d2) such that d1 > d2. 
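To make the tree-to-distance conversion concrete, the following is a minimal Python sketch of Algorithm 1. The Leaf/Node classes and their field names are illustrative assumptions rather than the authors' code; the recursion mirrors the pseudocode above: a leaf contributes its POS tag, and each internal node contributes its height (one plus the larger of its children's heights) as the distance at the split between its two subtrees.

    # A minimal sketch of Algorithm 1 (binary parse tree -> distances, labels, tags).
    # The Leaf/Node classes and the example tags are illustrative assumptions.

    class Leaf:
        def __init__(self, tag):
            self.tag = tag

    class Node:
        def __init__(self, left, right, label):
            self.left, self.right, self.label = left, right, label

    def distance(node):
        """Return (d, c, t, h): syntactic distances, constituent labels,
        POS tags in left-to-right order, and the height of `node`."""
        if isinstance(node, Leaf):
            return [], [], [node.tag], 0
        dl, cl, tl, hl = distance(node.left)
        dr, cr, tr, hr = distance(node.right)
        h = max(hl, hr) + 1            # height of the current node
        d = dl + [h] + dr              # the split between the two subtrees gets distance h
        c = cl + [node.label] + cr     # constituent labels in in-order traversal
        t = tl + tr
        return d, c, t, h

    # A three-word right-branching tree consistent with Example 2.1
    # (lowest-common-ancestor heights 2 and 1) yields d = [2, 1].
    tree = Node(Leaf("PRP"), Node(Leaf("VBZ"), Leaf("NN"), "VP"), "S")
    print(distance(tree)[0])           # -> [2, 1]

The returned heights are one valid choice of syntactic distances in the sense of Definition 2.1: any vector inducing the same ranking over split points describes the same tree.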
Given this definition, the parsing model predicts a sequence of scalars, which is a more natural setting for models based on neural networks, rather than predicting a set of spans. For comparison, in most of the current neural parsing methods, the model needs to output a sequence of transitions (Cross and Huang, 2016; Chen and Manning, 2014). Let us first consider the case of a binary parse tree. Algorithm 1 provides a way to convert it to a tuple (d, c, t), where d contains the height of the inner nodes in the tree following a left-to-right (in order) traversal, c the constituent labels for each node in the same order and t the part-of-speech 1173 (a) Boxes in the bottom are words and their corresponding POS tags predicted by an external tagger. The vertical bars in the middle are the syntactic distances, and the brackets on top of them are labels of constituents. The bottom brackets are the predicted unary label for each words, and the upper brackets are predicted labels for other constituent. (b) The corresponding inferred grammar tree. Figure 2: Inferring the parse tree with Algorithm 2 given distances, constituent labels, and POS tags. Starting with the full sentence, we pick split point 1 (as it is assigned to the larger distance) and assign label S to span (0,5). The left child span (0,1) is assigned with a tag PRP and a label NP, which produces an unary node and a terminal node. The right child span (1,5) is assigned the label ∅, coming from implicit binarization, which indicates that the span is not a real constituent and all of its children are instead direct children of its parent. For the span (1,5), the split point 4 is selected. The recursion of splitting and labeling continues until the process reaches a terminal node. Algorithm 2 Distance to Binary Parse Tree 1: function TREE(d,c,t) 2: if d = [] then 3: node ←Leaf(t) 4: else 5: i ←arg maxi(d) 6: childl ←Tree(d<i, c<i, t<i) 7: childr ←Tree(d>i, c>i, t≥i) 8: node ←Node(childl, childr, ci) 9: end if 10: return node 11: end function (POS) tags of each word in the left-to-right order. d is a valid vector of syntactic distances satisfying Definition 2.1. Once a model has learned to predict these variables, Algorithm 2 can reconstruct a unique binary tree from the output of the model (ˆd,ˆc,ˆt). The idea in Algorithm 2 is similar to the top-down parsing method proposed by Stern et al. (2017a), but differs in one key aspect: at each recursive call, there is no need to estimate the confidence for every split point. The algorithm simply chooses the split point i with the maximum ˆdi, and assigns to the span the predicted label ˆci. This makes the running time of our algorithm to be in O(n log n), compared to the O(n2) of the greedy top-down algorithm by (Stern et al., 2017a). Figure 2 shows an example of the reconstruction of parse tree. Alternatively, the tree reconstruction process can also be done in a bottom-up manner, which requires the recursive composition of adjacent spans according to the ranking induced by their syntactic distance, a process akin to agglomerative clustering. One potential issue is the existence of unary and n-ary nodes. We follow the method proposed by Stern et al. (2017a) and add a special empty label ∅to spans that are not themselves full constituents but simply arise during the course of implicit binarization. For the unary nodes that contains one nonterminal node, we take the common approach of treating these as additional atomic labels alongside all elementary nonterminals (Stern et al., 2017a). 
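For the reverse direction, here is a minimal Python sketch of Algorithm 2, reusing the illustrative Leaf/Node classes from the sketch of Algorithm 1 above (0-indexed, so d[k] is the distance between words k and k+1, whereas the pseudocode is 1-indexed).

    # A minimal sketch of Algorithm 2 (distances, labels, tags -> binary tree),
    # reusing the illustrative Leaf/Node classes from the previous sketch.

    def tree(d, c, t):
        """Rebuild a binary tree from distances d (length n-1), constituent
        labels c (length n-1, aligned with d), and POS tags t (length n)."""
        if not d:
            return Leaf(t[0])
        i = max(range(len(d)), key=lambda k: d[k])   # split at the largest distance;
                                                     # ties resolve to the leftmost split
        left = tree(d[:i], c[:i], t[:i + 1])         # words 0..i go to the left child
        right = tree(d[i + 1:], c[i + 1:], t[i + 1:])
        return Node(left, right, c[i])

    # Recovers the example tree from the previous sketch:
    # tree([2, 1], ["S", "VP"], ["PRP", "VBZ", "NN"])

This sketch handles the plain binary case; the empty label for implicitly binarized spans and the collapsed labels for unary chains are layered on top of it exactly as described in the text. With a plain linear scan for the argmax the sketch is quadratic in the worst case; the O(n log n) behaviour discussed in the paper corresponds to the balanced divide-and-conquer setting.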
For all terminal nodes, we determine whether it belongs to a unary chain or not by predicting an additional label. If it is predicted with a label different from the empty label, we conclude that it is a direct child of a unary constituent with that label. Otherwise if it is predicted to have an empty label, we conclude that it is a child of a bigger constituent which has other constituents or words as its siblings. 1174 An n-ary node can arbitrarily be split into binary nodes. We choose to use the leftmost split point. The split point may also be chosen based on model prediction during training. Recovering an n-ary parse tree from the predicted binary tree simply requires removing the empty nodes and split combined labels corresponding to unary chains. Algorithm 2 is a divide-and-conquer algorithm. The running time of this procedure is O(n log n). However, the algorithm is naturally adapted for execution in a parallel environment, which can further reduce its running time to O(log n). 3 Learning Syntactic Distances We use neural networks to estimate the vector of syntactic distances for a given sentence. We use a modified hinge loss, where the target distances are generated by the tree-to-distance conversion given by Algorithm 1. Section 3.1 will describe in detail the model architecture, and Section 3.2 describes the loss we use in this setting. 3.1 Model Architecture Given input words w = (w0, w1, ..., wn), we predict the tuple (d, c, t). The POS tags t are given by an external Part-Of-Speech (POS) tagger. The syntactic distances d and constituent labels c are predicted using a neural network architecture that stacks recurrent (LSTM (Hochreiter and Schmidhuber, 1997)) and convolutional layers. Words and tags are first mapped to sequences of embeddings ew 0 , ..., ew n and et 0, ..., et n. Then the word embeddings and the tag embeddings are concatenated together as inputs for a stack of bidirectional LSTM layers: hw 0 , ..., hw n = BiLSTMw([ew 0 , et 0], ..., [ew n, et n]) (2) where BiLSTMw(·) is the word-level bidirectional layer, which gives the model enough capacity to capture long-term syntactical relations between words. To predict the constituent labels for each word, we pass the hidden states representations hw 0 , ..., hw n through a 2-layer network FFw c , with softmax output: p(cw i |w) = softmax(FFw c (hw i )) (3) To compose the necessary information for inferring the syntactic distances and the constituency label information, we perform an additional convolution: gs 1, . . . , gs n = CONV(hw 0 , ..., hw n) (4) where gs i can be seen as a draft representation for each split position in Algorithm 2. Note that the subscripts of gs i s start with 1, since we have n −1 positions as non-terminal constituents. Then, we stack a bidirectional LSTM layer on top of gs i: hs 1, ..., hs n = BiLSTMs(gs 1, . . . , gs n) (5) where BiLSTMs fine-tunes the representation by conditioning on other split position representations. Interleaving between LSTM and convolution layers turned out empirically to be the best choice over multiple variations of the model, including using self-attention (Vaswani et al., 2017) instead of LSTM. To calculate the syntactic distances for each position, the vectors hs 1, . . . 
, hs n are transformed through a 2-layer feed-forward network FFd with a single output unit (this can be done in parallel with 1x1 convolutions), with no activation function at the output layer: ˆdi = FFd(hs i), (6) For predicting the constituent labels, we pass the same representations hs 1, . . . , hs n through another 2-layer network FFs c, with softmax output. p(cs i|w) = softmax(FFs c(hs i)) (7) The overall architecture is shown in Figure 2a. Since the output (d, c, t) can be unambiguously transfered to a unique parse tree, the model implicitly makes all parsing decisions inside the recurrent and convolutional layers. 3.2 Objective Given a set of training examples D = {⟨dk, ck, tk, wk⟩}K k=1, the training objective is the sum of the prediction losses of syntactic distances dk and constituent labels ck. Due to the categorical nature of variable c, we use a standard softmax classifier with a crossentropy loss Llabel for constituent labels, using the estimated probabilities obtained in Eq. 3 and 7. A naïve loss function for estimating syntactic distances is the mean-squared error (MSE): Lmse dist = X i (di −ˆdi)2 (8) 1175 Figure 3: The overall visualization of our model. Circles represent hidden states, triangles represent convolution layers, block arrows represent feed-forward layers, arrows represent recurrent connections. The bottom part of the model predicts unary labels for each input word. The ∅is treated as a special label together with other labels. The top part of the model predicts the syntactic distances and the constituent labels. The inputs of model are the word embeddings concatenated with the POS tag embeddings. The tags are given by an external Part-Of-Speech tagger. The MSE loss forces the model to regress on the exact value of the true distances. Given that only the ranking induced by the ground-truth distances in d is important, as opposed to the absolute values themselves, using an MSE loss over-penalizes the model by ignoring ranking equivalence between different predictions. Therefore, we propose to minimize a pair-wise learning-to-rank loss, similar to those proposed in (Burges et al., 2005). We define our loss as a variant of the hinge loss as: Lrank dist = X i,j>i [1 −sign(di −dj)( ˆdi −ˆdj)]+, (9) where [x]+ is defined as max(0, x). This loss encourages the model to reproduce the full ranking order induced by the ground-truth distances. The final loss for the overall model is just the sum of individual losses L = Llabel + Lrank dist . 4 Experiments We evaluate our model described above on 2 different datasets, the standard Wall Street Journal (WSJ) part of the Penn Treebank (PTB) dataset, and the Chinese Treebank (CTB) dataset. For evaluating the F1 score, we use the standard evalb1 tool. We provide both labeled and unlabeled F1 score, where the former takes into consideration the constituent label for each predicted 1http://nlp.cs.nyu.edu/evalb/ constituent, while the latter only considers the position of the constituents. In the tables below, we report the labeled F1 scores for comparison with previous work, as this is the standard metric usually reported in the relevant literature. 4.1 Penn Treebank For the PTB experiments, we follow the standard train/valid/test separation and use sections 2-21 for training, section 22 for development and section 23 for test set. Following this split, the dataset has 45K training sentences and 1700, 2416 sentences for valid/test respectively. 
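As a brief aside on the objective defined in Section 3.2 before the data description continues: the pair-wise hinge rank loss of Eq. 9 is compact enough to sketch directly. The snippet below is a hedged PyTorch-style rendition, not the authors' implementation; the tensor names are assumptions. It compares every pair of split positions and penalizes predicted distances whose ordering disagrees with the ground-truth ordering.

    # A hedged PyTorch sketch of the pair-wise rank loss in Eq. 9; the variable
    # names (pred, gold) are illustrative assumptions.
    import torch

    def rank_loss(pred, gold):
        """pred, gold: 1-D tensors of length n-1 holding the predicted and
        ground-truth syntactic distances of one sentence."""
        diff_pred = pred.unsqueeze(1) - pred.unsqueeze(0)   # [i, j] = dhat_i - dhat_j
        diff_gold = gold.unsqueeze(1) - gold.unsqueeze(0)   # [i, j] = d_i - d_j
        # hinge term [1 - sign(d_i - d_j)(dhat_i - dhat_j)]_+ ; pairs with
        # d_i == d_j contribute a constant with zero gradient and are masked.
        loss = torch.relu(1.0 - torch.sign(diff_gold) * diff_pred)
        mask = (diff_gold != 0).float()
        # Eq. 9 sums over j > i; summing over all ordered pairs doubles every
        # term, so dividing by two recovers the same value.
        return (loss * mask).sum() / 2

    gold = torch.tensor([2.0, 1.0, 3.0])
    pred = torch.tensor([0.7, 0.5, 0.1])    # ordering of the last split is wrong
    print(rank_loss(pred, gold))

Because only the induced ranking matters, this loss leaves the absolute scale of the predicted distances unconstrained, which is exactly the freedom Definition 2.1 allows.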
The placeholders with the -NONE- tag are stripped from the dataset during preprocessing. The POS tags are predicted with the Stanford Tagger (Toutanova et al., 2003). We use a hidden size of 1200 for each direction on all LSTMs, with 0.3 dropout in all the feedforward connections, and 0.2 recurrent connection dropout (Merity et al., 2017). The convolutional filter size is 2. The number of convolutional channels is 1200. As a common practice for neural network based NLP models, the embedding layer that maps word indexes to word embeddings is randomly initialized. The word embeddings are sized 400. Following (Merity et al., 2017), we randomly swap an input word embedding during training with the zero vector with probability of 0.1. We found this helped the model to generalize better. Training is conducted with Adam algorithm with l2 regularization decay 1 × 10−6. We pick the result obtaining the highest labeled F1 1176 Model LP LR F1 Single Model Vinyals et al. (2015) 88.3 Zhu et al. (2013) 90.7 90.2 90.4 Dyer et al. (2016) 89.8 Watanabe and Sumita (2015) 90.7 Cross and Huang (2016) 92.1 90.5 91.3 Liu and Zhang (2017b) 92.1 91.3 91.7 Stern et al. (2017a) 93.2 90.3 91.8 Liu and Zhang (2017a) 91.8 Gaddy et al. (2018) 92.1 Stern et al. (2017b) 92.5 92.5 92.5 Our Model 92.0 91.7 91.8 Ensemble Shindo et al. (2012) 92.4 Vinyals et al. (2015) 90.5 Semi-supervised Zhu et al. (2013) 91.5 91.1 91.3 Vinyals et al. (2015) 92.8 Re-ranking Charniak and Johnson (2005) 91.8 91.2 91.5 Huang (2008) 91.2 92.2 91.7 Dyer et al. (2016) 93.3 Table 1: Results on the PTB dataset WSJ test set, Section 23. LP, LR represents labeled precision and recall respectively. on the validation set, and report the corresponding test F1, together with other statistics. We report our results in Table 1. Our best model obtains a labeled F1 score of 91.8 on the test set (Table 1). Detailed dev/test set performances, including label accuracy is reported in Table 3. Our model performs achieves good performance for single-model constituency parsing trained without external data. The best result from (Stern et al., 2017b) is obtained by a generative model. Very recently, we came to knowledge of Gaddy et al. (2018), which uses character-level LSTM features coupled with chart-based parsing to improve performance. Similar sub-word features can be also used in our model. We leave this investigation for future works. For comparison, other models obtaining better scores either use ensembles, benefit from semi-supervised learning, or recur to re-ranking of a set of candidates. 4.2 Chinese Treebank We use the Chinese Treebank 5.1 dataset, with articles 001-270 and 440-1151 for training, articles Model LP LR F1 Single Model Charniak (2000) 82.1 79.6 80.8 Zhu et al. (2013) 84.3 82.1 83.2 Wang et al. (2015) 83.2 Watanabe and Sumita (2015) 84.3 Dyer et al. (2016) 84.6 Liu and Zhang (2017b) 85.9 85.2 85.5 Liu and Zhang (2017a) 86.1 Our Model 86.6 86.4 86.5 Semi-supervised Zhu et al. (2013) 86.8 84.4 85.6 Wang and Xue (2014) 86.3 Wang et al. (2015) 86.6 Re-ranking Charniak and Johnson (2005) 83.8 80.8 82.3 Dyer et al. (2016) 86.9 Table 2: Test set performance comparison on the CTB dataset 301-325 as development set, and articles 271-300 for test set. This is a standard split in the literature (Liu and Zhang, 2017b). The -NONE- tags are stripped as well. The hidden size for the LSTM networks is set to 1200. We use a dropout rate of 0.4 on the feed-forward connections, and 0.1 recurrent connection dropout. 
The convolutional layer has 1200 channels, with a filter size of 2. We use 400 dimensional word embeddings. During training, input word embeddings are randomly swapped with the zero vector with probability of 0.1. We also apply a l2 regularization weighted by 1×10−6 on the parameters of the network. Table 2 reports our results compared to other benchmarks. To the best of our knowledge, we set a new stateof-the-art for single-model parsing achieving 86.5 F1 on the test set. The detailed statistics are shown in Table 3. 4.3 Ablation Study We perform an ablation study by removing components from a network trained with the best set of hyperparameters, and re-train the ablated version from scratch. This gives an idea of the relative contributions of each of the components in the model. Results are reported in Table 4. It seems that the top LSTM layer has a relatively big impact on performance. This may give additional capacity to the model for capturing long-term dependencies useful for label prediction. We also exper1177 dev/test result Prec. Recall F1 label accuracy PTB labeled 91.7/92.0 91.8/91.7 91.8/91.8 94.9/95.4% unlabeled 93.0/93.2 93.0/92.8 93.0/93.0 CTB labeled 89.4/86.6 89.4/86.4 89.4/86.5 92.2/91.1% unlabeled 91.1/88.9 91.1/88.6 91.1/88.8 Table 3: Detailed experimental results on PTB and CTB datasets Model LP LR F1 Full model 92.0 91.7 91.8 w/o top LSTM 91.0 90.5 90.7 w. embedding 91.9 91.6 91.7 w. MSE loss 90.3 90.0 90.1 Table 4: Ablation test on the PTB dataset. “w/o top LSTM” is the full model without the top LSTM layer. “w. embedding” stands for the full model using the pretrained word embeddings. “w. MSE loss” stands for the full model trained with MSE loss. imented by using 300D GloVe (Pennington et al., 2014) embedding for the input layer but this didn’t yield improvements over the model’s best performance. Unsurprisingly, the model trained with MSE loss underperforms considerably a model trained with the rank loss. 4.4 Parsing Speed The prediction of syntactic distances can be batched in modern GPU architectures. The distance to tree conversion is a O(n log n) (n stand for the number of words in the input sentence) divide-and-conquer algorithm. We compare the parsing speed of our parser with other state-ofthe-art neural parsers in Table 5. As the syntactic distance computation can be performed in parallel within a GPU, we first compute the distances in a batch, then we iteratively decode the tree with Algorithm 2. It is worth to note that this comparison may be unfair since some of the reported results may use very different hardware settings. We couldn’t find the source code to re-run them on our hardware, to give a fair enough comparison. In our setting, we use an NVIDIA TITAN Xp graphics card for running the neural network part, and the distance to tree inference is run on an Intel Core i7-6850K CPU, with 3.60GHz clock speed. Model # sents/sec Petrov and Klein (2007) 6.2 Zhu et al. (2013) 89.5 Liu and Zhang (2017b) 79.2 Stern et al. (2017a) 75.5 Our model 111.1 Our model w/o tree inference 351 Table 5: Parsing speed in sentences per second on the PTB dataset. 5 Related Work Parsing natural language with neural network models has recently received growing attention. These models have attained state-of-the-art results for dependency parsing (Chen and Manning, 2014) and constituency parsing (Dyer et al., 2016; Cross and Huang, 2016; Coavoux and Crabbé, 2016). 
Early work in neural network based parsing directly use a feed-forward neural network to predict parse trees (Chen and Manning, 2014). Vinyals et al. (2015) use a sequence-tosequence framework where the decoder outputs a linearized version of the parse tree given an input sentence. Generally, in these models, the correctness of the output tree is not strictly ensured (although empirically observed). Other parsing methods ensure structural consistency by operating in a transition-based setting (Chen and Manning, 2014) by parsing either in the top-down direction (Dyer et al., 2016; Liu and Zhang, 2017b), bottom-up (Zhu et al., 2013; Watanabe and Sumita, 2015; Cross and Huang, 2016) and recently in-order (Liu and Zhang, 2017a). Transition-based methods generally suffer from compounding errors due to exposure bias: during testing, the model is exposed to a very different regime (i.e. decisions sampled from the model itself) than what was encountered during training (i.e. the ground-truth decisions) (Daumé et al., 2009; Goldberg and Nivre, 2012). This can have catastrophic effects on test performance but 1178 can be mitigated to a certain extent by using beamsearch instead of greedy decoding. (Stern et al., 2017b) proposes an effective inference method for generative parsing, which enables direct decoding in those models. More complex training methods have been devised in order to alleviate this problem (Goldberg and Nivre, 2012; Cross and Huang, 2016). Other efforts have been put into neural chart-based parsing (Durrett and Klein, 2015; Stern et al., 2017a) which ensure structural consistency and offer exact inference with CYK algorithm. (Gaddy et al., 2018) includes a simplified CYK-style inference, but the complexity still remains in O(n3). In this work, our model learns to produce a particular representation of a tree in parallel. Representations can be computed in parallel, and the conversion from representation to a full tree can efficiently be done with a divide-and-conquer algorithm. As our model outputs decisions in parallel, our model doesn’t suffer from the exposure bias. Interestingly, a series of recent works, both in machine translation (Gu et al., 2018) and speech synthesis (Oord et al., 2017), considered the sequence of output variables conditionally independent given the inputs. 6 Conclusion We presented a novel constituency parsing scheme based on predicting real-valued scalars, named syntactic distances, whose ordering identify the sequence of top-down split decisions. We employ a neural network model that predicts the distances d and the constituent labels c. Given the algorithms presented in Section 2, we can build an unambiguous mapping between each (d, c, t) and a parse tree. One peculiar aspect of our model is that it predicts split decisions in parallel. Our experiments show that our model can achieve strong performance compare to previous models, while being significantly more efficient. Since the architecture of model is no more than a stack of standard recurrent and convolution layers, which are essential components in most academic and industrial deep learning frameworks, the deployment of this method would be straightforward. Acknowledgement The authors would like to thank Compute Canada for providing the computational resources. The authors would also like to thank Jackie Chi Kit Cheung for the helpful discussions. Zhouhan Lin would like to thank AdeptMind for generously supporting his research via scholarship. 
References Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 2442–2452. Samuel R. Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. 2016. A fast unified model for parsing and sentence understanding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1466–1477. Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender. 2005. Learning to rank using gradient descent. In Proceedings of the 22Nd International Conference on Machine Learning. pages 89–96. Chris Callison-Burch. 2008. Syntactic constraints on paraphrases extracted from parallel corpora. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 196–205. Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference. Association for Computational Linguistics, pages 132–139. Eugene Charniak and Mark Johnson. 2005. Coarseto-fine n-best parsing and maxent discriminative reranking. In Proceedings of the 43rd annual meeting on association for computational linguistics. Association for Computational Linguistics, pages 173– 180. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 740–750. Maximin Coavoux and Benoit Crabbé. 2016. Neural greedy constituent parsing with dynamic oracles. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics: Volume 1, Long Papers. Association for Computational Linguistics, pages 172–182. 1179 James Cross and Liang Huang. 2016. Span-based constituency parsing with a structure-label system and provably optimal dynamic oracles. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1–â ˘A¸S11. Hal Daumé, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine learning 75(3):297–325. Greg Durrett and Dan Klein. 2015. Neural crf parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 302–312. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 199â ˘A¸S–209. Akiko Eriguchi, Yoshimasa Tsuruoka, and Kyunghyun Cho. 2017. Learning to parse and translate improves neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, pages 72–78. 
Jenny Rose Finkel, Alex Kleeman, and Christopher D. Manning. 2008. Efficient, feature-based, conditional random field parsing. In Proceedings of ACL. Association for Computational Linguistics, pages 959–967. David Gaddy, Mitchell Stern, and Dan Klein. 2018. Whatâ ˘A´Zs going on in neural constituency parsers? an analysis. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Yoav Goldberg and Joakim Nivre. 2012. A dynamic oracle for arc-eager dependency parsing. In COLING 2012, 24th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers. pages 959–976. Jiatao Gu, James Bradbury, Caiming Xiong, Victor OK Li, and Richard Socher. 2018. Non-autoregressive neural machine translation. In Proceedings of International Conference on Learning Representations. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In Proceedings of ACL-08: HLT. Association for Computational Linguistics, pages 586–594. Jiangming Liu and Yue Zhang. 2017a. In-order transition-based constituent parsing. Transactions of the Association of Computational Linguistics 5(1):413–424. Jiangming Liu and Yue Zhang. 2017b. Shift-reduce constituent parsing with neural lookahead features. Transactions of the Association for Computational Linguistics 5:45–58. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2017. Regularizing and optimizing lstm language models. arXiv preprint arXiv:1708.02182 . Joakim Nivre. 2004. Incrementality in deterministic dependency parsing. In Proceedings of the Workshop on Incremental Parsing: Bringing Engineering and Cognition Together. Association for Computational Linguistics, pages 50–57. Aaron van den Oord, Yazhe Li, Igor Babuschkin, Karen Simonyan, Oriol Vinyals, Koray Kavukcuoglu, George van den Driessche, Edward Lockhart, Luis C Cobo, Florian Stimberg, et al. 2017. Parallel wavenet: Fast high-fidelity speech synthesis. arXiv preprint arXiv:1711.10433 . Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP). pages 1532–1543. Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference. pages 404–411. Yikang Shen, Zhouhan Lin, Chin-Wei Huang, and Aaron Courville. 2017. Neural language modeling by jointly learning syntax and lexicon. In Proceedings of the International Conference on Learning Representations. Hiroyuki Shindo, Yusuke Miyao, Akinori Fujino, and Masaaki Nagata. 2012. Bayesian symbol-refined tree substitution grammars for syntactic parsing. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Volume 1, Long Papers. Association for Computational Linguistics, pages 440–448. Mitchell Stern, Jacob Andreas, and Dan Klein. 2017a. A minimal span-based neural constituency parser. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 818–827. Mitchell Stern, Daniel Fried, and Dan Klein. 2017b. 
Effective inference for generative neural parsing. arXiv preprint arXiv:1707.08976 . 1180 Kristina Toutanova, Dan Klein, Christopher D Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language TechnologyVolume 1. Association for Computational Linguistics, pages 173–180. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems. pages 6000–6010. Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Advances in Neural Information Processing Systems. pages 2773–2781. Zhiguo Wang, Haitao Mi, and Nianwen Xue. 2015. Feature optimization for constituent parsing via neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). volume 1, pages 1138–1147. Zhiguo Wang and Nianwen Xue. 2014. Joint pos tagging and transition-based constituent parsing in chinese with non-local features. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 733–742. Taro Watanabe and Eiichiro Sumita. 2015. Transitionbased neural constituent parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing: Volume 1, Long Papers. pages 1169–1179. David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. arXiv preprint arXiv:1506.06158 . Jason Weston, Samy Bengio, and Nicolas Usunier. 2011. Wsabie: Scaling up to large vocabulary image annotation. In IJCAI 2011, Proceedings of the 22nd International Joint Conference on Artificial Intelligence. pages 2764–2770. Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. 2013. Fast and accurate shiftreduce constituent parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 434–443.
2018
108
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1181–1189 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1181 Gaussian Mixture Latent Vector Grammars Yanpeng Zhao, Liwen Zhang, Kewei Tu School of Information Science and Technology, ShanghaiTech University, Shanghai, China {zhaoyp1,zhanglw1,tukw}@shanghaitech.edu.cn Abstract We introduce Latent Vector Grammars (LVeGs), a new framework that extends latent variable grammars such that each nonterminal symbol is associated with a continuous vector space representing the set of (infinitely many) subtypes of the nonterminal. We show that previous models such as latent variable grammars and compositional vector grammars can be interpreted as special cases of LVeGs. We then present Gaussian Mixture LVeGs (GMLVeGs), a new special case of LVeGs that uses Gaussian mixtures to formulate the weights of production rules over subtypes of nonterminals. A major advantage of using Gaussian mixtures is that the partition function and the expectations of subtype rules can be computed using an extension of the inside-outside algorithm, which enables efficient inference and learning. We apply GM-LVeGs to part-of-speech tagging and constituency parsing and show that GM-LVeGs can achieve competitive accuracies. Our code is available at https://github.com/zhaoyanpeng/lveg. 1 Introduction In constituency parsing, refining coarse syntactic categories of treebank grammars (Charniak, 1996) into fine-grained subtypes has been proven effective in improving parsing results. Previous approaches to refining syntactic categories use tree annotations (Johnson, 1998), lexicalization (Charniak, 2000; Collins, 2003), or linguistically motivated category splitting (Klein and Manning, 2003). Matsuzaki et al. (2005) introduce latent variable grammars, in which each syntactic category (represented by a nonterminal) is split into a fixed number of subtypes and a discrete latent variable is used to indicate the subtype of the nonterminal when it appears in a specific parse tree. Since the latent variables are not observable in treebanks, the grammar is learned using expectation-maximization. Petrov et al. (2006) present a split-merge approach to learning latent variable grammars, which hierarchically splits each nonterminal and merges ineffective splits. Petrov and Klein (2008b) further allow a nonterminal to have different splits in different production rules, which results in a more compact grammar. Recently, neural approaches become very popular in natural language processing (NLP). An important technique in neural approaches to NLP is to represent discrete symbols such as words and syntactic categories with continuous vectors or embeddings. Since the distances between such vector representations often reflect the similarity between the corresponding symbols, this technique facilitates more informed smoothing in learning functions of symbols (e.g., the probability of a production rule). In addition, what a symbol represents may subtly depend on its context, and a continuous vector representation has the potential of representing each instance of the symbol in a more precise manner. For constituency parsing, recursive neural networks (Socher et al., 2011) and their extensions such as compositional vector grammars (Socher et al., 2013) can be seen as representing nonterminals in a context-free grammar with continuous vectors. 
However, exact inference in these models is intractable. In this paper, we introduce latent vector grammars (LVeGs), a novel framework of grammars with fine-grained nonterminal subtypes. A LVeG associates each nonterminal with a continuous vector space that represents the set of (infinitely many) subtypes of the nonterminal. For each in1182 stance of a nonterminal that appears in a parse tree, its subtype is represented by a latent vector. For each production rule over nonterminals, a nonnegative continuous function specifies the weight of any fine-grained production rule over subtypes of the nonterminals. Compared with latent variable grammars which assume a small fixed number of subtypes for each nonterminal, LVeGs assume an unlimited number of subtypes and are potentially more expressive. By having weight functions of varying smoothness for different production rules, LVeGs can also control the level of subtype granularity for different productions, which has been shown to improve the parsing accuracy (Petrov and Klein, 2008b). In addition, similarity between subtypes of a nonterminal can be naturally modeled by the distance between the corresponding vectors, so by using continuous and smooth weight functions we can ensure that similar subtypes will have similar syntactic behaviors. We further present Gaussian Mixture LVeGs (GM-LVeGs), a special case of LVeGs that uses mixtures of Gaussian distributions as the weight functions of fine-grained production rules. A major advantage of GM-LVeGs is that the partition function and the expectations of fine-grained production rules can be computed using an extension of the inside-outside algorithm. This makes it possible to efficiently compute the gradients during discriminative learning of GM-LVeGs. We evaluate GM-LVeGs on part-of-speech tagging and constituency parsing on a variety of languages and corpora and show that GM-LVeGs achieve competitive results. It shall be noted that many modern state-ofthe-art constituency parsers predict how likely a constituent is based on not only local information (such as the production rules used in composing the constituent), but also contextual information of the constituent. For example, the neural CRF parser (Durrett and Klein, 2015) looks at the words before and after the constituent; and RNNG (Dyer et al., 2016) looks at the constituents that are already predicted (in the stack) and the words that are not processed (in the buffer). In this paper, however, we choose to focus on the basic framework and algorithms of LVeGs and leave the incorporation of contextual information for future work. We believe that by laying a solid foundation for LVeGs, our work can pave the way for many interesting extensions of LVeGs in the future. 2 Latent Vector Grammars A latent vector grammar (LVeG) considers subtypes of nonterminals as continuous vectors and associates each nonterminal with a latent vector space representing the set of its subtypes. For each production rule, the LVeG defines a weight function over the subtypes of the nonterminal involved in the production rule. In this way, it models the space of refinements of the production rule. 
2.1 Model Definition A latent vector grammar is defined as a 5-tuple G = (N, S, Σ, R, W), where N is a finite set of nonterminal symbols, S ∈N is the start symbol, Σ is a finite set of terminal symbols such that N ∩Σ = ∅, R is a set production rules of the form X  γ where X ∈N and γ ∈(N ∪Σ)∗, W is a set of rule weight functions indexed by production rules in R (to be defined below). In the following discussion, we consider R in the Chomsky normal form (CNF) for clarity of presentation. However, it is straightforward to extend our formulation to the general case. Unless otherwise specified, we always use capital letters A, B, C, . . . for nonterminal symbols and use bold lowercase letters a, b, c, . . . for their subtypes. Note that subtypes are represented by continuous vectors. For a production rule of the form A  BC, its weight function is WABC(a, b, c). For a production rule of the form A  w where w ∈Σ, its weight function is WAw(a). The weight functions should be non-negative, continuous and smooth, and hence fine-grained production rules of similar subtypes of a nonterminal would have similar weight assignments. Rule weights can be normalized such that P B,C R b,c WABC(a, b, c)dbdc = 1, which leads to a probabilistic context-free grammar (PCFG). Whether the weights are normalized or not leads to different model classes and accordingly different estimation methods. However, the two model classes are proven equivalent by Smith and Johnson (2007). 2.2 Relation to Other Models Latent variable grammars (LVGs) (Matsuzaki et al., 2005; Petrov et al., 2006) associate each nonterminal with a discrete latent variable, which is used to indicate the subtype of the nonterminal when it appears in a parse tree. Through nonterminal-splitting and the 1183 expectation-maximization algorithm, fine-grained production rules can be automatically induced from a treebank. We show that LVGs can be seen as a special case of LVeGs. Specifically, we can use one-hot vectors in LVeGs to represent latent variables in LVGs and define weight functions in LVeGs accordingly. Consider a production rule r : A  BC. In a LVG, each nonterminal is split into a number of subtypes. Suppose A, B, and C are split into nA, nB, and nC subtypes respectively. ax is the x-th subtype of A, by is the y-th subtype of B, and cz is the z-th subtype of C. ax  bycz is a finegrained production rule of A  BC, where x = 1, . . . , nA, y = 1, . . . , nB, and z = 1, . . . , nC. The probabilities of all the fine-grained production rules can be represented by a rank-3 tensor ΘABC ∈RnA×nB×nC. To cast the LVG as a LVeG, we require that the latent vectors in the LVeG must be one-hot vectors. We achieve this by defining weight functions that output zero if any of the input vectors is not one-hot. Specifically, we define the weight function of the production rule A  BC as: Wr(a, b, c) = X x,y,z ΘABCcba × (δ(a −ax) × δ(b −by) × δ(c −cz)) , (1) where δ(·) is the Dirac delta function, ax ∈RnA, by ∈RnB, cz ∈RnC are one-hot vectors (which are zero everywhere with the exception of a single 1 at the x-th index of ax, the y-th index of by, and the z-th index of cz) and ΘABC is multiplied sequentially by c, b, and a. Compared with LVGs, LVeGs have the following advantages. While a LVG contains a finite, typically small number of subtypes for each nonterminal, a LVeG uses a continuous space to represent an infinite number of subtypes. 
When equipped with weight functions of sufficient complexity, LVeGs can represent more fine-grained syntactic categories and production rules than LVGs. By controlling the complexity and smoothness of the weight functions, a LVeG is also capable of representing any level of subtype granularity. Importantly, this allows us to change the level of subtype granularity for the same nonterminal in different production rules, which is similar to multi-scale grammars (Petrov and Klein, 2008b). In addition, with a continuous space of subtypes in a LVeG, similarity between subtypes can be naturally modeled by their distance in the space and can be automatically learned from data. Consequently, with continuous and smooth weight functions, fine-grained production rules over similar subtypes would have similar weights in LVeGs, eliminating the need for the extra smoothing steps that are necessary in training LVGs. Compositional vector grammars (CVGs) (Socher et al., 2013), an extension of recursive neural networks (RNNs) (Socher et al., 2011), can also be seen as a special case of LVeGs. For a production rule r : A  BC, a CVG can be interpreted as specifying its weight function Wr(a, b, c) in the following way. First, a neural network f indexed by B and C is used to compute a parent vector p = fBC(b, c). Next, the score of the parent vector is computed using a base PCFG and a vector vBC: s(p) = vT BCp + log P(A  BC) , (2) where P(A  BC) is the rule probability from the base PCFG. Then, the weight function of the production rule A  BC is defined as: Wr(a, b, c) = exp (s(p)) × δ(a −p) . (3) This form of weight functions in CVGs leads to point estimation of latent vectors in a parse tree, i.e., for each nonterminal in a given parse tree, only one subtype in the whole subtype space would lead to a non-zero weight of the parse. In addition, different parse trees of the same substring typically lead to different point estimations of the subtype vector at the root nonterminal. Consequently, CVGs cannot use dynamic programming for inference and hence have to resort to greedy search or beam search. 3 Gaussian Mixture LVeGs A major challenge in applying LVeGs to parsing is that it is impossible to enumerate the infinite number of subtypes. Previous work such as CVGs resorts to point estimation and greedy search. In this section we present Gaussian Mixture LVeGs (GMLVeGs), which use mixtures of Gaussian distributions as the weight functions in LVeGs. Because Gaussian mixtures have the nice property of being closed under product, summation, and marginalization, we can compute the partition function and the expectations of fine-grained production rules using dynamic programming. This in turn makes efficient learning and parsing possible. 1184 3.1 Representation In a GM-LVeG, the weight function of a production rule r is defined as a Gaussian mixture containing Kr mixture components: Wr(r) = Kr X k=1 ρr,k N(r|µr,k, Σr,k) , (4) where r is the concatenation of the latent vectors of the nonterminals in r, which denotes a finegrained production rule of r. ρr,k > 0 is the k-th mixture weight (the mixture weights do not necessarily sum up to 1), N(r|µr,k, Σr,k) is the k-th Gaussian distribution parameterized by mean µr,k and covariance matrix Σr,k, and Kr is the number of mixture components, which can be different for different production rules. Below we write N(r|µr,k, Σr,k) as Nr,k(r) for brevity. 
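To make Eq. 4 concrete, the following is a minimal NumPy sketch of a Gaussian-mixture rule weight with diagonal covariances. The parameter layout (one array of mixture weights, means, and variances per rule) is an illustrative assumption rather than the authors' implementation; the example dimensions match the settings reported later in the experiments (3-dimensional subtype vectors, K = 4 components, means initialized uniformly in [-0.05, 0.05], identity covariances, and mixture weights kept positive through an exponential reparameterization).

    # A minimal NumPy sketch of the GM-LVeG weight function in Eq. 4 with
    # diagonal Gaussians; the parameter layout is an illustrative assumption.
    import numpy as np

    def diag_gaussian_pdf(r, mean, var):
        """Density of a diagonal Gaussian N(r | mean, diag(var))."""
        z = (2.0 * np.pi) ** (len(r) / 2.0) * np.sqrt(np.prod(var))
        return np.exp(-0.5 * np.sum((r - mean) ** 2 / var)) / z

    def rule_weight(r, rho, means, variances):
        """W_r(r) = sum_k rho_k * N(r | mu_k, Sigma_k)   (Eq. 4)

        r         : concatenated subtype vectors, e.g. [a; b; c] for A -> B C
        rho       : (K,)   positive mixture weights (need not sum to 1)
        means     : (K, D) component means
        variances : (K, D) diagonal covariances (constant per row in the
                    spherical case used in the experiments)
        """
        return sum(w * diag_gaussian_pdf(r, m, v)
                   for w, m, v in zip(rho, means, variances))

    # Example: a binary rule A -> B C with 3-dimensional subtype vectors,
    # so r = [a; b; c] lives in R^9, and K = 4 mixture components.
    rng = np.random.default_rng(0)
    K, D = 4, 9
    rho = np.exp(rng.normal(size=K))            # rho_k = exp(theta_k) stays positive
    means = rng.uniform(-0.05, 0.05, size=(K, D))
    variances = np.ones((K, D))                 # identity covariance at initialization
    r = np.concatenate([rng.normal(size=3) for _ in range(3)])
    print(rule_weight(r, rho, means, variances))

Because products and marginals of such mixtures are again Gaussian mixtures, scores of this form can be combined bottom-up and top-down in closed form, which is what makes the inside-outside computation described next tractable.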
Given a production rule of the form A  BC, the GMLVeG expects r = [a; b; c] and a, b, c ∈Rd, where d is the dimension of the vectors a, b, c. We use the same dimension for all the subtype vectors. For the sake of computational efficiency, we use diagonal or spherical Gaussian distributions, whose covariance matrices are diagonal, so that the inverse of covariance matrices in Equation 15– 16 can be computed in linear time. A spherical Gaussian has a diagonal covariance matrix where all the diagonal elements are equal, so it has fewer free parameters than a diagonal Gaussian and results in faster learning and parsing. We empirically find that spherical Gaussians lead to slightly better balance between the efficiency and the parsing accuracy than diagonal Gaussians. 3.2 Parsing The goal of parsing is to find the most probable parse tree T ∗with unrefined nonterminals for a sentence w of n words w1:n = w1 . . . wn. This is formally defined as: T ∗= argmax T∈G(w) P(T|w) , (5) where G(w) denotes the set of parse trees with unrefined nonterminals for w. In a PCFG, T ∗can be found using dynamic programming such as the CYK algorithm. However, parsing becomes intractable with LVeGs, and even with LVGs, the special case of LVeGs. A common practice in parsing with LVGs is to use max-rule parsing (Petrov et al., 2006; Petrov and Klein, 2007). The basic idea of max-rule parsing is to decompose the posteriors over parses into the posteriors over production rules approximately. This requires calculating the expected counts of unrefined production rules in parsing the input sentence. Since Gaussian mixtures are closed under product, summation, and marginalization, in GM-LVeGs the expected counts can be calculated using the inside-outside algorithm in the following way. Given a sentence w1:n, we first calculate the inside score sA I (a, i, j) and outside score sA O(a, i, j) for a nonterminal A over a span wi:j using Equation 6 and Equation 7 in Table 1 respectively. Note that both sA I (a, i, j) and sA O(a, i, j) are mixtures of Gaussian distributions of the subtype vector a. Next, using Equation 8 in Table 1, we calculate the score s(A  BC, i, k, j) (1 ≤i ≤k < j ≤n), where ⟨A  BC, i, k, j⟩ represents a production rule A  BC with nonterminals A, B, and C spanning words wi:j, wi,k, and wk+1:j respectively in the sentence w1:n. Then the expected count (or posterior) of ⟨A  BC, i, k, j⟩ is calculated as: q(A  BC, i, k, j) = s(A  BC, i, k, j) sI(S, 1, n) , (9) where sI(S, 1, n) is the inside score for the start symbol S spanning the whole sentence w1:n. After calculating all the expected counts, we can use the MAX-RULE-PRODUCT algorithm (Petrov and Klein, 2007) for parsing, which returns a parse with the highest probability that all the production rules are correct. Its objective function is given by T ∗ q = argmax T∈G(w) Y e∈T q(e) , (10) where e ranges over all the 4-tuples ⟨A  BC, i, k, j⟩in the parse tree T. This objective function can be efficiently solved by dynamic programming such as the CYK algorithm. Although the time complexity of the insideoutside algorithm with GM-LVeGs is polynomial in the sentence length and the nonterminal number, in practice the algorithm is still slow because the number of Gaussian components in the inside and outside scores increases dramatically with the recursion depth. To speed up the computation, we prune Gaussian components in the inside and outside scores using the following technique. 
Suppose we have a minimum pruning threshold kmin 1185 sA I (a, i, j) = P ABC∈R P k=i,··· ,j−1 ZZ WABC(a, b, c) × sB I (b, i, k) × sC I (c, k + 1, j) dbdc .(6) sA O(a, i, j) = P BCA∈R P k=1,··· ,i−1 ZZ WBCA(b, c, a) × sB O(b, k, j) × sC I (c, k, i −1) dbdc + P BAC∈R P k=j+1,··· ,n ZZ WBAC(b, a, c) × sB O(b, i, k) × sC I (c, j + 1, k) dbdc .(7) s(A  BC, i, k, j) = ZZZ WABC(a, b, c) × sA O(a, i, j) × sB I (b, i, k) × sC I (c, k + 1, j) dadbdc .(8) Table 1: Equation 6: sA I (a, i, j) is the inside score of a nonterminal A over a span wi:j in the sentence w1:n, where 1 ≤ i < j ≤n. Equation 7: sA O (a, i, j) is the outside score of a nonterminal A over a span wi:j in the sentence w1:n, where 1 ≤i ≤j ≤n. Equation 8: s(A  BC, i, k, j) is the score of a production rule A  BC with nonterminals A, B, and C spanning words wi:j, wi,k, and wk+1:j respectively in the sentence w1:n, where 1 ≤i ≤k < j ≤n. and a maximum pruning threshold kmax. Given an inside or outside score with kc Gaussian components, if kc ≤kmin, then we do not prune any Gaussian component; otherwise, we compute kallow = min{kmin + floor(kϑ c ), kmax} (0 ≤ϑ ≤ 1 is a constant) and keep only kallow components with the largest mixture weights. In addition to component pruning, we also employ two constituent pruning techniques to reduce the search space during parsing. The first technique is used by Petrov et al. (2006). Before parsing a sentence with a GM-LVeG, we run the inside-outside algorithm with the treebank grammar and calculate the posterior probability of every nonterminal spanning every substring. Then a nonterminal would be pruned from a span if its posterior probability is below a pre-specified threshold pmin. When parsing with GM-LVeGs, we only consider the unpruned nonterminals for each span. The second constituent pruning technique is similar to the one used by Socher et al. (2013). Note that for a strong constituency parser such as the Berkeley parser (Petrov and Klein, 2007), the constituents in the top 200 best parses of a sentence can cover almost all the constituents in the gold parse tree. So we first use an existing constituency parser to run k-best parsing with k = 200 on the input sentence. Then we parse with a GM-LVeG and only consider the constituents that appear in the top 200 parses. Note that this method is different from the re-ranking technique because it may produce a parse different from the top 200 parses. 3.3 Learning Given a training dataset D = {(Ti, wi) | i = 1, . . . , m} containing m samples, where Ti is the gold parse tree with unrefined nonterminals for the sentence wi, the objective of discriminative learning is to minimize the negative log conditional likelihood: L(Θ) = −log m Y i=1 P(Ti|wi; Θ) , (11) where Θ represents the set of parameters of the GM-LVeG. We optimize the objective function using the Adam (Kingma and Ba, 2014) optimization algorithm. The derivative with respect to Θr, the parameters of the weight function Wr(r) of an unrefined production rule r, is calculated as follows (the derivation is in the supplementary material): ∂L(Θ) ∂Θr = m X i=1 Z ∂Wr(r) ∂Θr (12) × EP(t|wi)[fr(t)] −EP(t|Ti)[fr(t)] Wr(r)  dr , where t indicates a parse tree with nonterminal subtypes, and fr(t) is the number of occurrences of the unrefined rule r in the unrefined parse tree that is obtained by replacing all the subtypes in t with the corresponding nonterminals. The two expectations in Equation 12 can be efficiently computed using the inside-outside algorithm. 
Because the second expectation is conditioned on the parse tree Ti, in Equation 6 and Equation 7 we can skip all the summations and assign the values of B, C, and k according to Ti. 1186 In GM-LVeGs, Θr is the set of parameters in a Gaussian mixture: Θr = {(ρr,k, µr,k, Σr,k)|k = 1, . . . , Kr} . (13) According to Equation 12, we need to take the derivatives of Wr(r) respect to ρr,k, µr,k, and Σr,k respectively: ∂Wr(r)/∂ρr,k = Nr,k(r) , (14) ∂Wr(r)/∂µr,k = ρr,kNr,k(r)Σ−1 r,k(r −µr,k) ,(15) ∂Wr(r)/∂Σr,k = ρr,kNr,k(r)Σ−1 r,k 1 2  −I (16) + (r −µr,k)(r −µr,k)T Σ−1 r,k  . Substituting Equation 14–16 into Equation 12, we have the full gradient formulations of all the parameters. In spite of the integral in Equation 12, we can derive a closed-form solution for the gradient of each parameter, which is shown in the supplementary material. In order to keep each mixture weight ρr,k positive, we do not directly optimize ρr,k; instead, we set ρr,k = exp(θρr,k) and optimize θρr,k by gradient descent. We use a similar trick to keep each covariance matrix Σr,k positive definite. Since we use the inside-outside algorithm described in Section 3.2 to calculate the two expectations in Equation 12, we face the same efficiency problem that we encounter in parsing. To speed up the computation,we again use both component pruning and constituent pruning introduced in Section 3.2. Because gradient descent is often sensitive to the initial values of the parameters, we employ the following informed initialization method. Mixture weights are initialized using the treebank grammar. Suppose in the treebank grammar P(r) is the probability of a production rule r. We initialize the mixture weights in the weight function Wr by ρr,k = α · P(r) where α > 1 is a constant. We initialize all the covariance matrices to identity matrices and initialize each mean with a value uniformly sampled from [−0.05, 0.05]. 4 Experiment We evaluate the GM-LVeG on part-of-speech (POS) tagging and constituency parsing and compare it against its special cases such as LVGs and CVGs. It shall be noted that in this paper we focus on the basic framework of LVeGs and aim to show its potential advantage over previous special cases. It is therefore not our goal to compete with the latest state-of-the-art approaches to tagging and parsing. In particular, we currently do not incorporate contextual information of words and constituents during tagging and parsing, while such information is critical in achieving state-of-the-art accuracy. We will discuss future improvements of LVeGs in Section 5. 4.1 Datasets Parsing. We use the Wall Street Journal corpus from the Penn English Treebank (WSJ) (Marcus et al., 1994). Following the standard data splitting, we use sections 2 to 21 for training, section 23 for testing, and section 22 for development. We preprocess the treebank using a right-branching binarization procedure to obtain an unannotated X-bar grammar, so that there are only binary and unary production rules. To deal with the problem of unknown words in testing, we adopt the unknown word features used in the Berkeley parser and set the unknown word threshold to 1. Specifically, any word occurring less than two times is replaced by one of the 60 unknown word categories. Tagging. (1) We use Wall Street Journal corpus from the Penn English Treebank (WSJ) (Marcus et al., 1994). Following the standard data splitting, we use sections 0 to 18 for training, sections 22 to 24 for testing, and sections 19 to 21 for development. 
(2) The Universal Dependencies treebank 1.4 (UD) (Nivre et al., 2016), in which English, French, German, Russian, Spanish, Indonesian, Finnish, and Italian treebanks are used. We use the original data splitting of these corpora for training and testing. For both WSJ and UD English treebanks, we deal with unknown words in the same way as we do in parsing. For the rest of the data, we use only one unknown word category and the unknown word threshold is also set to 1. 4.2 POS Tagging POS tagging is the task of labeling each word in a sentence with the most probable part-of-speech tag. Here we focus on POS tagging with Hidden Markov Models (HMMs). Because HMMs are equivalent to probabilistic regular grammars, we can extend HMMs with both LVGs and LVeGs. Specifically, the hidden states in HMMs can be seen as nonterminals in regular grammars and therefore can be associated with latent variables or latent vectors. 1187 We implement two training methods for LVGs. The first (LVG-G) is generative training using expectation-maximization that maximizes the joint probability of the sentence and the tags. The second (LVG-D) is discriminative training using gradient descent that maximizes the conditional probability of the tags given the sentence. In both cases, each nonterminal is split into a fixed number of subtypes. In our experiments we test 1, 2, 4, 8, and 16 subtypes of each nonterminal. Due to the limited space, we only report experimental results of LVG with 16 subtypes for each nonterminal. Full experimental results can be found in the supplementary material. We experiment with two different GM-LVeGs: GM-LVeG-D with diagonal Gaussians and GMLVeG-S with spherical Gaussians. In both cases, we fix the number of Gaussian components Kr to 4 and the dimension of the latent vectors d to 3. We do not use any pruning techniques in learning and inference because we find that our algorithm is fast enough with the current setting of Kr and d. We train the GM-LVeGs for 20 epoches and select the models with the best token accuracy on the development data for the final testing. We report both token accuracy and sentence accuracy of POS tagging in Table 2. It can be seen that, on all the testing data, GM-LVeGs consistently surpass LVGs in terms of both token accuracy and sentence accuracy. GM-LVeG-D is slightly better than GM-LVeG-S in sentence accuracy, producing the best sentence accuracy on 5 of the 9 testing datasets. GM-LVeG-S performs slightly better than GM-LVeG-D in token accuracy on 5 of the 9 datasets. Overall, there is not significant difference between GM-LVeG-D and GMLVeG-S. However, GM-LVeG-S admits more efficient learning than GM-LVeG-D in practice since it has fewer parameters. 4.3 Parsing For efficiency, we train GM-LVeGs only on sentences with no more than 50 words (totally 39115 sentences). Since we have found that spherical Gaussians are better than diagonal Gaussians considering both model performance and learning efficiency, here we use spherical Gaussians in the weight functions. The dimension of latent vectors d is set to 3, and all the Gaussian mixtures have Kr = 4 components. We use α = 8 in initializing mixture weights. We train the GM-LVeG for 15 epoches and select the model with the highest F1 score on the development data for the final testing. We use component pruning in both learning and parsing, with kmax = 50 and ϑ = 0.35 in both learning and parsing, kmin = 40 in learning and kmin = 20 in parsing. 
During learning we use the first constituent pruning technique with the pruning threshold pmin = 1e −5, and during parsing we use the second constituent pruning technique based on the Berkeley parser which produced 133 parses on average for each testing sentence. As can be seen, we use weaker pruning during training than during testing. This is because in training stronger pruning (even if accurate) results in worse estimation of the first expectation in Equation 12, which makes gradient computation less accurate. We compare LVeGs with CVGs and several variants of LVGs: (1) LVG-G-16 and LVG-D-16, which are LVGs with 16 subtypes for each nonterminal with discriminative and generative training respectively (accuracies obtained from Petrov and Klein (2008a)); (2) Multi-scale grammars (Petrov and Klein, 2008b), trained without using the span features in order for a fair comparison; (3) Berkeley parser (Petrov and Klein, 2007) (accuracies obtained from Petrov and Klein (2008b) because Petrov and Klein (2007) do not report exact match scores). The experimental results are shown in Table 3. It can be seen that GM-LVeG-S produces the best F1 scores on both the development data and the testing data. It surpasses the Berkeley parser by 0.92% in F1 score on the testing data. Its exact match score on the testing data is only slightly lower than that of LVG-D-16. We further investigate the influence of the latent vector dimension and the Gaussian component number on the efficiency and the parsing accuracy . We experiment on a small dataset (statistics of this dataset are in the supplemental material). We first fix the component number to 4 and experiment with the dimension 2, 3, 4, 5, 6, 7, 8, 9. Then we fix the dimension to 3 and experiment with the component number 2, 3, 4, 5, 6, 7, 8, 9. F1 scores on the development data are shown in the first row in Figure 1. Average time consumed per epoch in learning is shown in the second row in Figure 1. When Kr = 4, the best dimension is 5; when d = 3, the best Gaussian component number is 3. A higher dimension or a larger Gaussian component number hurts the model performance and requires much more time for learning. Thus 1188 Model WSJ English French German Russian Spanish Indonesian Finnish Italian T S T S T S T S T S T S T S T S T S LVG-D-16 96.62 48.74 92.31 52.67 93.75 34.90 87.38 20.98 81.91 12.25 92.47 24.82 89.27 20.29 83.81 19.29 94.81 45.19 LVG-G-16 96.78 50.88 93.30 57.54 94.52 34.90 88.92 24.05 84.03 16.63 93.21 27.37 90.09 21.19 85.01 20.53 95.46 48.26 GM-LVeG-D 96.99 53.10 93.66 59.46 94.73 39.60 89.11 24.77 84.21 17.84 93.76 32.48 90.24 21.72 85.27 23.30 95.61 50.72 GM-LVeG-S 97.00 53.11 93.55 58.11 94.74 39.26 89.14 25.58 84.06 18.44 93.52 30.66 90.12 21.72 85.35 22.07 95.62 49.69 Table 2: Token accuracy (T) and sentence accuracy (S) for POS tagging on the testing data. Model dev (all) test ≤40 test (all) F1 F1 EX F1 EX LVG-G-16 88.70 35.80 LVG-D-16 89.30 39.40 Multi-Scale 89.70 39.60 89.20 37.20 Berkeley Parser 90.60 39.10 90.10 37.10 CVG (SU-RNN) 91.20 91.10 90.40 GM-LVeG-S 91.24 91.38 41.51 91.02 39.24 Table 3: Parsing accuracy on the testing data of WSJ. EX indicates the exact match score. our choice of Kr = 4 and d = 3 in GM-LVeGs for parsing is a good balance between the efficiency and the parsing accuracy. 83.5 84.0 84.5 85.0 85.5 F1 Score 2 4 6 8 d: dimension 50 75 100 125 150 175 Time (min) Per Epoch 2 4 6 8 kr: # of Gaussian components Figure 1: F1 score and average time (min) consumed per epoch in learning. 
Left: # of Gaussian components fixed to 4 with different dimensions; Right: dimension of Gaussians fixed to 3 with different # of Gaussian components. 5 Discussion It shall be noted that in this paper we choose to focus on the basic framework and algorithms of LVeGs, and therefore we leave a few important extensions for future work. One extension is to incorporate contextual information of words and constituents. which is a crucial technique that can be found in most state-of-the-art approaches to parsing or POS tagging. One possible way to utilize contextual information in LVeGs is to allow the words in the context of an anchored production rule to influence the rule’s weight function. For example, we may learn neural networks to predict the parameters of the Gaussian mixture weight functions in a GM-LVeG from the pre-trained embeddings of the words in the context. In GM-LVeGs, we currently use the same number of Gaussian components for all the weight functions. A more desirable way would be automatically determining the number of Gaussian components for each production rule based on the ideal refinement granularity of the rule, e.g., we may need more Gaussian components for NP  DT NN than for NP  DT JJ, since the latter is rarely used. There are a few possible ways to learn the component numbers such as greedy addition and removal, the split-merge method, and sparsity priors over mixture weights. An interesting extension beyond LVeGs is to have a single continuous space for subtypes of all the nonterminals. Ideally, subtypes of the same nonterminal or similar nonterminals are close to each other. The benefit is that similarity between nonterminals can now be modeled. 6 Conclusion We present Latent Vector Grammars (LVeGs) that associate each nonterminal with a latent continuous vector space representing the set of subtypes of the nonterminal. For each production rule, a LVeG defines a continuous weight function over the subtypes of the nonterminals involved in the rule. We show that LVeGs can subsume latent variable grammars and compositional vector grammars as special cases. We then propose Gaussian mixture LVeGs (GM-LVeGs). which formulate weight functions of production rules by mixtures of Gaussian distributions. The partition function and the expectations of fine-grained production rules in GM-LVeGs can be efficiently computed using dynamic programming, which makes learning and inference with GM-LVeGs feasible. 1189 We empirically show that GM-LVeGs can achieve competitive accuracies on POS tagging and constituency parsing. Acknowledgments This work was supported by the National Natural Science Foundation of China (61503248), Major Program of Science and Technology Commission Shanghai Municipal (17JC1404102), and Program of Shanghai Subject Chief Scientist (A type) (No.15XD1502900). We would like to thank the anonymous reviewers for their careful reading and useful comments. References Eugene Charniak. 1996. Tree-bank grammars. In Proceedings of the 30th National Conference on Artificial Intelligence, volume 2, pages 1031–1036. Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the 1st Meeting of the North American Chapter of the Association for Computational Linguistics, pages 132–139. Association for Computational Linguistics. Michael Collins. 2003. Head-driven statistical models for natural language parsing. Computational linguistics, 29(4):589–637. Greg Durrett and Dan Klein. 2015. Neural CRF parsing. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pages 302–312. Association for Computational Linguistics. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199–209. Association for Computational Linguistics. Mark Johnson. 1998. PCFG models of linguistic tree representations. Computational Linguistics, 24(4):613–632. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Dan Klein and Christopher D Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st annual meeting on Association for Computational Linguistics, pages 423–430. Association for Computational Linguistics. Mitchell Marcus, Grace Kim, Mary Ann Marcinkiewicz, Robert MacIntyre, Ann Bies, Mark Ferguson, Karen Katz, and Britta Schasberger. 1994. The penn treebank: annotating predicate argument structure. In Proceedings of the workshop on Human Language Technology, pages 114–119. Association for Computational Linguistics. Takuya Matsuzaki, Yusuke Miyao, and Jun’ichi Tsujii. 2005. Probabilistic CFG with latent annotations. In Proceedings of the 43rd annual meeting on Association for Computational Linguistics, pages 75–82. Association for Computational Linguistics. Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Manning, Ryan T. McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal dependencies v1: A multilingual treebank collection. In Proceedings of the 10th International Conference on Language Resources and Evaluation, pages 1659–1666. Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 44th annual meeting of the Association for Computational Linguistics, pages 433–440. Association for Computational Linguistics. Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Proceedings of the 2007 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 404–411. Association for Computational Linguistics. Slav Petrov and Dan Klein. 2008a. Discriminative loglinear grammars with latent variables. In Advances in Neural Information Processing Systems 20, pages 1153–1160. Slav Petrov and Dan Klein. 2008b. Sparse multi-scale grammars for discriminative latent variable parsing. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 867–876. Association for Computational Linguistics. Noah A Smith and Mark Johnson. 2007. Weighted and probabilistic context-free grammars are equally expressive. Computational Linguistics, 33(4):477– 491. Richard Socher, John Bauer, Christopher D Manning, et al. 2013. Parsing with compositional vector grammars. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, volume 1, pages 455–465. Association for Computational Linguistics. Richard Socher, Cliff C Lin, Chris Manning, and Andrew Y Ng. 2011. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 28th International Conference on Machine Learning, pages 129–136.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 110–121 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 110 Improving Knowledge Graph Embedding Using Simple Constraints Boyang Ding1,2, Quan Wang1,2,3∗, Bin Wang1,2, Li Guo1,2 1Institute of Information Engineering, Chinese Academy of Sciences 2School of Cyber Security, University of Chinese Academy of Sciences 3State Key Laboratory of Information Security, Chinese Academy of Sciences {dingboyang,wangquan,wangbin,guoli}@iie.ac.cn Abstract Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts focused on either designing more complicated triple scoring models, or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The constraints imposed indeed improve model interpretability, leading to a substantially increased structuring of the embedding space. Code and data are available at https://github.com/i ieir-km/ComplEx-NNE_AER. 1 Introduction The past decade has witnessed great achievements in building web-scale knowledge graphs (KGs), e.g., Freebase (Bollacker et al., 2008), DBpedia (Lehmann et al., 2015), and Google’s Knowledge ∗Corresponding author: Quan Wang. Vault (Dong et al., 2014). A typical KG is a multirelational graph composed of entities as nodes and relations as different types of edges, where each edge is represented as a triple of the form (head entity, relation, tail entity). Such KGs contain rich structured knowledge, and have proven useful for many NLP tasks (Wasserman-Pritsker et al., 2015; Hoffmann et al., 2011; Yang and Mitchell, 2017). Recently, the concept of knowledge graph embedding has been presented and quickly become a hot research topic. The key idea there is to embed components of a KG (i.e., entities and relations) into a continuous vector space, so as to simplify manipulation while preserving the inherent structure of the KG. Early works on this topic learned such vectorial representations (i.e., embeddings) via just simple models developed over KG triples (Bordes et al., 2011, 2013; Jenatton et al., 2012; Nickel et al., 2011). Recent attempts focused on either designing more complicated triple scoring models (Socher et al., 2013; Bordes et al., 2014; Wang et al., 2014; Lin et al., 2015b; Xiao et al., 2016; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017), or incorporating extra information beyond KG triples (Chang et al., 2014; Zhong et al., 2015; Lin et al., 2015a; Neelakantan et al., 2015; Guo et al., 2015; Luo et al., 2015b; Xie et al., 2016a,b; Xiao et al., 2017). See (Wang et al., 2017) for a thorough review. 
This paper, by contrast, investigates the potential of using very simple constraints to improve the KG embedding task. Specifically, we examine two types of constraints: (i) non-negativity constraints on entity representations and (ii) approximate entailment constraints over relation representations. By using the former, we learn compact representations for entities, which would naturally induce sparsity and interpretability (Murphy et al., 2012). By using the latter, we further encode regularities of logical entailment between relations into their 111 distributed representations, which might be advantageous to downstream tasks like link prediction and relation extraction (Rockt¨aschel et al., 2015; Guo et al., 2016). These constraints impose prior beliefs upon the structure of the embedding space, and will help us to learn more predictive embeddings, without significantly increasing the space or time complexity. Our work has some similarities to those which integrate logical background knowledge into KG embedding (Rockt¨aschel et al., 2015; Wang et al., 2015; Guo et al., 2016, 2018). Most of such works, however, need grounding of first-order logic rules. The grounding process could be time and space inefficient especially for complicated rules. To avoid grounding, Demeester et al. (2016) tried to model rules using only relation representations. But their work creates vector representations for entity pairs rather than individual entities, and hence fails to handle unpaired entities. Moreover, it can only incorporate strict, hard rules which usually require extensive manual effort to create. Minervini et al. (2017b) proposed adversarial training which can integrate first-order logic rules without grounding. But their work, again, focuses on strict, hard rules. Minervini et al. (2017a) tried to handle uncertainty of rules. But their work assigns to different rules a same confidence level, and considers only equivalence and inversion of relations, which might not always be available in a given KG. Our approach differs from the aforementioned works in that: (i) it imposes constraints directly on entity and relation representations without grounding, and can easily scale up to large KGs; (ii) the constraints, i.e., non-negativity and approximate entailment derived automatically from statistical properties, are quite universal, requiring no manual effort and applicable to almost all KGs; (iii) it learns an individual representation for each entity, and can successfully make predictions between unpaired entities. We evaluate our approach on publicly available KGs of WordNet, Freebase, and DBpedia as well. Experimental results indicate that our approach is simple yet surprisingly effective, achieving significant and consistent improvements over competitive baselines, but without negative impacts on efficiency or scalability. The non-negativity and approximate entailment constraints indeed improve model interpretability, resulting in a substantially increased structuring of the embedding space. The remainder of this paper is organized as follows. We first review related work in Section 2, and then detail our approach in Section 3. Experiments and results are reported in Section 4, followed by concluding remarks in Section 5. 2 Related Work Recent years have seen growing interest in learning distributed representations for entities and relations in KGs, a.k.a. KG embedding. 
Early works on this topic devised very simple models to learn such distributed representations, solely on the basis of triples observed in a given KG, e.g., TransE which takes relations as translating operations between head and tail entities (Bordes et al., 2013), and RESCAL which models triples through bilinear operations over entity and relation representations (Nickel et al., 2011). Later attempts roughly fell into two groups: (i) those which tried to design more complicated triple scoring models, e.g., the TransE extensions (Wang et al., 2014; Lin et al., 2015b; Ji et al., 2015), the RESCAL extensions (Yang et al., 2015; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017), and the (deep) neural network models (Socher et al., 2013; Bordes et al., 2014; Shi and Weninger, 2017; Schlichtkrull et al., 2017; Dettmers et al., 2018); (ii) those which tried to integrate extra information beyond triples, e.g., entity types (Guo et al., 2015; Xie et al., 2016b), relation paths (Neelakantan et al., 2015; Lin et al., 2015a), and textual descriptions (Xie et al., 2016a; Xiao et al., 2017). Please refer to (Nickel et al., 2016a; Wang et al., 2017) for a thorough review of these techniques. In this paper, we show the potential of using very simple constraints (i.e., nonnegativity constraints and approximate entailment constraints) to improve KG embedding, without significantly increasing the model complexity. A line of research related to ours is KG embedding with logical background knowledge incorporated (Rockt¨aschel et al., 2015; Wang et al., 2015; Guo et al., 2016, 2018). But most of such works require grounding of first-order logic rules, which is time and space inefficient especially for complicated rules. To avoid grounding, Demeester et al. (2016) proposed lifted rule injection, and Minervini et al. (2017b) investigated adversarial training. Both works, however, can only handle strict, hard rules which usually require extensive effort to create. Minervini et al. (2017a) tried to handle uncertainty of background knowledge. But their work 112 considers only equivalence and inversion between relations, which might not always be available in a given KG. Our approach, in contrast, imposes constraints directly on entity and relation representations without grounding. And the constraints used are quite universal, requiring no manual effort and applicable to almost all KGs. Non-negativity has long been a subject studied in various research fields. Previous studies reveal that non-negativity could naturally induce sparsity and, in most cases, better interpretability (Lee and Seung, 1999). In many NLP-related tasks, nonnegativity constraints are introduced to learn more interpretable word representations, which capture the notion of semantic composition (Murphy et al., 2012; Luo et al., 2015a; Fyshe et al., 2015). In this paper, we investigate the ability of non-negativity constraints to learn more accurate KG embeddings with good interpretability. 3 Our Approach This section presents our approach. We first introduce a basic embedding technique to model triples in a given KG (§ 3.1). Then we discuss the nonnegativity constraints over entity representations (§ 3.2) and the approximate entailment constraints over relation representations (§ 3.3). And finally we present the overall model (§ 3.4). 3.1 A Basic Embedding Model We choose ComplEx (Trouillon et al., 2016) as our basic embedding model, since it is simple and efficient, achieving state-of-the-art predictive performance. 
Specifically, suppose we are given a KG containing a set of triples O = {(ei, rk, ej)}, with each triple composed of two entities ei, ej ∈E and their relation rk ∈R. Here E is the set of entities and R the set of relations. ComplEx then represents each entity e ∈E as a complex-valued vector e ∈Cd, and each relation r ∈R a complex-valued vector r ∈Cd, where d is the dimensionality of the embedding space. Each x ∈Cd consists of a real vector component Re(x) and an imaginary vector component Im(x), i.e., x = Re(x) + iIm(x). For any given triple (ei, rk, ej) ∈E × R × E, a multilinear dot product is used to score that triple, i.e., φ(ei, rk, ej) ≜Re(⟨ei, rk, ¯ej⟩) ≜Re( X ℓ[ei]ℓ[rk]ℓ[¯ej]ℓ), (1) where ei, rk, ej ∈Cd are the vectorial representations associated with ei, rk, ej, respectively; ¯ej is the conjugate of ej; [·]ℓis the ℓ-th entry of a vector; and Re(·) means taking the real part of a complex value. Triples with higher φ(·, ·, ·) scores are more likely to be true. Owing to the asymmetry of this scoring function, i.e., φ(ei, rk, ej) ̸= φ(ej, rk, ei), ComplEx can effectively handle asymmetric relations (Trouillon et al., 2016). 3.2 Non-negativity of Entity Representations On top of the basic ComplEx model, we further require entities to have non-negative (and bounded) vectorial representations. In fact, these distributed representations can be taken as feature vectors for entities, with latent semantics encoded in different dimensions. In ComplEx, as well as most (if not all) previous approaches, there is no limitation on the range of such feature values, which means that both positive and negative properties of an entity can be encoded in its representation. However, as pointed out by Murphy et al. (2012), it would be uneconomical to store all negative properties of an entity or a concept. For instance, to describe cats (a concept), people usually use positive properties such as cats are mammals, cats eat fishes, and cats have four legs, but hardly ever negative properties like cats are not vehicles, cats do not have wheels, or cats are not used for communication. Based on such intuition, this paper proposes to impose non-negativity constraints on entity representations, by using which only positive properties will be stored in these representations. To better compare different entities on the same scale, we further require entity representations to stay within the hypercube of [0, 1]d, as approximately Boolean embeddings (Kruszewski et al., 2015), i.e., 0 ≤Re(e), Im(e) ≤1, ∀e ∈E, (2) where e ∈Cd is the representation for entity e ∈ E, with its real and imaginary components denoted by Re(e), Im(e) ∈Rd; 0 and 1 are d-dimensional vectors with all their entries being 0 or 1; and ≥, ≤ , = denote the entry-wise comparisons throughout the paper whenever applicable. As shown by Lee and Seung (1999), non-negativity, in most cases, will further induce sparsity and interpretability. 3.3 Approximate Entailment for Relations Besides the non-negativity constraints over entity representations, we also study approximate entailment constraints over relation representations. By approximate entailment, we mean an ordered pair 113 of relations that the former approximately entails the latter, e.g., BornInCountry and Nationality, stating that a person born in a country is very likely, but not necessarily, to have a nationality of that country. Each such relation pair is associated with a weight to indicate the confidence level of entailment. A larger weight stands for a higher level of confidence. 
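Before moving on, the scoring function in Equation 1 and the non-negativity constraint in Equation 2 are straightforward to express in code. The snippet below is a minimal NumPy sketch for illustration only; the function names, the toy dimensionality, and the random values are ours, not the authors' implementation.

```python
import numpy as np

def complex_score(e_i, r_k, e_j):
    """ComplEx triple score: Re(<e_i, r_k, conj(e_j)>), as in Eq. (1)."""
    return np.real(np.sum(e_i * r_k * np.conj(e_j)))

def clamp_entity(e):
    """Project an entity embedding into [0, 1]^d (real and imaginary parts),
    i.e. the non-negativity / boundedness constraint of Eq. (2)."""
    return np.clip(e.real, 0.0, 1.0) + 1j * np.clip(e.imag, 0.0, 1.0)

# Toy usage with d = 4 (values are arbitrary).
rng = np.random.default_rng(0)
d = 4
e_i = clamp_entity(rng.normal(size=d) + 1j * rng.normal(size=d))
e_j = clamp_entity(rng.normal(size=d) + 1j * rng.normal(size=d))
r_k = rng.normal(size=d) + 1j * rng.normal(size=d)
print(complex_score(e_i, r_k, e_j))
```

Because the score is asymmetric in e_i and e_j, swapping head and tail generally changes the score, which is what lets ComplEx handle asymmetric relations.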
We denote by rp λ−→rq the approximate entailment between relations rp and rq, with confidence level λ. This kind of entailment can be derived automatically from a KG by modern rule mining systems (Gal´arraga et al., 2015). Let T denote the set of all such approximate entailments derived beforehand. Before diving into approximate entailment, we first explore the modeling of strict entailment, i.e., entailment with infinite confidence level λ = +∞. The strict entailment rp →rq states that if relation rp holds then relation rq must also hold. This entailment can be roughly modelled by requiring φ(ei, rp, ej) ≤φ(ei, rq, ej), ∀ei, ej ∈E, (3) where φ(·, ·, ·) is the score for a triple predicted by the embedding model, defined by Eq. (1). Eq. (3) can be interpreted as follows: for any two entities ei and ej, if (ei, rp, ej) is a true fact with a high score φ(ei, rp, ej), then the triple (ei, rq, ej) with an even higher score should also be predicted as a true fact by the embedding model. Note that given the non-negativity constraints defined by Eq. (2), a sufficient condition for Eq. (3) to hold, is to further impose Re(rp) ≤Re(rq), Im(rp) = Im(rq), (4) where rp and rq are the complex-valued representations for rp and rq respectively, with the real and imaginary components denoted by Re(·), Im(·) ∈ Rd. That means, when the constraints of Eq. (4) (along with those of Eq. (2)) are satisfied, the requirement of Eq. (3) (or in other words rp →rq) will always hold. We provide a proof of sufficiency as supplementary material. Next we examine the modeling of approximate entailment. To this end, we further introduce the confidence level λ and allow slackness in Eq. (4), which yields λ Re(rp) −Re(rq)  ≤α, (5) λ Im(rp) −Im(rq) 2 ≤β. (6) Here α, β ≥0 are slack variables, and (·)2 means an entry-wise operation. Entailments with higher confidence levels show less tolerance for violating the constraints. When λ = +∞, Eqs. (5) – (6) degenerate to Eq. (4). The above analysis indicates that our approach can model entailment simply by imposing constraints over relation representations, without traversing all possible (ei, ej) entity pairs (i.e., grounding). In addition, different confidence levels are encoded in the constraints, making our approach moderately tolerant of uncertainty. 3.4 The Overall Model Finally, we combine together the basic embedding model of ComplEx, the non-negativity constraints on entity representations, and the approximate entailment constraints over relation representations. The overall model is presented as follows: min Θ,{α,β} X D+∪D− log 1 + exp(−yijkφ(ei, rk, ej))  + µ X T 1⊤(α + β) + η∥Θ∥2 2, s.t. λ Re(rp) −Re(rq)  ≤α, λ Im(rp) −Im(rq) 2 ≤β, α, β ≥0, ∀rp λ−→rq ∈T , 0 ≤Re(e), Im(e) ≤1, ∀e ∈E. (7) Here, Θ ≜{e : e ∈E} ∪{r : r ∈R} is the set of all entity and relation representations; D+ and D− are the sets of positive and negative training triples respectively; a positive triple is directly observed in the KG, i.e., (ei, rk, ej) ∈O; a negative triple can be generated by randomly corrupting the head or the tail entity of a positive triple, i.e., (e′ i, rk, ej) or (ei, rk, e′ j); yijk = ±1 is the label (positive or negative) of triple (ei, rk, ej). In this optimization, the first term of the objective function is a typical logistic loss, which enforces triples to have scores close to their labels. The second term is the sum of slack variables in the approximate entailment constraints, with a penalty coefficient µ ≥0. 
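To make the entailment terms concrete, the snippet below computes, for each constraint r_p → r_q with confidence λ, the minimal slack that satisfies Eqs. (5)–(6); this is exactly the quantity penalised once the constraints are folded into the loss (Eq. (8) below). It is an illustrative NumPy sketch with our own variable names, not the authors' implementation.

```python
import numpy as np

def entailment_penalty(rel_emb, entailments, mu):
    """Sum of the minimal slacks for the approximate entailment constraints.

    rel_emb     : dict mapping relation id -> complex embedding of dimension d
    entailments : iterable of (p, q, lam) with confidence level lam for p -> q
    mu          : penalty coefficient weighting the total slack
    """
    total = 0.0
    for p, q, lam in entailments:
        r_p, r_q = rel_emb[p], rel_emb[q]
        # Real parts: lam * sum of [Re(r_p) - Re(r_q)]_+   (cf. Eq. (5))
        total += lam * np.sum(np.maximum(r_p.real - r_q.real, 0.0))
        # Imaginary parts: lam * sum of (Im(r_p) - Im(r_q))^2   (cf. Eq. (6))
        total += lam * np.sum((r_p.imag - r_q.imag) ** 2)
    return mu * total
```

In training, this term is added to the logistic loss and the L2 regulariser, so gradients push entailed relation pairs toward Re(r_p) ≤ Re(r_q) and Im(r_p) ≈ Im(r_q).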
The motivation is, although we allow slackness in those constraints we hope the total slackness to be small, so that the constraints can be better satisfied. The last term is L2 regularization to avoid over-fitting, and η ≥0 is the regularization coefficient. To solve this optimization problem, the approximate entailment constraints (as well as the corresponding slack variables) are converted into penalty terms and added to the objective function, while the non-negativity constraints remain as they are. As such, the optimization problem of Eq. (7) can 114 be rewritten as: min Θ X D+∪D− log 1 + exp(−yijkφ(ei, rk, ej))  + µ X T λ1⊤ Re(rp)−Re(rq)  + + µ X T λ1⊤Im(rp)−Im(rq) 2+ η∥Θ∥2 2, s.t. 0 ≤Re(e), Im(e) ≤1, ∀e ∈E, (8) where [x]+ = max(0, x) with max(·, ·) being an entry-wise operation. The equivalence between Eq. (7) and Eq. (8) is shown in the supplementary material. We use SGD in mini-batch mode as our optimizer, with AdaGrad (Duchi et al., 2011) to tune the learning rate. After each gradient descent step, we project (by truncation) real and imaginary components of entity representations into the hypercube of [0, 1]d, to satisfy the non-negativity constraints. While favouring a better structuring of the embedding space, imposing the additional constraints will not substantially increase model complexity. Our approach has a space complexity of O(nd + md), which is the same as that of ComplEx. Here, n is the number of entities, m the number of relations, and O(nd + md) to store a d-dimensional complex-valued vector for each entity and each relation. The time complexity (per iteration) of our approach is O(sd+td+¯nd), where s is the average number of triples in a mini-batch, ¯n the average number of entities in a mini-batch, and t the total number of approximate entailments in T . O(sd) is to handle triples in a mini-batch, O(td) penalty terms introduced by the approximate entailments, and O(¯nd) further the non-negativity constraints on entity representations. Usually there are much fewer entailments than triples, i.e., t ≪s, and also ¯n ≤2s.1 So the time complexity of our approach is on a par with O(sd), i.e., the time complexity of ComplEx. 4 Experiments and Results This section presents our experiments and results. We first introduce the datasets used in our experiments (§ 4.1). Then we empirically evaluate our approach in the link prediction task (§ 4.2). After that, we conduct extensive analysis on both entity representations (§ 4.3) and relation representations (§ 4.4) to show the interpretability of our model. 1There will be at most 2s entities contained in s triples. Code and data used in the experiments are available at https://github.com/iieir-km/ ComplEx-NNE_AER. 4.1 Datasets The first two datasets we used are WN18 and FB15K, released by Bordes et al. (2013).2 WN18 is a subset of WordNet containing 18 relations and 40,943 entities, and FB15K a subset of Freebase containing 1,345 relations and 14,951 entities. We create our third dataset from the mapping-based objects of core DBpedia.3 We eliminate relations not included within the DBpedia ontology such as HomePage and Logo, and discard entities appearing less than 20 times. The final dataset, referred to as DB100K, is composed of 470 relations and 99,604 entities. Triples on each datasets are further divided into training, validation, and test sets, used for model training, hyperparameter tuning, and evaluation respectively. 
We follow the original split for WN18 and FB15K, and draw a split of 597,572/ 50,000/50,000 triples for DB100K. We further use AMIE+ (Gal´arraga et al., 2015)4 to extract approximate entailments automatically from the training set of each dataset. As suggested by Guo et al. (2018), we consider entailments with PCA confidence higher than 0.8.5 As such, we extract 17 approximate entailments from WN18, 535 from FB15K, and 56 from DB100K. Table 1 gives some examples of these approximate entailments, along with their confidence levels. Table 2 further summarizes the statistics of the datasets. 4.2 Link Prediction We first evaluate our approach in the link prediction task, which aims to predict a triple (ei, rk, ej) with ei or ej missing, i.e., predict ei given (rk, ej) or predict ej given (ei, rk). Evaluation Protocol: We follow the protocol introduced by Bordes et al. (2013). For each test triple (ei, rk, ej), we replace its head entity ei with every entity e′ i ∈E, and calculate a score for the corrupted triple (e′ i, rk, ej), e.g., φ(e′ i, rk, ej) defined by Eq. (1). Then we sort these scores in de2https://everest.hds.utc.fr/doku.php? id=en:smemlj12 3http://downloads.dbpedia.org/2016-10/ core/ 4https://www.mpi-inf.mpg.de/departmen ts/databases-and-information-systems/res earch/yago-naga/amie/ 5PCA confidence is the confidence under the partial completeness assumption. See (Gal´arraga et al., 2015) for details. 115 hypernym−1 1.00 −−→hyponym synset domain topic of−1 0.99 −−→member of domain topic instance hypernym−1 0.98 −−→instance hyponym /people/place of birth−1 1.00 −−→/location/people born here /film/directed by−1 0.98 −−→/director/film /country/admin divisions 0.91 −−→/country/1st level divisions owner 0.95 −−→owning company child−1 0.92 −−→parent distributing company 0.92 −−→distributing label Table 1: Approximate entailments extracted from WN18 (top), FB15K (middle), and DB100K (bottom), where r−1 means the inverse of relation r. Dataset # Ent # Rel # Train/Valid/Test # Cons WN18 40,943 18 141,442 5,000 5,000 17 FB15K 14,951 1,345 483,142 50,000 59,071 535 DB100K 99,604 470 597,572 50,000 50,000 56 Table 2: Statistics of datasets, where the columns respectively indicate the number of entities, relations, training/validation/test triples, and approximate entailments. scending order, and get the rank of the correct entity ei. During ranking, we remove corrupted triples that already exist in either the training, validation, or test set, i.e., the filtered setting as described in (Bordes et al., 2013). This whole procedure is repeated while replacing the tail entity ej. We report on the test set the mean reciprocal rank (MRR) and the proportion of correct entities ranked in the top n (HITS@N), with n = 1, 3, 10. Comparison Settings: We compare the performance of our approach against a variety of KG embedding models developed in recent years. These models can be categorized into three groups: • Simple embedding models that utilize triples alone without integrating extra information, including TransE (Bordes et al., 2013), DistMult (Yang et al., 2015), HolE (Nickel et al., 2016b), ComplEx (Trouillon et al., 2016), and ANALOGY (Liu et al., 2017). Our approach is developed on the basis of ComplEx. • Other extensions of ComplEx that integrate logical background knowledge in addition to triples, including RUGE (Guo et al., 2018) and ComplExR (Minervini et al., 2017a). The former requires grounding of first-order logic rules. 
The latter is restricted to relation equivalence and inversion, and assigns an identical confidence level to all different rules. • Latest developments or implementations that achieve current state-of-the-art performance reported on the benchmarks of WN18 and FB15K, including R-GCN (Schlichtkrull et al., 2017), ConvE (Dettmers et al., 2018), and Single DistMult (Kadlec et al., 2017).6 The first two are built based on neural network architectures, which are, by nature, more complicated than the simple models. The last one is a re-implementation of DistMult, generating 1000 to 2000 negative training examples per positive one, which leads to better performance but requires significantly longer training time. We further evaluate our approach in two different settings: (i) ComplEx-NNE that imposes only the Non-Negativity constraints on Entity representations, i.e., optimization Eq. (8) with µ = 0; and (ii) ComplEx-NNE+AER that further imposes the Approximate Entailment constraints over Relation representations besides those non-negativity ones, i.e., optimization Eq. (8) with µ > 0. Implementation Details: We compare our approach against all the three groups of baselines on the benchmarks of WN18 and FB15K. We directly report their original results on these two datasets to avoid re-implementation bias. On DB100K, the newly created dataset, we take the first two groups of baselines, i.e., those simple embedding models and ComplEx extensions with logical background knowledge incorporated. We do not use the third group of baselines due to efficiency and complexity issues. We use the code provided by Trouillon et al. (2016)7 for TransE, DistMult, and ComplEx, and the code released by their authors for ANALOGY8 and RUGE9. We re-implement HolE and ComplExR so that all the baselines (as well as our approach) share the same optimization mode, i.e., SGD with AdaGrad and gradient normalization, to facilitate a fair comparison.10 We follow Trouillon et al. (2016) to adopt a ranking loss for TransE and a logistic loss for all the other methods. 6We do not consider Ensemble DistMult (Dettmers et al., 2018) which combines several different models together, to facilitate a fair comparison. 7https://github.com/ttrouill/complex 8https://github.com/quark0/ANALOGY 9https://github.com/iieir-km/RUGE 10An exception here is that ANALOGY uses asynchronous SGD with AdaGrad (Liu et al., 2017). 116 WN18 FB15K HITS@N HITS@N MRR 1 3 10 MRR 1 3 10 TransE (Bordes et al., 2013) 0.454 0.089 0.823 0.934 0.380 0.231 0.472 0.641 DistMult (Yang et al., 2015) 0.822 0.728 0.914 0.936 0.654 0.546 0.733 0.824 HolE (Nickel et al., 2016b) 0.938 0.930 0.945 0.949 0.524 0.402 0.613 0.739 ComplEx (Trouillon et al., 2016) 0.941 0.936 0.945 0.947 0.692 0.599 0.759 0.840 ANALOGY (Liu et al., 2017) 0.942 0.939 0.944 0.947 0.725 0.646 0.785 0.854 RUGE (Guo et al., 2018) — — — — 0.768 0.703 0.815 0.865 ComplExR (Minervini et al., 2017a) 0.940 — 0.943 0.947 — — — — R-GCN (Schlichtkrull et al., 2017) 0.814 0.686 0.928 0.955 0.651 0.541 0.736 0.825 R-GCN+ (Schlichtkrull et al., 2017) 0.819 0.697 0.929 0.964 0.696 0.601 0.760 0.842 ConvE (Dettmers et al., 2018) 0.942 0.935 0.947 0.955 0.745 0.670 0.801 0.873 Single DistMult (Kadlec et al., 2017) 0.797 — — 0.946 0.798 — — 0.893 ComplEx-NNE (this work) 0.941 0.937 0.944 0.948 0.727∗ 0.659∗ 0.772∗ 0.845∗ ComplEx-NNE+AER (this work) 0.943 0.940 0.945 0.948 0.803∗ 0.761∗ 0.831∗ 0.874∗ Table 3: Link prediction results on the test sets of WN18 and FB15K. 
Results for TransE and DistMult are taken from (Trouillon et al., 2016). Results for the other baselines are taken from the original papers. Missing scores not reported in the literature are indicated by “—”. Best scores are highlighted in bold, and “∗” indicates statistically significant improvements over ComplEx. HITS@N MRR 1 3 10 TransE 0.111 0.016 0.164 0.270 DistMult 0.233 0.115 0.301 0.448 HolE 0.260 0.182 0.309 0.411 ComplEx 0.242 0.126 0.312 0.440 ANALOGY 0.252 0.143 0.323 0.427 RUGE 0.246 0.129 0.325 0.433 ComplExR 0.253 0.167 0.294 0.420 ComplEx-NNE 0.298∗ 0.229∗ 0.330∗ 0.426 ComplEx-NNE+AER 0.306∗ 0.244∗ 0.334∗ 0.418 Table 4: Link prediction results on the test set of DB100K, with best scores highlighted in bold, statistically significant improvements marked by “∗”. Among those baselines, RUGE and ComplExR require additional logical background knowledge. RUGE makes use of soft rules, which are extracted by AMIE+ from the training sets. As suggested by Guo et al. (2018), length-1 and length-2 rules with PCA confidence higher than 0.8 are utilized. Note that our approach also makes use of AMIE+ rules with PCA confidence higher than 0.8. But it only considers entailments between a pair of relations, i.e., length-1 rules. ComplExR takes into account equivalence and inversion between relations. We derive such axioms directly from our approximate entailments. If rp λ1 −→rq and rq λ2 −→rp with λ1, λ2 > 0.8, we think relations rp and rq are equivalent. And similarly, if r−1 p λ1 −→rq and r−1 q λ2 −→rp with λ1, λ2 > 0.8, we consider rp as an inverse of rq. For all the methods, we create 100 mini-batches on each dataset, and conduct a grid search to find hyperparameters that maximize MRR on the validation set, with at most 1000 iterations over the training set. Specifically, we tune the embedding size d ∈{100, 150, 200}, the L2 regularization coefficient η ∈{0.001, 0.003, 0.01, 0.03, 0.1}, the ratio of negative over positive training examples α ∈{2, 10}, and the initial learning rate γ ∈{0.01, 0.05, 0.1, 0.5, 1.0}. For TransE, we tune the margin of the ranking loss δ ∈{0.1, 0.2, 0.5, 1, 2, 5, 10}. Other hyperparameters of ANALOGY and RUGE are set or tuned according to the default settings suggested by their authors (Liu et al., 2017; Guo et al., 2018). After getting the best ComplEx model, we tune the relation constraint penalty of our approach ComplEx-NNE+AER (µ in Eq. (8)) in the range of {10−5, 10−4, · · · , 104, 105}, with all its other hyperparameters fixed to their optimal configurations. We then directly set µ = 0 to get the optimal ComplEx-NNE model. The weight of soft constraints in ComplExR is tuned in the same range as µ. The optimal configurations for our approach are: d = 200, η = 0.03, α = 10, γ = 1.0, µ = 10 on WN18; d = 200, η=0.01, α=10, γ = 0.5, µ = 10−3 on FB15K; and d = 150, η = 0.03, α = 10, γ = 0.1, µ = 10−5 on DB100K. Experimental Results: Table 3 presents the results on the test sets of WN18 and FB15K, where the results for the baselines are taken directly from 117 previous literature. Table 4 further provides the results on the test set of DB100K, with all the methods tuned and tested in (almost) the same setting. On all the datasets, we test statistical significance of the improvements achieved by ComplEx-NNE/ ComplEx-NNE+AER over ComplEx, by using a paired t-test. The reciprocal rank or HITS@N value with n = 1, 3, 10 for each test triple is used as paired data. The symbol “∗” indicates a significance level of p < 0.05. 
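For reference, the reported metrics and the significance test are easy to reproduce once the filtered ranks of the correct entities have been collected. The sketch below is illustrative (it assumes the ranks are already computed, as described in the evaluation protocol) and uses SciPy's paired t-test; function names are ours.

```python
import numpy as np
from scipy.stats import ttest_rel

def mrr_and_hits(ranks, ns=(1, 3, 10)):
    """MRR and HITS@N from the filtered ranks of the correct entities."""
    ranks = np.asarray(ranks, dtype=float)
    metrics = {"MRR": float(np.mean(1.0 / ranks))}
    for n in ns:
        metrics[f"HITS@{n}"] = float(np.mean(ranks <= n))
    return metrics

def paired_significance(ranks_a, ranks_b):
    """Paired t-test on the per-triple reciprocal ranks of two models."""
    rr_a = 1.0 / np.asarray(ranks_a, dtype=float)
    rr_b = 1.0 / np.asarray(ranks_b, dtype=float)
    return ttest_rel(rr_a, rr_b).pvalue
```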
The results demonstrate that imposing the nonnegativity and approximate entailment constraints indeed improves KG embedding. ComplEx-NNE and ComplEx-NNE+AER perform better than (or at least equally well as) ComplEx in almost all the metrics on all the three datasets, and most of the improvements are statistically significant (except those on WN18). More interestingly, just by introducing these simple constraints, ComplEx-NNE+ AER can beat very strong baselines, including the best performing basic models like ANALOGY, those previous extensions of ComplEx like RUGE or ComplExR, and even the complicated developments or implementations like ConvE or Single DistMult. This demonstrates the superiority of our approach. 4.3 Analysis on Entity Representations This section inspects how the structure of the entity embedding space changes when the constraints are imposed. We first provide the visualization of entity representations on DB100K. On this dataset each entity is associated with a single type label.11 We pick 4 types reptile, wine region, species, and programming language, and randomly select 30 entities from each type. Figure 1 visualizes the representations of these entities learned by ComplEx and ComplEx-NNE+AER (real components only), with the optimal configurations determined by link prediction (see § 4.2 for details, applicable to all analysis hereafter). During the visualization, we normalize the real component of each entity by [˜x]ℓ= [x]ℓ−min(x) max(x)−min(x), where min(x) or max(x) is the minimum or maximum entry of x respectively. We observe that after imposing the non-negativity constraints, ComplEx-NNE+AER indeed obtains compact and interpretable representations for entities. Each entity is represented by only a relatively small number of “active” dimensions. And entities 11http://downloads.dbpedia.org/2016-10/ core-i18n/en/instance_types_wkd_uris_en. ttl.bz2 0 50 100 150 0 50 100 150 ComplEx-NNE+AER ComplEx Figure 1: Visualization of real components of entity representations (rows) learned by ComplExNNE+AER (left) and ComplEx (right). From top to bottom, entities belong to type reptile, wine region, species, and programming language in turn. Values range from 0 (white) via 0.5 (orange) to 1 (black). Best viewed in color. 0 2 4 6 8 10 12 14 16 18 20 k value 3.8 4.0 4.2 4.4 4.6 4.8 Average entropy ComplEx ComplEx-NNE ComplEx-NNE+AER Figure 2: Average entropy over all dimensions of real components of entity representations learned by ComplEx (circles), ComplEx-NNE (squares), and ComplEx-NNE+AER (triangles) as K varies. with the same type tend to activate the same set of dimensions, while entities with different types often get clearly different dimensions activated. Then we investigate the semantic purity of these dimensions. Specifically, we collect the representations of all the entities on DB100K (real components only). For each dimension of these representations, top K percent of entities with the highest activation values on this dimension are picked. We can calculate the entropy of the type distribution of the entities selected. This entropy reflects diversity of entity types, or in other words, semantic purity. If all the K percent of entities have the same type, we will get the lowest entropy of zero (the highest semantic purity). On the contrary, if each of them has a distinct type, we will get the highest entropy (the lowest semantic purity). 
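The semantic-purity measure just described can be written down in a few lines. The sketch below is an illustration with our own names; it takes the real components of the entity embeddings and one type label per entity as inputs.

```python
import numpy as np
from collections import Counter

def average_entropy(real_embeddings, entity_types, k_percent):
    """Average per-dimension entropy of the type distribution of the top-K% entities.

    real_embeddings : (num_entities, d) array of real components
    entity_types    : list of type labels, one per entity
    k_percent       : percentage of entities inspected per dimension
    """
    num_entities, d = real_embeddings.shape
    top_k = max(1, int(num_entities * k_percent / 100.0))
    entropies = []
    for dim in range(d):
        # Entities with the highest activation on this dimension.
        top = np.argsort(-real_embeddings[:, dim])[:top_k]
        counts = Counter(entity_types[i] for i in top)
        probs = np.array(list(counts.values()), dtype=float) / top_k
        entropies.append(float(-np.sum(probs * np.log2(probs))))
    return float(np.mean(entropies))
```

A lower average entropy means the top-activated entities of a dimension share fewer types, i.e., higher semantic purity.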
Figure 2 shows the average entropy over all dimensions of entity representations (real components only) learned by ComplEx, ComplEx-NNE, and ComplEx-NNE+ 118 country location_country owning_company owner spouse−1 spouse child−1 parent position honours offical_language language -0.57 -0.08 -0.52 -0.81 -0.05 -0.10 -0.00 0.01 -0.06 -0.00 -0.57 -0.08 -0.52 -0.81 -0.05 -0.09 -0.00 0.02 -0.06 -0.00 -0.06 -0.42 0.60 -0.68 0.30 -0.06 -0.05 0.80 0.22 0.56 -0.06 -0.42 0.60 -0.68 0.30 -0.06 -0.05 0.80 0.22 0.57 0.15 1.39 -0.87 -0.63 -0.10 -0.00 0.00 -0.00 0.00 -0.00 0.15 1.39 -0.87 -0.63 -0.10 -0.00 0.00 -0.00 0.00 -0.00 0.33 -0.29 0.47 -0.63 0.45 -0.13 -0.04 0.08 -0.21 -0.02 0.33 -0.29 0.47 -0.64 0.45 0.13 0.04 -0.08 0.20 0.02 -0.81 -0.11 -0.39 -1.01 -0.09 -0.21 -0.01 0.23 0.16 -0.34 -0.81 -0.10 0.73 -1.01 0.30 -0.20 -0.01 0.23 0.16 -0.35 -0.84 -0.44 -0.61 -0.86 -0.04 -0.39 -0.32 -0.02 0.09 -0.01 -0.84 -0.41 -0.60 -0.80 -0.04 -0.39 -0.32 -0.03 0.09 -0.01 Real Component Imaginary Component Figure 3: Visualization of relation representations learned by ComplEx-NNE+AER, with the top 4 relations from the equivalence class, the middle 4 the inversion class, and the bottom 4 others. AER, as K varies. We can see that after imposing the non-negativity constraints, ComplEx-NNE and ComplEx-NNE+AER can learn entity representations with latent dimensions of consistently higher semantic purity. We have conducted the same analyses on imaginary components of entity representations, and observed similar phenomena. The results are given as supplementary material. 4.4 Analysis on Relation Representations This section further provides a visual inspection of the relation embedding space when the constraints are imposed. To this end, we group relation pairs involved in the DB100K entailment constraints into 3 classes: equivalence, inversion, and others.12 We choose 2 pairs of relations from each class, and visualize these relation representations learned by ComplEx-NNE+AER in Figure 3, where for each relation we randomly pick 5 dimensions from both its real and imaginary components. By imposing the approximate entailment constraints, these relation representations can encode logical regularities quite well. Pairs of relations from the first class (equivalence) tend to have identical representations rp ≈rq, those from the second class (inversion) complex conjugate representations rp ≈¯rq; and the others representations that Re(rp) ≤Re(rq) and Im(rp) ≈Im(rq). 12Equivalence and inversion are detected using heuristics introduced in § 4.2 (implementation details). See the supplementary material for detailed properties of these three classes. 5 Conclusion This paper investigates the potential of using very simple constraints to improve KG embedding. Two types of constraints have been studied: (i) the non-negativity constraints to learn compact, interpretable entity representations, and (ii) the approximate entailment constraints to further encode logical regularities into relation representations. Such constraints impose prior beliefs upon the structure of the embedding space, and will not significantly increase the space or time complexity. Experimental results on benchmark KGs demonstrate that our method is simple yet surprisingly effective, showing significant and consistent improvements over strong baselines. The constraints indeed improve model interpretability, yielding a substantially increased structuring of the embedding space. 
Acknowledgments We would like to thank all the anonymous reviewers for their insightful and valuable suggestions, which help to improve the quality of this paper. This work is supported by the National Key Research and Development Program of China (No. 2016QY03D0503) and the Fundamental Theory and Cutting Edge Technology Research Program of the Institute of Information Engineering, Chinese Academy of Sciences (No. Y7Z0261101). References Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data. pages 1247–1250. Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. 2014. A semantic matching energy function for learning with multi-relational data. Machine Learning 94(2):233–259. Antoine Bordes, Nicolas Usunier, Alberto Garc´ıaDur´an, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in Neural Information Processing Systems. pages 2787–2795. Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio. 2011. Learning structured embeddings of knowledge bases. In Proceedings of the 25th AAAI Conference on Artificial Intelligence. pages 301–306. 119 Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Christopher Meek. 2014. Typed tensor decomposition of knowledge bases for relation extraction. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. pages 1568–1579. Thomas Demeester, Tim Rockt¨aschel, and Sebastian Riedel. 2016. Lifted rule injection for relation embeddings. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. pages 1389–1399. Tim Dettmers, Minervini Pasquale, Stenetorp Pontus, and Sebastian Riedel. 2018. Convolutional 2D knowledge graph embeddings. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence. pages 1811–1818. Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pages 601–610. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12(Jul):2121–2159. Alona Fyshe, Leila Wehbe, Partha P. Talukdar, Brian Murphy, and Tom M. Mitchell. 2015. A compositional and interpretable semantic space. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 32– 41. Luis Antonio Gal´arraga, Christina Teflioudi, Katja Hose, and Fabian M. Suchanek. 2015. Fast rule mining in ontological knowledge bases with AMIE+. The VLDB Journal 24(6):707–730. Shu Guo, Quan Wang, Bin Wang, Lihong Wang, and Li Guo. 2015. Semantically smooth knowledge graph embedding. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. pages 84–94. Shu Guo, Quan Wang, Lihong Wang, Bin Wang, and Li Guo. 2016. Jointly embedding knowledge graphs and logical rules. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. pages 192–202. 
2018
11
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1190–1199 Melbourne, Australia, July 15 - 20, 2018. ©2018 Association for Computational Linguistics

Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples

Vidur Joshi, Matthew Peters, Mark Hopkins
Allen Institute for AI, Seattle, WA
{vidurj, matthewp, markh}@allenai.org

Abstract

We revisit domain adaptation for parsers in the neural era. First we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train a parser on the Wall Street Journal alone that achieves over 90% F1 on the Brown corpus. For more syntactically distant domains, we provide a simple way to adapt a parser using only dozens of partial annotations. For instance, we increase the percentage of error-free geometry-domain parses in a held-out set from 45% to 73% using approximately five dozen training examples. In the process, we demonstrate a new state-of-the-art single model result on the Wall Street Journal test set of 94.3%. This is an absolute increase of 1.7% over the previous state-of-the-art of 92.6%.

1 Introduction

Statistical parsers are often criticized for their performance outside of the domain they were trained on. The most straightforward remedy would be more training data in the target domain, but building treebanks (Marcus et al., 1993) is expensive. In this paper, we revisit this issue in light of recent developments in neural natural language processing. Our paper rests on two observations:

1. It is trivial to train on partial annotations using a span-focused model. Stern et al. (2017a) demonstrated that a parser with minimal dependence between the decisions that produce a parse can achieve state-of-the-art performance. We modify their parser, henceforth MSP, so that it trains directly on individual labeled spans instead of parse trees. This results in a parser that can be trained, with no adjustments to the training regime, from partial sentence bracketings.

Given [ the circle [ at the right ] with [ designated center, designated perpendicular, and radius 5 ] ] .
In [ the figure above ] , [ [ AD = 4 ] , [ AB = 3 ] and [ CD = 9 ] ] .
[ Diameter AC ] is perpendicular [ to chord BD ] [ at E ] .
Figure 1: An example of partial annotations. Annotators indicate that a span is a constituent by enclosing it in square brackets.

2. The use of contextualized word representations (Peters et al., 2017; McCann et al., 2017) greatly reduces the amount of data needed to train linguistic models. Contextualized word representations, which encode tokens conditioned on their context in a sentence, have been shown to give significant boosts across a variety of NLP tasks, and also to reduce the amount of data needed by an order of magnitude in some tasks.

Taken together, this suggests a way to rapidly extend a newswire-trained parser to new domains. Specifically, we will show it is possible to achieve large out-of-domain performance improvements using only dozens of partially annotated sentences, like those shown in Figure 1. The resulting parser also does not suffer any degradation on the newswire domain.

Along the way, we provide several other notable contributions:

• We raise the state-of-the-art single-model F1 score for constituency parsing from 92.6% to 94.3% on the Wall Street Journal (WSJ) test set.
A trained model is publicly available.1

• We show that, even without domain-specific training data, our parser has much less out-of-domain degradation than previous parsers on “newswire-adjacent” domains like the Brown corpus.

• We provide a version of MSP which predicts its own POS tags (rather than requiring a third-party tagger).

1 http://allennlp.org/models

2 The Reconciled Span Parser (RSP)

When we allow annotators to selectively annotate important phenomena, we make the process faster and simpler (Mielens et al., 2015). Unfortunately, this produces a disconnect between the model (which typically asserts the probability of a full parse tree) and the annotation task (which asserts the correctness of some subcomponent, like a constituent span or a dependency arc). There is a body of research (Hwa, 1999; Li et al., 2016) that discusses how to bridge this gap by modifying the training data, training algorithm, or the training objective. Alternatively, we could just better align the model with the annotation task. Specifically, we could train a parser whose base model predicts exactly what we ask the annotator to annotate, e.g. whether a particular span is a constituent. This makes it trivial to train with partial or full annotations, because the training data reduces to a collection of span labels in either case. Luckily, recent state-of-the-art results that model NLP tasks as independently classified spans (Stern et al., 2017a) suggest this strategy is currently viable.

In this section, we present the Reconciled Span Parser (RSP), a modified version of the Minimal Span Parser (MSP) of Stern et al. (2017a). RSP differs from MSP in the following ways:

• It is trained on a span classification task. MSP trains on a maximum margin objective; that is, the loss function penalizes the violation of a margin between the scores of the gold parse and the next highest scoring parse decoded. This couples its training procedure with its decoding procedure, resulting in two versions, a top-down parser and a chart parser. To allow our model to be trained on partial annotations, we change the training task to be the span classification task described below.

• It uses contextualized word representations instead of predicted part-of-speech tags. Our model uses contextualized word representations as described in Peters et al. (2018). It does not take part-of-speech tags as input, eliminating the dependence of the parser on a newswire-trained POS-tagger.

2.1 Overview

We will view a parse tree as a labeling of all the spans of a sentence such that:

• Every constituent span is labeled with the sequence of non-terminals assigned to it in the parse tree. For instance, span (2, 4) in Figure 2b is labeled with the sequence ⟨S, VP⟩, as shown in Figure 2a.

• Every non-constituent is labeled with the empty sequence.

Given a sentence represented by a sequence of tokens x of length n, define spans(x) = {(i, j) | 0 ≤ i < j ≤ n}. Define a parse for sentence x as a function π : spans(x) ↦ L where L is the set of all sequences of non-terminal tags, including the empty sequence. We model the probability of a parse as the independent product of its span labels:

Pr(π | x) = ∏_{s ∈ spans(x)} Pr(π(s) | x, s)
⇒ log Pr(π | x) = ∑_{s ∈ spans(x)} log Pr(π(s) | x, s)

Hence, we will train a base model σ(l | x, s) to estimate the log probability of label l for span s (given sentence x), and we will score the overall parse with:

score(π | x) = ∑_{s ∈ spans(x)} σ(π(s) | x, s)

[Figure 2: The correspondence between labeled spans and a parse tree, shown for the sentence “She enjoys playing tennis.”; panel (a) shows the spans classified by the parsing procedure (note that leaves have their part-of-speech tags predicted in addition to their sequence of non-terminals), and panel (b) shows the resulting parse tree. This diagram is adapted from figure 1 in (Stern et al., 2017a).]

Note that this probability model accords mass to mis-structured trees (e.g. overlapping spans like (2, 5) and (3, 7) cannot both be constituents of a well-formed tree). We solve the following Integer Linear Program (ILP)2 to find the highest scoring parse that admits a well-formed tree:

max_δ ∑_{(i,j) ∈ spans(x)} [ v⁺_(i,j) δ_(i,j) + v⁻_(i,j) (1 − δ_(i,j)) ]

subject to:
i < k < j < m ⇒ δ_(i,j) + δ_(k,m) ≤ 1
(i, j) ∈ spans(x) ⇒ δ_(i,j) ∈ {0, 1}

where:
v⁺_(i,j) = max_{l ≠ ∅} σ(l | x, (i, j))
v⁻_(i,j) = σ(∅ | x, (i, j))

2 There are a number of ways to reconcile the span conflicts, including an adaptation of the standard dynamic programming chart parsing algorithm to work with spans of an unbinarized tree. However, it turns out that the classification model rarely produces span conflicts, so all methods we tried performed equivalently well.

2.2 Classification Model

For our span classification model σ(l | x, s), we use the model from (Stern et al., 2017a), which leverages a method for encoding spans from (Wang and Chang, 2016; Cross and Huang, 2016). First, it creates a sentence encoding by running a two-layer bidirectional LSTM over the sentence to obtain forward and backward encodings for each position i, denoted by f_i and b_i respectively. Then, spans are encoded by the difference in LSTM states immediately before and after the span; that is, span (i, j) is encoded as the concatenation of the vector differences f_j − f_{i−1} and b_i − b_{j+1}. A one-layer feedforward network maps each span representation to a distribution over labels.

Classification Model Parameters and Initializations. We preserve the settings used in Stern et al. (2017a) where possible. As a result, the size of the hidden dimensions of the LSTM and the feedforward network is 250. The dropout ratio for the LSTM is set to 0.4. Unlike the model it is based on, our model uses word embeddings of length 1124. These result from concatenating a 100 dimension learned word embedding, with a 1024 dimension learned linear combination of the internal states of a bidirectional language model run on the input sentence as described in Peters et al. (2018). We refer to them below as ELMo (Embeddings from Language Models). For the learned embeddings, words with n occurrences in the training data are replaced by ⟨UNK⟩ with probability (1 + n/10) / (1 + n). This does not affect the ELMo component of the word embeddings. As a result, even common words are replaced with probability at least 1/10, making the model rely on the ELMo embeddings instead of the learned embeddings. To make the model self-contained, it does not take part-of-speech tags as input.

Parser                      Rec    Prec   F1
RNNG (Dyer et al., 2016)    –      –      91.7
MSP (Stern et al., 2017a)   90.6   93.0   91.8
(Stern et al., 2017b)       92.6   92.6   92.6
RSP                         93.8   94.8   94.3
Table 1: Parsing performance on WSJTEST, along with the results of other recent single-model parsers trained without external parse data.

                Recall   Precision   F1
all features    94.20    94.77       94.48
–ELMo           91.63    93.05       92.34
Table 2: Feature ablation on WSJDEV.
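Before moving on to the analysis, note that the span-encoding scheme of Section 2.2 is compact enough to sketch in a few lines. The following PyTorch snippet is an illustrative re-implementation, not the authors' released code; the dimensions, module names, and the 1-based indexing convention are assumptions made for the example.

import torch
import torch.nn as nn

class SpanClassifier(nn.Module):
    # Sketch of the RSP base model: bi-LSTM sentence encoding, span (i, j) represented by
    # the differences f_j - f_{i-1} and b_i - b_{j+1}, then a one-layer feedforward network.
    def __init__(self, emb_dim=1124, hidden=250, num_labels=100):
        super().__init__()
        self.hidden = hidden
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=2,
                            bidirectional=True, batch_first=True, dropout=0.4)
        self.ff = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                nn.Linear(hidden, num_labels))

    def forward(self, word_embs, i, j):
        # word_embs: (1, n, emb_dim) embeddings for one sentence (e.g. learned + ELMo);
        # i, j are 1-based positions of the span's first and last token
        out, _ = self.lstm(word_embs)                  # (1, n, 2 * hidden)
        f, b = out[..., :self.hidden], out[..., self.hidden:]
        zero = out.new_zeros(1, 1, self.hidden)
        f = torch.cat([zero, f], dim=1)                # f[:, k] = f_k, with f_0 = 0
        b = torch.cat([b, zero], dim=1)                # b[:, k] = b_{k+1}, with b_{n+1} = 0
        span_vec = torch.cat([f[:, j] - f[:, i - 1],   # forward difference  f_j - f_{i-1}
                              b[:, i - 1] - b[:, j]],  # backward difference b_i - b_{j+1}
                             dim=-1)
        return torch.log_softmax(self.ff(span_vec), dim=-1)  # log P(label | x, (i, j))

model = SpanClassifier(emb_dim=8, hidden=4, num_labels=3)
print(model(torch.randn(1, 5, 8), i=2, j=4).shape)     # torch.Size([1, 3])

Training on partial annotations then amounts to computing this classification loss only for the spans (and, where given, labels) that an annotator actually marked.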
Using a linear layer over the last hidden layer of the classification model, partof-speech tags are predicted for spans containing single words. 3 Analysis of RSP 3.1 Performance on Newswire On WSJTEST3, RSP outperforms (see Table 1) all previous single models trained on WSJTRAIN by a significant margin, raising the state-of-the-art result from 92.6% to 94.3%. Additionally, our predicted part-of-speech tags achieve 97.72%4 accuracy on WSJTEST. 3For all our experiments on the WSJ component of the Penn Treebank (Marcus et al., 1993), we use the standard split which is sections 2-21 for training, henceforth WSJTRAIN, section 22 for development, henceforth WSJDEV, and 23 for testing, henceforth WSJTEST. 4The split we used is not standard for part-of-speech tagging. As a result, we do not compare to part-of-speech taggers. 3.2 Beyond Newswire The Brown Corpus The Brown corpus (Marcus et al., 1993) is a standard benchmark used to assess WSJ-trained parsers outside of the newswire domain. When (Kummerfeld et al., 2012) parsed the various Brown verticals with the (then state-of-the-art) Charniak parser (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a), it achieved F1 scores between 83% and 86%, even though its F1 score on WSJTEST was 92.1%. In Table 3, we discover that RSP does not suffer nearly as much degradation, with an average F1-score of 90.3%. To determine whether this increased portability is because of the parser architecture or the use of ELMo vectors, we also run MSP on the Brown verticals. We used the Stanford tagger5 (Toutanova et al., 2003) to tag WSJTRAIN and the Brown verticals so that MSP could be given these at train and test time. We learned that most of the improvement can be attributed to the ELMo word representations. In fact, even if we use MSP with gold POS tags, the average performance is 3.4% below RSP. Question Bank and Genia Despite being a standard benchmark for parsing domain adaptation, the Brown corpus has considerable commonality with newswire text. It is primarily composed of well-formed sentences with similar syntactic phenomena. Perhaps the main challenge with the Brown corpus is a difference in vocabulary, rather than a difference in syntax, which may explain the success of RSP, which leverages contextualized embeddings learned from a large corpus. If we try to run RSP on a more syntactically divergent corpus like QuestionBank6 (Judge et al., 2006), we find much more performance degradation. This is unsurprising, since WSJTRAIN does not contain many examples of question syntax. But how many examples do we need, to get good performance? 5We used the english-left3words-distsim.tagger model from the 2017-06-09 release of the Stanford POS tagger since it achieved the best accuracy on the Brown corpus. 6For all our experiments on QuestionBank, we use the following split: sentences 1-1000 and 2001-3000 for training, henceforth QBANKTRAIN, 1001-1500 and 3001-3500 for development, henceforth QBANKDEV, and 1501-2000 and 2501-4000 for testing, henceforth QBANKTEST. This split is described at https://nlp.stanford.edu/data/QuestionBankStanford.shtml. 
1194 Section F1 RSP MSP + Stanford POS tags MSP + gold POS tags Charniak F (popular) 91.42 87.01 87.84 85.91 G (biographies) 90.04 86.14 86.91 84.56 K (general) 90.08 85.53 86.46 84.09 L (mystery) 89.65 85.61 86.47 83.95 M (science) 90.52 86.91 87.52 84.65 N (adventure) 91.00 86.53 87.53 85.2 P (romance) 89.76 85.77 86.59 84.09 R (humor) 89.54 84.98 85.69 83.60 average 90.25 86.06 86.88 84.51 Table 3: Parsing performance on Brown verticals. MSP refers to the Minimal Span Parser (Stern et al., 2017a). Charniak refers to the Charniak parser with reranking and self-training (Charniak, 2000; Charniak and Johnson, 2005; McClosky et al., 2006a). MSP + Stanford POS tags refers to MSP trained and tested using part-of-speech tags predicted by the Stanford tagger (Toutanova et al., 2003). Training Data Rec. Prec. F1 WSJ QBANK 40k 0 91.07 88.77 89.91 0 2k 94.44 96.23 95.32 40k 2k 95.84 97.02 96.43 40k 50 93.85 95.91 94.87 40k 100 95.08 96.06 95.57 40k 400 94.94 97.05 95.99 Table 4: Performance of RSP on QBANKDEV. Training Data Rec Prec F1 WSJ GENIA 40k 0 72.51 88.84 79.85 0k 14k 88.04 92.30 90.12 40k 14k 88.24 92.33 90.24 40k 50 82.30 90.55 86.23 40k 100 83.94 89.97 86.85 40k 500 85.52 91.01 88.18 Table 5: Performance of RSP on GENIADEV. For the experiments summarized in table 4 and table 5 involving 40k sentences from WSJTRAIN, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing an equal number of target domain and WSJTRAIN sentences. Surprisingly, with only 50 annotated questions (see Table 4), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%. This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN. The resulting system improves slightly on WSJTEST getting 94.38%. On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005), we see a similar, if somewhat less dramatic, trend. See Table 5. With 50 annotated sentences, performance on GENIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky’s thesis (McClosky, 2010) – the one that trains on all 14k sentences from GENIATRAIN and self-trains using 270k sentences from PubMed. That parser achieves 87.6%, which we outperform with just 500 sentences from GENIATRAIN. These results suggest that it is currently feasible to extend a parser to a syntactically distant domain (for which no gold parses exist) with a couple hours of effort. We explore this possibility in the next section. 4 Rapid Parser Extension To create a parser for their geometry question answering system, (Seo et al., 2015) did the following: • Designed regular expressions to identify mathematical expressions. • Replaced the identified expressions with dummy words. • Parsed the resulting sentences. 1195 FRAG . . NP 24 and QS = 10 SYM = NP PR , , NP PQRS PP In the rhombus (a) Training only on WSJTRAIN. FRAG . . NP PR = 24 and QS = 10 , , PP In the rhombus PQRS (b) Retraining on partial annotations. Figure 3: The top-level split for the development sentence “In the rhombus PQRS, PR = 24 and QS = 10.” before and after retraining RSP on 63 partially annotated geometry statements. • Substituted the regex-analyzed expressions for the dummy words in the parses. It is clear why this was necessary. Figure 3 (top) shows how RSP (trained only on WSJTRAIN) parses the sentence “In the rhombus PQRS, PR = 24 and QS = 10.” The result is completely wrong, and useless to a downstream application. 
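For illustration, the "regex-and-replace" preprocessing described above can be approximated in a few lines of Python. This is a hypothetical sketch, not the pipeline of Seo et al. (2015): the regular expression, the dummy-word scheme, and the parse() callable (assumed to return a bracketed-string parse) are all stand-ins.

import re

MATH_EXPR = re.compile(r"[A-Z]{2,}\s*=\s*\d+")  # toy pattern for expressions like "PR = 24"

def parse_with_placeholders(sentence, parse):
    # 1. find mathematical expressions, 2. replace them with dummy words, 3. parse,
    # 4. substitute the expressions back into the (bracketed-string) parse
    expressions = MATH_EXPR.findall(sentence)
    masked = sentence
    for k, expr in enumerate(expressions):
        masked = masked.replace(expr, f"EXPR{k}", 1)
    tree = parse(masked)
    for k, expr in enumerate(expressions):
        tree = tree.replace(f"EXPR{k}", expr)
    return tree

# dummy parser standing in for a real constituency parser
print(parse_with_placeholders("In the rhombus PQRS, PR = 24 and QS = 10.",
                              parse=lambda s: "(S " + s + " )"))

The downsides listed next are exactly the places where this kind of preprocessing breaks down.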
Still, beyond just the inconvenience of building additional infrastructure, there are downsides to the “regex-and-replace” strategy: 1. It assumes that each expression always maps to the same constituent label. Consider “2x = 3y”. This is a verb phrase in the sentence “In the above figure, x is prime and 2x = 3y.” However, it is a noun phrase in the sentence “The equation 2x = 3y has 2 solutions.” If we replace both instances with the same dummy word, the parser will almost certainly become confused in one of the two instances. 2. It assumes that each expression is always a constituent. Suppose that we replace the expression “AB < 30” with a dummy word. This means we cannot properly parse a sentence like “When angle AB < 30, the lines are parallel,” because the constituent “angle AB” no longer exists in the resulting sentence. 3. It does not handle other syntactic variation. As we will see in the next section, the geometry domain has a propensity for using right-attaching participial adjective phrases, like “labeled x” in the phrase “the segment labeled x.” Encouraging a parser to recognize this syntactic construct is out-of-scope for the “regex-and-replace” strategy. Instead, we propose directly extending the parser by providing a few domain-specific examples like those in Figure 1. Because RSP’s model directly predicts span constituency, we can simply mark up a sentence with the “tricky” domain-specific constituents that the model will not already have learned from WSJTRAIN. For instance, we mark up NOUN-LABEL constructs like “chord BD”, and equations like “AD = 4”. From these marked-up sentences, we can extract training instances declaring the constituency of certain spans (like “to chord BD” in the third example) and the implied non-constituency of certain spans (like “perpendicular to chord” in the third example). We also allow annotators to explicitly declare the non-constituency of a span via an alternative markup (not shown). We do not require annotators to provide span labels (although they can if desired). If a training instance merely declares a span to be a constituent (but does not provide a particular label), then the loss function only records loss when that span is classified as a non-constituent (i.e. any label is ok). 5 Experiments 5.1 Geometry Questions We took the publicly available training data from (Seo et al., 2015), split the data into sentences, and then annotated each sentence as in Figure 1. Next, we randomly split these sentences into GEOTRAIN and GEODEV7. After removing duplicate sentences spanning both sets, we ended up with 63 annotated sentences in GEOTRAIN and 62 in GEODEV. In GEOTRAIN, we made an average of 2.8 constituent declarations and 0.3 (explicit) nonconstituent declarations per sentence. After preparing the data, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing 50 randomly selected WSJTRAIN sentences, plus all of GEOTRAIN. The results are in table 6. After fine-tuning, the model 7GEOTRAIN and GEODEV are available at https://github.com/vidurj/parser-adaptation/tree/master/data. 1196 Training Data GEODEV WSJTEST correct constituents % error-free % F1 WSJTRAIN 71.9 45.2 94.28 WSJTRAIN + GEOTRAIN 87.0 72.6 94.30 Table 6: RSP performance on GEODEV. Training Data BIOCHEMDEV WSJTEST correct constituents % error-free % F1 WSJTRAIN 70.1 27.0 94.28 WSJTRAIN + BIOCHEMTRAIN 79.5 46.7 94.23 Table 7: RSP performance on BIOCHEMDEV. • Given [ a circle with [ the tangent shown ] ] . 
• Find the hypotenuse of [ the triangle labeled t ] . • Examine [ the following diagram with [ the square highlighted ] ] . Figure 4: Three partial annotations targeting right-attaching participial adjectives. gets 87% of the 185 annotations on GEODEV correct, compared with 71.9% before fine-tuning8. Moreover, the fraction of sentences with no errors increases from 45.2% to 72.6%. With only a few dozen partially-annotated training examples, not only do we see a large increase in domain performance, but there is also no degradation in the parser’s performance on newswire. Some GEODEV parses have enormous qualitative differences, like the example shown in Figure 3. For the GEODEV sentences on which we get errors after retraining, the errors fall predominantly into three categories. First, approximately 44% have some mishandled math syntax, like failing to recognize “dimensions 16 by 8” as a constituent, or providing a flat structuring of the equation “BAC = 1/4 * ACB” (instead of recognizing “1/4 * ACB” as a subconstituent). Second, approximately 19% have PP-attachment errors. Third, another 19% fail to correctly analyze right-attaching participial adjectives like “labeled x” in the noun phrase “the segment labeled x” or 8This improvement has a p-value of 10−4 under the onesided, two-sample difference between proportions test. “indicated” in the noun phrase “the center indicated.” This phenomenon is unusually frequent in geometry but was insufficiently marked-up in our training examples. For instance, while we have a training instance “Find [ the measure of [ the angle designated by x ] ],” it does not explicitly highlight the constituency of “designated by x”. This suggests that in practice, this domain adaptation method could benefit from an iterative cycle in which a user assesses the parser’s errors on their target domain, creates some partial annotations that address these issues, retrains the parser, and then repeats the process until satisfied. As a proof-of-concept, we invented 3 additional sentences with right-attaching participial adjectives (shown in Figure 4), added them to GEOTRAIN, and then retrained. Indeed, the handling of participial adjectives in GEODEV improved, increasing the overall percentage of correctly identified constituents to 88.6% and the percentage of errorfree sentences to 75.8%. 5.2 Biomedicine and Chemistry We ran a similar experiment using biomedical and chemistry text, taken from the unannotated data provided by (Nivre et al., 2007). We partially annotated 134 sentences and randomly split them into BIOCHEMTRAIN (72 sentences) and BIOCHEMDEV (62 sentences)9. In BIOCHEMTRAIN, we made an average of 4.2 constituent declarations per sentence. We made no nonconstituent declarations. Again, we started with RSP trained on WSJTRAIN, and fine-tuned it on minibatches containing annotations from 50 randomly selected WSJ9BIOCHEMTRAIN and BIOCHEMDEV are available at https://github.com/vidurj/parser-adaptation/tree/master/data. 1197 TRAIN sentences, plus all of BIOCHEMTRAIN. Table 7 shows the improvement in the percentage of correctly-identified annotated constituents and the percentage of test sentences for which the parse agrees with every annotation. As with the geometry domain, we get significant improvements using only dozens of partially annotated training sentences. 6 Related Work The two major themes of this paper, domain adaptation and learning from partial annotation, each have a long tradition in natural language processing. 
6.1 Domain Adaptation Domain adaptation has been recognized as a major NLP problem for over a decade (Ben-David et al., 2006; Blitzer et al., 2006; Daum´e, 2007; Finkel and Manning, 2009). In particular, domain adaptation for parsers (Plank, 2011; Ma and Xia, 2013) has received considerable attention. Much of this work (McClosky et al., 2006b; Reichart and Rappoport, 2007; Sagae and Tsujii, 2007; Kawahara and Uchimoto, 2008; McClosky et al., 2010; Sagae, 2010; Baucom et al., 2013; Yu et al., 2015) has focused on how to best use co-training (Blum and Mitchell, 1998) or self-training to augment a small domain corpus, or how to best combine models to perform well on a particular domain. In this work, we focus on the direct impact that just a few dozen partially annotated out-of-domain examples can have, when using a particular neural model with contextualized word representations. Co-training, self-training, and model combination are orthogonal to our approach. Our work is a spiritual successor to (Garrette and Baldridge, 2013), which shows how to train a part-of-speech tagger with a minimal amount of annotation effort. 6.2 Learning from Partial Annotation Most literature on training parsers from partial annotations (Sassano and Kurohashi, 2010; Spreyer et al., 2010; Flannery et al., 2011; Flannery and Mori, 2015; Mielens et al., 2015) focuses on dependency parsing. (Li et al., 2016) provides a good overview. Here we highlight three important highlevel strategies. The first is “complete-then-train” (Mirroshandel and Nasr, 2011; Majidi and Crane, 2013), which “completes” every partially annotated dependency parse by finding the most likely parse (according to an already trained parser model) that respects the constraints of the partial annotations. These “completed” parses are then used to train a new parser. The second strategy (Nivre et al., 2014; Li et al., 2016) is similar to “complete-then-train,” but integrates parse completion into the training process. At each iteration, new “complete” parses are created using the parser model from the most recent training iteration. The third strategy (Li et al., 2014, 2016) transforms each partial annotation into a forest of parses that encodes all fully-specified parses permitted by the partial annotation. Then, the training objective is modified to support optimization over these forests. Our work differs from these in two respects. First, since we are training a constituency parser, our partial annotations are constituent bracketings rather than dependency arcs. Second, and more importantly, we can use the partial annotations for training without modifying either the training algorithm or the training data. While the bulk of the literature on training from partial annotations focuses on dependency parsing, the earliest papers (Pereira and Schabes, 1992; Hwa, 1999) focus on constituency parsing. These leverage an adapted version of the inside-outside algorithm for estimating the parameters of a probabilistic context-free grammar (PCFG). Our work is not tied to PCFG parsing, nor does it require a specialized training algorithm when going from full annotations to partial annotations. 7 Conclusion Recent developments in neural natural language processing have made it very easy to build custom parsers. 
Not only do contextualized word representations help parsers learn the syntax of new domains with very few examples, but they also work extremely well with parsing models that correspond directly with a granular and intuitive annotation task (like identifying whether a span is a constituent). This allows you to train with either full or partial annotations without any change to the training process. This work provides a convenient path forward for the researcher who requires a parser for their domain, but laments that “parsers don’t work outside of newswire.” With a couple hours of effort 1198 (and a layman’s understanding of syntactic building blocks), they can get significant performance improvements. We envision an iterative use case in which a user assesses a parser’s errors on their target domain, creates some partial annotations to teach the parser how to fix these errors, then retrains the parser, repeating the process until they are satisfied. References Eric Baucom, Levi King, and Sandra K¨ubler. 2013. Domain adaptation for parsing. In RANLP. Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. 2006. Analysis of representations for domain adaptation. In NIPS. John Blitzer, Ryan T. McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In EMNLP. Avrim Blum and Tom Mitchell. 1998. Combining labeled and unlabeled data with co-training. In Proceedings of the eleventh annual conference on Computational learning theory. ACM, pages 92–100. Eugene Charniak. 2000. A maximum-entropy-inspired parser. In ANLP. Eugene Charniak and Mark Johnson. 2005. Coarseto-fine n-best parsing and maxent discriminative reranking. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, pages 173–180. James Cross and Liang Huang. 2016. Span-based constituency parsing with a structure-label system and provably optimal dynamic oracles. In EMNLP. Hal Daum´e. 2007. Frustratingly easy domain adaptation. CoRR abs/0907.1815. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. CoRR abs/1602.07776. http://arxiv.org/abs/1602.07776. Jenny Rose Finkel and Christopher D. Manning. 2009. Hierarchical bayesian domain adaptation. In HLTNAACL. Daniel Flannery, Yusuke Miyao, Graham Neubig, and Shinsuke Mori. 2011. Training dependency parsers from partially annotated corpora. In IJCNLP. Daniel Flannery and Shinsuke Mori. 2015. Combining active learning and partial annotation for domain adaptation of a japanese dependency parser. In IWPT. Dan Garrette and Jason Baldridge. 2013. Learning a part-of-speech tagger from two hours of annotation. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 138–147. Rebecca Hwa. 1999. Supervised grammar induction using training data with limited constituent information. CoRR cs.CL/9905001. John Judge, Aoife Cahill, and Josef Van Genabith. 2006. Questionbank: Creating a corpus of parseannotated questions. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 497–504. Daisuke Kawahara and Kiyotaka Uchimoto. 2008. Learning reliability of parses for domain adaptation of dependency parsing. In IJCNLP. Jonathan K. Kummerfeld, David Leo Wright Hall, James R. 
Curran, and Dan Klein. 2012. Parser showdown at the wall street corral: An empirical investigation of error types in parser output. In EMNLP-CoNLL. Zhenghua Li, Min Zhang, and Wenliang Chen. 2014. Soft cross-lingual syntax projection for dependency parsing. In COLING. Zhenghua Li, Yue Zhang, Jiayuan Chao, and Min Zhang. 2016. Training dependency parsers with partial annotation. CoRR abs/1609.09247. Xuezhe Ma and Fei Xia. 2013. Dependency parser adaptation with subtrees from auto-parsed target domain data. In ACL. Saeed Majidi and Gregory R. Crane. 2013. Committee-based active learning for dependency parsing. In TPDL. Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Computational linguistics 19(2):313–330. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In NIPS. David McClosky. 2010. Any domain parsing: automatic domain adaptation for natural language parsing . David McClosky, Eugene Charniak, and Mark Johnson. 2006a. Effective self-training for parsing. In HLT-NAACL. David McClosky, Eugene Charniak, and Mark Johnson. 2006b. Reranking and self-training for parser adaptation. In ACL. 1199 David McClosky, Eugene Charniak, and Mark Johnson. 2010. Automatic domain adaptation for parsing. In HLT-NAACL. Jason Mielens, Liang Sun, and Jason Baldridge. 2015. Parse imputation for dependency annotations. In ACL. Seyed Abolghasem Mirroshandel and Alexis Nasr. 2011. Active learning for dependency parsing using partially annotated sentences. In IWPT. Joakim Nivre, Yoav Goldberg, and Ryan T. McDonald. 2014. Constrained arc-eager dependency parsing. Computational Linguistics 40:249–527. Joakim Nivre, Johan Hall, Sandra K¨ubler, Ryan McDonald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The conll 2007 shared task on dependency parsing. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). Fernando Pereira and Yves Schabes. 1992. Insideoutside reestimation from partially bracketed corpora. In ACL. M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer. 2018. Deep contextualized word representations. ArXiv e-prints . Matthew E. Peters, Waleed Ammar, Chandra Bhagavatula, and Russell Power. 2017. Semi-supervised sequence tagging with bidirectional language models. In ACL. Barbara Plank. 2011. Domain adaptation for parsing. Citeseer. Roi Reichart and Ari Rappoport. 2007. Self-training for enhancement and domain adaptation of statistical parsers trained on small datasets. In ACL. Kenji Sagae. 2010. Self-training without reranking for parser domain adaptation and its impact on semantic role labeling. Kenji Sagae and Jun’ichi Tsujii. 2007. Dependency parsing and domain adaptation with lr models and parser ensembles. In EMNLP-CoNLL. Manabu Sassano and Sadao Kurohashi. 2010. Using smaller constituents rather than sentences in active learning for japanese dependency parsing. In ACL. Min Joon Seo, Hannaneh Hajishirzi, Ali Farhadi, Oren Etzioni, and Clint Malcolm. 2015. Solving geometry problems: Combining text and diagram interpretation. In EMNLP. Kathrin Spreyer, Lilja Ovrelid, and Jonas Kuhn. 2010. Training parsers on partial trees: A cross-language comparison. In LREC. Mitchell Stern, Jacob Andreas, and Dan Klein. 2017a. A minimal span-based neural constituency parser. CoRR abs/1705.03919. 
http://arxiv.org/abs/1705.03919. Mitchell Stern, Daniel Fried, and Dan Klein. 2017b. Effective inference for generative neural parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017. pages 1695–1700. https://aclanthology.info/papers/D171178/d17-1178. Yuka Tateisi, Akane Yakushiji, Tomoko Ohta, and Jun’ichi Tsujii. 2005. Syntax annotation for the genia corpus. In Companion Volume to the Proceedings of Conference including Posters/Demos and tutorial abstracts. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich partof-speech tagging with a cyclic dependency network. In Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, HLT-NAACL 2003, Edmonton, Canada, May 27 - June 1, 2003. http://aclweb.org/anthology/N/N03/N03-1033.pdf. Wenhui Wang and Baobao Chang. 2016. Graph-based dependency parsing with bidirectional lstm. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 2306–2315. Juntao Yu, Mohab Elkaref, and Bernd Bohnet. 2015. Domain adaptation for dependency parsing via selftraining. In IWPT.
2018
110
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1200–1211 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1200 Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations Vered Shwartz Ido Dagan Computer Science Department, Bar-Ilan University, Ramat-Gan, Israel [email protected] [email protected] Abstract Revealing the implicit semantic relation between the constituents of a nouncompound is important for many NLP applications. It has been addressed in the literature either as a classification task to a set of pre-defined relations or by producing free text paraphrases explicating the relations. Most existing paraphrasing methods lack the ability to generalize, and have a hard time interpreting infrequent or new noun-compounds. We propose a neural model that generalizes better by representing paraphrases in a continuous space, generalizing for both unseen noun-compounds and rare paraphrases. Our model helps improving performance on both the noun-compound paraphrasing and classification tasks. 1 Introduction Noun-compounds hold an implicit semantic relation between their constituents. For example, a ‘birthday cake’ is a cake eaten on a birthday, while ‘apple cake’ is a cake made of apples. Interpreting noun-compounds by explicating the relationship is beneficial for many natural language understanding tasks, especially given the prevalence of nouncompounds in English (Nakov, 2013). The interpretation of noun-compounds has been addressed in the literature either by classifying them to a fixed inventory of ontological relationships (e.g. Nastase and Szpakowicz, 2003) or by generating various free text paraphrases that describe the relation in a more expressive manner (e.g. Hendrickx et al., 2013). Methods dedicated to paraphrasing nouncompounds usually rely on corpus co-occurrences of the compound’s constituents as a source of explicit relation paraphrases (e.g. Wubben, 2010; Versley, 2013). Such methods are unable to generalize for unseen noun-compounds. Yet, most noun-compounds are very infrequent in text (Kim and Baldwin, 2007), and humans easily interpret the meaning of a new noun-compound by generalizing existing knowledge. For example, consider interpreting parsley cake as a cake made of parsley vs. resignation cake as a cake eaten to celebrate quitting an unpleasant job. We follow the paraphrasing approach and propose a semi-supervised model for paraphrasing noun-compounds. Differently from previous methods, we train the model to predict either a paraphrase expressing the semantic relation of a noun-compound (predicting ‘[w2] made of [w1]’ given ‘apple cake’), or a missing constituent given a combination of paraphrase and noun-compound (predicting ‘apple’ given ‘cake made of [w1]’). Constituents and paraphrase templates are represented as continuous vectors, and semantically-similar paraphrase templates are embedded in proximity, enabling better generalization. Interpreting ‘parsley cake’ effectively reduces to identifying paraphrase templates whose “selectional preferences” (Pantel et al., 2007) on each constituent fit ‘parsley’ and ‘cake’. A qualitative analysis of the model shows that the top ranked paraphrases retrieved for each noun-compound are plausible even when the constituents never co-occur (Section 4). We evaluate our model on both the paraphrasing and the classification tasks (Section 5). 
On both tasks, the model’s ability to generalize leads to improved performance in challenging evaluation settings.1 1The code is available at github.com/vered1986/panic 1201 2 Background 2.1 Noun-compound Classification Noun-compound classification is the task concerned with automatically determining the semantic relation that holds between the constituents of a noun-compound, taken from a set of pre-defined relations. Early work on the task leveraged information derived from lexical resources and corpora (e.g. Girju, 2007; ´O S´eaghdha and Copestake, 2009; Tratz and Hovy, 2010). More recent work broke the task into two steps: in the first step, a nouncompound representation is learned from the distributional representation of the constituent words (e.g. Mitchell and Lapata, 2010; Zanzotto et al., 2010; Socher et al., 2012). In the second step, the noun-compound representations are used as feature vectors for classification (e.g. Dima and Hinrichs, 2015; Dima, 2016). The datasets for this task differ in size, number of relations and granularity level (e.g. Nastase and Szpakowicz, 2003; Kim and Baldwin, 2007; Tratz and Hovy, 2010). The decision on the relation inventory is somewhat arbitrary, and subsequently, the inter-annotator agreement is relatively low (Kim and Baldwin, 2007). Specifically, a noun-compound may fit into more than one relation: for instance, in Tratz (2011), business zone is labeled as CONTAINED (zone contains business), although it could also be labeled as PURPOSE (zone whose purpose is business). 2.2 Noun-compound Paraphrasing As an alternative to the strict classification to predefined relation classes, Nakov and Hearst (2006) suggested that the semantics of a noun-compound could be expressed with multiple prepositional and verbal paraphrases. For example, apple cake is a cake from, made of, or which contains apples. The suggestion was embraced and resulted in two SemEval tasks. SemEval 2010 task 9 (Butnariu et al., 2009) provided a list of plausible human-written paraphrases for each nouncompound, and systems had to rank them with the goal of high correlation with human judgments. In SemEval 2013 task 4 (Hendrickx et al., 2013), systems were expected to provide a ranked list of paraphrases extracted from free text. Various approaches were proposed for this task. Most approaches start with a pre-processing step of extracting joint occurrences of the constituents from a corpus to generate a list of candidate paraphrases. Unsupervised methods apply information extraction techniques to find and rank the most meaningful paraphrases (Kim and Nakov, 2011; Xavier and Lima, 2014; Pasca, 2015; Pavlick and Pasca, 2017), while supervised approaches learn to rank paraphrases using various features such as co-occurrence counts (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013) or the distributional representations of the nouncompounds (Van de Cruys et al., 2013). One of the challenges of this approach is the ability to generalize. If one assumes that sufficient paraphrases for all noun-compounds appear in the corpus, the problem reduces to ranking the existing paraphrases. It is more likely, however, that some noun-compounds do not have any paraphrases in the corpus or have just a few. The approach of Van de Cruys et al. (2013) somewhat generalizes for unseen noun-compounds. They represented each noun-compound using a compositional distributional vector (Mitchell and Lapata, 2010) and used it to predict paraphrases from the corpus. 
Similar noun-compounds are expected to have similar distributional representations and therefore yield the same paraphrases. For example, if the corpus does not contain paraphrases for plastic spoon, the model may predict the paraphrases of a similar compound such as steel knife. In terms of sharing information between semantically-similar paraphrases, Nulty and Costello (2010) and Surtani et al. (2013) learned “is-a” relations between paraphrases from the co-occurrences of various paraphrases with each other. For example, the specific ‘[w2] extracted from [w1]’ template (e.g. in the context of olive oil) generalizes to ‘[w2] made from [w1]’. One of the drawbacks of these systems is that they favor more frequent paraphrases, which may co-occur with a wide variety of more specific paraphrases. 2.3 Noun-compounds in other Tasks Noun-compound paraphrasing may be considered as a subtask of the general paraphrasing task, whose goal is to generate, given a text fragment, additional texts with the same meaning. However, general paraphrasing methods do not guarantee to explicate implicit information conveyed in the original text. Moreover, the most notable source for extracting paraphrases is multiple translations of the same text (Barzilay and McKeown, 1202 (23) made (28) apple (4145) cake ... (7891) of (1) [w1] (2) [w2] (3) [p] of cake made [w1] MLPw ˆ w1i = 28 (23) made (28) apple (4145) cake ... (7891) of (1) [w1] (2) [w2] (3) [p] (78) [w2] containing [w1] ... (131) [w2] made of [w1] ... [p] cake apple MLPp ˆpi = 78 Figure 1: An illustration of the model predictions for w1 and p given the triplet (cake, made of, apple). The model predicts each component given the encoding of the other two components, successfully predicting ‘apple’ given ‘cake made of [w1]’, while predicting ‘[w2] containing [w1]’ for ‘cake [p] apple’. 2001; Ganitkevitch et al., 2013; Mallinson et al., 2017). If a certain concept can be described by an English noun-compound, it is unlikely that a translator chose to translate its foreign language equivalent to an explicit paraphrase instead. Another related task is Open Information Extraction (Etzioni et al., 2008), whose goal is to extract relational tuples from text. Most system focus on extracting verb-mediated relations, and the few exceptions that addressed noun-compounds provided partial solutions. Pal and Mausam (2016) focused on segmenting multi-word nouncompounds and assumed an is-a relation between the parts, as extracting (Francis Collins, is, NIH director) from “NIH director Francis Collins”. Xavier and Lima (2014) enriched the corpus with compound definitions from online dictionaries, for example, interpreting oil industry as (industry, produces and delivers, oil) based on the WordNet definition “industry that produces and delivers oil”. This method is very limited as it can only interpret noun-compounds with dictionary entries, while the majority of English noun-compounds don’t have them (Nakov, 2013). 3 Paraphrasing Model As opposed to previous approaches, that focus on predicting a paraphrase template for a given nouncompound, we reformulate the task as a multitask learning problem (Section 3.1), and train the model to also predict a missing constituent given the paraphrase template and the other constituent. Our model is semi-supervised, and it expects as input a set of noun-compounds and a set of constrained part-of-speech tag-based templates that make valid prepositional and verbal paraphrases. 
Section 3.2 details the creation of training data, and Section 3.3 describes the model. 3.1 Multi-task Reformulation Each training example consists of two constituents and a paraphrase (w2, p, w1), and we train the model on 3 subtasks: (1) predict p given w1 and w2, (2) predict w1 given p and w2, and (3) predict w2 given p and w1. Figure 1 demonstrates the predictions for subtasks (1) (right) and (2) (left) for the training example (cake, made of, apple). Effectively, the model is trained to answer questions such as “what can cake be made of?”, “what can be made of apple?”, and “what are the possible relationships between cake and apple?”. The multi-task reformulation helps learning better representations for paraphrase templates, by embedding semantically-similar paraphrases in proximity. Similarity between paraphrases stems either from lexical similarity and overlap between the paraphrases (e.g. ‘is made of’ and ‘made of’), or from shared constituents, e.g. ‘[w2] involved in [w1]’ and ‘[w2] in [w1] industry’ can share [w1] = insurance and [w2] = company. This allows the model to predict a correct paraphrase for a given noun-compound, even when the constituents do not occur with that paraphrase in the corpus. 3.2 Training Data We collect a training set of (w2, p, w1, s) examples, where w1 and w2 are constituents of a nouncompound w1w2, p is a templated paraphrase, and s is the score assigned to the training instance.2 2We refer to “paraphrases” and “paraphrase templates” interchangeably. In the extracted templates, [w2] always precedes [w1], probably because w2 is normally the head noun. 1203 We use the 19,491 noun-compounds found in the SemEval tasks datasets (Butnariu et al., 2009; Hendrickx et al., 2013) and in Tratz (2011). To extract patterns of part-of-speech tags that can form noun-compound paraphrases, such as ‘[w2] VERB PREP [w1]’, we use the SemEval task training data, but we do not use the lexical information in the gold paraphrases. Corpus. Similarly to previous noun-compound paraphrasing approaches, we use the Google Ngram corpus (Brants and Franz, 2006) as a source of paraphrases (Wubben, 2010; Li et al., 2010; Surtani et al., 2013; Versley, 2013). The corpus consists of sequences of n terms (for n ∈ {3, 4, 5}) that occur more than 40 times on the web. We search for n-grams following the extracted patterns and containing w1 and w2’s lemmas for some noun-compound in the set. We remove punctuation, adjectives, adverbs and some determiners to unite similar paraphrases. For example, from the 5-gram ‘cake made of sweet apples’ we extract the training example (cake, made of, apple). We keep only paraphrases that occurred at least 5 times, resulting in 136,609 instances. Weighting. Each n-gram in the corpus is accompanied with its frequency, which we use to assign scores to the different paraphrases. For instance, ‘cake of apples’ may also appear in the corpus, although with lower frequency than ‘cake from apples’. As also noted by Surtani et al. (2013), the shortcoming of such a weighting mechanism is that it prefers shorter paraphrases, which are much more common in the corpus (e.g. count(‘cake made of apples’) ≪count(‘cake of apples’)). We overcome this by normalizing the frequencies for each paraphrase length, creating a distribution of paraphrases in a given length. Negative Samples. 
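To make the multi-task reformulation above concrete, here is a small Python sketch (not the authors' code; the dictionary format and placeholder strings are assumptions) that expands one weighted (w2, p, w1, s) training tuple into the three subtask instances, each with the component to be predicted masked by a special symbol.

def expand_example(w2, paraphrase, w1, score):
    # subtask (1): predict the paraphrase given both constituents  ("cake [p] apple")
    # subtask (2): predict w1 given w2 and the paraphrase          ("cake made of [w1]")
    # subtask (3): predict w2 given w1 and the paraphrase          ("[w2] made of apple")
    return [
        {"input": (w2, "[p]", w1),          "target": paraphrase, "weight": score},
        {"input": (w2, paraphrase, "[w1]"), "target": w1,         "weight": score},
        {"input": ("[w2]", paraphrase, w1), "target": w2,         "weight": score},
    ]

for instance in expand_example("cake", "[w2] made of [w1]", "apple", 1.0):
    print(instance)

In this sketch the weight stands for the score s, which is derived from the length-normalized n-gram frequencies described in the training-data section.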
We add 1% of negative samples by selecting random corpus words w1 and w2 that do not co-occur, and adding an example (w2, [w2] is unrelated to [w1], w1, sn), for some predefined negative samples score sn. Similarly, for a word wi that did not occur in a paraphrase p we add (wi, p, UNK, sn) or (UNK, p, wi, sn), where UNK is the unknown word. This may help the model deal with non-compositional noun-compounds, where w1 and w2 are unrelated, rather than forcibly predicting some relation between them. 3.3 Model For a training instance (w2, p, w1, s), we predict each item given the encoding of the other two. Encoding. We use the 100-dimensional pretrained GloVe embeddings (Pennington et al., 2014), which are fixed during training. In addition, we learn embeddings for the special words [w1], [w2], and [p], which are used to represent a missing component, as in “cake made of [w1]”, “[w2] made of apple”, and “cake [p] apple”. For a missing component x ∈{[p], [w1], [w2]} surrounded by the sequences of words v1:i−1 and vi+1:n, we encode the sequence using a bidirectional long-short term memory (bi-LSTM) network (Graves and Schmidhuber, 2005), and take the ith output vector as representing the missing component: bLS(v1:i, x, vi+1:n)i. In bi-LSTMs, each output vector is a concatenation of the outputs of the forward and backward LSTMs, so the output vector is expected to contain information on valid substitutions both with respect to the previous words v1:i−1 and the subsequent words vi+1:n. Prediction. We predict a distribution of the vocabulary of the missing component, i.e. to predict w1 correctly we need to predict its index in the word vocabulary Vw, while the prediction of p is from the vocabulary of paraphrases in the training set, Vp. We predict the following distributions: ˆp = softmax(Wp · bLS( ⃗w2, [p], ⃗w1)2) ˆ w1 = softmax(Ww · bLS( ⃗w2, ⃗p1:n, [w1])n+1) ˆ w2 = softmax(Ww · bLS([w2], ⃗p1:n, ⃗w1)1) (1) where Ww ∈R|Vw|×2d, Wp ∈R|Vp|×2d, and d is the embeddings dimension. During training, we compute cross-entropy loss for each subtask using the gold item and the prediction, sum up the losses, and weight them by the instance score. During inference, we predict the missing components by picking the best scoring index in each distribution:3 ˆpi = argmax(ˆp) ˆ w1i = argmax( ˆ w1) ˆ w2i = argmax( ˆ w2) (2) The subtasks share the pre-trained word embeddings, the special embeddings, and the biLSTM parameters. Subtasks (2) and (3) also share Ww, the MLP that predicts the index of a word. 3In practice, we pick the k best scoring indices in each distribution for some predefined k, as we discuss in Section 5. 1204 [w1] [w2] Predicted Paraphrases [w2] Paraphrase Predicted [w1] Paraphrase [w1] Predicted [w2] cataract surgery [w2] of [w1] surgery [w2] to treat [w1] heart [w2] to treat [w1] cataract surgery [w2] on [w1] brain drug [w2] to remove [w1] back patient [w2] in patients with [w1] knee transplant software company [w2] of [w1] company [w2] engaged in [w1] management [w2] engaged in [w1] software company [w2] to develop [w1] production firm [w2] in [w1] industry computer engineer [w2] involved in [w1] business industry stone wall [w2] is of [w1] meeting [w2] held in [w1] spring [w2] held in [w1] morning party [w2] of [w1] afternoon meeting [w2] is made of [w1] hour rally [w2] made of [w1] day session Table 1: Examples of top ranked predicted components using the model: predicting the paraphrase given w1 and w2 (left), w1 given w2 and the paraphrase (middle), and w2 given w1 and the paraphrase (right). 
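To make the architecture concrete, the following is a minimal sketch of the multi-task model just described. The paper's implementation uses DyNet; this illustration uses PyTorch instead, and the class names, tensor shapes, and hyperparameters are assumptions for exposition, not the authors' exact code.

```python
import torch
import torch.nn as nn

class ParaphraseModel(nn.Module):
    def __init__(self, word_vocab_size, para_vocab_size, emb_dim=100):
        super().__init__()
        # Pre-trained GloVe vectors would normally be loaded and frozen here;
        # a randomly initialised table (also covering [w1], [w2], [p]) stands in.
        self.embed = nn.Embedding(word_vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, emb_dim, bidirectional=True, batch_first=True)
        self.W_w = nn.Linear(2 * emb_dim, word_vocab_size)  # shared by the w1/w2 subtasks
        self.W_p = nn.Linear(2 * emb_dim, para_vocab_size)  # paraphrase subtask

    def forward(self, token_ids, slot_pos, predict_paraphrase):
        # token_ids: (1, seq_len) ids of "w2 p w1" with one component replaced
        # by its placeholder; slot_pos is the position of that placeholder.
        outputs, _ = self.bilstm(self.embed(token_ids))   # (1, seq_len, 2*emb_dim)
        hidden = outputs[:, slot_pos, :]                  # biLSTM output at the missing slot
        return self.W_p(hidden) if predict_paraphrase else self.W_w(hidden)

def training_step(model, optimizer, subtasks, score):
    # `subtasks` holds the three (token_ids, slot_pos, predict_paraphrase, gold)
    # views of one (w2, p, w1, s) instance; losses are summed and weighted by s.
    loss_fn = nn.CrossEntropyLoss()
    loss = score * sum(loss_fn(model(t, pos, is_p), gold)
                       for t, pos, is_p, gold in subtasks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Sharing the embedding table, the biLSTM, and the word-prediction layer across the three subtasks is what lets semantically-similar paraphrases end up with nearby encodings.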
[w2] is for [w1] [w2] belongs to [w1] [w2] pertaining to [w1] [w2] issued by [w1] [w2] related to [w1] [w2] by way of [w1] [w2] in terms of [w1] [w2] done by [w1] [w2] to produce [w1] [w2] involved in [w1] [w2] with [w1] [w2] composed of [w1] [w2] employed in [w1] [w2] owned by [w1] [w2] by means of [w1] [w2] to make [w1] [w2] produced by [w1] [w2] source of [w1] [w2] found in [w1] [w2] offered by [w1] [w2] out of [w1] [w2] held by [w1] [w2] for use in [w1] [w2] consists of [w1] [w2] relating to [w1] [w2] devoted to [w1] [w2] engaged in [w1] [w2] occur in [w1] [w2] caused by [w1] [w2] supplied by [w1] [w2] part of [w1] [w2] provided by [w1] [w2] generated by [w1] [w2] made of [w1] [w2] consisting of [w1] [w2] is made of [w1] [w2] for [w1] [w2] from [w1] [w2] created by [w1] [w2] given by [w1] [w2] of providing [w1] [w2] belonging to [w1] [w2] aimed at [w1] [w2] conducted by [w1] [w2] dedicated to [w1] [w2] made by [w1] [w2] because of [w1] [w2] included in [w1] [w2] with respect to [w1] [w2] given to [w1] Figure 2: A t-SNE map of a sample of paraphrases, using the paraphrase vectors encoded by the biLSTM, for example bLS([w2] made of [w1]). Implementation Details. The model is implemented in DyNet (Neubig et al., 2017). We dedicate a small number of noun-compounds from the corpus for validation. We train for up to 10 epochs, stopping early if the validation loss has not improved in 3 epochs. We use Momentum SGD (Nesterov, 1983), and set the batch size to 10 and the other hyper-parameters to their default values. 4 Qualitative Analysis To estimate the quality of the proposed model, we first provide a qualitative analysis of the model outputs. Table 1 displays examples of the model outputs for each possible usage: predicting the paraphrase given the constituent words, and predicting each constituent word given the paraphrase and the other word. The examples in the table are from among the top 10 ranked predictions for each componentpair. We note that most of the (w2, paraphrase, w1) triplets in the table do not occur in the training data, but are rather generalized from similar examples. For example, there is no training instance for “company in the software industry” but there is a “firm in the software industry” and a company in many other industries. While the frequent prepositional paraphrases are often ranked at the top of the list, the model also retrieves more specified verbal paraphrases. The list often contains multiple semanticallysimilar paraphrases, such as ‘[w2] involved in [w1]’ and ‘[w2] in [w1] industry’. This is a result of the model training objective (Section 3) which positions the vectors of semantically-similar paraphrases close to each other in the embedding space, based on similar constituents. To illustrate paraphrase similarity we compute a t-SNE projection (Van Der Maaten, 2014) of the embeddings of all the paraphrases, and draw a sample of 50 paraphrases in Figure 2. The projection positions semantically-similar but lexicallydivergent paraphrases in proximity, likely due to 1205 many shared constituents. For instance, ‘with’, ‘from’, and ‘out of’ can all describe the relation between food words and their ingredients. 5 Evaluation: Noun-Compound Interpretation Tasks For quantitative evaluation we employ our model for two noun-compound interpretation tasks. The main evaluation is on retrieving and ranking paraphrases (§5.1). 
For the sake of completeness, we also evaluate the model on classification to a fixed inventory of relations (§5.2), although it wasn’t designed for this task. 5.1 Paraphrasing Task Definition. The general goal of this task is to interpret each noun-compound to multiple prepositional and verbal paraphrases. In SemEval 2013 Task 4,4 the participating systems were asked to retrieve a ranked list of paraphrases for each noun-compound, which was automatically evaluated against a similarly ranked list of paraphrases proposed by human annotators. Model. For a given noun-compound w1w2, we first predict the k = 250 most likely paraphrases: ˆp1, ..., ˆpk = argmaxk ˆp, where ˆp is the distribution of paraphrases defined in Equation 1. While the model also provides a score for each paraphrase (Equation 1), the scores have not been optimized to correlate with human judgments. We therefore developed a re-ranking model that receives a list of paraphrases and re-ranks the list to better fit the human judgments. We follow Herbrich (2000) and learn a pairwise ranking model. The model determines which of two paraphrases of the same noun-compound should be ranked higher, and it is implemented as an SVM classifier using scikit-learn (Pedregosa et al., 2011). For training, we use the available training data with gold paraphrases and ranks provided by the SemEval task organizers. We extract the following features for a paraphrase p: 1. The part-of-speech tags contained in p 2. The prepositions contained in p 3. The number of words in p 4. Whether p ends with the special [w1] symbol 5. cosine(bLS([w2], p, [w1])2, ⃗Vp ˆpi) · ˆpˆpi where ⃗Vp ˆpi is the biLSTM encoding of the predicted paraphrase computed in Equation 1 and ˆpˆpi 4https://www.cs.york.ac.uk/semeval-2013/task4 is its confidence score. The last feature incorporates the original model score into the decision, as to not let other considerations such as preposition frequency in the training set take over. During inference, the model sorts the list of paraphrases retrieved for each noun-compound according to the pairwise ranking. It then scores each paraphrase by multiplying its rank with its original model score, and prunes paraphrases with final score < 0.025. The values for k and the threshold were tuned on the training set. Evaluation Settings. The SemEval 2013 task provided a scorer that compares words and ngrams from the gold paraphrases against those in the predicted paraphrases, where agreement on a prefix of a word (e.g. in derivations) yields a partial scoring. The overall score assigned to each system is calculated in two different ways. The ‘isomorphic’ setting rewards both precision and recall, and performing well on it requires accurately reproducing as many of the gold paraphrases as possible, and in much the same order. The ‘non-isomorphic’ setting rewards only precision, and performing well on it requires accurately reproducing the top-ranked gold paraphrases, with no importance to order. Baselines. We compare our method with the published results from the SemEval task. The SemEval 2013 baseline generates for each nouncompound a list of prepositional paraphrases in an arbitrary fixed order. It achieves a moderately good score in the non-isomorphic setting by generating a fixed set of paraphrases which are both common and generic. The MELODI system performs similarly: it represents each nouncompound using a compositional distributional vector (Mitchell and Lapata, 2010) which is then used to predict paraphrases from the corpus. 
The performance of MELODI indicates that the system was rather conservative, yielding a few common paraphrases rather than many specific ones. SFS and IIITH, on the other hand, show a more balanced trade-off between recall and precision. As a sanity check, we also report the results of a baseline that retrieves ranked paraphrases from the training data collected in Section 3.2. This baseline has no generalization abilities, therefore it is expected to score poorly on the recall-aware isomorphic setting. 1206 Method isomorphic non-isomorphic Baselines SFS (Versley, 2013) 23.1 17.9 IIITH (Surtani et al., 2013) 23.1 25.8 MELODI (Van de Cruys et al., 2013) 13.0 54.8 SemEval 2013 Baseline (Hendrickx et al., 2013) 13.8 40.6 This paper Baseline 3.8 16.1 Our method 28.2 28.4 Table 2: Results of the proposed method and the baselines on the SemEval 2013 task. Category % False Positive (1) Valid paraphrase missing from gold 44% (2) Valid paraphrase, slightly too specific 15% (3) Incorrect, common prepositional paraphrase 14% (4) Incorrect, other errors 14% (5) Syntactic error in paraphrase 8% (6) Valid paraphrase, but borderline grammatical 5% False Negative (1) Long paraphrase (more than 5 words) 30% (2) Prepositional paraphrase with determiners 25% (3) Inflected constituents in gold 10% (4) Other errors 35% Table 3: Categories of false positive and false negative predictions along with their percentage. Results. Table 2 displays the performance of the proposed method and the baselines in the two evaluation settings. Our method outperforms all the methods in the isomorphic setting. In the nonisomorphic setting, it outperforms the other two systems that score reasonably on the isomorphic setting (SFS and IIITH) but cannot compete with the systems that focus on achieving high precision. The main advantage of our proposed model is in its ability to generalize, and that is also demonstrated in comparison to our baseline performance. The baseline retrieved paraphrases only for a third of the noun-compounds (61/181), expectedly yielding poor performance on the isomorphic setting. Our model, which was trained on the very same data, retrieved paraphrases for all nouncompounds. For example, welfare system was not present in the training data, yet the model predicted the correct paraphrases “system of welfare benefits”, “system to provide welfare” and others. Error Analysis. We analyze the causes of the false positive and false negative errors made by the model. For each error type we sample 10 nouncompounds. For each noun-compound, false positive errors are the top 10 predicted paraphrases which are not included in the gold paraphrases, while false negative errors are the top 10 gold paraphrases not found in the top k predictions made by the model. Table 3 displays the manually annotated categories for each error type. Many false positive errors are actually valid paraphrases that were not suggested by the human annotators (error 1, “discussion by group”). Some are borderline valid with minor grammatical changes (error 6, “force of coalition forces”) or too specific (error 2, “life of women in community” instead of “life in community”). Common prepositional paraphrases were often retrieved although they are incorrect (error 3). We conjecture that this error often stem from an n-gram that does not respect the syntactic structure of the sentence, e.g. a sentence such as “rinse away the oil from baby ’s head” produces the n-gram “oil from baby”. 
With respect to false negative examples, they consisted of many long paraphrases, while our model was restricted to 5 words due to the source of the training data (error 1, “holding done in the case of a share”). Many prepositional paraphrases consisted of determiners, which we conflated with the same paraphrases without determiners (error 2, “mutation of a gene”). Finally, in some paraphrases, the constituents in the gold paraphrase appear in inflectional forms (error 3, “holding of shares” instead of “holding of share”). 5.2 Classification Noun-compound classification is defined as a multiclass classification problem: given a pre-defined set of relations, classify w1w2 to the relation that holds between w1 and w2. Potentially, the corpus co-occurrences of w1 and w2 may contribute to the classification, e.g. ‘[w2] held at [w1]’ indicates a TIME relation. Tratz and Hovy (2010) included such features in their classifier, but ablation tests showed that these features had a relatively small contribution, probably due to the sparseness of the paraphrases. Recently, Shwartz and Waterson (2018) showed that paraphrases may contribute to the classification when represented in a continuous space. 1207 Model. We generate a paraphrase vector representation ⃗ par(w1w2) for a given noun-compound w1w2 as follows. We predict the indices of the k most likely paraphrases: ˆp1, ..., ˆpk = argmaxk ˆp, where ˆp is the distribution on the paraphrase vocabulary Vp, as defined in Equation 1. We then encode each paraphrase using the biLSTM, and average the paraphrase vectors, weighted by their confidence scores in ˆp: ⃗ par(w1w2) = Pk i=1 ˆpˆpi · ⃗Vp ˆpi Pk i=1 ˆpˆpi (3) We train a linear classifier, and represent w1w2 in a feature vector f(w1w2) in two variants: paraphrase: f(w1w2) = ⃗ par(w1w2), or integrated: concatenated to the constituent word embeddings f(w1w2) = [ ⃗ par(w1w2), ⃗w1, ⃗w2]. The classifier type (logistic regression/SVM), k, and the penalty are tuned on the validation set. We also provide a baseline in which we ablate the paraphrase component from our model, representing a nouncompound by the concatenation of its constituent embeddings f(w1w2) = [ ⃗w1, ⃗w2] (distributional). Datasets. We evaluate on the Tratz (2011) dataset, which consists of 19,158 instances, labeled in 37 fine-grained relations (Tratz-fine) or 12 coarse-grained relations (Tratz-coarse). We report the performance on two different dataset splits to train, test, and validation: a random split in a 75:20:5 ratio, and, following concerns raised by Dima (2016) about lexical memorization (Levy et al., 2015), on a lexical split in which the sets consist of distinct vocabularies. The lexical split better demonstrates the scenario in which a noun-compound whose constituents have not been observed needs to be interpreted based on similar observed noun-compounds, e.g. inferring the relation in pear tart based on apple cake and other similar compounds. We follow the random and full-lexical splits from Shwartz and Waterson (2018). Baselines. We report the results of 3 baselines representative of different approaches: 1) Feature-based (Tratz and Hovy, 2010): we reimplement a version of the classifier with features from WordNet and Roget’s Thesaurus. 2) Compositional (Dima, 2016): a neural architecture that operates on the distributional representations of the noun-compound and its constituents. 
Noun-compound representations are learned with Dataset & Split Method F1 Tratz fine Random Tratz and Hovy (2010) 0.739 Dima (2016) 0.725 Shwartz and Waterson (2018) 0.714 distributional 0.677 paraphrase 0.505 integrated 0.673 Tratz fine Lexical Tratz and Hovy (2010) 0.340 Dima (2016) 0.334 Shwartz and Waterson (2018) 0.429 distributional 0.356 paraphrase 0.333 integrated 0.370 Tratz coarse Random Tratz and Hovy (2010) 0.760 Dima (2016) 0.775 Shwartz and Waterson (2018) 0.736 distributional 0.689 paraphrase 0.557 integrated 0.700 Tratz coarse Lexical Tratz and Hovy (2010) 0.391 Dima (2016) 0.372 Shwartz and Waterson (2018) 0.478 distributional 0.370 paraphrase 0.345 integrated 0.393 Table 4: Classification results. For each dataset split, the top part consists of baseline methods and the bottom part of methods from this paper. The best performance in each part appears in bold. the Full-Additive (Zanzotto et al., 2010) and Matrix (Socher et al., 2012) models. We report the results from Shwartz and Waterson (2018). 3) Paraphrase-based (Shwartz and Waterson, 2018): a neural classification model that learns an LSTM-based representation of the joint occurrences of w1 and w2 in a corpus (i.e. observed paraphrases), and integrates distributional information using the constituent embeddings. Results. Table 4 displays the methods’ performance on the two versions of the Tratz (2011) dataset and the two dataset splits. The paraphrase model on its own is inferior to the distributional model, however, the integrated version improves upon the distributional model in 3 out of 4 settings, demonstrating the complementary nature of the distributional and paraphrase-based methods. The contribution of the paraphrase component is especially noticeable in the lexical splits. As expected, the integrated method in Shwartz and Waterson (2018), in which the paraphrase representation was trained with the objective of classification, performs better than our integrated model. The superiority of both integrated models in the lexical splits confirms that paraphrases are beneficial for classification. 1208 Example Noun-compounds Gold Distributional Example Paraphrases printing plant PURPOSE OBJECTIVE [w2] engaged in [w1] marketing expert development expert TOPICAL OBJECTIVE [w2] in [w1] [w2] knowledge of [w1] weight/job loss OBJECTIVE CAUSAL [w2] of [w1] rubber band rice cake CONTAINMENT PURPOSE [w2] made of [w1] [w2] is made of [w1] laboratory animal LOCATION/PART-WHOLE ATTRIBUTE [w2] in [w1], [w2] used in [w1] Table 5: Examples of noun-compounds that were correctly classified by the integrated model while being incorrectly classified by distributional, along with top ranked indicative paraphrases. Analysis. To analyze the contribution of the paraphrase component to the classification, we focused on the differences between the distributional and integrated models on the Tratz-Coarse lexical split. Examination of the per-relation F1 scores revealed that the relations for which performance improved the most in the integrated model were TOPICAL (+11.1 F1 points), OBJECTIVE (+5.5), ATTRIBUTE (+3.8) and LOCATION/PART WHOLE (+3.5). Table 5 provides examples of noun-compounds that were correctly classified by the integrated model while being incorrectly classified by the distributional model. For each noun-compound, we provide examples of top ranked paraphrases which are indicative of the gold label relation. 
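As a recap of the feature construction used in this section, here is a small sketch of Equation 3 and the integrated feature vector. It assumes the paraphrase distribution and the biLSTM paraphrase encodings have already been computed (e.g., with the model of Section 3.3); all names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def paraphrase_vector(p_hat, para_encodings, k=15):
    # Equation (3): confidence-weighted average of the biLSTM encodings
    # of the k most likely paraphrases for one noun-compound (k is tuned).
    top = np.argsort(-p_hat)[:k]
    weights = p_hat[top]
    return weights @ para_encodings[top] / weights.sum()

def feature_vector(p_hat, para_encodings, w1_vec, w2_vec, integrated=True):
    # "paraphrase" variant uses par alone; "integrated" concatenates the
    # constituent embeddings as well.
    par = paraphrase_vector(p_hat, para_encodings)
    return np.concatenate([par, w1_vec, w2_vec]) if integrated else par

# X = np.stack([feature_vector(...) for each labelled compound]); y = relation labels
# clf = LogisticRegression(max_iter=1000).fit(X, y)   # classifier type is tuned
```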
6 Compositionality Analysis Our paraphrasing approach at its core assumes compositionality: only a noun-compound whose meaning is derived from the meanings of its constituent words can be rephrased using them. In §3.2 we added negative samples to the training data to simulate non-compositional nouncompounds, which are included in the classification dataset (§5.2). We assumed that these compounds, more often than compositional ones would consist of unrelated constituents (spelling bee, sacred cow), and added instances of random unrelated nouns with ‘[w2] is unrelated to [w1]’. Here, we assess whether our model succeeds to recognize non-compositional noun-compounds. We used the compositionality dataset of Reddy et al. (2011) which consists of 90 nouncompounds along with human judgments about their compositionality in a scale of 0-5, 0 being non-compositional and 5 being compositional. For each noun-compound in the dataset, we predicted the 15 best paraphrases and analyzed the errors. The most common error was predicting paraphrases for idiomatic compounds which may have a plausible concrete interpretation or which originated from one. For example, it predicted that silver spoon is simply a spoon made of silver and that monkey business is a business that buys or raises monkeys. In other cases, it seems that the strong prior on one constituent leads to ignoring the other, unrelated constituent, as in predicting “wedding made of diamond”. Finally, the “unrelated” paraphrase was predicted for a few compounds, but those are not necessarily non-compositional (application form, head teacher). We conclude that the model does not address compositionality and suggest to apply it only to compositional compounds, which may be recognized using compositionality prediction methods as in Reddy et al. (2011). 7 Conclusion We presented a new semi-supervised model for noun-compound paraphrasing. The model differs from previous models by being trained to predict both a paraphrase given a noun-compound, and a missing constituent given the paraphrase and the other constituent. This results in better generalization abilities, leading to improved performance in two noun-compound interpretation tasks. In the future, we plan to take generalization one step further, and explore the possibility to use the biLSTM for generating completely new paraphrase templates unseen during training. Acknowledgments This work was supported in part by an Intel ICRI-CI grant, the Israel Science Foundation grant 1951/17, the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1), and Theo Hoffenberg. Vered is also supported by the Clore Scholars Programme (2017), and the AI2 Key Scientific Challenges Program (2017). 1209 References Regina Barzilay and R. Kathleen McKeown. 2001. Extracting paraphrases from a parallel corpus. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics. http://aclweb.org/anthology/P01-1008. Thorsten Brants and Alex Franz. 2006. Web 1t 5-gram version 1 . Cristina Butnariu, Su Nam Kim, Preslav Nakov, Diarmuid ´O S´eaghdha, Stan Szpakowicz, and Tony Veale. 2009. Semeval-2010 task 9: The interpretation of noun compounds using paraphrasing verbs and prepositions. In Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions (SEW-2009). Association for Computational Linguistics, Boulder, Colorado, pages 100–105. http://www.aclweb.org/anthology/W09-2416. Corina Dima. 2016. 
Proceedings of the 1st Workshop on Representation Learning for NLP, Association for Computational Linguistics, chapter On the Compositionality and Semantic Interpretation of English Noun Compounds, pages 27–39. https://doi.org/10.18653/v1/W16-1604. Corina Dima and Erhard Hinrichs. 2015. Automatic noun compound interpretation using deep neural networks and word embeddings. IWCS 2015 page 173. Oren Etzioni, Michele Banko, Stephen Soderland, and Daniel S Weld. 2008. Open information extraction from the web. Communications of the ACM 51(12):68–74. Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 758–764. http://aclweb.org/anthology/N13-1092. Roxana Girju. 2007. Improving the interpretation of noun phrases with cross-linguistic information. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics. Association for Computational Linguistics, Prague, Czech Republic, pages 568–575. http://www.aclweb.org/anthology/P07-1072. Alex Graves and J¨urgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks 18(5-6):602–610. Iris Hendrickx, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Stan Szpakowicz, and Tony Veale. 2013. Semeval-2013 task 4: Free paraphrases of noun compounds. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013). Association for Computational Linguistics, pages 138–143. http://aclweb.org/anthology/S13-2025. Ralf Herbrich. 2000. Large margin rank boundaries for ordinal regression. Advances in large margin classifiers pages 115–132. Nam Su Kim and Preslav Nakov. 2011. Largescale noun compound interpretation using bootstrapping and the web as a corpus. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 648–658. http://aclweb.org/anthology/D11-1060. Su Nam Kim and Timothy Baldwin. 2007. Interpreting noun compounds using bootstrapping and sense collocation. In Proceedings of Conference of the Pacific Association for Computational Linguistics. pages 129–136. Omer Levy, Steffen Remus, Chris Biemann, and Ido Dagan. 2015. Do supervised distributional methods really learn lexical inference relations? In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Denver, Colorado, pages 970– 976. http://www.aclweb.org/anthology/N15-1098. Guofu Li, Alejandra Lopez-Fernandez, and Tony Veale. 2010. Ucd-goggle: A hybrid system for noun compound paraphrasing. In Proceedings of the 5th International Workshop on Semantic Evaluation. Association for Computational Linguistics, pages 230–233. Jonathan Mallinson, Rico Sennrich, and Mirella Lapata. 2017. Paraphrasing revisited with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers. Association for Computational Linguistics, Valencia, Spain, pages 881–893. Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. 
Cognitive science 34(8):1388–1429. Preslav Nakov. 2013. On the interpretation of noun compounds: Syntax, semantics, and entailment. Natural Language Engineering 19(03):291–330. Preslav Nakov and Marti Hearst. 2006. Using verbs to characterize noun-noun relations. In International Conference on Artificial Intelligence: Methodology, Systems, and Applications. Springer, pages 233– 244. Vivi Nastase and Stan Szpakowicz. 2003. Exploring noun-modifier semantic relations. In Fifth international workshop on computational semantics (IWCS-5). pages 285–301. 1210 Yurii Nesterov. 1983. A method of solving a convex programming problem with convergence rate o (1/k2). In Soviet Mathematics Doklady. volume 27, pages 372–376. Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, et al. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980 . Paul Nulty and Fintan Costello. 2010. Ucd-pn: Selecting general paraphrases using conditional probability. In Proceedings of the 5th International Workshop on Semantic Evaluation. Association for Computational Linguistics, pages 234–237. Diarmuid ´O S´eaghdha and Ann Copestake. 2009. Using lexical and relational similarity to classify semantic relations. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009). Association for Computational Linguistics, Athens, Greece, pages 621–629. http://www.aclweb.org/anthology/E09-1071. Harinder Pal and Mausam. 2016. Demonyms and compound relational nouns in nominal open ie. In Proceedings of the 5th Workshop on Automated Knowledge Base Construction. Association for Computational Linguistics, San Diego, CA, pages 35–39. http://www.aclweb.org/anthology/W16-1307. Patrick Pantel, Rahul Bhagat, Bonaventura Coppola, Timothy Chklovski, and Eduard Hovy. 2007. ISP: Learning inferential selectional preferences. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference. Association for Computational Linguistics, Rochester, New York, pages 564– 571. http://www.aclweb.org/anthology/N/N07/N071071. Marius Pasca. 2015. Interpreting compound noun phrases using web search queries. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 335–344. https://doi.org/10.3115/v1/N15-1037. Ellie Pavlick and Marius Pasca. 2017. Identifying 1950s american jazz musicians: Fine-grained isa extraction via modifier composition. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Vancouver, Canada, pages 2099–2109. http://aclweb.org/anthology/P17-1192. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12:2825–2830. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 1532–1543. 
http://www.aclweb.org/anthology/D14-1162. Siva Reddy, Diana McCarthy, and Suresh Manandhar. 2011. An empirical study on compositionality in compound nouns. In Proceedings of 5th International Joint Conference on Natural Language Processing. Asian Federation of Natural Language Processing, Chiang Mai, Thailand, pages 210–218. http://www.aclweb.org/anthology/I11-1024. Vered Shwartz and Chris Waterson. 2018. Olive oil is made of olives, baby oil is made for babies: Interpreting noun compounds using paraphrases in a neural model. In The 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). New Orleans, Louisiana. Richard Socher, Brody Huval, D. Christopher Manning, and Y. Andrew Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, pages 1201– 1211. http://aclweb.org/anthology/D12-1110. Nitesh Surtani, Arpita Batra, Urmi Ghosh, and Soma Paul. 2013. Iiit-h: A corpus-driven co-occurrence based probabilistic model for noun compound paraphrasing. In Second Joint Conference on Lexical and Computational Semantics (* SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013). volume 2, pages 153–157. Stephen Tratz. 2011. Semantically-enriched parsing for natural language understanding. University of Southern California. Stephen Tratz and Eduard Hovy. 2010. A taxonomy, dataset, and classifier for automatic noun compound interpretation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Uppsala, Sweden, pages 678– 687. http://www.aclweb.org/anthology/P10-1070. Tim Van de Cruys, Stergos Afantenos, and Philippe Muller. 2013. Melodi: A supervised distributional approach for free paraphrasing of noun compounds. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (Se1211 mEval 2013). Association for Computational Linguistics, Atlanta, Georgia, USA, pages 144–147. http://www.aclweb.org/anthology/S13-2026. Laurens Van Der Maaten. 2014. Accelerating t-sne using tree-based algorithms. Journal of machine learning research 15(1):3221–3245. Yannick Versley. 2013. Sfs-tue: Compound paraphrasing with a language model and discriminative reranking. In Second Joint Conference on Lexical and Computational Semantics (* SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013). volume 2, pages 148–152. Sander Wubben. 2010. Uvt: Memory-based pairwise ranking of paraphrasing verbs. In Proceedings of the 5th International Workshop on Semantic Evaluation. Association for Computational Linguistics, pages 260–263. Clarissa Xavier and Vera Lima. 2014. Boosting open information extraction with noun-based relations. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Hrafn Loftsson, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, and Stelios Piperidis, editors, Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14). European Language Resources Association (ELRA), Reykjavik, Iceland. Fabio Massimo Zanzotto, Ioannis Korkontzelos, Francesca Fallucchi, and Suresh Manandhar. 2010. 
Estimating linear models for compositional distributional semantics. In Proceedings of the 23rd International Conference on Computational Linguistics. Association for Computational Linguistics, pages 1263–1271.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1212–1221 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1212 Searching for the X-Factor: Exploring Corpus Subjectivity for Word Embeddings Maksim Tkachenko and Chong Cher Chia and Hady W. Lauw School of Information Systems Singapore Management University [email protected] {ccchia.2014,hadywlauw}@smu.edu.sg Abstract We explore the notion of subjectivity, and hypothesize that word embeddings learnt from input corpora of varying levels of subjectivity behave differently on natural language processing tasks such as classifying a sentence by sentiment, subjectivity, or topic. Through systematic comparative analyses, we establish this to be the case indeed. Moreover, based on the discovery of the outsized role that sentiment words play on subjectivity-sensitive tasks such as sentiment classification, we develop a novel word embedding SentiVec which is infused with sentiment information from a lexical resource, and is shown to outperform baselines on such tasks. 1 Introduction Distributional analysis methods such as Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) have been critical for the success of many large-scale natural language processing (NLP) applications (Collobert et al., 2011; Socher et al., 2013; Goldberg, 2016). These methods employ distributional hypothesis (i.e., words used in the same contexts tend to have similar meaning) to derive distributional meaning via context prediction tasks and produce dense word embeddings. While there have been active and ongoing research on improving word embedding methods (see Section 5), there is a relative dearth of study on the impact that an input corpus may have on the quality of the word embeddings. The previous preoccupation centers around corpus size, i.e., a larger corpus is perceived to be richer in statistical information. For instance, popular corpora include Wikipedia, Common Crawl, and Google News. We postulate that there may be variations across corpora owing to factors that affect language use. Intuitively, the many things we write (a work email, a product review, an academic publication, etc.) may each involve certain stylistic, syntactic, and lexical choices, resulting in meaningfully different distributions of word cooccurrences. Consequently, such factors may be encoded in the word embeddings, and input corpora may be differentially informative towards various NLP tasks. In this work, we are interested in the notion of subjectivity. Some NLP tasks, such as sentiment classification, revolve around subjective expressions of likes or dislikes. Others, such as topic classification, revolve around more objective elements of whether a document belongs to a topic (e.g., science, politics). Our central hypothesis is that word embeddings learnt from input corpora of contrasting levels of subjectivity perform differently when classifying sentences by sentiment, subjectivity, or topic. As the first contribution, we outline an experimental scheme to explore this hypothesis in Section 2, and conduct a series of controlled experiments in Section 3 establishing that there exists a meaningful difference between word embeddings derived from objective vs. subjective corpora. We further systematically investigate factors that could potentially explain the differences. 
Upon discovering from the investigation that sentiment words play a particularly important role in subjectivity-sensitive NLP tasks, such as sentiment classification, as the second contribution, in Section 4 we develop SentiVec, a novel word embedding method infused with information from lexical resources such as a sentiment lexicon. We further identify two alternative lexical objectives: Logistic SentiVec based on discriminative logistic regression, and Spherical SentiVec based on soft clustering effect of von Mises-Fisher distributions. In Section 6, the proposed word embeddings show 1213 evident improvements on sentiment classification, as compared to the base model Word2Vec and other baselines using the same lexical resource. 2 Data and Methodology We lay out the methodology for generating word embeddings of contrasting subjectivity, whose effects are tested on several text classification tasks. 2.1 Generating Word Embeddings As it is difficult to precisely quantify the degree of subjectivity of a corpus, we resort to generating word embeddings from two corpora that contrast sharply in subjectivity, referring to them as the Objective Corpus and the Subjective Corpus. Objective Corpus As virtually all contents are written by humans, an absolutely objective corpus (in the philosophical sense) may prove elusive. There are however exemplars where, by construction, a corpus aspires to be as objective as possible, and probably achieves that in practical terms. We postulate that one such corpus is Wikipedia. Its list of policies and guidelines1, assiduously enforced by an editorial team, specify that an article must be written from a neutral point of view, which among other things means “representing fairly, proportionately, and, as far as possible, without editorial bias, all of the significant views that have been published by reliable sources on a topic.”. Moreover, it is a common resource for training distributional word embeddings and adopted widely by the research community to solve various NLP problems. Hence, in this study, we use Wikipedia as the Objective Corpus. Subjective Corpus By extension, one may then deem a corpus subjective if its content does not at least meet Wikipedia’s neutral point of view requirement. In other words, if the content is replete with personal feelings and opinions. We posit that product reviews would be one such corpus. For instance, Amazon’s Community Guideline2 states that “Amazon values diverse opinions”, and that “Content you submit should be relevant and based on your own honest opinions and experience.”. Reviews consist of expressive content written by customers, and may not strive for the neutrality of an encyclopedia. We rely on a 1https://en.wikipedia.org/wiki/ Wikipedia:List_of_policies_and_ guidelines 2https://www.amazon.com/gp/help/ customer/display.html?nodeId=201929730 large corpus of Amazon reviews from various categories (e.g., electronics, jewelry, books, and etc.) (McAuley et al., 2015) as the Subjective Corpus. Word Embeddings For the comparative analysis in Section 3, we employ Word2Vec (reviewed below) to generate word embeddings from each corpus. Later on in Section 4, we will propose a new word embedding method called SentiVec. For Word2Vec, we use the Skip-gram model to train distributional word embeddings on the Objective Corpus and the Subjective Corpus respectively. Skip-gram aims to find word embeddings that are useful for predicting nearby words. 
The objective is to maximize the context probability: log L(W; C) = X w∈W X w′∈C(w) log P(w′|w), (1) where W is an input corpus and C(w) is the context of token w. The probability of context word w′, given observed word w is defined via softmax: P(w′|w) = exp (vw′ · vw) P ˆ w∈V exp (v ˆ w · vw), (2) where vw and vw′ are corresponding embeddings and V is the corpus vocabulary. Though theoretically sound, the formulation is computationally impractical and requires tractable approximation. Mikolov et al. (2013) propose two efficient procedures to optimize (1): Hierarchical Softmax and Negative Sampling (NS). In this work we focus on the widely adopted NS. The intuition is that a “good” model should be able to differentiate observed data from noise. The differentiation task is defined using logistic regression; the goal is to tell apart real context-word pair (w′, w) from randomly generated noise pair ( ˆw, w). Formally, log L[w‘,w] = log σ (vw′ · vw) + k X i=1 log σ (−v ˆ wi · vw), (3) where σ( · ) is a sigmoid function, and { ˆwi}k i=1 are negative samples. Summing up all the contextword pairs, we derive the NS Skip-gram objective: log Lword2vec(W; C) = X w∈W X w′∈C(w) log L[w‘,w]. (4) Training word embeddings with Skip-gram, we keep the same hyperparameters across all the runs: 300 dimensions for embeddings, k = 5 negative samples, and window of 5 tokens. The Objective 1214 and Subjective corpora undergo the same preprocessing, i.e., discarding short sentences (< 5 tokens) and rare words (< 10 occurrences), removing punctuation, normalizing Unicode symbols. 2.2 Evaluation Tasks To compare word embeddings, we need a common yardstick. It is difficult to define an inherent quality to word embeddings. Instead, we put them through several evaluation tasks that can leverage word embeddings and standardize their formulations as binary classification tasks. To boil the comparisons down to the essences of word embeddings (which is our central focus), we rely on standardized techniques so as to attribute as much of the differences as possible to the word embeddings. We use logistic regression for classification, and represent a text snippet (e.g., a sentence) in the feature space as the average of the word embeddings of tokens in the snippet (ignoring out-ofvocabulary tokens). The evaluation metric is the average accuracy from 10-fold cross validation. There are three evaluation tasks of varying degrees of hypothetical subjectivity, as outlined below. Each may involve multiple datasets. Sentiment Classification Task This task classifies a sentence into either positive or negative. We use two groups of datasets as follows. The first group consists of 24 datasets from UCSD Amazon product data3 corresponding to various product categories. Each review has a rating from 1 to 5, which is transformed into positive (ratings 4 or 5) or negative (ratings 1 or 2) class. For each dataset respectively, we sample 5000 sentences each from the positive and negative reviews. Note that these sentences used for this evaluation task have not participated in the generation of word embeddings. Due to space constraint, in most cases we present the average accuracy across the datasets, but where appropriate we enumerate the results for each dataset. The second is Cornell’s sentence polarity dataset v1.04 (Pang and Lee, 2005), made up of 5331 each of positive and negative sentences from Rotten Tomatoes movie reviews. 
The inclusion of this out-of-domain evaluation dataset is useful for examining whether the performance of word embeddings from the Subjective Corpus on the first 3http://jmcauley.ucsd.edu/data/amazon/ 4http://www.cs.cornell.edu/people/ pabo/movie-review-data/rt-polaritydata. README.1.0.txt group above may inadvertently be affected by indomain advantage arising from its Amazon origin. Subjectivity Classification Task This task classifies a sentence into subjective or objective. The dataset is Cornell’s subjectivity dataset v1.05, consisting of 5000 subjective sentences derived from Rotten Tomatoes (RT) reviews and 5000 objective sentences derived from IMDB plot summaries (Pang and Lee, 2004). This task is probably less sensitive to the subjectivity within word embeddings than sentiment classification, as determining whether a sentence is subjective or objective should ideally be an objective undertaking. Topic Classification Task We use the 20 Newsgroups dataset6 (“bydate” version), whereby the newsgroups are organized into six subject matter groupings. We extract the message body and split them into sentences. Each group’s sentences then form the in-topic class, and we randomly sample an equivalent number of sentences from the remaining newsgroups to form the out-of-topic class. This results in six datasets, each corresponding to a binary classification task. In most cases, we present the average results, and where appropriate we enumerate the results for each dataset. Hypothetically, this task is the least affected by the subjectivity within word embeddings. 3 Comparative Analyses of Subjective vs. Objective Corpora We conduct a series of comparative analyses under various setups. For each, we compare the performance in the evaluation tasks when using the Objective Corpus and the Subjective Corpus. Table 1 shows the results for this series of analyses. Initial Condition Setup I seeks to answer whether there is any difference between word embeddings derived from the Objective Corpus and the Subjective Corpus. The word embeddings were trained on the whole data respectively. Table 1 shows the corpus statistics and classification accuracies. Evidently, the Subjective word embeddings outperform the Objective word embeddings on all the evaluation tasks. The margins are largest for sentiment classification (86.5% vs. 81.5% or +5% Amazon, and 78.2% vs. 75.4% or +2.8% on Rotten Tomatoes or RT). For subjectivity and topic classifications, the differences are smaller. 5http://www.cs.cornell.edu/people/ pabo/movie-review-data/subjdata.README. 1.0.txt 6http://qwone.com/˜jason/20Newsgroups/ 1215 Setup Corpus Corpus Statistics Classification (Accuracy) # types # tokens # sentences Sentiment Subjectivity Topic Amazon RT I Objective 1.34M 1.81B 89M 81.5 75.4 90.5 83.2 Subjective 1.47M 5.49B 313M 86.5 78.2 91.1 83.4 II Objective 1.34M 1.81B 89M 81.5 75.4 90.5 83.2 Subjective 0.59M 1.56B 85.5 77.9 90.7 82.8 III Objective 0.29M 1.75B 89M 81.6 75.6 90.6 83.4 Subjective 1.54B 85.4 77.9 90.6 82.8 Table 1: Controlled comparison of Objective and Subjective corpora As earlier hypothesized, the sentiment classification task is more sensitive to subjectivity within word embeddings than the other tasks. Therefore, training word embeddings on a subjective corpus may confer an advantage for such tasks. 
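For reference, the classification protocol of Section 2.2 that produces the accuracies reported below can be sketched with scikit-learn as follows; `embeddings` is assumed to be a token-to-vector mapping obtained from a trained Skip-gram model, and the names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def sentence_vector(tokens, embeddings, dim=300):
    # Average of the in-vocabulary word vectors; OOV tokens are ignored.
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def task_accuracy(tokenized_sentences, labels, embeddings):
    X = np.stack([sentence_vector(s, embeddings) for s in tokenized_sentences])
    y = np.asarray(labels)
    folds = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                            cv=10, scoring="accuracy")
    return folds.mean()   # reported figure: average 10-fold accuracy
```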
On the other hand, the corpus statistics show a substantial difference in corpus size, which could be an alternative explanation for the outperformance by the Subjective Corpus if the larger corpus contains more informative distributional statistics. Controlling for Corpus Size In Setup II, we keep the number of sentences in both corpora the same, by randomly downsampling sentences in the Subjective Corpus. This procedure consequently reduces the number of types and tokens (see Table 1, Setup II, Corpus Statistics). Note that the number of tokens in the Subjective corpus is now fewer than in the Objective, the latter suffers no change. Yet, even after a dramatic reduction in size, the Subjective embeddings still outperform the Objective significantly on both datasets of the sentiment classification task (+4% on Amazon and +2.5% on RT), while showing similar performance on subjectivity and topic classifications. This bolsters the earlier observation that sentiment classification is more sensitive to subjectivity. While there is a small effect due to corpus size difference, the gap in performance between Subjective and Objective embeddings on sentiment classification is still significant and cannot be explained away by the corpus size alone. Controlling for Vocabulary While the Subjective Corpus has a much smaller vocabulary (i.e., # types), we turn a critical eye on whether its apparent advantage lies in having access to special word types that do not exist in the Objective Corpus. In Setup III, we keep the training vocabulary the same for both, removing the types that are Objective Corpus Subjective Corpus waste, money, return, love, great, and, loves, refund, Great, This, product, recommend, this, even, Very, returned, easy, not, send, sent, customer, item, broke, defective, her money, waste, return, and, Great, love, refund, recommend, great, this, loves, even, product, This, Very, easy, item, junk, anyone, Don’t, horrible, gift, poor, Do, returned Table 2: Top words of misclassified sentences present in one corpus but not in the other, so that out-of-vocabulary words are ignored in the training phase. Table 1, Setup III, shows significant reduction in types for both corpora. Yet, the outperformance by the Subjective embeddings on the sentiment classification task still stands (+3.8% on Amazon and +2.3% on RT). Moreover, it is so for both Amazon and Rotten Tomatoes datasets, implying that it is not due to close in-domain similarity between the corpora used for training the word embeddings and the classification tasks. Significant Words To get more insights on the difference between the Subjective and Objective corpora, we analyze the mistakes word embeddings make on the development folds. At this point we focus on the sentiment classification task and specifically on the Amazon data, which indicates the largest performance differences in the controlled experiments (see Table 1, Setup III). As words are still the main unit of information in distributional word embeddings, we extract words strongly associated with misclassified sentences. We employed log-odds ratio with informative Dirichlet prior method (Monroe et al., 2008) to quantify this association. It is used to contrast the words in misclassified vs. correctly classified sentences, and accounts for the variance of words and their prior counts taken from a large corpus. 1216 Table 2 shows the top 25 words most associated with the misclassified sentences, sorted by their association scores. 
On average 50% of the mistakes overlap for both word embeddings, therefore, some of the words are included in both lists. 40 −44% of these words carry positive or negative sentiment connotations in general (see the underlined words in Table 2), while other words like return or send may carry sentiment connotation in e-commerce context. We check if a word carries sentiment connotation using sentiment lexicon compiled by Hu and Liu (2004), including 6789 words along with positive or negative labels. We also observe linguistic negations (i.e., not, Don’t). For instance, the word most associated with the Objective-specific mistakes (excluding the Subjective misclassified sentences) is not, which suggests that perhaps Subjective word embedding accommodates better understanding of linguistic negations, which may partially explain the difference. However, our methodology as outlined in Section 2.2 permits exchangeable word order and is not intended to analyze structural interaction between words. We focus on further analysis of sentiment words, leaving linguistic negations in word embeddings for future investigation. Controlling for Sentiment Words To control for the “amount” of sentiment in the Subjective and Objective corpora, we use sentiment lexicon compiled by Hu and Liu (2004). For each corpus, we create two subcorpora: With Sentiment contains only the sentences with at least one word from the sentiment lexicon, while Without Sentiment is the complement. We match the corpora on the number of sentences, downsampling the larger corpus, train word embeddings on each subcorpus, and proceed with the classification experiments. Table 3 shows the results, including that of random word embeddings for reference. Sentiment lexicon has a significant impact on the performance of sentiment and subjectivity classifications, and a smaller impact on topic classification. Without sentiment, the Subjective embeddings prove more robust, still outperforming the Objective on sentiment classification, while the Objective performs close to random word embeddings on Amazon . In summary, evidences from the series of controlled experiments support the existence of some X-factor to the Subjective embeddings, which confers superior performance in subjectivity-sensitive tasks such as sentiment classification. Corpus Subcorpus Sentiment SubjectTopic Sentiment? Amazon RT ivity Objective With 81.8 75.2 90.7 83.1 Without 76.1 67.2 87.8 82.6 Subjective With 85.5 78.0 90.3 82.5 Without 79.8 71.0 89.1 82.2 Random Embeddings 76.1 62.2 80.1 71.5 Table 3: With and without sentiment 4 Sentiment-Infused Word Embeddings To leverage the consequential sentiment information, we propose a family of methods, called SentiVec, for training distributional word embeddings that are infused with information on the sentiment polarity of words. The methods are built upon Word2Vec optimization algorithm and make use of available lexical sentiment resources such as SentiWordNet (Baccianella et al., 2010), sentiment lexicon by Hu and Liu (2004), and etc. SentiVec seeks to satisfy two objectives, namely context prediction and lexical category prediction: log L = log Lword2vec(W; C) + λ log Llex(W, L), (5) where Lword2vec(W; C) is the Skip-gram objective as in (4); Llex(W, L) is a lexical objective for corpus W and lexical resource L; and λ is a tradeoff parameter. Lexical resource L = {Xi}n i=1 comprises of n word sets, each Xi contains words of the same category. For sentiment classification, we consider positive and negative word categories. 
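Schematically, one training step under objective (5) combines the negative-sampling context term with a λ-weighted lexical term, as in the following PyTorch-style sketch. It assumes the embedding rows passed in are trainable parameters registered with the optimizer; the concrete lexical term is whichever variant is defined in the next two subsections.

```python
import torch
import torch.nn.functional as F

def skipgram_ns_term(v_w, v_ctx, v_negs):
    # Negative-sampling context log-likelihood for one (word, context) pair.
    return F.logsigmoid(v_ctx @ v_w) + F.logsigmoid(-(v_negs @ v_w)).sum()

def sentivec_step(v_w, v_ctx, v_negs, lexical_log_prob, lam, optimizer):
    # Objective (5): context term plus a lambda-weighted lexical term;
    # lexical_log_prob(v_w) is the Logistic or Spherical variant (Sections 4.1-4.2)
    # and is taken as zero for words outside the lexical resource L.
    loss = -(skipgram_ns_term(v_w, v_ctx, v_negs) + lam * lexical_log_prob(v_w))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```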
4.1 Logistic SentiVec Logistic SentiVec admits lexical resource in the form of two disjoint word sets, L = {X1, X2}, X1 ∩X2 = ∅. The objective is to tell apart which word set of L word w belongs to: log Llex(W, L) (6) = X w∈X1 log P(w ∈X1) + X w∈X2 log P(w ∈X2). We further tie these probabilities together, and cast the objective as a logistic regression problem: P(w ∈X1) = 1 −P(w ∈X2) = σ(vw · τ), (7) where vw is a word embedding and τ is a direction vector. Since word embeddings are generally invariant to scaling and rotation when used as downstream feature representations, τ can be chosen randomly and fixed during training. We 1217 experiment with randomly sampled unit length directions. For simplicity, we also scale embedding vw to its unit length when computing vw ·τ, which now equals to cosine similarity between vw and τ. When vw is completely aligned with τ, the cosine similarity between them is 1, which maximizes P(w ∈X1) and favors words in X1. When vw is opposite to τ, the cosine similarity equals to −1, which maximizes P(w ∈X2) and predicts vectors from X2. Orthogonal vectors have cosine similarity of 0, which makes both w ∈X1 and w ∈X2 equally probable. Optimizing (6) makes the corresponding word embeddings of X1 and X2 gravitate to the opposite semispaces and simulates clustering effect for the words of the same category, while the Word2Vec objective prevents words from collapsing to the same directions. Optimization The objective in (6) permits simple stochastic gradient ascent optimization and can be combined with negative sampling procedure for Skip-gram in (5). The gradient for unnormalized embedding vw is solved as follows: log L[w∈X1](D, L) ′ vwi = (log P (x ∈X1))′ vwi = 1 ∥vw∥2 σ  −vw · τ ∥vw∥   τi ∥vw∥−vwi vw · τ ∥vw∥  (8) The optimization equation for vw, when w ∈X2, can be derived analogously. 4.2 Spherical SentiVec Spherical SentiVec extends Logistic SentiVec by dealing with any number of lexical categories, L = {Xi}n i=1. As such, the lexical objective takes on generic form: log Llex(W, L) = n X i=1 X w∈Xi log P (w ∈Xi), (9) Each P (w ∈Xi) defines embedding generating process. We assume each length-normalized vw for w of L is generated w.r.t. a mixture model of von Mises-Fisher (vMF) distributions. vMF is a probability distribution on a multidimensional sphere, characterized by parameters µ (mean direction) and κ (concentration parameter). Sampled points are concentrated around µ; the greater the κ, the closer the sampled points are to µ. We consider only unimodal vMF distributions, restricting concentration parameters to be strictly positive. Hereby, each Xi ∈L is assigned to vMF distribution parameters (µi, κi) and the membership probabilities are defined as follows: P(w ∈Xi) = P(vw; µi, κi) = 1 Zκi eκiµi·vw, (10) where Zκ is the normalization factor. The Spherical SentiVec lexical objective forces words of every Xi ∈L to gravitate towards and concentrate around their direction mean µi. As in Logistic SentiVec, it simulates clustering effect for the words of the same set. In comparison to the direction vector of Logistic SentiVec, mean directions of Spherical SentiVec when fixed can substantially influence word embeddings training and must be carefully selected. We optimize the mean directions along with the word embeddings using alternating procedure resembling K-means clustering algorithm. For simplicity, we keep concentration parameters tied, κ1 = κ2 = ... = κn = κ, and treat κ as a hyperparameter of this algorithm. 
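Before the optimization details, a minimal numpy sketch of the two quantities involved in this alternating scheme, under the stated assumptions (length-normalized member embeddings and a tied, fixed κ, so the normalizer Z_κ can be dropped); the names are illustrative.

```python
import numpy as np

def vmf_log_likelihood(v_w, mu, kappa):
    # log P(v_w; mu, kappa) up to the constant -log Z_kappa, Eq. (10); with kappa
    # tied and fixed, the normaliser does not affect the embedding updates.
    u = v_w / np.linalg.norm(v_w)
    return kappa * float(mu @ u)

def update_direction_means(embeddings, lexicon_sets):
    # Re-estimate each mean direction as the length-normalised sum of its
    # (unit-normalised) member embeddings, Eq. (12).
    means = []
    for words in lexicon_sets:
        vecs = np.stack([embeddings[w] / np.linalg.norm(embeddings[w]) for w in words])
        s = vecs.sum(axis=0)
        means.append(s / np.linalg.norm(s))
    return means

# One pass of embedding updates with the means fixed, followed by a mean
# re-estimation, gives the K-means-like alternation described above.
```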
Optimization We derive the optimization procedure for updating word embeddings, assuming fixed direction means. Like Logistic SentiVec, Spherical SentiVec can be combined with the negative sampling procedure of Skip-gram. The gradient for the unnormalized word embedding vw is given by the following equation:

∂ log L[w∈Xi](W, L) / ∂vw,j = κi ( µi,j ∥vw∥ − vw,j (vw · µi)/∥vw∥ ) / ∥vw∥²   (11)

Once word embedding vw (w ∈ Xi) is updated, we revise the direction mean µi w.r.t. the maximum likelihood estimator:

µi = Σw∈Xi vw / ∥Σw∈Xi vw∥.   (12)

Updating the direction means in such a way ensures that the lexical objective is non-decreasing. Assuming the stochastic optimization procedure for Lword2vec complies with the same non-decreasing property, the proposed alternating procedure converges.

5 Related Work

There has been considerable research on improving the quality of distributional word embeddings. Bolukbasi et al. (2016) seek to debias word embeddings from gender stereotypes. Rothe and Schütze (2017) incorporate WordNet lexeme and synset information. Mrkšić et al. (2016) encode antonym-synonym relations. Liu et al. (2015) encode ordinal relations such as hypernym and hyponym. Kiela et al. (2015) augment Skip-gram to enforce lexical similarity or relatedness constraints; Bollegala et al. (2016) modify the GloVe optimization procedure for the same purpose. Faruqui et al. (2015) employ semantic relations of PPDB, WordNet, and FrameNet to retrofit word embeddings for various prediction tasks. We use this Retrofitting method7 as a baseline. Socher et al. (2011) derive multi-word embeddings for sentiment distribution prediction, while we focus on lexical distributional analysis. Maas et al. (2011) and Tang et al. (2016) use document-level sentiment annotations to fit word embeddings, but document annotation might not always be available for distributional analysis on neutral corpora such as Wikipedia. SentiVec relies on a simple sentiment lexicon instead. Refining (Yu et al., 2018) aligns the sentiment scores taken from a lexical resource and the cosine similarity scores of the corresponding word embeddings. The method generally requires fine-grained sentiment scores for the words, which may not be available in some settings. We use Refining as a baseline and adopt a coarse-grained sentiment lexicon for this method. Villegas et al. (2016) compare various distributional word embeddings arising from the same corpus for sentiment classification, whereas we focus on the differentiation in input corpora and propose novel sentiment-infused word embeddings.

6 Experiments

The objective of the experiments is to study the efficacy of Logistic SentiVec and Spherical SentiVec word embeddings on the aforementioned text classification tasks. One natural baseline is Word2Vec, as SentiVec subsumes its context prediction objective, while further incorporating lexical category prediction. We include two other baselines that can leverage the same lexical resource but in manners different from SentiVec, namely Retrofitting (Faruqui et al., 2015) and Refining (Yu et al., 2018). For these methods, we generate their word embeddings based on Setup III (see Section 3). All the methods were run multiple times with various hyperparameters, optimized via grid search; for each we present the best performing setting.

7 Original code is available at: https://github.com/mfaruqui/retrofitting

First, we discuss the sentiment classification task. Table 4 shows the unfolded results for the 24 classification datasets of Amazon, as well as for Rotten Tomatoes.
For each classification dataset (row), and for the Objective and Subjective embedding corpora respectively, the best word embedding methods are shown in bold. An asterisk indicates statistically significant8 results at 5% in comparison to Word2Vec. Both SentiVec variants outperform Word2Vec in the vast majority of the cases. The degree of outperformance is higher for the Objective than the Subjective word embeddings. This is a reasonable trend given our previous findings in Section 3. As the Objective Corpus encodes less information than the Subjective Corpus for sentiment classification, the former is more likely to benefit from the infusion of sentiment information from additional lexical resources. Note that the sentiment infusion into the word embeddings comes from separate lexical resources, and does not involve any sentiment classification label. SentiVec also outperforms the two baselines that benefit from the same lexical resources. Retrofitting does not improve upon Word2Vec, with the two embeddings essentially indistinguishable (the difference is only noticeable at the second decimal point). Refining makes the word embeddings perform worse on the sentiment classification task. One possible explanation is that Refining normally requires fine-grained labeled lexicon, where the words are scored w.r.t. the sentiment scale, whereas we use sentiment lexicon of two labels (i.e., positive or negative). SentiVec accepts coarse-grained sentiment lexicons, and potentially could be extended to deal with fine-grained labels. As previously alluded to, topic and subjectivity classifications are less sensitive to the subjectivity within word embeddings than sentiment classification. One therefore would not expect much, if any, performance gain from infusion of sentiment information. However, such infusion should not subtract or harm the quality of word embeddings either. Table 5 shows that the unfolded results for topic classification on the six datasets, and the result for subjectivity classification are similar across methods. Neither the SentiVec variants, nor Retrofitting and Refining, change the subjectivity and topic classification capabilities much, which means that the used sentiment lexicon is targeted only at the sentiment subspace of embeddings. 8We use paired t-test to compute p-value. 
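A small sketch of how such a significance check could be run with SciPy is shown below. The assumption that the paired units are matched evaluation runs (e.g., per-split accuracies) of SentiVec versus Word2Vec is mine; the paper does not spell out the pairing.

```python
from scipy.stats import ttest_rel

def significantly_better(sentivec_acc, word2vec_acc, alpha=0.05):
    """Paired t-test over matched runs of SentiVec vs. Word2Vec on one dataset.
    Returns True when SentiVec is better on average and the difference is
    significant at level alpha."""
    t_stat, p_value = ttest_rel(sentivec_acc, word2vec_acc)
    return t_stat > 0 and p_value < alpha
```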
1219 Corpus/Category Objective Embeddings Subjective Embeddings Word2Vec Retrofitting Refining SentiVec Word2Vec Retrofitting Refining SentiVec Spherical Logistic Spherical Logistic Amazon Instant Video 84.1 84.1 81.9 84.9∗ 84.9∗ 87.8 87.8 86.9 88.1 88.2 Android Apps 83.0 83.0 80.9 84.0∗ 84.0∗ 86.3 86.3 85.0 86.6 86.5 Automotive 80.7 80.7 78.8 81.0 81.3 85.1 85.1 83.8 84.9 85.0 Baby 80.9 80.9 78.6 82.1 82.2∗ 84.2 84.2 82.8 84.4 84.6 Beauty 81.8 81.8 79.8 82.4 82.7∗ 85.2 85.2 83.5 85.2 85.4 Books 80.9 80.9 78.9 81.0 81.3 85.3 85.3 83.6 85.3 85.5 CD & Vinyl 79.4 79.4 77.6 79.4 79.9 83.5 83.5 81.9 83.7 83.6 Cell Phones 82.2 82.2 80.0 82.9 83.0∗ 86.8 86.8 85.3 86.8 87.0 Clothing 82.6 82.6 80.7 83.8 84.0∗ 86.3 86.3 84.7 86.4 86.8 Digital Music 82.3 82.3 80.5 82.8 83.0∗ 86.3 86.3 84.6 86.1 86.3 Electronics 81.0 81.0 78.8 80.9 81.3 85.2 85.2 83.6 85.3 85.3 Grocery & Food 81.7 81.7 79.4 83.1∗ 83.1∗ 85.0 85.0 83.7 85.1 85.6∗ Health 79.7 79.7 77.9 80.4∗ 80.4 84.0 84.0 82.3 84.0 84.3 Home & Kitchen 81.6 81.6 79.5 82.1 82.1 85.4 85.4 83.9 85.3 85.4 Kindle Store 84.7 84.7 83.2 85.2 85.4∗ 88.3 88.3 87.2 88.3 88.6 Movies & TV 81.4 81.4 78.5 81.9 81.9 85.2 85.2 83.5 85.4 85.5 Musical Instruments 81.7 81.6 79.7 82.4 82.4 85.8 85.8 84.1 85.9 85.7 Office 82.0 82.0 80.0 83.0∗ 82.9 86.1 86.1 84.5 86.4 86.5∗ Garden 80.4 80.4 77.9 81.0 81.5 84.1 84.1 82.5 84.3 84.6∗ Pet Supplies 79.7 79.7 77.5 80.4 80.2 83.2 83.2 81.5 83.4 83.8 Sports & Outdoors 80.8 80.8 79.1 81.3∗ 81.2 84.6 84.6 83.1 84.3 84.7 Tools 81.0 81.0 79.3 81.0 81.3 84.7 84.7 83.2 84.8 84.9 Toys & Games 83.8 83.8 82.0 84.7 84.9∗ 87.2 87.2 85.7 87.1 87.5 Video Games 80.3 80.3 77.4 81.5 81.7∗ 84.9 84.9 83.2 85.0 84.9 Average 81.6 81.6 79.5 82.2 82.4 85.4 85.4 83.9 85.5 85.7 Rotten Tomatoes 75.6 75.6 73.4 75.8∗ 75.4 77.9 77.9 76.7 77.7 77.9 Table 4: Comparison of Sentiment-Infused Word Embeddings on Sentiment Classification Task Corpus/Category Objective Embeddings Subjective Embeddings Word2Vec Retrofitting Refining SentiVec Word2Vec Retrofitting Refining SentiVec Spherical Logistic Spherical Logistic Topic Computers 79.8 79.8 79.6 79.6 79.8 79.8 79.8 79.8 79.7 79.7 Misc 89.8 89.8 89.7 89.8 90.0 90.4 90.4 90.6 90.4 90.3 Politics 84.6 84.6 84.4 84.5 84.6 83.8 83.8 83.5 83.6 83.5 Recreation 83.4 83.4 83.1 83.1 83.2 82.6 82.6 82.5 82.7 82.8 Religion 84.6 84.6 84.5 84.5 84.6 84.2 84.2 84.2 84.1 84.2 Science 78.2 78.2 78.2 78.1 78.3 76.4 76.4 76.1 76.7 76.6 Average 83.4 83.4 83.2 83.3 83.4 82.8 82.8 82.8 82.9 82.8 Subjectivity 90.6 90.6 90.0 90.6 90.6 90.6 90.6 90.3 90.7 90.8 Table 5: Comparison of Word Embeddings on Subjectivity and Topic Classification Tasks Illustrative Changes in Embeddings To give more insights on the difference between SentiVec and Word2Vec, we show “flower” diagrams in Figure 1 for Logistic SentiVec and Figure 2 for Spherical SentiVec. Each is associated with a reference word (e.g., good for Figure 1a), and indicates relative changes in cosine distances between the reference word and the testing words surrounding the “flower”. Every testing word is associated with a “petal” or black axis extending from the center of the circle. The “petal” length is proportional to the relative distance change in two word embeddings: κ = dSentiV ec(wref,wtesting) dword2vec(wref,wtesting), where dSentiV ec and dword2vec are cosine distances between reference wref and testing wtesting words in SentiVec and Word2Vec embeddings correspondingly. 
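A small sketch of this per-pair quantity, assuming the two embedding spaces are exposed as dictionaries from word to vector (the names are illustrative, not from the paper's code):

```python
import numpy as np

def cosine_distance(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def petal_length(ref_word, test_word, sentivec_emb, word2vec_emb):
    """kappa = d_SentiVec(ref, test) / d_Word2Vec(ref, test), the relative
    change in cosine distance between a reference and a testing word."""
    d_sv = cosine_distance(sentivec_emb[ref_word], sentivec_emb[test_word])
    d_w2v = cosine_distance(word2vec_emb[ref_word], word2vec_emb[test_word])
    return d_sv / d_w2v
```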
If the distance remains unchanged (κ = 1), then the “petal” points at the circumference; if the reference and testing words are closer in the SentiVec embedding than they are in Word2Vec (κ < 1), the “petal” lies inside the circle; when the distance increases (κ > 1), the “petal” goes beyond the circle. The diagrams are presented for Objective Embeddings9. We use three reference words: good (positive), bad (negative), time (neutral); as well as three groups of testing words: green for words randomly sampled from positive lexicon (Sector I-II), red for words randomly sampled from negative lexicon (Sector II-III), and gray for frequent neutral common nouns (Sector III-I). Figure 1 shows changes produced by Logistic SentiVec. For the positive reference word (Figure 1a), the average distance to the green words is shortened, whereas the distance to the red words increases. The reverse is observed for the negative reference word (Figure 1b). This observation 9The diagrams for Subjective Embeddings show the same trend, with the moderate changes. 1220 I II III (a) Reference word: good (positive) I II III (b) Reference word: bad (negative) I II III (c) Reference word: time (neutral) Figure 1: Relative changes in cosine distances in Logistic SentiVec contrasted with Word2Vec I II III (a) Reference word: good (positive) I II III (b) Reference word: bad (negative) I II III (c) Reference word: time (neutral) Figure 2: Relative changes in cosine distances in Spherical SentiVec contrasted with Word2Vec complies with the lexical objective (7) of Logistic SentiVec, which aims to separate the words of two different classes. Note that the gray words suffer only moderate change with respect to positive and negative reference words. For the neutral reference word (Figure 1c), the distances are only moderately affected across all testing groups. Figure 2 shows that Spherical SentiVec tends to make embeddings more compact than Logistic SentiVec. As the former’s lexical objective (9) is designed for clustering, but not for separation, we look at the comparative strength of the clustering effect on the testing words. For the positive reference word (Figure 2a), the largest clustering effect is achieved for the green words. For the negative reference word (Figure 2b), as expected, the red words are affected the most. The gray words suffer the least change for all the reference words. In summary, SentiVec effectively provides an advantage for subjectivity-sensitive task such as sentiment classification, while not harming the performance of other text classification tasks. 7 Conclusion We explore the differences between objective and subjective corpora for generating word embeddings, and find that there is indeed a difference in the embeddings’ classification task performances. Identifying the presence of sentiment words as one key factor for the difference, we propose a novel method SentiVec to train word embeddings that are infused with the sentiment polarity of words derived from a separate sentiment lexicon. We further identify two lexical objectives: Logistic SentiVec and Spherical SentiVec. The proposed word embeddings show improvements in sentiment classification, while maintaining their performance on subjectivity and topic classifications. Acknowledgments This research is supported by the National Research Foundation, Prime Minister’s Office, Singapore under its NRF Fellowship Programme (Award No. NRF-NRFF2016-07). 1221 References Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2010. 
Sentiwordnet 3.0: an enhanced lexical resource for sentiment analysis and opinion mining. In LREC. volume 10. Danushka Bollegala, Mohammed Alsuhaibani, Takanori Maehara, and Ken-ichi Kawarabayashi. 2016. Joint word representation learning using a corpus and a semantic lexicon. In Proceedings of AAAI. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Proceedings of NIPS. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. JMLR 12(Aug). Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of NAACL-HLT. Yoav Goldberg. 2016. A primer on neural network models for natural language processing. JAIR 57. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining. ACM. Douwe Kiela, Felix Hill, and Stephen Clark. 2015. Specializing word embeddings for similarity or relatedness. In Proceedings of EMNLP. Quan Liu, Hui Jiang, Si Wei, Zhen-Hua Ling, and Yu Hu. 2015. Learning semantic word embeddings based on ordinal knowledge constraints. In Proceedings of ACL-IJCNLP. volume 1. Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of ACL-HLT. Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton Van Den Hengel. 2015. Image-based recommendations on styles and substitutes. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26. Burt L Monroe, Michael P Colaresi, and Kevin M Quinn. 2008. Fightin’words: Lexical feature selection and evaluation for identifying the content of political conflict. Political Analysis 16(4). Nikola Mrkˇsic, Diarmuid OS´eaghdha, Blaise Thomson, Milica Gaˇsic, Lina Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. In Proceedings of NAACL-HLT. Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of ACL. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of ACL. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP. Sascha Rothe and Hinrich Sch¨utze. 2017. Autoextend: Combining word embeddings with semantic resources. Computational Linguistics 43(3). Richard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of EMNLP. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. 
Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP. Duyu Tang, Furu Wei, Bing Qin, Nan Yang, Ting Liu, and Ming Zhou. 2016. Sentiment embeddings with applications to sentiment analysis. IEEE TKDE 28(2). Mar´ıa Paula Villegas, Mar´ıa Jos´e Garciarena Ucelay, Juan Pablo Fern´andez, Miguel A ´Alvarez Carmona, Marcelo Luis Errecalde, and Leticia Cagnina. 2016. Vector-based word representations for sentiment analysis: a comparative study. In XXII Congreso Argentino de Ciencias de la Computaci´on (CACIC 2016).. L. C. Yu, J. Wang, K. R. Lai, and X. Zhang. 2018. Refining word embeddings using intensity scores for sentiment analysis. IEEE/ACM Transactions on Audio, Speech, and Language Processing 26(3).
2018
112
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1222–1231 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1222 Word Embedding and WordNet Based Metaphor Identification and Interpretation Rui Mao, Chenghua Lin and Frank Guerin Department of Computing Science University of Aberdeen Aberdeen, United Kingdom {r03rm16, chenghua.lin, f.guerin}@abdn.ac.uk Abstract Metaphoric expressions are widespread in natural language, posing a significant challenge for various natural language processing tasks such as Machine Translation. Current word embedding based metaphor identification models cannot identify the exact metaphorical words within a sentence. In this paper, we propose an unsupervised learning method that identifies and interprets metaphors at word-level without any preprocessing, outperforming strong baselines in the metaphor identification task. Our model extends to interpret the identified metaphors, paraphrasing them into their literal counterparts, so that they can be better translated by machines. We evaluated this with two popular translation systems for English to Chinese, showing that our model improved the systems significantly. 1 Introduction Metaphor enriches language, playing a significant role in communication, cognition, and decision making. Relevant statistics illustrate that about one third of sentences in typical corpora contain metaphor expressions (Cameron, 2003; Martin, 2006; Steen et al., 2010; Shutova, 2016). Linguistically, metaphor is defined as a language expression that uses one or several words to represent another concept, rather than taking their literal meanings of the given words in the context (Lagerwerf and Meijers, 2008). Computational metaphor processing refers to modelling non-literal expressions (e.g., metaphor, metonymy, and personification) and is useful for improving many NLP tasks such as Machine Translation (MT) and Sentiment Analysis (Rentoumi et al., 2012). For instance, Google Translate failed in translating devour within a sentence, “She devoured his novels.” (Mohammad et al., 2016), into Chinese. The term was translated into 吞噬, which takes the literal sense of swallow and is not understandable in Chinese. Interpreting metaphors allows us to paraphrase them into literal expressions which maintain the intended meaning and are easier to translate. Metaphor identification approaches based on word embeddings have become popular (Tsvetkov et al., 2014; Shutova et al., 2016; Rei et al., 2017) as they do not rely on hand-crafted knowledge for training. These models follow a similar paradigm in which input sentences are first parsed into phrases and then the metaphoricity of the phrases is identified; they do not tackle word-level metaphor. E.g., given the former sentence “She devoured his novels.”, the aforementioned methods will first parse the sentence into a verb-direct object phrase devour novel, and then detect the clash between devour and novel, flagging this phrase as a likely metaphor. However, which component word is metaphorical cannot be identified, as important contextual words in the sentence were excluded while processing these phrases. Discarding contextual information also leads to a failure to identify a metaphor when both words in the phrase are metaphorical, but taken out of context they appear literal. E.g., “This young man knows how to climb the social ladder.” (Mohammad et al., 2016) is a metaphorical expression. 
However, when the sentence is parsed into a verbdirect object phrase, climb ladder, it appears literal. In this paper, we propose an unsupervised metaphor processing model which can identify and interpret linguistic metaphors at the wordlevel. Specifically, our model is built upon word embedding methods (Mikolov et al., 2013) and uses WordNet (Fellbaum, 1998) for lexical re1223 lation acquisition. Our model is distinguished from existing methods in two aspects. First, our model is generic which does not constrain the source domain of metaphor. Second, the developed model does not rely on any labelled data for model training, but rather captures metaphor in an unsupervised, data-driven manner. Linguistic metaphors are identified by modelling the distance (in vector space) between the target word’s literal and metaphorical senses. The metaphorical sense within a sentence is identified by its surrounding context within the sentence, using word embedding representations and WordNet. This novel approach allows our model to operate at the sentence level without any preprocessing, e.g., dependency parsing. Taking contexts into account also addresses the issue that a two-word phrase appears literal, but it is metaphoric within a sentence (e.g., the climb ladder example). We evaluate our model against three strong baselines (Melamud et al., 2016; Shutova et al., 2016; Rei et al., 2017) on the task of metaphor identification. Extensive experimentation conducted on a publicly available dataset (Mohammad et al., 2016) shows that our model significantly outperforms the unsupervised learning baselines (Melamud et al., 2016; Shutova et al., 2016) on both phrase and sentence evaluation, and achieves equivalent performance to the state-ofthe-art deep learning baseline (Rei et al., 2017) on phrase-level evaluation. In addition, while most of the existing works on metaphor processing solely evaluate the model performance in terms of metaphor classification accuracy, we further conducted another set of experiments to evaluate how metaphor processing can be used for supporting the task of MT. Human evaluation shows that our model improves the metaphoric translation significantly, by testing on two prominent translation systems, namely, Google Translate1 and Bing Translator2. To our best knowledge, this is the first metaphor processing model that is evaluated on MT. To summarise, the contributions of this paper are two-fold: (1) we proposed a novel framework for metaphor identification which does not require any preprocessing or annotated corpora for training; (2) we conducted, to our knowledge, the first metaphor interpretation study of apply1https://translate.google.co.uk 2https://www.bing.com/translator ing metaphor processing for supporting MT. We describe related work in §2, followed by our labelling method in §4, experimental design in §5, results in §6 and conclusions in §7. 2 Related Work A wide range of methods have been applied for computational metaphor processing. Turney et al. (2011); Neuman et al. (2013); Assaf et al. (2013) and Tsvetkov et al. (2014) identified metaphors by modelling the abstractness and concreteness of metaphors and non-metaphors, using a machine usable dictionary called MRC Psycholinguistic Database (Coltheart, 1981). They believed that metaphorical words would be more abstract than literal ones. Some researchers used topic models to identify metaphors. For instance, Heintz et al. 
(2013) used Latent Dirichlet Allocation (LDA) (Blei et al., 2003) to model source and target domains, and assumed that sentences containing words from both domains are metaphorical. Strzalkowski et al. (2013) assumed that metaphorical terms occur out of the topic chain, where a topic chain is constructed by topical words that reveal the core discussion of the text. Shutova et al. (2017) performed metaphorical concept mappings between the source and target domains in multi-languages using both unsupervised and semi-supervised learning approaches. The source and target domains are represented by semantic clusters, which are derived through the distribution of the co-occurrences of words. They also assumed that when contextual vocabularies are from different domains then there is likely to be a metaphor. There is another line of approaches based on word embeddings. Generally, these works are not limited by conceptual domains and hand-crafted knowledge. Shutova et al. (2016) proposed a model that identified metaphors by employing word and image embeddings. The model first parses sentences into phrases which contain target words. In their word embedding based approach, the metaphoricity of a phrase was identified by measuring the cosine similarity of two component words in the phrase, based on their input vectors from Skip-gram word embeddings. If the cosine similarity is higher than a threshold, the phrase is identified as literal; otherwise metaphorical. Rei et al. (2017) identified metaphors by introducing a deep learning architecture. Instead of using word input vectors directly, they filtered out noisy in1224 T .. C1 … Cn … Cm .. Input Hidden Output CBOW W i W o C1 … Cn … Cm .. T .. Input Hidden Output Skip-gram W i W o Figure 1: CBOW and Skip-gram framework. formation in the vector of one word in a phrase, projecting the word vector into another space via a sigmoid activation function. The metaphoricity of the phrases was learnt via training a supervised deep neural network. The above word embedding based models, while demonstrating some success in metaphor identification, only explored using input vectors, which might hinder their performance. In addition, metaphor identification is highly dependent on its context. Therefore, phrase-level models (e.g., Tsvetkov et al. (2014); Shutova et al. (2016); Rei et al. (2017)) are likely to fail in the metaphor identification task if important contexts are excluded. In contrast, our model can operate at the sentence level which takes into account rich context and hence can improve the performance of metaphor identification. 3 Preliminary: CBOW and Skip-gram Our metaphor identification framework is built upon word embedding, which is based on Continuous Bag of Words (CBOW) and Skip-gram (Mikolov et al., 2013). In CBOW (see Figure 1), the input and output layers are context (C) and centre word (T) one-hot encodings, respectively. The model is trained by maximizing the probability of predicting a centre word, given its context (Rong, 2014): arg max p(t|c1, ..., cn, ..., cm) (1) where t is a centre word, cn is the nth context word of t within a sentence, totally m context words. CBOW’s hidden layer is defined as: HCBOW = 1 m × W i⊤× m X n=1 Cn = 1 m × m X n=1 vi⊤ c,n (2) where Cn is the one-hot encoding of the nth context word, vi c,n is the nth context word row vector (input vector) in W i which is a weight matrix between input and hidden layers. Thus, the hidden layer is the transpose of the average of input vectors of context words. 
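To make the roles of the two weight matrices concrete, here is a minimal numpy sketch of the CBOW forward pass just described, with W_in holding the input (context-word) vectors and W_out one output (centre-word) vector per vocabulary row; the softmax scoring at the end anticipates Eqs. (3)-(4) below. It is an illustration of the standard model, not code from the paper.

```python
import numpy as np

def cbow_forward(W_in, W_out, context_ids, centre_id):
    """CBOW: hidden layer = average of the context words' input vectors (Eq. 2);
    the centre word is then scored against every output vector and the scores
    are normalized with a softmax."""
    h = W_in[context_ids].mean(axis=0)       # average of context input vectors
    scores = W_out @ h                        # u_j = v_out_j . h for every word j
    probs = np.exp(scores - scores.max())     # numerically stable softmax
    probs /= probs.sum()
    return h, probs[centre_id]                # p(t | c_1, ..., c_m)
```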
The probability of predicting a centre word in its context is given by a softmax function below: ut = W o t ⊤× HCBOW = vo t ⊤× HCBOW (3) p(t|c1, ..., cn, ..., cm) = exp(ut) PV j=1 exp(uj) (4) where W o t is equivalent to the output vector vo t which is essentially a column vector in a weight matrix W o that is between hidden and output layers, aligning with the centre word t. V is the size of vocabulary in the corpus. The output is a one-hot encoding of the centre word. W i and W o are updated via back propagation of errors. Therefore, only the value of the position that represents the centre word’s probability, i.e., p(t|c1, ..., cn, ..., cm), will get close to the value of 1. In contrast, the probability of the rest of the words in the vocabulary will be close to 0 in every centre word training. W i embeds context words. Vectors within W i can be viewed as context word embeddings. W o embeds centre words, vectors in W o can be viewed as centre word embeddings. Skip-gram is the reverse of CBOW (see Figure 1). The input and output layers are centre word and context word one-hot encodings, respectively. The target is to maximize the probability of predicting each context word, given a centre word: arg max p(c1, ..., cn, ..., cm|t) (5) Skip-gram’s hidden layer is defined as: HSG = W i⊤× T = vi⊤ t (6) where T is the one-hot encoding of the centre word t. Skip-gram’s hidden layer is equal to the transpose of a centre word’s input vector vt, as only the tth row are kept by the operation. The probability of a context word is: uc,n = W o⊤ c,n × HSG = vo⊤ c,n × HSG (7) p(cn|t) = exp(uc,n) PV j=1 exp(uj) (8) 1225 (4) S = cos(w*, wt) literal, if S > threshold metaphoric, otherwise (1) Word Embedding Wiki train w1 w2 w3 … wn .. w1 w2 w3 … wn .. .. (2) Look up WordNet A sentence: {wc1, wc2, wt wc3 …} Context words: {wc1, wc2, wc3 …} Target word: {wt} Synonyms: {s1, s2 …} Hypernyms: {h1, h2 …} Candidate word set W (3) wt s1 s2 … hj .. wc1 wc2 wc3 … wcm .. .. cos( wt , context) cos( s1 , context) cos( s2 , context) … cos( hj , context) agrmax w* ∈W Best fit word Figure 2: Metaphor identification framework. NB: w∗= best fit word, wt = target word. where c, n is the nth context word, given a centre word. In Skip-gram, W i aligns to centre words, while W o aligns to context words. Because the names of centre word and context word embeddings are reversed in CBOW and Skip-gram, we will uniformly call vectors in W i input vectors vi, and vectors in W o output vectors vo in the remaining sections. Word embeddings represent both input and output vectors. 4 Methodology In this section, we present the technical details of our metaphor processing framework, built upon two hypotheses. Our first hypothesis (H1) is that a metaphorical word can be identified, if the sense the word takes within its context and its literal sense come from different domains. Such a hypothesis is based on the theory of Selectional Preference Violation (Wilks, 1975, 1978) that a metaphorical item can be found in a violation of selectional restrictions, where a word does not satisfy its semantic constrains within a context. Our second hypothesis (H2) is that the literal senses of words occur more commonly in corpora than their metaphoric senses (Cameron, 2003; Martin, 2006; Steen et al., 2010; Shutova, 2016). Figure 2 depicts an overview of our metaphor identification framework. The workflow of our framework is as follows. Step (1) involves training word embeddings based on a Wikipedia dump3 for obtaining input and output vectors of words. 
3https://dumps.wikimedia.org/enwiki/ 20170920/ She devoured his novels. Sense 1 • devour • devoured • … HYPERNYMS • destroy • destroyed • … • ruin • ruined • … • … Sense 2 • devour • devoured • … HYPERNYMS • enjoy • enjoyed • … • bask • basked • … • … Sense 3 • devour • devoured • … HYPERNYMS • eat up • … • … SYNONYMS • down • … • … Sense 4 • devour • devoured • … HYPERNYMS • raven • ravened • … • pig • pigged • … • … … She his novels devour devoured enjoy enjoyed … Candidate set W Context words Figure 3: Given CBOW trained input and output vectors, a target word of devoured, and a context of She [ ] his novels, cos(vo devoured, vi context) = −0.01, cos(vo enjoyed, vi context) = 0.02. In Step (2), given an input sentence, the target word (i.e., the word in the original text whose metaphoricity is to be determined) and its context words (i.e., all other words in the sentence excluding the target word) are separated. We construct a candidate word set W which represents all the possible senses of the target word. This is achieved by first extracting the synonyms and direct hypernyms of the target word from WordNet, and then augmenting the set with the inflections of the extracted synonyms and hypernyms, as well as the target word and its inflections. Auxiliary verbs are excluded from this set, as these words frequently appear in most sentences with little lexical meaning. In Step (3), we identify the best fit word, which is defined as the word that represents the literal sense that the target word is most likely taking given its context. Finally, in Step (4), we compute the cosine similarity between the target word and the best fit word. If the similarity is above a threshold, the target word will be identified as literal, otherwise metaphoric (i.e., based on H1). We will discuss in detail Step (3) and Step (4) in §4.1. 4.1 Metaphor identification Step (3): One of the key steps of our metaphor identification framework is to identify the best fit word for a target word given its surrounding context. The intuition is that the best fit word will represent the literal sense that the target word is most likely taking. E.g., for the sentence “She devoured his novels.” and the corresponding target word devoured, the best fit word is enjoyed, as shown in 1226 Figure 3. Also note that the best fit word could be the target word itself if the target word is used literally. Given a sentence s, let wt be the target word of the sentence, w∗∈W the best fit word for wt, and wcontext the surrounding context for wt, i.e., all the words in s excluding wt. We compute the context embedding vi context by averaging out the input vectors of each context word of wcontext, based on Eq. 2. Next, we rank each candidate word k ∈W by measuring its similarity to the context input vector vi context in the vector space. The candidate word with the highest similarity to the context is then selected as the best fit word. w∗= arg max k SIM(vk, vcontext) (9) where vk is the vector of a candidate word k ∈ W. In contrast to existing word embedding based methods for metaphor identification which only make use of input vectors (Shutova et al., 2016; Rei et al., 2017), we explore using both input and output vectors of CBOW and Skip-gram embeddings when measuring the similarity between a candidate word and the context. We expect that using a combination of input and output vectors might work better. Specifically, we have experimented with four different model variants as shown below. 
SIM-CBOWI = cos(vi k,cbow, vi context,cbow) (10) SIM-CBOWI+O = cos(vo k,cbow, vi context,cbow) (11) SIM-SGI = cos(vi k,sg, vi context,sg) (12) SIM-SGI+O = cos(vo k,sg, vi context,sg) (13) Here, cos(·) is cosine similarity, cbow is CBOW word embeddings, sg is Skip-gram word embeddings. We have also tried other model variants using output vectors for vcontext. However, we found that the models using output vectors for vcontext (both CBOW and Skip-gram embeddings) do not improve our framework performance. Due to the page limit we omitted the results of those models in this paper. Step (4): Given a predicted best fit word w∗ identified in Step (3), we then compute the cosine similarity between the lemmatizations of w∗and the target word wt using their input vectors. SIM(w∗, wt) = cos(vi w∗, vi wt) (14) We give a detailed discussion in §4.2 of our rationale for using input vectors for Eq. 14. If the similarity is higher than a threshold (τ) the target word is considered as literal, otherwise, metaphorical (based on H1). One benefit of our approach is that it allows one to paraphrase the identified metaphorical target word into the best fit word, representing its literal sense in the context. Such a feature is useful for supporting other NLP tasks such as Machine Translation, which we will explore in §6. In terms of the value of threshold (τ), it is empirically determined based on a development set. Please refer to §5 for details. To better explain the workflow of our framework, we now go through an example as illustrated in Figure 3. The target word of the input sentence, “She devoured his novels.” is devoured, and its the lemmatised form devour has four verbal senses in WordNet, i.e., destroy completely, enjoy avidly, eat up completely with great appetite, and eat greedily. Each of these senses has a set of corresponding synonyms and hypernyms. E.g., Sense 3 (eat up completely with great appetite) has synonyms demolish, down, consume, and hypernyms go through, eat up, finish, and polish off. We then construct a candidate word set W by including the synonyms and direct hypernyms of the target word from WordNet, and then augmenting the set with the inflections of the extracted synonyms and hypernyms, as well as the target word devour and its inflections. We then identify the best fit word given the context she [ ] his novels based on Eq. 9. Based on H2, literal expressions are more common than metaphoric ones in corpora. Therefore, the best fit word is expected to frequently appear within the given context, and thus represents the most likely sense of the target word. For example, the similarity between enjoy (i.e., the best fit word) and the the context is higher than that of devour (i.e., the target word), as shown in Figure 3. 4.2 Word embedding: output vectors vs. input vectors Typically, input vectors are used after training CBOW and Skip-gram, with output vectors being abandoned by practical models, e.g., original word2vec model (Mikolov et al., 2013) and Gensim toolkit ( ˇReh˚uˇrek and Sojka, 2010), as these models are designed for modelling similarities in semantics. However, we found that using input vectors to measure cosine similarity between two words with different POS types in a phrase is sub1227 apple orange drink juice CBOW Output vec Output vec Input vec Input vec Skip-gram Figure 4: Input and output vector visualization. The bluer, the more negative. The redder, the more positive. optimal, as words with different POS normally have different semantics. 
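Putting Steps (2)-(4) together, the sketch below shows one possible implementation of the candidate set construction with NLTK's WordNet interface, the best-fit selection of Eq. (9) under the SIM-CBOWI+O scoring of Eq. (11), and the threshold decision on Eq. (14) described just below. The inflections helper is hypothetical (NLTK does not generate inflected forms), the auxiliary-verb list is abbreviated, lemmatization before Eq. (14) is omitted for brevity, and the names are mine; this is a sketch of the described procedure, not the authors' code.

```python
import numpy as np
from nltk.corpus import wordnet as wn

AUXILIARIES = {"be", "is", "are", "was", "were", "do", "does", "did",
               "have", "has", "had"}

def inflections(word):
    return {word}   # hypothetical helper: would also return inflected forms

def candidate_set(target_lemma):
    """Step (2): the target word, its WordNet synonyms and direct hypernyms
    (verbal senses), plus their inflections, minus auxiliary verbs."""
    cands = set(inflections(target_lemma))
    for synset in wn.synsets(target_lemma, pos=wn.VERB):
        lemmas = list(synset.lemma_names())
        for hyper in synset.hypernyms():
            lemmas.extend(hyper.lemma_names())
        for lemma in lemmas:
            cands |= inflections(lemma.replace("_", " "))
    return {c for c in cands if c not in AUXILIARIES}

def identify(target, context_words, W_in, W_out, vocab, tau=0.6):
    """Steps (3)-(4): pick the best fit word by cosine between its output
    vector and the averaged context input vectors (SIM-CBOW_I+O), then call
    the target literal if cos(best fit, target) on input vectors exceeds tau."""
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    v_context = np.mean([W_in[vocab[w]] for w in context_words if w in vocab],
                        axis=0)
    cands = [c for c in candidate_set(target) if c in vocab]
    best = max(cands, key=lambda c: cos(W_out[vocab[c]], v_context))
    literal = cos(W_in[vocab[best]], W_in[vocab[target]]) > tau
    return best, ("literal" if literal else "metaphorical")
```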
They tend to be distant from each other in the input vector space. Taking Skip-gram for example, empirically, input vectors of words with the same POS, occurring within the same contexts tend to be close in the vector space (Mikolov et al., 2013), as they are frequently updated by back propagating the errors from the same context words. In contrast, input vectors of words with different POS, playing different semantic and syntactic roles tend to be distant from each other, as they seldom occur within the same contexts, resulting in their input vectors rarely being updated equally. Our observation is also in line with Nalisnick et al. (2016), who examine IN-IN, OUT-OUT and IN-OUT vectors to measure similarity between two words. Nalisnick et al. discovered that two words which are similar by function or type have higher cosine similarity with IN-IN or OUT-OUT vectors, while using input and output vectors for two words (IN-OUT) that frequently co-occur in the same context (e.g., a sentence) can obtain a higher similarity score. For illustrative purpose, we visualize the CBOW and Skip-gram updates between 4dimensional input and output vectors by Wevi4 (Rong, 2014), using a two-sentence corpus, “Drink apple juice.” and “Drink orange juice.”. We feed these two sentences to CBOW and Skipgram with 500 iterations. As seen Figure 4, the input vectors of apple and orange are similar in both CBOW and Skip-gram, which are different from the input vectors of their context words (drink and juice). However, the output vectors of apple and orange are similar to the input vectors of drink and juice. To summarise, using input vectors to compare similarity between the best fit word and the target word is more appropriate (cf. Eq.14), as they 4https://ronxin.github.io/wevi/ tend to have the same types of POS. When measuring the similarity between candidate words and the context, using output vectors for the former and input vectors for the latter seems to better predict the best fit word. 5 Experimental settings Baselines. We compare the performance of our framework for metaphor identification against three strong baselines, namely, an unsupervised word embedding based model by Shutova et al. (2016), a supervised deep learning model by Rei et al. (2017), and the Context2Vec model5 (Melamud et al., 2016) which achieves the best performance on Microsoft Sentence Completion Challenge (Zweig and Burges, 2011). Context2Vec was not designed for processing metaphors, in order to use it for this we plug it into a very similar framework to that described in Figure 2. We use Context2Vec to predict the best fit word from the candidate set, as it similarly uses context to predict the most likely centre word but with bidirectional LSTM based context embedding method. After locating the best fit word with Context2Vec, we identify the metaphoricity of a target word with the same method (see Step (4) in §4), so that we can also apply it for metaphor interpretation. Note that while Shutova et al. and Rei et al. detect metaphors at the phrase level by identifying metaphorical phrases, Melamud et al.’s model can perform metaphor identification and interpretation on sentences. Dataset. Evaluation was conducted based on a dataset developed by Mohammad et al. (2016). This dataset6, containing 1,230 literal and 409 metaphor sentences, has been widely used for metaphor identification related research (Shutova et al., 2016; Rei et al., 2017). There is a verbal target word annotated by 10 annotators in each sentence. 
We use two subsets of the Mohammad et al. set, one for phrase evaluation and one for sentence evaluation. The phrase evaluation dataset was kindly provided by Shutova, which consists of 316 metaphorical and 331 literal phrases (subject-verb and verb-direct object word pairs), parsed from Mohammad et al.’s dataset. Similar to Shutova et al. (2016), we use 40 metaphoric and 40 literal phrases as a development set and the rest as a test 5http://u.cs.biu.ac.il/˜nlp/resources/ downloads/context2vec/ 6http://saifmohammad.com/WebPages/ metaphor.html 1228 Method P R F1 Phrase Shutova et al. (2016) 0.67 0.76 0.71 Rei et al. (2017) 0.74 0.76 0.74 SIM-CBOWI+O 0.66 0.78 0.72 SIM-SGI+O 0.68 0.82 0.74* Sent. Melamud et al. (2016) 0.60 0.80 0.69 SIM-SGI 0.56 0.95 0.70 SIM-SGI+O 0.62 0.89 0.73 SIM-CBOWI 0.59 0.91 0.72 SIM-CBOWI+O 0.66 0.88 0.75* Table 1: Metaphor identification results. NB: * denotes that our model outperforms the baseline significantly, based on two-tailed paired t-test with p < 0.001. set. For sentence evaluation, we select 212 metaphorical sentences whose target words are annotated with at least 70% agreement. We also add 212 literal sentences with the highest agreement. Among the 424 sentences, we form our development set with 12 randomly selected metaphoric and 12 literal instances to identify the threshold for detecting metaphors. The remaining 400 sentences are our testing set. Word embedding training. We train CBOW and Skip-gram models on a Wikipedia dump with the same settings as Shutova et al. (2016) and Rei et al. (2017). That is, CBOW and Skip-gram models are trained iteratively 3 times on Wikipedia with a context window of 5 to learn 100-dimensional word input and output vectors. We exclude words with total frequency less than 100. 10 negative samples are randomly selected for each centre word training. The word down-sampling rate is 10-5. We use Stanford CoreNLP (Manning et al., 2014) lemmatized Wikipedia to train word embeddings for phrase level evaluation, which is in line with Shutova et al. (2016). In sentence evaluation, we use the original Wikipedia for training word embeddings. 6 Experimental Results 6.1 Metaphor identification Table 1 shows the performance of our model and the baselines on the task of metaphor identification. All the results for our models are based on a threshold of 0.6, which is empirically determined based on the developing set. For sentence level metaphor identification, it can be observed that all our models outperform the baseline (Melamud et al., 2016), with SIM-CBOWI+O giving the highest F1 score of 75% which is a 6% gain over the baseline. We also see that models based on both input and output vectors (i.e., SIM-CBOWI+O and SIM-SGI+O) yield better performance than the models based on input vectors only (i.e., SIM-CBOWI and SIM-SGI). Such an observation supports our assumption that using input and output vectors can better model similarity between words that have different types of POS, than simply using input vectors. When comparing CBOW and Skip-gram based models, we see that CBOW based models generally achieve better performance in precision whereas Skip-gram based models perform better in recall. In terms of phrase level metaphor identification, we compare our best performing models (i.e., SIM-CBOWI+O and SIM-SGI+O) against the approaches of Shutova et al. (2016) and Rei et al. (2017). In contrast to the sentence level evaluation in which SIM-CBOWI+O gives the best performance, SIM-SGI+O performs best for the phrase level evaluation. 
This is likely due to the fact that Skip-gram is trained by using a centre word to maximise the probability of each context word, whereas CBOW uses the average of context word input vectors to maximise the probability of the centre word. Thus, Skip-gram performs better in modelling one-word context, while CBOW has better performance in modelling multi-context words. When comparing to the baselines, our model SIM-SGI+O significantly outperforms the word embedding based approach by Shutova et al. (2016), and gives the same performance as the deep supervised method (Rei et al., 2017) which requires a large amount of labelled data for training and cost in training time. SIM-CBOWI+O and SIM-SGI+O are also evaluated with different thresholds for both phrase and sentence level metaphor identification. As can be seen from Table 2, the results are fairly stable when the threshold is set between 0.5 and 0.9 in terms of F1. 6.2 Metaphor processing for MT We believe that one of the key purposes of metaphor processing is to support other NLP tasks. Therefore, we conducted another set of experiments to evaluate how metaphor processing can be used to support English-Chinese machine translation. The evaluation task was designed as follows. From the test set for sentence-level metaphor identification which contains 200 metaphoric and 1229 τ Sentence Phrase P R F1 F1SIM-CBOWI+O F1SIM-SGI+O 0.3 0.75 0.60 0.67 0.56 0.46 0.4 0.69 0.75 0.72 0.65 0.63 0.5 0.67 0.82 0.74 0.71 0.72 0.6 0.66 0.88 0.75 0.72 0.74 0.7 0.64 0.88 0.74 0.72 0.73 0.8 0.63 0.89 0.74 0.72 0.73 0.9 0.63 0.89 0.74 0.71 0.73 1.0 0.50 1.00 0.67 0.65 0.65 Table 2: Model performance vs. different threshold (τ) settings. NB: the sentence level results are based on SIM-CBOWI+O. 0.3 0.4 0.5 0.6 0.7 0.8 Literal Metaphoric Overall Literal Metaphoric Overall Translation accuracy Google Bing Original sentence Paraphrased by our model Paraphrased by the baseline (Melamud et al. 2016) +0.26 +0.24 +0.11 +0.09 Figure 5: Accuracy of metaphor interpretation, evaluated on Google and Bing Translation. 200 literal sentences, we randomly selected 50 metaphoric and 50 literal sentences to construct a set SM for the Machine Translation (MT) evaluation task. For each sentence in SM, if it is predicted as literal by our model, the sentence is kept unchanged; otherwise, the target word of the sentence is paraphrased with the best fit word (refer to §4.1 for details). The metaphor identification step resulted in 42 True Positive (TP) instances where the ground truth label is metaphoric and 19 False Positive (FP) instances where the ground truth label is literal, resulting in a total of 61 instances predicted as metaphorical by our model. We also run one of our baseline models, Context2Vec, on the 61 sentences to predict the best fit words for comparison. Our hypothesis is that by paraphrasing the metaphorically used target word with the best fit word which expresses the target word’s real meaning, the performance of translation engines can be improved. We test our hypothesis on two popular EnglishChinese MT systems, i.e., the Google and Bing Translators. We recruited from a UK university 5 Computing Science postgraduate students who are Chinese native speakers to participate the EnglishChinese MT evaluation task. During the evaluation, subjects were presented with a questionnaire The ex-boxer's job is to bounce people who want to enter this private club. bounce: eject from the premises 1. 前拳击手的工作是反弹人谁想要进入这个私人俱乐部。 2. 前拳击手的工作是让想要进入这个私人俱乐部的人弹跳。 3. 
前拳击手的工作是拒绝谁想要进入这个私人俱乐部的人。 4. 前拳击手的工作是拒绝那些想进入这个私人俱乐部的人。 5. 前拳击手的工作是打人谁想要进入这个私人俱乐部。 6. 前拳击手的工作是打击那些想进入这个私人俱乐部的人。 Good / Bad Sample Questionnaire Figure 6: MT-based metaphor interpretation questionnaire. Acc-met. Acc-lit. Acc-overall Google Orig. Sent. 0.34 0.68 0.51 Context2Vec 0.50 0.66 0.58 SIM-CBOWI+O 0.60 0.64 0.62 Bing Orig. Sent. 0.42 0.70 0.56 Context2Vec 0.60 0.66 0.63 SIM-CBOWI+O 0.66 0.64 0.65 Table 3: Accuracy of metaphor interpretation, evaluated on Google and Bing Translation. containing English-Chinese translations of each of the 100 randomly selected sentences. For each sentence predicted as literal (39 out of 100 sentences), there are two corresponding translations by Google and Bing respectively. For each sentence predicted as metaphoric (61 out of 100 sentences), there are 6 corresponding translations. An example of the evaluation task is shown in Figure 6, in which “The ex-boxer’s job is to bounce people who want to enter this private club.” is the original sentence, followed by an WordNet explanation of the target word of the sentence (i.e., bounce: eject from the premises). There are 6 translations. No. 1-2 are the original sentence translations, translated by Google Translate (GT) and Bing Translator (BT). The target word, bounce, is translated, taking the sense of (1) physically rebounding like a ball (反弹), (2) jumping (弹跳). No. 3-4 are SIM-CBOWI+O paraphrased sentences, translated by GT and BT, respectively, taking the sense of refusing (拒绝). No. 5-6 are Context2Vec paraphrased sentences, translated by GT and BT, respectively, taking the sense of hitting (5.打; 6.打击). Subjects were instructed to determine if the translation of a target word can correctly represent its sense within the translated sentence, matching its context (cohesion) in Chinese. Note that we evaluate the translation of the target word, therefore, errors in context word translations are ignored by the subjects. Finally, a label is taken agreed by more than half annotators. Noticeably, 1230 based on our observation, there is always a Chinese word corresponding to an English target word in MT, as the annotated target word normally represents important information in the sentence in the applied dataset. We use translation accuracy as a measure to evaluate the improvement on MT systems after metaphor processing. The accuracy is calculated by dividing the number of correctly translated instances by the total number of instances. As can be seen in Figure 5 and Table 3, after paraphrasing the metaphorical sentences with the SIM-CBOWI+O model, the translation improvement for the metaphorical class is dramatic for both MT systems, i.e., 26% improvement for Google Translate and 24% for Bing Translate. In terms of the literal class, there is some small drop (i.e., 4-6%) in accuracy. This is due to the fact that some literals were wrongly identified as metaphors and hence error was introduced during paraphrasing. Nevertheless, with our model, the overall translation performance of both Google and Bing Translate are significantly improved by 11% and 9%, respectively. Our baseline model Context2Vec also improves the translation accuracy, but is 2-4 % lower than our model in terms of overall accuracy. In summary, the experimental results show the effectiveness of applying metaphor processing for supporting Machine Translation. 7 Conclusion We proposed a framework that identifies and interprets metaphors at word-level with an unsupervised learning approach. 
Our model outperforms the unsupervised baselines in both sentence and phrase evaluations. The interpretation of the identified metaphorical words given by our model also contributes to Google and Bing translation systems with 11% and 9% accuracy improvements. The experiments show that using words’ hypernyms and synonyms in WordNet can paraphrase metaphors into their literal counterparts, so that the metaphors can be correctly identified and translated. To our knowledge, this is the first study that evaluates a metaphor processing method on Machine Translation. We believe that compared with simply identifying metaphors, metaphor processing applied in practical tasks, can be more valuable in the real world. Additionally, our experiments demonstrate that using a candidate word output vector instead of its input vector to model the similarity between the candidate word and its context yields better results in the best fit word (the literal counterpart of the metaphor) identification. CBOW and Skip-gram do not consider the distance between a context word and a centre word in a sentence, i.e., context word contributes to predict the centre word equally. Future work will introduce weighted CBOW and Skip-gram to learn positional information within sentences. Acknowledgments This work is supported by the award made by the UK Engineering and Physical Sciences Research Council (Grant number: EP/P005810/1). References Dan Assaf, Yair Neuman, Yohai Cohen, Shlomo Argamon, Newton Howard, Mark Last, Ophir Frieder, and Moshe Koppel. 2013. Why “dark thoughts” aren’t really dark: A novel algorithm for metaphor identification. In Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB), 2013 IEEE Symposium on. IEEE, pages 60–65. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent Dirichlet allocation. Journal of machine Learning research 3(Jan):993–1022. Lynne Cameron. 2003. Metaphor in educational discourse. A&C Black. Max Coltheart. 1981. The MRC psycholinguistic database. The Quarterly Journal of Experimental Psychology 33(4):497–505. Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. Bradford Books. Ilana Heintz, Ryan Gabbard, Mahesh Srinivasan, David Barner, Donald S Black, Marjorie Freedman, and Ralph Weischedel. 2013. Automatic extraction of linguistic metaphor with LDA topic modeling. In Proceedings of the First Workshop on Metaphor in NLP (ACL 2013). pages 58–66. Luuk Lagerwerf and Anoe Meijers. 2008. Openness in metaphorical and straightforward advertisements: Appreciation effects. Journal of Advertising 37(2):19–30. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. pages 55–60. James H Martin. 2006. A corpus-based analysis of context effects on metaphor comprehension. Technical Report CU-CS-738-94, Boulder: University of Colorado: Computer Science Department. 1231 Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context embedding with bidirectional LSTM. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning (CoNLL 2016). pages 51–61. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. Proceedings of International Conference on Learning Representations (ICLR 2013) . 
Saif M Mohammad, Ekaterina Shutova, and Peter D Turney. 2016. Metaphor as a medium for emotion: An empirical study. Proceedings of the Joint Conference on Lexical and Computational Semantics (*SEM 2016) page 23. Eric Nalisnick, Bhaskar Mitra, Nick Craswell, and Rich Caruana. 2016. Improving document ranking with dual word embeddings. In Proceedings of the 25th International Conference Companion on World Wide Web. International World Wide Web Conferences Steering Committee, pages 83–84. Yair Neuman, Dan Assaf, Yohai Cohen, Mark Last, Shlomo Argamon, Newton Howard, and Ophir Frieder. 2013. Metaphor identification in large texts corpora. PloS one 8(4):e62343. Radim ˇReh˚uˇrek and Petr Sojka. 2010. Software framework for topic modelling with large corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks. ELRA, Valletta, Malta, pages 45–50. http://is.muni.cz/ publication/884893/en. Marek Rei, Luana Bulat, Douwe Kiela, and Ekaterina Shutova. 2017. Grasping the finer point: A supervised similarity network for metaphor detection. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017) pages 1537–1546. Vassiliki Rentoumi, George A Vouros, Vangelis Karkaletsis, and Amalia Moser. 2012. Investigating metaphorical language in sentiment analysis: A sense-to-sentiment perspective. ACM Transactions on Speech and Language Processing (TSLP) 9(3):6. Xin Rong. 2014. word2vec parameter learning explained. arXiv preprint arXiv:1411.2738 . Ekaterina Shutova. 2016. Design and evaluation of metaphor processing systems. Computational Linguistics . Ekaterina Shutova, Douwe Kiela, and Jean Maillard. 2016. Black holes and white rabbits: Metaphor identification with visual features. Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACLHLT 2016) pages 160–170. Ekaterina Shutova, Lin Sun, Elkin Dar´ıo Guti´errez, Patricia Lichtenstein, and Srini Narayanan. 2017. Multilingual metaphor processing: Experiments with semi-supervised and unsupervised learning. Computational Linguistics 43(1):71–123. Gerard J Steen, Aletta G Dorst, J Berenike Herrmann, Anna Kaal, Tina Krennmayr, and Trijntje Pasma. 2010. A method for linguistic metaphor identification: From MIP to MIPVU, volume 14. John Benjamins Publishing. Tomek Strzalkowski, George Aaron Broadwell, Sarah Taylor, Laurie Feldman, Samira Shaikh, Ting Liu, Boris Yamrom, Kit Cho, Umit Boz, Ignacio Cases, et al. 2013. Robust extraction of metaphor from novel data. In Proceedings of the First Workshop on Metaphor in NLP (ACL 2013). pages 67–76. Yulia Tsvetkov, Leonid Boytsov, Anatole Gershman, Eric Nyberg, and Chris Dyer. 2014. Metaphor detection with cross-lingual model transfer. Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL 2014) pages 248– 258. Peter D Turney, Yair Neuman, Dan Assaf, and Yohai Cohen. 2011. Literal and metaphorical sense identification through concrete and abstract context. Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2011) pages 680–690. Yorick Wilks. 1975. A preferential, pattern-seeking, semantics for natural language inference. Artificial Intelligence 6(1):53–74. Yorick Wilks. 1978. Making preferences more active. Artificial Intelligence 11(3):197–223. Geoffrey Zweig and Christopher JC Burges. 2011. The Microsoft research sentence completion challenge. 
Technical Report MSR-TR-2011-129, Microsoft.
2018
113
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1232–1242 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1232 Incorporating Latent Meanings of Morphological Compositions to Enhance Word Embeddings Yang Xu†, Jiawei Liu†, Wei Yang‡∗, and Liusheng Huang‡ School of Computer Science and Technology, University of Science and Technology of China, Hefei, 230027, China †{smallant, ustcljw}@mail.ustc.edu.cn ‡{qubit, lshuang}@ustc.edu.cn Abstract Traditional word embedding approaches learn semantic information at word level while ignoring the meaningful internal structures of words like morphemes. Furthermore, existing morphology-based models directly incorporate morphemes to train word embeddings, but still neglect the latent meanings of morphemes. In this paper, we explore to employ the latent meanings of morphological compositions of words to train and enhance word embeddings. Based on this purpose, we propose three Latent Meaning Models (LMMs), named LMM-A, LMM-S and LMM-M respectively, which adopt different strategies to incorporate the latent meanings of morphemes during the training process. Experiments on word similarity, syntactic analogy and text classification are conducted to validate the feasibility of our models. The results demonstrate that our models outperform the baselines on five word similarity datasets. On Wordsim-353 and RG-65 datasets, our models nearly achieve 5% and 7% gains over the classic CBOW model, respectively. For the syntactic analogy and text classification tasks, our models also surpass all the baselines including a morphology-based model. 1 Introduction Word embedding, which is also termed distributed word representation, has been a hot topic in the area of Natural Language Processing (NLP). The derived word embeddings have been used in plenty of tasks such as text classification (Liu ∗This is the corresponding author. et al., 2015), information retrieval (Manning et al., 2008), sentiment analysis (Shin et al., 2016), machine translation (Cho et al., 2014) and so on. Recently, some classic word embedding methods have been proposed, like Continuous Bag-ofWord (CBOW), Skip-gram (Mikolov et al., 2013a), Global Vectors (GloVe) (Pennington et al., 2014). These methods can usually capture word-level semantic information but ignore the meaningful inner structures of words like English morphemes or Chinese characters. The effectiveness of exploiting the internal compositions of words has been validated by some previous work (Luong et al., 2013; Botha and Blunsom, 2014; Chen et al., 2015; Cotterell et al., 2016). Some of them compute the word embeddings by directly adding the representations of morphemes/characters to context words or optimizing a joint objective over distributional statistics and morphological properties (Qiu et al., 2014; Botha and Blunsom, 2014; Chen et al., 2015; Luong et al., 2013; Lazaridou et al., 2013), while others introduce some probabilistic graphical models to build relationship between words and their internal compositions. e.g., Bhatia et al. (2016) treat word embeddings as latent variables for a prior distribution, which reflects words’ morphological properties, and feed the latent variables into a neural sequence model to obtain final word embeddings. Cotterell et al. (2016) construct a Gaussian graphical model that binds the morphological analysis to pre-trained word embeddings, which can help to smooth the noisy embeddings. 
Besides, these two methods also have the ability to predict embeddings for unseen words. Different from all the above models (we regard them as Explicit models in Fig. 1) where internal compositions are directly used to encode morphological regularities into words and the 1233 it is an incredible unbelievable thing it is that in cred ible un believ able not believe able capable not believe able capable Prefix Latent Meaning in un not not Root Latent Meaning believ cred believe believe Suffix Latent Meaning able ible able, capale able, capale sentence i : sentence j : Explicit models directly use morphemes Our models employ the latent meanings of morphemes Corpus Lookup table Figure 1: An illustration of explicit models and our models in an English corpus. Although incredible and unbelievable have different morphemes, their morphemes have the same latent meanings. composition embeddings like morpheme embeddings are generated as by-products, we explore a new way to employ the latent meanings of morphological compositions rather than the compositions themselves to train word embeddings. As shown in Fig. 1, according to the distributional semantics hypothesis (Sahlgren, 2008), incredible and unbelievable probably have similar word embeddings because they have similar context. As a matter of fact, incredible is a synonym of unbelievable and their embeddings are expected to be close enough. Since the morphemes of the two words are different, especially the roots cred and believ, the explicit models may not significantly shorten the distance between the words in the vector space. Fortunately, the latent meanings of the different morphemes are the same (e.g., the latent meanings of roots cred, believ are “believe”) as listed in the lookup table (derived from the resources provided by Michigan State University),1 which evidently implies that incredible and unbelievable share the same meanings. In addition, by replacing morphemes with their latent meanings, we can directly and simply quantize the similarities between words and their sub-compositions with the same metrics used in most NLP tasks, e.g., cosine similarity. Subsequently, the similarities are utilized to calculate the weights of latent meanings of morphemes for each word. In this paper, we try different strategies to 1https://msu.edu/˜defores1/gre/roots/ gre_rts_afx1.htm modify the input layer and update rules of a neural language model, e.g., CBOW, Skipgram, and propose three lightweight and efficient models, which are termed Latent Meaning Models (LMMs), to not only encode morphological properties into words but also enhance the semantic similarities among word embeddings. Usually, the vocabulary derived from the corpus contains vast majority or even all of the latent meanings. Rather than generating and training extra embeddings for latent meanings, we directly override the embeddings of the corresponding words in the vocabulary. Moreover, a word map is created to describe the relations between words and the latent meanings of their morphemes. For comparison, our models together with the state-of-the-art baselines are tested on two basic NLP tasks, which are word similarity and syntactic analogy, and one downstream text classification task. The results show that LMMs outperform the baselines and get satisfactory improvement on these tasks. In all, the main contributions of this paper are summarized as follows. 
• Rather than directly incorporating the morphological compositions (surface forms) of words, we decide to employ the latent meanings of the compositions (underlying forms) to train the word embeddings. To validate the feasibility of our purpose, three specific models, named LMMs, are proposed with different strategies to incorporate the latent meanings. 1234 • We utilize a medium-sized English corpus to train LMMs and the state-of-the-art baselines, and evaluate their performance on two basic NLP tasks, i.e., word similarity and syntactic analogy, and one downstream text classification task. The results show that LMMs outperform the baselines on five word similarity datasets. On the golden standard Wordsim-353 and RG-65, LMMs approximately achieve 5% and 7% gains over CBOW, respectively. For the syntactic analogy and text classification tasks, LMMs also surpass all the baselines. • We conduct experiments to analyze the impacts of parameter settings, and the results demonstrate that the performance of LMMs on the smallest corpus is similar to the performance of CBOW on the corpus that is five times as large, which convinces us that LMMs are of great advantages to enhance word embeddings compared with traditional methods. 2 Background and Related Work Considering the high efficiency of CBOW proposed by Mikolov et al. (2013a), our LMMs are built upon CBOW. Here, we first review some backgrounds of CBOW, and then present some related work on recent word-level and morphology-based word embedding methods. CBOW with Negative Sampling With a sliding window, CBOW utilizes the context words in the window to predict the target word. Given a sequence of tokens T = {t1, t2, · · · , tn}, where n is the size of a training corpus, the objective of CBOW is to maximize the following average log probability equation: L = 1 n n X i=1 log p ti|context(ti)  , (1) where context(ti) represents the context words of ti in the slide window, p ti|context(ti)  is derived by softmax. Due to huge size of English vocabulary, p ti|context(ti)  can not be calculated in a tolerable time. Therefore, negative sampling and hierarchical softmax are proposed to solve this problem. Owing to the efficiency of negative sampling, all our models are trained based on it. In terms of negative sampling, the log probability log p(tO|tI) is transformed as: log δ vec′(tO)T vec(tI)  + m X i=1 log  1 −δ vec′(ti)T vec(tI)  , (2) where m denotes the number of negative samples, and δ(·) is the sigmoid function. The first item of Eq. (2) is the probability of target word when its context is given. The second item indicates the probability that negative samples do not share the same context as the target word. Word-level Word Embedding In general, word embedding models can mainly be divided into two branches. One is based on neural network like the classic CBOW model (Mikolov et al., 2013a), while the other is based on matrix factorization. Besides CBOW, Skip-gram (Mikolov et al., 2013a) is another widely used neuralnetwork-based model, which predicts the context by using the target word (Mikolov et al., 2013a). As for matrix factorization, Dhillon et al. (2015) proposed a spectral word embedding method to measure the correlation between word information matrix and context information matrix. In order to combine the advantages of models based on neural network and matrix factorization, Pennington et al. (2014) proposed a famous word embedding model named GloVe, which is reported to outperform the CBOW and Skip-gram models on some tasks. 
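To make the negative-sampling objective of Eq. (2) concrete, the following is a minimal numpy sketch of a single CBOW update. It is an illustration only, not the word2vec C implementation used later in the paper; the noise distribution, learning rate, function name and hyperparameter defaults are assumptions.

```python
import numpy as np

def cbow_ns_step(center_id, context_ids, W_in, W_out, noise_dist, k=20, lr=0.025, rng=None):
    """One CBOW update with negative sampling, in the spirit of Eq. (2).

    W_in  : context (input) embeddings, shape (V, d)
    W_out : target (output) embeddings, shape (V, d)
    noise_dist : noise distribution over the vocabulary, shape (V,), sums to 1
    """
    rng = rng if rng is not None else np.random.default_rng()
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    # CBOW input vector: average of the context-word embeddings.
    h = W_in[context_ids].mean(axis=0)                       # (d,)

    # One positive target plus k negative samples drawn from the noise distribution.
    negatives = rng.choice(len(noise_dist), size=k, p=noise_dist)
    targets = np.concatenate(([center_id], negatives))
    labels = np.zeros(k + 1)
    labels[0] = 1.0

    scores = sigmoid(W_out[targets] @ h)                     # (k+1,)
    loss = -np.log(scores[0]) - np.sum(np.log(1.0 - scores[1:] + 1e-10))

    # Gradients of the negative-sampling loss.
    g = scores - labels                                      # (k+1,)
    grad_h = g @ W_out[targets]                              # (d,)
    W_out[targets] -= lr * np.outer(g, h)
    for c in context_ids:                                    # h is an average, so its gradient is shared
        W_in[c] -= lr * grad_h / len(context_ids)
    return loss
```

The real word2vec tool additionally subsamples frequent words and smooths the unigram noise distribution with a 3/4 power, which this sketch omits for brevity.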
These models are effective to capture word-level semantic information while neglecting inner structures of words. In contrast, the unheeded inner structures are utilized in both our LMMs and other morphology-based models. Morphology-based Word Embedding Recently, some more fine-grained word embedding models are proposed by exploiting the morphological compositions of words, e.g., root and affixes. These morphology-based models can be divided into two main categories. The first category directly adds the representations of internal structures to word embeddings or optimizes a joint objective over distributional statistics and morphological properties (Luong et al., 2013; Qiu et al., 2014; Botha and Blunsom, 2014; Lazaridou et al., 2013; Chen et al., 2015; Kim et al., 2016; Cotterell and Sch¨utze, 2015). Chen et al. (2015) proposed a character-enhanced Chinese word embedding model, which splits a Chinese word into several characters and add the characters into the input layer of their models. 1235 Luong et al. (2013) utilized the morpheme segments produced by Morfessor (Creutz and Lagus, 2007) and constructed morpheme trees for words to learn morphologically-aware word embeddings by the recursive neural network. Kim et al. (2016) incorporated the convolutional character information into English words. Their model can learn character-level semantic information for embeddings, which is proved to be effective for some morpheme-rich languages. However, with a huge size architecture, it’s very time-consuming. Cotterell et al. (2015) augmented the log linear model to make the words, which share similar morphemes, gather together in vector space. The other category tries to use probabilistic graphical models to connect words with their morphological compositions, and further learns word embeddings (Bhatia et al., 2016; Cotterell et al., 2016). Bhatia et al. (2016) employed morphemes and made them as prior knowledge of the latent word embeddings, then fed the latent variables into a neural sequence model to obtain final word embeddings. Cotterell et al. (2016) proposed a morpheme-based post-processor for pre-trained word embeddings. They constructed a Gaussian graphical model which can extrapolate continuous representations for unknown words. However, these morphology-based models directly exploit the internal compositions of words to encode morphological regularities into word embeddings, and some by-products are also produced like morpheme embeddings. In contrast, we employ the latent meanings of morphological compositions to provide deeper insights for training better word embeddings. Furthermore, since the latent meanings are included in the vocabulary, there is no extra embedding being generated. 3 Our Latent Meaning Models We leverage different strategies to modify the input layer and update rules of CBOW when incorporating the latent meanings of morphemes. Three specific models, named Latent Meaning Model-Average (LMM-A), LMM-Similarity (LMM-S) and LMM-Max (LMM-M), are proposed. It should be stated that, for now, our models mainly concern the derivational morphemes, which can be interpreted to some meaningful words or phrases (i.e., latent meanings), not the inflectional morphemes like tense, number, not Latent Meaning Prefix Root Suffix it is incredible thing an SUM in capable believe able An item of the Word Map incredible not in believe able capable Word Prefix Root Suffix 1/5 1/5 1/5 1/5 1/5 Figure 2: A paradigm of LMM-A. The sentence “it is an incredible thing” is selected as an example. 
When calculating the input vector of “incredible”, we first find out the latent meanings of its morphemes in the word map, and add the vectors of all latent meanings to the vector of “incredible” with equal weights. gender, etc. LMM-A assumes that all latent meanings of morphemes of a word have equal contributions to the word. LMM-A is applicable to the condition where words are correctly segmented into morphemes and each morpheme is interpreted to appropriate latent meanings. However, refining the latent meanings for morphemes is timeconsuming and needs vast human annotations. To address this concern, LMM-S is proposed. Motivated by the attention scheme, LMM-S holds the assumption that all latent meanings have different contributions, and assigns the outliers small weights to let them have little impact on the representation of the target word. Furthermore, in LMM-M, we only keep the latent meanings which have the greatest contributions to the corresponding word. In what follows, we are going to introduce each of our LMMs in detail. At the end of this section, we will introduce the update rules of the models. 3.1 LMM-A Given a sequence of tokens T = {t1, t2, · · · , tn}, LMM-A assumes that morphemes’ latent meanings of token ti (i ∈[1, n]) have equal contributions to ti, as shown in Fig. 2. The item for ti in the word map is ti 7→Mi. Mi is a set of latent meanings of ti’s morphemes, and it consists of three sub-parts Pi, Ri and Si corresponding to the latent meanings of prefixes, roots and suffixes of ti, respectively. Hence, at the input layer, the 1236 not Latent Meaning Prefix Root Suffix it is incredible thing an SUM in capable believe able An item of the Word Map incredible not in believe able capable Word Prefix Root Suffix ωin ωnot ωbelieve ωcapable ωable Figure 3: A paradigm of LMM-S. In this model, all latent meanings of morphemes of “incredible” are added together with different weights. modified embedding of ti can be expressed as bvti = 1 2 vti + 1 Ni X w∈Mi vw  , (3) where vti is the original word embedding of ti, Ni denotes the length of Mi and vw indicates the embedding of latent meaning w. Meanwhile, we assume the original word embedding and the average embeddings of vw (w ∈Mi) have equal weights, i.e., 0.5. Eventually, bvti rather than vti is utilized for training in CBOW. 3.2 LMM-S This model is proposed based on the attention scheme. We observe that many morphemes have more than one latent meaning. For instance, prefix in- means “in” and “not”, and suffix -ible means “able” and “capable”.2 As Fig. 3 shows, for the item incredible 7→  [in, not], [believe], [able, capable] in the word map, the latent meanings have different biases towards “incredible”. Therefore, we assign different weights to latent meanings. We measure the weights of latent meanings by calculating the normalized similarities between token ti and the corresponding latent meanings. For LMM-S, the modified embedding of ti can be rewritten as bvti = 1 2  vti + X w∈Mi ω<ti,w> · vw  , (4) where vti is the original vector of ti, and ω<ti,w> denotes the weight between ti and the latent meaning w (w ∈Mi). We use cos(va, vb) to denote the 2All the latent meanings of roots and affixes are referred to the resources we mentioned before. not Latent Meaning Prefix Root Suffix it is incredible thing an SUM in capable believe able An item of the Word Map incredible not in believe able capable Word Prefix Root Suffix ωnot ωbelieve ωable Figure 4: A paradigm of LMM-M. 
The latent meanings with maximum similarities towards “incredible” are selected. cosine similarity between va and vb, then ω<ti,w> is expressed as follows: ω<ti,w> = cos(vti, vw) P x∈Mi cos(vti, vx). (5) 3.3 LMM-M To further eliminate the impacts of some uncorrelated latent meanings to a word, in LMM-M, we only select the latent meanings that have maximum similarities to the token ti from Pi, Ri and Si. As is shown in Fig. 4, the latent meaning “not” of prefix in is finally selected since the similarity between “not” and “incredible” is larger than that between “in” and “incredible”. For token ti, LMM-M is mathematically defined as bvti = 1 2  vti + X w∈Mimax ω<ti,w> · vw  , (6) where Mi max = {P i max, Ri max, Si max} is the set of latent meanings with maximum similarities towards token ti, and P i max, Ri max, Si max are obtained by the following equations: P i max = arg max w cos(vti, vw), w ∈Pi, Ri max = arg max w cos(vti, vw), w ∈Ri, (7) Si max = arg max w cos(vti, vw), w ∈Si. The normalized weight ω<ti,w> (w ∈Mi max) can similarly be derived like Eq. (5). 1237 3.4 Update Rules for LMMs After modifying the input layer of CBOW, Eq. (1) can be rewritten as bL = 1 n n X i=1 log p vti| X tj∈context(ti) bvtj  , (8) where bvtj is the modified vector of vtj (tj ∈ context(ti)). Since the word map describes top-level relations between words and the latent meanings, these relations don’t change during the training period. All parameters introduced by our models can be directly derived using the word map and word vectors, thus no extra parameter needs to be trained. When the gradient is propagated back to the input layer, we update not just the word vector vtj (tj ∈context(ti)) but the vectors of the latent meanings in the vocabulary with the same weights as they are added to the vector vtj. 4 Experimental Setup Before conducting experiments, some experimental settings are firstly introduced in this section. 4.1 Corpus and Word Map We utilize a medium-sized English corpus to train all word embedding models. The corpus stems from the website of the 2013 ACL Workshop on Machine Translation3 and is used in (Kim et al., 2016). We choose the news corpus of 2009 whose size is about 1.7GB. It contains approximately 500 million tokens and 600,000 words in the vocabulary. To get better quality of the word embeddings, we filter all digits and some punctuation marks out of the corpus. For many languages, there exist large morphological lexicons or morphological tools that can analyze any word form (Cotterell and Sch¨utze, 2015). To create the word map, we need to obtain the morphemes of each word and interpret them with the lookup table mentioned above to get the latent meanings. Usually, the lookup table can also be derived from the morphological lexicons for different languages, although it costs some time and manpower, we can create the lookup table once for all since it represents the common knowledge with respect to a certain language. Specifically, we first perform an 3http://www.statmt.org/wmt13/ translation-task.html unsupervised morpheme segmentation using Morefessor (Creutz and Lagus, 2007) for the vocabularies. Then we execute matching between the segmentation results and the morphological compositions in the lookup table, and the character sequence with largest overlap ratio will be viewed as a final morpheme and further be replaced by its latent meanings. 
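Before turning to the coverage of the lookup table below, the three input-vector variants of Equations (3) through (7) can be summarised in a short sketch. This is a schematic re-implementation under simplifying assumptions (a plain dict word map, a single embedding matrix indexed by `vocab` that is shared by words and latent meanings), not the authors' modified word2vec code.

```python
import numpy as np

def cosine(a, b, eps=1e-8):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def lmm_input_vector(word, word_map, vocab, E, mode="S"):
    """Modified input embedding of `word`, following Eqs. (3), (4) and (6).

    word_map : {word: {"prefix": [...], "root": [...], "suffix": [...]}}
               mapping each morpheme slot to its latent meanings
    vocab    : {token: row index into E}; latent meanings are ordinary vocabulary words
    E        : shared embedding matrix, shape (V, d)
    mode     : "A" (equal weights), "S" (cosine weights), "M" (best meaning per slot)
    """
    v = E[vocab[word]]
    slots = word_map.get(word, {})
    meanings = [m for slot in slots.values() for m in slot if m in vocab]
    if not meanings:
        return v                                   # no morphological information available

    if mode == "M":                                # LMM-M: keep only the closest meaning per slot
        meanings = [max((m for m in slot if m in vocab),
                        key=lambda m: cosine(v, E[vocab[m]]))
                    for slot in slots.values() if any(m in vocab for m in slot)]

    if mode == "A":                                # LMM-A: equal contributions (Eq. 3)
        weights = np.full(len(meanings), 1.0 / len(meanings))
    else:                                          # LMM-S / LMM-M: normalised cosine weights (Eq. 5)
        sims = np.array([cosine(v, E[vocab[m]]) for m in meanings])
        weights = sims / sims.sum()

    latent = weights @ np.stack([E[vocab[m]] for m in meanings])
    return 0.5 * (v + latent)                      # word vector and latent part share weight 1/2
```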
Although the lookup table employed in this paper contains latent meanings for only 90 prefixes, 382 roots and 67 suffixes, we focus on validating the feasibility of enhancing word embeddings with the latent meanings of morphemes, and expending the lookup table is left as future work. 4.2 Baselines For comparison, we choose three word-level state-of-the-art word embedding models including CBOW, Skip-gram (Mikolov et al., 2013a) and GloVe (Pennington et al., 2014), and we also implement an Explicitly Morpheme-related Model (EMM), which is a variant version of the previous work (Qiu et al., 2014). The architecture of EMM is based on our LMM-A, where latent meanings are replaced back to morphemes and the embeddings of morphemes are also learned when training word embeddings. This enables our evaluation to focus on the critical difference between our models and the explicit model (Bhatia et al., 2016). We utilize the source code of word2vec4 to train CBOW and Skip-gram. GloVe is trained based on the code5 provided by Pennington et al. (2014). We modify the source of word2vec and train our models and EMM. 4.3 Parameter Settings Parameter settings have a great effect on the performance of word embeddings (Levy et al., 2015). For fairness, all models are trained based on equal parameter settings. In order to accelerate the training process, CBOW, Skip-gram and EMM together with our models are trained by using the negative sampling technique. It is suggested that the number of negative samples in the range 5-20 is useful for small corpus (Mikolov et al., 2013b). If large corpus is used, the number of negative samples can be as small as 2-5. According to the size of corpus we used, the number of negative samples is empirically set to be 20 in this paper. 4https://github.com/dav/word2vec 5http://nlp.stanford.edu/projects/ glove 1238 Name Pairs Name Pairs RG-65 65 RW 2034 SCWS 2003 Men-3k 3000 Wordsim-353 353 WS-353-REL 252 Table 1: Details of datasets. The column “Pairs” shows the number of word pairs in each dataset. The dimension of word embedding is set as 200 like that in (Dhillon et al., 2015). We set the context window size as 5 which is equal to the setting in (Mikolov et al., 2013b). 4.4 Evaluation Benchmarks 4.4.1 Word Similarity This experiment is conducted to evaluate the ability of word embeddings to capture semantic information from corpus. For English word similarity, we employ two gold standard datasets including Wordsim-353 (Finkelstein et al., 2001) and RG-65 (Rubenstein and Goodenough, 1965) as well as some other widely-used datasets including Rare-Word (Luong et al., 2013), SCWS (Huang et al., 2012), Men-3k (Bruni et al., 2014) and WS-353-Related (Agirre et al., 2009). More details of these datasets are shown in Table 1. Each dataset consists of three columns. The first two columns stand for word pairs and the last column is human score. We utilize the cosine similarity, which is used in many previous works (Mikolov et al., 2013b; Pennington et al., 2014), as the metric to measure the distance between two words. The Spearman’s rank correlation coefficient (ρ) is employed to evaluate the similarity between our results and human scores. Higher ρ means better performance. 4.4.2 Syntactic Analogy Based on the learned word embeddings, the core task of syntactic analogy is to answer the analogy question “a is to b as c is to ”. We utilize the Microsoft Research Syntactic Analogies dataset, which is created by Mikolov (Mikolov et al., 2013c) with size of 8000. 
To answer the syntactic analogy question “a is to b as c is to d” where d is unknown, we assume that the word representations of a, b, c, d are va, vb, vc, vd, respectively. To get d, we first calculate bvd = vb −va + vc. Then, we find out the word d′ whose cosine similarity to bvd is the largest. Finally, we set d as d′. 4.4.3 Text Classification To further evaluate the learned word embeddings, we also conduct 4 text classification tasks using the 20 Newsgroups dataset.6 The dataset totally contains around 19000 documents of 20 different newsgroups, and each corresponding to a different topic, such as guns, motorcycles, electronics and so on. For each task, we randomly select the documents of 10 topics and split them into training/validation/test subsets at the ratio of 6:2:2, which are emplyed to train, validate and test an L2-regularized 10-categorization logistic regression (LR) classifier. As mentioned in (Tsvetkov et al., 2015), here we also regard the average word embedding of words (excluding stop words and out-of-vocabulary words) in each document as the feature vector (the input of the classifier) of that document. The LR classifier is implemented with the scikit-learn toolkit (Pedregosa et al., 2011), which is an open-source Python module integrating many state-of-the-art machine learning algorithms. 5 Experimental Results 5.1 The Results on Word Similarity Word similarity is conducted to test the semantic information which is encoded in word embeddings, and the results are listed in Table 2 (first 6 rows). We observe that our models surpass the comparative baselines on five datasets. Compared with the base model CBOW, it is remarkable that our models approximately achieve improvements of more than 5% and 7%, respectively, in the performance on the golden standard Wordsim-353 and RG-65. On WS-353-REL, the difference between CBOW and LMM-S even reaches 8%. The advantage demonstrates the effectiveness of our methods. Based on our strategy, more semantic information will be captured in corpus when adding more latent meanings in the context window. By incorporating mophemes, EMM also performs better than other baselines but fails to get the performance as well as ours. Actually, EMM mainly tunes the distributions of words in vector space to let the morpheme-similar words gather closer, which means it just encodes more morphological properties into word embeddings but lacks the ability to capture more semantic information. Specially, because of the medium6http://qwone.com/˜jason/20Newsgroups 1239 CBOW Skip-gram GloVe EMM LMM-A LMM-S LMM-M Wordsim-353 58.77 61.94 49.40 60.01 62.05 63.13 61.54 RW 40.58 36.42 33.40 40.83 43.12 42.14 40.51 RG-65 56.50 62.81 59.92 60.85 62.51 62.49 63.07 SCWS 63.13 60.20 47.98 60.28 61.86 61.71 63.02 Men-3k 68.07 66.30 60.56 66.76 66.26 68.36 64.65 WS-353-REL 49.72 57.05 47.46 54.48 56.14 58.47 55.19 Syntactic Analogy 13.46 13.14 13.94 17.34 20.38 17.59 18.30 Text Classification 78.26 79.40 77.01 80.00 80.67 80.59 81.28 Table 2: Performance comparison (%) of our LMMs and the baselines on two basic NLP tasks (word similarity & syntactic analogy) and one downstream task (text classification). The bold digits indicate the best performances. size corpus and the experimental settings, GloVe doesn’t perform as well as that described in (Pennington et al., 2014). 5.2 The Results on Syntactic Analogy In (Mikolov et al., 2013c), the dataset is divided into adjectives, nouns and verbs. For brevity, we only report performance on the whole dataset. 
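The two intrinsic evaluations described in Sections 4.4.1 and 4.4.2 reduce to a few lines of code. The sketch below is illustrative: dataset loading is assumed to happen elsewhere, and excluding the three question words from the analogy search is a common convention rather than something the paper states explicitly.

```python
import numpy as np
from scipy.stats import spearmanr

def word_similarity(pairs, human_scores, vocab, E):
    """Spearman correlation between cosine similarities and human ratings (Section 4.4.1)."""
    model_scores = []
    for w1, w2 in pairs:
        v1, v2 = E[vocab[w1]], E[vocab[w2]]
        model_scores.append(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
    return spearmanr(model_scores, human_scores).correlation

def answer_analogy(a, b, c, vocab, E):
    """Return d for 'a is to b as c is to d' via the offset v_b - v_a + v_c (Section 4.4.2)."""
    index_to_word = {i: w for w, i in vocab.items()}
    E_norm = E / np.linalg.norm(E, axis=1, keepdims=True)
    query = E[vocab[b]] - E[vocab[a]] + E[vocab[c]]
    sims = E_norm @ (query / np.linalg.norm(query))          # cosine similarity to every word
    for w in (a, b, c):                                      # exclude the question words themselves
        sims[vocab[w]] = -np.inf
    return index_to_word[int(np.argmax(sims))]
```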
As the middle row of Table 2 shows, all of our models outperform the comparative baselines to a great extent. Compared with CBOW, the advantage of LMM-A even reaches to 7%. Besides, we observe that the suffix of “b” usually is the same as the suffix of “d” when answering question “a is to b as c is to d”. Based on our strategy, morphemesimilar words will not only gather closer but have a trend to group near the latent meanings of their morphemes, which makes our embeddings have the advantage to deal with the syntactic analogy problem. EMM also performs well on this task but is still weaker than our models. Actually, syntactic analogy is also a semantics-related task because “c” and “d” are with similar meanings. Since our models are better to capture semantic information, they lead to higher performance than the explicitly morphology-based models. 5.3 The Results on Text Classification For each one of the 4 text classification tasks, we report the classification accuracy over the test set. The average classification accuracy across the 4 tasks is utilized as the evaluation metric for different models. The results are displayed in the bottom row of Table 2. Since we simply use the average embedding of words as the feature vector for 10-categorization classification, the overall classification accuracies of all models are merely aroud 80%. However, the classification accuracies of our LMMs still surpass all the baselines, especailly CBOW and GloVe. Moreover, it can be found that incorporating morphological knowledge (morphemes or latent meanings of morphemes) into word embeddings can contribute to enhancing the performance of word embeddings in the downstream NLP tasks. 5.4 The Impacts of Parameter Settings Parameter settings can affect the performance of word embeddings. For example, the corpus with larger corpus size (the ratio of tokens used for training) contains more semantic information, which can improve the performance on word similarity. We analyze the impacts of corpus size and window size on the performance of word embeddings. In the analysis of corpus size, we hold the same parameter settings as before. The sizes of tokens used for training are separately 1/5, 2/5, 3/5, 4/5 and 5/5 of the entire corpus mentioned above. We utilize the result of word similarity on Wordsim-353 as the evaluation criterion. From Fig. 5, we observe several phenomena. Firstly, the performance of our LMMs is better than CBOW at each corpus size. Secondly, the performance of CBOW is sensitive to the corpus size. In contrast, LMMs’ performance is more stable than CBOW. As we analyzed in word similarity experiment, LMMs can increase the semantic information of word embeddings. It is worth noting that the performance of LMMs on the smallest corpus is even better than CBOW’s performance on the largest corpus. In the analysis of window size, we observe that the performance of all word embeddings trained by different models has a trend to ascend with the increasing of window size as illustrated in Fig. 6. Our LMMs outperform CBOW under all the pre-set conditions. Besides, the worst performance of LMMs is nearly equal to the best performance of CBOW. 1240 55.0 57.5 60.0 62.5 0.2 0.4 0.6 0.8 1.0 Corpus Size Word Similarity CBOW LMM−A LMM−M LMM−S Figure 5: Parameter analysis of corpus size. Xaxis denotes the ratio of tokens used for training, and Y-axis denotes the Spearman rank (%) of word similarity. 
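Stepping back to the downstream evaluation of Section 4.4.3 and the accuracies reported in Section 5.3, the classification pipeline corresponds roughly to the scikit-learn sketch below; the tokenisation, stop-word handling and hyperparameter values shown are assumptions, not the authors' exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def doc_features(tokenised_docs, vocab, E, stop_words=frozenset()):
    """Average word embedding per document, skipping stop words and OOV tokens."""
    feats = np.zeros((len(tokenised_docs), E.shape[1]))
    for i, doc in enumerate(tokenised_docs):
        vecs = [E[vocab[w]] for w in doc if w in vocab and w not in stop_words]
        if vecs:
            feats[i] = np.mean(vecs, axis=0)
    return feats

def run_classification(train_docs, y_train, test_docs, y_test, vocab, E):
    """L2-regularised multi-class logistic regression over the 10 selected topics."""
    clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
    clf.fit(doc_features(train_docs, vocab, E), y_train)
    return clf.score(doc_features(test_docs, vocab, E), y_test)   # classification accuracy
```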
56 58 60 62 1 2 4 5 3 Window Size Word Similarity CBOW LMM−A LMM−M LMM−S Figure 6: Parameter analysis of window size. Xaxis and Y-axis denote the window size and Spearman rank (%) of word similarity, respectively. 5.5 Word Embedding Visualization To visualize the embeddings of our models, we randomly select several words from the results of LMM-A. The dimensions of the selected word embeddings are reduced from 200 to 2 using Principal Component Analysis (PCA), and the 2-D word embeddings are illustrated in Fig. 7. The words with different colors reflect that they have different morphemes. It is apparent that words with similar morphemes have a trend to group together and stay near the latent meanings of their morphemes. In addition, we can also find some syntactic regularities in Fig. 7, for example, “physics” is to “physicist” as “science” is to “scientist”, and “physicist” and “scientist” stay near the latent meaning, i.e., “human”, of the suffix -ist. anthropologist biologist physicist scientist microscope microorganism premier preview agreeable prescient prefix edible capablevisible human small before physics science −5 0 5 −15 −10 −5 0 5 x y Figure 7: The visualization of word embeddings. Based on PCA, we randomly select several words from word embedding of LMM-A and illustrate them in this figure, “⊠” indicates the latent meanings of morphemes. 6 Conclusion In this paper, we explored a new direction to employ the latent meanings of morphological compositions rather than the internal compositions themselves to train word embeddings. Three specific models named LMM-A, LMM-S and LMM-M were proposed by modifying the input layer and update rules of CBOW. The source code of LMMs is avaliable at https: //github.com/Y-Xu/lmm. To test the performance of our models, we chose three word-level word embedding models and implemented an Explicitly Morpheme-related Model (EMM) as comparative baselines, and tested them on two basic NLP tasks of word similarity and syntactic analogy, and one downstream text classification task. The experimental results demonstrate that our models outperform the baselines on five word similarity datasets. On the syntactic analogy as well as the text classification tasks, our models also surpass all the baselines including the EMM. In the future, we intend to evaluate our models for some morpheme-rich languages like Russian, German and so on. Acknowledgments The authors are grateful to the reviewers for constructive feedback. This work was supported by the National Natural Science Foundation of China (No.61572456), the Anhui Province Guidance Funds for Quantum Communication and Quantum Computers and the Natural Science Foundation of Jiangsu Province of China (No.BK20151241). 1241 References Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pas¸ca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and wordnet-based approaches. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 19–27. Association for Computational Linguistics. Parminder Bhatia, Robert Guthrie, and Jacob Eisenstein. 2016. Morphological priors for probabilistic neural word embeddings. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 490–500. Association for Computational Linguistics. Jan Botha and Phil Blunsom. 2014. Compositional morphology for word representations and language modelling. 
In International Conference on Machine Learning, pages 1899–1907. Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Artificail Intelligence Research (JAIR), 49(1–47). Xinxiong Chen, Lei Xu, Zhiyuan Liu, Maosong Sun, and Huanbo Luan. 2015. Joint learning of character and word embeddings. In International Conference on Artificial Intelligence. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734. Association for Computational Linguistics. Ryan Cotterell and Hinrich Sch¨utze. 2015. Morphological word-embeddings. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1287–1292. Ryan Cotterell, Hinrich Sch¨utze, and Jason Eisner. 2016. Morphological smoothing and extrapolation of word embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1651– 1660. Association for Computational Linguistics. Mathias Creutz and Krista Lagus. 2007. Unsupervised models for morpheme segmentation and morphology learning. ACM Transactions on Speech and Language Processing (TSLP), 4(1):3. Paramveer S Dhillon, Dean P Foster, and Lyle H Ungar. 2015. Eigenwords: spectral word embeddings. Journal of Machine Learning Research, 16:3035– 3078. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of the 10th international conference on World Wide Web, pages 406– 414. ACM. Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In Meeting of the Association for Computational Linguistics: Long Papers, pages 873– 882. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2016. Character-aware neural language models. In The Thirtieth AAAI Conference on Artificial Intelligence, pages 2741–2749. Angeliki Lazaridou, Marco Marelli, Roberto Zamparelli, and Marco Baroni. 2013. Compositional-ly derived representations of morphologically complex words in distributional semantics. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1517–1526. Association for Computational Linguistics. Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211–225. Yang Liu, Zhiyuan Liu, Tat-Seng Chua, and Maosong Sun. 2015. Topical word embeddings. In The Twenty-Ninth AAAI Conference on Artificial Intelligence, pages 2418–2424. Thang Luong, Richard Socher, and Christopher Manning. 2013. Better word representations with recursive neural networks for morphology. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 104–113. Christopher D Manning, Prabhakar Raghavan, Hinrich Sch¨utze, et al. 2008. Introduction to information retrieval, volume 1. Cambridge university press Cambridge. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. 
Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746–751. 1242 Fabian Pedregosa, Ga¨el Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. Journal of machine learning research, 12(Oct):2825–2830. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Siyu Qiu, Qing Cui, Jiang Bian, Bin Gao, and Tie-Yan Liu. 2014. Co-learning of word representations and morpheme representations. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 141–150. Herbert Rubenstein and John B Goodenough. 1965. Contextual correlates of synonymy. Communications of the ACM, 8(10):627–633. Magnus Sahlgren. 2008. The distributional hypothesis. Italian Journal of Disability Studies, 20:33–53. Bonggun Shin, Timothy Lee, and Jinho D Choi. 2016. Lexicon integrated cnn models with attention for sentiment analysis. arXiv preprint arXiv:1610.06272. Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Guillaume Lample, and Chris Dyer. 2015. Evaluation of word vector representations by subspace alignment. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2049–2054.
2018
114
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1243–1252 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1243 A Stochastic Decoder for Neural Machine Translation∗ Philip Schulz Amazon Research† [email protected] Wilker Aziz University of Amsterdam [email protected] Trevor Cohn University of Melbourne [email protected] Abstract The process of translation is ambiguous, in that there are typically many valid translations for a given sentence. This gives rise to significant variation in parallel corpora, however, most current models of machine translation do not account for this variation, instead treating the problem as a deterministic process. To this end, we present a deep generative model of machine translation which incorporates a chain of latent variables, in order to account for local lexical and syntactic variation in parallel corpora. We provide an indepth analysis of the pitfalls encountered in variational inference for training deep generative models. Experiments on several different language pairs demonstrate that the model consistently improves over strong baselines. 1 Introduction Neural architectures have taken the field of machine translation by storm and are in the process of replacing phrase-based systems. Based on the encoder-decoder framework (Sutskever et al., 2014) increasingly complex neural systems are being developed at the moment. These systems find new ways of extracting information from the source sentence and the target sentence prefix for example by using convolutions (Gehring et al., 2017) or stacked self-attention layers (Vaswani et al., 2017). These architectural changes have led to great performance improvements over classical RNN-based neural translation systems (Bahdanau et al., 2014). ∗Code and a workflow that reproduces the experiments are available at https://github.com/philschulz/ stochastic-decoder. †Work done prior to joining Amazon. Surprisingly, there have been almost no efforts to change the probabilistic model wich is used to train the neural architectures. A notable exception is the work of Zhang et al. (2016) who introduce a sentence-level latent Gaussian variable. In this work, we propose a more expressive latent variable model that extends the attentionbased architecture of Bahdanau et al. (2014). Our model is motivated by the following observation: translations by professional translators vary across translators but also within a single translator (the same translator may produce different translations on different days, depending on his state of health, concentration etc.). Neural machine translation (NMT) models are incapable of capturing this variation, however. This is because their likelihood function incorporates the statistical assumption that there is one (and only one) output1 for a given source sentence, i.e., P(yn 1 |xm 1 ) = n ∏ i=1 P(yi|xm 1 , y<i) . (1) Our proposal is to augment this model with latent sources of variation that are able to represent more of the variation present in the training data. The noise sources are modelled as Gaussian random variables. The contributions of this work are: • The introduction of an NMT system that is capable of capturing word-level variation in translation data. • A thorough discussions of issues encountered when training this model. In particular, we motivate the use of KL scaling as introduced by Bowman et al. (2016) theoretically. 
1Notice that from a statistical perspective the output of an NMT system is a distribution over target sentences and not any particular sentence. The mapping from the output distribution to a sentence is performed by a decision rule (e.g. argmax decoding) which can be chosen independently of the NMT system. 1244 • An empirical demonstration of the improvements achievable with the proposed model. 2 Neural Machine Translation The NMT system upon which we base our experiments is based on the work of Bahdanau et al. (2014). The likelihood of the model is given in Equation (1). We briefly describe its architecture. Let xm 1 = (x1, . . . , xm) be the source sentence and yn 1 the target sentence. Let RNN (·) be any function computed by a recurrent neural network (we use a bi-LSTM for the encoder and an LSTM for the decoder). We call the decoder state at the ith target position ti; 1 ≤i ≤n. The computation performed by the baseline system is summarised below. [ h1, . . . , hm ] = RNN (xm 1 ) (2a) ˜ti = RNN (ti−1, yi−1) (2b) eij = v⊤ a tanh ( Wa[˜ti, hj]⊤+ ba ) (2c) αij = exp (eij) ∑m j=1 exp (eij) (2d) ci = m ∑ j=1 αijhj (2e) ti = Wt[˜ti, ci]⊤+ bt (2f) ϕi = softmax(Woti + bo) (2g) The parameters {Wa, Wt, Wo, ba, bt, bo, va} ⊆ θ are learned during training. The model is trained using maximum likelihood estimation. This means that we employ a cross-entropy loss whose input is the probability vector returned by the softmax. 3 Stochastic Decoder This section introduces our stochastic decoder model for capturing word-level variation in translation data. 3.1 Motivation Imagine an idealised translator whose translations are always perfectly accurate and fluent. If an MT system was provided with training data from such a translator, it would still encounter variation in that data. After all, there are several perfectly accurate and fluent translations for each source sentence. These can be highly different in both their lexical as well as their syntactic realisations. In practice, of course, human translators’ performance varies according to their level of education, their experience on the job, their familiarity with the textual domain and myriads of other factors. Even within a single translator variation may occur due to level of stress, tiredness or status of health. That translation corpora contain variation is acknowledged by the machine translation community in the design of their evaluation metrics which are geared towards comparing one machinegenerated translation against several human translations (see e.g. Papineni et al., 2002). Prior to our work, the only attempt at modelling the latent variation underlying these different translations was made by Zhang et al. (2016) who introduced a sentence level Gaussian variable. Intuitively, however, there is more to latent variation than a unimodal density can capture, for example, there may be several highly likely clusters of plausible variations. A cluster may e.g. consist of identical syntactic structures that differ in word choice, another may consist of different syntactic constructs such as active or passive constructions. Multimodal modelling of these variations is thus called for—and our results confirm this intuition. An example of variation comes from free word order and agreement phenomena in morphologically rich languages. An English sentence with rigid word order may be translated into several orderings in German. 
However, all orderings need to respect the agreement relationship between the main verb and the subject (indicated by underlining) as well as the dative case of the direct object (dashes) and the accusative of the indirect object (dots). The agreement requirements are fixed and independent of word order. 1. I can’t imagine you naked. (a) Ich kann mir .... dich nicht nackt vorstellen. (b) Ich kann..... dich mir nicht nackt vorstellen. (c) ..... Dich kann ich mir nicht nackt vorstellen. Stochastically encoding the word order variation allows the model to learn the same agreement phenomenon from different translation variants as it does not need to encode the word order and agreement relationships jointly in the decoder state. Further examples of VP and NP variation from an actual translation corpus are shown in Figure 1. We aim to address these word-level variation phenomena with a stochastic decoder model. 1245 预计听证会将进⾏两天。 VOM19981105_0700_0262 The hearing is expected to last two days. The hearing will last two days. The hearings are expected to last two days. It is expected that the hearing will go on for two days. 众议院共和党的起诉⼈则希望传唤莱温斯基等多达15个⼈出庭作证。 VOM19981230_0700_0515 However, the Republican complainant in the House wanted to summon 15 people including Lewinsky to testify in court. The prosecutor of Republican Party in House of Representative hoped to summons more than 15 persons, including Lewinsky, to court. The House of Representatives republican prosecution hopes to summon over fifteen witnesses including Monica Lewinsky to appear in court. Figure 1: Examples from the multiple-translation Chinese corpus (LDC2002T01), where the translations come from different translators. These demonstrate the lexical variation of the verb and variation between passive and raising structures (top), and lexical variation on the agent NP (bottom). Both examples also exhibit appreciable length variation. 3.2 Model formulation The model contains a latent Gaussian variable for each target position. This variable depends on the previous latent states and the decoder state. Through the use of recurrent networks, the conditioning context does not need to be restricted and the likelihood factorises exactly. P(yn 1 |xm 1 ) = ∫ dzn 0 p(z0|xm 1 )× n ∏ i=1 p(zi|z<i, y<i, xm 1 )P(yi|zi 1, y<i, xm 1 ) (3) As can be seen from Equation (3), the model also contains a 0th latent variable that is meant to initialise the chain of latent variables based solely on the source sentence. Contrast this with the model of Zhang et al. (2016) which uses only that 0th variable. A graphical representation of the stochastic decoder model is given in Figure 2a. Its generative story is as follows Z0|xm 1 ∼N(µ0, σ2 0) (4a) Zi|z<i, y<i, xm 1 ∼N(µi, σ2 i ) (4b) Yi|zi 0, y<i, xm 1 ∼Cat(ϕi) (4c) where i = 1, . . . , n and both the Gaussian and the Categorical parameters are predicted by neural network architectures whose inputs vary per time step. This probabilistic formulation can be implemented with a multitude of different architectures. We present ours in the next section. 3.3 Neural Architecture Since the model contains latent variables and is parametrised by a neural network, it falls into the class of deep generative models (DGMs). We use a reparametrisation of the Gaussian variables (Kingma and Welling, 2014; Rezende et al., 2014; Titsias and Lázaro-Gredilla, 2014) to enable backpropagation inside a stochastic computation graph (Schulman et al., 2015). 
In order to sample a d-dimensional Gaussian variable $z \in \mathbb{R}^d$ with mean $\mu$ and variance $\sigma^2$, we first sample from a standard Gaussian distribution and then transform the sample,
$$z = \mu + \sigma \odot \epsilon, \qquad \epsilon \sim \mathcal{N}(0, I). \quad (5)$$
Here $\mu, \sigma \in \mathbb{R}^d$ and $\odot$ denotes element-wise multiplication (also known as the Hadamard product). See the supplement for details on the Gaussian reparametrisation. We use neural networks with one hidden layer with a tanh activation to compute the mean and standard deviation of each Gaussian distribution. A softplus transformation is applied to the output of the standard deviation's network to ensure positivity. Let us denote the functions that these networks compute by $f$. For the initial latent state $z_0$ we compute the mean and standard deviation as
$$\mu_0 = f_{\mu_0}(h_m), \qquad \sigma_0 = f_{\sigma_0}(h_m). \quad (6)$$
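A minimal numpy sketch of the reparametrised draw in Equation (5), together with the single-hidden-layer mean and standard-deviation networks just described, is given below. The parameter names and layer shapes are illustrative assumptions, not the exact Sockeye-based implementation used in the experiments.

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def mlp(x, W_h, b_h, W_o, b_o):
    """Single hidden layer with tanh activation, linear output."""
    return W_o @ np.tanh(W_h @ x + b_h) + b_o

def sample_latent(cond, params, rng):
    """Reparametrised draw z = mu + sigma * eps, eps ~ N(0, I)  (Eq. 5).

    `cond` is the conditioning vector, e.g. the final encoder state h_m for z_0
    (Eq. 6); `params` holds the weights of the mean network and of the standard
    deviation network, whose output is passed through a softplus to stay positive.
    """
    mu = mlp(cond, params["W_mu_h"], params["b_mu_h"], params["W_mu_o"], params["b_mu_o"])
    sigma = softplus(mlp(cond, params["W_sd_h"], params["b_sd_h"], params["W_sd_o"], params["b_sd_o"]))
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps, mu, sigma
```

Because z is a deterministic function of mu, sigma and the parameter-free noise eps, gradients with respect to the network parameters can flow through mu and sigma during backpropagation, which is the point of the reparametrisation.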
.]] , (10) where for a given target position i the ELBO is ELBOi = Eq(zi) [log p(yi|xm 1 , y<i, z<i, zi)] −KL (q(zi) || p(zi|xm 1 , y<i, z<i)) . (11) The first term is often called reconstruction or likelihood term whereas the second term is called the KL term. Since the KL term is a function of two Gaussian distributions, and the Gaussian is an exponential family, we can compute it analytically (Michalowicz et al., 2014), without the need for sampling. This is very similar to the hierarchical latent variable model of Rezende et al. (2014). Following common practice in DGM research, we employ a neural network to compute the variational distributions. To discriminate it from the 1247 generative model, we call this neural net the inference model. At training time both the source and target sentence are observed. We exploit this by endowing our inference model with a “lookahead” mechanism. Concretely, samples from the inference network condition on the information available to the generation network (Section 3.3) and also on the target words that are yet to be processed by the generative decoder. This allows the latent distribution to not only encode information about the currently modelled word but also about the target words that follow it. The conditioning of the inference network is illustrated graphically in Figure 2b. The inference network produces additional representations of the target sentence. One representation encodes the target sentence bidirectionally (12a), in analogy to the source sentence encoding. The second representation is built by encoding the target sentence in reverse (12b). This reverse encoding can be used to provide information about future context to the decoder. We use the symbols b and r for the bidirectional and reverse target encodings, respectively. In our experiments, we again use LSTMs to compute these encodings. [ b1, . . . , bn ] = RNN (yn 1 ) (12a) [ r1, . . . , rn ] = RNN (yn 1 ) (12b) In analogy to the generative model (Section 3.3), the inference network uses single hidden layer networks to compute the mean and standard deviations of the latent variable distributions. We denote these functions g and again employ different functions for the initial latent state and all other latent states. µ0 = gµ0 (hm, bn) (13a) σ0 = gσ0 (hm, bn) (13b) µi = gµ (ti−1, zi−1, ri, yi) (13c) σi = gσ (ti−1, zi−1, ri, yi) (13d) As before, we use Equation (5) to sample from the variational distribution. During training, all samples are obtained from the inference network. Only at test time do we sample from the generator. Notice that since the inference network conditions on representations produced by the generator network, a naïve application of backpropagation would update parts of the generator network with gradients computed for the inference network. We prevent this by blocking gradient flow from the inference net into the generator. 4.1 Analysis of the Training Procedure The training procedure as outlined above does not work well empirically. This is because our model uses a strong generator. By this we mean that the generation model (that is the baseline NMT model) is a very good density model in and by itself and does not need to rely on latent information to achieve acceptable likelihood values during training. DGMs with strong generators have a tendency to not make use of latent information (Bowman et al., 2016). This problem went initially unnoticed because early DGMs (Kingma and Welling, 2014; Rezende et al., 2014) used weak generators2, i.e. 
models that made very strong independence assumptions and were not able to capture contextual information without making use of the information encoded by the latent variable. Why DGMs would ignore the latent information can be understood by considering the KL-term of the ELBO. In order for the latent variable to be informative about the observed data, we need them to have high mutual information I(Z; Y ). I(Z; Y ) = Ep(z,y) [ log p(Z, Y ) p(Z)p(Y ) ] (14) Observe that we can rewrite the mutual information as an expected KL divergence by applying the definition of conditional probability. I(Z; Y ) = Ep(y) [KL (p(Z|Y ) || p(Z))] (15) Since we cannot compute the posterior p(z|y) exactly, we approximate it with the variational distribution q(z|y) (the joint is approximated by q(z|y)p(y) where the latter factor is the data distribution). To the extent that the variational distribution recovers the true posterior, the mutual information can be computed this way. In fact, if we take the learned prior p(z) to be an approximation of the marginal ∫ q(z|y)p(y)dy it can easily be shown that the thus computed KL term is an upper bound on mutual information (Alemi et al., 2017). The trouble is that the ELBO (Equation (11)) can be trivially maximised by setting the KL-term to 0 and maximising only the reconstruction term. 2The term weak generator has first been coined by Alemi et al. (2017). 1248 This is especially likely at the beginning of training when the variational approximation does not yet encode much useful information. We can only hope to learn a useful variational distribution if a) the variational approximation is allowed to move away from the prior and b) the resulting increase in the reconstruction term is higher than the increase in the KL-term (i.e. the ELBO increases overall). Several schemes have been proposed to enable better learning of the variational distribution (Bowman et al., 2016; Kingma et al., 2016; Alemi et al., 2017). Here we use KL scaling and increase the scale gradually until the original objective is recovered. This has the following effect: during the initial learning stage, the KL-term barely contributes to the objective and thus the updates to the variational parameters are driven by the signal from the reconstruction term and hardly restricted by the prior. Once the scale factor approaches 1 the variational distribution will be highly informative to the generator (assuming sufficiently slow increase of the scale factor). The KL-term can now be minimised by matching the prior to the variational distribution. Notice that up to this point, the prior has hardly been updated. Thus moving the variational approximation back to the prior would likely reduce the reconstruction term since the standard normal prior is not useful for inference purposes. This is in stark contrast to Bowman et al. (2016) whose prior was a fixed standard normal distribution. Although they used KL scaling, the KL term could only be decreased by moving the variational approximation back to the fixed prior. This problem disappears in our model where priors are learned. Moving the prior towards the variational approximation has another desirable effect. The prior can now learn to emulate the variational “lookahead” mechanism without having access to future contexts itself (recall that the inference model has access to future target tokens). 
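To make the training objective concrete, the following Python (numpy) sketch computes the quantities behind Equations (5) and (11): the reparametrised Gaussian sample, the closed-form KL between two diagonal Gaussians, and a single-position ELBO term with the KL scaling just described. This is a minimal sketch under stated assumptions, not the authors' implementation: the function names, the 16-dimensional toy latents, the scalar `log_lik` standing in for the decoder's log-likelihood, and the linear warm-up helper are all illustrative.

```python
import numpy as np

def reparameterize(mu, sigma, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I), as in Equation (5)."""
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

def diag_gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    """Closed-form KL(q || p) for two diagonal Gaussians, summed over dimensions."""
    return np.sum(
        np.log(sigma_p / sigma_q)
        + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2.0 * sigma_p ** 2)
        - 0.5
    )

def positional_elbo(log_lik, mu_q, sigma_q, mu_p, sigma_p, kl_scale=1.0):
    """Single-sample estimate of ELBO_i (Equation 11); the KL term is analytic,
    and kl_scale < 1 implements the KL scaling discussed in Section 4.1."""
    return log_lik - kl_scale * diag_gaussian_kl(mu_q, sigma_q, mu_p, sigma_p)

def kl_scale(step, ramp_steps):
    """Linear warm-up of the KL weight from 0 to 1 (illustrative schedule)."""
    return min(step / ramp_steps, 1.0)

# Toy usage with 16-dimensional latents.
rng = np.random.default_rng(0)
mu_q, sigma_q = rng.standard_normal(16), 0.5 * np.ones(16)   # variational posterior
mu_p, sigma_p = np.zeros(16), np.ones(16)                    # (learned) prior
z = reparameterize(mu_q, sigma_q, rng)                       # fed to the decoder RNN
print(positional_elbo(log_lik=-3.2, mu_q=mu_q, sigma_q=sigma_q,
                      mu_p=mu_p, sigma_p=sigma_p, kl_scale=kl_scale(2_000, 20_000)))
```

Maximising this scaled objective with respect to both the model parameters and the variational parameters, while the scale grows towards 1, recovers the original ELBO once the variational distribution has become informative.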
At test time we can thus hope to have learned latent variable distributions that encode information not only about the output at the current position but about future outputs as well. 5 Experiments We report experiments on the IWSLT 2016 data set which contains transcriptions of TED talks and their respective translations. We trained models to Data Arabic Czech French German Train 224,125 114,389 220,399 196,883 Dev 6,746 5,326 5,937 6,996 Test 2,762 2,762 2,762 2,762 Table 1: Number of parallel sentence pairs for each language paired with English for IWSLT data. translate from English into Arabic, Czech, French and German. The number of sentences for each language after preprocessing is shown in Table 1. The vocabulary was split into 50,000 subword units using Google’s sentence piece3 software in its standard settings. As our baseline NMT systems we use Sockeye (Hieber et al., 2017)4. Sockeye implements several different NMT models but here we use the standard recurrent attentional model described in Section 2. We report baselines with and without dropout (Srivastava et al., 2014). For dropout a retention probability of 0.5 was used. As a second baseline we use our own implementation of the model of Zhang et al. (2016) which contains a single sentence-level Gaussian latent variable (SENT). Our implementation differs from theirs in three aspects. First, we feed the last hidden state of the bidirectional encoding into encoding of the source and target sentence into the inference network (Zhang et al. (2016) use the average of all states). Second, the latent variable is smaller in size than the one used by (Zhang et al., 2016).5 This was done to make their model and the stochastic decoder proposed here as similar as possible. Finally, their implementation was based on groundhog whereas ours builds on Sockeye. Our stochastic decoder model (SDEC) is also built on top of the basic Sockeye model. It adds the components described in Sections 3 and 4. Recall that the functions that compute the means and standard deviations are implemented by neural nets with a single hidden layer with tanh activation. The width of that layer is twice the size of the latent variable. In our experiments we tested different latent variable sizes and used KL scaling (see Section 4.1). The scale started from 0 and was increased by 1/20,000 after each mini-batch. Thus, at iteration t the scale is min(t/20,000, 1). All models use 1028 units for the LSTM hid3https://github.com/google/sentencepiece 4https://github.com/awslabs/sockeye 5We did, however, find that increasing the latent variable size actually hurt performance in our implementation. 1249 den state (or 512 for each direction in the bidirectional LSTMs) and 256 for the attention mechansim. Training is done with Adam (Kingma and Ba, 2015). In decoding we use a beam of size 5 and output the most likely word at each position. We deterministically set all latent variables to their mean values during decoding. Monte Carlo decoding (Gal, 2016) is difficult to apply to our setting as it would require sampling entire translations. Results We show the BLEU scores for all models that we tested on the IWSLT data set in Table 2. The stochastic decoder dominates the Sockeye baseline across all 4 languages, and outperforms SENT on most languages. Except on German, there is a trend towards smaller latent variable sizes being more helpful. This is in line with findings by Chung et al. (2015) and Fraccaro et al. (2016) who also used relatively small latent variables. 
This observation also implies that our model does not improve simply because it has more parameters than the baseline. That the margin between the SDEC and SENT models is not large was to be expected for two reasons. First, Chung et al. (2015) and Fraccaro et al. (2016) have shown that stochastic RNNs lead to enormous improvements in modelling continuous sequences but only modest increases in performance for discrete sequences (such as natural language). Second, translation performance is measured in BLEU score. We observed that SDEC often reached better ELBO values than SENT indicating a better model fit. How to fully leverage the better modelling ability of stochastic RNNs when producing discrete outputs is a matter of future research. Qualitative Analysis Finally, we would like to demonstrate that our model does indeed capture variation in translation. To this end, we randomly picked sentences from the IWSLT test set and had our model translate them several times, however, the values of the latent variables were sampled instead of fixed. Contrary to the BLEU-based evaluation, beam search was not used in this evaluation in order to avoid interaction between different latent variable samples. See Figure 3 for examples of syntactic and lexical variation. It is important to note that we do not sample from the categorical output distribution. For each target position we pick the most likely word. A non-stochastic NMT system would always yield the same translation in this scenario. Interestingly, when we applied the sampling procedure to the SENT model it did not produce any variation at all, thus behaving like a deterministic NMT system. This supports our initial point that the SENT model is likely insensitive to local variation, a problem that our model was designed to address. Like the model of Bowman et al. (2016), SENT presumably tends to ignore the latent variable. 6 Related Work The stochastic decoder is strongly influenced by previous work on stochastic RNNs. The first such proposal was made by Bayer and Osendorfer (2015) who introduced i.i.d. Gaussian latent variables at each output position. Since their model neglects any sequential dependence of the noise sources, it underperformed on several sequence modeling tasks. Chung et al. (2015) made the latent variables depend on previous information by feeding the previous decoder state into the latent variable sampler. Their inference model did not make use of future elements in the sequence. Using a “look-ahead” mechanism in the inference net was proposed by Fraccaro et al. (2016) who had a separate stochastic and deterministic RNN layer which both influence the output. Since the stochastic layer in their model depends on the deterministic layer but not vice versa, they could first run the deterministic layer at inference time and then condition the inference net’s encoding of the future on the thus obtained features. Like us, they used KL scaling during training. More recently, Goyal et al. (2017) proposed an auxiliary loss that has the inference net predict future feature representations. This approach yields state-of-the-art results but is still in need of a theoretical justification. Within translation, Zhang et al. (2016) were the first to incorporate Gaussian variables into an NMT model. Their approach only uses one sentence-level latent variable (corresponding to our z0) and can thus not deal with word-level variation directly. Concurrently to our work, Su et al. 
(2018) have also proposed a recurrent latent variable model for NMT. Their approach differs from ours in that they do not use a 0th latent variable nor a look-ahead mechanism during inference time. Furthermore, their underlying recurrent model is a GRU. In the wider field of NLP, deep generative mod1250 Model Dropout LatentDim Arabic Czech French German Sockeye None None 8.2 6.9 23.5 14.3 Sockeye 0.5 None 8.4 7.4 24.4 15.1 SENT 0.5 64 8.4 7.3 24.8 15.3 SENT 0.5 128 8.7 7.4 24.0 15.7 SENT 0.5 256 8.9 7.4 24.7 15.5 SDEC 0.5 64 8.2 7.7 25.3 15.4 SDEC 0.5 128 8.8 7.5 24.2 15.6 SDEC 0.5 256 8.7 7.5 23.2 15.9 Table 2: BLEU scores for different models on the IWSLT data for translation into English. Recall that all SDEC and SENT models used KL scaling during training. Source Coincidentally, at the same time, the first easy-to-use clinical tests for diagnosing autism were introduced. SENT Im gleichen Zeitraum wurden die ersten einfachen klinischen Tests für Diagnose getestet. SDEC Übrigens, zur gleichen Zeit, wurden die ersten einfache klinische Tests für die Diagnose von Autismus eingeführt. SDEC Übrigens, zur gleichen Zeit, waren die ersten einfache klinische Tests für die Diagnose von Autismus eingeführt worden. Source They undertook a study of autism prevalence in the general population. SENT Sie haben eine Studie von Autismus in der allgemeinen Population übernommen. SDEC Sie entwarfen eine Studie von Autismus in der allgemeinen Bevölkerung. SDEC Sie führten eine Studie von Autismus in der allgemeinen Population ein. Figure 3: Sampled translations from our model (SDEC) and the sentent-level latent variable model (SENT). The first SDEC example shows alternation between the German simple past and past perfect. The past perfect introduces a long range dependency between the main and auxiliary verb (underlined) that the model handles well. The second example shows variation in the lexical realisation of the verb. The second variant uses a particle verb and we again observe a long range dependency between the main verb and its particle (underlined). els have been applied mostly in monolingual settings such as text generation (Bowman et al., 2016; Semeniuta et al., 2017), morphological analysis (Zhou and Neubig, 2017), dialogue modelling (Wen et al., 2017), question selection (Miao et al., 2016) and summarisation (Miao and Blunsom, 2016). 7 Conclusion and Future Work We have presented a recurrent decoder for machine translation that uses word-level Gaussian variables to model underlying sources of variation observed in translation corpora. Our experiments confirm our intuition that modelling variation is crucial to the success of machine translation. The proposed model consistently outperforms strong baselines on several language pairs. As this is the first work that systematically considers word-level variation in NMT, there are lots of research ideas to explore in the future. Here, we list the three which we believe to be most promising. • Latent factor models: our model only contains one source of variation per word. A latent factor model such as DARN (Gregor et al., 2014) would consider several sources simultaneously. This would also allow us to perform a better analysis of the model behaviour as we could correlate the factors with observed linguistic phenomena. • Richer prior and variational distributions: The diagonal Gaussian is likely too simple a 1251 distribution to appropriately model the variation in our data. 
Richer distributions computed by normalising flows (Rezende and Mohamed, 2015; Kingma et al., 2016) will likely improve our model. • Extension to other architectures: Introducing latent variables into non-autoregressive translation models such as the transformer (Vaswani et al., 2017) should increase their translation ability further. 8 Acknowledgements Philip Schulz and Wilker Aziz were supported by the Dutch Organisation for Scientific Research (NWO) VICI Grant nr. 277-89-002. Trevor Cohn is the recipient of an Australian Research Council Future Fellowship (project number FT130101105). References Alexander Alemi, Ben Poole, Ian Fischer, Joshua V. Dillon, Rif A. Saurous, and Kevin Murphy. 2017. An information theoretic analysis of deep latent variable models. arxiv preprint . Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. In ICLR. Justin Bayer and Christian Osendorfer. 2015. Learning stochastic recurrent networks. In ICLR. David M. Blei, Alp Kucukelbir, and Jon D. McAuliffe. 2017. Variational inference: A review for statisticians. Journal of the American Statistical Association 112(518):859–877. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Józefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In CoNLL 2016. pages 10–21. Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. 2015. A recurrent latent variable model for sequential data. In NIPS 28, pages 2980–2988. Marco Fraccaro, Søren Kaae Sø nderby, Ulrich Paquet, and Ole Winther. 2016. Sequential neural models with stochastic layers. In NIPS 29, pages 2199– 2207. Yarin Gal. 2016. Uncertainty in Deep Learning. Ph.D. thesis, University of Cambridge. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In ICML. pages 1243–1252. Anirudh Goyal, Alessandro Sordoni, Marc-Alexandre Côté, Nan Ke, and Yoshua Bengio. 2017. Z-forcing: Training stochastic recurrent networks. In NIPS 30, pages 6716–6726. Karol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, and Daan Wierstra. 2014. Deep autoregressive networks. In ICML. Bejing, China, pages 1242–1250. Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2017. Sockeye: A Toolkit for Neural Machine Translation. ArXiv e-prints . Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. Diederik P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. 2016. Improved variational inference with inverse autoregressive flow. In NIPS 29, pages 4743–4751. Diederik P Kingma and Max Welling. 2014. Autoencoding variational Bayes. In ICLR. Yishu Miao and Phil Blunsom. 2016. Language as a latent variable: Discrete generative models for sentence compression. In EMNLP. pages 319–328. Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In ICML. New York, New York, USA, pages 1727–1736. Joseph Victor Michalowicz, Jonathan M. Nichols, and Frank Bucholtz. 2014. Handbook of Differential Entropy. CRC Press. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In ACL. pages 311–318. Danilo Rezende and Shakir Mohamed. 2015. Variational inference with normalizing flows. In ICML. volume 37, pages 1530–1538. 
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. In ICML. John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. 2015. Gradient estimation using stochastic computation graphs. In NIPS 28, pages 3528–3536. Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. 2017. A hybrid convolutional variational autoencoder for text generation. In EMNLP. pages 627–637. Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. Learning structured output representation using deep conditional generative models. In NIPS 28, pages 3483–3491. 1252 Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15:1929–1958. Jinsong Su, Shan Wu, Deyi Xiong, Yaojie Ly, Xianpei Han, and Biao Zhang. 2018. Variational recurrent neural machine translation. In AAAI. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In NIPS 27, pages 3104–3112. Michalis Titsias and Miguel Lázaro-Gredilla. 2014. Doubly stochastic Variational Bayes for nonconjugate inference. In ICML. pages 1971–1979. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS 30, pages 6000–6010. Tsung-Hsien Wen, Yishu Miao, Phil Blunsom, and Steve Young. 2017. Latent intention dialogue models. In ICML. pages 3732–3741. Biao Zhang, Deyi Xiong, jinsong su, Hong Duan, and Min Zhang. 2016. Variational neural machine translation. In EMNLP. pages 521–530. Chunting Zhou and Graham Neubig. 2017. Multi-space variational encoder-decoders for semi-supervised labeled sequence transduction. In ACL. pages 310– 320.
2018
115
Forest-Based Neural Machine Translation Chunpeng Ma1,3∗Akihiro Tamura2 Masao Utiyama3 Tiejun Zhao1† Eiichiro Sumita3 1Harbin Institute of Technology, Harbin, China 2Ehime University, Matsuyama, Japan 3National Institute of Information and Communications Technology, Kyoto, Japan {cpma, tjzhao}@hit.edu.cn [email protected] {mutiyama, eiichiro.sumita}@nict.go.jp Abstract Tree-based neural machine translation (NMT) approaches, although achieved impressive performance, suffer from a major drawback: they only use the 1best parse tree to direct the translation, which potentially introduces translation mistakes due to parsing errors. For statistical machine translation (SMT), forestbased methods have been proven to be effective for solving this problem, while for NMT this kind of approach has not been attempted. This paper proposes a forest-based NMT method that translates a linearized packed forest under a simple sequence-to-sequence framework (i.e., a forest-to-string NMT model). The BLEU score of the proposed method is higher than that of the string-to-string NMT, treebased NMT, and forest-based SMT systems. 1 Introduction NMT has witnessed promising improvements recently. Depending on the types of input and output, these efforts can be divided into three categories: string-to-string systems (Sutskever et al., 2014; Bahdanau et al., 2014); tree-to-string systems (Eriguchi et al., 2016, 2017); and string-totree systems (Aharoni and Goldberg, 2017; Nadejde et al., 2017). Compared with string-to-string systems, tree-to-string and string-to-tree systems (henceforth, tree-based systems) offer some attractive features. They can use more syntactic information (Li et al., 2017), and can conveniently incorporate prior knowledge (Zhang et al., 2017). ∗Contribution during internship at National Institute of Information and Communications Technology. †Corresponding author Because of these advantages, tree-based methods become the focus of many researches of NMT nowadays. Based on how to represent trees, there are two main categories of tree-based NMT methods: representing trees by a tree-structured neural network (Eriguchi et al., 2016; Zaremoodi and Haffari, 2017), representing trees by linearization (Vinyals et al., 2015; Dyer et al., 2016; Ma et al., 2017). Compared with the former, the latter method has a relatively simple model structure, so that a larger corpus can be used for training and the model can be trained within reasonable time, hence is preferred from the viewpoint of computation. Therefore we focus on this kind of methods in this paper. In spite of impressive performance of tree-based NMT systems, they suffer from a major drawback: they only use the 1-best parse tree to direct the translation, which potentially introduces translation mistakes due to parsing errors (Quirk and Corston-Oliver, 2006). For SMT, forest-based methods have employed a packed forest to address this problem (Huang, 2008), which represents exponentially many parse trees rather than just the 1-best one (Mi et al., 2008; Mi and Huang, 2008). But for NMT, (computationally efficient) forestbased methods are still being explored1. Because of the structural complexity of forests, the inexistence of appropriate topological ordering, and the hyperedge-attachment nature of weights (see Section 3.1 for details), it is not trivial to linearize a forest. This hinders the development of forest-based NMT to some extent. 
Inspired by the tree-based NMT methods based on linearization, we propose an efficient forestbased NMT approach (Section 3), which can en1Zaremoodi and Haffari (2017) have proposed a forestbased NMT method based on a forest-structured neural network recently, but it is computationally inefficient (see Section 5). code the syntactic information of a packed forest on the basis of a novel weighted linearization method for a packed forest (Section 3.1), and can decode the linearized packed forest under the simple sequence-to-sequence framework (Section 3.2). Experiments demonstrate the effectiveness of our method (Section 4). 2 Preliminaries We first review the general sequence-to-sequence model (Section 2.1), then describe tree-based NMT systems based on linearization (Section 2.2), and finally introduce the packed forest, through which exponentially many trees can be represented in a compact manner (Section 2.3). 2.1 Sequence-to-sequence model Current NMT systems usually resort to a simple framework, i.e., the sequence-to-sequence model (Cho et al., 2014; Sutskever et al., 2014). Given a source sequence (x0, . . . , xT ), in order to find a target sequence (y0, . . . , yT ′) that maximizes the conditional probability p(y0, . . . , yT ′ | x0, . . . , xT ), the sequence-to-sequence model uses one RNN to encode the source sequence into a fixed-length context vector c and a second RNN to decode this vector and generate the target sequence. Formally, the probability of the target sequence can be calculated as follows: p(y0, . . . ,yT ′ | x0, . . . , xT ) = T ′ Y t=0 p(yt | c, y0, . . . , yt−1), (1) where p(yt | c, y0, . . . , yt−1) = g(yt−1, st, c), (2) st = f(st−1, yt−1, c), (3) c = q(h0, . . . , hT ), (4) ht = f(et, ht−1). (5) Here, g, f, and q are nonlinear functions; ht and st are the hidden states of the source-side RNN and target-side RNN, respectively, c is the context vector, and et is the embedding of xt. Bahdanau et al. (2014) introduced an attention mechanism to deal with the issues related to long sequences (Cho et al., 2014). Instead of encoding the source sequence into a fixed vector c, the attention model uses different ci-s when calculating the target-side output yi at time step i: ci = T X j=0 αijhj, (6) αij = exp(a(si−1, hj)) PT k=0 exp(a(si−1, hk)) . (7) The function a(si−1, hj) can be regarded as representing the soft alignment between the target-side RNN hidden state si−1 and the source-side RNN hidden state hj. By changing the format of the source/target sequences, this framework can be regarded as a string-to-string NMT system (Sutskever et al., 2014), a tree-to-string NMT system (Li et al., 2017), or a string-to-tree NMT system (Aharoni and Goldberg, 2017). 2.2 Linear-structured tree-based NMT systems Regarding the linearization adopted for tree-tostring NMT (i.e., linearization of the source side), Sennrich and Haddow (2016) encoded the sequence of dependency labels and the sequence of words simultaneously, partially utilizing the syntax information, while Li et al. (2017) traversed the constituent tree of the source sentence and combined this with the word sequence, utilizing the syntax information completely. Regarding the linearization used for string-totree NMT (i.e., linearization of the target side), Nadejde et al. (2017) used a CCG supertag sequence as the target sequence, while Aharoni and Goldberg (2017) applied a linearization method in a top-down manner, generating a sequence ensemble for the annotated tree in the Penn Treebank (Marcus et al., 1993). Wu et al. 
(2017) used transition actions to linearize a dependency tree, and employed the sequence-to-sequence framework for NMT. It can be seen all current tree-based NMT systems use only one tree for encoding or decoding. In contrast, we hope to utilize multiple trees (i.e., a forest). This is not trivial, on account of the lack of a fixed traversal order and the need for a compact representation. 2.3 Packed forest The packed forest gives a representation of exponentially many parsing trees, and can compactly encode many more candidates than the n-best list John S0,5 NP0,1 VP1,4 .4,5 NP2,4 NNP0,1 VBZ1,2 has DT2,3 a dog NN3,4 . NP2,4 S2,4 -3.9490 4.7280 5.0983 -1.3092 -6.7403 -18.1946 5.8665 [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] (a) Packed forest John S NP VP . NP NNP VBZ has DT a dog NN . (b) Correct constituent tree, score = −46.2389 John S NP VP . S NNP VBZ has DT a dog NN . NP (c) Incorrect constituent tree, score = −58.6321 Figure 1: An example of (a) a packed forest. The numbers in the brackets located at the upper-left corner of each node in the packed forest show one correct topological ordering of the nodes. The packed forest is a compact representation of two trees: (b) the correct constituent tree, and (c) an incorrect constituent tree. Note that the terminal nodes (i.e., words in the sentence) in the packed forest are shown only for illustration, and they do not belong to the packed forest. (Huang, 2008). Figure 1a shows a packed forest, which can be unpacked into two constituent trees (Figure 1b and Figure 1c). Formally, a packed forest is a pair ⟨V, E⟩, where V is the set of nodes and E is the set of hyperedges. Each v ∈V can be represented as Xi,j, where X is a constituent label and i, j ∈ [0, n] are indices of words, showing that the node spans the words ranging from i (inclusive) to j (exclusive). Here, n is the length of the input sentence. Each e ∈ E is a three-tuple ⟨head(e), tails(e), score(e)⟩, where head(e) ∈ V is similar to the head node in a constituent tree, and tails(e) ∈V ∗is similar to the set of child nodes in a constituent tree. score(e) ∈R is the logarithm of the probability that tails(e) represents the tails of head(e) calculated by the parser. Based on score(e), the score of a constituent tree T can be calculated as follows: score(T) = −λn + X e∈E(T) score(e), (8) where E(T) is the set of hyperedges appearing in tree T, and λ is a regularization coefficient for the sentence length2. 2Following the configuration of Charniak and Johnson 3 Forest-based NMT We first propose a linearization method for the packed forest (Section 3.1), then describe how to encode the linearized forest (Section 3.2), which can then be translated by the conventional decoder (see Section 2.1). 3.1 Forest linearization Recently, several studies have focused on the linearization methods of a syntax tree, both in the area of tree-based NMT (Section 2.2) and in the area of parsing (Vinyals et al., 2015; Dyer et al., 2016; Ma et al., 2017). Basically, these methods follow a fixed traversal order (e.g., depthfirst), which does not exist for the packed forest (a directed acyclic graph (DAG)). Furthermore, the weights are attached to edges of a packed forest instead of the nodes, which further increase the difficulty. Topological ordering algorithms for DAG (Kahn, 1962; Tarjan, 1976) are not good solutions, because the outputted ordering is not always optimal for machine translation. In particular, a topo(2005), for all the experiments in this paper, we fixed λ to log2 600. 
Algorithm 1 Linearization of a packed forest 1: function LINEARIZEFOREST(⟨V, E⟩, w) 2: v ←FINDROOT(V ) 3: r ←[] 4: EXPANDSEQ(v, r, ⟨V, E⟩, w) 5: return r 6: function FINDROOT(V ) 7: for v ∈V do 8: if v has no parent then 9: return v 10: procedure EXPANDSEQ(v, r, ⟨V, E⟩, w) 11: for e ∈E do 12: if head(e) = v then 13: if tails(e) ̸= ∅then 14: for t ∈SORT(tails(e)) do ▷Sort tails(e) by word indices. 15: EXPANDSEQ(t, r, ⟨V, E⟩, w) 16: l ←LINEARIZEEDGE(head(e), w) 17: r.append(⟨l, σ(0.0)⟩) ▷σ is the sigmoid function, i.e., σ(x) = 1 1+e−x , x ∈R. 18: l ←c⃝LINEARIZEEDGES(tails(e), w) ▷c⃝is a unary operator. 19: r.append(⟨l, σ(score(e))⟩) 20: else 21: l ←LINEARIZEEDGE(head(e), w) 22: r.append(⟨l, σ(0.0)⟩) 23: function LINEARIZEEDGE(Xi,j, w) 24: return X ⊗(⊙j−1 k=iwk) 25: function LINEARIZEEDGES(v, w) 26: return ⊕v∈vLINEARIZEEDGE(v, w) logical ordering could ignore “word sequential information” and “parent-child information” in the sentences. For example, for the packed forest in Figure 1a, although “[10]→[1]→[2]→· · · →[9]→[11]” is a valid topological ordering, the word sequential information of the words (e.g., “John” should be located ahead of the period), which is fairly crucial for translation of languages with fixed pragmatic word order such as Chinese or English, is lost. As another example, for the packed forest in Figure 1a, nodes [2], [9], and [10] are all the children of node [11]. However, in the topological order “[1]→[2]→· · · →[9]→[10]→[11],” node [2] is quite far from node [11], while nodes [9] and [10] are both close to node [11]. The parent-child information cannot be reflected in this topological order, which is not what we would expect. To address the above two problems, we propose a novel linearization algorithm for a packed forest (Algorithm 1). The algorithm linearizes the packed forest from the root node (Line 2) to leaf nodes by calling the EXPANDSEQ procedure (Line 15) recursively, while preserving the word order in the sentence (Line 14). In this way, word sequential information is preserved. Within the NNP⊗John / NP⊗John / c⃝NNP⊗John / VBZ⊗has / DT⊗a / NN⊗dog / NP⊗a⊙dog / c⃝DT⊗a⊕NN⊗dog / NP⊗a⊙dog / c⃝DT⊗a⊕NN⊗dog / S⊗a⊙dog / c⃝NP⊗a⊙dog / VP⊗has⊙a⊙dog / c⃝VBZ⊗has⊕NP⊗a⊙dog / c⃝VBZ⊗has⊕S⊗a⊙dog / .⊗. / S⊗John⊙has⊙a⊙dog⊙. / c⃝NP⊗John⊕VP⊗has⊙a⊙dog⊕.⊗. Figure 2: Linearization result of the packed forest in Figure 1a EXPANDSEQ procedure, once a hyperedge is linearized (Line 16), the tails are also linearized immediately (Line 18). In this way, parent-child information is preserved. Intuitively, different parts of constituent trees should be combined in different ways, therefore we define different operators ( c⃝, ⊗, ⊕, or ⊙) to represent the relationships between different parts, so that the representations of these parts can be combined in different ways (see Section 3.2 for details). Words are concatenated by the operator “⊙” with each other, a word and a constituent label is concatenated by the operator “⊗”, the linearization results of child nodes are concatenated by the operator “⊕” with each other, while the unary operator “ c⃝” is used to indicate that the node is the child node of the previous part. Furthermore, each token in the linearized sequence is related to a score, representing the confidence of the parser. The linearization result of the packed forest in Figure 1a is shown in Figure 2. Tokens in the linearized sequence are separated by slashes. Each token in the sequence is composed of different types of symbols and combined by different operators. 
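For concreteness, Algorithm 1 can be transcribed into Python roughly as follows. The data structures are assumptions made only for this sketch (a node as a (label, i, j) span, a hyperedge as a (head, tails, score) tuple with a log-probability score, and "©" standing in for the circled-c operator); the control flow mirrors the pseudocode but is not the authors' code.

```python
import math

# Assumed encodings (illustrative, not from the paper's implementation):
#   node      : (label, i, j)        -- constituent X spanning words w[i:j]
#   hyperedge : (head, tails, score) -- head node, tuple of tail nodes, log-probability

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def linearize_edge(node, words):
    """X_{i,j}  ->  X '⊗' w_i '⊙' ... '⊙' w_{j-1}   (LINEARIZEEDGE)."""
    label, i, j = node
    return label + "⊗" + "⊙".join(words[i:j])

def linearize_edges(tails, words):
    """Join the linearizations of the tail nodes with '⊕'   (LINEARIZEEDGES)."""
    return "⊕".join(linearize_edge(t, words) for t in tails)

def find_root(nodes, edges):
    """The root is the only node that never occurs as a tail   (FINDROOT)."""
    non_roots = {t for _, tails, _ in edges for t in tails}
    return next(v for v in nodes if v not in non_roots)

def expand_seq(node, result, edges, words):
    """Emit (token, score) pairs for the subforest rooted at `node`   (EXPANDSEQ)."""
    for head, tails, score in edges:
        if head != node:
            continue
        if tails:
            for t in sorted(tails, key=lambda n: (n[1], n[2])):  # sort tails by word indices
                expand_seq(t, result, edges, words)
            result.append((linearize_edge(head, words), sigmoid(0.0)))
            result.append(("©" + linearize_edges(tails, words), sigmoid(score)))
        else:
            result.append((linearize_edge(head, words), sigmoid(0.0)))

def linearize_forest(nodes, edges, words):
    result = []
    expand_seq(find_root(nodes, edges), result, edges, words)
    return result

# Tiny example: a one-word forest with a single NP -> NNP hyperedge.
words = ["John"]
nodes = [("NP", 0, 1), ("NNP", 0, 1)]
edges = [(("NP", 0, 1), (("NNP", 0, 1),), -1.3),  # NP -> NNP, parser log-probability -1.3
         (("NNP", 0, 1), (), 0.0)]                # pre-terminal, no tails
print(linearize_forest(nodes, edges, words))
# [('NNP⊗John', 0.5), ('NP⊗John', 0.5), ('©NNP⊗John', 0.214...)]
```

Run on a full forest such as the one in Figure 1a, this walks the hyperedges from the root down while keeping tails in word order, emitting each head token followed by its ©-marked tail tokens, in the spirit of the sequence in Figure 2.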
We can see that word sequential information is preserved. For example, “NNP⊗John” (linearization result of node [1]) is in front of “VBZ⊗has” (linearization result of node [3]), which is in front of “DT⊗a” (linearization result of node [4]). Moreover, parent-child information is also preserved. For example, “NP⊗John” (linearization result of node [2]) is followed by “ c⃝NNP⊗John” (linearization result of node [1], the child of node [2]). Note that our linearization method cannot fully recover packed forest. What we want to do is not to propose a fully recoverable linearization method. What we actually want to do is to encode syntax information as much as possible, so that we can improve the performance of NMT. As will be shown in Section 4, this goal is achieved. Also note that there is one more advantage of our linearization method: the linearized sequence … … … … Decoder Input Layer Symbol Layer Node/Operator Layer Embedding Layer Attention Layer Score Layer Hidden Layer Pre-Embedding Layer (a) Score-on-Embedding (SoE) … … … … Decoder Input Layer Symbol Layer Node/Operator Layer Embedding Layer Attention Layer Score Layer Hidden Layer Pre-Embedding Layer (b) Score-on-Attention (SoA) Figure 3: The framework of the forest-based NMT system. is a weighted sequence, while all the previous studies ignored the weights during linearization. As will be shown in Section 4, the weights are actually important not only for the linearization of a packed forest, but also for the linearization of a single tree. By preserving only the nodes and hyperedges in the 1-best tree and removing all others, our linearization method can be regarded as a treelinearization method. Compared with other treelinearization methods, our method combines several different kinds of information within one symbol, retaining the parent-child information, and incorporating the confidence of the parser in the sequence. We examine whether the weights can be useful not only for linear structured tree-based NMT but also for our forest-based NMT. Furthermore, although our method is nonreversible for packed forests, it is reversible for constituent trees, in that the linearization is processed exactly in the depth-first traversal order and all necessary information in the tree nodes has been encoded. As far as we know, there is no previous work on linearization of packed forests. 3.2 Encoding the linearized forest The linearized packed forest forms the input of the encoder, which has two major differences from the input of a sequence-to-sequence NMT system. First, the input sequence of the encoder consists of two parts: the symbol sequence and the score sequence. Second, each symbol in the symbol sequence consists of several parts (words and constituent labels), which are combined by certain operators ( c⃝, ⊗, ⊕, or ⊙). Based on these observations, we propose two new frameworks, which are illustrated in Figure 3. Formally, the input layer receives the sequence (⟨l0, ξ0⟩, . . . , ⟨lT , ξT ⟩), where li denotes the i-th symbol and ξi its score. Then, the sequence is fed into the score layer and the symbol layer. The score and symbol layers receive the sequence and output the score sequence ξ = (ξ0, . . . , ξT ) and symbol sequence l = (l0, . . . , lT ), respectively, from the input. Any item l ∈l in the symbol layer has the form l = o0x1o1 . . . xm−1om−1xm, (9) where each xk (k = 1, . . . , m) is a word or a constituent label, m is the total number of words and constituent labels in a symbol, o0 is “ c⃝” or empty, and each ok (k = 1, . . 
. , m −1) is either “⊗”, “⊕”, or “⊙”. Then, in the node/operator layer, the x-s and o-s are separated and rearranged as x = (x1, . . . , xm, o0, . . . , om−1), which is fed to the pre-embedding layer. The pre-embedding layer generates a sequence p = (p1, . . . , pm, . . . , p2m), which is calculated as follows: p = Wemb[I(x)]. (10) Here, the function I(x) returns a list of the indices in the dictionary for all the elements in x, which consist of words, constituent labels, or operators. In addition, Wemb is the embedding matrix of size (|wword| + |wlabel| + 4) × dword, where |wword| and |wlabel| are the total number of words and constituent labels, respectively, dword is the dimension of the word embedding, and there are four possible operators: “ c⃝,” “⊗,” “⊕,” and “⊙.” Note that p is a list of 2m vectors, and the dimension of each vector is dword. Because the length of the sequence of the input layer is T + 1, there are T + 1 different ps in the pre-embedding layer, which we denote by P = (p0, . . . , pT ). Depending on where the score layer is incorporated, we propose two frameworks: Score-on-Embedding (SoE) and Score-onAttention (SoA). In SoE, the k-th element of the embedding layer is calculated as follows: ek = ξk X p∈pk p, (11) while in SoA, the k-th element of the embedding layer is calculated as ek = X p∈pk p, (12) where k = 0, . . . , T. Note that ek ∈Rdword. In this manner, the proposed forest-to-string NMT framework is connected with the conventional sequence-to-sequence NMT framework. After calculating the embedding vectors in the embedding layer, the hidden vectors are calculated using Equation 5. When calculating the context vector ci-s, SoE and SoA differ from each other. For SoE, the ci-s are calculated using Equation 6 and 7, while for SoA, the αij-s used to calculate the ci-s are determined as follows: αij = exp(ξja(si−1, hj)) PT k=0 exp(ξka(si−1, hk)) . (13) Then, using the decoder of the sequence-tosequence framework, the sentence of the target language can be generated. 4 Experiments 4.1 Setup We evaluate the effectiveness of our forest-based NMT systems on English-to-Chinese and Englishto-Japanese translation tasks3. The statistics of the corpora used in our experiments are summarized in Table 1. The packed forests of English sentences are obtained by the constituent parser proposed by Huang (2008)4. We filtered out the sentences for 3English is commonly chosen as the target language. We chose English as the source language because a highperformance forest parser is not available for other languages. 4http://web.engr.oregonstate.edu/ ˜huanlian/software/forest-reranker/ forest-charniak-v0.8.tar.bz2 Language Corpus Usage #Sent. English-Japanese ASPEC train 100,000 dev. 1790 test 1812 English-Chinese LDC7 train 1,423,695 FBIS 233,510 NIST MT 02 dev. 876 NIST MT 03 test 919 NIST MT 04 1,788 NIST MT 05 1,082 Table 1: Statistics of the corpora. which the parser cannot generate the packed forest successfully and the sentences longer than 80 words. For NIST datasets, we simply choose the first reference among the four English references of NIST corpora, because all of them are independent with each other, according to the documents of NIST datasets. For Chinese sentences, we used Stanford segmenter5 for segmentation. For Japanese sentences, we followed the preprocessing steps recommended in WAT 20176. We implemented our framework based on nematus8 (Sennrich et al., 2017). For optimization, we used the Adadelta algorithm (Zeiler, 2012). 
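Before turning to the experimental details, the score-weighted combination of Section 3.2 can be spelled out in a short numpy sketch of Equations (11) to (13). The dot-product energy function, the toy dimensions, and all names are assumptions for illustration; only where the parser score ξ enters the computation reflects the SoE/SoA distinction of the paper.

```python
import numpy as np

def embed_symbol(pre_embeddings, score, mode):
    """Combine the pre-embeddings of one linearized token (Equations 11 and 12).

    pre_embeddings: (2m, d_word) array for the token's words/labels and operators;
    score: the sigmoid-transformed parser score xi_k attached to the token.
    """
    summed = pre_embeddings.sum(axis=0)
    return score * summed if mode == "SoE" else summed           # SoA leaves it unscaled

def attention_weights(energy_fn, s_prev, enc_states, scores, mode):
    """Attention over encoder states h_j; SoA rescales the energies with the
    parser scores before the softmax (Equation 13), SoE uses a plain softmax."""
    energies = np.array([energy_fn(s_prev, h) for h in enc_states])
    if mode == "SoA":
        energies = scores * energies
    energies -= energies.max()                                    # softmax is shift-invariant
    weights = np.exp(energies)
    return weights / weights.sum()

# Toy usage: three encoder states of size 4 and a dot-product energy function.
rng = np.random.default_rng(1)
s_prev, H = rng.standard_normal(4), rng.standard_normal((3, 4))
xi = np.array([0.9, 0.5, 0.7])                                    # token-level parser scores
alpha = attention_weights(lambda s, h: s @ h, s_prev, H, xi, mode="SoA")
print(alpha, alpha.sum())                                         # weights sum to 1
```

The only difference between the two variants is thus where ξ_k enters: SoE rescales the token embedding itself, while SoA rescales the attention energies and leaves the embeddings untouched.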
In order to avoid overfitting, we used dropout (Srivastava et al., 2014) on the embedding layer and hidden layer, with the dropout probability set to 0.2. We used the gated recurrent unit (Cho et al., 2014) as the recurrent unit of RNNs, which are bi-directional, with one hidden layer. Based on the tuning result, we set the maximum length of the input sequence to 300, the hidden layer size as 512, the dimension of word embedding as 620, and the batch size for training as 40. We pruned the packed forest using the algorithm of Huang (2008), with a threshold of 5. If the linearization of the pruned forest is still longer than 300, then we linearize the 1-best parsing tree instead of the forest. During decoding, we used beam search, and fixed the beam size to 12. For the case of Forest (SoA), with 1 core of Tesla K80 GPU and LDC corpus as the training data, training spent about 10 days, and decoding speed is about 10 sentences per second. 5https://nlp.stanford.edu/software/ stanford-segmenter-2017-06-09.zip 6http://lotus.kuee.kyoto-u.ac.jp/WAT/ WAT2017/baseline/dataPreparationJE.html 7LDC2002E18, LDC2003E07, LDC2003E14, Hansards portion of LDC2004T07, LDC2004T08, and LDC2005T06 8https://github.com/EdinburghNLP/ nematus Types Systems & MT 03 MT 04 MT 05 p value Configurations FBIS LDC FBIS LDC FBIS LDC FS Mi et al. (2008) 27.10 28.21 28.67 30.09 26.57 28.36 TN Eriguchi et al. (2016) 29.00 29.71 30.24 31.56 28.38 30.33 Chen et al. (2017) 28.34 29.64 30.00 31.25 28.14 29.59 Li et al. (2017) 28.40 29.60 29.66 31.96 27.74 29.84 FN s2s 27.44 29.18 29.73 30.53 27.32 28.80 1-best (No score) 28.61 29.38 30.07 31.58 28.59 30.01 < 0.01 1-best (SoE) 28.78 30.65 30.36 32.22 29.31 30.16 < 0.05 1-best (SoA) 29.39 30.80 30.25 32.39 29.30 30.61 < 0.005 Forest (No score) 28.06 29.63 29.51 31.41 28.48 29.75 < 0.01 Forest (SoE) 29.58 31.07 30.67 32.69 29.26 30.41 < 0.001 Forest (SoA) 29.63 31.35 30.31 33.14 29.87 31.23 < 0.001 Table 2: English-Chinese experimental results (character-level BLEU). “FS,” “TN,” and “FN” denote forest-based SMT, tree-based NMT, and forest-based NMT systems, respectively. We performed the paired bootstrap resampling significance test (Koehn, 2004) over the NIST MT 03 to 05 corpus, with respect to the s2s baseline, and list the p values in the table. Types Systems & BLEU p value Configurations (test) FS Mi et al. (2008) 34.13 TN Eriguchi et al. (2016) 37.52 Chen et al. (2017) 36.94 Li et al. (2017) 36.21 FN s2s 37.10 1-best (No score) 38.01 < 0.05 1-best (SoE) 38.53 < 0.01 1-best (SoA) 39.42 < 0.001 Forest (No score) 37.92 < 0.1 Forest (SoE) 41.35 < 0.01 Forest (SoA) 42.17 < 0.005 Table 3: English-Japanese experimental results (character-level BLEU). 4.2 Experimental results Table 2 and 3 summarize the experimental results. To avoid the affect of segmentation errors, the performance were evaluated by character-level BLEU (Papineni et al., 2002). We compare our proposed models (i.e., Forest (SoE) and Forest (SoA)) with three types of baseline: a string-to-string model (s2s), forest-based models that do not use score sequences (Forest (No score)), and tree-based models that use the 1-best parsing tree (1-best (No score, SoE, SoA)). For the 1-best models, we preserve the nodes and hyperedges that are used in the 1-best constituent tree in the packed forest, and remove all other nodes and hyperedges, yielding a pruned forest that contains only the 1-best constituent tree. 
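The 1-best baselines above keep only the hyperedges of the single best tree. The paper obtains that tree from the parser itself, but for illustration a generic Viterbi-style pass over the same assumed forest encoding shows how such a tree can be read off a scored forest; since the -λn term of Equation (8) is constant for a given sentence, it can be dropped from the argmax. This is a hedged sketch, not the procedure used in the paper, and the helper names are invented for the example.

```python
def one_best_tree(nodes, edges):
    """Return (score, hyperedges) of the highest-scoring tree in a packed forest.

    edges are (head, tails, score) with log-probability scores; the best tree
    rooted at a node picks the hyperedge maximising its own score plus the best
    scores of its tails (a Viterbi-style dynamic program over the DAG).
    """
    by_head = {}
    for edge in edges:
        by_head.setdefault(edge[0], []).append(edge)

    memo = {}

    def best(node):
        if node in memo:
            return memo[node]
        if not by_head.get(node):              # node with no outgoing hyperedges
            memo[node] = (0.0, [])
            return memo[node]
        best_score, best_used = float("-inf"), None
        for head, tails, score in by_head[node]:
            total, used = score, [(head, tails, score)]
            for t in tails:
                t_score, t_used = best(t)
                total, used = total + t_score, used + t_used
            if total > best_score:
                best_score, best_used = total, used
        memo[node] = (best_score, best_used)
        return memo[node]

    non_roots = {t for _, tails, _ in edges for t in tails}
    root = next(v for v in nodes if v not in non_roots)
    return best(root)

# Reusing the tiny NP -> NNP forest from the linearization sketch above:
nodes = [("NP", 0, 1), ("NNP", 0, 1)]
edges = [(("NP", 0, 1), (("NNP", 0, 1),), -1.3), (("NNP", 0, 1), (), 0.0)]
print(one_best_tree(nodes, edges))   # (-1.3, [both hyperedges of the only tree])
```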
For the “No score” configurations, we force the input score sequence to be a sequence of 1.0 with the same length as the input symbol sequence, so that neither the embedding layer nor the attention layer are affected by the score sequence. In addition, we also perform a comparison with some state-of-the-art tree-based systems that are publicly available, including an SMT system (Mi et al., 2008) and the NMT systems (Eriguchi et al. (2016)9, Chen et al. (2017)10, and Li et al. (2017)). For Mi et al. (2008), we use the implementation of cicada11. For Li et al. (2017), we reimplemented the “Mixed RNN Encoder” model, because of its outstanding performance on the NIST MT corpus. We can see that for both English-Chinese and English-Japanese, compared with the s2s baseline system, both the 1-best and forest-based configurations yield better results. This indicates syntactic information contained in the constituent trees or forests is indeed useful for machine translation. Specifically, we observe the following facts. First, among the three different frameworks SoE, SoA, and No-score, the SoA framework performs the best, while the No-score framework per9https://github.com/tempra28/tree2seq 10https://github.com/howardchenhd/ Syntax-awared-NMT 11https://github.com/tarowatanabe/ cicada [Source] In the Czech Republic , which was ravaged by serious floods last summer , the temperatures in its border region adjacent to neighboring Slovakia plunged to minus 18 degrees Celsius . [Reference] 去年 夏季 曾 出现 严重 水患 的 捷克 共和国 , 其 邻近 斯洛伐克 的 边界 地区 气温 低 至 摄氏 零下 18 度 。 last summer ever appear serious floods of Czech Republic , its adjacent Slovakia of border region temperature decrease to Celsius minus 18 degree . [s2s] 去年 夏天 , 捷克 地区 遭受 严重 洪灾 的 捷克 边境 地区 气温 下降 了 18 摄氏 度 。 last summer , Czech region suffer serious floods of Czech border region temperature decrease -ed 18 Celsius degree . [1best Tree] 去年 夏天 , 遭受 特大 洪灾 的 捷克 边境 地区 的 气温 下降 了 18 摄氏 度 。 last summer , suffer serious floods of Czech border region of temperature decrease -ed 18 Celsius degree . [Forest] 去年 夏天 发生 严重 水灾 的 捷克 共和国 , 毗邻 斯洛伐克 的 边境 地区 温度 下降 至 零下 18 度 。 last summer occur serious floods of Czech Republic , adjacent Slovakia of border region temperature decrease to minus 18 degree . Figure 4: Chinese translation results of an English sentence. forms the worst. This indicates that the scores of the edges in constituent trees or packed forests, which reflect the confidence of the correctness of the edges, are indeed useful. In fact, for the 1-best constituent parsing tree, the score of the edge reflects the confidence of the parser. By using this information, the NMT system succeed to learn a better attention, paying much attention to the confident structure and not paying attention to the unconfident structure, which improved the translation performance. This fact is ignored by previous studies on tree-based NMT. Furthermore, it is better to use the scores to modify the values of attention instead of rescaling the word embeddings, because modifying word embeddings carelessly may change the semantic meanings of words. Second, compared with the cases that only using the 1-best constituent trees, using packed forests yields statistical significantly better results for the SoE and SoA frameworks. This shows the effectiveness of using more syntactic information. 
Compared with one constituent tree, the packed forest, which contains multiple different trees, describes the syntactic structure of the sentence in different aspects, which together increase the accuracy of machine translation. However, without using the scores, the 1-best constituent tree is preferred. This is because without using the scores, all trees in the packed forest are treated equally, which makes it easy to import noise into the encoder. Compared with other types of state-of-the-art systems, our systems using only the 1-best tree (1-best(SoE, SoA)) are better than the other treebased systems. Moreover, our NMT systems using the packed forests achieve the best performance. These results also support the usefulness of the scores of the edges and packed forests in NMT. As for the efficiency, the training time of the SoA system was slightly longer than that of the SoE system, which was about twice of the s2s baseline. The training time of the tree-based system was about 1.5 times of the baseline. For the case of Forest (SoA), with 1 core of Tesla P100 GPU and LDC corpus as the training data, training spent about 10 days, and decoding speed was about 10 sentences per second. The reason for the relatively low efficiency is that the linearized sequences of packed forests were much longer than word sequences, enlarging the scale of the inputs. Despite this, the training process ended within reasonable time. 4.3 Qualitative analysis Figure 4 illustrates the translation results of an English sentence using several different configurations: the s2s baseline, using only the 1-best tree (SoE), and using the packed forest (SoE). This is a sentence from NIST MT 03, and the training corpus is the LDC corpus. For the s2s case, no syntactic information is utilized, and therefore the output of the system is not a grammatical Chinese sentence. The attributive phrase of “Czech border region” is a complete sentence. However, the attributive is not allowed to be a complete sentence in Chinese. For the case of using 1-best constituent tree, the output is a grammatical Chinese sentence. However, the phrase “adjacent to neighboring Slovakia” is completely ignored in the translation result. After analyzing the constituent tree, we found that this phrase was incorrectly parsed as an “adverb phrase”, so that the NMT system paid little attention to it, because of the low confidence given by the parser. In contrast, for the case of the packed forest, we can see this phrase was not ignored and was translated correctly. Actually, besides “adverb phrase”, this phrase was also correctly parsed as an “adjective phrase”, and covered by multiple different nodes in the forest, making it difficult for the encoder to ignore the phrase. We also noticed that our method performed better on learning attention. For the example in Figure 4, we observed that for s2s model, the decoder paid attention to the word “Czech” twice, which causes the output sentence contains the Chinese translation of Czech twice. On the other hand, for our forest model, by using the syntax information, the decoder paid attention to the phrase “In the Czech Republic” only once, making the decoder generates the correct output. 5 Related work Incorporating syntactic information into NMT systems is attracting widespread attention nowadays. Compared with conventional string-to-string NMT systems, tree-based systems demonstrate a better performance with the help of constituent trees or dependency trees. The first noteworthy study is Eriguchi et al. 
(2016), which used Tree-structured LSTM (Tai et al., 2015) to encode the HPSG syntax tree of the sentence in the source-side in a bottom-up manner. Then, Chen et al. (2017) enhanced the encoder with a top-down tree encoder. As a simple extension of Eriguchi et al. (2016), very recently, Zaremoodi and Haffari (2017) proposed a forest-based NMT method by representing the packed forest with a forest-structured neural network. However, their method was evaluated in small-scale MT settings (each training dataset consists of under 10k parallel sentences). In contrast, our proposed method is effective in a largescale MT setting, and we present qualitative analysis regarding the effectiveness of using forests in NMT. Although these methods obtained good results, the tree-structured network used by the encoder made the training and decoding relatively slow, therefore restricts the scope of application. Other attempts at encoding syntactic trees have also been proposed. Eriguchi et al. (2017) combined the Recurrent Neural Network Grammar (Dyer et al., 2016) with NMT systems, while Li et al. (2017) linearized the constituent tree and encoded it using RNNs. The training of these methods is fast, because of the linear structures of RNNs. However, all these syntax-based NMT systems used only the 1-best parsing tree, making the systems sensitive to parsing errors. Instead of using trees to represent syntactic information, some studies use other data structures to represent the latent syntax of the input sentence. For example, Hashimoto and Tsuruoka (2017) proposed translating using a latent graph. However, such systems do not enjoy the benefit of handcrafted syntactic knowledge, because they do not use a parser trained from a large treebank with human annotations. Compared with these related studies, our framework utilizes a linearized packed forest, meaning the encoder can encode exponentially many trees in an efficient manner. The experimental results demonstrated these advantages. 6 Conclusion and future work We proposed a new NMT framework, which encodes a packed forest for the source sentence using linear-structured neural networks, such as RNN. Compared with conventional string-tostring NMT systems and tree-to-string NMT systems, our framework can utilize exponentially many linearized parsing trees during encoding, without significantly decreasing the efficiency. This represents the first attempt at using a forest under the string-to-string NMT framework. The experimental results demonstrate the effectiveness of our framework. As future work, we plan to design some more elaborate structures to incorporate the score layer in the encoder. Further improvement in the translation performance is expected to be achieved for the forest-based NMT system. We will also apply the proposed linearization method to other tasks. Acknowledgements We are grateful to the anonymous reviewers for their insightful comments and suggestions. We thank Lemao Liu from Tencent AI Lab for his suggestions about the experiments. We thank Atsushi Fujita whose suggestions greatly improve the readability and the logical soundness of this paper. This work was done during the internship of Chunpeng Ma at NICT. Akihiro Tamura is supported by JSPS KAKENHI Grant Number JP18K18110. Tiejun Zhao is supported by the National Natural Science Foundation of China (NSFC) via grant 91520204 and State High-Tech Development Plan of China (863 program) via grant 2015AA015405. References Roee Aharoni and Yoav Goldberg. 2017. 
Towards string-to-tree neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 132–140. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Eugene Charniak and Mark Johnson. 2005. Coarseto-fine n-best parsing and maxent discriminative reranking. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 173–180. Huadong Chen, Shujian Huang, David Chiang, and Jiajun Chen. 2017. Improved neural machine translation with a syntax-aware encoder and decoder. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1936–1945. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A Smith. 2016. Recurrent neural network grammars. arXiv preprint arXiv:1602.07776. Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2016. Tree-to-sequence attentional neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 823–833. Akiko Eriguchi, Yoshimasa Tsuruoka, and Kyunghyun Cho. 2017. Learning to parse and translate improves neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 72–78. Kazuma Hashimoto and Yoshimasa Tsuruoka. 2017. Neural machine translation with source-side latent graph parsing. arXiv preprint arXiv:1702.02265. Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In Proceedings of ACL-08: HLT, pages 586–594. Arthur B Kahn. 1962. Topological sorting of large networks. Communications of the ACM, 5(11):558– 562. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of EMNLP 2004, pages 388–395. Junhui Li, Deyi Xiong, Zhaopeng Tu, Muhua Zhu, Min Zhang, and Guodong Zhou. 2017. Modeling source syntax for neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 688–697. Chunpeng Ma, Lemao Liu, Akihiro Tamura, Tiejun Zhao, and Sumita Eiichiro. 2017. Deterministic attention for sequence-to-sequence constituent parsing. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pages 3237–3243. Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313–330. Haitao Mi and Liang Huang. 2008. Forest-based translation rule extraction. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 206–214. Haitao Mi, Liang Huang, and Qun Liu. 2008. Forestbased translation. In Proceedings of ACL-08: HLT, pages 192–199. Maria Nadejde, Siva Reddy, Rico Sennrich, Tomasz Dwojak, Marcin Junczys-Dowmunt, Philipp Koehn, and Alexandra Brich. 2017. Syntax-aware neural machine translation using ccg. arXiv preprint arXiv:1702.01147. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. 
Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318. Chris Quirk and Simon Corston-Oliver. 2006. The impact of parse quality on syntactically-informed statistical machine translation. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 62–69. Rico Sennrich, Orhan Firat, Kyunghyun Cho, Alexandra Birch, Barry Haddow, Julian Hitschler, Marcin Junczys-Dowmunt, Samuel L¨aubli, Antonio Valerio Miceli Barone, Jozef Mokry, and Maria Nadejde. 2017. Nematus: a toolkit for neural machine translation. In Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 65–68. Rico Sennrich and Barry Haddow. 2016. Linguistic input features improve neural machine translation. arXiv preprint arXiv:1606.02892. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of machine learning research, 15(1):1929–1958. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1556–1566. Robert Endre Tarjan. 1976. Edge-disjoint spanning trees and depth-first search. Acta Informatica, 6(2):171–185. Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Advances in Neural Information Processing Systems, pages 2773–2781. Shuangzhi Wu, Dongdong Zhang, Nan Yang, Mu Li, and Ming Zhou. 2017. Sequence-to-dependency neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 698–707. Poorya Zaremoodi and Gholamreza Haffari. 2017. Incorporating syntactic uncertainty in neural machine translation with forest-to-sequence model. arXiv preprint arXiv:1711.07019. Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701. Jiacheng Zhang, Yang Liu, Huanbo Luan, Jingfang Xu, and Maosong Sun. 2017. Prior knowledge integration for neural machine translation using posterior regularization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1514– 1523.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1264–1274 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1264 Context-Aware Neural Machine Translation Learns Anaphora Resolution Elena Voita Yandex, Russia University of Amsterdam, Netherlands [email protected] Pavel Serdyukov Yandex, Russia [email protected] Rico Sennrich University of Edinburgh, Scotland University of Zurich, Switzerland [email protected] Ivan Titov University of Edinburgh, Scotland University of Amsterdam, Netherlands [email protected] Abstract Standard machine translation systems process sentences in isolation and hence ignore extra-sentential information, even though extended context can both prevent mistakes in ambiguous cases and improve translation coherence. We introduce a context-aware neural machine translation model designed in such way that the flow of information from the extended context to the translation model can be controlled and analyzed. We experiment with an English-Russian subtitles dataset, and observe that much of what is captured by our model deals with improving pronoun translation. We measure correspondences between induced attention distributions and coreference relations and observe that the model implicitly captures anaphora. It is consistent with gains for sentences where pronouns need to be gendered in translation. Beside improvements in anaphoric cases, the model also improves in overall BLEU, both over its context-agnostic version (+0.7) and over simple concatenation of the context and source sentences (+0.6). 1 Introduction It has long been argued that handling discourse phenomena is important in translation (Mitkov, 1999; Hardmeier, 2012). Using extended context, beyond the single source sentence, should in principle be beneficial in ambiguous cases and also ensure that generated translations are coherent. Nevertheless, machine translation systems typically ignore discourse phenomena and translate sentences in isolation. Earlier research on this topic focused on handling specific phenomena, such as translating pronouns (Le Nagard and Koehn, 2010; Hardmeier and Federico, 2010; Hardmeier et al., 2015), discourse connectives (Meyer et al., 2012), verb tense (Gong et al., 2012), increasing lexical consistency (Carpuat, 2009; Tiedemann, 2010; Gong et al., 2011), or topic adaptation (Su et al., 2012; Hasler et al., 2014), with special-purpose features engineered to model these phenomena. However, with traditional statistical machine translation being largely supplanted with neural machine translation (NMT) models trained in an end-toend fashion, an alternative is to directly provide additional context to an NMT system at training time and hope that it will succeed in inducing relevant predictive features (Jean et al., 2017; Wang et al., 2017; Tiedemann and Scherrer, 2017; Bawden et al., 2018). While the latter approach, using context-aware NMT models, has demonstrated to yield performance improvements, it is still not clear what kinds of discourse phenomena are successfully handled by the NMT systems and, importantly, how they are modeled. Understanding this would inform development of future discourse-aware NMT models, as it will suggest what kind of inductive biases need to be encoded in the architecture or which linguistic features need to be exploited. In our work we aim to enhance our understanding of the modelling of selected discourse phenomena in NMT. 
To this end, we construct a simple discourse-aware model, demonstrate that it achieves improvements over the discourse-agnostic baseline on an English-Russian subtitles dataset (Lison et al., 2018) and study which context information is being captured in the model. Specifically, we start with the Trans1265 former (Vaswani et al., 2017), a state-of-the-art model for context-agnostic NMT, and modify it in such way that it can handle additional context. In our model, a source sentence and a context sentence are first encoded independently, and then a single attention layer, in a combination with a gating function, is used to produce a context-aware representation of the source sentence. The information from context can only flow through this attention layer. When compared to simply concatenating input sentences, as proposed by Tiedemann and Scherrer (2017), our architecture appears both more accurate (+0.6 BLEU) and also guarantees that the contextual information cannot bypass the attention layer and hence remain undetected in our analysis. We analyze what types of contextual information are exploited by the translation model. While studying the attention weights, we observe that much of the information captured by the model has to do with pronoun translation. It is not entirely surprising, as we consider translation from a language without grammatical gender (English) to a language with grammatical gender (Russian). For Russian, translated pronouns need to agree in gender with their antecedents. Moreover, since in Russian verbs agree with subjects in gender and adjectives also agree in gender with pronouns in certain frequent constructions, mistakes in translating pronouns have a major effect on the words in the produced sentences. Consequently, the standard cross-entropy training objective sufficiently rewards the model for improving pronoun translation and extracting relevant information from the context. We use automatic co-reference systems and human annotation to isolate anaphoric cases. We observe even more substantial improvements in performance on these subsets. By comparing attention distributions induced by our model against co-reference links, we conclude that the model implicitly captures coreference phenomena, even without having any kind of specialized features which could help it in this subtask. These observations also suggest potential directions for future work. For example, effective co-reference systems go beyond relying simply on embeddings of contexts. One option would be to integrate ‘global’ features summarizing properties of groups of mentions predicted as linked in a document (Wiseman et al., 2016), or to use latent relations to trace entities across documents (Ji et al., 2017). Our key contributions can be summarized as follows: • we introduce a context-aware neural model, which is effective and has a sufficiently simple and interpretable interface between the context and the rest of the translation model; • we analyze the flow of information from the context and identify pronoun translation as the key phenomenon captured by the model; • by comparing to automatically predicted or human-annotated coreference relations, we observe that the model implicitly captures anaphora. 2 Neural Machine Translation Given a source sentence x = (x1, x2, . . . , xS) and a target sentence y = (y1, y2, . . . , yT ), NMT models predict words in the target sentence, word by word. Current NMT models mainly have an encoderdecoder structure. 
The encoder maps an input sequence of symbol representations x to a sequence of distributed representations z = (z1, z2, . . . , zS). Given z, a neural decoder generates the corresponding target sequence of symbols y one element at a time. Attention-based NMT The encoder-decoder framework with attention has been proposed by Bahdanau et al. (2015) and has become the defacto standard in NMT. The model consists of encoder and decoder recurrent networks and an attention mechanism. The attention mechanism selectively focuses on parts of the source sentence during translation, and the attention weights specify the proportions with which information from different positions is combined. Transformer Vaswani et al. (2017) proposed an architecture that avoids recurrence completely. The Transformer follows an encoder-decoder architecture using stacked self-attention and fully connected layers for both the encoder and decoder. An important advantage of the Transformer is that it is more parallelizable and faster to train than recurrent encoder-decoder models. From the source tokens, learned embeddings are generated and then modified using positional encodings. The encoded word embeddings are then used as input to the encoder which consists of N 1266 layers each containing two sub-layers: (a) a multihead attention mechanism, and (b) a feed-forward network. The self-attention mechanism first computes attention weights: i.e., for each word, it computes a distribution over all words (including itself). This distribution is then used to compute a new representation of that word: this new representation is set to an expectation (under the attention distribution specific to the word) of word representations from the layer below. In multi-head attention, this process is repeated h times with different representations and the result is concatenated. The second component of each layer of the Transformer network is a feed-forward network. The authors propose using a two-layered network with the ReLU activations. Analogously, each layer of the decoder contains the two sub-layers mentioned above as well as an additional multi-head attention sub-layer that receives input from the corresponding encoding layer. In the decoder, the attention is masked to prevent future positions from being attended to, or in other words, to prevent illegal leftward information flow. See Vaswani et al. (2017) for additional details. The proposed architecture reportedly improves over the previous best results on the WMT 2014 English-to-German and English-to-French translation tasks, and we verified its strong performance on our data set in preliminary experiments. Thus, we consider it a strong state-of-the-art baseline for our experiments. Moreover, as the Transformer is attractive in practical NMT applications because of its parallelizability and training efficiency, integrating extra-sentential information in Transformer is important from the engineering perspective. As we will see in Section 4, previous techniques developed for recurrent encoderdecoders do not appear effective for the Transformer. 3 Context-aware model architecture Our model is based on Transformer architecture (Vaswani et al., 2017). We leave Transformer’s decoder intact while incorporating context information on the encoder side (Figure 1). Source encoder: The encoder is composed of a stack of N layers. The first N −1 layers are identical and represent the original layers of TransFigure 1: Encoder of the discourse-aware model former’s encoder. 
The last layer incorporates contextual information as shown in Figure 1. In addition to multi-head self-attention it has a block which performs multi-head attention over the output of the context encoder stack. The outputs of the two attention mechanisms are combined via a gated sum. More precisely, let c_i^{(s-attn)} be the output of the multi-head self-attention, c_i^{(c-attn)} the output of the multi-head attention to context, c_i their gated sum, and \sigma the logistic sigmoid function; then

g_i = \sigma\big(W_g\,[c_i^{(s-attn)}, c_i^{(c-attn)}] + b_g\big)   (1)
c_i = g_i \odot c_i^{(s-attn)} + (1 - g_i) \odot c_i^{(c-attn)}   (2)

Context encoder: The context encoder is composed of a stack of N identical layers and replicates the original Transformer encoder. In contrast to related work (Jean et al., 2017; Wang et al., 2017), we found in preliminary experiments that using separate encoders does not yield an accurate model. Instead we share the parameters of the first N−1 layers with the source encoder. Since a major proportion of the context encoder's parameters is shared with the source encoder, we add a special token (let us denote it <bos>) to the beginning of context sentences, but not source sentences, to let the shared layers know whether they are encoding a source or a context sentence.

4 Experiments

4.1 Data and setting

We use the publicly available OpenSubtitles2018 corpus (Lison et al., 2018) for English and Russian.[1] As described in the appendix, we apply data cleaning and randomly choose 2 million training instances from the resulting data. For development and testing, we randomly select two subsets of 10000 instances from movies not encountered in training.[2] Sentences were encoded using byte-pair encoding (Sennrich et al., 2016), with source and target vocabularies of about 32000 tokens. We generally used the same parameters and optimizer as in the original Transformer (Vaswani et al., 2017). The hyperparameters, preprocessing and training details are provided in the supplementary material.

[1] http://opus.nlpl.eu/OpenSubtitles2018.php
[2] The resulting data sets are freely available at http://data.statmt.org/acl18_contextnmt_data/

5 Results and analysis

We start with experiments motivating the setting and verifying that the improvements are indeed genuine, i.e. that they come from inducing predictive features of the context. In the subsequent Section 5.2, we analyze the features induced by the context encoder and perform error analysis.

5.1 Overall performance

We use the traditional automatic metric BLEU on a general test set to get an estimate of the overall performance of the discourse-aware model, before turning to more targeted evaluation in the next section. We provide results in Table 1.[3]

Table 1: Automatic evaluation: BLEU. Significant differences at p < 0.01 are in bold.
model                                  BLEU
baseline                               29.46
concatenation (previous sentence)      29.53
context encoder (previous sentence)    30.14
context encoder (next sentence)        29.31
context encoder (random context)       29.69

[3] We use bootstrap resampling (Riezler and Maxwell, 2005) for significance testing.

The 'baseline' is the discourse-agnostic version of the Transformer. As another baseline we use the standard Transformer applied to the concatenation of the previous and source sentences, as proposed by Tiedemann and Scherrer (2017). Tiedemann and Scherrer (2017) only used a special symbol to mark where the context sentence ends and the source sentence begins. This technique performed badly with the non-recurrent Transformer architecture in preliminary experiments, resulting in
a substantial degradation of performance (over 1 BLEU). Instead, we use a binary flag at every word position in our concatenation baseline telling the encoder whether the word belongs to the context sentence or to the source sentence. We consider two versions of our discourseaware model: one using the previous sentence as the context, another one relying on the next sentence. We hypothesize that both the previous and the next sentence provide a similar amount of additional clues about the topic of the text, whereas for discourse phenomena such as anaphora, discourse relations and elliptical structures, the previous sentence is more important. First, we observe that our best model is the one using a context encoder for the previous sentence: it achieves 0.7 BLEU improvement over the discourse-agnostic model. We also notice that, unlike the previous sentence, the next sentence does not appear beneficial. This is a first indicator that discourse phenomena are the main reason for the observed improvement, rather than topic effects. Consequently, we focus solely on using the previous sentence in all subsequent experiments. Second, we observe that the concatenation baseline appears less accurate than the introduced context-aware model. This result suggests that our model is not only more amendable to analysis but also potentially more effective than using concatenation. In order to verify that our improvements are genuine, we also evaluate our model (trained with the previous sentence as context) on the same test set with shuffled context sentences. It can be seen that the performance drops significantly when a real context sentence is replaced with a random one. This confirms that the model does rely on context information to achieve the improvement in translation quality, and is not merely better regularized. However, the model is robust towards being shown a random context and obtains a performance similar to the context-agnostic baseline. 1268 5.2 Analysis In this section we investigate what types of contextual information are exploited by the model. We study the distribution of attention to context and perform analysis on specific subsets of the test data. Specifically the research questions we seek to answer are as follows: • For the translation of which words does the model rely on contextual history most? • Are there any non-lexical patterns affecting attention to context, such as sentence length and word position? • Can the context-aware NMT system implicitly learn coreference phenomena without any feature engineering? Since all the attentions in our model are multihead, by attention weights we refer to an average over heads of per-head attention weights. First, we would like to identify a useful attention mass coming to context. We analyze the attention maps between source and context, and find that the model mostly attends to <bos> and <eos> context tokens, and much less often attends to words. Our hypothesis is that the model has found a way to take no information from context by looking at uninformative tokens, and it attends to words only when it wants to pass some contextual information to the source sentence encoder. Thus we define useful contextual attention mass as sum of attention weights to context words, excluding <bos> and <eos> tokens and punctuation. 5.2.1 Top words depending on context We analyze the distribution of attention to context for individual source words to see for which words the model depends most on contextual history. 
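(As a concrete reference point for the analysis that follows, the "useful contextual attention mass" defined in Section 5.2 can be computed roughly as in the sketch below. This is a minimal illustration, not the authors' code; the array shapes, token strings and punctuation set are assumptions.)

```python
import numpy as np

EXCLUDED = {"<bos>", "<eos>", ",", ".", "!", "?"}   # assumed token strings / punctuation set

def useful_context_attention(attn, context_tokens):
    """attn: (heads, src_len, ctx_len) attention weights from source to context
    positions. Returns, per source position, the attention mass placed on
    context words, excluding <bos>, <eos> and punctuation."""
    mean_attn = attn.mean(axis=0)                              # average over heads
    keep = np.array([t not in EXCLUDED for t in context_tokens])
    return (mean_attn * keep).sum(axis=1)                      # shape (src_len,)

# toy usage: 8 heads, 5 source tokens, 4 context tokens
attn = np.random.dirichlet(np.ones(4), size=(8, 5))
mass = useful_context_attention(attn, ["<bos>", "she", "left", "<eos>"])
print(mass.mean())   # average useful attention mass for this toy sentence
```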
We compute the overall average attention to context words for each source word in our test set. We do the same for source words at positions higher than first. We filter out words that occurred less than 10 times in a test set. The top 10 words with the highest average attention to context words are provided in Table 2. An interesting finding is that contextual attention is high for the translation of “it”, “yours”, “ones”, “you” and “I”, which are indeed very ambiguous out-of-context when translating into Russian. For example, “it” will be translated as third person singular masculine, feminine or neuter, or third person plural depending on its antecedent. word attn pos word attn pos it 0.376 5.5 it 0.342 6.8 yours 0.338 8.4 yours 0.341 8.3 yes 0.332 2.5 ones 0.318 7.5 i 0.328 3.3 ’m 0.301 4.8 yeah 0.314 1.4 you 0.287 5.6 you 0.311 4.8 am 0.274 4.4 ones 0.309 8.3 i 0.262 5.2 ’m 0.298 5.1 ’s 0.260 5.6 wait 0.281 3.8 one 0.259 6.5 well 0.273 2.1 won 0.258 4.6 Table 2: Top-10 words with the highest average attention to context words. attn gives an average attention to context words, pos gives an average position of the source word. Left part is for words on all positions, right — for words on positions higher than first. “You” can be second person singular impolite or polite, or plural. Also, verbs must agree in gender and number with the translation of “you”. It might be not obvious why “I” has high contextual attention, as it is not ambiguous itself. However, in past tense, verbs must agree with “I” in gender, so to translate past tense sentences properly, the source encoder must predict speaker gender, and the context may provide useful indicators. Most surprising is the appearance of “yes”, “yeah”, and “well” in the list of context-dependent words, similar to the finding by Tiedemann and Scherrer (2017). We note that these words mostly appear in sentence-initial position, and in relatively short sentences. If only words after the first are considered, they disappear from the top-10 list. We hypothesize that the amount of attention to context not only depends on the words themselves, but also on factors such as sentence length and position, and we test this hypothesis in the next section. 5.2.2 Dependence on sentence length and position We compute useful attention mass coming to context by averaging over source words. Figure 2 illustrates the dependence of this average attention mass on sentence length. We observe a disproportionally high attention on context for short sentences, and a positive correlation between the average contextual attention and context length. It is also interesting to see the importance given to the context at different positions in the source 1269 Figure 2: Average attention to context words vs. both source and context length Figure 3: Average attention to context vs. source token position sentence. We compute an average attention mass to context for a set of 1500 sentences of the same length. As can be seen in Figure 3, words at the beginning of a source sentence tend to attend to context more than words at the end of a sentence. This correlates with standard view that English sentences present hearer-old material before hearer-new. There is a clear (negative) correlation between sentence length and the amount of attention placed on contextual history, and between token position and the amount of attention to context, which suggests that context is especially helpful at the beginning of a sentence, and for shorter sentences. 
However, Figure 4 shows that there is no straightforward dependence of BLEU improvement on source length. This means that while attention on context is disproportionally high for short sentences, context does not seem disproportionally more useful for these sentences. 5.3 Analysis of pronoun translation The analysis of the attention model indicates that the model attends heavily to the contextual history for the translation of some pronouns. Here, we investigate whether this context-aware modelling results in empirical improvements in translation Figure 4: BLEU score vs. source sentence length quality, and whether the model learns structures related to anaphora resolution. 5.3.1 Ambiguous pronouns and translation quality Ambiguous pronouns are relatively sparse in a general-purpose test set, and previous work has designed targeted evaluation of pronoun translation (Hardmeier et al., 2015; Miculicich Werlen and Popescu-Belis, 2017; Bawden et al., 2018). However, we note that in Russian, grammatical gender is not only marked on pronouns, but also on adjectives and verbs. Rather than using a pronoun-specific evaluation, we present results with BLEU on test sets where we hypothesize context to be relevant, specifically sentences containing co-referential pronouns. We feed Stanford CoreNLP open-source coreference resolution system (Manning et al., 2014a) with pairs of sentences to find examples where there is a link between one of the pronouns under consideration and the context. We focus on anaphoric instances of “it” (this excludes, among others, pleonastic uses of ”it”), and instances of the pronouns “I”, “you”, and “yours” that are coreferent with an expression in the previous sentence. All these pronouns express ambiguity in the translation into Russian, and the model has learned to attend to context for their translation (Table 2). To combat data sparsity, the test sets are extracted from large amounts of held-out data of OpenSubtitles2018. Table 3 shows BLEU scores for the resulting subsets. First of all, we see that most of the antecedents in these test sets are also pronouns. Antecedent pronouns should not be particularly informative for translating the source pronoun. Nevertheless, even with such contexts, improvements are generally larger than on the overall test set. When we focus on sentences where the antecedent for pronoun under consideration contains 1270 pronoun N #pronominal antecedent baseline our model difference it 11128 6604 25.4 26.6 +1.2 you 6398 5795 29.7 30.8 +1.1 yours 2181 2092 24.1 25.2 +1.1 I 8205 7496 30.1 30.0 -0.1 Table 3: BLEU for test sets with coreference between pronoun and a word in context sentence. We show both N, the total number of instances in a particular test set, and number of instances with pronominal antecedent. Significant BLEU differences are in bold. word N baseline our model diff. it 4524 23.9 26.1 +2.2 you 693 29.9 31.7 +1.8 I 709 29.1 29.7 +0.6 Table 4: BLEU for test sets of pronouns having a nominal antecedent in context sentence. N: number of examples in the test set. type N baseline our model diff. masc. 2509 26.9 27.2 +0.3 fem. 2403 21.8 26.6 +4.8 neuter 862 22.1 24.0 +1.9 plural 1141 18.2 22.5 +4.3 Table 5: BLEU for test sets of pronoun “it” having a nominal antecedent in context sentence. N: number of examples in the test set. a noun, we observe even larger improvements (Table 4). 
Improvement is smaller for “I”, but we note that verbs with first person singular subjects mark gender only in the past tense, which limits the impact of correctly predicting gender. In contrast, different types of “you” (polite/impolite, singular/plural) lead to different translations of the pronoun itself plus related verbs and adjectives, leading to a larger jump in performance. Examples of nouns co-referent with “I” and “you” include names, titles (“Mr.”, “Mrs.”, “officer”), terms denoting family relationships (“Mom”, “Dad”), and terms of endearment (“honey”, “sweetie”). Such nouns can serve to disambiguate number and gender of the speaker or addressee, and mark the level of familiarity between them. The most interesting case is translation of “it”, as “it” can have many different translations into Russian, depending on the grammatical gender of the antecedent. In order to disentangle these cases, we train the Berkeley aligner on 10m sentences and use the trained model to divide the test set with “it” referring to a noun into test sets specific to each gender and number. Results are in Table 5. pronoun agreement (in %) random first last attention it 69 66 72 69 you 76 85 71 80 I 74 81 73 78 Table 6: Agreement with CoreNLP for test sets of pronouns having a nominal antecedent in context sentence (%). We see an improvement of 4-5 BLEU for sentences where “it” is translated into a feminine or plural pronoun by the reference. For cases where “it” is translated into a masculine pronoun, the improvement is smaller because the masculine gender is more frequent, and the context-agnostic baseline tends to translate the pronoun “it” as masculine. 5.3.2 Latent anaphora resolution The results in Tables 4 and 5 suggest that the context-aware model exploits information about the antecedent of an ambiguous pronoun. We hypothesize that we can interpret the model’s attention mechanism as a latent anaphora resolution, and perform experiments to test this hypothesis. For test sets from Table 4, we find an antecedent noun phrase (usually a determiner or a possessive pronoun followed by a noun) using Stanford CoreNLP (Manning et al., 2014b). We select only examples where a noun phrase contains a single noun to simplify our analysis. Then we identify which token receives the highest attention weight (excluding <bos> and <eos> tokens and punctuation). If this token falls within the antecedent span, then we treat it as agreement (see Table 6). One natural question might be: does the attention component in our model genuinely learn to perform anaphora resolution, or does it capture some simple heuristic (e.g., pointing to the last noun)? To answer this question, we consider several baselines: choosing a random, last or first 1271 pronoun agreement (in %) random first last attention it 40 36 52 58 you 42 63 29 67 I 39 56 35 62 Table 7: Agreement with CoreNLP for test sets of pronouns having a nominal antecedent in context sentence (%). Examples with ≥1 noun in context sentence. noun from the context sentence as an antecedent. Note that an agreement of the last noun for “it” or the first noun for “you” and “I” is very high. This is partially due to the fact that most context sentences have only one noun. For these examples a random and last predictions are always correct, meanwhile attention does not always pick a noun as the most relevant word in the context. To get a more clear picture let us now concentrate only on examples where there is more than one noun in the context (Table 7). 
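The agreement statistic behind Tables 6 and 7 (does the most-attended context token, again excluding <bos>, <eos> and punctuation, fall inside the antecedent span predicted by CoreNLP, compared with the random/first/last-noun heuristics?) can be sketched as follows. The data format and the exclusion set are assumptions for illustration, not the actual evaluation code:

```python
import random

EXCLUDED = {"<bos>", "<eos>", ",", ".", "!", "?"}      # assumed exclusion set

def agreement(examples):
    """examples: (context_tokens, attn_weights, antecedent_span, noun_positions) tuples.
    attn_weights: per-context-token attention for the source pronoun;
    antecedent_span: (start, end) indices of the CoreNLP antecedent noun phrase."""
    hits = {"attention": 0, "random": 0, "first": 0, "last": 0}
    for ctx, attn, (start, end), nouns in examples:
        # most-attended context token, skipping <bos>/<eos>/punctuation
        best = max((i for i, t in enumerate(ctx) if t not in EXCLUDED),
                   key=lambda i: attn[i])
        hits["attention"] += start <= best < end
        hits["random"]    += start <= random.choice(nouns) < end
        hits["first"]     += start <= nouns[0] < end
        hits["last"]      += start <= nouns[-1] < end
    return {k: v / len(examples) for k, v in hits.items()}

example = (["<bos>", "the", "heart", "stopped", "<eos>"],
           [0.05, 0.10, 0.60, 0.20, 0.05], (1, 3), [2])
print(agreement([example]))   # here the attention and all heuristics agree
```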
We can now see that the attention weights are in much better agreement with the coreference system than any of the heuristics. This indicates that the model is indeed performing anaphora resolution. While agreement with CoreNLP is encouraging, we are aware that coreference resolution by CoreNLP is imperfect and partial agreement with it may not necessarily indicate that the attention is particularly accurate. In order to control for this, we asked human annotators to manually evaluate 500 examples from the test sets where CoreNLP predicted that “it” refers to a noun in the context sentence. More precisely, we picked random 500 examples from the test set with “it” from Table 7. We marked the pronoun in a source which CoreNLP found anaphoric. Assessors were given the source and context sentences and were asked to mark an antecedent noun phrase for a marked pronoun in a source sentence or say that there is no antecedent at all. We then picked those examples where assessors found a link from “it” to some noun in context (79% of all examples). Then we evaluated agreement of CoreNLP and our model with the ground truth links. We also report the performance of the best heuristic for “it” from our previous analysis (i.e. last noun in context). The results are provided in Table 8. The agreement between our model and the ground truth is 72%. Though 5% below the coreference system, this is a lot higher than the best agreement (in %) CoreNLP 77 attention 72 last noun 54 Table 8: Performance of CoreNLP and our model’s attention mechanism compared to human assessment. Examples with ≥1 noun in context sentence. Figure 5: An example of an attention map between source and context. On the y-axis are the source tokens, on the x-axis the context tokens. Note the high attention between “it” and its antecedent “heart”. CoreNLP right wrong attn right 53 19 attn wrong 24 4 Table 9: Performance of CoreNLP and our model’s attention mechanism compared to human assessment (%). Examples with ≥1 noun in context sentence. heuristic (+18%). This confirms our conclusion that our model performs latent anaphora resolution. Interestingly, the patterns of mistakes are quite different for CoreNLP and our model (Table 9). We also present one example (Figure 5) where the attention correctly predicts anaphora while CoreNLP fails. Nevertheless, there is room for improvement, and improving the attention component is likely to boost translation performance. 6 Related work Our analysis focuses on how our context-aware neural model implicitly captures anaphora. Early work on anaphora phenomena in statistical machine translation has relied on external systems for coreference resolution (Le Nagard and Koehn, 2010; Hardmeier and Federico, 2010). Results 1272 were mixed, and the low performance of coreference resolution systems was identified as a problem for this type of system. Later work by Hardmeier et al. (2013) has shown that cross-lingual pronoun prediction systems can implicitly learn to resolve coreference, but this work still relied on external feature extraction to identify anaphora candidates. Our experiments show that a contextaware neural machine translation system can implicitly learn coreference phenomena without any feature engineering. Tiedemann and Scherrer (2017) and Bawden et al. (2018) analyze the attention weights of context-aware NMT models. 
Tiedemann and Scherrer (2017) find some evidence for aboveaverage attention on contextual history for the translation of pronouns, and our analysis goes further in that we are the first to demonstrate that our context-aware model learns latent anaphora resolution through the attention mechanism. This is contrary to Bawden et al. (2018), who do not observe increased attention between a pronoun and its antecedent in their recurrent model. We deem our model more suitable for analysis, since it has no recurrent connections and fully relies on the attention mechanism within a single attention layer. 7 Conclusions We introduced a context-aware NMT system which is based on the Transformer architecture. When evaluated on an En-Ru parallel corpus, it outperforms both the context-agnostic baselines and a simple context-aware baseline. We observe that improvements are especially prominent for sentences containing ambiguous pronouns. We also show that the model induces anaphora relations. We believe that further improvements in handling anaphora, and by proxy translation, can be achieved by incorporating specialized features in the attention model. Our analysis has focused on the effect of context information on pronoun translation. Future work could also investigate whether context-aware NMT systems learn other discourse phenomena, for example whether they improve the translation of elliptical constructions, and markers of discourse relations and information structure. Acknowledgments We would like to thank Bonnie Webber for helpful discussions and annonymous reviewers for their comments. The authors also thank David Talbot and Yandex Machine Translation team for helpful discussions and inspiration. Ivan Titov acknowledges support of the European Research Council (ERC StG BroadSem 678254) and the Dutch National Science Foundation (NWO VIDI 639.022.518). Rico Sennrich has received funding from the Swiss National Science Foundation (105212 169888). References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the Third International Conference on Learning Representations (ICLR 2015). San Diego. Rachel Bawden, Rico Sennrich, Alexandra Birch, and Barry Haddow. 2018. Evaluating Discourse Phenomena in Neural Machine Translation. In Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. New Orleans, USA. Marine Carpuat. 2009. One Translation Per Discourse. In Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions. Association for Computational Linguistics, Boulder, Colorado, pages 19–27. http://www.aclweb.org/anthology/W09-2404. Zhengxian Gong, Min Zhang, Chew Lim Tan, and Guodong Zhou. 2012. N-gram-based tense models for statistical machine translation. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, Jeju Island, Korea, pages 276–285. http://www.aclweb.org/anthology/D121026. Zhengxian Gong, Min Zhang, and Guodong Zhou. 2011. Cache-based Document-level Statistical Machine Translation. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Edinburgh, Scotland, UK., pages 909–919. http://www.aclweb.org/anthology/D11-1084. Christian Hardmeier. 2012. 
Discourse in statistical machine translation: A survey and a case study. Discours 11. Christian Hardmeier and Marcello Federico. 2010. Modelling Pronominal Anaphora in Statistical Machine Translation. In Proceedings of the seventh International Workshop on Spoken Language Translation (IWSLT). pages 283–289. 1273 Christian Hardmeier, Preslav Nakov, Sara Stymne, J¨org Tiedemann, Yannick Versley, and Mauro Cettolo. 2015. Pronoun-Focused MT and Cross-Lingual Pronoun Prediction: Findings of the 2015 DiscoMT Shared Task on Pronoun Translation. In Proceedings of the Second Workshop on Discourse in Machine Translation. Association for Computational Linguistics, Lisbon, Portugal, pages 1–16. https://doi.org/10.18653/v1/W15-2501. Christian Hardmeier, J¨org Tiedemann, and Joakim Nivre. 2013. Latent anaphora resolution for crosslingual pronoun prediction. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Seattle, Washington, USA, pages 380–391. http://www.aclweb.org/anthology/D131037. Eva Hasler, Phil Blunsom, Philipp Koehn, and Barry Haddow. 2014. Dynamic topic adaptation for phrase-based mt. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Gothenburg, Sweden, pages 328–337. https://doi.org/10.3115/v1/E14-1035. Sebastien Jean, Stanislas Lauly, Orhan Firat, and Kyunghyun Cho. 2017. Does Neural Machine Translation Benefit from Larger Context? In arXiv:1704.05135. ArXiv: 1704.05135. Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, and Noah A Smith. 2017. Dynamic entity representations in neural language models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Copenhagen, Denmark, pages 1830–1839. https://doi.org/10.18653/v1/D17-1195. Ronan Le Nagard and Philipp Koehn. 2010. Aiding pronoun translation with co-reference resolution. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR. Association for Computational Linguistics, Uppsala, Sweden, pages 252–261. http://www.aclweb.org/anthology/W10-1737. Pierre Lison, J¨org Tiedemann, and Milen Kouylekov. 2018. Opensubtitles2018: Statistical rescoring of sentence alignments in large, noisy parallel corpora. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). Miyazaki, Japan. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014a. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics, Baltimore, Maryland, pages 55–60. https://doi.org/10.3115/v1/P14-5010. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014b. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics, Baltimore, Maryland, pages 55–60. https://doi.org/10.3115/v1/P14-5010. Thomas Meyer, Andrei Popescu-Belis, Najeh Hajlaoui, and Andrea Gesmundo. 2012. Machine Translation of Labeled Discourse Connectives. In Proceedings of the Tenth Conference of the Association for Machine Translation in the Americas (AMTA). 
http://www.mt-archive.info/AMTA-2012Meyer.pdf. Lesly Miculicich Werlen and Andrei Popescu-Belis. 2017. Validation of an automatic metric for the accuracy of pronoun translation (apt). In Proceedings of the Third Workshop on Discourse in Machine Translation. Association for Computational Linguistics, Copenhagen, Denmark, pages 17–25. https://doi.org/10.18653/v1/W17-4802. Ruslan Mitkov. 1999. Introduction: Special issue on anaphora resolution in machine translation and multilingual nlp. Machine Translation 14(3/4):159– 161. http://www.jstor.org/stable/40006919. Stefan Riezler and John T. Maxwell. 2005. On some pitfalls in automatic evaluation and significance testing for mt. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization. Association for Computational Linguistics, Ann Arbor, Michigan, pages 57–64. https://www.aclweb.org/anthology/W05-0908. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1715–1725. https://doi.org/10.18653/v1/P16-1162. Jinsong Su, Hua Wu, Haifeng Wang, Yidong Chen, Xiaodong Shi, Huailin Dong, and Qun Liu. 2012. Translation model adaptation for statistical machine translation with monolingual topic information. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Jeju Island, Korea, pages 459–468. http://www.aclweb.org/anthology/P12-1048. J¨org Tiedemann. 2010. Context Adaptation in Statistical Machine Translation Using Models with Exponentially Decaying Cache. In Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing. Association for Computational Linguistics, Uppsala, Sweden, pages 8–15. http://www.aclweb.org/anthology/W10-2602. 1274 J¨org Tiedemann and Yves Scherrer. 2017. Neural Machine Translation with Extended Context. In Proceedings of the Third Workshop on Discourse in Machine Translation. Association for Computational Linguistics, Copenhagen, Denmark, DISCOMT’17, pages 82–92. https://doi.org/10.18653/v1/W174811. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. Los Angeles. http://papers.nips.cc/paper/7181-attention-isall-you-need.pdf. Longyue Wang, Zhaopeng Tu, Andy Way, and Qun Liu. 2017. Exploiting Cross-Sentence Context for Neural Machine Translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Denmark, Copenhagen, EMNLP’17, pages 2816–2821. https://doi.org/10.18653/v1/D17-1301. Sam Wiseman, Alexander M Rush, and Stuart M Shieber. 2016. Learning global features for coreference resolution. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 994–1004. https://doi.org/10.18653/v1/N16-1114.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1275–1284 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1275 Document Context Neural Machine Translation with Memory Networks Sameen Maruf and Gholamreza Haffari Faculty of Information Technology, Monash University, Australia {firstname.lastname}@monash.edu Abstract We present a document-level neural machine translation model which takes both source and target document context into account using memory networks. We model the problem as a structured prediction problem with interdependencies among the observed and hidden variables, i.e., the source sentences and their unobserved target translations in the document. The resulting structured prediction problem is tackled with a neural translation model equipped with two memory components, one each for the source and target side, to capture the documental interdependencies. We train the model endto-end, and propose an iterative decoding algorithm based on block coordinate descent. Experimental results of English translations from French, German, and Estonian documents show that our model is effective in exploiting both source and target document context, and statistically significantly outperforms the previous work in terms of BLEU and METEOR. 1 Introduction Neural machine translation (NMT) has proven to be powerful (Sutskever et al., 2014; Bahdanau et al., 2015). It is on-par, and in some cases, even surpasses the traditional statistical MT (Luong et al., 2015) while enjoying more flexibility and significantly less manual effort for feature engineering. Despite their flexibility, most neural MT models translate sentences independently. Discourse phenomenon such as pronominal anaphora and lexical consistency, may depend on long-range dependency going farther than a few previous sentences, are neglected in sentencebased translation (Bawden et al., 2017). There are only a handful of attempts to document-wide machine translation in statistical and neural MT camps. Hardmeier and Federico (2010); Gong et al. (2011); Garcia et al. (2014) propose document translation models based on statistical MT but are restrictive in the way they incorporate the document-level information and fail to gain significant improvements. More recently, there have been a few attempts to incorporate source side context into neural MT (Jean et al., 2017; Wang et al., 2017; Bawden et al., 2017); however, these works only consider a very local context including a few previous source/target sentences, ignoring the global source and target documental contexts. The latter two report deteriorated performance when using the target-side context. In this paper, we present a document-level machine translation model which combines sentencebased NMT (Bahdanau et al., 2015) with memory networks (Sukhbaatar et al., 2015). We capture the global source and target document context with two memory components, one each for the source and target side, and incorporate it into the sentence-based NMT by changing the decoder to condition on it as the sentence translation is generated. We conduct experiments on three language pairs: French-English, German-English and Estonian-English. The experimental results and analysis demonstrate that our model is effective in exploiting both source and target document context, and statistically significantly outperforms the previous work in terms of BLEU and METEOR. 
2 Background

2.1 Neural Machine Translation (NMT)

Our document NMT model is grounded on the sentence-based NMT model (Bahdanau et al., 2015), which contains an encoder to read the source sentence as well as an attentional decoder to generate the target translation.

Encoder It is a bidirectional RNN consisting of two RNNs running in opposite directions over the source sentence:

\overrightarrow{h}_i = \overrightarrow{\mathrm{RNN}}(\overrightarrow{h}_{i-1}, E_S[x_i]), \quad \overleftarrow{h}_i = \overleftarrow{\mathrm{RNN}}(\overleftarrow{h}_{i+1}, E_S[x_i])

where E_S[x_i] is the embedding of the word x_i from the embedding table E_S of the source language, and \overrightarrow{h}_i and \overleftarrow{h}_i are the hidden states of the forward and backward RNNs, which can be based on LSTM (Hochreiter and Schmidhuber, 1997) or GRU (Cho et al., 2014) units. Each word in the source sentence is then represented by the concatenation of the corresponding bidirectional hidden states, h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i].

Decoder The generation of each word y_j is conditioned on all of the previously generated words y_{<j} via the state of the RNN decoder s_j, and on the source sentence via a dynamic context vector c_j:

y_j \sim \mathrm{softmax}(W_y \cdot r_j + b_r)
r_j = \tanh(s_j + W_{rc} \cdot c_j + W_{rj} \cdot E_T[y_{j-1}])
s_j = \tanh(W_s \cdot s_{j-1} + W_{sj} \cdot E_T[y_{j-1}] + W_{sc} \cdot c_j)

where E_T[y_j] is the embedding of the word y_j from the embedding table E_T of the target language, and the W matrices and the vector b_r are the parameters. The dynamic context vector c_j is computed via c_j = \sum_i \alpha_{ji} h_i, where

\alpha_j = \mathrm{softmax}(a_j), \quad a_{ji} = v \cdot \tanh(W_{ae} \cdot h_i + W_{at} \cdot s_{j-1})

This is known as the attention mechanism, which dynamically attends to the relevant parts of the source necessary for generating the next target word.

2.2 Memory Networks (MemNets)

Memory Networks (Weston et al., 2015) are a class of neural models that use external memories to perform inference based on long-range dependencies. A memory is a collection of vectors M = \{m_1, \ldots, m_K\} constituting the memory cells, where each cell m_k may potentially correspond to a discrete object x_k. The memory is equipped with a read and, optionally, a write operation. Given a query vector q, the output vector generated by reading from the memory is \sum_{i=1}^{|M|} p_i m_i, where p_i represents the relevance of the query to the i-th memory cell, with p = \mathrm{softmax}(q^\top \cdot M). For the rest of the paper, we denote the read operation by MemNet(M, q).

3 Document NMT as Structured Prediction

We formulate document-wide machine translation as a structured prediction problem. Given a set of sentences \{x_1, \ldots, x_{|d|}\} in a source document d, we are interested in generating the collection of their translations \{y_1, \ldots, y_{|d|}\}, taking into account the interdependencies among them imposed by the document. We use the factor graph in Figure 1 to model the probability of the target document given the source document.

[Figure 1: Factor graph for document-level MT]

Our model has two types of factors:

• f_\theta(y_t; x_t, x_{-t}) to capture the interdependencies between the translation y_t, the corresponding source sentence x_t and all the other sentences in the source document x_{-t}, and
• g_\theta(y_t; y_{-t}) to capture the interdependencies between the translation y_t and all the other translations in the document y_{-t}.

Hence, the probability of a document translation given the source document is

P(y_1, \ldots, y_{|d|} \mid x_1, \ldots, x_{|d|}) \propto \exp\Big( \sum_t f_\theta(y_t; x_t, x_{-t}) + g_\theta(y_t; y_{-t}) \Big).

The factors f_\theta and g_\theta are realised by neural architectures whose parameters are collectively denoted by \theta.

Training It is challenging to train the model parameters by maximising the (regularised) likelihood since computing the partition function is hard.
This is due to the enormity of the factors g_\theta(y_t; y_{-t}) over a large number of translation variables y_t (i.e., the number of sentences in the document) as well as their unbounded domain (i.e., all sentences in the target language). Thus, we resort to maximising the pseudo-likelihood (Besag, 1975) for training the parameters:

\arg\max_\theta \prod_{d \in D} \prod_{t=1}^{|d|} P_\theta(y_t \mid x_t, y_{-t}, x_{-t})   (1)

where D is the set of bilingual training documents, and |d| denotes the number of (bilingual) sentences in the document d = \{(x_t, y_t)\}_{t=1}^{|d|}. We directly model the document-conditioned NMT model P_\theta(y_t \mid x_t, y_{-t}, x_{-t}) using a neural architecture which subsumes both the f_\theta and g_\theta factors (covered in the next section).

Decoding To generate the best translation for a document according to our model, we need to solve the following optimisation problem:

\arg\max_{y_1, \ldots, y_{|d|}} \prod_{t=1}^{|d|} P_\theta(y_t \mid x_t, y_{-t}, x_{-t})

which is hard (for reasons similar to those mentioned earlier). We hence resort to a block coordinate descent optimisation algorithm. More specifically, we initialise the translation of each sentence using the base neural MT model P(y_t \mid x_t). We then repeatedly visit each sentence in the document, and update its translation using our document-context dependent NMT model P(y_t \mid x_t, y_{-t}, x_{-t}) while the translations of the other sentences are kept fixed.

4 Context Dependent NMT with MemNets

We augment the sentence-level attentional NMT model by incorporating the document context (both source and target) using memory networks when generating the translation of a sentence, as shown in Figure 2. Our model generates the target translation word-by-word from left to right, similar to the vanilla attentional neural translation model. However, it conditions the generation of a target word not only on the previously generated words and the current source sentence (as in the vanilla NMT model), but also on all the other source sentences of the document and their translations. That is, the generation process is as follows:

P_\theta(y_t \mid x_t, y_{-t}, x_{-t}) = \prod_{j=1}^{|y_t|} P_\theta(y_{t,j} \mid y_{t,<j}, x_t, y_{-t}, x_{-t})   (2)

where y_{t,j} is the j-th word of the t-th target sentence, y_{t,<j} are the previously generated words, and x_{-t} and y_{-t} are as introduced previously.

Our model represents the source and target document contexts as external memories, and attends to relevant parts of these external memories when generating the translation of a sentence. Let M[x_{-t}] and M[y_{-t}] denote the external memories representing the source and target document context, respectively. These contain memory cells corresponding to all sentences in the document except the t-th sentence (described shortly). Let h_t and s_t be representations of the t-th source sentence and its current translation, from the encoder and decoder respectively. We make use of h_t as the query to get the relevant context from the source external memory:

c^{src}_t = MemNet(M[x_{-t}], h_t)

Furthermore, for the t-th sentence, we get the relevant information from the target context:

c^{trg}_t = MemNet(M[y_{-t}], s_t + W_{at} \cdot h_t)

where the query consists of the representation of the translation s_t from the decoder, endowed with that of the source sentence h_t from the encoder to make the query robust to potential noise in the current translation and to circumvent error propagation, and W_{at} projects the source representation into the hidden state space. Now that we have representations of the relevant source and target document contexts, Eq. 2 can be re-written as:

P_\theta(y_t \mid x_t, y_{-t}, x_{-t}) = \prod_{j=1}^{|y_t|} P_\theta(y_{t,j} \mid y_{t,<j}, x_t, c^{trg}_t, c^{src}_t)   (3)

More specifically, the memory contexts c^{src}_t and c^{trg}_t are incorporated into the NMT decoder as follows:

• Memory-to-Context, in which the memory contexts are incorporated when computing the next decoder hidden state:

s_{t,j} = \tanh(W_s \cdot s_{t,j-1} + W_{sj} \cdot E_T[y_{t,j}] + W_{sc} \cdot c_{t,j} + W_{sm} \cdot c^{src}_t + W_{st} \cdot c^{trg}_t)

• Memory-to-Output, in which the memory contexts are incorporated in the output layer:

y_{t,j} \sim \mathrm{softmax}(W_y \cdot r_{t,j} + W_{ym} \cdot c^{src}_t + W_{yt} \cdot c^{trg}_t + b_r)

where W_{sm}, W_{st}, W_{ym}, and W_{yt} are the new parameter matrices.

[Figure 2: Our Memory-to-Context document NMT model, consisting of the sentence-based NMT model with source and target external memories.]

We use only the source, only the target, or both external memories as the additional conditioning contexts. Furthermore, we use either the Memory-to-Context or the Memory-to-Output architecture for incorporating the document contexts. In the experiments, we will explore these different options to investigate the most effective combination. We now turn our attention to the construction of the external memories for the source and target sides of a document.

The Source Memory We make use of a hierarchical 2-level RNN architecture to construct the external memory of the source document. More specifically, we pass each sentence of the document through a sentence-level bidirectional RNN to get the representation of the sentence (by concatenating the last hidden states of the forward and backward RNNs). We then pass the sentence representations through a document-level bidirectional RNN to propagate the sentences' information across the document. We take the hidden states of the document-level bidirectional RNN as the memory cells of the source external memory. The source external memory is built once for each minibatch, and does not change throughout the document translation. To be able to fit the computational graph of the document NMT model within GPU memory limits, we pre-train the sentence-level bidirectional RNN using the language modelling training objective. However, the document-level bidirectional RNN is trained together with the other parameters of the document NMT model by back-propagating the document translation training objective.

The Target Memory The memory cells of the target external memory represent the current translations of the document. Recall from the previous section that we use coordinate descent to iteratively update these translations. Let \{y_1, \ldots, y_{|d|}\} be the current translations, and let \{s_{|y_1|}, \ldots, s_{|y_{|d|}|}\} be the last states of the decoder when these translations were generated. We use these last decoder states as the cells of the external target memory. We could make use of hierarchical sentence-document RNNs to transform the document translations into memory cells (similar to what we do for the source memory); however, it would have been computationally expensive and may have resulted in error propagation. We will show in the experiments that our efficient target memory construction is indeed effective.

5 Experiments and Analysis

Datasets. We conducted experiments on three language pairs: French-English, German-English and Estonian-English. Table 1 shows the statistics of the datasets used in our experiments. The French-English dataset is based on the TED Talks corpus[1] (Cettolo et al., 2012), where each talk is considered a document. The Estonian-English data comes from the Europarl v7 corpus[2] (Koehn, 2005). Following Smith et al.
(2013), we split the speeches based on the SPEAKER tag and treat them as documents. The FrenchEnglish and Estonian-English corpora were randomly split into train/dev/test sets. For GermanEnglish, we use the News Commentary v9 corpus3 for training, news-dev2009 for development, 1https://wit3.fbk.eu/ 2http://www.statmt.org/europarl/ 3http://statmt.org/wmt14/news-commentary-v9-bydocument.tgz 1279 # docs # sents doc len src/tgt vocab Fr-En 10/1.2/1.5 123/15/19 123/128/124 25.1/21 Et-En 150/10/18 209/14/25 14/14/14 48.6/24.9 De-En 49/.9/1.1/1.6 191/2/3/3 39/23/27/19 45.1/34.7 Table 1: Training/dev/test corpora statistics: number of documents (×100) and sentences (×1000), average document length (in sentences) and source/target vocabulary size (×1000). For DeEn, we report statistics of the two test sets news-test2011 and news-test2016. and news-test2011 and news-test2016 as the test sets. The news-commentary corpus has document boundaries already provided. We pre-processed all corpora to remove very short documents and those with missing translations. Out-of-vocabulary and rare words (frequency less than 5) are replaced by the <UNK> token, following Cohn et al. (2016).4 Evaluation Measures We use BLEU (Papineni et al., 2002) and METEOR (Lavie and Agarwal, 2007) scores to measure the quality of the generated translations. We use bootstrap resampling (Clark et al., 2011) to measure statistical significance, p < 0.05, comparing to the baselines. Implementation and Hyperparameters We implement our document-level neural machine translation model in C++ using the DyNet library (Neubig et al., 2017), on top of the basic sentence-level NMT implementation in mantis (Cohn et al., 2016). For the source memory, the sentence and document-level bidirectional RNNs use LSTM and GRU units, respectively. The translation model uses GRU units for the bidirectional RNN encoder and the 2-layer RNN decoder. GRUs are used instead of LSTMs to reduce the number of parameters in the main model. The RNN hidden dimensions and word embedding sizes are set to 512 in the translation and memory components, and the alignment dimension is set to 256 in the translation model. Training We use a stage-wise method to train the variants of our document context NMT model. Firstly, we pre-train the Memory-toContext/Memory-to-Output models, setting their readings from the source and target memories to 4We do not split words into subwords using BPE (Sennrich et al., 2016) as that increases sentence lengths resulting in removing long documents due to GPU memory limitations, which would heavily reduce the amount of data that we have. the zero vector. This effectively learns parameters associated with the underlying sentence-based NMT model, which is then used as initialisation when training all parameters in the second stage (including the ones from the first stage). For the first stage, we make use of stochastic gradient descent (SGD)5 with initial learning rate of 0.1 and a decay factor of 0.5 after the fourth epoch for a total of ten epochs. The convergence occurs in 6-8 epochs. For the second stage, we use SGD with an initial learning rate of 0.08 and a decay factor of 0.9 after the first epoch for a total of 15 epochs6. The best model is picked based on the dev-set perplexity. To avoid overfitting, we employ dropout with the rate 0.2 for the single memory model. For the dual memory model, we set dropout for Document RNN to 0.2 and for the encoder and decoder to 0.5. Mini-batching is used in both stages to speed up training. 
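The hierarchical source memory described in the previous section reduces to two stacked bidirectional encoders. The following is a PyTorch-style sketch of that construction; it is only illustrative (the paper's implementation is written in C++ with DyNet, uses LSTM units at the sentence level and GRU units at the document level, and pre-trains the sentence encoder with a language-modelling objective), and all names are ours.

import torch
import torch.nn as nn

class SourceMemory(nn.Module):
    # hierarchical 2-level encoder: one memory cell per source sentence
    def __init__(self, emb_dim, hidden_dim):
        super().__init__()
        # sentence-level BiLSTM (pre-trained in the paper),
        # document-level BiGRU trained with the rest of the model
        self.sent_rnn = nn.LSTM(emb_dim, hidden_dim,
                                bidirectional=True, batch_first=True)
        self.doc_rnn = nn.GRU(2 * hidden_dim, hidden_dim,
                              bidirectional=True, batch_first=True)

    def forward(self, doc):
        # doc: list of |d| tensors, each (sent_len, emb_dim) of embedded words
        sent_reprs = []
        for sent in doc:
            _, (h, _) = self.sent_rnn(sent.unsqueeze(0))       # h: (2, 1, H)
            sent_reprs.append(torch.cat([h[0, 0], h[1, 0]]))   # fwd/bwd last states
        sents = torch.stack(sent_reprs).unsqueeze(0)           # (1, |d|, 2H)
        cells, _ = self.doc_rnn(sents)                         # (1, |d|, 2H)
        return cells.squeeze(0)                                # one cell per sentence

When translating sentence t, the t-th memory cell is simply left out to obtain M[x_{−t}].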
For the largest dataset, the document NMT model takes about 4.5 hours per epoch to train on a single P100 GPU, while the sentence-level model takes about 3 hours per epoch for the same settings. When training the document NMT model in the second stage, we need the target memory. One option would be to use the ground truth translations for building the memory. However, this may result in inferior training, since at the test time, the decoder iteratively updates the translation of sentences based on the noisy translations of other sentences (accessed via the target memory). Hence, while training the document NMT model, we construct the target memory from the translations generated by the pre-trained sentence-level model7. This effectively exposes the model to its potential test-time mistakes during the training time, resulting in more robust learned parameters. 5.1 Main Results We have three variants of our model, using: (i) only the source memory (S-NMT+src mem), (ii) only the target memory (S-NMT+trg mem), or 5In our initial experiments, we found SGD to be more effective than Adam/Adagrad; an observation also made by Bahar et al. (2017). 6For the document NMT model training, we did some preliminary experiments using different learning rates and used the scheme which converged to the best perplexity in the least number of epochs while for sentence-level training we follow Cohn et al. (2016). 7We report results for two-pass decoding, i.e., we only update the translations once using the initial translations generated from the base model. We tried multiple passes of decoding at test-time but it was not helpful. 1280 Memory-to-Context Memory-to-Output BLEU METEOR BLEU METEOR Fr→En De→En Et→En Fr→En De→En Et→En Fr→En De→En Et→En Fr→En De→En Et→En NC-11 NC-16 NC-11 NC-16 NC-11 NC-16 NC-11 NC-16 S-NMT 20.85 5.24 9.18 20.42 23.27 10.90 14.35 24.65 20.85 5.24 9.18 20.42 23.27 10.90 14.35 24.65 +src 21.91† 6.26† 10.20† 22.10† 24.04† 11.52† 15.45† 25.92† 21.80† 6.10† 9.98† 21.50† 23.99† 11.53† 15.29† 25.44† +trg 21.74† 6.24† 9.97† 21.94† 23.98† 11.58† 15.32† 25.89† 21.76† 6.31† 10.04† 21.82† 24.06† 12.10† 15.75† 25.93† +both 22.00† 6.57† 10.54† 22.32† 24.40† 12.24† 16.18† 26.34† 21.77† 6.20† 10.23† 22.20† 24.27† 11.84† 15.82† 26.10† Table 2: BLEU and METEOR scores for the sentence-level baseline (S-NMT) vs. variants of our Document NMT model. bold: Best performance, †: Statistically significantly better than the baseline. Memory-to-Context Memory-to-Output Lang. Pair Fr→En De→En Et→En Fr→En De→En Et→En S-NMT 42.5 66.8 58.4 42.5 66.8 58.5 +src mem 48.8 73.1 64.8 68.7 107.1 88.7 +trg mem 43.8 68.1 59.8 53.8 85.1 71.8 +both mems 50.1 74.4 66.1 80 125.4 102 Table 3: Number of model parameters (millions). (iii) both the source and target memories (SNMT+both mems). We compare these variants against the standard sentence-level NMT model (S-NMT). We also compare the source memory variants of our model to the local context-NMT models8 of Jean et al. (2017) and Wang et al. (2017), which use a few previous source sentences as context, added to the decoder hidden state (similar to our Memory-to-Context model). Memory-to-Context We consistently observe +1.15/+1.13 BLEU/METEOR score improvements across the three language pairs upon comparing our best model to S-NMT (see Table 2). Overall, our document NMT model with both memories has been the most effective variant for all of the three language pairs. 
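The interaction between the target memory and two-pass decoding described above can be summarised as a short refinement loop. The sketch below is a rough illustration under our own naming; translate, build_source_memory, build_target_memory and exclude are hypothetical helpers rather than functions of the released system.

def decode_document(src_sents, base_model, doc_model, passes=1):
    # pass 0: initialise every translation with the sentence-level model
    translations = [base_model.translate(x) for x in src_sents]
    src_memory = doc_model.build_source_memory(src_sents)
    for _ in range(passes):                      # block coordinate descent
        trg_memory = doc_model.build_target_memory(translations)
        for t, x_t in enumerate(src_sents):
            # update sentence t while all other translations stay fixed
            translations[t] = doc_model.translate(
                x_t,
                src_memory.exclude(t),           # M[x_-t]
                trg_memory.exclude(t))           # M[y_-t]
    return translations

Setting passes=1 corresponds to the two-pass decoding reported by the authors at test time; additional passes simply repeat the inner loop.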
We further experiment to train the target memory variants using gold translations instead of the generated ones for German-English. This led to −0.16 and −0.25 decrease9 in the BLEU scores for the target-only and both-memory variants, which confirms the intuition of constructing the target memory by exposing the model to its noises during training time.

Memory-to-Output  From Table 2, we consistently see +.95/+1.00 BLEU/METEOR improvements between the best variants of our model and the sentence-level baseline across the three language pairs. For French→English, all variants of the document NMT model show comparable performance when using BLEU; however, when evaluated using METEOR, the dual memory model is the best. For German→English, the target memory variants give comparable results, whereas for Estonian→English, the dual memory variant proves to be the best. Overall, the Memory-to-Context model variants perform better than their Memory-to-Output counterparts. We attribute this to the large number of parameters in the latter architecture (Table 3) and limited amount of data.

8We implemented and trained the baseline local context models using the same hyperparameters and training procedure that we used for training our memory models.
9The latter is a statistically significant decrease w.r.t. the both-memory model trained on generated target translations.

Figure 3: METEOR scores on De→En (NC-11) while training S-NMT with smaller vs. larger corpus; panels: (a) Memory-to-Context model, (b) Memory-to-Output model; bars compare S-NMT, S-NMT+src, S-NMT+trg and S-NMT+both.

                     BLEU                                 METEOR
                     Fr→En  De→En          Et→En    Fr→En  De→En          Et→En
                            NC-11  NC-16                   NC-11  NC-16
Jean et al. (2017)   21.95  6.04   10.26   21.67    24.10  11.61  15.56   25.77
Wang et al. (2017)   21.87  5.49   10.14   22.06    24.13  11.05  15.20   26.00
S-NMT                20.85  5.24    9.18   20.42    23.27  10.90  14.35   24.65
+src mem             21.91† 6.26♣  10.20   22.10♠   24.04† 11.52♣ 15.45♣  25.92♠
+both mems           22.00† 6.57♦  10.54♣  22.32♦   24.40♦ 12.24♦ 16.18♦  26.34♦

Table 4: Our Memory-to-Context Source Memory NMT variants vs. S-NMT and Source context NMT baselines. bold: Best performance, †, ♠, ♣, ♦: Statistically significantly better than only S-NMT, S-NMT & Jean et al. (2017), S-NMT & Wang et al. (2017), all baselines, respectively.

                     BLEU-1
                     Fr→En  De→En          Et→En
                            NC-11  NC-16
Jean et al. (2017)   52.8   30.6   39.2    51.9
Wang et al. (2017)   52.6   28.2   38.3    52.3
S-NMT                51.4   28.7   36.9    50.4
+src mem             53.0   30.5   39.1    52.6
+both mems           53.5   33.1   41.3    53.2

Table 5: Unigram BLEU for our Memory-to-Context Document NMT models vs. S-NMT and Source context NMT baselines. bold: Best performance.

We further experiment with more data for training the sentence-based NMT to investigate the extent to which document context is useful in this setting. We randomly choose an additional 300K German-English sentence pairs from WMT'14 data to train the base NMT model in stage 1. In stage 2, we use the same document corpus as before to train the document-level models. As seen from Figure 3, the document MT variants still benefit from the document context even when the base model is trained on a larger bilingual corpus. For the Memory-to-Context model, we see massive improvements of +0.72 and +1.44 METEOR scores for the source memory and dual memory model respectively, when compared to the baseline.
On the other hand, for the Memory-to-Output model, the target memory model’s METEOR score increases significantly by +1.09 compared to the baseline, slightly differing from the corresponding model using the smaller corpus (+1.2). Local Source Context Models Table 4 shows comparison of our Memory-to-Context model variants to local source context-NMT models (Jean et al., 2017; Wang et al., 2017). For French→English, our source memory model is comparable to both baselines. For German→English, our S-NMT+src mem model is comparable to Jean et al. (2017) but outperforms Wang et al. (2017) for one test set according to BLEU, and for both test sets according to METEOR. For Estonian→English, our model outperforms Jean et al. (2017). Our global source context model has only surface-level sentence information, and is oblivious to the individual words in the context since we do an offline training to get the sentence representations (as previously mentioned). However, the other two context baselines have access to that information, yet our model’s performance is either better or quite close to those models. We also look into the unigram BLEU scores to see how much our global source memory variants lead to improvement at the word-level. From Table 5, it can be seen that our model’s performance is better than the baselines for majority of the cases. The S-NMT+both mems model gives the best results for all three language pairs, showing that leveraging both source and target document context is indeed beneficial for improving MT performance. 5.2 Analysis Using Global/Local Target Context We first investigate whether using a local target context would have been equally sufficient in comparison to our global target memory model for the three datasets. We condition the decoder on the previous target sentence representation (obtained from the last hidden state of the decoder) by adding it as an additional input to all decoder states (PrevTrg) similar to our Memory-to-Context model. From Table 6, we observe that for French→English and Estonian→English, using all sentences in the target context or just the previous target sentence gives comparable results. We may attribute this to these specific datasets, that is documents from TED talks or European Parliament Proceedings may depend more on the local than on the global context. However, for German→English (NC-11), the target memory model performs the best showBLEU METEOR Lang. Pair Fr→En De→En Et→En Fr→En De→En Et→En S-NMT 20.85 5.24 20.42 23.27 10.90 24.65 +prev trg 21.75 5.93 22.08 24.03 11.40 25.94 +trg mem 21.74 6.24 21.94 23.98 11.58 25.89 Table 6: Analysis of target context model. 1282 ing that for documents with richer context (e.g. news articles) we do need the global target document context to improve MT performance. Output Analysis To better understand the dual memory model, we look at the first sentence example in Table 7. It can be seen that the source sentence has the noun “Qimonda” but the sentencelevel NMT model fails to attend to it when generating the translation. On the other hand, the single memory models are better in delivering some, if not all, of the underlying information in the source sentence but the dual memory model’s translation quality surpasses them. This is because the word “Qimonda” was being repeated in this specific document, providing a strong contextual signal to our global document context model while the local context model by Wang et al. 
(2017) is still unable to correctly translate the noun even when it has access to the word-level information of previous sentences. We resort to manual evaluation as there is no standard metric which evaluates document-level discourse information like consistency or pronominal anaphora. By manual inspection, we observe that our models can identify nouns in the source sentence to resolve coreferent pronouns, as shown in the second example of Table 7. Here the topic of the sentence is “the country under the dictatorship of Lukashenko” and our target and dual memory models are able to generate the appropriate pronoun/determiner as well as accurately translate the word ‘diktatuur’, hence producing much better translation as compared to both baselines. Apart from these improvements, our models are better in improving the readability of sentences by generating more context appropriate grammatical structures such as verbs and adverbs. Furthermore, to validate that our model improves the consistency of translations, we look at five documents (roughly 70 sentences) from the test set of Estonian-English, each of which had a word being repeated in the gold translation. Our model is able to resolve the consistency in 22 out of 32 cases as compared to the sentencebased model which only accurately translates 16 of those. Following Wang et al. (2017), we also investigate the extent to which our model can correct errors made by the baseline system. We randomly choose five documents from the test set. Out of the 20 words/phrases which were incorrectly translated by the sentence-based model, our model corrects 85% of them while also generating 10% new errors. Source qimonda t¨aidab lissaboni strateegia eesm¨arke. Target qimonda meets the objectives of the lisbon strategy. S-NMT <UNK> is the objectives of the lisbon strategy. +Src Mem the millennium development goals are fulfilling the millennium goals of the lisbon strategy. +Trg Mem in writing. - (ro) the lisbon strategy is fulfilling the objectives of the lisbon strategy. +Both Mems qimonda fulfils the aims of the lisbon strategy. Wang et al. (2017) <UNK> fulfils the objectives of the lisbon strategy. Source ... et riigis kehtib endiselt lukaˇsenka diktatuur, mis rikub inim- ning etnilise v¨ahemuse ˜oigusi. Target ... this country is still under the dictatorship of lukashenko, breaching human rights and the rights of ethnic minorities. S-NMT ... the country still remains in a position of lukashenko to violate human rights and ethnic minorities. +Src Mem ... the country still applies to the brutal dictatorship of human and ethnic minority rights. +Trg Mem ... the country still keeps the <UNK> dictatorship that violates human rights and ethnic rights. +Both Mems ... the country still persists in lukashenko’s dictatorship that violate human rights and ethnic minority rights. Wang et al. (2017) ... there is still a regime in the country that is violating the rights of human and ethnic minority in the country. Table 7: Example Et→En sentence translations (Memory-to-Context) from two test documents. 6 Related Work Document-level Statistical MT There have been a few SMT-based attempts to document MT, but they are either restrictive or do not lead to significant improvements. Hardmeier and Federico (2010) identify links among words in the source document using a word-dependency model to improve translation of anaphoric pronouns. Gong et al. 
(2011) make use of a cache-based system to save relevant information from the previously generated translations and use that to enhance document-level translation. Garcia et al. (2014) propose a two-pass approach to improve the translations already obtained by a sentencelevel model. Docent is an SMT-based document-level decoder (Hardmeier et al., 2012, 2013), which tries to modify the initial translation generated by the Moses decoder (Koehn et al., 2007) through stochastic local search and hill-climbing. Garcia et al. (2015) make use of neural-based continuous word representations to incorporate distributional semantics into Docent. In another work, Garcia et al. (2017) incorporate new word embedding features into Docent to improve the lexical consistency of translations. The proposed methods fail to yield improvements upon automatic evaluation. Larger Context Neural MT Jean et al. (2017) 1283 extend the vanilla attention-based neural MT model (Bahdanau et al., 2015) by conditioning the decoder on the previous sentence via attention over its words. Extending their model to consider the global source document context would be challenging due to the large size of computation graph over all the words in the source document. Wang et al. (2017) employ a 2-level hierarichal RNN to summarise three previous source sentences, which is then used as an additional input to the decoder hidden state. Bawden et al. (2017) use multi-encoder NMT models to exploit context from the previous source and target sentence. They highlight the importance of targetside context but report deteriorated BLEU scores when using it. All these works consider a very local source/target context and completely ignore the global source and target document contexts. 7 Conclusion We have proposed a document-level neural MT model that captures global source and target document context. Our model augments the vanilla sentence-based NMT model with external memories to incorporate documental interdependencies on both source and target sides. We show statistically significant improvements of the translation quality on three language pairs. For future work, we intend to investigate models which incorporate specific discourse-level phenomena. Acknowledgments The authors are grateful to Andr´e Martins and the anonymous reviewers for their helpful comments and corrections. This work was supported by the Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) (www. massive.org.au), and partially supported by a Google Faculty Award to GH and the Australian Research Council through DP160102686. References Parnia Bahar, Tamer Alkhouli, Jan-Thorsten Peter, Christopher Jan-Steffen Brix, and Hermann Ney. 2017. Empirical investigation of optimization algorithms in neural machine translation. In Conference of the European Association for Machine Translation, pages 13–26, Prague, Czech Republic. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations. Rachel Bawden, Rico Sennrich, Alexandra Birch, and Barry Haddow. 2017. Evaluating discourse phenomena in neural machine translation. In arXiv:1711.00513. Julian Besag. 1975. Statistical analysis of non-lattice data. Journal of the Royal Statistical Society. Series D (The Statistician), 24(3):179–195. Mauro Cettolo, Christian Girardi, and Marcello Federico. 2012. WIT3: Web inventory of transcribed and translated talks. 
In Proceedings of the 16th Conference of the European Association for Machine Translation, pages 261–268. Kyunghyun Cho, B van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation (SSST-8). Jonathan H. Clark, Chris Dyer, Alon Lavie, and Noah A. Smith. 2011. Better hypothesis testing for statistical machine translation: Controlling for optimizer instability. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (Short Papers), pages 176–181. Association for Computational Linguistics. Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vymolova, Kaisheng Yao, Chris Dyer, and Gholamreza Haffari. 2016. Incorporating structural alignment biases into an attentional neural translation model. In Proceedings of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 876–885. Association for Computational Linguistics. Eva Mart´ınez Garcia, Carles Creus, Cristina Espa˜naBonet, and Llu´ıs M`arquez. 2017. Using word embeddings to enforce document-level lexical consistency in machine translation. The Prague Bulletin of Mathematical Linguistics, 108:85–96. Eva Mart´ınez Garcia, Cristina Espa˜na-Bonet, and Llu´ıs M`arquez. 2014. Document-level machine translation as a re-translation process. Procesamiento del Lenguaje Natural, 53:103–110. Eva Mart´ınez Garcia, Cristina Espa˜na-Bonet, and Llu´ıs M`arquez. 2015. Document-level machine translation with word vector models. In Proceedings of the18th Conference of the European Association for Machine Translation, pages 59–66. Zhengxian Gong, Min Zhang, and Guodong Zhou. 2011. Cache-based document-level statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 909–919. Association for Computational Linguistics. 1284 Christian Hardmeier and Marcello Federico. 2010. Modelling pronominal anaphora in statistical machine translation. In International Workshop on Spoken Language Translation, pages 283–289. Christian Hardmeier, Joakim Nivre, and J¨org Tiedemann. 2012. Document-wide decoding for phrasebased statistical machine translation. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1179–1190. Association for Computational Linguistics. Christian Hardmeier, Sara Stymne, J¨org Tiedemann, and Joakim Nivre. 2013. Docent: A document-level decoder for phrase-based statistical machine translation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 193–198. Association for Computational Linguistics. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735– 1780. Sebastien Jean, Stanislas Lauly, Orhan Firat, and Kyunghyun Cho. 2017. Does neural machine translation benefit from larger context? In arXiv:1704.05135. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Conference Proceedings: the 10th Machine Translation Summit, pages 79–86. AAMT. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. 
Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pages 177–180. Association for Computational Linguistics. Alon Lavie and Abhaya Agarwal. 2007. Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments. In Proceedings of the Second Workshop on Statistical Machine Translation, StatMT ’07, pages 228–231, Stroudsburg, PA, USA. Association for Computational Linguistics. Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attentionbased neural machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1412–1421. Association for Computational Linguistics. Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1715–1725. Jason R. Smith, Herve Saint-Amand, Chris CallisonBurch, Magdalena Plamada, and Adam Lopez. 2013. Dirt cheap web-scale parallel text from the common crawl. In Proceedings of the Conference of the Association for Computational Linguistics. Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems, pages 2440–2448. MIT Press. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems, pages 3104–3112. MIT Press. Longyue Wang, Zhaopeng Tu, Andy Way, and Qun Liu. 2017. Exploiting cross-sentence context for neural machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 2816–2821. Association for Computational Linguistics. Jason Weston, Sumit Chopra, and Antoine Bordes. 2015. Memory networks. In Proceedings of the International Conference on Learning Representations.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1285–1296 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1285 Which Melbourne? Augmenting Geocoding with Maps Milan Gritta, Mohammad Taher Pilehvar and Nigel Collier Language Technology Lab Department of Theoretical and Applied Linguistics University of Cambridge {mg711,mp792,nhc30}@cam.ac.uk Abstract The purpose of text geolocation is to associate geographic information contained in a document with a set (or sets) of coordinates, either implicitly by using linguistic features and/or explicitly by using geographic metadata combined with heuristics. We introduce a geocoder (location mention disambiguator) that achieves state-of-the-art (SOTA) results on three diverse datasets by exploiting the implicit lexical clues. Moreover, we propose a new method for systematic encoding of geographic metadata to generate two distinct views of the same text. To that end, we introduce the Map Vector (MapVec), a sparse representation obtained by plotting prior geographic probabilities, derived from population figures, on a World Map. We then integrate the implicit (language) and explicit (map) features to significantly improve a range of metrics. We also introduce an open-source dataset for geoparsing of news events covering global disease outbreaks and epidemics to help future evaluation in geoparsing. 1 Introduction Geocoding1 is a specific case of text geolocation, which aims at disambiguating place references in text. For example, Melbourne can refer to more than ten possible locations and a geocoder’s task is to identify the place coordinates for the intended Melbourne in a context such as “Melbourne hosts one of the four annual Grand Slam tennis tournaments.” This is central to the success of tasks such as indexing and searching documents by geography (Bhargava et al., 2017), geospatial 1Also called Toponym Resolution in related literature. analysis of social media (Buchel and Pennington, 2017), mapping of disease risk using integrated data (Hay et al., 2013), and emergency response systems (Ashktorab et al., 2014). Previous geocoding methods (Section 2) have leveraged lexical semantics to associate the implicit geographic information in natural language with coordinates. These models have achieved good results in the past. However, focusing only on lexical features, to the exclusion of other feature spaces such as the Cartesian Coordinate System, puts a ceiling on the amount of semantics we are able to extract from text. Our proposed solution is the Map Vector (MapVec), a sparse, geographic vector for explicit modelling of geographic distributions of location mentions. As in previous work, we use population data and geographic coordinates, observing that the most populous Melbourne is also the most likely to be the intended location. However, MapVec is the first instance, to our best knowledge, of the topological semantics of context locations explicitly isolated into a standardized vector representation, which can then be easily transferred to an independent task and combined with other features. MapVec is able to encode the prior geographic distribution of any number of locations into a single vector. Our extensive evaluation shows how this representation of context locations can be integrated with linguistic features to achieve a significant improvement over a SOTA lexical model. 
MapVec can be deployed as a standalone neural geocoder, significantly beating the population baseline, while remaining effective with simpler machine learning algorithms. This paper’s contributions are: (1) Lexical Geocoder outperforming existing systems by analysing only the textual context; (2) MapVec, a geographic representation of locations using a sparse, probabilistic vector to extract and isolate spatial features; (3) CamCoder, a novel geocoder 1286 that exploits both lexical and geographic knowledge producing SOTA results across multiple datasets; and (4) GeoVirus, an open-source dataset for the evaluation of geoparsing (Location Recognition and Disambiguation) of news events covering global disease outbreaks and epidemics. 2 Background Depending on the task objective, geocoding methodologies can be divided into two distinct categories: (1) document geocoding, which aims at locating a piece of text as a whole, for example geolocating Twitter users (Rahimi et al., 2016, 2017; Roller et al., 2012; Rahimi et al., 2015), Wikipedia articles and/or web pages (Cheng et al., 2010; Backstrom et al., 2010; Wing and Baldridge, 2011; Dredze et al., 2013; Wing and Baldridge, 2014). This is an active area of NLP research (Hulden et al., 2015; Melo and Martins, 2017, 2015; Iso et al., 2017); (2) geocoding of place mentions, which focuses on the disambiguation of location (named) entities i.e. this paper and (Karimzadeh et al., 2013; Tobin et al., 2010; Grover et al., 2010; DeLozier et al., 2015; Santos et al., 2015; Speriosu and Baldridge, 2013; Zhang and Gelernter, 2014). Due to the differences in evaluation and objective, the categories cannot be directly or fairly compared. Geocoding is typically the second step in Geoparsing. The first step, usually referred to as Geotagging, is a Named Entity Recognition component which extracts all location references in a given text. This phase may optionally include metonymy resolution, see (Zhang and Gelernter, 2015; Gritta et al., 2017a). The goal of geocoding is to choose the correct coordinates for a location mention from a set of candidates. Gritta et al. (2017b) provided a comprehensive survey of five recent geoparsers. The authors established an evaluation framework, with a new dataset, for their experimental analysis. We use this evaluation framework in our experiments. We briefly describe the methodology of each geocoder featured in our evaluation (names are capitalised and appear in italics) as well as survey the related work in geocoding. Computational methods in geocoding broadly divide into rule-based, statistical and machine learning-based. Edinburgh Geoparser (Tobin et al., 2010; Grover et al., 2010) is a fully rulebased geocoder that uses hand-built heuristics combined with large lists from Wikipedia and the Geonames2 gazetteer. It uses metadata (feature type, population, country code) with heuristics such as contextual information, spatial clustering and user locality to rank candidates. GeoTxT (Karimzadeh et al., 2013) is another rule-based geocoder with a free web service3 for identifying locations in unstructured text and grounding them to coordinates. Disambiguation is driven by multiple heuristics and uses the administrative level (country, province, city), population size, the Levenshtein Distance of the place referenced and the candidate’s name and spatial minimisation to resolve ambiguous locations. 
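Rule-based geocoders of this kind typically fold such signals (name similarity, administrative level, population, co-occurring country context) into a single candidate score. The toy ranking function below is purely illustrative of that style; the feature set and weights are our own and do not correspond to the actual implementation of any system discussed here.

import difflib

def heuristic_score(mention, candidate, context_country_codes):
    # candidate: gazetteer record with name, population, admin level
    # and country code (e.g. from Geonames)
    name_sim = difflib.SequenceMatcher(
        None, mention.lower(), candidate["name"].lower()).ratio()
    admin_bonus = {"country": 3.0, "province": 2.0, "city": 1.0}.get(
        candidate.get("admin_level"), 0.5)
    country_bonus = 2.0 if candidate["country_code"] in context_country_codes else 1.0
    pop_prior = candidate["population"] ** 0.5       # dampened population signal
    return name_sim * admin_bonus * country_bonus * (1.0 + pop_prior)

def rank(mention, candidates, context_country_codes):
    return max(candidates,
               key=lambda c: heuristic_score(mention, c, context_country_codes))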
(Dredze et al., 2013) is a rule-based Twitter geocoder using only metadata (coordinates in tweets, GPS tags, user’s reported location) and custom place lists for fast and simple geocoding. CLAVIN (Cartographic Location And Vicinity INdexer)4 is an open-source geocoder, which offers contextbased entity recognition and linking. It seems to be mostly rule-based though details of its algorithm are underspecified, short of reading the source code. Unlike the Edinburgh Parser, this geocoder seems to overly rely on population data, seemingly mirroring the behaviour of a naive population baseline. Rule-based systems can perform well though the variance in performance is high (see Table 1). Yahoo! Placemaker is a free web service with a proprietary geo-database and algorithm from Yahoo!5 letting anyone geoparse text in a globally-aware and language-independent manner. It is unclear how geocoding is performed, however, the inclusion of proprietary methods makes evaluation broader and more informative. The statistical geocoder Topocluster (DeLozier et al., 2015) divides the world surface into a grid (0.5 x 0.5 degrees, approximately 60K tiles) and uses lexical features to model the geographic distribution of context words over this grid. Building on the work of Speriosu and Baldridge (2013), it uses a window of 15 words (our approach scales this up by more than 20 times) to perform hot spot analysis using Getis-Ord Local Statistic of individual words’ association with geographic space. The classification decision was made by finding the grid square with the strongest overlap of 2http://www.geonames.org/ 3http://www.geotxt.org/ 4https://clavin.bericotechnologies.com 5https://developer.yahoo.com/geo/ 1287 individual geo-distributions. Hulden et al. (2015) used Kernel Density Estimation to learn the word distribution over a world grid with a resolution of 0.5 x 0.5 degrees and classified documents with Kullback-Leibler divergence or a Naive Bayes model, reminiscent of an earlier approach by Wing and Baldridge (2011). Roller et al. (2012) used the Good-Turing Frequency Estimation to learn document probability distributions over the vocabulary with Kullback-Leibler divergence as the similarity function to choose the correct bucket in the k-d tree (world representation). Iso et al. (2017) combined Gaussian Density Estimation with a CNN-model to geolocate Japanese tweets with Convolutional Mixture Density Networks. Among the recent machine learning methods, bag-of-words representations combined with a Support Vector Machine (Melo and Martins, 2015) or Logistic Regression (Wing and Baldridge, 2014) have also achieved good results. For Twitter-based geolocation (Zhang and Gelernter, 2014), bag-of-words classifiers were successfully augmented with social network data (Jurgens et al., 2015; Rahimi et al., 2016, 2015). The machine learning-based geocoder by Santos et al. (2015) supplemented lexical features, represented as a bag-of-words, with an exhaustive set of manually generated geographic features and spatial heuristics such as geospatial containment and geodesic distances between entities. The ranking of locations was learned with LambdaMART (Burges, 2010). Unlike our geocoder, the addition of geographic features did not significantly improve scores, reporting: “The geo-specific features seem to have a limited impact over a strong baseline system.” Unable to obtain a codebase, their results feature in Table 1. 
The latest neural network approaches (Rahimi et al., 2017) with normalised bag-of-word representations have achieved SOTA scores when augmented with social network data for Twitter document (user’s concatenated tweets) geolocation (Bakerman et al., 2018). 3 Methodology Figure 1 shows our new geocoder CamCoder implemented in Keras (Chollet, 2015). The lexical part of the geocoder has three inputs, from the top: Context Words (location mentions excluded), Location Mentions (context words excluded) and the Target Entity (up to 15 words long) to be Figure 1: The CamCoder neural architecture. It is possible to split CamCoder into a Lexical (top 3 inputs) model and a MapVec model (see Table 2). geocoded. Consider an example disambiguation of Cairo in a sentence: “The Giza pyramid complex is an archaeological site on the Giza Plateau, on the outskirts of Cairo, Egypt.”. Here, Cairo is the Target Entity; Egypt, Giza and Giza Plateau are the Location Mentions; the rest of the sentence forms the Context Words (excluding stopwords). The context window is up to 200 words each side of the Target Entity, approximately an order of magnitude larger than most previous approaches. We used separate layers, convolutional and/or dense (fully-connected), with ReLu activations (Nair and Hinton, 2010) to break up the task into smaller, focused modules in order to learn distinct lexical feature patterns, phrases and keywords for different types of inputs, concatenating only at a higher level of abstraction. Unigrams and bigrams were learned for context words and location mentions (1,000 filters of size 1 and 2 for each input), trigrams for the target entity (1,000 filters of size 3). Convolutional Neural Networks (CNNs) with Global Maximum Pooling were chosen for their position invariance (detecting location-indicative words anywhere in context) and efficient input size scaling. The dense layers have 250 units each, with a dropout layer (p = 0.5) to prevent overfitting. The fourth input is MapVec, the geographic vector representation of location mentions. It feeds into two dense layers with 5,000 and 1,000 units respectively. The concatenated hidden layers then get fully connected to the softmax layer. The model is optimised with RMSProp (Tieleman and Hinton, 2012). We approach geocoding as a classification task where the model predicts one of 1288 7,823 classes (units in the softmax layer in Figure 1), each being a 2x2 degree tile representing part of the world’s surface, slightly coarser than MapVec (see Section 3.1 next). The coordinates of the location candidate with the smallest FD (Equation 1) are the model’s final output. FD = error −error candidatePop maximumPop Bias (1) FD for each candidate is computed by reducing the prediction error (the distance from predicted coordinates to candidate coordinates) by the value of error multiplied by the estimated prior probability (candidate population divided by maximum population) multiplied by the Bias parameter. The value of Bias = 0.9 was determined to be optimal for highest development data scores and is identical for all highly diverse test datasets. Equation 1 is designed to bias the model towards more populated locations to reflect real-world data. 3.1 The Map Vector (MapVec) Word embeddings and/or distributional vectors encode a word’s meaning in terms of its linguistic context. However, location (named) entities also carry explicit topological semantic knowledge such as a coordinate position and a population count for all places with an identical name. 
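The CamCoder architecture in Figure 1 can be sketched compactly in Keras (the toolkit the paper builds on). The sketch below follows the hyperparameters quoted in this section — 1,000 convolutional filters per n-gram size with global max pooling, 250-unit dense layers with dropout 0.5, a 5,000- and 1,000-unit MLP over MapVec, a 7,823-way softmax over 2x2 degree tiles, and RMSProp — but the input lengths, vocabulary size, loss and exact wiring are simplified assumptions, so it should be read as an approximation rather than the released model.

from keras.layers import (Input, Embedding, Conv1D, GlobalMaxPooling1D,
                          Dense, Dropout, concatenate)
from keras.models import Model

def text_branch(seq_len, vocab_size, ngram_sizes):
    inp = Input(shape=(seq_len,))
    emb = Embedding(vocab_size, 50)(inp)              # 50-d word embeddings
    pooled = [GlobalMaxPooling1D()(Conv1D(1000, n, activation='relu')(emb))
              for n in ngram_sizes]
    merged = concatenate(pooled) if len(pooled) > 1 else pooled[0]
    hidden = Dropout(0.5)(Dense(250, activation='relu')(merged))
    return inp, hidden

# three lexical inputs: context words, location mentions, target entity
ctx_in, ctx_h = text_branch(400, 331000, [1, 2])      # unigrams + bigrams
loc_in, loc_h = text_branch(400, 331000, [1, 2])      # unigrams + bigrams
tgt_in, tgt_h = text_branch(15, 331000, [3])          # trigrams

mapvec_in = Input(shape=(23002,))                     # MapVec (Section 3.1)
map_h = Dense(1000, activation='relu')(Dense(5000, activation='relu')(mapvec_in))

output = Dense(7823, activation='softmax')(concatenate([ctx_h, loc_h, tgt_h, map_h]))
model = Model(inputs=[ctx_in, loc_in, tgt_in, mapvec_in], outputs=output)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')

The mapvec_in branch is where this explicit coordinate and population knowledge enters the model, alongside the purely lexical branches.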
Until now, this knowledge was only used as part of simple disparate heuristics and manual disambiguation procedures. However, it is possible to plot this spatial data on a world map, which can then be reshaped into a 1D feature vector, or a Map Vector, the geographic representation of location mentions. MapVec is a novel standardised method for generating geographic features from text documents beyond lexical features. This enables a strong geocoding classification performance gain by extracting additional spatial knowledge that would normally be ignored. Geographic semantics cannot be inferred from language alone (too imprecise and incomplete). Word embeddings and distributional vectors use language/words as an implicit container of geographic information. Map Vector uses a lowresolution, probabilistic world map as an explicit container of geographic information, giving us two types of semantic features from the same text. In related papers on the generation of location representations, Rahimi et al. (2017) inverted the task of geocoding Twitter users to predict word Figure 2: MapVec visualisation (before reshaping into a 1D vector) for Melbourne, Perth and Newcastle, showing their combined prior geographic probabilities. Darker tiles have higher probability. probability from a set of coordinates. A continuous representation of a region was generated by using the hidden layer of the neural network. However, all locations in the same region will be assigned an identical vector, which assumes that their semantics are also identical. Another way to obtain geographic representations is by generating embeddings directly from Geonames data using heuristics-driven DeepWalk (Perozzi et al., 2014) with geodesic distances (Kejriwal and Szekely, 2017). However, to assign a vector, places must first be disambiguated (catch-22). While these generation methods are original and interesting in theory, deploying them in the real-world is infeasible, hence we invented the Map Vector. MapVec initially begins as a 180x360 world map of geodesic tiles. There are other ways of representing the surface of the Earth such as using nested hierarchies (Melo and Martins, 2015) or k-dimensional trees (Roller et al., 2012), however, this is beyond the scope of this work. The 1x1 tile size, in degrees of geographic coordinates, was empirically determined to be optimal to keep MapVec’s size computationally efficient while maintaining meaningful resolution. This map is then populated with the prior geographic distribution of each location mentioned in context (see Figure 2 for an example). We use population count to estimate a location’s prior probability as more populous places are more likely to be mentioned in common discourse. For each location mention and for each of its ambiguous candidates, their prior probability is added to the correct tile indicating its geographic position (see Algorithm 1). Tiles that cover areas of open water (64.1%) were removed to reduce size. Finally, 1289 Data: Text ←article, paragraph, tweet, etc. Result: MapVec location(s) representation Locs ←extractLocations(Text); MapVec ←new array(length=23,002); for each l in Locs do Cands ←queryCandidatesFromDB(l); maxPop ←maxPopulationOf(Cands); for each c in Cands do prior ←populationOf(c) / maxPop; i ←coordinatesToIndex(c); MapVec[i] ←MapVec[i] + prior; end end m ←max(MapVec); return MapVec / m; Algorithm 1: MapVec generation. For each extracted location l in Locs, estimate the prior probability of each candidate c. 
Add c’s prior probability to the appropriate array position at index i representing its geographic position/tile. Finally, normalise the array (to a [0 −1] range) by dividing by the maximum value of the MapVec array. this world map is reshaped into a one-dimensional Map Vector of length 23,002. The following features of MapVec are the most salient: Interpretability: Word vectors typically need intrinsic (Gerz et al., 2016) and extrinsic tasks (Senel et al., 2017) to interpret their semantics. MapVec generation is a fully transparent, human readable and modifiable method. Efficiency: MapVec is an efficient way of embedding any number of locations using the same standardised vector. The alternative means creating, storing, disambiguating and computing with millions of unique location vectors. Domain Independence: Word vectors vary depending on the source, time, type and language of the training data and the parameters of generation. MapVec is language-independent and stable over time, domain, size of dataset since the world geography is objectively measured and changes very slowly. 3.2 Data and Preprocessing Training data was generated from geographically annotated Wikipedia pages (dumped February 2017). Each page provided up to 30 training instances, limited to avoid bias from large pages. This resulted in collecting approximately 1.4M training instances, which were uniformly subsampled down to 400K to shorten training cycles as further increases offer diminishing returns. We used the Python-based NLP toolkit Spacy6 (Honnibal and Johnson, 2015) for text preprocessing. All words were lowercased, lemmatised, any stopwords, dates, numbers and so on were replaced with a special token (“0”). Word vectors were initialised with pretrained word embeddings7 (Pennington et al., 2014). We do not employ explicit feature selection as in (Bo et al., 2012), only a minimum frequency count, which was shown to work almost as well as deliberate selection (Van Laere et al., 2014). The vocabulary size was limited to the most frequent 331K words, minimum ten occurrences for words and two for location references in the 1.4M training corpus. A final training instance comprises four types of context information: Context Words (excluding location mentions, up to 2x200 words), Location Mentions (excluding context words, up to 2x200 words), Target Entity (up to 15 words) and the MapVec geographic representation of context locations. We have also checked for any overlaps between our Wikipedia-based training data and the WikToR dataset. Those examples were removed. The aforementioned 1.4M Wikipedia training corpus was once again uniformly subsampled to generate a disjoint development set of 400K instances. While developing our models mainly on this data, we also used small subsets of LGL (18%), GeoVirus (26%) and WikToR (9%) described in Section 4.2 to verify that development set improvements generalised to target domains. 4 Evaluation Our evaluation compares the geocoding performance of six systems from Section 2, our geocoder (CamCoder) and the population baseline. Among these, our CNN-based model is the only neural approach. We have included all open-source/free geocoders in working order we were able to find and they are the most up-to-date versions. 
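Algorithm 1 (Section 3.1) translates almost line-for-line into Python. In the sketch below, query_candidates is a hypothetical gazetteer lookup (e.g. against Geonames) returning (latitude, longitude, population) tuples; for simplicity the open-water tiles are kept, so the vector has the full 180 x 360 = 64,800 cells rather than the 23,002 retained in the paper.

import numpy as np

def coordinates_to_index(lat, lon):
    # 1x1 degree tiles on a 180 x 360 world grid, row-major order
    row = int(min(lat + 90, 179))      # lat in [-90, 90]
    col = int(min(lon + 180, 359))     # lon in [-180, 180]
    return row * 360 + col

def map_vector(location_mentions, query_candidates):
    vec = np.zeros(180 * 360)
    for mention in location_mentions:
        candidates = query_candidates(mention)      # [(lat, lon, population), ...]
        if not candidates:
            continue
        max_pop = max(pop for _, _, pop in candidates) or 1
        for lat, lon, pop in candidates:
            prior = pop / max_pop                   # population-based prior
            vec[coordinates_to_index(lat, lon)] += prior
    m = vec.max()
    return vec / m if m > 0 else vec                # normalise to [0, 1]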
Tables 1 and 2 feature several machine learning algorithms including Long-Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) to reproduce context2vec (Melamud et al., 2016), Naive Bayes (Zhang, 2004) and Random Forest (Breiman, 2001) using three diverse datasets. 6https://spacy.io/ 7https://nlp.stanford.edu/ 1290 Figure 3: The AUC (range [0 −1]) is calculated using the Trapezoidal Rule. Smaller errors mean a smaller (blue) area, which means a lower score and therefore better geocoding results. 4.1 Geocoding Metrics We use the three standard and comprehensive metrics, each measuring an important aspect of geocoding, giving an accurate, holistic evaluation of performance. A more detailed costbenefit analysis of geocoding metrics is available in (Karimzadeh, 2016) and (Gritta et al., 2017b). (1) Average (Mean) Error is the sum of all geocoding errors per dataset divided by the number of errors. It is an informative metric as it also indicates the total error but treats all errors as equivalent and is sensitive to outliers; (2) Accuracy@161km is the percentage of errors that are smaller than 161km (100 miles). While it is easy to interpret, giving fast and intuitive understanding of geocoding performance in percentage terms, it ignores all errors greater than 161km; (3) Area Under the Curve (AUC) is a comprehensive metric, initially introduced for geocoding in (Jurgens et al., 2015). AUC reduces the importance of large errors (1,000km+) since accuracy on successfully resolved places is more desirable. While it is not an intuitive metric, AUC is robust to outliers and measures all errors. A versatile geocoder should be able to maximise all three metrics. 4.2 Evaluation Datasets News Corpus: The Local Global Corpus (LGL) by Lieberman et al. (2010) contains 588 news articles (4460 test instances), which were collected from geographically distributed newspaper sites. This is the most frequently used geocoding evaluation dataset to date. The toponyms are mostly smaller places no larger than a US state. Approximately 16% of locations in the corpus do not have any coordinates assigned; hence, we do not use those in the evaluation, which is also how the previous figures were obtained. Wikipedia Corpus: This corpus was deliberately designed for ambiguity hence the population heuristic is not effective. Wikipedia Toponym Retrieval (WikToR) by Gritta et al. (2017b) is a programmatically created corpus and although not necessarily representative of the real world distribution, it is a test of ambiguity for geocoders. It is also a large corpus (25,000+ examples) containing the first few paragraphs of 5,000 Wikipedia pages. High quality, free and open datasets are not readily available (GeoVirus tries to address this). The following corpora could not be included: WoTR (DeLozier et al., 2016) due to limited coverage (southern US) and domain type (historical language, the 1860s), (De Oliveira et al., 2017) contains fewer than 180 locations, GeoCorpora (Wallgr¨un et al., 2017) could not be retrieved in full due to deleted Twitter users/tweets, GeoText (Eisenstein et al., 2010) only allows for user geocoding, SpatialML (Mani et al., 2010) involves prohibitive costs, GeoSemCor (Buscaldi and Rosso, 2008) was annotated with WordNet senses (rather than coordinates). 4.3 GeoVirus: a New Test Dataset We now introduce GeoVirus, an open-source test dataset for the evaluation of geoparsing of news events covering global disease outbreaks and epidemics. 
It was constructed from free WikiNews8 and collected during 08/2017 - 09/2017. The dataset is suitable for the evaluation of Geotagging/Named Entity Recognition and Geocoding/Toponym Resolution. Articles were identified using the WikiNews search box and keywords such as Ebola, Bird Flu, Swine Flu, AIDS, Mad Cow Disease, West Nile Disease, etc. Off-topic articles were not included. Buildings, POIs, street names and rivers were not annotated. Annotation Process. (1) The WikiNews contributor(s) who wrote the article annotated most, but not all location references. The first author checked those annotations and identified further references, then proceeded to extract the place name, indices of the start and end characters in 8https://en.wikinews.org 1291 Geocoder Area Under Curve† Average Error‡ Accuracy@161km LGL WIK GEO LGL WIK GEO LGL WIK GEO CamCoder 22 (18) 33 (37) 31 (32) 7 (5) 11 (9) 3 (3) 76 (83) 65 (57) 82 (80) Edinburgh 25 (22) 53 (58) 33 (34) 8 (8) 31 (30) 5 (4) 76 (80) 42 (36) 78 (78) Yahoo! 34 (35) 44 (53) 40 (44) 6 (5) 23 (25) 3 (3) 72 (75) 52 (39) 70 (65) Population 27 (22) 68 (71) 32 (32) 12 (10) 45 (42) 5 (3) 70 (79) 22 (14) 80 (80) CLAVIN 26 (20) 70 (69) 32 (33) 13 (9) 43 (39) 6 (5) 71 (80) 16 (16) 79 (80) GeoTxt 29 (21) 70 (71) 33 (34) 14 (9) 47 (45) 6 (5) 68 (80) 18 (14) 79 (79) Topocluster 38 (36) 63 (66) NA 12 (8) 38 (35) NA 63 (71) 26 (20) NA Santos et al. NA NA NA 8 NA NA 71 NA NA Table 1: Results on LGL, WikToR (WIK) and GeoVirus (GEO). Lower AUC and Average Error are better while higher Acc@161km is better. Figures in brackets are scores on identical subsets of each dataset. †Only the AUC decimal part shown. ‡Average Error rounded up to the nearest 100km. text, assigned coordinates and the Wikipedia page URL for each location. (2) A second pass over the entire dataset by the first author to check and/or remedy annotations. (3) A computer program checked that locations were tagged correctly, checking coordinates against the Geonames Database, URL correctness, eliminating any duplicates and validating XML formatting. Places without a Wikipedia page (0.6%) were assigned Geonames coordinates. (4) The second author annotated a random 10% sample to obtain an Inter-Annotator Agreement, which was 100% for geocoding and an F-Score of 92.3 for geotagging. GeoVirus in Numbers: Annotated locations: 2,167, Unique: 685, Continents: 94, Number of articles: 229, Most frequent places (21% of total): US, Canada, China, California, UK, Mexico, Kenya, Africa, Australia, Indonesia; Mean location occurrence: 3.2, Total word count: 63,205. 5 Results All tested models (except CamCoder) operate as end-to-end systems; therefore, it is not possible to perform geocoding separately. Each system geoparses its particular majority of the dataset to obtain a representative data sample, shown in Table 1 as strongly correlated scores for subsets of different sizes, with which to assess model performance. Table 1 also shows scores in brackets for the overlapping partition of all systems in order to compare performance on identical instances: GeoVirus 601 (26%), LGL 787 (17%) and WikToR 2,202 (9%). The geocoding difficulty based on the ambiguity of each dataset is: LGL (moderate to hard), WIK (very hard), GEO (easy to moderate). A population baseline also features in the evaluation. The baseline is conceptually simple: choose the candidate with the highest population, akin to the most frequent word sense in WSD. 
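The population baseline and CamCoder's candidate selection (Equation 1, which reads FD = error − error · (candidatePop / maximumPop) · Bias) both reduce to a few lines. The sketch below is our paraphrase, assuming each candidate is a record with Geonames coordinates and a population figure, a haversine approximation of geodesic distance, and the Bias = 0.9 value quoted in Section 3; it is illustrative rather than the released code.

import math

def great_circle_km(a, b):
    # haversine distance between two (lat, lon) pairs given in degrees
    lat1, lon1, lat2, lon2 = map(math.radians, (a[0], a[1], b[0], b[1]))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def population_baseline(candidates):
    # choose the candidate with the highest population
    return max(candidates, key=lambda c: c["population"])

def camcoder_select(candidates, predicted_coords, bias=0.9):
    # Equation 1: shrink the distance to the predicted coordinates by a
    # population prior, then pick the candidate with the smallest FD
    max_pop = max(c["population"] for c in candidates) or 1
    def fd(c):
        error = great_circle_km(predicted_coords, (c["lat"], c["lon"]))
        return error - error * (c["population"] / max_pop) * bias
    return min(candidates, key=fd)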
Table 1 shows the effectiveness of this heuristic, which is competitive with many geocoders, even outperforming some. However, the baseline is not effective on WikToR as the dataset was deliberately constructed as a tough ambiguity test. Table 1 shows how several geocoders mirror the behaviour of the population baseline. This simple but effective heuristic is rarely used in system comparisons, and where evaluated (Santos et al., 2015; Leidner, 2008), it is inconsistent with expected figures (due to unpublished resources, we are unable to investigate). We note that no single computational paradigm dominates Table 1. The rule-based (Edinburgh, GeoTxt, CLAVIN), statistical (Topocluster), machine learning (CamCoder, Santos) and other (Yahoo!, Population) geocoders occupy different ranks across the three datasets. Due to space constraints, Table 1 does not show figures for another type of scenario we tested, a shorter lexical context, using 200 words instead of the standard 400. CamCoder proved to be robust to reduced context, with only a small performance decline. Using the same format as Table 1, AUC errors for LGL increased from 22 (18) to 23 (19), WIK from 33 (37) to 37 (40) and GEO remained the same at 31 (32). This means that reducing model input size to save computational resources would still deliver accurate results. Our CNN-based lexical model performs at SOTA levels (Table 2) proving the effectiveness of linguistic features while being 1292 Geocoder System configuration Dataset Average Language Features + MapVec Features LGL WIK GEO CamCoder CNN MLP 0.22 0.33 0.31 0.29 Lexical Only CNN − 0.23 0.39 0.33 0.32 MapVec Only − MLP 0.25 0.41 0.32 0.33 Context2vec† LSTM MLP 0.24 0.38 0.33 0.32 Context2vec LSTM − 0.27 0.47 0.39 0.38 Random Forest MapVec features only, no lexical input 0.26 0.36 0.33 0.32 Naive Bayes MapVec features only, no lexical input 0.28 0.56 0.36 0.40 Population − − 0.27 0.68 0.32 0.42 Table 2: AUC scores for CamCoder and its Lexical and MapVec components (model ablation). Lower AUC scores are better. †Standard context2vec model augmented with MapVec representation. the outstanding geocoder on the highly ambiguous WikToR data. The Multi-Layer Perceptron (MLP) model using only MapVec with no lexical features is almost as effective but more importantly, it is significantly better than the population baseline (Table 2). This is because the Map Vector benefits from wide contextual awareness, encoded in Algorithm 1, while a simple population baseline does not. When we combined the lexical and geographic feature spaces in one model (CamCoder9), we observed a substantial increase in the SOTA scores. We have also reproduced the context2vec model to obtain a continuous context representation using bidirectional LSTMs to encode lexical features, denoted as LSTM10 in Table 2. This enabled us to test the effect of integrating MapVec into another deep learning model as opposed to CNNs. Supplemented with MapVec, we observed a significant improvement, demonstrating how enriching various neural models with a geographic vector representation boosts classification results. Deep learning is the dominant paradigm in our experiments. However, it is important that MapVec is still effective with simpler machine learning algorithms. To that end, we have evaluated it with the Random Forest without using any lexical features. 
This model was well suited to the geocoding task despite training with only half of the 400K training data (due to memory constraints, partial fit is unavailable for batch training in SciKit Learn). Scores were on par with more sophisticated systems. The Naive Bayes was less ef9Single model settings/parameters for all tests. 10https://keras.io/layers/recurrent/ fective with MapVec though still somewhat viable as a geocoder given the lack of lexical features and a naive algorithm, narrowly beating population. GeoVirus scores remain highly competitive across most geocoders. This is due to the nature of the dataset; locations skewed towards their dominant “senses” simulating ideal geocoding conditions, enabling high accuracy for the population baseline. GeoVirus alone may not serve as the best scenario to assess a geocoder’s performance, however, it is nevertheless important and valuable to determine behaviour in a standard environment. For example, GeoVirus helped us diagnose Yahoo! Placemaker’s lower accuracy in what should be an easy test for a geocoder. The figures show that while the average error is low, the accuracy@161km is noticeably lower than most systems. When coupled with other complementary datasets such as LGL and WikToR, it facilitates a comprehensive assessment of geocoding behaviour in many types of scenarios, exposing potential domain dependence. We note that GeoVirus has a dual function, NER (not evaluated but useful for future work) and Geocoding. We made all of our resources freely available11 for full reproducibility (Goodman et al., 2016). 5.1 Discussion and Errors The Pearson correlation coefficient of the target entity ambiguity and the error size was only r ≈ 0.2 suggesting that CamCoder’s geocoding errors do not simply rise with location ambiguity. Errors were also not correlated (r ≈0.0) with population size with all types of locations geocoded to various degrees of accuracy. All error curves follow 11https://github.com/milangritta/ 1293 a power law distribution with between 89% and 96% of errors less than 1500km, the rest rapidly increasing into thousands of kilometers. Errors also appear to be uniformly geographically distributed across the world. The strong lexical component shown in Table 2 is reflected by the lack of a relationship between error size and the number of locations found in the context. The number of total words in context is also independent of geocoding accuracy. This suggests that CamCoder learns strong linguistic cues beyond simple association of place names with the target entity and is able to cope with flexible-sized contexts. A CNN Geocoder would expect to perform well for the following reasons: Our context window is 400 words rather than 10-40 words as in previous approaches. The model learns 1,000 feature maps per input and per feature type, tracking 5,000 different word patterns (unigrams, bigrams and trigrams), a significant text processing capability. The lexical model also takes advantage of our own 50-dimensional word embeddings, tuned on geographic Wikipedia pages only, allowing for greater generalisation than bag-of-unigrams models; and the large training/development datasets (400K each), optimising geocoding over a diverse global set of places allowing our model to generalise to unseen instances. We note that MapVec generation is sensitive to NER performance with higher F-Scores leading to better quality of the geographic vector representation(s). 
Precision errors can introduce noise while recall errors may withhold important locations. The average F-Score for the featured geoparsers is F ≈0.7 (standard deviation ≈0.1). Spacy’s NER performance over the three datasets is also F ≈0.7 with a similar variation between datasets. In order to further interpret scores in Tables 1 and 2, with respect to maximising geocoding performance, we briefly discuss the Oracle score. Oracle is the geocoding performance upper bound given by the Geonames data, i.e. the highest possible score(s) using Geonames coordinates as the geocoding output. In other words, it quantifies the minimum error for each dataset given the perfect location disambiguation. This means it quantifies the difference between “gold standard” coordinates and the coordinates in the Geonames database. The following are the Oracle scores for LGL (AUC=0.04, a@161km=99) annotated with Geonames, WikToR (AUC=0.14, a@161km=92) and GeoVirus (AUC=0.27, a@161km=88), which are annotated with Wikipedia data. Subtracting the Oracle score from a geocoder’s score quantifies the scope of its theoretical future improvement, given a particular database/gazetteer. 6 Conclusions and Future Work Geocoding methods commonly employ lexical features, which have proved to be very effective. Our lexical model was the best languageonly geocoder in extensive tests. It is possible, however, to go beyond lexical semantics. Locations also have a rich topological meaning, which has not yet been successfully isolated and deployed. We need a means of extracting and encoding this additional knowledge. To that end, we introduced MapVec, an algorithm and a container for encoding context locations in geodesic vector space. We showed how CamCoder, using lexical and MapVec features, outperformed both approaches, achieving a new SOTA. MapVec remains effective with various machine learning frameworks (Random Forest, CNN and MLP) and substantially improves accuracy when combined with other neural models (LSTMs). Finally, we introduced GeoVirus, an open-source dataset that helps facilitate geoparsing evaluation across more diverse domains with different lexical-geographic distributions (Flatow et al., 2015; Dredze et al., 2016). Tasks that could benefit from our methods include social media placing tasks (Choi et al., 2014), inferring user location on Twitter (Zheng et al., 2017), geolocation of images based on descriptions (Serdyukov et al., 2009) and detecting/analyzing incidents from social media (Berlingerio et al., 2013). Future work may see our methods applied to document geolocation to assess the effectiveness of scaling geodesic vectors from paragraphs to entire documents. Acknowledgements We gratefully acknowledge the funding support of the Natural Environment Research Council (NERC) PhD Studentship NE/M009009/1 (Milan Gritta, DREAM CDT), EPSRC (Nigel Collier) Grant Number EP/M005089/1 and MRC (Mohammad Taher Pilehvar) Grant Number MR/M025160/1 for PheneBank. We also gratefully acknowledge NVIDIA Corporation’s donation of the Titan Xp GPU used for this research. 1294 References Zahra Ashktorab, Christopher Brown, Manojit Nandi, and Aron Culotta. 2014. Tweedr: Mining twitter to inform disaster response. In ISCRAM. Lars Backstrom, Eric Sun, and Cameron Marlow. 2010. Find me if you can: improving geographical prediction with social and spatial proximity. In Proceedings of the 19th international conference on World wide web. ACM, pages 61–70. 
Jordan Bakerman, Karl Pazdernik, Alyson Wilson, Geoffrey Fairchild, and Rian Bahran. 2018. Twitter geolocation: A hybrid approach. ACM Transactions on Knowledge Discovery from Data (TKDD) 12(3):34. Michele Berlingerio, Francesco Calabrese, Giusy Di Lorenzo, Xiaowen Dong, Yiannis Gkoufas, and Dimitrios Mavroeidis. 2013. Safercity: a system for detecting and analyzing incidents from social media. In Data Mining Workshops (ICDMW), 2013 IEEE 13th International Conference on. IEEE, pages 1077–1080. Preeti Bhargava, Nemanja Spasojevic, and Guoning Hu. 2017. Lithium nlp: A system for rich information extraction from noisy user generated text on social media. arXiv preprint arXiv:1707.04244 . Han Bo, Paul Cook, and Timothy Baldwin. 2012. Geolocation prediction in social media data by finding location indicative words. In Proceedings of COLING. pages 1045–1062. Leo Breiman. 2001. Random forests. Machine learning 45(1):5–32. Olga Buchel and Diane Pennington. 2017. Geospatial analysis. The SAGE Handbook of Social Media Research Methods pages 285–303. Chris J.C. Burges. 2010. From ranknet to lambdarank to lambdamart: An overview. Technical report. https://www.microsoft.com/enus/research/publication/from-ranknet-tolambdarank-to-lambdamart-an-overview/. Davide Buscaldi and Paulo Rosso. 2008. A conceptual density-based approach for the disambiguation of toponyms. International Journal of Geographical Information Science 22(3):301–313. Zhiyuan Cheng, James Caverlee, and Kyumin Lee. 2010. You are where you tweet: a content-based approach to geo-locating twitter users. In Proceedings of the 19th ACM international conference on Information and knowledge management. ACM, pages 759–768. Jaeyoung Choi, Bart Thomee, Gerald Friedland, Liangliang Cao, Karl Ni, Damian Borth, Benjamin Elizalde, Luke Gottlieb, Carmen Carrano, Roger Pearce, et al. 2014. The placing task: A large-scale geo-estimation challenge for social-media videos and images. In Proceedings of the 3rd ACM Multimedia Workshop on Geotagging and Its Applications in Multimedia. ACM, pages 27–31. Franc¸ois Chollet. 2015. Keras. https://github. com/fchollet/keras. Maxwell Guimar˜aes De Oliveira, Cl´audio de Souza Baptista, Cl´audio EC Campelo, and Michela Bertolotto. 2017. A gold-standard social media corpus for urban issues. In Proceedings of the Symposium on Applied Computing. ACM, pages 1011–1016. Grant DeLozier, Jason Baldridge, and Loretta London. 2015. Gazetteer-independent toponym resolution using geographic word profiles. In AAAI. pages 2382–2388. Grant DeLozier, Ben Wing, Jason Baldridge, and Scott Nesbit. 2016. Creating a novel geolocation corpus from historical texts. LAW X page 188. Mark Dredze, Miles Osborne, and Prabhanjan Kambadur. 2016. Geolocation for twitter: Timing matters. In HLT-NAACL. pages 1064–1069. Mark Dredze, Michael J Paul, Shane Bergsma, and Hieu Tran. 2013. Carmen: A twitter geolocation system with applications to public health. In AAAI workshop on expanding the boundaries of health informatics using AI (HIAI). volume 23, page 45. Jacob Eisenstein, Brendan O’Connor, Noah A Smith, and Eric P Xing. 2010. A latent variable model for geographic lexical variation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1277–1287. David Flatow, Mor Naaman, Ke Eddie Xie, Yana Volkovich, and Yaron Kanza. 2015. On the accuracy of hyper-local geotagging of social media content. 
In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining. ACM, pages 127–136. Daniela Gerz, Ivan Vuli´c, Felix Hill, Roi Reichart, and Anna Korhonen. 2016. Simverb-3500: A largescale evaluation set of verb similarity. arXiv preprint arXiv:1608.00869 . Steven N Goodman, Daniele Fanelli, and John PA Ioannidis. 2016. What does research reproducibility mean? Science translational medicine 8(341):341ps12–341ps12. Milan Gritta, Mohammad Taher Pilehvar, Nut Limsopatham, and Nigel Collier. 2017a. Vancouver welcomes you! minimalist location metonymy resolution. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 1248–1259. Milan Gritta, Mohammad Taher Pilehvar, Nut Limsopatham, and Nigel Collier. 2017b. Whats missing in geographical parsing? . 1295 Claire Grover, Richard Tobin, Kate Byrne, Matthew Woollard, James Reid, Stuart Dunn, and Julian Ball. 2010. Use of the edinburgh geoparser for georeferencing digitized historical collections. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences 368(1925):3875–3889. Simon I Hay, Katherine E Battle, David M Pigott, David L Smith, Catherine L Moyes, Samir Bhatt, John S Brownstein, Nigel Collier, Monica F Myers, Dylan B George, et al. 2013. Global mapping of infectious disease. Phil. Trans. R. Soc. B 368(1614):20120250. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Matthew Honnibal and Mark Johnson. 2015. An improved non-monotonic transition system for dependency parsing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1373–1378. https://aclweb.org/anthology/D/D15/D15-1162. Mans Hulden, Miikka Silfverberg, and Jerid Francom. 2015. Kernel density estimation for text-based geolocation. In AAAI. pages 145–150. Hayate Iso, Shoko Wakamiya, and Eiji Aramaki. 2017. Density estimation for geolocation via convolutional mixture density network. arXiv preprint arXiv:1705.02750 . David Jurgens, Tyler Finethy, James McCorriston, Yi Tian Xu, and Derek Ruths. 2015. Geolocation prediction in twitter using social networks: A critical analysis and review of current practice. ICWSM 15:188–197. Morteza Karimzadeh. 2016. Performance evaluation measures for toponym resolution. In Proceedings of the 10th Workshop on Geographic Information Retrieval. ACM, page 8. Morteza Karimzadeh, Wenyi Huang, Siddhartha Banerjee, Jan Oliver Wallgr¨un, Frank Hardisty, Scott Pezanowski, Prasenjit Mitra, and Alan M MacEachren. 2013. Geotxt: a web api to leverage place references in text. In Proceedings of the 7th workshop on geographic information retrieval. ACM, pages 72–73. Mayank Kejriwal and Pedro Szekely. 2017. Neural embeddings for populated geonames locations. In International Semantic Web Conference. Springer, pages 139–146. Jochen L Leidner. 2008. Toponym resolution in text: Annotation, evaluation and applications of spatial grounding of place names. Universal-Publishers. Michael D Lieberman, Hanan Samet, and Jagan Sankaranarayanan. 2010. Geotagging with local lexicons to build indexes for textually-specified spatial data. In 2010 IEEE 26th International Conference on Data Engineering (ICDE 2010). IEEE, pages 201–212. Inderjeet Mani, Christy Doran, Dave Harris, Janet Hitzeman, Rob Quimby, Justin Richer, Ben Wellner, Scott Mardis, and Seamus Clancy. 2010. 
Spatialml: annotation scheme, resources, and evaluation. Language Resources and Evaluation 44(3):263–280. Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context embedding with bidirectional lstm. In CoNLL. pages 51–61. Fernando Melo and Bruno Martins. 2015. Geocoding textual documents through the usage of hierarchical classifiers. In Proceedings of the 9th Workshop on Geographic Information Retrieval. ACM, page 7. Fernando Melo and Bruno Martins. 2017. Automated geocoding of textual documents: A survey of current approaches. Transactions in GIS 21(1):3–38. Vinod Nair and Geoffrey E Hinton. 2010. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10). pages 807–814. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP). pages 1532– 1543. http://www.aclweb.org/anthology/D14-1162. Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, pages 701–710. Afshin Rahimi, Timothy Baldwin, and Trevor Cohn. 2017. Continuous representation of location for geolocation and lexical dialectology using mixture density networks. arXiv preprint arXiv:1708.04358 . Afshin Rahimi, Trevor Cohn, and Timothy Baldwin. 2016. pigeo: A python geotagging tool . Afshin Rahimi, Duy Vu, Trevor Cohn, and Timothy Baldwin. 2015. Exploiting text and network context for geolocation of social media users. arXiv preprint arXiv:1506.04803 . Stephen Roller, Michael Speriosu, Sarat Rallapalli, Benjamin Wing, and Jason Baldridge. 2012. Supervised text-based geolocation using language models on an adaptive grid. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, pages 1500–1510. 1296 Jo˜ao Santos, Ivo Anast´acio, and Bruno Martins. 2015. Using machine learning methods for disambiguating place references in textual documents. GeoJournal 80(3):375–392. LutfiKerem Senel, Ihsan Utlu, Veysel Yucesoy, Aykut Koc, and Tolga Cukur. 2017. Semantic structure and interpretability of word embeddings. arXiv preprint arXiv:1711.00331 . Pavel Serdyukov, Vanessa Murdock, and Roelof Van Zwol. 2009. Placing flickr photos on a map. In Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval. ACM, pages 484–491. Michael Speriosu and Jason Baldridge. 2013. Textdriven toponym resolution using indirect supervision. In ACL (1). pages 1466–1476. Tijmen Tieleman and Geoffrey Hinton. 2012. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning 4(2):26–31. Richard Tobin, Claire Grover, Kate Byrne, James Reid, and Jo Walsh. 2010. Evaluation of georeferencing. In proceedings of the 6th workshop on geographic information retrieval. ACM, page 7. Olivier Van Laere, Jonathan Quinn, Steven Schockaert, and Bart Dhoedt. 2014. Spatially aware term selection for geotagging. IEEE transactions on Knowledge and Data Engineering 26(1):221–234. Jan Oliver Wallgr¨un, Morteza Karimzadeh, Alan M MacEachren, and Scott Pezanowski. 2017. Geocorpora: building a corpus to test and train microblog geoparsers. 
International Journal of Geographical Information Science pages 1–29. Benjamin Wing and Jason Baldridge. 2014. Hierarchical discriminative classification for text-based geolocation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 336–348. Benjamin P Wing and Jason Baldridge. 2011. Simple supervised document geolocation with geodesic grids. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. Association for Computational Linguistics, pages 955–964. Harry Zhang. 2004. The optimality of naive bayes. AA 1(2):3. Wei Zhang and Judith Gelernter. 2014. Geocoding location expressions in twitter messages: A preference learning method. Journal of Spatial Information Science 2014(9):37–70. Wei Zhang and Judith Gelernter. 2015. Exploring metaphorical senses and word representations for identifying metonyms. arXiv preprint arXiv:1508.04515 . Xin Zheng, Jialong Han, and Aixin Sun. 2017. A survey of location prediction on twitter. arXiv preprint arXiv:1705.03172 .
2018
119
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 122–131 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 122 Towards Understanding the Geometry of Knowledge Graph Embeddings Chandrahas Indian Institute of Science [email protected] Aditya Sharma Indian Institute of Science [email protected] Partha Talukdar Indian Institute of Science [email protected] Abstract Knowledge Graph (KG) embedding has emerged as a very active area of research over the last few years, resulting in the development of several embedding methods. These KG embedding methods represent KG entities and relations as vectors in a high-dimensional space. Despite this popularity and effectiveness of KG embeddings in various tasks (e.g., link prediction), geometric understanding of such embeddings (i.e., arrangement of entity and relation vectors in vector space) is unexplored – we fill this gap in the paper. We initiate a study to analyze the geometry of KG embeddings and correlate it with task performance and other hyperparameters. To the best of our knowledge, this is the first study of its kind. Through extensive experiments on real-world datasets, we discover several insights. For example, we find that there are sharp differences between the geometry of embeddings learnt by different classes of KG embeddings methods. We hope that this initial study will inspire other follow-up research on this important but unexplored problem. 1 Introduction Knowledge Graphs (KGs) are multi-relational graphs where nodes represent entities and typededges represent relationships among entities. Recent research in this area has resulted in the development of several large KGs, such as NELL (Mitchell et al., 2015), YAGO (Suchanek et al., 2007), and Freebase (Bollacker et al., 2008), among others. These KGs contain thousands of predicates (e.g., person, city, mayorOf(person, city), etc.), and millions of triples involving such predicates, e.g., (Bill de Blasio, mayorOf, New York City). The problem of learning embeddings for Knowledge Graphs has received significant attention in recent years, with several methods being proposed (Bordes et al., 2013; Lin et al., 2015; Nguyen et al., 2016; Nickel et al., 2016; Trouillon et al., 2016). These methods represent entities and relations in a KG as vectors in high dimensional space. These vectors can then be used for various tasks, such as, link prediction, entity classification etc. Starting with TransE (Bordes et al., 2013), there have been many KG embedding methods such as TransH (Wang et al., 2014), TransR (Lin et al., 2015) and STransE (Nguyen et al., 2016) which represent relations as translation vectors from head entities to tail entities. These are additive models, as the vectors interact via addition and subtraction. Other KG embedding models, such as, DistMult (Yang et al., 2014), HolE (Nickel et al., 2016), and ComplEx (Trouillon et al., 2016) are multiplicative where entityrelation-entity triple likelihood is quantified by a multiplicative score function. All these methods employ a score function for distinguishing correct triples from incorrect ones. In spite of the existence of many KG embedding methods, our understanding of the geometry and structure of such embeddings is very shallow. A recent work (Mimno and Thompson, 2017) analyzed the geometry of word embeddings. However, the problem of analyzing geometry of KG embeddings is still unexplored – we fill this important gap. 
In this paper, we analyze the geometry of such vectors in terms of their lengths and conicity, which, as defined in Section 4, describes their positions and orientations in the vector space. We later study the effects of model type and training hyperparameters on the geometry of KG embeddings and correlate geometry with performance. 123 We make the following contributions: • We initiate a study to analyze the geometry of various Knowledge Graph (KG) embeddings. To the best of our knowledge, this is the first study of its kind. We also formalize various metrics which can be used to study geometry of a set of vectors. • Through extensive analysis, we discover several interesting insights about the geometry of KG embeddings. For example, we find systematic differences between the geometries of embeddings learned by additive and multiplicative KG embedding methods. • We also study the relationship between geometric attributes and predictive performance of the embeddings, resulting in several new insights. For example, in case of multiplicative models, we observe that for entity vectors generated with a fixed number of negative samples, lower conicity (as defined in Section 4) or higher average vector length lead to higher performance. Source code of all the analysis tools developed as part of this paper is available at https://github.com/malllabiisc/ kg-geometry. We are hoping that these resources will enable one to quickly analyze the geometry of any KG embedding, and potentially other embeddings as well. 2 Related Work In spite of the extensive and growing literature on both KG and non-KG embedding methods, very little attention has been paid towards understanding the geometry of the learned embeddings. A recent work (Mimno and Thompson, 2017) is an exception to this which addresses this problem in the context of word vectors. This work revealed a surprising correlation between word vector geometry and the number of negative samples used during training. Instead of word vectors, in this paper we focus on understanding the geometry of KG embeddings. In spite of this difference, the insights we discover in this paper generalizes some of the observations in the work of (Mimno and Thompson, 2017). Please see Section 6.2 for more details. Since KGs contain only positive triples, negative sampling has been used for training KG embeddings. Effect of the number of negative samples in KG embedding performance was studied by (Toutanova et al., 2015). In this paper, we study the effect of the number of negative samples on KG embedding geometry as well as performance. In addition to the additive and multiplicative KG embedding methods already mentioned in Section 1, there is another set of methods where the entity and relation vectors interact via a neural network. Examples of methods in this category include NTN (Socher et al., 2013), CONV (Toutanova et al., 2015), ConvE (Dettmers et al., 2017), R-GCN (Schlichtkrull et al., 2017), ERMLP (Dong et al., 2014) and ER-MLP-2n (Ravishankar et al., 2017). Due to space limitations, in this paper we restrict our scope to the analysis of the geometry of additive and multiplicative KG embedding models only, and leave the analysis of the geometry of neural network-based methods as part of future work. 
3 Overview of KG Embedding Methods For our analysis, we consider six representative KG embedding methods: TransE (Bordes et al., 2013), TransR (Lin et al., 2015), STransE (Nguyen et al., 2016), DistMult (Yang et al., 2014), HolE (Nickel et al., 2016) and ComplEx (Trouillon et al., 2016). We refer to TransE, TransR and STransE as additive methods because they learn embeddings by modeling relations as translation vectors from one entity to another, which results in vectors interacting via the addition operation during training. On the other hand, we refer to DistMult, HolE and ComplEx as multiplicative methods as they quantify the likelihood of a triple belonging to the KG through a multiplicative score function. The score functions optimized by these methods are summarized in Table 1. Notation: Let G = (E, R, T ) be a Knowledge Graph (KG) where E is the set of entities, R is the set of relations and T ⊂E × R × E is the set of triples stored in the graph. Most of the KG embedding methods learn vectors e ∈Rde for e ∈E, and r ∈Rdr for r ∈R. Some methods also learn projection matrices Mr ∈Rdr×de for relations. The correctness of a triple is evaluated using a model specific score function σ : E × R × E → R. For learning the embeddings, a loss function L(T , T ′; θ), defined over a set of positive triples T , set of (sampled) negative triples T ′, and the parameters θ is optimized. We use small italics characters (e.g., h, r) to represent entities and relations, and correspond124 Type Model Score Function σ(h, r, t) Additive TransE (Bordes et al., 2013) −∥h + r −t∥1 TransR (Lin et al., 2015) −∥Mrh + r −Mrt∥1 STransE (Nguyen et al., 2016) − M 1 r h + r −M 2 r t 1 Multiplicative DistMult (Yang et al., 2014) r⊤(h ⊙t) HolE (Nickel et al., 2016) r⊤(h ⋆t) ComplEx (Trouillon et al., 2016) Re(r⊤(h ⊙¯t)) Table 1: Summary of various Knowledge Graph (KG) embedding methods used in the paper. Please see Section 3 for more details. ing bold characters to represent their vector embeddings (e.g., h, r). We use bold capitalization (e.g., V) to represent a set of vectors. Matrices are represented by capital italics characters (e.g., M). 3.1 Additive KG Embedding Methods This is the set of methods where entity and relation vectors interact via additive operations. The score function for these models can be expressed as below σ(h, r, t) = − M1 r h + r −M2 r t 1 (1) where h, t ∈Rde and r ∈Rdr are vectors for head entity, tail entity and relation respectively. M1 r , M2 r ∈Rdr×de are projection matrices from entity space Rde to relation space Rdr. TransE (Bordes et al., 2013) is the simplest additive model where the entity and relation vectors lie in same d−dimensional space, i.e., de = dr = d. The projection matrices M1 r = M2 r = Id are identity matrices. The relation vectors are modeled as translation vectors from head entity vectors to tail entity vectors. Pairwise ranking loss is then used to learn these vectors. Since the model is simple, it has limited capability in capturing many-to-one, one-to-many and many-to-many relations. TransR (Lin et al., 2015) is another translationbased model which uses separate spaces for entity and relation vectors allowing it to address the shortcomings of TransE. Entity vectors are projected into a relation specific space using the corresponding projection matrix M1 r = M2 r = Mr. The training is similar to TransE. STransE (Nguyen et al., 2016) is a generalization of TransR and uses different projection matrices for head and tail entity vectors. The training is similar to TransE. 
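As a concrete reference for Equation 1 and the additive rows of Table 1, the sketch below evaluates the three additive scores in NumPy; the dimensions and parameters are random toy values, since in practice they are learned with the pairwise ranking loss described above.

```python
import numpy as np

d_e, d_r = 4, 3  # toy entity and relation dimensions

def additive_score(h, r, t, M1, M2):
    """Equation 1: sigma(h, r, t) = -|| M1 h + r - M2 t ||_1 (STransE form)."""
    return -np.linalg.norm(M1 @ h + r - M2 @ t, ord=1)

h, t = np.random.randn(d_e), np.random.randn(d_e)   # head / tail entity vectors
r = np.random.randn(d_r)                            # relation vector
M1, M2 = np.random.randn(d_r, d_e), np.random.randn(d_r, d_e)
Mr = np.random.randn(d_r, d_e)

score_stranse = additive_score(h, r, t, M1, M2)     # separate head/tail projections
score_transr = additive_score(h, r, t, Mr, Mr)      # shared projection matrix M_r
# TransE is the special case d_e == d_r with identity projections:
r_same = np.random.randn(d_e)
score_transe = additive_score(h, r_same, t, np.eye(d_e), np.eye(d_e))
```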
STransE achieves better performance than the previous methods but at the cost of more number of parameters. Equation 1 is the score function used in STransE. TransE and TransR are special cases of STransE with M1 r = M2 r = Id and M1 r = M2 r = Mr, respectively. 3.2 Multiplicative KG Embedding Methods This is the set of methods where the vectors interact via multiplicative operations (usually dot product). The score function for these models can be expressed as σ(h, r, t) = r⊤f(h, t) (2) where h, t, r ∈Fd are vectors for head entity, tail entity and relation respectively. f(h, t) ∈Fd measures compatibility of head and tail entities and is specific to the model. F is either real space R or complex space C. Detailed descriptions of the models we consider are as follows. DistMult (Yang et al., 2014) models entities and relations as vectors in Rd. It uses an entry-wise product (⊙) to measure compatibility between head and tail entities, while using logistic loss for training the model. σDistMult(h, r, t) = r⊤(h ⊙t) (3) Since the entry-wise product in (3) is symmetric, DistMult is not suitable for asymmetric and antisymmetric relations. HolE (Nickel et al., 2016) also models entities and relations as vectors in Rd. It uses circular correlation operator (⋆) as compatibility function defined as [h ⋆t]k = d−1 X i=0 hit(k+i) mod d The score function is given as σHolE(h, r, t) = r⊤(h ⋆t) (4) The circular correlation operator being asymmetric, can capture asymmetric and anti-symmetric relations, but at the cost of higher time complexity 125 Figure 1: Comparison of high vs low Conicity. Randomly generated vectors are shown in blue with their sample mean vector M in black. Figure on the left shows the case when vectors lie in narrow cone resulting in high Conicity value. Figure on the right shows the case when vectors are spread out having relatively lower Conicity value. We skipped very low values of Conicity as it was difficult to visualize. The points are sampled from 3d Spherical Gaussian with mean (1,1,1) and standard deviation 0.1 (left) and 1.3 (right). Please refer to Section 4 for more details. (O(d log d)). For training, we use pairwise ranking loss. ComplEx (Trouillon et al., 2016) represents entities and relations as vectors in Cd. The compatibility of entity pairs is measured using entry-wise product between head and complex conjugate of tail entity vectors. σComplEx(h, r, t) = Re(r⊤(h ⊙¯t)) (5) In contrast to (3), using complex vectors in (5) allows ComplEx to handle symmetric, asymmetric and anti-symmetric relations using the same score function. Similar to DistMult, logistic loss is used for training the model. 4 Metrics For our geometrical analysis, we first define a term ‘alignment to mean’ (ATM) of a vector v belonging to a set of vectors V, as the cosine similarity1 between v and the mean of all vectors in V. ATM(v, V) = cosine v, 1 |V| X x∈V x ! We also define ‘conicity’ of a set V as the mean ATM of all vectors in V. Conicity(V) = 1 |V| X v∈V ATM(v, V) 1cosine(u, v) = u⊤v ∥u∥∥v∥ Dataset FB15k WN18 #Relations 1,345 18 #Entities 14,541 40,943 #Triples Train 483,142 141,440 Validation 50,000 5,000 Test 59,071 5,000 Table 2: Summary of datasets used in the paper. By this definition, a high value of Conicity(V) would imply that the vectors in V lie in a narrow cone centered at origin. In other words, the vectors in the set V are highly aligned with each other. 
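These two definitions translate directly into code. The short NumPy sketch below computes ATM and Conicity for randomly generated vectors that stand in for learned embeddings, using the same spherical Gaussians as in Figure 1; the printed values are only indicative.

```python
import numpy as np

def atm(v, V):
    """Alignment to mean: cosine similarity between v and the mean of V (rows)."""
    m = V.mean(axis=0)
    return float(v @ m / (np.linalg.norm(v) * np.linalg.norm(m)))

def conicity(V):
    """Mean ATM over all vectors in the set V."""
    return float(np.mean([atm(v, V) for v in V]))

# Points from a tight Gaussian around (1, 1, 1) lie in a narrow cone (high
# conicity); a wider Gaussian gives lower conicity, as in Figure 1.
tight = np.random.normal(loc=1.0, scale=0.1, size=(100, 3))
loose = np.random.normal(loc=1.0, scale=1.3, size=(100, 3))
print(conicity(tight), conicity(loose))  # close to 1.0 vs. a clearly lower value
```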
In addition to that, we define the variance of ATM across all vectors in V, as the ‘vector spread’(VS) of set V, VS(V) = 1 |V| X v∈V ATM(v, V)−Conicity(V) !2 Figure 1 visually demonstrates these metrics for randomly generated 3-dimensional points. The left figure shows high Conicity and low vector spread while the right figure shows low Conicity and high vector spread. We define the length of a vector v as L2-norm of the vector ∥v∥2 and ‘average vector length’ (AVL) for the set of vectors V as AVL(V) = 1 |V| X v∈V ∥v∥2 126 (a) Additive Models (b) Multiplicative Models Figure 2: Alignment to Mean (ATM) vs Density plots for entity embeddings learned by various additive (top row) and multiplicative (bottom row) KG embedding methods. For each method, a plot averaged across entity frequency bins is shown. From these plots, we conclude that entity embeddings from additive models tend to have low (positive as well as negative) ATM and thereby low Conicity and high vector spread. Interestingly, this is reversed in case of multiplicative methods. Please see Section 6.1 for more details. 5 Experimental Setup Datasets: We run our experiments on subsets of two widely used datasets, viz., Freebase (Bollacker et al., 2008) and WordNet (Miller, 1995), called FB15k and WN18 (Bordes et al., 2013), respectively. We detail the characteristics of these datasets in Table 2. Please note that while the results presented in Section 6 are on the FB15K dataset, we reach the same conclusions on WN18. The plots for our experiments on WN18 can be found in the Supplementary Section. Hyperparameters: We experiment with multiple values of hyperparameters to understand their effect on the geometry of KG embeddings. Specifically, we vary the dimension of the generated vectors between {50, 100, 200} and the number of negative samples used during training between {1, 50, 100}. For more details on algorithm specific hyperparameters, we refer the reader to the Supplementary Section.2 2For training, we used codes from https://github. Frequency Bins: We follow (Mimno and Thompson, 2017) for entity and relation samples used in the analysis. Multiple bins of entities and relations are created based on their frequencies and 100 randomly sampled vectors are taken from each bin. These set of sampled vectors are then used for our analysis. For more information about sampling vectors, please refer to (Mimno and Thompson, 2017). 6 Results and Analysis In this section, we evaluate the following questions. • Does model type (e.g., additive vs multiplicative) have any effect on the geometry of embeddings? (Section 6.1) com/Mrlyk423/Relation_Extraction (TransE, TransR), https://github.com/datquocnguyen/ STransE (STransE), https://github.com/ mnick/holographic-embeddings (HolE) and https://github.com/ttrouill/complex (ComplEx and DistMult). 127 (a) Additive Models (b) Multiplicative Models Figure 3: Alignment to Mean (ATM) vs Density plots for relation embeddings learned by various additive (top row) and multiplicative (bottom row) KG embedding methods. For each method, a plot averaged across entity frequency bins is shown. Trends in these plots are similar to those in Figure 2. Main findings from these plots are summarized in Section 6.1. • Does negative sampling have any effect on the embedding geometry? (Section 6.2) • Does the dimension of embedding have any effect on its geometry? (Section 6.3) • How is task performance related to embedding geometry? 
(Section 6.4) In each subsection, we summarize the main findings at the beginning, followed by evidence supporting those findings. 6.1 Effect of Model Type on Geometry Summary of Findings: Additive: Low conicity and high vector spread. Multiplicative: High conicity and low vector spread. In this section, we explore whether the type of the score function optimized during the training has any effect on the geometry of the resulting embedding. For this experiment, we set the number of negative samples to 1 and the vector dimension to 100 (we got similar results for 50-dimensional vectors). Figure 2 and Figure 3 show the distribution of ATMs of these sampled entity and relation vectors, respectively.3 Entity Embeddings: As seen in Figure 2, there is a stark difference between the geometries of entity vectors produced by additive and multiplicative models. The ATMs of all entity vectors produced by multiplicative models are positive with very low vector spread. Their high conicity suggests that they are not uniformly dispersed in the vector space, but lie in a narrow cone along the mean vector. This is in contrast to the entity vectors obtained from additive models which are both positive and negative with higher vector spread. From the lower values of conicity, we conclude that entity vectors from additive models are evenly dispersed in the vector space. This observation is also reinforced by looking at the high vector spread of additive models in comparison to that of multiplicative models. We also observed that additive models are sensitive to the frequency of entities, with high frequency bins having higher conicity than low frequency bins. However, no such pattern was observed for multiplicative models and 3We also tried using the global mean instead of mean of the sampled set for calculating cosine similarity in ATM, and got very similar results. 128 Figure 4: Conicity (left) and Average Vector Length (right) vs Number of negative samples for entity vectors learned using various KG embedding methods. In each bar group, first three models are additive, while the last three are multiplicative. Main findings from these plots are summarized in Section 6.2 conicity was consistently similar across frequency bins. For clarity, we have not shown different plots for individual frequency bins. Relation Embeddings: As in entity embeddings, we observe a similar trend when we look at the distribution of ATMs for relation vectors in Figure 3. The conicity of relation vectors generated using additive models is almost zero across frequency bands. This coupled with the high vector spread observed, suggests that these vectors are scattered throughout the vector space. Relation vectors from multiplicative models exhibit high conicity and low vector spread, suggesting that they lie in a narrow cone centered at origin, like their entity counterparts. 6.2 Effect of Number of Negative Samples on Geometry Summary of Findings: Additive: Conicity and average length are invariant to changes in #NegativeSamples for both entities and relations. Multiplicative: Conicity increases while average vector length decrease with increasing #NegativeSamples for entities. Conicity decreases, while average vector length remains constant (except HolE) for relations. For experiments in this section, we keep the vector dimension constant at 100. Entity Embeddings: As seen in Figure 4 (left), the conicity of entity vectors increases as the number of negative samples is increased for multiplicative models. 
In contrast, conicity of the entity vectors generated by additive models is unaffected by change in number of negative samples and they continue to be dispersed throughout the vector space. From Figure 4 (right), we observe that the average length of entity vectors produced by additive models is also invariant of any changes in number of negative samples. On the other hand, increase in negative sampling decreases the average entity vector length for all multiplicative models except HolE. The average entity vector length for HolE is nearly 1 for any number of negative samples, which is understandable considering it constrains the entity vectors to lie inside a unit ball (Nickel et al., 2016). This constraint is also enforced by the additive models: TransE, TransR, and STransE. Relation Embeddings: Similar to entity embeddings, in case of relation vectors trained using additive models, the average length and conicity do not change while varying the number of negative samples. However, the conicity of relation vectors from multiplicative models decreases with increase in negative sampling. The average relation vector length is invariant for all multiplicative methods, except for HolE. We see a surprisingly big jump in average relation vector length for HolE going from 1 to 50 negative samples, but it does not change after that. Due to space constraints in the paper, we refer the reader to the Supplementary Section for plots discussing the effect of number of negative samples on geometry of relation vectors. We note that the multiplicative score between two vectors may be increased by either increasing the alignment between the two vectors (i.e., increasing Conicity and reducing vector spread between them), or by increasing their lengths. It is interesting to note that we see exactly these effects in the geometry of multiplicative methods 129 Figure 5: Conicity (left) and Average Vector Length (right) vs Number of Dimensions for entity vectors learned using various KG embedding methods. In each bar group, first three models are additive, while the last three are multiplicative. Main findings from these plots are summarized in Section 6.3. analyzed above. 6.2.1 Correlation with Geometry of Word Embeddings Our conclusions from the geometrical analysis of entity vectors produced by multiplicative models are similar to the results in (Mimno and Thompson, 2017), where increase in negative sampling leads to increased conicity of word vectors trained using the skip-gram with negative sampling (SGNS) method. On the other hand, additive models remain unaffected by these changes. SGNS tries to maximize a score function of the form wT · c for positive word context pairs, where w is the word vector and c is the context vector (Mikolov et al., 2013). This is very similar to the score function of multiplicative models as seen in Table 1. Hence, SGNS can be considered as a multiplicative model in the word domain. Hence, we argue that our result on the increase in negative samples increasing the conicity of vectors trained using a multiplicative score function can be considered as a generalization of the one in (Mimno and Thompson, 2017). 6.3 Effect of Vector Dimension on Geometry Summary of Findings: Additive: Conicity and average length are invariant to changes in dimension for both entities and relations. Multiplicative: Conicity decreases for both entities and relations with increasing dimension. Average vector length increases for both entities and relations, except for HolE entities. 
Entity Embeddings: To study the effect of vector dimension on conicity and length, we set the number of negative samples to 1, while varying the vector dimension. From Figure 5 (left), we observe that the conicity for entity vectors generated by any additive model is almost invariant of increase in dimension, though STransE exhibits a slight decrease. In contrast, entity vector from multiplicative models show a clear decreasing pattern with increasing dimension. As seen in Figure 5 (right), the average lengths of entity vectors from multiplicative models increase sharply with increasing vector dimension, except for HolE. In case of HolE, the average vector length remains constant at one. Deviation involving HolE is expected as it enforces entity vectors to fall within a unit ball. Similar constraints are enforced on entity vectors for additive models as well. Thus, the average entity vector lengths are not affected by increasing vector dimension for all additive models. Relation Embeddings: We reach similar conclusion when analyzing against increasing dimension the change in geometry of relation vectors produced using these KG embedding methods. In this setting, the average length of relation vectors learned by HolE also increases as dimension is increased. This is consistent with the other methods in the multiplicative family. This is because, unlike entity vectors, the lengths of relation vectors of HolE are not constrained to be less than unit length. Due to lack of space, we are unable to show plots for relation vectors here, but the same can be found in the Supplementary Section. 130 Figure 6: Relationship between Performance (HITS@10) on a link prediction task vs Conicity (left) and Avg. Vector Length (right). For each point, N represents the number of negative samples used. Main findings are summarized in Section 6.4. 6.4 Relating Geometry to Performance Summary of Findings: Additive: Neither entites nor relations exhibit correlation between geometry and performance. Multiplicative: Keeping negative samples fixed, lower conicity or higher average vector length for entities leads to improved performance. No relationship for relations. In this section, we analyze the relationship between geometry and performance on the Link prediction task, using the same setting as in (Bordes et al., 2013). Figure 6 (left) presents the effects of conicity of entity vectors on performance, while Figure 6 (right) shows the effects of average entity vector length.4 As we see from Figure 6 (left), for fixed number of negative samples, the multiplicative model with lower conicity of entity vectors achieves better performance. This performance gain is larger for higher numbers of negative samples (N). Additive models don’t exhibit any relationship between performance and conicity, as they are all clustered around zero conicity, which is in-line with our observations in previous sections. In Figure 6 (right), for all multiplicative models except HolE, a higher average entity vector length translates to better performance, while the number of negative samples is kept fixed. Additive models and HolE don’t exhibit any such patterns, as they are all clustered just below unit average entity vector length. The above two observations for multiplicative models make intuitive sense, as lower conicity and higher average vector length would both translate 4A more focused analysis for multiplicative models is presented in Section 3 of Supplementary material. to vectors being more dispersed in the space. 
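For reference, the performance axis of Figure 6 is HITS@10 from the link-prediction protocol of Bordes et al. (2013). The sketch below shows the raw (unfiltered) tail-prediction variant of that metric for an arbitrary score function; it is a simplified illustration rather than the evaluation code used in the paper, and head corruption and the filtered setting are noted only in comments.

```python
import numpy as np

def hits_at_10(test_triples, all_entities, score_fn):
    """Raw HITS@10 over corrupted-tail rankings.

    score_fn(h, r, t) returns a scalar (higher = more plausible). The standard
    protocol also corrupts heads analogously, and the filtered setting removes
    corrupted triples that are themselves true in the KG.
    """
    hits = 0
    for h, r, t in test_triples:
        gold = score_fn(h, r, t)
        scores = np.array([score_fn(h, r, e) for e in all_entities])
        rank = 1 + int(np.sum(scores > gold))
        hits += int(rank <= 10)
    return hits / len(test_triples)
```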
We see another interesting observation regarding the high sensitivity of HolE to the number of negative samples used during training. Using a large number of negative examples (e.g., N = 50 or 100) leads to very high conicity in case of HolE. Figure 6 (right) shows that average entity vector length of HolE is always one. These two observations point towards HolE’s entity vectors lying in a tiny part of the space. This translates to HolE performing poorer than all other models in case of high numbers of negative sampling. We also did a similar study for relation vectors, but did not see any discernible patterns. 7 Conclusion In this paper, we have initiated a systematic study into the important but unexplored problem of analyzing geometry of various Knowledge Graph (KG) embedding methods. To the best of our knowledge, this is the first study of its kind. Through extensive experiments on multiple realworld datasets, we are able to identify several insights into the geometry of KG embeddings. We have also explored the relationship between KG embedding geometry and its task performance. We have shared all our source code to foster further research in this area. Acknowledgements We thank the anonymous reviewers for their constructive comments. This work is supported in part by the Ministry of Human Resources Development (Government of India), Intel, Intuit, and by gifts from Google and Accenture. 131 References Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data. AcM, pages 1247–1250. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in neural information processing systems. pages 2787–2795. T. Dettmers, P. Minervini, P. Stenetorp, and S. Riedel. 2017. Convolutional 2D Knowledge Graph Embeddings. ArXiv e-prints . Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, pages 601–610. Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In AAAI. pages 2181–2187. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. pages 3111–3119. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM 38(11):39– 41. David Mimno and Laure Thompson. 2017. The strange geometry of skip-gram with negative sampling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pages 2863–2868. T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, and J. Welling. 2015. Never-ending learning. In Proceedings of AAAI. Dat Quoc Nguyen, Kairit Sirts, Lizhen Qu, and Mark Johnson. 2016. 
Stranse: a novel embedding model of entities and relationships in knowledge bases. In Proceedings of NAACL-HLT. pages 460–466. Maximilian Nickel, Lorenzo Rosasco, and Tomaso A. Poggio. 2016. Holographic embeddings of knowledge graphs. In AAAI. Srinivas Ravishankar, Chandrahas, and Partha Pratim Talukdar. 2017. Revisiting simple neural networks for learning representations of knowledge graphs. 6th Workshop on Automated Knowledge Base Construction (AKBC) at NIPS 2017 . M. Schlichtkrull, T. N. Kipf, P. Bloem, R. van den Berg, I. Titov, and M. Welling. 2017. Modeling Relational Data with Graph Convolutional Networks. ArXiv eprints . Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems. pages 926–934. Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In WWW. Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. 2015. Representing Text for Joint Embedding of Text and Knowledge Bases. In Empirical Methods in Natural Language Processing (EMNLP). ACL Association for Computational Linguistics. Th´eo Trouillon, Johannes Welbl, Sebastian Riedel, ´Eric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In ICML. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In AAAI. Citeseer, pages 1112–1119. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575 .
2018
12
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1297–1307 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1297 Learning Prototypical Goal Activities for Locations Tianyu Jiang and Ellen Riloff School of Computing University of Utah Salt Lake City, UT 84112 {tianyu, riloff}@cs.utah.edu Abstract People go to different places to engage in activities that reflect their goals. For example, people go to restaurants to eat, libraries to study, and churches to pray. We refer to an activity that represents a common reason why people typically go to a location as a prototypical goal activity (goal-act). Our research aims to learn goal-acts for specific locations using a text corpus and semi-supervised learning. First, we extract activities and locations that co-occur in goal-oriented syntactic patterns. Next, we create an activity profile matrix and apply a semi-supervised label propagation algorithm to iteratively revise the activity strengths for different locations using a small set of labeled data. We show that this approach outperforms several baseline methods when judged against goal-acts identified by human annotators. 1 Introduction Every day, people go to different places to accomplish goals. People go to stores to buy clothing, go to restaurants to eat, and go to the doctor for medical services. People travel to specific destinations to enjoy the beach, go skiing, or see historical sites. For most places, people typically go there for a common set of reasons, which we will refer to as prototypical goal activities (goal-acts) for a location. For example, a prototypical goal-act for restaurants would be “eat food” and for IKEA would be “buy furniture”. Previous research has established that recognizing people’s goals is essential for narrative text understanding and story comprehension (Schank and Abelson, 1977; Wilensky, 1978; Lehnert, 1981; Elson and McKeown, 2010; Goyal et al., 2013). Goals and plans are essential to understand people’s behavior and we use our knowledge of prototypical goals to make inferences when reading. For example, consider the following pair of sentences: “Mary went to the supermarket. She needed milk.” Most people will infer that Mary purchased milk, unless told otherwise. But a purchase event is not explicitly mentioned. In contrast, a similar sentence pair “Mary went to the theatre. She needed milk.” feels incongruent and does not produce that inference. Recognizing goals is also critical for conversational dialogue systems. For example, if a friend tells you that they went to a restaurant, you might reply “What did you eat?”, but if a friend says that they went to Yosemite, a more appropriate response might be “Did you hike?” or “Did you see the waterfalls?”. Our knowledge of prototypical goal activities also helps us resolve semantic ambiguity. For example, consider the following sentences: (a) She went to the kitchen and got chicken. (b) She went to the supermarket and got chicken. (c) She went to the restaurant and got chicken. In sentence (a), we infer that she retrieved chicken (e.g., from the refrigerator) but did not pay for it. In (b), we infer that she paid for the chicken but probably did not eat it at the supermarket. In (c), we infer that she ate the chicken at the restaurant. Note how the verb “got” maps to different presumed events depending on the location. Our research aims to learn the prototypical goalacts for locations using a text corpus. 
First, we extract activities that co-occur with locations in goaloriented syntactic patterns. Next, we construct an activity profile matrix that consists of an activity vector (profile) for each of the locations. We then apply a semi-supervised label propagation algorithm to iteratively revise the activity profile strengths based on a small set of labeled locations. 1298 (noun) (verb) Particle NP Head Lemma: go Subject prt nmod(to) dobj nsubj xcomp(to) compound Figure 1: Dependency relation structure for “go to” pattern. We also incorporate external resources to measure similarity between different activity expressions. Our results show that this semi-supervised learning approach outperforms several baseline methods in identifying the prototypical goal activities for locations. 2 Related Work Recognizing plans and goals is fundamental to narrative story understanding (Schank and Abelson, 1977; Bower, 1982). Conceptual knowledge structures developed in prior work have shown the importance of this type of knowledge, including plans (Wilensky, 1978), goal trees (Carbonell, 1979), and plot units (Lehnert, 1981). Wilensky’s research aimed to understand the actions of characters in stories by analyzing their goals, and their plans to accomplish those goals. For example, someone’s goal might be to obtain food with a plan to go to a restaurant. Our work aims to learn prototypical goals associated with a location, to support similar inference capabilities during story understanding. Goals and plans can also function to trigger scripts (Cullingford, 1978), such as the $RESTAURANT script. There has been growing interest in learning narrative event chains and script knowledge from large text corpora (e.g., (Chambers and Jurafsky, 2008, 2009; Jans et al., 2012; Pichotta and Mooney, 2014, 2016)). In addition, Goyal et al. (2010; 2013) developed a system to automatically produce plot unit representations for short stories. A manual analysis of their stories revealed that 61% of Positive/Negative Affect States originated from completed plans and goals, and 46% of Mental Affect States originated from explicitly stated or inferred plans and goals. Elson & McKeown (2010) included plans and goals in their work on creating extensive story bank annotations that capture the knowledge needed to understand narrative structure. Researchers have also begun to explore NLP methods for recognizing the goals, desires, and plans of characters in stories. Recent work has explored techniques to detect wishes (desires) in natural language text (Goldberg et al., 2009) and identify desire fulfillment (Chaturvedi et al., 2016; Rahimtoroghi et al., 2017). Graph-based semi-supervised learning has been successfully used for many tasks, including sentiment analysis (Rao and Ravichandran, 2009; Feng et al., 2013), affective event recognition (Ding and Riloff, 2016) and class-instance extraction (Talukdar and Pereira, 2010). The semi-supervised learning algorithm used in this paper is modeled after a framework developed by Zhu et al. (2003) based on harmonic energy minimization and a label propagation algorithm described in (Zhu and Ghahramani, 2002). 3 Learning Prototypical Goal Activities Our aim is to learn the most prototypical goal-acts for locations. To tackle this problem, we first extract locations and related activities from a large text corpus. Then we use a semi-supervised learning method to identify the goal activities for individual locations. In the following sections we describe these processes in detail. 
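To make the dependency pattern of Figure 1 concrete, the sketch below matches it over Stanford-style dependency arcs. The token and arc data structures are invented for illustration, and several constraints are simplified (e.g., the "to" case marker and noun compounds are only noted in comments); the full matching conditions are spelled out in Section 3.1 below.

```python
def match_go_to(tokens, arcs):
    """Extract (location, activity) pairs for the "go to X to Y" pattern.

    `tokens`: list of dicts with 'word', 'lemma', 'pos' keys.
    `arcs`:   list of (head_index, relation, dependent_index) triples,
              roughly what a Stanford dependency parse provides.
    """
    def deps_of(i, rel_prefix):
        return [d for h, rel, d in arcs if h == i and rel.startswith(rel_prefix)]

    pairs = []
    for i, tok in enumerate(tokens):
        if tok["lemma"] != "go" or not deps_of(i, "nsubj"):
            continue  # require the verb "go" with an explicit subject
        for x in deps_of(i, "nmod"):           # "to X" (case-marker check omitted)
            if not tokens[x]["pos"].startswith("NN"):
                continue                       # X must be a noun (or noun compound)
            for y in deps_of(i, "xcomp"):      # "to Y": open clausal complement
                if not tokens[y]["pos"].startswith("VB"):
                    continue                   # Y must be a verb
                activity = [tokens[y]["word"]]
                activity += [tokens[p]["word"] for p in deps_of(y, "prt")]  # particle
                for obj in deps_of(y, "dobj"):                              # NP head noun
                    activity.append(tokens[obj]["word"])
                pairs.append((tokens[x]["word"], " ".join(activity)))
    return pairs
```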
                     a1 = buy book   a2 = eat burger   ...   am = pray
  l1 = McDonald's        .10             .30          ...      .01
  l2 = Burger King       .12             .50          ...      .02
  l3 = bookstore         .40             .02          ...      .04
  ...                     ...             ...          ...      ...
  ln = church             .05             .01          ...      .70

Table 1: An illustration of the activity profile matrix Y.

3.1 Location and Activity Extraction To collect information about locations and activities, we use the 2011 Spinn3r dataset (Burton et al., 2011). Since our interest is learning about the activities of ordinary people in their daily lives, we use the Weblog subset of the Spinn3r corpus, which contains over 133 million blog posts. We use the text data to identify activities that are potential goal-acts for a location. However we also need to identify locations and want to include both proper names (e.g., Disneyland) as well as nominals (e.g., store, beach), so Named Entity Recognition will not suffice. Consequently, we extract (Loc, Act) pairs using syntactic patterns. First, we apply the Stanford dependency parser (Manning et al., 2014). We then extract sentences that match the pattern “go to X to Y” with the following conditions: (1) there exists a subject connecting to “go”, (2) X has an nmod (nominal modifier) relation to “go” (lemma), (3) X is a noun or noun compound, (4) Y has an xcomp relation (open clausal complement) with “go”, and (5) Y is a verb. Figure 1 depicts the intended syntactic structure, which we will informally call the “go to” pattern. For sentences that match this pattern, we extract X as a location and Y as an activity. If the verb is followed by a particle and/or noun phrase (NP), then we also include the particle and head noun of the NP. For example, we extract activities such as “pray”, “clean up”, and “buy sweater”. This syntactic structure was chosen to identify activities that are described as being the reason why someone went to the location. However it is not perfect. In some cases, X is not a location (e.g., “go to great lengths to ...” yields “lengths” as a location), or Y is not a goal-act for X (e.g., “go to the office to retrieve my briefcase ...” yields “retrieve briefcase” which is not a prototypical goal for “office”). Interestingly, the pattern extracts some nominals that are not locations in a strict sense, but behave as locations. For example, “go to the doctor” extracts “doctor” as a location. Literally a doctor is a person, but in this context it really refers to the doctor’s office, which is a location. The pattern also extracts entities such as “roof”, which are not generally thought of as locations but do have a fixed physical location. Other extracted entities are virtual but function as locations, such as “Internet”. For the purposes of this work, we use the term location in a general sense to include any place or object that has a physical, virtual or implied location. The “go to” pattern worked quite well at extracting (Loc, Act) pairs, but in relatively small quantities due to the very specific nature of the syntactic structure. So we tried to find additional activities for those locations. Initially, we tried harvesting activities that occurred in close proximity (within 5 words) to a known location, but the results were too noisy. Instead, we used the pattern “Y in/at X” with the same syntactic constraints for Y (the extracted activity) and X (a location extracted by the “go to” pattern). We discovered many sentences in the corpus that were exactly or nearly the same, differing only by a few words, which resulted in artificially high frequency counts for some (Loc, Act) pairs.
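Before turning to the duplicate filtering described next, the “go to X to Y” extraction above can be made concrete with a small sketch. This is illustrative code of our own, not the authors’ implementation: it assumes the parse is available as a list of token records with Universal-Dependencies-style fields (id, lemma, pos, dep, head), and the helper name extract_go_to_pairs is ours.

```python
def extract_go_to_pairs(tokens):
    """Extract (location, activity) pairs from one dependency-parsed sentence.

    Each token is a dict with keys: id, lemma, pos, dep, head
    (head is the id of the governing token, 0 for the root).
    """
    children = {}
    for t in tokens:
        children.setdefault(t["head"], []).append(t)

    pairs = []
    for go in (t for t in tokens if t["lemma"] == "go" and t["pos"] == "VERB"):
        kids = children.get(go["id"], [])
        has_subj = any(k["dep"] == "nsubj" for k in kids)               # condition (1)
        locs = [k for k in kids if k["dep"] == "nmod"                   # condition (2)
                and k["pos"] in ("NOUN", "PROPN")]                      # condition (3)
        acts = [k for k in kids if k["dep"] == "xcomp"                  # condition (4)
                and k["pos"] == "VERB"]                                 # condition (5)
        if not (has_subj and locs and acts):
            continue
        for x in locs:
            # keep compound modifiers so that e.g. "apple store" stays intact
            comp = [c["lemma"] for c in children.get(x["id"], []) if c["dep"] == "compound"]
            location = " ".join(comp + [x["lemma"]])
            for y in acts:
                ykids = children.get(y["id"], [])
                prt = [c["lemma"] for c in ykids if c["dep"] == "prt"]     # particle, e.g. "clean up"
                objs = [c["lemma"] for c in ykids if c["dep"] == "dobj"]   # head noun of the object NP
                activity = " ".join([y["lemma"]] + prt + objs[:1])
                pairs.append((location, activity))
    return pairs
```

The secondary “Y in/at X” pattern mentioned above can be handled analogously by checking the prepositional attachment of Y to a previously extracted location.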
So we filtered duplicate or near-duplicate sentences by computing the longest common substring of sentence pairs that extracted the same (Loc, Act). If the shared substring had length ≥5, then we discarded the “duplicate” sentence. Finally, we applied three filters. To keep the size of the data manageable, we discarded locations and activities that were each extracted with frequency < 30 by our patterns. And we manually filtered locations that are Named Entities corresponding to cities or larger geo-political regions (e.g., provinces or countries). Large regions defined by government boundaries fall outside the scope of our task because the set of activities that typically occur in (say) a city or country is so broad. Finally, we added a filter to try to remove extremely general activities that can occur almost anywhere (e.g., visit). If an activity co-occurred with > 20% of the extracted (distinct) locations, then we discarded it. After these filters, we extracted 451 distinct locations, 5143 distinct activities, roughly 200, 000 distinct (Loc, Act) pairs, and roughly 500, 000 instances of (Loc, Act) pairs. 3.2 Activity Profiles for Locations We define an activity profile matrix Y of size n×m, where n is the number of distinct locations and m is the number of distinct activities. Yi,j represents the strength of the jth activity aj being a goal-act for li. We use yi ∈Rm to denote the ith row of Y . Table 1 shows an illustration of (partial) activity profiles for four locations.1 Our goal is 1Not actual values, for illustration only. 1300 to learn the Yi,j values so that activities with high strength are truly goal-acts for location li. We could build the activity profile for location li using the co-occurrence data extracted from the blog corpus. For example, we could estimate P(aj | li) directly from the frequency counts of the activities extracted for li. However, a high co-occurrence frequency doesn’t necessarily mean that the activity represents a prototypical goal. For example, the activity “have appointment” frequently co-occurs with “clinic” but doesn’t reveal the underlying reason for going to the clinic (e.g., probably to see a doctor or undergo a medical test). To appreciate the distinction, imagine that you asked a friend why she went to a health clinic, and she responded with “because I had an appointment”. You would likely view her response as being snarky or evasive (i.e., she didn’t want to tell you the reason). In Section 4, we will evaluate this approach as a baseline and show that it does not perform well. 3.3 Semi-Supervised Learning of Goal-Act Probabilities Our aim is to learn the activity profiles for locations using a small amount of labeled data, so we frame this problem as a semi-supervised learning task. Given a small number of “seed” locations coupled with predefined goal-acts, we want to learn the goal-acts for new locations. 3.3.1 Location Similarity Graph We use li ∈L to represent location li, where |L| = n. We define an undirected graph G = (V, E) with vertices representing locations (|V | = n) and edges E = V ×V , such that each pair of vertices vi and vk is connected with an edge eik whose weight represents the similarity between li and lk. We can then represent the edge weights as an n × n symmetric weight matrix W indicating the similarity between locations. 
There could be many ways to define the weights, but for now we use the following definition from (Zhu et al., 2003), where σ^2 is a hyper-parameter (footnote 2: we use the same value σ^2 = 0.03 as (Zhu et al., 2003)):

W_{i,k} = \exp\left( -\frac{1}{\sigma^2} \left( 1 - \mathrm{sim}(l_i, l_k) \right) \right)    (1)

To assess the similarity between locations, we measure the cosine similarity between vectors of their co-occurrence frequencies with activities. Specifically, let matrix F_{n \times m} = [f_1, ..., f_n]^T, where f_i is a vector of length m capturing the co-occurrence frequencies between location l_i and each activity a_j in the extracted data (i.e., F_{i,j} is the number of times that activity a_j occurred with location l_i). We then define location similarity as:

\mathrm{sim}(l_i, l_k) = \frac{f_i^T f_k}{\|f_i\|\,\|f_k\|}    (2)

3.3.2 Initializing Activity Profiles We use semi-supervised learning with a set of “seed” locations from human annotations, and another set of locations that are unlabeled. So we subdivide the set of locations into S = {l_1, ..., l_s}, which are the seed locations, and U = {l_{s+1}, ..., l_{s+u}}, which are the unlabeled locations, such that s + u = n. For an unlabeled location l_i ∈ U, the initial activity profile is the normalized co-occurrence frequency vector f_i. For each seed location l_i ∈ S, we first automatically construct an activity profile vector h_i based on the gold goal-acts which were obtained from human annotators as described in Section 4.1. All activities not in the gold set are assigned a value of zero. Each activity a_j in the gold set is assigned a probability P(a_j | l_i) based on the gold answers. However, the gold goal-acts may not match the activity phrases found in the corpus (see discussion in Section 4.3), so we smooth the vector created with the gold goal-acts by averaging it with the normalized co-occurrence frequency vector f_i extracted from the corpus. The activity profiles of seed locations stay constant through the learning process. We use y_i^0 to denote the initial activity profiles. So when l_i ∈ S, y_i^0 = (f_i + h_i)/2.

3.3.3 Learning Goal-Act Strengths We apply a learning framework developed by (Zhu et al., 2003) based on harmonic energy minimization and extend it to multiple labels. Intuitively, we assume that similar locations should share similar activity profiles (footnote 3: this is a heuristic but is not always true), which motivates the following objective function over matrix Y:

\arg\min_Y \sum_{i,k} W_{i,k} \| y_i - y_k \|^2 \quad \text{s.t.}\ y_i = y_i^0 \ \text{for each}\ l_i \in S    (3)

Let D = (d_i) denote an n × n diagonal matrix where d_i = \sum_{k=1}^{n} W_{i,k}. Let’s split Y by the s-th row:

Y = \begin{bmatrix} Y_s \\ Y_u \end{bmatrix},

then split W (similarly for D) into four blocks by the s-th row and column:

W = \begin{bmatrix} W_{ss} & W_{su} \\ W_{us} & W_{uu} \end{bmatrix}    (4)

From (Zhu et al., 2003), the solution to Eq. (3) is given by:

Y_u = (D_{uu} - W_{uu})^{-1} W_{us} Y_s    (5)

We then use the label propagation algorithm described in (Zhu and Ghahramani, 2002) to compute Y:

Algorithm 1
  repeat
    Y ← D^{-1} W Y
    Clamp y_i = y_i^0 for each l_i ∈ S
  until convergence

3.3.4 Activity Similarity One problem with the above algorithm is that it only takes advantage of relations between vertices (i.e., locations). If there are intrinsic relations between activities, they could be exploited as a complementary source of information to benefit the learning. Intuitively, different pairs of activities share different similarities, e.g., “eat burgers” should be more similar to “have lunch” than “read books”.
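Before the activity-similarity extension is developed, Eqs. (1)-(2) and Algorithm 1 can be summarized in a short NumPy sketch. This is our own illustration rather than the authors’ code; it assumes the s seed locations occupy the first s rows of the count matrix F and of the initial profile matrix Y0 built as in Section 3.3.2.

```python
import numpy as np

def location_weights(F, sigma2=0.03):
    """Location similarity weights: Eq. (2) (cosine similarity) plugged into Eq. (1)."""
    norms = np.linalg.norm(F, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    unit = F / norms
    sim = unit @ unit.T
    return np.exp(-(1.0 - sim) / sigma2)

def propagate(F, Y0, s, sigma2=0.03, max_iters=500, tol=1e-6):
    """Algorithm 1: iterate Y <- D^{-1} W Y, clamping the first s (seed) rows to Y0."""
    W = location_weights(F, sigma2)
    d = W.sum(axis=1, keepdims=True)   # diagonal entries of D
    Y = Y0.copy()
    for _ in range(max_iters):
        Y_next = (W @ Y) / d
        Y_next[:s] = Y0[:s]            # clamp y_i = y_i^0 for the seed locations
        if np.abs(Y_next - Y).max() < tol:
            return Y_next
        Y = Y_next
    return Y
```

The variant introduced in Section 3.3.5 only changes the update line to right-multiply by the activity-similarity matrix A, i.e. Y_next = (W @ Y @ A) / d.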
Under this idea, similar to the previous location similarity weight matrix W, we want to define an activity similarity weight matrix A_{m×m} where A_{i,k} indicates the similarity weight between activity a_i and a_k:

A_{i,k} = \exp\left( -\frac{1}{\sigma^2} \left( 1 - \mathrm{sim}(a_i, a_k) \right) \right)    (6)

where σ^2 is the same as in Eq (1). We explore 3 different similarity functions sim(a_i, a_k) based on co-occurrence with locations, word matching, and embedding similarities. First, similar to Eq (2), we can use each activity’s co-occurrence frequency with all locations as its location profile and define a similarity score based on cosine values of location profile vectors:

\mathrm{sim}_L(a_i, a_k) = \frac{g_i^T g_k}{\|g_i\|\,\|g_k\|}    (7)

where the predefined co-occurrence frequency matrix F = [f_1, ..., f_n]^T = [g_1, ..., g_m]. As a second option, the similarity between activities can often be implied by their lexical overlap, e.g., two activities sharing the same verb or noun might be related. For each word belonging to any of our activities, we use WordNet (Miller, 1995) to find its synonyms. We also include the word itself in the synonym set. If the synonym sets of two words overlap, we call these two words “match”. Then we define the lexical overlap similarity function between a_i and a_k:

\mathrm{sim}_O(a_i, a_k) = \begin{cases} 1 & \text{if verb and noun match} \\ 0.5 & \text{if verb or noun match} \\ 0 & \text{otherwise} \end{cases}    (8)

As a third option, we can use 300-dimension word embedding vectors (Pennington et al., 2014) trained on 840 billion tokens of web data to compute semantic similarity. We compute an activity’s embedding as the average of its words’ embeddings. Let sim_E(a_i, a_k) be the cosine value between the embedding vectors of a_i and a_k:

\mathrm{sim}_E(a_i, a_k) = \cos\langle \mathrm{Embed}(a_i), \mathrm{Embed}(a_k) \rangle    (9)

Finally, we can plug these similarity functions into Eq (6). We use A_L, A_O, A_E to denote the corresponding matrix. We can also plug in multiple similarity metrics such as (sim_L + sim_E)/2 and use combination symbols A_{L+E} to denote the matrix.

3.3.5 Injecting Activity Similarity Once we have a similarity matrix for activities, the next question is how will it help with the activity profile computation? Recall from Eq (5), we know that the activity profile of an unlabeled location can be represented by a linear combination of other locations’ activity profiles. The activity profile matrix Y is an n × m matrix where each row is the activity profile for a location. We can also view Y as a matrix whose each column is the location profile for an activity. Using the same idea, we can make each column approximate a linear combination of its highly related columns (i.e., the location profile of an activity will become more similar to the location profiles of its similar activities). Our expectation is that this approximation will help improve the quality of Y. By being right multiplied by matrix A, Y gets updated from manipulating its columns (activities) as well. We modify the algorithm accordingly as below:

Algorithm 2
  repeat
    Y ← D^{-1} W Y A
    Clamp y_i = y_i^0 for each l_i ∈ S
  until convergence

[Figure 2: Percentage of locations that have at least one goal-act assigned by multiple annotators. Axes: # of Annotators Listing Same Activity (0-10) vs. % of Locations (0-100%).]

4 Evaluation 4.1 Gold Standard Data Since this is a new task and there is no existing dataset for evaluation, we use crowd-sourcing via Amazon Mechanical Turk (AMT) to acquire gold standard data. First, we released a qualification test containing 15 locations along with detailed annotation guidelines.
25 AMT workers finished our assignment, and we chose 15 of them who did the best job following our guidelines to continue. We gave the 15 qualified workers 200 new locations, consisting of 152 nominals and 48 proper names,4 randomly selected from our extracted data and set aside as test data. For each location, we asked the AMT workers to complete the following sentence: People go to LOC to VERB NOUN LOC was replaced by one of the 200 locations. Annotators were asked to provide an activity that is the primary reason why a person would go to that location, in the form of just a VERB or a VERB NOUN pair. Annotators also had the option to label a location as an “ERROR” if they felt that the provided term is not a location, since our location extraction was not perfect. 4Same distribution as in the whole location set. Only 10 annotators finished labeling our test cases, so we used their answers as the gold standard. We discarded 12 locations that were labeled as an “ERROR” by ≥3 workers.5 This resulted in a test set of 188 locations paired with 10 manually defined goal-acts for each one. A key question that we wanted to investigate through this manual annotation effort is to know whether people truly do associate the same prototypical goal activities with locations. To what extent do people agree when asked to list goalacts? Also, some places clearly have a smaller set of goal-acts than others. For example, the primary reason to go to an airport is to catch a flight, but there’s a larger set of common reasons why people go to Yosemite (e.g.,“hiking camping”, “rock climbing”, “see waterfalls”, etc.). Complicating matters, the AMT workers often described the same activity with different words (e.g., “buy book” vs. “purchase book”). Automatically recognizing synonymous event phrases is a difficult NLP problem in its own right.6 So solely for the purpose of analysis, we manually merged activities that have a nearly identical meaning. We were extremely conservative and did not merge similar or related phrases that were not synonymous because the granularity of terms may matter for this task (e.g., we did not merge “eat burger” and “eat lunch” because one may apply to a specific location while the other does not). Figure 2 shows the results of our analysis. Only 1 location was assigned exactly the same goal-act by all 10 annotators. But at least half (5) of the annotators listed the same goal-act for 40% of the locations. And nearly 80% of locations had one or more goal-acts listed by ≥3 people. These results show that people often do share the same associations between prototypical goal-acts and locations. These results are also very conservative because many different answers were also similar (e.g. “eat burger”, “eat meal”). In Table 2 we show examples of locations and the goal-acts listed for them by the human annotators. If multiple people gave the same answer, we show the number in parentheses. For example, given the location “Toys R Us”, 9 people listed “buy toys” as a goal-act and 1 person listed “browse gifts”. We see from Table 2 that 5We found that the workers rarely used the “ERROR” label, so setting this threshold to be 3 was a strong signal. 6We tried using WordNet synsets to conflate phrases, but it didn’t help much. 
1303 Location Gold Goal-Acts Toys R Us buy toys (9), browse gifts sink wash hands (7), wash dishes (3) airport catch flight (7), board planes, take airplane, take trips bookstore buy books (6), browse books (2), browse bestsellers, read book lake go fishing (3), go swimming (3), drive boat (2), ride boat, see scenery chiropractor get treatment (3), adjust backs (3), alleviate pain (2), get adjustment, get aligned Chinatown buy goods (2), buy duck, buy souvenirs, eat dim sum, eat rice, eat wontons, find Chinese, speak Chinese, visit restaurants Table 2: Goal-acts provided by human annotators. some locations yield very similar sets of goal-acts (e.g., sink, airport, bookstore), while other locations show more diversity (e.g., lake, chiropractor, Chinatown). 4.2 Baselines To assess the difficulty of this NLP task, we created 3 baseline systems for comparison with our learning approach. All of these methods take the list of activities that co-occurred with a location li in our extracted data and rank them. The first baseline, FREQ, ranks the activities based on the co-occurrence frequency Fi,j between li and aj in our patterns. The second baseline, PMI, ranks the activities using point-wise mutual information. The third baseline, EMBED, ranks the activities based on the cosine similarity of the semantic embedding vectors for li and aj. We use GloVe (Pennington et al., 2014) 300dimension embedding vectors pre-trained on 840 billion tokens of web data. For locations and activities with multiple words, we create an embedding by averaging the vectors of their constituent words. 4.3 Matching Activities The gold standard contains a set of goal-acts for each location. Since the same activity can be expressed with many different phrases, the only way to truly know whether two phrases refer to the same activity is manual evaluation, which is expensive. Furthermore, many activities are very similar or highly related, but not exactly the same. For example, “eat burger” and “eat food” both describe eating activities, but the latter is more general than the former. Considering them to be the same is not always warranted (e.g., “eat MRRE MRRP TOP1 TOP2 TOP3 EMBED 0.02 0.09 0.05 0.08 0.12 PMI 0.20 0.33 0.25 0.36 0.41 FREQ 0.23 0.34 0.23 0.32 0.40 AP 0.28 0.38 0.29 0.41 0.47 AP+AL 0.28 0.40 0.32 0.44 0.49 AP+AO 0.23 0.33 0.24 0.35 0.43 AP+AE 0.25 0.36 0.28 0.40 0.47 AP+AL+E 0.29 0.42 0.35 0.44 0.52 Table 3: Scores for MRR and Top k results. burger” is a logical goal-act for McDonald’s but not for Baskin-Robbins which primarily sells ice cream). As another example, “buy chicken” and “eat chicken” refer to different events (buying and eating) so they are clearly not the same semantically. But at a place like KFC, buying chicken implies eating chicken, and vice versa, so they seem like equally good answers as goal-acts for KFC. Due to the complexities of determining which gold standard answers belong in equivalence classes, we considered all of the goal-acts provided by the human annotators to be acceptable answers. To determine whether an activity aj produced by our system matches any of the gold goal-acts for a location li, we report results using two types of matching criteria. Exact Match judges aj to be a correct answer for li if (1) it exactly matches (after lemmatization) any activity in li’s gold set, or (2) aj’s verb and noun both appear in li’s gold set, though possibly in different phrases. For example, if a gold set contains “buy novels” and “browse books”, then “buy books” will be a match. 
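A small sketch of the Exact Match test just described (our own illustration; it assumes activity phrases are already lemmatized and that the verb/noun split of a phrase is simply its first and last word, which is an assumption on our part). The Partial Match relaxation defined next is implemented analogously by requiring only one of the two components to match.

```python
def verb_noun(activity):
    """Assume an activity phrase is 'VERB' or 'VERB ... NOUN' after lemmatization."""
    parts = activity.split()
    return parts[0], (parts[-1] if len(parts) > 1 else None)

def exact_match(candidate, gold_set):
    """Exact Match: identical phrase, or its verb and noun both appear in the gold set."""
    if candidate in gold_set:
        return True
    verb, noun = verb_noun(candidate)
    gold_verbs = {verb_noun(g)[0] for g in gold_set}
    gold_nouns = {verb_noun(g)[1] for g in gold_set if verb_noun(g)[1]}
    return noun is not None and verb in gold_verbs and noun in gold_nouns
```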
Since Exact Match is very conservative, we also define a Partial Match criterion to give 50% credit for answers that partially overlap with a gold answer. An activity a_j is a partial match for l_i if either its verb or noun matches any of the activities in l_i’s gold set of goal-acts. For example, “buy burger” would be a partial match with “buy food” because their verbs match.

4.4 Evaluation Metrics All of our methods produce a ranked list of hypothesized goal-acts for a location. So we use Mean Reciprocal Rank (MRR) to judge the quality of the top 10 activities in each ranked list. We report two types of MRR scores. MRR based on the Exact Match criteria (MRR_E) is computed as follows, where n is the number of locations in the test set:

\mathrm{MRR}_E = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{\text{rank of 1st Exact Match}}    (10)

We also compute MRR using both the Exact Match and Partial Match criteria. First, we need to identify the “best” answer among the 10 activities in the ranked list, which depends both on each activity’s ranking and its matching score. The matching score for activity a_j is defined as:

\mathrm{score}(a_j) = \begin{cases} 1 & \text{if } a_j \text{ is an Exact Match} \\ 0.5 & \text{if } a_j \text{ is a Partial Match} \\ 0 & \text{otherwise} \end{cases}

Given 10 ranked activities a_1 ... a_{10} for l_i, we then compute:

\mathrm{best\_score}(l_i) = \max_{j=1..10} \frac{\mathrm{score}(a_j)}{\mathrm{rank}(a_j)}

And then finally define MRR_P as follows:

\mathrm{MRR}_P = \frac{1}{n} \sum_{i=1}^{n} \mathrm{best\_score}(l_i)    (11)

4.5 Experimental Results Unless otherwise noted, all of our experiments report results using 4-fold cross-validation on the 200 locations in our test set. We used 4 folds to ensure 50 seed locations for each run (i.e., 1 fold for training and 3 folds for testing). The first two columns of Table 3 show the MRR results under Exact Match and Partial Match conditions. The first 3 rows show the results for the baseline systems, and the remaining rows show results for our Activity Profile (AP) semi-supervised learning method. We show results for 5 variations of the algorithm: AP uses Algorithm 1, and the others use Algorithm 2 with different Activity Similarity measures: AP+A_L (location profile similarity), AP+A_O (overlap similarity), AP+A_E (embedding similarity), and AP+A_{L+E} (location profiles plus embeddings). Table 3 shows that our AP algorithm outperforms all 3 baseline methods. When adding Activity Similarity into the algorithm, we find that A_L slightly improves performance, but A_O and A_E do not. However, we also tried combining them and obtained improved results by using A_L and A_E together, yielding an MRR_P score of 0.42. To gain more insight about the behavior of the models, Table 3 also shows results for the top-ranked 1, 2, and 3 answers. For these experiments, the system gets full credit if any of its top k answers exactly matches the gold standard, or 50% credit if a partial match is among its top k answers. These results show that our AP method produces more correct answers at the top of the list than the baseline methods. Table 4 shows six locations with their gold answers and the Top 3 goal-acts hypothesized by our best AP system and the PMI and FREQ baselines. The activities in boldface were deemed correct (including Partial Match). For “bookstore” and “pharmacy”, all of the methods perform well. Note the challenge of recognizing that different phrases mean essentially the same thing (e.g., “fill prescription”, “pick up prescription”, “find medicine”). For “university” and “Meijer”, the AP method produces more appropriate answers than the baseline methods. For “market” and “phone”, all three methods struggle to produce good answers.
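For concreteness, Eqs. (10)-(11) can be computed as below; this sketch assumes exact_match and a corresponding partial_match predicate are available (e.g. as in the previous listing), and the helper names are ours.

```python
def match_score(candidate, gold_set):
    """score(a_j): 1 for an Exact Match, 0.5 for a Partial Match, 0 otherwise."""
    if exact_match(candidate, gold_set):
        return 1.0
    if partial_match(candidate, gold_set):
        return 0.5
    return 0.0

def mrr_scores(ranked_lists, gold_sets, k=10):
    """Return (MRR_E, MRR_P) over the test locations, using the top k answers each."""
    n = len(ranked_lists)
    mrr_e = mrr_p = 0.0
    for ranked, gold in zip(ranked_lists, gold_sets):
        top = ranked[:k]
        exact_ranks = [r for r, a in enumerate(top, start=1) if exact_match(a, gold)]
        mrr_e += 1.0 / exact_ranks[0] if exact_ranks else 0.0          # Eq. (10)
        mrr_p += max((match_score(a, gold) / r                         # Eq. (11)
                      for r, a in enumerate(top, start=1)), default=0.0)
    return mrr_e / n, mrr_p / n
```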
Since “market” is polysemous, we see activities related to both stores and financial markets. And “phone” arguably is not a location at all, but most human annotators treated it as a virtual location, listing goal-acts related to telephones. However our algorithm considered phones to be similar to computers, which makes sense for today’s smartphones. In general, we also observed that Internet sites behave as virtual locations in language (e.g., “I went to YouTube...”). 4.6 Discussion The goal-acts learned by our system were extracted from the Spinn3r dataset, while the gold standard answers were provided by human annotators, so the same (or very similar) activities are often expressed in different ways (see Section 4.3). This raises the question: what is the upper bound on system performance when evaluating against human-provided goal-acts? To answer this, we compared all of the activities that co-occurred with each location in the corpus against its gold goalacts. Only 36% of locations had at least one gold goal-act among its extracted activities when matching identical strings (after lemmatization). Because of this issue, our Exact Match criteria also allowed for combining verbs and nouns from different gold answers. Under this Exact Match criteria, 73% of locations had at least one gold goal-act 1305 Location Gold Activity List AP+AL+E Top 3 PMI Top 3 FREQ Top 3 bookstore buy book (6) browse book (2) browse bestseller read book buy book purchase book see book buy copy purchase book buy book buy book browse find book pharmacy get drug (4) fill prescription (3) get prescription (2) buy medicine find medicine get prescription pick up prescription buy pill fill prescription pick up prescription buy pill fill prescription pick up prescription university get degree (4) gain education (5) watch sport gain education further education gain knowledge study law study psychology pursue study enrol7 enroll take class Meijer buy grocery (8) buy cream obtain grocery buy item go shopping get item check out deal have shopping post today get item save money check out market buy grocery (6) buy fresh, buy goods buy shirt, find produce make money eat out eat lunch have demand increase competition lead player trade intervene make money phone make call (4), ERROR (2) answer call, call friend have conversation stop ring play game browse website view website put number have number put card plug glance have number Table 4: Examples of Top 3 hypothesized prototypical goal activities. among the extracted activities, so this represents an upper bound on performance using this metric. Under the Partial Match criteria, 98% of locations had at least one gold goal-act among the extracted activities, but only 50% credit was awarded for these cases so the maximum score possible would be ∼86%. We also manually inspected 200 gold locations to analyze their types. We discovered some related groups, but substantial diversity overall. The largest group contains ∼20% of the locations, which are many kinds of stores (e.g., Ikea, WalMart, Apple store, shoe store). Even within a group, different locations often have quite different sets of co-occurring activities. 
In fact, we discovered some spelling variants (e.g., “WalMart” and “wal mart”), but they also have substantially different activity vectors (e.g., because one spelling is much more frequent), so the model learns about them independently.8 Other groups include restaurants (∼5%), home-related (e.g., bathroom) (∼5%), education (∼5%), virtual (e.g., Wikipedia) (∼3%), medical (∼3%) and landscape (e.g., hill) (∼3%). It is worth noting that our locations were extracted by two syntactic patterns and it remains to be seen if this has brought in any bias — detecting location nouns (especially nominals) 7A lemmatization error for the verb “enrolled”. 8Of course normalizing location names beforehand may be beneficial in future work. is a challenging problem in its own right. 5 Conclusions and Future Work We introduced the problem of learning prototypical goal activities for locations. We obtained human annotations and showed that people do associate prototypical goal-acts with locations. We then created an activity profile framework and applied a semi-supervised label propagation algorithm to iteratively update the activity strengths for locations. We demonstrated that our learning algorithm identifies goal-acts for locations more accurately than several baseline methods. However, this problem is far from solved. Challenges also remain in how to evaluate the accuracy of goal knowledge extracted from text corpora. Nevertheless, our work represents a first step toward learning goal knowledge about locations, and we believe that learning knowledge about plans and goals is an important direction for natural language understanding research. In future work, we hope to see if we can take advantage of more contextual information as well as other external knowledge to improve the recognition of goalacts. Acknowledgments We are grateful to Haibo Ding for valuable comments on preliminary versions of this work. 1306 References Gordon H Bower. 1982. Plans and Goals in Understanding Episodes. Advances in Psychology, 8:2– 15. K. Burton, N. Kasch, and I. Soboroff. 2011. The ICWSM 2011 Spinn3r Dataset. In Proceedings of the Fifth Annual Conference on Weblogs and Social Media (ICWSM-2011). J. G. Carbonell. 1979. Subjective Understanding: Computer Models of Belief Systems. Ph.D. thesis, Yale University. Nathanael Chambers and Dan Jurafsky. 2008. Unsupervised Learning of Narrative Event Chains. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL/HLT-2008). Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised Learning of Narrative Schemas and Their Participants. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. Snigdha Chaturvedi, Dan Goldwasser, and Hal Daum´e III. 2016. Ask, and Shall You Receive? Understanding Desire Fulfillment in Natural Language Text. In Processings of the 30th AAAI Conference on Artificial Intelligence (AAAI-2016). Richard Edward Cullingford. 1978. Script Application: Computer Understanding of Newspaper Stories. Ph.D. thesis, Yale University. Haibo Ding and Ellen Riloff. 2016. Acquiring Knowledge of Affective Events from Blogs using Label Propagation. In Processings of the 30th AAAI Conference on Artificial Intelligence (AAAI-2016). David Elson and Kathleen McKeown. 2010. Building a Bank of Semantically Encoded Narratives. 
In Proceedings of the Seventh Conference on International Language Resources and Evaluation (LREC-2010). Song Feng, Jun Seok Kang, Polina Kuznetsova, and Yejin Choi. 2013. Connotation Lexicon: A Dash of Sentiment Beneath the Surface Meaning. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL-2013). Andrew B Goldberg, Nathanael Fillmore, David Andrzejewski, Zhiting Xu, Bryan Gibson, and Xiaojin Zhu. 2009. May all your wishes come true: A study of wishes and how to recognize them. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics (HLT/NAACL-2009). Amit Goyal, Ellen Riloff, and Hal Daum´e III. 2010. Automatically producing plot unit representations for narrative text. In Proceedings of the 2010 Conference on Empirical Methods on Natural Language Processing (EMNLP-2010). Amit Goyal, Ellen Riloff, and Hal Daum´e III. 2013. A Computational Model for Plot Units. Computational Intelligence, 29(3):466–488. Bram Jans, Steven Bethard, Ivan Vuli´c, and Marie Francine Moens. 2012. Skip n-grams and ranking functions for predicting script events. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL-2012). Wendy G Lehnert. 1981. Plot Units and Narrative Summarization. Cognitive Science, 5(4):293–331. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL-2014) System Demonstrations. George A Miller. 1995. WordNet: A Lexical Database for English. Communications of the ACM, 38(11):39–41. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods on Natural Language Processing (EMNLP-2014). Karl Pichotta and Raymond Mooney. 2014. Statistical script learning with multi-argument events. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL-2014). Karl Pichotta and Raymond J Mooney. 2016. Learning Statistical Scripts with LSTM Recurrent Neural Networks. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI-2016). Elahe Rahimtoroghi, Jiaqi Wu, Ruimin Wang, Pranav Anand, and Marilyn Walker. 2017. Modelling Protagonist Goals and Desires in First-Person Narrative. In Proceedings of the 18th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL-2017). Delip Rao and Deepak Ravichandran. 2009. Semisupervised Polarity Lexicon Induction. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics (EACL-2009). Roger C Schank and Robert Abelson. 1977. Scripts, Plans, Goals and Understanding. Lawrence Erlbaum. 1307 Partha Pratim Talukdar and Fernando Pereira. 2010. Experiments in Graph-based Semi-supervised Learning Methods for Class-instance Acquisition. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL-2010). Robert Wilensky. 1978. Understanding Goal-based Stories. Ph.D. thesis, Yale University. Xiaojin Zhu and Zoubin Ghahramani. 2002. Learning from Labeled and Unlabeled Data with Label Propagation. Technical report, Carnegie Mellon University. 
Xiaojin Zhu, Zoubin Ghahramani, and John D Lafferty. 2003. Semi-supervised Learning Using Gaussian Fields and Harmonic Functions. In Proceedings of the 20th International Conference on Machine Learning (ICML-2003).
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1308–1317 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1308 Guess Me if You Can: Acronym Disambiguation for Enterprises Yang Li1∗, Bo Zhao2, Ariel Fuxman1, Fangbo Tao3 1Google, Mountain View, CA, USA 2Pinterest, San Francisco, CA, USA 3Facebook, Menlo Park, CA, USA {zheda2006liyang, bo.zhao.uiuc, afuxman, fangbo.tao}@gmail.com Abstract Acronyms are abbreviations formed from the initial components of words or phrases. In enterprises, people often use acronyms to make communications more efficient. However, acronyms could be difficult to understand for people who are not familiar with the subject matter (new employees, etc.), thereby affecting productivity. To alleviate such troubles, we study how to automatically resolve the true meanings of acronyms in a given context. Acronym disambiguation for enterprises is challenging for several reasons. First, acronyms may be highly ambiguous since an acronym used in the enterprise could have multiple internal and external meanings. Second, there are usually no comprehensive knowledge bases such as Wikipedia available in enterprises. Finally, the system should be generic to work for any enterprise. In this work we propose an end-to-end framework to tackle all these challenges. The framework takes the enterprise corpus as input and produces a high-quality acronym disambiguation system as output. Our disambiguation models are trained via distant supervised learning, without requiring any manually labeled training examples. Therefore, our proposed framework can be deployed to any enterprise to support highquality acronym disambiguation. Experimental results on real world data justified the effectiveness of our system. 1 Introduction Acronyms are abbreviations formed from the initial components of words or phrases (e.g., “AI” from “Artificial Intelligence”). As acronyms can shorten long names and make communications ∗Work done while authors were at Microsoft Research. more efficient, they are widely used at almost everywhere in enterprises, including notifications, emails, reports and social network posts. Figure 1 shows a sample enterprise social network post. As we can see, acronyms are frequently used there. Someone Figure 1: Acronyms in Enterprises Despite the fact that acronyms can make communications more efficient, sometimes they could be difficult to understand, especially for people who are not familiar with the specific areas, such as new employees and patent lawyers. We randomly sampled 1000 documents from a Microsoft question answering forum and found out that only 7% of the acronyms co-occur with the corresponding meanings in the same document, which means 93% of the time when the user does not understand an acronym, she will need to find clues outside of the document. Therefore, it is particularly useful to develop a system that can automatically resolve the true meanings of acronyms in enterprise documents. Such system could be run online as a querying tool to handle any adhoc document, or run offline to annotate acronyms with their true meanings in a large corpus. In the offline mode, the true meanings can be further indexed by an enterprise search engine, so that when users search for the true meaning, documents containing the acronym can also be found. 
The enterprise acronym disambiguation task is challenging due to the high ambiguity of acronyms, e.g., “SP” could stand for “Service Pack”, “SharePoint” or “Surface Pro” in Microsoft. And there is one additional challenge compared with previous disambiguation tasks: in an enterprise document, an acronym could refer 1309 to either an internal meaning (concepts created by the enterprise that may or may not be found outside) or an external meaning (all concepts that are not internal). For example, regarding the acronym “AI”, “Asset Intelligence” is an internal meaning mainly used only in Microsoft, while “Artificial Intelligence” is an external meaning widely used in public. A good acronym disambiguation system should be able to handle both internal and external meanings. As we will explain in details, it is important to make such distinction and different strategies are needed for such two cases. For internal meanings, there are some previous work on word sense disambiguation (Navigli, 2009) and acronym disambiguation (Feng et al., 2009; Pakhomov et al., 2005; Pustejovsky et al., 2001; Stevenson et al., 2009; Yu et al., 2006) on a closed-domain corpus. The main challenge here is that there are rarely any domain-specific knowledge bases available in enterprises, therefore all the signals for disambiguation (including potential meanings, and their popularity scores, context representations, etc.) need to be mined from plain text. Training data should also be automatically generated to make the system easily scale out to all enterprises. Compared with previous work, we developed a more comprehensive and advanced set of features in the disambiguation model, and also used a much less restrictive way to discover meaning candidates and training data, so that both precision and recall can be improved. Moreover, one main limitation of all previous work is that they do not distinguish internal and external meanings. They merely rely on the enterprise corpus to discover information about external meanings, which we observe is quite ineffective. The reason is that for popular external meaning like “Artificial Intelligence”, people often directly use its acronym in enterprises without explanation, therefore there is limited information about the connection between the acronym and the external meaning in the enterprise corpus. On the other hand, there are much more such information available in the public domain, which should be leveraged by the system. If we consider utilizing a public knowledge base such as Wikipedia to better handle external meanings of acronyms, the problem becomes very related to the well studied Entity Linking (Ji and Grishman, 2011; Cucerzan, 2007; Dredze et al., 2010; Hoffart et al., 2011; Li et al., 2013, 2016; Ratinov et al., 2011; Shen et al., 2012) problem, which is to map entity mentions in texts to their corresponding entities in a reference knowledge base (e.g. Wikipedia). But our disambiguation task is different from the entity linking task, because the system also needs to handle internal meanings which are not covered by any knowledge bases, and ultimately needs to decide whether an acronym refers to an internal meaning or an external meaning. It is nontrivial to combine the information mined from the enterprise corpus and the public knowledge base so that the system can get the best of both worlds. 
For instance, we have tried to run an internal disambiguator (leveraging information mined from enterprise corpus) and then resort to a public entity linking system if the internal one’s confidence is low, but the performance is very poor. Even for external meanings, it is important to leverage signals from the enterprise corpus since the context surrounding them could be quite different from that in the external world, and context is one of the most important factor for disambiguation. For example, in public world, when people mention “Operating System” they mainly talk about how to install or use it; while within Microsoft, when people mention “Operating System” most of the time they focus on how to design or implement it. In this work, we design a novel, end-to-end framework to address all the above challenges. Our framework takes the enterprise corpus and certain public knowledge base as input and produces a high-quality acronym disambiguation system as output. The models are all trained via distant supervised learning, therefore our system requires no manually labeled training examples and can be easily deployed to any enterprises. 2 Problem Statement The Enterprise Acronym Disambiguation problem is comprised of two sub-problems. The first one is Acronym Meaning Mining (Adar, 2004; Ao and Takagi, 2005; Park and Byrd, 2001; Schwartz and Hearst, 2002; Jain et al., 2007; Larkey et al., 2000; Nadeau and Turney, 2005; Taneva et al., 2013), which aims at mining acronym/meaning pairs from the enterprise corpus. Each meaning m should contain the full name expansion e, popularity score p (indicating how often m is used as the genuine meaning of acronym a) and context words W (i.e. words frequently used in context of the meaning). The popularity score and 1310 context words can provide critical information for making disambiguation decisions. The second one is Meaning Candidate Ranking, whose goal is to rank the candidate meanings associated with the target acronym a and select the genuine meaning m based on the given context. In this paper we assume the acronyms for disambiguation are provided as input to the system, either by the user or by an existing acronym detection module. We do not try to optimize the performance of acronym detection (e.g. identifying acronyms beyond the simple capitalized rule, or distinguishing cases where a capitalized term is not an acronym but a regular English word, such as “OK”). The task of acronym detection is also interesting and important. But due to the space limit, it is beyond the scope of this paper. 3 Framework We propose a novel end-to-end framework to solve the Enterprise Acronym Disambiguation problem. Our framework takes the enterprise corpus as input and produces a high-quality acronym disambiguation system as output. Figure 2 shows the details of our proposed framework. In the mining module, we will sequentially perform Candidates Generation, Popularity Calculation, Candidates Deduplication and Context Harvesting on the input enterprise corpus. The details of these steps will be discussed in Section 4. After mining steps, we will get an acronym/meaning repository storing all the mined acronym/meaning pairs. Feed this repository together with the training data (automatically generated via distant supervision from the enterprise corpus) to the training module, we will get a candidate ranking model, a confidence estimation model and a final selection model. 
These models form the final acronym disambiguator and will be used in the testing module for actual acronym disambiguation. In the testing module, given the target acronym along with some context as input, the system will output the predicted meaning. Note that the mining and training module run offline once for the entire corpus or periodically when the corpus update, while the testing can be run online repeatedly for processing new documents. 4 Acronym Meaning Mining 4.1 Candidates Generation As there is no reference dictionary or knowledge base available in enterprise telling us the potential Figure 2: Framework meanings of acronyms, we have to extract them from plain text. We propose a strategy called Hybrid Generation to balance extraction accuracy and coverage. Namely, we treat a phrase as a meaning candidate for an acronym if: (1) the initial letters of the phrase match the acronym and the phrase and the acronym co-occur in at least one document in the enterprise corpus; or (2) it is a valid candidate for the acronym in public knowledge bases (e.g. Wikipedia). The insight of this strategy is that the valid candidates missed by condition (1) are mainly public meanings which can be found in public knowledge bases. With this strategy we can make our system understand both the internal world and the external world. 4.2 Popularity Calculation As mentioned in Section 2, for each candidate meaning, we need to calculate its popularity score, which reveals how often the candidate meaning is used as the genuine meaning of the acronym. In previous research on Entity Linking, popularity is calculated as the fraction of times a candidate being the target page for an anchor text in a reference knowledge base (e.g. Wikipedia). However, in enterprises, we do not have a knowledge base with anchor links. Thus we cannot calculate popularity in the same way. Here we propose to calculate two types of popularity to mimic the effect. 1. Marginal Popularity. MP(mi) = Count(mi) Pn j=1 Count(mj), (1) where m1, m2, . . ., mn are the meaning candidates of acronym a and Count(mi) is the number of occurrences for mi in the corpus. 2. Conditional Popularity. CP(mi) = Count(mi, a) Pn j=1 Count(mj, a), (2) where m1, m2, . . ., mn are the meaning candidates of acronym a and Count(mi, a) is the number of document-level cooccurrences for mi and a in the corpus. 1311 Conditional Popularity can more reasonably reveal how often the acronym is used to represent each meaning candidate. However, due to the data sparsity issue in enterprises, many valid candidates may get zero value for conditional popularity since they may never co-occur with the acronyms in the enterprise corpus. The Marginal Popularity does not have this problem since it is calculated from the raw counts of the candidates. Yet on the other hand, high marginal popularity score does not necessarily indicate high correlation between the candidate and the acronym. It is unclear how to combine the two scores into one popularity score, so we use both of them as features in the disambiguation model. 4.3 Candidates Deduplication In enterprises, people often create many variants (including abbreviations, plurals or even misspellings) for the same meaning, therefore many mined meaning candidates are actually equivalent. For example, for the meaning “Certificate Authority” of the acronym “CA”, the variants include “Cert Auth”, “Certificate Authorities” and many others. It is important to deduplicate these variants before sending them to the disambiguation module. 
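The two popularity scores of Eqs. (1)-(2) above, MP(m_i) = Count(m_i) / Σ_j Count(m_j) and CP(m_i) = Count(m_i, a) / Σ_j Count(m_j, a), reduce to simple normalized counts. A minimal sketch follows (function and variable names are ours, and the matching of meanings against documents is deliberately simplified):

```python
from collections import Counter

def popularity_scores(acronym, candidates, corpus_docs):
    """Return (marginal, conditional) popularity dicts for the candidate meanings.

    corpus_docs is an iterable of documents, each given as a list of tokens.
    """
    count = Counter()      # Count(m_i): occurrences of each meaning in the corpus
    co_count = Counter()   # Count(m_i, a): documents where meaning and acronym co-occur
    for tokens in corpus_docs:
        text = " ".join(t.lower() for t in tokens)
        has_acronym = acronym in tokens            # acronyms matched as case-sensitive tokens
        for m in candidates:
            c = text.count(m.lower())
            count[m] += c
            if c and has_acronym:
                co_count[m] += 1
    total = sum(count.values()) or 1
    co_total = sum(co_count.values()) or 1
    marginal = {m: count[m] / total for m in candidates}           # Eq. (1)
    conditional = {m: co_count[m] / co_total for m in candidates}  # Eq. (2)
    return marginal, conditional
```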
The deduplication helps aggregate disambiguation evidences and reduce noises. We design several heuristic rules1 to perform the deduplication. Experiments show that the rules can accurately group the variants together. After grouping, we sort the variants within the same group based on their marginal popularity. The candidate with the largest marginal popularity is selected as the canonical candidate for the group. Other variants in the group will be deleted from the candidate list and their popularity scores will be aggregated to the canonical candidate. We maintain a table to record the variants for each canonical candidate. 4.4 Context Harvesting In this step, we aim to harvest context words for each meaning candidate. These context words could be used to calculate context similarity with the query context. For each meaning candidate m, we put its canonical form and all its variants (from the variants table in Section 4.3) into set S. Then we scan the enterprise corpus, each time we find a match of any e ∈S, we harvest the words in a 1Due to space limitations, the detailed rules are omitted. Example rules are “word overlap percentage after stemming > 0.8”, “corresponding component words share same prefix”. ... Basically, using direct AD Import fails if Sharepoint Code Analysis is configured to run over SSL ... Acronym: CA Ground Truth: Code Analysis Context: ... Basically, using direct AD Import fails if Sharepoint CA is configured to run over SSL ... Figure 3: Distant Supervision Example width-W word window surrounding e as the context words of m. In our experiments we set window size as 30 after trying to vary the window size from 10 to 50 and finding 30 gives the best result. As mentioned before, some popular public meanings might be mentioned very rarely by their full names in the enterprise corpus since people directly use their acronyms most of the time. Therefore, the above context harvesting process can only get very few context words for those public meanings. To alleviate this, for each public meaning we add its Wikipedia page’s content as complementary context. By doing so, we ensure almost all valid candidates get a reasonable amount of context words. 5 Meaning Candidate Ranking 5.1 Candidate Ranking We first train a candidate ranking model to rank candidates with respect to the likelihood of being the genuine meaning for the target acronym. 5.1.1 Training Data Generation In order to train a robust ranking model, we need to get adequate amount of labeled training data. Manually labeling is obviously too expensive and it requires a lot of domain knowledge, which severely limits our framework’s generalization capability. To tackle this problem, we propose to automatically generate training data via distant supervision. The intuition is that since acronyms and the corresponding meanings are semantically equivalent, people use them interchangeably in enterprises. Therefore we can fetch documents containing the meaning, replace the meaning with the corresponding acronym and treat the meaning as ground truth. Figure 3 shows an example of this automatic training data generation process. 5.1.2 Training Algorithm Any learning-to-rank algorithms can be used here. In our system we utilize the LambdaMART algorithm (Burges, 2010) to train the model. 1312 5.1.3 Features Now we explain the features we developed for the candidate ranking model. 
First, we have the Marginal Popularity score and Conditional Popularity score as two context-independent features, which could compensate for each other. However, as discussed in the previous section, some popular public meanings (e.g., “Artificial Intelligence”) can be rarely mentioned in enterprise corpus by their full names, therefore both their marginal popularity score and conditional popularity score can be very low. To address this, we add a third feature called Wiki Popularity, which is calculated from Wikipedia anchor texts to capture how often an acronym refers to a public meaning in Wikipedia. The fourth feature we adopt is Context Similarity. We convert the harvested context for the meaning and the query context of the target acronym into TFIDF vectors and then compute their cosine similarity2. We also include two features (i.e. LeftNeighborScore and RightNeighborScore) to capture the effect of the immediate neighboring words, which are more important than further context words since immediate words could form phrases with the acronym. For example, if we see an acronym “SP” followed by the word “2”, then likely it stands for “Service Pack”. However, if we see “SP” followed by “2003”, then probably its genuine meaning is “SharePoint”. The last feature we use is FullNamePercentage. This feature is defined as the percentage of the meaning candidate’s component words appearing in the context of the target acronym. Table 1 summarizes the features used to train the candidate ranking model. 5.2 Confidence Estimation After getting the ranking results, we propose to apply a confidence estimation step to decide whether to trust the top ranked answer. There are two motivations behind. First, our candidate generation approach is not perfect, therefore we could encounter cases in which the genuine meaning is not in our candidates. For such cases, the top ranked answer is obviously incorrect. Second, our training data is biased towards the internal meanings since external meanings may rarely appear with full names. 2One popular alternative to measure context similarity is using word embeddings (Mikolov et al., 2013; Li et al., 2015). In our system we experimented replacing TFIDF cosine similarity with word embedding similarity, or adding word embedding similarity as an additional feature, but both hurt the disambiguation accuracy. So we only included the TFIDF cosine similarity as the context similarity feature in our system. As a result, the learned ranking model may lack the capability to properly rank the external meanings. In such cases, we would better have the system return nothing rather than directly provide a wrong answer to mislead the user. In this step, we train a confidence estimation model, which will estimate the top result’s confidence. 5.2.1 Training Data Generation Similar to the ranker training, here the training data is also automatically generated. We run the learned ranker on some distant labeled data (generated from a different corpus), and then check if the top ranked answer is correct or not. If it is correct, we generate a positive training example; otherwise we make a negative training example. 5.2.2 Training Algorithm Any classification algorithms can be used here. In our system we utilize the MART boosted tree algorithm (Friedman, 2000) to train the model. 5.2.3 Features We design 7 features (summarized in Table 2) to train the confidence estimation model. 
There are two intuitions behind: (1) If the top-ranked answer’s ranking score is very small, or the topranked answer’s score is close to the secondranked answer’s score, then the ranking is not very confident; (2) If the acronym has a dominating candidate in the public domain (e.g., “Personal Computer” is the dominating candidate for “PC”), and the candidates’ Wiki popularity distribution is significantly different from their marginal/conditional popularity distributions, then the ranker’s output is not very confident. The first intuition covers the first 3 features, while the second intuition covers the last 4 features. 5.3 Final Selection We have discussed that one particular motivation for confidence estimation is that the candidate ranking stage has some bias so it does not always rank public meanings at top when they are correct. Therefore, assuming the confidence estimation model can remove incorrect top-ranked result, we still need one additional step to decide if any public meaning is correct, which we call a final selection model. In this step, we determine whether to return the most popular public meaning (based on Wiki Popularity) as the final answer, and this step is only triggered when the confidence estimator judges that the ranking result is unconfident. 1313 Feature Description MarginalPopularity The meaning candidate’s marginal popularity score ConditionalPopularity The meaning candidate’s conditional popularity score WikiPopularity The meaning candidate’s Wiki popularity score ContextSimilarity TFIDF cosine similarity between meaning context and acronym context LeftNeighborScore Probability of acronym and meaning sharing the same immediate left word RightNeighborScore Probability of acronym and meaning sharing the same immediate right word FullNamePercentage Percentage of meaning candidate’s component words appearing in acronym context Table 1: Candidate Ranking Features Feature Description Top1Score Top 1 ranked meaning candidate’s ranking score Top1&2ScoreDiff Difference between 1st and 2nd ranked meaning candidates’ ranking score Top1&2CtxSimDiff Difference between 1st and 2nd ranked meaning candidates’ context similarity score Top1WikiPopularity Top 1 ranked meaning candidate’s Wiki popularity score MaxWikiPopularity Max Wiki popularity score among all the meaning candidates MaxWP&MPGap Max gap between Wiki and marginal popularity among all the meaning candidates MaxWP&CPGap Max gap between Wiki and conditional popularity among all the meaning candidates Table 2: Confidence Estimation Features The goal of the final selection model is similar to that of the confidence estimation model. In confidence estimation, we judge whether the topranked answer is correct; while in final selection, we check whether the most popular external meaning is correct. Thanks to this similarity, we can reuse the data, features and training algorithm in confidence estimation model. We take the same training data in Section 5.2.1 and update the labels correspondingly: if the genuine answer is the most popular external meaning, we generate a positive example; otherwise we make a negative one. 6 Experiments 6.1 Data 6.1.1 Mining and Training Corpus We use both the Microsoft Answer Corpus (MAC) and the Microsoft Yammer Corpus (MYC) as the mining corpus. These corpus are kindly shared to us by Microsoft for research purpose. MAC contains 0.3 million web pages from a Microsoft internal question answering forum. MYC is consisted of 6.8 million posts from Microsoft’s Yammer social network. 
In total, our mining module harvested 5287 acronyms and 17258 meaning candidates from this joint corpus. For model training, the confidence estimation model and the final selection model need to be trained on a different corpus than the candidate ranking model. We therefore train the candidate ranking model on MAC, with 12500 training examples generated automatically, and train the confidence estimation and final selection models on MYC, with 40000 training instances generated.

6.1.2 Evaluation Datasets
We prepared four datasets3 for evaluation purposes. The first one, Manual, is obtained from recent pages of the Microsoft Answer forum; note that these pages are disjoint from those used as the mining/training corpus. We randomly sampled 300 pages and filtered out pages that do not contain ambiguous acronyms. After filtering, 240 test cases were left, which we labeled manually. The second one, Distant, is generated via distant labeling on Microsoft Office365 documents. We sampled 2000 documents that contain at least one occurrence of a meaning candidate, replaced the meanings with the corresponding acronyms, and treated the meanings as ground truth. We manually checked this dataset to remove some bad cases (e.g., "AS" for "App Store"), resulting in a test set of 1949 test cases. Comparing the two, the Manual dataset, though smaller, evaluates system performance more accurately, since its target acronyms are sampled from the real distribution, while in the Distant dataset acronyms are artificially generated from randomly sampled meanings. We also want to compare our method with state-of-the-art Entity Linking (EL) systems based on public knowledge bases such as Wikipedia. However, a direct comparison would be unfair, as most enterprise-specific meanings are unknown to them. Therefore, we only consider cases where the true meaning is a public meaning covered by both our system and the compared system. By filtering the distant dataset from Office365, we get the third dataset, JoinW (1659 test cases), for comparing with the Wikifier (Ratinov et al., 2011), and the fourth dataset, JoinA (237 test cases), for comparing with AIDA (Hoffart et al., 2011).

3Due to data confidentiality issues, we were unable to directly release these datasets.

6.2 Compared Methods
6.2.1 Ablations of Our System
We compare the following ablations of our system to illustrate the effectiveness of the features and components.
• Internal Popularity (IP): Only the internal popularity features (i.e., marginal popularity and conditional popularity).
• Popularity (P): The internal popularity features plus the Wiki popularity feature.
• Popularity+Context (P+C): The popularity features plus the context similarity feature.
• Popularity+Context+Neighbor (P+C+N): The popularity features, the context similarity feature and the immediate neighbor features.
• Popularity+Context+Neighbor+Fullname (a.k.a. Candidate Ranker, or CR): Using all the features in the candidate ranking module.
• Candidate Ranker + Confidence Estimator (CR+CE): Using the candidate ranking model plus the confidence estimation model.
• Candidate Ranker + Confidence Estimator + Final Selector (a.k.a. Acronym Disambiguator, or AD): Using the candidate ranking model, the confidence estimation model and the final selection model. This is the full version of our system.

6.2.2 State-of-the-art EL Systems
We also compare our method with two state-of-the-art Entity Linking (EL) systems.
• Wikifier: a popular EL system that uses machine learning to combine various features.
• AIDA: a robust EL system that uses a mention-entity graph to find the best mention-entity mapping.

6.3 Quality of Mined Acronyms/Meanings
We first conduct experiments to evaluate the quality of the acronym/meaning pairs harvested by our offline mining module. Out of the 17258 mined pairs, we randomly sampled 2000 and asked 5 domain experts to manually check their validity. An acronym/meaning pair is considered valid if the majority of the experts think the acronym is indeed used to abbreviate the meaning. For example, (AS, Analysis Service) is a valid pair, but (AS, App Store) is considered invalid because people do not actually use AS to represent App Store. Among the sampled 2000 pairs, 94.5% were labeled as valid, indicating that our offline mining module can accurately extract acronym/meaning pairs from an enterprise corpus. It is hard to precisely evaluate the coverage/recall of our mining method, since it is very difficult to obtain the complete meaning list for a given acronym. To get a rough idea, we randomly picked 100 acronyms and asked the 5 domain experts to enumerate the valid meanings for these acronyms. In total we obtained 230 valid meanings, all of which are covered by the mined pairs.

6.4 Disambiguation Performance
We first conduct experiments to evaluate the disambiguation performance of our ranking model and to compare the helpfulness of the features used in the model. Figure 4 shows the precision (i.e., the percentage of correctly disambiguated cases among all predicted cases), recall (i.e., the percentage of correctly disambiguated cases among all test cases) and F1 (i.e., the harmonic mean of precision and recall) of the compared methods on the Manual dataset and the Distant dataset. In terms of the helpfulness of the features, the context similarity feature and the immediate neighbor features contribute most to the performance gain. Other features are less helpful, yet still bring improvements to the overall performance. Next we conduct experiments to illustrate the effectiveness of the confidence estimation module and the final selection module in our system. Figure 5 shows the precision, recall and F1 of the compared system configurations on the Manual and Distant datasets. As can be seen, the confidence estimation module improves precision at the cost of hurting recall. Fortunately, the final selection module recovers some of the recall loss without sacrificing too much precision. In terms of the F1 measure, the final system achieves the best performance.

Figure 4: Ranking Performance (precision, recall, and F1 of IP, P, P+C, P+C+N, and CR on the Manual and Distant datasets).
Figure 5: Effectiveness of Confidence Estimator and Final Selector (precision, recall, and F1 of CR, CR+CE, and AD on the Manual and Distant datasets).

Note that the ablation P+C naturally corresponds to the existing acronym disambiguation approaches (Feng et al., 2009; Pakhomov et al., 2005; Pustejovsky et al., 2001; Stevenson et al., 2009; Yu et al., 2006), which mainly rely on context words and domain-specific resources.
These approaches do not specifically distinguish internal and external meanings. They merely rely on the internal corpus to discover information about external meanings, which is quite ineffective in the scenario of enterprise acronym disambiguation (as discussed in Section 1). In comparison, our system (AD) is able to leverage public resources together with the internal corpus to better handle the problem and therefore significantly outperforms them.

Figure 6: Comparison with EL Systems. (a) Wikifier vs. AD; (b) AIDA vs. AD (precision, recall, and F1).

6.5 Comparison with EL Systems
We also compare our system (AD) with two state-of-the-art Entity Linking (EL) systems: Wikifier and AIDA. As explained in Section 6.1.2, we made two datasets (JoinW and JoinA) for fair comparisons. Figure 6(a) and Figure 6(b) present the comparison of our AD system against Wikifier and AIDA, respectively. As we can see from the figures, AD significantly outperforms both Wikifier and AIDA on all three measures. The reason is that even for public meanings (e.g., Operating System) indexed by Wikifier and AIDA, their usage can be quite different in enterprises (e.g., inside Microsoft, people talk more about designing the Operating System than about how to install it). Wikifier and AIDA utilize information from public knowledge bases (e.g., Wikipedia) to generate features and therefore can hardly capture such enterprise-specific signals. In contrast, our AD system mines disambiguation features directly from the enterprise corpus and uses them together with the public signals. As a result, it can more accurately represent the characteristics of the enterprise, which leads to much better disambiguation performance.

7 Related Work
Acronym meaning discovery has received a lot of attention in vertical domains (mainly biomedical). Most of the proposed approaches (Adar, 2004; Ao and Takagi, 2005; Park and Byrd, 2001; Schwartz and Hearst, 2002; Wren et al., 2002) utilized generic rules or text patterns (e.g., brackets, colons) to discover acronym meanings. These methods are usually based on the assumption that acronyms are co-mentioned with the corresponding meanings in the same document. However, in enterprises this assumption rarely holds. Enterprises are closed ecosystems, so it is very common for people to define an acronym in one place and use it elsewhere. As a result, such methods cannot be used for acronym meaning discovery in enterprises. Recently, there have been a few works (Jain et al., 2007; Larkey et al., 2000; Nadeau and Turney, 2005; Taneva et al., 2013) on automatically mining acronym meanings by leveraging Web data (e.g., query sessions, click logs). However, it is hard to apply them directly to enterprises, since most data in enterprises are raw text and query sessions/click logs are rarely available. Acronym disambiguation can be seen as a special case of the Entity Linking (EL) problem (Ji and Grishman, 2011; Dredze et al., 2010). Approaches that link entity mentions to Wikipedia date back to the work of Bunescu and Pașca (2006). They computed the cosine similarity between the text around the mention and the entity candidate's Wikipedia page. The referent entity with the maximum similarity score is selected as the disambiguation result.
Cucerzan’s work (Cucerzan, 2007) is the first one to realize the effectiveness of using topical coherence to globally perform EL. In that work, the topical coherence between the referent entity candidate and other entities within the same context is calculated based on their overlaps in categories and incoming links in Wikipedia. Recently, several methods (Hoffart et al., 2011; Li et al., 2013, 2016; Ratinov et al., 2011; Shen et al., 2012; Cheng and Roth, 2013) also tried to enrich “context similarity” and “topical coherence” using hybrid strategies. Shen et. al (Shen et al., 2015) provided a comprehensive survey for the techniques used in EL. However, these EL techniques cannot be used for acronym disambiguation in enterprises, since most enterprise meanings are not covered by public knowledge bases, and there are rarely any domain-specific knowledge bases available in enterprises. Automatic knowledge base construction (Suchanek et al., 2013) is promising, but the quality is far from applicable. Moreover, the structural information (e.g. entity taxonomy, crossdocument hyperlinks) within Wikipedia, is rarely available in enterprises. Most of the previous work (Feng et al., 2009; Pakhomov et al., 2005; Pustejovsky et al., 2001; Stevenson et al., 2009; Yu et al., 2006) on acronym disambiguation heavily rely on context words and domain specific resources. In comparison, our method explored a more comprehensive set of domain-independent features. Moreover, our method used a much less restrictive way to discover meaning candidates and training data, which is far more general than the methods relying on strict definition patterns (Schwartz and Hearst, 2002). Another particular limitation of all these previous work is that they do not distinguish internal and external meanings. They merely rely on the internal corpus to discover information about external meanings, which is quite ineffective. 8 Conclusions In this paper, we studied the Acronym Disambiguation for Enterprises problem. We proposed a novel, end-to-end framework to solve this problem. Our framework takes the enterprise corpus as input and produces a high-quality acronym disambiguation system as output. The disambiguation models are trained via distant supervised learning, without requiring any manually labeled training examples. Different from all the previous acronym disambiguation approaches, our system is capable of accurately resolving acronyms to both enterprise-specific meanings and public meanings. Experimental results on Microsoft enterprise data demonstrated that our system can effectively construct acronym/meaning repositories from scratch and accurately disambiguate acronyms to their meanings with over 90% precision. Furthermore, our proposed framework can be easily deployed to any enterprises without requiring any domain knowledge. References Eytan Adar. 2004. Sarad: A simple and robust abbreviation dictionary. Bioinformatics, 20(4):527–533. Hiroko Ao and Toshihisa Takagi. 2005. Alice: an algorithm to extract abbreviations from medline. Journal of the American Medical Informatics Association, 12(5):576–586. Razvan Bunescu and Marius Pas¸ca. 2006. Using encyclopedic knowledge for named entity disambiguation. In Proceedings of EACL, pages 9–16. Christopher JC Burges. 2010. From ranknet to lambdarank to lambdamart: An overview. Learning, 11:23–581. 1317 Xiao Cheng and Dan Roth. 2013. Relational inference for wikification. In Proceedings of EMNLP, pages 1787–1796. Silviu Cucerzan. 2007. 
Large-scale named entity disambiguation based on wikipedia data. In Proceedings of EMNLP-CoNLL, pages 708–716. Mark Dredze, Paul McNamee, Delip Rao, Adam Gerber, and Tim Finin. 2010. Entity disambiguation for knowledge base population. In Proceedings of COLING, pages 277–285. Shicong Feng, Yuhong Xiong, Conglei Yao, Liwei Zheng, and Wei Liu. 2009. Acronym extraction and disambiguation in large-scale organizational web pages. In Proceedings of CIKM, pages 1693–1696. Jerome H Friedman. 2000. Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29:1189–1232. Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen F¨urstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In Proceedings of EMNLP, pages 782–792. Alpa Jain, Silviu Cucerzan, and Saliha Azzam. 2007. Acronym-expansion recognition and ranking on the web. In Information Reuse and Integration, pages 209–214. Heng Ji and Ralph Grishman. 2011. Knowledge base population: Successful approaches and challenges. In Proceedings of ACL, pages 1148–1158. Leah S Larkey, Paul Ogilvie, M Andrew Price, and Brenden Tamilio. 2000. Acrophile: an automated acronym extractor and server. In Proceedings of ACM conference on Digital libraries, pages 205– 214. Chao Li, Lei Ji, and Jun Yan. 2015. Acronym disambiguation using word embedding. In Proceedings of AAAI, pages 4178–4179. Yang Li, Shulong Tan, Huan Sun, Jiawei Han, Dan Roth, and Xifeng Yan. 2016. Entity disambiguation with linkless knowledge bases. In Proceedings of WWW, pages 1261–1270. Yang Li, Chi Wang, Fangqiu Han, Jiawei Han, Dan Roth, and Xifeng Yan. 2013. Mining evidences for named entity disambiguation. In Proceedings of SIGKDD, pages 1070–1078. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in NIPS, pages 3111–3119. David Nadeau and Peter D Turney. 2005. A supervised learning approach to acronym identification. In Proceedings of CSCSI, pages 319–329. Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM Comput. Surv., 41(2):10:1–10:69. Serguei Pakhomov, Ted Pedersen, and Christopher G Chute. 2005. Abbreviation and acronym disambiguation in clinical discourse. In AMIA Annual Symposium Proceedings, pages 589–593. Youngja Park and Roy J Byrd. 2001. Hybrid text mining for finding abbreviations and their definitions. In Proceedings of EMNLP, pages 126–133. James Pustejovsky, Jose Castano, Brent Cochran, Maciej Kotecki, Michael Morrell, and Anna Rumshisky. 2001. Extraction and disambiguation of acronym-meaning pairs in medline. Medinfo, 10(2001):371–375. Lev Ratinov, Dan Roth, Doug Downey, and Mike Anderson. 2011. Local and global algorithms for disambiguation to wikipedia. In Proceedings of ACL, pages 1375–1384. Ariel S Schwartz and Marti A Hearst. 2002. A simple algorithm for identifying abbreviation definitions in biomedical text. In Biocomputing, pages 451–462. Wei Shen, Jianyong Wang, and Jiawei Han. 2015. Entity linking with a knowledge base: Issues, techniques, and solutions. Knowledge and Data Engineering, IEEE Transactions on, 27(2):443–460. Wei Shen, Jianyong Wang, Ping Luo, and Min Wang. 2012. Linden: linking named entities with knowledge base via semantic knowledge. In Proceedings of WWW, pages 449–458. Mark Stevenson, Yikun Guo, Abdulaziz Al Amri, and Robert Gaizauskas. 2009. Disambiguation of biomedical abbreviations. 
In Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing, pages 71–79. Fabian Suchanek, James Fan, Raphael Hoffmann, Sebastian Riedel, and Partha Pratim Talukdar. 2013. Advances in automated knowledge base construction. SIGMOD Records. Bilyana Taneva, Tao Cheng, Kaushik Chakrabarti, and Yeye He. 2013. Mining acronym expansions and their meanings using query click log. In Proceedings of WWW, pages 1261–1272. Jonathan D Wren, Harold R Garner, et al. 2002. Heuristics for identification of acronym-definition patterns within text: towards an automated construction of comprehensive acronym-definition dictionaries. Methods of information in medicine, 41(5):426– 434. Hong Yu, Won Kim, Vasileios Hatzivassiloglou, and John Wilbur. 2006. A large scale, corpus-based approach for automatically disambiguating biomedical abbreviations. ACM Transactions on Information Systems, 24(3):380–404.
2018
121
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1318–1328 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1318 A Multi-Axis Annotation Scheme for Event Temporal Relations Qiang Ning,1 Hao Wu,2 Dan Roth1,2 Department of Computer Science 1University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA 2University of Pennsylvania, Philadelphia, PA 19104, USA [email protected], {haowu4,danroth}@seas.upenn.edu Abstract Existing temporal relation (TempRel) annotation schemes often have low interannotator agreements (IAA) even between experts, suggesting that the current annotation task needs a better definition. This paper proposes a new multi-axis modeling to better capture the temporal structure of events. In addition, we identify that event end-points are a major source of confusion in annotation, so we also propose to annotate TempRels based on start-points only. A pilot expert annotation effort using the proposed scheme shows significant improvement in IAA from the conventional 60’s to 80’s (Cohen’s Kappa). This better-defined annotation scheme further enables the use of crowdsourcing to alleviate the labor intensity for each annotator. We hope that this work can foster more interesting studies towards event understanding.1 1 Introduction Temporal relation (TempRel) extraction is an important task for event understanding, and it has drawn much attention in the natural language processing (NLP) community recently (UzZaman et al., 2013; Chambers et al., 2014; Llorens et al., 2015; Minard et al., 2015; Bethard et al., 2015, 2016, 2017; Leeuwenberg and Moens, 2017; Ning et al., 2017, 2018a,b). Initiated by TimeBank (TB) (Pustejovsky et al., 2003b), a number of TempRel datasets have been collected, including but not limited to the verbclause augmentation to TB (Bethard et al., 2007), 1The dataset is publicly available at https:// cogcomp.org/page/publication_view/834. TempEval1-3 (Verhagen et al., 2007, 2010; UzZaman et al., 2013), TimeBank-Dense (TB-Dense) (Cassidy et al., 2014), EventTimeCorpus (Reimers et al., 2016), and datasets with both temporal and other types of relations (e.g., coreference and causality) such as CaTeRs (Mostafazadeh et al., 2016) and RED (O’Gorman et al., 2016). These datasets were annotated by experts, but most still suffered from low inter-annotator agreements (IAA). For instance, the IAAs of TB-Dense, RED and THYME-TimeML (Styler IV et al., 2014) were only below or near 60% (given that events are already annotated). Since a low IAA usually indicates that the task is difficult even for humans (see Examples 1-3), the community has been looking into ways to simplify the task, by reducing the label set, and by breaking up the overall, complex task into subtasks (e.g., getting agreement on which event pairs should have a relation, and then what that relation should be) (Mostafazadeh et al., 2016; O’Gorman et al., 2016). In contrast to other existing datasets, Bethard et al. (2007) achieved an agreement as high as 90%, but the scope of its annotation was narrowed down to a very special verb-clause structure. (e1, e2), (e3, e4), and (e5, e6): TempRels that are difficult even for humans. Note that only relevant events are highlighted here. Example 1: Serbian police tried to eliminate the proindependence Kosovo Liberation Army and (e1:restore) order. At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region. 
Example 2: Service industries (e3:showed) solid job gains, as did manufacturers, two areas expected to be hardest (e4:hit) when the effects of the Asian crisis hit the American economy. Example 3: We will act again if we have evidence he is (e5:rebuilding) his weapons of mass destruction capabilities, senior officials say. In a bit of television diplomacy, Iraq’s deputy foreign minister (e6:responded) from Baghdad in less than one hour, saying that . . . This paper proposes a new approach to handling 1319 these issues in TempRel annotation. First, we introduce multi-axis modeling to represent the temporal structure of events, based on which we anchor events to different semantic axes; only events from the same axis will then be temporally compared (Sec. 2). As explained later, those event pairs in Examples 1-3 are difficult because they represent different semantic phenomena and belong to different axes. Second, while we represent an event pair using two time intervals (say, [t1 start, t1 end] and [t2 start, t2 end]), we suggest that comparisons involving end-points (e.g., t1 end vs. t2 end) are typically more difficult than comparing start-points (i.e., t1 start vs. t2 start); we attribute this to the ambiguity of expressing and perceiving durations of events (Coll-Florit and Gennari, 2011). We believe that this is an important consideration, and we propose in Sec. 3 that TempRel annotation should focus on start-points. Using the proposed annotation scheme, a pilot study done by experts achieved a high IAA of .84 (Cohen’s Kappa) on a subset of TB-Dense, in contrast to the conventional 60’s. In addition to the low IAA issue, TempRel annotation is also known to be labor intensive. Our third contribution is that we facilitate, for the first time, the use of crowdsourcing to collect a new, high quality (under multiple metrics explained later) TempRel dataset. We explain how the crowdsourcing quality was controlled and how vague relations were handled in Sec. 4, and present some statistics and the quality of the new dataset in Sec. 5. A baseline system is also shown to achieve much better performance on the new dataset, when compared with system performance in the literature (Sec. 6). The paper’s results are very encouraging and hopefully, this work would significantly benefit research in this area. 2 Temporal Structure of Events Given a set of events, one important question in designing the TempRel annotation task is: which pairs of events should have a relation? The answer to it depends on the modeling of the overall temporal structure of events. 2.1 Motivation TimeBank (Pustejovsky et al., 2003b) laid the foundation for many later TempRel corpora, e.g., (Bethard et al., 2007; UzZaman et al., 2013; Cassidy et al., 2014).2 In TimeBank, the annotators were allowed to label TempRels between any pairs of events. This setup models the overall structure of events using a general graph, which made annotators inadvertently overlook some pairs, resulting in low IAAs and many false negatives. Example 4: Dense Annotation Scheme. Serbian police (e7:tried) to (e8:eliminate) the proindependence Kosovo Liberation Army and (e1:restore) order. At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region. Given 4 NON-GENERIC events above, the dense scheme presents 6 pairs to annotators one by one: (e7, e8), (e7, e1), (e7, e2), (e8, e1), (e8, e2), and (e1, e2). 
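Operationally, such a dense scheme just enumerates every event pair within a sliding two-sentence window. Below is a minimal sketch of that enumeration (our illustration; the representation of a document as lists of event mentions per sentence is an assumption, not part of the TB-Dense tooling):

```python
# Sketch of dense pair enumeration within a sliding two-sentence window.
# `doc` is assumed to be a list of sentences, each a list of event identifiers.
from itertools import combinations


def dense_pairs(doc):
    """Return every event pair whose two events are at most one sentence apart."""
    pairs = []
    for i, sentence in enumerate(doc):
        window = sentence + (doc[i + 1] if i + 1 < len(doc) else [])
        for e1, e2 in combinations(window, 2):
            if (e1, e2) not in pairs and (e2, e1) not in pairs:
                pairs.append((e1, e2))
    return pairs


# Example 4 above: two sentences containing e7, e8, e1 and e2 yield the six pairs
# (e7,e8), (e7,e1), (e7,e2), (e8,e1), (e8,e2), (e1,e2).
print(dense_pairs([["e7", "e8", "e1"], ["e2"]]))
```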
Apparently, not all pairs are well-defined, e.g., (e8, e2) and (e1, e2), but annotators are forced to label all of them. To address this issue, Cassidy et al. (2014) proposed a dense annotation scheme, TB-Dense, which annotates all event pairs within a sliding, two-sentence window (see Example 4). It requires all TempRels between GENERIC3 and NON-GENERIC events to be labeled as vague, which conceptually models the overall structure by two disjoint time-axes: one for the NONGENERIC and the other one for the GENERIC. However, as shown by Examples 1-3 in which the highlighted events are NON-GENERIC, the TempRels may still be ill-defined: In Example 1, Serbian police tried to restore order but ended up with conflicts. It is reasonable to argue that the attempt to e1:restore order happened before the conflict where 51 people were e2:killed; or, 51 people had been killed but order had not been restored yet, so e1:restore is after e2:killed. Similarly, in Example 2, service industries and manufacturers were originally expected to be hardest e4:hit but actually e3:showed gains, so e4:hit is before e3:showed; however, one can also argue that the two areas had showed gains but had not been hit, so e4:hit is after e3:showed. Again, e5:rebuilding is a hypothetical event: “we will act if rebuilding is true”. Readers do not know for sure if “he is already rebuilding weapons but we have no evidence”, or “he will be building weapons in the future”, so annotators may disagree on the relation between e5:rebuilding and e6:responded. Despite, importantly, minimizing missing annota2EventTimeCorpus (Reimers et al., 2016) is based on TimeBank, but aims at anchoring events onto explicit time expressions in each document rather than annotating TempRels between events, which can be a good complementary to other TempRel datasets. 3For example, lions eat meat is GENERIC. 1320 tions, the current dense scheme forces annotators to label many such ill-defined pairs, resulting in low IAA. 2.2 Multi-Axis Modeling Arguably, an ideal annotator may figure out the above ambiguity by him/herself and mark them as vague, but it is not a feasible requirement for all annotators to stay clear-headed for hours; let alone crowdsourcers. What makes things worse is that, after annotators spend a long time figuring out these difficult cases, whether they disagree with each other or agree on the vagueness, the final decisions for such cases will still be vague. As another way to handle this dilemma, TBDense resorted to a 80% confidence rule: annotators were allowed to choose a label if one is 80% sure that it was the writer’s intent. However, as pointed out by TB-Dense, annotators are likely to have rather different understandings of 80% confidence and it will still end up with disagreements. In contrast to these annotation difficulties, humans can easily grasp the meaning of news articles, implying a potential gap between the difficulty of the annotation task and the one of understanding the actual meaning of the text. In Examples 1-3, the writers did not intend to explain the TempRels between those pairs, and the original annotators of TimeBank4 did not label relations between those pairs either, which indicates that both writers and readers did not think the TempRels between these pairs were crucial. 
Instead, what is crucial in these examples is that “Serbian police tried to restore order but killed 51 people”, that “two areas were expected to be hit but showed gains”, and that “if he rebuilds weapons then we will act.” To “restore order”, to be “hardest hit”, and “if he was rebuilding” were only the intention of police, the opinion of economists, and the condition to act, respectively, and whether or not they actually happen is not the focus of those writers. This discussion suggests that a single axis is too restrictive to represent the complex structure of NON-GENERIC events. Instead, we need a modeling which is more restrictive than a general graph so that annotators can focus on relation annotation (rather than looking for pairs first), but also more flexible than a single axis so that ill-defined 4Recall that they were given the entire article and only salient relations would be annotated. Event Type Category INTENTION, OPINION On an orthogonal axis HYPOTHESIS, GENERIC On a parallel axis NEGATION Not on any axis STATIC, RECURRENT Other Table 1: The interpretation of various event types that are not on the main axis in the proposed multi-axis modeling. The names are rather straightforward; see examples for each in Appendix A. relations are not forcibly annotated. Specifically, we need axes for intentions, opinions, hypotheses, etc. in addition to the main axis of an article. We thus argue for multi-axis modeling, as defined in Table 1. Following the proposed modeling, Examples 1-3 can be represented as in Fig. 1. This modeling aims at capturing what the author has explicitly expressed and it only asks annotators to look at comparable pairs, rather than forcing them to make decisions on often vaguely defined pairs. tried e2: killed e1:restore order Main axis Intention axis of “tried” e5:rebuilding have evidence act e6:responded saying officials say Main axis Hypothetical axis crisis hit America e3:showed e4:hardest hit Main axis Opinion axis of “expected” expected Asian crisis Figure 1: A multi-axis view of Examples 1-3. Only events on the same axis are compared. In practice, we annotate one axis at a time: we first classify if an event is anchorable onto a given axis (this is also called the anchorability annotation step); then we annotate every pair of anchorable events (i.e., the relation annotation step); finally, we can move to another axis and repeat the two steps above. Note that ruling out cross-axis relations is only a strategy we adopt in this paper to separate well-defined relations from ill-defined relations. We do not claim that cross-axis relations are unimportant; instead, as shown in Fig. 2, we think that cross-axis relations are a different semantic phenomenon that requires additional investigation. 1321 2.3 Comparisons with Existing Work There have been other proposals of temporal structure modelings (Bramsen et al., 2006; Bethard et al., 2012), but in general, the semantic phenomena handled in our work are very different and complementary to them. (Bramsen et al., 2006) introduces “temporal segments” (a fragment of text that does not exhibit abrupt changes) in the medical domain. Similarly, their temporal segments can also be considered as a special temporal structure modeling. But a key difference is that (Bramsen et al., 2006) only annotates inter-segment relations, ignoring intra-segment ones. 
Since those segments are usually large chunks of text, the semantics handled in (Bramsen et al., 2006) is in a very coarse granularity (as pointed out by (Bramsen et al., 2006)) and is thus different from ours. (Bethard et al., 2012) proposes a tree structure for children’s stories, which “typically have simpler temporal structures”, as they pointed out. Moreover, in their annotation, an event can only be linked to a single nearby event, even if multiple nearby events may exist, whereas we do not have such restrictions. In addition, some of the semantic phenomena in Table 1 have been discussed in existing work. Here we compare with them for a better positioning of the proposed scheme. 2.3.1 Axis Projection TB-Dense handled the incomparability between main-axis events and HYPOTHESIS/NEGATION by treating an event as having occurred if the event is HYPOTHESIS/NEGATION.5 In our multiaxis modeling, the strategy adopted by TB-Dense falls into a more general approach, “axis projection”. That is, projecting events across different axes to handle the incomparability between any two axes (not limited to HYPOTHESIS/NEGATION). Axis projection works well for certain event pairs like Asian crisis and e4:hardest hit in Example 2: as in Fig. 1, Asian crisis is before expected, which is again before e4:hardest hit, so Asian crisis is before e4:hardest hit. Generally, however, since there is no direct evidence that can guide the projection, annotators may have different projections (imagine projecting e5:rebuilding onto the main axis: is it in the past or in the future?). As a result, axis projec5In the case of Example 3, it is to treat rebuilding as actually happened and then link it to responded. tion requires many specially designed guidelines or strong external knowledge. Annotators have to rigidly follow the sometimes counter-intuitive guidelines or “guess” a label instead of looking for evidence in the text. When strong external knowledge is involved in axis projection, it becomes a reasoning process and the resulting relations are a different type. For example, a reader may reason that in Example 3, it is well-known that they did “act again”, implying his e5:rebuilding had happened and is before e6:responded. Another example is in Fig. 2. It is obvious that relations based on these projections are not the same with and more challenging than those same-axis relations, so in the current stage, we should focus on same-axis relations only. worked hard attended submit a paper Main axis Intention axis Figure 2: In I worked hard to submit a paper ...I attended the conference, the projection of submit a paper onto the main axis is clearly before attended. However, this projection requires strong external knowledge that a paper should be submitted before attending a conference. Again, this projection is only a guess based on our external knowledge and it is still open whether the paper is submitted or not. 2.3.2 Introduction of the Orthogonal Axes Another prominent difference to earlier work is the introduction of orthogonal axes, which has not been used in any existing work as we know. A special property is that the intersection event of two axes can be compared to events from both, which can sometimes bridge events, e.g., in Fig. 1, Asian crisis is seemingly before hardest hit due to their connections to expected. Since Asian crisis is on the main axis, it seems that e4:hardest hit is on the main axis as well. 
However, the “hardest hit” in “Asian crisis before hardest hit” is only a projection of the original e4:hardest hit onto the real axis and is valid only when this OPINION is true. Nevertheless, OPINIONS are not always true and INTENTIONS are not always fulfilled. In Example 5, e9:sponsoring and e10:resolve are the opinions of the West and the speaker, respectively; whether or not they are true depends on the au1322 thors’ implications or the readers’ understandings, which is often beyond the scope of TempRel annotation.6 Example 6 demonstrates a similar situation for INTENTIONS: when reading the sentence of e11:report, people are inclined to believe that it is fulfilled. But if we read the sentence of e12:report, we have reason to believe that it is not. When it comes to e13:tell, it is unclear if everyone told the truth. The existence of such examples indicates that orthogonal axes are a better modeling for INTENTIONS and OPINIONS. Example 5: Opinion events may not always be true. He is ostracized by the West for (e9:sponsoring) terrorism. We need to (e10:resolve) the deep-seated causes that have resulted in these problems. Example 6: Intentions may not always be fulfilled. A passerby called the police to (e11:report) the body. A passerby called the police to (e12:report) the body. Unfortunately, the line was busy. I asked everyone to (e13:tell) the truth. 2.3.3 Differences from Factuality Event modality have been discussed in many existing event annotation schemes, e.g., Event Nugget (Mitamura et al., 2015), Rich ERE (Song et al., 2015), and RED. Generally, an event is classified as Actual or Non-Actual, a.k.a. factuality (Saur´ı and Pustejovsky, 2009; Lee et al., 2015). The main-axis events defined in this paper seem to be very similar to Actual events, but with several important differences: First, future events are Non-Actual because they indeed have not happened, but they may be on the main axis. Second, events that are not on the main axis can also be Actual events, e.g., intentions that are fulfilled, or opinions that are true. Third, as demonstrated by Examples 5-6, identifying anchorability as defined in Table 1 is relatively easy, but judging if an event actually happened is often a high-level understanding task that requires an understanding of the entire document or external knowledge. Interested readers are referred to Appendix B for a detailed analysis of the difference between Anchorable (onto the main axis) and Actual on a subset of RED. 3 Interval Splitting All existing annotation schemes adopt the interval representation of events (Allen, 1984) and there 6For instance, there is undoubtedly a causal link between e9:sponsoring and ostracized. are 13 relations between two intervals (for readers who are not familiar with it, please see Fig. 4 in the appendix). To reduce the burden of annotators, existing schemes often resort to a reduced set of the 13 relations. For instance, Verhagen et al. (2007) merged all the overlap relations into a single relation, overlap. Bethard et al. (2007); Do et al. (2012); O’Gorman et al. (2016) all adopted this strategy. In Cassidy et al. (2014), they further split overlap into includes, included and equal. Let [t1 start, t1 end] and [t2 start, t2 end] be the time intervals of two events (with the implicit assumption that tstart tend). Instead of reducing the relations between two intervals, we try to explicitly compare the time points (see Fig. 3). 
In this way, the label set is simply before, after and equal,7 while the expressivity remains the same. This interval splitting technique has also been used in (Raghavan et al., 2012). [!"#$%# & , !()* & ] [!"#$%# + , !()* + ] time Figure 3: The comparison of two event time intervals, [t1 start, t1 end] and [t2 start, t2 end], can be decomposed into four comparisons t1 start vs. t2 start, t1 start vs. t2 end, t1 end vs. t2 start, and t1 end vs. t2 end, without loss of generality. In addition to same expressivity, interval splitting can provide even more information when the relation between two events is vague. In the conventional setting, imagine that the annotators find that the relation between two events can be either before or before and overlap. Then the resulting annotation will have to be vague, although the annotators actually agree on the relation between t1 start and t2 start. Using interval splitting, however, such information can be preserved. An obvious downside of interval splitting is the increased number of annotations needed (4 point comparisons vs. 1 interval comparison). In practice, however, it is usually much fewer than 4 comparisons. For example, when we see t1 end < t2 start (as in Fig. 3), the other three can be skipped because they can all be inferred. Moreover, although the number of annotations is increased, the work load for human annotators may still be the same, because even in the conventional scheme, they still need to think of the relations between start- and 7We will discuss vague in Sec. 4. 1323 end-points before they can make a decision. 3.1 Ambiguity of End-Points During our pilot annotation, the annotation quality dropped significantly when the annotators needed to reason about relations involving end-points of events. Table 2 shows four metrics of task difficulty when only t1 start vs. t2 start or t1 end vs. t2 end are annotated. Non-anchorable events were removed for both jobs. The first two metrics, qualifying pass rate and survival rate are related to the two quality control protocols (see Sec. 4.1 for details). We can see that when annotating the relations between end-points, only one out of ten crowdsourcers (11%) could successfully pass our qualifying test; and even if they had passed it, half of them (56%) would have been kicked out in the middle of the task. The third line is the overall accuracy on gold set from all crowdsourcers (excluding those who did not pass the qualifying test), which drops from 67% to 37% when annotating end-end relations. The last line is the average response time per annotation and we can see that it takes much longer to label an end-end TempRel (52s) than a start-start TempRel (33s). This important discovery indicates that the TempRels between end-points is probably governed by a different linguistic phenomenon. Metric t1 start vs. t2 start t1 end vs. t2 end Qualification pass rate 50% 11% Survival rate 74% 56% Accuracy on gold 67% 37% Avg. response time 33s 52s Table 2: Annotations involving the end-points of events are found to be much harder than only comparing the start-points. We hypothesize that the difficulty is a mixture of how durative events are expressed (by authors) and perceived (by readers) in natural language. In cognitive psychology, Coll-Florit and Gennari (2011) discovered that human readers take longer to perceive durative events than punctual events, e.g., owe 50 bucks vs. lost 50 bucks. 
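The inference mentioned above (once t1 end < t2 start is known, the other three point comparisons follow from start <= end) and the conversion from interval relations to start-point relations used later in Section 5.1 can both be written down compactly. The sketch below is our own illustration with assumed relation names; it is not code from the paper.

```python
# Our illustration (assumed relation names; not the paper's code). Events are
# intervals [start, end] with start <= end, so one decisive comparison can fix
# the rest, and an interval relation determines the start-point relation.

def expand_end1_before_start2():
    """If t1_end < t2_start is annotated, the other three comparisons follow."""
    return {
        ("t1_start", "t2_start"): "before",  # t1_start <= t1_end < t2_start
        ("t1_start", "t2_end"): "before",    # t1_start <= t1_end < t2_start <= t2_end
        ("t1_end", "t2_start"): "before",    # the annotated comparison itself
        ("t1_end", "t2_end"): "before",      # t1_end < t2_start <= t2_end
    }


# Interval relation of (event 1, event 2) -> relation between their start-points,
# e.g. "1 includes 2" implies t1_start is before t2_start (cf. Sec. 5.1).
START_POINT_RELATION = {
    "before": "before",
    "after": "after",
    "includes": "before",
    "is_included": "after",
    "equal": "equal",
    "vague": "vague",
}

print(expand_end1_before_start2()[("t1_start", "t2_start")])  # -> before
print(START_POINT_RELATION["includes"])                       # -> before
```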
From the writer’s standpoint, durations are usually fuzzy (Schockaert and De Cock, 2008), or assumed to be a prior knowledge of readers (e.g., college takes 4 years and watching an NBA game takes a few hours), and thus not always written explicitly. Given all these reasons, we ignore the comparison of end-points in this work, although event duration is indeed, another important task. 4 Annotation Scheme Design To summarize, with the proposed multi-axis modeling (Sec. 2) and interval splitting (Sec. 3), our annotation scheme is two-step. First, we mark every event candidate as being temporally Anchorable or not (based on the time axis we are working on). Second, we adopt the dense annotation scheme to label TempRels only between Anchorable events. Note that we only work on verb events in this paper, so non-verb event candidates are also deleted in a preprocessing step. We design crowdsourcing tasks for both steps and as we show later, high crowdsourcing quality was achieved on both tasks. In this section, we will discuss some practical issues. 4.1 Quality Control for Crowdsourcing We take advantage of the quality control feature in CrowdFlower in our crowdsourcing jobs. For any job, a set of examples are annotated by experts beforehand, which is considered gold and will serve two purposes. (i) Qualifying test: Any crowdsourcer who wants to work on this job has to pass with 70% accuracy on 10 questions randomly selected from the gold set. (ii) Surviving test: During the annotation process, questions from the gold set will be randomly given to crowdsourcers without notice, and one has to maintain 70% accuracy on the gold set till the end of the annotation; otherwise, he or she will be forbidden from working on this job anymore and all his/her annotations will be discarded. At least 5 different annotators are required for every judgement and by default, the majority vote will be the final decision. 4.2 Vague Relations How to handle vague relations is another issue in temporal annotation. In non-dense schemes, annotators usually skip the annotation of a vague pair. In dense schemes, a majority agreement rule is applied as a postprocessing step to back off a decision to vague when annotators cannot pass a majority vote (Cassidy et al., 2014), which reminds us that annotators often label a vague relation as non-vague due to lack of thinking. We decide to proactively reduce the possibility of such situations. As mentioned earlier, our label set for t1 start vs. t2 start is before, after, equal and vague. We ask two questions: Q1=Is it possible that t1 start is before t2 start? Q2=Is it possible that t2 start is before t1 start? Let the an1324 swers be A1 and A2. Then we have a oneto-one mapping as follows: A1=A2=yes7!vague, A1=A2=no7!equal, A1=yes, A2=no7!before, and A1=no, A2=yes7!after. An advantage is that one will be prompted to think about all possibilities, thus reducing the chance of overlook. Finally, the annotation interface we used is shown in Appendix C. 5 Corpus Statistics and Quality In this section, we first focus on annotations on the main axis, which is usually the primary storyline and thus has most events. Before launching the crowdsourcing tasks, we checked the IAA between two experts on a subset of TB-Dense (about 100 events and 400 relations). A Cohen’s Kappa of .85 was achieved in the first step: anchorability annotation. 
Only those events that both experts labeled Anchorable were kept before they moved onto the second step: relation annotation, for which the Cohen’s Kappa was .90 for Q1 and .87 for Q2. Table 3 furthermore shows the distribution, Cohen’s Kappa, and F1 of each label. We can see the Kappa and F1 of vague (=.75, F1=.81) are generally lower than those of the other labels, confirming that temporal vagueness is a more difficult semantic phenomenon. Nevertheless, the overall IAA shown in Table 3 is a significant improvement compared to existing datasets. b a e v Overall Distribution .49 .23 .02 .26 1 IAA: Cohen’s  .90 .87 1 .75 .84 IAA: F1 .92 .93 1 .81 .90 Table 3: IAA of two experts’ annotations in a pilot study on the main axis. Notations: before, after, equal, and vague. With the improved IAA confirmed by experts, we sequentially launched the two-step crowdsourcing tasks through CrowdFlower on top of the same 36 documents of TB-Dense. To evaluate how well the crowdsourcers performed on our task, we calculate two quality metrics: accuracy on the gold set and the Worker Agreement with Aggregate (WAWA). WAWA indicates the average number of crowdsourcers’ responses agreed with the aggregate answer (we used majority aggregation for each question). For example, if N individual responses were obtained in total, and n of them were correct when compared to the aggregate answer, then WAWA is simply n/N. In the first step, crowdsourcers labeled 28% of the events as NonAnchorable to the main axis, with an accuracy on the gold of .86 and a WAWA of .79. With Non-Anchorable events filtered, the relation annotation step was launched as another crowdsourcing task. The label distribution is b=.50, a=.28, e=.03, and v=.19 (consistent with Table 3). In Table 4, we show the annotation quality of this step using accuracy on the gold set and WAWA. We can see that the crowdsourcers achieved a very good performance on the gold set, indicating that they are consistent with the authors who created the gold set; these crowdsourcers also achieved a high-level agreement under the WAWA metric, indicating that they are consistent among themselves. These two metrics indicate that the annotation task is now well-defined and easy to understand even by non-experts. No. Metric Q1 Q2 All 1 Accuracy on Gold .89 .88 .88 2 WAWA .82 .81 .81 Table 4: Quality analysis of the relation annotation step of MATRES. “Q1” and “Q2” refer to the two questions crowdsourcers were asked (see Sec. 4.2 for details). Line 1 measures the level of consistency between crowdsourcers and the authors and line 2 measures the level of consistency among the crowdsourcers themselves. We continued to annotate INTENTION and OPINION which create orthogonal branches on the main axis. In the first step, crowdsourcers achieved an accuracy on gold of .82 and a WAWA of .89. Since only 16% of the events are in this category and these axes are usually very short (e.g., allocate funds to build a museum.), the annotation task is relatively small and two experts took the second step and achieved an agreement of .86 (F1). We name our new dataset MATRES for MultiAxis Temporal RElations for Start-points. Each individual judgement cost us $0.01 and MATRES in total cost about $400 for 36 documents. 5.1 Comparison to TB-Dense To get another checkpoint of the quality of the new dataset, we compare with the annotations of TBDense. TB-Dense has 1.1K verb events, between which 3.4K event-event (EE) relations are annotated. 
In the new dataset, 72% of the events (0.8K) are anchored onto the main axis, resulting in 1.6K EE relations, and 16% (0.2K) are anchored onto orthogonal axes, resulting in 0.2K EE relations. 1325 The following comparison is based on the 1.8K EE relations in common. Moreover, since TB-Dense annotations are for intervals instead of start-points only, we converted TB-Dense’s interval relations to start-point relations (e.g., if A includes B, then tA start is before tB start). b a e v All b 455 11 5 42 513 a 45 309 16 68 438 e 13 7 2 10 32 v 450 138 20 192 800 All 963 465 43 312 1783 Table 5: An evaluation of MATRES against TB-Dense. Horizontal: MATRES. Vertical: TB-Dense (with interval relations mapped to start-point relations). Please see explanation of these numbers in text. The confusion matrix is shown in Table 5. A few remarks about how to understand it: First, when TB-Dense labels before or after, MATRES also has a high-probability of having the same label (b=455/513=.89, a=309/438=.71); when MATRES labels vague, TB-Dense is also very likely to label vague (v=192/312=.62). This indicates the high agreement level between the two datasets if the interval- or point-based annotation difference is ruled out. Second, many vague relations in TB-Dense are labeled as before, after or equal in MATRES. This is expected because TB-Dense annotates relations between intervals, while MATRES annotates start-points. When durative events are involved, the problem usually becomes more difficult and interval-based annotation is more likely to label vague (see earlier discussions in Sec. 3). Example 7 shows three typical cases, where e14:became, e17:backed, e18:rose and e19:extending can be considered durative. If only their start-points are considered, the crowdsourcers were correct in labeling e14 before e15, e16 after e17, and e18 equal to e19, although TBDense says vague for all of them. Third, equal seems to be the relation that the two dataset mostly disagree on, which is probably due to crowdsourcers’ lack of understanding in time granularity and event coreference. Although equal relations only constitutes a small portion in all relations, it needs further investigation. 6 Baseline System We develop a baseline system for TempRel extraction on MATRES, assuming that all the events and axes are given. The following commonlyExample 7: Typical cases that TB-Dense annotated vague but MATRES annotated before, after, and equal, respectively. At one point , when it (e14:became) clear controllers could not contact the plane, someone (e15:said) a prayer. TB-Dense: vague; MATRES: before The US is bolstering its military presence in the gulf, as President Clinton (e16:discussed) the Iraq crisis with the one ally who has (e17:backed) his threat of force, British prime minister Tony Blair. TB-Dense: vague; MATRES: after Average hourly earnings of nonsupervisory employees (e18:rose) to $12.51. The gain left wages 3.8 percent higher than a year earlier, (e19:extending) a trend that has given back to workers some of the earning power they lost to inflation in the last decade. TB-Dense: vague; MATRES: equal used features for each event pair are used: (i) The part-of-speech (POS) tags of each individual event and of its neighboring three words. (ii) The sentence and token distance between the two events. (iii) The appearance of any modal verb between the two event mentions in text (i.e., will, would, can, could, may and might). 
(iv) The appearance of any temporal connectives between the two event mentions (e.g., before, after and since). (v) Whether the two verbs have a common synonym from their synsets in WordNet (Fellbaum, 1998). (vi) Whether the input event mentions have a common derivational form derived from WordNet. (vii) The head words of the preposition phrases that cover each event, respectively. And (viii) event properties such as Aspect, Modality, and Polarity that come with the TimeBank dataset and are commonly used as features. The proposed baseline system uses the averaged perceptron algorithm to classify the relation between each event pair into one of the four relation types. We adopted the same train/dev/test split of TB-Dense, where there are 22 documents in train, 5 in dev, and 9 in test. Parameters were tuned on the train-set to maximize its F1 on the dev-set, after which the classifier was retrained on the union of train and dev. A detailed analysis of the baseline system is provided in Table 6. The performance on equal and vague is lower than on before and after, probably due to shortage in these labels in the training data and the inherent difficulty in event coreference and temporal vagueness. We can see, though, that the overall performance on MATRES is much better than those in the literature for TempRel extraction, which used to be in the low 50’s (Chambers et al., 2014; Ning et al., 2017). The same system was also retrained 1326 and tested on the original annotations of TB-Dense (Line “Original”), which confirms the significant improvement if the proposed annotation scheme is used. Note that we do not mean to say that the proposed baseline system itself is better than other existing algorithms, but rather that the proposed annotation scheme and the resulting dataset lead to better defined machine learning tasks. In the future, more data can be collected and used with advanced techniques such as ILP (Do et al., 2012), structured learning (Ning et al., 2017) or multi-sieve (Chambers et al., 2014). Training Testing P R F1 P R F1 Before .74 .91 .82 .71 .80 .75 After .73 .77 .75 .55 .64 .59 Equal 1 .05 .09 Vague .75 .28 .41 .29 .13 .18 Overall .73 .81 .77 .66 .72 .69 Original .44 .67 .53 .40 .60 .48 Table 6: Performance of the proposed baseline system on MATRES. Line “Original” is the same system retrained on the original TB-Dense and tested on the same subset of event pairs. Due to the limited number of equal examples, the system did not make any equal predictions on the testset. 7 Conclusion This paper proposes a new scheme for TempRel annotation between events, simplifying the task by focusing on a single time axis at a time. We have also identified that end-points of events is a major source of confusion during annotation due to reasons beyond the scope of TempRel annotation, and proposed to focus on start-points only and handle the end-points issue in further investigation (e.g., in event duration annotation tasks). Pilot study by expert annotators shows significant IAA improvements compared to literature values, indicating a better task definition under the proposed scheme. This further enables the usage of crowdsourcing to collect a new dataset, MATRES, at a lower time cost. Analysis shows that MATRES, albeit crowdsourced, has achieved a reasonably good agreement level, as confirmed by its performance on the gold set (agreement with the authors), the WAWA metric (agreement with the crowdsourcers themselves), and consistency with TB-Dense (agreement with an existing dataset). 
Given the fact that existing schemes suffer from low IAAs and lack of data, we hope that the findings in this work would provide a good start towards understanding more sophisticated semantic phenomena in this area. Acknowledgements We thank Martha Palmer, Tim O’Gorman, Mark Sammons and all the anonymous reviewers for providing insightful comments and critique in earlier stages of this work. This research is supported in part by a grant from the Allen Institute for Artificial Intelligence (allenai.org); the IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR) - a research collaboration as part of the IBM AI Horizons Network; by DARPA under agreement number FA8750-132-0008; and by the Army Research Laboratory (ARL) under agreement W911NF-09-2-0053 (the ARL Network Science CTA). The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA, of the Army Research Laboratory or the U.S. Government. Any opinions, findings, conclusions or recommendations are those of the authors and do not necessarily reflect the view of the ARL. References James F Allen. 1984. Towards a general theory of action and time. Artificial intelligence 23(2):123–154. Steven Bethard, Leon Derczynski, Guergana Savova, James Pustejovsky, and Marc Verhagen. 2015. SemEval-2015 Task 6: Clinical TempEval. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015). Association for Computational Linguistics, Denver, Colorado, pages 806–814. Steven Bethard, Oleksandr Kolomiyets, and MarieFrancine Moens. 2012. Annotating story timelines as temporal dependency structures. In Proceedings of the eighth international conference on language resources and evaluation (LREC). ELRA, pages 2721–2726. Steven Bethard, James H Martin, and Sara Klingenstein. 2007. Timelines from text: Identification of syntactic temporal relations. In IEEE International Conference on Semantic Computing (ICSC). pages 11–18. 1327 Steven Bethard, Guergana Savova, Wei-Te Chen, Leon Derczynski, James Pustejovsky, and Marc Verhagen. 2016. SemEval-2016 Task 12: Clinical TempEval. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016). Association for Computational Linguistics, San Diego, California, pages 1052–1062. Steven Bethard, Guergana Savova, Martha Palmer, and James Pustejovsky. 2017. SemEval-2017 Task 12: Clinical TempEval. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). Association for Computational Linguistics, pages 565–572. Philip Bramsen, Pawan Deshpande, Yoong Keok Lee, and Regina Barzilay. 2006. Inducing temporal graphs. In Proceedings of the Conference on Empirical Methods for Natural Language Processing (EMNLP). pages 189–198. Taylor Cassidy, Bill McDowell, Nathanel Chambers, and Steven Bethard. 2014. An annotation framework for dense event ordering. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). pages 501–506. Nathanael Chambers, Taylor Cassidy, Bill McDowell, and Steven Bethard. 2014. Dense event ordering with a multi-pass architecture. Transactions of the Association for Computational Linguistics 2:273– 284. Marta Coll-Florit and Silvia P Gennari. 2011. 
Time in language: Event duration in language comprehension. Cognitive psychology 62(1):41–79. Quang Xuan Do, Wei Lu, and Dan Roth. 2012. Joint inference for event timeline construction. In Proc. of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press. Kenton Lee, Yoav Artzi, Yejin Choi, and Luke Zettlemoyer. 2015. Event detection and factuality assessment with non-expert supervision. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1643–1648. Tuur Leeuwenberg and Marie-Francine Moens. 2017. Structured learning for temporal relation extraction from clinical records. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. Hector Llorens, Nathanael Chambers, Naushad UzZaman, Nasrin Mostafazadeh, James Allen, and James Pustejovsky. 2015. SemEval-2015 Task 5: QA TEMPEVAL - evaluating temporal information understanding with question answering. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015). pages 792–800. Anne-Lyse Minard, Manuela Speranza, Eneko Agirre, Itziar Aldabe, Marieke van Erp, Bernardo Magnini, German Rigau, Ruben Urizar, and Fondazione Bruno Kessler. 2015. SemEval-2015 Task 4: TimeLine: Cross-document event ordering. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015). pages 778–786. Teruko Mitamura, Yukari Yamakawa, Susan Holm, Zhiyi Song, Ann Bies, Seth Kulick, and Stephanie Strassel. 2015. Event nugget annotation: Processes and issues. In Proceedings of the Workshop on Events at NAACL-HLT. Nasrin Mostafazadeh, Alyson Grealish, Nathanael Chambers, James Allen, and Lucy Vanderwende. 2016. CaTeRS: Causal and temporal relation scheme for semantic annotation of event structures. In Proceedings of the 4th Workshop on Events: Definition, Detection, Coreference, and Representation. pages 51–61. Qiang Ning, Zhili Feng, and Dan Roth. 2017. A structured learning approach to temporal relation extraction. In Proceedings of the Conference on Empirical Methods for Natural Language Processing (EMNLP). Copenhagen, Denmark. Qiang Ning, Hao Wu, Haoruo Peng, and Dan Roth. 2018a. Improving temporal relation extraction with a globally acquired statistical resource. In Proceedings of the Annual Meeting of the North American Association of Computational Linguistics (NAACL). Association for Computational Linguistics. Qiang Ning, Zhongzhi Yu, Chuchu Fan, and Dan Roth. 2018b. Exploiting partially annotated data for temporal relation extraction. In The Joint Conference on Lexical and Computational Semantics (*SEM). Association for Computational Linguistics. Tim O’Gorman, Kristin Wright-Bettner, and Martha Palmer. 2016. Richer event description: Integrating event coreference with temporal, causal and bridging annotation. In Proceedings of the 2nd Workshop on Computing News Storylines (CNS 2016). Association for Computational Linguistics, Austin, Texas, pages 47–56. James Pustejovsky, Jos´e M Castano, Robert Ingria, Roser Sauri, Robert J Gaizauskas, Andrea Setzer, Graham Katz, and Dragomir R Radev. 2003a. TimeML: Robust specification of event and temporal expressions in text. New directions in question answering 3:28–34. James Pustejovsky, Patrick Hanks, Roser Sauri, Andrew See, Robert Gaizauskas, Andrea Setzer, Dragomir Radev, Beth Sundheim, David Day, Lisa Ferro, et al. 
2003b. The TIMEBANK corpus. In Corpus linguistics. volume 2003, pages 647–656. Preethi Raghavan, Eric Fosler-Lussier, and Albert M Lai. 2012. Learning to temporally order medical 1328 events in clinical text. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers-Volume 2. Association for Computational Linguistics, pages 70–74. Nils Reimers, Nazanin Dehghani, and Iryna Gurevych. 2016. Temporal anchoring of events for the timebank corpus. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 2195–2204. Roser Saur´ı and James Pustejovsky. 2009. FactBank: a corpus annotated with event factuality. Language resources and evaluation 43(3):227. Steven Schockaert and Martine De Cock. 2008. Temporal reasoning about fuzzy intervals. Artificial Intelligence 172(8-9):1158–1193. Zhiyi Song, Ann Bies, Stephanie Strassel, Tom Riese, Justin Mott, Joe Ellis, Jonathan Wright, Seth Kulick, Neville Ryant, and Xiaoyi Ma. 2015. From light to rich ere: Annotation of entities, relations, and events. In Proceedings of the The 3rd Workshop on EVENTS: Definition, Detection, Coreference, and Representation. Association for Computational Linguistics, Denver, Colorado, pages 89–98. William F Styler IV, Steven Bethard, Sean Finan, Martha Palmer, Sameer Pradhan, Piet C de Groen, Brad Erickson, Timothy Miller, Chen Lin, Guergana Savova, et al. 2014. Temporal annotation in the clinical domain. Transactions of the Association for Computational Linguistics 2:143. Naushad UzZaman, Hector Llorens, James Allen, Leon Derczynski, Marc Verhagen, and James Pustejovsky. 2013. SemEval-2013 Task 1: TempEval-3: Evaluating time expressions, events, and temporal relations. In Second Joint Conference on Lexical and Computational Semantics. volume 2, pages 1–9. Marc Verhagen, Robert Gaizauskas, Frank Schilder, Mark Hepple, Graham Katz, and James Pustejovsky. 2007. SemEval-2007 Task 15: TempEval temporal relation identification. In SemEval. pages 75–80. Marc Verhagen, Roser Sauri, Tommaso Caselli, and James Pustejovsky. 2010. SemEval-2010 Task 13: TempEval-2. In SemEval. pages 57–62.
2018
122
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1329–1338 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1329 Exemplar Encoder-Decoder for Neural Conversation Generation Gaurav Pandey, Danish Contractor, Vineet Kumar and Sachindra Joshi IBM Research AI New Delhi, India {gpandey1, dcontrac, vineeku6, jsachind}@in.ibm.com Abstract In this paper we present the Exemplar Encoder-Decoder network (EED), a novel conversation model that learns to utilize similar examples from training data to generate responses. Similar conversation examples (context-response pairs) from training data are retrieved using a traditional TF-IDF based retrieval model. The retrieved responses are used to create exemplar vectors that are used by the decoder to generate the response. The contribution of each retrieved response is weighed by the similarity of corresponding context with the input context. We present detailed experiments on two large data sets and find that our method outperforms state of the art sequence to sequence generative models on several recently proposed evaluation metrics. We also observe that the responses generated by the proposed EED model are more informative and diverse compared to existing state-of-the-art method. 1 Introduction With the availability of large datasets and the recent progress made by neural methods, variants of sequence to sequence learning (seq2seq) (Sutskever et al., 2014) architectures have been successfully applied for building conversational systems (Serban et al., 2016, 2017b). However, despite these methods being the stateof-the art frameworks for conversation generation, they suffer from problems such as lack of diversity in responses and generation of short, repetitive and uninteresting responses (Liu et al., 2016; Serban et al., 2016, 2017b). A large body of recent literature has focused on overcoming such challenges (Li et al., 2016a; Lowe et al., 2017). In part, such problems arise as all information required to generate responses needs to be captured as part of the model parameters learnt from the training data. These model parameters alone may not be sufficient for generating natural conversations. Therefore, despite providing enormous amount of data, neural generative systems have been found to be ineffective for use in real world applications (Liu et al., 2016). In this paper, we focus our attention on closed domain conversations. A characteristic feature of such conversations is that over a period of time, some conversation contexts1 are likely to have occurred previously (Lu et al., 2017b). For instance, Table 1 shows some contexts from the Ubuntu dialog corpus. Each row presents an input dialog context with its corresponding gold response followed by a similar context and response seen in training data – as can be seen, contexts for “installing dms”, “sharing files”, “blocking ufw ports” have all occurred in training data. We hypothesize that being able to refer to training responses for previously seen similar contexts could be a helpful signal to use while generating responses. In order to exploit this aspect of closed domain conversations we build our neural encoderdecoder architecture called the Exemplar Encoder Decoder (EED), that learns to generate a response for a given context by exploiting similar contexts from training conversations. 
Thus, instead of having the seq2seq model learn patterns of language only from aligned parallel corpora, we assist the model by providing it closely related (similar) samples from the training data that it can refer to while generating text. Specifically, given a context c, we retrieve a set 1We use the phrase “dialog context”, “conversation context” and “context” interchangeably throughout the paper. 1330 Input Context Gold Response Similar Context in training data Associated Response U1 if you want autologin install a dm of some sort lightdm, gdm, kdm, xdm, slim, etc. U1 if you’re running a dm, it will probably restart x e.g. gdm, kdm, xdm U2 what is a dm U2 whats a dm? U1 is it possible to share a file in one user’s home directory with another user? so chmod 777 should do it, right? U1 howto set right permission for my home directory? chmod and chown? u mean that sintax U2 if you set permissions (to ’group’,’other’ or with an acl) U2 but which is the syntax to set permission for my user in my home user directory ? U1 is there a way to block all ports in ufw and only allow the ports that have been allowed? do i need to use iptables in order to use ufw? U1 is ufw blocking connections to all ports by default? how do i block all ports with ufw? U2 try to get familiar with configuring iptables U2 no, all ports are open by default. U1 how do i upgrade on php beyond 5.3.2 on ubuntu using apt-get ? ? ? this version is a bit old lucid, 10.04 ubuntu 10.04.4 lts U1 hello!, how can i upgrade apt-get?(i have version 0.7.9 installed but i need to update to latest) I’m using ubuntu server 10.04 64 U2 which version of ubuntu are you using? U2 sudo apt-get upgrade apt-get U1 what version of ubuntu do you have? Table 1: Sample input contexts and corresponding gold responses from Ubuntu validation dataset along with similar contexts seen in training data and their corresponding responses. We refer to training data as training data for the Ubuntu corpus. The highlighted words are common between the gold response and the exemplar response. of context-response pairs (c(k), r(k)), 1 ≤k ≤K using an inverted index of training data. We create an exemplar vector e(k) by encoding the response r(k) (also referred to as exemplar response) along with an encoded representation of the current context c. We then learn the importance of each exemplar vector e(k) based on the likelihood of it being able to generate the ground truth response. We believe that e(k) may contain information that is helpful in generating the response. Table 1 highlights the words in exemplar responses that appear in the ground truth response as well. Contributions: We present a novel Exemplar Encoder-Decoder (EED) architecture that makes use of similar conversations, fetched from an index of training data. The retrieved contextresponse pairs are used to create exemplar vectors which are used by the decoder in the EED model, to learn the importance of training context-response pairs, while generating responses. We present detailed experiments on the publicly benchmarked Ubuntu dialog corpus data set (Lowe et al., 2015) as well a large collection of more than 127,000 technical support conversations. We compare the performance of the EED model with the existing state of the art generative models such as HRED (Serban et al., 2016) and VHRED (Serban et al., 2017b). 
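As a rough illustration of the retrieval step described above (and specified more precisely in Section 3.2), the snippet below indexes training context-response pairs in a TF-IDF vector space and returns the top-K most similar pairs for an input context. It uses scikit-learn for brevity and only approximates the paper's setup, which relies on a BM25 inverted index with the last utterance weighted twice; all data in the example is toy data.

```python
# Approximate exemplar retrieval: TF-IDF cosine similarity over training contexts.
# The actual system uses a BM25 inverted index with the last utterance up-weighted;
# this sklearn-based version only illustrates the interface.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

class ExemplarIndex:
    def __init__(self, contexts, responses):
        self.contexts = contexts                  # list[str]: training dialogue contexts
        self.responses = responses                # list[str]: aligned training responses
        self.vectorizer = TfidfVectorizer(lowercase=True)
        self.matrix = self.vectorizer.fit_transform(contexts)

    def retrieve(self, context, last_utterance, k=5):
        # Crudely emulate "last utterance counts twice" by repeating it in the query.
        query = context + " " + last_utterance
        qvec = self.vectorizer.transform([query])
        sims = cosine_similarity(qvec, self.matrix).ravel()
        top = sims.argsort()[::-1][:k]
        return [(self.contexts[i], self.responses[i], float(sims[i])) for i in top]

index = ExemplarIndex(
    contexts=["what is a dm", "how do i block all ports in ufw"],
    responses=["lightdm, gdm, kdm, xdm, slim, etc.", "try to get familiar with iptables"],
)
print(index.retrieve("i want to block ports", "is ufw blocking all ports?", k=1))
```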
We find that our model out-performs these models on a wide variety of metrics such as the recently proposed Activity Entity metrics (Serban et al., 2017a) as well as Embedding-based metrics (Lowe et al., 2015). In addition, we present qualitative insights into our results and we find that exemplar based responses are more informative and diverse. The rest of the paper is organized as follows. Section 2 briefly describes the recent works in neural dialogue generation The details of the proposed EED model for dialogue generation are described in detail in Section 3. In Section 4, we describe the datasets as well as the details of the models used during training. We present quantitative and qualitative results of EED model in Section 5. 2 Related Work In this section, we compare our work against other data-driven end-to-end conversation models. Endto-end conversation models can be further classified into two broad categories — generation based models and retrieval based models. Generation based models cast the problem of dialogue generation as a sequence to sequence learning problem. Initial works treat the entire context as a single long sentence and learn an encoder-decoder framework to generate response word by word (Shang et al., 2015; Vinyals and Le, 2015). This was followed by work that models context better by breaking it into conversation history and last utterance (Sordoni et al., 2015b). Context was further modeled effectively by using a hierarchical encoder decoder (HRED) model which first learns a vector representation of each utterance and then combines these representations to learn vector representation of context (Serban et al., 2016). Later, an alternative hierarchical model called VHRED (Serban et al., 2017b) was proposed, where generated responses were conditioned on latent variables. This leads to more in1331 formative responses and adds diversity to response generation. Models that explicitly incorporate diversity in response generation have also been studied in literature (Li et al., 2016b; Vijayakumar et al., 2016; Cao and Clark, 2017; Zhao et al., 2017). Our work differs from the above as none of these above approaches utilize similar conversation contexts observed in the training data explicitly. Retrieval based models on the other hand treat the conversation context as a query and obtain a set of responses using information retrieval (IR) techniques from the conversation logs (Ji et al., 2014). There has been further work where the responses are further ranked using a deep learning based model (Yan et al., 2016a,b; Qiu et al., 2017). On the other hand of the spectrum, endto-end deep learning based rankers have also been employed to generate responses (Wu et al., 2017; Henderson et al., 2017). Recently a framework has also been proposed that uses a discriminative dialog network that ranks the candidate responses received from a response generator network and trains both the networks in an end to end manner (Lu et al., 2017a). In contrast to the above models, we use the input contexts as well as the retrieved responses for generating the final responses. Contemporaneous to our work, a generative model for machine translation that employs retrieved translation pairs has also been proposed (Gu et al., 2017). We note that while the underlying premise of both the papers remains the same, the difference lies in the mechanism of incorporating the retrieved data. 3 Exemplar Encoder Decoder 3.1 Overview A conversation consists of a sequence of utterances. 
At a given point in the conversation, the utterances expressed prior to it are jointly referred to as the context. The utterance that immediately follows the context is referred to as the response. As discussed in Section 1, given a conversational context, we wish to to generate a response by utilizing similar context-response pairs from the training data. We retrieve a set of K exemplar contextresponse pairs from an inverted index created using the training data in an off-line manner. The input and the retrieved context-response pairs are then fed to the Exemplar Encoder Decoder (EED) network. A schematic illustration of the EED network is presented in Figure 1. The EED encoder combines the input context and the retrieved responses to create a set of exemplar vectors. The EED decoder then uses the exemplar vectors based on the similarity between the input context and retrieved contexts to generate a response. We now provide details of each of these modules. 3.2 Retrieval of Similar Context-Response Pairs Given a large collection of conversations as (context, response) pairs, we index each response and its corresponding context in tf −idf vector space. We further extract the last turn of a conversation and index it as an additional attribute of the context-response document pairs so as to allow directed queries based on it. Given an input context c, we construct a query that weighs the last utterance in the context twice as much as the rest of the context and use it to retrieve the top-k similar context-response pairs from the index based on a BM25 (Robertson et al., 2009) retrieval model. These retrieved pairs form our exemplar context-response pairs (c(k), r(k)), 1 ≤k ≤K. 3.3 Exemplar Encoder Network Given the exemplar pairs (c(k), r(k)), 1 ≤k ≤ K and an input context-response pair (c, r), we feed the input context c and the exemplar contexts c(1), . . . , c(K) through an encoder to generate the embeddings as given below: ce = Encodec(c) c(k) e = Encodec(c(k)), 1 ≤k ≤K Note that we do not constrain our choice of encoder and that any parametrized differentiable architecture can be used as the encoder to generate the above embeddings. Similarly, we feed the exemplar responses r(1), . . . , r(K) through a response encoder to generate response embeddings r(1) e , . . . , r(K) e , that is, r(k) e = Encoder(r(k)), 1 ≤k ≤K (1) Next, we concatenate the exemplar response encoding r(k) e with an encoded representation of current context ce as shown in equation 2 to create the exemplar vector e(k). This allows us to include in1332 Figure 1: A schematic illustration of the EED network. The input context-response pair is (c, r), while the exemplar context-response pairs are (c(k), r(k)), 1 ≤k ≤K. formation about similar responses along with the encoded input context representation. e(k) = [ce; r(k) e ], 1 ≤k ≤K (2) The exemplar vectors e(k), 1 ≤k ≤K are further used by the decoder for generating the ground truth response as described in the next section. 3.4 Exemplar Decoder Network Recall that we want the exemplar responses to help generate the responses based on how similar the corresponding contexts are with the input context. More similar an exemplar context is to the input context, higher should be its effect in generating the response. To this end, we compute the similarity scores s(k), 1 ≤k ≤K using the encodings computed in Section 3.3 as shown below. 
s(k) = exp(cT e c(k) e ) PK l=1 exp(cTe c(l) e ) (3) Next, each exemplar vector e(k) computed in Section 3.3, is fed to a decoder, where the decoder is responsible for predicting the ground truth response from the exemplar vector. Let pdec(r|e(k)) be the distribution of generating the ground truth response given the exemplar embedding. The objective function to be maximized, is expressed as a function of the scores s(k), the decoding distribution pdec and the exemplar vectors e(k) as shown below: ll = K X k=1 s(k) log pdec(r|e(k)) (4) Note that we weigh the contribution of each exemplar vector to the final objective based on how similar the corresponding context is to the input context. Moreover, the similarities are differentiable function of the input and hence, trainable by back propagation. The model should learn to assign higher similarities to the exemplar contexts, whose responses are helpful for generating the correct response. The model description uses encoder and decoder networks that can be implemented using any differentiable parametrized architecture. We discuss our choices for the encoders and decoder in the next section. 3.5 The Encoders and Decoder In this section, we discuss the various encoders and the decoder used by our model. The conversation context consists of an ordered sequence of utterances and each utterance can be further viewed as a sequence of words. Thus, context can be viewed as having multiple levels of 1333 hierarchies—at the word level and then at the utterance (sentence) level. We use a hierarchical recurrent encoder—popularly employed as part of the HRED framework for generating responses and query suggestions (Sordoni et al., 2015a; Serban et al., 2016, 2017b). The word-level encoder encodes the vector representations of words of an utterance to an utterance vector. Finally, the utterance-level encoder encodes the utterance vectors to a context vector. Let (u1, . . . , uN) be the utterances present in the context. Furthermore, let (wn1, . . . , wnMn) be the words present in the nth utterance for 1 ≤n ≤ N. For each word in the utterance, we retrieve its corresponding embedding from an embedding matrix. The word embedding for wnm will be denoted as wenm. The encoding of the nth utterance can be computed iteratively as follows: hnm = f1(hnm−1, wenm), 1 ≤m ≤Mn (5) We use an LSTM (Hochreiter and Schmidhuber, 1997) to model the above equation. The last hidden state hnMn is referred to as the utterance encoding and will be denoted as hn. The utterance-level encoder takes the utterance encodings h1, . . . , hN as input and generates the encoding for the context as follows: cen = f2(cen−1, hn), 1 ≤n ≤N (6) Again, we use an LSTM to model the above equation. The last hidden state ceN is referred to as the context embedding and is denoted as ce. A single level LSTM is used for embedding the response. In particular, let (w1, . . . , wM) be the sequence of words present in the response. For each word w, we retrieve the corresponding word embedding we from a word embedding matrix. The response embedding is computed from the word embeddings iteratively as follows: rem = g(rem−1, wem), 1 ≤m ≤M (7) Again, we use an LSTM to model the above equation. The last hidden state rem is referred to as the response embedding and is denoted as re. 4 Experimental Setup 4.1 Datasets 4.1.1 Ubuntu Dataset We conduct experiments on Ubuntu Dialogue Corpus (Lowe et al., 2015)(v2.0)2. Ubuntu dialogue corpus has about 1M context response pairs along with a label. 
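The similarity-weighted objective of Equations (3) and (4) above can be written compactly; the following PyTorch-style sketch assumes the per-exemplar decoder log-likelihoods log p_dec(r | e^(k)) have already been computed, and the tensor shapes are illustrative rather than taken from the released code.

```python
# Sketch of the EED objective (Eqs. 3-4): similarity-weighted decoder log-likelihood.
# Shapes: c_e (B, H), exemplar context encodings (B, K, H), per-exemplar
# log p_dec(r | e^(k)) for the ground-truth response r as (B, K).
import torch
import torch.nn.functional as F

def eed_objective(c_e, exemplar_c_e, exemplar_loglik):
    # Eq. 3: s^(k) = softmax_k( c_e^T c_e^(k) )
    scores = torch.einsum("bh,bkh->bk", c_e, exemplar_c_e)   # (B, K)
    s = F.softmax(scores, dim=-1)                            # (B, K)
    # Eq. 4: ll = sum_k s^(k) * log p_dec(r | e^(k))
    ll = (s * exemplar_loglik).sum(dim=-1)                   # (B,)
    return ll.mean()

# Toy check with random tensors:
B, K, H = 2, 5, 8
c_e = torch.randn(B, H, requires_grad=True)
loss = -eed_objective(c_e, torch.randn(B, K, H), torch.randn(B, K))
loss.backward()   # training minimizes the negative objective
```

Because the similarity scores enter the objective differentiably, gradients flow back into the context encoders, which is what lets the model learn to put more weight on exemplars whose responses help reconstruct the ground truth.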
The label value 1 indicates that the response associated with a context is the correct response and is incorrect otherwise. As we are only interested in positive labeled data we work with label = 1. Table 2 depicts some statistics for the dataset. Size Training Pairs 499,873 Validation Pairs 19,560 Test Pairs 18,920 |V | 538,328 Table 2: Dataset statistics for Ubuntu Dialog Corpus v2.0 (Lowe et al., 2015), where |V | represents the size of vocabulary. 4.1.2 Tech Support Dataset We also conduct our experiments on a large technical support dataset with more than 127K conversations. We will refer to this dataset as Tech Support dataset in the rest of the paper. Tech Support dataset contains conversations pertaining to an employee seeking assistance from an agent (technical support) — to resolve problems such as password reset, software installation/licensing, and wireless access. In contrast to Ubuntu dataset, this dataset has clearly two distinct users — employee and agent. In our experiments we model the agent responses only. For each conversation in the tech support data, we sample context and response pairs to create a dataset similar to the Ubuntu dataset format. Note that multiple context-response pairs can be generated from a single conversation. For each conversation, we sample 25% of the possible contextresponse pairs. We create validation pairs by selecting 5000 conversations randomly and sampling context response pairs). Similarly, we create test pairs from a different subset of 5000 conversations. The remaining conversations are used to 2https://github.com/rkadlec/ ubuntu-ranking-dataset-creator 1334 create training context-response pairs. Table 3 depicts some statistics for this dataset: Size Conversations 127,466 Training Pairs 204,808 Validation Pairs 8,738 Test Pairs 8,756 |V | 293,494 Table 3: Dataset statistics for Tech Support dataset. 4.2 Model and Training Details The EED and HRED models were implemented using the PyTorch framework (Paszke et al., 2017). We initialize the word embedding matrix as well as the weights of context and response encoders from the standard normal distribution with mean 0 and variance 0.01. The biases of the encoders and decoder are initialized with 0. The word embedding matrix is shared by the context and response encoders. For Ubuntu dataset, we use a word embedding size of 600, whereas the size of the hidden layers of the LSTMs in context and response encoders and the decoder is fixed at 1200. For Tech support dataset, we use a word embedding size of 128. Furthermore, the size of the hidden layers of the multiple LSTMs in context and response encoders and the decoder is fixed at 256. A smaller embedding size was chosen for the Tech Support dataset since we observed much less diversity in the responses of the Tech Support dataset as compared to Ubuntu dataset. Two different encoders are used for encoding the input context (not shown in Figure 1 for simplicity). The output of the first context encoder is concatenated with the exemplar response vectors to generate exemplar vectors as detailed in Section 3.3. The output of the second context encoder is used to compute the scoring function as detailed in Section 3.4. For each input context, we retrieve 5 similar context-response pairs for Ubuntu dataset and 3 context-response pairs for Tech support dataset using the tf-idf mechanism discussed in Section 3.2. We use the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 1e −4 for training the model. 
A batch size of 20 samples was used during training. In order to prevent overfitting, we use early stopping with log-likelihood on validation set as the stopping criteria. In order to generate the samples using the proposed EED model, we identify the exemplar context that is most similar to the input context based on the learnt scoring function discussed in Section 3.4. The corresponding exemplar vector is fed to the decoder to generate the response. The samples are generated using a beam search with width 5. The average per-word log-likelihood is used to score the beams. 5 Results & Evaluation 5.1 Quantitative Evaluation 5.1.1 Activity and Entity Metrics A traditional and popular metric used for comparing a generated sentence with a ground truth sentence is BLEU (Papineni et al., 2002) and is frequently used to evaluate machine translation. The metric has also been applied to compute scores for predicted responses in conversations, but it has been found to be less indicative of actual performance (Liu et al., 2016; Sordoni et al., 2015a; Serban et al., 2017a), as it is extremely sensitive to the exact words in the ground truth response, and gives equal importance to stop words/phrases and informative words. Serban et al. (2017a) recently proposed a new set of metrics for evaluating dialogue responses for the Ubuntu corpus. It is important to highlight that these metrics have been specifically designed for the Ubuntu corpus and evaluate a generated response with the ground truth response by comparing the coarse level representation of an utterance (such as entities, activities, Ubuntu OS commands). Here is a brief description of each metric: • Activity: Activity metric compares the activities present in a predicted response with the ground truth response. Activity can be thought of as a verb. Thus, all the verbs in a response are mapped to a set of manually identified list of 192 verbs. • Entity: This compares the technical entities that overlap with the ground truth response. A total of 3115 technical entities is identified using public resources such as Debian package manager APT. 1335 Activity Entity Tense Cmd Model P R F1 P R F1 Acc. Acc. LSTM* 1.7 1.03 1.18 1.18 0.81 0.87 14.57 94.79 VHRED* 6.43 4.31 4.63 3.28 2.41 2.53 20.2 92.02 HRED* 5.93 4.05 4.34 2.81 2.16 2.22 22.2 92.58 EED 6.42 4.77 4.87 3.8 2.91 2.99 31.73 95.06 Table 4: Activity & Entity metrics for the Ubuntu corpus. LSTM*, HRED* & VHRED* as reported by Serban et al. (2017a). • Tense: This measure compares the time tense of ground truth with predicted response. • Cmd: This metric computes accuracy by comparing commands identified in ground truth utterance with a predicted response. Table 4 compares our model with other recent generative models (Serban et al., 2017a) — LSTM (Shang et al., 2015), HRED (Serban et al., 2016) & VHRED (Serban et al., 2017b).We do not compare our model with Multi-Resolution RNN (MRNN) (Serban et al., 2017a), as MRNN explicitly utilizes the activities and entities during the generation process. In contrast, the proposed EED model and the other models used for comparison are agnostic to the activity and entity information. We use the standard script3 to compute the metrics. The EED model scores better than generative models on almost all of the metrics, indicating that we generate more informative responses than other state-of-the-art generative based approaches for Ubuntu corpus. 
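To clarify how overlap metrics of this kind are computed, the snippet below scores predicted responses against ground-truth responses by comparing the activities and entities found in each and accumulating micro-averaged precision, recall, and F1. The lexicons here are tiny placeholders; the official evaluation script referenced above should be used to reproduce the numbers in Table 4.

```python
# Illustrative activity/entity overlap scoring (micro P/R/F1 over a test set).
# ACTIVITIES / ENTITIES are placeholder lexicons; the real metric uses curated
# lists (192 activities, 3115 entities) and the official evaluation script.
ACTIVITIES = {"install", "upgrade", "remove", "restart"}
ENTITIES = {"apt-get", "grub", "ufw", "virtualbox"}

def extract(response, lexicon):
    return {tok for tok in response.lower().split() if tok in lexicon}

def prf1(predictions, references, lexicon):
    tp = fp = fn = 0
    for pred, ref in zip(predictions, references):
        p, r = extract(pred, lexicon), extract(ref, lexicon)
        tp += len(p & r)
        fp += len(p - r)
        fn += len(r - p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

preds = ["sudo apt-get install virtualbox", "just restart x"]
refs = ["you need to install virtualbox with apt-get", "restart the gdm service"]
print("activity P/R/F1:", prf1(preds, refs, ACTIVITIES))
print("entity   P/R/F1:", prf1(preds, refs, ENTITIES))
```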
The results show that responses associated with similar contexts may contain the activities and entities present in the ground truth response, and thus help in response generation. This is discussed further in Section 5.2. Additionally, we compared our proposed EED with a retrieval only baseline. The retrieval baseline achieves an activity F1 score of 4.23 and entity F1 score of 2.72 compared to 4.87 and 2.99 respectively achieved by our method on the Ubuntu corpus. The Tech Support dataset is not evaluated using the above metrics, since activity and entity information is not available for this dataset. 3https://github.com/julianser/Ubuntu-MultiresolutionTools/blob/master/ActEntRepresentation/eval file.sh 5.1.2 Embedding Metrics Embedding metrics (Lowe et al., 2017) were proposed as an alternative to word by word comparison metrics such as BLEU. We use pre-trained Google news word embeddings4 similar to Serban et al. (2017b), for easy reproducibility as these metrics are sensitive to the word embeddings used. The three metrics of interest utilize the word vectors in ground truth response and a predicted response and are discussed below: • Average: Average word embedding vectors are computed for the candidate response and ground truth. The cosine similarity is computed between these averaged embeddings. High similarity gives as indication that ground truth and predicted response have similar words. • Greedy: Greedy matching score finds the most similar word in predicted response to ground truth response using cosine similarity. • Extrema: Vector extrema score computes the maximum or minimum value of each dimension of word vectors in candidate response and ground truth. Of these, the embedding average metric is the most reflective of performance for our setup. The extrema representation, for instance, is very sensitive to text length and becomes ineffective beyond single length sentences(Forgues et al., 2014). We use the publicly available script5 for all our computations. As the test outputs for HRED are not available for Technical Support dataset, we use our 4GoogleNews-vectors-negative300.bin from https:// code.google.com/archive/p/word2vec/ 5https://github.com/julianser/ hed-dlg-truncated/blob/master/ Evaluation/embedding_metrics.py 1336 Ubuntu Tech Support Model Average Greedy Extrema Average Greedy Extrema HRED 0.5131 0.4103 0.3263 0.6372 0.5100 0.4355 EED 0.5528 0.4172 0.3134 0.6420 0.5125 0.4356 Table 5: Embedding Metrics (Lowe et al., 2015) for Ubuntu and Technical Support Corpus. Ubuntu Tech Support Model distinct-1 distinct-2 distinct-3 distinct-1 distinct-2 distinct-3 HRED 1154 8668 22417 645 3510 6365 EED 1173 9572 29447 732 4149 7877 Table 6: The number of unique tokens, token-pairs and token-triplets for Ubuntu and Technical Support Corpus. Input Context (C) and exemplar response (R) Generated/Ground Truth Responses 1 C: i want to check ntfs file systems. is fsck ok with ntfs. →no , you ’d need to use chkdsk from windows. → R: i already symlinked ntfsfix to fsck. ntfs and am scanning the unmounted devices HRED: i know , but i dont know how to do that VHRED: how can i do that ? EED: i am trying to use ntfs 3g i have a windows partition. GT: i dont have windows installed 2 C: anyone help me ? →seems that its detecting grub , have you tried checking continue without installing grub ? R: try the live cd... you ca n’t boot into linux unless grub install HRED: yes i did , but i have no idea how to fix it VHRED: yes i did. i dont know how to do it EED: nope not yet. 
grub install cd is not working GT: no i did not. 3 C: any ideas on my mplayer problem ? →reinstall. apt-get purge mplayer then reinstall R: but half my system depends on this package ... HRED: I did that , but it didn’t work VHRED: I dont know how to do that EED: does n t work. package does n t work GT: that does n’t seem as a solution 4 C: any of you guys in here know how to take a package on launchpad and recompile it and submit to your own ppa ? https : //launchpad.net/ mojocode/+archive /ppa ¡ ¡ i ’m trying to take the nginx package and install the rtmp module →could n’t you download the source and just commit it to your own ? R: i need prop . drivers HRED: I don’t know how to do that , I just want to know if there is a way to do it VHRED: no , I just want to make it so I can use it EED: i need to get the source package. package manager is not available GT: so i figured how how to copy into my private ppa . any idea how to download those source files maybe and make modifications to resubmit and recompile ? https : //launchpad.net / blendedbyus/+archive/master/+packages 5 C: hi people , can you help me uninstall a package ? i did dpkg -i virtualbox-4.2 4.2.2-81494 ubuntu precise amd64.deb. how do i uninstall it , i ca n’t find the package from apt-get →sudo apt-get purge virtualbox should work fine →package virtualbox is not installed , so not removed →does n’t work : ( R: sudo dpkg – remove – force < package > HRED: what version of ubuntu ? VHRED: did you try sudo apt-get install virtualbox-ose EED: sudo dpkg reconfigure package package manager GT: if you ca n’t find the package ; sudo aptitude search virtualbox Table 7: Contexts, exemplar responses and responses generated by HRED, VHRED and the proposed EED model. We use the published responses for HRED and VHRED. GT indicates the ground truth response. The change of turn is indicated by →. The highlighted words in bold are common between the exemplar response and the response predicted by EED. own implementation of HRED. Table 5 compares our model with HRED, and depicts that our model scores better on all metrics for Technical Support dataset, and on majority of the metrics for Ubuntu dataset. We note that the improvement achieved by the 1337 EED model on activity and entity metrics are much more significant than those on embedding metrics. This suggests that the EED model is better able to capture the specific information (objects and actions) present in the conversations. Finally, we evaluate the diversity of the generated responses for EED against HRED by counting the number of unique tokens, token-pairs and token-triplets present in the generated responses on Ubuntu and Tech Support dataset. The results are shown in Table 6. As can be observed, the responses in EED have a larger number of distinct tokens, token-pairs and token-triplets than HRED, and hence, are arguably more diverse. 5.2 Qualitative Evaluation Table 7 presents the responses generated by HRED, VHRED and the proposed EED for a few selected contexts along with the corresponding similar exemplar responses. As can be observed from the table, the responses generated by EED tend to be more specific to the input context as compared to the responses of HRED and VHRED. For example, in conversations 1 and 2 we find that both HRED and VHRED generate simple generic responses whereas EED generates responses with additional information such as the type of disk partition used or a command not working. 
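The distinct-token statistics in Table 6 amount to counting unique n-grams across all generated responses; a generic sketch of that bookkeeping is given below (the exact tokenization used by the authors is not specified, so whitespace splitting is an assumption).

```python
# Count unique unigrams, bigrams, and trigrams over a set of generated responses,
# mirroring the distinct-1/2/3 diversity statistics reported in Table 6.
def distinct_ngrams(responses, n):
    grams = set()
    for resp in responses:
        toks = resp.split()
        grams.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    return len(grams)

generated = [
    "sudo dpkg reconfigure package package manager",
    "i need to get the source package",
]
for n in (1, 2, 3):
    print("distinct-%d:" % n, distinct_ngrams(generated, n))
```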
This is also confirmed by the quantitative results obtained using activity and entity metrics in the previous section. We further observe that the exemplar responses contain informative words that are utilized by the EED model for generating the responses as highlighted in Table 7. 6 Conclusions In this work, we propose a deep learning method, Exemplar Encoder Decoder (EED), that given a conversation context uses similar contexts and corresponding responses from training data for generating a response. We show that by utilizing this information the system is able to outperform state of the art generative models on publicly available Ubuntu dataset. We further show improvements achieved by the proposed method on a large collection of technical support conversations. While in this work, we apply the exemplar encoder decoder network on conversational task, the method is generic and could be used with other tasks such as question answering and machine translation. In our future work we plan to extend the proposed method to these other applications. Acknowledgements We are grateful to the anonymous reviewers for their comments that helped in improving the paper. References Kris Cao and Stephen Clark. 2017. Latent variable dialogue models and their diversity. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, volume 2, pages 182–187. Gabriel Forgues, Joelle Pineau, Jean-Marie Larchevˆeque, and R´eal Tremblay. 2014. Bootstrapping dialog systems with word embeddings. In Nips, modern machine learning and natural language processing workshop, volume 2. Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor OK Li. 2017. Search engine guided nonparametric neural machine translation. arXiv preprint arXiv:1705.07267. Matthew Henderson, Rami Al-Rfou’, Brian Strope, Yun-Hsuan Sung, L´aszl´o Luk´acs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, and Ray Kurzweil. 2017. Efficient natural language response suggestion for smart reply. CoRR, abs/1705.00652. Sepp Hochreiter and Jurgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Zongcheng Ji, Zhengdong Lu, and Hang Li. 2014. An information retrieval approach to short text conversation. CoRR, abs/1408.6988. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 110–119. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016b. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119. 1338 Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023. Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. arXiv preprint arXiv:1506.08909. 
Ryan Thomas Lowe, Nissan Pow, Iulian Vlad Serban, Laurent Charlin, Chia-Wei Liu, and Joelle Pineau. 2017. Training end-to-end dialogue systems with the ubuntu dialogue corpus. Dialogue & Discourse, 8(1):31–65. Jiasen Lu, Anitha Kannan, Jianwei Yang, Devi Parikh, and Dhruv Batra. 2017a. Best of both worlds: Transferring knowledge from discriminative learning to a generative visual dialog model. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 314–324. Yichao Lu, Phillip Keung, Shaonan Zhang, Jason Sun, and Vikas Bhardwaj. 2017b. A practical approach to dialogue response generation in closed domains. arXiv preprint arXiv:1703.09439. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. Minghui Qiu, Feng-Lin Li, Siyu Wang, Xing Gao, Yan Chen, Weipeng Zhao, Haiqing Chen, Jun Huang, and Wei Chu. 2017. Alime chat: A sequence to sequence and rerank based chatbot engine. In ACL. Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends R⃝in Information Retrieval, 3(4):333–389. Iulian Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In AAAI. Iulian Vlad Serban, Tim Klinger, Gerald Tesauro, Kartik Talamadupula, Bowen Zhou, Yoshua Bengio, and Aaron C Courville. 2017a. Multiresolution recurrent neural networks: An application to dialogue response generation. In AAAI, pages 3288–3294. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C Courville, and Yoshua Bengio. 2017b. A hierarchical latent variable encoder-decoder model for generating dialogues. In AAAI, pages 3295–3301. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In ACL. Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and JianYun Nie. 2015a. A hierarchical recurrent encoderdecoder for generative context-aware query suggestion. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pages 553–562. ACM. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and William B. Dolan. 2015b. A neural network approach to contextsensitive generation of conversational responses. In HLT-NAACL. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Ashwin K Vijayakumar, Michael Cogswell, Ramprasath R Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2016. Diverse beam search: Decoding diverse solutions from neural sequence models. arXiv preprint arXiv:1610.02424. Oriol Vinyals and Quoc V. Le. 2015. A neural conversational model. CoRR, abs/1506.05869. Yu Wu, Wei Wu, Chen Xing, Can Xu, Zhoujun Li, and Ming Zhou. 2017. 
A sequential matching framework for multi-turn response selection in retrievalbased chatbots. CoRR, abs/1710.11344. Rui Yan, Yiping Song, and Hua Wu. 2016a. Learning to respond with deep neural networks for retrievalbased human-computer conversation system. In SIGIR. Rui Yan, Yiping Song, Xiangyang Zhou, and Hua Wu. 2016b. ”shall i be your chat companion?”: Towards an online human-computer conversation system. In CIKM. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 654–664.
2018
123
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1339–1349 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1339 DialSQL: Dialogue Based Structured Query Generation Izzeddin Gur and Semih Yavuz and Yu Su and Xifeng Yan Department of Computer Science, University of California, Santa Barbara {izzeddingur,syavuz,ysu,xyan}@cs.ucsb.edu Abstract The recent advance in deep learning and semantic parsing has significantly improved the translation accuracy of natural language questions to structured queries. However, further improvement of the existing approaches turns out to be quite challenging. Rather than solely relying on algorithmic innovations, in this work, we introduce DialSQL, a dialoguebased structured query generation framework that leverages human intelligence to boost the performance of existing algorithms via user interaction. DialSQL is capable of identifying potential errors in a generated SQL query and asking users for validation via simple multi-choice questions. User feedback is then leveraged to revise the query. We design a generic simulator to bootstrap synthetic training dialogues and evaluate the performance of DialSQL on the WikiSQL dataset. Using SQLNet as a black box query generation tool, DialSQL improves its performance from 61.3% to 69.0% using only 2.4 validation questions per dialogue. 1 Introduction Building natural language interfaces to databases (NLIDB) is a long-standing open problem and has significant implications for many application domains. It can enable users without SQL programming background to freely query the data they have. For this reason, generating SQL queries from natural language questions has gained a renewed interest due to the recent advance in deep learning and semantic parsing (Yaghmazadeh et al., 2017; Zhong et al., 2017; Xu et al., 2017; Iyer et al., 2017). While new methods race to achieve the stateof-the-art performance on NLIDB datasets such as WikiSQL (Xu et al., 2017; Zhong et al., 2017), the accuracy is still not high enough for real use. For example, SQLNet (Xu et al., 2017) achieves 61.3% accuracy on WikiSQL. After analyzing the error cases of Seq2SQL (Zhong et al., 2017) and SQLNet, we recognized that many wrong translations cannot be easily corrected due to the lack of external knowledge and semantic understanding. In this paper, we aim to alleviate the aforementioned problem by putting human users in the loop. Previous human-in-the-loop NLIDBs (Li and Jagadish, 2014; Yaghmazadeh et al., 2017) rely on users to carefully go through a generated SQL query and revise it accordingly, which is not feasible for users who do not know the SQL language. Instead, we resort to a different approach by introducing a goal-oriented dialogue model, DialSQL, that interacts with users to extract and correct potential errors in the generated queries. Given a SQL query generated from a natural language question, we assume any segment, or span, of the generated query such as a WHERE clause can be potentially erroneous. The goal of DialSQL is to extract the erroneous spans and ask users multi-choice questions to validate and correct these errors. DialSQL is based on a hierarchical encoder-decoder architecture with attention and pointer mechanisms. The model first encodes each turn of interaction and runs a dialogue level RNN network on the dialogue history. 
The output of the network is then used to predict the error category, i.e., whether it is a selection, projection, or aggregation error. Conditioned on the error category, the output of a second RNN is used to predict the start and end positions of the error span by pointing to the query tokens. Finally, candidate choices are decoded from the error category and span representations. Following previous 1340 Natural Language Question : What are the countries that joined the NATO before 2004 ? Ground Truth SQL Query : SELECT country WHERE date of join < 2004 Initial SQL Query : SELECT count ( country ) WHERE date of join = 2004 Turn Side NL Expression using Rule-Based Templates Response System: Could you please confirm if you are asking total number of the results? validate agg (1, 2) User: No, that’s not true. negate() System: Could you please select one of the following options : (i) average number of results, (ii) all the results? offer agg (average, no agg) User: It should be all the results. select(2) System: Could you please confirm if date of join is 2004 ? validate where change(6, 11) User: That’s wrong. negate() System: Could you please select one of the following options : (i) date of join is before 2004, (ii) date of invitation is before 2004? offer where (date of join < 2004, date of invitation < 2004) User: Date of join is before 2004. select(1) Table 1: DialSQL model running example. Initial SQL query is generated by running a black box model on the question. Natural language (NL) expressions are generated using a template based method. Substrings in red represent the error spans and substrings in blue represent the choices offered. Each response is accompanied with natural language utterances for clarity. work (Zhong et al., 2017; Xu et al., 2017), we only use column names and do not utilize table values. How to train and evaluate DialSQL become two challenging issues due to the lack of error data and interaction data. In this work, we construct a simulator to generate simulated dialogues, a general approach practiced by many dialogue studies. Inspired by the agenda-based methods for user simulation (Schatzmann et al., 2007), we keep an agenda of pending actions that are needed to induce the ground truth query. At the start of the dialogue, a new query is carefully synthesized by randomly altering the ground truth query and the agenda is populated by the sequence of altering actions. Each action consists of three sub-actions: (i) Pick an error category and extract a span; (ii) Raise a question; (iii) Update the query by randomly altering the span and remove the action from the agenda. Consider the example in Figure 1: Step1 synthesizes the initial query by randomly altering the WHERE clause and AGGREGATION; Step2 generates the simulated dialogue by validating the altered spans and offering the correct choice. To evaluate our model, we first train DialSQL on the simulated dialogues. Initial queries for new questions are manufactured by running a black box SQL generation system on the new questions. When tested on the WikiSQL (Zhong et al., 2017) dataset, our model increases the query match accuracy of SQLNet (Xu et al., 2017) from 61.3% to 69.0% using on average 2.4 validation questions per query. 2 Related Work Research on natural language interfaces to databases (NLIDBs), or semantic parsing, has spanned several decades. 
Early rule-based NLIDBs (Woods, 1973; Androutsopoulos et al., 1995; Popescu et al., 2003) employ carefully designed rules to map natural language questions to formal meaning representations like SQL queries. While having a high precision, rule-based systems are brittle when facing with language variations. The rise of statistical models (Zettlemoyer and Collins, 2005; Kate et al., 2005; Berant et al., 2013), especially the ongoing wave of neural network models (Yih et al., 2015; Dong and Lapata, 2016; Sun et al., 2016; Zhong et al., 2017; Xu et al., 2017; Guo and Gao, 2018; Yavuz et al., 2016), has enabled NLIDBs that are more robust to language variations. Such systems allow users to formulate questions with greater flexibility. However, although state-of-the-art systems have achieved a high accuracy of 80% to 90% (Dong and Lapata, 2016) on well-curated datasets like GEO (Zelle and Ray, 1996) and ATIS (Zettlemoyer and Collins, 2007), the best accuracies on datasets with questions formulated by real human users, e.g., WebQuestions (Berant et al., 2013), GraphQuestions (Su et al., 2016), and WikiSQL (Zhong et al., 2017), are still far from enough for real use, typically in the range of 20% to 60%. Human-in-the-loop systems are a promising paradigm for building practical NLIDBs. A number of recent studies have explored this paradigm with two types of user interaction: coarse-grained and fine-grained. Iyer et al. (2017) and Li et 1341 Figure 1: An instantiation of our dialogue simulation process. Step-1 synthesizes the initial query (top) by randomly altering the ground truth query (bottom). Step-2 generates the dialogue by validating the sequence of actions populated in Step-1 with the user. Each action is defined by the error category, start and end positions of the error span, and the random replacement, ex. AGG (1, 2, count). al. (2016) incorporate coarse-grained user interaction, i.e., asking the user to verify the correctness of the final results. However, for real-world questions, it may not always be possible for users to verify result correctness, especially in the absence of supporting evidence. Li and Jagadish (2014) and Yaghmazadeh et al. (2017) have shown that incorporating fine-grained user interaction can greatly improve the accuracy of NLIDBs. However, they require that the users have intimate knowledge of SQL, an assumption that does not hold for general users. Our method also enables fine-grained user interaction for NLIDBs, but we solicit user feedback via a dialogue between the user and the system. Our model architecture is inspired by recent studies on hierarchical neural network models (Sordoni et al., 2015; Serban et al., 2015; Gur et al., 2017). Recently, Saha et al. (2018) propose a hierarchical encoder-decoder model augmented with key-value memory network for sequential question answering over knowledge graphs. Users ask a series of questions, and their system finds the answers by traversing a knowledge graph and resolves coreferences between questions. Our interactive query generation task significantly differs from their setup in that we aim to explicitly detect and correct the errors in the generated SQL query via a dialogue between our model and the user. Agenda based user simulations have been investigated in goal-oriented dialogues for model training (Schatzmann et al., 2007). Recently, Seq2seq neural network models are proposed for user simulation (Asri et al., 2016) that utilize additional state tracking signals and encode dialogue turns in a more coarse way. 
We design a simulation method for the proposed task where we generate dialogues with annotated errors by altering queries and tracking the sequence of alteration steps. 3 Problem Setup and Datasets We study the problem of building an interactive natural language interface to databases (INLIDB) for synthesizing SQL queries from natural language questions. In particular, our goal is to design a dialogue system to extract and validate potential errors in generated queries by asking users multi-choice questions over multiple turns. We will first define the problem formally and then explain our simulation strategy. 3.1 Interactive Query Generation At the beginning of each dialogue, we are given a question Q = {q1, q2, · · · , qN}, a table with column names T = {T1, T2, · · · , TK} where each name is a sequence of words, and an initial SQL query U generated using a black box SQL generation system. Each turn t is represented by a tuple of system and user responses, (St, Rt), and augmented with the dialogue history (list of previous turns), Ht. Each system response is a triplet of error category c, error span s, and a set of candidate choices C, i.e., St = (c, s, C). An error category (Table 2) denotes the type of the error that we seek to correct and an error span is the segment of the current query that indicates the actual error. Candidate choices depend on the error category and range over the following possibilities: (i) a column name, (ii) an aggregation operator, or (iii) a where condition. User responses are represented by ei1342 Error Category Meaning in a dialogue validate sel Validate the select clause validate agg Validate the aggregation operator validate where changed Validate if a segment of a where clause is incorrect validate where removed Validate if a new where clause is needed validate where added Validate if an incorrect where clause exists no error Validate if there is no remaining error Table 2: The list of error categories and their explanations for our interactive query generation task. ther an affirmation or a negation answer and an index c′ to identify a choice. We define the interactive query generation task as a list of subtasks: at each turn t, (i) predict c, (ii) extract s from U, and (iii) decode C. The task is supervised and each subtask is annotated with labeled data. Consider the example dialogue in Table 1. We first predict validate agg as the error category and error span (start = 1, end = 2) is decoded by pointing to the aggregation segment of the query. Candidate choices, (average, no agg), are decoded using the predicted error category, predicted error span, and dialogue history. We use a template based natural language generation (NLG) component to convert system and user responses into natural language. 3.2 Dialogue Simulation for INLIDB In our work, we evaluate our model on the WikiSQL task. Each example in WikiSQL consists of a natural language question and a table to query from. The task is to generate a SQL query that correctly maps the question to the given table. Unfortunately, the original WikiSQL lacks error data and user interaction data to train and evaluate DialSQL. We work around this problem by designing a simulator to bootstrap training dialogues and evaluate DialSQL on the test questions of WikiSQL. Inspired by the agenda-based methods (Schatzmann et al., 2007), we keep an agenda of pending actions that are needed to induce the ground truth query. 
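For concreteness, the turn-level supervision defined in Section 3.1 above can be represented with a few simple data structures: each system response is an (error category, span, choices) triple and each user response is an affirmation or negation plus a choice index. The class and field names below are illustrative and not taken from a released implementation.

```python
# Illustrative data structures for the interactive query generation task:
# each system turn is S_t = (c, s, C); each user turn is an answer plus a choice index.
from dataclasses import dataclass
from typing import List, Tuple

ERROR_CATEGORIES = [
    "validate_sel", "validate_agg", "validate_where_changed",
    "validate_where_removed", "validate_where_added", "no_error",
]

@dataclass
class SystemResponse:
    category: str                  # one of ERROR_CATEGORIES
    span: Tuple[int, int]          # (start, end) token positions in the current query
    choices: List[str]             # candidate replacements offered to the user

@dataclass
class UserResponse:
    affirmed: bool                 # True = confirm, False = negate
    choice_index: int = -1         # index into choices when a correction is selected

@dataclass
class Turn:
    system: SystemResponse
    user: UserResponse

# The aggregation-validation turn from the running example in Table 1:
turn = Turn(
    system=SystemResponse("validate_agg", (1, 2), ["average", "no_agg"]),
    user=UserResponse(affirmed=False, choice_index=1),
)
print(turn)
```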
At the start of the dialogue, we synthesize a new query by randomly altering the ground truth query and populate the agenda with the sequence of alteration actions. Each action launches a sequence of sub-actions: (i) randomly select an error category and extract a related span from the current query, (ii) randomly generate a valid choice for the chosen span, and (iii) update the current query by replacing the span with the choice. The dialogue is initiated with the final query, and a rule-based system interacts with a rule-based user simulator to populate the dialogue. The rule-based system follows the sequence of alteration actions previously generated and asks the user simulator a single question at each turn. The user simulator has access to the ground truth query and answers each question by comparing the question (error span and the choice) with the ground truth.

Consider the example in Figure 1, where Step-1 synthesizes the initial query and Step-2 simulates a dialogue using the outputs of Step-1. Step-1 first randomly alters the WHERE clause; the operator is replaced with a random operator. The updated query is further altered and the final query is passed to Step-2. In Step-2, the system starts by validating the aggregation with the user simulator. In this motivating example, the aggregation is incorrect, so the user simulator negates and selects the offered choice. During training, there is only a single choice offered and DialSQL is trained to produce this choice; during testing, however, it can offer multiple choices. In the next step, the system validates the WHERE clause and generates a no error action to issue the generated query. At the end of this process, we generate a set of labeled dialogues by executing Step-1 and Step-2 consecutively. DialSQL interacts with the same rule-based simulator during testing, and the SQL queries obtained at the end of the dialogues are used to evaluate the model.

4 Dialogue Based SQL Generation

In this section, we present our DialSQL model and describe its operation in a fully supervised setting. DialSQL is composed of three layers linked in a hierarchical structure, where each layer solves a different subtask: (i) predicting the error category, (ii) decoding the error span, and (iii) decoding candidate choices (illustrated in Figure 2). Given a $(Q, T, U)$ triplet, the model first encodes $Q$, each column name $T_i \in T$, and the query $U$ into vector representations in parallel using recurrent neural networks (RNNs). Next, the first layer of the model encodes the dialogue history with an RNN and predicts the error category from this encoding. The second layer is conditioned on the error category and decodes the start and end positions of the error span by attending over the outputs of the query encoder. Finally, the last layer is conditioned on both the error category and the error span and decodes a list of choices to offer to the user.

4.1 Preliminaries and Notation

Each token $w$ is associated with a vector $e_w$ from the rows of an embedding matrix $E$. We aim at obtaining vector representations for the question, table headers, and query, and then generating the error category, error span, and candidate choices. For our purposes, we use GRU units (Cho et al., 2014) in our RNN encoders, defined as $h_t = f(x_t; h_{t-1})$, where $h_t$ is the hidden state at time $t$ and $f$ is a nonlinear function operating on the input vector $x_t$ and the previous state $h_{t-1}$. We refer to the last hidden state of an RNN encoder as the encoding of a sequence.
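As a concrete reference for this convention, the sketch below builds such encoders with Keras GRUs; it is a simplification (the full model uses two-layer bidirectional encoders, per Section 5.2), and the layer names and vocabulary size are illustrative assumptions.

```python
import tensorflow as tf

VOCAB_SIZE, EMB_DIM, STATE_DIM = 20000, 300, 50   # embedding/state sizes follow Section 5.2; vocab size is assumed

# A single shared embedding matrix E; the encoders' parameters are otherwise decoupled.
embed = tf.keras.layers.Embedding(VOCAB_SIZE, EMB_DIM)

def make_encoder():
    # return_sequences gives the per-step outputs o, return_state gives the last hidden state h
    return tf.keras.layers.GRU(STATE_DIM, return_sequences=True, return_state=True)

enc_question, enc_column, enc_query = make_encoder(), make_encoder(), make_encoder()

def encode(encoder, token_ids):
    """Encode a [batch, time] tensor of token ids; the last hidden state is the sequence encoding."""
    outputs, last_state = encoder(embed(token_ids))
    return outputs, last_state            # o^X and h^X in the paper's notation

# Toy usage: encode a tokenized question of length 4.
q_ids = tf.constant([[4, 27, 95, 3]])
o_Q, h_Q = encode(enc_question, q_ids)    # o_Q: [1, 4, 50], h_Q: [1, 50]
```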
Figure 2: DialSQL model: Boxes are RNN cells, colors indicate parameter sharing. Dashed lines denote skip connections, dashed boxes denote classifications, and black circles denote vector concatenation. Blue boxes with capital letters and numbers (X.1, X.2) denote that the embeddings of the predicted token at X.1 are passed as input to X.2. Each component in the pipeline is numbered according to execution order. <GO> is a special token representing the start of a sequence, and ST and ED denote the start and end indices of a span, respectively.

4.2 Encoding

The core of our model is a hierarchical encoder-decoder neural network that encodes the dialogue history and decodes errors and candidate choices at the end of each user turn. The input to the model is the previous system turn and the current user turn, and the output is the next system question.

Encoding Question, Column Names, and Query. Using decoupled RNNs (Enc), we encode the natural language question, column names, and query sequences in parallel and produce outputs and hidden states. $o^Q$, $o^{T_i}$, and $o^U$ denote the sequences of hidden states at each step, and $h^Q$, $h^{T_i}$, and $h^U$ denote the last hidden states of the question, column name, and query encoders, respectively. The parameters of the encoders are decoupled and only the word embedding matrix $E$ is shared.

Encoding System and User Turns. Since there is only a single candidate choice during training, we ignore the index and encode the user turn by doing an embedding lookup using the validation answer (affirmation or negation). Each element (error category, error span, and candidate choice) of the system response is encoded by doing an embedding lookup, and different elements are used as input at different layers of our model.

Encoding Dialogue History. At the end of each user turn, we first concatenate the previous error category and the current user turn encodings to generate the turn-level input. We employ an RNN to encode the dialogue history and current turn into a fixed-length vector as

$h^{D_1}_0 = h^Q$
$o^{D_1}_t, g^{D_1}_t = \mathrm{Enc}([E_c, E_a])$
$h^{D_1}_t = [\mathrm{Attn}(g^{D_1}_t, H^T), o^{D_1}_t]$

where $[\cdot]$ is vector concatenation, $E_c$ is the error category encoding, $E_a$ is the user turn encoding, $h^{D_1}_0$ is the initial hidden state, and $h^{D_1}_t$ is the current hidden state. Attn is an attention layer with a bilinear product, defined as in (Luong et al., 2015):

$\mathrm{Attn}(h, O) = \sum \mathrm{softmax}(\tanh(h W O)) * O$

where $W$ is the attention parameter.

4.3 Predicting Error Category

We predict the error category by attending over the query states using the output of the dialogue encoder as

$c_t = \tanh(\mathrm{Lin}([\mathrm{Attn}(h^{D_1}_t, O^U), h^{D_1}_t]))$
$l_t = \mathrm{softmax}(c_t \cdot E(C))$

where Lin is a linear transformation, $E(C)$ is a matrix of error category embeddings, and $l_t$ is the probability distribution over categories.

4.4 Decoding Error Span

Consider the case in which there is more than one WHERE clause in the query and each clause has an error. In this case, the model needs to keep track of previous error spans to avoid decoding the same error again. DialSQL runs another RNN to generate a new dialogue encoding to solve this problem:

$h^{D_2}_0 = h^Q$
$o^{D_2}_t, g^{D_2}_t = \mathrm{Enc}(E_c)$
$h^{D_2}_t = [\mathrm{Attn}(g^{D_2}_t, H^T), o^{D_2}_t]$

where $h^{D_2}_0$ is the initial hidden state and $h^{D_2}_t$ is the current hidden state. The start position $i$ of the error span is decoded using the following probability distribution over query tokens:

$p_i = \mathrm{softmax}(\tanh(h^{D_2}_t L_1 H^U))$

where $p_i$ is the probability of the start position being the $i$-th query token.
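Since this bilinear attention is reused throughout the model, a small NumPy sketch may help fix the shapes before we turn to the end position; the shape conventions here are our assumption, as the paper leaves them implicit.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def attn(h, O, W):
    """Attn(h, O) = sum_i softmax(tanh(h W O))_i * O_i.

    h: query vector, shape [d_h]
    O: matrix of encoder outputs, one row per step, shape [n, d_o]
    W: bilinear attention parameter, shape [d_h, d_o]
    """
    scores = np.tanh(h @ W @ O.T)   # [n] unnormalized scores, one per step
    weights = softmax(scores)       # attention distribution over the n steps
    return weights @ O              # [d_o] context vector

# Example: attend from a dialogue state over 4 column-name encodings of size 50.
rng = np.random.default_rng(0)
h, O, W = rng.normal(size=50), rng.normal(size=(4, 50)), rng.normal(size=(50, 50))
context = attn(h, O, W)             # shape (50,)
```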
The end position $j$ of the error span is predicted by conditioning on the start position:

$c_i = \sum p_i * H^U$
$\hat{p}_j = \mathrm{softmax}(\tanh([h^{D_2}_t, c_i] L_2 H^U))$

where $\hat{p}_j$ is the probability of the end position being the $j$-th query token. Conditioning on the error category localizes the span prediction problem, since each category is defined by only a small segment of the query.

4.5 Decoding Candidate Choices

Given the error category $c$ and error span $(i, j)$, DialSQL decodes a list of choices that will potentially replace the error span based on user feedback. Inspired by SQLNet (Xu et al., 2017), we describe our candidate choice decoding approach as follows.

Select column choice. We define the following scores over column names:

$h = \mathrm{Attn}(\mathrm{Lin}([o^U_{i-1}, o^U_j, E_c]), H^T)$
$s_{sel} = u^T * \tanh(\mathrm{Lin}([H^T, h]))$

where $o^U_{i-1}$ is the output vector of the query encoder preceding the start position, and $o^U_j$ is the output of the query encoder at the end position.

Aggregation choice. Conditioned on the encoding $e$ of the SELECT column, we define the following scores over the set of aggregations (MIN, MAX, COUNT, NO AGGREGATION):

$s_{agg} = v^T * \tanh(\mathrm{Lin}(\mathrm{Attn}(e, H^Q)))$

Where condition choice. We first decode the condition column name similarly to decoding the SELECT column. Given the encoding $e$ of the condition column, we define the following scores over the set of operators (=, <, >):

$s_{op} = w^T * \tanh(\mathrm{Lin}(\mathrm{Attn}(e, H^Q)))$

Next, we define the following scores over question tokens for the start and end positions of the condition value:

$s_{st} = \mathrm{Attn}(e, H^Q)$
$s_{ed} = \mathrm{Attn}([e, h_{st}], H^Q)$

where $h_{st}$ is the context vector generated by the first attention.

We denote the number of candidate choices to be decoded by $k$. We train DialSQL with $k = 1$. The list of $k > 1$ candidate choices is decoded similarly to beam search during testing. As an example, we select the $k$ column names that have the highest scores as the candidate WHERE column choices. For each column name, we then generate $k$ different operators and, from the resulting set of $k^2$ column-name and operator pairs, select the $k$ with the highest joint probability. Ideally, DialSQL should be able to learn the type of errors present in the generated query, extract precise error spans by pointing to query tokens, and, using the location of the error spans, generate a set of related choices.

5 Experimental Results and Discussion

In this section, we evaluate DialSQL on WikiSQL using several evaluation metrics and compare against prior work.

5.1 Evaluation Setup and Metrics

We measure the query generation accuracy as well as the complexity of the questions and the length of the user interactions.

Query-match accuracy. We evaluate DialSQL on WikiSQL using query-match accuracy (Zhong et al., 2017; Xu et al., 2017). Query-match accuracy is the proportion of test examples for which the generated query is exactly the same as the ground truth, except for the ordering of the WHERE clauses.

Dialogue length. We count the number of turns to analyze whether DialSQL generates any redundant validation questions.

Question complexity. We use the average number of tokens in the generated validation questions to evaluate whether DialSQL can generate simple questions without overwhelming users.

Since SQLNet and Seq2SQL are single-step models, we cannot analyze DialSQL's performance by comparing against them on the last two metrics. We overcome this issue by generating simulated dialogues using an oracle system that has access to the ground truth query.
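Before turning to the oracle-based comparison, the test-time multi-choice decoding of Section 4.5 can be sketched as follows; this is a simplified illustration (condition values are omitted and all names are hypothetical), not the authors' implementation.

```python
import numpy as np

def topk_where_choices(col_logprobs, op_logprobs_fn, k=5):
    """Beam-style decoding of k WHERE (column, operator) candidates.

    col_logprobs: [num_columns] log-probabilities over condition columns
    op_logprobs_fn: maps a column index to [num_ops] log-probabilities over
                    operators (=, <, >), conditioned on that column's encoding
    """
    top_cols = np.argsort(col_logprobs)[::-1][:k]              # k best columns
    pairs = []
    for c in top_cols:
        op_lp = op_logprobs_fn(c)
        for o in np.argsort(op_lp)[::-1][:k]:                  # k best operators per column
            pairs.append((col_logprobs[c] + op_lp[o], c, o))   # joint log-probability
    pairs.sort(reverse=True)                                   # k^2 pairs, keep the k best
    return [(c, o) for _, c, o in pairs[:k]]

# Example with 6 columns and 3 operators:
rng = np.random.default_rng(1)
col_lp = np.log(rng.dirichlet(np.ones(6)))
ops_lp = np.log(rng.dirichlet(np.ones(3), size=6))
print(topk_where_choices(col_lp, lambda c: ops_lp[c], k=3))
```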
The oracle system compares the SELECT and AGGREGATION clauses of the predicted query and the ground truth, and asks a validation question if they differ. For each pair of WHERE clauses from the generated query and the ground truth, the system counts the number of matching segments, namely COLUMN, OP, and VALUE. The system takes the pairs with the highest matching scores and asks a validation question until one of the queries has no remaining WHERE clause. If both queries have no remaining clauses, the dialogue terminates. Otherwise, the system asks a validate where added (validate where removed) question when the generated query (ground truth query) has more remaining clauses. We call this strategy OracleMatching (OM). OM ensures that the generated dialogues have the minimum number of turns possible.

5.2 Training Details

We implement DialSQL in TensorFlow (Abadi et al., 2016) and train it using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 1e-4. We use an embedding size of 300, an RNN state size of 50, and a batch size of 64. The embeddings are initialized from pretrained GloVe embeddings (Pennington et al., 2014) and fine-tuned during training. We use bidirectional RNN encoders with two layers for questions, column names, and queries. The Stanford CoreNLP tokenizer (Manning et al., 2014) is used to parse questions and column names. The parameters of each layer are decoupled from each other and only the embedding matrix is shared. The total number of turns is limited to 10, and 10 simulated dialogues are generated for each example in the WikiSQL training set. The SQLNet and Seq2SQL models are trained on WikiSQL using the existing implementations provided by their authors. Our code is available at https://github.com/izzeddingur/DialSQL.

5.3 Evaluation on the WikiSQL Dataset

Table 3 presents the query-match accuracy results. We observe that the DialSQL model with 5 candidate choices improves the performance of both SQLNet and Seq2SQL, by 7.7% and 9.4%, respectively. The higher gain on the Seq2SQL model can be attributed to the fact that the single-step Seq2SQL makes more errors: DialSQL has more room for improvement. We also show the results of DialSQL when users are allowed to revisit their previous answers, and with more informative user responses, where the model only validates the error span and the user directly gives the correct choice. In this scenario, the performance further improves on both the development and test sets. This suggests that decoding candidate choices is a hard task and has room for improvement. For the rest of the evaluation, we present results with multi-choice questions.

Model                        QM-Dev   QM-Test
Seq2SQL (Xu et al., 2017)    53.5%    51.6%
SQLNet (Xu et al., 2017)     63.2%    61.3%
BiAttn (Guo and Gao, 2018)   64.1%    62.5%
Seq2SQL - DialSQL            62.2%    61.0%
SQLNet - DialSQL             70.9%    69.0%
Seq2SQL - DialSQL+           68.9%    67.8%
SQLNet - DialSQL+            74.8%    73.9%
Seq2SQL - DialSQL*           84.4%    84.0%
SQLNet - DialSQL*            82.9%    83.7%
Table 3: Query-match accuracy on the WikiSQL development and test sets. The first two scores of our model are generated using 5 candidate choices, (+) denotes a variant where users can revisit their previous answers, and (*) denotes a variant with more informative user responses.

5.4 Query Complexity and Dialogue Length

In Table 4, we compare DialSQL to the OM strategy on the query complexity (QC) and dialogue length (DL) metrics. DialSQL and SQLNet-OM have very similar query complexity scores, showing that DialSQL produces simple questions. The number of questions DialSQL asks is around 3 for both query generation models.
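The OracleMatching reference used in these comparisons can be sketched as follows; the sketch reduces clause matching to counting shared segments and is only an approximation of the procedure described above, with an assumed query representation.

```python
def segment_matches(c1, c2):
    """Number of matching segments (COLUMN, OP, VALUE) between two WHERE clauses."""
    return sum(a == b for a, b in zip(c1, c2))

def oracle_matching_turns(pred, gold):
    """Count the validation questions an oracle needs to reconcile pred with gold.

    pred / gold: dicts with keys 'sel', 'agg', and 'where' (a list of
    (column, op, value) triples). A simplified sketch of the OM strategy,
    not the exact implementation.
    """
    turns = int(pred["sel"] != gold["sel"])          # validate_sel
    turns += int(pred["agg"] != gold["agg"])         # validate_agg
    pred_where, gold_where = list(pred["where"]), list(gold["where"])
    while pred_where and gold_where:
        # Greedily pair the clauses with the most matching segments.
        p, g = max(((pc, gc) for pc in pred_where for gc in gold_where),
                   key=lambda pair: segment_matches(*pair))
        if segment_matches(p, g) < 3:                # some segment differs
            turns += 1                               # validate_where_changed
        pred_where.remove(p)
        gold_where.remove(g)
    turns += len(pred_where)                         # leftover predicted clauses: validate_where_added
    turns += len(gold_where)                         # leftover gold clauses: validate_where_removed
    return turns
```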
Even though SQLNet-OM dialogues have much shorter dialogue lengths, we attribute this to the fact that 61.3% of those dialogues have empty interactions, since in those cases OM matches every segment of the generated query with the ground truth. The average number of turns in dialogues with non-empty interactions, on the other hand, is 3.10, which is close to DialSQL.

Model                QC Dev        DL Dev        QC Test       DL Test
Seq2SQL - OM         3.47 (2.25)   0.84 (1.77)   3.51 (2.41)   0.88 (1.80)
SQLNet - OM          3.37 (2.63)   0.61 (1.45)   3.34 (2.51)   0.63 (1.49)
Seq2SQL - DialSQL    3.53 (1.79)   5.54 (2.32)   3.55 (1.81)   5.55 (2.34)
SQLNet - DialSQL     3.60 (1.86)   5.57 (2.34)   3.17 (1.55)   4.77 (1.57)
Table 4: Average query complexity and dialogue length on the WikiSQL datasets (values in parentheses are standard deviations). Metrics for the SQLNet and Seq2SQL models are generated by the OM strategy as described earlier.

5.5 A Varying Number of Choices

In Figure 3, we plot the accuracy of DialSQL on WikiSQL with a varying number of choices at each turn. We train DialSQL once and generate a different number of choices at each turn by offering the top-k candidates during testing. We observe that offering even a single candidate improves the performance of SQLNet remarkably, by 1.9% and 2.5% on the development and test sets, respectively. As the number of choices increases, the performance of DialSQL improves in all cases; the SQLNet-DialSQL model in particular shows a larger accuracy gain. We increased the number of choices to 10 and observed no notable further improvement on the development set, which suggests that 5 is a good value for the number of choices.

Figure 3: DialSQL performance on WikiSQL with a varying number of choices at each turn.

5.6 Error Distribution

We examine the error distributions of DialSQL and SQLNet. In DialSQL, almost all the errors are caused by validate sel and validate where changed, while in SQLNet validate where changed is the major cause of error and the other errors are distributed uniformly.

5.7 Human Evaluation

We extend our evaluation of DialSQL with a human subject experiment, so that real users interact with the system instead of our simulated user. We randomly pick 100 questions from the WikiSQL development set and run SQLNet to generate initial candidate queries. Next, we run DialSQL using these candidate queries to generate 100 dialogues, each of which is evaluated by 3 different users. At each turn, we show users the headers of the corresponding table, the original question, the system response, and the list of candidate choices to pick from. For each error category, we generate 5 choices, except for the validate where added category, for which we only show 2 choices (YES or NO). We also add an additional choice of None of the above so that users can keep the previous prediction unchanged. At the end of each turn, we ask users to give an overall score between 1 and 3 to evaluate whether they had a successful interaction with DialSQL for the current turn. On average, the length of the generated dialogues is 5.6 turns.

Model                          Accuracy
SQLNet                         58
DialSQL w/ User Simulation     75
DialSQL w/ Real Users          65 (1.4)
Table 5: QM accuracies of SQLNet, DialSQL with user simulation, and DialSQL with real users (the value in parentheses is a standard deviation).

Figure 4: Distribution of user preference for DialSQL ranking (scaled to 1-6, with 6 being None of the above).

In Table 5, we compare the performance of SQLNet, DialSQL with user simulation, and DialSQL with real users using the QM metric.
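For reference, the query-match (QM) metric used throughout this evaluation is exact match up to WHERE-clause order; a minimal sketch follows, with an assumed query representation.

```python
def query_match(pred, gold):
    """Exact match between two queries, ignoring the order of WHERE clauses.

    Each query is a dict with 'sel', 'agg', and 'where' (a list of
    (column, op, value) triples); this representation is an assumption.
    """
    return (pred["sel"] == gold["sel"]
            and pred["agg"] == gold["agg"]
            and sorted(map(tuple, pred["where"]), key=repr)
                == sorted(map(tuple, gold["where"]), key=repr))

def qm_accuracy(predictions, references):
    pairs = list(zip(predictions, references))
    return sum(query_match(p, g) for p, g in pairs) / len(pairs)
```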
We present the average performance across 3 different users with the standard deviation estimated over all dialogues. We observe that when real users interact with our system, the overall performance of the generated queries are better than SQLNet model showing that DialSQL can improve the performance of a strong NLIDB system in a real setting. However, there is still a large room for improvement between simulated dialogues and real users. In Figure 4, we present the correlation between DialSQL ranking of the candidate choices and user preferences. We observe that, user answers and DialSQL rankings are positively correlated; most of the time users prefer the top-1 choice. Interestingly, 15% of the user answers is None of the above. This commonly happens in the scenario where DialSQL response asks to replace a correct condition and users prefer to keep the original prediction unchanged. Another scenario where users commonly select None of the above is when table headers without the content remain insufficient for users to correctly disambiguate condition values from questions. We also compute the Mean Reciprocal Rank (MMR) for each user to measure the correlation between real users and DialSQL. Average MMR is 0.69 with standard deviation of 0.004 which also shows that users generally prefer the choices ranked higher by DialSQL. The overall score of each turn also suggests that users had a reasonable conversation with DialSQL. The average score is 2.86 with standard deviation of 0.14, showing users can understand DialSQL responses and can pick a choice confidently. 6 Conclusion We demonstrated the efficacy of the DialSQL, improving the state of the art accuracy from 62.5% to 69.0% on the WikiSQL dataset. DialSQL successfully extracts error spans from queries and offers several alternatives to users. It generates simple questions over a small number of turns without overwhelming users. The model learns from only simulated data which makes it easy to adapt to new domains. We further investigate the usability of DialSQL in a real life setting by conducting human evaluations. Our results suggest that the accuracy of the generated queries can be improved via real user feedback. Acknowledgements The authors would like to thank the anonymous reviewers for their thoughtful comments. This research was sponsored in part by the Army Research Laboratory under cooperative agreements W911NF09-2-0053 and NSF 1528175. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notice herein. 1348 References Martin Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G. Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2016. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265– 283. Ion Androutsopoulos, Graeme D Ritchie, and Peter Thanisch. 1995. Natural language interfaces to databases–an introduction. Natural language engineering, 1(1):29–81. Layla El Asri, Jing He, and Kaheer Suleman. 2016. 
A sequence-to-sequence model for user simulation in spoken dialogue systems. CoRR, abs/1607.00070. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of Conference on Empirical Methods in Natural Language Processing. Kyunghyun Cho, Bart van Merrienboer, C¸ aglar G¨ulc¸ehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. CoRR, abs/1406.1078. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. T. Guo and H. Gao. 2018. Bidirectional Attention for SQL Generation. ArXiv e-prints. Izzeddin Gur, Daniel Hewlett, Llion Jones, and Alexandre Lacoste. 2017. Accurate supervised and semi-supervised machine reading for long documents. In Proceedings of Conference on Empirical Methods in Natural Language Processing. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer. 2017. Learning a neural semantic parser from user feedback. CoRR, abs/1704.08760. Rohit J Kate, Yuk Wah Wong, and Raymond J Mooney. 2005. Learning to transform natural to formal languages. In Proceedings of the AAAI Conference on Artificial Intelligence. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. F. Li and H. V. Jagadish. 2014. Constructing an interactive natural language interface for relational databases. Proc. VLDB Endow., 8(1):73–84. Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc’Aurelio Ranzato, and Jason Weston. 2016. Dialogue learning with human-in-the-loop. CoRR, abs/1611.09823. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In EMNLP. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In In EMNLP. Ana-Maria Popescu, Oren Etzioni, and Henry Kautz. 2003. Towards a theory of natural language interfaces to databases. In Proceedings of the 8th international conference on Intelligent user interfaces, pages 149–157. ACM. A. Saha, V. Pahuja, M. M. Khapra, K. Sankaranarayanan, and S. Chandar. 2018. Complex Sequential Question Answering: Towards Learning to Converse Over Linked Question Answer Pairs with a Knowledge Graph. ArXiv e-prints. Jost Schatzmann, Blaise Thomson, Karl Weilhammer, Hui Ye, and Steve Young. 2007. Agenda-based user simulation for bootstrapping a POMDP dialogue system. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers, pages 149– 152, Rochester, New York. Association for Computational Linguistics. Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2015. Hierarchical neural network generative models for movie dialogues. CoRR, abs/1507.04808. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C. Courville, and Yoshua Bengio. 2016. 
A hierarchical latent variable encoder-decoder model for generating dialogues. CoRR, abs/1605.06069. Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and JianYun Nie. 2015. A hierarchical recurrent encoderdecoder for generative context-aware query suggestion. CoRR, abs/1507.02221. Yu Su, Huan Sun, Brian Sadler, Mudhakar Srivatsa, Izzeddin Gur, Zenghui Yan, and Xifeng Yan. 2016. On generating characteristic-rich question sets for qa evaluation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 562–572. 1349 Huan Sun, Hao Ma, Xiaodong He, Wen-tau Yih, Yu Su, and Xifeng Yan. 2016. Table cell search for question answering. In Proceedings of the International Conference on World Wide Web. William A Woods. 1973. Progress in natural language understanding: an application to lunar geology. In Proceedings of the American Federation of Information Processing Societies Conference. Xiaojun Xu, Chang Liu, and Dawn Song. 2017. Sqlnet: Generating structured queries from natural language without reinforcement learning. CoRR, abs/1711.04436. Navid Yaghmazadeh, Yuepeng Wang, Isil Dillig, and Thomas Dillig. 2017. Sqlizer: Query synthesis from natural language. Proc. ACM Program. Lang., 1(OOPSLA):63:1–63:26. Semih Yavuz, Izzeddin Gur, Yu Su, Mudhakar Srivatsa, and Xifeng Yan. 2016. Improving semantic parsing via answer type inference. In Proceedings of Conference on Empirical Methods in Natural Language Processing. Scott Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. John M Zelle and Mooney Ray. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of the AAAI Conference on Artificial Intelligence. Luke Zettlemoyer and Michael Collins. 2007. Online learning of relaxed ccg grammars for parsing to logical form. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, pages 658–666. Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. CoRR, abs/1709.00103.
2018
124
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1350–1361 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1350 Conversations Gone Awry: Detecting Early Signs of Conversational Failure Justine Zhang and Jonathan P. Chang and Cristian Danescu-Niculescu-Mizil∗ Cornell University {jz727,jpc362}@cornell.edu, [email protected] Lucas Dixon and Nithum Thain Jigsaw {ldixon,nthain}@google.com Yiqing Hua Cornell University [email protected] Dario Taraborelli Wikimedia Foundation [email protected] Abstract One of the main challenges online social systems face is the prevalence of antisocial behavior, such as harassment and personal attacks. In this work, we introduce the task of predicting from the very start of a conversation whether it will get out of hand. As opposed to detecting undesirable behavior after the fact, this task aims to enable early, actionable prediction at a time when the conversation might still be salvaged. To this end, we develop a framework for capturing pragmatic devices—such as politeness strategies and rhetorical prompts—used to start a conversation, and analyze their relation to its future trajectory. Applying this framework in a controlled setting, we demonstrate the feasibility of detecting early warning signs of antisocial behavior in online discussions. 1 Introduction “Or vedi l’anime di color cui vinse l’ira.”1 – Dante Alighieri, Divina Commedia, Inferno Online conversations have a reputation for going awry (Hinds and Mortensen, 2005; Gheitasy et al., 2015): antisocial behavior (Shepherd et al., 2015) or simple misunderstandings (Churchill and Bly, 2000; Yamashita and Ishida, 2006) hamper the efforts of even the best intentioned collaborators. Prior computational work has focused on characterizing and detecting content exhibiting antisocial online behavior: trolling (Cheng et al., 2015, 2017), hate speech (Warner and Hirschberg, 2012; Davidson et al., 2017), harassment (Yin et al., 2009), personal attacks (Wulczyn et al., ∗Corresponding senior author. 1“Now you see the souls of those whom anger overcame.” 2017) or, more generally, toxicity (Chandrasekharan et al., 2017; Pavlopoulos et al., 2017b). Our goal is crucially different: instead of identifying antisocial comments after the fact, we aim to detect warning signs indicating that a civil conversation is at risk of derailing into such undesirable behaviors. Such warning signs could provide potentially actionable knowledge at a time when the conversation is still salvageable. As a motivating example, consider the pair of conversations in Figure 1. Both exchanges took place in the context of the Wikipedia discussion page for the article on the Dyatlov Pass Incident, and both show (ostensibly) civil disagreement between the participants. However, only one of these conversations will eventually turn awry and devolve into a personal attack (“Wow, you’re coming off as a total d**k. [...] What the hell is wrong with you?”), while the other will remain civil. As humans, we have some intuition about which conversation is more likely to derail.2 We may note the repeated, direct questioning with which A1 opens the exchange, and that A2 replies with yet another question. In contrast, B1’s softer, hedged approach (“it seems”, “I don’t think”) appears to invite an exchange of ideas, and B2 actually addresses the question instead of stonewalling. 
Could we endow artificial systems with such intuitions about the future trajectory of conversations? In this work we aim to computationally capture linguistic cues that predict a conversation’s future health. Most existing conversation modeling approaches aim to detect characteristics of an observed discussion or predict the outcome after the discussion concludes—e.g., whether it involves a present dispute (Allen et al., 2014; Wang and Cardie, 2014) or contributes to the even2In fact, humans achieve an accuracy of 72% on this balanced task, showing that it is feasible, but far from trivial. 1351 A1: Why there’s no mention of it here? Namely, an altercation with a foreign intelligence group? True, by the standards of sources some require it wouln’t even come close, not to mention having some really weak points, but it doesn’t mean that it doesn’t exist. A2: So what you’re saying is we should put a bad source in the article because it exists? B1: Is the St. Petersberg Times considered a reliable source by wikipedia? It seems that the bulk of this article is coming from that one article, which speculates about missile launches and UFOs. I’m going to go through and try and find corroborating sources and maybe do a rewrite of the article. I don’t think this article should rely on one so-so source. B2: I would assume that it’s as reliable as any other mainstream news source. Figure 1: Two examples of initial exchanges from conversations concerning disagreements between editors working on the Wikipedia article about the Dyatlov Pass Incident. Only one of the conversations will eventually turn awry, with an interlocutor launching into a personal attack. tual solution of a problem (Niculae and DanescuNiculescu-Mizil, 2016). In contrast, for this new task we need to discover interactional signals of the future trajectory of an ongoing conversation. We make a first approach to this problem by analyzing the role of politeness (or lack thereof) in keeping conversations on track. Prior work has shown that politeness can help shape the course of offline (Clark, 1979; Clark and Schunk, 1980), as well as online interactions (Burke and Kraut, 2008), through mechanisms such as softening the perceived force of a message (Fraser, 1980), acting as a buffer between conflicting interlocutor goals (Brown and Levinson, 1987), and enabling all parties to save face (Goffman, 1955). This suggests the potential of politeness to serve as an indicator of whether a conversation will sustain its initial civility or eventually derail, and motivates its consideration in the present work. Recent studies have computationally operationalized prior formulations of politeness by extracting linguistic cues that reflect politeness strategies (Danescu-Niculescu-Mizil et al., 2013; Aubakirova and Bansal, 2016). Such research has additionally tied politeness to social factors such as individual status (Danescu-NiculescuMizil et al., 2012; Krishnan and Eisenstein, 2015), and the success of requests (Althoff et al., 2014) or of collaborative projects (Ortu et al., 2015). However, to the best of our knowledge, this is the first computational investigation of the relation between politeness strategies and the future trajectory of the conversations in which they are deployed. 
Furthermore, we generalize beyond predefined politeness strategies by using an unsupervised method to discover additional rhetorical prompts used to initiate different types of conversations that may be specific to online collaborative settings, such as coordinating work (Kittur and Kraut, 2008) or conducting factual checks. We explore the role of such pragmatic and rhetorical devices in foretelling a particularly perplexing type of conversational failure: when participants engaged in previously civil discussion start to attack each other. This type of derailment “from within” is arguably more disruptive than other forms of antisocial behavior, such as vandalism or trolling, which the interlocutors have less control over or can choose to ignore. We study this phenomenon in a new dataset of Wikipedia talk page discussions, which we compile through a combination of machine learning and crowdsourced filtering. The dataset consists of conversations which begin with ostensibly civil comments, and either remain healthy or derail into personal attacks. Starting from this data, we construct a setting that mitigates effects which may trivialize the task. In particular, some topical contexts (such as politics and religion) are naturally more susceptible to antisocial behavior (Kittur et al., 2009; Cheng et al., 2015). We employ techniques from causal inference (Rosenbaum, 2010) to establish a controlled framework that focuses our study on topic-agnostic linguistic cues. In this controlled setting, we find that pragmatic cues extracted from the very first exchange in a conversation (i.e., the first comment-reply pair) can indeed provide some signal of whether the conversation will subsequently go awry. For example, conversations prompted by hedged remarks sustain their initial civility more so than those prompted by forceful questions, or by direct language addressing the other interlocutor. In summary, our main contributions are: • We articulate the new task of detecting early on whether a conversation will derail into personal attacks; • We devise a controlled setting and build a labeled dataset to study this phenomenon; 1352 • We investigate how politeness strategies and other rhetorical devices are tied to the future trajectory of a conversation. More broadly, we show the feasibility of automatically detecting warning signs of future misbehavior in collaborative interactions. By providing a labeled dataset together with basic methodology and several baselines, we open the door to further work on understanding factors which may derail or sustain healthy online conversations. To facilitate such future explorations, we distrubute the data and code as part of the Cornell Conversational Analysis Toolkit.3 2 Further Related Work Antisocial behavior. Prior work has studied a wide range of disruptive interactions in various online platforms like Reddit and Wikipedia, examining behaviors like aggression (Kayany, 1998), harassment (Chatzakou et al., 2017; Vitak et al., 2017), and bullying (Akbulut et al., 2010; Kwak et al., 2015; Singh et al., 2017), as well as their impact on aspects of engagement like user retention (Collier and Bear, 2012; Wikimedia Support and Safety Team, 2015) or discussion quality (Arazy et al., 2013). 
Several studies have sought to develop machine learning techniques to detect signatures of online toxicity, such as personal insults (Yin et al., 2009), harassment (Sood et al., 2012) and abusive language (Nobata et al., 2016; Gamb¨ack and Sikdar, 2017; Pavlopoulos et al., 2017a; Wulczyn et al., 2017). These works focus on detecting toxic behavior after it has already occurred; a notable exception is Cheng et al. (2017), which predicts future community enforcement against users in news-based discussions. Our work similarly aims to understand future antisocial behavior; however, our focus is on studying the trajectory of a conversation rather than the behavior of individuals across disparate discussions. Discourse analysis. Our present study builds on a large body of prior work in computationally modeling discourse. Both unsupervised (Ritter et al., 2010) and supervised (Zhang et al., 2017a) approaches have been used to categorize behavioral patterns on the basis of the language that ensues in a conversation, in the particular realm of online discussions. Models of conversational behavior have also been used to predict conversation outcomes, such as betrayal in games (Niculae et al., 3http://convokit.infosci.cornell.edu 2015), and success in team problem solving settings (Fu et al., 2017) or in persuading others (Tan et al., 2016; Zhang et al., 2016). While we are inspired by the techniques employed in these approaches, our work is concerned with predicting the future trajectory of an ongoing conversation as opposed to a post-hoc outcome. In this sense, we build on prior work in modeling conversation trajectory, which has largely considered structural aspects of the conversation (Kumar et al., 2010; Backstrom et al., 2013). We complement these structural models by seeking to extract potential signals of future outcomes from the linguistic discourse within the conversation. 3 Finding Conversations Gone Awry We develop our framework for understanding linguistic markers of conversational trajectories in the context of Wikipedia’s talk page discussions— public forums in which contributors convene to deliberate on editing matters such as evaluating the quality of an article and reviewing the compliance of contributions with community guidelines. The dynamic of conversational derailment is particularly intriguing and consequential in this setting by virtue of its collaborative, goal-oriented nature. In contrast to unstructured commenting forums, cases where one collaborator turns on another over the course of an initially civil exchange constitute perplexing pathologies. In turn, these toxic attacks are especially disruptive in Wikipedia since they undermine the social fabric of the community as well as the ability of editors to contribute (Henner and Sefidari, 2016). To approach this domain we reconstruct a complete view of the conversational process in the edit history of English Wikipedia by translating sequences of revisions of each talk page into structured conversations. This yields roughly 50 million conversations across 16 million talk pages. Roughly one percent of Wikipedia comments are estimated to exhibit antisocial behavior (Wulczyn et al., 2017). This illustrates a challenge for studying conversational failure: one has to sift through many conversations in order to find even a small set of examples. 
To avoid such a prohibitively exhaustive analysis, we first use a machine learning classifier to identify candidate conversations that are likely to contain a toxic contribution, and then use crowdsourcing to vet the resulting labels and construct our controlled dataset. 1353 Job 1: Ends in personal attack. We show three annotators a conversation and ask them to determine if its last comment is a personal attack toward someone else in the conversation. Annotators Conversations Agreement 367 4,022 67.8% Job 2: Civil start. We split conversations into snippets of three consecutive comments. We ask three annotators to determine whether any of the comments in a snippet is toxic. Annotators Conversations Snippets Agreement 247 1,252 2,181 87.5% Table 1: Descriptions of crowdsourcing jobs, with relevant statistics. More details in Appendix A. Candidate selection. Our goal is to analyze how the start of a civil conversation is tied to its potential future derailment into personal attacks. Thus, we only consider conversations that start out as ostensibly civil, i.e., where at least the first exchange does not exhibit any toxic behavior,4 and that continue beyond this first exchange. To focus on the especially perplexing cases when the attacks come from within, we seek examples where the attack is initiated by one of the two participants in the initial exchange. To select candidate conversations to include in our collection, we use the toxicity classifier provided by the Perspective API,5 which is trained on Wikipedia talk page comments that have been annotated by crowdworkers (Wulczyn et al., 2016). This provides a toxicity score t for all comments in our dataset, which we use to preselect two sets of conversations: (a) candidate conversations that are civil throughout, i.e., conversations in which all comments (including the initial exchange) are not labeled as toxic (t < 0.4); and (b) candidate conversations that turn toxic after the first (civil) exchange, i.e., conversations in which the N-th comment (N > 2) is labeled toxic (t ≥0.6), but all the preceding comments are not (t < 0.4). Crowdsourced filtering. Starting from these candidate sets, we use crowdsourcing to vet each conversation and select a subset that are perceived by humans to either stay civil throughout (“ontrack” conversations), or start civil but end with a personal attack (“awry-turning” conversations). To inform the design of this human-filtering process and to check its effectiveness, we start from a seed set of 232 conversations manually verified by the authors to end in personal attacks (more details about the selection of the seed set and its role in the crowd-sourcing process can be found in Appendix A). We take particular care to not over-constrain crowdworker interpretations of 4For the sake of generality, in this work we focus on this most basic conversational unit: the first comment-reply pair starting a conversation. 5https://www.perspectiveapi.com/ what personal attacks may be, and to separate toxicity from civil disagreement, which is recognized as a key aspect of effective collaborations (Coser, 1956; De Dreu and Weingart, 2003). We design and deploy two filtering jobs using the CrowdFlower platform, summarized in Table 1 and detailed in Appendix A. Job 1 is designed to select conversations that contain a “rude, insulting, or disrespectful” comment towards another user in the conversation—i.e., a personal attack. 
In contrast to prior work labeling antisocial comments in isolation (Sood et al., 2012; Wulczyn et al., 2017), annotators are asked to label personal attacks in the context of the conversations in which they occur, since antisocial behavior can often be contextdependent (Cheng et al., 2017). In fact, in order to ensure that the crowdworkers read the entire conversation, we also ask them to indicate who is the target of the attack. We apply this task to the set of candidate awry-turning conversations, selecting the 14% which all three annotators perceived as ending in a personal attack.6 Job 2 is designed to filter out conversations that do not actually start out as civil. We run this job to ensure that the awry-turning conversations are civil up to the point of the attack—i.e., they turn awry—discarding 5% of the candidates that passed Job 1. We also use it to verify that the candidate on-track conversations are indeed civil throughout, discarding 1% of the respective candidates. In both cases we filter out conversations in which three annotators could identify at least one comment that is “rude, insulting, or disrespectful”. Controlled setting. Finally, we need to construct a setting that affords for meaningful comparison between conversations that derail and those that stay on track, and that accounts for trivial topical confounds (Kittur et al., 2009; Cheng et al., 2015). We mitigate topical confounds using matching, a technique developed for causal inference in observational studies (Rubin, 2007). Specifically, start6We opted to use unanimity in this task to account for the highly subjective nature of the phenomenon. 1354 ing from our human-vetted collection of conversations, we pair each awry-turning conversation, with an on-track conversation, such that both took place on the same talk page. If we find multiple such pairs, we only keep the one in which the paired conversations take place closest in time, to tighten the control for topic. Conversations that cannot be paired are discarded. This procedure yields a total of 1,270 paired awry-turning and on-track conversations (including our initial seed set), spanning 582 distinct talk pages (averaging 1.1 pairs per page, maximum 8) and 1,876 (overlapping) topical categories. The average length of a conversation is 4.6 comments. 4 Capturing Pragmatic Devices We now describe our framework for capturing linguistic cues that might inform a conversation’s future trajectory. Crucially, given our focus on conversations that start seemingly civil, we do not expect overtly hostile language—such as insults (Yin et al., 2009)—to be informative. Instead, we seek to identify pragmatic markers within the initial exchange of a conversation that might serve to reveal or exacerbate underlying tensions that eventually come to the fore, or conversely suggest sustainable civility. In particular, in this work we explore how politeness strategies and rhetorical prompts reflect the future health of a conversation. Politeness strategies. Politeness can reflect a-priori good will and help navigate potentially face-threatening acts (Goffman, 1955; Lakoff, 1973), and also offers hints to the underlying intentions of the interlocutors (Fraser, 1980). Hence, we may naturally expect certain politeness strategies to signal that a conversation is likely to stay on track, while others might signal derailment. In particular, we consider a set of pragmatic devices signaling politeness drawn from Brown and Levinson (1987). 
These linguistic features reflect two overarching types of politeness. Positive politeness strategies encourage social connection and rapport, perhaps serving to maintain cohesion throughout a conversation; such strategies include gratitude (“thanks for your help”), greetings (“hey, how is your day so far”) and use of “please”, both at the start (“Please find sources for your edit...”) and in the middle (“Could you please help with...?”) of a sentence. Negative politeness strategies serve to dampen an interlocutor’s imposition on an addressee, often through conveying indirectness or uncertainty on the part of the commenter. Both commenters in example B (Fig. 1) employ one such strategy, hedging, perhaps seeking to soften an impending disagreement about a source’s reliability (“I don’t think...”, “I would assume...”). We also consider markers of impolite behavior, such as the use of direct questions (“Why’s there no mention of it?’) and sentenceinitial second person pronouns (“Your sources don’t matter...”), which may serve as forcefulsounding contrasts to negative politeness markers. Following Danescu-Niculescu-Mizil et al. (2013), we extract such strategies by pattern matching on the dependency parses of comments. Types of conversation prompts. To complement our pre-defined set of politeness strategies, we seek to capture domain-specific rhetorical patterns used to initiate conversations. For instance, in a collaborative setting, we may expect conversations that start with an invitation for working together to signal less tension between the participants than those that start with statements of dispute. We discover types of such conversation prompts in an unsupervised fashion by extending a framework used to infer the rhetorical role of questions in (offline) political debates (Zhang et al., 2017b) to more generally extract the rhetorical functions of comments. The procedure follows the intuition that the rhetorical role of a comment is reflected in the type of replies it is likely to elicit. As such, comments which tend to trigger similar replies constitute a particular type of prompt. To implement this intuition, we derive two different low-rank representations of the common lexical phrasings contained in comments (agnostic to the particular topical content discussed), automatically extracted as recurring sets of arcs in the dependency parses of comments. First, we derive reply-vectors of phrasings, which reflect their propensities to co-occur. In particular, we perform singular value decomposition on a termdocument matrix R of phrasings and replies as R ≈ˆR = URSV T R , where rows of UR are lowrank reply-vectors for each phrasing. Next, we derive prompt-vectors for the phrasings, which reflect similarities in the subsequent replies that a phrasing prompts. We construct a prompt-reply matrix P = (pij) where pij = 1 if phrasing j occurred in a reply to a comment containing phrasing i. We project P into the same space as UR by solving for ˆP in P = ˆPSV T R as 1355 Prompt Type Description Examples Factual check Statements about article content, pertaining to or The terms are used interchangeably in the US. contending issues like factual accuracy. The census is not talking about families here. Moderation Rebukes or disputes concerning moderation decisions If you continue, you may be blocked from editing. such as blocks and reversions. He’s accused me of being a troll. Coordination Requests, questions, and statements of intent It’s a long list so I could do with your help. 
pertaining to collaboratively editing an article. Let me know if you agree with this and I’ll go ahead [...] Casual remark Casual, highly conversational aside-remarks. What’s with this flag image? I’m surprised there wasn’t an article before. Action statement Requests, statements, and explanations about Please consider improving the article to address the issues [...] various editing actions. The page was deleted as self-promotion. Opinion Statements seeking or expressing opinions about I think that it should be the other way around. editing challenges and decisions. This article seems to have a lot of bias. Table 2: Prompt types automatically extracted from talk page conversations, with interpretations and examples from the data. Bolded text indicate common prompt phrasings extracted by the framework. Further examples are shown in Appendix B, Table 4. ˆP = PVRS−1. Each row of ˆP is then a promptvector of a phrasing, such that the prompt-vector for phrasing i is close to the reply-vector for phrasing j if comments with phrasing i tend to prompt replies with phrasing j. Clustering the rows of ˆP then yields k conversational prompt types that are unified by their similarity in the space of replies. To infer the prompt type of a new comment, we represent the comment as an average of the representations of its constituent phrasings (i.e., rows of ˆP) and assign the resultant vector to a cluster.7 To determine the prompt types of comments in our dataset, we first apply the above procedure to derive a set of prompt types from a disjoint (unlabeled) corpus of Wikipedia talk page conversations (Danescu-Niculescu-Mizil et al., 2012). After initial examination of the framework’s output on this external data, we chose to extract k = 6 prompt types, shown in Table 2 along with our interpretations.8 These prompts represent signatures of conversation-starters spanning a wide range of topics and contexts which reflect core elements of Wikipedia, such as moderation disputes and coordination (Kittur et al., 2007; Kittur and Kraut, 2008). We assign each comment in our present dataset to one of these types.9 7We scale rows of UR and ˆP to unit norm. We assign comments whose vector representation has (ℓ2) distance ≥1 to all cluster centroids to an extra, infrequently-occurring null type which we ignore in subsequent analyses. 8We experimented with more prompt types as well, finding that while the methodology recovered finer-grained types, and obtained qualitatively similar results and prediction accuracies as described in Sections 5 and 6, the assignment of comments to types was relatively sparse due to the small data size, resulting in a loss of statistical power. 9While the particular prompt types we discover are spe5 Analysis We are now equipped to computationally explore how the pragmatic devices used to start a conversation can signal its future health. Concretely, to quantify the relative propensity of a linguistic marker to occur at the start of awry-turning versus on-track conversations, we compute the logodds ratio of the marker occurring in the initial exchange—i.e., in the first or second comments— of awry-turning conversations, compared to initial exchanges in the on-track setting. These quantities are depicted in Figure 2A.10 Focusing on the first comment (represented as ♦s), we find a rough correspondence between linguistic directness and the likelihood of future personal attacks. 
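The log-odds quantities summarized in Figure 2 can be computed directly from marker counts over the 1,270 conversation pairs; a minimal sketch follows (the additive smoothing constant is our assumption, since the text does not specify one):

```python
import numpy as np

def log_odds_ratio(awry_has_marker, ontrack_has_marker, alpha=0.5):
    """Log-odds ratio of a binary marker appearing in the initial exchange of
    awry-turning vs. on-track conversations.

    awry_has_marker / ontrack_has_marker: boolean arrays, one entry per conversation.
    alpha: additive smoothing to avoid zero counts (an assumption).
    """
    a, n_a = awry_has_marker.sum() + alpha, len(awry_has_marker) + 2 * alpha
    o, n_o = ontrack_has_marker.sum() + alpha, len(ontrack_has_marker) + 2 * alpha
    odds_awry = (a / n_a) / (1 - a / n_a)
    odds_ontrack = (o / n_o) / (1 - o / n_o)
    return np.log(odds_awry / odds_ontrack)

# Example: a marker present in 300 of 1270 awry-turning and 200 of 1270 on-track openings.
awry = np.array([True] * 300 + [False] * 970)
ontrack = np.array([True] * 200 + [False] * 1070)
print(log_odds_ratio(awry, ontrack))   # positive: the marker leans toward derailment
```

Positive values correspond to markers that are relatively more common in the openings of awry-turning conversations, matching the orientation of Figure 2.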
In particular, comments which contain direct questions, or exhibit sentenceinitial you (i.e., “2nd person start”), tend to start awry-turning conversations significantly more often than ones that stay on track (both p < 0.001).11 This effect coheres with our intuition that directness signals some latent hostility from the conversation’s initiator, and perhaps reinforces the forcefulness of contentious impositions (Brown and Levinson, 1987). This interpretation is also sugcific to Wikipedia, the methodology for inferring them is unsupervised and is applicable in other conversational settings. 10To reduce clutter we only depict features which occur a minimum of 50 times and have absolute log-odds ≥0.2 in at least one of the data subsets. The markers indicated as statistically significant for Figure 2A remain so after a Bonferroni correction, with the exception of factual checks, hedges (lexicon, ♦), gratitude (♦), and opinion. 11All p values in this section are computed as two-tailed binomial tests, comparing the proportion of awry-turning conversations exhibiting a particular device to the proportion of on-track conversations. 1356 Figure 2: Log-odds ratios of politeness strategies and prompt types exhibited in the first and second comments of conversations that turn awry, versus those that stay on-track. All: Purple and green markers denote log-odds ratios in the first and second comments, respectively; points are solid if they reflect significant (p < 0.05) log-odds ratios with an effect size of at least 0.2. A: ♦s and □s denote first and second comment log-odds ratios, respectively; * denotes statistically significant differences at the p < 0.05 (*), p < 0.01 (**) and p < 0.001 (***) levels for the first comment (two-tailed binomial test); + denotes corresponding statistical significance for the second comment. B and C: ▽s and ⃝s correspond to effect sizes in the comments authored by the attacker and non-attacker, respectively, in attacker initiated (B) and non-attacker initiated (C) conversations. gested by the relative propensity of the factual check prompt, which tends to cue disputes regarding an article’s factual content (p < 0.05). In contrast, comments which initiate on-track conversations tend to contain gratitude (p < 0.05) and greetings (p < 0.001), both positive politeness strategies. Such conversations are also more likely to begin with coordination prompts (p < 0.05), signaling active efforts to foster constructive teamwork. Negative politeness strategies are salient in on-track conversations as well, reflected by the use of hedges (p < 0.01) and opinion prompts (p < 0.05), which may serve to soften impositions or factual contentions (H¨ubler, 1983). These effects are echoed in the second comment—i.e., the first reply (represented as □s). Interestingly, in this case we note that the difference in pronoun use is especially marked. First replies in conversations that eventually derail tend to contain more second person pronouns (p < 0.001), perhaps signifying a replier pushing back to contest the initiator; in contrast, on-track conversations have more sentenceinitial I/We (i.e., “1st person start”, p < 0.001), potentially indicating the replier’s willingness to step into the conversation and work with—rather than argue against—the initiator (Tausczik and Pennebaker, 2010). Distinguishing interlocutor behaviors. Are the linguistic signals we observe solely driven by the eventual attacker, or do they reflect the behavior of both actors? 
To disentangle the attacker and nonattackers’ roles in the initial exchange, we examine their language use in these two possible cases: when the future attacker initiates the conversation, or is the first to reply. In attacker-initiated conversations (Figure 2B, 608 conversations), we see that both actors exhibit a propensity for the linguistically direct markers (e.g., direct questions) 1357 that tend to signal future attacks. Some of these markers are used particularly often by the nonattacking replier in awry-turning conversations (e.g., second person pronouns, p < 0.001, ⃝s), further suggesting the dynamic of the replier pushing back at—and perhaps even escalating—the attacker’s initial hint of aggression. Among conversations initiated instead by the non-attacker (Figure 2C, 662 conversations), the non-attacker’s linguistic behavior in the first comment (⃝s) is less distinctive from that of initiators in the on-track setting (i.e., log-odds ratios closer to 0); markers of future derailment are (unsurprisingly) more pronounced once the eventual attacker (▽s) joins the conversation in the second comment.12 More broadly, these results reveal how different politeness strategies and rhetorical prompts deployed in the initial stages of a conversation are tied to its future trajectory. 6 Predicting Future Attacks We now show that it is indeed feasible to predict whether a conversation will turn awry based on linguistic properties of its very first exchange, providing several baselines for this new task. In doing so, we demonstrate that the pragmatic devices examined above encode signals about the future trajectory of conversations, capturing some of the intuition humans are shown to have. We consider the following balanced prediction task: given a pair of conversations, which one will eventually lead to a personal attack? We extract all features from the very first exchange in a conversation—i.e., a comment-reply pair, like those illustrated in our introductory example (Figure 1). We use logistic regression and report accuracies on a leave-one-page-out cross validation, such that in each fold, all conversation pairs from a given talk page are held out as test data and pairs from all other pages are used as training data (thus preventing the use of page-specific information). Prediction results are summarized in Table 3. Language baselines. As baselines, we consider several straightforward features: word count (which performs at chance level), sentiment lexicon (Liu et al., 2005) and bag of words. Pragmatic features. Next, we test the predictive power of the prompt types and politeness 12As an interesting avenue for future work, we note that some markers used by non-attacking initiators potentially still anticipate later attacks, suggested by, e.g., the relative prevalence of sentence-initial you (p < 0.05, ⃝s). Feature set # features Accuracy Bag of words 5,000 56.7% Sentiment lexicon 4 55.4% Politeness strategies 38 60.5% Prompt types 12 59.2% Pragmatic (all) 50 61.6% Interlocutor features 5 51.2% Trained toxicity 2 60.5% Toxicity + Pragmatic 52 64.9% Humans 72.0% Table 3: Accuracies for the balanced futureprediction task. Features based on pragmatic devices are bolded, reference points are italicized. strategies features introduced in Section 4. The 12 prompt type features (6 features for each comment in the initial exchange) achieve 59.2% accuracy, and the 38 politeness strategies features (19 per comment) achieve 60.5% accuracy. The pragmatic features combine to reach 61.6% accuracy. 
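The leave-one-page-out protocol can be realized with off-the-shelf tools; the sketch below assumes each conversation pair has already been reduced to a single feature vector (for instance, the difference between the two conversations' feature vectors, which is one plausible encoding rather than necessarily the authors'), with a binary label indicating which member of the pair derails and a group id giving its talk page.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def leave_one_page_out_accuracy(X, y, groups):
    """Logistic regression evaluated so that all pairs from one talk page are
    held out together in each fold."""
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X, y, groups=groups,
                             cv=LeaveOneGroupOut(), scoring="accuracy")
    return scores.mean()
```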
Reference points. To better contextualize the performance of our features, we compare their predictive accuracy to the following reference points: Interlocutor features: Certain kinds of interlocutors are potentially more likely to be involved in awry-turning conversations. For example, perhaps newcomers or anonymous participants are more likely to derail interactions than more experienced editors. We consider a set of features representing participants’ experience on Wikipedia (i.e., number of edits) and whether the comment authors are anonymous. In our task, these features perform at the level of random chance. Trained toxicity: We also compare with the toxicity score of the exchange from the Perspective API classifier—a perhaps unfair reference point, since this supervised system was trained on additional human-labeled training examples from the same domain and since it was used to create the very data on which we evaluate. This results in an accuracy of 60.5%; combining trained toxicity with our pragmatic features achieves 64.9%. Humans: A sample of 100 pairs were labeled by (non-author) volunteer human annotators. They were asked to guess, from the initial exchange, which conversation in a pair will lead to a personal attack. Majority vote across three annotators was used to determine the human labels, resulting in an accuracy of 72%. This confirms that humans have 1358 some intuition about whether a conversation might be heading in a bad direction, which our features can partially capture. In fact, the classifier using pragmatic features is accurate on 80% of the examples that humans also got right. Attacks on the horizon. Finally, we seek to understand whether cues extracted from the first exchange can predict future discussion trajectory beyond the immediate next couple of comments. We thus repeat the prediction experiments on the subset of conversations in which the first personal attack happens after the fourth comment (282 pairs), and find that the pragmatic devices used in the first exchange maintain their predictive power (67.4% accuracy), while the sentiment and bag of words baselines drop to the level of random chance. Overall, these initial results show the feasibility of reconstructing some of the human intuition about the future trajectory of an ostensibly civil conversation in order to predict whether it will eventually turn awry. 7 Conclusions and Future Work In this work, we started to examine the intriguing phenomenon of conversational derailment, studying how the use of pragmatic and rhetorical devices relates to future conversational failure. Our investigation centers on the particularly perplexing scenario in which one participant of a civil discussion later attacks another, and explores the new task of predicting whether an initially healthy conversation will derail into such an attack. To this end, we develop a computational framework for analyzing how general politeness strategies and domain-specific rhetorical prompts deployed in the initial stages of a conversation are tied to its future trajectory. Making use of machine learning and crowdsourcing tools, we formulate a tightly-controlled setting that enables us to meaningfully compare conversations that stay on track with those that go awry. The human accuracy on predicting future attacks in this setting (72%) suggests it is feasible at least at the level of human intuition. 
We show that our computational framework can recover some of that intuition, hinting at the potential of automated methods to identify signals of the future trajectories of online conversations. Our approach has several limitations which open avenues for future work. Our correlational analyses do not provide any insights into causal mechanisms of derailment, which randomized experiments could address. Additionally, since our procedure for collecting and vetting data focused on precision rather than recall, it might miss more subtle attacks that are overlooked by the toxicity classifier. Supplementing our investigation with other indicators of antisocial behavior, such as editors blocking one another, could enrich the range of attacks we study. Noting that our framework is not specifically tied to Wikipedia, it would also be valuable to explore the varied ways in which this phenomenon arises in other (possibly noncollaborative) public discussion venues, such as Reddit and Facebook Pages. While our analysis focused on the very first exchange in a conversation for the sake of generality, more complex modeling could extend its scope to account for conversational features that more comprehensively span the interaction. Beyond the present binary classification task, one could explore a sequential formulation predicting whether the next turn is likely to be an attack as a discussion unfolds, capturing conversational dynamics such as sustained escalation. Finally, our study of derailment offers only one glimpse into the space of possible conversational trajectories. Indeed, a manual investigation of conversations whose eventual trajectories were misclassified by our models—as well as by the human annotators—suggests that interactions which initially seem prone to attacks can nonetheless maintain civility, by way of level-headed interlocutors, as well as explicit acts of reparation. A promising line of future work could consider the complementary problem of identifying pragmatic strategies that can help bring uncivil conversations back on track. Acknowledgements. We are grateful to the anonymous reviewers for their thoughtful comments and suggestions, and to Maria Antoniak, Valts Blukis, Liye Fu, Sam Havron, Jack Hessel, Ishaan Jhaveri, Lillian Lee, Alex Niculescu-Mizil, Alexandra Schofield, Laure Thompson, Andrew Wang, Leila Zia and the members of the Wikimedia Foundation anti-harassment program for extremely insightful (on-track) conversations and for assisting with data annotation efforts. This work is supported in part by NSF CAREER Award IIS1750615, NSF Grant SES-1741441, a Google Faculty Award, a WMF gift and a CrowdFlower AI for Everyone Award. 1359 References Yavuz Akbulut, Yusuf Levent Sahin, and Bahadir Eristi. 2010. Cyberbullying victimization among Turkish online social utility members. Journal of Educational Technology & Society. Kelsey Allen, Giuseppe Carenini, and Raymond T Ng. 2014. Detecting disagreement in conversations using pseudo-monologic rhetorical structure. In Proceedings of EMNLP. Tim Althoff, Cristian Danescu-Niculescu-Mizil, and Dan Jurafsky. 2014. How to ask for a favor: A case study on the success of altruistic requests. In Proceedings of ICWSM. Ofer Arazy, Lisa Yeo, and Oded Nov. 2013. Stay on the Wikipedia task: When task-related disagreements slip into personal and procedural conflicts. Journal of the Association for Information Science and Technology. Malika Aubakirova and Mohit Bansal. 2016. Interpreting neural networks to improve politeness comprehension. 
In Proceedings of EMNLP. Lars Backstrom, Jon Kleinberg, Lillian Lee, and Cristian Danescu-Niculescu-Mizil. 2013. Characterizing and curating conversation threads: Expansion, focus, volume, re-entry. In Proceedings of WSDM. Penelope Brown and Stephen Levinson. 1987. Politeness: Some universals in language usage. Cambridge University Press. Moira Burke and Robert Kraut. 2008. Mind your Ps and Qs: The impact of politeness and rudeness in online communities. In Proceedings of CSCW. Eshwar Chandrasekharan, Umashanthi Pavalanathan, Anirudh Srinivasan, Adam Glynn, Jacob Eisenstein, and Eric Gilbert. 2017. You can’t stay here: The efficacy of Reddit’s 2015 ban examined through hate speech. In Proceedings of CSCW. Despoina Chatzakou, Nicolas Kourtellis, Jeremy Blackburn, Emiliano De Cristofaro, Gianluca Stringhini, and Athena Vakali. 2017. Measuring #GamerGate: A tale of hate, sexism, and bullying. In Proceedings of WWW. Justin Cheng, Michael Bernstein, Cristian DanescuNiculescu-Mizil, and Jure Leskovec. 2017. Anyone can become a troll: Causes of trolling behavior in online discussions. In Proceedings of CSCW. Justin Cheng, Cristian Danescu-Niculescu-Mizil, and Jure Leskovec. 2015. Antisocial behavior in online discussion communities. In Proceedings of ICWSM. Elizabeth F Churchill and Sara Bly. 2000. Culture vultures: Considering culture and communication in virtual environments. SIGGroup Bulletin. Herbert H Clark. 1979. Responding to indirect speech acts. Cognitive psychology. Herbert H Clark and Dale H Schunk. 1980. Polite responses to polite requests. Cognition. Benjamin Collier and Julia Bear. 2012. Conflict, criticism, or confidence: An empirical examination of the gender gap in Wikipedia contributions. In Proceedings of CSCW. Lewis A Coser. 1956. The Functions of Social Conflict. Routledge. Cristian Danescu-Niculescu-Mizil, Lillian Lee, Bo Pang, and Jon Kleinberg. 2012. Echoes of power: Language effects and power differences in social interaction. In Proceedings of WWW. Cristian Danescu-Niculescu-Mizil, Moritz Sudhof, Dan Jurafsky, Jure Leskovec, and Christopher Potts. 2013. A computational approach to politeness with application to social factors. In Proceedings of ACL. Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of ICWSM. Carsten K De Dreu and Laurie R Weingart. 2003. Task versus relationship conflict, team performance, and team member satisfaction: A meta-analysis. Journal of Applied Psychology. Bruce Fraser. 1980. Conversational mitigation. Journal of Pragmatics. Liye Fu, Lillian Lee, and Cristian Danescu-NiculescuMizil. 2017. When confidence and competence collide: Effects on online decision-making discussions. In Proceedings of WWW. Bj¨orn Gamb¨ack and Utpal Kumar Sikdar. 2017. Using convolutional neural networks to classify hatespeech. In Proceedings of the Workshop on Abusive Language Online. Ali Gheitasy, Jos´e Abdelnour-Nocera, and Bonnie Nardi. 2015. Socio-technical gaps in online collaborative consumption (OCC): An example of the Etsy community. In Proceedings of ICDC. Erving Goffman. 1955. On face-work: An analysis of ritual elements in social interaction. Psychiatry. Christophe Henner and Maria Sefidari. 2016. Wikimedia Foundation Board on healthy Wikimedia community culture, inclusivity, and safe spaces. Wikimedia Blog. Pamela J Hinds and Mark Mortensen. 2005. 
Understanding conflict in geographically distributed teams: The moderating effects of shared identity, shared context, and spontaneous communication. Organization Science. Axel H¨ubler. 1983. Understatements and Hedges in English. John Benjamins Publishing. 1360 Joseph M Kayany. 1998. Contexts of uninhibited online behavior: Flaming in social newsgroups on usenet. Journal of the Association for Information Science and Technology. Aniket Kittur, Ed H Chi, and Bongwon Suh. 2009. What’s in Wikipedia?: Mapping topics and conflict using socially annotated category structure. In Proceedings of CHI. Aniket Kittur and Robert E Kraut. 2008. Harnessing the wisdom of crowds in Wikipedia: Quality through coordination. In Proceedings of CSCW. Aniket Kittur, Bongwon Suh, Bryan A Pendleton, and Ed H Chi. 2007. He says, she says: Conflict and coordination in Wikipedia. In Proceedings of CHI. Vinodh Krishnan and Jacob Eisenstein. 2015. “You’re Mr. Lebowski, I’m the Dude”: Inducing address term formality in signed social networks. In Proceedings of NAACL. Ravi Kumar, Mohammad Mahdian, and Mary McGlohon. 2010. Dynamics of conversations. In Proceedings of KDD. Haewoon Kwak, Jeremy Blackburn, and Seungyeop Han. 2015. Exploring cyberbullying and other toxic behavior in team competition online games. In Proceedings of CHI. Robin T Lakoff. 1973. The logic of politeness: Minding your P’s and Q’s. In Proceedings of the Chicago Linguistic Society. Bing Liu, Minqing Hu, and Junsheng Cheng. 2005. Opinion observer: Analyzing and comparing opinions on the web. In Proceedings of WWW. Vlad Niculae and Cristian Danescu-Niculescu-Mizil. 2016. Conversational markers of constructive discussions. In Proceedings of NAACL. Vlad Niculae, Srijan Kumar, Jordan Boyd-Graber, and Cristian Danescu-Niculescu-Mizil. 2015. Linguistic harbingers of betrayal: A case study on an online strategy game. In Proceedings of ACL. Chikashi Nobata, Joel Tetreault, Achint Thomas, Yashar Mehdad, and Yi Chang. 2016. Abusive language detection in online user content. In Proceedings of WWW. Marco Ortu, Bram Adams, Giuseppe Destefanis, Parastou Tourani, Michele Marchesi, and Roberto Tonelli. 2015. Are bullies more productive? Empirical study of affectiveness vs. issue fixing time. In Proceedings of MSR. John Pavlopoulos, Prodromos Malakasiotis, and Ion Androutsopoulos. 2017a. Deep learning for user comment moderation. In Proceedings of the Workshop on Abusive Language Online. John Pavlopoulos, Prodromos Malakasiotis, and Ion Androutsopoulos. 2017b. Deeper attention to abusive user content moderation. In Proceedings of EMNLP. Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised modeling of Twitter conversations. In Proceedings of NAACL. Paul R Rosenbaum. 2010. Design of observational studies. Springer. Donald B Rubin. 2007. The design versus the analysis of observational studies for causal effects: Parallels with the design of randomized trials. Statistics in Medicine. Tamara Shepherd, Alison Harvey, Tim Jordan, Sam Srauy, and Kate Miltner. 2015. Histories of hating. Social Media + Society. Vivek K Singh, Marie L Radford, Qianjia Huang, and Susan Furrer. 2017. “They basically like destroyed the school one day”: On newer app features and cyberbullying in schools. In Proceedings of CSCW. Sara Owsley Sood, Elizabeth F Churchill, and Judd Antin. 2012. Automatic identification of personal insults on social news sites. Journal of the American Society for Information Science and Technology. 
Chenhao Tan, Vlad Niculae, Cristian DanescuNiculescu-Mizil, and Lillian Lee. 2016. Winning arguments: Interaction dynamics and persuasion strategies in good-faith online discussions. In Proceedings of WWW. Yla R Tausczik and James W Pennebaker. 2010. The psychological meaning of words: LIWC and computerized text analysis methods. Journal of Language and Social Psychology. Kal Turnbull. 2018. “Thats Bullshit” – Rude Enough for Removal? A Multi-Mod Perspective. Change My View Blog. Jessica Vitak, Kalyani Chadha, Linda Steiner, and Zahra Ashktorab. 2017. Identifying women’s experiences with and strategies for mitigating negative effects of online harassment. In Proceedings of CSCW. Lu Wang and Claire Cardie. 2014. A piece of my mind: A sentiment analysis approach for online dispute detection. In Proceedings of ACL. William Warner and Julia Hirschberg. 2012. Detecting hate speech on the World Wide Web. In Proceedings of the Workshop on Language in Social Media. Wikimedia Support and Safety Team. 2015. Harassment survey. Wikimedia Foundation. Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2016. Wikipedia talk labels: Toxicity. 1361 Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Ex machina: Personal attacks seen at scale. In Proceedings of WWW. Naomi Yamashita and Toru Ishida. 2006. Automatic prediction of misconceptions in multilingual computer-mediated communication. In Proceedings of IUI. Dawei Yin, Zhenzhen Xue, Liangjie Hong, Brian D Davison, April Kontostathis, and Lynne Edwards. 2009. Detection of harassment on Web 2.0. In Proceedings of the Workshop on Content Analysis in the Web 2.0. Amy X Zhang, Bryan Culbertson, and Praveen Paritosh. 2017a. Characterizing online discussion using coarse discourse sequences. In Proceedings of ICWSM. Justine Zhang, Ravi Kumar, Sujith Ravi, and Cristian Danescu-Niculescu-Mizil. 2016. Conversational flow in Oxford-style debates. In Proceedings of NAACL. Justine Zhang, Arthur Spirling, and Cristian DanescuNiculescu-Mizil. 2017b. Asking too much? The rhetorical role of questions in political discourse. In Proceedings of EMNLP.
2018
125
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1362–1371 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1362 Are BLEU and Meaning Representation in Opposition? Ondˇrej C´ıfka and Ondˇrej Bojar Charles University Faculty of Mathematics and Physics Institute of Formal and Applied Linguistics {cifka,bojar}@ufal.mff.cuni.cz Abstract One of possible ways of obtaining continuous-space sentence representations is by training neural machine translation (NMT) systems. The recent attention mechanism however removes the single point in the neural network from which the source sentence representation can be extracted. We propose several variations of the attentive NMT architecture bringing this meeting point back. Empirical evaluation suggests that the better the translation quality, the worse the learned sentence representations serve in a wide range of classification and similarity tasks. 1 Introduction Deep learning has brought the possibility of automatically learning continuous representations of sentences. On the one hand, such representations can be geared towards particular tasks such as classifying the sentence in various aspects (e.g. sentiment, register, question type) or relating the sentence to other sentences (e.g. semantic similarity, paraphrasing, entailment). On the other hand, we can aim at “universal” sentence representations, that is representations performing reasonably well in a range of such tasks. Regardless the evaluation criterion, the representations can be learned either in an unsupervised way (from simple, unannotated texts) or supervised, relying on manually constructed training sets of sentences equipped with annotations of the appropriate type. A different approach is to obtain sentence representations from training neural machine translation models (Hill et al., 2016). Since Hill et al. (2016), NMT has seen substantial advances in translation quality and it is thus natural to ask how these improvements affect the learned representations. One of the key technological changes was the introduction of “attention” (Bahdanau et al., 2014), making it even the very central component in the network (Vaswani et al., 2017). Attention allows the NMT system to dynamically choose which parts of the source are most important when deciding on the current output token. As a consequence, there is no longer a static vector representation of the sentence available in the system. In this paper, we remove this limitation by proposing a novel encoder-decoder architecture with a structured fixed-size representation of the input that still allows the decoder to explicitly focus on different parts of the input. In other words, our NMT system has both the capacity to attend to various parts of the input and to produce static representations of input sentences. We train this architecture on English-to-German and English-to-Czech translation and evaluate the learned representations of English on a wide range of tasks in order to assess its performance in learning “universal” meaning representations. In Section 2, we briefly review recent efforts in obtaining sentence representations. In Section 3, we introduce a number of variants of our novel architecture. Section 4 describes some standard and our own methods for evaluating sentence representations. Section 5 then provides experimental results: translation and representation quality. The relation between the two is discussed in Section 6. 
2 Related Work The properties of continuous sentence representations have always been of interest to researchers working on neural machine translation. In the first works on RNN sequence-to-sequence models, Cho et al. (2014) and Sutskever et al. (2014) 1363 Bahdanau et al. Sutskever et al. Cho et al. Compound attention ATTN FINAL FINAL-CTX *POOL *POOL-CTX ATTN-CTX ATTN-ATTN Encoder states used all final final all all all all Combined using ... — — — pooling pooling inner att. inner att. Sent. emb. available ✗ ✓ ✓ ✓ ✓ ✓ ✓ Dec. attends to enc. states ✓ ✗ ✗ ✗ ✗ ✗ ✗ Dec. attends to sent. emb. ✗ ✗ ✗ ✗ ✗ ✗ ✓ Sent. emb. used in ... — init init+ctx init init+ctx init+ctx input for att. Table 1: Different RNN-based architectures and their properties. Legend: “pooling” – vectors combined by mean or max (AVGPOOL, MAXPOOL); “sent. emb.” – sentence embedding, i.e. the fixed-size sentence representation computed by the encoder. “init” – initial decoder state. “ctx” – context vector, i.e. input for the decoder cell. “input for att.” – input for the decoder attention. provided visualizations of the phrase and sentence embedding spaces and observed that they reflect semantic and syntactic structure to some extent. Hill et al. (2016) perform a systematic evaluation of sentence representation in different models, including NMT, by applying them to various sentence classification tasks and by relating semantic similarity to closeness in the representation space. Shi et al. (2016) investigate the syntactic properties of representations learned by NMT systems by predicting sentence- and word-level syntactic labels (e.g. tense, part of speech) and by generating syntax trees from these representations. Schwenk and Douze (2017) aim to learn language-independent sentence representations using NMT systems with multiple source and target languages. They do not consider the attention mechanism and evaluate primarily by similarity scores of the learned representations for similar sentences (within or across languages). 3 Model Architectures Our proposed model architectures differ in (a) which encoder states are considered in subsequent processing, (b) how they are combined, and (c) how they are used in the decoder. Table 1 summarizes all the examined configurations of RNN-based models. The first three (ATTN, FINAL, FINAL-CTX) correspond roughly to the standard sequence-to-sequence models, Bahdanau et al. (2014), Sutskever et al. (2014) and Cho et al. (2014), resp. The last column (ATTNATTN) is our main proposed architecture: compound attention, described here in Section 3.1. In addition to RNN-based models, we experiment with the Transformer model, see Section 3.3. s1 s2 s3 sT  + c3 −→ h1 ←− h1 −→ h2 ←− h2 −→ h3 ←− h3 −→ hT ←− hT + α21 α22 α23 α2T . . . M2 M1 M3 M4 β31 β32 β33 β34 = M  = H decoder encoder x1 x2 x3 xT . . . Figure 1: An illustration of compound attention with 4 attention heads. The figure shows the computations that result in the decoder state s3 (in addition, each state si depends on the previous target token yi−1). Note that the matrix M is the same for all positions in the output sentence and it can thus serve as the source sentence representation. 3.1 Compound Attention Our compound attention model incorporates attention in both the encoder and the decoder, Fig. 1. Encoder with inner attention. First, we process the input sequence x1, x2, . . . 
, xT using a bidirectional recurrent network with gated recurrent units (GRU, Cho et al., 2014): −→ ht = −−→ GRU(xt, −−→ ht−1), ←− ht = ←−− GRU(xt, ←−− ht+1), ht = [−→ ht, ←− ht]. 1364 We denote by u the combined number of units in the two RNNs, i.e. the dimensionality of ht. Next, our goal is to combine the states (h1, h2, . . . , hT ) = H of the encoder into a vector of fixed dimensionality that represents the entire sentence. Traditional seq2seq models concatenate the final states of both encoder RNNs (−→ hT and ←− h1) to obtain the sentence representation (denoted as FINAL in Table 1). Another option is to combine all encoder states using the average or maximum over time (Collobert and Weston, 2008; Schwenk and Douze, 2017) (AVGPOOL and MAXPOOL in Table 1 and following). We adopt an alternative approach, which is to use inner attention1 (Liu et al., 2016; Li et al., 2016) to compute several weighted averages of the encoder states (Lin et al., 2017). The main motivation for incorporating these multiple “views” of the state sequence is that it removes the need for the RNN cell to accumulate the representation of the whole sentence as it processes the input, and therefore it should have more capacity for modeling local dependencies. Specifically, we fix a number r, the number of attention heads, and compute an r×T matrix A of attention weights αjt, representing the importance of position t in the input for the jth attention head. We then use this matrix to compute r weighted sums of the encoder states, which become the rows of a new matrix M: M = AH. (1) A vector representation of the source sentence (the “sentence embedding”) can be obtained by flattening the matrix M. In our experiments, we project the encoder states h1, h2, . . . , hT down to a given dimensionality before applying Eq. (1), so that we can control the size of the representation. Following Lin et al. (2017), we compute the attention matrix by feeding the encoder states to a two-layer feed-forward network: A = softmax(U tanh(WH)), (2) where W and U are weight matrices of dimensions d × u and r × d, respectively (d is the number of hidden units); the softmax function is applied along the second dimension, i.e. across the encoder states. 1Some papers call the same or similar approach selfattention or single-time attention. Attentive decoder. In vanilla seq2seq models with a fixed-size sentence representation, the decoder is usually conditioned on this representation via the initial RNN state. We propose to instead leverage the structured sentence embedding by applying attention to its components. This is no different from the classical attention mechanism used in NMT (Bahdanau et al., 2014), except that it acts on this fixed-size representation instead of the sequence of encoder states. In the ith decoding step, the attention mechanism computes a distribution {βij}r j=1 over the r components of the structured representation. This is then used to weight these components to obtain the context vector ci, which in turn is used to update the decoder state. Again, we can write this in matrix form as C = BM, (3) where B = (βij)T ,r i=1,j=1 is the attention matrix and C = (ci, c2, . . . , cT ) are the context vectors. Note that by combining Eqs. (1) and (3), we get C = (BA)H. (4) Hence, the composition of the encoder and decoder attentions (the “compound attention”) defines an implicit alignment between the source and the target sequence. 
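A compact NumPy sketch of Eqs. (1)-(3) may help fix the shapes involved; it is an illustration only (not the Neural Monkey implementation), H is stored as a u x T matrix, and the decoder scores feeding the β distribution are left abstract.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def inner_attention(H, W, U):
    """H: (u, T) encoder states; W: (d, u); U: (r, d).
    Returns A: (r, T) weights and M: (r, u), the structured sentence embedding."""
    A = softmax(U @ np.tanh(W @ H), axis=1)  # softmax across encoder positions (Eq. 2)
    M = A @ H.T                              # r weighted sums of encoder states (Eq. 1)
    return A, M

def decoder_context(M, scores):
    """scores: (r,) unnormalized decoder attention over the rows of M (Eq. 3)."""
    beta = softmax(scores)
    return beta @ M                          # context vector c_i
```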
From this viewpoint, our model can be regarded as a restriction of the conventional attention model. The decoder uses a conditional GRU cell (cGRUatt; Sennrich et al., 2017), which consists of two consecutively applied GRU blocks. The first block processes the previous target token yi−1, while the second block receives the context vector ci and predicts the next target token yi. 3.2 Constant Context Compared to the FINAL model, the compound attention architecture described in the previous section undoubtedly benefits from the fact that the decoder is presented with information from the encoder (i.e. the context vectors ci) in every decoding step. To investigate this effect, we include baseline models where we replace all context vectors ci with the entire sentence embedding (indicated by the suffix “-CTX” in Table 1). Specifically, we provide either the flattened matrix M (for models with inner attention; ATTN or ATTN-CTX), the final state of the encoder (FINAL-CTX), or the result of mean- or max-pooling (*POOL-CTX) as a constant input to the decoder cell. 1365 Name Cl. Train Test Task Example Input and Label MR 2 11k — sentiment (movies) an idealistic love story that brings out the latent 15-year-old romantic in everyone. (+) CR 2 4k — product review polarity no way to contact their customer service. (−) SUBJ 2 10k — subjectivity a little weak – and it isn’t that funny. (subjective) MPQA 2 11k — opinion polarity was hoping (+), breach of the very constitution (−) SST2 2 68k 2k sentiment (movies) contains very few laughs and even less surprises (−) SST5 5 10k 2k sentiment (movies) it’s worth taking the kids to. (4) TREC 6 5k 500 question type What was Einstein s IQ? (NUM) MRPC 2 4k 2k semantic equivalence Lawtey is not the first faith-based program in Florida’s prison system. / But Lawtey is the first entire prison to take that path. (−) SNLI 3 559k 10k natural language inference Two doctors perform surgery on patient. / Two surgeons are having lunch. (contradiction) SICK-E 3 5k 5k natural language inference A group of people is near the ocean / A crowd of people is near the water (entailment) Table 2: SentEval classification tasks. Tasks without a test set use 10-fold cross-validation. Name Train Test Method SICK-R 5k 5k regression STSB 7k 1k regression STS12 — 3k cosine similarity STS13 — 2k cosine similarity STS14 — 4k cosine similarity STS15 — 9k cosine similarity STS16 — 9k cosine similarity Table 3: SentEval semantic relatedness tasks. 3.3 Transformer with Inner Attention The Transformer (Vaswani et al., 2017) is a recently proposed model based entirely on feedforward layers and attention. It consists of an encoder and a decoder, each with 6 layers, consisting of multi-head attention on the previous layer and a position-wise feed-forward network. In order to introduce a fixed-size sentence representation into the model, we modify it by adding inner attention after the last encoder layer. The attention in the decoder then operates on the components of this representation (i.e. the rows of the matrix M). This variation on the Transformer model corresponds to the ATTN-ATTN column in Table 1 and is therefore denoted TRF-ATTN-ATTN. 4 Representation Evaluation Continuous sentence representations can be evaluated in many ways, see e.g. Kiros et al. (2015), Conneau et al. (2017) or the RepEval workshops.2 We evaluate our learned representations with classification and similarity tasks from SentEval (Section 4.1) and by examining clusters of sentence paraphrase representations (Section 4.2). 
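Before turning to evaluation, here is a minimal sketch of the pooling baselines and of the constant-context variants from Section 3.2, which feed one and the same fixed-size vector to the decoder at every step; array shapes and names are assumptions for illustration.

```python
import numpy as np

def pooled_embedding(H, mode="avg"):
    """H: (T, u) encoder states -> fixed-size sentence embedding (AVGPOOL / MAXPOOL)."""
    return H.mean(axis=0) if mode == "avg" else H.max(axis=0)

def constant_context(sentence_emb, num_decoder_steps):
    """Replace the per-step context vectors c_i with the same sentence embedding
    repeated at every decoding step (the "-CTX" variants)."""
    return np.tile(sentence_emb, (num_decoder_steps, 1))
```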
4.1 SentEval We perform evaluation on 10 classification and 7 similarity tasks using the SentEval3 (Conneau et al., 2017) evaluation tool. This is a superset of the tasks from Kiros et al. (2015). Table 2 describes the classification tasks (number of classes, data size, task type and an example) and Table 3 lists the similarity tasks. The similarity (relatedness) datasets contain pairs of sentences labeled with a real-valued similarity score. A given sentence representation model is evaluated either by learning to directly predict this score given the respective sentence embeddings (“regression”), or by computing the cosine similarity of the embeddings (“similarity”) without the need of any training. In both cases, Pearson and Spearman correlation of the predictions with the gold ratings is reported. See Dolan et al. (2004) for details on MRPC and Hill et al. (2016) for the remaining tasks. 4.2 Paraphrases We also evaluate the representation of paraphrases. We use two paraphrase sources for this purpose: COCO and HyTER Networks. COCO (Common Objects in Context; Lin et al., 2014) is an object recognition and image captioning dataset, containing 5 captions for each image. We extracted the captions from its validation set to form a set of 5 × 5k = 25k sentences grouped by the source image. The average sentence length is 11 tokens and the vocabulary size is 9k types. HyTER Networks (Dreyer and Marcu, 2014) are large finite-state networks representing a sub2https://repeval2017.github.io/ 3https://github.com/facebookresearch/ SentEval/ 1366 set of all possible English translations of 102 Arabic and 102 Chinese sentences. The networks were built by manually based on reference sentences in Arabic, Chinese and English. Each network contains up to hundreds of thousands of possible translations of the given source sentence. We randomly generated 500 translations for each source sentence, obtaining a corpus of 102k sentences grouped into 204 clusters, each containing 500 paraphrases. The average length of the 102k English sentences is 28 tokens and the vocabulary size is 11k token types. For every model, we encode each dataset to obtain a set of sentence embeddings with cluster labels. We then compute the following metrics: Cluster classification accuracy (denoted “Cl”). We remove 1 point (COCO) or half of the points (HyTER) from each cluster, and fit an LDA classifier on the rest. We then compute the accuracy of the classifier on the removed points. Nearest-neighbor paraphrase retrieval accuracy (NN). For each point, we find its nearest neighbor according to cosine or L2 distance, and count how often the neighbor lies in the same cluster as the original point. Inverse Davies-Bouldin index (iDB). The Davies-Bouldin index (Davies and Bouldin, 1979) measures cluster separation. For every pair of clusters, we compute the ratio Rij of their combined scatter (average L2 distance to the centroid) Si + Sj and the L2 distance of their centroids dij, then average the maximum values for all clusters: Rij = Si + Sj dij (5) DB = 1 N N  i=1 max j=i Rij (6) The lower the DB index, the better the separation. To match with the rest of our metrics, we take its inverse: iDB = 1 DB. 5 Experimental Results We trained English-to-German and English-toCzech NMT models using Neural Monkey4 (Helcl and Libovick´y, 2017a). In the following, we distinguish these models using the code of the target language, i.e. de or cs. 
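Two of the paraphrase metrics defined above are easy to reproduce directly from the embeddings; the naive O(n^2) sketch below covers nearest-neighbour retrieval accuracy (cosine) and the inverse Davies-Bouldin index of Eqs. (5)-(6). Here emb and labels are assumed to be NumPy arrays, and the LDA-based cluster classification accuracy is omitted.

```python
import numpy as np

def nn_retrieval_accuracy(emb, labels):
    """Fraction of points whose cosine nearest neighbour shares their cluster label."""
    X = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = X @ X.T
    np.fill_diagonal(sims, -np.inf)  # exclude the point itself
    nn = sims.argmax(axis=1)
    return float((labels[nn] == labels).mean())

def inverse_davies_bouldin(emb, labels):
    """iDB = 1 / DB, with DB averaging max_{j != i} (S_i + S_j) / d_ij over clusters."""
    ids = np.unique(labels)
    centroids = np.stack([emb[labels == c].mean(axis=0) for c in ids])
    scatter = np.array([np.linalg.norm(emb[labels == c] - centroids[i], axis=1).mean()
                        for i, c in enumerate(ids)])
    d = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=2)
    R = (scatter[:, None] + scatter[None, :]) / np.where(d == 0, np.inf, d)
    np.fill_diagonal(R, -np.inf)
    return float(1.0 / R.max(axis=1).mean())
```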
The de models were trained on the Multi30K multilingual image caption dataset (Elliott et al., 4https://github.com/ufal/neuralmonkey 2016), extended by Helcl and Libovick´y (2017b), who acquired additional parallel data using backtranslation (Sennrich et al., 2016) and perplexitybased selection (Yasuda et al., 2008). This extended dataset contains 410k sentence pairs, with the average sentence length of 12 ± 4 tokens in English. We train each model for 20 epochs with the batch size of 32. We truecased the training data as well as all data we evaluate on. For German, we employed Neural Monkey’s reversible pre-processing scheme, which expands contractions and performs morphological segmentation of determiners. We used a vocabulary of at most 30k tokens for each language (no subword units). The cs models were trained on CzEng 1.7 (Bojar et al., 2016).5 We used byte-pair encoding (BPE) with a vocabulary of 30k sub-word units, shared for both languages. For English, the average sentence length is 15±19 BPE tokens and the original vocabulary size is 1.9M. We performed 1 training epoch with the batch size of 128 on the entire training section (57M sentence pairs). The datasets for both de and cs models come with their respective development and test sets of sentence pairs, which we use for the evaluation of translation quality. (We use 1k randomly selected sentence pairs from CzEng 1.7 dtest as a development set. For evaluation, we use the entire etest.) We also evaluate the InferSent model6 (Conneau et al., 2017) as pre-trained on the natural language inference (NLI) task. InferSent has been shown to achieve state-of-the-art results on the SentEval tasks. We also include a bag-ofwords baseline (GloVe-BOW) obtained by averaging GloVe7 word vectors (Pennington et al., 2014). 5.1 Translation Quality We estimate translation quality of the various models using single-reference case-sensitive BLEU (Papineni et al., 2002) as implemented in Neural Monkey (the reference implementation is mteval-v13a.pl from NIST or Moses). Tables 4 and 5 provide the results on the two datasets. The cs dataset is much larger and the training takes much longer. We were thus able to experiment with only a subset of the possible model configurations. 5http://ufal.mff.cuni.cz/czeng/czeng17 6https://github.com/facebookresearch/ InferSent 7https://nlp.stanford.edu/projects/ glove/ 1367 Model Size Heads BLEU dev test de-ATTN — — 37.6 36.2 de-TRF — — 38.2 36.1 de-ATTN-ATTN 2400 12 36.2 34.8 de-ATTN-ATTN 1200 12 35.6 34.3 de-ATTN-ATTN 600 8 35.4 33.7 de-ATTN-ATTN 600 12 35.3 33.4 de-ATTN-ATTN 1200 6 35.0 33.2 de-ATTN-ATTN 600 6 35.1 33.2 de-TRF-ATTN-ATTN 600 3 32.3 30.1 de-ATTN-ATTN 600 3 31.4 29.4 de-ATTN-CTX 1200 12 30.6 29.2 de-ATTN-CTX 600 12 29.8 29.1 de-ATTN-CTX 600 8 29.8 28.9 de-ATTN-CTX 600 6 29.5 28.8 de-TRF-ATTN-ATTN 2400 12 30.6 28.5 de-MAXPOOL-CTX 600 — 27.8 28.1 de-FINAL-CTX 600 — 28.1 26.9 de-ATTN-CTX 600 3 27.8 26.9 de-AVGPOOL-CTX 600 — 27.1 26.5 de-ATTN-ATTN 600 1 27.2 26.0 de-TRF-ATTN-ATTN 600 6 26.5 25.8 de-TRF-ATTN-ATTN 1200 12 26.6 25.3 de-FINAL 600 — 23.9 23.8 Table 4: Translation quality of de models. Model Size Heads BLEU Manual dev test > others cs-ATTN — — 22.8 22.2 89.1 cs-ATTN-ATTN 1000 8 19.1 18.4 78.8 cs-ATTN-ATTN 4000 4 18.4 17.9 — cs-ATTN-ATTN 1000 4 17.5 17.1 — cs-ATTN-CTX 1000 4 16.6 16.1 58.8 cs-FINAL-CTX 1000 — 16.1 15.5 — cs-ATTN-ATTN 1000 1 15.3 14.8 49.1 cs-FINAL 1000 — 11.2 10.8 — cs-AVGPOOL 1000 — 11.1 10.6 — cs-MAXPOOL 1000 — 5.4 5.4 3.0 Table 5: Translation quality of cs models. 
The columns “Size” and “Heads” specify the total size of sentence representation and the number of heads of encoder inner attention. In both cases, the best performing is the ATTN Bahdanau et al. model, followed by Transformer (de only) and our ATTN-ATTN (compound attention). The non-attentive FINAL Cho et al. is the worst, except cs-MAXPOOL. For 5 selected cs models, we also performed the WMT-style 5-way manual ranking on 200 sentence pairs. The annotations are interpreted as simulated pairwise comparisons. For each model, the final score is the number of times the model was judged better than the other model in the pair. Tied pairs are excluded. The results, shown in Table 5, confirm the automatic evaluation results. We also checked the relation between BLEU and the number of heads and representation size. While there are many exceptions, the general tendency is that the larger the representation or the more heads, the higher the BLEU score. The Pearson correlation between BLEU and the number of heads is 0.87 for cs and 0.31 for de. 5.2 SentEval Due to the large number of SentEval tasks, we present the results abridged in two different ways: by reporting averages (Table 6) and by showing only the best models in comparison with other methods (Table 7). The full results can be found in the supplementary material. Table 6 provides averages of the classification and similarity results, along with the results of selected tasks (SNLI, SICK-E). As the baseline for classifications tasks, we assign the most frequent class to all test examples.8 The de models are generally worse, most likely due to the higher OOV rate and overall simplicity of the training sentences. On cs, we see a clear pattern that more heads hurt the performance. The de set has more variations to consider but the results are less conclusive. For the similarity results, it is worth noting that cs-ATTN-ATTN performs very well with 1 attention head but fails miserably with more heads. Otherwise, the relation to the number of heads is less clear. Table 7 compares our strongest models with other approaches on all tasks. Besides InferSent and GloVe-BOW, we include SkipThought as evaluated by Conneau et al. (2017), and the NMTbased embeddings by Hill et al. (2016) trained on the English-French WMT15 dataset (this is the best result reported by Hill et al. for NMT). We see that the supervised InferSent clearly outperforms all other models in all tasks except for MRPC and TREC. Results by Hill et al. are always lower than our best setups, except MRPC and TREC again. On classification tasks, our models are outperformed even by GloVe-BOW, except for the NLI tasks (SICK-E and SNLI) where csFINAL-CTX is better. 5.3 Paraphrase Scores Table 6 also provides our measurements based on sentence paraphrases. For paraphrase retrieval (NN), we found cosine distance to work better 8For MR, CR, SUBJ, and MPQA, where there is no distinct test set, the class is established on the whole collection. For other tasks, the class is learned from the training set. 1368 Name Size H. 
SNLI SICK-E AvgAcc AvgSim Hy-Cl Hy-NN Hy-iDB CO-Cl CO-NN CO-iDB InferSent 4096 — 83.7 86.4 81.7 .70 99.99 100.00 0.579 31.58 26.21 0.367 GloVe-BOW 300 — 66.0 78.2 75.8 .59 99.94 100.00 0.654 34.28 19.72 0.352 cs-FINAL-CTX 1000 — 70.2 82.1 74.4 .60 99.92 100.00 0.406 23.20 16.07 0.346 cs-ATTN-ATTN 1000 1 69.3 80.8 73.4 .54 99.88 99.91 0.347 21.54 11.50 0.331 cs-FINAL 1000 — 69.2 81.1 73.2 .60 99.91 100.00 0.439 22.40 14.63 0.340 cs-MAXPOOL 1000 — 68.5 81.7 73.0 .60 99.86 100.00 0.447 21.76 16.34 0.348 cs-AVGPOOL 1000 — 67.8 79.7 72.4 .50 99.80 99.99 0.387 17.90 8.61 0.311 cs-ATTN-CTX 1000 4 66.0 79.5 72.2 .45 99.75 99.74 0.287 14.60 7.54 0.318 cs-ATTN-ATTN 4000 4 65.2 78.0 71.2 .39 99.54 98.98 0.252 11.52 5.51 0.303 cs-ATTN-ATTN 1000 4 64.6 78.0 70.8 .39 99.26 98.93 0.253 10.84 5.20 0.299 cs-ATTN-ATTN 1000 8 63.2 76.6 70.0 .36 99.41 98.09 0.243 10.24 4.64 0.287 de-MAXPOOL-CTX 600 — 68.0 78.8 67.1 .50 98.42 99.90 0.343 21.54 15.62 0.341 de-ATTN-CTX 1200 12 65.0 77.4 66.7 .52 98.88 99.91 0.347 20.06 16.68 0.348 de-ATTN-CTX 600 8 64.0 75.7 65.8 .51 98.11 99.90 0.348 21.64 17.32 0.349 de-AVGPOOL-CTX 600 — 65.2 77.5 65.6 .48 97.72 99.60 0.312 20.04 14.27 0.337 de-ATTN-CTX 600 12 61.9 76.0 65.5 .50 97.79 99.89 0.360 20.22 16.10 0.344 de-FINAL 600 — 64.7 77.0 65.3 .47 97.01 99.30 0.305 19.88 12.40 0.328 de-ATTN-CTX 600 3 63.3 76.0 65.3 .50 97.81 99.87 0.328 19.74 16.43 0.343 de-ATTN-ATTN 600 1 63.8 76.9 64.8 .50 97.70 99.73 0.352 19.74 16.26 0.340 de-ATTN-ATTN 600 3 61.5 74.7 64.5 .47 97.42 99.75 0.314 17.36 14.35 0.333 de-FINAL-CTX 600 — 62.6 76.2 64.5 .48 96.65 99.70 0.323 17.22 12.84 0.333 de-ATTN-ATTN 1200 6 59.6 72.3 64.3 .41 98.05 99.80 0.289 11.90 10.69 0.327 de-TRF-ATTN-ATTN 600 3 61.4 72.5 63.9 .49 95.79 99.64 0.315 15.76 14.04 0.340 de-ATTN-ATTN 1200 12 58.2 72.5 63.4 .43 97.15 99.65 0.283 12.18 11.97 0.330 de-ATTN-ATTN 2400 12 59.8 73.9 63.2 .41 98.69 99.77 0.287 10.26 10.94 0.326 de-TRF-ATTN-ATTN 2400 12 59.0 71.2 63.0 .46 95.82 99.03 0.307 5.66 14.53 0.339 de-ATTN-ATTN 600 6 57.5 70.9 62.6 .40 96.03 99.71 0.287 12.22 10.59 0.323 de-ATTN-ATTN 600 8 55.6 68.6 62.1 .39 95.32 99.73 0.275 10.22 10.58 0.325 de-TRF-ATTN-ATTN 600 6 59.5 71.0 61.9 .45 90.24 98.44 0.313 9.06 13.64 0.332 de-ATTN-ATTN 600 12 55.2 70.5 61.5 .40 95.16 99.64 0.278 9.62 10.47 0.323 de-TRF-ATTN-ATTN 1200 12 58.2 68.8 61.1 .46 90.71 98.22 0.301 7.06 13.70 0.333 de-ATTN-CTX 600 6 62.9 68.7 61.0 .43 98.11 99.86 0.358 20.44 15.57 0.342 LM perplexity (cs) 190.6 299.4 1150.2 1224.2 668.5 238.5 % OOV (cs) 0.3 0.2 2.3 2.6 1.2 0.1 LM perplexity (de) 38.8 65.0 3578.2 2010.6 3354.8 86.3 % OOV (de) 1.5 1.7 17.8 16.2 19.3 1.9 Table 6: Abridged SentEval and paraphrase evaluation results. Full results in supplementary material. AvgAcc is the average of all 10 SentEval classification tasks (see Table S1 in supplementary material), AvgSim averages all 7 similarity tasks (see Table S2). Hy- and CO- stand for HyTER and COCO, respectively. “H.” is the number of attention heads. We give the out-of-vocabulary (OOV) rate and the perplexity of a 4-gram language model (LM) trained on the English side of the respective parallel corpus and evaluated on all available data for a given task. Name Size H. MR CR SUBJ MPQA SST2 SST5 TREC MRPC SICK-E SNLI AvgAcc Most frequent baseline 50.0 63.8 50.0 68.8 49.9 23.1 18.8 66.5 56.7 34.3 48.19 InferSent 4096 — 81.5 86.7 92.7 90.6 85.0 45.8 88.2 76.6 86.4 (83.7) 81.7 Hill et al. 
en→fr† 2400 — 64.7 70.1 84.9 81.5 — — 82.8 96.1 — — — SkipThought-LN† — — 79.4 83.1 93.7 89.3 82.9 — 88.4 — 79.5 — — GloVe-BOW 300 — 77.0 78.2 91.1 87.9 81.0 44.4 82.0 72.3 78.2 66.0 75.8 cs-FINAL-CTX 1000 — 68.7 77.4 88.5 85.5 73.0 38.2 88.6 71.8 82.1 70.2 74.4 cs-ATTN-ATTN 1000 1 68.2 76.0 86.9 84.9 72.0 35.7 89.0 70.7 80.8 69.3 73.4 Name Size H. SICK-R STSB STS12 STS13 STS14 STS15 STS16 AvgSim InferSent 4096 — .88/.83 .76/.75 .59/.60 .59/.59 .70/.67 .71/.72 .71/.73 .70 SkipThought-LN† — — .85/ — — — — .44/.45 — — — GloVe-BOW 300 — .80/.72 .64/.62 .52/.53 .50/.51 .55/.56 .56/.59 .51/.58 .59 cs-FINAL-CTX 1000 — .82/.76 .74/.74 .51/.53 .44/.44 .52/.50 .62/.61 .57/.58 .60 cs-ATTN-ATTN 1000 1 .81/.76 .73/.73 .46/.49 .32/.33 .45/.44 .53/.52 .47/.48 .54 Table 7: Comparison of state-of-the-art SentEval results with our best models and the Glove-BOW baseline. “H.” is the number of attention heads. Reprinted results are marked with †, others are our measurements. 1369 BLEU MR CR SUBJ MPQA SST2 SST5 TREC MRPC SICK-E SNLI AvgAcc SICK-R STSB STS12 STS13 STS14 STS15 STS16 AvgSim Hy-Cl Hy-NN Hy-iDB CO-Cl CO-NN CO-iDB BLEU MR CR SUBJ MPQA SST2 SST5 TREC MRPC SICK-E SNLI AvgAcc SICK-R STSB STS12 STS13 STS14 STS15 STS16 AvgSim Hy-Cl Hy-NN Hy-iDB CO-Cl CO-NN CO-iDB −1.00 −0.75 −0.50 −0.25 0.00 0.25 0.50 0.75 1.00 Figure 2: Pearson correlations. Upper triangle: de models, lower triangle: cs models. Positive values shown in shades of green. For similarity tasks, only the Pearson (not Spearman) coefficient is represented. than L2 distance. We therefore do not list L2based results (except in the supplementary material). This evaluation seems less stable and discerning than the previous two, but we can again confirm the victory of InferSent followed by our nonattentive cs models. cs and de models are no longer clearly separated. 6 Discussion To assess the relation between the various measures of sentence representations and translation quality as estimated by BLEU, we plot a heatmap of Pearson correlations in Fig. 2. As one example, Fig. 3 details the cs models’ BLEU scores and AvgAcc. A good sign is that on the cs dataset, most metrics of representation are positively correlated (the pairwise Pearson correlation is 0.78 ± 0.32 on average), the outlier being TREC (−0.16±0.16 correlation with the other metrics on average) On the other hand, most representation metrics correlate with BLEU negatively (−0.57±0.31) on cs. The pattern is less pronounced but still clear also on the de dataset. A detailed understanding of what the learned representations contain is difficult. We can only 5.0 7.5 10.0 12.5 15.0 17.5 BLEU-test 70 71 72 73 74 AvgAcc avgpool 1000 attn-attn 1000, 1 head attn-attn 1000, 4 heads attn-attn 1000, 8 heads attn-attn 4000, 4 heads attn-ctx 1000, 4 heads maxpool 1000 final-ctx 1000 final 1000 Figure 3: BLEU vs. AvgAcc for cs models. speculate that if the NMT model has some capability for following the source sentence superficially, it will use it and spend its capacity on closely matching the target sentences rather than on deriving some representation of meaning which would reflect e.g. semantic similarity. We assume that this can be a direct consequence of NMT being trained for cross entropy: putting the exact word forms in exact positions as the target sentence requires. Performing well in single-reference BLEU is not an indication that the system understands the meaning but rather that it can maximize the chance of producing the n-grams required by the reference. 
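The Fig. 2 analysis reduces to a Pearson correlation matrix over per-model scores, which can be reproduced with a few lines; the DataFrame layout assumed here (rows are models, columns are BLEU and the representation measures, with illustrative column names) is not taken from the released results files.

```python
import pandas as pd
import matplotlib.pyplot as plt

def metric_correlations(df: pd.DataFrame) -> pd.DataFrame:
    """Pairwise Pearson correlations between evaluation measures across models."""
    return df.corr(method="pearson")

def plot_heatmap(corr: pd.DataFrame):
    fig, ax = plt.subplots()
    im = ax.imshow(corr.values, vmin=-1, vmax=1, cmap="PiYG")
    ax.set_xticks(range(len(corr.columns)))
    ax.set_xticklabels(corr.columns, rotation=90)
    ax.set_yticks(range(len(corr.index)))
    ax.set_yticklabels(corr.index)
    fig.colorbar(im, ax=ax)
    return fig

# e.g. metric_correlations(results_cs).loc["BLEU"] lists how each representation
# measure correlates with translation quality for the cs models.
```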
The negative correlation between the number of attention heads and the representation metrics from Fig. 3 (−0.81±0.12 for cs and −0.18±0.19 for de, on average) can be partly explained by the following observation. We plotted the induced alignments (e.g. Fig. 4) and noticed that the heads tend to “divide” the sentence into segments. While one would hope that the segments correspond to some meaningful units of the sentence (e.g. subject, predicate, object), we failed to find any such interpretation for ATTN-ATTN and for cs models in general. Instead, the heads divide the source sentence more or less equidistantly, as documented by Fig. 5. Such a multi-headed sentence representation is then less fit for representing e.g. paraphrases where the subject and object swap their position due to passivization, because their representations are then accessed by different heads, and thus end up in different parts of the sentence embedding vector. For de-ATTN-CTX models, we observed a much 1370                                           !        "#$ Figure 4: Alignment between a source sentence (left) and the output (right) as represented in the ATTN-ATTN model with 8 heads and size of 1000. Each color represents a different head; the stroke width indicates the alignment weight; weights ≤ 0.01 pruned out. (Best viewed in color.) flatter distribution of attention weights for each head and, unlike in the other models, we were often able to identify a head focusing on the main verb. This difference between ATTN-ATTN and some ATTN-CTX models could be explained by the fact that in the former, the decoder is oblivious to the ordering of the heads (because of decoder attention), and hence it may not be useful for a given head to look for a specific syntactic or semantic role. 7 Conclusion We presented a novel variation of attentive NMT models (Bahdanau et al., 2014; Vaswani et al., 2017) that again provides a single meeting point with a continuous representation of the source sentence. We evaluated these representations with a           Figure 5: Attention weight by relative position in the source sentence (average over dev set excluding sentences shorter than 8 tokens). Same model as in Fig. 4. Each plot corresponds to one head. number of measures reflecting how well the meaning of the source sentence is captured. While our proposed “compound attention” leads to translation quality not much worse than the fully attentive model, it generally does not perform well in the meaning representation. Quite on the contrary, the better the BLEU score, the worse the meaning representation. We believe that this observation is important for representation learning where bilingual MT now seems less likely to provide useful data, but perhaps more so for MT itself, where the struggle towards a high single-reference BLEU score (or even worse, cross entropy) leads to systems that refuse to consider the meaning of the sentence. Acknowledgement This work has been supported by the grants 18-24210S of the Czech Science Foundation, SVV 260 453 and “Progress” Q18+Q48 of Charles University, and using language resources distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (projects LM2015071 and OP VVV VI CZ.02.1.01/0.0/0.0/16 013/0001781). References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by 1371 jointly learning to align and translate. CoRR, abs/1409.0473. Ondˇrej Bojar et al. 2016. 
CzEng 1.6: Enlarged Czech-English Parallel Corpus with Processing Tools Dockered. In Text, Speech, and Dialogue (TSD), number 9924 in LNAI, pages 231–238. Kyunghyun Cho, Bart van Merrienboer, Çağlar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: deep neural networks with multitask learning. In ICML. Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In EMNLP. David L. Davies and Donald W. Bouldin. 1979. A cluster separation measure. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-1:224–227. William B. Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In COLING. Markus Dreyer and Daniel Marcu. 2014. HyTER networks of selected OpenMT08/09 sentences. Linguistic Data Consortium. LDC2014T09. Desmond Elliott, Stella Frank, Khalil Sima'an, and Lucia Specia. 2016. Multi30k: Multilingual English-German image descriptions. CoRR, abs/1605.00459. Jindřich Helcl and Jindřich Libovický. 2017a. Neural Monkey: An open-source tool for sequence learning. The Prague Bulletin of Mathematical Linguistics, 107(1):5–17. Jindřich Helcl and Jindřich Libovický. 2017b. CUNI System for the WMT17 Multimodal Translation Task. Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In HLT-NAACL. Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2015. Skip-thought vectors. In NIPS Vol. 2, NIPS'15, pages 3294–3302. Peng Li, Wei Li, Zhengyan He, Xuguang Wang, Ying Cao, Jie Zhou, and Wei Xu. 2016. Dataset and neural recurrent sequence labeling model for open-domain factoid question answering. CoRR, abs/1607.06275. Tsung-Yi Lin, Michael Maire, Serge J. Belongie, Lubomir D. Bourdev, Ross B. Girshick, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. CoRR, abs/1405.0312. Zhouhan Lin, Minwei Feng, Cícero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. CoRR, abs/1703.03130. Yang Liu, Chengjie Sun, Lei Lin, and Xiaolong Wang. 2016. Learning natural language inference using bidirectional LSTM model and inner-attention. CoRR, abs/1605.09090. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In ACL, pages 311–318. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP, pages 1532–1543. Holger Schwenk and Matthijs Douze. 2017. Learning joint multilingual sentence representations with neural machine translation. CoRR, abs/1704.04154. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. CoRR, abs/1511.06709. Rico Sennrich et al. 2017. Nematus: a toolkit for neural machine translation. In EACL. Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural MT learn source syntax? In EMNLP. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014.
Sequence to sequence learning with neural networks. In NIPS. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. Keiji Yasuda, Ruiqiang Zhang, Hirofumi Yamamoto, and Eiichiro Sumita. 2008. Method of selecting training data to build a compact and efficient translation model. In IJCNLP.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1372–1382 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1372 Automatic Metric Validation for Grammatical Error Correction Leshem Choshen1 and Omri Abend1,2 1School of Computer Science and Engineering, 2 Department of Cognitive Sciences The Hebrew University of Jerusalem [email protected], [email protected] Abstract Metric validation in Grammatical Error Correction (GEC) is currently done by observing the correlation between human and metric-induced rankings. However, such correlation studies are costly, methodologically troublesome, and suffer from low inter-rater agreement. We propose MAEGE, an automatic methodology for GEC metric validation, that overcomes many of the difficulties with existing practices. Experiments with MAEGE shed a new light on metric quality, showing for example that the standard M2 metric fares poorly on corpus-level ranking. Moreover, we use MAEGE to perform a detailed analysis of metric behavior, showing that correcting some types of errors is consistently penalized by existing metrics. 1 Introduction Much recent effort has been devoted to automatic evaluation, both within GEC (Napoles et al., 2015; Felice and Briscoe, 2015; Ng et al., 2014; Dahlmeier and Ng, 2012, see §2), and more generally in text-to-text generation tasks. Within Machine Translation (MT), an annual shared task is devoted to automatic metric development, accompanied by an extensive analysis of metric behavior (Bojar et al., 2017). Metric validation is also raising interest in GEC, with several recent works on the subject (Grundkiewicz et al., 2015; Napoles et al., 2015, 2016b; Sakaguchi et al., 2016), all using correlation with human rankings (henceforth, CHR) as their methodology. Human rankings are often considered as ground truth in text-to-text generation, but using them reliably can be challenging. Other than the costs of compiling a sizable validation set, human rankings are known to yield poor inter-rater agreement in MT (Bojar et al., 2011; Lopez, 2012; Graham et al., 2012), and to introduce a number of methodological problems that are difficult to overcome, notably the treatment of ties in the rankings and uncomparable sentences (see §3). These difficulties have motivated several proposals to alter the MT metric validation protocol (Koehn, 2012; Dras, 2015), leading to a recent abandoning of evaluation by human rankings due to its unreliability (Graham et al., 2015; Bojar et al., 2016). These conclusions have not yet been implemented in GEC, despite their relevance. In §3 we show that human rankings in GEC also suffer from low inter-rater agreement, motivating the development of alternative methodologies. The main contribution of this paper is an automatic methodology for metric validation in GEC called MAEGE (Methodology for Automatic Evaluation of GEC Evaluation), which addresses these difficulties. MAEGE requires no human rankings, and instead uses a corpus with gold standard GEC annotation to generate lattices of corrections with similar meanings but varying degrees of grammaticality. For each such lattice, MAEGE generates a partial order of correction quality, a quality score for each correction, and the number and types of edits required to fully correct each. It then computes the correlation of the induced partial order with the metric-induced rankings. 
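As a preview of the formal construction in §4, the following minimal sketch shows how a set of gold edits induces a lattice of candidate corrections ordered by the subset relation; the Edit representation and all names are illustrative and not taken from the released MAEGE code.

```python
from itertools import combinations
from typing import FrozenSet, List, NamedTuple, Tuple


class Edit(NamedTuple):
    span: Tuple[int, int]   # token span [start, end) in the original sentence
    replacement: str        # substitute string (empty string = deletion)
    etype: str              # error type, e.g. "ArtOrDet"


def apply_edits(tokens: List[str], edits: FrozenSet[Edit]) -> str:
    """Apply a subset of the (non-overlapping) gold edits to the original tokens."""
    out = list(tokens)
    # Apply right-to-left so that earlier spans keep their token indices valid.
    for e in sorted(edits, key=lambda e: e.span[0], reverse=True):
        start, end = e.span
        out[start:end] = e.replacement.split() if e.replacement else []
    return " ".join(out)


def lattice(gold_edits: List[Edit]) -> List[FrozenSet[Edit]]:
    """Every subset of the gold edits is one node of the lattice (a partial correction)."""
    return [frozenset(c) for r in range(len(gold_edits) + 1)
            for c in combinations(gold_edits, r)]


def better(x: FrozenSet[Edit], y: FrozenSet[Edit]) -> bool:
    """Gold partial order: y is a better correction than x iff x is a proper subset of y."""
    return x < y


if __name__ == "__main__":
    tokens = "I want book".split()
    gold = [Edit((2, 3), "a book", "ArtOrDet")]
    for node in lattice(gold):
        print(sorted(e.etype for e in node), "->", apply_edits(tokens, node))
```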
MAEGE addresses many of the problems with existing methodology: • Human rankings yield low inter-rater and intra-rater agreement (§3). Indeed, Choshen and Abend (2018a) show that while annotators often generate different corrections given a sentence, they generally agree on whether a correction is valid or not. Unlike CHR, MAEGE bases its scores on human corrections, rather than on rankings. 1373 • CHR uses system outputs to obtain human rankings, which may be misleading, as systems may share similar biases, thus neglecting to evaluate some types of valid corrections (§7). MAEGE addresses this issue by systematically traversing an inclusive space of corrections. • The difficulty in handling ties is addressed by only evaluating correction pairs where one contains a sub-set of the errors of the other, and is therefore clearly better. • MAEGE uses established statistical tests for determining the significance of its results, thereby avoiding ad-hoc methodologies used in CHR to tackle potential biases in human rankings (§5, §6). In experiments on the standard NUCLE test set (Dahlmeier et al., 2013), we find that MAEGE often disagrees with CHR as to the quality of existing metrics. For example, we find that the standard GEC metric, M2, is a poor predictor of corpuslevel ranking, but a good predictor of sentencelevel pair-wise rankings. The best predictor of corpus-level quality by MAEGE is the referenceless LT metric (Miłkowski, 2010; Napoles et al., 2016b), while of the reference-based metrics, GLEU (Napoles et al., 2015) fares best. In addition to measuring metric reliability, MAEGE can also be used to analyze the sensitivities of the metrics to corrections of different types, which to our knowledge is a novel contribution of this work. Specifically, we find that not only are valid edits of some error types better rewarded than others, but that correcting certain error types is consistently penalized by existing metrics (Section 7). The importance of interpretability and detail in evaluation practices (as opposed to just providing bottom-line figures), has also been stressed in MT evaluation (e.g., Birch et al., 2016). 2 Examined Metrics We turn to presenting the metrics we experiment with. The standard practice in GEC evaluation is to define differences between the source and a correction (or a reference) as a set of edits (Dale et al., 2012). An edit is a contiguous span of tokens to be edited, a substitute string, and the corrected error type. For example: “I want book” might have an edit (2-3, “a book”, ArtOrDet); applying the edit results in “I want a book”. Edits are defined (by the annotation guidelines) to be maximally independent, so that each edit can be applied independently of the others. We denote the examined set of metrics with METRICS. BLEU. BLEU (Papineni et al., 2002) is a reference-based metric that averages the outputreference n-gram overlap precision values over different ns. While commonly used in MT and other text generation tasks (Sennrich et al., 2017; Krishna et al., 2017; Yu et al., 2017), BLEU was shown to be a problematic metric in monolingual translation tasks, in which much of the source sentence should remain unchanged (Xu et al., 2016). We use the NLTK implementation of BLEU, using smoothing method 3 by Chen and Cherry (2014). GLEU. GLEU (Napoles et al., 2015) is a reference-based GEC metric inspired by BLEU. Recently, it was updated to better address multiple references (Napoles et al., 2016a). 
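Stepping back to BLEU for a moment: as noted above, it is computed here with the NLTK implementation using smoothing method 3 of Chen and Cherry (2014). A minimal sketch of that call follows (the sentences are invented placeholders); GLEU's actual scoring rule is described immediately after this sketch.

```python
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

# Illustrative tokenized output and references; any sentences work here.
output = "I want a book".split()
references = ["I want a book".split(), "I would like a book".split()]

# Smoothing method 3 (Chen and Cherry, 2014), as used in the experiments.
smooth = SmoothingFunction().method3
score = sentence_bleu(references, output, smoothing_function=smooth)
print(f"smoothed sentence-level BLEU: {score:.3f}")
```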
GLEU rewards n-gram overlap of the correction with the reference and penalizes unchanged n-grams in the correction that are changed in the reference. iBLEU. iBLEU (Sun and Zhou, 2012) was introduced to monolingual translation in order to balance BLEU, by averaging it with the BLEU score of the source and the output. This yields a metric that rewards similarity to the source, and not only overlap with the reference: iBLEU(S, R, O) = αBLEU(O, R)−(1−α)BLEU(O, S) We set α = 0.8 as suggested by Sun and Zhou. F-Score computes the overlap of edits to the source in the reference, and in the output. As system edits can be constructed in multiple ways, the standard M2 scorer (Dahlmeier and Ng, 2012) computes the set of edits that yields the maximum F-score. As M2 requires edits from the source to the reference, and as MAEGE generates new source sentences, we use an established protocol to automatically construct edits from pairs of strings (Felice et al., 2016; Bryant et al., 2017). The protocol was shown to produce similar M2 scores to those produced with manual edits. Following common practice, we use the Precision-oriented F0.5. SARI. SARI (Xu et al., 2016) is a referencebased metric proposed for sentence simplification. 1374 SARI averages three scores, measuring the extent to which n-grams are correctly added to the source, deleted from it and retained in it. Where multiple references are present, SARI’s score is determined not as the maximum single-reference score, but some averaging over them. As this may lead to an unintuitive case, where a correction which is identical to the output gets a score of less than 1, we experiment with an additional metric, MAX-SARI, which coincides with SARI for a single reference, and computes the maximum singlereference SARI score for multiple-references. Levenshtein Distance. We use the Levenshtein distance (Kruskal and Sankoff, 1983), i.e., the number of character edits needed to convert one string to another, between the correction and its closest reference (MinLDO→R). To enrich the discussion, we also report results with a measure of conservatism, LDS→O, i.e., the Levenshtein distance between the correction and the source. Both distances are normalized by the number of characters in the second string (R, O respectively). In order to convert these distance measures into measures of similarity, we report 1 −LD(c1,c2) len(c1) . Grammaticality is a reference-less metric, which uses grammatical error detection tools to assess the grammaticality of GEC system outputs. We use LT (Miłkowski, 2010), the best performing non-proprietary grammaticality metric (Napoles et al., 2016b). The detection tool at the base of LT can be much improved. Indeed, Napoles et al. (2016b) reported that the proprietary tool they used detected 15 times more errors than LT. A sentence’s score is defined to be 1 −#errors #tokens. See (Asano et al., 2017; Choshen and Abend, 2018b) for additional reference-less measures, published concurrently with this work. I-Measure. I-Measure (Felice and Briscoe, 2015) is a weighted accuracy metric over tokens. I-measure rank determines whether a correction is better than the source and to what extent. Unlike in this paper, I-measure assumes that every pair of intersecting edits (i.e., edits whose spans of tokens overlap) are alternating, and that non-intersecting edits are independent. Consequently, where multiple references are present, it extends the set of references, by generating every possible combination of independent edits. 
As the number of combinations is generally exponential in the number of references, the procedure can be severely inefficient. Figure 1: Histogram and rug plot of the log number of references under I-measure assumptions, i.e. overlapping edits alternate as valid corrections of the same error. There are billions of ways to combine 8 references on average. Indeed, a sentence in the test set has 3.5 billion references on average, where the median is 512 (See Figure 1). I-measure can also be run without generating new references, but despite parallelization efforts, this version did not terminate after 140 CPU days, while the cumulative CPU time of the rest of the metrics was less than 1.5 days. 3 Human Ranking Experiments Correlation with human rankings (CHR) is the standard methodology for assessing the validity of GEC metrics. While informative, human rankings are costly to produce, present low inter-rater agreement (shown for MT evaluation in (Bojar et al., 2011; Dras, 2015)), and introduce methodological difficulties that are hard to overcome. We begin by showing that existing sets of human rankings produce inconsistent results with respect to the quality of different metrics, and proceed by proposing an improved protocol for computing this correlation in the future. There are two existing sets of human rankings for GEC that were compiled concurrently: GJG15 by Grundkiewicz et al. (2015), and NSPT15 by Napoles et al. (2015). Both sets are based on system outputs from the CoNLL 2014 (Ng et al., 2014) shared task, using sentences from the NUCLE test set. We compute CHR against each. System-level correlations are computed by TrueSkill (Sakaguchi et al., 2014), which adopts its methodology from MT.1 1There’s a minor problem in the output of the NTHU system: a part of the input is given as sentence 39 and sentence 43 is missing. We corrected it to avoid unduly penalizing NTHU for all the sentences in this range. 1375 Table 1 shows CHR with Spearman ρ (Pearson r shows similar trends). Results on the two datasets diverge considerably, despite their use of the same systems and corpus (albeit a different sub-set thereof). For example, BLEU receives a high positive correlation on GJG15, but a negative one on NSPT15; GLEU receives a correlation of 0.51 against GJG15 and 0.76 against NSPT15; and M2 ranges between 0.4 (GJG15) and 0.7 (NSPT15). In fact, this variance is already apparent in the published correlations of GLEU, e.g., Napoles et al. (2015) reported a ρ of 0.56 against NSPT15 and Napoles et al. (2016b) reported a ρ of 0.85 against GJG15.2 This variance in the metrics’ scores is an example of the low agreement between human rankings, echoing similar findings in MT (Bojar et al., 2011; Lopez, 2012; Dras, 2015). Another source of inconsistency in CHR is that the rankings are relative and sampled, so datasets rank different sets of outputs (Lopez, 2012). For example, if a system is judged against the best systems more often then others, it may unjustly receive a lower score. TrueSkill is the best known practice to tackle such issues (Bojar et al., 2014), but it produces a probabilistic corpus-level score, which can vary between runs (Sakaguchi et al., 2016).3 This makes CHR more difficult to interpret, compared to classic correlation coefficients. We conclude by proposing a practice for reporting CHR in future work. First, we combine both sets of human judgments to arrive at the statistically most powerful test. 
Second, we compute the metrics’ corpus-level rankings according to the same subset of sentences used for human rankings. The current practice of allowing metrics to rank systems based on their output on the entire CoNLL test set (while human rankings are only collected for a sub-set thereof), may bias the results due to potential non-uniform system performance on the test set. We report CHR according to the proposed protocol in Table 1 (left column). 4 Constructing Lattices of Corrections In the following sections we present MAEGE an alternative methodology to CHR, which uses human corrections to induce more reliable and scalable rankings to compare metrics against. We begin our presentation by detailing the method MAEGE 2The difference between our results and previously reported ones is probably due to a recent update in GLEU to better tackles multiple references (Napoles et al., 2016a). 3The standard deviation of the results is about 0.02. Combined GJG15 NSPT15 ρ P-val ρ Rank ρ Rank GLEU 0.771 0.001 0.512 1 0.758 1 LT 0.692 0.006 0.358 4 0.615 3 M 2 0.626 0.017 0.398 3 0.703 2 SARI 0.596 0.025 0.323 6 0.599 4 MAX-SARI 0.552 0.041 0.292 7 0.577 5 MinLDO→R 0.191 0.513 0.350 5 -0.187 7 BLEU 0.143 0.626 0.455 2 -0.126 6 iBLEU -0.059 0.840 0.226 8 -0.462 8 LDS→O -0.481 0.081 -0.178 -0.505 Table 1: Metrics correlation with human judgments. The Combined column presents the Spearman correlation coefficient (ρ) according to the combined set of human rankings, with its associated P-value. The GJG15 and NSPT15 columns present the Spearman correlation according to the two sets of human rankings, as well as the rank of the metric according to this correlation. Measures are ordered by their rank in the combined human judgments. The discrepancy between the ρ values obtained against GJG15 and NSPT15 demonstrate low inter-rater agreement in human rankings. R(1) 1 R(1) k · · · O1 R(n) 1 R(n) k · · · · · · On Figure 2: An illustration of the generated corrections lattices. The Ois are the original sentences, directed edges represent an application of an edit and R(i) j is the j-th perfect correction of Oi (i.e., the perfect correction that result from applying all the edits of the j-th annotation of Oi). uses to generate source-correction pairs and a partial order between them. MAEGE operates by using a corpus with gold annotation, given as edits, to generate lattices of corrections, each defined by a sub-set of the edits. Within the lattice, every pair of sentences can be regarded as a potential source and a potential output. We create sentence chains, in an increasing order of quality, taking a source sentence and applying edits in some order one after the other (see Figure 2 and 3). Formally, for each sentence s in the corpus and each annotation a, we have a set of typed edits edits(s, a) = {e(1) s,a, . . . , e(ns,a) s,a } of size ns,a. We call 2edits(s,a) the corrections lattice, and denote it with Es,a. We call, s, the correction corresponding to ∅the original. We define a partial order relation between x, y ∈Es,a such that x < y if x ⊂y. This order relation is assumed to be the gold standard ranking between the corrections. For our experiments, we use the NUCLE test data (Ng et al., 2014). Each sentence is paired with two annotations. The other eight available 1376 Social media makes our life patten so fast and left us less time to think about our life. Social media makes our life patten so fast and leave us less time to think about our life. 
Social media make our life patten so fast and leave us less time to think about our life. Social media make our pace of life so fast and leave us less time to think about our life. left leave makes make life patten pace of life Figure 3: An example chain from a corrections lattice – each sentence is the result of applying a single edit to the sentence below it. The top sentence is a perfect correction, while the bottom is the original. Figure 4: A scatter plot of the corpus-level correlation of metrics according to the different methodologies. The x-axis corresponds to the correlation according to human rankings (Combined setting), and the y-axis corresponds to the correlation according to MAEGE. While some get similar correlation (e.g., GLEU), other metrics change drastically (e.g., SARI). references, produced by Bryant and Ng (2015), are used as references for the reference-based metrics. Denote the set of references for s with Rs. Sentences which require no correction according to at least one of the two annotations are discarded. In 26 cases where two edit spans intersect in the same annotation (out of a total of about 40K edits), the edits are manually merged or split. 5 Corpus-level Analysis We conduct a corpus-level analysis, namely testing the ability of metrics to determine which corpus of corrections is of better quality. In practice, this procedure is used to rank systems based on their outputs on the test corpus. In order to compile corpora corresponding to systems of different quality levels, we define several corpus models, each applying a different expected number of edits to the original. Models are denoted with the expected number of edits they apply to the original which is a positive number M ∈R+. Given a corpus model M, we generate a corpus of corrections by traversing the original sentences, and for each sentence s uniformly sample an annotation a (i.e., a set of edits that results in a perfect correction), and the number of edits applied nedits, which is sampled from a clipped binomial probability with mean M and variance 0.9. Given nedits, we uniformly sample from the lattice Es,a a sub-set of edits of size nedits, and apply this set of edits to s. The corpus of M = 0 is the set of originals. The corpus of source sentences, against which all other corpora are compared, is sampled by traversing the original sentences, and for each sentence s, uniformly sample an annotation a, and given s, a, uniformly sample a sentence from Es,a. Given a metric m ∈METRICS, we compute its score for each sampled corpus. Where corpuslevel scores are not defined by the metrics themselves, we use the average sentence score instead. We compare the rankings induced by the scores of m and the ranking of systems according to their corpus model (i.e., systems that have a higher M should be ranked higher), and report the correlation between these rankings. 5.1 Experiments Setup. For each model, we sample one correction per NUCLE sentence, noting that it is possible to reduce the variance of the metrics’ corpuslevel scores by sampling more. Corpus models of integer values between 0 and 10 are taken. We report Spearman ρ, commonly used for system-level rankings (Bojar et al., 2017).4 Results. Results, presented in Table 2 (left part), shows that LT correlates best with the rankings induced by MAEGE, where GLEU is second. M2’s correlation is only 0.06. We note that the LT requires a complementary metric to penalize grammatical outputs that diverge in meaning from the source (Napoles et al., 2016b). 
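Backing up to the corpus-generation procedure of §5, a minimal sketch of the sampling loop is given below. The binomial parameterization (mean M, variance 0.9) is one plausible reading of the description above, not necessarily the released implementation, and all names are illustrative; `apply_edits` refers to the helper sketched earlier.

```python
import random
import numpy as np


def sample_num_edits(M: float, max_edits: int) -> int:
    """Number of edits to apply under corpus model M: a clipped binomial.

    One way to hit mean M and variance ~0.9 (an assumption, not necessarily the
    paper's exact parameterization): n * p = M and n * p * (1 - p) = 0.9, so
    p = 1 - 0.9 / M and n = M / p.
    """
    if M <= 0.9:                        # M = 0 is the corpus of originals
        return 0
    p = 1.0 - 0.9 / M
    n = max(1, round(M / p))
    return int(min(np.random.binomial(n, p), max_edits))


def sample_correction(tokens, annotations, M, apply_edits):
    """Sample one correction of a sentence under corpus model M.

    `annotations` is a list of gold edit sets (one per annotator);
    `apply_edits(tokens, subset)` renders a subset of edits (see the earlier sketch).
    """
    edits = list(random.choice(annotations))        # uniformly pick an annotation
    k = sample_num_edits(M, len(edits))             # how many edits to apply
    subset = frozenset(random.sample(edits, k))     # uniform subset of that size
    return apply_edits(tokens, subset)
```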
See §8. Comparing the metrics’ quality in corpus-level evaluation with their quality according to CHR (§3), we find they are often at odds. Figure 4 plots the Spearman correlation of the different metrics according to the two validation methodologies, 4Using Pearson correlation shows similar trends. 1377 Corpus-level Sentence-level ρ P-val r P-val τ P-val iBLEU 0.418 0.200 0.230 † 0.050 † M 2 0.060 0.853 -0.025 0.024 0.213 † LT 0.973 † 0.167 † 0.222 † BLEU 0.564 0.071 0.214 † 0.111 † MinLDO→R -0.867 † 0.011 0.327 -0.183 † GLEU 0.736 0.001 0.189 † -0.028 † MAX-SARI -0.809 0.003 0.027 0.015 -0.070 † SARI -0.545 0.080 0.061 † -0.039 † LDS→O -0.118 0.729 0.109 † 0.094 † Table 2: Corpus-level Spearman ρ, sentence-level Pearson r and Kendall τ with the metrics (left). † represents P-value < 0.001. LT correlates best at the corpus level and has the highest sentence-level τ, while iBLEU has the highest sentence-level r. Figure 5: Average GLEU score of originals (y-axis), plotted against the number of errors they contain (x-axis). Their substantial correlation indicates that GLEU is globally reliable. showing correlations are slightly correlated, but disagreements as to metric quality are frequent and substantial (e.g., with iBLEU or SARI). 6 Sentence-level Analysis We proceed by presenting a method for assessing the correlation between metric-induced scores of corrections of the same sentence, and the scores given to these corrections by MAEGE. Given a sentence s and an annotation a, we sample a random permutation over the edits in edits(s, a). We denote the permutation with σ ∈Sns,a, where Sns,a is the permutation group over {1, · · · , ns,a}. Given σ, we define a monotonic chain in Ei,j as: chain(s, a, σ) =  ∅< {e(σ(1)) s,a } < {e(σ(1)) s,a , e(σ(2)) s,a } < . . . < edits(s, a)  For each chain, we uniformly sample one of its elements, mark it as the source, and denote it with src. In order to generate a set of chains, MAEGE traverses the original sentences and annotations, and for each sentence-annotation pair, uniformly samples nch chains without repetition. It then uniformly samples a source sentence from each chain. If the number of chains in Es,a is smaller than nch, MAEGE selects all the chains. Given a metric m ∈METRICS, we compute its score for every correction in each sampled chain against the sampled source and available references. We compute the sentence-level correlation of the rankings induced by the scores of m and the rankings induced by <. For computing rank correlation (such as Spearman ρ or Kendall τ), such a relative ranking is sufficient. We report Kendall τ, which is only sensitive to the relative ranking of correction pairs within the same chain. Kendall is minimalistic in its assumptions, as it does not require numerical scores, but only assuming that < is well-motivated, i.e., that applying a set of valid edits is better in quality than applying only a subset of it. As < is a partial order, and as Kendall τ is standardly defined over total orders, some modification is required. τ is a function of the number of compared pairs and of discongruent pairs (ordered differently in the compared rankings): τ = 1 −2 |discongruent pairs| |all pairs| . To compute these quantities, we extract all unique pairs of corrections that can be compared with < (i.e., one applies a sub-set of the edits of the other), and count the number of discongruent ones between the metric’s ranking and <. 
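The modified τ just described can be sketched as follows, assuming each correction is represented as a frozenset of edits and the metric's scores have been precomputed into a dictionary; names are illustrative, and ties in the metric's scores are treated as congruent here (one possible convention).

```python
from itertools import combinations


def kendall_tau_partial(corrections, metric_score):
    """Kendall tau restricted to pairs comparable under the subset partial order.

    `corrections` are nodes of the sampled chains (frozensets of edits);
    `metric_score[c]` is the metric's score for correction c.
    tau = 1 - 2 * |discongruent pairs| / |comparable pairs|
    """
    comparable = discongruent = 0
    for x, y in combinations(corrections, 2):
        if x < y:
            worse, better = x, y       # y applies a strict superset of x's valid edits
        elif y < x:
            worse, better = y, x
        else:
            continue                   # not comparable under the gold partial order
        comparable += 1
        if metric_score[better] < metric_score[worse]:   # metric inverts the gold order
            discongruent += 1
    return 1.0 - 2.0 * discongruent / comparable if comparable else float("nan")
```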
Significance is modified accordingly.5 Spearman ρ is 5Code can be found in https://github.com/ borgr/EoE 1378 less applicable in this setting, as it compares total orders whereas here we compare partial orders. To compute linear correlation with Pearson r, we make the simplifying assumption that all edits contribute equally to the overall quality. Specifically, we assume that a perfect correction (i.e., the top of a chain) receives a score of 1. Each original sentence s (the bottom of a chain), for which there exists annotations a1, . . . , an, receives a score of 1 −min i |edits(s, ai)| |tokens(s)| . The scores of partial (non-perfect) corrections in each chain are linearly spaced between the score of the perfect correction and that of the original. This scoring system is well-defined, as a partial correction receives the same score according to all chains it is in, as all paths between a partial correction and the original have the same length. 6.1 Experiments Setup. We experiment with nch = 1, yielding 7936 sentences in 1312 chains (same as the number of original sentences in the NUCLE test set). We report the Pearson correlation over the scores of all sentences in all chains (r), and Kendall τ over all pairs of corrections within the same chain. Results. Results are presented in Table 2 (right part). No metric scores very high, neither according to Pearson r nor according to Kendall τ. iBLEU correlates best with < according to r, obtaining a correlation of 0.23, whereas LT fares best according to τ, obtaining 0.222. Results show a discrepancy between the low corpus-level and sentence-level r correlations of M2 and its high sentence-level τ. It seems that although M2 orders pairs of corrections well, its scores are not a linear function of MAEGE’s scores. This may be due to M2’s assignment of the minimal possible score to the source, regardless of its quality. M2 thus seems to predict well the relative quality of corrections of the same sentence, but to be less effective in yielding a globally coherent score (cf. Felice and Briscoe (2015)). GLEU shows the inverse behaviour, failing to correctly order pairs of corrections of the same sentence, while managing to produce globally coherent scores. We test this hypothesis by computing the average difference in GLEU score between all pairs in the sampled chains, and find it to be slightly negative (-0.00025), which is in line with GLEU’s small negative τ. On the other hand, plotting the GLEU scores of the originals grouped by the number of errors they contain, we find they correlate well (Figure 5), indicating that GLEU performs well in comparing the quality of corrections of different sentences. Four sentences with considerably more errors than the others were considered outliers and removed. 7 Metric Sensitivity by Error Type MAEGE’s lattice can be used to analyze how the examined metrics reward corrections of errors of different types. For each edit type t, we denote with St the set of correction pairs from the lattice that only differ in an edit of type t. For each such pair (c, c′) and for each metric m, we compute the difference in the score assigned by m to c and c′. The average difference is denoted with ∆m,t. ∆m,t = 1 |St| X (c,c′)∈St  m(src, c, R)−m(src, c′, R)  R is the corresponding reference set. A negative (positive) ∆m,t indicates that m penalizes (awards) valid corrections of type t. 7.1 Experiments Setup. We sample chains using the same sampling method as in §6, and uniformly sample a source from each chain. 
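A sketch of the ∆m,t computation follows, assuming each chain node is stored as an (edit set, rendered sentence) pair, for instance produced with the `apply_edits` helper sketched earlier; `metric(src, output, refs)` stands for any of the metrics above, and all names are illustrative. The experimental setup of §7.1 continues below.

```python
from collections import defaultdict
from itertools import combinations


def type_sensitivity(sampled_chains, metric):
    """Average metric-score change per edit type (Delta_{m,t}).

    `sampled_chains` is an iterable of (src, refs, chain) triples, where `chain`
    is a list of (edit_set, sentence) pairs for the nodes of one sampled chain.
    """
    diffs = defaultdict(list)
    for src, refs, chain in sampled_chains:
        for (ea, sa), (eb, sb) in combinations(chain, 2):
            if ea < eb:                 # eb is the better correction (superset of ea)
                extra = eb - ea
                delta = metric(src, sb, refs) - metric(src, sa, refs)
            elif eb < ea:
                extra = ea - eb
                delta = metric(src, sa, refs) - metric(src, sb, refs)
            else:
                continue
            if len(extra) == 1:         # keep only pairs differing in exactly one edit
                (edit,) = extra
                diffs[edit.etype].append(delta)
    # A negative average means the metric penalizes valid corrections of that type.
    return {t: sum(v) / len(v) for t, v in diffs.items()}
```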
For each edit type t, we detect all pairs of corrections in the sampled chains that only differ in an edit of type t, and use them to compute ∆m,t. We use the set of 27 edit types given in the NUCLE corpus. Results. Table 3 presents the results, showing that under all metrics, some edits types are penalized and others rewarded. iBLEU and LT penalize the least edit types, and GLEU penalizes the most, providing another perspective on GLEU’s negative Kendall τ (§6). Certain types are penalized by almost all metrics. One such type is Vm, wrong verb modality (e.g., “as they [∅; may] not want to know”). Another such type is Npos, a problem in noun possessive (e.g., “their [facebook’s ; Facebook] page”). Other types, such as Mec, mechanical (e.g., “[real-life ; real life]”), and V0, missing verb (e.g., “’Privacy’, this is the word that [∅; is] popular”), are often rewarded by the metrics. In general, the tendency of reference-based metrics (the vast majority of GEC metrics) to penalize edits of various types suggests that many edit 1379 Type iBLEU M 2 LT BLEU MinLDO→R GLEU MAX-SARI SARI LDS→O WOinc 0.016 −0.000 −0.002 −0.005 −0.026 −0.051 −0.075 −0.046 0.063 Nn 0.033 −0.001 0.004 0.029 −0.007 0.025 0.043 0.037 0.017 Npos −0.001 0.001 −0.004 −0.011 −0.007 −0.030 −0.023 −0.009 0.014 Sfrag −0.025 −0.003 −0.000 −0.067 −0.068 −0.143 −0.177 −0.142 0.076 Wtone −0.013 −0.002 −0.008 −0.024 −0.021 −0.026 −0.086 −0.055 0.018 Srun −0.027 −0.004 −0.004 −0.048 −0.014 −0.078 −0.039 −0.030 0.020 ArtOrDet 0.028 −0.001 0.001 0.019 −0.006 −0.003 −0.022 −0.003 0.024 Vt 0.054 −0.001 0.005 0.046 −0.002 0.011 0.003 0.018 0.025 Wa 0.041 −0.002 −0.002 −0.013 0.006 −0.028 −0.073 −0.090 0.071 Wform 0.049 −0.001 0.002 0.044 −0.003 0.010 0.004 0.020 0.022 WOadv 0.007 0.000 0.009 0.011 0.012 0.006 0.088 0.054 −0.014 V0 0.015 −0.001 0.019 0.005 −0.003 −0.006 −0.010 −0.004 0.015 Trans −0.011 0.000 0.005 −0.022 −0.029 −0.031 −0.019 −0.004 0.013 Pform 0.021 −0.001 0.003 0.011 −0.012 −0.019 −0.003 0.005 0.030 Smod −0.052 0.001 0.004 −0.093 −0.072 −0.126 −0.062 −0.043 0.055 Ssub −0.005 0.000 −0.011 −0.024 −0.027 −0.052 −0.072 −0.038 0.026 Wci −0.008 −0.001 0.004 −0.022 −0.029 −0.045 −0.049 −0.032 0.017 Vm −0.007 −0.001 −0.001 −0.029 −0.027 −0.075 −0.070 −0.059 0.030 Pref −0.003 −0.001 0.002 −0.015 −0.022 −0.045 −0.048 −0.035 0.018 Mec 0.012 0.001 0.014 0.004 −0.013 −0.014 0.000 0.002 0.018 Vform 0.043 −0.001 0.006 0.044 0.000 0.030 0.033 0.043 0.013 Prep 0.018 −0.000 0.004 0.011 −0.008 −0.001 −0.010 0.005 0.014 Um −0.038 −0.001 −0.007 −0.043 −0.100 −0.037 −0.046 −0.032 0.009 Others −0.048 −0.000 0.007 −0.063 −0.054 −0.060 −0.040 −0.024 −0.000 Rloc0.004 −0.001 −0.004 −0.006 −0.027 −0.023 −0.028 −0.019 0.022 Spar 0.041 0.001 0.003 0.035 −0.012 −0.003 0.008 0.026 0.024 SVA 0.045 −0.001 −0.001 0.037 −0.005 −0.002 0.012 0.015 0.021 Table 3: Average change in metric score by metric and edit types (∆m,t; see text). Rows correspond to edit types (abbreviations in Dahlmeier et al. (2013)); columns correspond to metrics. Some edit types are consistently penalized. types are under-represented in available reference sets. Automatic evaluation of systems that perform these edit types may, therefore, be unreliable. Moreover, not addressing these biases in the metrics may hinder progress in GEC. 
Indeed, M2 and GLEU, two of the most commonly used metrics, only award a small sub-set of edit types, thus offering no incentive for systems to improve performance on such types.6 8 Discussion We revisit the argument that using system outputs to perform metric validation poses a methodological difficulty. Indeed, as GEC systems are developed, trained and tested using available metrics, and as metrics tend to reward some correction types and penalize others (§7), it is possible that GEC development adjusts to the metrics, and neglects some error types. Resulting tendencies in GEC systems would then yield biased sets of outputs for human rankings, which in turn would result in biases in the validation process. To make this concrete, GEC systems are often precision-oriented: trained to prefer not to correct than to invalidly correct. Indeed, Choshen and 6LDS→O tends to award valid corrections of almost all types. As source sentences are randomized across chains, this indicates that on average, corrections with more applied edits tend to be more similar to comparable corrections on the lattice. This is also reflected by the slightly positive sentencelevel correlation of LDS→O (§6). Abend (2018a) show that modern systems tend to be highly conservative, often performing an order of magnitude fewer changes to the source than references do. Validating metrics on their ability to rank conservative system outputs (as is de facto the common practice) may produce a different picture of metric quality than when considering a more inclusive set of corrections. We use MAEGE to mimic a setting of ranking against precision-oriented outputs. To do so, we perform corpus-level and sentence-level analyses, but instead of randomly sampling a source, we invariably take the original sentence as the source. We thereby create a setting where all edits applied are valid (but not all valid edits are applied). Comparing the results to the regular MAEGE correlation (Table 4), we find that LT remains reliable, while M2, that assumes the source receives the worst possible score, gains from this unbalanced setting. iBLEU drops, suggesting it may need to be retuned to this setting and give less weight to BLEU(O, S), thus becoming more like BLEU and GLEU. The most drastic change we see is in SARI and MAX-SARI, which flip their sign and present strong performance. Interestingly, the metrics that benefit from this precisionoriented setting in the corpus-level are the same metrics that perform better according to CHR than to MAEGE (Figure 4). This indicates the different trends produced by MAEGE and CHR, may result 1380 Corpus-level Sentence-level ρ P-val r P-val τ P-val iBLEU -0.872 (0.418) † 0.235 (0.230) † 0.053 (0.050) † M 2 0.882 (0.060) † -0.014 (-0.025) 0.223 0.223 (0.213) † LT 0.836 (0.973) 0.001 0.175 (0.167) 0.019 0.184 (0.222) † BLEU 0.845 (0.564) 0.001 0.217 (0.214) † 0.115 (0.111) † MinLDO→R -0.909 (-0.867) † 0.022 (0.011) † -0.180 (-0.183) † GLEU 0.945 (0.736) † 0.208 (0.189) † 0.003 (-0.028) † MAX-SARI 0.772 (-0.809) 0.005 0.053 (0.027) † 0.004 (-0.070) 0.6 SARI 0.800 (-0.545) 0.003 0.084 (0.061) † 0.022 (-0.039) 0.001 LDS→O -0.972 (-0.118) † 0.025 (0.109) 0.027 0.070 (0.094) † Table 4: Corpus-level Spearman ρ, sentence-level Pearson r and Kendall τ correlations using origin as the source with the various metrics (left). Correlations using a random source are found in parenthesis. † represents P −value < 0.001. 
LT is the best corpus correlated, and has the best τ while iBLEU has the best r from the latter’s use of precision-oriented outputs. Drawbacks. Like any methodology MAEGE has its simplifying assumptions and drawbacks; we wish to make them explicit. First, any biases introduced in the generation of the test corpus are inherited by MAEGE (e.g., that edits are contiguous and independent of each other). Second, MAEGE does not include errors that a human will not perform but machines might, e.g., significantly altering the meaning of the source. This partially explains why LT, which measures grammaticality but not meaning preservation, excels in our experiments. Third, MAEGE’s scoring system (§6) assumes that all errors damage the score equally. While this assumption is made by GEC metrics, we believe it should be refined in future work by collecting user information. 9 Conclusion In this paper, we show how to leverage existing annotation in GEC for performing validation reliably. We propose a new automatic methodology, MAEGE, which overcomes many of the shortcomings of the existing methodology. Experiments with MAEGE reveal a different picture of metric quality than previously reported. Our analysis suggests that differences in observed metric quality are partly due to system outputs sharing consistent tendencies, notably their tendency to under-predict corrections. As existing methodology ranks system outputs, these shared tendencies bias the validation process. The difficulties in basing validation on system outputs may be applicable to other text-to-text generation tasks, a question we will explore in future work. Acknowledgments This work was supported by the Israel Science Foundation (grant No. 929/17), and by the HUJI Cyber Security Research Center in conjunction with the Israel National Cyber Bureau in the Prime Minister’s Office. We thank Joel Tetreault and Courtney Napoles for helpful feedback and inspiring conversations. References Hiroki Asano, Tomoya Mizumoto, and Kentaro Inui. 2017. Reference-based metrics can be replaced with reference-less metrics in evaluating grammatical error correction systems. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), volume 2, pages 343–348. Alexandra Birch, Omri Abend, Ondˇrej Bojar, and Barry Haddow. 2016. Hume: Human ucca-based evaluation of machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1264–1274. Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, et al. 2014. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the ninth workshop on statistical machine translation, pages 12–58. Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, et al. 2017. Findings of the 2017 conference on machine translation (wmt17). In Proceedings of the Second Conference on Machine Translation, pages 169–214. Ondˇrej Bojar, Miloš Ercegovˇcevi´c, Martin Popel, and Omar F Zaidan. 2011. A grain of salt for the wmt 1381 manual evaluation. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 1–11. Association for Computational Linguistics. Ondˇrej Bojar, Yvette Graham, Amir Kamran, and Miloš Stanojevi´c. 2016. Results of the wmt16 metrics shared task. 
In Proceedings of the First Conference on Machine Translation, pages 199– 231, Berlin, Germany. Association for Computational Linguistics. Christopher Bryant, Mariano Felice, and Ted Briscoe. 2017. Automatic annotation and evaluation of error types for grammatical error correction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 793–805, Vancouver, Canada. Association for Computational Linguistics. Christopher Bryant and Hwee Tou Ng. 2015. How far are we from fully automatic high quality grammatical error correction? In ACL (1), pages 697–707. Boxing Chen and Colin Cherry. 2014. A systematic comparison of smoothing techniques for sentencelevel bleu. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 362–367. Leshem Choshen and Omri Abend. 2018a. Inherent biases in reference-based evaluation for grammatical error correction and text simplification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Leshem Choshen and Omri Abend. 2018b. Referenceless measure of faithfulness for grammatical error correction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Daniel Dahlmeier and Hwee Tou Ng. 2012. Better evaluation for grammatical error correction. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 568–572. Association for Computational Linguistics. Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a large annotated corpus of learner english: The nus corpus of learner english. In Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications, pages 22–31. Robert Dale, Ilya Anisimoff, and George Narroway. 2012. Hoo 2012: A report on the preposition and determiner error correction shared task. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP, pages 54–62. Association for Computational Linguistics. Mark Dras. 2015. Evaluating human pairwise preference judgments. Computational Linguistics, 41(2):337–345. Mariano Felice and Ted Briscoe. 2015. Towards a standard evaluation method for grammatical error detection and correction. In HLT-NAACL, pages 578– 587. Mariano Felice, Christopher Bryant, and Ted Briscoe. 2016. Automatic extraction of learner errors in esl sentences using linguistically enhanced alignments. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 825–835, Osaka, Japan. The COLING 2016 Organizing Committee. Yvette Graham, Timothy Baldwin, Aaron Harwood, Alistair Moffat, and Justin Zobel. 2012. Measurement of progress in machine translation. In Proceedings of the Australasian Language Technology Association Workshop 2012, pages 70–78. Yvette Graham, Timothy Baldwin, and Nitika Mathur. 2015. Accurate evaluation of segment-level machine translation metrics. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1183–1191. Roman Grundkiewicz, Marcin Junczys-Dowmunt, Edward Gillian, et al. 2015. Human evaluation of grammatical error correction systems. In EMNLP, pages 461–470. Philipp Koehn. 2012. Simulating human judgment in machine translation evaluation campaigns. 
In International Workshop on Spoken Language Translation (IWSLT) 2012. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 1(123):32–73. Joseph B Kruskal and David Sankoff. 1983. Time Warps, String Edits, and Macromolecules: The Theory and Practice of Sequence Comparison. Addison-Wesley. Adam Lopez. 2012. Putting human assessments of machine translation systems in order. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 1–9. Association for Computational Linguistics. Marcin Miłkowski. 2010. Developing an open-source, rule-based proofreading tool. Software: Practice and Experience, 40(7):543–566. Courtney Napoles, Keisuke Sakaguchi, Matt Post, and Joel Tetreault. 2015. Ground truth for grammatical error correction metrics. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, volume 2, pages 588–593. 1382 Courtney Napoles, Keisuke Sakaguchi, Matt Post, and Joel Tetreault. 2016a. GLEU without tuning. eprint arXiv:1605.02592 [cs.CL]. Courtney Napoles, Keisuke Sakaguchi, and Joel Tetreault. 2016b. There’s no comparison: Reference-less evaluation metrics in grammatical error correction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2109–2115. Association for Computational Linguistics. Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2014. The conll-2014 shared task on grammatical error correction. In CoNLL Shared Task, pages 1–14. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Keisuke Sakaguchi, Courtney Napoles, Matt Post, and Joel Tetreault. 2016. Reassessing the goals of grammatical error correction: Fluency instead of grammaticality. Transactions of the Association for Computational Linguistics, 4:169–182. Keisuke Sakaguchi, Matt Post, and Benjamin Van Durme. 2014. Efficient elicitation of annotations for human evaluation of machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 1–11. Rico Sennrich, Orhan Firat, Kyunghyun Cho, Alexandra Birch, Barry Haddow, Julian Hitschler, Marcin Junczys-Dowmunt, Samuel Läubli, Antonio Valerio Miceli Barone, Jozef Mokry, et al. 2017. Nematus: a toolkit for neural machine translation. arXiv preprint arXiv:1703.04357. Hong Sun and Ming Zhou. 2012. Joint learning of a dual smt system for paraphrase generation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short PapersVolume 2, pages 38–42. Association for Computational Linguistics. Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401–415. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In AAAI, pages 2852–2858.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1383–1392 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1383 The Hitchhiker’s Guide to Testing Statistical Significance in Natural Language Processing Rotem Dror Gili Baumer Faculty of Industrial Engineering and Management, Technion, IIT {rtmdrr@campus|sgbaumer@campus|segevs@campus|roiri}.technion.ac.il Segev Shlomov Roi Reichart Abstract Statistical significance testing is a standard statistical tool designed to ensure that experimental results are not coincidental. In this opinion/theoretical paper we discuss the role of statistical significance testing in Natural Language Processing (NLP) research. We establish the fundamental concepts of significance testing and discuss the specific aspects of NLP tasks, experimental setups and evaluation measures that affect the choice of significance tests in NLP research. Based on this discussion, we propose a simple practical protocol for statistical significance test selection in NLP setups and accompany this protocol with a brief survey of the most relevant tests. We then survey recent empirical papers published in ACL and TACL during 2017 and show that while our community assigns great value to experimental results, statistical significance testing is often ignored or misused. We conclude with a brief discussion of open issues that should be properly addressed so that this important tool can be applied in NLP research in a statistically sound manner1. 1 Introduction The field of Natural Language Processing (NLP) has recently made great progress due to the data revolution that has made abundant amounts of textual data from a variety of languages and linguistic domains (newspapers, scientific journals, social media etc.) available. This, together with the emergence of a new generation of computing resources and the related development of Deep Neural Network models, have resulted in dramatic improvements in the capabilities of NLP algorithms. 1The code for all statistical tests detailed in this paper is found on: https://github.com/rtmdrr/ testSignificanceNLP.git The extended reach of NLP algorithms has also resulted in NLP papers giving much more emphasis to the experiment and result sections by showing comparisons between multiple algorithms on various datasets from different languages and domains. This emphasis on empirical results highlights the role of statistical significance testing in NLP research: if we rely on empirical evaluation to validate our hypotheses and reveal the correct language processing mechanisms, we better be sure that our results are not coincidental. This paper aims to discuss the various aspects of proper statistical significance testing in NLP and to provide a simple and sound guide to the way this important tool should be used. We also discuss the particular challenges of statistical significance in the context of language processing tasks. To facilitate a clear and coherent presentation, our (somewhat simplified) model of an NLP paper is one that presents a new algorithm and makes the hypothesis that this algorithm is better than a previous strong algorithm, which serves as the baseline. 
This hypothesis is verified in experiments where the two algorithms are applied to the same datasets (test sets), reasoning that if one algorithm is consistently better than the other, hopefully with a sufficiently large margin, then it should also be better on future, currently unknown, datasets. Yet, the experimental differences might be coincidental. Here comes statistical significance testing into the picture: we have to make sure that the probability of falsely concluding that one algorithm is better than the other is very small. We note that in this paper we do not deal with the problem of drawing valid conclusions from multiple comparisons between algorithms across a large number of datasets , a.k.a. replicability analysis (see (Dror et al., 2017)). Instead, our focus is on a single comparison: how can we make sure that the difference between the two algorithms, as 1384 observed in an individual comparison, is not coincidental. Statistical significance testing of each individual comparison is the basic building block of replicability analysis – its accurate performance is a pre-condition for any multiple dataset analysis. Statistical significance testing (§ 2) is a well researched problem in the statistical literature. However, the unique structured nature of natural language data is reflected in specialized evaluation measures such as BLEU (machine translation, (Papineni et al., 2002)), ROUGE (extractive summarization, (Lin, 2004)), UAS and LAS (dependency parsing, (K¨ubler et al., 2009)). The distribution of these measures is of great importance to statistical significance testing. Moreover, certain properties of NLP datasets and the community’s evaluation standards also affect the way significance testing should be performed. An NLP-specific discussion of significance testing is hence in need. In § 3 we discuss the considerations to be made in order to select the proper statistical significance test in NLP setups. We propose a simple decision tree algorithm for this purpose, and survey the prominent significance tests – parametric and nonparametric – for NLP tasks and data. In § 4 we survey the current evaluation and significance testing practices of the community. We provide statistics collected from the long papers of the latest ACL proceedings (Barzilay and Kan, 2017) as well as from the papers published in the TACL journal during 2017. Our analysis reveals that there is still a room for improvement in the way statistical significance is used in papers published in our top-tier publication venues. Particularly, a large portion of the surveyed papers do not test the significance of their results, or use incorrect tests for this purpose. Finally, in § 5 we discuss open issues. A particularly challenging problem is that while most significance tests assume the test set consists of independent observations, most NLP datasets consist of dependent data points. For example, many NLP standard evaluation sets consist of sentences coming from the same source (e.g. newspaper) or document (e.g. newspaper article) or written by the same author. Unfortunately, the nature of these dependencies is hard to characterize, let alone to quantify. Another important problem is how to test significance when cross-validation, a popular evaluation methodology in NLP papers, is performed. Besides its practical value, we hope this paper will encourage further research into the role of statistical significance testing in NLP and on the questions that still remain open. 
2 Preliminaries In this section we provide the required preliminaries for our discussion. We start with a formal definition of statistical significance testing and proceed with an overview of the prominent evaluation measures in NLP. 2.1 Statistical Significance Testing In this paper we focus on the setup where the performance of two algorithms, A and B, on a dataset X, is compared using an evaluation measure M. Let us denote M(ALG, X) as the value of the evaluation measure M when algorithm ALG is applied to the dataset X. Without loss of generality, we assume that higher values of the measure are better. We define the difference in performance between the two algorithms according to the measure M on the dataset X as: δ(X) = M(A, X) −M(B, X). (1) In this paper we will refer to δ(X) as our test statistic. Using this notation we formulate the following statistical hypothesis testing problem:2 H0 :δ(X) ≤0 H1 :δ(X) > 0. In order to decide whether or not to reject the null hypothesis, that is reaching the conclusion that δ(X) is indeed greater than 0, we usually compute a p−value for the test. The p−value is defined as the probability, under the null hypothesis H0, of obtaining a result equal to or more extreme than what was actually observed. For the one-sided hypothesis testing defined here, the p−value is defined as: Pr(δ(X) ≥δobserved|H0). Where δobserved is the performance difference between the algorithms (according to M) when applied to X. The smaller the p-value, the higher the significance, or, in other words, the stronger 2For simplicity we consider a one-sided hypothesis, it can be easily re-formulated as a double-sided hypothesis. 1385 the indication provided by the data that the nullhypothesis, H0, does not hold. In order to decide whether H0 should be rejected, the researcher should pre-define an arbitrary, fixed threshold value α, a.k.a the significance level. Only if p−value < α then the null hypothesis is rejected. In significance (or hypothesis) testing we consider two error types. Type I error refers to the case where the null hypothesis is rejected when it is actually true. Type II error refers to the case where the null hypothesis is not rejected although it should be. A common approach in hypothesis testing is to choose a test that guarantees that the probability of making a type I error is upper bounded by the test significance level α, mentioned above, while achieving the highest possible power: i.e. the lowest possible probability of making a type II error. 2.2 Evaluation Measures in NLP Evaluation Measure ACL 17 TACL 17 F-scores 78 (39.8%) 9 (25.71%) Accuracy 67 (34.18%) 13 (37.14%) Precision/ Recall 44 (22.45%) 6 (17.14%) BLEU 26 (13.27%) 4 (11.43%) ROUGE 12 (6.12%) 0 (0%) Pearson/ Spearman correlations 4 (2.04%) 6 (17.14%) Perplexity 7 (3.57%) 2 (5.71%) METEOR 6 (3.06%) 1 (2.86%) UAS+LAS 1 (0.51%) 3 (8.57%) Table 1: The most common evaluation measures in (long) ACL and TACL 2017 papers, ordered by ACL frequency. For each measure we present the total number of papers where it is used and the fraction of papers in the corresponding venue. In order to draw valid conclusions from the experiments formulated in § 2.1 it is crucial to apply the correct statistical significance test. In § 3 we explain that the choice of the significance test is based, among other considerations, on the distribution of the test statistics, δ(X). From equation 1 it is clear that δ(X) depends on the evaluation measure M. We hence turn to discuss the evaluation measures employed in NLP. 
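Before turning to the measures themselves, a minimal sketch of the quantities just defined, the test statistic δ(X) and a one-sided p-value, is given below. The p-value is estimated with a paired sign-flip (approximate randomization) procedure chosen purely for illustration; selecting an appropriate test is the subject of §3, and the sketch assumes the measure decomposes into per-example scores.

```python
import numpy as np


def sign_flip_test(scores_a, scores_b, trials=10_000, seed=0):
    """One-sided test of H1: delta(X) = M(A, X) - M(B, X) > 0.

    scores_a, scores_b: per-example scores of the two algorithms on the same
    test set. Under H0 the labels A/B are exchangeable within each pair, so we
    flip them at random and count how often a difference at least as large as
    the observed one arises by chance.
    """
    a = np.asarray(scores_a, dtype=float)
    b = np.asarray(scores_b, dtype=float)
    observed = a.mean() - b.mean()                       # delta(X) for mean-based measures
    rng = np.random.default_rng(seed)
    flips = rng.integers(0, 2, size=(trials, len(a)))    # 1 means "swap this pair"
    shuffled = np.where(flips, b - a, a - b).mean(axis=1)
    p_value = (np.sum(shuffled >= observed) + 1) / (trials + 1)
    return observed, p_value


# Reject H0 at significance level alpha (e.g. 0.05) if p_value < alpha.
```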
In § 4 we analyze the (long) ACL and TACL 2017 papers, and observe that the most commonly used evaluation measures are the 12 measures that appear in Table 1. Notice that seven of these measures: Accuracy, Precision, Recall, F-score, Pearson and Spearman correlations and Perplexity, are not specific to NLP. The other five measures: BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), METEOR (Banerjee and Lavie, 2005), UAS and LAS (K¨ubler et al., 2009), are unique measures that were developed for NLP applications. BLEU and METEOR are standard evaluation measures for machine translation, ROUGE for extractive summarization, and UAS and LAS for dependency parsing. While UAS and LAS are in fact accuracy measures, BLEU, ROUGE and METEOR are designed for tasks where there are several possible outputs - a characteristic property of several NLP tasks. In machine translation, for example, a sentence in one language can be translated in multiple ways to another language. Consequently, BLEU takes an n-gram based approach on the surface forms, while METEOR considers only unigram matches but uses stemming and controls for synonyms. All 12 measures return a real number, either in [0, 1] or in R. Notice though that accuracy may reflect an average over a set of categorical scores (observations), e.g., in document-level binary sentiment analysis where every document is tagged as either positive or negative. In other cases, the individual observations are also continuous. For example, when comparing two dependency parsers, we may want to understand how likely it is, given our results, that one parser will do better than the other on a new sentence. In such a case we will consider the sentence-level UAS or LAS differences between the two parsers on all the sentences in the test set. Such sentence level UAS or LAS scores - the individual observations to be considered in the significance test - are real-valued. With the basic concepts clarified, we are ready to discuss the considerations to be made when choosing a statistical significance test. 3 Statistical Significance in NLP The goal of this section is to detail the considerations involved in the selection of a statistical significance test for an NLP application. Based on these considerations we provide a practical recipe that can be applied in order to make a good choice. In order to make this paper a practical guide for 1386 the community, we also provide a short description of the significance tests that are most relevant for NLP setups. 3.1 Parametric vs. Non-parametric Tests As noted above, a major consideration in the selection of a statistical significance test is the distribution of the test statistic, δ(X), under the null hypothesis. If the distribution is known, then the suitable test will come from the family of parametric tests, that uses this distribution in order to achieve powerful results (i.e., low probability of making a type II error, see § 2). If the distribution is unknown then any assumption made by a test may lead to erroneous conclusions and hence we should rely on non-parametric tests that do not make any such assumption. While non-parametric tests may be less powerful than their parametric counterparts, they do not make unjustified assumptions and are hence statistically sound even when the test statistic distribution is unknown. But how can one know the test statistic distribution? One possibility is to apply tests designed to evaluate the distribution of a sample of observations. 
For example, the Shapiro-Wilk test (Shapiro and Wilk, 1965) tests the null hypothesis that a sample comes from a normally distributed population, the Kolmogorov-Smirnov test quantifies the distance between the empirical distribution function of the sample and the cumulative distribution function of the reference distribution, and the Anderson-Darling test (Anderson and Darling, 1954) tests whether a given sample of data is drawn from a given probability distribution. As discussed below, there seems to be other heuristics that are used in practice but are not often mentioned in research papers. In what follows we discuss the prominent parametric and non-parametric tests for NLP setups. Based on this discussion we end this section with a simple decision tree that aims to properly guide the significance test choice process. 3.2 Prominent Significance Tests 3.2.1 Parametric Tests Parametric significance tests assume that the test statistic is distributed according to a known distribution with defined parameters, typically the normal distribution. While this assumption may be hard to verify (see discussion above), when it holds, these parametric tests have stronger statistical power compared to non-parametric tests that do not make this assumption (Fisher, 1937). Here we discuss the prominent parametric test for NLP setups - the paired student’s t-test. Paired Student’s t-test This test assesses whether the population means of two sets of measurements differ from each other, and is based on the assumption that both samples come from a normal distribution (Fisher, 1937). In practice, t-test is often applied with evaluation measures such as accuracy, UAS and LAS, that compute the mean number of correct predictions per input example. When comparing two dependency parsers, for example, we can apply the test to check if the averaged difference of their UAS scores is significantly larger than zero, which can serve as an indication that one parser is better than the other. Although we have not seen this discussed in NLP papers, we believe that the decision to use the t-test with these measures is based on the Central Limit Theorem (CLT). CLT establishes that, in most situations, when independent random variables are added, their properly normalized sum tends toward a normal distribution even if the original variables themselves are not normally distributed. That is, accuracy measures in structured tasks tend to be normally distributed when the number individual predictions (e.g. number of words in a sentence when considering sentencelevel UAS) is large enough. One case where it is theoretically justified to employ the t-test is described in (Sethuraman, 1963). The authors prove that for large enough data, the sampling distribution of a certain function of the Pearson’s correlation coefficient follows the Student’s t-distribution with n −2 degrees of freedom. With the recent surge in word similarity research with word embedding models, this result is of importance to our community. For other evaluation measures, such as F-score, BLEU, METEOR and ROUGE that do not compute means, the common practice is to assume that they are not normally distributed (Yeh, 2000; Berg-Kirkpatrick et al., 2012). We believe this issue requires a further investigation and suggest that it may be best to rely on the normality tests discussed in § 3.1 when deciding whether or not to employ the t-test. 
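Concretely, this recommendation can be followed with a few lines of SciPy: first check the per-item score differences for normality, then apply the paired t-test only if normality is not rejected. The scores below are hypothetical sentence-level UAS values, and the one-sided alternative argument requires SciPy 1.6 or newer; this is an illustrative sketch, not a prescription.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical sentence-level UAS scores of two parsers on the same 200 sentences.
uas_a = rng.normal(0.91, 0.04, size=200)
uas_b = uas_a - rng.normal(0.01, 0.03, size=200)
diff = uas_a - uas_b

# Step 1: test the normality assumption on the paired differences.
_, shapiro_p = stats.shapiro(diff)
print(f"Shapiro-Wilk p-value: {shapiro_p:.4f}")

if shapiro_p > 0.05:
    # Step 2a: normality not rejected, so a one-sided paired t-test is applied
    # (H1: the mean of uas_a - uas_b is greater than zero).
    t_stat, p_value = stats.ttest_rel(uas_a, uas_b, alternative="greater")
    print(f"paired t-test: t = {t_stat:.3f}, p = {p_value:.4f}")
else:
    # Step 2b: fall back to a non-parametric alternative (see Section 3.2.2).
    w_stat, p_value = stats.wilcoxon(diff, alternative="greater")
    print(f"Wilcoxon signed-rank: W = {w_stat:.3f}, p = {p_value:.4f}")
```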
1387 3.2.2 Non-parametric Tests When the test statistic distribution is unknown, non-parametric significance testing should be used. The non-parametric tests that are commonly used in NLP setups can be divided into two families that differ with respect to their statistical power and computational complexity. The first family consists of tests that do not consider the actual values of the evaluation measures. The second family do consider the values of the measures: it tests repeatedly sample from the test data, and estimates the p-value based on the test statistic values in the samples. We refer to the first family as the family of sampling-free tests and to the second as the family of sampling-based tests. The two families of tests reflect different preferences with respect to the balance between statistical power and computational efficiency. Sampling-free tests do not consider the evaluation measure values, only higher level statistics of the results such as the number of cases in which each of the algorithms performs better than the other. Consequently, their statistical power is lower than that of sampling-based tests that do consider the evaluation measure values. Sampling-based tests, however, compensate for the lack of distributional assumptions over the data with re-sampling – a computationally intensive procedure. Samplingbased methods are hence not the optimal candidates for very large datasets. We consider here four commonly used sampling-free tests: the sign test and two of its variants, and the wilcoxon signed-rank test. Sign test This test tests whether matched pair samples are drawn from distributions with equal medians. The test statistic is the number of examples for which algorithm A is better than algorithm B, and the null hypothesis states that given a new pair of measurements (e.g. evaluations (ai, bi) of the two algorithms on a new test example), then ai and bi are equally likely to be larger than the other (Gibbons and Chakraborti, 2011). The sign test has limited practical implications since it only checks if algorithm A is better than B and ignores the extent of the difference. Yet, it has been used in a variety of NLP papers (e.g. (Collins et al., 2005; Chan et al., 2007; Rush et al., 2012)). The assumptions of this test is that the data samples are i.i.d, the differences come from a continuous distribution (not necessarily normal) and that the values are ordered. The next test is a special case of the sign test for binary classification (or a two-tailed sign test). McNemar’s test (McNemar, 1947) This test is designed for paired nominal observations (binary labels). The test is applied to a 2 × 2 contingency table, which tabulates the outcomes of two algorithms on a sample of n examples. The null hypothesis for this test states that the marginal probability for each outcome (label one or label two) is the same for both algorithms. That is, when applying both algorithms on the same data we would expect them to be correct/incorrect on the same proportion of items. Under the null hypothesis, with a sufficiently large number of disagreements between the algorithms, the test statistic has a distribution of χ2 with one degree of freedom. This test is appropriate for binary classification tasks, and has been indeed used in such NLP works (e.g. sentiment classificaiton, (Blitzer et al., 2006; Ziser and Reichart, 2017)). The Cochran’s Q test (Cochran, 1950) generalizes the McNemar’s test for multi-class classification setups. 
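Both tests can be computed directly from their definitions. The sketch below uses the exact binomial distribution for a one-sided sign test and the χ2 approximation with continuity correction for McNemar's test; the counts are hypothetical.

```python
from scipy import stats

def sign_test(wins_a, wins_b):
    """One-sided exact sign test: is A better than B more often than chance?
    Ties are discarded, as is standard for the sign test."""
    n = wins_a + wins_b
    # Pr(X >= wins_a) under Binomial(n, 0.5); sf(k) = Pr(X > k), hence k - 1.
    return stats.binom.sf(wins_a - 1, n, 0.5)

def mcnemar_test(b, c):
    """McNemar's chi-square test (with continuity correction) for paired binary
    outcomes. b = #items only A got right, c = #items only B got right."""
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    return stats.chi2.sf(chi2, df=1)

# Hypothetical comparison of two classifiers on the same test items.
print(sign_test(wins_a=60, wins_b=40))   # A better on 60 items, B better on 40
print(mcnemar_test(b=60, c=40))          # disagreement cells of the 2x2 table
```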
The sign test and its variants consider only pairwise ranks: which algorithm performs better on each test example. In NLP setups, however, we also have access to the evaluation measure values, and this allows us to rank the differences between the algorithms. The Wilcoxon signed-rank test makes use of such a rank and hence, while it does not consider the evaluation measure values, it is more powerful than the sign test and its variants. Wilcoxon signed-rank test (Wilcoxon, 1945) Like the sign test variants, this test is used when comparing two matched samples (e.g. UAS values of two dependency parsers on a set of sentences). Its null hypothesis is that the differences follow a symmetric distribution around zero. First, the absolute values of the differences are ranked. Then, each rank gets a sign according to the sign of the difference. The Wilcoxon test statistic sums these signed ranks. The test is actually applicable for most NLP setups and it has been used widely (e.g. (Søgaard et al., 2014; Søgaard, 2013; Yang and Mitchell, 2017)) due to its improved power compared to the sign test variants. As noted above, sampling-free tests trade statistical power for efficiency. Sampling-based methods take the opposite approach. This family includes two main methods: permutation/randomization tests (Noreen, 1989) and the 1388 paired bootstrap (Efron and Tibshirani, 1994). Pitman’s permutation test This test estimates the test statistic distribution under the null hypothesis by calculating the values of this statistic under all possible labellings (permutations) of the test set. The (two-sided) p-value of the test is calculated as the proportion of these permutations where the absolute difference was greater than or equal to the absolute value of the difference in the output of the algorithm. Obviously, permutation tests are computationally intensive due to the exponentially large number of possible permutations. In practice, approximate randomization tests are used where a pre-defined limited number of permutations are drawn from the space of all possible permutations, without replacements (see, e.g. (Riezler and Maxwell, 2005) in the context of machine translation). The bootstrap test (Efron and Tibshirani, 1994) is based on a closely related idea. Paired bootstrap test This test is very similar to approximate randomization of the permutation test, with the difference that the sampling is done with replacements (i.e., an example from the original test data can appear more than once in a sample). The idea of bootstrap is to use the samples as surrogate populations, for the purpose of approximating the sampling distribution of the statistic. The p-value is calculated in a similar manner to the permutation test. Bootstrap was used with a variety of NLP tasks, including machine translation, text summarization and semantic parsing (e.g. (Koehn, 2004; Li et al., 2017; Wu et al., 2017; Ouchi et al., 2017)). The test is less effective for small test sets, as it assumes that the test set distribution does not deviate too much from the population distribution. Clearly, Sampling-based methods are computationally intensive and can be intractable for large datasets, even with modern computing power. In such cases, sampling-free methods form an available alternative. 3.3 Significance Test Selection With the discussion of significance test families - parametric vs. 
non-parametric (§ 3.1), and the properties of the actual significance tests (§ 3.2) we are now ready to provide a simple recipe for significance test selection in NLP setups. The decision tree in Figure 1 provides an illustration. Does the test statistic come from a known distribution? Use a parametric test Is the data size small ? Use bootstrap or randomization test Use samplingfree nonparametric test Yes No Yes No Figure 1: Decision tree for statistical significance test selection. If the distribution of the test statistic is known, then parametric tests are most appropriate. These tests are more statistically powerful and less computationally intensive compared to their non-parametric counterparts. The stronger statistical power of parametric tests stems from the stronger, parametric assumptions they make, while the higher computational demand of some non-parametric tests is the result of their sampling process. When the distribution of the test statistic is unknown, the first non-parametric family of choice is that of sampling-based tests. These tests consider the actual values of the evaluation measures and are not restricted to higher order properties (e.g. ranks) of the observed values – their statistical power is hence higher. As noted in (Riezler and Maxwell, 2005), in the case where the distributional assumptions of the parametric tests are violated, sampling-based tests have more statistical power than parametric tests. Nonetheless, sampling-based tests are computationally intensive – the exact permutation test, for example, requires the generation of all 2n data permutations (where n is the number of points in the dataset). To overcome this, approximate randomization can be used, as was done, e.g., by Yeh (2000) for test sets of more than 20 points. The other alternative for very large datasets are sampling-free tests that are less powerful but are computationally feasible. In what follows we check whether recent ACL and TACL papers follow these guidelines. 1389 4 Survey of ACL and TACL papers General Statistics ACL ’17 TACL ’17 Total number of papers 196 37 # relevant (experimental) papers 180 33 # different tasks 36 15 # different evaluation measures 24 19 Average number of measures per paper 2.34 2.1 # papers that do not report significance 117 15 # papers that report significance 63 18 # papers that report significance but use the wrong statistical test 6 0 # papers that report significance but do not mention the test name 21 3 # papers that have to report replicability 110 19 # papers that report replicability 3 4 # papers that perform cross validation 23 5 Table 2: Statistical significance statistics for empirical ACL and TACL 2017 papers. We analyzed the long papers from the proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL17, (Barzilay and Kan, 2017)), a total of 196 papers, and the papers from the Transactions of the Association of Computational Linguistics journal (TACL17), Volume 5, Issue 1, a total of 37 papers. We have focused on empirical papers where at least one comparison between methods was performed. Table 2 presents the main results from our survey. The top part of the table presents general statistics of our dataset. In both conference and journal papers, the variety of different NLP tasks is quite large: 36 tasks in ACL 2017 and 15 tasks in TACL. 
Interestingly, in almost every paper in our survey the researchers chose to analyze their results using more than one evaluation measure, Statistical Test ACL ’17 TACL ’17 Bootstrap 6 1 t-test 17 2 Wilcoxon 3 0 Chi square 3 1 Randomization 3 1 McNemar 2 3 Sign 2 3 Permutation 1 4 Table 3: Number of times each of the prominent statistical significance tests in ACL and TACL 2017 papers was used. 42 ACL and 15 TACL papers reported the significance test name. 5 ACL papers mentioned an unrecognized test name. with an average of 2.34 (ACL) and 2.1 (TACL). Table 1 presents the most common of these evaluation measures. The lower part of Table 2 depicts the disturbing reality of statistical significance testing in our research community. Out of the 180 experimental long papers of ACL 2017, only 63 papers included a statistical significance test. Moreover, out of these 63 papers 21 did not mention the name of the significance test they employed. Of the 42 papers that did mention the name of the significance test, 6 used the wrong test according to the considerations discussed in § 3.3 In TACL, where the review process is presumably more strict and of higher quality, out of 33 experimental papers, 15 did not include statistical significance testing, and all the papers that report significance and mentioned the name of the test used a valid test. While this paper focuses on the correct choice of a significance test, we also checked whether the papers in our sample account for the effect of multiple hypothesis testing when testing statistical significance (see (Dror et al., 2017)). When testing multiple hypotheses, as in the case of comparing the participating algorithms across a large number of datasets, the probability of making one or more false claims may be very high, even if the probability of drawing an erroneous conclusion in each individual comparison is small. In ACL 2017, out 3We considered the significance test to be inappropriate in three cases: 1. Using the t-test when the evaluation measure is not an average measure; 2. Using the t-test for a classification task (i.e. when the observations are categorical rather then continuous), even if the evaluation measure is an average measure; and 3. Using a Boostrap test with a small test set size. 1390 of 110 papers that used multiple datasets only 3 corrected for multiplicity (all using the Bonferroni correction). In TACL, the situation is slightly better with 4 papers correcting for multiplicity out of 19 that should have done that. Regarding the statistical tests that were used in the papers that did report significance (Table 3), in ACL 2017 most of the papers used the Student’s t-test that assumes the data is i.i.d and that the test statistics are normally distributed. As discussed in § 3 this is not the case in many NLP applications. Gladly, in TACL, t-test is not as prominent. One final note is about the misuse of the word significant. We noticed that in a considerable number of papers this word was used as a synonym for words such as important, considerable, meaningful, substantial, major, notable etc. We believe that we should be more careful when using this word, ideally keeping its statistical sense and using other, more general words to indicate a substantial impact. We close this discussion with two important open issues. 5 Open Questions In this section we would like to point on two issues that remain open even after our investigation. 
We hope that bringing these issues to the attention of the research community will encourage our fellow researchers to come up with appropriate solutions. The first open issue is that of dependent observations. An assumption shared by the statistical significance tests described in § 3, that are commonly used in NLP setups, is that the data samples are independent and identically distributed. This assumption, however, is rarely true in NLP setups. For example, the popular WSJ Penn Treebank corpus (Marcus et al., 1993) consists of 2,499 articles from a three year Wall Street Journal (WSJ) collection of 98,732 stories. Obviously, some of the sentences included in the corpus come from the same article, were written by the same author or were reviewed before publication by the same editor. As another example, many sentences in the Europarl parallel corpus (Koehn, 2005) that is very popular in the machine translation literature are taken from the same parliament discussion. An independence assumption between the sentences in these corpora is not likely to hold. This dependence between test examples violates the conditions under which the theoretical guarantees of the various tests were developed. The impact of this phenomenon on our results is hard to quantify, partly because it is hard to quantify the nature of the dependence between test set examples in NLP datasets. Some papers are even talking about abandoning the null hypothesis statistical significance test approach due to this hard-to-meet assumption (Koplenig, 2017; McShane et al., 2017; Carver, 1978; Leek et al., 2017). In our opinion, this calls for a future collaboration with statisticians in order to better understand the extent to which existing popular significance tests are relevant for NLP, and to develop alternative tests if necessary. Another issue that deserves some thought is that of cross-validation. To increase the validity of reported results, it is customary in NLP papers to create a number of random splits of the experimental corpus into train, development and test portions (see Table 2). For each such split (fold), the tested algorithms are trained and tuned on the training and development datasets, respectively, and their results on the test data are recorded. The final reported result is typically the average of the test set results across the splits. Some papers also report the fraction of the folds for which one algorithm was better than the others. While cross-validation is surely a desired practice, it is challenging to report statistical significance when it is employed. Particularly, the test sets of the different folds are obviously not independent – their content is even likely to overlap. One solution we would like to propose here is based on replicability analysis (Dror et al., 2017). This paper proposes a statistical significance framework for multiple comparisons performed with dependent test sets, using the KBonferroni estimator for the number of datasets with significant effect. One statistically sound way to test for significance when a cross-validation protocol is employed is hence to calculate the pvalue for each fold separately, and then to perform replicability analysis for dependent datasets with KBonferroni. Only if this analysis rejects the null hypothesis in all folds (or in more than a predefined threshold number of folds), the results should be declared significant. Here again, further statistical investigation may lead to additional, potentially better, solutions. 
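As a rough illustration of this proposal, the sketch below applies a plain Bonferroni correction over folds as a simple stand-in for the full replicability analysis of Dror et al. (2017); the per-fold p-values are hypothetical.

```python
import numpy as np

def significant_folds(p_values, alpha=0.05):
    """Bonferroni-style decision over K cross-validation folds:
    fold i counts as significant only if p_i <= alpha / K."""
    p_values = np.asarray(p_values)
    k = len(p_values)
    return p_values <= alpha / k

# Hypothetical per-fold p-values (e.g., from a paired bootstrap run on each fold).
p_per_fold = [0.004, 0.009, 0.012, 0.003, 0.007]

rejected = significant_folds(p_per_fold, alpha=0.05)
print(rejected)       # which folds reject H0 under the correction
print(rejected.all()) # declare significance only if all folds reject
```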
1391 6 Conclusions We discussed the use of significance testing in NLP. We provided the main considerations for significance test selection, and proposed a simple test selection protocol. We then surveyed the state of significance testing in recent top venue papers and concluded with open issues. We hope this paper will serve as a guide for NLP researchers and, not less importantly, that it will encourage discussions and collaborations that will contribute to the soundness and correctness of our research. References Theodore W Anderson and Donald A Darling. 1954. A test of goodness of fit. Journal of the American statistical association 49(268):765–769. Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization. Regina Barzilay and Min-Yen Kan. 2017. Proceedings of the 55th annual meeting of the association for computational linguistics (volume 1: Long papers). In Proceedings of ACL. Taylor Berg-Kirkpatrick, David Burkett, and Dan Klein. 2012. An empirical investigation of statistical significance in nlp. In Proceedings of EMNLPCoNLL. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proceedings of EMNLP. Ronald Carver. 1978. The case against statistical significance testing. Harvard Educational Review 48(3):378–399. Yee Seng Chan, Hwee Tou Ng, and David Chiang. 2007. Word sense disambiguation improves statistical machine translation. In Proceedings of ACL. William G Cochran. 1950. The comparison of percentages in matched samples. Biometrika 37(3/4):256– 266. Michael Collins, Philipp Koehn, and Ivona Kucerova. 2005. Clause restructuring for statistical machine translation. In Proceedings of ACL. Rotem Dror, Gili Baumer, Marina Bogomolov, and Roi Reichart. 2017. Replicability analysis for natural language processing: Testing significance with multiple datasets. Transactions of the Association for Computational Linguistics 5:471–486. Bradley Efron and Robert J Tibshirani. 1994. An introduction to the bootstrap. CRC press. Ronald Aylmer Fisher. 1937. The design of experiments. Oliver And Boyd; Edinburgh; London. Jean Dickinson Gibbons and Subhabrata Chakraborti. 2011. Nonparametric statistical inference. In International encyclopedia of statistical science, Springer, pages 977–979. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of EMNLP. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of the MT summit. Alexander Koplenig. 2017. Against statistical significance testing in corpus linguistics. Corpus Linguistics and Linguistic Theory . Sandra K¨ubler, Ryan McDonald, and Joakim Nivre. 2009. Dependency parsing. Synthesis Lectures on Human Language Technologies 1(1):1–127. Jeff Leek, Blakeley B McShane, Andrew Gelman, David Colquhoun, Mich`ele B Nuijten, and Steven N Goodman. 2017. Five ways to fix statistics. Nature 551(7682):557–559. Junhui Li, Deyi Xiong, Zhaopeng Tu, Muhua Zhu, Min Zhang, and Guodong Zhou. 2017. Modeling source syntax for neural machine translation. In Proceedings of ACL. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop. Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. 
Building a large annotated corpus of english: The penn treebank. Computational linguistics 19(2):313–330. Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika 12(2):153–157. Blakeley B McShane, David Gal, Andrew Gelman, Christian Robert, and Jennifer L Tackett. 2017. Abandon statistical significance. arXiv preprint arXiv:1709.07588 . Eric W Noreen. 1989. Computer intensive methods for hypothesis testing: An introduction. Wiley, New York. Hiroki Ouchi, Hiroyuki Shindo, and Yuji Matsumoto. 2017. Neural modeling of multi-predicate interactions for japanese predicate argument structure analysis. In Proceedings of ACL. 1392 Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL. Stefan Riezler and John T Maxwell. 2005. On some pitfalls in automatic evaluation and significance testing for mt. In Proceedings of the ACL workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization. Alexander Rush, Roi Reichart, Michael Collins, and Amir Globerson. 2012. Improved parsing and pos tagging using inter-sentence consistency constraints. In Proceedings of EMNLP-CoNLL. J Sethuraman. 1963. The Advanced Theory of Statistics, Volume 2: Inference and Relationship. JSTOR. Samuel Sanford Shapiro and Martin B Wilk. 1965. An analysis of variance test for normality (complete samples). Biometrika 52(3/4):591–611. Anders Søgaard. 2013. Estimating effect size across datasets. In Proceedings of NAACL-HLT. Anders Søgaard, Anders Johannsen, Barbara Plank, Dirk Hovy, and H´ector Mart´ınez Alonso. 2014. What’s in a p-value in nlp? In Proceedings of CoNLL. Frank Wilcoxon. 1945. Individual comparisons by ranking methods. Biometrics bulletin 1(6):80–83. Shuangzhi Wu, Dongdong Zhang, Nan Yang, Mu Li, and Ming Zhou. 2017. Sequence-to-dependency neural machine translation. In Proceedings of ACL. Bishan Yang and Tom Mitchell. 2017. Leveraging knowledge bases in lstms for improving machine reading. In Proceedings of ACL. Alexander Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proceedings of COLING. Yftah Ziser and Roi Reichart. 2017. Neural structural correspondence learning for domain adaptation. In Proceedings of CoNLL 2017.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1393–1402 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1393 Distilling Knowledge for Search-based Structured Prediction Yijia Liu, Wanxiang Che∗, Huaipeng Zhao, Bing Qin, Ting Liu Research Center for Social Computing and Information Retrieval Harbin Institute of Technology, China {yjliu,car,hpzhao,qinb,tliu}@ir.hit.edu.cn Abstract Many natural language processing tasks can be modeled into structured prediction and solved as a search problem. In this paper, we distill an ensemble of multiple models trained with different initialization into a single model. In addition to learning to match the ensemble’s probability output on the reference states, we also use the ensemble to explore the search space and learn from the encountered states in the exploration. Experimental results on two typical search-based structured prediction tasks – transition-based dependency parsing and neural machine translation show that distillation can effectively improve the single model’s performance and the final model achieves improvements of 1.32 in LAS and 2.65 in BLEU score on these two tasks respectively over strong baselines and it outperforms the greedy structured prediction models in previous literatures. 1 Introduction Search-based structured prediction models the generation of natural language structure (part-ofspeech tags, syntax tree, translations, semantic graphs, etc.) as a search problem (Collins and Roark, 2004; Liang et al., 2006; Zhang and Clark, 2008; Huang et al., 2012; Sutskever et al., 2014; Goodman et al., 2016). It has drawn a lot of research attention in recent years thanks to its competitive performance on both accuracy and running time. A stochastic policy that controls the whole search process is usually learned by imitating a reference policy. The imitation is usually addressed as training a classifier to predict the ref∗* Email corresponding. Training data Reference policy Exploration policy Reference states NLL loss Exploration states Distillation loss Distilled model model1 Ensemble model . . . modelM Figure 1: Workflow of our knowledge distillation for search-based structured prediction. The yellow bracket represents the ensemble of multiple models trained with different initialization. The dashed red line shows our distillation from reference (§3.2). The solid blue line shows our distillation from exploration (§3.3). erence policy’s search action on the encountered states when performing the reference policy. Such imitation process can sometimes be problematic. One problem is the ambiguities of the reference policy, in which multiple actions lead to the optimal structure but usually, only one is chosen as training instance (Goldberg and Nivre, 2012). Another problem is the discrepancy between training and testing, in which during the test phase, the learned policy enters non-optimal states whose search action is never learned (Ross and Bagnell, 2010; Ross et al., 2011). All these problems harm the generalization ability of search-based structured prediction and lead to poor performance. Previous works tackle these problems from two directions. To overcome the ambiguities in data, techniques like ensemble are often adopted (Di1394 Dependency parsing Neural machine translation st (σ, β, A), where σ is a stack, β is a buffer, and A is the partially generated tree ($, y1, y2, ..., yt), where $ is the start symbol. 
A {SHIFT, LEFT, RIGHT} pick one word w from the target side vocabulary W. S0 {([ ], [1, .., n], ∅)} {($)} ST {([ROOT], [ ], A)} {($, y1, y2, ..., ym)} T (s, a) • SHIFT: (σ, j|β) →(σ|j, β) ($, y1, y2, ..., yt) →($, y1, y2, ..., yt, yt+1 = w) • LEFT: (σ|i j, β) →(σ|j, β) A ←A ∪{i ←j} • RIGHT: (σ|i j, β) →(σ|i, β) A ←A ∪{i →j} Table 1: The search-based structured prediction view of transition-based dependency parsing (Nivre, 2008) and neural machine translation (Sutskever et al., 2014). etterich, 2000). To mitigate the discrepancy, exploration is encouraged during the training process (Ross and Bagnell, 2010; Ross et al., 2011; Goldberg and Nivre, 2012; Bengio et al., 2015; Goodman et al., 2016). In this paper, we propose to consider these two problems in an integrated knowledge distillation manner (Hinton et al., 2015). We distill a single model from the ensemble of several baselines trained with different initialization by matching the ensemble’s output distribution on the reference states. We also let the ensemble randomly explore the search space and learn the single model to mimic ensemble’s distribution on the encountered exploration states. Combing the distillation from reference and exploration further improves our single model’s performance. The workflow of our method is shown in Figure 1. We conduct experiments on two typical searchbased structured prediction tasks: transition-based dependency parsing and neural machine translation. The results of both these two experiments show the effectiveness of our knowledge distillation method by outperforming strong baselines. In the parsing experiments, an improvement of 1.32 in LAS is achieved and in the machine translation experiments, such improvement is 2.65 in BLEU. Our model also outperforms the greedy models in previous works. Major contributions of this paper include: • We study the knowledge distillation in search-based structured prediction and propose to distill the knowledge of an ensemble into a single model by learning to match its distribution on both the reference states (§3.2) and exploration states encountered when using the ensemble to explore the search space (§3.3). A further combination of these two methods is also proposed to improve the performance (§3.4). • We conduct experiments on two search-based structured prediction problems: transitionbased dependency parsing and neural machine translation. In both these two problems, the distilled model significantly improves over strong baselines and outperforms other greedy structured prediction (§4.2). Comprehensive analysis empirically shows the feasibility of our distillation method (§4.3). 2 Background 2.1 Search-based Structured Prediction Structured prediction maps an input x = (x1, x2, ..., xn) to its structural output y = (y1, y2, ..., ym), where each component of y has some internal dependencies. Search-based structured prediction (Collins and Roark, 2004; Daum´e III et al., 2005; Daum´e III et al., 2009; Ross and Bagnell, 2010; Ross et al., 2011; Doppa et al., 2014; Vlachos and Clark, 2014; Chang et al., 2015) models the generation of the structure as a search problem and it can be formalized as a tuple (S, A, T (s, a), S0, ST ), in which S is a set of states, A is a set of actions, T is a function that maps S × A →S, S0 is a set of initial states, and ST is a set of terminal states. 
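Instantiated for the arc-standard system of Table 1, this tuple can be written down in a few lines. The sketch below is only illustrative; the naming and the root-handling convention are ours, not the authors' implementation.

```python
# Arc-standard search tuple from Table 1, as an illustrative sketch.
# A state is (stack, buffer, arcs); arcs are stored as (head, dependent) pairs.
SHIFT, LEFT, RIGHT = "SHIFT", "LEFT", "RIGHT"

def initial_state(n):
    """S_0: empty stack, buffer holding word positions 1..n, empty arc set."""
    return ([], list(range(1, n + 1)), set())

def is_terminal(state):
    """S_T: a single item (the root of the tree) remains on the stack
    and the buffer is empty."""
    stack, buffer, _ = state
    return len(stack) == 1 and not buffer

def transition(state, action):
    """T(s, a): apply one transition action and return the new state."""
    stack, buffer, arcs = state
    if action == SHIFT:                 # (sigma, j|beta) -> (sigma|j, beta)
        return (stack + [buffer[0]], buffer[1:], arcs)
    i, j = stack[-2], stack[-1]
    if action == LEFT:                  # add arc i <- j (head j), pop i
        return (stack[:-2] + [j], buffer, arcs | {(j, i)})
    if action == RIGHT:                 # add arc i -> j (head i), pop j
        return (stack[:-2] + [i], buffer, arcs | {(i, j)})
    raise ValueError(f"unknown action: {action}")

# A two-word toy sentence parsed with a fixed action sequence.
s = initial_state(2)
for a in (SHIFT, SHIFT, RIGHT):
    s = transition(s, a)
print(s, is_terminal(s))                # ([1], [], {(1, 2)}) True
```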
Starting from an initial state s0 ∈S0, the structured prediction model repeatably chooses an action at ∈A by following a policy π(s) and applies at to st and enter a new state st+1 as st+1 ←T (st, at), until a final state sT ∈ST is achieved. Several natural language structured prediction problems can be modeled under the search-based framework including dependency parsing (Nivre, 2008) and neural machine translation (Liang et al., 2006; Sutskever et al., 2014). Table 1 shows the search-based structured prediction view of these two problems. In the data-driven settings, π(s) controls the whole search process and is usually parameterized by a classifier p(a | s) which outputs the proba1395 Algorithm 1: Generic learning algorithm for search-based structured prediction. Input: training data: {x(n), y(n)}N n=1; the reference policy: πR(s, y). Output: classifier p(a|s). 1 D ←∅; 2 for n ←1...N do 3 t ←0; 4 st ←s0(x(n)); 5 while st /∈ST do 6 at ←πR(st, y(n)); 7 D ←D ∪{st}; 8 st+1 ←T (st, at); 9 t ←t + 1; 10 end 11 end 12 optimize LNLL; bility of choosing an action a on the given state s. The commonly adopted greedy policy can be formalized as choosing the most probable action with π(s) = argmaxa p(a | s) at test stage. To learn an optimal classifier, search-based structured prediction requires constructing a reference policy πR(s, y), which takes an input state s, gold structure y and outputs its reference action a, and training p(a | s) to imitate the reference policy. Algorithm 1 shows the common practices in training p(a | s), which involves: first, using πR(s, y) to generate a sequence of reference states and actions on the training data (line 1 to line 11 in Algorithm 1); second, using the states and actions on the reference sequences as examples to train p(a | s) with negative log-likelihood (NLL) loss (line 12 in Algorithm 1), LNLL = X s∈D X a −1{a = πR} · log p(a | s) where D is a set of training data. The reference policy is sometimes sub-optimal and ambiguous which means on one state, there can be more than one action that leads to the optimal prediction. In transition-based dependency parsing, Goldberg and Nivre (2012) showed that one dependency tree can be reached by several search sequences using Nivre (2008)’s arcstandard algorithm. In machine translation, the ambiguity problem also exists because one source language sentence usually has multiple semantically correct translations but only one reference translation is presented. Similar problems have also been observed in semantic parsing (Goodman et al., 2016). According to Fr´enay and Verleysen (2014), the widely used NLL loss is vulnerable to ambiguous data which make it worse for searchbased structured prediction. Besides the ambiguity problem, training and testing discrepancy is another problem that lags the search-based structured prediction performance. Since the training process imitates the reference policy, all the states in the training data are optimal which means they are guaranteed to reach the optimal structure. But during the test phase, the model can predict non-optimal states whose search action is never learned. The greedy search which is prone to error propagation also worsens this problem. 2.2 Knowledge Distillation A cumbersome model, which could be an ensemble of several models or a single model with larger number of parameters, usually provides better generalization ability. 
Knowledge distillation (Buciluˇa et al., 2006; Ba and Caruana, 2014; Hinton et al., 2015) is a class of methods for transferring the generalization ability of the cumbersome teacher model into a small student model. Instead of optimizing NLL loss, knowledge distillation uses the distribution q(y | x) outputted by the teacher model as “soft target” and optimizes the knowledge distillation loss, LKD = X x∈D X y −q(y | x) · log p(y | x). In search-based structured prediction scenario, x corresponds to the state s and y corresponds to the action a. Through optimizing the distillation loss, knowledge of the teacher model is learned by the student model p(y | x). When correct label is presented, NLL loss can be combined with the distillation loss via simple interpolation as L = αLKD + (1 −α)LNLL (1) 3 Knowledge Distillation for Search-based Structured Prediction 3.1 Ensemble As Hinton et al. (2015) pointed out, although the real objective of a machine learning algorithm is to generalize well to new data, models are usually trained to optimize the performance on training data, which bias the model to the training data. 1396 In search-based structured prediction, such biases can result from either the ambiguities in the training data or the discrepancy between training and testing. It would be more problematic to train p(a | s) using the loss which is in-robust to ambiguities and only considering the optimal states. The effect of ensemble on ambiguous data has been studied in Dietterich (2000). They empirically showed that ensemble can overcome the ambiguities in the training data. Daum´e III et al. (2005) also use weighted ensemble of parameters from different iterations as their final structure prediction model. In this paper, we consider to use ensemble technique to improve the generalization ability of our search-based structured prediction model following these works. In practice, we train M search-based structured prediction models with different initialized weights and ensemble them by the average of their output distribution as q(a | s) = 1 M P m qm(a | s). In Section 4.3.1, we empirically show that the ensemble has the ability to choose a good search action in the optimal-yetambiguous states and the non-optimal states. 3.2 Distillation from Reference As we can see in Section 4, ensemble indeed improves the performance of baseline models. However, real world deployment is usually constrained by computation and memory resources. Ensemble requires running the structured prediction models for multiple times, and that makes it less applicable in real-world problem. To take the advantage of the ensemble model while avoid running the models multiple times, we use the knowledge distillation technique to distill a single model from the ensemble. We started from changing the NLL learning objective in Algorithm 1 into the distillation loss (Equation 1) as shown in Algorithm 2. Since such method learns the model on the states produced by the reference policy, we name it as distillation from reference. Blocks connected by in dashed red lines in Figure 1 show the workflow of our distillation from reference. 3.3 Distillation from Exploration In the scenario of search-based structured prediction, transferring the teacher model’s generalization ability into a student model not only includes matching the teacher model’s soft targets on the reference search sequence, but also imitating the search decisions made by the teacher model. 
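Both forms of distillation rely on the same ingredients, namely the averaged ensemble distribution of § 3.1 and the losses of § 2.2. The sketch below spells these out on plain probability arrays; the numbers are hypothetical and the function names are ours.

```python
import numpy as np

def ensemble_distribution(member_distributions):
    """q(a|s): average of the M individually trained models' output
    distributions on one state (Section 3.1)."""
    return np.mean(member_distributions, axis=0)

def kd_loss(p, q):
    """L_KD: cross entropy between the teacher's soft target q(.|s)
    and the student's distribution p(.|s) (Section 2.2)."""
    return -np.sum(q * np.log(p))

def nll_loss(p, reference_action):
    """L_NLL: negative log-likelihood of the single reference action."""
    return -np.log(p[reference_action])

def combined_loss(p, q, reference_action, alpha):
    """Equation 1: L = alpha * L_KD + (1 - alpha) * L_NLL."""
    return alpha * kd_loss(p, q) + (1 - alpha) * nll_loss(p, reference_action)

# Hypothetical distributions over 3 actions on a single state.
members = [np.array([0.7, 0.2, 0.1]),
           np.array([0.5, 0.4, 0.1]),
           np.array([0.6, 0.2, 0.2])]
q = ensemble_distribution(members)      # teacher: [0.6, 0.267, 0.133]
p = np.array([0.8, 0.15, 0.05])         # student
print(combined_loss(p, q, reference_action=0, alpha=0.8))
```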
One way to accomplish the imitation can be sampling Algorithm 2: Knowledge distillation for search-based structured prediction. Input: training data: {x(n), y(n)}N n=1; the reference policy: πR(s, y); the exploration policy: πE(s) which samples an action from the annealed ensemble q(a | s) 1 T Output: classifier p(a | s). 1 D ←∅; 2 for n ←1...N do 3 t ←0; 4 st ←s0(x(n)); 5 while st /∈ST do 6 if distilling from reference then 7 at ←πR(st, y(n)); 8 else 9 at ←πE(st); 10 end 11 D ←D ∪{st}; 12 st+1 ←T (st, at); 13 t ←t + 1; 14 end 15 end 16 if distilling from reference then 17 optimize αLKD + (1 −α)LNLL; 18 else 19 optimize LKD; 20 end search sequence from the ensemble and learn from the soft target on the sampled states. More concretely, we change πR(s, y) into a policy πE(s) which samples an action a from q(a | s) 1 T , where T is the temperature that controls the sharpness of the distribution (Hinton et al., 2015). The algorithm is shown in Algorithm 2. Since such distillation generate training instances from exploration, we name it as distillation from exploration. Blocks connected by in solid blue lines in Figure 1 show the workflow of our distillation from exploration. On the sampled states, reference decision from πR is usually non-trivial to achieve, which makes learning from NLL loss infeasible. In Section 4, we empirically show that fully distilling from the soft target, i.e. setting α = 1 in Equation 1, achieves comparable performance with that both from distillation and NLL. 1397 3.4 Distillation from Both Distillation from reference can encourage the model to predict the action made by the reference policy and distillation from exploration learns the model on arbitrary states. They transfer the generalization ability of the ensemble from different aspects. Hopefully combining them can further improve the performance. In this paper, we combine distillation from reference and exploration with the following manner: we use πR and πE to generate a set of training states. Then, we learn p(a | s) on the generated states. If one state was generated by the reference policy, we minimize the interpretation of distillation and NLL loss. Otherwise, we minimize the distillation loss only. 4 Experiments We perform experiments on two tasks: transitionbased dependency parsing and neural machine translation. Both these two tasks are converted to search-based structured prediction as Section 2.1. For the transition-based parsing, we use the stack-lstm parsing model proposed by Dyer et al. (2015) to parameterize the classifier.1 For the neural machine translation, we parameterize the classifier as an LSTM encoder-decoder model by following Luong et al. (2015).2 We encourage the reader of this paper to refer corresponding papers for more details. 4.1 Settings 4.1.1 Transition-based Dependency Parsing We perform experiments on Penn Treebank (PTB) dataset with standard data split (Section 2-21 for training, Section 22 for development, and Section 23 for testing). Stanford dependencies are converted from the original constituent trees using Stanford CoreNLP 3.3.03 by following Dyer et al. (2015). Automatic part-of-speech tags are assigned by 10-way jackknifing whose accuracy is 97.5%. Labeled attachment score (LAS) excluding punctuation are used in evaluation. For the other hyper-parameters, we use the same settings as Dyer et al. (2015). The best iteration and α is determined on the development set. 1The code for parsing experiments is available at: https://github.com/Oneplus/twpipe. 
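In code, the data-generation part of Algorithm 2 (with Algorithm 1 as the reference-only special case) reduces to rolling out a policy and recording the visited states. The sketch below is illustrative only; the policies, the initial-state constructor and the transition function are assumed to be supplied by the caller, and all names are ours.

```python
def generate_states(examples, reference_policy, exploration_policy,
                    initial_state, transition, is_terminal, from_reference):
    """Roll out either the reference policy pi_R or the exploration policy pi_E
    on each training example and collect the visited states (lines 1-15 of
    Algorithm 2). The flag stored with each state records which loss applies."""
    collected = []
    for x, y in examples:
        state = initial_state(x)
        while not is_terminal(state):
            if from_reference:
                action = reference_policy(state, y)   # pi_R(s, y)
            else:
                action = exploration_policy(state)    # pi_E(s): sample from q(a|s)^(1/T)
            collected.append((state, from_reference))
            state = transition(state, action)
    return collected

# The collected states are then used to optimize
#   alpha * L_KD + (1 - alpha) * L_NLL   on reference states, and
#   L_KD                                 on exploration states.
```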
2We based our NMT experiments on OpenNMT (Klein et al., 2017). The code for NMT experiments is available at: https://github.com/Oneplus/OpenNMT-py. 3stanfordnlp.github.io/CoreNLP/ history.html 27.13 27.06 26.86 26.934 26.926 26.99 26.99 26.25 26.50 26.75 27.00 27.25 1 2 5 10 20 50 100 BLEU score on dev. set Figure 2: The effect of using different Ks when approximating distillation loss with K-most probable actions in the machine translation experiments. Reimers and Gurevych (2017) and others have pointed out that neural network training is nondeterministic and depends on the seed for the random number generator. To control for this effect, they suggest to report the average of M differentlyseeded runs. In all our dependency parsing, we set n = 20. 4.1.2 Neural Machine Translation We conduct our experiments on a small machine translation dataset, which is the Germanto-English portion of the IWSLT 2014 machine translation evaluation campaign. The dataset contains around 153K training sentence pairs, 7K development sentence pairs, and 7K testing sentence pairs. We use the same preprocessing as Ranzato et al. (2015), which leads to a German vocabulary of about 30K entries and an English vocabulary of 25K entries. One-layer LSTM for both encoder and decoder with 256 hidden units are used by following Wiseman and Rush (2016). BLEU (Papineni et al., 2002) was used to evaluate the translator’s performance.4 Like in the dependency parsing experiments, we run M = 10 differentlyseeded runs and report the averaged score. Optimizing the distillation loss in Equation 1 requires enumerating over the action space. It is expensive for machine translation since the size of the action space (vocabulary) is considerably large (25K in our experiments). In this paper, we use the K-most probable actions (translations on target side) on one state to approximate the whole probability distribution of q(a | s) as P a q(a | s) · log p(a | s) ≈PK k q(ˆak | s) · log p(ˆak | s), where ˆak is the k-th probable action. We fix α to 4We use multi-bleu.perl to evaluate our model’s performance 1398 LAS Baseline 90.83 Ensemble (20) 92.73 Distill (reference, α=1.0) 91.99 Distill (exploration, T=1.0) 92.00 Distill (both) 92.14 Ballesteros et al. (2016) (dyn. oracle) 91.42 Andor et al. (2016) (local, B=1) 91.02 Buckman et al. (2016) (local, B=8) 91.19 Andor et al. (2016) (local, B=32) 91.70 Andor et al. (2016) (global, B=32) 92.79 Dozat and Manning (2016) 94.08 Kuncoro et al. (2016) 92.06 Kuncoro et al. (2017) 94.60 Table 2: The dependency parsing results. Significance test (Nilsson and Nivre, 2008) shows the improvement of our Distill (both) over Baseline is statistically significant with p < 0.01. 1 and vary K and evaluate the distillation model’s performance. These results are shown in Figure 2 where there is no significant difference between different Ks and in speed consideration, we set K to 1 in the following experiments. 4.2 Results 4.2.1 Transition-based Dependency Parsing Table 2 shows our PTB experimental results. From this result, we can see that the ensemble model outperforms the baseline model by 1.90 in LAS. For our distillation from reference, when setting α = 1.0, best performance on development set is achieved and the test LAS is 91.99. We tune the temperature T during exploration and the results are shown in Figure 3. Sharpen the distribution during the sampling process generally performs better on development set. 
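A minimal sketch of this K-best truncation on hypothetical distributions (the helper for generating random distributions is ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_distribution(size):
    """Helper (ours): a random probability distribution via a softmax."""
    logits = rng.normal(size=size)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def topk_kd_loss(p, q, k):
    """Approximate -sum_a q(a|s) * log p(a|s) using only the K most probable
    teacher actions, as done for the 25K-word NMT vocabulary."""
    topk = np.argsort(q)[-k:]           # indices of the K largest q(a|s)
    return -np.sum(q[topk] * np.log(p[topk]))

# Hypothetical teacher (ensemble) and student distributions over the vocabulary.
q = random_distribution(25_000)
p = random_distribution(25_000)
for k in (1, 5, 100):
    print(k, topk_kd_loss(p, q, k))
```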
Our distillation from exploration model gets almost the same performance as that from reference, but simply combing these two sets of data outperform both models by achieving an LAS of 92.14. We also compare our parser with the other parsers in Table 2. The second group shows the greedy transition-based parsers in previous literatures. Andor et al. (2016) presented an alternative state representation and explored both greedy and beam search decoding. (Ballesteros et al., 2016) explores training the greedy parser with dynamic oracle. Our distillation parser outperforms all these greedy counterparts. The third group shows BLEU Baseline 22.79 Ensemble (10) 26.26 Distill (reference, α=0.8) 24.76 Distill (exploration, T=0.1) 24.64 Distill (both) 25.44 MIXER 20.73 BSO (local, B=1) 22.53 BSO (global, B=1) 23.83 Table 3: The machine translation results. MIXER denotes that of Ranzato et al. (2015), BSO denotes that of Wiseman and Rush (2016). Significance test (Koehn, 2004) shows the improvement of our Distill (both) over Baseline is statistically significant with p < 0.01. 92.11 92.14 92.05 92.09 92.11 91.98 91.85 89.04 89 90 91 92 0.1 0.2 0.5 0.67 1 1.5 2 5 LAS on dev. set 26.99 26.97 26.93 26.7 26.24 26.25 26.50 26.75 27.00 0.1 0.2 0.5 0.67 1 BLEU on dev. set Figure 3: The effect of T on PTB (above) and IWSLT 2014 (below) development set. parsers trained on different techniques including decoding with beam search (Buckman et al., 2016; Andor et al., 2016), training transitionbased parser with beam search (Andor et al., 2016), graph-based parsing (Dozat and Manning, 2016), distilling a graph-based parser from the output of 20 parsers (Kuncoro et al., 2016), and converting constituent parsing results to dependencies (Kuncoro et al., 2017). Our distillation parser still outperforms its transition-based counterparts but lags the others. We attribute the gap between our parser with the other parsers to the difference in parsing algorithms. 1399 4.2.2 Neural Machine Translation Table 3 shows the experimental results on IWSLT 2014 dataset. Similar to the PTB parsing results, the ensemble 10 translators outperforms the baseline translator by 3.47 in BLEU score. Distilling from the ensemble by following the reference leads to a single translator of 24.76 BLEU score. Like in the parsing experiments, sharpen the distribution when exploring the search space is more helpful to the model’s performance but the differences when T ≤0.2 is not significant as shown in Figure 3. We set T = 0.1 in our distillation from exploration experiments since it achieves the best development score. Table 3 shows the exploration result of a BLEU score of 24.64 and it slightly lags the best reference model. Distilling from both the reference and exploration improves the single model’s performance by a large margin and achieves a BLEU score of 25.44. We also compare our model with other translation models including the one trained with reinforcement learning (Ranzato et al., 2015) and that using beam search in training (Wiseman and Rush, 2016). Our distillation translator outperforms these models. Both the parsing and machine translation experiments confirm that it’s feasible to distill a reasonable search-based structured prediction model by just exploring the search space. Combining the reference and exploration further improves the model’s performance and outperforms its greedy structured prediction counterparts. 
4.3 Analysis In Section 4.2, improvements from distilling the ensemble have been witnessed in both the transition-based dependency parsing and neural machine translation experiments. However, questions like “Why the ensemble works better? Is it feasible to fully learn from the distillation loss without NLL? Is learning from distillation loss stable?” are yet to be answered. In this section, we first study the ensemble’s behavior on “problematic” states to show its generalization ability. Then, we empirically study the feasibility of fully learning from the distillation loss by studying the effect of α in the distillation from reference setting. Finally, we show that learning from distillation loss is less sensitive to initialization and achieves a more stable model. optimal-yetambiguous non-optimal Baseline 68.59 89.59 Ensemble 74.19 90.90 Distill (both) 81.15 91.38 Table 4: The ranking performance of parsers’ output distributions evaluated in MAP on “problematic” states. 4.3.1 Ensemble on “Problematic” States As mentioned in previous sections, “problematic” states which is either ambiguous or non-optimal harm structured prediciton’s performance. Ensemble shows to improve the performance in Section 4.2, which indicates it does better on these states. To empirically testify this, we use dependency parsing as a testbed and study the ensemble’s output distribution using the dynamic oracle. The dynamic oracle (Goldberg and Nivre, 2012; Goldberg et al., 2014) can be used to efficiently determine, given any state s, which transition action leads to the best achievable parse from s; if some errors may have already made, what is the best the parser can do, going forward? This allows us to analyze the accuracy of each parser’s individual decisions, in the “problematic” states. In this paper, we evaluate the output distributions of the baseline and ensemble parser against the reference actions suggested by the dynamic oracle. Since dynamic oracle yields more than one reference actions due to ambiguities and previous mistakes and the output distribution can be treated as their scoring, we evaluate them as a ranking problem. Intuitively, when multiple reference actions exist, a good parser should push probability mass to these actions. We draw problematic states by sampling from our baseline parser. The comparison in Table 4 shows that the ensemble model significantly outperforms the baseline on ambiguous and non-optimal states. This observation indicates the ensemble’s output distribution is more “informative”, thus generalizes well on problematic states and achieves better performance. We also observe that the distillation model perform better than both the baseline and ensemble. We attribute this to the fact that the distillation model is learned from exploration. 1400 92.07 92.04 91.93 91.9 91.7 91.72 91.55 91.49 91.3 91.1 90.9 91.0 91.5 92.0 92.5 1.0 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0.0 LAS on dev. set 26.96 27.04 27.13 26.95 26.6 26.64 26.37 26.21 26.09 25.9 24.93 25.0 25.5 26.0 26.5 27.0 27.5 1.0 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0.0 BLEU on dev. set Figure 4: The effect of α on PTB (above) and IWSLT 2014 (below) development set. 4.3.2 Effect of α Over our distillation from reference model, we study the effect of α in Equation 1. We vary α from 0 to 1 by a step of 0.1 in both the transitionbased dependency parsing and neural machine translation experiments and plot the model’s performance on development sets in Figure 4. 
Similar trends are witnessed in both these two experiments that model that’s configured with larger α generally performs better than that with smaller α. For the dependency parsing problem, the best development performance is achieved when we set α = 1, and for the machine translation, the best α is 0.8. There is only 0.2 point of difference between the best α model and the one with α equals to 1. Such observation indicates that when distilling from the reference policy paying more attention to the distillation loss rather than the NLL is more beneficial. It also indicates that fully learning from the distillation loss outputted by the ensemble is reasonable because models configured with α = 1 generally achieves good performance. 4.3.3 Learning Stability Besides the improved performance, knowledge distillation also leads to more stable learning. The performance score distributions of differentlyseed runs are depicted as violin plot in Figure 5. Table 5 also reveals the smaller standard derivations are achieved by our distillation methods. As Keskar et al. (2016) pointed out that the general90.5 91.0 91.5 92.0 92.5 Baseline Distill (both) 22 23 24 25 26 Figure 5: The distributions of scores for the baseline model and our distillation from both on PTB test (left) and IWSLT 2014 test (right) on differently-seeded runs. system seeds min max σ PTB test Baseline 20 90.45 91.14 0.17 Distill (both) 20 92.00 92.37 0.09 IWSLT 2014 test Baseline 10 21.63 23.67 0.55 Distill (both) 10 24.22 25.65 0.12 Table 5: The minimal, maximum, and standard derivation values on differently-seeded runs. ization gap is not due to overfit, but due to the network converge to sharp minimizer which generalizes worse, we attribute the more stable training from our distillation model as the distillation loss presents less sharp minimizers. 5 Related Work Several works have been proposed to applying knowledge distillation to NLP problems. Kim and Rush (2016) presented a distillation model which focus on distilling the structured loss from a large model into a small one which works on sequencelevel. In contrast to their work, we pay more attention to action-level distillation and propose to do better action-level distillation by both from reference and exploration. Freitag et al. (2017) used an ensemble of 6translators to generate training reference. Exploration was tried in their work with beam-search. We differ their work by training the single model 1401 to match the distribution of the ensemble. Using ensemble in exploration was also studied in reinforcement learning community (Osband et al., 2016). In addition to distilling the ensemble on the labeled training data, a line of semisupervised learning works show that it’s effective to transfer knowledge of cumbersome model into a simple one on the unlabeled data (Liang et al., 2008; Li et al., 2014). Their extensions to knowledge distillation call for further study. Kuncoro et al. (2016) proposed to compile the knowledge from an ensemble of 20 transitionbased parsers into a voting and distill the knowledge by introducing the voting results as a regularizer in learning a graph-based parser. Different from their work, we directly do the distillation on the classifier of the transition-based parser. 
Besides the attempts for directly using the knowledge distillation technique, Stahlberg and Byrne (2017) propose to first build the ensemble of several machine translators into one network by unfolding and then use SVD to shrink its parameters, which can be treated as another kind of knowledge distillation. 6 Conclusion In this paper, we study knowledge distillation for search-based structured prediction and propose to distill an ensemble into a single model both from reference and exploration states. Experiments on transition-based dependency parsing and machine translation show that our distillation method significantly improves the single model’s performance. Comparison analysis gives empirically guarantee for our distillation method. Acknowledgments We thank the anonymous reviewers for their helpful comments and suggestions. This work was supported by the National Key Basic Research Program of China via grant 2014CB340503 and the National Natural Science Foundation of China (NSFC) via grant 61632011 and 61772153. References Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proc. of ACL. Jimmy Ba and Rich Caruana. 2014. Do deep nets really need to be deep? In NIPS 27, pages 2654–2662. Miguel Ballesteros, Yoav Goldberg, Chris Dyer, and Noah A. Smith. 2016. Training with exploration improves a greedy stack lstm parser. In Proc. of EMNLP. Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In NIPS 28, pages 1171–1179. Cristian Buciluˇa, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In Proc. of KDD. Jacob Buckman, Miguel Ballesteros, and Chris Dyer. 2016. Transition-based dependency parsing with heuristic backtracking. In Proc. of EMNLP. Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daum´e III, and John Langford. 2015. Learning to search better than your teacher. In Proc. of ICML. Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proc. of ACL. Hal Daum´e III, John Langford, and Daniel Marcu. 2005. Search-based structured prediction as classification. In NIPS Workshop on ASLTSP. Hal Daum´e III, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine Learning, 75(3). Thomas G. Dietterich. 2000. An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization. Machine Learning, 40(2):139–157. Janardhan Rao Doppa, Alan Fern, and Prasad Tadepalli. 2014. Hc-search: A learning framework for search-based structured prediction. J. Artif. Intell. Res. (JAIR), 50. Timothy Dozat and Christopher D. Manning. 2016. Deep biaffine attention for neural dependency parsing. CoRR, abs/1611.01734. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proc. of ACL. Markus Freitag, Yaser Al-Onaizan, and Baskaran Sankaran. 2017. Ensemble distillation for neural machine translation. CoRR, abs/1702.01802. Benoˆıt Fr´enay and Michel Verleysen. 2014. Classification in the presence of label noise: A survey. IEEE Transactions on Neural Networks and Learning Systems, 25:845–869. 1402 Yoav Goldberg and Joakim Nivre. 2012. A dynamic oracle for arc-eager dependency parsing. In Proc. of COLING. 
Yoav Goldberg, Francesco Sartorio, and Giorgio Satta. 2014. A tabular method for dynamic oracles in transition-based parsing. TACL, 2. James Goodman, Andreas Vlachos, and Jason Naradowsky. 2016. Noise reduction and targeted exploration in imitation learning for abstract meaning representation parsing. In Proc. of ACL. Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. CoRR, abs/1503.02531. Liang Huang, Suphan Fayong, and Yang Guo. 2012. Structured perceptron with inexact search. In Proc. of NAACL. Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. 2016. On large-batch training for deep learning: Generalization gap and sharp minima. CoRR, abs/1609.04836. Yoon Kim and Alexander M. Rush. 2016. Sequencelevel knowledge distillation. In Proc. of EMNLP. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. In Proc. of ACL 2017, System Demonstrations. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proc. of EMNLP 2004. Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, Graham Neubig, and Noah A. Smith. 2017. What do recurrent neural network grammars learn about syntax? In Proc. of EACL. Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, and Noah A. Smith. 2016. Distilling an ensemble of greedy dependency parsers into one MST parser. In Proc. of EMNLP. Zhenghua Li, Min Zhang, and Wenliang Chen. 2014. Ambiguity-aware ensemble training for semisupervised dependency parsing. In Proc. of ACL. P. Liang, H. Daum´e, and D. Klein. 2008. Structure compilation: trading structure for features. pages 592–599. Percy Liang, Alexandre Bouchard-Cˆot´e, Dan Klein, and Ben Taskar. 2006. An end-to-end discriminative approach to machine translation. In Proc. of ACL. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proc. of EMNLP. Jens Nilsson and Joakim Nivre. 2008. Malteval: an evaluation and visualization tool for dependency parsing. In Proc. of LREC. Http://www.lrecconf.org/proceedings/lrec2008/. Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Computational Linguistics, 34(4). Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. 2016. Deep exploration via bootstrapped dqn. In NIPS 29, pages 4026–4034. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proc. of ACL. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. CoRR, abs/1511.06732. Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of lstm-networks for sequence tagging. In Proc. of EMNLP. Stephane Ross and Drew Bagnell. 2010. Efficient reductions for imitation learning. In Proc. of AISTATS, volume 9. Stephane Ross, Geoffrey Gordon, and Drew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In Proc. of AISTATS, volume 15. Felix Stahlberg and Bill Byrne. 2017. Unfolding and shrinking neural machine translation ensembles. In Proc. of EMNLP. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In NIPS 27, pages 3104–3112. 
Andreas Vlachos and Stephen Clark. 2014. A new corpus and imitation learning framework for contextdependent semantic parsing. Transactions of the Association for Computational Linguistics, 2:547–559. Sam Wiseman and Alexander M. Rush. 2016. Sequence-to-sequence learning as beam-search optimization. In Proc. of EMNLP. Yue Zhang and Stephen Clark. 2008. A tale of two parsers: Investigating and combining graph-based and transition-based dependency parsing. In Proc. of EMNLP.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 132–141 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 132 A Unified Model for Extractive and Abstractive Summarization using Inconsistency Loss Wan-Ting Hsu1, Chieh-Kai Lin1, Ming-Ying Lee1, Kerui Min2, Jing Tang2, Min Sun1 1 National Tsing Hua University, 2 Cheetah Mobile {hsuwanting, axk51013, masonyl03}@gapp.nthu.edu.tw, {minkerui, tangjing}@cmcm.com, [email protected] Abstract We propose a unified model combining the strength of extractive and abstractive summarization. On the one hand, a simple extractive model can obtain sentence-level attention with high ROUGE scores but less readable. On the other hand, a more complicated abstractive model can obtain word-level dynamic attention to generate a more readable paragraph. In our model, sentence-level attention is used to modulate the word-level attention such that words in less attended sentences are less likely to be generated. Moreover, a novel inconsistency loss function is introduced to penalize the inconsistency between two levels of attentions. By end-to-end training our model with the inconsistency loss and original losses of extractive and abstractive models, we achieve state-of-theart ROUGE scores while being the most informative and readable summarization on the CNN/Daily Mail dataset in a solid human evaluation. 1 Introduction Text summarization is the task of automatically condensing a piece of text to a shorter version while maintaining the important points. The ability to condense text information can aid many applications such as creating news digests, presenting search results, and generating reports. There are mainly two types of approaches: extractive and abstractive. Extractive approaches assemble summaries directly from the source text typically selecting one whole sentence at a time. In contrast, abstractive approaches can generate novel words and phrases not copied from the source text. Original Article: McDonald’s says...... The company says it expects the new ‘Artisan Grilled Chicken’ to be in its more than 14,300 U.S. stores by the end of next week, in products including a new sandwich, as well as existing sandwiches, wraps and salads. It says the biggest change is the removal of sodium phosphates, which it said was used to keep the chicken moist, in favor of vegetable starch. The new recipe also does not use maltodextrin, which McDonald’s said is generally used as a sugar to increase browning or as a carrier for seasoning. Jessica Foust, director of culinary innovation at McDonald’s, said the changes were made because customers said they want ‘simple, clean ingredients’ they are familiar with...... And Panera Bread has said it plans to purge artificial colors, flavors and preservatives from its food by 2016...... Extractive Approach: The company says it expects the new ‘Artisan Grilled Chicken’ to be in its more than 14,300 U.S. stores by the end of next week, in products including a new sandwich, as well as existing sandwiches, wraps and salads. It says the biggest change is the removal of sodium phosphates, which it said was used to keep the chicken moist, in favor of vegetable starch. The new recipe also does not use maltodextrin, which McDonald’s said is generally used as a sugar to increase browning or as a carrier for seasoning. Abstractive Approach: McDonald’s says it expects the new ‘Artisan Grilled Chicken’ to be in its more than 14,300 U.S. 
stores by the end of next week. The company says the changes were made because customers said they want ‘simple, clean ingredients’ they are familiar with. McDonald’s said it plans to purge artificial colors, flavors and preservatives from its food by 2016. Unified Approach: McDonald’s says it expects the new ‘Artisan Grilled Chicken’ to be in its more than 14,300 U.S. stores by the end of next week, in products including a new sandwich, as well as existing sandwiches, wraps and salads. It says the biggest change is the removal of sodium phosphates. The new recipe also does not use maltodextrin, which McDonald’s said is generally used as a sugar to increase browning or as a carrier for seasoning. Figure 1: Comparison of extractive, abstractive, and our unified summaries on a news article. The extractive model picks most important but incoherent or not concise (see blue bold font) sentences. The abstractive summary is readable, concise but still loses or mistakes some facts (see red italics font). The final summary rewritten from fragments (see underline font) has the advantages from both extractive (importance) and abstractive advantage (coherence (see green bold font)). Hence, abstractive summaries can be more coherent and concise than extractive summaries. Extractive approaches are typically simpler. They output the probability of each sentence to be selected into the summary. Many earlier works on summarization (Cheng and Lapata, 2016; Nallapati et al., 2016a, 2017; Narayan et al., 2017; Yasunaga et al., 2017) focus on extractive summarization. Among them, Nallapati et al. 133 (2017) have achieved high ROUGE scores. On the other hand, abstractive approaches (Nallapati et al., 2016b; See et al., 2017; Paulus et al., 2017; Fan et al., 2017; Liu et al., 2017) typically involve sophisticated mechanism in order to paraphrase, generate unseen words in the source text, or even incorporate external knowledge. Neural networks (Nallapati et al., 2017; See et al., 2017) based on the attentional encoder-decoder model (Bahdanau et al., 2014) were able to generate abstractive summaries with high ROUGE scores but suffer from inaccurately reproducing factual details and an inability to deal with outof-vocabulary (OOV) words. Recently, See et al. (2017) propose a pointer-generator model which has the abilities to copy words from source text as well as generate unseen words. Despite recent progress in abstractive summarization, extractive approaches (Nallapati et al., 2017; Yasunaga et al., 2017) and lead-3 baseline (i.e., selecting the first 3 sentences) still achieve strong performance in ROUGE scores. We propose to explicitly take advantage of the strength of state-of-the-art extractive and abstractive summarization and introduced the following unified model. Firstly, we treat the probability output of each sentence from the extractive model (Nallapati et al., 2017) as sentence-level attention. Then, we modulate the word-level dynamic attention from the abstractive model (See et al., 2017) with sentence-level attention such that words in less attended sentences are less likely to be generated. In this way, extractive summarization mostly benefits abstractive summarization by mitigating spurious word-level attention. Secondly, we introduce a novel inconsistency loss function to encourage the consistency between two levels of attentions. 
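As a minimal sketch of these two ideas (the exact formulations are given later in Section 3, Eq. 1 and Eq. 2), the attention modulation and the inconsistency penalty could be written as follows; the tensor shapes and the word-to-sentence index mapping are illustrative assumptions rather than details of the authors' implementation.

```python
import torch

def modulate_attention(word_attn, sent_attn, sent_index):
    """Eq. 1: scale each word's attention by the attention of its sentence,
    then renormalize over the source words.

    word_attn:  (T, M) word-level attention for each decoder step t
    sent_attn:  (N,)   sentence-level attention from the extractor
    sent_index: (M,)   index n(m) of the sentence containing word m
    """
    scaled = word_attn * sent_attn[sent_index]          # broadcast over decoder steps
    return scaled / scaled.sum(dim=-1, keepdim=True)

def inconsistency_loss(word_attn, sent_attn, sent_index, k=3, eps=1e-12):
    """Eq. 2: encourage the top-k attended words at each decoder step to lie
    in highly attended sentences."""
    topk_vals, topk_idx = word_attn.topk(k, dim=-1)      # (T, k)
    sent_of_topk = sent_attn[sent_index[topk_idx]]       # (T, k)
    per_step = (topk_vals * sent_of_topk).mean(dim=-1)   # (T,)
    return -torch.log(per_step + eps).mean()
```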
The loss function can be computed without additional human annotation and has shown to ensure our unified model to be mutually beneficial to both extractive and abstractive summarization. On CNN/Daily Mail dataset, our unified model achieves state-of-theart ROUGE scores and outperforms a strong extractive baseline (i.e., lead-3). Finally, to ensure the quality of our unified model, we conduct a solid human evaluation and confirm that our method significantly outperforms recent state-ofthe-art methods in informativity and readability. To summarize, our contributions are twofold: • We propose a unified model combining sentence-level and word-level attentions to take advantage of both extractive and abstractive summarization approaches. • We propose a novel inconsistency loss function to ensure our unified model to be mutually beneficial to both extractive and abstractive summarization. The unified model with inconsistency loss achieves the best ROUGE scores on CNN/Daily Mail dataset and outperforms recent state-of-the-art methods in informativity and readability on human evaluation. 2 Related Work Text summarization has been widely studied in recent years. We first introduce the related works of neural-network-based extractive and abstractive summarization. Finally, we introduce a few related works with hierarchical attention mechanism. Extractive summarization. K˚ageb¨ack et al. (2014) and Yin and Pei (2015) use neural networks to map sentences into vectors and select sentences based on those vectors. Cheng and Lapata (2016), Nallapati et al. (2016a) and Nallapati et al. (2017) use recurrent neural networks to read the article and get the representations of the sentences and article to select sentences. Narayan et al. (2017) utilize side information (i.e., image captions and titles) to help the sentence classifier choose sentences. Yasunaga et al. (2017) combine recurrent neural networks with graph convolutional networks to compute the salience (or importance) of each sentence. While some extractive summarization methods obtain high ROUGE scores, they all suffer from low readability. Abstractive summarization. Rush et al. (2015) first bring up the abstractive summarization task and use attention-based encoder to read the input text and generate the summary. Based on them, Miao and Blunsom (2016) use a variational auto-encoder and Nallapati et al. (2016b) use a more powerful sequence-to-sequence model. Besides, Nallapati et al. (2016b) create a new articlelevel summarization dataset called CNN/Daily Mail by adapting DeepMind question-answering dataset (Hermann et al., 2015). Ranzato et al. (2015) change the traditional training method to directly optimize evaluation metrics (e.g., BLEU and ROUGE). Gu et al. (2016), See et al. (2017) and Paulus et al. (2017) combine pointer networks 134 Sentence 1 Sentence 2 Sentence 3 Inconsistent Updated Word Attention 1.0 0.5 Sentence Attention (transparent bars) and Word Attention (solid bars) Attenuated Multiplying and Renormalizing Sentence and Word Attentions Sentence 1 Sentence 2 Sentence 3 Figure 2: Our unified model combines the word-level and sentence-level attentions. Inconsistency occurs when word attention is high but sentence attention is low (see red arrow). (Vinyals et al., 2015) into their models to deal with out-of-vocabulary (OOV) words. Chen et al. (2016) and See et al. (2017) restrain their models from attending to the same word to decrease repeated phrases in the generated summary. Paulus et al. 
(2017) use policy gradient on summarization and state out the fact that high ROUGE scores might still lead to low human evaluation scores. Fan et al. (2017) apply convolutional sequenceto-sequence model and design several new tasks for summarization. Liu et al. (2017) achieve high readability score on human evaluation using generative adversarial networks. Hierarchical attention. Attention mechanism was first proposed by Bahdanau et al. (2014). Yang et al. (2016) proposed a hierarchical attention mechanism for document classification. We adopt the method of combining sentence-level and word-level attention in Nallapati et al. (2016b). However, their sentence attention is dynamic, which means it will be different for each generated word. Whereas our sentence attention is fixed for all generated words. Inspired by the high performance of extractive summarization, we propose to use fixed sentence attention. Our model combines state-of-the-art extractive model (Nallapati et al., 2017) and abstractive model (See et al., 2017) by combining sentencelevel attention from the former and word-level attention from the latter. Furthermore, we design an inconsistency loss to enhance the cooperation between the extractive and abstractive models. 3 Our Unified Model We propose a unified model to combine the strength of both state-of-the-art extractor (Nallapati et al., 2017) and abstracter (See et al., 2017). Before going into details of our model, we first define the tasks of the extractor and abstracter. Problem definition. The input of both extractor and abstracter is a sequence of words w = [w1, w2, ..., wm, ...], where m is the word index. The sequence of words also forms a sequence of sentences s = [s1, s2, ..., sn, ...], where n is the sentence index. The mth word is mapped into the n(m)th sentence, where n(·) is the mapping function. The output of the extractor is the sentencelevel attention β = [β1, β2, ..., βn, ...], where βn is the probability of the nth sentence been extracted into the summary. On the other hand, our attention-based abstractor computes word-level attention αt =  αt 1, αt 2, ..., αt m, ...  dynamically while generating the tth word in the summary. The output of the abstracter is the summary text y =  y1, y2, ..., yt, ...  , where yt is tth word in the summary. In the following, we introduce the mechanism to combine sentence-level and word-level attentions in Sec. 3.1. Next, we define the novel inconsistency loss that ensures extractor and abstracter to be mutually beneficial in Sec. 3.2. We also give the details of our extractor in Sec. 3.3 and our abstracter in Sec. 3.4. Finally, our training procedure is described in Sec. 3.5. 3.1 Combining Attentions Pieces of evidence (e.g., Vaswani et al. (2017)) show that attention mechanism is very important for NLP tasks. Hence, we propose to explicitly combine the sentence-level βn and word-level αt m attentions by simple scalar multiplication and renormalization. The updated word attention ˆαt m is ˆαt m = αt m × βn(m) P m αtm × βn(m) . (1) The multiplication ensures that only when both word-level αt m and sentence-level βn attentions are high, the updated word attention ˆαt m can be high. Since the sentence-level attention βn from the extractor already achieves high ROUGE 135 GRU GRU GRU GRU GRU GRU GRU GRU GRU 𝑤1 𝑤2 𝑤3 𝑤4 𝑤5 𝑤6 𝑤7 𝑤8 𝑤9 GRU GRU GRU Sentence-level RNN Word-level RNN Sentence-Level Attention 0.9 0.2 0.5 Figure 3: Architecture of the extractor. 
We treat the sigmoid output of each sentence as sentencelevel attention ∈[0, 1]. scores, βn intuitively modulates the word-level attention αt m to mitigate spurious word-level attention such that words in less attended sentences are less likely to be generated (see Fig. 2). As highlighted in Sec. 3.4, the word-level attention ˆαt m significantly affects the decoding process of the abstracter. Hence, an updated word-level attention is our key to improve abstractive summarization. 3.2 Inconsistency Loss Instead of only leveraging the complementary nature between sentence-level and word-level attentions, we would like to encourage these two-levels of attentions to be mostly consistent to each other during training as an intrinsic learning target for free (i.e., without additional human annotation). Explicitly, we would like the sentence-level attention to be high when the word-level attention is high. Hence, we design the following inconsistency loss, Linc = −1 T T X t=1 log( 1 |K| X m∈K αt m × βn(m)), (2) where K is the set of top K attended words and T is the number of words in the summary. This implicitly encourages the distribution of the wordlevel attentions to be sharp and sentence-level attention to be high. To avoid the degenerated solution for the distribution of word attention to be one-hot and sentence attention to be high, we include the original loss functions for training the extractor ( Lext in Sec. 3.3) and abstracter (Labs and Lcov in Sec. 3.4). Note that Eq. 1 is the only part that the extractor is interacting with the abstracter. Our proposed inconsistency loss facilitates our end-to-end trained unified model to be mutually beneficial to both the extractor and abstracter. 3.3 Extractor Our extractor is inspired by Nallapati et al. (2017). The main difference is that our extractor does not need to obtain the final summary. It mainly needs to obtain a short list of important sentences with a high recall to further facilitate the abstractor. We first introduce the network architecture and the loss function. Finally, we define our ground truth important sentences to encourage high recall. Architecture. The model consists of a hierarchical bidirectional GRU which extracts sentence representations and a classification layer for predicting the sentence-level attention βn for each sentence (see Fig. 3). Extractor loss. The following sigmoid cross entropy loss is used, Lext = −1 N N X n=1 (gn log βn + (1 −gn) log(1 −βn)), (3) where gn ∈{0, 1} is the ground-truth label for the nth sentence and N is the number of sentences. When gn = 1, it indicates that the nth sentence should be attended to facilitate abstractive summarization. Ground-truth label. The goal of our extractor is to extract sentences with high informativity, which means the extracted sentences should contain information that is needed to generate an abstractive summary as much as possible. To obtain the ground-truth labels g = {gn}n, first, we measure the informativity of each sentence sn in the article by computing the ROUGE-L recall score (Lin, 2004) between the sentence sn and the reference abstractive summary ˆy = {ˆyt}t. Second, we sort the sentences by their informativity and select the sentence in the order of high to low informativity. We add one sentence at a time if the new sentence can increase the informativity of all the selected sentences. Finally, we obtain the ground-truth labels g and train our extractor by minimizing Eq. 3. Note that our method is different from Nallapati et al. 
(2017) who aim to extract a final summary for an article so they use ROUGE F-1 score to select ground-truth sentences; while we focus on high informativity, hence, we use ROUGE recall score to obtain as much information as possible with respect to the reference summary ˆy. 3.4 Abstracter The second part of our model is an abstracter that reads the article; then, generate a summary 136 Word Distribution 𝐏𝑣𝑜𝑐𝑎𝑏 1 - 𝑝𝑔𝑒𝑛 Final Word Distribution 𝐏𝑓𝑖𝑛𝑎𝑙 Context Vector ℎ∗(α̂ 𝑡) Decoder Hidden State ℎ𝑡 𝑑 Updated Word Attention α̂𝑡 Encoder Hidden States {ℎ1 𝑒, … , ℎ𝑀 𝑒} 𝑝𝑔𝑒𝑛 Figure 4: Decoding mechanism in the abstracter. In the decoder step t, our updated word attention ˆαt is used to generate context vector h∗(ˆαt). Hence, it updates the final word distribution Pfinal. word-by-word. We use the pointer-generator network proposed by See et al. (2017) and combine it with the extractor by combining sentence-level and word-level attentions (Sec. 3.1). Pointer-generator network. The pointergenerator network (See et al., 2017) is a specially designed sequence-to-sequence attentional model that can generate the summary by copying words in the article or generating words from a fixed vocabulary at the same time. The model contains a bidirectional LSTM which serves as an encoder to encode the input words w and a unidirectional LSTM which serves as a decoder to generate the summary y. For details of the network architecture, please refer to See et al. (2017). In the following, we describe how the updated word attention ˆαt affects the decoding process. Notations. We first define some notations. he m is the encoder hidden state for the mth word. hd t is the decoder hidden state in step t. h∗(ˆαt) = PM m ˆαt m × he m is the context vector which is a function of the updated word attention ˆαt. Pvocab(h∗(ˆαt)) is the probability distribution over the fixed vocabulary before applying the copying mechanism. Pvocab(h∗(ˆαt)) (4) = softmax(W2(W1[hd t , h∗(ˆαt)] + b1) + b2), where W1, W2, b1 and b2 are learnable parameters. Pvocab = {P vocab w }w where P vocab w (h∗(ˆαt)) is the probability of word w being decoded. pgen(h∗(ˆαt)) ∈[0, 1] is the generating probability (see Eq.8 in See et al. (2017)) and 1 − pgen(h∗(ˆαt)) is the copying probability. Final word distribution. P final w (ˆαt) is the final probability of word w being decoded (i.e., yt = w). It is related to the updated word attention ˆαt as follows (see Fig. 4), P final w (ˆαt) = pgen(h∗(ˆαt))P vocab w (h∗(ˆαt)) (5) + (1 −pgen(h∗(ˆαt))) X m:wm=w ˆαt m. Note that Pfinal = {P final w }w is the probability distribution over the fixed vocabulary and out-ofvocabulary (OOV) words. Hence, OOV words can be decoded. Most importantly, it is clear from Eq. 5 that P final w (ˆαt) is a function of the updated word attention ˆαt. Finally, we train the abstracter to minimize the negative log-likelihood: Labs = −1 T T X t=1 log P final ˆyt (ˆαt) , (6) where ˆyt is the tth token in the reference abstractive summary. Coverage mechanism. We also apply coverage mechanism (See et al., 2017) to prevent the abstracter from repeatedly attending to the same place. In each decoder step t, we calculate the coverage vector ct = Pt−1 t′=0 ˆαt′ which indicates so far how much attention has been paid to every input word. The coverage vector ct will be used to calculate word attention ˆαt (see Eq.11 in See et al. (2017)). Moreover, coverage loss Lcov is calculated to directly penalize the repetition in updated word attention ˆαt: Lcov = 1 T T X t=1 M X m=1 min(ˆαt m, ct m) . 
(7) The objective function for training the abstracter with coverage mechanism is the weighted sum of negative log-likelihood and coverage loss. 3.5 Training Procedure We first pre-train the extractor by minimizing Lext in Eq. 3 and the abstracter by minimizing Labs and Lcov in Eq. 6 and Eq. 7, respectively. When pre-training, the abstracter takes ground-truth extracted sentences (i.e., sentences with gn = 1) as input. To combine the extractor and abstracter, we proposed two training settings : (1) two-stages training and (2) end-to-end training. Two-stages training. In this setting, we view the sentence-level attention β from the pre-trained extractor as hard attention. The extractor becomes a classifier to select sentences with high attention (i.e., βn > threshold). We simply combine the extractor and abstracter by feeding the extracted sentences to the abstracter. Note that we finetune the abstracter since the input text becomes extractive summary which is obtained from the extractor. 137 End-to-end training. For end-to-end training, the sentence-level attention β is soft attention and will be combined with the word-level attention αt as described in Sec. 3.1. We end-to-end train the extractor and abstracter by minimizing four loss functions: Lext, Labs, Lcov, as well as Linc in Eq. 2. The final loss is as below: Le2e = λ1Lext + λ2Labs + λ3Lcov + λ4Linc, (8) where λ1, λ2, λ3, λ4 are hyper-parameters. In our experiment, we give Lext a bigger weight (e.g., λ1 = 5) when end-to-end training with Linc since we found that Linc is relatively large such that the extractor tends to ignore Lext. 4 Experiments We introduce the dataset and implementation details of our method evaluated in our experiments. 4.1 Dataset We evaluate our models on the CNN/Daily Mail dataset (Hermann et al., 2015; Nallapati et al., 2016b; See et al., 2017) which contains news stories in CNN and Daily Mail websites. Each article in this dataset is paired with one humanwritten multi-sentence summary. This dataset has two versions: anonymized and non-anonymized. The former contains the news stories with all the named entities replaced by special tokens (e.g., @entity2); while the latter contains the raw text of each news story. We follow See et al. (2017) and obtain the non-anonymized version of this dataset which has 287,113 training pairs, 13,368 validation pairs and 11,490 test pairs. 4.2 Implementation Details We train our extractor and abstracter with 128dimension word embeddings and set the vocabulary size to 50k for both source and target text. We follow Nallapati et al. (2017) and See et al. (2017) and set the hidden dimension to 200 and 256 for the extractor and abstracter, respectively. We use Adagrad optimizer (Duchi et al., 2011) and apply early stopping based on the validation set. In the testing phase, we limit the length of the summary to 120. Pre-training. We use learning rate 0.15 when pretraining the extractor and abstracter. For the extractor, we limit both the maximum number of sentences per article and the maximum number of tokens per sentence to 50 and train the model for 27k iterations with the batch size of 64. For the abstracter, it takes ground-truth extracted sentences (i.e., sentences with gn = 1) as input. We limit the length of the source text to 400 and the length of the summary to 100 and use the batch size of 16. We train the abstracter without coverage mechanism for 88k iterations and continue training for 1k iterations with coverage mechanism (Labs : Lcov = 1 : 1). Two-stages training. 
The abstracter takes extracted sentences with βn > 0.5, where β is obtained from the pre-trained extractor, as input during two-stages training. We finetune the abstracter for 10k iterations. End-to-end training. During end-to-end training, we will minimize four loss functions (Eq. 8) with λ1 = 5 and λ2 = λ3 = λ4 = 1. We set K to 3 for computing Linc. Due to the limitation of the memory, we reduce the batch size to 8 and thus use a smaller learning rate 0.01 for stability. The abstracter here reads the whole article. Hence, we increase the maximum length of source text to 600. We end-to-end train the model for 50k iterations. 5 Results Our unified model not only generates an abstractive summary but also extracts the important sentences in an article. Our goal is that both of the two types of outputs can help people to read and understand an article faster. Hence, in this section, we evaluate the results of our extractor in Sec. 5.1 and unified model in Sec. 5.2. Furthermore, in Sec. 5.3, we perform human evaluation and show that our model can provide a better abstractive summary than other baselines. 5.1 Results of Extracted Sentences To evaluate whether our extractor obtains enough information for the abstracter, we use full-length ROUGE recall scores1 between the extracted sentences and reference abstractive summary. High ROUGE recall scores can be obtained if the extracted sentences include more words or sequences overlapping with the reference abstractive summary. For each article, we select sentences with the sentence probabilities β greater than 0.5. We show the results of the ground-truth sentence labels (Sec. 3.3) and our models on the 1All our ROUGE scores are reported by the official ROUGE script. We use the pyrouge package. https://pypi.org/project/pyrouge/0.1.3/ 138 Method ROUGE-1 ROUGE-2 ROUGE-L pre-trained 73.50 35.55 68.57 end2end w/o inconsistency loss 72.97 35.11 67.99 end2end w/ inconsistency loss 78.40 39.45 73.83 ground-truth labels 89.23 49.36 85.46 Table 1: ROUGE recall scores of the extracted sentences. pre-trained indicates the extractor trained on the ground-truth labels. end2end indicates the extractor after end-to-end training with the abstracter. Note that ground-truth labels show the upper-bound performance since the reference summary to calculate ROUGE-recall is abstractive. All our ROUGE scores have a 95% confidence interval with at most ±0.33. Method ROUGE-1 ROUGE-2 ROUGE-L HierAttn (Nallapati et al., 2016b)∗ 32.75 12.21 29.01 DeepRL (Paulus et al., 2017)∗ 39.87 15.82 36.90 pointer-generator (See et al., 2017) 39.53 17.28 36.38 GAN (Liu et al., 2017) 39.92 17.65 36.71 two-stage (ours) 39.97 17.43 36.34 end2end w/o inconsistency loss (ours) 40.19 17.67 36.68 end2end w/ inconsistency loss (ours) 40.68 17.97 37.13 lead-3 (See et al., 2017) 40.34 17.70 36.57 Table 2: ROUGE F-1 scores of the generated abstractive summaries on the CNN/Daily Mail test set. Our two-stages model outperforms pointer-generator model on ROUGE-1 and ROUGE-2. In addition, our model trained end-to-end with inconsistency loss exceeds the lead-3 baseline. All our ROUGE scores have a 95% confidence interval with at most ±0.24. ‘∗’ indicates the model is trained and evaluated on the anonymized dataset and thus is not strictly comparable with ours. test set of the CNN/Daily Mail dataset in Table 1. Note that the ground-truth extracted sentences can’t get ROUGE recall scores of 100 because reference summary is abstractive and may contain some words and sequences that are not in the article. 
Our extractor performs the best when end-toend trained with inconsistency loss. 5.2 Results of Abstractive Summarization We use full-length ROUGE-1, ROUGE-2 and ROUGE-L F-1 scores to evaluate the generated summaries. We compare our models (two-stage and end-to-end) with state-of-the-art abstractive summarization models (Nallapati et al., 2016b; Paulus et al., 2017; See et al., 2017; Liu et al., 2017) and a strong lead-3 baseline which directly uses the first three article sentences as the summary. Due to the writing style of news articles, the most important information is often written at the beginning of an article which makes lead3 a strong baseline. The results of ROUGE F-1 scores are shown in Table 2. We prove that with help of the extractor, our unified model can outperform pointer-generator (the third row in Table 2) even with two-stages training (the fifth row in Table 2). After end-to-end training without inconsistency loss, our method already achieves better ROUGE scores by cooperating with each other. Moreover, our model end-to-end trained with inconsistency loss achieves state-of-the-art ROUGE scores and exceeds lead-3 baseline. In order to quantify the effect of inconsistency loss, we design a metric – inconsistency rate Rinc – to measure the inconsistency for each generated summary. For each decoder step t, if the word with maximum attention belongs to a sentence with low attention (i.e., βn(argmax(αt)) < mean(β)), we define this step as an inconsistent step tinc. The inconsistency rate Rinc is then defined as the percentage of the inconsistent steps in the summary. Rinc = Count(tinc) T , (9) where T is the length of the summary. The average inconsistency rates on test set are shown in Table 4. Our inconsistency loss significantly decrease Rinc from about 20% to 4%. An example of inconsistency improvement is shown in Fig. 5. 139 Method informativity conciseness readability DeepRL (Paulus et al., 2017) 3.23 2.97 2.85 pointer-generator (See et al., 2017) 3.18 3.36 3.47 GAN (Liu et al., 2017) 3.22 3.52 3.51 Ours 3.58 3.40 3.70 reference 3.43 3.61 3.62 Table 3: Comparing human evaluation results with state-of-the-art methods. Method avg. Rinc w/o incon. loss 0.198 w/ incon. loss 0.042 Table 4: Inconsistency rate of our end-to-end trained model with and without inconsistency loss. Without inconsistency loss: If that was a tornado, it was one monster of one. Luckily, so far it looks like no one was hurt. With tornadoes touching down near Dallas on Sunday, Ryan Shepard snapped a photo of a black cloud formation reaching down to the ground. He said it was a tornado. It wouldn’t be an exaggeration to say it looked half a mile wide. More like a mile, said Jamie Moore, head of emergency management in Johnson County, Texas. It could have been one the National Weather Service warned about in a tweet as severe thunderstorms drenched the area, causing street flooding. (...) With inconsistency loss: If that was a tornado, it was one monster of one. Luckily, so far it looks like no one was hurt. With tornadoes touching down near Dallas on Sunday, Ryan Shepard snapped a photo of a black cloud formation reaching down to the ground. He said it was a tornado. It wouldn’t be an exaggeration to say it looked half a mile wide. More like a mile, said Jamie Moore, head of emergency management in Johnson County, Texas. It could have been one the National Weather Service warned about in a tweet as severe thunderstorms drenched the area, causing street flooding. (...) 
Figure 5: Visualizing the consistency between sentence and word attentions on the original article. We highlight word (bold font) and sentence (underline font) attentions. We compare our methods trained with and without inconsistency loss. Inconsistent fragments (see red bold font) occur when trained without the inconsistency loss. 5.3 Human Evaluation We perform human evaluation on Amazon Mechanical Turk (MTurk)2 to evaluate the informativity, conciseness and readability of the summaries. We compare our best model (end2end with inconsistency loss) with pointer-generator (See et al., 2017), generative adversarial network (Liu et al., 2017) and deep reinforcement model (Paulus et al., 2017). For these three models, we use the test set outputs provided by the authors3. 2https://www.mturk.com/ 3https://github.com/abisee/ pointer-generator and https://likicode.com for the first two. For DeepRL, we asked through email. We randomly pick 100 examples in the test set. All generated summaries are re-capitalized and de-tokenized. Since Paulus et al. (2017) trained their model on anonymized data, we also recover the anonymized entities and numbers of their outputs. We show the article and 6 summaries (reference summary, 4 generated summaries and a random summary) to each human evaluator. The random summary is a reference summary randomly picked from other articles and is used as a trap. We show the instructions of three different aspects as: (1) Informativity: how well does the summary capture the important parts of the article? (2) Conciseness: is the summary clear enough to explain everything without being redundant? (3) Readability: how well-written (fluent and grammatical) the summary is? The user interface of our human evaluation is shown in the supplementary material. We ask the human evaluator to evaluate each summary by scoring the three aspects with 1 to 5 score (higher the better). We reject all the evaluations that score the informativity of the random summary as 3, 4 and 5. By using this trap mechanism, we can ensure a much better quality of our human evaluation. For each example, we first ask 5 human evaluators to evaluate. However, for those articles that are too long, which are always skipped by the evaluators, it is hard to collect 5 reliable evaluations. Hence, we collect at least 3 evaluations for every example. For each summary, we average the scores over different human evaluators. The results are shown in Table 3. The reference summaries get the best score on conciseness since the recent abstractive models tend to copy sentences from the input articles. However, our model learns well to select important information and form complete sentences so we even get slightly better scores on informativity and readability than the reference summaries. We show a typical example of our model comparing with other state-of140 Original article (truncated): A chameleon balances carefully on a branch, waiting calmly for its prey... except that if you look closely, you will see that this picture is not all that it seems. For the ‘creature’ poised to pounce is not a colourful species of lizard but something altogether more human. Featuring two carefully painted female models, it is a clever piece of sculpture designed to create an amazing illusion. It is the work of Italian artist Johannes Stoetter. Scroll down for video. Can you see us? Italian artist Johannes Stoetter has painted two naked women to look like a chameleon. 
The 37-year-old has previously transformed his models into frogs and parrots but this may be his most intricate and impressive piece to date. Stoetter daubed water-based body paint on the naked models to create the multicoloured effect, then intertwined them to form the shape of a chameleon. To complete the deception, the models rested on a bench painted to match their skin and held the green branch in the air beneath them. Stoetter can take weeks to plan one of his pieces and hours to paint it. Speaking about The Chameleon, he said: ‘I worked about four days to design the motif bigger and paint it with colours. The body painting took me about six hours with the help of an assistant. I covered the hair with natural clay to make the heads look bald.’ Camouflage job: A few finishing touches are applied to the two naked models to complete the transformation. ‘There are different difficulties on different levels as in every work, but I think that my passion and love to my work is so big, that I figure out a way to deal with difficulties. My main inspirations are nature, my personal life-philosophy, every-day-life and people themselves.’ However, the finished result existed only briefly before the models were able to get up and wash the paint off with just a video and some photographs to record it. (...) Reference summary: Johannes Stoetter’s artwork features two carefully painted female models. The 37-year-old has previously transformed models into frogs and parrots. Daubed water-based body paint on naked models to create the effect. Completing the deception, models rested on bench painted to match skin. DeepRL: Italian artist Johannes Stoetter has painted female models to look like a chameleon. The 37-year-old has previously transformed his models into frogs and parrots but this may be the work of Italian artist. He has painted nude models and it is a clever piece of sculpture designed to create an amazing illusion. It is work of artist Johannes Stoetter. GAN: Italian artist Johannes Stoetter has painted two naked women to look like a chameleon. The 37-year-old has previously transformed his models into frogs and parrots but this may be his most intricate and impressive piece to date. Pointer-generator: Italian artist Johannes Stoetter has painted two naked women to look like a chameleon. It is the work of Italian artist Johannes Stoetter. Stoetter daubed water-based body paint on the naked models to create the multicoloured effect, then intertwined them to form the shape of a chameleon. Our unified model (with inconsistency loss): Johannes Stoetter has painted two naked women to look like a chameleon. The 37-year-old has previously transformed his models into frogs and parrots. Stoetter daubed water-based body paint on the naked models to create the multicoloured effect, then intertwined them to form the shape of a chameleon. Figure 6: Typical Comparison. Our model attended at the most important information (blue bold font) matching well with the reference summary; while other state-of-the-art methods generate repeated or less important information (red italic font). the-art methods in Fig. 6. More examples (5 using CNN/Daily Mail news articles and 3 using nonnews articles as inputs) are provided in the supplementary material. 6 Conclusion We propose a unified model combining the strength of extractive and abstractive summarization. Most importantly, a novel inconsistency loss function is introduced to penalize the inconsistency between two levels of attentions. 
The inconsistency loss enables extractive and abstractive summarization to be mutually beneficial. By end-to-end training of our model, we achieve the best ROUGE-recall and ROUGE while being the most informative and readable summarization on the CNN/Daily Mail dataset in a solid human evaluation. Acknowledgments We thank the support from Cheetah Mobile, National Taiwan University, and MOST 107-2634-F007-007, 106-3114-E-007-004, 107-2633-E-002001. We thank Yun-Zhu Song for assistance with useful survey and experiment on the task of abstractive summarization. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. In Proceedings of the 2015 International Conference on Learning Representations. Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, and Hui Jiang. 2016. Distraction-based neural networks for modeling documents. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16). Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 484–494. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159. Angela Fan, David Grangier, and Michael Auli. 2017. Controllable abstractive summarization. arXiv preprint arXiv:1711.05217. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1631–1640. 141 Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693– 1701. Mikael K˚ageb¨ack, Olof Mogren, Nina Tahmasebi, and Devdatt Dubhashi. 2014. Extractive summarization using continuous vector space models. In Proceedings of the 2nd Workshop on Continuous Vector Space Models and their Compositionality (CVSC), pages 31–39. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out. Linqing Liu, Yao Lu, Min Yang, Qiang Qu, Jia Zhu, and Hongyan Li. 2017. Generative adversarial network for abstractive text summarization. In Proceddings of the 2018 Association for the Advancement of Artificial Intelligence. Yishu Miao and Phil Blunsom. 2016. Language as a latent variable: Discrete generative models for sentence compression. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 319–328. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In Proceddings of the 2017 Association for the Advancement of Artificial Intelligence, pages 3075–3081. Ramesh Nallapati, Bowen Zhou, and Mingbo Ma. 2016a. Classify or select: Neural architectures for extractive document summarization. arXiv preprint arXiv:1611.04244. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016b. Abstractive text summarization using sequence-tosequence rnns and beyond. 
In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 280–290. Shashi Narayan, Nikos Papasarantopoulos, Mirella Lapata, and Shay B Cohen. 2017. Neural extractive summarization with side information. arXiv preprint arXiv:1704.04530. Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. In Proceedings of the 2018 International Conference on Learning Representations. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732. Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1073–1083. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000–6010. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692–2700. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489. Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan, and Dragomir Radev. 2017. Graph-based neural multi-document summarization. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 452–462. Wenpeng Yin and Yulong Pei. 2015. Optimizing sentence modeling and selection for document summarization. In Proceedings of the 24th International Joint Conference on Artificial Intelligence, pages 1383–1389. AAAI Press.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1403–1414 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1403 Stack-Pointer Networks for Dependency Parsing Xuezhe Ma Carnegie Mellon University [email protected] Zecong Hu∗ Tsinghua University [email protected] Jingzhou Liu Carnegie Mellon University [email protected] Nanyun Peng University of Southern California [email protected] Graham Neubig and Eduard Hovy Carnegie Mellon University {gneubig, ehovy}@cs.cmu.edu Abstract We introduce a novel architecture for dependency parsing: stack-pointer networks (STACKPTR). Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root-to-leaf) in a depth-first fashion. The stack tracks the status of the depthfirst search and the pointer networks select one child for the word at the top of the stack at each step. The STACKPTR parser benefits from the information of the whole sentence and all previously derived subtree structures, and removes the leftto-right restriction in classical transitionbased parsers. Yet, the number of steps for building any (including non-projective) parse tree is linear in the length of the sentence just as other transition-based parsers, yielding an efficient decoding algorithm with O(n2) time complexity. We evaluate our model on 29 treebanks spanning 20 languages and different dependency annotation schemas, and achieve state-of-theart performance on 21 of them. 1 Introduction Dependency parsing, which predicts the existence and type of linguistic dependency relations between words, is a first step towards deep language understanding. Its importance is widely recognized in the natural language processing (NLP) community, with it benefiting a wide range of NLP applications, such as coreference resolution (Ng, 2010; Durrett and Klein, 2013; Ma et al., ∗Work done while at Carnegie Mellon University. 2016), sentiment analysis (Tai et al., 2015), machine translation (Bastings et al., 2017), information extraction (Nguyen et al., 2009; Angeli et al., 2015; Peng et al., 2017), word sense disambiguation (Fauceglia et al., 2015), and low-resource languages processing (McDonald et al., 2013; Ma and Xia, 2014). There are two dominant approaches to dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007): local and greedy transitionbased algorithms (Yamada and Matsumoto, 2003; Nivre and Scholz, 2004; Zhang and Nivre, 2011; Chen and Manning, 2014), and the globally optimized graph-based algorithms (Eisner, 1996; McDonald et al., 2005a,b; Koo and Collins, 2010). Transition-based dependency parsers read words sequentially (commonly from left-to-right) and build dependency trees incrementally by making series of multiple choice decisions. The advantage of this formalism is that the number of operations required to build any projective parse tree is linear with respect to the length of the sentence. The challenge, however, is that the decision made at each step is based on local information, leading to error propagation and worse performance compared to graph-based parsers on root and long dependencies (McDonald and Nivre, 2011). Previous studies have explored solutions to address this challenge. 
Stack LSTMs (Dyer et al., 2015; Ballesteros et al., 2015, 2016) are capable of learning representations of the parser state that are sensitive to the complete contents of the parser’s state. Andor et al. (2016) proposed a globally normalized transition model to replace the locally normalized classifier. However, the parsing accuracy is still behind state-of-the-art graph-based parsers (Dozat and Manning, 2017). Graph-based dependency parsers, on the other hand, learn scoring functions for parse trees and perform exhaustive search over all possible trees for a sentence to find the globally highest scoring 1404 tree. Incorporating this global search algorithm with distributed representations learned from neural networks, neural graph-based parsers (Kiperwasser and Goldberg, 2016; Wang and Chang, 2016; Kuncoro et al., 2016; Dozat and Manning, 2017) have achieved the state-of-the-art accuracies on a number of treebanks in different languages. Nevertheless, these models, while accurate, are usually slow (e.g. decoding is O(n3) time complexity for first-order models (McDonald et al., 2005a,b) and higher polynomials for higherorder models (McDonald and Pereira, 2006; Koo and Collins, 2010; Ma and Zhao, 2012b,a)). In this paper, we propose a novel neural network architecture for dependency parsing, stackpointer networks (STACKPTR). STACKPTR is a transition-based architecture, with the corresponding asymptotic efficiency, but still maintains a global view of the sentence that proves essential for achieving competitive accuracy. Our STACKPTR parser has a pointer network (Vinyals et al., 2015) as its backbone, and is equipped with an internal stack to maintain the order of head words in tree structures. The STACKPTR parser performs parsing in an incremental, topdown, depth-first fashion; at each step, it generates an arc by assigning a child for the head word at the top of the internal stack. This architecture makes it possible to capture information from the whole sentence and all the previously derived subtrees, while maintaining a number of parsing steps linear in the sentence length. We evaluate our parser on 29 treebanks across 20 languages and different dependency annotation schemas, and achieve state-of-the-art performance on 21 of them. The contributions of this work are summarized as follows: (i) We propose a neural network architecture for dependency parsing that is simple, effective, and efficient. (ii) Empirical evaluations on benchmark datasets over 20 languages show that our method achieves state-of-the-art performance on 21 different treebanks1. (iii) Comprehensive error analysis is conducted to compare the proposed method to a strong graph-based baseline using biaffine attention (Dozat and Manning, 2017). 1Source code is publicly available at https:// github.com/XuezheMax/NeuroNLP2 2 Background We first briefly describe the task of dependency parsing, setup the notation, and review Pointer Networks (Vinyals et al., 2015). 2.1 Dependency Parsing and Notations Dependency trees represent syntactic relationships between words in the sentences through labeled directed edges between head words and their dependents. Figure 1 (a) shows a dependency tree for the sentence, “But there were no buyers”. In this paper, we will use the following notation: Input: x = {w1, . . . , wn} represents a generic sentence, where wi is the ith word. 
Output: y = {p1, p2, · · · , pk} represents a generic (possibly non-projective) dependency tree, where each path pi = $, wi,1, wi,2, · · · , wi,li is a sequence of words from the root to a leaf. "$" is a universal virtual root that is added to each tree. Stack: σ denotes a stack configuration, which is a sequence of words. We use σ|w to represent a stack configuration that pushes word w into the stack σ. Children: ch(wi) denotes the list of all the children (modifiers) of word wi. 2.2 Pointer Networks Pointer Networks (PTR-NET) (Vinyals et al., 2015) are a variety of neural network capable of learning the conditional probability of an output sequence with elements that are discrete tokens corresponding to positions in an input sequence. This model cannot be trivially expressed by standard sequence-to-sequence networks (Sutskever et al., 2014) due to the variable number of input positions in each sentence. PTR-NET solves the problem by using attention (Bahdanau et al., 2015; Luong et al., 2015) as a pointer to select a member of the input sequence as the output. Formally, the words of the sentence x are fed one-by-one into the encoder (a multiple-layer bi-directional RNN), producing a sequence of encoder hidden states si. At each time step t, the decoder (a uni-directional RNN) receives the input from the last step and outputs the decoder hidden state ht. The attention vector at is calculated as follows: e^t_i = score(h_t, s_i), a^t = softmax(e^t), (1) where score(·, ·) is the attention scoring function, which has several variations such as dot-product, concatenation, and biaffine (Luong et al., 2015). PTR-NET regards the attention vector at as a probability distribution over the source words, i.e. it uses a^t_i as pointers to select the input elements. [Figure 1: Neural architecture for the STACKPTR network, together with the decoding procedure of an example sentence ("$ But there were no buyers"). The BiRNN of the encoder is elided for brevity. For the inputs of the decoder at each time step, vectors in red and blue boxes indicate the sibling and grandparent.] 3 Stack-Pointer Networks 3.1 Overview Similarly to PTR-NET, STACKPTR first reads the whole sentence and encodes each word into the encoder hidden state si. The internal stack σ is always initialized with the root symbol $. At each time step t, the decoder receives the input vector corresponding to the top element of the stack σ (the head word wp, where p is the word index), generates the hidden state ht, and computes the attention vector at using Eq. (1). The parser chooses a specific position c according to the attention scores in at to generate a new dependency arc (wh, wc) by selecting wc as a child of wh. Then the parser pushes wc onto the stack, i.e. σ → σ|wc, and goes to the next step. If at some step the parser points wh to itself, i.e. c = h, it indicates that all children of the head word wh have already been selected. Then the parser goes to the next step by popping wh out of σ. At test time, in order to guarantee a valid dependency tree containing all the words in the input sentence exactly once, the decoder maintains a list of "available" words.
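To make Eq. (1) and this pointer-based child selection concrete, the sketch below shows a single decoding step in NumPy. It is an illustration rather than the authors' implementation: it uses a plain bilinear score in place of the full biaffine function of § 3.5, greedy selection (beam size 1), and toy random vectors; the availability mask corresponds to the list of "available" words just mentioned.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a score vector.
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def pointer_step(h_t, S, W, available):
    """One decoding step of a pointer-style parser (illustrative only).

    h_t:       decoder hidden state at step t, shape (d,)
    S:         encoder hidden states s_1..s_n stacked as rows, shape (n, d)
    W:         weight matrix of a simple bilinear score h_t^T W s_i, shape (d, d)
    available: boolean mask of words that may still be chosen as children
    """
    scores = S @ (W @ h_t)          # e^t_i = score(h_t, s_i) for every position i
    scores[~available] = -np.inf    # test-time constraint: point only to available words
    a_t = softmax(scores)           # attention vector a^t, a distribution over positions
    c = int(a_t.argmax())           # greedy pointer: index of the selected child
    return a_t, c

# Toy usage with random vectors for a 6-token sentence ("$ But there were no buyers").
rng = np.random.default_rng(0)
n, d = 6, 8
S = rng.normal(size=(n, d))
W = rng.normal(size=(d, d))
h_t = rng.normal(size=d)
a_t, child = pointer_step(h_t, S, W, available=np.ones(n, dtype=bool))
```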
At each decoding step, the parser selects a child for the current head word, and removes the child from the list of available words to make sure that it cannot be selected as a child of other head words. For head words with multiple children, it is possible that there is more than one valid selection at each time step. In order to define a deterministic decoding process that guarantees only one ground-truth choice at each step (which is necessary for simple maximum likelihood estimation), a predefined order for each ch(wi) needs to be introduced. The predefined order of children can have different alternatives, such as left-to-right or inside-out2. In this paper, we adopt the inside-out order3 since it enables us to utilize second-order sibling information, which has been proven beneficial for parsing performance (McDonald and Pereira, 2006; Koo and Collins, 2010) (see § 3.4 for details). Figure 1 (b) depicts the architecture of STACKPTR and the decoding procedure for the example sentence in Figure 1 (a). 2 Order the children by their distances to the head word on the left side, then the right side. 3 We also tried the left-to-right order, which obtained worse parsing accuracy than inside-out. 3.2 Encoder The encoder of our parsing model is based on the bi-directional LSTM-CNN architecture (BLSTM-CNNs) (Chiu and Nichols, 2016; Ma and Hovy, 2016), where CNNs encode character-level information of a word into its character-level representation and the BLSTM models context information of each word. Formally, for each word, the CNN, with character embeddings as inputs, encodes the character-level representation. Then the character-level representation vector is concatenated with the word embedding vector to feed into the BLSTM network. To enrich word-level information, we also use POS embeddings. Finally, the encoder outputs a sequence of hidden states si. 3.3 Decoder The decoder for our parser is a uni-directional LSTM. Different from previous work (Bahdanau et al., 2015; Vinyals et al., 2015), which uses word embeddings of the previous word as the input to the decoder, our decoder receives the encoder hidden state vector (si) of the top element in the stack σ (see Figure 1 (b)). Compared to word embeddings, the encoder hidden states contain more contextual information, benefiting both the training and decoding procedures. The decoder produces a sequence of decoder hidden states hi, one for each decoding step. 3.4 Higher-order Information As mentioned before, our parser is capable of utilizing higher-order information. In this paper, we incorporate two kinds of higher-order structures — grandparent and sibling. A sibling structure is a head word with two successive modifiers, and a grandparent structure is a pair of dependencies connected head-to-tail. [Diagram: the sibling and grandparent structures.] To utilize higher-order information, the decoder's input at each step is the sum of the encoder hidden states of three words: βt = sh + sg + ss, where βt is the input vector of the decoder at time t and h, g, s are the indices of the head word and its grandparent and sibling, respectively. Figure 1 (b) illustrates the details. Here we use the element-wise sum operation instead of concatenation because it does not increase the dimension of the input vector βt, thus introducing no additional model parameters. 3.5 Biaffine Attention Mechanism For the attention score function (Eq. (1)), we adopt the biaffine attention mechanism (Luong et al., 2015; Dozat and Manning, 2017): e^t_i = h_t^T W s_i + U^T h_t + V^T s_i + b, where W, U, V, b are parameters, denoting the weight matrix of the bi-linear term, the two weight vectors of the linear terms, and the bias vector. As discussed in Dozat and Manning (2017), applying a multilayer perceptron (MLP) to the output vectors of the BLSTM before the score function can reduce both the dimensionality and the overfitting of the model. We follow this work by applying a one-layer perceptron to si and hi with elu (Clevert et al., 2015) as its activation function. Similarly, the dependency label classifier also uses a biaffine function to score each label, given the head word vector ht and child vector si as inputs. Again, we use MLPs to transform ht and si before feeding them into the classifier. 3.6 Training Objectives The STACKPTR parser is trained to optimize the probability of the dependency trees given sentences: Pθ(y|x), which can be factorized as: Pθ(y|x) = ∏_{i=1}^{k} Pθ(pi | p<i, x) = ∏_{i=1}^{k} ∏_{j=1}^{li} Pθ(ci,j | ci,<j, p<i, x), (2) where θ represents the model parameters. p<i denotes the preceding paths that have already been generated. ci,j represents the jth word in pi and ci,<j denotes all the preceding words on the path pi. Thus, the STACKPTR parser is an autoregressive model, like sequence-to-sequence models, but it factors the distribution according to a top-down tree structure as opposed to a left-to-right chain. We define Pθ(ci,j | ci,<j, p<i, x) = at, where the attention vector at (of dimension n) is used as the distribution over the indices of words in a sentence. Arc Prediction Our parser is trained by optimizing the conditional likelihood in Eq (2), which is implemented as the cross-entropy loss.
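To illustrate how a gold tree is mapped to the single ground-truth pointer sequence over which this cross-entropy loss is computed, the following sketch builds the top-down, depth-first oracle with the inside-out child order of § 3.1. It is our own reconstruction under stated assumptions (0-based indices with $ at position 0, and an explicit self-pointer for every pushed word, including leaves), not the authors' code; bookkeeping variants of this kind only change the exact number of steps.

```python
def inside_out_children(head, gold_heads):
    """Children of `head` in inside-out order: left children from nearest to
    farthest, then right children from nearest to farthest."""
    left = sorted([c for c, h in gold_heads.items() if h == head and c < head], reverse=True)
    right = sorted([c for c, h in gold_heads.items() if h == head and c > head])
    return left + right

def oracle_pointer_sequence(gold_heads, root=0):
    """Top-down, depth-first oracle: at each step the word on top of the stack
    points either to its next child (push) or to itself (pop)."""
    steps, stack = [], [root]
    remaining = {h: inside_out_children(h, gold_heads)
                 for h in set(gold_heads.values()) | {root}}
    while stack:
        top = stack[-1]
        kids = remaining.get(top, [])
        if kids:
            child = kids.pop(0)
            steps.append((top, child))   # new arc (w_top -> w_child); push the child
            stack.append(child)
        else:
            steps.append((top, top))     # self-pointer: all children found; pop
            stack.pop()
    return steps

# A plausible parse of the running example "$ But there were no buyers"
# (indices 0..5, $ = 0): were <- $, But/there/buyers <- were, no <- buyers.
gold_heads = {3: 0, 1: 3, 2: 3, 5: 3, 4: 5}
print(oracle_pointer_sequence(gold_heads))
```

Training then simply accumulates the negative log of the attention probability assigned to the gold target at each oracle step.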
&%."0%+! &%."&() "0%+!"&0%+! &%' x yz{|}{~€‚|{~ƒ„‚{}…†| C,&*"0("" "#$ %%+!"( "%)( %& %!(&*%'_&*% %#&(., (K"+#[("(&,(!K& %%)( K(%,([' GH ‡ ˆ ‰ ‰ Š Š ‹ ‹ G+H Œ  Ž Ž    ‘’ vY*"$#0 (%&#&% ""K&(%()&*E%Gv“““H!(&*' B( !&% %" #&"%&!%"0 #( !&% %%& U("%'(+K&$.,!" &*%$&#*&0*""K%(%' x”• –{‚|}—„‚˜™‚š›}„‚{œ}{„~ Y*/%&&$ () %,"%#+%%/%&0 ("ž)#&(U&(.,*#*"#( (%%" 0 "#$&&(&%"K"!" "#%'E%0 Gv“““H&("#","!$0%""$#0 (!(&*)(/%&0(" %Ÿ% &%&*+%%)($ %%.#!"(, !(&*%.,%U&%"%*' Y*E%Gv“““H!(&* %+%"(&,( &!&"&$ %()"$#0 (%&#0 &%OQ S¡^P^% %.,*#*#(%%&()*"0 ,(""&%"%#"&%((%"."¢£OQ ¤ S¡^P^% %.,*#*#(%%&()" "#$"&* (+&,&**""("/' (!!$.,"(&#( !&% %¥¦§¨ ,*©"ª&*"#%()&*% «%*"0 ,(""" (&' #( !&% %"0 (&"%¬¦§­ ,*©"® &*"()&* *""("/()" "#$' _&&K!$. #( !&%  %&%*!)0#(%&&&ž *""+$©.,*%#( !&% %(!$  &!*!)0#(%&&&.%#&*#(%&&&# +&""+$""(("/%&(®' E#*&$ ()% %#&"+$#%K!$ #(+&,(%!!."¯#&% %Ÿ&*#(0 %&#&(%% #/" *#!!$v' #( !&% %#(%&#&")(   ()#( !&% %."#&&*"K%(()&* °©F®±&(#(%&&&%*""+$©" ®'  #( !&% %#&"+$#( !&0 ž#( !&% ,&*&*(&**!)() ®«%#(%&&&'Y* (&()#(#&&( #*#(%&#&(²® vGH(³0 vG+H²%&*TS¡¢PSQ¢£P.)"&*&%& +&"&(/"&*( &!#(%&#&(' _("&( %%&#-.&%)/#%&( /"( &!#(%&#&(%)(!!#( !&" #( !&% %"/"(-' Y*%#+ ´ To utilize higher-order information, the decoder’s input at each step is the sum of the encoder hidden states of three words: βt = sh + sg + ss where βt is the input vector of decoder at time t and h, g, s are the indices of the head word and its grandparent and sibling, respectively. Figure 1 (b) illustrates the details. Here we use the element-wise sum operation instead of concatenation because it does not increase the dimension of the input vector βt, thus introducing no additional model parameters. 3.5 Biaffine Attention Mechanism For attention score function (Eq. (1)), we adopt the biaffine attention mechanism (Luong et al., 2015; Dozat and Manning, 2017): et i = hT t Wsi + UT ht + VT si + b where W, U, V, b are parameters, denoting the weight matrix of the bi-linear term, the two weight vectors of the linear terms, and the bias vector. As discussed in Dozat and Manning (2017), applying a multilayer perceptron (MLP) to the output vectors of the BLSTM before the score function can both reduce the dimensionality and overfitting of the model. We follow this work by using a one-layer perceptron to si and hi with elu (Clevert et al., 2015) as its activation function. Similarly, the dependency label classifier also uses a biaffine function to score each label, given the head word vector ht and child vector si as inputs. Again, we use MLPs to transform ht and si before feeding them into the classifier. 3.6 Training Objectives The STACKPTR parser is trained to optimize the probability of the dependency trees given sentences: Pθ(y|x), which can be factorized as: Pθ(y|x) = kQ i=1 Pθ(pi|p<i, x) = kQ i=1 liQ j=1 Pθ(ci,j|ci,<j, p<i, x), (2) where θ represents model parameters. p<i denotes the preceding paths that have already been generated. ci,j represents the jth word in pi and ci,<j denotes all the proceeding words on the path pi. Thus, the STACKPTR parser is an autoregressive model, like sequence-to-sequence models, but it factors the distribution according to a top-down tree structure as opposed to a left-to-right chain. We define Pθ(ci,j|ci,<j, p<i, x) = at, where attention vector at (of dimension n) is used as the distribution over the indices of words in a sentence. Arc Prediction Our parser is trained by optimizing the conditional likelihood in Eq (2), which is implemented as the cross-entropy loss. 
Label Prediction We train a separated multiclass classifier in parallel to predict the dependency labels. Following Dozat and Manning (2017), the classifier takes the information of the 1407 head word and its child as features. The label classifier is trained simultaneously with the parser by optimizing the sum of their objectives. 3.7 Discussion Time Complexity. The number of decoding steps to build a parse tree for a sentence of length n is 2n−1, linear in n. Together with the attention mechanism (at each step, we need to compute the attention vector at, whose runtime is O(n)), the time complexity of decoding algorithm is O(n2), which is more efficient than graph-based parsers that have O(n3) or worse complexity when using dynamic programming or maximum spanning tree (MST) decoding algorithms. Top-down Parsing. When humans comprehend a natural language sentence, they arguably do it in an incremental, left-to-right manner. However, when humans consciously annotate a sentence with syntactic structure, they rarely ever process in fixed left-to-right order. Rather, they start by reading the whole sentence, then seeking the main predicates, jumping back-and-forth over the sentence and recursively proceeding to the subtree structures governed by certain head words. Our parser follows a similar kind of annotation process: starting from reading the whole sentence, and processing in a top-down manner by finding the main predicates first and only then search for sub-trees governed by them. When making latter decisions, the parser has access to the entire structure built in earlier steps. 3.8 Implementation Details Pre-trained Word Embeddings. For all the parsing models in different languages, we initialize word vectors with pretrained word embeddings. For Chinese, Dutch, English, German and Spanish, we use the structured-skipgram (Ling et al., 2015) embeddings. For other languages we use Polyglot embeddings (Al-Rfou et al., 2013). Optimization. Parameter optimization is performed with the Adam optimizer (Kingma and Ba, 2014) with β1 = β2 = 0.9. We choose an initial learning rate of η0 = 0.001. The learning rate η is annealed by multiplying a fixed decay rate ρ = 0.75 when parsing performance stops increasing on validation sets. To reduce the effects of “gradient exploding”, we use gradient clipping of 5.0 (Pascanu et al., 2013). Dropout Training. To mitigate overfitting, we apply dropout (Srivastava et al., 2014; Ma et al., 2017). For BLSTM, we use recurrent dropout (Gal and Ghahramani, 2016) with a drop rate of 0.33 between hidden states and 0.33 between layers. Following Dozat and Manning (2017), we also use embedding dropout with a rate of 0.33 on all word, character, and POS embeddings. Hyper-Parameters. Some parameters are chosen from those reported in Dozat and Manning (2017). We use the same hyper-parameters across the models on different treebanks and languages, due to time constraints. The details of the chosen hyper-parameters for all experiments are summarized in Appendix A. 4 Experiments 4.1 Setup We evaluate our STACKPTR parser mainly on three treebanks: the English Penn Treebank (PTB version 3.0) (Marcus et al., 1993), the Penn Chinese Treebank (CTB version 5.1) (Xue et al., 2002), and the German CoNLL 2009 corpus (Hajiˇc et al., 2009). We use the same experimental settings as Kuncoro et al. (2016). To make a thorough empirical comparison with previous studies, we also evaluate our system on treebanks from CoNLL shared task and the Universal Dependency (UD) Treebanks4. 
For the CoNLL Treebanks, we use the English treebank from CoNLL-2008 shared task (Surdeanu et al., 2008) and all 13 treebanks from CoNLL-2006 shared task (Buchholz and Marsi, 2006). The experimental settings are the same as Ma and Hovy (2015). For UD Treebanks, we select 12 languages. The details of the treebanks and experimental settings are in § 4.5 and Appendix B. Evaluation Metrics Parsing performance is measured with five metrics: unlabeled attachment score (UAS), labeled attachment score (LAS), unlabeled complete match (UCM), labeled complete match (LCM), and root accuracy (RA). Following previous work (Kuncoro et al., 2016; Dozat and Manning, 2017), we report results excluding punctuations for Chinese and English. For each experiment, we report the mean values with corresponding standard deviations over 5 repetitions. 4http://universaldependencies.org/ 1408 Figure 2: Parsing performance of different variations of our model on the test sets for three languages, together with baseline BIAF. For each of our STACKPTR models, we perform decoding with beam size equal to 1 and 10. The improvements of decoding with beam size 10 over 1 are presented by stacked bars with light colors. Baseline For fair comparison of the parsing performance, we re-implemented the graph-based Deep Biaffine (BIAF) parser (Dozat and Manning, 2017), which achieved state-of-the-art results on a wide range of languages. Our re-implementation adds character-level information using the same LSTM-CNN encoder as our model (§ 3.2) to the original BIAF model, which boosts its performance on all languages. 4.2 Main Results We first conduct experiments to demonstrate the effectiveness of our neural architecture by comparing with the strong baseline BIAF. We compare the performance of four variations of our model with different decoder inputs — Org, +gpar, +sib and Full — where the Org model utilizes only the encoder hidden states of head words, while the +gpar and +sib models augments the original one with grandparent and sibling information, respectively. The Full model includes all the three information as inputs. Figure 2 illustrates the performance (five metrics) of different variations of our STACKPTR parser together with the results of baseline BIAF re-implemented by us, on the test sets of the three languages. On UAS and LAS, the Full variation of STACKPTR with decoding beam size 10 outperforms BIAF on Chinese, and obtains competitive performance on English and German. An interesting observation is that the Full model achieves the best accuracy on English and Chinese, while performs slightly worse than +sib on German. This shows that the importance of higher-order information varies in languages. On LCM and UCM, STACKPTR significantly outperforms BIAF on all languages, showing the superiority of our parser on complete sentence parsing. The results of our parser on RA are slightly worse than BIAF. More details of results are provided in Appendix C. 4.3 Comparison with Previous Work Table 1 illustrates the UAS and LAS of the four versions of our model (with decoding beam size 10) on the three treebanks, together with previous top-performing systems for comparison. Note that the results of STACKPTR and our reimplementation of BIAF are the average of 5 repetitions instead of a single run. Our Full model significantly outperforms all the transition-based parsers on all three languages, and achieves better results than most graph-based parsers. 
Our 1409 English Chinese German System UAS LAS UAS LAS UAS LAS Chen and Manning (2014) T 91.8 89.6 83.9 82.4 – – Ballesteros et al. (2015) T 91.63 89.44 85.30 83.72 88.83 86.10 Dyer et al. (2015) T 93.1 90.9 87.2 85.7 – – Bohnet and Nivre (2012) T 93.33 91.22 87.3 85.9 91.4 89.4 Ballesteros et al. (2016) T 93.56 91.42 87.65 86.21 – – Kiperwasser and Goldberg (2016) T 93.9 91.9 87.6 86.1 – – Weiss et al. (2015) T 94.26 92.41 – – – – Andor et al. (2016) T 94.61 92.79 – – 90.91 89.15 Kiperwasser and Goldberg (2016) G 93.1 91.0 86.6 85.1 – – Wang and Chang (2016) G 94.08 91.82 87.55 86.23 – – Cheng et al. (2016) G 94.10 91.49 88.1 85.7 – – Kuncoro et al. (2016) G 94.26 92.06 88.87 87.30 91.60 89.24 Ma and Hovy (2017) G 94.88 92.98 89.05 87.74 92.58 90.54 BIAF: Dozat and Manning (2017) G 95.74 94.08 89.30 88.23 93.46 91.44 BIAF: re-impl G 95.84 94.21 90.43 89.14 93.85 92.32 STACKPTR: Org T 95.77 94.12 90.48 89.19 93.59 92.06 STACKPTR: +gpar T 95.78 94.12 90.49 89.19 93.65 92.12 STACKPTR: +sib T 95.85 94.18 90.43 89.15 93.76 92.21 STACKPTR: Full T 95.87 94.19 90.59 89.29 93.65 92.11 Table 1: UAS and LAS of four versions of our model on test sets for three languages, together with topperforming parsing systems. “T” and “G” indicate transition- and graph-based models, respectively. For BIAF, we provide the original results reported in Dozat and Manning (2017) and our re-implementation. For STACKPTR and our re-implementation of BiAF, we report the average over 5 runs. (a) (b) (c) Figure 3: Parsing performance of BIAF and STACKPTR parsers relative to length and graph factors. POS UAS LAS UCM LCM Gold 96.12±0.03 95.06±0.05 62.22±0.33 55.74±0.44 Pred 95.87±0.04 94.19±0.04 61.43±0.49 49.68±0.47 None 95.90±0.05 94.21±0.04 61.58±0.39 49.87±0.46 Table 2: Parsing performance on the test data of PTB with different versions of POS tags. re-implementation of BIAF obtains better performance than the original one in Dozat and Manning (2017), demonstrating the effectiveness of the character-level information. Our model achieves state-of-the-art performance on both UAS and LAS on Chinese, and best UAS on English. On German, the performance is competitive with BIAF, and significantly better than other models. 4.4 Error Analysis In this section, we characterize the errors made by BIAF and STACKPTR by presenting a number of experiments that relate parsing errors to a set of linguistic and structural properties. For simplicity, we follow McDonald and Nivre (2011) and report labeled parsing metrics (either accuracy, precision, or recall) for all experiments. 4.4.1 Length and Graph Factors Following McDonald and Nivre (2011), we analyze parsing errors related to structural factors. Sentence Length. Figure 3 (a) shows the accuracy of both parsing models relative to sentence lengths. Consistent with the analysis in McDonald and Nivre (2011), STACKPTR tends to perform better on shorter sentences, which make fewer parsing decisions, significantly reducing the chance of error propagation. Dependency Length. Figure 3 (b) measures the precision and recall relative to dependency lengths. 
While the graph-based BIAF parser still performs better for longer dependency arcs and transition-based STACKPTR parser does better for shorter ones, the gap between the two systems is marginal, much smaller than that shown 1410 Bi-Att NeuroMST BIAF STACKPTR Best Published UAS [LAS] UAS [LAS] UAS [LAS] UAS [LAS] UAS LAS ar 80.34 [68.58] 80.80 [69.40] 82.15±0.34 [71.32±0.36] 83.04±0.29 [72.94±0.31] 81.12 – bg 93.96 [89.55] 94.28 [90.60] 94.62±0.14 [91.56±0.24] 94.66±0.10 [91.40±0.08] 94.02 – zh – 93.40 [90.10] 94.05±0.27 [90.89±0.22] 93.88±0.24 [90.81±0.55] 93.04 – cs 91.16 [85.14] 91.18 [85.92] 92.24±0.22 [87.85±0.21] 92.83±0.13 [88.75±0.16] 91.16 85.14 da 91.56 [85.53] 91.86 [87.07] 92.80±0.26 [88.36±0.18] 92.08±0.15 [87.29±0.21] 92.00 – nl 87.15 [82.41] 87.85 [84.82] 90.07±0.18 [87.24±0.17] 90.10±0.27 [87.05±0.26] 87.39 – en – 94.66 [92.52] 95.19±0.05 [93.14±0.05] 93.25±0.05 [93.17±0.05] 93.25 – de 92.71 [89.80] 93.62 [91.90] 94.52±0.11 [93.06±0.11] 94.77±0.05 [93.21±0.10] 92.71 89.80 ja 93.44 [90.67] 94.02 [92.60] 93.95±0.06 [92.46±0.07] 93.38±0.08 [91.92±0.16] 93.80 – pt 92.77 [88.44] 92.71 [88.92] 93.41±0.08 [89.96±0.24] 93.57±0.12 [90.07±0.20] 93.03 – sl 86.01 [75.90] 86.73 [77.56] 87.55±0.17 [78.52±0.35] 87.59±0.36 [78.85±0.53] 87.06 – es 88.74 [84.03] 89.20 [85.77] 90.43±0.13 [87.08±0.14] 90.87±0.26 [87.80±0.31] 88.75 84.03 sv 90.50 [84.05] 91.22 [86.92] 92.22±0.15 [88.44±0.17] 92.49±0.21 [89.01±0.22] 91.85 85.26 tr 78.43 [66.16] 77.71 [65.81] 79.84±0.23 [68.63±0.29] 79.56±0.22 [68.03±0.15] 78.43 66.16 Table 3: UAS and LAS on 14 treebanks from CoNLL shared tasks, together with several state-of-the-art parsers. Bi-Att is the bi-directional attention based parser (Cheng et al., 2016), and NeuroMST is the neural MST parser (Ma and Hovy, 2017). “Best Published” includes the most accurate parsers in term of UAS among Koo et al. (2010), Martins et al. (2011), Martins et al. (2013), Lei et al. (2014), Zhang et al. (2014), Zhang and McDonald (2014), Pitler and McDonald (2015), and Cheng et al. (2016). in McDonald and Nivre (2011). One possible reason is that, unlike traditional transition-based parsers that scan the sentence from left to right, STACKPTR processes in a top-down manner, thus sometimes unnecessarily creating shorter dependency arcs first. Root Distance. Figure 3 (c) plots the precision and recall of each system for arcs of varying distance to the root. Different from the observation in McDonald and Nivre (2011), STACKPTR does not show an obvious advantage on the precision for arcs further away from the root. Furthermore, the STACKPTR parser does not have the tendency to over-predict root modifiers reported in McDonald and Nivre (2011). This behavior can be explained using the same reasoning as above: the fact that arcs further away from the root are usually constructed early in the parsing algorithm of traditional transition-based parsers is not true for the STACKPTR parser. 4.4.2 Effect of POS Embedding The only prerequisite information that our parsing model relies on is POS tags. With the goal of achieving an end-to-end parser, we explore the effect of POS tags on parsing performance. We run experiments on PTB using our STACKPTR parser with gold-standard and predicted POS tags, and without tags, respectively. STACKPTR in these experiments is the Full model with beam=10. Table 2 gives results of the parsers with different versions of POS tags on the test data of PTB. 
The parser with gold-standard POS tags significantly outperforms the other two parsers, showing that dependency parsers can still benefit from accurate POS information. The parser with predicted (imperfect) POS tags, however, performs even slightly worse than the parser without using POS tags. It illustrates that an end-to-end parser that doesn’t rely on POS information can obtain competitive (or even better) performance than parsers using imperfect predicted POS tags, even if the POS tagger is relative high accuracy (accuracy > 97% in this experiment on PTB). 4.5 Experiments on Other Treebanks 4.5.1 CoNLL Treebanks Table 3 summarizes the parsing results of our model on the test sets of 14 treebanks from the CoNLL shared task, along with the state-of-theart baselines. Along with BIAF, we also list the performance of the bi-directional attention based Parser (Bi-Att) (Cheng et al., 2016) and the neural MST parser (NeuroMST) (Ma and Hovy, 2017) for comparison. Our parser achieves state-of-theart performance on both UAS and LAS on eight languages — Arabic, Czech, English, German, Portuguese, Slovene, Spanish, and Swedish. On Bulgarian and Dutch, our parser obtains the best UAS. On other languages, the performance of our parser is competitive with BIAF, and significantly better than others. The only exception is Japanese, on which NeuroMST obtains the best scores. 1411 Dev Test BIAF STACKPTR BIAF STACKPTR UAS LAS UAS LAS UAS LAS UAS LAS bg 93.92±0.13 89.05±0.11 94.09±0.16 89.17±0.14 94.30±0.16 90.04±0.16 94.31±0.06 89.96±0.07 ca 94.21±0.05 91.97±0.06 94.47±0.02 92.51±0.05 94.36±0.06 92.05±0.07 94.47±0.02 92.39±0.02 cs 94.14±0.03 90.89±0.04 94.33±0.04 91.24±0.05 94.06±0.04 90.60±0.05 94.21±0.06 90.94±0.07 de 91.89±0.11 88.39±0.17 92.26±0.11 88.79±0.15 90.26±0.19 86.11±0.25 90.26±0.07 86.16±0.01 en 92.51±0.08 90.50±0.07 92.47±0.03 90.46±0.02 91.91±0.17 89.82±0.16 91.93±0.07 89.83±0.06 es 93.46±0.05 91.13±0.07 93.54±0.06 91.34±0.05 93.72±0.07 91.33±0.08 93.77±0.07 91.52±0.07 fr 95.05±0.04 92.76±0.07 94.97±0.04 92.57±0.06 92.62±0.15 89.51±0.14 92.90±0.20 89.88±0.23 it 94.89±0.12 92.58±0.12 94.93±0.09 92.90±0.10 94.75±0.12 92.72±0.12 94.70±0.07 92.55±0.09 nl 93.39±0.08 90.90±0.07 93.94±0.11 91.67±0.08 93.44±0.09 91.04±0.06 93.98±0.05 91.73±0.07 no 95.44±0.05 93.73±0.05 95.52±0.08 93.80±0.08 95.28±0.05 93.58±0.05 95.33±0.03 93.62±0.03 ro 91.97±0.13 85.38±0.03 92.06±0.08 85.58±0.12 91.94±0.07 85.61±0.13 91.80±0.11 85.34±0.21 ru 93.81±0.05 91.85±0.06 94.11±0.07 92.29±0.10 94.40±0.03 92.68±0.04 94.69±0.04 93.07±0.03 Table 4: UAS and LAS on both the development and test datasets of 12 treebanks from UD Treebanks, together with BIAF for comparison. 4.5.2 UD Treebanks For UD Treebanks, we select 12 languages — Bulgarian, Catalan, Czech, Dutch, English, French, German, Italian, Norwegian, Romanian, Russian and Spanish. For all the languages, we adopt the standard training/dev/test splits, and use the universal POS tags (Petrov et al., 2012) provided in each treebank. The statistics of these corpora are provided in Appendix B. Table 4 summarizes the results of the STACKPTR parser, along with BIAF for comparison, on both the development and test datasets for each language. First, both BIAF and STACKPTR parsers achieve relatively high parsing accuracies on all the 12 languages — all with UAS are higher than 90%. On nine languages — Catalan, Czech, Dutch, English, French, German, Norwegian, Russian and Spanish — STACKPTR outperforms BIAF for both UAS and LAS. 
On Bulgarian, STACKPTR achieves slightly better UAS while LAS is slightly worse than BIAF. On Italian and Romanian, BIAF obtains marginally better parsing performance than STACKPTR. 5 Conclusion In this paper, we proposed STACKPTR, a transition-based neural network architecture, for dependency parsing. Combining pointer networks with an internal stack to track the status of the top-down, depth-first search in the decoding procedure, the STACKPTR parser is able to capture information from the whole sentence and all the previously derived subtrees, removing the leftto-right restriction in classical transition-based parsers, while maintaining linear parsing steps, w.r.t the length of the sentences. Experimental results on 29 treebanks show the effectiveness of our parser across 20 languages, by achieving state-ofthe-art performance on 21 corpora. There are several potential directions for future work. First, we intend to consider how to conduct experiments to improve the analysis of parsing errors qualitatively and quantitatively. Another interesting direction is to further improve our model by exploring reinforcement learning approaches to learn an optimal order for the children of head words, instead of using a predefined fixed order. Acknowledgements The authors thank Chunting Zhou, Di Wang and Zhengzhong Liu for their helpful discussions. This research was supported in part by DARPA grant FA8750-18-2-0018 funded under the AIDA program. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA. References Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual nlp. In Proceedings of CoNLL2013. Sofia, Bulgaria, pages 183–192. Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proceedings of ACL-2016 (Volume 1: Long Papers). Berlin, Germany, pages 2442–2452. Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging linguistic structure for open domain information extraction. 1412 In Proceedings of ACL-2015 (Volume 1: Long Papers). Beijing, China, pages 344–354. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR-2015. Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2015. Improved transition-based parsing by modeling characters instead of words with lstms. In Proceedings of EMNLP-2015. Lisbon, Portugal, pages 349–359. Miguel Ballesteros, Yoav Goldberg, Chris Dyer, and Noah A. Smith. 2016. Training with exploration improves a greedy stack lstm parser. In Proceedings of EMNLP-2016. Austin, Texas, pages 2005–2010. Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Simaan. 2017. Graph convolutional encoders for syntax-aware neural machine translation. In Proceedings of EMNLP-2017. Copenhagen, Denmark, pages 1957–1967. Bernd Bohnet and Joakim Nivre. 2012. A transitionbased system for joint part-of-speech tagging and labeled non-projective dependency parsing. In Proceedings of EMNLP-2012. Jeju Island, Korea, pages 1455–1465. Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proceeding of CoNLL-2006. New York, NY, pages 149–164. Danqi Chen and Christopher Manning. 2014. 
A fast and accurate dependency parser using neural networks. In Proceedings of EMNLP-2014. Doha, Qatar, pages 740–750. Hao Cheng, Hao Fang, Xiaodong He, Jianfeng Gao, and Li Deng. 2016. Bi-directional attention with agreement for dependency parsing. In Proceedings of EMNLP-2016. Austin, Texas, pages 2204–2214. Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. Transactions of the Association for Computational Linguistics 4:357–370. Djork-Arn´e Clevert, Thomas Unterthiner, and Sepp Hochreiter. 2015. Fast and accurate deep network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289 . Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In Proceedings of ICLR-2017 (Volume 1: Long Papers). Toulon, France. Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In Proceedings of EMNLP-2013. Seattle, Washington, USA, pages 1971–1982. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of ACL-2015 (Volume 1: Long Papers). Beijing, China, pages 334–343. Jason M Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proceedings of COLING-1996 (Volume 1). Association for Computational Linguistics, pages 340–345. Nicolas R Fauceglia, Yiu-Chang Lin, Xuezhe Ma, and Eduard Hovy. 2015. Word sense disambiguation via propstore and ontonotes for event mention detection. In Proceedings of the The 3rd Workshop on EVENTS: Definition, Detection, Coreference, and Representation. Denver, Colorado, pages 11–15. Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. In Advances in Neural Information Processing Systems. Jan Hajiˇc, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, Maria Ant`onia Mart´ı, Llu´ıs M`arquez, Adam Meyers, Joakim Nivre, Sebastian Pad´o, Jan ˇStˇep´anek, et al. 2009. The conll-2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of CoNLL2009: Shared Task. pages 1–18. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional lstm feature representations. Transactions of the Association for Computational Linguistics 4:313–327. Terry Koo and Michael Collins. 2010. Efficient thirdorder dependency parsers. In Proceedings of ACL2010. Uppsala, Sweden, pages 1–11. Terry Koo, Alexander M. Rush, Michael Collins, Tommi Jaakkola, and David Sontag. 2010. Dual decomposition for parsing with non-projective head automata. In Proceedings of EMNLP-2010. Cambridge, MA, pages 1288–1298. Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, and Noah A. Smith. 2016. Distilling an ensemble of greedy dependency parsers into one mst parser. In Proceedings of EMNLP2016. Austin, Texas, pages 1744–1753. Tao Lei, Yu Xin, Yuan Zhang, Regina Barzilay, and Tommi Jaakkola. 2014. Low-rank tensors for scoring dependency structures. In Proceedings of ACL2014 (Volume 1: Long Papers). Baltimore, Maryland, pages 1381–1391. Wang Ling, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015. Two/too simple adaptations of word2vec for syntax problems. In Proceedings of NAACL-2015. Denver, Colorado, pages 1299–1304. 1413 Thang Luong, Hieu Pham, and Christopher D. 
Manning. 2015. Effective approaches to attentionbased neural machine translation. In Proceedings of EMNLP-2015. Lisbon, Portugal, pages 1412–1421. Xuezhe Ma, Yingkai Gao, Zhiting Hu, Yaoliang Yu, Yuntian Deng, and Eduard Hovy. 2017. Dropout with expectation-linear regularization. In Proceedings of the 5th International Conference on Learning Representations (ICLR-2017). Toulon, France. Xuezhe Ma and Eduard Hovy. 2015. Efficient inner-toouter greedy algorithm for higher-order labeled dependency parsing. In Proceedings of EMNLP-2015. Lisbon, Portugal, pages 1322–1328. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of ACL-2016 (Volume 1: Long Papers). Berlin, Germany, pages 1064–1074. Xuezhe Ma and Eduard Hovy. 2017. Neural probabilistic model for non-projective mst parsing. In Proceedings of IJCNLP-2017 (Volume 1: Long Papers). Taipei, Taiwan, pages 59–69. Xuezhe Ma, Zhengzhong Liu, and Eduard Hovy. 2016. Unsupervised ranking model for entity coreference resolution. In Proceedings of NAACL-2016. San Diego, California, USA. Xuezhe Ma and Fei Xia. 2014. Unsupervised dependency parsing with transferring distribution via parallel guidance and entropy regularization. In Proceedings of ACL-2014. Baltimore, Maryland, pages 1337–1348. Xuezhe Ma and Hai Zhao. 2012a. Fourth-order dependency parsing. In Proceedings of COLING 2012: Posters. Mumbai, India, pages 785–796. Xuezhe Ma and Hai Zhao. 2012b. Probabilistic models for high-order projective dependency parsing. Technical Report, arXiv:1502.04174 . Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics 19(2):313–330. Andre Martins, Miguel Almeida, and Noah A. Smith. 2013. Turning on the turbo: Fast third-order nonprojective turbo parsers. In Proceedings of ACL2013 (Volume 2: Short Papers). Sofia, Bulgaria, pages 617–622. Andre Martins, Noah Smith, Mario Figueiredo, and Pedro Aguiar. 2011. Dual decomposition with many overlapping components. In Proceedings of EMNLP-2011. Edinburgh, Scotland, UK., pages 238–249. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005a. Online large-margin training of dependency parsers. In Proceedings of ACL-2005. Ann Arbor, Michigan, USA, pages 91–98. Ryan McDonald and Joakim Nivre. 2011. Analyzing and integrating dependency parsers. Computational Linguistics 37(1):197–230. Ryan McDonald, Joakim Nivre, Yvonne QuirmbachBrundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar T¨ackstr¨om, Claudia Bedini, N´uria Bertomeu Castell´o, and Jungmee Lee. 2013. Universal dependency annotation for multilingual parsing. In Proceedings of ACL-2013. Sofia, Bulgaria, pages 92–97. Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceeding of EACL-2006. Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajic. 2005b. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of HLT/EMNLP-2005. Vancouver, Canada, pages 523–530. Vincent Ng. 2010. Supervised noun phrase coreference research: The first fifteen years. In Proceedings of ACL-2010. Association for Computational Linguistics, Uppsala, Sweden, pages 1396–1411. Truc-Vien T. Nguyen, Alessandro Moschitti, and Giuseppe Riccardi. 2009. Convolution kernels on constituent, dependency and sequential structures for relation extraction. In Proceedings of EMNLP2009. Singapore, pages 1378–1387. 
Joakim Nivre, Johan Hall, Sandra K¨ubler, Ryan McDonald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The CoNLL 2007 shared task on dependency parsing. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007. Prague, Czech Republic, pages 915–932. Joakim Nivre and Mario Scholz. 2004. Deterministic dependency parsing of English text. In Proceedings of COLING-2004. Geneva, Switzerland, pages 64– 70. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In Proceedings of ICML-2013. pages 1310–1318. Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen-tau Yih. 2017. Cross-sentence n-ary relation extraction with graph lstms. Transactions of the Association for Computational Linguistics 5:101–115. Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A universal part-of-speech tagset. In Proceedings of LREC-2012. Istanbul, Turkey, pages 2089–2096. Emily Pitler and Ryan McDonald. 2015. A linear-time transition system for crossing interval trees. In Proceedings of NAACL-2015. Denver, Colorado, pages 662–671. 1414 Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15(1):1929–1958. Mihai Surdeanu, Richard Johansson, Adam Meyers, Llu´ıs M`arquez, and Joakim Nivre. 2008. The conll2008 shared task on joint parsing of syntactic and semantic dependencies. In Proceedings of CoNLL2008. pages 159–177. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. pages 3104–3112. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings ACL-2015 (Volume 1: Long Papers). Beijing, China, pages 1556–1566. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems. pages 2692–2700. Wenhui Wang and Baobao Chang. 2016. Graph-based dependency parsing with bidirectional lstm. In Proceedings of ACL-2016 (Volume 1: Long Papers). Berlin, Germany, pages 2306–2315. David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. In Proceedings of ACL-2015 (Volume 1: Long Papers). Beijing, China, pages 323–333. Nianwen Xue, Fu-Dong Chiou, and Martha Palmer. 2002. Building a large-scale annotated chinese corpus. In Proceedings of COLING-2002. pages 1–8. Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proceedings of IWPT. Nancy, France, volume 3, pages 195–206. Hao Zhang and Ryan McDonald. 2014. Enforcing structural diversity in cube-pruned dependency parsing. In Proceedings of ACL-2014 (Volume 2: Short Papers). Baltimore, Maryland, pages 656–661. Yuan Zhang, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2014. Greed is good if randomized: New inference for dependency parsing. In Proceedings of EMNLP-2014. Doha, Qatar, pages 1013–1024. Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Portland, Oregon, USA, pages 188–193.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1415–1425 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1415 Twitter Universal Dependency Parsing for African-American and Mainstream American English Su Lin Blodgett Johnny Tian-Zheng Wei Brendan O’Connor College of Information and Computer Sciences University of Massachusetts Amherst [email protected] [email protected] [email protected] Abstract Due to the presence of both Twitterspecific conventions and non-standard and dialectal language, Twitter presents a significant parsing challenge to current dependency parsing tools. We broaden English dependency parsing to handle social media English, particularly social media African-American English (AAE), by developing and annotating a new dataset of 500 tweets, 250 of which are in AAE, within the Universal Dependencies 2.0 framework. We describe our standards for handling Twitter- and AAE-specific features and evaluate a variety of crossdomain strategies for improving parsing with no, or very little, in-domain labeled data, including a new data synthesis approach. We analyze these methods’ impact on performance disparities between AAE and Mainstream American English tweets, and assess parsing accuracy for specific AAE lexical and syntactic features. Our annotated data and a parsing model are available at: http://slanglab.cs.umass.edu/ TwitterAAE/. 1 Introduction Language on Twitter diverges from well-edited Mainstream American English (MAE, also called Standard American English) in a number of ways, presenting significant challenges to current NLP tools. It contains, among other phenomena, nonstandard spelling, punctuation, capitalization, and syntax, as well as Twitter-specific conventions such as hashtags, usernames, and retweet tokens (Eisenstein, 2013). Additionally, it contains an abundance of dialectal language, including African-American English (AAE), a dialect of American English spoken by millions of individuals, which contains lexical, phonological, and syntactic features not present in MAE (Green, 2002; Stewart, 2014; Jones, 2015). Since standard English NLP tools are typically trained on well-edited MAE text, their performance is degraded on Twitter, and even more so for AAE tweets compared to MAE tweets— gaps exist for part-of-speech tagging (Jørgensen et al., 2016), language identification, and dependency parsing (Blodgett et al., 2016; Blodgett and O’Connor, 2017). Expanding the linguistic coverage of NLP tools to include minority and colloquial dialects would help support equitable language analysis across sociolinguistic communities, which could help information retrieval, translation, or opinion analysis applications (Jurgens et al., 2017). For example, sentiment analysis systems ought to count the opinions of all types of people, whether they use standard dialects or not. In this work, we broaden Universal Dependencies (Nivre et al., 2016) parsing1 to better handle social media English, in particular social media AAE. First, we develop standards to handle Twitter-specific and AAE-specific features within Universal Dependencies 2.0 (§3), by selecting and annotating a new dataset of 500 tweets, 250 of which are in AAE. Second, we evaluate several state-of-the-art dependency parsers, finding that, as expected, they perform poorly on our dataset relative to the UD English Treebank (§4). 
Third, since the UD English Treebank contains substantial amounts of traditional MAE data for training, we investigate cross-domain training methods to improve Twitter AAE dependency parsing with no, or very little, 1http://universaldependencies.org/ 1416 in-domain labeled data, by using Twitter-specific taggers, embeddings, and a novel heuristic training data synthesis procedure. This helps close some of the gap between MAE and AAE performance. Finally, we provide an error analysis of the parsers’ performance on AAE lexical and syntactic constructions in our dataset (§5.4).2 2 Related Work 2.1 Parsing for Twitter Parsing for noisy social media data presents interesting and significant challenges. Foster et al. (2011) develop a dataset of 519 constituencyannotated English tweets, which were converted to Stanford dependencies. Their analysis found a substantial drop in performance of an off-the-shelf dependency parser on the new dataset compared to a WSJ test set. Sanguinetti et al. (2017) annotated a dataset of 6,738 Italian tweets according to UD 2.0 and examined the performance of two parsers on the dataset, finding that they lagged considerably relative to performance on the Italian UD Treebank. Kong et al. (2014) develop an English dependency parser designed for Twitter, annotating a dataset of 929 tweets (TWEEBANK V1) according to the unlabeled FUDG dependency formalism (Schneider et al., 2013). It has substantially different structure than UD (for example, prepositions head PPs, and auxiliaries govern main verbs). More recently, Liu et al. (2018) developed TWEEBANK V2, fully annotating TWEEBANK V1 according to UD 2.0 and annotating additionally sampled tweets, for a total of 3,550 tweets. They found that creating consistent annotations was challenging, due to frequent ambiguities in interpreting tweets; nevertheless, they were able to train a pipeline for tokenizing, tagging, and parsing the tweets, and develop ensemble and distillation models to improve parsing accuracy. Our work encounters similar challenges; in our approach, we intentionally oversample AAE-heavy messages for annotation, detail specific annotation decisions for AAE-specific phenomena (§3.2), and analyze parser performance between dialects and for particular constructions (§5.3–5.4). Future work may be able to combine these annotations for effective multi-dialect Twitter UD parsers, which 2Our annotated dataset and trained dependency parser are available at http://slanglab.cs.umass.edu/TwitterAAE/ and annotations are available in the public Universal Dependencies repository. may allow for the use of pre-existing downstream tools like semantic relation extractors (e.g. White et al. (2016)). One line of work for parsing noisy social media data, including Khan et al. (2013) and Nasr et al. (2016), examines the effects of the domain mismatches between traditional sources of training data and social media data, finding that matching the data as closely as possible aids performance. Other work focuses on normalization, including Daiber and van der Goot (2016) and van der Goot and van Noord (2017), which develop a dataset of 500 manually normalized and annotated tweets, and uses normalization within a parser. Separately, Zhang et al. (2013) created a domain-adaptable, parser-focused system by directly linking parser performance to normalization performance. 2.2 Parsing for Dialects For Arabic dialects, Chiang et al. 
(2006) parse Levantine Arabic by projecting parses from Modern Standard Arabic translations, while Green and Manning (2010) conduct extensive error analysis of Arabic constituency parsers and the Penn Arabic Treebank. Scherrer (2011) parse Swiss German dialects by transforming Standard German phrase structures. We continue in this line of work in our examination of AAE-specific syntactic structures and generation of synthetic data with such structures (§4.2.1). Less work has examined parsing dialectal language on social media. Recently, Wang et al. (2017) annotate 1,200 Singlish (Singaporean English) sentences from a Singaporean talk forum, selecting sentences containing uniquely Singaporean vocabulary items. Like other work, they observe a drop in performance on dialectal Singlish text, but increase performance through a stacking-based domain adaptation method. 3 Dataset and Annotation 3.1 Dataset Our dataset contains 500 tweets, with a total of 5,951 non-punctuation edges, sampled from the publicly available TwitterAAE corpus.3 Each tweet in that corpus is accompanied by a model’s demographically-aligned topic model probabilities jointly inferred from Census demographics and word likelihood by Blodgett et al. (2016), including the African-American and White topics. 3http://slanglab.cs.umass.edu/TwitterAAE/ 1417 We create a balanced sample to get a range of dialectal language, sampling 250 tweets from those where the African-American topic has at least 80% probability, and 250 from those where the White topic has at least 80% probability. We refer to these two subcorpora as AA and WH; Blodgett et al. (2016) showed the former exhibits linguistic features typical of AAE. The 250 AA tweets include many alternate spellings of common words that correspond to well-known phonological phenomena—including da, tha (the), dat, dhat (that), dis, dhis (this), ion, iont (I don’t), ova (over), yo (your), dere, der (there), den, dhen (then), ova (over), and nall, null (no, nah)—where each of the mentioned italicized AAE terms appears in the AAE data, but never in the MAE data. We examine these lexical variants more closely in §5.4. Across the AA tweets, 18.0% of tokens were not in a standard English dictionary, while the WH tweets’ OOV rate was 10.7%.4 We further observe a variety of AAE syntactic phenomena in our AA tweets, several of which are described in §3.2 and §5.4. 3.2 Annotation To effectively measure parsing quality and develop better future models, we first focus on developing high-quality annotations for our dataset, for which we faced a variety of challenges. We detail our annotation principles using Universal Dependency 2.0 relations (Nivre et al., 2016). All tweets were initially annotated by two annotators, and disagreements resolved by one of the annotators. Annotation decisions for several dozen tweets were discussed in a group of three annotators early in the annotation process. Our annotation principles are in alignment with those proposed by Liu et al. (2018), with the exception of contraction handling, which we discuss briefly in §3.2.2. 3.2.1 Null Copulas The AAE dialect is prominently characterized by the drop of copulas, which can occur when the copula is present tense, not first person, not accented, not negative, and expressing neither the habitual nor the remote present perfect tenses (Green, 2002). We frequently observed null copulas, as in: 4The dictionary of 123,377 words with American spellings was generated using http://wordlist.aspell.net/. 
If u wit me den u pose to RESPECT ME nsubj nsubj “If you (are) with me, then you (are) supposed to respect me” The first dropped are is a null copula; UD2.0 would analyze the MAE version as you nsubj ←−−me cop −→are, which we naturally extend to analyze the null copula by simply omitting cop (which is now over a null element, so cannot exist in a dependency graph). The second are is a null auxiliary (in MAE, you nsubj ←−−supposed aux −→are), a tightly related phenomenon (for example, Green et al. (2007) studies both null copulas and null auxiliary be in infant AAE), which we analyze similarly by simply omitting the aux edge. 3.2.2 AAE Verbal Auxiliaries We observed AAE verbal auxiliaries, e.g., fees be looking upside my head aux Now we gone get fucked up aux damnnn I done let alot of time pass by aux including habitual be (“Continually, over and over, fees are looking at me...”), future gone (“we are going to get...”), and completive done (“I did let time pass by,” emphasizing the speaker completed a time-wasting action). We attach the auxiliary to the main verb with the aux relation, as UD2.0 analyzes other English auxiliaries (e.g. would or will). 3.2.3 Verbs: Auxiliaries vs. Main Verbs We observed many instances of quasi-auxiliary, “to” shortened verbs such as wanna, gotta, finna, bouta, tryna, gonna, which can be glossed as want to, got to, fixing to, about to, etc. They control modality, mood and tense—for example, finna and bouta denote an immediate future tense; Green (2002, ch. 2) describes finna as a preverbal marker. From UD’s perspective, it is difficult to decide if they should be subordinate auxiliaries or main verbs. In accordance with the UD Treebank’s handling of MAE want to X and going to X as main verbs (want xcomp −−→X), we analyzed them similarly, e.g. Lol he bouta piss me off “He is about to piss me off” xcomp 1418 This is an instance of a general principle that, if there is a shortening of an MAE multiword phrase into a single word, the annotations on that word should mirror the edges in and out of the original phrase’s subgraph (as in Schneider et al. (2013)’s fudge expressions). However, in contrast to the UD Treebank, we did not attempt to split up these words into their component words (e.g. wanna →want to), since to do this well, it would require a more involved segmentation model over the dozens or even hundreds of alternate spellings each of the above can take;5 we instead rely on Owoputi et al. (2013); O’Connor et al. (2010)’s rule-based tokenizer that never attempts to segment within such shortenings. This annotation principle is in contrast to that of Liu et al. (2018), which follows UD tokenization for contractions. 3.2.4 Non-AAE Twitter issues We also encountered many issues general to Twitter but not AAE; these are still important to deal with since AAE tweets include more non-standard linguistic phenomena overall. When possible, we adapted Kong et al. (2014)’s annotation conventions into the Universal Dependencies context, which are the only published conventions we know of for Twitter dependencies (for the FUDG dependency formalism). Issues include: • @-mentions, which require different treatment when they are terms of address, versus nominal elements within a sentence. • Hashtags, which in their tag-like usage are utterances by themselves (#tweetliketheoppositegender Oh damn .). or sometimes can be words with standard syntactic relations within the sentence (#She’s A Savage, having #She’s nsubj ←−−Savage). 
Both hashtag and @mention ambiguities are handled by Owoputi et al. (2013)’s POS tagger. • Multiple utterances, since we do not attempt sentence segmentation, and in many cases sentential utterances are not separated by explicit punctuation. FUDG allows for multiple roots for a text, but UD does not; instead we follow UD’s convention of the parataxis relation for what they describe as “side-by-side run-on sentences.” 5For example, Owoputi et al. (2013)’s Twitter word cluster 0011000 has 36 forms of gonna alone: http://www.cs. cmu.edu/∼ark/TweetNLP/cluster viewer.html • Emoticons and emoji, which we attach as discourse relations to the utterance root, following UD’s treatment of interjections. • Collapsed phrases, like omw for “on my way.” When possible, we used the principle of annotating according to the root of the subtree of the original phrase. For example, UD 2.0 prescribes way xcomp −−→get for the sentence On my way to get...; therefore we use omw xcomp −−→get for omw to get. • Separated words, like uh round for “around,” which we analyze as multiword phrases (flat or compound). We discuss details for these and other cases in the online appendix. 4 Experiments 4.1 Models Our experiments use the following two parsers. UDPipe (Straka et al., 2016) is a neural pipeline containing a tokenizer, morphological analyzer, tagger, and transition-based parser intended to be easily retrainable. The parser attains 80.2% LAS (labeled attachment score) on the UD English treebank with automatically generated POS tags, and was a baseline system used in the CoNLL 2017 Shared Task (Zeman et al., 2017).6 Deep Biaffine (Dozat et al., 2017; Dozat and Manning, 2016) is a graph-based parser incorporating neural attention and biaffine classifiers for arcs and labels. We used the version of the parser in the Stanford CoNLL 2017 Shared Task submission, which attained 82.2% LAS on the UD English treebank with automatically generated tags, and achieved the best performance in the task. The model requires pre-trained word embeddings. 7 4.2 Experimental Setup We considered a series of experiments within both a cross-domain scenario (§4.2.1), where we trained only on UD Treebank data, and an indomain scenario (§4.2.2) using small amounts of our labeled data. We use the parsing systems’ default hyperparameters (e.g. minibatch size and learning rate) and the default training/development split of the treebank (both systems perform early stopping based on development set performance). 6https://github.com/ufal/udpipe 7https://github.com/tdozat/UnstableParser/ 1419 4.2.1 Cross-Domain Settings Morpho-Tagger vs. ARK POS tags: The UD Treebank contains extensive fine-grained POS and morphological information, on which UDPipe’s morphological analyzer and tagging system is originally trained. This rich information should be useful for parsing, but the analyzers may be highly error-prone on out-of-domain, dialectal Twitter data, and contribute to poor parsing performance. We hypothesize that higher quality, even if coarser, POS information should improve parsing. To test this, we retrain UDPipe in two different settings. We first retrain the parser component with fine-grained PTB-style POS tags and morphological information provided by the tagger component;8 we call this the Morpho-Tagger setting. 
Second, we retrain the parser with morphological information stripped and its tags predicted from the ARK Twitter POS tagger (Owoputi et al., 2013), which is both tailored for Twitter and displays a smaller AAE vs MAE performance gap than traditional taggers (Jørgensen et al., 2016); we call this the ARK Tagger setting.9 The ARK Tagger’s linguistic representation is impoverished compared to Morpho-Tagger: its coarse-grained POS tag system does not include tense or number information, for example.10 Synthetic Data: Given our knowledge of Twitter- and AAE-specific phenomena that do not occur in the UD Treebank, we implemented a rulebased method to help teach the machine-learned parser these phenomena; we generated synthetic data for three Internet-specific conventions and one set of AAE syntactic features. (This is inspired by Scherrer (2011)’s rule transforms between Standard and Swiss German.) We performed each of the following transformations separately on a copy of the UD Treebank data and concatenated the transformed files together for the final training and development files, so that each final file contained several transformed copies of the original UD Treebank data. 1. @-mentions, emojis, emoticons, expressions, and hashtags: For each sentence in the UD Treebank we inserted at least one @-mention, emoji, emoticon, expression (Internet-specific words and 8We also retrained this component, to maintain consistency of training and development split. We also remove the universal (coarse) POS tags it produces, replacing them with the same PTB tags. 9We strip lemmas from training and development files for both settings. 10Derczynski et al. (2013)’s English Twitter tagger, which outputs PTB-style tags, may be of interest for future work. abbreviations such as lol, kmsl, and xoxo), or hashtag, annotated with the correct relation, at the beginning of the sentence. An item of the same type was repeated with 50% probability, and a second item was inserted with 50% probability. @mentions were inserted using the ATMENTION token and emojis using the EMOJI token. Emoticons were inserted from a list of 20 common emoticons, expressions were inserted from a list of 16 common expressions, and hashtags were sampled for insertion according to their frequency in a list of all hashtags observed in the TwitterAAE corpus. 2. Syntactically participating @-mentions: To replicate occurrences of syntactically participating @-mentions, for each sentence in the UD Treebank with at least one token annotated with an nsubj or obj relation and an NNP POS tag, we replaced one at random with the ATMENTION token. 3. Multiple utterances: To replicate occurrences of multiple utterances, we randomly collapsed pairs of two short sentences (< 15 tokens) together, attaching the root of the second to the root of the first with the parataxis relation. 4. AAE preverbal markers and auxiliaries: We introduced instances of verbal constructions present in AAE that are infrequent or non-existent in the UD Treebank data. First, constructions such as going to, about to, and want to are frequently collapsed to gonna, bouta, and wanna, respectively (see §3.2.2); for each sentence with at least one of these constructions, we randomly chose one to collapse. Second, we randomly replaced instances of going to with finna, a preverbal marker occurring in AAE and in the American South (Green, 2002). 
Third, we introduced the auxiliaries gone and done, which denote future tense and past tense, respectively; for the former, for each sentence containing at least one auxiliary will, we replace it with gone, and for the latter, for each sentence containing at least one nonauxiliary, non-passive, past-tense verb, we choose one and insert done before it. Finally, for each sentence containing at least one copula, we delete one at random. Word Embeddings: Finally, since a tremendous variety of Twitter lexical items are not present in the UD Treebank, we use 200dimensional word embeddings that we trained with word2vec11 (Mikolov et al., 2013) on the 11https://github.com/dav/word2vec 1420 TwitterAAE corpus, which contains 60.8 million tweets. Before training, we processed the corpus by replacing @-mentions with ATMENTION, replacing emojis with EMOJI, and replacing sequences of more than two repeated letters with two repeated letters (e.g. partyyyyy →partyy). This resulted in embeddings for 487,450 words. We retrain and compare UDPipe on each of the Morpho-Tagger and ARK Tagger settings with synthetic data and pre-trained embeddings, and without. We additionally retrain Deep Biaffine with and without synthetic data and embeddings.12 4.2.2 In-domain Training We additionally investigate the effects of small amounts of in-domain training data from our dataset. We perform 2-fold cross-validation, randomly partitioning our dataset into two sets of 250 tweets. We compare two different settings (all using the UDPipe ARK Tagger setting): Twitter-only: To explore the effect of training with Twitter data alone, for each set of 250 we trained on that set alone, along with our Twitter embeddings, and tested on the remaining 250. UDT+Twitter: To explore the additional signal provided by the UD Treebank, for each set of 250 we trained on the UD Treebank concatenated with that set (with the tweets upweighted to approximately match the size of the UD Treebank, in order to use similar hyperparameters) and tested on the remaining 250. 5 Results and Analysis In our evaluation, we ignored punctuation tokens (labeled with punct) in our LAS calculation. 5.1 Effects of Cross-Domain Settings Morpho-Tagger vs. ARK Tagger: As hypothesized, UDPipe’s ARK Tagger setting outperformed the Morpho-Tagger across all settings, ranging from a 2.8% LAS improvement when trained only on the UD Treebank with no pre-trained word embeddings, to 4.7% and 5.4% improvements when trained with Twitter embeddings and both Twitter embeddings and synthetic data, respectively. The latter improvements suggest that the ARK Tagger setup is able to take better advantage of Twitterspecific lexical information from the embeddings 12As the existing implementation of Deep Biaffine requires pre-trained word embeddings, for the Deep Biaffine baseline experiments we use the CoNLL 2017 Shared Task 100dimensional embeddings that were pretrained on the English UD Treebank. Model LAS (1) UDPipe, Morpho-Tagger, UDT 50.5 (2) + Twitter embeddings 53.9 (3) + synthetic, Twitter embeddings 58.9 (4) UDPipe, ARK Tagger, UDT 53.3 (5) + Twitter embeddings 58.6 (6) + synthetic, Twitter embeddings 64.3 Deep Biaffine, UDT (7) + CoNLL MAE embeddings 62.3 (8) + Twitter embeddings 63.7 (9) + synthetic, Twitter embeddings 65.0 Table 1: Results from cross-domain training settings (see §4.2.1). Model LAS (10) UDPipe, Twitter embeddings 62.2 (11) + UDT 70.3 Table 2: Results from in-domain training settings (with the ARK Tagger setting, see §4.2.2). 
and syntactic patterns from the synthetic data. Table 1 shows the LAS for our various settings. After observing the better performance of the ARK Tagger setting, we opted not to retrain the Deep Biaffine parser in any Morpho-Tagger settings due to the model’s significantly longer training time; all our Deep Biaffine results are reported for models trained with an ARK Tagger setting. Synthetic data and embeddings: We observed that synthetic data and Twitter-trained embeddings were independently helpful; embeddings provided a 1.4–5.3% boost across the UDPipe and Deep Biaffine models, while synthetic data provided a 1.3– 5.7% additional boost (Table 1). UDPipe vs. Deep Biaffine: While the baseline models for UDPipe and Deep Biaffine are not directly comparable (since the latter required pretrained embeddings), in the Twitter embeddings setting Deep Biaffine outperformed UDPipe by 5.1%. However, given access to both synthetic data and Twitter embeddings, UDPipe’s performance approached that of Deep Biaffine. 5.2 Effects of In-Domain Training Perhaps surprisingly, training with even limited amounts of in-domain training data aided in parsing performance; training with just in-domain data produced an LAS comparable to that of the baseline Deep Biaffine model, and adding UD Treebank data further increased LAS by 8.1%, indicat1421 Model AA LAS WH LAS Gap (1) UDPipe, Morpho-Tagger 43.0 57.0 14.0 (2) + Twitter embeddings 45.5 61.2 15.7 (3) + synthetic, Twitter embeddings 50.7 66.2 15.5 (4) UDPipe, ARK Tagger 50.2 56.1 5.9 (5) + Twitter embeddings 54.1 62.5 8.4 (6) + synthetic, Twitter embeddings 59.9 68.1 8.2 Deep Biaffine, ARK Tagger (7) + CoNLL MAE embeddings 56.1 67.7 11.6 (8) + Twitter embeddings 58.7 66.7 8.0 (9) + synthetic, Twitter embeddings 59.9 70.8 10.9 Table 3: AA and WH tweets’ labeled attachment scores for UD Treebank-trained models (see §5.3 for discussion); Gap is the WH −AA difference in LAS. ing that they independently provide critical signal. 5.3 AAE/MAE Performance Disparity For each model in each of the cross-domain settings, we calculated the LAS on the 250 tweets drawn from highly African-American tweets and the 250 from highly White tweets (see §3 for details); we will refer to these as the AA and WH tweets, respectively. We observed clear disparities in performance between the two sets of tweets, ranging from 5.9% to 15.7% (Table 3). Additionally, across settings, we observed several patterns. First, the UDPipe ARK Tagger settings produced significantly smaller gaps (5.9–8.4%) than the corresponding Morpho-Tagger settings (14.0– 15.7%). Indeed, most of the performance improvement of the ARK Tagger setting comes from the AA tweets; the LAS on the AA tweets jumps 7.2–9.2% from each Morpho-Tagger setting to the corresponding ARK Tagger setting, compared to differences of −0.9–1.9% for the WH tweets. Second, the Deep Biaffine ARK Tagger settings produced larger gaps (8.0–11.6%) than the UDPipe ARK Tagger settings, with the exception of the embeddings-only setting. Finally, we observed the surprising result that adding Twitter-trained embeddings and synthetic data, which contains both Twitter-specific and AAE-specific features, increases the performance gap across both UDPipe settings. We hypothesize that while UDPipe is able to effectively make use of both Twitter-specific lexical items and annotation conventions within MAE-like syntactic structures, it continues to be stymied by AAE-like syntactic structures, and is therefore unable to make use of the additional information. 
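To make the comparison in Table 3 concrete, the following is a minimal sketch of how group-wise LAS and the WH − AA gap can be computed; it is illustrative rather than the evaluation code used here, and it assumes tokens are represented as dictionaries with gold and predicted head indices and relation labels. As stated at the start of Section 5, tokens whose gold relation is punct are excluded.

```python
# Illustrative sketch (assumed data format, not the evaluation code used in
# this paper): LAS counts a token as correct only if both its predicted head
# and its predicted relation label match gold; gold "punct" tokens are skipped.

def labeled_attachment_score(gold_sents, pred_sents):
    correct, total = 0, 0
    for gold_sent, pred_sent in zip(gold_sents, pred_sents):
        for g, p in zip(gold_sent, pred_sent):
            if g["deprel"] == "punct":      # punctuation ignored in the LAS calculation
                continue
            total += 1
            if p["head"] == g["head"] and p["deprel"] == g["deprel"]:
                correct += 1
    return 100.0 * correct / total

def las_gap(aa_gold, aa_pred, wh_gold, wh_pred):
    """Gap is reported as WH LAS minus AA LAS, as in Table 3."""
    aa = labeled_attachment_score(aa_gold, aa_pred)
    wh = labeled_attachment_score(wh_gold, wh_pred)
    return aa, wh, wh - aa
```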
We further calculated recall for each relation type across the AA tweets and WH tweets, and the resulting performance gap, under the UDPipe Morpho-Tagger and ARK Tagger models trained with synthetic data and embeddings. Table 4 shows these calculations for the 15 relation types for which the performance gap was highest and which had at least 15 instances in each of the AA and WH tweet sets, along with the corresponding calculation under the ARK Tagger model. The amount by which the performance gap is reduced from the first setting to the second setting is also reported. Of the 15 relations shown, the gap was reduced for 14, and 7 saw a reduction of at least 10%. 5.4 Lexical and Syntactic Analysis of AAE In this section, we discuss AAE lexical and syntactic variations observed in our dataset, with the aim of providing insight into decreased AA parsing accuracy, and the impact of various parser settings on their parsing accuracy. AAE contains a variety of phonological features which present themselves on Twitter through a number of lexical variations (Green, 2002; Jones, 2015), many of which are listed in §3.1, instances of which occur a total of 80 times in the AA tweets; notably, none occur in the WH tweets. We investigated the accuracy of various crossdomain parser settings on these lexical variants; for each of the baseline Morpho-Tagger, baseline ARK Tagger, ARK Tagger with embeddings, and ARK Tagger with synthetic data and embeddings models, we counted the number of instances of lexical variants from §3.1 for which the model gave the correct head with the correct label. While the lexical variants challenged all four models, switching from the Morpho-Tagger set1422 Morpho-Tagger ARK Tagger Relation AA Recall WH Recall Gap (WH - AA) AA Recall WH Recall Gap (WH - AA) Reduction compound 36.4 71.2 34.8 42.4 72.9 30.5 4.4 obl:tmod 25.0 51.7 26.7 43.8 55.2 11.4 15.3 nmod 28.6 54.4 25.8 45.7 51.5 5.8 20.1 cop 56.5 82.1 25.6 65.2 79.1 13.9 11.7 obl 41.4 65.4 24.0 56.8 62.5 5.7 18.3 cc 56.9 79.0 22.1 78.5 82.7 4.3 17.8 ccomp 33.3 54.2 20.8 40.5 54.2 13.7 7.1 obj 61.3 81.5 20.2 72.8 83.5 10.7 9.5 case 60.5 79.8 19.3 75.2 83.4 8.2 11.1 det 73.1 90.7 17.5 83.4 92.2 8.8 8.7 advmod 53.8 71.2 17.3 62.9 72.1 9.1 8.2 advcl 31.5 46.8 15.3 25.9 46.8 20.9 -5.6 root 56.4 71.6 15.2 62.8 74.0 11.2 4.0 xcomp 40.0 54.9 14.9 51.2 50.0 1.2 13.7 discourse 30.7 44.9 14.2 46.0 51.4 5.4 8.8 Table 4: Recall by relation type under UDPipe’s Morpho-Tagger and ARK Tagger settings (+synthetic+embeddings; (3) and (6) from Table 3; §5.3). Reduction is the reduction in performance gap from the Morpho-Tagger setting to the ARK Tagger setting; bolded numbers indicate a gap reduction of ≥10.0. Feature AA Count WH Count Example Dropped copula 44 0 MY bestfrienddd mad at me tho Habitual be, describing repeated actions 10 0 fees be looking upside my head likee ion kno wat be goingg on . I kno that clown, u don’t be around tho Dropped possessive marker 5 0 ATMENTION on Tv...tawkn bout dat man gf Twink rude lol can’t be calling ppl ugly that’s somebody child lol... Dropped 3rd person singular 5 0 When a female owe you sex you don’t even wanna have a conversation with her Future gone 4 0 she gone dance without da bands lol it is instead of there is 2 1 It was too much goin on in dat mofo . Completive done 1 0 damnnn I done let alot of time pass by . . Table 5: Examples of AAE syntactic phenomena and occurrence counts in the 250 AA and 250 WH tweet sets. ting to the ARK Tagger settings produced significant accuracy increases (Table 6). 
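The correct-head-with-correct-label criterion used for these counts is the same one underlying the per-relation figures in Table 4. A rough sketch of that per-relation calculation is given below; it assumes the same illustrative token format as above and is not necessarily the exact definition used for Table 4.

```python
from collections import Counter

# Sketch of per-relation (labeled-attachment) recall; the exact definition used
# for Table 4 may differ in detail. Gap reduction compares the WH - AA recall
# gap between two parser settings (here, Morpho-Tagger vs. ARK Tagger).

def recall_by_relation(gold_sents, pred_sents):
    correct, total = Counter(), Counter()
    for gold_sent, pred_sent in zip(gold_sents, pred_sents):
        for g, p in zip(gold_sent, pred_sent):
            rel = g["deprel"]
            total[rel] += 1
            if p["head"] == g["head"] and p["deprel"] == rel:
                correct[rel] += 1
    return {rel: 100.0 * correct[rel] / total[rel] for rel in total}

def gap_reduction(rel, aa1, wh1, aa2, wh2):
    """Reduction in the (WH - AA) recall gap for relation `rel` from setting 1 to setting 2."""
    return (wh1[rel] - aa1[rel]) - (wh2[rel] - aa2[rel])
```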
We observed that the greatest improvement came from using the ARK Tagger setting with Twitter-trained embeddings; the Twitter-specific lexical information provided by the embeddings was critical to recognizing the variants. Surprisingly, adding synthetic data decreased the model’s ability to parse the variants. We next investigated the presence of AAE syntactic phenomena in our dataset. Table 5 shows examples of seven well-documented AAE morphological and syntactic features and counts of their occurrences in our AA and WH tweet sets; again, while several of the phenomena, such as dropped copulas and habitual be, occur frequently in our AA tweets, there is only one instance of any of these features occurring in the WH tweet set. We measured the parsing accuracy for the two most frequent syntactic features, dropped copulas and habitual be, across the four models; accuracies are given in Table 6. For dropped copulas, we measured parsing correctness by checking if the parser correctly attached the subject to the correct predicate word via the nsubj relation; for the first example in Table 5, for example, we considered the parser correct if it attached bestfrienddd to mad via the nsubj relation. For habitual be, we checked for correct attachment via the aux or cop relations as in the first and second examples in Ta1423 AAE Feature Morpho-Tagger Baseline ARK Tagger Baseline ARK Tagger with Embeddings ARK Tagger with Synthetic, Embeddings Lexical Variants (§3.1) 16.3 (13/80) 61.3 (49/80) 63.8 (51/80) 57.5 (46/80) Dropped copula 54.5 (24/44) 70.5 (31/44) 61.4 (27/44) 68.2 (30/44) Habitual be 50.0 (5/10) 80.0 (8/10) 90.0 (9/10) 90.0 (9/10) Table 6: Parsing accuracies of syntactic and lexical variations across four UDPipe models (see §5.4). ble 5, respectively. As before, we observed significant increases in accuracy moving from the Morpho-Tagger to the ARK Tagger settings. However, neither adding embeddings nor synthetic data appeared to significantly increase accuracy for these features. From manual inspection, most of the dropped copulas errors appear to arise either from challenging questions (e.g. ATMENTION what yo number ?) or from mis-identification of the word to which to attach the subject (e.g. He claim he in love llh, where he was attached to llh rather than to love). 6 Conclusion While current neural dependency parsers are highly accurate on MAE, our analyses suggest that AAE text presents considerable challenges due to lexical and syntactic features which diverge systematically from MAE. While the cross-domain strategies we presented can greatly increase accurate parsing of these features, narrowing the performance gap between AAE- and MAE-like tweets, much work remains to be done for accurate parsing of even linguistically well-documented features. It remains an open question whether it is better to use a model with a smaller accuracy disparity (e.g. UDPipe), or a model with higher average accuracy, but a worse disparity (e.g. Deep Biaffine). The emerging literature on fairness in algorithms suggests interesting further challenges; for example, Kleinberg et al. (2017) and CorbettDavies et al. (2017) argue that as various commonly applied notions of fairness are mutually incompatible, algorithm designers must grapple with such trade-offs. 
Regardless, the modeling decision should be made in light of the application of interest; for example, applications like opinion analysis and information retrieval may benefit from equal (and possibly weaker) performance between groups, so that concepts or opinions inferred from groups of authors (e.g. AAE speakers) are not under-counted or under-represented in results returned to a user or analyst. Acknowledgments We thank the anonymous reviewers for their helpful comments. This work was supported by a Google Faculty Research Award, and a National Science Foundation Graduate Research Fellowship (No. 1451512). References Su Lin Blodgett, Lisa Green, and Brendan O’Connor. 2016. Demographic dialectal variation in social media: A case study of African-American English. Proceedings of EMNLP. Su Lin Blodgett and Brendan O’Connor. 2017. Racial disparity in natural language processing: A case study of social media African-American English. arXiv preprint arXiv:1707.00061; presented at Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) workshop at KDD 2017. David Chiang, Mona Diab, Nizar Habash, Owen Rambow, and Safiullah Shareef. 2006. Parsing Arabic dialects. In Proceedings of EACL. Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. 2017. Algorithmic decision making and the cost of fairness. In Proceedings of the KDD. ACM. Joachim Daiber and Rob van der Goot. 2016. The denoised web treebank: Evaluating dependency parsing under noisy input conditions. In Proceedings of LREC. Leon Derczynski, Alan Ritter, Sam Clark, and Kalina Bontcheva. 2013. Twitter part-of-speech tagging for all: Overcoming sparse and noisy data. In Recent Advances in Natural Language Processing, pages 198–206. Timothy Dozat and Christopher D Manning. 2016. Deep biaffine attention for neural dependency parsing. Proceedings of ICLR. 1424 Timothy Dozat, Peng Qi, and Christopher D. Manning. 2017. Stanford’s graph-based neural dependency parser at the CoNLL 2017 shared task. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Jacob Eisenstein. 2013. What to do about bad language on the internet. In HLT-NAACL, pages 359–369. Jennifer Foster, ¨Ozlem C¸ etinoglu, Joachim Wagner, Joseph Le Roux, Stephen Hogan, Joakim Nivre, Deirdre Hogan, and Josef Van Genabith. 2011. # hardtoparse: Pos tagging and parsing the twitterverse. In AAAI 2011 Workshop on Analyzing Microtext. Rob van der Goot and Gertjan van Noord. 2017. Parser adaptation for social media by integrating normalization. In Proceedings of ACL. Lisa Green, Toya A Wyatt, and Qiuana Lopez. 2007. Event arguments and ‘be’ in child African American English. University of Pennsylvania Working Papers in Linguistics, 13(2):8. Lisa J Green. 2002. African American English: A linguistic introduction. Cambridge University Press. Spence Green and Christopher D Manning. 2010. Better arabic parsing: Baselines, evaluations, and analysis. In Proceedings of COLING. ACL. Taylor Jones. 2015. Toward a description of African American Vernacular English dialect regions using “Black Twitter”. American Speech, 90(4). Anna Jørgensen, Dirk Hovy, and Anders Søgaard. 2016. Learning a pos tagger for aave-like language. In Proceedings of NAACL. Association for Computational Linguistics. David Jurgens, Yulia Tsvetkov, and Dan Jurafsky. 2017. Incorporating dialectal variability for socially equitable language identification. In Proceedings of ACL. Mohammad Khan, Markus Dickinson, and Sandra K¨ubler. 2013. 
Towards domain adaptation for parsing web data. In RANLP. Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan. 2017. Inherent trade-offs in the fair determination of risk scores. Proceedings of ITCS. Lingpeng Kong, Nathan Schneider, Swabha Swayamdipta, Archna Bhatia, Chris Dyer, and Noah A. Smith. 2014. A dependency parser for tweets. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1001–1012, Doha, Qatar. Association for Computational Linguistics. Yijia Liu, Yi Zhu, Wanxiang Che, Bing Qin, Nathan Schneider, and Noah A Smith. 2018. Parsing tweets into universal dependencies. Proceedings of NAACL. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Alexis Nasr, Geraldine Damnati, Aleksandra Guerraz, and Frederic Bechet. 2016. Syntactic parsing of chat language in contact center conversation corpus. In Annual SIGdial Meeting on Discourse and Dialogue. Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A multilingual treebank collection. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016). Brendan O’Connor, Michel Krieger, and David Ahn. 2010. TweetMotif: Exploratory search and topic summarization for Twitter. In Proceedings of the International AAAI Conference on Weblogs and Social Media. Olutobi Owoputi, Brendan O’Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A. Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. In Proceedings of NAACL. Manuela Sanguinetti, Cristina Bosco, Alessandro Mazzei, Alberto Lavelli, and Fabio Tamburini. 2017. Annotating Italian social media texts in universal dependencies. In Proceedings of Depling 2017. Yves Scherrer. 2011. Syntactic transformations for Swiss German dialects. In Proceedings of the First Workshop on Algorithms and Resources for Modelling of Dialects and Language Varieties. ACL. Nathan Schneider, Brendan O’Connor, Naomi Saphra, David Bamman, Manaal Faruqui, Noah A. Smith, Chris Dyer, and Jason Baldridge. 2013. A framework for (under)specifying dependency syntax without overloading annotators. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 51–60, Sofia, Bulgaria. Association for Computational Linguistics. Ian Stewart. 2014. Now we stronger than ever: African-american english syntax in twitter. In Proceedings of the Student Research Workshop at the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 31– 37, Gothenburg, Sweden. Association for Computational Linguistics. Milan Straka, Jan Hajic, and Jana Strakov´a. 2016. Udpipe: Trainable pipeline for processing conll-u files performing tokenization, morphological analysis, pos tagging and parsing. In Proceedings of LREC. 1425 Hongmin Wang, Yue Zhang, GuangYong Leonard Chan, Jie Yang, and Hai Leong Chieu. 2017. Universal dependencies parsing for colloquial Singaporean english. Proceedings of ACL. Aaron Steven White, Drew Reisinger, Keisuke Sakaguchi, Tim Vieira, Sheng Zhang, Rachel Rudinger, Kyle Rawlins, and Benjamin Van Durme. 2016. Universal decompositional semantics on universal dependencies. 
In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1713–1723, Austin, Texas. Association for Computational Linguistics. Daniel Zeman, Martin Popel, Milan Straka, Jan Hajic, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, et al. 2017. CoNLL 2017 shared task: Multilingual parsing from raw text to universal dependencies. Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Congle Zhang, Tyler Baldwin, Howard Ho, Benny Kimelfeld, and Yunyao Li. 2013. Adaptive parsercentric text normalization. In Proceedings of ACL.
2018
131
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1426–1436 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1426 LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better Adhiguna Kuncoro♠♣Chris Dyer♠John Hale♠♥ Dani Yogatama♠Stephen Clark♠Phil Blunsom♠♣ ♠DeepMind, London, UK ♣Department of Computer Science, University of Oxford, UK ♥Department of Linguistics, Cornell University, NY, USA {akuncoro,cdyer,jthale,dyogatama,clarkstephen,pblunsom}@google.com Abstract Language exhibits hierarchical structure, but recent work using a subject-verb agreement diagnostic argued that state-ofthe-art language models, LSTMs, fail to learn long-range syntax-sensitive dependencies. Using the same diagnostic, we show that, in fact, LSTMs do succeed in learning such dependencies—provided they have enough capacity. We then explore whether models that have access to explicit syntactic information learn agreement more effectively, and how the way in which this structural information is incorporated into the model impacts performance. We find that the mere presence of syntactic information does not improve accuracy, but when model architecture is determined by syntax, number agreement is improved. Further, we find that the choice of how syntactic structure is built affects how well number agreement is learned: top-down construction outperforms leftcorner and bottom-up variants in capturing long-distance structural dependencies. 1 Introduction Recurrent neural networks (RNNs) are remarkably effective models of sequential data. Recent years have witnessed the widespread adoption of recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) in various NLP tasks, with state of the art results in language modeling (Melis et al., 2018) and conditional generation tasks like machine translation (Bahdanau et al., 2015) and text summarization (See et al., 2017). Here we revisit the question asked by Linzen et al. (2016): as RNNs model word sequences without explicit notions of hierarchical structure, Figure 1: An example of the number agreement task with two attractors and a subject-verb distance of five. to what extent are these models able to learn non-local syntactic dependencies in natural language? Identifying number agreement between subjects and verbs—especially in the presence of attractors—can be understood as a cognitivelymotivated probe that seeks to distinguish hierarchical theories from sequential ones, as models that rely on sequential cues like the most recent noun would favor the incorrect verb form. We provide an example of this task in Fig. 1, where the plural form of the verb have agrees with the distant subject parts, rather than the adjacent attractors (underlined) of the singular form. Contrary to the findings of Linzen et al. (2016), our experiments suggest that sequential LSTMs are able to capture structural dependencies to a large extent, even for cases with multiple attractors (§2). Our finding suggests that network capacity plays a crucial role in capturing structural dependencies with multiple attractors. Nevertheless, we find that a strong character LSTM language model—which lacks explicit word representation and has to capture much longer sequential dependencies in order to learn non-local structural dependencies effectively—performs much worse in the number agreement task. 
Given the strong performance of word-based LSTM language models, are there are any substantial benefits, in terms of number agreement accuracy, to explicitly modeling hierarchical structures as an inductive bias? We discover that a 1427 certain class of LSTM language models that explicitly models syntactic structures, the recurrent neural network grammars (Dyer et al., 2016, RNNGs), considerably outperforms sequential LSTM language models for cases with multiple attractors (§3). We present experiments affirming that this gain is due to an explicit composition operator rather than the presence of predicted syntactic annotations. Rather surprisingly, syntactic LSTM language models without explicit composition have no advantage over sequential LSTMs that operate on word sequences, although these models can nevertheless be excellent predictors of phrase structures (Choe and Charniak, 2016). Having established the importance of modeling structures, we explore the hypothesis that how we build the structure affects the model’s ability to identify structural dependencies in English. As RNNGs build phrase-structure trees through top-down operations, we propose extensions to the structure-building sequences and model architecture that enable left-corner (Henderson, 2003, 2004) and bottom-up (Chelba and Jelinek, 2000; Emami and Jelinek, 2005) generation orders (§4). Extensive prior work has characterized topdown, left-corner, and bottom-up parsing strategies in terms of cognitive plausibility (Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992) and neurophysiological evidence in human sentence processing (Nelson et al., 2017). Here we move away from the realm of parsing and evaluate the three strategies as models of generation instead, and address the following empirical question: which generation order is most appropriately biased to model structural dependencies in English, as indicated by number agreement accuracy? Our key finding is that the top-down generation outperforms left-corner and bottom-up variants for difficult cases with multiple attractors. In theory, the three traversal strategies approximate the same chain rule that decompose the joint probability of words and phrase-structure trees, denoted as p(x, y), differently and as such will impose different biases on the learner. In §4.3, we show that the three variants achieve similar perplexities on a held-out validation set. As we observe different patterns in number agreement, this demonstrates that while perplexity can be a useful diagnostic tool, it may not be sensitive enough for comparing models in terms of how well they capture grammatical intuitions. 2 Number Agreement with LSTM Language Models We revisit the number agreement task with LSTMs trained on language modeling objectives, as proposed by Linzen et al. (2016). Experimental Settings. We use the same parsed Wikipedia corpus, verb inflectors, preprocessing steps, and dataset split as Linzen et al. (2016).1 Word types beyond the most frequent 10,000 are converted to their respective POS tags. We summarize the corpus statistics of the dataset, along with the test set distribution of the number of attractors, in Table 1. Similar to Linzen et al. (2016), we only include test cases where all intervening nouns are of the opposite number forms than the subject noun. All models are implemented using the DyNet library (Neubig et al., 2017). 
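Agreement is evaluated by comparing the probability the language model assigns to the correct and to the incorrect verb form given the same prefix, as described in the discussion following Table 1. The following is a minimal sketch of that protocol with an assumed model interface; it is not the DyNet implementation used here.

```python
from collections import Counter

# Minimal sketch of the number agreement evaluation (assumed interface, not the
# DyNet code used in this paper): the LM scores the prefix followed by each verb
# form, and an error is counted when the incorrect form is more probable.
# Error rates are broken down by the number of attractors, as in Table 2.

def agreement_error_rates(lm_logprob, test_cases):
    # test_cases: iterable of (prefix_tokens, correct_verb, incorrect_verb, n_attractors)
    errors, total = Counter(), Counter()
    for prefix, correct, incorrect, n_attr in test_cases:
        total[n_attr] += 1
        # lm_logprob(tokens) is assumed to return log p(tokens); since the prefix
        # is shared, this is equivalent to comparing p(verb | prefix).
        if lm_logprob(prefix + [incorrect]) > lm_logprob(prefix + [correct]):
            errors[n_attr] += 1
    return {n: 100.0 * errors[n] / total[n] for n in sorted(total)}
```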
Train Test Sentences 141,948 1,211,080 Types 10,025 10,025 Tokens 3,159,622 26,512,851 # Attractors # Instances % Instances n = 0 1,146,330 94.7% n = 1 52,599 4.3% n = 2 9,380 0.77% n = 3 2,051 0.17% n = 4 561 0.05% n = 5 159 0.01% Table 1: Corpus statistics of the Linzen et al. (2016) number agreement dataset. Training was done using a language modeling objective that predicts the next word given the prefix; at test time we compute agreement error rates by comparing the probability of the correct verb form with the incorrect one. We report performance of a few different LSTM hidden layer configurations, while other hyper-parameters are selected based on a grid search.2 Following Linzen 1The dataset and scripts are obtained from https:// github.com/TalLinzen/rnn_agreement. 2Based on the grid search results, we used the following hyper-parameters that work well across different hidden layer sizes: 1-layer LSTM, SGD optimizers with an initial learning rate of 0.2, a learning rate decay of 0.10 after 10 epochs, LSTM dropout rates of 0.2, an input embedding dimension of 50, and a batch size of 10 sentences. Our use of singlelayer LSTMs and 50-dimensional word embedding (learned from scratch) as one of the baselines is consistent with the experimental settings of Linzen et al. (2016). 1428 n=0 n=1 n=2 n=3 n=4 Random 50.0 50.0 50.0 50.0 50.0 Majority 32.0 32.0 32.0 32.0 32.0 LSTM, H=50† 6.8 32.6 ≈50 ≈65 ≈70 Our LSTM, H=50 2.4 8.0 15.7 26.1 34.65 Our LSTM, H=150 1.5 4.5 9.0 14.3 17.6 Our LSTM, H=250 1.4 3.3 5.9 9.7 13.9 Our LSTM, H=350 1.3 3.0 5.7 9.7 13.8 1B Word LSTM (repl) 2.8 8.0 14.0 21.8 20.0 Char LSTM 1.2 5.5 11.8 20.4 27.8 Table 2: Number agreement error rates for various LSTM language models, broken down by the number of attractors. The top two rows represent the random and majority class baselines, while the next row (†) is the reported result from Linzen et al. (2016) for an LSTM language model with 50 hidden units (some entries, denoted by ≈, are approximately derived from a chart, since Linzen et al. (2016) did not provide a full table of results). We report results of our LSTM implementations of various hidden layer sizes, along with our re-run of the Jozefowicz et al. (2016) language model, in the next five rows. We lastly report the performance of a state of the art character LSTM baseline with a large model capacity (Melis et al., 2018). et al. (2016), we include the results of our replication3 of the large-scale language model of Jozefowicz et al. (2016) that was trained on the One Billion Word Benchmark.4 Hyper-parameter tuning is based on validation set perplexity. Discussion. Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement. For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite comparable performance for zero attractor cases, suggesting that network capacity plays an especially important role in propagating relevant structural information across a large number of steps.5 Our experiment independently derives the 3When evaluating the large-scale language model, the primary difference is that we do not map infrequent word types to their POS tags and that we subsample to obtain 500 test instances of each number of attractor due to computation cost; both preprocessing were also done by Linzen et al. (2016). 
4The pretrained large-scale language model is obtained from https://github.com/tensorflow/models/ tree/master/research/lm_1b. 5This trend is also observed by comparing results with H=150 and H=250. While both models achieve near-identical performance for zero attractor, the model with H=250 persame finding as the recent work of Gulordava et al. (2018), who also find that LSTMs trained with language modeling objectives are able to learn number agreement well; here we additionally identify model capacity as one of the reasons for the discrepancy with the Linzen et al. (2016) results. While the pretrained large-scale language model of Jozefowicz et al. (2016) has certain advantages in terms of model capacity, more training data, and richer vocabulary, we suspect that the poorer performance is due to differences between their training domain and the number agreement testing domain, although the model still performs reasonably well in the number agreement test set. Prior work has confirmed the notion that, in many cases, statistical models are able to achieve good performance under some aggregate metric by overfitting to patterns that are predictive in most cases, often at the expense of more difficult, infrequent instances that require deeper language understanding abilities (Rimell et al., 2009; Jia and Liang, 2017). In the vast majority of cases, structural dependencies between subjects and verbs highly overlap with sequential dependencies (Table 1). Nevertheless, the fact that number agreement accuracy gets worse as the number of attractors increases is consistent with a sequential recency bias in LSTMs: under this conjecture, identifying the correct structural dependency becomes harder when there are more adjacent nouns of different number forms than the true subject. If the sequential recency conjecture is correct, then LSTMs would perform worse when the structural dependency is more distant in the sequences, compared to cases where the structural dependency is more adjacent. We empirically test this conjecture by running a strong character-based LSTM language model of Melis et al. (2018) that achieved state of the art results on EnWiki8 from the Hutter Prize dataset (Hutter, 2012), with 1,800 hidden units and 10 million parameters. The character LSTM is trained, validated, and tested6 on the same split of the Linzen et al. (2016) number agreement dataset. A priori, we expect that number agreement is harder for character LSTMs for two reasons. First, character LSTMs lack explicit word representaforms much better for cases with multiple attractors. 6For testing, we similarly evaluate number agreement accuracy by comparing the probability of the correct and incorrect verb form given the prefix, as represented by the respective character sequences. 1429 tions, thus succeeding in this task requires identifying structural dependencies between two sequences of character tokens, while word-based LSTMs only need to resolve dependencies between word tokens. Second, by nature of modeling characters, non-local structural dependencies are sequentially further apart than in the wordbased language model. On the other hand, character LSTMs have the ability to exploit and share informative morphological cues, such as the fact that plural nouns in English tend to end with ‘s’. As demonstrated on the last row of Table 2, we find that the character LSTM language model performs much worse at number agreement with multiple attractors compared to its word-based counterparts. 
This finding is consistent with that of Sennrich (2017), who find that character-level decoders in neural machine translation perform worse than subword models in capturing morphosyntactic agreement. To some extent, our finding demonstrates the limitations that character LSTMs face in learning structure from language modeling objectives, despite earlier evidence that character LSTM language models are able to implicitly acquire a lexicon (Le Godais et al., 2017). 3 Number Agreement with RNNGs Given the strong performance of sequential LSTMs in number agreement, is there any further benefit to explicitly modeling hierarchical structures? We focus on recurrent neural network grammars (Dyer et al., 2016, RNNGs), which jointly model the probability of phrase-structure trees and strings, p(x, y), through structurebuilding actions and explicit compositions for representing completed constituents. Our choice of RNNGs is motivated by the findings of Kuncoro et al. (2017), who find evidence for syntactic headedness in RNNG phrasal representations. Intuitively, the ability to learn heads is beneficial for this task, as the representation for the noun phrase “The flowers in the vase” would be similar to the syntactic head flowers rather than vase. In some sense, the composition operator can be understood as injecting a structural recency bias into the model design, as subjects and verbs that are sequentially apart are encouraged to be close together in the RNNGs’ representation. 3.1 Recurrent Neural Network Grammars RNNGs (Dyer et al., 2016) are language models that estimate the joint probability of string terminals and phrase-structure tree nonterminals. Here we use stack-only RNNGs that achieve better perplexity and parsing performance (Kuncoro et al., 2017). Given the current stack configuration, the objective function of RNNGs is to predict the correct structure-building operation according to a top-down, left-to-right traversal of the phrasestructure tree; a partial traversal for the input sentence “The flowers in the vase are blooming” is illustrated in Fig. 3(a).7 The structural inductive bias of RNNGs derives from an explicit composition operator that represents completed constituents; for instance, the constituent (NP The flowers) is represented by a single composite element on the stack, rather than as four separate symbols. During each REDUCE action, the topmost stack elements that belong to the new constituent are popped from the stack and then composed by the composition function; the composed symbol is then pushed back into the stack. The model is trained in an end-to-end manner by minimizing the cross-entropy loss relative to a sample of gold trees. 3.2 Experiments Here we summarize the experimental settings of running RNNGs on the number agreement dataset and discuss the empirical findings. Experimental settings. We obtain phrasestructure trees for the Linzen et al. (2016) dataset using a publicly available discriminative model8 trained on the Penn Treebank (Marcus et al., 1993). At training time, we use these predicted trees to derive action sequences on the training set, and train the RNNG model on these sequences.9 At test time, we compare the probabilities of the correct and incorrect verb forms given the prefix, which now includes both nonterminal and terminal symbols. An example of the stack contents (i.e. the prefix) when predicting the verb is provided in Fig. 3(a). 
We similarly run a grid search over the same hyper-parameter range as the sequential 7For a complete example of action sequences, we refer the reader to the example provided by Dyer et al. (2016). 8https://github.com/clab/rnng 9Earlier work on RNNGs (Dyer et al., 2016; Kuncoro et al., 2017) train the model on gold phrase-structure trees on the Penn Treebank, while here we train the RNNG on the number agreement dataset based on predicted trees from another parser. 1430 LSTM and compare the results with the strongest sequential LSTM baseline from §2. Figure 2: Number agreement error rates for sequential LSTM language models (left), sequential syntactic LSTM language models (Choe and Charniak, 2016, center), and RNNGs (right). Discussion. Fig. 2 shows that RNNGs (rightmost) achieve much better number agreement accuracy compared to LSTM language models (leftmost) for difficult cases with four and five attractors, with around 30% error rate reductions, along with a 13% error rate reduction (from 9% to 7.8%) for three attractors. We attribute the slightly worse performance of RNNGs on cases with zero and one attractor to the presence of intervening structure-building actions that separate the subject and the verb, such as a REDUCE (step 6 in Fig. 3(a)) action to complete the noun phrase and at least one action to predict a verb phrase (step 15 in Fig. 3(a)) before the verb itself is introduced, while LSTM language models benefit from shorter dependencies for zero and one attractor cases. The performance gain of RNNGs might arise from two potential causes. First, RNNGs have access to predicted syntactic annotations, while LSTM language models operate solely on word sequences. Second, RNNGs incorporate explicit compositions, which encourage hierarhical representations and potentially the discovery of syntactic (rather than sequential) dependencies. Would LSTMs that have access to syntactic annotations, but without the explicit composition function, benefit from the same performance gain as RNNGs? To answer this question, we run sequential LSTMs over the same phrase-structure trees (Choe and Charniak, 2016), similarly estimating the joint probability of phrase-structure nonterminals and string terminals but without an explicit composition operator. Taking the example in Fig. 3(a), the sequential syntactic LSTM would have fifteen10 symbols on the LSTM when predicting the verb, as opposed to three symbols in the case of RNNGs’ stack LSTM. In theory, the sequential LSTM over the phrase-structure trees (Choe and Charniak, 2016) may be able to incorporate a similar, albeit implicit, composition process as RNNGs and consequently derive similarly syntactic heads, although there is no inductive bias that explicitly encourages such process. Fig. 2 suggests that the sequential syntactic LSTMs (center) perform comparably with sequential LSTMs without syntax for multiple attractor cases, and worse than RNNGs for nearly all attractors; the gap is highest for multiple attractors. This result showcases the importance of an explicit composition operator and hierarchical representations in identifying structural dependencies, as indicated by number agreement accuracy. Our finding is consistent with the recent work of Yogatama et al. (2018), who find that introducing elements of hierarchical modeling through a stackstructured memory is beneficial for number agreement, outperforming LSTM language models and attention-augmented variants by increasing margins as the number of attractor grows. 
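The explicit composition that this comparison points to can be sketched as follows. This is a schematic rendering of the REDUCE step described in §3.1, with illustrative types and names rather than the actual DyNet implementation; in Dyer et al. (2016) the composition function is a bidirectional LSTM over the children together with an embedding of the nonterminal label.

```python
from dataclasses import dataclass

@dataclass
class StackElem:
    label: str                # nonterminal label (e.g. "NP") or a word
    is_open_nt: bool = False  # True for an as-yet-unclosed NT(X) symbol
    vector: object = None     # embedding or composed representation

def reduce_action(stack, compose):
    """Schematic REDUCE: pop the children of the newest constituent, compose
    them into a single vector, and push the composite symbol back."""
    children = []
    while not stack[-1].is_open_nt:        # assumes a matching open NT(X) is below
        children.append(stack.pop())
    open_nt = stack.pop()
    children.reverse()
    composed = compose(open_nt, children)  # biLSTM over children + label in Dyer et al. (2016)
    stack.append(StackElem(label=open_nt.label, vector=composed))
    return stack
```

After a REDUCE, a constituent such as (NP The flowers) occupies a single stack position, which is what keeps the subject head close to the verb in the stack representation.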
3.3 Further Analysis
In order to better interpret the results, we conduct further analysis into the perplexities of each model, followed by a discussion on the effect of incrementality constraints on the RNNG when predicting number agreement.

Perplexity. To what extent does the success of RNNGs in the number agreement task with multiple attractors correlate with better performance under the perplexity metric? We answer this question by using an importance sampling marginalization procedure (Dyer et al., 2016) to obtain an estimate of p(x) under both RNNGs and the sequential syntactic LSTM model. Following Dyer et al. (2016), for each sentence on the validation set we sample 100 candidate trees from a discriminative model11 as our proposal distribution. As demonstrated in Table 3, the LSTM language model has the lowest validation set perplexity despite substantially worse performance than RNNGs in number agreement with multiple attractors, suggesting that lower perplexity is not necessarily correlated with number agreement success.

10 In the model of Choe and Charniak (2016), each nonterminal, terminal, and closed parenthesis symbol is represented as an element on the LSTM sequence.
11 https://github.com/clab/rnng

Validation ppl.
LSTM LM 72.6
Seq. Syntactic LSTM 79.2
RNNGs 77.9
Table 3: Validation set perplexity of LSTM language model, sequential syntactic LSTM, and RNNGs.

Incrementality constraints. As the syntactic prefix was derived from a discriminative model that has access to unprocessed words, one potential concern is that this prefix might violate the incrementality constraints and benefit the RNNG over the LSTM language model. To address this concern, we remark that the empirical evidence from Fig. 2 and Table 3 indicates that the LSTM language model without syntactic annotation outperforms the sequential LSTM with syntactic annotation in terms of both perplexity and number agreement throughout nearly all attractor settings, suggesting that the predicted syntactic prefix does not give any unfair advantage to the syntactic models. Furthermore, we run an experiment where the syntactic prefix is instead derived from an incremental beam search procedure of Fried et al. (2017).12 To this end, we take the highest scoring beam entry at the time that the verb is generated to be the syntactic prefix; this procedure is applied to both the correct and incorrect verb forms.13 We then similarly compare the probabilities of the correct and incorrect verb form given each respective syntactic prefix to obtain number agreement accuracy. Our finding suggests that using the fully incremental tree prefix leads to even better RNNG number agreement performance for four and five attractors, achieving 7.1% and 8.2% error rates, respectively, compared to 9.4% and 12% for the RNNG error rates in Fig. 2.

12 As the beam search procedure is time-consuming, we randomly sample 500 cases for each attractor and compute the number agreement accuracy on these samples.
13 Consequently, the correct and incorrect forms of the sentence might have different partial trees, as the highest scoring beam entries may be different for each alternative.

4 Top-Down, Left-Corner, and Bottom-Up Traversals
In this section, we propose two new variants of RNNGs that construct trees using a different construction order than the top-down, left-to-right order used above. These are a bottom-up construction order (§4.1) and a left-corner construction order (§4.2), analogous to the well-known parsing strategies (e.g. Hale, 2014, chapter 3).
They differ from these classic strategies insofar as they do not announce the phrase-structural content of an entire branch at the same time, adopting instead a node-by-node enumeration reminiscent of Markov Grammars (Charniak, 1997). This step-by-step arrangement extends to the derived string as well; since all variants generate words from left to right, the models can be compared using number agreement as a diagnostic.14

Here we state our hypothesis on why the build order matters. The three generation strategies represent different chain rule decompositions of the joint probability of strings and phrase-structure trees, thereby imposing different biases on the learner. Earlier work in parsing has characterized the plausibility of top-down, left-corner, and bottom-up strategies as viable candidates of human sentence processing, especially in terms of memory constraints and human difficulties with center embedding constructions (Johnson-Laird, 1983; Pulman, 1986; Abney and Johnson, 1991; Resnik, 1992, inter alia), along with neurophysiological evidence in human sentence processing (Nelson et al., 2017). Here we cast the three strategies as models of language generation (Manning and Carpenter, 1997), and focus on the empirical question: which generation order has the most appropriate bias in modeling non-local structural dependencies in English?

These alternative orders organize the learning problem so as to yield intermediate states in generation that condition on different aspects of the grammatical structure. In number agreement, this amounts to making an agreement controller, such as the word flowers in Fig. 3, more or less salient. If it is more salient, the model should be better able to inflect the main verb in agreement with this controller, without getting distracted by the attractors. The three proposed build orders are compared in Fig. 3, showing the respective configurations (i.e. the prefix) when generating the main verb in a sentence with a single attractor.15 In addition, we show concrete action sequences for a simpler sentence in each section.

14 Only the order in which these models build the nonterminal symbols is different, while the terminal symbols are still generated in a left-to-right manner in all variants.
15 Although the stack configuration at the time of verb generation varies only slightly, the configurations encountered during the history of the full generation process vary considerably in the invariances and the kinds of actions they predict.

4.1 Bottom-Up Traversal
In bottom-up traversals, phrases are recursively constructed and labeled with the nonterminal type once all their daughters have been built, as illustrated in Fig. 4. Bottom-up traversals benefit from shorter stack depths compared to top-down due to the lack of incomplete nonterminals. As the commitment to label the nonterminal type of a phrase is delayed until its constituents are complete, this means that the generation of a child node cannot condition on the label of its parent node. In n-ary branching trees, bottom-up completion of constituents requires a procedure for determining how many of the most recent elements on the stack should be daughters of the node that is being constructed.16 Conceptually, rather than having a single REDUCE operation as we have before, we have a complex REDUCE(X, n) operation that must determine the type of the constituent (i.e., X) as well as the number of daughters (i.e., n). In step 5 of Fig. 4, the newly formed NP constituent only covers the terminal worms, and neither the unattached terminal eats nor the constituent (NP The fox) is part of the new noun phrase.

16 This mechanism is not necessary with strictly binary branching trees, since each new nonterminal always consists of the two children at the top of the stack.

We implement this extent decision using a stick-breaking construction—using the stack LSTM encoding, a single-layer feedforward network, and a logistic output layer—which decides whether the top element on the stack should be the leftmost child of the new constituent (i.e. whether or not the new constituent is complete after popping the current topmost stack element), as illustrated in Fig. 5. If not, the process is then repeated after the topmost stack element is popped. Once the extent of the new nonterminal has been decided, we parameterize the decision of the nonterminal label type; in Fig. 5 this is an NP. A second difference to top-down generation is that when a single constituent remains on the stack, the sentence is not necessarily complete (see step 3 of Fig. 4 for examples where this happens). We thus introduce an explicit STOP action (step 8, Fig. 4), indicating the tree is complete, which is only assigned non-zero probability when the stack has a single complete constituent.

4.2 Left-Corner Traversal
Left-corner traversals combine some aspects of top-down and bottom-up processing. As illustrated in Fig. 6, this works by first generating the leftmost terminal of the tree, The (step 0), before proceeding bottom-up to predict its parent NP (step 1) and then top-down to predict the rest of its children (step 2). A REDUCE action similarly calls the composition operator once the phrase is complete (e.g. step 3). The complete constituent (NP The fox) is the leftmost child of its parent node, thus an NT_SW(S) action is done next (step 4). The NT_SW(X) action is similar to the NT(X) from the top-down generator, in that it introduces an open nonterminal node and must be matched later by a corresponding REDUCE operation, but, in addition, swaps the two topmost elements at the top of the stack. This is necessary because the parent nonterminal node is not built until after its left-most child has been constructed. In step 1 of Fig. 6, with a single element The on the stack, the action NT_SW(NP) adds the open nonterminal symbol NP to become the topmost stack element, but after applying the swap operator the stack now contains (NP | The (step 2).

4.3 Experiments
We optimize the hyper-parameters of each RNNG variant using grid searches based on validation set perplexity. Table 4 summarizes average stack depths and perplexities17 on the Linzen et al. (2016) validation set. We evaluate each of the variants in terms of number agreement accuracy as evidence of its suitability to model structural dependencies in English, presented in Table 5. To account for randomness in training, we report the error rate summary statistics of ten different runs.

Avg. stack depth Ppl.
TD 12.29 94.90
LC 11.45 95.86
BU 7.41 96.53
Table 4: Average stack depth and validation set perplexity for top-down (TD), left-corner (LC), and bottom-up (BU) RNNGs.

17 Here we measure perplexity over p(x, y), where y is the presumptive gold tree on the Linzen et al. (2016) dataset. Dyer et al.
(2016) instead used an importance sampling procedure to marginalize and obtain an estimate of p(x).

[Figure 3 shows three panels, (a)–(c), each pairing the partial tree structure with the generator's stack contents at the point where the word are is generated.]

Figure 3: The (a) top-down, (b) bottom-up, and (c) left-corner build order variants showing in black the structure that exists as well as the generator's stack contents when the word are is generated during the derivation of the sentence The flowers in the vase are blooming. Structure in grey indicates material that will be generated subsequent to this. Circled numbers indicate the time when the corresponding structure/word is constructed. In (a) and (c), nonterminals are generated by a matched pair of NT and REDUCE operations, while in (b) they are introduced by a single complex REDUCE operation.

Input: The fox eats worms
Step  Stack                                   Action
0                                             GEN(The)
1     The                                     GEN(fox)
2     The | fox                               REDUCE(NP, 2)
3     (NP The fox)                            GEN(eats)
4     (NP The fox) | eats                     GEN(worms)
5     (NP The fox) | eats | worms             REDUCE(NP, 1)
6     (NP The fox) | eats | (NP worms)        REDUCE(VP, 2)
7     (NP The fox) | (VP eats (NP worms))     REDUCE(S, 2)
8     (S (NP The fox) (VP eats (NP worms)))   STOP

Figure 4: Example Derivation for Bottom-Up Traversal. ' | ' indicates separate elements on the stack. The REDUCE(X, n) action takes the top n elements on the stack and creates a new constituent of type X with the composition function.

      Avg.(±sdev)/min/max
      n=2                   n=3                   n=4
LM    5.8(±0.2)/5.5/6.0     9.6(±0.7)/8.8/10.1    14.1(±1.2)/13.0/15.3
TD    5.5(±0.4)/4.9/5.8     7.8(±0.6)/7.4/8.0     8.9(±1.1)/7.9/9.8
LC    5.4(±0.3)/5.2/5.5     8.2(±0.4)/7.9/8.7     9.9(±1.3)/8.8/11.5
BU    5.7(±0.3)/5.5/5.8     8.5(±0.7)/8.0/9.3     9.7(±1.1)/9.0/11.3

Table 5: Number agreement error rates for top-down (TD), left-corner (LC), and bottom-up (BU) RNNGs, broken down by the number of attractors. LM indicates the best sequential language model baseline (§2). We report the mean, standard deviation, and minimum/maximum of 10 different random seeds of each model.

Figure 5: Architecture to determine type and span of new constituents during bottom-up generation.
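To make the bottom-up action semantics concrete, here is a minimal Python sketch of the stack manipulation performed by GEN and REDUCE(X, n), together with a stick-breaking loop for the extent decision of Fig. 5. It illustrates only the control flow: the Node constructor stands in for the composition function, and is_leftmost_child stands in for the stack-LSTM, feedforward, and logistic scorer of the actual model; none of this is the authors' released implementation.

from dataclasses import dataclass
from typing import List, Union

@dataclass
class Node:
    # A completed constituent; building one stands in for the composition function.
    label: str
    children: List[Union["Node", str]]

    def __str__(self):
        return "(" + self.label + " " + " ".join(str(c) for c in self.children) + ")"

def gen(stack, word):
    # GEN(w): push the terminal w onto the stack.
    stack.append(word)

def reduce_(stack, label, n):
    # REDUCE(X, n): pop the top n stack elements and compose them
    # into a new constituent of type X (cf. Fig. 4).
    children = stack[-n:]
    del stack[-n:]
    stack.append(Node(label, children))

def reduce_stick_breaking(stack, label, is_leftmost_child):
    # Extent decision via stick breaking (cf. Fig. 5): keep popping while
    # the scorer says the element just popped is not yet the leftmost
    # child, i.e. the new constituent is not yet complete.
    children = [stack.pop()]
    while stack and not is_leftmost_child(stack, children):
        children.insert(0, stack.pop())
    stack.append(Node(label, children))

# Replay the bottom-up derivation of Fig. 4 with explicit REDUCE arities.
stack = []
gen(stack, "The")
gen(stack, "fox")
reduce_(stack, "NP", 2)
gen(stack, "eats")
gen(stack, "worms")
reduce_(stack, "NP", 1)
reduce_(stack, "VP", 2)
reduce_(stack, "S", 2)
print(stack[0])   # (S (NP The fox) (VP eats (NP worms)))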
Input: The fox eats worms
Step  Stack                                              Action
0                                                        GEN(The)
1     The                                                NT_SW(NP)
2     (NP | The                                          GEN(fox)
3     (NP | The | fox                                    REDUCE
4     (NP The fox)                                       NT_SW(S)
5     (S | (NP The fox)                                  GEN(eats)
6     (S | (NP The fox) | eats                           NT_SW(VP)
7     (S | (NP The fox) | (VP | eats                     GEN(worms)
8     (S | (NP The fox) | (VP | eats | worms             NT_SW(NP)
9     (S | (NP The fox) | (VP | eats | (NP | worms       REDUCE
10    (S | (NP The fox) | (VP | eats | (NP worms)        REDUCE
11    (S | (NP The fox) | (VP eats (NP worms))           REDUCE
12    (S (NP The fox) (VP eats (NP worms)))              N/A

Figure 6: Example Derivation for left-corner traversal. Each NT_SW(X) action adds the open nonterminal symbol (X to the stack, followed by a deterministic swap operator that swaps the top two elements on the stack.

Discussion. In Table 5, we focus on empirical results for cases where the structural dependencies matter the most, corresponding to cases with two, three, and four attractors. All three RNNG variants outperform the sequential LSTM language model baseline for these cases. Nevertheless, the top-down variant outperforms both left-corner and bottom-up strategies for difficult cases with three or more attractors, suggesting that the top-down strategy is most appropriately biased to model difficult number agreement dependencies in English. We run an approximate randomization test by stratifying the output and permuting within each stratum (Yeh, 2000) and find that, for four attractors, the performance difference between the top-down RNNG and the other variants is statistically significant at p < 0.05.

The success of the top-down traversal in the domain of number-agreement prediction is consistent with a classical view in parsing that argues top-down parsing is the most human-like parsing strategy since it is the most anticipatory. Only anticipatory representations, it is said, could explain the rapid, incremental processing that humans seem to exhibit (Marslen-Wilson, 1973; Tanenhaus et al., 1995); this line of thinking similarly motivates Charniak (2010), among others. While most work in this domain has been concerned with the parsing problem, our findings suggest that anticipatory mechanisms are also beneficial in capturing structural dependencies in language modeling. We note that our results are achieved using models that, in theory, are able to condition on the entire derivation history, while earlier work in sentence processing has focused on cognitive memory considerations, such as the memory-bounded model of Schuler et al. (2010).

5 Conclusion
Given enough capacity, LSTMs trained on language modeling objectives are able to learn syntax-sensitive dependencies, as evidenced by accurate number agreement with multiple attractors. Despite this strong performance, we discover that explicit modeling of structure does improve the model's ability to discover non-local structural dependencies when determining the distribution over subsequent word generation. Recurrent neural network grammars (RNNGs), which jointly model phrase-structure trees and strings and employ an explicit composition operator, substantially outperform LSTM language models and syntactic language models without explicit composition; this highlights the importance of a hierarchical inductive bias in capturing structural dependencies. We explore the possibility that how the structure is built affects number agreement performance.
Through novel extensions to RNNGs that enable the use of left-corner and bottom-up generation strategies, we discover that this is indeed the case: the three RNNG variants have different generalization properties for number agreement, with the top-down traversal strategy performing best for cases with multiple attractors. Acknowledgments We would like to thank Tal Linzen for his help in data preparation and answering various questions. We also thank Laura Rimell, Nando de Freitas, and the three anonymous reviewers for their helpful comments and suggestions. 1435 References Steven Abney and Mark Johnson. 1991. Memory requirements and local ambiguities for parsing strategies. Journal of Psycholinguistic Research . Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. of ICLR. Eugene Charniak. 1997. Statistical techniques for natural language parsing. AI Magazine . Eugene Charniak. 2010. Top-down nearly-contextsensitive parsing. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Cambridge, MA, pages 674–683. http://www.aclweb.org/anthology/D10-1066. Ciprian Chelba and Frederick Jelinek. 2000. Structured language modeling. Computer Speech and Language 14(4). Do Kook Choe and Eugene Charniak. 2016. Parsing as language modeling. In Proc. of EMNLP. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proc. of NAACL. Ahmad Emami and Frederick Jelinek. 2005. A neural syntactic language model. Machine Learning 60:195–227. Daniel Fried, Mitchell Stern, and Dan Klein. 2017. Improving neural parsing by disentangling model combination and reranking effects. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Vancouver, Canada, pages 161–166. Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proc. of NAACL. John T Hale. 2014. Automaton theories of human sentence comprehension. CSLI Publications. James Henderson. 2003. Inducing history representations for broad coverage statistical parsing. In Proc. of NAACL. James Henderson. 2004. Discriminative training of a neural network statistical parser. In Proc. of ACL. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation . Marcus Hutter. 2012. The human knowledge compression contest . Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proc. of EMNLP. Philip N. Johnson-Laird. 1983. Mental Models. Harvard University Press. Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, Graham Neubig, and Noah A. Smith. 2017. What do recurrent neural network grammars learn about syntax? In Proc. of EACL. Ga¨el Le Godais, Tal Linzen, and Emmanuel Dupoux. 2017. Comparing character-level neural language models using a lexical decision task. In Proc. of EACL. Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics . Christopher D. Manning and Bob Carpenter. 1997. Probabilistic parsing using left corner language models. In Proc. of IWPT. 
Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Computational Linguistics . William Marslen-Wilson. 1973. Linguistic structure and speech shadowing at very short latencies. Nature 244:522–523. Gabor Melis, Chris Dyer, and Phil Blunsom. 2018. On the state of the art of evaluation in neural language models. In Proc. of ICLR. Matthew J. Nelson, Imen El Karoui, Kristof Giber, Xiaofang Yang, Laurent Cohen, Hilda Koopman, Sydney S. Cash, Lionel Naccache, John T. Hale, Christophe Pallier, and Stanislas Dehaene. 2017. Neurophysiological dynamics of phrase-structure building during sentence processing. Proceedings of the National Academy of Sciences of the United States of America . Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980 . Stephen Pulman. 1986. Grammars, parsers, and memory limitations. Language and Cognitive Processes . Philip Resnik. 1992. Left-corner parsing and psychological plausibility. In Proc. of COLING. 1436 Laura Rimell, Stephen Clark, and Mark Steedman. 2009. Unbounded dependency recovery for parser evaluation. In Proc. of EMNLP. William Schuler, Samir AbdelRahman, Tim Miller, and Lane Schwartz. 2010. Broad-coverage parsing using human-like memory constraints 36(1):1–30. Abigail See, Peter Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proc. of ACL. Rico Sennrich. 2017. How grammatical is characterlevel neural machine translation? assessing mt quality with contrastive translation pairs. In Proc. of EACL. Michael Tanenhaus, Michael Spivey-Knowlton, Kathleen Eberhard, and Julie Sedivy. 1995. Integration of visual and linguistic information in spoken language comprehension. Science 268:1632–1634. Alexander Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proc. of COLING. Dani Yogatama, Yishu Miao, Gabor Melis, Wang Ling, Adhiguna Kuncoro, Chris Dyer, and Phil Blunsom. 2018. Memory architectures in recurrent neural network language models. In Proc. of ICLR.
2018
132
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1437–1447 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1437 Sequicity: Simplifying Task-oriented Dialogue Systems with Single Sequence-to-Sequence Architectures Wenqiang Lei‡∗, Xisen Jin§∗, Zhaochun Ren†, Xiangnan He‡, Min-Yen Kan‡, Dawei Yin† ‡National University of Singapore, Singapore §Fudan University, Shanghai, China †Data Science Lab, JD.com, Beijing, China {wenqianglei,xisenjin}@gmail.com [email protected] [email protected] [email protected] [email protected] Abstract Existing solutions to task-oriented dialogue systems follow pipeline designs which introduce architectural complexity and fragility. We propose a novel, holistic, extendable framework based on a single sequence-to-sequence (seq2seq) model which can be optimized with supervised or reinforcement learning. A key contribution is that we design text spans named belief spans to track dialogue believes, allowing task-oriented dialogue systems to be modeled in a seq2seq way. Based on this, we propose a simplistic Two Stage CopyNet instantiation which demonstrates good scalability: significantly reducing model complexity in terms of number of parameters and training time by an order of magnitude. It significantly outperforms state-of-the-art pipeline-based methods on two datasets and retains a satisfactory entity match rate on out-of-vocabulary (OOV) cases where pipeline-designed competitors totally fail. 1 Introduction The challenge of achieving both task completion and human-like response generation for taskoriented dialogue systems is gaining research interest. Wen et al. (2017b, 2016a, 2017a) pioneered a set of models to address this challenge. Their proposed architectures follow traditional pipeline designs, where the belief tracking component is the key component (Chen et al., 2017). In the current paradigm, such a belief tracker builds a complex multi-class classifier for each * Work performed during an internship at Data Science Lab, JD.com. slot (See §3.2) which can suffer from high complexity, especially when the number of slots and their values grow. Since all the possible slot values have to be pre-defined as classification labels, such trackers also cannot handle the requests that have out-of-vocabulary (OOV) slot values. Moreover, the belief tracker requires delexicalization, i.e., replacing slot values with their slot names in utterances (Mrkˇsi´c et al., 2017). It does not scale well, due to the lexical diversity. The belief tracker also needs to be pre-trained, making the models unrealistic for end-to-end training (Eric and Manning, 2017a). While Eric and Manning (2017a,b) investigated building task-oriented dialogue systems by using a seq2seq model, unfortunately, their methods are rather preliminary and do not perform well in either task completion or response generation, due to their omission of a belief tracker. Questioning the basic pipeline architecture, in this paper, we re-examine the tenets of belief tracking in light of advances in deep learning. We introduce the concept of a belief span (bspan), a text span that tracks the belief states at each turn. This leads to a new framework, named Sequicity, with a single seq2seq model. Sequicity decomposes the task-oriented dialogue problem into the generation of bspans and machine responses, converting this problem into a sequence optimization problem. 
In practice, Sequicity decodes in two stages: in the first stage, it decodes a bspan to facilitate knowledge base (KB) search; in the second, it decodes a machine response conditioned on the knowledge base search result and the bspan. Our method represents a shift in perspective compared to existing work. Sequicity employs a single seq2seq model, resulting in a vastly simplified architecture. Unlike previous approaches with an overly parameterized delexicalization-based belief tracker, Sequicity achieves much less training time, better performance on a larger dataset and an exceptional ability to handle OOV cases. Furthermore, Sequicity is a theoretically and aesthetically appealing framework, as it achieves true end-to-end trainability using only one seq2seq model. As such, Sequicity leverages the rapid development of seq2seq models (Gehring et al., 2017; Vaswani et al., 2017; Yu et al., 2017) in developing solutions to task-oriented dialogue scenarios. In our implementation, we improve on CopyNet (Gu et al., 2016) to instantiate the Sequicity framework in this paper, as key words present in bspans and machine responses recur from previous utterances. Extensive experiments conducted on two benchmark datasets verify the effectiveness of our proposed method.

Our contributions are fourfold: (1) We propose the Sequicity framework, which handles both task completion and response generation in a single seq2seq model; (2) We present an implementation of the Sequicity framework, called Two Stage CopyNet (TSCP), which has fewer parameters and trains faster than state-of-the-art baselines (Wen et al., 2017b, 2016a, 2017a); (3) We demonstrate that TSCP significantly outperforms state-of-the-art baselines on two large-scale datasets, inclusive of scenarios involving OOV; (4) We release the source code of TSCP to assist the community to explore Sequicity.1

1 http://github.com/WING-NUS/sequicity

2 Related Work
Historically, task-oriented dialog systems have been built as pipelines of separately trained modules. A typical pipeline design contains four components: 1) a user intent classifier, 2) a belief tracker, 3) a dialogue policy maker and 4) a response generator. User intent detectors classify user utterances into one of the pre-defined intents. SVM, CNN and RNN models (Silva et al., 2011; Hashemi et al., 2016; Shi et al., 2016) perform well for intent classification. Belief trackers, which keep track of user goals and constraints every turn (Henderson et al., 2014a,b; Kim et al., 2017), are the most important component for task accomplishment. They model the probability distribution of values over each slot (Lee, 2013). Dialogue policy makers then generate the next available system action. Recent experiments suggest that reinforcement learning is a promising paradigm to accomplish this task (Young et al., 2013a; Cuayáhuitl et al., 2015; Liu and Lane, 2017), when state and action spaces are carefully designed (Young et al., 2010). Finally, in the response generation stage, pipeline designs usually pre-define fixed templates where placeholders are filled with slot values at runtime (Dhingra et al., 2017; Williams et al., 2017; Henderson et al., 2014b,a). However, this causes rather static responses that could lower user satisfaction. Generating a fluent, human-like response is considered a separate topic, typified by work on conversation systems (Li et al., 2015).

3 Preliminaries
3.1 Encoder-Decoder Seq2seq Models
Current seq2seq models adopt encoder–decoder structures.
Given a source sequence of tokens $X = x_1 x_2 \ldots x_n$, an encoder network represents X as hidden states $H^{(x)} = h^{(x)}_1 h^{(x)}_2 \ldots h^{(x)}_n$. Based on $H^{(x)}$, a decoder network generates a target sequence of tokens $Y = y_1 y_2 \ldots y_m$ whose likelihood should be maximized given the training corpus. As of late, the recurrent neural network with attention (Att-RNN) is considered a baseline encoder–decoder architecture. Such networks employ two (sometimes identical) RNNs, one for encoding (i.e., generating $H^{(x)}$) and another for decoding. Particularly, for decoding $y_j$, the decoder RNN takes the embedding of $y_{j-1}$ to generate a hidden vector $h^{(y)}_j$. Afterwards, the decoder attends to X: it calculates attention scores between all $h^{(x)}_i \in H^{(x)}$ and $h^{(y)}_j$ (Eq. (1)), and then sums all $h^{(x)}_i$, weighted by their corresponding attention scores (Eq. (2)). The summed result $\tilde{h}^{(x)}_j$ is concatenated with $h^{(y)}_j$ into a single vector, which is mapped into an output space for a softmax operation (Eq. (3)) to decode the current token:

$u_{ij} = v^{T} \tanh(W_1 h^{(x)}_i + W_2 h^{(y)}_j)$   (1)

$\tilde{h}^{(x)}_j = \sum_{i=1}^{n} \frac{e^{u_{ij}}}{\sum_{i} e^{u_{ij}}} h^{(x)}_i$   (2)

$y_j = \mathrm{softmax}\Big(O \begin{bmatrix} \tilde{h}^{(x)}_j \\ h^{(y)}_j \end{bmatrix}\Big)$   (3)

where $v \in \mathbb{R}^{1 \times l}$; $W_1, W_2 \in \mathbb{R}^{l \times d}$ and $O \in \mathbb{R}^{|V| \times d}$. $d$ is the embedding size, $V$ is the vocabulary set, and $|V|$ is its size.

3.2 Belief Trackers
In multi-turn scenarios, a belief tracker is the key component for task completion as it records key information from past turns (Wen et al., 2017b; Henderson et al., 2013, 2014a,b). Early belief trackers are designed as Bayesian networks where each node is a dialogue belief state (Paek and Horvitz, 2000; Young et al., 2013b). Recent work successfully represents belief trackers as discriminative classifiers (Henderson et al., 2013; Williams, 2012; Wen et al., 2017b). Wen et al. (2017b) apply discrimination approaches (Henderson et al., 2013) to build one classifier for each slot in their belief tracker. Following the terminology of (Wen et al., 2017b), a slot can be either informable or requestable, both of which have been annotated in CamRes676 and KVRET. An informable slot, specified by user utterances in previous turns, is set as a constraint for knowledge base search, whereas a requestable slot records the user's need in the current dialogue. As an example of belief trackers in CamRes676, food type is an informable slot, and a set of food types is also predefined (e.g., Italian) as corresponding slot values. In (Wen et al., 2017b), the informable slot food type is recognized by a classifier, which takes user utterances as input to predict if and which type of food should be activated, while the requestable slot address is a binary variable: address will be set to true if the slot is requested by the user.

4 Method
We now describe the Sequicity framework, by first explaining the core concept of bspans. We then instantiate the Sequicity framework with our introduction of an improved CopyNet (Gu et al., 2016).

4.1 Belief Spans for Belief Tracking
The core of belief tracking is keeping track of informable and requestable slot values as a dialogue progresses. In the era of pipeline-based methods, supervised classification is a straightforward solution. However, we observe that this traditional architecture can be updated by applying seq2seq models directly to the problem. In contrast to (Wen et al., 2017b), which treats slot values as classification labels, we record them in a text span, to be decoded by the model. This leverages the state-of-the-art neural seq2seq models to learn and dynamically generate them.
Specifically, our bspan has an information field (marked with <Inf></Inf>) to store values of informable slots, since only the values are important for knowledge base search. Bspans can also feature a requested field (marked with <Req></Req>), storing requestable slot names if the corresponding value is True. At turn t, given the user utterance Ut, we show an example of both bspan Bt and machine response Rt generation in Figure 1, where annotated slot values at each turn are decoded into bspans. B1 contains an information slot Italian because the user stated "Italian food" in U1. During the second turn, the user adds an additional constraint cheap, resulting in two slot values in B2's information field. In the third turn, the user further asks for the restaurant's phone and address, which are stored in the requested slots of B3. Our bspan solution is concise: it simplifies multiple sophisticated classifiers into a single sequence model. Furthermore, it can be viewed as an explicit data structure that expedites knowledge base search, as its format is fixed: following (Wen et al., 2017b), we use the informable slot values directly for matching fields of entries in databases.

4.2 The Sequicity Framework
We make a key observation that at turn t, a system only needs to refer to Bt−1, Rt−1 and Ut to generate a new belief span Bt and machine response Rt, without needing to know all past utterances. This Markov assumption allows Sequicity to concatenate Bt−1, Rt−1 and Ut (denoted as Bt−1Rt−1Ut) as a source sequence for seq2seq modeling, to generate Bt and Rt as target output sequences at each turn. More formally, we represent the dialogue as {(B0R0U1; B1R1); (B1R1U2; B2R2); ...; (Bt−1Rt−1Ut; BtRt)}, where B0 and R0 are initialized as empty sequences. In this way, Sequicity fulfills both task accomplishment and response generation in a unified seq2seq model. Note that we process Bt and Rt separately, as the belief state Bt depends only on Bt−1Rt−1Ut, while the response Rt is additionally conditioned on Bt and the knowledge base search result (denoted as kt); that is, Bt informs Rt's contents. For example, Rt must include all the request slots from Bt when communicating the entities fulfilling the requests found in the knowledge base. Here, kt helps generate Rt pragmatically.

Turn       Dialogue
User 1     Can I have some Italian food please?
Machine 1  Bspan: <Inf> Italian </Inf> <Req></Req>
           Response: What price range are you looking for?
User 2     I want cheap ones.
Machine 2  Bspan: <Inf> Italian ; cheap </Inf> <Req></Req>
           Response: NAME_SLOT is a cheap restaurant serving western food
User 3     Tell me the address and the phone number please.
Machine 3  Bspan: <Inf> Italian ; cheap </Inf> <Req> address ; phone </Req>
           Response: The address is ADDRESS_SLOT and the phone number is PHONE_SLOT

Figure 1: Sequicity overview. The left shows a sample dialogue; the right illustrates Sequicity. Bt is employed only by the model, and not visible to users. During training, we substitute slot values with placeholders bearing the slot names for the machine response. During testing, this is inverted: the placeholders are replaced by actual slot values, according to the item selected from the knowledge base.
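The bspan format and the re-lexicalization step of Figure 1 lend themselves to a compact illustration. The following sketch uses a toy in-memory knowledge base and illustrative helper names (make_bspan, parse_bspan, kb_search, lexicalize), none of which come from the released TSCP code; it shows how the <Inf>/<Req> fields can be built, parsed for knowledge base matching, and used to fill slot placeholders in a delexicalized response.

def make_bspan(informable_values, requestable_names):
    # Build a bspan string in the <Inf>/<Req> format of Figure 1.
    inf = " ; ".join(informable_values)
    req = " ; ".join(requestable_names)
    return f"<Inf> {inf} </Inf> <Req> {req} </Req>"

def parse_bspan(bspan):
    # Recover informable values and requestable names from a bspan string.
    def field(tag):
        start = bspan.index(f"<{tag}>") + len(tag) + 2
        end = bspan.index(f"</{tag}>")
        return [v.strip() for v in bspan[start:end].split(";") if v.strip()]
    return field("Inf"), field("Req")

def kb_search(kb, informable_values):
    # A simplified matching rule: an entry matches if every informable
    # value appears in one of its fields.
    return [entry for entry in kb
            if all(v in entry.values() for v in informable_values)]

def lexicalize(template, entry):
    # Invert the training-time delexicalization: replace placeholders such
    # as ADDRESS_SLOT with values from the selected knowledge base entry.
    for slot, value in entry.items():
        template = template.replace(f"{slot.upper()}_SLOT", value)
    return template

# Toy knowledge base entry, purely illustrative.
kb = [{"name": "Pizza Roma", "food": "Italian", "pricerange": "cheap",
       "address": "12 Example Street", "phone": "01223 000000"}]

b3 = make_bspan(["Italian", "cheap"], ["address", "phone"])
inf, req = parse_bspan(b3)     # (['Italian', 'cheap'], ['address', 'phone'])
matches = kb_search(kb, inf)   # a single match here, i.e. the "exact match" case
print(lexicalize("The address is ADDRESS_SLOT and the phone number is PHONE_SLOT",
                 matches[0]))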
Generally, kt has three possibilities: 1) multiple matches, 2) an exact match, and 3) no match, and the machine responses differ accordingly. As an example, say a user requests an Italian restaurant. In the scenario of multiple matches, the system should prompt for additional constraints for disambiguation (such as the restaurant price range). In the exact match scenario, where a single target (i.e., restaurant) has been found, the system should inform the user of their requested information (e.g., the restaurant address). If no entity is obtained, the system should inform the user and perhaps generate a cooperative response to retry a different constraint.

We thus formalize Sequicity as a seq2seq model which encodes Bt−1Rt−1Ut jointly, but decodes Bt and Rt separately, in two serial stages. In the first stage, the seq2seq model decodes Bt unconditionally (Eq. 4a). Once Bt is obtained, decoding pauses to perform the requisite knowledge base search based on Bt, resulting in kt. Afterwards, the seq2seq model continues to the second decoding stage, where Rt is generated on the additional conditions of Bt and kt (Eq. 4b).

$B_t = \text{seq2seq}(B_{t-1}R_{t-1}U_t \mid 0, 0)$   (4a)

$R_t = \text{seq2seq}(B_{t-1}R_{t-1}U_t \mid B_t, k_t)$   (4b)

Sequicity is a general framework suitably implemented by any of the various seq2seq models. The additional modeling effort beyond a general seq2seq model is to add the conditioning on Bt and kt to decode the machine response Rt. Fortunately, natural language generation with specific conditions has been extensively studied (Wen et al., 2016b; Karpathy and Fei-Fei, 2015; Mei et al., 2016) and can be employed within this framework.

4.3 Sequicity Instantiation: A Two Stage CopyNet
Although there are many possible instantiations, in this work we purposefully choose a simplistic architecture, leaving more sophisticated modeling for future work. We term our instantiated model a Two Stage CopyNet (TSCP). We denote the first m′ tokens of the target sequence Y as Bt and the rest as Rt, i.e., Bt = y1...ym′ and Rt = ym′+1...ym.

Two-Stage CopyNet. We choose to improve upon CopyNet (Gu et al., 2016) as our seq2seq model. This is a natural choice, as we observe that target sequence generation often requires copying tokens from the input sequence. Let us discuss this in more detail. From a probabilistic point of view, the traditional encoder–decoder structure learns a language model. To decode yj, we can employ a softmax (e.g., Eq. 3) to calculate the probability distribution over V, i.e., the generation probability $P^{g}_j(v)$ where $v \in V$, and then choose the token with the highest generation probability. However, in our case, tokens in the target sequence Y might be exactly copied from the input X (e.g., "Italian"). These copied words need to be explicitly modeled. CopyNet (Gu et al., 2016) is a natural fit here, as it enlarges the decoding output space from V to $V \cup X$. For yj, it considers an additional copy probability $P^{c}_j(v)$, indicating the likelihood of yj being copied from $v \in X$. Following (Gu et al., 2016), the simple summation of both probabilities, $P_j(v) = P^{g}_j(v) + P^{c}_j(v)$, $v \in V \cup X$, is treated as the final probability in the original paper. In Sequicity, simply applying the original CopyNet architecture is insufficient, since Bt and Rt have different distributions. We therefore employ two separate RNNs (GRUs in our implementation) in the decoder: one for Bt and the other for Rt.
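Before detailing the copy-attention mechanism of the two decoders, the per-turn control flow of Eqs. (4a)–(4b) can be sketched as follows. This is only an illustration of the two-stage procedure: encode, decode_bspan, decode_response and kb_search are placeholder callables standing in for the actual seq2seq components, and kt_from_matches realizes the three-way knowledge base indicator; none of these names come from the released TSCP code.

def kt_from_matches(matches):
    # k_t as a three-dimensional indicator: multiple / exact / no match.
    if len(matches) > 1:
        return (1, 0, 0)
    if len(matches) == 1:
        return (0, 1, 0)
    return (0, 0, 1)

def dialogue_turn(encode, decode_bspan, decode_response, kb_search,
                  b_prev, r_prev, user_utterance, kb):
    # Source sequence is the concatenation B_{t-1} R_{t-1} U_t.
    source = b_prev + r_prev + user_utterance
    enc_states = encode(source)

    # Stage 1 (Eq. 4a): decode B_t with no extra conditioning.
    b_t = decode_bspan(enc_states)

    # Pause decoding and search the KB with the informable values of B_t.
    matches = kb_search(kb, b_t)
    k_t = kt_from_matches(matches)

    # Stage 2 (Eq. 4b): decode R_t conditioned on B_t and k_t.
    r_t = decode_response(enc_states, b_t, k_t)
    return b_t, r_t

# Dummy stand-ins, only to exercise the control flow.
b_t, r_t = dialogue_turn(
    encode=lambda tokens: tokens,
    decode_bspan=lambda enc: ["<Inf>", "Italian", "</Inf>", "<Req>", "</Req>"],
    decode_response=lambda enc, b, k: "What price range are you looking for ?".split(),
    kb_search=lambda kb, b: [e for e in kb if e.get("food") == "Italian"],
    b_prev=[], r_prev=[],
    user_utterance="Can I have some Italian food please ?".split(),
    kb=[{"food": "Italian"}, {"food": "Italian"}],
)
print(b_t, r_t)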
In the first decoding stage, we apply a copy-attention mechanism on X to decode Bt; we then calculate the generation probability by attending to X as introduced in Sec 3.1, as well as the copy probability for each word $v \in X$, following (Gu et al., 2016), by Eq. 5:

$P^{c}_j(v) = \frac{1}{Z} \sum_{i:\,x_i = v}^{|X|} e^{\psi(x_i)}, \quad j \le m'$   (5)

where Z is a normalization term and $\psi(x_i)$ is the score of "copying" word $x_i$, calculated by:

$\psi(x_i) = \sigma\big(h^{(x)\,T}_i W_c\big)\, h^{(y)}_j, \quad j \le m'$   (6)

where $W_c \in \mathbb{R}^{d \times d}$. In the second decoding stage (i.e., decoding Rt), we use the last hidden state of Bt as the initial hidden state of the Rt GRU. However, as we need to explicitly model the dependency on Bt, we apply the copy-attention mechanism to Bt instead of to X, treating all tokens of Bt as candidates for copying and attention. Specifically, we use the hidden states generated by the Bt GRU, i.e., $h^{(y)}_1, \ldots, h^{(y)}_{m'}$, to calculate copy scores using Eqs. 7 and 8, and attention scores as introduced in Sec 3.1. This helps to reduce the search space, because all key information of X for task completion has been included in Bt.

$P^{c}_j(v) = \frac{1}{Z} \sum_{i:\,y_i = v} e^{\psi(y_i)}, \quad i \le m' < j \le m$   (7)

$\psi(y_i) = \sigma\big(h^{(y)\,T}_i W_c\big)\, h^{(y)}_j, \quad i \le m' < j \le m$   (8)

In contrast to recent work (Eric and Manning, 2017a) that also employs a copy-attention mechanism to generate a knowledge-base search API and machine responses, our proposed method advances in two aspects: on one hand, bspans reduce the search space from U1R1...UtRt to Bt−1Rt−1Ut by compressing the key points for task completion given past dialogues; on the other hand, because bspans revisit context by only handling Bt, which has a fixed length, the time complexity of TSCP is only O(T), compared to O(T^2) in (Eric and Manning, 2017a).

Involving kt when decoding Rt. kt has three possible values, corresponding to obtaining exactly one, multiple, or no entities. We let kt be a three-dimensional vector in which one dimension is activated to signal the corresponding case. We append kt to the embedding yj, as shown in Eq. (9), which is then fed into the GRU to generate $h^{(y)}_{j+1}$. This approach is also referred to as the Language Model Type condition (Wen et al., 2016b).

$y'_j = \begin{bmatrix} y_j \\ k_t \end{bmatrix}, \quad j \in [m'+1, m]$   (9)

4.4 Training
The standard cross entropy is adopted as our objective function to train a language model:

$\sum_{j=1}^{m} y_j \log P_j(y_j)$   (10)

In response generation, every token is treated equally. However, in our case, tokens for task completion are more important. For example, when a user asks for the address of a restaurant, it matters more to decode the placeholder <address> than to decode words for language fluency. We can employ reinforcement learning to fine-tune the trained response decoder with an emphasis on decoding those important tokens. Inspired by (Wen et al., 2017a), in the context of reinforcement learning, the decoding network can be viewed as a policy network, denoted as $\pi_\Theta(y_j)$ for decoding $y_j$ ($m' + 1 \le j \le m$). Accordingly, the choice of word $y_j$ is an action and its hidden vector generated by the decoding GRU is the corresponding state. In the reinforcement tuning stage, the trained response decoder is the initial policy network. By defining a proper reward function $r(j)$ for decoding $y_j$, we can update the trained response model with the policy gradient:

$\frac{1}{m - m'} \sum_{j=m'+1}^{m} R(j)\, \frac{\partial \log \pi_\Theta(y_j)}{\partial \Theta}$   (11)

where the cumulative reward $R(j) = r(j) + \lambda r(j+1) + \lambda^2 r(j+2) + \cdots + \lambda^{m-j+1} r(m)$. To encourage the generated response to answer the user-requested information while avoiding long-winded responses, we set the reward at each step $r(j)$ as follows: once the placeholder of a requested slot has been decoded, the reward for the current step is 1; otherwise, the current step's reward is -0.1. $\lambda$ is a decay parameter; see Sec 5.2 for the $\lambda$ setting.
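The reward scheme and the discounted return used in Eq. (11) can be made concrete with a short sketch. It only computes the scalar returns R(j) from a decoded response; wiring them into the policy-gradient update (e.g., weighting -log pi_Theta(y_j) by R(j) in PyTorch and backpropagating) is left to the actual training loop, and the token and placeholder names below are illustrative assumptions rather than the released TSCP code.

def step_rewards(response_tokens, requested_placeholders):
    # r(j): +1 when the placeholder of a requested slot is decoded,
    # -0.1 for every other token.
    return [1.0 if tok in requested_placeholders else -0.1
            for tok in response_tokens]

def discounted_returns(rewards, lam=0.8):
    # R(j) = r(j) + lam * r(j+1) + lam^2 * r(j+2) + ...
    returns, running = [], 0.0
    for r in reversed(rewards):
        running = r + lam * running
        returns.append(running)
    return list(reversed(returns))

tokens = ["the", "address", "is", "ADDRESS_SLOT",
          "and", "the", "phone", "number", "is", "PHONE_SLOT"]
r = step_rewards(tokens, {"ADDRESS_SLOT", "PHONE_SLOT"})
print(discounted_returns(r, lam=0.8))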
5 Experiments
We assess the effectiveness of Sequicity in three aspects: task completion, language quality, and efficiency. The evaluation metrics are listed as follows:

· BLEU to evaluate the language quality (Papineni et al., 2002) of generated responses (hence the top-1 candidate in (Wen et al., 2017b)).
· Entity match rate evaluates task completion. According to (Wen et al., 2017b), it determines if a system can generate all correct constraints to search the indicated entities of the user. This metric is either 0 or 1 for each dialogue.
· Success F1 evaluates task completion and is modified from the success rate in (Wen et al., 2017b, 2016a, 2017a). The original success rate measures if the system answered all the requested information (e.g. address, phone number). However, this metric only evaluates recall. A system can easily achieve a perfect task success by always responding with all possible request slots. Instead, we here use success F1 to balance both recall and precision. It is defined as the F1 score of requested slots answered in the current dialogue.
· Training time. The training time is important for the iteration cycle of a model in industry settings.

5.1 Datasets
We adopt the CamRest676 (Wen et al., 2017a) and KVRET (Eric and Manning, 2017b) datasets. Both datasets are created by a Wizard-of-Oz (Kelley, 1984) method on the Amazon Mechanical Turk platform, where a pair of workers are recruited to carry out a fluent conversation to complete an assigned task (e.g. restaurant reservation). During the conversation, both informable and requestable slots are recorded by the workers. CamRest676's dialogs are in the single domain of restaurant searching, while KVRET is broader, containing three domains: calendar scheduling, weather information retrieval and point of interest (POI) navigation. Detailed slot information in each domain is shown in Table 1. We follow the data splits of the original papers, as shown in Table 1.

Dataset               Cam676                              KVRET
Size                  Train: 408 / Test: 136 / Dev: 136   Train: 2425 / Test: 302 / Dev: 302
Domains               restaurant reservation              calendar / weather info. / POI
Slot types            price, food style, etc.             date, etc. / location, etc. / poi, etc.
Distinct slot values  99                                  79 / 65 / 140

Table 1: Dataset demographics. Following the respective literature, Cam676 is split 3:1:1 and KVRET is split 8:1:1, into training, developing and testing sets, respectively.

5.2 Parameter Settings
For all models, the hidden size and the embedding size d are set to 50. |V| is 800 for CamRes676 and 1400 for KVRET. We train our model with an Adam optimizer (Kingma and Ba, 2015), with a learning rate of 0.003 for supervised training and 0.0001 for reinforcement learning. Early stopping is performed on the developing set. In reinforcement learning, the decay parameter λ is set to 0.8. We also use a beam search strategy for decoding, with a beam size of 10.

5.3 Baselines and Comparisons
We first compare our model with the state-of-the-art baselines as follows:
• NDM (Wen et al., 2017b). As described in Sec 1, it adopts a pipeline design with a belief tracker component depending on delexicalization.
• NDM+Att+SS. Based on the NDM model, an additional attention mechanism is performed on the belief trackers and a snapshot learning mechanism (Wen et al., 2016a) is adopted.
• LIDM (Wen et al., 2017a).
Also based on NDM, this model adopts neural variational inference with reinforcement learning.
• KVRN (Eric and Manning, 2017b) uses one seq2seq model to generate responses as well as to interact with the knowledge base. However, it does not incorporate a belief tracking mechanism.

For NDM, NDM+Att+SS and LIDM, we run the source code released by the original authors.2 For KVRN, we replicate it since there is no source code available.

2 https://github.com/shawnwun/NNDIAL

We also performed an ablation study to examine the effectiveness of each component.
• TSCP\kt. We removed the conditioning on kt when decoding Rt.
• TSCP\RL. We removed the reinforcement learning which fine-tunes the model for response generation.
• Att-RNN. The standard seq2seq baseline as described in the preliminary section (see §3.1).
• TSCP\Bt. We removed bspans for dialogue state tracking. Instead, we adopt the method in (Eric and Manning, 2017a): concatenating all past utterances in a dialogue into a CopyNet to generate user information slots for knowledge base search as well as the machine response.

                  CamRes676                                        KVRET
                  Mat.   BLEU   Succ.F1  Time_full  Time_N.B.      Mat.   BLEU   Succ.F1  Time_full  Time_N.B.
(1) NDM           0.904  0.212  0.832    91.9 min   8.6 min        0.724  0.186  0.741    285.5 min  29.3 min
(2) NDM+Att+SS    0.904  0.240  0.836    93.7 min   10.4 min       0.724  0.188  0.745    289.7 min  33.5 min
(3) LIDM          0.912  0.246  0.840    97.7 min   14.4 min       0.721  0.173  0.762    312.8 min  56.6 min
(4) KVRN          N/A    0.134  N/A      21.4 min   –              0.459  0.184  0.540    46.9 min   –
(5) TSCP          0.927  0.253  0.854    7.3 min    –              0.845  0.219  0.811    25.5 min   –
(6) Att-RNN       0.851  0.248  0.774    7.2 min    –              0.805  0.208  0.801    23.0 min   –
(7) TSCP\kt       0.927  0.232  0.835    7.2 min    –              0.845  0.168  0.759    25.3 min   –
(8) TSCP\RL       0.927  0.234  0.834    4.1 min    –              0.845  0.191  0.774    17.5 min   –
(9) TSCP\Bt       0.888  0.197  0.809    22.9 min   –              0.628  0.182  0.755    42.7 min   –

Table 2: Model performance on CamRes676 and KVRET. This table is split into two parts: competitors on the upper side and our ablation study on the bottom side. Mat. and Succ. F1 stand for match rate and success F1, respectively. The Time_full column reports training time until convergence. For NDM, NDM+Att+SS and LIDM, we also calculate the training time for the remaining parts excluding the belief tracker (Time_N.B.).

5.4 Experimental Results
As shown in Table 2, TSCP outperforms all baselines (Row 5 vs. Rows 1–4) in task completion (entity match rate, success F1) and language quality (BLEU). The more significant performance of TSCP on the KVRET dataset indicates the scalability of TSCP. This is because the KVRET dataset has significant lexical variety, making it hard to perform delexicalization for Wen et al.'s models (Rows 1–3).3 However, CamRes676 is relatively small with simple patterns, where all systems work well. As predicted, KVRN (Row 4) performs worse than TSCP (Row 5) due to its lack of belief tracking.

3 We use the delexicalization lexicon provided by the original author of KVRET (Eric and Manning, 2017b).

Compared with Wen et al.'s models (Rows 1–3), TSCP takes an order of magnitude less time to train. Although TSCP is implemented in PyTorch while Wen et al.'s models are in Theano, such a speed comparison is still valid, as the rest of the NDM model — apart from its belief tracker — has a comparable training speed to TSCP (7.3 mins vs. 8.6 mins on CamRes676 and 25.5 mins vs. 29.3 mins on KVRET), where model complexities are similar. The bottleneck in the time expense is the belief tracker training. In addition, Wen et al.'s models perform better at the cost of more training time (Rows 1, 2 and 3), suggesting the intrinsic complexity of pipeline designs.

Importantly, ablation studies validate the necessity of bspans.
With bspans, even a standard seq2seq model (Att-RNN, Row 6) beats sophisticated models such as attention CopyNets (TSCP\Bt, Row 9) on KVRET. Furthermore, TSCP (Row 5) outperforms TSCP\Bt (Row 9) in all aspects: task completion, language quality and training speed. This validates our theoretical analysis in Sec 4.3.

Other components of TSCP are also important. If we only use a vanilla attention-based RNN instead of a CopyNet, all metrics for model effectiveness decrease, validating our hypothesis that the copied words need to be specifically modeled. Secondly, the BLEU score is sensitive to the knowledge base search result kt (Row 7 vs. Row 5). By examining error cases, we find that the system is likely to generate common sentences like "you are welcome" regardless of context, due to corpus frequency. Finally, reinforcement learning effectively helps both BLEU and success F1, although it takes acceptable additional time for training.

5.5 OOV Tests
Previous work predefines all slot values in a belief tracker. However, a user may request new attributes that have not been predefined as classification labels, which results in an entity mismatch. TSCP employs copy mechanisms, gaining an intrinsic potential to handle OOV cases. To conduct the OOV test, we synthesize OOV test instances by adding a suffix unk to existing slot fillers. For example, we change "I would like Chinese food" into "I would like Chinese unk food." We then randomly make a proportion of the testing data OOV and measure the entity match rate. For simplicity, we only show the three most representative models pre-trained on the in-vocabulary data: TSCP, TSCP\Bt and NDM.

[Figure 2 plots entity match rate (Mat., y-axis, 0–1) against OOV rate (x-axis, 0%–100%) for TSCP, NDM and TSCP\Bt on (a) CamRes676 and (b) KVRET.]

Figure 2: OOV tests. 0% OOV rate means no OOV instance, while 100% OOV rate means all instances are changed to be OOV.

Compared with NDM, TSCP still performs well when all slot fillers are unknown. This is because TSCP actually learns sentence patterns. For example, the CamRes676 dataset contains a frequent pattern "I would like [food type] food", where the [food type] should be copied into Bt regardless of what exact word it is. In addition, the performance of TSCP\Bt decreases more sharply than TSCP as more instances are set to be OOV. This might be because handling OOV cases is much harder when the search space is large.

5.6 Empirical Model Complexity
Traditional belief trackers like (Wen et al., 2017b) are built as multi-class classifiers that model each individual slot and its corresponding values, introducing considerable model complexity. This is especially severe on large datasets with a large number of slots and values. In contrast, Sequicity reduces such a complex classifier to a language model. To compare the model complexities of the two approaches, we empirically measure model size. We split the KVRET dataset by domain, resulting in three sub-datasets. We then accumulatively add the sub-datasets into the training set to examine how the model size grows. We here selectively present TSCP, NDM and its separately trained belief tracker, since Wen et al.'s set of models share similar model sizes.

[Figure 3 plots model size in millions of parameters (y-axis) against the number of distinct slot values (79, 144, 284; x-axis) for TSCP, NDM and the belief tracker alone.]

Figure 3: Model size sensitivity with respect to KVRET.
Distinct slot values of 79, 144, 284 correspond to the number of slots in KVRET’s calendar, calendar + weather info., and all 3 domains. As shown in Figure 3, TSCP has a magnitude less number of parameters than NDM and its model size is much less sensitive to distinct slot values increasing. It is because TSCP is a seq2seq language model which has a approximate linear complexity to vocabulary size. However, NDM employs a belief tracker which dominates its model size. The belief tracker is sensitive to the increase of distinct slot values because it employs complex structures to model each slot and corresponding values. Here, we only perform empirical evaluation, leaving theoretically complexity analysis for future works. 5.7 Discussions In this section we discuss if Sequicity can tackle inconsistent user requests , which happens when users change their minds during a dialogue. Inconsistent user requests happen frequently and are dif1445 ficult to tackle in belief tracking (Williams, 2012; Williams et al., 2013). Unlike most of previous pipeline-based work that explicitly defines model actions for each situation, Sequicity is proposed to directly handle various situations from the training data with less manual intervention. Here, given examples about restaurant reservation, we provide three different scenarios to discuss: • A user totally changes his mind. For example, the user request a Japanese restaurant first and says “I dont want Japanese food anymore, I’d like French now.” Then, all the slot activated before should be invalid now. The slot annotated for this turn is only French. Sequicity can learn this pattern, as long as it is annotated in the training set. • User requests cannot be found in the KB (e.g., Japanese food). Then the system should respond like “Sorry, there is no Japanese food...”. Consequently, the user can choose a different option: “OK, then French food.” The activated slot Japanese will be replaced as French, which our system can learn. Therefore, an important pattern is the machine-response (e.g., “there is no [XXX constraint]”) in the immediate previous utterance. • Other cases. Sequicity is expected to generate both slot values in a belief span if it doesn’t know which slot to replace. To maintain the belief span, we run a simple postprocessing script at each turn, which detects whether two slot values have the same slot name (e.g., food type) in a pre-defined slot name-value table. Then, such script only keeps the slot value in the current turn of user utterance. Given this script, Sequicity can accurately discover the slot requested by a user in each utterance. However, this script only works when slot values are pre-defined. For inconsistent OOV requests, we need to build another classifier to recognize slot names for slot values. To sum up, Sequicity, as a framework, is able to handle various inconsistent user input despite its simple design. However, detailed implementations should be customized depends on different applications. 6 Conclusion We propose Sequicity, an extendable framework, which tracks dialogue believes through the decoding of novel text spans: belief spans. Such belief spans enable a task-oriented dialogue system to be holistically optimized in a single seq2seq model. One simplistic instantiation of Sequicity, called Two Stage CopyNet (TSCP), demonstrates better effectiveness and scalability of Sequicity. Experiments show that TSCP outperforms the state-ofthe-art baselines in both task accomplishment and language quality. 
Moreover, our TSCP implementation also betters traditional pipeline architectures by a magnitude in training time and adds the capability of handling OOV. Such properties are important for real-world customer service dialog systems where users’ inputs vary frequently and models need to be updated frequently. For our future work, we will consider advanced instantiations for Sequicity, and extend Sequicity to handle unsupervised cases where information and requested slots values are not annotated. Acknowledgments We would like to thank the anonymous reviewers for their detailed comments and suggestions for this paper. This work is also supported by the National Research Foundation, Prime Ministers Office, Singapore under its IRC@SG Funding Initiative. References Hongshen Chen, Xiaorui Liu, Dawei Yin, and Jiliang Tang. 2017. A survey on dialogue systems: Recent advances and new frontiers. arXiv preprint arXiv:1711.01731 . Heriberto Cuay´ahuitl, Simon Keizer, and Oliver Lemon. 2015. Strategic dialogue management via deep reinforcement learning. arXiv preprint arXiv:1511.08099 . Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Yun-Nung Chen, Faisal Ahmed, and Li Deng. 2017. Towards end-to-end reinforcement learning of dialogue agents for information access. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 484–495. Mihail Eric and Christopher D Manning. 2017a. A copy-augmented sequence-to-sequence architecture gives good performance on task-oriented dialogue. 1446 Mihail Eric and Christopher D Manning. 2017b. Keyvalue retrieval networks for task-oriented dialogue. SIGDIAL . Jonas Gehring, Michael Auli, David Grangier, and Yann N Dauphin. 2017. A convolutional encoder model for neural machine translation. ACL . Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. ACL . Homa B Hashemi, Amir Asiaee, and Reiner Kraft. 2016. Query intent detection using convolutional neural networks. In International Conference on Web Search and Data Mining, Workshop on Query Understanding. Matthew Henderson, Blaise Thomson, and Jason D Williams. 2014a. The second dialog state tracking challenge. In SIGDIAL Conference. pages 263–272. Matthew Henderson, Blaise Thomson, and Jason D Williams. 2014b. The third dialog state tracking challenge. In Spoken Language Technology Workshop (SLT), 2014 IEEE. IEEE, pages 324–329. Matthew Henderson, Blaise Thomson, and Steve Young. 2013. Deep neural network approach for the dialog state tracking challenge. In Proceedings of the SIGDIAL 2013 Conference. pages 467–471. Andrej Karpathy and Li Fei-Fei. 2015. Deep visualsemantic alignments for generating image descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition. pages 3128–3137. John F Kelley. 1984. An iterative design methodology for user-friendly natural language office information applications. ACM Transactions on Information Systems (TOIS) 2(1):26–41. Seokhwan Kim, Luis Fernando DHaro, Rafael E Banchs, Jason D Williams, and Matthew Henderson. 2017. The fourth dialog state tracking challenge. In Dialogues with Social Robots, Springer, pages 435– 449. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. Sungjin Lee. 2013. Structured discriminative model for dialog state tracking. In SIGDIAL Conference. pages 442–451. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. 
A diversity-promoting objective function for neural conversation models. arXiv preprint arXiv:1510.03055 . Bing Liu and Ian Lane. 2017. Iterative policy learning in end-to-end trainable task-oriented neural dialog models. arXiv preprint arXiv:1709.06136 . Hongyuan Mei, Mohit Bansal, and Matthew R Walter. 2016. What to talk about and how? selective generation using lstms with coarse-to-fine alignment. In NAACL. Nikola Mrkˇsi´c, Diarmuid O S´eaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. ACL . Tim Paek and Eric Horvitz. 2000. Conversation as action under uncertainty. In Proceedings of the Sixteenth conference on Uncertainty in artificial intelligence. Morgan Kaufmann Publishers Inc., pages 455–464. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics. Association for Computational Linguistics, pages 311–318. Yangyang Shi, Kaisheng Yao, Le Tian, and Daxin Jiang. 2016. Deep lstm based feature mapping for query classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 1501–1511. Joao Silva, Lu´ısa Coheur, Ana Cristina Mendes, and Andreas Wichert. 2011. From symbolic to subsymbolic information in question classification. Artificial Intelligence Review 35(2):137–154. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762 . Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina M Rojas-Barahona, Pei-Hao Su, Stefan Ultes, David Vandyke, and Steve Young. 2016a. Conditional generation and snapshot learning in neural dialogue systems. EMNLP . Tsung-Hsien Wen, Milica Gasic, Nikola Mrkˇsi´c, Lina M. Rojas Barahona, Pei-Hao Su, Stefan Ultes, David Vandyke, and Steve Young. 2016b. Conditional generation and snapshot learning in neural dialogue systems. In EMNLP. ACL, Austin, Texas, pages 2153–2162. https://aclweb.org/anthology/D16-1233. Tsung-Hsien Wen, Yishu Miao, Phil Blunsom, and Steve Young. 2017a. Latent intention dialogue models. ICML . Tsung-Hsien Wen, David Vandyke, Nikola Mrksic, Milica Gasic, Lina M Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017b. A networkbased end-to-end trainable task-oriented dialogue system. EACL . 1447 Jason Williams, Antoine Raux, Deepak Ramachandran, and Alan Black. 2013. The dialog state tracking challenge. In Proceedings of the SIGDIAL 2013 Conference. pages 404–413. Jason D Williams. 2012. A belief tracking challenge task for spoken dialog systems. In NAACL-HLT Workshop on future directions and needs in the spoken dialog community: tools and data. Association for Computational Linguistics, pages 23–24. Jason D Williams, Kavosh Asadi, and Geoffrey Zweig. 2017. Hybrid code networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) . Steve Young, Milica Gaˇsi´c, Simon Keizer, Franc¸ois Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. 2010. The hidden information state model: A practical framework for pomdp-based spoken dialogue management. Computer Speech & Language 24(2):150–174. 
Steve Young, Milica Gaˇsi´c, Blaise Thomson, and Jason D Williams. 2013a. Pomdp-based statistical spoken dialog systems: A review. Proceedings of the IEEE 101(5):1160–1179. Steve Young, Milica Gaˇsi´c, Blaise Thomson, and Jason D Williams. 2013b. Pomdp-based statistical spoken dialog systems: A review. Proceedings of the IEEE 101(5):1160–1179. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In AAAI. pages 2852–2858.
2018
133
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1448–1457 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1448 An End-to-end Approach for Handling Unknown Slot Values in Dialogue State Tracking Puyang Xu Qi Hu† Mobvoi AI Lab, Redmond, WA †University of Washington, Seattle, WA {puyangxu, qihuchn}@gmail.com Abstract We highlight a practical yet rarely discussed problem in dialogue state tracking (DST), namely handling unknown slot values. Previous approaches generally assume predefined candidate lists and thus are not designed to output unknown values, especially when the spoken language understanding (SLU) module is absent as in many end-to-end (E2E) systems. We describe in this paper an E2E architecture based on the pointer network (PtrNet) that can effectively extract unknown slot values while still obtains state-of-the-art accuracy on the standard DSTC2 benchmark. We also provide extensive empirical evidence to show that tracking unknown values can be challenging and our approach can bring significant improvement with the help of an effective feature dropout technique. 1 Introduction A dialogue state tracker is a core component in most of today’s spoken dialogue systems (SDS). The goal of dialogue state tracking (DST) is to monitor the user’s intentional states during the course of the conversation, and provide a compact representation, often called the dialogue states, for the dialogue manager (DM) to decide the next action to take. In task-oriented dialogues, or slot-filling dialogues in the simplistic form, the dialogue agent is tasked with helping the user achieve simple goals such as finding a restaurant or booking a train ticket. As the name itself suggests, a slot-filling 0The first author is now with Facebook. Qi contributed to the work during an internship at Mobvoi. dialogue is composed of a predefined set of slots that need to be filled through the conversation. The dialogue states in this case are therefore the values of these slot variables, which are essentially the search constraints the DM has to maintain in order to perform the database lookup. Traditionally in the research community, as exemplified in the dialogue state tracking challenge (DSTC) (Williams et al., 2013), which has become a standard evaluation framework for DST research, the dialogues are usually constrained by a fixed domain ontology, which essentially describes in detail all the possible values that each predefined slot can take. Having access to such an ontology can simplify the tracking problem in many ways, however, in many of the SDS applications we have built in the industry, such an ontology was not obtainable. Oftentimes, the backend databases are only exposed through an external API, which is owned and maintained by our partners. It is usually not possible to gain access to their data or enumerate all possible slot values in their knowledge base. Even if such lists or dictionaries exist, they can be very large in size and highly dynamic (e.g. new songs added, new restaurants opened etc.). It is therefore not amiable to many of the previously introduced DST approaches, which generally rely on classification over a fixed ontology or scoring each slot value pairs separately by enumerating the candidate list. 
In this paper, we will therefore focus on this particular aspect of the DST problem which has rarely been discussed in the community – namely how to perform state tracking in the absence of a comprehensive domain ontology and how to handle unknown slot values effectively. It is worth noting that end-to-end (E2E) modeling for task-oriented dialogue systems has become a popular trend (Williams and Zweig, 2016; Zhao and Eskenazi, 2016; Li et al., 2017; Liu et al., 1449 2017; Wen et al., 2017), although most of them focus on E2E policy learning and language generation, and still rely on explicit dialogue states in their models. While fully E2E approaches which completely obviate explicit DST have been attempted (Bordes and Weston, 2016; Eric and Manning, 2017a,b; Dhingra et al., 2017), their generality and scalability in real world applications remains to be seen. In reality, a dedicated DST component remains a central piece to most dialogue systems, even in most of the proclaimed E2E models. E2E approaches for DST, i.e. joint modeling of SLU and DST has also been presented in the literature (Henderson et al., 2014b,c; Mrksic et al., 2015; Zilka and Jurcicek, 2015; Perez and Liu, 2017; Mrksic et al., 2017). In these methods, the conventional practice of having a separate spoken language understanding (SLU) module is replaced by various E2E architectures that couple SLU and DST altogether. They are sometimes called word based state tracking as the dialogue states are derived directly from word sequences as opposed to SLU outputs. In the absence of SLU to generate value candidates, most E2E trackers today can only operate with fixed value sets. To address this limitation, we introduce an E2E tracker that allows us to effectively handle unknown value sets. The proposed solution is based on the recently introduced pointer network (PtrNet) (Vinyals et al., 2015), which essentially performs state tracking in an extractive fashion similar to the sequence labeling techniques commonly utilized for slot tagging in SLU (Tur and Mori, 2011). Our proposed technique is similar in spirit as the recent work in (Rastogi et al., 2018), which also targets the problem of unbounded and dynamic value sets. They introduce a sophisticated candidate generation strategy followed by a neural network based scoring mechanism for each candidate. Despite the similarity in the motivation, their system relies on SLU to generate value candidates, resulting in an extra module to maintain and potential error propagation as commonly faced by pipelined systems. The contributions of this paper are three-folds: Firstly, we target a very practical yet rarely investigated problem in DST, namely handling unknown slot values in the absence of a predefined ontology. Secondly, we describe a novel E2E architecture without SLU based on the PtrNet to perform state tracking. Thirdly, we also introduce an effective dropout technique for training the proposed model which drastically improves the recall rate of unknown slot values. The rest of the paper is structured as follows: We give a brief review of related work in the field in Section 2 and point out its limitations. The PtrNet and its proposed application in DST are described in Section 3. In Section 4, we demonstrate some caveats regarding the use of PtrNet and propose an additional classification module as a complementary component. The targeted dropout technique, which can be essential for generalization on some datasets, is described in Section 5. 
Experimental setup and results are presented in Section 6, followed by conclusions in Section 7. 2 Dialogue State Tracking In DSTC tasks, the dialogue states are defined as a set of search constraints (i.e. informable slots or goals) the user specified through the dialogue and a set of attribute questions regarding the search results (i.e. requestable slots or requests). The DST component is expected to track the values of the aforementioned slots taking into account the current user utterance as well as the entire dialogue context. As mentioned in the previous section, the values each slot variable can take are specified beforehand through an ontology. This is a hidden assumption that previous techniques usually rely upon implicitly and also what motivates our work in this paper. Discriminative DST While generative models aiming at modeling the joint distribution of dialogue states and miscellaneous evidences have been a popular modeling choice for DST for many years, the scalability issue resulting from large state spaces has limited the broader application of this family of models, despite the success of various approximation techniques. The discriminative methods, on the other hand, directly model the posterior distribution of dialogue states given the evidences accumulated through the conversation history. Models such as maximum entropy (Metallinou et al., 2013) and particularly the more recent deep learning based models (Henderson et al., 2014b,c; Zilka and Jurcicek, 2015; Mrksic et al., 2015, 2017; Perez and Liu, 2017) have demonstrated state-of-the-art results on public benchmarks. Such techniques 1450 often involve a multi-class classification step at the end (e.g. in the form of a softmax layer) which for each slot predicts the corresponding value based on the dialogue history. Sometimes the multi-class classification is replaced by a binary prediction that decides whether a particular slot value pair was expressed by the user, and the list of candidates comes from either a fixed ontology or the SLU output. E2E DST Previous work has also investigated joint modeling strategies merging SLU and DST altogether. In this line of work, the SLU module is removed from the standard SDS architecture, resulting in reduced development cost and alleviating the error propagation problem commonly affecting cascaded systems. In the absence of SLU providing fine-grained semantic features, the E2E approaches these days typically rely on variants of neural networks such as recurrent neural networks (RNN) or memory networks (Weston et al., 2014) to automatically learn features from the raw dialogue history. The deep learning based techniques cited in the previous subsection generally fall into this category. Current Limitations In short, most of the previous DST approaches, particularly E2E ones, are not designed to handle slot values that are not known to the tracker. As we have described in the introduction, the assumption that a predefined ontology exists for the dialogue and one can enumerate all possible values for each slot is often not valid in real world scenarios. Such an assumption has implicitly influenced many design choices of previous systems. The methods based on classification or scoring each slot value pair separately can be very difficult to apply when the set of slot values is not enumerable, either due to its size or its constantly changing nature, especially in E2E models where there is no SLU module to generate an enumerable candidate list for the tracker. 
It is important to point out the difference between unseen states and unknown states, as previous work has tried to address the problem of unseen slot values, i.e. values that were not observed during training. E2E approaches in particular, frequently employ a featurization strategy called delexicalization, which replaces slots and values mentioned in the dialogue text with generic labels. Such a conversion allows the models to generalize much better to new values that are infrequent or unseen in the training data. However, such slot values are still expected to be known to the tracker, either through a predefined value set or provided by SLU, otherwise the delexicalization cannot be performed, nor can the classifier properly output such values. 3 Pointer Network In this section, we briefly introduce the PtrNet (Vinyals et al., 2015), which is the main basis of the proposed technique, and how the DST problem can be reformulated to take advantage of the flexibility enabled by such a model. In the PtrNet architecture, similar as other sequence-to-sequence (seq2seq) models, there is an encoder which takes the input and iteratively produces a sequence of hidden states corresponding to the feature vector at each input position. There is also a decoder which generates outputs with the help of the weighted encoded states where the weights are computed through attention. Here, instead of using softmax to predict the distribution over a set of predefined candidates, the decoder directly normalizes the attention score at each position and obtains an output distribution over the input sequence. The index of the maximum probability is the pointed position, and the corresponding element is selected as decoder output, which is then fed into next decoding step. Both the encoder and decoder are based on various RNN models, capable of dealing with sequences of variable length. The PtrNet specifically targets the problems where the output corresponds to positions in the input sequence, and it is widely used for seq2seq tasks where some kind of copying from the input is needed. Among its various applications, machine comprehension (a form of question answering), such as in (Wang and Jiang, 2016), is the closest to how we apply the model to DST. The output of DST, same as in machine comprehension, is a word segment in the input sequence most of the time, thus can be naturally formulated as a pointing problem. Instead of generating longer output sequences, the decoder only has to predict the starting index and the ending index in order to identify the word segment. More specifically, words are mapped to embed1451 dings and the dialogue history w0, w1, ..., wt up to the current turn t is bidirectionally encoded using LSTM models. To differentiate words spoken by the user versus by the system, the word embeddings are further augmented with speaker role information. Other features, such as the entity type of each word, can also be fed into the encoder simultaneously in order to extract richer information from the dialogue context. The encoded state at each position can then be denoted as hi, which is the concatenation of forward state and backward state ([hf i , hb i]). The final forward state hf t is used as the initial hidden state of the decoder. We use a special symbol denoting the type of slot (e.g. <food>) as the first decoder input, which is also mapped to a trainable embedding Etype. 
Therefore, the starting index $s_0$ of the slot value is computed as follows, where $u^0_i$ is the attention score of the $i$-th word in the input against the decoder state $d_0$:
$d_0 = \mathrm{LSTM}(h^f_t, E_{type})$
$u^0_i = v^\top \tanh(W_h h_i + W_d d_0)$
$a^0_i = \exp(u^0_i) / \sum_{j=0}^{t} \exp(u^0_j)$
$s_0 = \arg\max_i a^0_i$
The attention scores at the second decoding step are computed similarly, as shown below, where $E_{w_{s_0}}$ is the embedding of the word at the selected starting position; the ending position $s_1$ is then obtained in the same way as $s_0$ (a small code sketch of this two-step decoding is given below):
$d_1 = \mathrm{LSTM}(d_0, E_{w_{s_0}})$
$u^1_i = v^\top \tanh(W_h h_i + W_d d_1)$
Note that there is no guarantee that $s_1 > s_0$, although most of the time the model is able to identify consistent patterns in the data and therefore outputs reasonable word segments. When $s_1 < s_0$, it is often a good indication that the answer does not exist in the input, such as the none slot in DSTC2; this is the backoff strategy we take in our experiments on DSTC2. Depending on the nature of the task, it is certainly possible to set a constraint at the second decoding step, forcing $s_1$ to be larger than $s_0$. One can clearly see how the described model can handle unknown slot values: as long as they are mentioned explicitly during the dialogue, we have a chance of finding them. Compared with previous approaches, which all require some kind of candidate list, the proposed technique takes a different perspective on DST: for most slots in dialogue systems, tracking up-to-date values in a dialogue is not very different from tagging slots in a user query. While sequence labeling models such as conditional random fields (CRFs) have proven to be a great fit for slot tagging, the same formulation may as well be used for DST.
Figure 1: An illustration of the proposed PtrNet-based architecture for DST. The classifier outputs "other", indicating that the decision should be made by the PtrNet; the decoder (red) in the PtrNet predicts the ending word of the slot value, given the predicted starting word, via attention against the encoded states (blue).
Table 1: Classifier vs. pointer network in handling various difficult conditions (*the PtrNet requires post-normalization to handle rephrasing).
                        Classifier   PtrNet
Rephrasing              Yes          Yes*
none, dontcare, etc.    Yes          No
ASR errors              Hard         Hard
Unknown values          No           Yes
4 Rephrasing and Non-pointable Values
Our PtrNet-based architecture works by directly pinpointing in the conversation history the slot value that the user expressed, in its surface form. The model is therefore unaware of the different ways of referring to the same entity, and the derived dialogue states may not have canonical forms that are consistent with the values in the backend database, making it more difficult to retrieve the correct results. A good example from the DSTC2 dataset is the price slot, which can take the reference value "moderate"; in the actual dialogues, however, it is frequently expressed as "moderately priced", causing problems both for searching the database and for computing accuracy. While such a problem can be easily remedied by an extra canonicalization step (setting dialogue states to standard forms) before performing the database lookup, it is a much bigger problem if the slot value is not indicated explicitly by any particular word or phrase in the dialogue history; we describe such slot values as non-pointable. To give an example, in DSTC tasks, the special none value is given when the user has not specified any constraint for the slot.
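As an aside, the following is a minimal NumPy sketch of the two-step pointer decoding described in Section 3. The encoder states, decoder states, and parameters are random stand-ins, and the LSTM updates are abstracted away, so it only illustrates the attention-then-argmax structure of the equations above, not the trained model.

```python
import numpy as np

# Minimal sketch of the two-step pointer decoding described above.
# Encoder states h_i and decoder states d_0, d_1 are random stand-ins here;
# in the real model they come from (bi)LSTMs over the dialogue history.
rng = np.random.default_rng(0)
n_words, d_enc, d_dec, d_att = 12, 8, 8, 6

H = rng.normal(size=(n_words, d_enc))      # encoded dialogue history
W_h = rng.normal(size=(d_att, d_enc))
W_d = rng.normal(size=(d_att, d_dec))
v = rng.normal(size=d_att)

def point(decoder_state):
    # u_i = v^T tanh(W_h h_i + W_d d); a = softmax(u); return argmax position.
    u = np.tanh(H @ W_h.T + decoder_state @ W_d.T) @ v
    a = np.exp(u - u.max())
    a /= a.sum()
    return int(np.argmax(a)), a

d0 = rng.normal(size=d_dec)                # decoder state given the <slot> symbol
start, _ = point(d0)
d1 = rng.normal(size=d_dec)                # stand-in for LSTM(d0, E_{w_start})
end, _ = point(d1)

# A reversed span is treated as "no answer in the input".
value_span = (start, end) if end >= start else None
print(start, end, value_span)
```

In the sketch, a reversed span (end < start) simply falls back to none, i.e., the case where the user has given no constraint for the slot.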
While this information can be easily inferred from the dialogue, it is not possible to point to any specific word segment in the sentence as the corresponding slot value. The same problem also exists for the dontcare value in DSTC, which implies that the user can accept any values for a slot constraint. To address this issue, we add a classification component into our neural network architecture to handle non-pointable values. For each turn of the dialogue, the classifier makes a multi-class decision on whether the target slot should take any of the non-pointable values (e.g. dontcare or none) or it should be processed by the PtrNet. As illustrated in Figure 1, the final forward state out of the dialogue encoder is used as the feature vector for the classification layer, which is trained with cross entropy loss and jointly with the PtrNet. The best choice of the set of values to be handled by the classifier may not be obvious. In most cases both the classifier and the PtrNet are capable of extracting the correct slot value, although they both offer unique advantages over the other. Table 1 briefly summarizes the pros and cons of each model. The proposed combined architecture, taking the best of both worlds, is similar to the pointergenerator model introduced in (See et al., 2017) for abstractive text summarization. In their approach the PtrNet is also augmented with a classification based word generator, and the model can choose to generate words from a predefined vocabulary or copy words from the input. Other classify-and-copy mechanisms have also been explored in (Gu et al., 2016; Gulcehre et al., 2016; Eric and Manning, 2017a), and demonstrated improved performance on various seq2seq tasks such as summarization and E2E dialogue generation. 2 As we have shown in this paper, DST can also be formulated to incorporate such copying mechanisms, allowing itself to handle unknown slot values as well. 5 Targeted Feature Dropout Feature dropout is an effective technique to prevent feature co-adaption and improve model generalization (Hinton et al., 2012). It is most widely used for neural network based models but may as well be utilized for other feature based models. Targeted feature dropout however, was introduced in (Xu and Sarikaya, 2014) to address a very specific co-adaptation problem in slot filling, namely insufficient training of word context features. For slot filling, this problem often occurs when 1) the dictionary (a precompiled list of possible slot values) covers the majority of the slot values in the training data, or 2) most slot values repeat frequently resulting in insufficient tail representations. In both cases, the contextual features tend to get severely under-trained and as a result the model is not able to generalize to unknown slot values that are neither in the dictionary nor observed in training. The way our architecture works essentially extracts slot values in the same way as in slot filling, although the goal is to identify slots considering the entire dialogue context rather than a (usually) single user query. The same problem can also happen for DST if training data are not examined carefully. As an example, the DSTC2 task comes with a fixed ontology, it is not originally designed to track unknown slots (see the OOV rate in Table 2). Taking a closer look at the data, as shown in the histogram in Figure 2, the majority of the food type slot appears more than 10 times in the training data. 
As a result, the model oftentimes only learns to memorize these frequent slot values, and not the contextual patterns, which can be more crucial for extracting slot values that are not known in advance. To alleviate this generalization issue, we adapt the targeted dropout trick to work with our neural network based architecture. Instead of randomly disabling unigram and dictionary features for CRF models as done in the original work, we randomly set to zero the input word embeddings that correspond to the labeled slot values in the dialogue utterances. For example, the italian food type in DSTC2 appears almost 500 times in the training data. During training, every time "italian" is mentioned in the dialogue as the labeled user goal, we turn off the word embedding of "italian" in the model input with some probability, forcing the model to learn from the context to identify the slot value (a small sketch of this procedure is given after the training details in Section 6.2). Dictionary features are not used in our experiments; otherwise they could be turned off similarly. As we will show later in the results, this proves to be a particularly effective yet simple trick for improving generalization to unknown slot values, without sacrificing accuracy on the known and observed ones.
(The copy-augmented model in (Eric and Manning, 2017a) also outputs API call parameters — which are essentially dialogue states — in a seq2seq fashion, including unknown parameters by copying from the dialogue history, although that work focuses entirely on dialogue generation.)
Figure 2: Histogram of the food type slot on DSTC2 training data.
6 Experiments and Results
6.1 Datasets
We conduct our experiments on the DSTC2 dataset (Henderson et al., 2014a) and on the bAbI dialogue dataset as used in (Bordes and Weston, 2016). The DSTC2 dataset is the standard DST benchmark, comprised of real dialogues between humans and dialogue systems. We are mainly interested in tracking user goals; the other two components of the dialogue state, namely search methods and requested slots, are not concerned with unknown slot values and thus are not the focus of this paper. Meanwhile, the non-pointable values none and dontcare constitute a significant portion of DSTC2: overall, almost 60% of the user goals are labeled as either none or dontcare, the two predominant non-pointable values. The dataset is therefore particularly suitable for evaluating our proposed hybrid architecture. An important part of our experimental evaluation is to demonstrate our ability to identify unknown slot values. Although this happens frequently in real-world situations, the original DSTC2 dataset does not suffer from this particular problem: on the test data, there are no values that were not observed in training, for any of the three slot types. To conduct our investigation, we pick the food type slot to simulate unknown values. Specifically, we randomly select about 35% of the food types in the training set (26 out of 74) as unknown and discard all training instances where the correct food type is one of the 26 unknown types that we selected. The statistics of the resulting dataset are shown in Table 2.
Table 2: Statistics of the new modified DSTC2 dataset with unknown food types. About 27% of the training instances are discarded. The test set remains the same.
                              Original    New
#food types in train              74       48
#train instances               11677     8546
#test instances                 9890     9890
OOV food types in test (%)         0     30.4
On the other hand, the bAbI dialogue dataset is initially designed for evaluating E2E goal-oriented dialogue systems and has not been used specifically for DST.
The model is expected to predict both the system utterances and the API calls used to access the database. We notice that the parameters of the API calls are essentially the dialogue states at that point in the dialogue, so the dataset may as well be used for measuring the accuracy of a state tracker. We therefore convert Task 5 of the bAbI dataset, which is the full-dialogue combination of Tasks 1-4, into a DST dataset for our experiments. Although the bAbI dialogues are simulated and highly regular, a nice property of the dataset is that it comes with an out-of-vocabulary (OOV) test set in which the entities (locations and food types) are not seen in any training dialogues. This poses exactly the problem we are trying to address in this paper, namely predicting the API call parameters when they are not only unseen but also unknown to the system. Many of the previous E2E approaches simplify the prediction problem to a selection among all API calls that appear in the entire dataset, thus bypassing the problem of tracking unknown dialogue states explicitly, although we believe this is not a realistic simplification.
6.2 Model and Training Details
The proposed model is implemented in TensorFlow. We use the provided development set to tune the hyper-parameters, track the training progress, and select the best performing model for reporting the accuracy on the test sets. The joint architecture is trained separately for each slot type by minimizing the sum of the cross-entropy losses from the PtrNet and the classifier. Mini-batch SGD with a batch size of 50 and the Adam optimizer (Kingma and Ba, 2014) is used for training. Each word is mapped to a randomly initialized 100-dimensional embedding, and each dialogue instance is represented as a 540 × 100 dimensional input, with zero padding on the left when necessary. Instead of using the raw word sequences, the system utterances are replaced by the more succinct and consistent dialogue act representations such as "request slot food". One layer of LSTM is used with a state size of 200 (additional layers did not help noticeably). Standard dropout with a keep probability of 0.5 is applied during training at the input and output of the LSTM cells. To keep it simple, targeted dropout is done only once for the entire training set before training begins; the dataset is therefore static across epochs. To train the PtrNet, the location of the reference slot value in the dialogue needs to be provided. This does not require manual labeling, though: we simply use the last occurrence of the reference slot value in the dialogue history as the reference location. The occurrence is found via exact string match, and the two most frequent spelling variations ("moderate"/"moderately" and "center"/"centre") are considered equivalent. If no occurrence exists in a training instance (due to ASR errors or rephrasing), the instance is not used for training the PtrNet. The classifier, on the other hand, serves as a gatekeeper that decides which slot values should be handed over to the PtrNet. On the bAbI dataset there are no non-pointable slot values, and therefore everything is handled by the PtrNet. On DSTC2, we train the classifier to perform a three-way classification that determines whether the slot value is none, dontcare, or other.
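As a rough illustration of the targeted feature dropout of Section 5 in this once-before-training form, the sketch below marks, with some probability, the tokens of the labeled slot value so that their embeddings can be zeroed at the model input. The data layout and function name are illustrative assumptions rather than the actual implementation.

```python
import random

def targeted_dropout(dialogue_tokens, gold_value_tokens, p=0.05, seed=13):
    """Return a 0/1 mask over the dialogue tokens; positions with 0 should have
    their word embeddings zeroed at the model input. Only tokens belonging to
    the labeled slot value are eligible to be dropped (a sketch, not the
    authors' code)."""
    rng = random.Random(seed)
    drop_value = rng.random() < p          # decide once per training instance
    gold = set(t.lower() for t in gold_value_tokens)
    mask = []
    for tok in dialogue_tokens:
        if drop_value and tok.lower() in gold:
            mask.append(0)                 # e.g. zero out "italian"
        else:
            mask.append(1)
    return mask

# Toy example: the labeled user goal for this instance is food=italian.
tokens = "i want some italian food in the south".split()
print(targeted_dropout(tokens, ["italian"], p=1.0))
# -> [1, 1, 1, 0, 1, 1, 1, 1]
```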
As we have described, other slot values can also become non-pointable in actual dialogues. Those resulting from different surface forms are usually easier to handle: all we need is an extra post-processing step to normalize the value. The ones caused by ASR errors, however, are much more challenging. One can argue that a classifier may be better equipped for these cases, since it does not require locating the actual values in the word sequence, but unless there are consistent misrecognition patterns, they are difficult to handle for either the classifier or the PtrNet. The non-pointable values in DSTC2, besides none and dontcare, are predominantly due to recognition errors, and we decide not to do anything specific about them: the PtrNet is tasked with processing these misrecognized utterances, and no normalization (except for "moderately" and "center") is performed on the network output for computing the accuracy. (Non-pointable values besides none and dontcare constitute 9.7% of food, 7.6% of location, and 4.7% of price on the test data, effectively setting an upper bound on the accuracy.)
6.3 Evaluation Setup
The DSTC2 dataset is a standard benchmark for the task; we therefore compare the joint goal accuracy (a turn is considered correct if values are predicted correctly for all slots) of the proposed model with previously reported numbers to show the efficacy of our approach under regular circumstances, i.e., when all slot values are known and observed in training. However, it is not our goal to outperform all previous DST systems; the main theme is that our technique allows identifying unknown slot values effectively, and even when used in the standard setting, our model yields state-of-the-art results. Measuring the accuracy on unknown slot values, however, does not have well-established baselines in the literature. Most previous systems are not concerned with this problem, and many of them are inherently not capable of outputting unknown values. So instead of comparisons with previous techniques, we will focus on demonstrating how tracking unknown slot values can be a serious problem and how targeted dropout can improve things drastically.
6.4 Results
The joint goal accuracy on the standard DSTC2 test set is shown in Table 3, comparing our PtrNet-based model against various previously reported baselines.
Table 3: Joint goal accuracy on the DSTC2 test set vs. various approaches as reported in the literature.
Models                          Joint Acc.
Delexicalized RNN               69.1
Delexicalized RNN + semdict     72.9
NBT-DNN                         72.6
NBT-CNN                         73.4
MemN2N                          74.0
Scalable Multi-domain DST       70.3
PtrNet                          72.1
It is important to emphasize that the PtrNet model is an E2E model that uses no SLU output and makes use of only the 1-best ASR hypothesis, without any confidence measure, for testing. Although more sophisticated DST models sometimes demonstrate better accuracy, our PtrNet model holds various advantages over the baseline models: the delexicalized RNN models (Henderson et al., 2014b,c) utilize the n-best list and/or the SLU output; the NBT (Mrksic et al., 2017) and MemN2N (Perez and Liu, 2017) models are E2E but both depend on candidate lists being given and hence are not designed to handle unknown (as opposed to unseen) slot values; the scalable DST model (Rastogi et al., 2018), although addressing the same problem of unbounded value sets, relies on SLU to generate value candidates and also does not perform equally well on the standard test set.
On the modified DSTC2 dataset with the reduced training set, the accuracy of the known/seen and unknown food types is shown in Figure 3. The standard training process with no targeted dropout performs poorly when the food types are not known beforehand, epitomizing the often overlooked challenge of handling unknown slot values. With a small dropout probability of 5%, the accuracy on unknown values essentially increases by three times (from 11.6% to 34.4%), while the accuracy on other values remains roughly the same. Similar observations can also be made on the bAbI dataset predicting OOV API parameters (Table 4). While the dataset is quite artificial and in most cases we can achieve perfect accuracy on the regular test set, the OOV parameter values are not nearly as easy to predict. The targeted dropout Figure 3: Accuracy of known/seen and unknown food types on the modified DSTC2 dataset with different dropout probabilities. Regular Test OOV Test p=0 p=0.1 p=0 p=0.1 food 100 100 86.2 100 location 100 100 74.7 99.6 Table 4: Accuracy of predicting regular and OOV food and location parameters in bAbI (Task 5) API calls w/ (p=0.1) and w/o (p=0) targeted dropout. however, allows us to bridge the accuracy gap entirely. 7 Conclusion An E2E dialogue state tracker is introduced based on the pointer network. The model outputs slot values in an extractive fashion similar to the slot filling task in SLU. We also add a jointly trained classification component to combine with the pointer network, forming a hybrid architecture that not only achieves state-of-the-art accuracy on the DSTC2 dataset, but also more importantly is able to handle unknown slot values, which is a problem often neglected although particularly valuable in real world situations. A feature dropout trick is also described and proves to be particularly effective. Acknowledgments We are grateful to the anonymous reviewers for their insightful comments. We also would like to thank Mei-Yuh Hwang for helpful discussions. References Antoine Bordes and Jason Weston. 2016. Learning end-to-end goal-oriented dialog. In CoRR. 1456 Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Chen Yun-Nung, Faisal Ahmed, and Deng Li. 2017. Towards end-to-end reinforcement learning of dialogue agents for information access. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL). Mihail Eric and Christopher Manning. 2017a. A copyaugmented sequence-to-sequence architecture gives good performance on task-oriented dialogue. In arXiv preprint arXiv:1701.04024v3 [cs.CL]. Mihail Eric and Christopher Manning. 2017b. Keyvalue retrieval networks for task-oriented dialogue. In arXiv preprint arXiv:1705.05414v2 [cs.CL]. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). Matthew Henderson, Blaise Thomson, and Jason Williams. 2014a. The second dialog state tracking challenge. In 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Matthew Henderson, Blaise Thomson, and Steve Young. 2014b. Robust dialosg state tracking using delexicalised recurrent neural networks and unsupervised adaptation. 
In Proceedings of IEEE Spoken Language Technology.. Matthew Henderson, Blaise Thomson, and Steve Young. 2014c. Word based dialog state tracking with recurrent neural networks. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL). Geoffrey Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2012. Improving neural networks by preventing coadaptation of feature detectors. In arXiv preprint arXiv:1207.0580v1 [cs.NE]. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR). Xiujun Li, Yun-Nung Chen, Lihong Li, Jianfeng Gao, and Asli Celikyilmaz. 2017. End-to-end task-completion neural dialogue systems. In arxiv preprint arXiv:1703.01008v3 [cs.CL]. Bing Liu, Gokhan Tur, Dilek Hakkani-Tur, Pararth Shah, and Larry Heck. 2017. End-to-end optimization of task-oriented dialogue model with deep reinforcement learning. In arxiv preprint arXiv:1711.10712v2 [cs.CL]. Metallinou Metallinou, Dan Bohus, and Jason Williams. 2013. Discriminative state tracking for spoken dialog systems. In Proceedings of the 51th Annual Meeting of the Association for Computational Linguistics (ACL). Nikola Mrksic, Diarmuid Seaghdha, Blaise Thomson, Milica Gasic, Pei-Hao Su, David Vandyke, TsungHsien Wen, and Steve Young. 2015. Multi-domain dialog state tracking using recurrent neural networks. In Proceedings of the 53th Annual Meeting of the Association for Computational Linguistics (ACL). Nikola Mrksic, Diarmuid Seaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL). Julien Perez and Fei Liu. 2017. Dialog state tracking, a machine reading approach using memory network. In Proceedings of EACL. Abhinav Rastogi, Dilek Hakkani-Tur, and Larry Heck. 2018. Scalable multi-domain dialogue state tracking. In arXiv preprint arXiv:1712.10224v2 [cs.CL]. Abigail See, Peter Liu, and Christopher Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL). Gokhan Tur and Renato De Mori. 2011. Spoken Language Understanding: Systems for Extracting Semantic Information from Speech. Wiley. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In NIPS. Shuohang Wang and Jing Jiang. 2016. Machine comprehension using match-lstm and answer pointer. In arXiv preprint arXiv:1608.07905v2 [cs.CL]. Tsung-Hsien Wen, David Vandyke, Nikola Mrksic, Milica Gasic, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A networkbased end-to-end trainable task-oriented dialogue system. In Proceedings of EACL. Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks. In CoRR. Jason Williams, Antoine Raux, Deepak Ramachandran, and Alan Black. 2013. The dialog state tracking challenge. In Proceedings of the SIGDIAL 2013 Conference. Jason Williams and Geoffrey Zweig. 2016. End-toend lstm-based dialog control optimized with supervised and reinforcement learning. In arxiv preprint arXiv:1606.01269v1 [cs.CL]. 1457 Puyang Xu and Ruhi Sarikaya. 2014. Targeted feature dropout for robust slot filling in natural language understanding. In ISCA - International Speech Communication Association. Tiancheng Zhao and Maxine Eskenazi. 2016. 
Towards end-to-end learning for dialog state tracking and management using deep reinforcement learning. In Proceedings of SIGDIAL 2016 Conference. Lukas Zilka and Filip Jurcicek. 2015. Incremental lstm-based dialog state tracker. In ASRU.
2018
134
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1458–1467 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1458 Global-Locally Self-Attentive Dialogue State Tracker Victor Zhong, Caiming Xiong, Richard Socher Salesforce Research Palo Alto, CA {vzhong, cxiong, rsocher}@salesforce.com Abstract Dialogue state tracking, which estimates user goals and requests given the dialogue context, is an essential part of taskoriented dialogue systems. In this paper, we propose the Global-Locally SelfAttentive Dialogue State Tracker (GLAD), which learns representations of the user utterance and previous system actions with global-local modules. Our model uses global modules to share parameters between estimators for different types (called slots) of dialogue states, and uses local modules to learn slot-specific features. We show that this significantly improves tracking of rare states and achieves stateof-the-art performance on the WoZ and DSTC2 state tracking tasks. GLAD obtains 88.1% joint goal accuracy and 97.1% request accuracy on WoZ, outperforming prior work by 3.7% and 5.5%. On DSTC2, our model obtains 74.5% joint goal accuracy and 97.5% request accuracy, outperforming prior work by 1.1% and 1.0%. 1 Introduction Task oriented dialogue systems can significantly reduce operating costs by automating processes such as call center dispatch and online customer support. Moreover, when combined with automatic speech recognition systems, task-oriented dialogue systems provide the foundation of intelligent assistants such as Amazon Alexa, Apple Siri, and Google Assistant. In turn, these assistants allow for natural, personalized interactions with users by tailoring natural language system responses to the dialogue context. Dialogue state tracking (DST) is a crucial part of dialogue systems. In DST, a dialogue state tracker estimates the state of the conversation using the current user utterance and the conversation history. This estimated state is then used by the system to plan the next action and respond to the user. A state in DST typically consists of a set of requests and joint goals. Consider the task of restaurant reservation as an example. During each turn, the user may inform the system of particular goals the user would like to achieve (e.g. inform(food=french)), or request for more information from the system (e.g. request(address)). The set of goal and request slot-value pairs (e.g. (food, french), (request, address)) given during a turn are referred to as the turn goal and turn request. The joint goal is the set of accumulated turn goals up to the current turn. Figure 1 shows an example dialogue with annotated turn states, in which the user reserves a restaurant. Traditional dialogue state trackers rely on Spoken Language Understanding (SLU) systems (Henderson et al., 2012) in order to understand user utterances. These trackers accumulate errors from the SLU, which sometimes do not have the necessary dialogue context to interpret the user utterances. Subsequent DST research forgo the SLU and directly infer the state using the conversation history and the user utterance (Henderson et al., 2014b; Zilka and Jurcicek, 2015; Mrkˇsi´c et al., 2015). These trackers rely on handcrafted semantic dictionaries and delexicalization — the anonymization of slots and values using generic tags — to achieve generalization. Recent work by Mrkˇsi´c et al. 
(2017) applies representation learning using convolutional neural networks to learn features relevant to each state, as opposed to hand-crafting features. A key problem in DST that is not addressed by existing methods is the extraction of rare slot-value pairs that compose the state during each turn. Because task-oriented dialogues cover large state spaces, many slot-value pairs that compose the state rarely occur in the training data. Although the chance of a particular rare slot-value pair being specified by the user in a turn is small, the chance that at least one rare slot-value pair is specified is large. Failure to predict these rare slot-value pairs results in incorrect turn-level goal and request tracking, and accumulated errors in turn-level goal tracking significantly degrade joint goal tracking. For example, in the WoZ state tracking dataset, slot-value pairs have 214.9 training examples on average, while 38.6% of turns have a joint goal that contains a rare slot-value pair with fewer than 20 training examples.
Figure 1: An example dialogue from the WoZ restaurant reservation corpus. Dashed lines divide turns in the dialogue. A turn contains a user utterance (purple), followed by the corresponding turn-level goals and requests (blue). The system then executes actions (yellow) and formulates the result into a natural language response (yellow).
In this work, we propose the Global-Locally Self-Attentive Dialogue State Tracker (GLAD), a new state-of-the-art model for dialogue state tracking. In contrast to previous work that estimates each slot-value pair independently, GLAD uses global modules to share parameters between the estimators for each slot and local modules to learn slot-specific feature representations. We show that by doing so, GLAD generalizes to rare slot-value pairs with few training examples. GLAD achieves state-of-the-art results of 88.1% goal accuracy and 97.1% request accuracy on the WoZ dialogue state tracking task (Wen et al., 2017), outperforming the prior best by 3.7% and 5.5%. On DSTC2 (Henderson et al., 2014a), we achieve 74.5% goal accuracy and 97.5% request accuracy, outperforming the prior best by 1.1% and 1.0%.
2 Global-Locally Self-Attentive Dialogue State Tracker
One formulation of state tracking is to predict the turn state given a user utterance and previous system actions (Williams and Young, 2007). Like previous methods (Henderson et al., 2014b; Wen et al., 2017; Mrkˇsi´c et al., 2017), GLAD decomposes the multi-label state prediction problem into a collection of binary prediction problems by using a distinct estimator for each slot-value pair that makes up the state. Hence, we describe GLAD with respect to a slot-value pair that is being predicted by the model.
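As a minimal illustration of this decomposition, the sketch below turns per-slot-value binary scores into a turn-level state and accumulates turn goals into a joint goal following the rule described later in Section 3.2; the 0.5 threshold and the toy scores are illustrative assumptions, not part of the model definition.

```python
def turn_state(scores, threshold=0.5):
    """scores: {(slot, value): probability} from the per-pair binary
    estimators. Returns the turn-level goals/requests above the threshold.
    The 0.5 threshold is an illustrative assumption."""
    return {sv for sv, p in scores.items() if p > threshold}

def accumulate_joint_goal(joint_goal, turn_goals):
    """Later turn goals take precedence when they re-specify a slot,
    as described in Section 3.2."""
    updated = dict(joint_goal)
    for slot, value in turn_goals:
        updated[slot] = value
    return updated

scores = {("food", "french"): 0.92, ("food", "thai"): 0.03,
          ("area", "south"): 0.81, ("request", "phone"): 0.77}
goals = {sv for sv in turn_state(scores) if sv[0] != "request"}
joint = accumulate_joint_goal({"food": "thai"}, goals)
print(sorted(turn_state(scores)), joint)
# food=thai is overridden by food=french; area=south is added.
```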
Shown in Figure 2, GLAD is comprised of an encoder module and a scoring module. The encoder module consists of separate global-locally self-attentive encoders for the user utterance, the previous system actions, and the slot-value pair under consideration. The scoring module consists of two scorers: one considers the contribution from the utterance, while the other considers the contribution from the previous system actions.
Figure 2: The Global-Locally Self-Attentive Dialogue State Tracker.
Figure 3: Global-locally self-attentive encoder.
2.1 Global-Locally Self-Attentive Encoder
We begin by describing the global-locally self-attentive encoder, which makes up the encoder module. DST datasets tend to be small relative to their state space, in that many slot-value pairs rarely occur in the dataset. Because each state is comprised of a set of slot-value pairs, many of them rare, poor inference of rare slot-value pairs subsequently results in poor turn-level tracking. This problem is amplified in joint tracking, due to the accumulation of turn-level errors. In developing this encoder, we seek to better model rare slot-value pairs by sharing parameters between slots through global modules and learning slot-specific features through local modules. The global-locally self-attentive encoder consists of a bidirectional LSTM (Hochreiter and Schmidhuber, 1997), which captures temporal relationships within the sequence, followed by a self-attention layer to compute a summary of the sequence. Figure 3 illustrates the global-locally self-attentive encoder. Consider the process of encoding a sequence with respect to a particular slot $s$. Let $n$ denote the number of words in the sequence, $d_{emb}$ the dimension of the embeddings, and $X \in \mathbb{R}^{n \times d_{emb}}$ the word embeddings corresponding to words in the sequence. We produce a global encoding $H^g$ of $X$ using a global bidirectional LSTM,
$H^g = \mathrm{biLSTM}^g(X) \in \mathbb{R}^{n \times d_{rnn}}$ (1)
where $d_{rnn}$ is the dimension of the LSTM state. We similarly produce a local encoding $H^s$ of $X$, taking into account the slot $s$, using a local bidirectional LSTM:
$H^s = \mathrm{biLSTM}^s(X) \in \mathbb{R}^{n \times d_{rnn}}$ (2)
The outputs of the two LSTMs are combined through a mixture layer to yield a global-local encoding $H$ of $X$:
$H = \beta_s H^s + (1 - \beta_s) H^g \in \mathbb{R}^{n \times d_{rnn}}$ (3)
Here, the scalar $\beta_s$ is a learned parameter between 0 and 1 that is specific to the slot $s$. Next, we compute a global-local self-attention context $c$ over $H$. Self-attention, or intra-attention, is a very effective method of computing summary context over variable-length sequences for natural language processing tasks (Cheng et al., 2016; Vaswani et al., 2017; He et al., 2017; Lee et al., 2017). In our case, we use a global self-attention module to compute attention context useful for general-purpose state tracking, as well as a local self-attention module to compute slot-specific attention context.
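A small NumPy sketch of the gated global-local mixture just described: the BiLSTM outputs are random stand-ins, and the same slot-specific gate is reused to mix the global and local self-attention summaries whose exact equations follow below.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d_rnn = 10, 16

# Stand-ins for the global and slot-specific BiLSTM outputs.
H_global = rng.normal(size=(n, d_rnn))
H_slot = rng.normal(size=(n, d_rnn))
beta_s = 0.3                      # learned, slot-specific gate in [0, 1]

# Gated mixture of the two encodings (Equation (3)).
H = beta_s * H_slot + (1.0 - beta_s) * H_global

def self_attention(H_enc, w, b):
    """Scalar score per position, softmax, weighted sum -- the
    score/softmax/sum pattern of the self-attention equations given next.
    w and b are random stand-ins for learned parameters."""
    scores = H_enc @ w + b
    p = np.exp(scores - scores.max())
    p /= p.sum()
    return p @ H_enc

c_global = self_attention(H, rng.normal(size=d_rnn), 0.0)
c_slot = self_attention(H, rng.normal(size=d_rnn), 0.0)

# The same gate mixes the global and local attention contexts.
c = beta_s * c_slot + (1.0 - beta_s) * c_global
print(H.shape, c.shape)           # (10, 16) (16,)
```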
For each $i$-th element $H_i$, we compute a scalar global self-attention score $a^g_i$, which is subsequently normalized across all elements using a softmax function:
$a^g_i = W^g H_i + b^g \in \mathbb{R}$ (4)
$p^g = \mathrm{softmax}(a^g) \in \mathbb{R}^n$ (5)
The global self-attention context $c^g$ is then the sum of the elements $H_i$, weighted by the corresponding normalized global self-attention scores $p^g_i$:
$c^g = \sum_i p^g_i H_i \in \mathbb{R}^{d_{rnn}}$ (6)
We similarly compute the local self-attention context $c^s$:
$a^s_i = W^s H_i + b^s \in \mathbb{R}$ (7)
$p^s = \mathrm{softmax}(a^s) \in \mathbb{R}^n$ (8)
$c^s = \sum_i p^s_i H_i \in \mathbb{R}^{d_{rnn}}$ (9)
The global-local self-attention context $c$ is the mixture
$c = \beta_s c^s + (1 - \beta_s) c^g \in \mathbb{R}^{d_{rnn}}$ (10)
For ease of exposition, we define the multi-value encode function $\mathrm{encode}(X)$,
$\mathrm{encode} : X \to H, c$ (11)
which maps the sequence $X$ to the encoding $H$ and the self-attention context $c$.
2.2 Encoding module
Having defined the global-locally self-attentive encoder, we now build representations for the user utterance, the previous system actions, and the slot-value pair under consideration. Let $U$ denote the word embeddings of the user utterance, $A_j$ denote those of the $j$-th previous system action (e.g. request(price range)), and $V$ denote those of the slot-value pair under consideration (e.g. food = french). We have
$H^{utt}, c^{utt} = \mathrm{encode}(U)$ (12)
$H^{act}_j, C^{act}_j = \mathrm{encode}(A_j)$ (13)
$H^{val}, c^{val} = \mathrm{encode}(V)$ (14)
2.3 Scoring module
Intuitively, there are two sources of contribution to whether the user has expressed the slot-value pair under consideration. The first source is the user utterance, in which the user directly states the goals and requests. An example of this is the user saying "how about a French restaurant in the centre of town?", after the system asked "how may I help you?" To handle these cases, we determine whether the utterance specifies the slot-value pair. Namely, we attend over the user utterance $H^{utt}$, taking into account the slot-value pair being considered $c^{val}$, and use the resulting attention context $q^{utt}$ to score the slot-value pair:
$a^{utt}_i = (H^{utt}_i)^\top c^{val} \in \mathbb{R}$ (15)
$p^{utt} = \mathrm{softmax}(a^{utt}) \in \mathbb{R}^m$ (16)
$q^{utt} = \sum_i p^{utt}_i H^{utt}_i \in \mathbb{R}^{d_{rnn}}$ (17)
$y^{utt} = W q^{utt} + b \in \mathbb{R}$ (18)
where $m$ is the number of words in the user utterance. The score $y^{utt}$ indicates the degree to which the value was expressed by the user utterance. The second source is the previous system actions. This source is informative when the user utterance does not present enough information and instead refers to previous system actions. An example of this is the user saying "yes", after the system asked "would you like a restaurant in the centre of town?" To handle these cases, we examine the previous actions after considering the user utterance. First, we attend over the previous action representations $C^{act} = [C^{act}_1 \cdots C^{act}_l]$, taking into account the current user utterance summary $c^{utt}$. Here, $l$ is the number of previous system actions. Then, we use the similarity between the attention context $q^{act}$ and the slot-value pair summary $c^{val}$ to score the slot-value pair:
$a^{act}_j = (C^{act}_j)^\top c^{utt} \in \mathbb{R}$ (19)
$p^{act} = \mathrm{softmax}(a^{act}) \in \mathbb{R}^{l+1}$ (20)
$q^{act} = \sum_j p^{act}_j C^{act}_j \in \mathbb{R}^{d_{rnn}}$ (21)
$y^{act} = (q^{act})^\top c^{val} \in \mathbb{R}$ (22)
In addition to the real previous system actions, we introduce a sentinel action in each turn, which allows the attention mechanism to ignore the previous system actions. The score $y^{act}$ indicates the degree to which the value was expressed by the previous actions. The final score $y$ is then a weighted sum of the two scores $y^{utt}$ and $y^{act}$, normalized by the sigmoid function $\sigma$:
$y = \sigma(y^{utt} + w\, y^{act}) \in \mathbb{R}$ (23)
Here, the weight $w$ is a learned parameter.
3 Experiments
3.1 Dataset
The Dialogue Systems Technology Challenges (DSTC) provide a common framework for developing and evaluating dialogue systems and dialogue state trackers (Williams et al., 2013; Henderson et al., 2014a). Under this framework, dialogue semantics such as states and actions are based on a task ontology such as restaurant reservation. During each turn, the user may inform the system of particular goals (e.g. inform(food=french)) or request more information from the system (e.g. request(address)). For instance, food and area are examples of slots in the DSTC2 task, and french and chinese are example values within the food slot. We train and evaluate our model using DSTC2 as well as the Wizard of Oz (WoZ) restaurant reservation task (Wen et al., 2017), which also adheres to the DSTC framework and has the same ontology as DSTC2. For DSTC2, it is standard to evaluate using the N-best list of the automatic speech recognition system (ASR) that is included with the dataset. Because of this, each turn in the DSTC2 dataset contains several noisy ASR outputs instead of a noise-free user utterance. The WoZ task does not provide ASR outputs, so we instead train and evaluate using the user utterance.
3.2 Metrics
We evaluate our model using turn-level request tracking accuracy as well as joint goal tracking accuracy. Our definition of GLAD in the previous sections describes how to obtain turn goals and requests. To compute the joint goal, we simply accumulate turn goals. In the event that the current turn goal specifies a slot that has been specified before, the new specification takes precedence. For example, suppose the user specifies a food=french restaurant during the current turn. If the joint goal has no existing food specification, then we simply add food=french to the joint goal. Alternatively, if food=thai had been specified in a previous turn, we simply replace it with food=french.
3.3 Implementation Details
We use fixed, pretrained GloVe embeddings (Pennington et al., 2014) as well as character n-gram embeddings (Hashimoto et al., 2017). Each model is trained using ADAM (Kingma and Ba, 2015). For regularization, we apply dropout with a 0.2 drop rate (Srivastava et al., 2014) to the output of each local module and each global module. We use the development split for hyperparameter tuning and apply early stopping using the joint goal accuracy. For the DSTC2 task, we train using transcripts of user utterances and evaluate using the noisy ASR transcriptions. During evaluation, we take the sum of the scores resulting from each ASR output as the output score of a particular slot-value pair, and then normalize this sum using a sigmoid function as in Equation (23). We also apply word dropout, in which the embeddings of a word are randomly set to zero with a probability of 0.3. This accounts for the poor quality of ASR outputs in DSTC2, which frequently miss several words of the user utterance. We did not find word dropout to be helpful for the WoZ task, which does not contain noisy ASR outputs.
3.4 Comparison to Existing Methods
Table 1 shows the performance of GLAD compared to previous state-of-the-art models. The delexicalisation models, which replace slots and values in the utterance with generic tags, are from Henderson et al. (2014b) for DSTC2 and Wen et al. (2017) for WoZ. Semantic dictionaries map slot-value pairs to hand-engineered synonyms and phrases.
The NBT (Mrkˇsi´c et al., 2017) applies CNN over word embeddings learned over a paraphrase database (Wieting et al., 2015) instead of delexicalised n-gram features. On the WoZ dataset, we find that GLAD significantly improves upon previous state-of-theart performance by 3.7% on joint goal tracking accuracy and 5.5% on turn-level request tracking accuracy. On the DSTC dataset, which evaluates using noisy ASR outputs instead of user utterances, GLAD improves upon previous state of the art performance by 1.1% on joint goal tracking accuracy and 1.0% on turn-level request tracking accuracy. 1463 Model DSTC2 WoZ Joint goal Turn request Joint goal Turn request Delexicalisation-Based Model 69.1% 95.7% 70.8% 87.1% Delex. Model + Semantic Dictionary 72.9% 95.7% 83.7% 87.6% Neural Belief Tracker (NBT) - DNN 72.6% 96.4% 84.4% 91.2% Neural Belief Tracker (NBT) - CNN 73.4% 96.5% 84.2% 91.6% GLAD 74.5± 0.2% 97.5± 0.1% 88.1± 0.4% 97.1± 0.2% Table 1: Test accuracies on the DSTC2 and WoZ restaurant reservation datasets. The other models are: delexicalisation DSTC2 (Henderson et al., 2014b), delexicalisation WoZ (Wen et al., 2017), and NBT (Mrkˇsi´c et al., 2017). We run 10 models using random seeds with early stopping on the development set, and report the mean and standard deviation test accuracies for each dataset. Model Tn goal Jnt goal Tn request GLAD 93.7% 88.8% 97.3% - global 88.8% 73.4% 97.3% - local 93.1% 86.6% 95.1% - self-attn 91.6% 84.4% 97.1% - LSTM 88.7% 71.5% 93.2% Table 2: Ablation study showing turn goal, joint goal, and turn request accuracies on the dev. split of the WoZ dataset. For “- self-attn”, we use meanpooling instead of self-attention. For “- LSTM”, we compute self-attention over word embeddings. 3.5 Ablation study We perform ablation experiments on the development set to analyze the effectiveness of different components of GLAD. The results of these experiments are shown in Table 2. In addition to the joint goal accuracy and the turn request accuracy, we also show the turn goal accuracy for reference. Temporal order is important for state tracking. We experiment with an embedding-matching variant of GLAD with self-attention but without LSTMs. The weaker performance by this model suggests that representations that capture temporal dependencies is helpful for understanding phrases for state tracking. Self-attention allows slot-specific, robust feature learning. We observe a consistent drop in performance for models that use mean-pooling as opposed to self-attention (Equations (4) to (6)). This stems from the flexibility in the attention context computation afforded by the self-attention mechanism, which allows the model to focus on select words relevant to the slot-value pair under consideration. Figure 4 illustrates an example in which local self-attention modules focus on relevant parts of the utterance. We note that the model attends to relevant phrases that n-gram and embedding matching techniques do not capture (e.g. “within 5 miles” for the “area” slot). Global-local sharing improves goal tracking. We study the two extremes of sharing between the global module and the local module. The first uses only the local module and results in degradation in goal tracking but does not affect request tracking (e.g. βs = 1). This is because the former is a joint prediction over several slot-values with few training examples, whereas the latter predicts a single slot that has the most training examples. 
The second uses only the global module and underperforms in both goal tracking and request tracking (e.g. βs = 0). This model is less expressive, as it lacks slot-specific specializations except for the final scoring modules. Figure 5 shows the performance of the original model and the two sharing variants across different numbers of occurrences in the training data. GLAD consistently outperforms both variants for rare slot-value pairs. For slot-value pairs with an abundance of training data, there is no significant performance difference between models as there is sufficient data to generalize. 3.6 Qualitative analysis Table 3 shows example predictions by GLAD. In the first example, the user explicitly outlines requests and goals in a single utterance. In the second example, the model previously prompted the user for confirmation of two requests (e.g. for the restaurant’s address and phone number), and the user simply answers in the affirmative. In this case, the model still obtains the correct result by leveraging the system actions in the previous turn. The last example demonstrates an error made by 1464 wait , you never gave me the information . find me a chinese restaurant within 5 miles . <pad> area food master price range request Figure 4: Global and local self-attention scores on user utterances. Each row corresponds to the selfattention score for a particular slot. Slot-specific local self-attention modules emphasize relevant key words and phrases to that slot, whereas the global module attends to all relevant words. 0.8 0.85 0.9 0.95 1 0-100 100-200 200-1000 Average F1 score # Training instances GLAD - global - local Figure 5: F1 performance for each slot-value pair in the development split of the WoZ task, grouped by the number of training instances. the model. Here, the user does not answer the system’s previous request for the choice of food and instead asks for what food is available. The model misinterprets the lack of response as the user not caring about the choice of food. 4 Related Work Dialogue State Tracking. Traditional dialogue state trackers rely on a separate SLU component that serves as the initial stage in the dialogue agent pipeline. The downstream tracker then combines the semantics extracted by the SLU with previous dialogue context in order to estimate the current dialogue state (Thomson and Young, 2010; Wang and Lemon, 2013; Williams, 2014; Perez and Liu, 2017). Recent results in dialogue state tracking show that it is beneficial to jointly learn speech understanding and dialogue tracking (Henderson et al., 2014b; Zilka and Jurcicek, 2015; Wen et al., 2017). These approaches directly take as input the N-best list produced by the ASR system. By avoiding the accumulation of errors from the initial SLU component, these joint approaches typically achieved stronger performance on tasks such as DSTC2. One drawback to these approaches is that they rely on hand-crafted features and complex domain-specific lexicon (in addition to the ontology), and consequently are difficult to extend and scale to new domains. The recent Neural Belief Tracker (NBT) by Mrkˇsi´c et al. (2017) avoids reliance on hand-crafted features and lexicon by using representation learning. The NBT employs convolutional filters over word embeddings in lieu of previously-used hand-engineered features. 
Moreover, to outperform previous methods, the NBT uses pretrained embeddings tailored to retain semantic relationships by injecting semantic similarity constraints from the Paraphrase Database (Wieting et al., 2015; Ganitkevitch et al., 2013). On the one hand, these specialized embeddings are more difficult to obtain than word embeddings from language modeling. On the other hand, these embeddings are not specific to any dialogue domain and are directly usable for new domains. Neural attention models in NLP. Attention mechanisms have led to improvements on a variety of natural language processing tasks. Bahdanau et al. (2015) propose attentional sequence to sequence models for neural machine translation. Luong et al. (2015) analyze various attention techniques and highlight the effectiveness of the simple, parameterless dot product attention. Similar models have also proven successful in tasks such as summarization (See et al., 2017; Paulus et al., 2018). Self-attention, or intra-attention, has led improvements in language modeling, sentiment 1465 System actions in previous turn User utterance Predicted turn belief state N/A I would like Polynesian food in the South part of town. Please send me phone number and address. request(phone) request(address) inform(food=polynesian) inform(area=south) request(address) request(phone) There is a moderately priced italian place called Pizza hut at cherry hilton. would you like the address and phone number? Yes please. request(phone) request(address) request(food) request(price range) ok I can help you with that. Are you looking for a particular type of food, or within a specific price range? I just want to eat at a cheap restaurant in the south part of town. What food types are available, can you also provide some phone numbers? request(phone) inform(price range=cheap) inform(area=south) -inform(food=dontcare) +request(food) Table 3: Example predictions by Global-Locally Self-Attentive Dialogue State Tracker on the development split of the WoZ restaurant reservation dataset. Model predicted slot-value pairs that are not in the ground truth (e.g. false positives) are prefaced with a “+” symbol. Ground truth slot-value pairs that are not predicted by the model (e.g. false negatives) are prefaced with a “-” symbol. analysis, natural language inference (Cheng et al., 2016), semantic role labeling (He et al., 2017), and coreference resolution (Lee et al., 2017). Deep self-attention has also achieved state-of-the-art results in machine translation (Vaswani et al., 2017). Coattention, or bidirectional attention that codependently encode two sequences, have led to significant gains in question answering (Xiong et al., 2017; Seo et al., 2017) as well as visual question answering (Lu et al., 2016). Parameter sharing between related tasks. Sharing parameters between related tasks to improve joint performance is prominent in multitask learning (Caruana, 1998; Thrun, 1996). Early works in multi-tasking use Gaussian processes whose covariance matrix is induced from shared kernels (Lawrence and Platt, 2004; Yu et al., 2005; Seeger et al., 2005; Bonilla et al., 2008). Hashimoto et al. (2017) propose a progressively trained joint model for NLP tasks. When a new task is introduced, a new section is added to the network whose inputs are intermediate representations from sections for previous tasks. In this sense, tasks share parameters in a hierarchical manner. Johnson et al. 
(2016) propose a single model that jointly learns to translate between multiple language pairs, including one-tomany, many-to-one, and many-to-many translation. Kaiser et al. (2017) propose a model that jointly learns multiple tasks across modalities. Each modality has a corresponding modality net, which extracts a representation that is fed into a shared encoder. 5 Conclusion We introduced the Global-Locally Self-Attentive Dialogue State Tracker (GLAD), a new state-ofthe-art model for dialogue state tracking. At the core of GLAD is the global-locally self-attention encoder, whose global modules allow parameter sharing between slots and local modules allow slot-specific feature learning. This allows GLAD to generalize on rare slot-value pairs with few training data. GLAD achieves state-of-theart results of 88.1% goal accuracy and 97.1% request accuracy on the WoZ dialogue state tracking task, as well as 74.5% goal accuracy and 97.5% request accuracy on DSTC2. Acknowledgement We thank Nikola Mrkˇsi´c for providing us with a preprocessed version of the DSTC2 dataset. 1466 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. Edwin V Bonilla, Kian M Chai, and Christopher Williams. 2008. Multi-task gaussian process prediction. In NIPS. Rich Caruana. 1998. Multitask learning. In Learning to learn, pages 95–133. Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. In ACL. Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. Ppdb: The paraphrase database. In HLT-NAACL. Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2017. A joint many-task model: Growing a neural network for multiple NLP tasks. In ACL. Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep semantic role labeling: What works and whats next. In ACL. Matthew Henderson, Milica Gaˇsi´c, Blaise Thomson, Pirros Tsiakoulis, Kai Yu, and Steve Young. 2012. Discriminative spoken language understanding using word confusion networks. In Spoken Language Technology Workshop (SLT). Matthew Henderson, Blaise Thomson, and Jason D Williams. 2014a. The second dialog state tracking challenge. In SIGDIAL. Matthew Henderson, Blaise Thomson, and Steve Young. 2014b. Word-based dialog state tracking with recurrent neural networks. In SIGDIAL. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Compututation 9(8). Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vigas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Technical report, Google. Lukasz Kaiser, Aidan N Gomez, Noam Shazeer, Ashish Vaswani, Niki Parmar, Llion Jones, and Jakob Uszkoreit. 2017. One model to learn them all. arXiv preprint arXiv:1706.05137 . Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. Neil D Lawrence and John C Platt. 2004. Learning to learn with the informative vector machine. In ICML. Kenton Lee, Luheng He, Mike Lewis, and Luke S. Zettlemoyer. 2017. End-to-end neural coreference resolution. In EMNLP. Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image coattention for visual question answering. In NIPS. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. 
Effective approaches to attentionbased neural machine translation. In ACL. Nikola Mrkˇsi´c, Diarmuid O S´eaghdha, Blaise Thomson, Milica Gaˇsi´c, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2015. Multidomain dialog state tracking using recurrent neural networks. In ACL. Nikola Mrkˇsi´c, Diarmuid O S´eaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In ACL. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In ICLR. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP. Julien Perez and Fei Liu. 2017. Dialog state tracking, a machine reading approach using memory network. In EACL. Abigail See, Peter Liu, and Christopher Manning. 2017. Get to the point: Summarization with pointergenerator networks. In ACL. Matthias Seeger, Yee-Whye Teh, and Michael Jordan. 2005. Semiparametric latent factor models. In AISTATS. Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In ICLR. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of machine learning research 15(1). Blaise Thomson and Steve Young. 2010. Bayesian update of dialogue state: A pomdp framework for spoken dialogue systems. Computer Speech & Language 24(4). Sebastian Thrun. 1996. Is learning the n-th thing any easier than learning the first? In NIPS. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. 1467 Zhuoran Wang and Oliver Lemon. 2013. A simple and generic belief tracking mechanism for the dialog state tracking challenge: On the believability of observed information. In SIGDIAL. Tsung-Hsien Wen, David Vandyke, Nikola Mrkˇsi´c, Milica Gaˇsi´c, Lina M. Rojas Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A networkbased end-to-end trainable task-oriented dialogue system. In EACL. John Wieting, Mohit Bansal, Kevin Gimpel, Karen Livescu, and Dan Roth. 2015. From paraphrase database to compositional paraphrase model and back. In ACL. Jason D Williams. 2014. Web-style ranking and slu combination for dialog state tracking. In SIGDIAL. Jason D Williams, Antoine Raux, Deepak Ramachandran, and Alan Black. 2013. The dialog state tracking challenge. In SIGDIAL. Jason D Williams and Steve Young. 2007. Partially observable markov decision processes for spoken dialog systems. Computer Speech and Language 21. Caiming Xiong, Victor Zhong, and Richard Socher. 2017. Dynamic coattention networks for question answering. In ICLR. Kai Yu, Volker Tresp, and Anton Schwaighofer. 2005. Learning gaussian processes from multiple tasks. In ICML. Lukas Zilka and Filip Jurcicek. 2015. Incremental lstm-based dialog state tracker. In Automatic Speech Recognition and Understanding Workshop (ASRU).
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1468–1478 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1468 Mem2Seq: Effectively Incorporating Knowledge Bases into End-to-End Task-Oriented Dialog Systems Andrea Madotto∗, Chien-Sheng Wu∗, Pascale Fung Human Language Technology Center Center for Artificial Intelligence Research (CAiRE) Department of Electronic and Computer Engineering The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong [eeandreamad,cwuak,pascale]@ust.hk Abstract End-to-end task-oriented dialog systems usually suffer from the challenge of incorporating knowledge bases. In this paper, we propose a novel yet simple end-toend differentiable model called memoryto-sequence (Mem2Seq) to address this issue. Mem2Seq is the first neural generative model that combines the multihop attention over memories with the idea of pointer network. We empirically show how Mem2Seq controls each generation step, and how its multi-hop attention mechanism helps in learning correlations between memories. In addition, our model is quite general without complicated taskspecific designs. As a result, we show that Mem2Seq can be trained faster and attain the state-of-the-art performance on three different task-oriented dialog datasets. 1 Introduction Task-oriented dialog systems help users to achieve specific goals with natural language such as restaurant reservation and schedule arrangement. Traditionally, they have been built with several pipelined modules: language understanding, dialog management, knowledge query, and language generation (Williams and Young, 2007; Hori et al., 2009; Lee et al., 2009; Levin et al., 2000; Young et al., 2013). Moreover, the ability to query external Knowledge Bases (KBs) is essential in taskoriented dialog systems, since the responses are guided not only by the dialog history but also by the query results (e.g. Table 1). However, despite the stability of such pipelined systems via combining domain-specific knowledge and slot-filling ∗These two authors contributed equally. Point of interest (poi) Distance Traffic info Poi type Address The Westin 5 miles moderate traffic rest stop 329 El Camino Real Round Table 4 miles no traffic pizza restaurant 113 Anton Ct Mandarin Roots 5 miles no traffic chinese restaurant 271 Springer Street Palo Alto Cafe 4 miles moderate traffic coffee or tea place 436 Alger Dr Dominos 6 miles heavy traffic pizza restaurant 776 Arastradero Rd Stanford Express Care 6 miles no traffic hospital 214 El Camino Real Hotel Keen 2 miles heavy traffic rest stop 578 Arbol Dr 1th Turn DRIVER Where can I get tea? Seq2Seq I have a away from away would you like the address +Attn I have a listing for a place that serves tea that is 5 miles away Ptr-Unk There is a away you like would you like more info Mem2Seq The nearest tea shop is Palo Alto Cafe located 4 miles away, would you like directions there? GOLD Palo Alto Cafe is 4 miles away and serves coffee and tea. Do you want the address? 2th Turn DRIVER Yes. Seq2Seq The is at at +Attn The address is 329 El Camino Real and it’s 3 miles away and there is no traffic Ptr-Unk Palo Alto Cafe is 4 miles away and PAD is no traffic Mem2Seq Palo Alto Cafe is 4 miles away at 436 Alger Drive GOLD Palo Alto is located at 436 Alger Dr. Table 1: Example of generated responses for the In-Car Assistant on the navigation domain. 
techniques, modeling the dependencies between modules is complex and the KB interpretation requires human effort. Recently, end-to-end approaches for dialog modeling, which use recurrent neural networks (RNN) encoder-decoder models, have shown promising results (Serban et al., 2016; Wen et al., 2017; Zhao et al., 2017). Since they can directly map plain text dialog history to the output responses, and the dialog states are latent, there is no need for hand-crafted state labels. Moreover, attention-based copy mechanism (Gulcehre et al., 2016; Eric and Manning, 2017) have been recently introduced to copy words directly from the input sources to the output responses. Using such mechanism, even when unknown tokens appear in the dialog history, the models are still able to produce correct and relevant entities. However, although the above mentioned approaches were successful, they still suffer from two main problems: 1) They struggle to effectively incorporate external KB information into the RNN hidden states (Sukhbaatar et al., 2015), 1469 Figure 1: The proposed Mem2Seq architecture for task-oriented dialog systems. (a) Memory encoder with 3 hops; (b) Memory decoder over 2 step generation. since RNNs are known to be unstable over long sequences. 2) Processing long sequences is very time-consuming, especially when using attention mechanisms. On the other hand, end-to-end memory networks (MemNNs) are recurrent attention models over a possibly large external memory (Sukhbaatar et al., 2015). They write external memories into several embedding matrices, and use query vectors to read memories repeatedly. This approach can memorize external KB information and rapidly encode long dialog history. Moreover, the multi-hop mechanism of MemNN has empirically shown to be essential in achieving high performance on reasoning tasks (Bordes and Weston, 2017). Nevertheless, MemNN simply chooses its responses from a predefined candidate pool rather than generating word-by-word. In addition, the memory queries need explicit design rather than being learned, and the copy mechanism is absent. To address these problems, we present a novel architecture that we call Memory-to-Sequence (Mem2Seq) to learn task-oriented dialogs in an end-to-end manner. In short, our model augments the existing MemNN framework with a sequential generative architecture, using global multihop attention mechanisms to copy words directly from dialog history or KBs. We summarize our main contributions as such: 1) Mem2Seq is the first model to combine multi-hop attention mechanisms with the idea of pointer networks, which allows us to effectively incorporate KB information. 2) Mem2Seq learns how to generate dynamic queries to control the memory access. In addition, we visualize and interpret the model dynamics among hops for both the memory controller and the attention. 3) Mem2Seq can be trained faster and achieve state-of-the-art results in several task-oriented dialog datasets. 2 Model Description Mem2Seq 1 is composed of two components: the MemNN encoder, and the memory decoder as shown in Figure 1. The MemNN encoder creates a vector representation of the dialog history. Then the memory decoder reads and copies the memory to generate a response. We define all the words in the dialog history as a sequence of tokens X = {x1, . . . , xn, $}, where $ is a special charter used as a sentinel, and the KB tuples as B = {b1, . . . , bl}. We further define U = [B; X] as the concatenation of the two sets X and B, Y = {y1, . . . 
, ym} as the set of words in the expected system response, and PTR = {ptr_1, . . . , ptr_m} as the pointer index set:

ptr_i = max(z) if ∃z such that y_i = u_z, and ptr_i = n + l + 1 otherwise, (1)

where u_z ∈ U is an element of the input sequence and n + l + 1 is the sentinel position index.

2.1 Memory Encoder
Mem2Seq uses a standard MemNN with adjacent weighted tying (Sukhbaatar et al., 2015) as an encoder. The input of the encoder is the word-level information in U. The memories of the MemNN are represented by a set of trainable embedding matrices C = {C^1, . . . , C^{K+1}}, where each C^k maps tokens to vectors, and a query vector q^k is used as a reading head. The model loops over K hops and computes the attention weights at hop k for each memory i using:

p^k_i = Softmax((q^k)^T C^k_i), (2)

where C^k_i = C^k(x_i) is the memory content in position i, and Softmax(z_i) = e^{z_i} / Σ_j e^{z_j}. Here, p^k is a soft memory selector that decides the memory relevance with respect to the query vector q^k. Then, the model reads out the memory o^k by the weighted sum over C^{k+1} [2],

o^k = Σ_i p^k_i C^{k+1}_i. (3)

The query vector is then updated for the next hop using q^{k+1} = q^k + o^k. The result of the encoding step is the memory vector o^K, which becomes the input to the decoding step.

[1] The code is available at https://github.com/HLTCHKUST/Mem2Seq
[2] Here C^{k+1} is used since we apply adjacent weighted tying.

2.2 Memory Decoder
The decoder uses an RNN and a MemNN. The MemNN is loaded with both X and B, since we use both the dialog history and the KB information to generate a proper system response. A Gated Recurrent Unit (GRU) (Chung et al., 2014) is used as a dynamic query generator for the MemNN. At each decoding step t, the GRU takes the previously generated word and the previous query as input, and produces the new query vector. Formally:

h_t = GRU(C^1(ŷ_{t−1}), h_{t−1}), (4)

where h_0 is the encoder vector o^K. The query h_t is then passed to the MemNN, which produces the token. At each time step, two distributions are generated: one over all the words in the vocabulary (Pvocab), and one over the memory contents (Pptr), i.e., the dialog history and KB information. The first, Pvocab, is generated by concatenating the first-hop attention read-out and the current query vector:

Pvocab(ŷ_t) = Softmax(W_1 [h_t; o^1]), (5)

where W_1 is a trainable parameter. On the other hand, Pptr is generated using the attention weights at the last MemNN hop of the decoder: Pptr = p^K_t. Our decoder generates tokens by pointing to the input words in the memory, which is a mechanism similar to the attention used in pointer networks (Vinyals et al., 2015). We designed the architecture in this way because we expect the attention weights in the first and the last hop to show a "looser" and a "sharper" distribution, respectively. To elaborate, the first hop focuses more on retrieving memory information, while the last one tends to choose the exact token, leveraging the pointer supervision. Hence, during training all the parameters are jointly learned by minimizing the sum of two standard cross-entropy losses: one between Pvocab(ŷ_t) and y_t ∈ Y for the vocabulary distribution, and one between Pptr(ŷ_t) and ptr_t ∈ PTR for the memory distribution.

2.2.1 Sentinel
If the expected word does not appear in the memories, then Pptr is trained to produce the sentinel token $, as shown in Equation 1. Once the sentinel is chosen, the model generates the token from Pvocab; otherwise, it takes the memory content using the Pptr distribution.
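The encoder and decoder defined above can be condensed into a short PyTorch-style sketch. This is a simplified, unbatched reconstruction under our own assumptions (greedy decoding, an externally supplied initial query q0, and the sentinel $ stored in the last memory slot), not the released implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Mem2SeqSketch(nn.Module):
    """Simplified, unbatched Mem2Seq: K-hop memory reads plus one decoding step."""

    def __init__(self, vocab_size, d_emb, hops=3):
        super().__init__()
        self.hops = hops
        # C^1 ... C^{K+1}: adjacent weighted tying needs K+1 embedding matrices.
        self.C = nn.ModuleList([nn.Embedding(vocab_size, d_emb) for _ in range(hops + 1)])
        self.gru = nn.GRUCell(d_emb, d_emb)          # dynamic query generator (Eq. 4)
        self.W1 = nn.Linear(2 * d_emb, vocab_size)   # projection for P_vocab (Eq. 5)

    def memory_read(self, mem_ids, q):
        """Loop over K hops (Eqs. 2-3); return updated query, last read-out, attentions."""
        attn, o = [], None
        for k in range(self.hops):
            Ck, Ck1 = self.C[k](mem_ids), self.C[k + 1](mem_ids)
            p = F.softmax(Ck @ q, dim=0)             # Eq. 2: soft memory selector
            o = p @ Ck1                              # Eq. 3: read-out with adjacent tying
            q = q + o                                # query update for the next hop
            attn.append(p)
        return q, o, attn

    def encode(self, mem_ids, q0):
        _, oK, _ = self.memory_read(mem_ids, q0)     # q0: initial query (assumed given)
        return oK                                    # becomes h_0 of the decoder

    def decode_step(self, mem_ids, prev_word_id, h_prev):
        emb = self.C[0](prev_word_id).view(1, -1)               # previous word embedding
        h = self.gru(emb, h_prev.view(1, -1)).view(-1)          # Eq. 4, batch of one
        _, _, attn = self.memory_read(mem_ids, h)
        o1 = attn[0] @ self.C[1](mem_ids)                       # first-hop read-out
        p_vocab = F.softmax(self.W1(torch.cat([h, o1])), dim=0) # Eq. 5
        p_ptr = attn[-1]                                        # pointer distribution
        # Sentinel gate: we assume the sentinel $ occupies the last memory slot
        # (index n+l+1); if it is chosen, generate from the vocabulary instead of copying.
        top = p_ptr.argmax()
        next_id = p_vocab.argmax() if top.item() == mem_ids.size(0) - 1 else mem_ids[top]
        return next_id, h, p_vocab, p_ptr
```

A full implementation would additionally batch the computation, mask padding, and train p_vocab and p_ptr with the summed cross-entropy losses described above.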
Basically, the sentinel token is used as a hard gate to control which distribution to use at each time step. A similar approach has been used in (Merity et al., 2017) to control a soft gate in a language modeling task. With this method, the model does not need to learn a gating function separately as in Gulcehre et al. (2016), and is not constrained by a soft gate function as in See et al. (2017). 2.3 Memory Content We store word-level content X in the memory module. Similar to Bordes and Weston (2017), we add temporal information and speaker information in each token of X to capture the sequential dependencies. For example, “hello t1 $u” means “hello” at time step 1 spoken by a user. On the other hand, to store B, the KB information, we follow the works of Miller et al. (2016); Eric et al. (2017) that use a (subject, relation, object) representation. For example, we represent the information of The Westin in Table 1: (The Westin, Distance, 5 miles). Thus, we sum word embeddings of the subject, relation, and object to obtain each KB memory representation. During decoding stage, the object part is used as the generated word for Pptr. For instance, when the KB tuple (The Westin, Distance, 5 miles) is pointed, our model copies “5 miles” as an output word. Notice that only a specific section of the KB, relevant to a specific dialog, is loaded into the memory. 1471 Task 1 2 3 4 5 DSTC2 In-Car Avg. User turns 4 6.5 6.4 3.5 12.9 6.7 2.6 Avg. Sys turns 6 9.5 9.9 3.5 18.4 9.3 2.6 Avg. KB results 0 0 24 7 23.7 39.5 66.1 Avg. Sys words 6.3 6.2 7.2 5.7 6.5 10.2 8.6 Max. Sys words 9 9 9 8 9 29 87 Pointer Ratio .23 .53 .46 .19 .60 .46 .42 Vocabulary 3747 1229 1601 Train dialogs 1000 1618 2425 Val dialogs 1000 500 302 Test dialogs 1000 + 1000 OOV 1117 304 Table 2: Dataset statistics for 3 different datasets. 3 Experimental Setup 3.1 Dataset We use three public multi-turn task-oriented dialog datasets to evaluate our model: the bAbI dialog (Bordes and Weston, 2017), DSTC2 (Henderson et al., 2014) and In-Car Assistant (Eric et al., 2017). The train/validation/test sets of these three datasets are split in advance by the providers. The dataset statistics are reported in Table 2. The bAbI dialog includes five end-to-end dialog learning tasks in the restaurant domain, which are simulated dialog data. Task 1 to 4 are about API calls, refining API calls, recommending options, and providing additional information, respectively. Task 5 is the union of tasks 1-4. There are two test sets for each task: one follows the same distribution as the training set and the other has out-of-vocabulary (OOV) entity values that does not exist in the training set. We also used dialogs extracted from the Dialog State Tracking Challenge 2 (DSTC2) with the refined version from Bordes and Weston (2017), which ignores the dialog state annotations. The main difference with bAbI dialog is that this dataset is extracted from real human-bot dialogs, which is noisier and harder since the bots made mistakes due to speech recognition errors or misinterpretations. Recently, In-Car Assistant dataset has been released. which is a human-human, multi-domain dialog dataset collected from Amazon Mechanical Turk. It has three distinct domains: calendar scheduling, weather information retrieval, and point-of-interest navigation. This dataset has shorter conversation turns, but the user and system behaviors are more diverse. In addition, the system responses are variant and the KB information is much more complicated. 
Hence, this dataset requires stronger ability to interact with KBs, rather than dialog state tracking. 3.2 Training We trained our model end-to-end using Adam optimizer (Kingma and Ba, 2015), and chose learning rate between [1e−3, 1e−4]. The MemNNs, both encoder and decoder, have hops K = 1, 3, 6 to show the performance difference. We use simple greedy search and without any re-scoring techniques. The embedding size, which is also equivalent to the memory size and the RNN hidden size (i.e., including the baselines), has been selected between [64, 512]. The dropout rate is set between [0.1, 0.4], and we also randomly mask some input words into unknown tokens to simulate OOV situation with the same dropout ratio. In all the datasets, we tuned the hyper-parameters with gridsearch over the validation set, using as measure to the Per-response Accuracy for bAbI dialog and DSTC2, and BLEU score for the In-Car Assistant. 3.3 Evaluation Metrics Per-response/dialog Accuracy: A generative response is correct only if it is exactly the same as the gold response. A dialog is correct only if every generated responses of the dialog are correct, which can be considered as the task-completion rate. Note that Bordes and Weston (2017) tests their model by selecting the system response from predefined response candidates, that is, their system solves a multi-class classification task. Since Mem2Seq generates each token individually, evaluating with this metric is much more challenging for our model. BLEU: It is a measure commonly used for machine translation systems (Papineni et al., 2002), but it has also been used in evaluating dialog systems (Eric and Manning, 2017; Zhao et al., 2017) and chat-bots (Ritter et al., 2011; Li et al., 2016). Moreover, BLEU score is a relevant measure in task-oriented dialog as there is not a large variance between the generated answers, unlike open domain generation (Liu et al., 2016). Hence, we include BLEU score in our evaluation (i.e. using Moses multi-bleu.perl script). Entity F1: We micro-average over the entire set of system responses and compare the entities in plain text. The entities in each gold system response are selected by a predefined entity list. This metric evaluates the ability to generate relevant entities from the provided KBs and to capture the semantics of the dialog (Eric and Manning, 2017; Eric et al., 2017). 
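As a rough illustration of the micro-averaged entity F1 computation, a minimal sketch is given below; the helper name and the simple substring matching are our own simplifications, and the actual evaluation depends on the dataset's predefined entity list and normalization rules.

```python
def micro_entity_f1(gold_responses, pred_responses, entity_list):
    """Micro-averaged entity F1 over an entire set of system responses.

    gold_responses, pred_responses: lists of response strings.
    entity_list: predefined collection of entity strings used to pick out entities.
    """
    tp = fp = fn = 0
    for gold, pred in zip(gold_responses, pred_responses):
        gold_ents = {e for e in entity_list if e in gold}   # entities in the gold response
        pred_ents = {e for e in entity_list if e in pred}   # entities in the generated response
        tp += len(gold_ents & pred_ents)
        fp += len(pred_ents - gold_ents)
        fn += len(gold_ents - pred_ents)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```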
Note that the original In-Car Assis1472 Task QRN MemNN GMemNN Seq2Seq Seq2Seq+Attn Ptr-Unk Mem2Seq H1 Mem2Seq H3 Mem2Seq H6 T1 99.4 (-) 99.9 (99.6) 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) T2 99.5 (-) 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) T3 74.8 (-) 74.9 (2.0) 74.9 (0) 74.8 (0) 74.8 (0) 85.1 (19.0) 87.0 (25.2) 94.5 (59.6) 94.7 (62.1) T4 57.2 (-) 59.5 (3.0) 57.2 (0) 57.2 (0) 57.2 (0) 100 (100) 97.6 (91.7) 100 (100) 100 (100) T5 99.6 (-) 96.1 (49.4) 96.3 (52.5) 98.8 (81.5) 98.4 (87.3) 99.4 (91.5) 96.1 (45.3) 98.2 (72.9) 97.9 (69.6) T1-OOV 83.1 (-) 72.3 (0) 82.4 (0) 79.9 (0) 81.7 (0) 92.5 (54.7) 93.4 (60.4) 91.3 (52.0) 94.0 (62.2) T2-OOV 78.9 (-) 78.9 (0) 78.9 (0) 78.9 (0) 78.9 (0) 83.2 (0) 81.7 (1.2) 84.7 (7.3) 86.5 (12.4) T3-OOV 75.2 (-) 74.4 (0) 75.3 (0) 74.3 (0) 75.3 (0) 82.9 (13.4) 86.6 (26.2) 93.2 (53.3) 90.3 (38.7) T4-OOV 56.9 (-) 57.6 (0) 57.0 (0) 57.0 (0) 57.0 (0) 100 (100) 97.3 (90.6) 100 (100) 100 (100) T5-OOV 67.8 (-) 65.5 (0) 66.7 (0) 67.4 (0) 65.7 (0) 73.6 (0) 67.6 (0) 78.1 (0.4) 84.5 (2.3) Table 3: Per-response and per-dialog (in the parentheses) accuracy on bAbI dialogs. Mem2Seq achieves the highest average per-response accuracy and has the least out-of-vocabulary performance drop. Ent. F1 BLEU PerResp. PerDial. Rule-Based 33.3 QRN 43.8 MemNN 41.1 0.0 GMemNN 47.4 1.4 Seq2Seq 69.7 55.0 46.4 1.5 +Attn 67.1 56.6 46.0 1.4 +Copy 71.6 55.4 47.3 1.3 Mem2Seq H1 72.9 53.7 41.7 0.0 Mem2Seq H3 75.3 55.3 45.0 0.5 Mem2Seq H6 72.8 53.6 42.8 0.7 Table 4: Evaluation on DSTC2. Seq2Seq (+attn and +copy) is reported from Eric and Manning (2017). BLEU Ent. F1 Sch. F1 Wea. F1 Nav. F1 Human* 13.5 60.7 64.3 61.6 55.2 Rule-Based* 6.6 43.8 61.3 39.5 40.4 KV Retrieval Net* 13.2 48.0 62.9 47.0 41.3 Seq2Seq 8.4 10.3 09.7 14.1 07.0 +Attn 9.3 19.9 23.4 25.6 10.8 Ptr-Unk 8.3 22.7 26.9 26.7 14.9 Mem2Seq H1 11.6 32.4 39.8 33.6 24.6 Mem2Seq H3 12.6 33.4 49.3 32.8 20.0 Mem2Seq H6 9.9 23.6 34.3 33.0 4.4 Table 5: Evaluation on In-Car Assistant. Human, rulebased and KV Retrieval Net evaluation (with *) are reported from (Eric et al., 2017), which are not directly comparable. Mem2Seq achieves highest BLEU and entity F1 score over baselines. tant F1 scores reported in Eric et al. (2017) uses the entities in their canonicalized forms, which are not calculated based on real entity value. Since the datasets are not designed for slot-tracking, we report entity F1 rather than the slot-tracking accuracy as in (Wen et al., 2017; Zhao et al., 2017). 4 Experimental Results We mainly compare Mem2Seq with hop 1,3,6 with several existing models: query-reduction networks (QRN, Seo et al. (2017)), end-toend memory networks (MemNN, Sukhbaatar et al. (2015)), and gated end-to-end memory networks (GMemNN, Liu and Perez (2017)). We also implemented the following baseline models: standard sequence-to-sequence (Seq2Seq) models with and without attention (Luong et al., 2015), and pointer to unknown (Ptr-Unk, Gulcehre et al. (2016)). Note that the results we listed in Table 3 and Table 4 for QRN are different from the original paper, because based on their released code, 3 we discovered that the per-response accuracy was not correctly computed. bAbI Dialog: In Table 3, we follow Bordes 3We simply modified the evaluation part and reported the results. (https://github.com/uwnlp/qrn) and Weston (2017) to compare the performance based on per-response and per-dialog accuracy. 
Mem2Seq with 6 hops can achieve per-response 97.9% and per-dialog 69.6% accuracy in T5, and 84.5% and 2.3% for T5-OOV, which surpass existing methods by far. One can find that in T3 especially, which is the task to recommend restaurant based on their ranks, our model can achieve promising results due to the memory pointer. In terms of per-response accuracy, this indicates that our model can generalize well with few performance loss for test OOV data, while others have around 15-20% drop. The performance gain in OOV data is also mainly attributed to the use of copy mechanism. In addition, the effectiveness of hops is demonstrated in tasks 3-5, since they require reasoning ability over the KB information. Note that QRN, MemNN and GMemNN viewed bAbI dialog tasks as classification problems. Although their tasks are easier compared to our generative methods, Mem2Seq models can still overpass the performance. Finally, one can find that Seq2Seq and Ptr-Unk models are also strong baselines, which further confirms that generative methods can also achieve good performance in taskoriented dialog systems (Eric and Manning, 2017). 1473 DSTC2: In Table 4, the Seq2Seq models from Eric and Manning (2017) and the rule-based from Bordes and Weston (2017) are reported. Mem2Seq has the highest 75.3% entity F1 score and an high of 55.3 BLEU score. This further confirms that Mem2Seq can perform well in retrieving the correct entity, using the multiple hop mechanism without losing language modeling. Here, we do not report the results using match type (Bordes and Weston, 2017) or entity type (Eric and Manning, 2017) feature, since this meta-information are not commonly available and we want to have an evaluation on plain input output couples. One can also find out that, Mem2Seq comparable perresponse accuracy (i.e. 2% margin) among other existing solution. Note that the per-response accuracy for every model is less than 50% since the dataset is quite noisy and it is hard to generate a response that is exactly the same as the gold one. In-Car Assistant: In Table 5, our model can achieve highest 12.6 BLEU score. In addition, Mem2Seq has shown promising results in terms of Entity F1 scores (33.4%), which are, in general, much higher than those of other baselines. Note that the numbers reported from Eric et al. (2017) are not directly comparable to ours as we mention below. The other baselines such as Seq2Seq or PtrUnk especially have worse performances in this dataset since it is very inefficient for RNN methods to encode longer KB information, which is the advantage of Mem2Seq. Furthermore, we observe an interesting phenomenon that humans can easily achieve a high entity F1 score with a low BLEU score. This implies that stronger reasoning ability over entities (hops) is crucial, but the results may not be similar to the golden answer. We believe humans can produce good answers even with a low BLEU score, since there could be different ways to express the same concepts. Therefore, Mem2Seq shows the potential to successfully choose the correct entities. Note that the results of KV Retrieval Net baseline reported in Table 5 come from the original paper (Eric et al., 2017) of In-Car Assistant, where they simplified the task by mapping the expression of entities to a canonical form using named entity recognition (NER) and linking. Hence the evaluation is not directly comparable to our system. 
For example, their model learned to generate responses such as “You have a football game at football time with football party,” instead of generating a sentence such as “You have a football game at 7 pm with John.” Since there could be more than one football party or football time, their model does not learn how to access the KBs, but rather learns the canonicalized language model.

[Figure 2 plots time per epoch (minutes) against maximum input length (# tokens) for Mem2Seq H6, Seq2Seq, Seq2Seq+Attn, and Ptr-Unk.] Figure 2: Training time per-epoch for different tasks (lower is better). The speed difference becomes larger as the maximal input length increases.

Time Per-Epoch: We also compare the training time[4] in Figure 2. The experiments are run with batch size 16, and we report each model with the hyper-parameters that achieved the highest performance. One can observe that the training time is not very different for short input lengths (bAbI dialog tasks 1-4), and the gap becomes larger as the maximal input length increases. Mem2Seq is around 5 times faster on In-Car Assistant and DSTC2 compared to Seq2Seq with attention. This difference in training efficiency is mainly attributed to the fact that Seq2Seq models have sequential input dependencies which limit parallelization. Moreover, it is unavoidable for Seq2Seq models to encode the KBs, whereas Mem2Seq only encodes the dialog history.

[4] Intel(R) Core(TM) i7-3930K CPU @ 3.20GHz, using a GeForce GTX 1080 Ti.

5 Analysis and Discussion

Memory Attention: Analyzing the attention weights has frequently been used to show the memory read-out, since it is an intuitive way to understand the model dynamics. Figure 3 shows the attention vector at the last hop for each generated token. Each column represents the Pptr vector at the corresponding generation step.
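A plot in the style of Figure 3 can be produced directly from the collected per-step pointer distributions; the following matplotlib sketch uses invented variable names (p_ptr_steps, memory_tokens, generated_tokens) and assumes each pointer distribution has been recorded as a 1-D NumPy array during greedy decoding.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_ptr_attention(p_ptr_steps, memory_tokens, generated_tokens, path="attn.png"):
    """Heat map of last-hop pointer attention: one column per generation step."""
    attn = np.stack(p_ptr_steps, axis=1)   # shape: (num_memory_slots, num_steps)
    fig, ax = plt.subplots(figsize=(0.6 * len(generated_tokens), 0.25 * len(memory_tokens)))
    im = ax.imshow(attn, aspect="auto", cmap="Blues")
    ax.set_xticks(range(len(generated_tokens)))
    ax.set_xticklabels(generated_tokens, rotation=90)
    ax.set_yticks(range(len(memory_tokens)))
    ax.set_yticklabels(memory_tokens)
    ax.set_xlabel("Generation step")
    fig.colorbar(im, ax=ax)
    fig.tight_layout()
    fig.savefig(path)
```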
Our model has a sharp distribution over the memory, which implies that it is able to select the right token from the memory. For example, the KB information “270 altaire walk” was retrieved at the sixth step, which is the address of “civic center garage”. On the other hand, if the sentinel is triggered, then the generated word comes from the vocabulary distribution Pvocab. For instance, the third generation step triggered the sentinel, and “is” is generated from the vocabulary since the word is not present in the dialog history.

[Figure 3 shows a heat map with one column per generation step and one row per memory entry (dialog-history tokens, KB triples such as “civic center garage poi type parking garage”, and the sentinel). COR: “the closest parking garage is civic center garage located 4 miles away at 270 altaire walk”; GEN: “the closest parking garage is civic center garage at 270 altaire walk 4 miles away through the directions”.] Figure 3: Last hop memory attention visualization from the In-Car dataset. COR and GEN on the top are the correct response and our generated one.

Multiple Hops: Mem2Seq shows how multiple hops improve the model performance in several datasets. Task 3 in the bAbI dialog dataset serves as an example, in which the systems need to recommend restaurants to users based on their ranking, from highest to lowest. Users can reject a recommendation, and the system then has to reason over the next highest restaurant. We found two common patterns between hops among different samples: 1) the first hop is usually used to score all the relevant memories and retrieve information; 2) the last hop tends to focus on a specific token and makes mistakes when the attention is not sharp. Such mistakes can be attributed to a lack of hops for some samples. For more information, we report two figures in the supplementary material.

Figure 4: Principal component analysis of query vectors in hop (a) 1 and (b) 6 for bAbI dialog.

room | time | date | party | event
conference room 100 | 3pm | Thursday | sales team | conference
– | 11am | Tuesday | – | tennis activity
– | 7pm | Wednesday | – | dentist appointment
conference room 100 | 1pm | Tuesday | HR | meeting
– | 7pm | Friday | – | doctor appointment
– | 11am | Tuesday | – | lab appointment
DRIVER: When’s my tennis match?
Seq2Seq: Your lab appointment is on Monday at
+Attn: Your lab appointment is on Tuesday at 11am
Ptr-Unk: Your tennis match is on PAD at 1pm
Mem2Seq H1: Your is very welcome
Mem2Seq H3: Your tennis is on Tuesday at 11am
Mem2Seq H6: Your tennis is Monday at 11am
GOLD: Your tennis match is Tuesday at 11am
Table 6: Example of generated responses for the In-Car Assistant on the scheduling domain.

Query Vectors: In Figure 4, the principal component analysis of the Mem2Seq query vectors is shown for different hops. Each dot is a query vector ht at a decoding time step, together with its corresponding generated word yt. The blue dots are the words generated from Pvocab, which triggered the sentinel, and the orange ones are from Pptr. One can see that in (a) hop 1 there is no clear separation between the two colors, although each of them tends to group together. On the other hand, the separation becomes clearer in (b) hop 6, as each color clusters into several groups such as location, cuisine, and number. Our model tends to retrieve more information in the first hop, and to point into the memories in the last hop.

Examples: Tables 1 and 6 show the responses generated by different models for two test-set samples from the In-Car Assistant dataset. We report examples from this dataset since its answers are more human-like and not as structured and repetitive as those of the other datasets. Seq2Seq generally cannot produce related information, and sometimes fails in language modeling. Using attention helps with this issue, but it still rarely produces the correct entities. For example, Seq2Seq with attention generated 5 miles in Table 1, but the correct distance is 4 miles. In addition, Ptr-Unk often cannot copy the correct token from the input, as shown by “PAD” in Table 1. On the other hand, Mem2Seq is able to produce the correct responses in these two examples. In particular, in the navigation domain shown in Table 1, Mem2Seq produces a different but still correct utterance. We report further examples from all the domains in the supplementary material.

Discussions: Conventional task-oriented dialog systems (Williams and Young, 2007), which are still widely used in commercial systems, require a great deal of human effort in system design and data collection. On the other hand, although end-to-end dialog systems are not perfect yet, they require much less human intervention, especially in dataset construction, as raw conversational text and KB information can be used directly without the need for heavy preprocessing (e.g. NER, dependency parsing). To this extent, Mem2Seq is a simple generative model that is able to incorporate KB information with promising generalization ability. We also discovered that the entity F1 score may be a more comprehensive evaluation metric than per-response accuracy or BLEU score, as humans can normally choose the right entities but give very diversified responses. Indeed, we want to highlight that humans may have a low BLEU score despite their correctness, because there may not be a large n-gram overlap between the given response and the expected one. However, this does not imply that there is no correlation between BLEU score and human evaluation.
In fact, unlike chat-bots and open domain dialogs where BLEU score does not correlate with human evaluation (Liu et al., 2016), in task-oriented dialogs the answers are constrained to particular entities and recurrent patterns. Thus, we believe BLEU score still can be considered as a relevant measure. In future works, several methods could be applied (e.g. Reinforcement Learning (Ranzato et al., 2016), Beam Search (Wiseman and Rush, 2016)) to improve both responses relevance and entity F1 score. However, we preferred to keep our model as simple as possible in order to show that it works well even without advanced training methods. 6 Related Works End-to-end task-oriented dialog systems train a single model directly on text transcripts of dialogs (Wen et al., 2017; Serban et al., 2016; Williams et al., 2017; Zhao et al., 2017; Seo et al., 2017; Serban et al., 2017). Here, RNNs play an important role due to their ability to create a latent representation, avoiding the need for artificial state labels. End-to-End Memory Networks (Bordes and Weston, 2017; Sukhbaatar et al., 2015), and its variants (Liu and Perez, 2017; Wu et al., 2017, 2018) have also shown good results in such tasks. In each of these architectures, the output is produced by generating a sequence of tokens, or by selecting a set of predefined utterances. Sequence-to-sequence (Seq2Seq) models have also been used in task-oriented dialog systems (Zhao et al., 2017). These architectures have better language modeling ability, but they do not work well in KB retrieval. Even with sophisticated attention models (Luong et al., 2015; Bahdanau et al., 2015), Seq2Seq fails to map the correct entities to the generated input. To alleviate this problem, copy augmented Seq2Seq models Eric and Manning (2017), were used. These models outperform utterance selection methods by copying relevant information directly from the KBs. Copy mechanisms has also been used in question answering tasks (Dehghani et al., 2017; He et al., 2017), neural machine translation (Gulcehre et al., 2016; Gu et al., 2016), language modeling (Merity et al., 2017), and summarization (See et al., 2017). Less related to dialog systems, but related to our work, are the memory based decoders and the nonrecurrent generative models: 1) Mem2Seq query generation phase used to access our memories can be seen as the memory controller used in Memory Augmented Neural Networks (MANN) (Graves et al., 2014, 2016). Similarly, memory encoders have been used in neural machine translation (Wang et al., 2016), and meta-learning application (Kaiser et al., 2017). However, Mem2Seq differs from these models as such: it uses multi1476 hop attention in combination with copy mechanism, whereas other models use a single matrix representation. 2) non-recurrent generative models (Vaswani et al., 2017), which only rely on selfattention mechanism, are related to the multi-hop attention mechanism used in MemNN. 7 Conclusion In this work, we present an end-to-end trainable Memory-to-Sequence model for task-oriented dialog systems. Mem2Seq combines the multi-hop attention mechanism in end-to-end memory networks with the idea of pointer networks to incorporate external information. We empirically show our model’s ability to produce relevant answers using both the external KB information and the predefined vocabulary, and visualize how the multihop attention mechanisms help in learning correlations between memories. 
Mem2Seq is fast, general, and able to achieve state-of-the-art results in three different datasets. Acknowledgments This work is partially funded by ITS/319/16FP of Innovation Technology Commission, HKUST 16214415 & 16248016 of Hong Kong Research Grants Council, and RDC 1718050-0 of EMOS.AI. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. International Conference on Learning Representations. Antoine Bordes and Jason Weston. 2017. Learning end-to-end goal-oriented dialog. International Conference on Learning Representations, abs/1605.07683. Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. NIPS Deep Learning and Representation Learning Workshop. Mostafa Dehghani, Sascha Rothe, Enrique Alfonseca, and Pascal Fleury. 2017. Learning to attend, copy, and generate for session-based query suggestion. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM ’17, pages 1747–1756, New York, NY, USA. ACM. Mihail Eric, Lakshmi Krishnan, Francois Charette, and Christopher D. Manning. 2017. Key-value retrieval networks for task-oriented dialogue. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 37–49. Association for Computational Linguistics. Mihail Eric and Christopher Manning. 2017. A copyaugmented sequence-to-sequence architecture gives good performance on task-oriented dialogue. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 468–473, Valencia, Spain. Association for Computational Linguistics. Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. CoRR. Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka GrabskaBarwi´nska, Sergio G´omez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. 2016. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471–476. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631–1640, Berlin, Germany. Association for Computational Linguistics. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 140–149, Berlin, Germany. Association for Computational Linguistics. Shizhu He, Cao Liu, Kang Liu, and Jun Zhao. 2017. Generating natural answers by incorporating copying and retrieving mechanisms in sequence-tosequence learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 199– 208, Vancouver, Canada. Association for Computational Linguistics. Matthew Henderson, Blaise Thomson, and Jason D Williams. 2014. The second dialog state tracking challenge. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 263–272. Chiori Hori, Kiyonori Ohtake, Teruhisa Misu, Hideki Kashioka, and Satoshi Nakamura. 2009. Statistical dialog management applied to wfst-based dialog systems. 
In IEEE International Conference on Acoustics, Speech and Signal Processing, 2009. ICASSP 2009., pages 4793–4796. IEEE. Lukasz Kaiser, Ofir Nachum, Aurko Roy, and Samy Bengio. 2017. Learning to remember rare events. 1477 International Conference on Learning Representations. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. International Conference on Learning Representations. Cheongjae Lee, Sangkeun Jung, Seokhwan Kim, and Gary Geunbae Lee. 2009. Example-based dialog modeling for practical multi-domain dialog system. Speech Communication, 51(5):466–484. Esther Levin, Roberto Pieraccini, and Wieland Eckert. 2000. A stochastic model of human-machine interaction for learning dialog strategies. IEEE Transactions on speech and audio processing, 8(1):11–23. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119, San Diego, California. Association for Computational Linguistics. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122–2132, Austin, Texas. Association for Computational Linguistics. Fei Liu and Julien Perez. 2017. Gated end-to-end memory networks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1–10, Valencia, Spain. Association for Computational Linguistics. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, Lisbon, Portugal. Association for Computational Linguistics. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture models. International Conference on Learning Representations. Alexander Miller, Adam Fisch, Jesse Dodge, AmirHossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1400–1409, Austin, Texas. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. International Conference on Learning Representations. Alan Ritter, Colin Cherry, and William B. Dolan. 2011. Data-driven response generation in social media. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 583–593, Edinburgh, Scotland, UK. Association for Computational Linguistics. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083, Vancouver, Canada. Association for Computational Linguistics. Minjoon Seo, Sewon Min, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Query-reduction networks for question answering. International Conference on Learning Representations. Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In AAAI, pages 3776–3784. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In AAAI, pages 3295–3301. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems, pages 2440–2448. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000–6010. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2692–2700. Curran Associates, Inc. Mingxuan Wang, Zhengdong Lu, Hang Li, and Qun Liu. 2016. Memory-enhanced decoder for neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 278–286, Austin, Texas. Association for Computational Linguistics. 1478 Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina Maria Rojas-Barahona, Pei hao Su, Stefan Ultes, David Vandyke, and Steve J. Young. 2017. A network-based end-to-end trainable task-oriented dialogue system. In EACL. Jason D Williams, Kavosh Asadi, and Geoffrey Zweig. 2017. Hybrid code networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 665– 677, Vancouver, Canada. Association for Computational Linguistics. Jason D Williams and Steve Young. 2007. Partially observable markov decision processes for spoken dialog systems. Computer Speech & Language, 21(2):393–422. Sam Wiseman and Alexander M. Rush. 2016. Sequence-to-sequence learning as beam-search optimization. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1296–1306, Austin, Texas. Association for Computational Linguistics. Chien-Sheng Wu, Andrea Madotto, Genta Winata, and Pascale Fung. 2017. End-to-end recurrent entity network for entity-value independent goal-oriented dialog learning. In Dialog System Technology Challenges Workshop, DSTC6. Chien-Sheng Wu, Andrea Madotto, Genta Winata, and Pascale Fung. 2018. End-to-end dynamic query memory network for entity-value independent taskoriented dialog. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Steve Young, Milica Gaˇsi´c, Blaise Thomson, and Jason D Williams. 2013. Pomdp-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160–1179. Tiancheng Zhao, Allen Lu, Kyusong Lee, and Maxine Eskenazi. 2017. Generative encoder-decoder models for task-oriented spoken dialog systems with chatting capability. 
In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 27–36. Association for Computational Linguistics.
2018
136
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1479–1488 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1479 Tailored Sequence to Sequence Models to Different Conversation Scenarios Hainan Zhang, Yanyan Lan, Jiafeng Guo, Jun Xu and Xueqi Cheng University of Chinese Academy of Sciences, Beijing, China CAS Key Lab of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences [email protected], {lanyanyan, guojiafeng, junxu, cxq}@ict.ac.cn Abstract Sequence to sequence (Seq2Seq) models have been widely used for response generation in the area of conversation. However, the requirements for different conversation scenarios are distinct. For example, customer service requires the generated responses to be specific and accurate, while chatbot prefers diverse responses so as to attract different users. The current Seq2Seq model fails to meet these diverse requirements, by using a general average likelihood as the optimization criteria. As a result, it usually generates safe and commonplace responses, such as ‘I don’t know’. In this paper, we propose two tailored optimization criteria for Seq2Seq to different conversation scenarios, i.e., the maximum generated likelihood for specific-requirement scenario, and the conditional value-at-risk for diverse-requirement scenario. Experimental results on the Ubuntu dialogue corpus (Ubuntu service scenario) and Chinese Weibo dataset (social chatbot scenario) show that our proposed models not only satisfies diverse requirements for different scenarios, but also yields better performances against traditional Seq2Seq models in terms of both metric-based and human evaluations. 1 Introduction This paper focuses on the problem of the singleturn dialogue generation, which is critical in many natural language processing applications such as customer services, intelligent assistant and chatbot. Recently, sequence to sequence (Seq2Seq) models (Sutskever et al., 2014) have been widely used in this area. In these Seq2Seq models, a recurrent neural network (RNN) based encoder is first utilized to encode the input post to a vector, and another RNN decoder is then used to automatically generate the response word by word. The parameters of the encoder and decoder are learned by maximizing the averaged likelihood of the training data. It is clear that the requirements for generated responses are distinct in different dialogue scenarios. For instance, in the scenario of customer service or mobile assistant, users mainly expect the system to help them solve a problem. Therefore, the responses should be specific and accurate to provide useful assistance. For example, if the user asks a question ‘How can I get the AMD driver running on Ubuntu 12.10?’, the system is expected to reply ‘The fglrx driver is in the repo. But it may depend on your exact chipset.’, rather than ‘I do not know about the package.’, even though the latter can also be viewed as relevant for the proposed question. We called this kind of scenario as specific-requirement scenario. While in other scenarios such as chatbot, users are interacting with the dialogue system for fun. Therefore, the generated responses should be diverse to attract different users. Take the post ‘Can you recommend me a tourist city?’ as an example. If the user prefers the magnificent mountains and rivers, it is better to reply ‘You may like the Bernina Express to the Alps’. 
While if the user loves literature, it is better to reply ‘Paris is a beautiful city with full of the literary atmosphere’. This kind of scenario is called diverse-requirement scenario. However, the current generation model Seq2Seq (Sutskever et al., 2014) usually tend to generate common responses, such as ‘I don’t know’ and ‘What does this mean?’ (Li et al., 2016a,b; Zhou et al., 2017), which fails to meet diverse requirements for different conversation 1480 scenarios. Intrinsically, conversation is a typical one-to-many application, i.e., multiple responses with different semantic meanings are correspondent to a same post. That means there are various post-response matching patterns in the training data. Seq2Seq optimizes an averaged likelihood, so it can only capture the common matching patterns, leading to common responses. The purpose of this paper is to propose two tailored optimization criteria for Seq2Seq models to accommodate different conversation scenarios, i.e. specific-requirement scenario and diverserequirement scenario. The key idea is to how capture the required post-response matching patterns. For the specific-requirement scenario, we define the maximum generated likelihood as the objective function. With this kind of criterion, we just require one ground-truth response to be close to the given post, instead of requiring the average of multiple ground-truth responses to be close to the post. Therefore, the most significant post-response matching pattern will be learned from the data, to facilitate generating a specific response. While for the diverse-requirement scenario, the conditional value-at-risk (CVaR) is used as the objective function. CVaR is a risk-sensitive function widely used in finances (Rockafellar and Uryasev, 2002; Alexander et al., 2006; Chen et al., 2015), defined to assessing the likelihood (at a specific confidence level) that a specific loss will exceed the value at risk. With CVaR as the objective function, the worst 1-α responses are required to be close to the post, therefore various post-response patterns can be captured, and the learned model has the ability to generate diverse responses. We use public data to evaluate our proposed models. For the specific-requirement scenario, the experiments on public Ubuntu dialogue corpus(Ubuntu service) show that optimizing the maximum generated likelihood produces more specific and accurate responses than traditional Seq2Seq models. While for the diverserequirement scenario, the experiments on the public Chinese Weibo dataset (social chatbot) show that optimizing CVaR produces diverse responses, as compared with Seq2Seq and the variants. 2 Related Work The basic neural-based Seq2Seq framework for dialogue generation is inspired by the studies of statistical machine translation. Sutskever et al. (Sutskever et al., 2014) proposed the original Seq2Seq framework(Seq2Seq), which used a multilayered Long Short-Term Memory(LSTM) to map the input sequence to a fixed dimension vector and then used another LSTM to decode the target sequence from the vector. Then Cho et al. (Cho et al., 2014) followed the above architecture, and proposed to feed the last hidden state of encoder to every cell of decoder(RNN-encdec), which enhanced the influence of contexts in generating each word of the targets. To further alleviate the long dependency problem, Bahdanau et al. (Bahdanau et al., 2015) introduced the attention mechanism into the neural network and achieved encouraging performances(Seq2Seq-att). 
Many studies (Shang et al., 2015; Vinyals and Le, 2015) directly applied the above neural SMT models to the task of dialogue generation, and gained some promising performances. Although the current Seq2Seq model is capable to generate fluent responses, these responses are usually general. Therefore, many researchers focused on how to improve the generation quality and specification. Li et al. (Li et al., 2016a) proposed a mutual information model(MMI) to tackle this problem. However, it is not a unified training model, instead it still trained original Seq2Seq model, and used the Maximum Mutual Information criterion only for testing to rerank the primary top-n list. Mou et al. (Mou et al., 2017) proposed a forward-backward keyword method which used a pointwise mutual information to predict a noun as a keyword and then used two Seq2Seq models to generate the forward sentence and the backward sentence. Xing et al. (Xing et al., 2017) proposed a joint attention mechanism model, which modified the generation probability by adding the topic keywords likelihood to the generated maximum likelihood with extra corpus. The recent works such as seqGAN (Yu et al., 2017) and Adver-REGS (Li et al., 2017) try to use Generative Adversarial Networks(GAN) for generation, where the discriminator scores are used as rewards for reinforcement learning. For the study of generating diverse responses, Vijayakumar et al. (Vijayakumar et al., 2016) introduced a diverse beam search which decoded a list of diverse outputs by optimizing for a diversity-augmented objective, which can control for the exploration and exploitation of the search space. Zhou (Zhou et al., 2017) proposed to apply 1481 a hidden state as a generating style(Mechanism). They make an assumption that some latent responding mechanisms can generate different responses, and model these mechanisms as latent embedding. With these latent embedding in the mid of Seq2Seq, the mechanism-aware Seq2Seq can generate different mechanism responses. However, most of these models are using an averaged approach for optimization, similar to that in Seq2Seq. This paper proposes two new criteria for different conversation scenarios. For the specific-requirement scenario, the maximum generated likelihood is used as the objective function. While for the diverse-requirement scenario, CVaR is used for optimization. 3 Sequence to Sequence Models We first introduce the typical LSTM-based Seq2Seq framework (Bahdanau et al., 2015) used in dialogue generation. Given a post X = {x1, . . . , xM} as the input, a standard LSTM first maps the input sequence to a fixed-dimension vector hM as follows. ik = σ(Wi[hk−1, wk]), fk = σ(Wf[hk−1, wk]), ok = σ(Wo[hk−1, wk]), lk = tanh(Wl[hk−1, wk]), ck = fkck−1 + iklk, hi = ok tanh(ck), (1) where ik, fk and ok are the input gate, the memory gate, and the output gate, respectively. wk is the word embedding for xk, and hk stands for the vector computed by LSTM at time k by combining wk and hk−1. ck is the cell at time k, and σ denotes the sigmoid function. Wi, Wf, Wo and Wl are parameters. Then another LSTM is used as the decoder to map the vector hM to the ground-truth response Y = {y1, · · · , yN}. Typically, the decoder is trained to predict the next word gi, given the context vector hM and the previous generated words {g1, . . . , gi−1}. 
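As a concrete reference for the encoder recurrence in equation (1) above, here is a minimal single-step numpy sketch before we turn back to the decoder; the tensor shapes, variable names and random parameters are our own illustrative assumptions rather than the paper's implementation, and, like equation (1), the sketch omits bias terms.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(h_prev, c_prev, w_k, params):
    """One step of the encoder LSTM in equation (1).

    h_prev, c_prev: previous hidden state and cell, shape (d,)
    w_k:            embedding of the k-th input word, shape (e,)
    params:         dict of weight matrices W_i, W_f, W_o, W_l, each (d, d+e)
    """
    x = np.concatenate([h_prev, w_k])          # [h_{k-1}, w_k]
    i_k = sigmoid(params["W_i"] @ x)           # input gate
    f_k = sigmoid(params["W_f"] @ x)           # memory (forget) gate
    o_k = sigmoid(params["W_o"] @ x)           # output gate
    l_k = np.tanh(params["W_l"] @ x)           # candidate cell content
    c_k = f_k * c_prev + i_k * l_k             # new cell state
    h_k = o_k * np.tanh(c_k)                   # new hidden state
    return h_k, c_k

# Toy usage: encode a 3-word post into the final state h_M.
d, e = 4, 5
rng = np.random.default_rng(0)
params = {name: rng.normal(scale=0.1, size=(d, d + e))
          for name in ["W_i", "W_f", "W_o", "W_l"]}
h, c = np.zeros(d), np.zeros(d)
for w in rng.normal(size=(3, e)):              # embeddings of x_1 .. x_3
    h, c = lstm_step(h, c, w, params)
print(h)                                       # h_M, the post representation
```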
More formally, the decoder defines a probability over the output Y by decomposing the joint probability into ordered conditionals via the chain rule of probability:

P(Y|X) = \prod_{i=1}^{N} p(y_i \mid h_M, y_1, \ldots, y_{i-1}) = \prod_{i=1}^{N} g(h_M, y_{i-1}, h'_i),

where g is a softmax function and h'_i is the hidden state of the decoder LSTM. In practice, the attention mechanism is usually added to this Seq2Seq framework. Instead of using h_M as the context vector in the decoder, we let the context vector, denoted s_i, depend on the whole sequence (h_1, \ldots, h_M). Each h_k contains information about the input sequence with a strong focus on the parts surrounding the k-th word of the input sentence. The context vector s_i is computed as a weighted sum of these h_k:

s_i = \sum_{k=1}^{M} \alpha_{ik} h_k.

The weight \alpha_{ik} of each representation h_k is computed by

\alpha_{ik} = \frac{\exp(e_{ik})}{\sum_{j=1}^{M} \exp(e_{ij})}, \qquad e_{ik} = v^{T} \tanh(W_1 h'_{i-1} + W_2 h_k),

where v, W_1 and W_2 are learned parameters. Here e_{ik} is an alignment model that scores how well the inputs around position k and the output at position i match; the score is based on the decoder LSTM hidden state h'_{i-1} (just before emitting y_i) and on h_k of the input sentence. Given a training set D, Seq2Seq assumes the data are i.i.d. samples from a distribution P and maximizes the following log-likelihood:

L = \sum_{(X,Y) \in D} \log P(Y|X).  (2)

4 Tailored Sequence to Sequence Models

As Equation (2) shows, Seq2Seq uses a general averaged likelihood of the training data as its objective function. However, this objective is often criticized for producing common responses, such as 'I don't know' and 'What does this mean?'. Clearly, such responses satisfy neither the specific nor the diverse requirement. The underlying reason is not difficult to understand. Intrinsically, conversation is a typical one-to-many application: multiple responses with different semantic meanings correspond to the same post, so the training data contains various post-response matching patterns. If we optimize an averaged likelihood, we can only capture the common matching patterns, which leads to common responses. Therefore, to generate specific responses we need to capture the most significant matching pattern, while to generate diverse responses we need a criterion that can capture the various matching patterns. Motivated by this idea, we propose two optimization criteria, i.e., the maximum generated likelihood and CVaR, to adapt to the two scenarios.

4.1 Maximum Generated Likelihood Criterion

To meet the specific requirement, we need to capture a specific matching pattern between post and response, rather than the common matching pattern. Therefore, instead of optimizing the averaged likelihood, we use the maximum generated likelihood (MGL) as the objective function. Mathematically, for a given post X and its associated ground-truth responses (Y_X^{(1)}, Y_X^{(2)}, \ldots, Y_X^{(m_X)}), the objective function is defined as

L = \sum_{X} \max_{k=1,\ldots,m_X} \log P(Y_X^{(k)} \mid X).

From this definition, we can see that we aim to capture the most significant post-response matching pattern in the training data, so the learned model can output specific responses for a given post. Since the max operator makes accurate optimization difficult, we approximate it with the softmax function.
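As an illustration of this relaxation, the sketch below (with our own function and variable names, not the paper's code) turns the per-response log-likelihoods into softmax weights and takes their weighted sum, which serves as the smooth stand-in for the hard max; a real implementation would operate on batched tensors inside the training graph.

```python
import numpy as np

def mgl_objective(log_probs_per_post):
    """Softmax relaxation of max_k log P(Y_X^(k) | X), summed over posts.

    log_probs_per_post: list of 1-D arrays; the entry for post X holds
        [log P(Y_X^(1)|X), ..., log P(Y_X^(m_X)|X)].
    """
    total = 0.0
    for log_p in log_probs_per_post:
        # Softmax over log-probabilities equals the normalized probabilities.
        w = np.exp(log_p - log_p.max())
        w = w / w.sum()
        # Responses with small probability get small weight,
        # so they contribute little to the objective.
        total += float(np.sum(w * log_p))
    return total

# Toy example: one post with three ground-truth responses.
print(mgl_objective([np.log(np.array([0.6, 0.3, 0.01]))]))
```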
The objective function then becomes the following form:

L = \sum_{X} \sum_{k=1}^{m_X} \frac{P(Y_X^{(k)}|X)}{\sum_{j=1}^{m_X} P(Y_X^{(j)}|X)} \log P(Y_X^{(k)}|X).

If the probability of one ground-truth response Y_X^{(k)} is small, it contributes little to the objective function. That is to say, we only require the top ground-truth responses with relatively large probabilities to be close to the post.

4.2 CVaR Criterion

To meet the diverse requirement, we need to capture various matching patterns between a post and its multiple ground-truth responses. Therefore, instead of optimizing the averaged likelihood, we optimize the conditional value-at-risk, CVaR for short. CVaR is a prominent risk measure used extensively in finance, and it is proven to be coherent (Artzner et al., 1999) and numerically effective (Krokhmal et al., 2002; Uryasev, 2013). VaR and CVaR are defined as follows. For a confidence level \alpha \in [0, 1] and a continuous random cost Z whose distribution is parameterized by a controllable parameter \theta, the \alpha-VaR of the cost Z, denoted \nu_\alpha(\theta), is defined as

\nu_\alpha(\theta) = \inf\{\nu \in \mathbb{R} \mid P(Z \le \nu) \ge \alpha\}.

\alpha-VaR is the maximum cost that might be incurred with probability at least \alpha, or simply the \alpha-quantile of Z. The \alpha-CVaR, denoted \Phi_\alpha(\theta), is defined as

\Phi_\alpha(\theta) = \frac{1}{1-\alpha} \int_{\alpha}^{1} \nu_r(\theta)\, dr = E_\theta[Z \mid Z \ge \nu_\alpha(\theta)].

It can be viewed as the expected cost over the (1-\alpha) worst outcomes of Z. Applying CVaR to generating diverse responses, we define the random cost Z as -\log P(Y|X); the corresponding CVaR is

\Phi_\alpha(\theta) = \frac{1}{1-\alpha} \int_{\alpha}^{1} \nu_r(\theta)\, dr,

where \nu_r(\theta) = \inf\{\nu \in \mathbb{R} \mid P(-\log P(Y|X) \le \nu) \ge r\} and \theta are the parameters of the Seq2Seq model. Equivalently,

\nu_r(\theta) = \inf\{\nu \in \mathbb{R} \mid P(P(Y|X) \ge e^{-\nu}) \ge r\}.

Therefore, for a given post X and its ground-truth responses (Y_X^{(1)}, Y_X^{(2)}, \ldots, Y_X^{(m_X)}), optimizing CVaR is equivalent to maximizing the following objective function:

L = \sum_{X} \frac{1}{1-\alpha} \sum_{Y_X^{(k)} \in \mathcal{Y}_{1-\alpha}} P(Y_X^{(k)}|X),

where \mathcal{Y}_{1-\alpha} is a collection of ground-truth responses such that \sup\{P(Y_X^{(i)}|X) : Y_X^{(i)} \in \mathcal{Y}_{1-\alpha}\} \le \alpha.

Maximizing this objective requires the worst 1-\alpha responses to be close to the post. We therefore aim to capture each distinct post-response matching pattern by optimizing the CVaR criterion, which meets the requirement of generating diverse responses.

5 Experiments

In this section, we conduct experiments on both the specific-requirement and the diverse-requirement scenario to evaluate the performance of our proposed methods.

5.1 Experimental Settings

5.1.1 Datasets

We use two public datasets in our experiments. For the specific-requirement scenario, we use the Ubuntu dialogue corpus1 extracted from the Ubuntu question-answering forum, named Ubuntu (Lowe et al., 2015). The original training data consist of 7 million conversational post-response pairs from 2014 to April 27, 2012. The validation data are conversational pairs from April 27, 2014 to August 7, 2012, and the test data are from August 7, 2012 to December 1, 2012. We set the number of positive examples to 4,000,000 in the GitHub script to sample data directly from the whole corpus. We then construct post-response pairs from both the context and the utterance. We also conduct some data pre-processing: we use the official script to tokenize, stem and lemmatize, and we remove duplicates as well as sentences shorter than 5 or longer than 50 tokens. Finally, we obtain 3,200,000, 100,000 and 100,000 pairs for training, validation and testing, respectively.
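A simplified sketch of the length and duplicate filtering step just described is given below; tokenization is reduced to whitespace splitting here, whereas the actual pipeline uses the official script for tokenizing, stemming and lemmatizing, and whether the length bounds apply to posts, responses or both is our assumption (both sides are checked in this sketch).

```python
def filter_pairs(pairs, min_len=5, max_len=50):
    """Drop duplicate post-response pairs and pairs whose post or
    response is shorter than min_len or longer than max_len tokens."""
    seen = set()
    kept = []
    for post, response in pairs:
        lengths = (len(post.split()), len(response.split()))
        if not all(min_len <= n <= max_len for n in lengths):
            continue
        key = (post, response)
        if key in seen:                 # remove exact duplicates
            continue
        seen.add(key)
        kept.append((post, response))
    return kept

pairs = [
    ("how do I install the amd driver on ubuntu twelve ten ?",
     "the fglrx driver is in the repo but it may depend on your chipset ."),
    ("ok", "thanks"),                   # too short, filtered out
]
print(len(filter_pairs(pairs)))         # -> 1
```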
For the diverse-requirement scenario, we use the Chinese Weibo dataset, named STC (Shang et al., 2015). It consists of 3,788,571 postresponse pairs extracted from the Chinese Weibo website and cleaned by the data publishers. We randomly split the data to training, validation, and testing sets, which contains 3,000,000, 388,571 and 400,000 pairs, respectively. 2 5.1.2 Baseline Methods Six baseline methods are used for comparison, including traditional Seq2Seq (Sutskever et al., 2014), RNN-encdec (Cho et al., 2014), Seq2Seq with attention(Seq2Seq-att) (Bahdanau et al., 2015), mutual information(MMI) (Li et al., 2016b), Adver-REGS (Li et al., 2017) and Mechanism model (Zhou et al., 2017). Here are some empirical settings. We first introduce the input em1https://github.com/rkadlec/ubuntu-ranking-datasetcreator 2https://github.com/zhanghainan/TailoredSeq2Seq2 DifferentConversationScenarios beddings. For STC, we utilize character-level embeddings rather than word-level embeddings, due to the word sparsity, segmentation mistakes and unknown Chinese words which may lead to inferior performance (Hu et al., 2015). For Ubuntu, we use word embeddings trained by word2vec on the training dataset. In the training process, the dimension is set to be 300, the size of negative sample is set to be 3, and the learning rate is 0.05. For fair comparison among all the baseline methods and our methods, the number of hidden nodes is all set to 300, and batch size is set to 200. Stochastic gradient decent (SGD) is utilized in our experiment for optimization, instead of Adam, because SGD yields better performances in our experiments. The learning rate is set to be 0.5, and adaptively decays with rate 0.99 in the optimization process. We run our model on a Tesla K80 GPU card with Tensorflow framework. All the methods are pretrained with the same Seq2Seq model. For maximum generated likelihood(MGL) model, some people may argue that the specific results may be due to the usage of single postresponse pair. Thus we also implement the baseline of using a single post-response pair, by random selecting the response from the ground-truth for each post, denoted as Single Model. 5.1.3 Evaluation Measures We use both quantitative metrics and human judgements to evaluate the proposed MGL model and the CVaR model. Specifically, we use two kinds of metrics for quantitative comparisons. The first one kind is the traditional metric, including PPL and Bleu score (Xing et al., 2017). They are both widely used in natural language processing, and here we use them to evaluate the quality of the generated responses. The other kind is to evaluate the specific degree3 in (Li et al., 2016a,b). It measures the specific degree of the generated responses, by calculating the number of distinct unigrams and bigrams in the generated responses, denoted as distinct. If a model usually generates common responses, the distinct will be low. For the diverse-requirement scenario, we define two measures to evaluate the performance. Specifically, we set the beam as 10. Group-diversity is 3Though it is named as diversity in Li’s paper, this diversity is not the same as that used in our paper. This diversity measures the specific degree of the generated responses over all generations. While the diversity used in our paper means that the responses are required to be relevant to a post from different aspects. 
1484 model distinct-1 distinct-2 BLEU PPL Seq2Seq 0.140 1.11 1.231 51.26 RNN-encdec 0.125 1.24 1.231 46.97 Seq2Seq-att 0.351 4.36 1.294 47.84 MMI 0.283 4.84 1.297 42.52 Adver-REGS 0.268 5.07 1.279 37.71 Single 0.324 5.27 1.342 30.36 MGL 0.358 6.30 1.354 26.34 CVaR 0.294 5.52 1.290 30.03 Table 1: The metric-based evaluation results(%) of different models on Ubuntu. defined to calculate the difference between each two generations for one post, denoted as divrs. Group-overlap is defined to calculate the overlap between each two generations for one post, denoted as overlap. The detailed definitions are shown as follows. divrs = 1 N N X i=1 X Xi cosine(Gi1, Gi2), overlap = 1 N N X i=1 X Xi overlap(Gi1, Gi2), where Gi1 and Gi2 are the generated responses from the model for post X, cosine(Gi1, Gi2) is the cosine similarity, and the overlap(Gi1, Gi2) is defined as the intersection divided by union. For human evaluation, given 200 randomly sampled post and it’s generated responses, three annotators, randomly selected from a class of computer science majored students(48 students), are required to give 3-graded judgements. The annotation criteria are defined as follows: 1. the response is nonfluent or has wrong logic; or the response is fluent but not related with the post; 2. the response is fluent and weak related, but it’s common which can reply many other posts; 3. the response is fluent and strong related with its post, which is like following a real person’s tone. 5.2 Specific-Requirement Scenario We demonstrate the experimental results on the specific-requirement scenario, based on the Ubuntu dataset. 5.2.1 Metric-based Evaluation The quantitative evaluation results are shown in Table 1. From the results, we can see that both model human score distribution(%) Ave. Kappa 1 2 3 Seq2Seq-att 46.5 38.6 14.9 1.684 0.387 MMI 42 38 20 1.78 0.395 Adver-REGS 42 26 32 1.9 0.379 Single 49 14 37 1.88 0.383 MGL 33 16 51 2.18 0.372 CVaR 40 12 48 2.08 0.381 Table 2: The comparisons of different models by human evaluation on Ubuntu. MMI and Adver-REGS outperform Seq2Seq baselines in terms of BLUE, PPL and distinct measures. That’s because both MMI and AdverREGS further consider some reward functions in the optimization process to encourage specific results. Specifically, MMI uses a predefined reward function to penalize generating common responses, and Adver-REGS uses a learned discriminator to define the reward function. Our MGL model obtains higher BLEU and lower PPL than baseline models. Take the BLEU score on Ubuntu dataset for example, the BLEU score of MGL model is 1.354, which is significantly better than that of MMI and Adver-REGS, i.e., 1.297 and 1.279. These results indicate that our MGL generates responses with higher quality. When compared with the Single model, MGL is also better because MGL considers more data in the model computation process. The distinct scores of MGL are also higher than baseline models, which indicate that our model can generate more specific responses. That’s because it has the ability to learn the significant matching pattern between post and responses, by optimizing the maximum generated likelihood rather than the averaged one. In summary, our maximum generated likelihood model produces more fluent and specific results, as compared with baseline methods. 5.2.2 Human Evaluation The human evaluation results are shown in Table 2, in which the percentage of sentences belonging to each grade and the averaged grade are demonstrated to evaluate the quality of generated responses. 
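For reference, the group-diversity (divrs) and group-overlap measures defined in Section 5.1.3 can be computed roughly as in the sketch below, where each generated response is treated as a bag of tokens; representing responses by token-count vectors for the cosine similarity, and averaging over all pairs within a post's beam, are our reading of the definitions rather than the paper's exact implementation.

```python
import math
from collections import Counter
from itertools import combinations

def cosine(a, b):
    """Cosine similarity between two token-count vectors."""
    ca, cb = Counter(a), Counter(b)
    num = sum(ca[t] * cb[t] for t in ca)
    den = math.sqrt(sum(v * v for v in ca.values())) * \
          math.sqrt(sum(v * v for v in cb.values()))
    return num / den if den else 0.0

def overlap(a, b):
    """Token-set intersection divided by union."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def group_scores(generations_per_post):
    """generations_per_post: list (one entry per post) of lists of
    tokenized responses produced by beam search for that post."""
    div, ov, n = 0.0, 0.0, len(generations_per_post)
    for gens in generations_per_post:
        for g1, g2 in combinations(gens, 2):
            div += cosine(g1, g2)
            ov += overlap(g1, g2)
    return div / n, ov / n

gens = [[["paris", "is", "beautiful"], ["paris", "is", "romantic"]]]
print(group_scores(gens))   # lower values mean more diverse generations
```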
The Kappa (Fleiss, 1971) value is presented to demonstrate the consistency of the different annotators. From the results, we can see that MGL significantly outperforms the baseline methods. The averaged score of the MGL model is 2.18, which is much higher than that of MMI and Adver-REGS, i.e., 1.78 and 1.9, respectively. The percentage of strongly related sentences (i.e., grade '3') of the MGL model is 51%, which is also higher than that of MMI, Adver-REGS and the Single model, i.e., 20%, 32% and 37%. In summary, our maximum generated likelihood model produces better responses than the baselines. Compared with MMI and Adver-REGS, both the metric-based and the human evaluation improvements of MGL are significant on the Ubuntu dataset (p-value < 0.01).

5.2.3 Case Study

Here we show some generated responses for demonstration. Specifically, Table 3 gives example posts and their ground-truth responses from Ubuntu, together with the responses generated by the different models.

post: Install grub to the hdd where the bios is set to boot.
response1: So I would not just install it to /boot?
Seq2Seq-att: A log file a bug with the install cd.
MMI: What do you mean?
Adver-REGS: How do I install grub?
Single: Do you delete grub?
MGL: I have boot with the window from the grub and it go from the boot menu.
CVaR: If you have grub on the same pc, it is 0 0 boot partition.
post: I want to have a backup , sad I use up all my space with dual boot and game.
response1: Oh you can get a sata to usb adapt : )
Seq2Seq-att: What about your computer ?
MMI: It is not in the install , you can do that.
Adver-REGS: Game game game game.
Single: And other user will do too ?
MGL: And do you buy direct driver from the kernel and it cause run ?
CVaR: Yes , you can also use a text file to your file .
post: Take a look at install .
response1: I am to cd to the directories contain the folder.
Seq2Seq-att: How do you install it?
MMI: I think it be a good idea to do that.
Adver-REGS: I have no idea what I am looking for.
Single: I think it is a bite , but I do not know a good thing to do that. I am use.
MGL: I think so, I have a lot of nautilus. I am already install.
CVaR: I just install it from synaptics, but I want to install it on the same repository.
Table 3: The generated responses from different models on Ubuntu.

We can see that Seq2Seq-att, MMI and Adver-REGS all produce common responses, such as 'What do you mean?', 'I have no idea what I am looking for.' and 'What about your computer?'. Our models give interesting responses with specific meanings. Take the post 'Install grub to the hdd where the bios is set to boot.' as an example: our model conveys more specific information by replying 'I have boot with the window from the grub and it go from the boot menu.'. In another case, for the post 'I want to have a backup , sad I use up all my space with dual boot and game.', our MGL model generates a follow-up question, 'And do you buy direct driver from the kernel and it cause run?', which is more intelligent. Similar observations hold for many other posts, which we omit for space limitations.

Figure 1: Influences of different α in CVaR (curves for divrs, overlap, distinct-2 and log(PPL) as functions of α).
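As background for the α sweep in Figure 1, the following sketch shows how the CVaR criterion of Section 4.2 restricts the objective to the worst (1 − α) fraction of ground-truth responses per post, i.e., those with the lowest model likelihood; the function names and the simple sorting-based selection are our own illustration, not the paper's implementation.

```python
import numpy as np

def cvar_objective(log_probs_per_post, alpha=0.3):
    """Sum over posts of 1/(1-alpha) times the total probability mass of
    the worst (lowest-probability) (1-alpha) fraction of responses."""
    total = 0.0
    for log_p in log_probs_per_post:
        probs = np.sort(np.exp(log_p))                # ascending: worst first
        n_worst = max(1, int(np.ceil((1 - alpha) * len(probs))))
        total += probs[:n_worst].sum() / (1 - alpha)
    return total

# With alpha = 0.3 (the best value found in Figure 1), roughly the
# 70% least likely ground-truth responses are kept in the objective.
print(cvar_objective([np.log(np.array([0.5, 0.3, 0.1, 0.05]))], alpha=0.3))
```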
model BLEU PPL overlap divrs Seq2Seq 1.616 132.93 67.26 87.83 RNN-encdec 1.636 130.56 65.72 87.85 Seq2Seq-att 1.620 76.95 63.38 85.32 Adver-REGS 1.635 84.77 57.96 84.94 Mechanism 1.642 90.48 57.67 84.64 MGL 1.703 36.25 66.92 86.22 CVaR 1.652 70.94 38.96 71.38 Table 4: The metric-based evaluation results(%) of different models on STC. 5.3 Diverse-Requirement Scenario Now we introduce the experimental results for the diverse-requirement scenario, based on STC. 5.3.1 Parameters Setting First, we study the influences of different parameter α in CVaR. Specifically, we show the validation result with α ranging from 0 to 0.9 with step 0.1, to see the change of CVaR performances. Figure 1 show the results of different α in terms of divrs , overlap, distinct-2 and PPL. From the results, we can see that the performances of divrs , overlap and PPL are all changing in a similar trend, i.e. first drop and then increase. The best α for CVaR is 0.3, which is used in the following experiments. 5.3.2 Metric-based Evaluation The quantitative evaluation results are shown in Table 4. From the results, we can see that both Adver-REGS and Mechanism outperform Seq2Seq models in terms of BLUE and PPL measures. That’s because they both use some techniques to enhance the generation ability. AdverREGS uses a learned discriminator to define the reward function, while Mechanism uses a style 1486 model human score distribution(%) Ave. Kappa 1 2 3 Seq2Seq-att 54.5 21 24.5 1.7 0.452 MMI 56 15.5 28.5 1.725 0.447 Adver-REGS 48.5 20 31.5 1.83 0.436 Mechanism 52.5 17.5 30 1.775 0.427 MGL 37 11 52 2.15 0.451 CVaR 44.5 11.5 44 1.995 0.437 Table 5: The comparisons of different models by human evaluation on STC. hidden state to describe the generation mechanism. Both MGL and CVaR obtain better results in terms of BLUE and PPL, compared with other baselines. These results indicate that our proposed models generate more fluent responses in the diverse-requirement scenario. As for the evaluation for the diversity, we can see that CVaR model obtains the lowest overlap and divrs among all the baseline models. Take the overlap score on STC for example, the overlap score of CVaR model is 38.86, which is significantly lower than that of Adver-REGS, Mechanism and GLM, i.e., 57.96, 57.67 and 66.92. These results indicate that our CVaR model can generate responses with higher diversity. That’s because it has the capability to capture various matching patterns in the training data, by optimizing the worst 1 −α costs. Therefore, our CVaR model produces both fluent and diverse results, as compared with baseline methods. 5.3.3 Human Evaluation The human evaluation results are shown in Table 5. From the results, we can see MGL and CVaR models achieve comparable results, which are significantly better than baseline methods. Specifically, the averaged score of MGL and CVaR is 2.15 and 1.995, which is significantly higher than that of Adver-REGS and Mechanism, i.e., 1.83 and 1.775, respectively. The percentage of strongly related sentences (i.e., the grade ‘3’) of MGL Model and CVaR are 52% and 44%, which are also significantly higher than that of AdverREGS and Mechanism, i.e., 31.5% and 30%. We conducted significant test for the improvement. As compared with Adver-REGS and Mechanism, both the metric-based improvements and human evaluation improvements of CVaR are significant on STC datasets (p-value < 0.01). 5.3.4 Case Study Here we show some generated responses for demonstration. 
Specifically, Table 6 gives one post 总决赛继续等待韦德.(Waiting for Wade in the final games.) response 每个人都有每个人的喜爱(Everyone has his favorite stars.) response 分析比新浪分析的好多了(Analysis is much better than Sina) response 等待闪电侠彻底爆发!(Waiting for the explosion of Mr.Flash) Mechanism 韦德真伟大啊!支持!(Wade is really great! Support him!) Mechanism 韦德越来越好。(Wade will be better) Mechanism 韦德威武!(Wade is mighty) CVaR 决赛一定要去看看的!(I must go and see the final games) CVaR 詹皇怎么还能这么快啊(James is so fast) CVaR 决赛是一种对对手的打击,热火加油 (The final games is a blow for the opposite. Heat come on) post 神舟八号飞船与天宫一号第二次对接成功. (Shenzhou 8 spacecraft and Tiangong-1 has the second successful docking) response 希望其它方面也赶快领先世界(Hope other aspects will soon lead the world) response 过硬的技术,紧密地配合促成了这次成功。 Strong technology and close cooperation contributed to this success response 下一次就是载人对接啦.(Next will have a human in it) Mechanism 周末报还会直播?(Will broadcast in the Weekend Newspaper? ) Mechanism 不错,就是不错(It is really good) Mechanism 不错,就是在一起。(It is really good, they should be together) CVaR 是啊,还是要坚持在一起。(Yes, they should insisted on being together) CVaR 您这是在看头版吗?(Are you see it in the front page of the newspaper?) CVaR 不错,有空推荐给爸爸! (It is really good, you could recommend it to your father if you have time) Table 6: The generated responses from different models on STC. post and its three ground-truth responses from STC. We also give three generated responses from Mechanism and CVaR model. We can see that Mechanism produces responses with the same meaning, such as ‘Wade is so amazing’ and ‘It is really good’. However, our CVaR models give specific responses with diverse meanings. Take the post ‘Waiting for Wade in the final games.’ for example, CVaR’s responses are related to different topics. The response ‘I must go and see the final games ’ focuses on the game, while another response of ‘James is so fast ’ focuses on the person, James. For the other case, the post is about the docking of two spacecrafts and the CVaR responses are related to different users, such as the supporter of the event, the newspaper reader and the children who have a father concerned with the current news . We have obtained similar observations for many other posts, but we have to omit them for space limitations. 6 Conclusion In this paper, we propose two new optimization criteria for Seq2Seq model to adapt different conversation scenario. For the specific-requirement scenario, such as customer service, which requires specific and high quality responses, maximum generated likelihood is used as the objective function instead of the averaged one. While for the diverse-requirement, such as chatbot, which requires diverse and high quality responses even if for the same post, CVaR is used as the objective function for worst case optimization. Experimental results on both specific-requirement 1487 (Ubuntu data) and diverse-requirement scenarios (STC data) demonstrate that the proposed optimization criteria can meet the corresponding requirement, yielding better performances against traditional Seq2Seq models in terms of both metric-based and human evaluations. The contribution of this paper is to use tailored Seq2Seq model for different conversation scenarios. The study shows that if we want to generate specific responses, it is important to design the model to learn the most significant matching pattern between post and response. While if we want to generate diverse responses, a risk-sensitive objective functions is helpful. 
In future work, we plan to further investigate the impact of risksensitive objective functions, including the relations between model robustness and diverse generations. Acknowledgments This work was funded by the 973 Program of China under Grant No. 2014CB340401, the National Natural Science Foundation of China (NSFC) under Grants No. 61425016, 61472401, 61722211, 61773362, and 20180290, the Youth Innovation Promotion Association CAS under Grants No. 20144310, and 2016102, and the National Key R&D Program of China under Grants No. 2016QY02D0405. References S. Alexander, T. F. Coleman, and Y. Li. 2006. Minimizing cvar and var for a portfolio of derivatives. Journal of Banking and Finance 30(2):583–605. P. Artzner, F. Delbaen, J. M. Eber, and D. Heath. 1999. Coherent measures of risk mathematical finance 9. Mathematical Finance Theory Modeling Implementation volume 9(3):203–228(26). Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. The International Conference on Learning Representations . Youhua Chen, Minghui Xu, and Zhe George Zhang. 2015. Technical note—a risk-averse newsvendor model under the cvar criterion. Operations Research 57(4):1040–1044. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. Computer Science . Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. American Psychological Association . Baotian Hu, Qingcai Chen, and Fangze Zhu. 2015. Lcsts: A large scale chinese short text summarization dataset. arXiv preprint arXiv:1506.05865 . Pavlo Krokhmal, Jonas Palmquist, and Stanislav Uryasev. 2002. Portfolio optimization with conditional value-at-risk objective and constraints. Journal of Risk 4:11–27. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. The North American Chapter of the Association for Computational Linguistics . Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. 2016b. Deep reinforcement learning for dialogue generation. The Conference on Empirical Methods in Natural Language Processing . Jiwei Li, Will Monroe, Tianlin Shi, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. The Conference on Empirical Methods in Natural Language Processing . Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. Computer Science . Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. 2017. Sequence to backward and forward sequences: A content-introducing approach to generative short-text conversation. The Annual Meeting of the Association for Computational Linguistics . R. Tyrrell Rockafellar and Stanislav Uryasev. 2002. Conditional value-at-risk for general loss distributions. Journal of Banking and Finance 26(7):1443– 1471. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. The Annual Meeting of the Association for Computational Linguistics . Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In The Annual Conference on Neural Information Processing Systems. pages 3104–3112. Stanislav Uryasev. 2013. 
Probabilistic constrained optimization: methodology and applications. Springer Science and Business Media . Ashwin K Vijayakumar, Michael Cogswell, Ramprasath R. Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2016. Diverse beam search: Decoding diverse solutions from neural sequence models. arXiv . 1488 Oriol Vinyals and Quoc Le. 2015. A neural conversational model. Computer Science . Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In The Association for the Advancement of Artificial Intelligence. pages 3351–3357. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In The Association for the Advancement of Artificial Intelligence. pages 2852– 2858. Ganbin Zhou, Ping Luo, Rongyu Cao, Fen Lin, Bo Chen, and Qing He. 2017. Mechanism-aware neural machine for dialogue response generation. In The Association for the Advancement of Artificial Intelligence. pages 3400–3407.
2018
137
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1489–1498 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1489 Knowledge Diffusion for Neural Dialogue Generation Shuman Liu†,§∗, Hongshen Chen‡, Zhaochun Ren‡, Yang Feng†, Qun Liu♦, Dawei Yin‡, † Key Laboratory of Intelligent Information Processing Institute of Computing Technology, Chinese Academy of Sciences ‡ Data Science Lab, JD.com ♦ADAPT centre, School of Computing, Dublin City University § University of Chinese Academy of Sciences [email protected], chenhongshen,[email protected], fengyang,[email protected], [email protected] Abstract End-to-end neural dialogue generation has shown promising results recently, but it does not employ knowledge to guide the generation and hence tends to generate short, general, and meaningless responses. In this paper, we propose a neural knowledge diffusion (NKD) model to introduce knowledge into dialogue generation. This method can not only match the relevant facts for the input utterance but diffuse them to similar entities. With the help of facts matching and entity diffusion, the neural dialogue generation is augmented with the ability of convergent and divergent thinking over the knowledge base. Our empirical study on a real-world dataset proves that our model is capable of generating meaningful, diverse and natural responses for both factoid-questions and knowledge grounded chi-chats. The experiment results also show that our model outperforms competitive baseline models significantly. 1 Introduction Dialogue systems are receiving more and more attention in recent years. Given previous utterances, a dialogue system aims to generate a proper response in a natural way. Compared with the traditional pipeline based dialogue system, the new method based on sequence-to-sequence model (Shang et al., 2015; Vinyals and Le, 2015; Cho et al., 2014) impressed the research communities with its elegant simplicity. Such methods are usually in an end-to-end manner: utterances are encoded by a recurrent neural network ∗Work done when the first author was an intern at Data Science Lab, JD.com. while responses are generated sequentially by another (sometimes identical) recurrent neural network. However, due to lack of universal background knowledge and common senses, the endto-end data-driven structure inherently tends to generate meaningless and short responses, such as “haha” or “I don’t know.” To bridge the gap of the common knowledge between human and computers, different kinds of knowledge bases ( e.g., the freebase (Google, 2013) and DBpedia (Lehmann et al., 2017) ) are leveraged. A related application of knowledge bases is question answering, where the given questions are first analyzed, followed by retrieving related facts from knowledge bases (KBs), and finally the answers are generated.The facts are usually presented in the form of “subject-relationobject” triplets, where the subject and object are entities. With the aid of knowledge triplets, neural generative question answering systems are capable of answering facts related inquiries (Yin et al., 2016; Zhu et al., 2017; He et al., 2017a), WH questions in particular, like “who is Yao Ming’s wife ?”. 
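To fix ideas, a knowledge base of such subject-relation-object triplets can be kept in a simple index that already supports the explicit factoid lookups mentioned above; the toy triplets, the relation name "wife_is" and the lookup helper below are only illustrative, not the knowledge base or API used in this work.

```python
from collections import defaultdict

class TripleKB:
    """Minimal (subject, relation, object) store with lookup by
    subject and relation, e.g. for explicit factoid questions."""
    def __init__(self, triples):
        self.by_subject = defaultdict(list)
        for s, r, o in triples:
            self.by_subject[s].append((r, o))

    def objects(self, subject, relation):
        return [o for r, o in self.by_subject[subject] if r == relation]

kb = TripleKB([
    ("Yao Ming", "wife_is", "Ye Li"),
    ("Titanic", "direct_by", "James Cameron"),
    ("Titanic", "act_by", "Leonardo DiCaprio"),
])
# "Who is Yao Ming's wife?" -> subject and relation recognized upstream
print(kb.objects("Yao Ming", "wife_is"))   # ['Ye Li']
```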
Although answering enquiries is essential for dialogue systems, especially for task-oriented dialogue systems (Eric et al., 2017), it is still far behind a natural knowledge grounded dialogue system, which should be able to understand the facts involved in current dialogue session (socalled facts matching), as well as diffuse them to other similar entities for knowledge-based chitchats (i.e. entity diffusion): 1) facts matching: in dialogue systems, matching utterances to exact facts is much harder than explicit factoid inquiries answering. Though some utterances are facts related inquiries, whose subjects and relations can be easily recognized, for some utterances, the subjects and relations are elusive, which leads the trouble in exact facts matching. 1490 ID Dialogue 1 A: Who is the director of the Titanic? 泰坦尼克号的导演是谁? B: James Cameron. 詹姆斯卡梅隆。 2 A: Titanic is my favorite film! 泰坦尼克号是我最爱的电影! B: The love inside it is so touching. 里面的爱情太感人了。 3 A: Is there anything like the Titanic? 有什么像泰坦尼克号一样的电影吗? B: I think the love story in film Waterloo Bridge is beautiful, too. 我觉得魂断蓝桥中的爱情故事也很美。 4 A: Is there anything like the Titanic? 有什么像泰坦尼克号一样的电影吗? B: Poseidon is also a classic marine film. 海神号也是一部经典的海难电影。 Table 1: Examples of knowledge grounded conversations. Knowledge entities are underlined. Table 1 shows an example: Item 1 and 2 are talking about the film “Titanic”, Unlike item 1, which is a typical question answering conversation,item 2 is a knowledge related chit-chat without any explicit relation. It is difficult to define the exact fact match for item 2. 2) entity diffusion: another noticeable phenomenon is that the conversation usually drifts from one entity to another. In Table 1, utterances in item 3 and 4 are about entity “Titanic”, however, the entity of responses are other similar films. Such entity diffusion relations are rarely captured by the current knowledge triplets. The response in item 3 shows that the two entities “Titanic” and “Waterloo Bridge” are relevant through “love stories”. Item 4 suggests another similar shipwreck film of “Titanic”. To deal with the aforementioned challenges, in this paper, we propose a neural knowledge diffusion (NKD) dialogue system to benefit the neural dialogue generation with the ability of both convergent and divergent thinking over the knowledge base, and handle factoid QA and knowledge grounded chit-chats simultaneously. NKD learns to match utterances to relevant facts; the matched facts are then diffused to similar entities; and finally, the model generates the responses with respect to all the retrieved knowledge items. In general, our contributions are as follows: • We identify the problem of incorporating knowledge bases and dialogue systems as facts matching and entity diffusion. • We manage both facts matching and entity diffusion by introducing a novel knowledge diffusion mechanism and generate the responses with the retrieved knowledge items, which enable the convergent and divergent thinking over the knowledge base. • The experimental results show that the proposed model effectively generate more diverse and meaningful responses involving more accurate relevant entities compared with the state-of-the-art baselines. The corpus will be released upon publication. 2 Model Figure 1: Neural Knowledge Diffusion Dialogue System. Given the input utterance X = (x1, x2, ..., xNX), NKD produces a response Y = (y1, y2, ..., yNY ) containing the entities from the knowledge base K. 
NX and NY are the number of tokens in the utterance and response respectively. The knowledge base K is a collection of knowledge facts in the form of triplets (subject, relation, object). In particular, both subjects and objects are entities in this work. As illustrated in Figure 1, the model mainly consists of four components: 1. An encoder encodes the input utterance X into a vector representation. 1491 2. A context RNN keeps the dialogue state along a conversation session. It takes the utterance representation as input, and outputs a vector guiding the response generation each turn. 3. A decoder generates the final response Y . 4. A knowledge retriever performs the facts matching and diffuses to similar entities at each turn. Our work is built on hierarchical recurrent encoder-decoder architecture (Sordoni et al., 2015a), and a knowledge retriever network integrates the structured knowledge base into the dialogue system. 2.1 Encoder The encoder transforms discrete tokens into vector representations. To capture information at different aspects, we learn utterance representations with two independent RNNs resulting with two hidden state sequences HC = (hC 1 , hC 2 , ..., hC NX) and HK = (hK 1 , hK 2 , ..., hK NX) respectively. One final hidden state hC NX is used as the input of context RNN to track the dialogue state. The other final hidden state hK NX is utilized in knowledge retriever and is designed to encode the knowledge entities and relations within the input utterances. For instance, in Figure 1, “director” and “Titanic” in X1 are knowledge elements. 2.2 Knowledge Retriever Knowledge retriever extracts a certain number of facts from knowledge base and specifies their importance. It enables the knowledge grounded neural dialogue system with convergent and divergent thinking ability through facts matching and entity diffusion. Figure 2 illustrates the process. 2.2.1 Facts Matching Given the input utterance X and hK NX, relevant facts are extracted from both the knowledge base and the dialogue history. A predefined number of relevant facts F = {f1, f2, ..., fNf } are obtained through string matching, entity linking or named entity recognition. As shown in Figure 2, in the first sentence, “Titanic” is recognized as an entity, all the relevant knowledge triplets are extracted. Then, these entities and knowledge triplets are transformed into fact representations hf = {hf 1, hf 2, ...hf Nf } by averaging the entity embedding and relation embedding. The relevance coefficient rf between a fact and the input utterances, ranging from 0 to 1, is calculated by a nonlinear function or a sub neural network. Here, we apply a multi-layer perceptron (MLP): rf k = MLP([hK NX, hf k]). For the multi-turn conversation, entities in previous utterances are also inherited and reserved as depicted in Figure 2 the dotted lines. For instance, in the second sentence of Figure 2 (right one), no new fact is extracted from the input utterance. Thus it is necessary to record the history entities “Titanic” and “James Cameron”. We summarize the facts as relevant fact representation Cf through a weighted average of fact representations hf: Cf = PNf k=1 rf khf k PNf k=1 rf k . 2.2.2 Entity Diffusion To retrieve other relevant entities, which are typically not mentioned in the dialogue utterance, we diffuse the matched facts. 
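Before the details of entity diffusion, the facts-matching step of Section 2.2.1 can be sketched as follows: each retrieved fact receives a relevance score from a small MLP, and the fact representations are combined into the summary C^f by a weighted average. All shapes, the one-hidden-layer MLP and the random parameters below are illustrative assumptions rather than the reported architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8                                   # toy embedding size

def mlp_score(x, W1, b1, w2, b2):
    """Tiny MLP mapping a concatenated vector to a score in (0, 1)."""
    h = np.tanh(W1 @ x + b1)
    return 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))

def facts_matching(h_utt, fact_reprs, params):
    """Relevance coefficients r^f and weighted fact summary C^f."""
    r = np.array([mlp_score(np.concatenate([h_utt, hf]), *params)
                  for hf in fact_reprs])
    c_f = (r[:, None] * np.stack(fact_reprs)).sum(axis=0) / r.sum()
    return r, c_f

params = (rng.normal(size=(d, 2 * d)), np.zeros(d),
          rng.normal(size=d), 0.0)
h_utt = rng.normal(size=d)                        # plays the role of h^K_{N_X}
facts = [rng.normal(size=d) for _ in range(3)]    # averaged triplet embeddings
r_f, C_f = facts_matching(h_utt, facts, params)
print(r_f.round(3), C_f.shape)
```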
We calculate the similarity between each entity in the knowledge base (excluding entities that have already occurred in previous utterances) and the relevant fact representation with a multi-layer perceptron, resulting in a similarity coefficient r^e ranging from 0 to 1: r^e_k = MLP([h^K_{N_X}, C^f, e_k]), where e_k is the entity embedding. The top N_e entities E = {e_1, e_2, ..., e_{N_e}} are selected as similar entities. The similar entity representation C^s is then formalized as

C^s = \frac{\sum_{k=1}^{N_e} r^e_k e_k}{\sum_{k=1}^{N_e} r^e_k}.

Back to the example in Figure 2: in the first turn, the matched fact of the input utterance, (Titanic, direct_by, James Cameron), receives a high relevance coefficient in facts matching, as expected. When a fact is matched, entity diffusion is intuitively unnecessary; accordingly, in Figure 2 the entities in the entity diffusion table have low similarities. In the second turn, no triplet matches the utterance, while the entity "Titanic" achieves a much higher relevance score. Then, in entity diffusion, the similar entities "Waterloo Bridge" and "Poseidon" get relatively higher similarity weights than in the first turn.

Figure 2: Knowledge Retriever. Facts related to the input utterance are extracted by facts matching. Similar entities are then figured out by entity diffusion. The dotted lines show the inheritance of previous facts.

2.3 Context RNN

The Context RNN records the utterance-level dialogue state. It takes in the utterance representation and the knowledge representations. The hidden state of the context RNN is updated as

h^T_t = RNN(h^C_t, [C^f, C^s], h^T_{t-1}).

h^T_t is then conveyed to the decoder to guide the response generation.

2.4 Decoder

The decoder generates the response sequentially through a word generator conditioned on h^T_t, C^f and C^s. Let C denote the concatenation of h^T_t, C^f and C^s, and let the knowledge item coefficient R be the concatenation of the relevance coefficient r^f and the similarity coefficient r^e. We introduce two variants of the word generator.

Vanilla decoder simply generates the response Y = (y_1, y_2, ..., y_{N_y}) according to C and R.

Figure 3: The decoder generates words from both the vocabulary and the knowledge base. A score updater keeps track of the knowledge item coefficients to ensure coverage during response generation.

The probability of Y is defined as

p(y_1, \ldots, y_{N_y} \mid C, R; \theta) = p(y_1 \mid C, R; \theta) \prod_{t=2}^{N_y} p(y_t \mid y_1, \ldots, y_{t-1}, C, R; \theta),
The conditional probability of yt is specified by p(yt|y1, ..., yt−1, C, R; θ) = p(yt|yt−1, st, C, R; θ), where yt is the embedding of the vocabulary or object entities of retrieved knowledge items, st is the decoder RNN hidden state . Probabilistic gated decoder utilizes a gating variable zt (Yin et al., 2016) to indicate whether the tth word is generated from common vocabulary or knowledge entities. The probability of generating the tth word is given by: p(yt|yt−1, st, C, R; θ) =p(zt = 0|st; θ)p(yt|yt−1, st, C, R, zt = 0; θ) +p(zt = 1|st; θ)p(yt|R, zt = 1; θ), where p(zt|st; θ) is computed by a logistic regression, p(yt|R, zt = 1; θ) is approximated with the knowledge items coefficient R, and θ is the model parameter. During response generation, if an entity is overused, the response diversity will be reduced. Therefore, once a knowledge item occurred in the response, the corresponding coefficient should be reduced in case that an item occurs multiple times. To keep tracking the coverage of knowledge items, we update the knowledge items coefficient R at each time step. We also explore two coverage tracking mechanisms: 1) Mask coefficient tracker directly reduces the coefficient of the chosen knowledge item to 0 to ensure it can never be selected as the response word again. 2) Coefficient attenuation tracker calculates an attenuation score it based on st, R0, Rt−1 and yt−1: it = DNN(st, yt−1, R0, Rt−1), and then update the coefficient as: Rt = it · Rt−1, where it ranges from 0 to 1 to gradually decrease the coefficient. 2.5 Training The model parameters include the embedding of vocabulary, entities, relations, and all the model components. The model is differential and can be optimized in an end-to-end manner using backpropagation. Given the training data D = {(XNd 1 , Y Nd 1 , F Nd 1 , ENd 1 )} where Nd is the max turns of a dialogue, F denotes the set of relevant knowledge and E denotes the set of similar knowledge in response, the objective function is to minimize the negative loglikelihood: ℓ(D, θ) = − X ND X i=1 log p(Yi|Xi, Fi, Ei) 3 Experiment 3.1 Dataset Most existing knowledge related datasets are mainly focused on single-turn factoid question answering (Yin et al., 2016; He et al., 2017b). We here collect a multi-turn conversation corpus grounded on the knowledge base, which includes not only facts related inquiries but also knowledge-based chit-chats. The data is publicly available online1. We first obtain the element information of each movie, including the movie’s title, publication time, directors, actors and other attributes from https://movie.douban.com/, a popular Chinese social network for movies. Then, entities and relations are extracted as triplets to build the knowledge base K. To collect the question-answering dialogues, we crawled the corpus from a question-answering forum https://zhidao.baidu.com/. To gather the knowledge related chit-chat corpus, we mined the dataset from the social forum https://www.douban.com/group/. Users post their comments, feedbacks, and impressions of films and televisions on it. The conversations are grounded on the knowledge using NER, string match, and artificial scoring and filtering rules. The statistical information of the dataset is shown in Table 2. We observed that the conversations follow the long tail distribution, where famous films and televisions are discussed repeatedly and the low rating ones are rarely mentioned. 
3.2 Experiment Details
In total, 32,977 conversations consisting of 104,567 utterances are divided into a training set (32,177 conversations) and a testing set (800 conversations). A bi-directional LSTM (Schuster and Paliwal, 1997) is used for the encoder, and the dimension of its LSTM hidden layer is set to 512. For the context RNN, the dimension of the LSTM unit is set to 1024. The dimension of the word embeddings, shared by the vocabulary, entities and relations, is also set to 512 empirically. We use Adam (Kingma and Ba, 2014) to update the parameters and clip gradients at 5.0. It takes 140 to 150 epochs to train the model with a batch size of 80.

1 https://github.com/liushuman/neural-knowledge-diffusion

Table 2: Statistics of knowledge base and conversations.
Knowledge base: #entities 152,568; #relations 4; #triplets 766,854
Community QA: #QA pairs 8,121
Multi-round dialogue: #dialogues 24,856; #sentences 88,325

3.3 Baselines
We compare our neural knowledge diffusion (NKD) model with three state-of-the-art baselines:
• Seq2Seq: a sequence-to-sequence model with a vanilla RNN encoder-decoder (Shang et al., 2015; Vinyals and Le, 2015).
• HRED: a hierarchical recurrent encoder-decoder model.
• GenDS: a neural generative dialogue system that is capable of generating responses based on the input message and a related knowledge base (KB) (Zhu et al., 2017).
Three variants of the neural knowledge diffusion model are implemented to verify different configurations of decoders.
• NKD-ori is the original model with a vanilla decoder and a mask coefficient tracker.
• NKD-gated is augmented with a probabilistic gated decoder and a mask coefficient tracker.
• NKD-atte utilizes a vanilla decoder and the coefficient attenuation tracker.

3.4 Evaluation Metrics
Both automatic and human evaluation metrics are used to analyze the models' performance. To validate the effectiveness of facts matching and entity diffusion, we calculate entity accuracy and recall on the factoid QA subset as well as on the whole dataset. Human evaluation rates the models in three aspects: fluency, knowledge relevance and correctness of the response. All these metrics range from 0 to 3, where 0 represents completely wrong, 1 partially correct, 2 almost correct, and 3 absolutely correct.

Table 3: Evaluation results on factoid question answering dialogues.
model | accuracy(%) | recall(%)
LSTM | 7.8 | 7.5
HRED | 3.7 | 3.9
GenDS | 70.3 | 63.1
NKD-ori | 67.0 | 56.2
NKD-gated | 77.6 | 77.3
NKD-atte | 55.1 | 46.6

Table 4: Evaluation results on the entire dataset.
model | accuracy(%) | recall(%) | entity number
LSTM | 2.6 | 2.5 | 1.65
HRED | 1.4 | 1.5 | 1.79
GenDS | 20.9 | 17.4 | 1.34
NKD-ori | 22.9 | 19.7 | 2.55
NKD-gated | 24.8 | 25.6 | 1.59
NKD-atte | 18.4 | 16.0 | 3.41

3.5 Experiment Results
Table 3 displays the accuracy and recall of entities on factoid question answering dialogues. The performance of NKD is slightly better than that of the QA-specific solution GenDS, while LSTM and HRED, which are designed for chit-chat, almost fail on this task. All the NKD variants are capable of generating entities with an accuracy of 60% to 70%, and NKD-gated achieves the best performance with an accuracy of 77.6% and a recall of 77.3%. Table 4 lists the accuracy and recall of entities on the entire dataset, including both the factoid QA and the knowledge-grounded chit-chats. Not surprisingly, both NKD-ori and NKD-gated outperform GenDS on the entire dataset, and the relative improvement over GenDS is even higher than the improvement on the QA dialogues.
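The entity accuracy and recall used in Tables 3 and 4 are not given an explicit formula in the text. One plausible reading, sketched below, compares the set of knowledge-base entities appearing in a generated response against the entities of the reference response, micro-averaged over the test set; treat this as our assumption about the metric rather than the authors' exact evaluation script.

```python
def entity_precision_recall(generated, references, kb_entities):
    """Micro-averaged entity accuracy (precision) and recall.

    `generated` / `references` are lists of token lists; `kb_entities` is the
    set of entity surface forms in the knowledge base. Matching generated
    entities against the gold entities of the reference is our assumption
    about how the reported accuracy/recall are computed.
    """
    tp = n_gen = n_gold = 0
    for gen, ref in zip(generated, references):
        gen_ents = {tok for tok in gen if tok in kb_entities}
        gold_ents = {tok for tok in ref if tok in kb_entities}
        tp += len(gen_ents & gold_ents)
        n_gen += len(gen_ents)
        n_gold += len(gold_ents)
    accuracy = tp / n_gen if n_gen else 0.0
    recall = tp / n_gold if n_gold else 0.0
    return accuracy, recall
```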
The comparison on the entire dataset confirms that although NKD and GenDS are comparable in answering factoid questions, NKD is better at introducing knowledge entities in knowledge-grounded chit-chats. All the NKD variants in Table 4 generate more entities than GenDS. LSTM and HRED also produce a certain number of entities, but with low accuracies and recalls. We also notice that NKD-gated achieves the highest accuracy and recall but generates fewer entities than NKD-ori and NKD-atte, whereas NKD-atte generates more entities but with relatively low accuracy and recall. This demonstrates that NKD-gated not only learns to generate more entities but also maintains their quality (with relatively high accuracy and recall).

Table 5: Human evaluation results.
model | Fluency | Appropriateness of knowledge | Entire Correctness
LSTM | 2.52 | 0.88 | 0.8
HRED | 2.48 | 0.36 | 0.32
GenDS | 2.76 | 1.36 | 1.34
NKD-ori | 2.42 | 1.92 | 1.58
NKD-gated | 2.08 | 1.72 | 1.44
NKD-atte | 2.7 | 1.54 | 1.38

The results of the human evaluation in Table 5 also validate the superiority of the proposed model, especially on appropriateness. Responses generated by LSTM and HRED are highly fluent, but are often simple repetitions or even dull responses such as "I don't know." and "Good.". NKD-gated is more adept at incorporating the knowledge base with respect to appropriateness and correctness, while NKD-atte generates more fluent responses. NKD-ori is a compromise, and obtains the best correctness in completing an entire dialogue. Four evaluators rated the scores independently. The pairwise Cohen's Kappa agreement scores are 0.67 on fluency, 0.54 on appropriateness, and 0.60 on entire correctness, which indicates a strong annotator agreement.

To our surprise, a variant of NKD that utilizes both the probabilistic gated decoder and the coefficient attenuation tracker does not perform well on the entire dataset. Its accuracy is quite high, but its recall is very low compared to the others. We speculate that this is due to minimizing the negative log-likelihood during training, which makes the model tend to generate only completely correct answers and therefore reduces the number of generated entities.

3.6 Case Study
Table 6 shows typical examples of the generated responses. Items 1 and 2 are based on fact-relevant utterances; NKD handles these questions by facts matching. Item 3 asks for a recommendation; NKD obtains similar entities by diffusing the entities. For items 4, 5 and 6, no explicit entity appears in the utterances; NKD is able to output appropriate recommendations through entity diffusion. The entities are recorded during the whole dialogue session, so NKD keeps recommending over several turns. Item 7 fails to generate an appropriate response because the entity in the golden response does not appear in the training set, which suggests future work on out-of-vocabulary cases.

4 Related Work
The successes of the sequence-to-sequence architecture (Cho et al., 2014; Sutskever et al., 2014) motivated investigations into dialogue systems that can effectively learn to generate a response sequence given the previous utterance sequence (Shang et al., 2015; Sordoni et al., 2015b; Vinyals and Le, 2015). The model is trained to minimize the negative log-likelihood of the training data. Despite the current progress, the lack of response diversity is a notorious problem: the model inherently tends to generate short, general responses regardless of the input. Li et al. (2016a); Serban et al.
(2017); Cao and Clark (2017) suggested that theses boring responses are common in training data and shorter responses are more likely to be given a higher likelihood. To tackle the problem, Li et al. (2016a) introduced a maximum mutual information training objective. Serban et al. (2017), Cao and Clark (2017) and Chen et al. (2018) used latent variables to introduce stochasticity to enhance the response diversity. Vijayakumar et al. (2016),Shao et al. (2017) and Li et al. (2016b) recognized that the greedy search decoding process, especially beam-search with a wide beam size, leads the short responses possess higher likelihoods. They reserved more diverse candidates during beam-search decoding. In this paper, we present that the absence of background knowledge and common sense is another source of lacking diversity. We augment the knowledge base to endto-end dialogue generation. Another research line comes from the utilizing of knowledge bases. A typical application is question-answering (QA) systems. The end-toend QA also resort to the encoder-decoder framework (Yin et al., 2016; He et al., 2017a). Yin et al. (2016) enquired the knowledge-base to achieve one fact and answer the simple factoid questions 1496 ID Utterance Response 1 X1:Duckweed, casts Y1:Hongda Guan Jiufeng Shi Qi Wang Zhi Cheng 关宏达史久峰汪漪程之 乘风破浪,演员表 NKD:Hongda Guan Jiufeng Shi Qi Wang Zhi Cheng 关宏达史久峰汪漪程之 2 X1: Just spend the whole night watching the movie Y1:I saw it last night and had a hard time Silenced, a little depressed now 昨晚看了,心里好难过 刚看完熔炉... 连夜看完的,略低落 NKD:The film has an atmosphere make us rethink sth 气氛有,会反思一些事 3 X1:Which movie most representative of youth? I’d like Y1:Kids Return In the Heat of the Sun, Flipped, First Love(Sing Street) 坏孩子的天空 你觉得最能代表青春的电影有哪些呢我觉得有 NKD:《Love Letter》《Once》 阳光灿烂的日子,怦然心动,初恋这件小事(唱街) 《情书》《曾经》 4 X1:Can anyone recommend me some touching movie, Y1:Gift From Room 7 Hearty Paws Memories of Matsuko tears streaming after watching 7号房的礼物人狗奇缘被嫌弃的松子的一生 谁能给我推荐两部感人的电影,直接点的,看了 NKD:Wedding Dress, these ones,,, 可以流两缸眼泪那种 婚纱,这些,,, 5 X1:burn brain or suspense movies please, recommend me Y1:《The Skeleton Key》《Identity》 some burn brain or suspense or horror films please! suspense films which are less popular are ok!!! 《万能钥匙》《致命id》 求推荐烧脑的推理悬疑电影求各位大神推荐烧脑 NKD:Identity 的悬疑恐怖电影冷门的推理剧也爱!!! 致命id Y1:《Orphan》? X2:I’ve seen it, it’s nice 《孤儿怨》? NKD: Also Inception Whisper of the Heart 看过哦不错呢 Whisper of the Heart 还有盗梦空间侧耳倾听侧耳倾听 6 X1:It’s a bit boring recently. Any good horror movies, guys? I’ve watched a lot. Y1:Rec, A Wicked Ghost, Dead Silence, 1408 phontom horror, 死亡录像,山村老尸,死寂,1408, 最近有点无聊,各位豆友有什么好看的恐怖片推 荐啊?看了好多了。 NKD:Recommend a few I’ve seen, 《A Wicked Ghost》, 《hide and seek》 推荐几部我看过《山村老尸》、《捉迷藏》 X2:I’ve seen it all, anything else? Y1:The Others, hardly call to mind 小岛惊魂,冷不丁还真想不起来 卤煮都看过了。还有推荐? NKD: Identity, Fight Club, Interstellar, Snowpiercer 致命id、搏击俱乐部、星际穿越、雪国列车 X3:Have seen it before, really great Y3:《The Revenant》, nice movie 《亡灵》,非常好看 看过,真的很不错的说 NKD:So what type do you like? 那你喜欢什么类型的? 7 X1:any awesome animation like Song of the Sea, please Y1::The Girl Without Hands 无手的少女 求一些画风超赞的动画类似于海洋之歌这种的, NKD:strongly recommended 谢谢 强烈推荐 Table 6: Examples of the generated response. Entities are underlined and Yi denotes the gold response. by referring to the fact. He et al. (2017a) extended this approach by augmenting the copying mechanism and enabled the output words to copy from the original input sequence. Eric et al. 
(2017) noticed that neural task-oriented dialogue systems often struggle to smoothly interface with a knowledge base and they addressed the problem by augmenting the end-to-end structure with a key-value retrieval mechanism where a separate attention is performed over the key of each entry in the KB. Ghazvininejad et al. (2017) represented the unstructured text as bag of words representation and also performed soft attention over the facts to retrieve a facts vector. Zhu et al. (2017) generated responses with any number of answer entities in the structured KB, even when these entities never appear in the training set. Dhingra et al. (2017) proposed a multi-turn dialogue agent which helps users search knowledge base by soft KB lookup. In our model, we perform not only facts matching to answer factoid inquiries, but also entity diffusion to infer similar entities. Given previous utterances, we retrieve the relevant facts, diffuse them, and generate responses based on diversified rele1497 vant knowledge items. 5 Conclusion In this paper, we identify the knowledge diffusion in conversations and propose an end-to-end neural knowledge diffusion model to deal with the problem. The model integrates the dialogue system with the knowledge base through both facts matching and entity diffusion, which enable the convergent and divergent thinking over the knowledge base. Under such mechanism, the factoid question answering and knowledge grounded chitchats can be tackled together. Empirical results show the proposed model is able to generate more meaningful and diverse responses, compared with the state-of-the-art baselines. In future work, we plan to introduce reinforcement learning and knowledge base reasoning mechanisms to improve the performance. Acknowledgements This work is supported by the National Natural Science Foundation of China (No.61662077, No.61472428). We also would like to thank all the reviewers for their insightful and valuable comments and suggestions. References Kris Cao and Stephen Clark. 2017. Latent variable dialogue models and their diversity. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 182–187, Valencia, Spain. Association for Computational Linguistics. Hongshen Chen, Zhaochun Ren, Jiliang Tang, Yihong Eric Zhao, and Dawei Yin. 2018. Hierarchical variational memory network for dialogue generation. In Proceedings of the 2018 World Wide Web Conference on World Wide Web, pages 1653–1662. International World Wide Web Conferences Steering Committee. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734, Doha, Qatar. Association for Computational Linguistics. Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Yun-Nung Chen, Faisal Ahmed, and Li Deng. 2017. Towards end-to-end reinforcement learning of dialogue agents for information access. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Mihail Eric, Lakshmi Krishnan, Francois Charette, and Christopher D. Manning. 2017. Key-value retrieval networks for task-oriented dialogue. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 37–49. Association for Computational Linguistics. 
Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2017. A knowledge-grounded neural conversation model. arXiv preprint arXiv:1702.01932. Google. 2013. Freebase data dumps. Shizhu He, Cao Liu, Kang Liu, Jun Zhao, Shizhu He, Cao Liu, Kang Liu, and Jun Zhao. 2017a. Generating natural answers by incorporating copying and retrieving mechanisms in sequence-to-sequence learning. In Meeting of the Association for Computational Linguistics, pages 199–208. Wei He, Kai Liu, Yajuan Lyu, Shiqi Zhao, Xinyan Xiao, Yuan Liu, Yizhong Wang, Hua Wu, Qiaoqiao She, Xuan Liu, Tian Wu, and Haifeng Wang. 2017b. Dureader: a chinese machine reading comprehension dataset from real-world applications. arXiv eprint arXiv:1711.05073. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, S¨oren Auer, and Christian Bizer. 2017. Dbpedia – a large-scale, multilingual knowledge base extracted from wikipedia. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119, San Diego, California. Association for Computational Linguistics. Jiwei Li, Will Monroe, and Jurafsky Dan. 2016b. A simple, fast diverse decoding algorithm for neural generation. arXiv preprint arXiv:1611.08562. M. Schuster and K. K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681. Iulian Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In AAAI Conference on Artificial Intelligence. 1498 Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1577–1586, Beijing, China. Association for Computational Linguistics. Louis Shao, Stephan Gouws, Denny Britz, Anna Goldie, and Brian Strope. 2017. Generating long and diverse responses with neural conversation models. arXiv preprint arXiv:1701.03185. Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and JianYun Nie. 2015a. A hierarchical recurrent encoderdecoder for generative context-aware query suggestion. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pages 553–562. ACM. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015b. A neural network approach to context-sensitive generation of conversational responses. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 196–205, Denver, Colorado. Association for Computational Linguistics. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. 
In Advances in neural information processing systems, pages 3104–3112. Ashwin K Vijayakumar, Michael Cogswell, Ramprasath R Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2016. Diverse beam search: Decoding diverse solutions from neural sequence models. arXiv preprint arXiv:1610.02424. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869. Jun Yin, Xin Jiang, Zhengdong Lu, Lifeng Shang, Hang Li, and Xiaoming Li. 2016. Neural generative question answering. In International Joint Conference on Artificial Intelligence, pages 2972–2978. Wenya Zhu, Kaixiang Mo, Yu Zhang, Zhangbin Zhu, Xuezheng Peng, and Qiang Yang. 2017. Flexible end-to-end dialogue system for knowledge grounded conversation. arXiv eprint arXiv:1709.04264.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1499–1508 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1499 Generating Informative Responses with Controlled Sentence Function Pei Ke1, Jian Guan2, Minlie Huang1∗, Xiaoyan Zhu1 1Conversational AI group, AI Lab., Dept. of Computer Science, Tsinghua University 1Beijing National Research Center for Information Science and Technology, China 2Dept. of Physics, Tsinghua University, Beijing 100084, China [email protected], [email protected] [email protected], [email protected] Abstract Sentence function is a significant factor to achieve the purpose of the speaker, which, however, has not been touched in largescale conversation generation so far. In this paper, we present a model to generate informative responses with controlled sentence function. Our model utilizes a continuous latent variable to capture various word patterns that realize the expected sentence function, and introduces a type controller to deal with the compatibility of controlling sentence function and generating informative content. Conditioned on the latent variable, the type controller determines the type (i.e., function-related, topic, and ordinary word) of a word to be generated at each decoding position. Experiments show that our model outperforms state-of-the-art baselines, and it has the ability to generate responses with both controlled sentence function and informative content. 1 Introduction Sentence function is an important linguistic feature and a typical taxonomy in terms of the purpose of the speaker (Rozakis, 2003). There are four major function types in the language including interrogative, declarative, imperative, and exclamatory, as described in (Rozakis, 2003). Each sentence function possesses its own structure, and transformation between sentence functions needs a series of changes in word order, syntactic patterns and other aspects (Akmajian, 1984; Yule, 2010). Since sentence function is regarding the purpose of the speaker, it can be a significant factor indicating the conversational purpose during interac∗*Corresponding author: Minlie Huang. Post I’m really hungry now. Interrogative What did you have at breakfast? Response Imperative Let’s have dinner together! Declarative Me, too. But you ate too much at lunch. Figure 1: Responses with three sentence functions. Function-related words are in red, topic words in blue, and others are ordinary words. tions, but surprisingly, this problem is rather untouched in dialogue systems. As shown in Figure 1, responses with different functions can be used to achieve different conversational purposes: Interrogative responses can be used to acquire further information from the user; imperative responses are used to make requests, directions, instructions or invitations to elicit further interactions; and declarative responses commonly make statements to state or explain something.1 Interrogative and imperative responses can be used to avoid stalemates (Li et al., 2016b), which can be viewed as important proactive behaviors in conversation (Yu et al., 2016). Thus, conversational systems equipped with the ability to control the sentence function can adjust its strategy for different purposes within different contexts, behave more proactively, and may lead the dialogue to go further. 
Generating responses with controlled sentence functions differs significantly from other tasks on controllable text generation (Hu et al., 2017; Ficler and Goldberg, 2017; Asghar et al., 2017; Ghosh et al., 2017; Zhou and Wang, 2017; Dong et al., 2017; Murakami et al., 2017). These studies, involving the control of sentiment polarity, emotion, or tense, fall into local control, more or less, because the controllable variable can be locally re1 Note that we did not include the exclamatory category in this paper because an exclamatory sentence in conversation is only a strong emotional expression of the original sentence with few changes. 1500 flected by decoding local variable-related words, e.g., terrible for negative sentiment (Hu et al., 2017; Ghosh et al., 2017), glad for happy emotion (Zhou et al., 2018; Zhou and Wang, 2017), and was for past tense (Hu et al., 2017). By contrast, sentence function is a global attribute of text, and controlling sentence function is more challenging in that it requires to adjust the global structure of the entire text, including changing word order and word patterns. Controlling sentence function in conversational systems faces another challenge: in order to generate informative and meaningful responses, it has to deal with the compatibility of the sentence function and the content. Similar to most existing neural conversation models (Li et al., 2016a; Mou et al., 2016; Xing et al., 2017), we are also struggling with universal and meaningless responses for different sentence functions, e.g., “Is that right?” for interrogative responses, “Please!” for imperative responses and “Me, too.” for declarative responses. The lack of meaningful topics in responses will definitely degrade the utility of the sentence function so that the desired conversational purpose can not be achieved. Thus, the task needs to generate responses with both informative content and controllable sentence functions. In this paper, we propose a conversation generation model to deal with the global control of sentence function and the compatibility of controlling sentence function and generating informative content. We devise an encoder-decoder structure equipped with a latent variable in conditional variational autoencoder (CVAE) (Sohn et al., 2015), which can not only project different sentence functions into different regions in a latent space, but also capture various word patterns within each sentence function. The latent variable, supervised by a discriminator with the expected function label, is also used to realize the global control of sentence function. To address the compatibility issue, we use a type controller which lexicalizes the sentence function and the content explicitly. The type controller estimates a distribution over three word types, i.e., function-related, topic, and ordinary words. During decoding, the word type distribution will be used to modulate the generation distribution in the decoder. The type sequence of a response can be viewed as an abstract representation of sentence function. By this means, the model has an explicit and strong control on the function and the content. Our contributions are as follows: • We investigate how to control sentence functions to achieve different conversational purposes in open-domain dialogue systems. We analyze the difference between this task and other controllable generation tasks. 
• We devise a structure equipped with a latent variable and a type controller to achieve the global control of sentence function and deal with the compatibility of controllable sentence function and informative content in generation. Experiments show the effectiveness of the model. 2 Related Work Recently, language generation in conversational systems has been widely studied with sequenceto-sequence (seq2seq) learning (Sutskever et al., 2014; Bahdanau et al., 2015; Vinyals and Le, 2015; Shang et al., 2015; Serban et al., 2016, 2017). A variety of methods has been proposed to address the important issue of content quality, including enhancing diversity (Li et al., 2016a; Zhou et al., 2017) and informativeness (Mou et al., 2016; Xing et al., 2017) of the generated responses. In addition to the content quality, controllability is a critical problem in text generation. Various methods have been used to generate texts with controllable variables such as sentiment polarity, emotion, or tense (Hu et al., 2017; Ghosh et al., 2017; Zhou and Wang, 2017; Zhou et al., 2018) . There are mainly two solutions to deal with controllable text generation. First, the variables to be controlled are embedded into vectors which are then fed into the models to reflect the characteristics of the variables (Ghosh et al., 2017; Zhou et al., 2018). Second, latent variables are used to capture the information of controllable attributes as in the variational autoencoders (VAE) (Zhou and Wang, 2017). (Hu et al., 2017) combined the two techniques by disentangling a latent variable into a categorical code and a random part to better control the attributes of the generated text. The task in this paper differs from the above tasks in two aspects: (1) Unlike other tasks that realize controllable text generation by decoding attribute-related words locally, our task requires to not only decode function-related words, but also 1501 ܲ݋ݏݐ: I feel so great today Attention ܴ݁ݏ݌݋݊ݏ݁: What makes you happy? ݏଵ ݏଶ ݏହ ݏସ ݏଷ What makes you happy ? Recognition/Prior Network ࢠ~ࣨ(ߤ, ߪଶ۷) Discriminator ݈= Interrogative Mixture what please ⋯ happy feel ⋯ you very ⋯ Decoder Type Controller Encoder (Post) Encoder (Response) Function-related: 0.1 Topic: 0.7 Ordinary: 0.2 Concatenation ܺ ܻ [ܺ; ܻ] ࢠ ݏ௧ ࢠ ܲ(ݓݐ|ݏ௧, ࢠ) [ݏ௧; ࢠ] Concatenation ݓݐ ܲ Mergence 0.1 0.7 0.2 Figure 2: Model overview. During training, the latent variable z is sampled from the recognition network which is supervised by the function label in the discriminator. In the type controller, the latent variable and the decoder’s state are used to estimate a type distribution which modulates the final generation distribution. During test, z is sampled from the prior network whose input is only the post. The response encoder in the dotted box appears only in training. plan the words globally to realize the function type to be controlled. (2) The compatibility of controllable variables and content quality is less studied in the literature. The most similar work in (Zhao et al., 2017) proposed to control the dialogue act of a response, which is also a global attribute. However, the model controls dialog act by directly feeding a latent variable into the decoder, instead, our model has a stronger control on the generation process via a type controller in which words of different types are concretely modeled. 
3 Model 3.1 Task Definition and Model Overview Our problem is formulated as follows: given a post X = x1x2 · · · xn and a sentence function category l, our task is to generate a response Y = y1y2 · · · ym that is not only coherent with the specified function category l but also informative in content. We denote c as the concatenation of all the input information, i.e. c = [X; l]. Essentially, the goal is to estimate the conditional probability: P(Y, z|c) = P(z|c) · P(Y |z, c) (1) The latent variable z is used to capture the sentence function of a response. P(z|c), parameterized as the prior network in our model, indicates the sampling process of z, i.e., drawing z from P(z|c). And P(Y |z, c) = Qm t=1 P(yt|y<t, z, c) is applied to model the generation of the response Y conditioned on the latent variable z and the input c, which is implemented by a decoder in our model. Figure 2 shows the overview of our model. As aforementioned, the model is constructed in the encoder-decoder framework. The encoder takes a post and a response as input, and obtains the hidden representations of the input. The recognition network and the prior network, adopted from the CVAE framework (Sohn et al., 2015), sample a latent variable z from two normal distributions, respectively. Supervised by a discriminator with the function label, the latent variable encodes meaningful information to realize a sentence function. The latent variable, along with the decoder’s state, is also used to control the type of a word in generation via the type controller. In the decoder, the final generation distribution is mixed by the type distribution which is obtained from the type controller. By this means, the latent variable encodes information not only from sentence function but also from word types, and in return, the decoder and the type controller can deal with the compatibility of realizing sentence function and information content in generation. 1502 3.2 Encoder-Decoder Framework The encoder-decoder framework has been widely used in language generation (Sutskever et al., 2014; Vinyals and Le, 2015). The encoder transforms the post sequence X = x1x2 · · · xn into hidden representations H = h1h2 · · · hn, as follows: ht = GRU(e(xt), ht−1) (2) where GRU is gated recurrent unit (Cho et al., 2014), and e(xt) denotes the embedding of the word xt. The decoder first updates the hidden states S = s1s2 · · · sm, and then generates the target sequence Y = y1y2 · · · ym as follows: st = GRU(st−1, e(yt−1), cvt−1) (3) yt ∼P(yt|y<t, st) = softmax(W st) (4) where this GRU does not share parameters with the encoder’s network. The context vector cvt−1 is a dynamic weighted sum of the encoder’s hidden states, i.e., cvt−1 = Pn i=1 αt−1 i hi, and αt−1 i scores the relevance between the decoder’s state st−1 and the encoder’s state hi (Bahdanau et al., 2015). 3.3 Recognition/Prior Network On top of the encoder-decoder structure, our model introduces the recognition network and the prior network of CVAE framework, and utilizes the two networks to draw latent variable samples during training and test respectively. The latent variable can project different sentence functions into different regions in a latent space, and also capture various word patterns within a sentence function. In the training process, our model needs to sample the latent variable from the posterior distribution P(z|Y, c), which is intractable. 
Thus, the recognition network qφ(z|Y, c) is introduced to approximate the true posterior distribution so that we can sample z from this deterministic parameterized model. We assume that z follows a multivariate Gaussian distribution whose covariance matrix is diagonal, i.e., qφ(z|Y, c) ∼N(µ, σ2I). Under this assumption, the recognition network can be parameterized by a deep neural network such as a multi-layer perceptron (MLP): [µ, σ2] = MLPposterior(Y, c) (5) During test, we use the prior network pθ(z|c) ∼ N(µ ′, σ ′2I) instead to draw latent variable samples, which can be implemented in a similar way: [µ ′, σ ′2] = MLPprior(c) (6) To bridge the gap between the recognition and the prior networks, we add the KL divergence term that should be minimized to the loss function: L1 = KL(qφ(z|Y, c)||pθ(z|c)) (7) 3.4 Discriminator The discriminator supervises z to encode function-related information in a response with supervision signals. It takes z as input instead of the generated response Y to avoid the vanishing gradient of z, and predicts the function category conditioned on z: P(l|z) = softmax(WD · MLPdis(z)) (8) This formulation can enforce z to capture the features of sentence function and enhance the influence of z in word generation. The loss function of the discriminator is given by: L2 = −Eqφ(z|Y,c)[log P(l|z)] (9) 3.5 Type Controller The type controller is designed to deal with the compatibility issue of controlling sentence function and generating informative content. As aforementioned, we classify the words in a response into three types: function-related, topic, and ordinary words. The type controller estimates a distribution over the word types at each decoding position, and the type distribution will be used in the mixture model of the decoder for final word generation. During the decoding process, the decoder’s state st and the latent variable z are taken as input to estimate the type distribution as follows: P(wt|st, z) = softmax(W0 · MLPtype(st, z)) (10) Noticeably, the latent variable z introduced to the RNN encoder-decoder framework often fails to learn a meaningful representation and has little influence on language generation, because the RNN decoder may ignore z during generation, known as the issue of vanishing latent variable (Bowman et al., 2016). By contrast, our model allows z to directly control the word type at each decoding position, which has more influence on language generation. 1503 3.6 Decoder Compared with the traditional decoder described in Section 3.2, our decoder updates the hidden state st with both the input information c and the latent variable z, and generates the response in a mixture form which is combined with the type distribution obtained from the type controller: st = GRU(st−1, e(yt−1), cvt−1, c, z) (11) P(yt|y<t, c, z) = P(yt|yt−1, st, c, z) = 3 X i=1 P(wt = i|st, z)P(yt|yt−1, st, c, z, wt = i) (12) where wt = 1, 2, 3 stand for function-related words, topic words, and ordinary words, respectively. The probability for choosing different word types at time t, P(wt = i|st, z), is obtained from the type controller, as shown in Equation (10). The probabilities of choosing words in different types are introduced as follows: Function-related Word: Function-related words represent the typical words for each sentence function, e.g., what for interrogative responses, and please for imperative responses. 
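To make Sections 3.3 through 3.5 more concrete, the following is a minimal PyTorch sketch of the recognition network, the prior network, the KL term L1, and the function discriminator loss L2. The hidden sizes, the log-variance parameterization, and the reparameterization trick used to sample z are our assumptions; the paper only states that both networks are MLPs producing the parameters of diagonal Gaussians.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentFunctionModule(nn.Module):
    """Recognition/prior networks over z plus the sentence-function discriminator."""

    def __init__(self, cond_dim, resp_dim, latent_dim=128, num_functions=3):
        super().__init__()
        # Recognition network q_phi(z | Y, c): mean and log-variance.
        self.recog = nn.Linear(cond_dim + resp_dim, 2 * latent_dim)
        # Prior network p_theta(z | c).
        self.prior = nn.Linear(cond_dim, 2 * latent_dim)
        # Discriminator p(l | z) that supervises z with the function label.
        self.disc = nn.Sequential(nn.Linear(latent_dim, 128), nn.Tanh(),
                                  nn.Linear(128, num_functions))

    def forward(self, c, y_repr, label):
        # c: (B, cond_dim) encoded post + function, y_repr: (B, resp_dim), label: (B,)
        mu_q, logvar_q = self.recog(torch.cat([y_repr, c], dim=-1)).chunk(2, dim=-1)
        mu_p, logvar_p = self.prior(c).chunk(2, dim=-1)
        # Sample z ~ q during training via the reparameterization trick (assumed).
        z = mu_q + torch.randn_like(mu_q) * torch.exp(0.5 * logvar_q)
        # L1: closed-form KL between two diagonal Gaussians, KL(q || p).
        kl = 0.5 * (logvar_p - logvar_q
                    + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp() - 1).sum(-1)
        # L2: cross-entropy of the discriminator prediction against the function label.
        disc_loss = F.cross_entropy(self.disc(z), label)
        return z, kl.mean(), disc_loss

    def sample_prior(self, c):
        """At test time, draw z from the prior network only."""
        mu_p, logvar_p = self.prior(c).chunk(2, dim=-1)
        return mu_p + torch.randn_like(mu_p) * torch.exp(0.5 * logvar_p)
```

The type controller of Equation (10) would consume the same z together with the decoder state s_t; it is omitted here for brevity.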
To select the function-related words at each position, we simultaneously consider the decoder’s state st, the latent variable z and the function category l. P(yt|yt−1, st, c, z, wt = 1) = softmax(W1 · [st, z, e(l)]) (13) where e(l) is the embedding vector of the function label. Under the control of z, our model can learn to decode function-related words at proper positions automatically. Topic Word: Topic words are crucial for generating an informative response. The probability for selecting a topic word at each decoding position depends on the current hidden state st: P(yt|yt−1, st, c, z, wt = 2) = softmax(W2st) (14) This probability is over the topic words we predict conditioned on a post. Section 3.8 will describe the details. Ordinary Word: Ordinary words play a functional role in making a natural and grammatical sentence. The probability of generating ordinary words is estimated as below: P(yt|yt−1, st, c, z, wt = 3) = softmax(W3st) (15) The generation loss of the decoder is given as below: L3 = −Eqφ(z|Y,c)[log P(Y |z, c)] = −Eqφ(z|Y,c)[ X t log P(yt|y<t, z, c)] (16) 3.7 Loss Function The overall loss L is a linear combination of the KL term L1, the classification loss of the discriminator L2, and the generation loss of the decoder L3: L = αL1 + L2 + L3 (17) We let α gradually increase from 0 to 1. This technique of KL cost annealing can address the optimization challenges of vanishing latent variables in the RNN encoder-decoder (Bowman et al., 2016). 3.8 Topic Word Prediction Topic words play a key role in generating an informative response. We resort to pointwise mutual information (PMI) (Church and Hanks, 1990) for predicting a list of topic words that are relevant to a post. Let x and y indicate a word in a post X and its response Y respectively, and PMI is computed as follows: PMI(x, y) = log P(x, y) P(x)P(y) (18) Then, the relevance score of a topic word to a given post x1x2 · · · xn can be approximated as follows, similar to (Mou et al., 2016): REL(x1, ..., xn, y) ≈ n X i=1 PMI(xi, y) (19) During training, the words in a response with high REL scores to the post are treated as topic words. During test, we use REL to select the top ranked words as topic words for a post. 4 Experiment 4.1 Data Preparation We collected a Chinese dialogue dataset from Weibo 2. We crawled about 10 million postresponses pairs. Since our model needs the sentence function label for each pair, we built a classifier to predict the sentence function automatically to construct large-scale labeled data. Thus, 2http://www.weibo.com 1504 we sampled about 2,000 pairs from the original dataset and annotated the data manually with four categories, i.e., interrogative, imperative, declarative and other. This small dataset was partitioned into the training, validation, and test sets with the ratio of 6:1:1. Three classifiers, including LSTM (Hochreiter and Schmidhuber, 1997), Bi-LSTM (Graves et al., 2005) and a self-attentive model (Lin et al., 2017), were attempted on this dataset. The results in Table 1 show that the self-attentive classifier outperforms other models and achieves the best accuracy of 0.78 on the test set. Model Accuracy LSTM 0.60 Bi-LSTM 0.75 Self-Attentive 0.78 Table 1: Accuracy of sentence function classification on the 2,000 post-response pairs. We then applied the self-attentive classifier to annotate the large dataset and obtained a dialogue dataset with noisy sentence function labels3. 
To balance the distribution of sentence functions, we randomly sampled about 0.6 million pairs for each sentence function to construct the final dataset. The statistics of this dataset are shown in Table 2. The dataset4 is available at http://coai.cs. tsinghua.edu.cn/hml/dataset. Training #Post 1,963,382 #Response Interrogative 618,340 Declarative 672,346 Imperative 672,696 Validation #Post 24,034 #Response Interrogative 7,045 Declarative 9,685 Imperative 7,304 Test #Post 6,000 Table 2: Corpus statistics. 4.2 Experiment Settings Our model was implemented with TensorFlow5. We applied bidirectional GRU with 256 cells to the encoder and GRU with 512 cells to the decoder. The dimensions of word embedding and function category embedding were both set to 100. We also set the dimension of latent variables to 128. The vocabulary size was set to 3Though the labels are noisy, the data are sufficient to train a generation model in practice. 4Note that we strictly obeyed the policies of Weibo and anonymized potential private information in dialogues. This dataset is strictly limited for academic use. 5https://github.com/tensorflow/tensorflow 40,000. Stochastic gradient descent (Qian, 1999) was used to optimize our model, with a learning rate of 0.1, a decay rate of 0.9995, and a momentum of 0.9. The batch size was set to 128. Our codes are available at https://github.com/ kepei1106/SentenceFunction. We chose several state-of-the-art baselines, which were implemented with the settings provided in the original papers: Conditional Seq2Seq (c-seq2seq): A Seq2Seq variant which takes the category (i.e., function type) embedding as additional input at each decoding position (Ficler and Goldberg, 2017). Mechanism-aware (MA): This model assumes that there are multiple latent responding mechanisms (Zhou et al., 2017). The number of responding mechanisms is set to 3, equal to the number of function types. Knowledge-guided CVAE (KgCVAE): A modified CVAE which aims to control the dialog act of a generated response (Zhao et al., 2017). 4.3 Automatic Evaluation Metrics: We adopted Perplexity (PPL) (Vinyals and Le, 2015), Distinct-1 (Dist-1), Distinct-2 (Dist-2) (Li et al., 2016a), and Accuracy (ACC) to evaluate the models at the content and function level. Perplexity can measure the grammaticality of generated responses. Distinct-1/distinct-2 is the proportion of distinct unigrams/bigrams in all the generated tokens, respectively. Accuracy measures how accurately the sentence function can be controlled. Specifically, we compared the prespecified function (as input to the model) with the function of a generated response, which is predicted by the self-attentive classifier (see Section 4.1). Model PPL Dist-1 Dist-2 ACC c-seq2seq 57.14 949/.007 5177/.041 0.973 MA 46.08 745/.005 2952/.027 0.481 KgCVAE 56.81 1531/.009 10683/.070 0.985 Our Model 55.85 1833/.008 15586/.075 0.992 Table 3: Automatic evaluation with perplexity (PPL), distinct-1 (Dist-1), distinct-2 (Dist-2), and accuracy (ACC). The integers in the Dist-* cells denote the total number of distinct n-grams. Results: Our model has lower perplexity than cseq2seq and KgCVAE, indicating that the model is comparable with other models in generating grammatical responses. Note that MA has the lowest perplexity because it tends to generate generic responses. 1505 Model Interrogative Declarative Imperative Gram. Appr. Info. Gram. Appr. Info. Gram. Appr. Info. Ours vs. c-seq2seq 0.534 0.536 0.896* 0.630* 0.573* 0.764* 0.685* 0.504 0.893* Ours vs. 
MA 0.802* 0.602* 0.675* 0.751* 0.592* 0.617* 0.929* 0.568* 0.577* Ours vs. KgCVAE 0.510 0.626* 0.770* 0.546* 0.515* 0.744* 0.780* 0.521* 0.837* Table 4: Manual evaluation results for different functions. The scores indicate the percentages that our model wins the baselines after removing tie pairs. The scores of our model marked with * are significantly better than the competitors (Sign Test, p-value < 0.05). As for distinct-1 and distinct-2, our model generates remarkably more distinct unigrams and bigrams than the baselines, indicating that our model can generate more diverse and informative responses compared to the baselines. In terms of sentence function accuracy, our model outperforms all the baselines and achieves the best accuracy of 0.992, which indicates that our model can control the sentence function more precisely. MA has a very low score because there is no direct way to control sentence function, instead, it learns automatically from the data. 4.4 Manual Evaluation To evaluate the generation quality and how well the models can control sentence function, we conducted pair-wise comparison. 200 posts were randomly sampled from the test set and each model was required to generate responses with three function types to each post. For each pair of responses (one by our model and the other by a baseline, along with the post), annotators were hired to give a preference (win, lose, or tie). The total annotation amounts to 200×3×3×3=5,400 since we have three baselines, three function types, and three metrics. We resorted to a crowdsourcing service for annotation, and each pair-wise comparison was judged by 5 curators. Metrics: We designed three metrics to evaluate the models from the perspectives of sentence function and content: grammaticality (whether a response is grammatical and coherent with the sentence function we prespecified), appropriateness (whether a response is a logical and appropriate reply to its post), and informativeness (whether a response provides meaningful information via the topic words relevant to the post). Note that the three metrics were separately evaluated. Results: The scores in Table 4 represent the percentages that our model wins a baseline after removing tie pairs. A value larger than 0.5 indicates that our model outperforms its competitor. Our model outperforms the baselines significantly in most cases (Sign Test, with p-value < 0.05). Among the three function types, our model performs significantly better than the baselines when generating declarative and imperative responses. As for interrogative responses, our model is better but the difference is not significant in some settings. This is because interrogative patterns are more apparent and easier to learn, thereby all the models can capture some of the patterns to generate grammatical and appropriate responses, resulting in more ties. By contrast, declarative and imperative responses have less apparent patterns whereas our model is better at capturing the global patterns through modeling the word types explicitly. We can also see that our model obtains particularly high scores in informativeness. This demonstrates that our model is better to generate more informative responses, and is able to control sentence functions at the same time. The annotation statistics are shown in Table 5. 
The percentage of annotations that at least 4 judges assign the same label (at least 4/5 agreement) is larger than 50%, and the percentage for at least 3/5 agreement is about 90%, indicating that annotators reached a moderate agreement. At least 3/5 At least 4/5 Grammaticality 91.7% 60.1% Appropriateness 88.6% 52.5% Informativeness 95.9% 71.2% Table 5: Annotation statistics. At least n/5 means there are no less than n judges assigning the same label to a record during annotation. 4.5 Words and Patterns in Function Control To further analyze how our model realizes the global control of sentence function, we presented frequent words and frequent word patterns within each function. Specifically, we counted the frequency of a function-related word in the generated responses. The type of a word is predicted by the type controller. Further, we replaced the 1506 Function Frequent Words Frequent Patterns Response Examples Chinese English Chinese English Chinese English Interrogative ? ᱟ ੇ 䈤 ӰѸ ? be particle mean what ݔᱟ䈤ݕੇ˛ Does ݔmean ݕ? ֐ᱟ䈤ᡁᐵੇ˛ Do you mean I’m handsome? ݔᱟ൘ݕੇ˛ Is ݔݕ? ֐ᱟ൘ཨᡁੇ˛ Are you praising me? ݔ൘ଚݕ୺˛ Where does ݔݕ? ֐൘ଚк⨝୺˛ Where do you work? ݔᜣݕӰѸݖ˛ What ݖdoes ݔwant to ݕ? ֐ᜣ㾱ӰѸ㊫රⲴ˛ What type do you want to choose? Imperative ! 㾱 ਟԕ ᶕ 䈧 ! will can come please 䛓ቡݕ੗ Do ݕ, then. 䛓ቡྭྭޫ⵰੗ Take care of yourself, then. ݔ㾱ᢺݕ㔉ݖ Let ݔgive ݕto ݖ. ᡁ㾱ᢺ֐Ⲵᡯᆀ㔉֐ Let me give your house to you. Declarative ᱟ ҏ 㿹ᗇ ਟᱟ ⋑ be also/too think but no ݔҏᱟݕˈਟᱟݖ ݔalso ݕ, but ݖ ᡁҏᱟ䘉ѸᜣⲴˈਟᱟᡁ㾱᢮ ањӪˈ૸૸ I also think so, but I will find a person. Ha-ha. ݔҏᱟˈܽ䜭ܾ ݔ, too, and ܽhas ܾ. ᡁҏᱟˈᡁⲴ㊹э䜭㻛ᡁ䴷㋮ Ҷ Me, too, and my fans have been shocked by me. Figure 3: Frequent function-related words and frequent patterns containing at least 3 function-related words. The letters denote the variables which replace ordinary and topic words in the generated responses. The underlined words in responses are those occurring in patterns. ordinary and topic words of a generated response with variables and treated each response as a sequence of function-related words and variables. We then used the Apriori algorithm (Agrawal and Srikant, 1994) to mine frequent patterns in these sequences. We retained frequent patterns that consist of at most 5 words and appear in at least 2% of the generated responses. Figure 3 presents the most frequent words (the second and third columns) and patterns (the fourth and fifth columns) for each function type. Note that the word patterns can be viewed as an abstract representation of sentence function. We observed that: First, function-related words are distributed at multiple positions of a sentence, indicating that realizing a sentence function needs a global control by not only predicting the word types but also planning the words of different types properly. Second, the frequent words clearly reveal the difference between function types. For instance, interrogatives like 什么(what), ?(?) and 吗(particle) are commonly seen in interrogative responses, words like 请(please), 来(come), and 要(will) occur frequently in imperative responses. Further, word patterns in different function types differ significantly (see the fourth/fifth columns), indicating that the model is able to learn functionspecific word patterns. Third, interrogative and imperative responses have explicit patterns, while declarative responses are more implicit and divergent in pattern. Interrogative responses fall into Yes-No, Wh-, How-, and other questions. 
Imperative responses generally start with the base form of verbs or imperative words including 请(please). Our model succeeds in capturing two typical forms in declarative responses: adversative and progressive structures. 4.6 Case Study Post: ྲ᷌ᴹаཙᡁケ❦䐏֐㔍Ӕˈ֐ՊᘾѸṧ˛ What would you do if I suddenly broke up with you someday? Target Function: Interrogative c-seq2seq: ֐ᱟ䈱˛ Who are you? MA: ᡁՊᖫḕޣҾ⌅ᖻ⴨ޣਨ⌅᧚ᯭⲴ㔃䇪ੇ˛ Should I investigate the conclusion of judicial measure related to the law? KgCVAE: ֐ᱟнᱟᜣᡁҶ Did you miss me? Our Model: ֐䈤ⲴᱟᡁⲴ䭉ੇ˛ Do you mean that it’s my fault? Target Function: Imperative c-seq2seq: ᡁՊྭྭⲴʽ I’ll be fine! MA: 㔍Ӕส֜ˈ㔍Ӕส֜ Break up with guys. Break up with guys. KgCVAE: ᡁՊ੺䇹֐Ⲵʽ I will tell you! Our Model: ᡁՊᢺ֐Ⲵ⽬⢙㔉֐Ⲵˈ᭮ᗳ Rest assured that I would give your gift to you. Target Function: Declarative c-seq2seq: ᡁҏᱟ䘉Ѹ䇔ѪⲴˈᡁ㿹ᗇᡁᱟњྭᆙᆀ I think so, too. I feel that I’m a good man. MA: ᡁҏн⸕䚃ᘾѸ࣎ˈնᱟᡁᙅᙅ I don’t know what to do, but I’m in fear. KgCVAE: ᡁҏ㿹ᗇᡁᱟњྭ⭧ӪDŽ I think I’m a good man, too. Our Model: ᡁ㿹ᗇᡁՊᖸ۫ˈᡁՊ㻛ᡁྸ傲Ⲵ I would think that I was stupid and I would be blamed by my mother. Figure 4: Generated responses of all the models for different sentence functions. In the responses of our model, function-related words are in red and topic words in blue. The word type is predicted by the type controller. 1507 We presented an example in Figure 4 to show that our model can generate responses of different function types better compared to baselines. We can see that each function type can be realized by a natural composition of function-related words (in red) and topic words (in blue). Moreover, function-related words are different and are placed at different positions across function types, indicating that the model learns function-specific word patterns. These examples also show that the compatibility issue of controlling sentence function and generating informative content is well addressed by planning function-related and topic words properly. Post ྲ᷌ᴹаཙᡁケ❦䐏֐㔍Ӕˈ֐ՊᘾѸṧ˛ What would you do if I suddenly broke up with you someday? Interrogative Response #1 ֐䈤ⲴᱟᡁⲴ䭉ੇ˛ Do you mean that it’s my fault? Interrogative Response #2 ֐ՊнՊ䈤䈍˛ Can you speak normally? Interrogative Response #3 ֐ᜣᡁᘾṧ˛ᡁ㾱н㾱㔍Ӕ˛ What do you think I should do? Shall I break up with you? Figure 5: Different patterns of interrogative responses generated by our model. Furthermore, we verified the ability of our model to capture fine-grained patterns within a sentence function. We took interrogative responses as example and obtained responses by drawing latent variable samples repeatedly. Figure 5 shows interrogative responses with different patterns generated by our model given the same post. The model generates several Yes-No questions led by words such as 吗(do), 会(can) and 要(shall), and a Wh-question led by 怎样(what). This example shows that the latent variable can capture the fine-grained patterns and improve the diversity of responses within a function. 5 Conclusion We present a model to generate responses with both controllable sentence function and informative content. To deal with the global control of sentence function, we utilize a latent variable to capture the various patterns for different sentence functions. To address the compatibility issue, we devise a type controller to handle function-related and topic words explicitly. The model is thus able to control sentence function and generate informative content simultaneously. Extensive experiments show that our model performs better than several state-of-the-art baselines. 
As for future work, we will investigate how to apply the technique to multi-turn conversational systems, provided that the most proper sentence function can be predicted under a given conversation context. Acknowledgments This work was partly supported by the National Science Foundation of China under grant No.61272227/61332007 and the National Basic Research Program (973 Program) under grant No. 2013CB329403. References Rakesh Agrawal and Ramakrishnan Srikant. 1994. Fast algorithms for mining association rules. In Proceedings of the 20th VLDB Conference, pages 487– 499. Adrian Akmajian. 1984. Sentence types and the formfunction fit. Natural Language Linguistic Theory, 2(1):1–23. Nabiha Asghar, Pascal Poupart, Jesse Hoey, Xin Jiang, and Lili Mou. 2017. Affective neural response generation. arXiv preprint arXiv:1709.03968. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of International Conference on Learning Representations. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pages 10–21. Kyunghyun Cho, Bart Van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103–111. Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational linguistics, pages 22–29. Li Dong, Shaohan Huang, Furu Wei, Mirella Lapata, Ming Zhou, and Ke Xu. 2017. Learning to generate product reviews from attributes. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 623–632. Jessica Ficler and Yoav Goldberg. 2017. Controlling linguistic style aspects in neural language generation. In Proceedings of the Workshop on Stylistic Variation, pages 94–104. 1508 Sayan Ghosh, Mathieu Chollet, Eugene Laksana, Louis-Philippe Morency, and Stefan Scherer. 2017. Affect-lm: A neural language model for customizable affective text generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 634–642. Alex Graves, Santiago Fern´andez, and J¨urgen Schmidhuber. 2005. Bidirectional lstm networks for improved phoneme classification and recognition. In International Conference on Artificial Neural Networks, pages 799–804. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward controlled generation of text. In International Conference on Machine Learning. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119. Xiang Li, Lili Mou, Rui Yan, and Ming Zhang. 2016b. Stalematebreaker: A proactive content-introducing approach to automatic human-computer conversation. In Proceedings of International Joint Conference on Artificial Intelligence, pages 2845–2851. 
Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In Proceedings of International Conference on Learning Representations. Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. 2016. Sequence to backward and forward sequences: A content-introducing approach to generative short-text conversation. In Proceedings of 26th International Conference on Computational Linguistics, pages 3349–3358. Soichiro Murakami, Akihiko Watanabe, Akira Miyazawa, Keiichi Goshima, Toshihiko Yanase, Hiroya Takamura, and Yusuke Miyao. 2017. Learning to generate market comments from stock prices. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1374–1384. Ning Qian. 1999. On the momentum term in gradient descent learning algorithms. Neural Networks, 12(1):145–151. Laurie E. Rozakis. 2003. The complete idiot’s guide to grammar and style. Alpha. Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of AAAI conference on Artificial Intelligence. Iulian Vlad. Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In Proceedings of AAAI conference on Artificial Intelligence. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pages 1577–1586. Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems, pages 3483–3491. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. In International Conference on Machine Learning Deep Learning Workshop. Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In Proceedings of AAAI conference on Artificial Intelligence. Zhou Yu, Ziyu Xu, Alan W Black, and Alex I. Rudnicky. 2016. Strategy and policy learning for nontask-oriented conversational systems. In Proceedings of 17th Annual SIGdial Meeting on Discourse and Dialogue. George Yule. 2010. The study of language. Cambridge university press. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Ganbin Zhou, Ping Luo, Rongyu Cao, Fen Lin, Bo Chen, and Qing He. 2017. Mechanism-aware neural machine for dialogue response generation. In Proceedings of AAAI conference on Artificial Intelligence. Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018. Emotional chatting machine: Emotional conversation generation with internal and external memory. In Proceedings of AAAI conference on Artificial Intelligence. Xianda Zhou and William Yang Wang. 2017. Mojitalk: Generating emotional responses at scale. arXiv preprint arXiv:1711.04090.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 142–151 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 142 Extractive Summarization with SWAP-NET: Sentences and Words from Alternating Pointer Networks Aishwarya Jadhav Indian Institute of Science Bangalore, India [email protected] Vaibhav Rajan School of Computing National University of Singapore [email protected] Abstract We present a new neural sequence-tosequence model for extractive summarization called SWAP-NET (Sentences and Words from Alternating Pointer Networks). Extractive summaries comprising a salient subset of input sentences, often also contain important key words. Guided by this principle, we design SWAP-NET that models the interaction of key words and salient sentences using a new twolevel pointer network based architecture. SWAP-NET identifies both salient sentences and key words in an input document, and then combines them to form the extractive summary. Experiments on large scale benchmark corpora demonstrate the efficacy of SWAP-NET that outperforms state-of-the-art extractive summarizers. 1 Introduction Automatic summarization aims to shorten a text document while maintaining the salient information of the original text. The practical need for such systems is growing with the rapid and continuous increase in textual information sources in multiple domains. Summarization tools can be broadly classified into two categories: extractive and abstractive. Extractive summarization selects parts of the input document to create its summary while abstractive summarization generates summaries that may have words or phrases not present in the input document. Abstractive summarization is clearly harder as methods have to address factual and grammatical errors that may be introduced and problems in utilizing external knowledge sources to obtain paraphrasing or generalization. Extractive summarizers obviate the need to solve these problems by selecting the most salient textual units (usually sentences) from the input documents. As a result, they generate summaries that are grammatically and semantically more accurate than those from abstractive methods. While they may have problems like incorrect or unclear referring expressions or lack of coherence, they are computationally simpler and more efficient to generate. Indeed, state-of-the-art extractive summarizers are comparable or often better in performance to competitive abstractive summarizers (see (Nallapati et al., 2017) for a recent empirical comparison). Classical approaches to extractive summarization have relied on human-engineered features from the text that are used to score sentences in the input document and select the highestscoring sentences. These include graph or constraint-optimization based approaches as well as classifier-based methods. A review of these approaches can be found in Nenkova et al. (2011). Some of these methods generate summaries from multiple documents. In this paper, we focus on single document summarization. Modern approaches that show the best performance are based on end-to-end deep learning models that do not require human-crafted features. Neural models have tremendously improved performance in several difficult problems in NLP such as machine translation (Chen et al., 2017) and question-answering (Hao et al., 2017). 
Deep models with thousands of parameters require large, labeled datasets and for summarization this hurdle of labeled data was surmounted by Cheng and Lapata (2016), through the creation of a labeled dataset of news stories from CNN and Daily Mail consisting of around 280,000 documents and human-generated summaries. Recurrent neural networks with encoderdecoder architecture (Sutskever et al., 2014) have 143 been successful in a variety of NLP tasks where an encoder obtains representations of input sequences and a decoder generates target sequences. Attention mechanisms (Bahdanau et al., 2015) are used to model the effects of different loci in the input sequence during decoding. Pointer networks (Vinyals et al., 2015) use this mechanism to obtain target sequences wherein each decoding step is used to point to elements of the input sequence. This pointing ability has been effectively utilized by state-of-the-art extractive and abstractive summarizers (Cheng and Lapata, 2016; Nallapati et al., 2016; See et al., 2017). In this work, we design SWAP-NET a new deep learning model for extractive summarization. Similar to previous models, we use an encoderdecoder architecture with attention mechanism to select important sentences. Our key contribution is to design an architecture that utilizes key words in the selection process. Salient sentences of a document, that are useful in summaries, often contain key words and, to our knowledge, none of the previous models have explicitly modeled this interaction. We model this interaction through a two-level encoder and decoder, one for words and the other for sentences. An attention-based mechanism, similar to that of Pointer Networks, is used to learn important words and sentences from labeled data. A switch mechanism is used to select between words and sentences during decoding and the final summary is generated using a combination of selected sentences and words. We demonstrate the efficacy of our model on the CNN/Daily Mail corpus where it outperforms state-of-the-art extractive summarizers. Our experiments also suggest that the semantic redundancy in SWAPNET generated summaries is comparable to that of human-generated summaries. 2 Problem Formulation Let D denote an input document, comprising of a sequence of N sentences: s1, . . . , sN. Ignoring sentence boundaries, let w1, . . . , wn be the sequence of n words in document D. An extractive summary aims to obtain a subset of the input sentences that forms a salient summary. We use the interaction between words and sentences in a document to predict important words and sentences. Let the target sequence of indices of important words and sentences be V = v1, . . . , vm, where each index vj can point to either a sentence or a word in an input document. We design a supervised sequence-to-sequence recurrent neural network model, SWAP-NET, that uses these target sequences (of sentences and words) to learn salient sentences and key words. Our objective is to find SWAP-NET model parameters M that maximize the probability p(V |M, D) = Q j p(vj|v1, . . . , vj−1, M, D) = Q j p(vj|v<j, M, D). We omit M in the following to simplify notation. SWAP-NET predicts both key words and salient sentences, that are subsequently used for extractive summary generation. 3 Background We briefly describe Pointer Networks (Vinyals et al., 2015). Our approach, detailed in the following sections, uses a similar attention mechanism. 
Given a sequence of n vectors X = x1, ....xn and a sequence of indices R = r1, ....rm, each between 1 and n, the Pointer Network is an encoder-decoder architecture trained to maximize p(R|X; θ) = Qm j=1 pθ(rj|r1, ....rj−1, X; θ), where θ denotes the model parameters. Let the encoder and decoder hidden states be (e1, ...., en) and (d1, ...., dm) respectively. The attention vector at each output step j is computed as follows: uj i = vT tanh(Weei + Wddj), i ∈(1, . . . , n) αj i = softmax(uj i), i ∈(1, . . . , n) The softmax normalizes vector uj to be an attention mask over inputs. In a pointer network, the same attention mechanism is used to select one of the n input vectors with the highest probability, at each decoding step, thus effectively pointing to an input: p(rj|r1, ....rj−1, X) = softmax(uj) Here, v, Wd, and We are learnable parameters of the model. 4 SWAP-NET We use an encoder-decoder architecture with an attention mechanism similar to that of Pointer Networks. To model the interaction between words and sentences in a document we use two encoders and decoders, one at the word level and the other at the sentence level. The sentence-level decoder learns to point to important sentences while the 144 Figure 1: SWAP-NET architecture. EW: word encoder, ES: sentence encoder, DW: word decoder, DS: sentence decoder, Q: switch. Input document has words [w1, . . . , w5] and sentences [s1, s2]. Target sequence shown: v1 = w2, v2 = s1, v3 = w5. Best viewed in color. word-level decoder learns to point to important words. A switch mechanism is trained to select either a word or a sentence at each decoding step. The final summary is created using the output words and sentences. We now describe the details of the architecture. 4.1 Encoder We use two encoders: a bi-directional LSTM at the word level and a LSTM at the sentence level. Each word wi is represented by a K-dimensional embedding (e.g., via word2vec), denoted by xi. The word embedding xi is encoded as ei using bi-directional LSTM for i = 1, . . . , n. The vector output of BiLSTM at the end of a sentence is used to represent that entire sentence, which is further encoded by the sentence-level LSTM as Ek = LSTM(ekl, Ek−1), where kl is the index of the last word in the kth sentence in D and Ek is the hidden state at the kth step of LSTM, for k = 1, . . . , N. See figure 1. 4.2 Decoder We use two decoders – a sentence-level and a word-level decoder, that are both LSTMs, with each decoder pointing to sentences and words respectively (similar to a pointer network). Thus, we can consider the output of each decoder step to be an index in the input sequence to the encoder. Let m be the number of steps in each decoder. Let T1, . . . , Tm be the sequence of indices generated by the sentence-level decoder, where each index Tj ∈{1, . . . , N}; and let t1, . . . , tm be the sequence of indices generated by the word-level decoder, where each index tj ∈{1, . . . , n}. 4.3 Network Details At the jth decoding step, we have to select a sentence or a word which is done through a binary switch Qj that has two states Qj = 0 and Qj = 1 to denote word and sentence selection respectively. So, we first determine the switch probability p(Qj|v<j, D). 
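For concreteness, the pointer-style attention scoring described above (u^j_i = v^T tanh(W_e e_i + W_d d_j), normalized with a softmax over the inputs) can be sketched in a few lines. This is a minimal illustration in PyTorch, not the authors' released code; all module and variable names are our own.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointerAttention(nn.Module):
    """Additive (pointer-style) attention: u_i = v^T tanh(W_e e_i + W_d d_j)."""

    def __init__(self, enc_dim, dec_dim, attn_dim):
        super().__init__()
        self.W_e = nn.Linear(enc_dim, attn_dim, bias=False)  # projects encoder states
        self.W_d = nn.Linear(dec_dim, attn_dim, bias=False)  # projects the decoder state
        self.v = nn.Linear(attn_dim, 1, bias=False)          # scoring vector v

    def forward(self, enc_states, dec_state):
        # enc_states: (batch, n, enc_dim); dec_state: (batch, dec_dim)
        scores = self.v(torch.tanh(self.W_e(enc_states) +
                                   self.W_d(dec_state).unsqueeze(1))).squeeze(-1)  # (batch, n)
        return F.softmax(scores, dim=-1)  # attention weights over the n inputs

# Toy usage: attend over 5 encoder states for a single decoder step.
attn = PointerAttention(enc_dim=8, dec_dim=8, attn_dim=8)
alphas = attn(torch.randn(1, 5, 8), torch.randn(1, 8))
pointed_index = alphas.argmax(dim=-1)  # input with the highest attention probability
```

SWAP-NET uses two such attention modules, one over the word encodings and one over the sentence encodings, whose probabilities are defined next.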
Let αs kj denote the probability of selecting the kth input sentence at the jth decoding step of sentence decoder: αs kj = p(Tj = k|v<j, Qj = 1, D), and let αw ij denote the probability of selecting the ith input word at the jth decoding step of word decoder: αw ij = p(tj = i|v<j, Qj = 0, D), 145 Figure 2: Illustration of word and sentence level attention in the second decoder step (Eq. 1 and Eq. 2). Purple: attention on words, Orange: attention on sentences, Unidirectional dotted arrows: attention from previous step, Bidirectional arrows: attention from previous and to present step. Best viewed in color. both conditional on the corresponding switch selection. We set vj based on the probability values: vj = ( k = arg maxk ps kj if maxk ps kj > maxi pw ij i = arg maxi pw ij if maxi pw ij > maxk ps kj ps kj = αs kjp(Qj = 1|v<j, D), pw ij = αw ijp(Qj = 0|v<j, D). These probabilities are obtained through the attention weight vectors at the word and sentence levels and the switch probabilities: αw ij = softmax(vT t φ(whhj + wtei)), αs kj = softmax(V T T φ(WHHj + WT Ek)). Parameters vt, wh, wt, VT , WH and WT are trainable parameters. Parameters hj and Hj are the hidden vectors at the jth step of the wordlevel and sentence-level decoder respectively defined as: hj = LSTM(hj−1, aj−1, φ(Aj−1)) (1) Hj = LSTM(Hj−1, Aj−1, φ(aj−1)) (2) where aj = Pn i=0 αw ijei, Aj = PN k=0 αs kjEk. The non-linear transformation, φ (we choose tanh), is used to connect the word-level encodings to the sentence decoder and the sentence-level encodings to the word decoder. Specifically, the word-level decoder updates its state by considering a sum of sentence encodings, weighted by the attentions from the previous state and mutatis mutandis for the sentence-level decoder. The switch probability p(Qj|v<j, D) at the jth decoding step is given by: p(Qj = 1|v<j, D) = σ(wT Q(Hj−1, Aj−1, φ(hj−1, aj−1))) p(Qj = 0|v<j, D) = 1 −p(Qj = 1|v<j, D) where wQ is a trainable parameter and σ denotes the sigmoid function and φ is the chosen nonlinear transformation (tanh). During training the loss function lj at jth step is set to lj = −log(ps kjqs j + pw ijqw j ) − log p(Qj|v<j, D). Note that at each decoding step, switch is either qw j = 1, qs j = 0 if the jth output is a word or qw j = 0, qs j = 1 if the jth output is a sentence. The switch probability is also considered in the loss function. 146 4.4 Summary Generation Given a document whose summary is to be generated, its sentences and words are given as input to the trained encoder. At the jth decoding step, either a sentence or a word is chosen based on the probability values αs kj and αw ij and the switch probability p(Qj|v<j, D). We assign importance scores to the selected sentences based on their probability values during decoding as well as the probabilities of the selected words that are present in the selected sentences. Thus sentences with words selected by the decoder are given higher importance. Let the kth input sentence sk be selected at the jth decoding step and ith input word wi be selected at the lth decoding step. Then the importance of sk is defined as I(sk) = αs kj + λ X wi∈sk αw il (3) In our experiments we choose λ = 1. The final summary consists of three sentences with the highest importance scores. 5 Related Work Traditional approaches to extractive summarization rely on human-engineered features based on, for example, part of speech (Erkan and Radev, 2004) and term frequency (Nenkova et al., 2006). 
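The decode-time selection rule and the importance score of Eq. 3 can likewise be sketched briefly; the data layout and all names below are illustrative assumptions rather than the authors' implementation (attention weights are assumed to be 1-D numeric arrays).

```python
def select_output(alpha_s, alpha_w, p_sentence):
    """One decoding step: compare switch-weighted attention probabilities and
    point to either a sentence or a word (the selection rule for v_j)."""
    p_s = alpha_s * p_sentence          # p^s_kj = alpha^s_kj * p(Q_j = 1)
    p_w = alpha_w * (1.0 - p_sentence)  # p^w_ij = alpha^w_ij * p(Q_j = 0)
    if p_s.max() > p_w.max():
        k = int(p_s.argmax())
        return ("sentence", k, float(alpha_s[k]))
    i = int(p_w.argmax())
    return ("word", i, float(alpha_w[i]))

def importance_scores(selected, word_to_sentence, lam=1.0):
    """Eq. 3: I(s_k) = alpha^s_kj + lam * sum of alpha^w_il over selected words in s_k."""
    scores = {}
    for kind, idx, alpha in selected:          # sentences chosen by the sentence decoder
        if kind == "sentence":
            scores[idx] = scores.get(idx, 0.0) + alpha
    for kind, idx, alpha in selected:          # boost them by their selected words
        if kind == "word" and word_to_sentence[idx] in scores:
            scores[word_to_sentence[idx]] += lam * alpha
    return scores                              # the top-3 scores form the summary
```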
Sentences in the input document are scored using these features, ranked and then selected for the final summary. Methods used for extractive summarization include graph-based approaches (Mihalcea, 2005) and Integer Linear Programming (Gillick and Favre, 2009). There are many classifier-based approaches that select sentences for the extractive summary using methods such as Conditional Random Fields (Shen et al., 2007) and Hidden Markov models (Conroy and O’leary, 2001). A review of these classical approaches can be found in Nenkova et al. (2011). End-to-end deep learning based neural models that can effectively learn from text data, without human-crafted features, have witnessed rapid development, resulting in improved performance in multiple areas such as machine translation (Chen et al., 2017) and question-answering (Hao et al., 2017), to name a few. Large labelled corpora based on news stories from CNN and Daily Mail, with human generated summaries have become available (Cheng and Lapata, 2016), that have spurred the use of deep learning models in summarization. Recurrent neural network based architectures have been designed for both extractive (Cheng and Lapata, 2016; Nallapati et al., 2017) and abstractive (See et al., 2017; Tan et al., 2017) summarization problems. Among these, the work of Cheng and Lapata (2016) and Nallapati et al. (2017) are closest to our work on extractive singledocument summarization. An encoder-decoder architecture with an attention mechanism similar to that of a pointer network is used by Cheng and Lapata (2016). Their hierarchical encoder uses a CNN at the word level leading to sentence representations that are used in an RNN to obtain document representations. They use a hierarchical attention model where the first level decoder predicts salient sentences used for an extractive summary and based on this output, the second step predicts keywords which are used for abstractive summarization. Thus they do not use key words for extractive summarization and for abstractive summarization they generate key words based on sentences predicted independently of key words. SWAP-NET, in contrast, is simpler using only two-level RNNs for word and sentence level representations in both the encoder and decoder. In our model we predict both words and sentences in such a way that their attentions interact with each other and generate extractive summaries considering both the attentions. By modeling the interaction between these key words and important sentences in our decoder architecture, we are able to extract sentences that are closer to the gold summaries. SummaRuNNer, the method developed by Nallapati et al. (2017) is not similar to our method in its architecture but only in the aim of extractive summary generation. It does not use an encoderdecoder architecture; instead it is an RNN based binary classifier that decides whether or not to include a sentence in the summary. The RNN is multi-layered representing inputs, words, sentences and the final sentence labels. The decision of selecting a sentence at each step of the RNN is based on the content of the sentence, salience in the document, novelty with respect to previously selected sentences and other positional features. Their approach is considerably simpler than that of Cheng and Lapata (2016) but obtains summaries closer to the gold summaries, and additionally, facilitates interpretable visualization and 147 training from abstractive summaries. 
Their experiments show improved performance over both abstractive and extractive summarizers from several previous models (Nallapati et al., 2017). We note that several elements of our architecture have been introduced and used in earlier work. Pointer networks (Vinyals et al., 2015) used the attention mechanism of (Bahdanau et al., 2015) to solve combinatorial optimization problems. They have also been used to point to sentences in extractive (Cheng and Lapata, 2016) and abstractive (Nallapati et al., 2016; See et al., 2017) summarizers. The switch mechanism was introduced to incorporate rare or out-of-vocabulary words (Gulcehre et al., 2016) and are used in several summarizers (e.g. (Nallapati et al., 2016)). However, we use it to select between word and sentence level decoders in our model. The importance of all the three interactions: (i) sentence-sentence, (ii) word-word and (iii) sentence-word, for summarization, have been studied by Wan et al. (2007) using graph-based approaches. In particular, they show that methods that account for saliency using both the following considerations perform better than methods that consider either one of them alone, and SWAP-NET is based on the same principles. • A sentence should be salient if it is heavily linked with other salient sentences, and a word should be salient if it is heavily linked with other salient words. • A sentence should be salient if it contains many salient words, and a word should be salient if it appears in many salient sentences. 6 Data and Experiments 6.1 Experimental Settings In our experiments the maximum number of words per document is limited to 800, and the maximum number of sentences per document to 50 (padding is used to maintain the length of word sequences). We also use the symbols <GO> and <EOS> to indicate start and end of prediction by decoders. The total vocabulary size is 150,000 words. We use word embeddings of dimension 100 pretrained using word2vec (Mikolov et al., 2013) on the training dataset. We fix the LSTM hidden state size at 200. We use a batch size of 16 and the ADAM optimizer (Kingma and Ba, 2015) with parameters: learning rate = 0.001, β1 = 0.9, β2 = 0.999 to train SWAP-NET. We employ gradient clipping to regularize our model and an early stopping criterion based on the validation loss. During training we find that SWAP-NET learns to predict important sentences faster than to predict words. To speed up learning of word probabilities, we add the term −log αw ij to our loss function lj in the final iterations of training. It is possible to get the same sentence or word in multiple (usually consecutive) decoding steps. In that case, in Eq. 3 we consider the maximum value of alpha obtained across these steps and calculate maximum scores of distinct sentences and words. We select 3 top scoring sentences for the summary, as there are 3.11 sentences on average in the gold summary of the training set (similar to settings used by others, e.g., (Narayan et al., 2017)). 6.2 Baselines Two state-of-the-art methods for extractive summarization are SummaRuNNer (Nallapati et al., 2017) and NN, the neural summarizer of Cheng and Lapata (2016). SummaRuNNer can also provide extractive summaries while being trained abstractively (Nallapati et al., 2017); we denote this method by SummaRuNNer-abs. In addition, we compare our method with the Lead-3 summary which consists of the first three sentences from each document. 
We also compare our method with an abstractive summarizer that uses a similar attention-based encoder-decoder architecture (Nallapati et al., 2016), denoted by ABS. 6.3 Benchmark Datasets For our experiments, we use the CNN/DailyMail corpus (Hermann et al., 2015). We use the anonymized version of this dataset, from Cheng and Lapata (2016), which has labels for important sentences, that are used for training. To obtain labels for words, we extract keywords from each gold summary using RAKE, an unsupervised keyword extraction method (Rose et al., 2010). These keywords are used to label words in the corresponding input document during training. We replace numerical values in the documents by zeros to limit the vocabulary size. We have 193,986 training documents, 12,147 validation documents and 10,346 test documents from the DailyMail corpus and 83,568 training documents, 1,220 validation documents and 1,093 test documents from CNN subset with labels for sentences and words. 148 6.4 Evaluation Metrics We use the ROUGE toolkit (Lin and Hovy, 2003) for evaluation of the generated summaries in comparison to the gold summaries. We use three variants of this metric: ROUGE-1 (R1), ROUGE-2 (R2) and ROUGE-L (RL) that are computed by matching unigrams, bigrams and longest common subsequences respectively between the two summaries. To compare with (Cheng and Lapata, 2016) and (Nallapati et al., 2017) we use limited length ROUGE recall at 75 and 275 bytes for the Daily-Mail test set, and full length ROUGE-F1 score, as reported by them. 6.5 Results on Benchmark Datasets Performance on Daily Mail Data Models R1 R2 RL Lead-3 21.9 7.2 11.6 NN 22.7 8.5 12.5 SummaRuNNner-abs 23.8 9.6 13.3 SummaRuNNner 26.2 10.8 14.4 SWAP-NET 26.4 10.7 14.4 Table 1: Performance on Daily-Mail test set using the limited length recall of Rouge at 75 bytes. Models R1 R2 RL Lead-3 40.5 14.9 32.6 NN 42.2 17.3 34.8 SummaRuNNner-abs 40.4 15.5 32.0 SummaRuNNner 42.0 16.9 34.1 SWAP-NET 43.6 17.7 35.5 Table 2: Performance on Daily-Mail test set using the limited length recall of Rouge at 275 bytes. Table 1 shows the performance of SWAP-NET, state-of-the-art baselines NN and SummaRuNNer and other baselines, using ROUGE recall with summary length of 75 bytes, on the entire Daily Mail test set. The performance of SWAP-NET is comparable to that of SummaRuNNer and better than NN and other baselines. Table 2 compares the same algorithms using ROUGE recall with summary length of 275 bytes. SWAP-NET outperforms both state-of-the-art summarizers SummaRuNNer as well as NN. Performance on CNN/DailyMail Data SWAP-NET has the best performance on the combined CNN and Daily Mail corpus, outperforming Models R1 R2 RL Lead-3 39.2 15.7 35.5 ABS 35.4 13.3 32.6 SummaRuNNer-abs 37.5 14.5 33.4 SummaRuNNer 39.6 16.2 35.3 SWAP-NET 41.6 18.3 37.7 Table 3: Performance on CNN and Daily-Mail test set using the full length Rouge F score. the previous best reported F-score by SummaRuNNer, as seen in table 3, with a consistent improvement of over 2 ROUGE points in all three metrics. 6.6 Discussion SWAP-NET outperforms state-of-the-art extractive summarizers SummaRuNNer (Nallapati et al., 2017) and NN (Cheng and Lapata, 2016) on benchmark datasets. Our model is similar, although simpler, than that of NN and the main difference between SWAP-NET and these baselines is its explicit modeling of the interaction between key words and salient sentences. Automatic keyword extraction has been studied extensively (Hasan and Ng, 2014). 
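For concreteness, word labels of this kind could be produced with an off-the-shelf RAKE implementation; the sketch below assumes the third-party rake-nltk package, which is only one possible choice and not necessarily the implementation used here.

```python
# Requires the nltk 'stopwords' (and 'punkt') corpora to be downloaded beforehand.
from rake_nltk import Rake

def keyword_labels(gold_summary, document_tokens):
    """Label each document token 1 if it occurs in a RAKE key phrase of the gold
    summary, else 0 (names are illustrative)."""
    r = Rake()
    r.extract_keywords_from_text(gold_summary)
    keywords = set()
    for phrase in r.get_ranked_phrases():
        keywords.update(phrase.lower().split())
    return [1 if tok.lower() in keywords else 0 for tok in document_tokens]
```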
We use a popular and well tested method, RAKE (Rose et al., 2010) to obtain key words in the training documents. A disadvantage with such methods is that they do not guarantee representation, via extracted keywords, of all the topics in the text (Hasan and Ng, 2014). So, if RAKE key words are directly applied to the input test document (without using word decoder trained on RAKE words, obtained from gold summary as done in SWAP-NET), then there is a possibility of missing sentences from the missed topics. So, we train SWAP-NET to predict key words and also model their interactions with sentences. Statistics Lead-3 SWAP-NET KW coverage 61.6% 73.8% Sentences with KW 92.2% 98% Table 4: Key word (KW) statistics per summary (average percentage) from 500 documents in Daily Mail test set. See text for definitions. We investigate the importance of modeling this interaction and the role of key words in the final summary. Table 4 shows statistics that reflect the importance of key words in extractive summaries. Key word coverage measures the proportion of key 149 Title: @entity19 vet surprised reason license plate denial Gold Summary: @entity9 of @entity10 , @entity1 , wanted to get ’ @entity11 - 0 ’ put on a license plate . that would have commemorated both @entity9 getting the @entity8 in 0 and his @entity16 . the @entity1 @entity21 denied his request , citing state regulations prohibiting the use of the number 0 because of its indecent connotations . SWAP-NET Summary: @entity9 of @entity10 wanted to get ’ @entity11 ’ put on a license plate , the @entity14 newspaper of @entity10 reported . that would have commemorated both @entity9 getting the @entity8 in 0 and his @entity16 , according to the newspaper . the @entity1 @entity21 denied his request , citing state regulations prohibiting the use of the number 0 because of its indecent connotations @entity9 had been an armored personnel carrier ’s gunner during his time in the @entity29 . SWAP-NET Key words: @entity1, @entity9, @entity8, citing, number, year, indecent, personalized, war, surprised, plate, @entity14, @entity11, @entity10, regulations, reported, wanted, connotations, license, request, according,@entity21, armored, @entity16 Lead 3 Summary: a @entity19 war veteran in @entity1 has said he ’s surprised over the reason for the denial of his request for a personalized license plate commemorating the year he was wounded and awarded a @entity8 . @entity9 of @entity10 wanted to get ’ @entity11 ’ put on a license plate , the @entity14 newspaper of @entity10 reported . that would have commemorated both @entity9 getting the @entity8 in 0 and his @entity16 , according to the newspaper . Table 5: Sample gold summary and summaries generated by SWAP-NET and Lead-3. Key words are highlighted, bold font indicates overlap with gold summary. words from those in the gold summary present in the generated summary. SWAP-NET obtains nearly 74% of the key words. In comparison Lead3 has only about 62% of the key words from the gold summary. Sentences with key words measures the proportion of sentences containing at least one key word. It is not surprising that in SWAP-NET summaries 98% of the sentences, on average, contain at least one key word: this is by design of SWAP-NET. However, note that Lead-3 which has poorer performance in all the benchmark datasets has much fewer sentences with key words. This highlights the importance of key words in finding salient sentences for extractive summaries. 
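The two per-summary statistics reported in Table 4 can be computed with a short helper such as the following; this is a sketch and the function and variable names are illustrative.

```python
def keyword_statistics(summary_sentences, gold_keywords):
    """Key-word coverage (share of gold-summary key words present in the generated
    summary) and the share of summary sentences containing at least one key word."""
    gold_keywords = {k.lower() for k in gold_keywords}
    summary_words = {w.lower() for s in summary_sentences for w in s.split()}
    coverage = len(gold_keywords & summary_words) / max(len(gold_keywords), 1)
    with_kw = sum(1 for s in summary_sentences
                  if gold_keywords & {w.lower() for w in s.split()})
    return coverage, with_kw / max(len(summary_sentences), 1)
```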
Gold summary Lead-3 SWAP-NET 0.81 0.553 0.8 Table 6: Average pairwise cosine distance between paragraph vector representations of sentences in summaries. We also find the SWAP-NET obtains summaries that have less semantic redundancy. Table 6 shows the average distance between pairs of sentences from the gold summary, and summaries generated from SWAP-NET and Lead-3. Distances are measured using cosine distance of paragraph vectors of each sentence (Le and Mikolov, 2014) from randomly selected 500 documents of the Daily Mail test set. Paragraph vectors have been found to be effective semantic representations of sentences (Le and Mikolov, 2014) and experiments in (Dai et al., 2015) also show that paragraph vectors can be effectively used to measure semantic similarity using cosine distance. For training we use GENSIM ( ˇReh˚uˇrek and Sojka, 2010) with embedding size 200 and initial learning rate 0.025. The model is trained on 500 documents from DailyMail dataset for 10 epochs and learning rate is decreased by 0.002 at each epoch. The average pair-wise distance of SWAP-NET is very close to that of the gold summary, both 150 nearly 0.8. In contrast, the average pairwise distance in Lead-3 summaries is 0.553 indicating higher redundancy. This highly desirable feature of SWAP-NET is likely due to use of of key words, that is affecting the choice of sentences in the final summary. Table 5 shows a sample gold summary from the Daily Mail dataset and the generated summary from SWAP-NET and, for comparison, from Lead-3. We observe the presence of key words in all the overlapping segments of text with the gold summary indicating the importance of key words in finding salient sentences. Modeling this interaction, we believe, is the reason for the superior performance of SWAP-NET in our experiments. An implementation of SWAP-NET and all the generated summaries from the test sets are available online in a github repository1. 7 Conclusion We present SWAP-NET, a neural sequence-tosequence model for extractive summarization that outperforms state-of-the-art extractive summarizers SummaRuNNer (Nallapati et al., 2017) and NN (Cheng and Lapata, 2016) on large scale benchmark datasets. The architecture of SWAPNET is simpler than that of NN but due to its effective modeling of interaction between salient sentences and key words in a document, SWAPNET achieves superior performance. SWAP-NET models this interaction using a new two-level pointer network based architecture with a switching mechanism. Our experiments also suggest that modeling sentence-keyword interaction has the desirable property of less semantic redundancy in summaries generated by SWAP-NET. 8 Acknowledgment The authors thank the ACL reviewers for their valuable comments. Vaibhav Rajan acknowledges the support from Singapore Ministry of Education Academic Research Fund Tier 1 towards funding this research. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations. 1https://github.com/aishj10/swap-net Huadong Chen, Shujian Huang, David Chiang, and Jiajun Chen. 2017. Improved neural machine translation with a syntax-aware encoder and decoder. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. John M Conroy and Dianne P O’leary. 2001. Text summarization via hidden markov models. In Proceedings of the 24th annual international ACM SIGIR conference on research and development in information retrieval, pages 406–407. ACM. Andrew M Dai, Christopher Olah, and Quoc V Le. 2015. Document embedding with paragraph vectors. arXiv preprint arXiv:1507.07998. G¨unes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22:457–479. Dan Gillick and Benoit Favre. 2009. A scalable global model for summarization. In Proceedings of the Workshop on Integer Linear Programming for Natural Langauge Processing, Association for Computational Linguistics, pages 10–18. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 140–149. Yanchao Hao, Yuanzhe Zhang, Kang Liu, Shizhu He, Zhanyi Liu, Hua Wu, and Jun Zhao. 2017. An endto-end model for question answering over knowledge base with cross-attention combining global knowledge. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Kazi Saidul Hasan and Vincent Ng. 2014. Automatic keyphrase extraction: A survey of the state of the art. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1262–1273. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693– 1701. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations. 151 Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the International Conference on Machine Learning, pages 1188–1196. Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram cooccurrence statistics. In Proceedings of the North American Chapter of the Association for Computational Linguistics on Human Language TechnologyVolume 1, pages 71–78. Rada Mihalcea. 2005. Language independent extractive summarization. In Proceedings of the ACL 2005 on Interactive poster and demonstration sessions, pages 49–52. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In Proceedings of Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), pages 3075–3081. Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning (CoNLL). Shashi Narayan, Nikos Papasarantopoulos, Shay B Cohen, and Mirella Lapata. 2017. Neural extractive summarization with side information. arXiv preprint arXiv:1704.04530. Ani Nenkova, Kathleen McKeown, et al. 2011. Automatic summarization. 
Foundations and Trends R⃝in Information Retrieval, 5(2–3):103–233. Ani Nenkova, Lucy Vanderwende, and Kathleen McKeown. 2006. A compositional context sensitive multi-document summarizer: exploring the factors that influence summarization. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 573–580. Radim ˇReh˚uˇrek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45–50. Stuart Rose, Dave Engel, Nick Cramer, and Wendy Cowley. 2010. Automatic keyword extraction from individual documents. Text Mining: Applications and Theory, pages 1–20. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Dou Shen, Jian-Tao Sun, Hua Li, Qiang Yang, and Zheng Chen. 2007. Document summarization using conditional random fields. In Proceedings of International Joint Conferences on Artificial Intelligence, volume 7, pages 2862–2867. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112. Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017. Abstractive document summarization with a graphbased attentional neural model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1171–1181. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692–2700. Xiaojun Wan, Jianwu Yang, and Jianguo Xiao. 2007. Towards an iterative reinforcement approach for simultaneous document summarization and keyword extraction. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, pages 552–559.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1509–1519 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1509 Sentiment Adaptive End-to-End Dialog Systems Weiyan Shi [24]7.ai [email protected] Zhou Yu University of California, Davis [email protected] Abstract End-to-end learning framework is useful for building dialog systems for its simplicity in training and efficiency in model updating. However, current end-to-end approaches only consider user semantic inputs in learning and under-utilize other user information. Therefore, we propose to include user sentiment obtained through multimodal information (acoustic, dialogic and textual), in the end-to-end learning framework to make systems more user-adaptive and effective. We incorporated user sentiment information in both supervised and reinforcement learning settings. In both settings, adding sentiment information reduced the dialog length and improved the task success rate on a bus information search task. This work is the first attempt to incorporate multimodal user information in the adaptive end-toend dialog system training framework and attained state-of-the-art performance. 1 Introduction Most of us have had frustrating experience and even expressed anger towards automated customer service systems. Unfortunately, none of the current commercial systems can detect user sentiment and let alone act upon it. Researchers have included user sentiment in rule-based systems (Acosta, 2009; Pittermann et al., 2010), where there are strictly-written rules that guide the system to react to user sentiment. Because traditional modular-based systems are harder to train, to update with new data and to debug errors, end-to-end trainable systems are more popular. However, no work has tried to incorporate sentiment information in the end-to-end trainable systems so far to create sentiment-adaptive systems that are easy to train. The ultimate evaluators of dialog systems are users. Therefore, we believe dialog system research should strive for better user satisfaction. In this paper, we not only included user sentiment information as an additional context feature in an end-to-end supervised policy learning model, but also incorporated user sentiment information as an immediate reward in a reinforcement learning model. We believe that providing extra feedback from the user would guide the model to adapt to user behaviour and learn the optimal policy faster and better. There are three contributions in this work: 1) an audio dataset1 with sentiment annotation (the annotators were given the complete dialog history); 2) an automatic sentiment detector that considers conversation history by using dialogic features, textual features and traditional acoustic features; and 3) end-to-end trainable dialog policies adaptive to user sentiment in both supervised and reinforcement learning settings. We believe such dialog systems with better user adaptation are beneficial in various domains, such as customer services, education, health care and entertainment. 2 Related Work Many studies in emotion recognition (Schuller et al., 2003; Nwe et al., 2003; Bertero et al., 2016) have used only acoustic features. But there has been work on emotion detection in spoken dialog systems incorporating extra information as well (Lee and Narayanan, 2005; Devillers et al., 2003; Liscombe et al., 2005; Burkhardt et al., 2009; Yu et al., 2017). For example, Liscombe et al. 
(2005) explored features like users’ dialog act, lexical context and discourse context of the previous turns. Our approach considered accumulated di1The dataset is available here. 1510 alogic features, such as total number of interruptions, to predict user sentiment along with acoustic and textual features. The traditional method to build dialog system is to train modules such as language understanding component, dialog manager and language generator separately (Levin et al., 2000; Williams and Young, 2007; Singh et al., 2002). Recently, more and more work combines all the modules in an end-to-end training framework (Wen et al., 2016; Li et al., 2017; Dhingra et al., 2016; Williams et al., 2017; Liu and Lane, 2017a). Specifically related to our work, Williams et al. (2017) built a model, which combined the traditional rule-based system and the modern deep-learning-based system, with experts designing actions masks to regulate the neural model. Action masks are bit vectors indicating allowed system actions at certain dialog state. The end-to-end framework made dialog system training simpler and model updating easier. Reinforcement learning (RL) is also popular in dialog system building (Zhao and Eskenazi, 2016; Liu and Lane, 2017b; Li et al., 2016). A common practice is to simulate users. However, building a user simulator is not a trivial task. Zhao and Eskenazi (2016) combines the strengths of reinforcement learning and supervised learning to accelerate the learning of a conversational game simulator. Li et al. (2016) provides a standard framework for building user simulators, which can be modified and generalized to different domains. Liu and Lane (2017b) describes a more advanced way to build simulators for both the user and the agent, and train both sides jointly for better performance. We simulated user sentiment by sampling from real data and incorporated it as immediate rewards in RL, which is different from common practice of using task success as delayed rewards in RL training. Some previous module-based systems integrated user sentiment in dialog planning (Acosta, 2009; Acosta and Ward, 2011; Pittermann et al., 2010). They all integrated user sentiment in the dialog manager with manually defined rules to react to different user sentiment and showed that tracking sentiment is helpful in gaining rapport with users and creating interpersonal interaction in the dialog system. In this work, we include user sentiment into end-to-end dialog system training and make the dialog policy learn to choose dialog actions to react to different user sentiments automatically. We achieve this through integrating user sentiment into reinforcement reward design. Many previous RL studies used delayed rewards, mostly task success. However, delayed rewards make the converging speed slow, so some studies integrated estimated per-turn immediate reward. For example, Ferreira and Lef`evre (2013) explored expert-based reward shaping in dialog management and Ultes et al. (2017) proposed Interaction Quality (IQ), a less subjective variant of user satisfaction, as immediate reward in dialog training. However, both methods are not end-toend trainable, and require manual input as prior, either in designing proper form of reward, or in annotating the IQ. Our approach is different as we detect the multimodal user sentiment on the fly and does not require manual input. Because sentiment information comes directly from real users, our method will adapt to user sentiment as the dialog evolves in real time. 
Another advantage of our model is that the sentiment scores come from a pre-trained sentiment detector, so no manual annotation of rewards is required. Furthermore, the sentiment information is independent of the user’s goal, so no prior domain knowledge is required, which makes our method generalizable and independent of the task. 3 Dataset We experimented our methods on DSTC1 dataset (Raux et al., 2005), which has a bus information search task. Although DSTC2 dataset is a more commonly-used dataset in evaluating dialog system performance, the audio recordings of DSTC2 are not publicly available and therefore, DSTC1 was chosen. There are a total of 914 dialogs in DSTC1 with both text and audio information. Statistics of this dataset are shown in Table 1. We used the automatic speech recognition (ASR) as the user text inputs instead of the transcripts, because the system’s action decisions heavily depend on ASR. There are 212 system action templates in this dataset. Four types of entities are involved, <place>, <time>, <route>, and <neighborhood>. 4 Annotation We manually annotated 50 dialogs consisting of 517 conversation turns for user sentiment. Sentiment is categorized into negative, neutral and positive. The annotator had access to the 1511 Category Total total dialogs 914 total dialogs in train 517 total dialogs in test 397 Statistics Total avg dialog len 13.8 vocabulary size 685 Table 1: Statistics of the text data. Category Total total dialogs 50 total audios 517 total audios in train 318 total audios in dev 99 total audios in test 100 Category Total neutral 254 negative 253 positive 10 Table 2: Statistics of the annotated audio set. entire dialog history in the annotation process because the dialog context gives the annotators a holistic view of the interactions, and annotating user sentiment in a dialog without the context is really difficult. Some previous studies have also performed similar user information annotation given context, such as Devillers et al. (2002). The annotation scheme is described in Table 10 in Appendix A.2. To address the concern that dialog quality may bias the sentiment annotation, we explicitly asked the annotators to focus on users’ behaviour instead of the system, and hid all the details of multimodal features from the annotators. Moreover, two annotators were calibrated on 37 audio files, and reached an inter-annotator agreement (kappa) of 0.74. The statistics of the annotation results are shown in Table 2. The skewness in the dataset is due to the data’s nature. In the annotation scheme, positive is defined as “excitement or other positive feelings”, but people rarely express obvious excitement towards automated task-oriented dialog systems. What we really want to distinguish is neutral and positive cases from negative cases so as to avoid the negative sentiment, and the dataset is balanced for these two cases. To the best of our knowledge, our dataset is the first publicly available dataset that annotated user sentiment with respect to the entire dialog history. There are similar datasets with emotion annotations (Schuller et al., 2013) but are not labeled under dialog contexts. 5 Multimodal Sentiment Classification To detect user sentiment, we extracted a set of acoustic, dialogic and textual features. 5.1 Acoustic features We used openSMILE (Eyben et al., 2013) to extract acoustic features. Specifically, we used the paralinguistics configuration from Schuller et al. (2003), which includes 1584 acoustic features, such as pitch, volume and jitter. 
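As an illustration, utterance-level acoustic functionals of this kind can be extracted with openSMILE's Python bindings. The sketch below assumes the opensmile package and uses the ComParE 2016 functionals merely as a stand-in for the paralinguistics configuration used here; the resulting feature count differs from the 1584 features reported.

```python
# Minimal sketch, assuming the `opensmile` Python package (audEERING).
import opensmile

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.ComParE_2016,      # stand-in feature set
    feature_level=opensmile.FeatureLevel.Functionals,   # one vector per utterance
)
features = smile.process_file("user_turn.wav")  # pandas DataFrame, one row of functionals
acoustic_vector = features.to_numpy().ravel()
```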
In order to avoid possible overfitting caused by the large number of acoustic features, we performed tree-based feature selection (Pedregosa et al., 2011) to reduce the size of acoustic features to 20. The selected features are listed in Table 12 in Appendix A.4. 5.2 Dialogic features Four categories of dialogic features are selected according to previous literature (Liscombe et al., 2005) and the statistics observed in the dataset. We used not only the per-turn statistics of these features, but also the accumulated statistics of them throughout the entire conversation so that the sentiment classifier can also take the entire dialog context into consideration. Interruption is defined as the user interrupting the system speech. Interruptions occurred fairly frequently in our dataset (4896 times out of 14860 user utterances). Button usage When the user is not satisfied with the ASR performance of the system, he/she would rather choose to press a button for ”yes/no” questions, so the usage of buttons can be an indication of negative sentiment. During DSTC1 data collection, users were notified about the option to use buttons, so this kind of information is available in the data. Repetitions There are two kinds of repetitions: the user asks the system to repeat the previous sentence, and the system keeps asking the same question due to failures to catch some important entity. In our model, we combined these two situations as one feature because very few user repetitions occur in our data (<1%). But for other data, it might be helpful to separate them. Start over is active when the user chooses to restart the task in the middile of the conversation. The system is designed to give the user an option to start over after several turns. If the user takes this offer, he/she might have negative sentiment. 1512 5.3 Textual features We also noticed that the semantic content of the utterance was relevant to sentiment. So we used the entire dataset as a corpus and created a tf-idf vector for each utterance as textual features. 5.4 Classification results The sentiment classifier was trained on the 50 dialogs annotated with sentiment labels. The predictions made by this classifier were used for the supervised learning and reinforcement learning in the later sections. We used random forest as our classifier (an implementation from scikit-learn (Pedregosa et al., 2011)), as we had limited annotated data. We separated the data to be 60% for training, 20% for validation and 20% for testing. Due to the randomness in the experiments, we ran all the experiments 20 times and reported the average results of different models in Table 4. We also conducted unpaired one-tailed t-test to assess the statistical significance. We extracted 20 acoustic features, eight dialogic features and 164 textual features. From Table 4, we see that the model combining all the three categories of features performed the best (0.686 in F-1, p < 1e−6 compared to acoustic baseline). One interesting observation is that by only using eight dialogic features, the model already achieved 0.596 in F-1. Another interesting observation is that using 164 textual features alone reached a comparable performance (0.664), but the combination of acoustic and textual features actually brought down the performance to 0.647. One possible reason is that the acoustic information has noise that confused the textual information when combined. But this observation doesn’t necessarily apply to other datasets. 
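Putting the pieces together, the classification pipeline described in this section can be sketched with scikit-learn as follows. The split, hyperparameters, and all variable names are illustrative assumptions rather than the exact setup used.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def train_sentiment_detector(X_acoustic, X_dialogic, utterances, y):
    """Sketch: tree-based selection of 20 acoustic features, tf-idf text features,
    dialogic features, and a random forest. `y` is an array of sentiment labels."""
    idx_train, idx_test = train_test_split(np.arange(len(y)), test_size=0.2,
                                           stratify=y, random_state=0)
    # Select 20 acoustic features with a tree ensemble.
    selector = SelectFromModel(ExtraTreesClassifier(n_estimators=100, random_state=0),
                               max_features=20, threshold=-np.inf)
    selector.fit(X_acoustic[idx_train], y[idx_train])
    # tf-idf textual features over the ASR transcripts.
    tfidf = TfidfVectorizer().fit([utterances[i] for i in idx_train])

    def featurize(idx):
        return np.hstack([selector.transform(X_acoustic[idx]),
                          X_dialogic[idx],
                          tfidf.transform([utterances[i] for i in idx]).toarray()])

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(featurize(idx_train), y[idx_train])
    print("weighted F-1:", f1_score(y[idx_test], clf.predict(featurize(idx_test)),
                                    average="weighted"))
    return clf, selector, tfidf
```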
The significance tests show that adding dialogic features improved the baseline significantly. For example, the model with both acoustic features and dialogic features are significantly better than the one with only acoustic features (p < 1e−6). In Table 3, we listed the dialogic features with their relative importance rank, which were obtained from ranking their feature importance scores in the classifier. We observe that “total interruptions so far” is the most useful dialogic features to predict user sentiment. The sentiment detector trained will be integrated in the end-to-end learning described later. Dialogic Features Relative Rank of importance total interruptions so far 1 interruptions 2 total button usages so far 3 total repetitions so far 4 repetition 5 button usage 6 total start over so far 7 start over 8 Table 3: Dialogic features’ relative importance rank in sentiment detection. Model Avg. of F-1 Std. of F-1 Max of F-1 Acoustic features only 0.635 0.027 0.686 Dialogic features only 0.596 0.001 0.596 Textual features only ⇤ 0.664 0.010 0.685 Textual + Dialogic ⇤ 0.672 0.011 0.700 Acoustic + Dialogic ⇤ 0.680 0.019 0.707 Acoustic + Textual 0.647 0.025 0.686 Acoustic + Dialogic + Text ⇤ 0.686 0.028 0.756 Table 4: Results of sentiment detectors using different features. The best result is highlighted in bold and * indicates statistical significance compared to the baseline, which is using acoustic features only. (p < 0.0001) 6 Supervised Learning (SL) We incorporated the detected user sentiment from the previous section into a supervised learning framework for training end-to-end dialog systems. There are many studies on building a dialog system in a supervised learning setting (Bordes and Weston (2016); Eric and Manning (2017); Seo et al. (2016); Liu and Lane (2017a); Li et al. (2017); Williams et al. (2017)). Following these approaches, we treated the problem of dialog policy learning as a classification problem, which is to select actions among system action templates given conversation history. Specifically, we decided to adopt the framework of Hybrid Code Network (HCN) introduced in Williams et al. (2017), because it is the current state-of-the-art model. We reimplemented HCN and used it as the baseline system, given the absence of direct comparison on DSTC1 data. One caveat is that HCN used action masks (bit vectors indicating allowed actions at certain dialog states) to prevent impossible system actions, but we didn’t use hand-crafted action masks in the supervised learning setting because manually designing action masks for 212 action templates is very labor-intensive. This makes our method more general and adaptive to different tasks. All the dialog modules were trained 1513 together instead of separately. Therefore, our method is end-to-end trainable and doesn’t require human expert involvement. We listed all the context features used in Williams et al. (2017) in Table 11 in Appendix A.3. In our model, we added one more set of context features, the user-sentiment-related features. For entity extraction, given that the entity values in our dataset form a simple unique fixed set, we used simple string matching. 
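For concreteness, the extra context features described above (entity-presence bits from string matching, the one-hot predicted sentiment label, and the last system action) could be assembled as in the sketch below; the entity handling and all names are illustrative assumptions, not the exact feature layout used.

```python
import numpy as np

ENTITIES = ["place", "time", "route", "neighborhood"]
SENTIMENTS = ["negative", "neutral", "positive"]

def context_features(matched_entities, predicted_sentiment, last_action_id, n_actions=212):
    """Illustrative context vector: entity-presence bits, one-hot predicted
    sentiment, and a one-hot encoding of the last system action template."""
    entity_bits = [1.0 if e in matched_entities else 0.0 for e in ENTITIES]
    sentiment_onehot = [1.0 if s == predicted_sentiment else 0.0 for s in SENTIMENTS]
    last_action = np.zeros(n_actions)
    if last_action_id is not None:
        last_action[last_action_id] = 1.0
    return np.concatenate([entity_bits, sentiment_onehot, last_action])
```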
We conducted three experiments: the first one used entity presences as context features, which serves as the baseline; the second one used entity presences in addition to all the raw dialogic features mentioned in Table 3; the third experiment used the baseline features plus the predicted sentiment label by the prebuilt sentiment detector (converted to one-hot vector) instead of the raw dialogic features. We used the entire DSTC1 dataset to train the supervised model. The input is the normalized natural language and the contexutal features, and the output is the action template id. We kept the same experiment setting in Williams et al. (2017), e.g. last action taken was also used as a feature, along with word embeddings (Mikolov et al., 2013) and bag-of-words; LSTM with 128 hiddenunits and AdaDelta optimizer (Zeiler, 2012) were used to train the model. The results of different models are shown in Table 5. We observe that using the eight raw dialogic features did not improve turn-level F-1 score. One possible reason is that a total of eight dialogic features were added to the model, and some of them might contain noises and therefore caused the model to overfit. However, using predicted sentiment information as an extra feature, which is a more condensed information, outperformed the other models both in terms of turn-level F-1 score and dialog accuracy which indicates if all turns in a dialog are correct. The difference in absolute F1 score is small because we have a relatively large test set (5876 turns). But the unpaired one-tailed t-test shows that p < 0.01 for both the F-1 and the dialog accuracy. This suggests that including user sentiment information in action planning is helpful in a supervised learning setting. 7 Reinforcement Learning (RL) In the previous section, we discussed including sentiment features directly as a context feature in a supervised learning model for end-to-end dialog Model Weighted F-1 Dialog Acc. HCN 0.4198 6.05% HCN + raw dialogic features 0.4190 5.79% HCN + predicted sentiment label⇤ 0.4261 6.55% Table 5: Results of different SL models. The best result is highlighted in bold. ⇤indicates that the result is significantly better than the baseline (p < 0.01). Dialog accuracy indicates if all turns in a dialog are correct, so it’s low. For DSTC2 data, the state-of-art dialog accuracy is 1.9%, consistent with our results. system training, which showed promising results. But once a system operates at scale and interacts with a large number of users, it is desirable for the system to continue to learn autonomously using reinforcement learning (RL). With RL, each turn receives a measurement of goodness called reward (Williams et al., 2017). Previously, training taskoriented systems mainly relies on the delayed reward about task success. Due to the lack of informative immediate reward, the training takes a long time to converge. In this work, we propose to include user sentiment as immediate rewards to expedite the reinforcement learning training process and create a better user experience. To use sentiment scores in the reward function, we chose the policy gradient approach (Williams, 1992) and implemented the algorithm based on Zhu (2017). The traditional reward function uses a positive constant (e.g. 20) to reward the success of the task, 0 or a negative constant to penalize the failure of the task after certain number of turns, and gives -1 to each extra turn to encourage the system to complete the task sooner. 
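As background for the reward designs that follow, a policy-gradient (REINFORCE) update with discounted per-turn rewards can be sketched as below; the framework choice and names are assumptions for illustration, not the authors' implementation.

```python
import torch

def reinforce_update(log_probs, rewards, optimizer, gamma=0.9):
    """One REINFORCE update from a single dialog. `log_probs` are the
    log-probabilities of the actions the policy took (tensors with gradients);
    `rewards` are the per-turn rewards plus the final success/failure reward."""
    returns, g = [], 0.0
    for r in reversed(rewards):        # discounted return G_t for each turn
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```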
However, such a reward function doesn't consider any feedback from the end-user. It is natural for humans to take their conversational partner's sentiment into account when planning a dialog. We therefore propose a set of new reward functions that incorporate user sentiment to emulate human behavior. The intuition behind integrating sentiment into the reward function is as follows: the ultimate evaluators of dialog systems are the end-users, and user sentiment is a direct reflection of user satisfaction. Therefore, we detected user sentiment scores from multimodal sources on the fly and used them as immediate rewards in an adaptive end-to-end dialog training setting. This sentiment information comes directly from real users, which lets the system adapt to an individual user's sentiment as the dialog proceeds. Furthermore, the sentiment information is independent of the task, so our method doesn't require any prior domain knowledge and can easily be generalized to other domains. There has been work that incorporates user information into reward design (Su et al., 2015; Ultes et al., 2017), but it used information from a single channel and sometimes required manual labeling of the reward. Our approach utilizes information from multiple channels and doesn't involve manual work once a sentiment detector is ready. We built a simulated system in the same bus information search domain to test the effectiveness of using sentiment scores in the reward function. In this system, there are three entity types, <departure>, <arrival> and <time>, and five actions: asking for the different entities and giving information. A simple action mask was used to prevent impossible actions, such as giving information about an uncovered place. The inputs to the system are the simulated user's dialog acts and the simulated sentiment sampled from a subset of DSTC1, CleanData, which is described later. The output of the system is the system action template.

7.1 User simulator

Given that reinforcement learning requires feedback from the environment - in our case, the users - and interacting with real users is always expensive, we created a user simulator to interact with the system. At the beginning of each dialog, the simulated user is initialized with a goal consisting of the three entities mentioned above, and the goal remains unchanged throughout the conversation. The user responds to the system's questions with entities, which are placeholders like <departure> instead of real values. To simulate ASR errors, the simulated user's act type occasionally changes from "informing slot values" to "making noises" with a hand-set probability (10% in our case). Some example dialogs along with their associated rewards are shown in Tables 8 and 9 in Appendix A.1. We simulated user sentiment by sampling from real data, the DSTC1 dataset. There are three steps involved. First, we cleaned the DSTC1 dialogs by removing the audio files with no ASR output and high ASR error rates. This resulted in a dataset, CleanData, with 413 dialogs and 1918 user inputs. We observed that users accumulate their sentiment as the conversation unfolds: when the system repeatedly asks for the same entity, they express stronger sentiment. Therefore, summary statistics that record how many times certain entities have been asked during the conversation are representative of users' accumulating sentiment. We designed a set of summary statistics S that records statistics of the system actions so far, e.g. how many times the arrival place has been asked or the schedule information has been given.
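A minimal sketch of such a summary-statistics vector follows. Counting per-action occurrences over the dialog prefix is our assumption of how S is represented, and the five action names are hypothetical placeholders (only the count of five simulated actions comes from the paper).

from collections import Counter

SIM_ACTIONS = ["ask_departure", "ask_arrival", "ask_time", "confirm", "give_info"]  # placeholder names

def summary_statistics(system_actions_so_far):
    """Return a fixed-order count vector, e.g. how many times the arrival place
    has been asked or the schedule information has been given so far."""
    counts = Counter(system_actions_so_far)
    return [counts[a] for a in SIM_ACTIONS]

# Example: the system asked for the departure twice and the arrival once.
s_sim = summary_statistics(["ask_departure", "ask_arrival", "ask_departure"])
print(s_sim)   # [2, 1, 0, 0, 0]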
The second step is to create a mapping between the five simulated system actions and the DSTC1 system actions. We do this by calculating a vector s_real consisting of the values in S for each user utterance in CleanData; s_real is used to compare the similarity between the real dialog and the simulated dialog. The final step is to sample from CleanData. For each simulated user utterance, we calculate the same vector s_sim and compare it with each s_real. There are two possible results. If there is an s_real equal to s_sim, we randomly sample one of the matched user utterances to represent the sentiment of the simulated user. If no s_real matches s_sim, different strategies are applied depending on the reward function used, as described in detail later. Once we have a sample, the eight dialogic features of the sampled utterance are used to calculate the sentiment score. We didn't use the acoustic or textual features because in a simulated setting, only the dialogic features are valid.

7.2 Experiments

We designed four experiments with different reward functions. A discount factor of 0.9 was applied in all experiments, and the maximum number of turns is 15. Following Williams et al. (2017), we used an LSTM with 32 hidden units for the RNN in the HCN and AdaDelta for optimization, and updated the reinforcement learning policy after each dialog. The ε-greedy exploration strategy (Tokic, 2010) was applied. Given that the entire system was simulated, we only used the presence of each entity and the last action taken by the system as context features, and didn't use bag-of-words or utterance embedding features. In order to evaluate the method, we froze the policy after every 200 updates and ran 500 simulated dialogs to calculate the task success rate. We repeated the process 20 times and report the average performance in Figures 1 and 2 and Table 6.

7.2.1 Baseline

We define the baseline reward as follows, without any sentiment involvement.

Reward 1 Baseline
    if success then
        R1 = 20
    else if failure then
        R1 = -10
    else if each proceeding turn then
        R1 = -1
    end if

7.2.2 Sentiment reward with random samples (SRRS)

We designed the first simple reward function with user sentiment as the immediate reward: sentiment reward with random samples (SRRS). We first drew a sample from real data with a matched context; if there was no matched data, a random sample was used instead. Because CleanData is relatively small, only 36% of turns were covered by matched samples. If the sampled dialogic features were not all zero, the sentiment reward (SR) was calculated as a linear combination with tunable parameters; we chose it to be -5Pneg - Pneu + 10Ppos for simplicity. When the dialogic features were all zero, which in most cases meant the user didn't express an obvious sentiment, we set the reward to -1.

Reward 2 SRRS
    if success then
        R2 = 20
    else if failure then
        R2 = -10
    else if sample with all-zero dialogic features then
        R2 = -1
    else if sample with non-zero dialogic features then
        R2 = -5Pneg - Pneu + 10Ppos
    end if

7.2.3 Sentiment reward with repetition penalty (SRRP)

Random samples in SRRS may result in extreme sentiment data, so we used dialogic features to approximate sentiment for the unmatched data. Specifically, if there were repetitions, which correlate with negative sentiment (see Table 3), we assigned a penalty to that utterance. See the Reward 3 formula below for the detailed parameters.
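Before giving the SRRP formula, here is a minimal sketch of the sampling step and the SRRS reward described above. The CleanData record layout (context vector s_real, the eight raw dialogic features, and the detector's class probabilities Pneg, Pneu, Ppos) is an illustrative assumption; the -5/-1/+10 weights and the -1 fallback follow the text.

import random

def sample_utterance(s_sim, clean_data):
    matched = [u for u in clean_data if u["s_real"] == s_sim]
    if matched:                       # context-matched sample (about 36% of turns)
        return random.choice(matched)
    return random.choice(clean_data)  # SRRS falls back to a random sample

def srrs_reward(sample):
    """Immediate sentiment reward for one (non-terminal) turn."""
    if not any(sample["dialogic_features"]):
        return -1.0                   # no overt sentiment expressed
    p_neg, p_neu, p_pos = sample["sentiment_probs"]
    return -5 * p_neg - p_neu + 10 * p_pos

# Toy example with two fake CleanData records.
clean_data = [
    {"s_real": [2, 1, 0, 0, 0], "dialogic_features": [1, 0, 0, 0, 0, 0, 0, 1],
     "sentiment_probs": (0.7, 0.2, 0.1)},
    {"s_real": [0, 0, 0, 0, 1], "dialogic_features": [0] * 8,
     "sentiment_probs": (0.1, 0.8, 0.1)},
]
sample = sample_utterance([2, 1, 0, 0, 0], clean_data)
print(srrs_reward(sample))            # -5*0.7 - 0.2 + 10*0.1 = -2.7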
36% of turns were covered by real data samples, 15% had no match in real data but contained repetitions, and 33% had no match and no repetition. Moreover, we experimented with different penalty weights. When we increased the repetition penalty to 5, the success rate was similar to that with a penalty of 2.5. However, when we increased the penalty further to 10, the success rate dropped by a large margin. Our interpretation is that a large repetition penalty shifts the focus away from the real sentiment samples and onto the repetitions, which did not help learning.

Reward 3 SRRP
    if success then
        R3 = 20
    else if failure then
        R3 = -10
    else if match then
        if all-zero dialogic features then
            R3 = -1
        else if non-zero dialogic features then
            R3 = -5Pneg - Pneu + 10Ppos
        end if
    else if repeated question then
        R3 = -2.5
    else
        R3 = -1
    end if

7.2.4 Sentiment reward with repetition and interruption penalties (SRRIP)

We observed in Section 5 that interruption is the most important feature in detecting sentiment, so if an interruption existed in the simulated user input, we assumed it carried negative sentiment and added an additional penalty of -1 to the previous sentiment reward (SRRP) to test the effect of interruptions. 7.5% of turns have interruptions.

Reward 4 SRRIP
    if success then
        R4 = 20
    else if failure then
        R4 = -10
    else
        R4 = R3 (SRRP)
        if interruption then
            R4 = R4 - 1
        end if
    end if

7.3 Experiment results

We evaluated every model on two metrics: dialog length and task success rate. We observe in Figure 1 that all the sentiment reward functions, even SRRS with random samples, reduced the average length of the dialogs, meaning that the system finished the task faster. The rationale is that by adapting to user sentiment, the model can avoid unnecessary system actions, making the system more effective. In terms of success rate, the sentiment reward with both repetition and interruption penalties (SRRIP) performed best (see Figure 2). In Figure 2, SRRIP converges faster than the baseline. For example, around 5000 iterations, it outperforms the baseline by 5% in task success rate (60% vs 55%) with statistical significance (p < 0.01). It also converges to a better task success rate after 10000 iterations (94.3% vs 92.4%, p < 0.01).

Figure 1: Average dialog length of RL models with different reward functions.

Figure 2: Average success rate of the baseline and the best performing model, SRRIP.

We describe all models' performance in Table 6 in terms of the convergent success rate, calculated as the mean success rate after 10000 dialogs. We observe that incorporating the various sentiment rewards improved the success rate and expedited the training process overall with statistical significance. We find that even the sentiment reward with random samples (SRRS) outperformed the baseline after convergence. By adding penalties for repetition, the algorithm covered more data points, and therefore the task success rate and the convergence speed improved. We also find that penalizing interruption and repetition together (SRRIP) achieved slightly better performance than penalizing repetition only (SRRP).

Model      Convergent success rate
Baseline   0.924
SRRS       0.938*
SRRP       0.941*
SRRIP      0.943*

Table 6: Convergent success rate of RL models with different reward functions. It is calculated as the mean success rate after 10000 dialogs. The best result is highlighted in bold. * indicates that the result is significantly better than the baseline (p < 0.01).
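For completeness, a minimal sketch consolidating the per-turn SRRP and SRRIP rewards defined above (the terminal +20 / -10 rewards are omitted). `sample` is a context-matched CleanData record, or None when no real context matched; the boolean flags are assumed to come from the simulated user turn, and the record layout is the same hypothetical one used in the SRRS sketch.

def sentiment_reward(sample):           # same linear combination as in SRRS
    if not any(sample["dialogic_features"]):
        return -1.0
    p_neg, p_neu, p_pos = sample["sentiment_probs"]
    return -5 * p_neg - p_neu + 10 * p_pos

def srrp_reward(sample, repeated_question, repetition_penalty=2.5):
    """Reward 3: penalize repeated system questions when no real match exists."""
    if sample is not None:
        return sentiment_reward(sample)
    return -repetition_penalty if repeated_question else -1.0

def srrip_reward(sample, repeated_question, interrupted):
    """Reward 4: SRRP plus an extra -1 whenever the user interrupts."""
    r = srrp_reward(sample, repeated_question)
    return r - 1.0 if interrupted else r

# Example: unmatched context, the system repeated itself and the user barged in.
print(srrip_reward(None, repeated_question=True, interrupted=True))   # -3.5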
The slight edge of SRRIP over SRRP suggests that interruption is another factor to consider when approximating sentiment. However, the difference between SRRP and SRRIP is not significant. Our guess is that only 7.5% of turns in our data contain interruptions and the penalty is just an extra -1, so the model confused this signal with noise. Given more interruptions in the data, interruptions could still be helpful.

8 Discussion and Future Work

The intuition behind the good performance of models with user sentiment is that the learned policy is in general more sentiment-adaptive. For example, some system actions have the same intention but different surface forms, especially among error-handling strategies. By analyzing the results, we found that when the sentiment-adaptive system detected negative sentiment from the user, it chose to respond with a more detailed error-handling strategy rather than a general one. For example, it chose the template "Where are you leaving from? For example, you can say, <place>", while the baseline model would respond with "Where would you like to leave from?", which, compared with the previous template, doesn't provide details to help the user. When dealing with a disappointed user, providing more details to help the conversation proceed is usually better. One example dialog is shown in Table 7.

Sentiment Adaptive System                                              | Baseline System without Sentiment
SYS: The <route>. Where would you like to leave from?                  | SYS: The <route>. Where would you like to leave from?
USR: Yeah [negative sentiment]                                         | USR: Yeah
SYS: Where are you leaving from? For example, you can say, <place>.    | SYS: Right. Where would you like to leave from?

Table 7: An example dialog by different systems in the supervised learning setting. The sentiment-adaptive system gives a more detailed error-handling strategy than the baseline system.

There were no written rules forcing the model to choose one specific template in certain situations, so the model learned these subtle differences on its own. Some may argue that the system could always use a more detailed template to better guide the user instead of distinguishing between two similar system templates. But this is not necessarily true. Ideally, we want the system to be succinct initially to save users' time, because we observe that users, especially repeat users, tend to interrupt long and detailed system utterances. If the user has attempted to answer the system's question but failed, it is then beneficial to provide detailed guidance.

The performance of the sentiment detector is a key factor in our work. In the future, we therefore plan to incorporate features from more channels, such as vision, to further improve the sentiment predictor's performance and potentially the performance of the dialog system as well. We also want to explore user sentiment simulation further: for example, instead of randomly sampling data for the uncovered cases, we could use linear interpolation to create a similarity score between s_sim and s_real and choose the user utterance with the highest score. Furthermore, reward shaping (Ng et al., 1999; Ferreira and Lefèvre, 2013) is an important technique in RL; specifically, Ferreira and Lefèvre (2013) discussed incorporating expert knowledge into reward design. We also plan to integrate information from different sources into the reward function and apply reward shaping. In addition, creating a good user simulator is very important for RL training, and there are more advanced methods for building user simulators.
For example, Liu and Lane (2017b) described how to optimize the agent and the user simulators jointly using RL. We plan to apply our sentiment reward functions in this framework in the future. 9 Conclusion We proposed to detect user sentiment from multimodal channels and incorporate the detected sentiment as feedback into adaptive end-to-end dialog system training to make the system more effective and user-adaptive. We included sentiment information directly as a context feature in the supervised learning framework and used sentiment scores as immediate rewards in the reinforcement learning setting. Experiments suggest that incorporating user sentiment is helpful in reducing the dialog length and increasing the task success rate in both SL and RL settings. This work proposed an adaptive methodology to incorporate user sentiment in end-to-end dialog policy learning and showed promising results on a bus information search task. We believe this approach can be easily generalized to other domains given its end-to-end training procedure and task independence. Acknowledgments The work is partly supported by Intel Lab Research Gift. References Jaime C Acosta. 2009. Using emotion to gain rapport in a spoken dialog system. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Student Research Workshop and Doctoral Consortium, pages 49–54. Association for Computational Linguistics. Jaime C Acosta and Nigel G Ward. 2011. Achieving rapport with turn-by-turn, user-responsive emotional coloring. Speech Communication, 53(9-10):1137– 1148. Dario Bertero, Farhad Bin Siddique, Chien-Sheng Wu, Yan Wan, Ricky Ho Yin Chan, and Pascale Fung. 2016. Real-time speech emotion and sentiment recognition for interactive dialogue systems. In EMNLP, pages 1042–1047. Antoine Bordes and Jason Weston. 2016. Learning end-to-end goal-oriented dialog. arXiv preprint arXiv:1605.07683. Felix Burkhardt, Markus Van Ballegooy, Klaus-Peter Engelbrecht, Tim Polzehl, and Joachim Stegmann. 2009. Emotion detection in dialog systems: applications, strategies and challenges. In Affective Computing and Intelligent Interaction and Workshops, 2009. ACII 2009. 3rd International Conference on, pages 1–6. IEEE. 1518 Laurence Devillers, Lori Lamel, and Ioana Vasilescu. 2003. Emotion detection in task-oriented spoken dialogues. In Multimedia and Expo, 2003. ICME’03. Proceedings. 2003 International Conference on, volume 3, pages III–549. IEEE. Laurence Devillers, Ioana Vasilescu, and Lori Lamel. 2002. Annotation and detection of emotion in a task-oriented human-human dialog corpus. In proceedings of ISLE Workshop. Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Yun-Nung Chen, Faisal Ahmed, and Li Deng. 2016. End-to-end reinforcement learning of dialogue agents for information access. arXiv preprint arXiv:1609.00777. Mihail Eric and Christopher D Manning. 2017. A copy-augmented sequence-to-sequence architecture gives good performance on task-oriented dialogue. arXiv preprint arXiv:1701.04024. Florian Eyben, Felix Weninger, Florian Gross, and Bj¨orn Schuller. 2013. Recent developments in opensmile, the munich open-source multimedia feature extractor. In Proceedings of the 21st ACM International Conference on Multimedia, MM ’13, pages 835–838, New York, NY, USA. ACM. Emmanuel Ferreira and Fabrice Lef`evre. 2013. Expertbased reward shaping and exploration scheme for boosting policy learning of dialogue management. 
In Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on, pages 108– 113. IEEE. C. M. Lee and Shrikanth Narayanan. 2005. Toward Detecting Emotions in Spoken Dialogs. In IEEE Transactions on Speech and Audio Processing, volume 12, pages 293–303. Esther Levin, Roberto Pieraccini, and Wieland Eckert. 2000. A stochastic model of human-machine interaction for learning dialog strategies. IEEE Transactions on speech and audio processing, 8(1):11–23. Xiujun Li, Zachary C Lipton, Bhuwan Dhingra, Lihong Li, Jianfeng Gao, and Yun-Nung Chen. 2016. A user simulator for task-completion dialogues. arXiv preprint arXiv:1612.05688. Xuijun Li, Yun-Nung Chen, Lihong Li, and Jianfeng Gao. 2017. End-to-end task-completion neural dialogue systems. arXiv preprint arXiv:1703.01008. Jackson Liscombe, Giuseppe Riccardi, and Dilek Hakkani-T¨ur. 2005. Using context to improve emotion detection in spoken dialog systems. In Ninth European Conference on Speech Communication and Technology. Bing Liu and Ian Lane. 2017a. An end-to-end trainable neural network model with belief tracking for taskoriented dialog. arXiv preprint arXiv:1708.05956. Bing Liu and Ian Lane. 2017b. Iterative policy learning in end-to-end trainable task-oriented neural dialog models. arXiv preprint arXiv:1709.06136. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Andrew Y Ng, Daishi Harada, and Stuart Russell. 1999. Policy invariance under reward transformations: Theory and application to reward shaping. In ICML, volume 99, pages 278–287. Tin Lay Nwe, Say Wei Foo, and Liyanage C De Silva. 2003. Speech emotion recognition using hidden markov models. Speech communication, 41(4):603– 623. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830. Johannes Pittermann, Angela Pittermann, and Wolfgang Minker. 2010. Emotion recognition and adaptation in spoken dialogue systems. International Journal of Speech Technology, 13(1):49–60. Antoine Raux, Brian Langner, Dan Bohus, Alan W Black, and Maxine Eskenazi. 2005. Lets go public! taking a spoken dialog system to the real world. In in Proc. of Interspeech 2005. Citeseer. Bj¨orn Schuller, Gerhard Rigoll, and Manfred Lang. 2003. Hidden markov model-based speech emotion recognition. In Multimedia and Expo, 2003. ICME’03. Proceedings. 2003 International Conference on, volume 1, pages I–401. IEEE. Bj¨orn Schuller, Stefan Steidl, Anton Batliner, Alessandro Vinciarelli, Klaus Scherer, Fabien Ringeval, Mohamed Chetouani, Felix Weninger, Florian Eyben, Erik Marchi, et al. 2013. The interspeech 2013 computational paralinguistics challenge: social signals, conflict, emotion, autism. In Proceedings INTERSPEECH 2013, 14th Annual Conference of the International Speech Communication Association, Lyon, France. Minjoon Seo, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Query-regression networks for machine comprehension. arXiv preprint arXiv:1606.04582. Satinder Singh, Diane Litman, Michael Kearns, and Marilyn Walker. 2002. Optimizing dialogue management with reinforcement learning: Experiments with the njfun system. Journal of Artificial Intelligence Research, 16:105–133. 
Pei-Hao Su, David Vandyke, Milica Gasic, Dongho Kim, Nikola Mrksic, Tsung-Hsien Wen, and Steve 1519 Young. 2015. Learning from real users: Rating dialogue success with neural networks for reinforcement learning in spoken dialogue systems. arXiv preprint arXiv:1508.03386. Michel Tokic. 2010. Adaptive "-greedy exploration in reinforcement learning based on value differences. In Annual Conference on Artificial Intelligence, pages 203–210. Springer. Stefan Ultes, Paweł Budzianowski, Inigo Casanueva, Nikola Mrkˇsic, Lina Rojas-Barahona, Pei-Hao Su, Tsung-Hsien Wen, Milica Gaˇsic, and Steve Young. 2017. Domain-independent user satisfaction reward estimation for dialogue policy learning. In Proc. Interspeech, pages 1721–1725. Tsung-Hsien Wen, David Vandyke, Nikola Mrksic, Milica Gasic, Lina M Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2016. A networkbased end-to-end trainable task-oriented dialogue system. arXiv preprint arXiv:1604.04562. Jason Williams, Kavosh Asadi, and Geoffrey Zweig. 2017. Hybrid code networks: Practical and efficient end-to-end dialog control with supervised and reinforcement learning. In Proceedings of 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017). Association for Computational Linguistics. Jason D Williams and Steve Young. 2007. Partially observable markov decision processes for spoken dialog systems. Computer Speech & Language, 21(2):393–422. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256. Zhou Yu, Alan W Black, and Alexander I Rudnicky. 2017. Learning conversational systems that interleave task and non-task content. IJCAI. Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701. Tiancheng Zhao and Maxine Eskenazi. 2016. Towards end-to-end learning for dialog state tracking and management using deep reinforcement learning. arXiv preprint arXiv:1606.02560. Yuke Zhu. 2017. tensorflow-reinforce. https://github.com/yukezhu/ tensorflow-reinforce.
2018
140
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1520–1530 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1520 Embedding Learning Through Multilingual Concept Induction Philipp Dufter1, Mengjie Zhao2, Martin Schmitt1, Alexander Fraser1, Hinrich Sch¨utze1 1 Center for Information and Language Processing (CIS) LMU Munich, Germany 2 ´Ecole Polytechnique F´ed´erale de Lausanne, Switzerland {philipp,martin,fraser}@cis.lmu.de, [email protected] Abstract We present a new method for estimating vector space representations of words: embedding learning by concept induction. We test this method on a highly parallel corpus and learn semantic representations of words in 1259 different languages in a single common space. An extensive experimental evaluation on crosslingual word similarity and sentiment analysis indicates that concept-based multilingual embedding learning performs better than previous approaches. 1 Introduction Vector space representations of words are widely used because they improve performance on monolingual tasks. This success has generated interest in multilingual embeddings, shared representation of words across languages (Klementiev et al., 2012). Such embeddings can be beneficial in machine translation in sparse data settings because multilingual embeddings provide meaning representations of source and target in the same space. Similarly, in transfer learning, models trained in one language on multilingual embeddings can be deployed in other languages (Zeman and Resnik, 2008; McDonald et al., 2011; Tsvetkov et al., 2014). Automatically learned embeddings have the added advantage of requiring fewer resources for training (Klementiev et al., 2012; Hermann and Blunsom, 2014b; Guo et al., 2016). Thus, massively multilingual word embeddings (i.e., covering 100s or 1000s of languages) are likely to be important in NLP. The basic information many embedding learners use is word-context information; e.g., the embedding of a word is optimized to predict a representation of its context. We instead learn emH?m,FØ BmK,mK ?F,bȹB +v,ivBǶ #Bb,rQi DB,`?ďď ?F,+m^@?Ê iTB,r` b;,M;m KBQ,M/mi Figure 1: Example of a CLIQUE concept: “water” beddings from word-concept information. As a first approximation, a concept is a set of semantically similar words. Figure 1 shows an example concept and also indicates one way we learn concepts: we interpret cliques in the dictionary graph as concepts. The nodes of the dictionary graph are words, its edges connect words that are translations of each other. A dictionary node has the form prefix:word, e.g., “tpi:wara” (upper left node in the figure). The prefix is the ISO 639-3 code of the language; tpi is Tok Pisin. Our method takes a parallel corpus as input and induces a dictionary graph from the parallel corpus. Concepts and word-concept pairs are then induced from the dictionary graph. Finally, embeddings are learned from word-concept pairs. A key application of multilingual embeddings is transfer learning. Transfer learning is mainly of interest if the target is resource-poor. We therefore select as our dataset 1664 translations in 1259 languages of the New Testament from PBC, the Parallel Bible Corpus. Since “translation” is an ambiguous word, we will from now on refer to the 1664 translations as “editions”. PBC is aligned 1521 English King James Version (KJV) German Elberfelder 1905 Spanish Americas And he said , Do it the second time . 
And they did it the second time . . . Und er sprach : F¨ullet vier Eimer mit Wasser , und gießet es auf das Brandopfer und auf das Holz . Und er sprach : Tut es zum zweiten Male ! Und sie taten es zum zweiten Male . . . Y dijo : Llenad cuatro c´antaros de agua y derramadla sobre el holocausto y sobre la le˜na . Despu´es dijo : Hacedlo por segunda vez ; y lo hicieron por segunda vez . . . Table 1: Instances of verse 11018034. This multi-sentence verse is an example of verse misalignment. on the verse level; most verses consist of a single sentence, but some contain several (see Table 1). PBC is a good model for resource-poverty; e.g., the training set (see below) of KJV contains fewer than 150,000 tokens in 6458 verses. We evaluate multilingual embeddings on two tasks, roundtrip translation (RT) and sentiment analysis. RT on the word level is – to our knowledge – a novel evaluation method: a query word w of language L1 is translated to its closest (with respect to embedding similarity) neighbor v in L2 and then backtranslated to its closest neighbor w′ in L1. RT is successful if w = w′. There are well-known concerns about RT when it is used in the context of machine translation. A successful roundtrip translation does not necessarily imply that v is of high quality and it is not possible to decide whether an error occurred in the forward or backward translations. Despite these concerns about RT on the sentence level, we show that RT on the word level is a difficult task and an effective measure of embedding quality. Contributions. (i) We introduce a new embedding learning method, multilingual embedding learning through concept induction. (ii) We show that this new concept-based method outperforms previous approaches to multilingual embeddings. (iii) We propose both word-level and characterlevel dictionary induction methods and present evidence that concepts induced from word-level dictionaries are better for easily tokenizable languages and concepts induced from character-level dictionaries are better for difficult-to-tokenize languages. (iv) We evaluate our methods on a corpus of 1664 editions in 1259 languages. To the best of our knowledge, this is the first detailed evaluation, involving challenging tasks like word translation and crosslingual sentiment analysis, that has been done on such a large number of languages. 2 Methods 2.1 Pivot languages Most of our methods are based on bilingual dictionary graphs. With 1664 editions, it is computationally expensive to consider all editions simultaneously (more than 106 dictionaries). Thus we split the set of editions in 10 pivot and 1654 remaining editions, and do not compute nor use dictionaries within the 1654 editions. We refer to the ten pivot editions as pivot languages and give them a distinct role in concept induction. We refer to all editions (including pivot editions) as target editions. Thus, a pivot edition has two roles: as a pivot language and as a target edition. We select the pivot languages based on their sparseness. Sparseness is a challenge in NLP. In the case of embeddings, it is hard to learn a high-quality embedding for any infrequent word. Many of the world’s languages (including many PBC languages) exhibit a high degree of sparseness. But some languages suffer comparatively little from sparseness when simple preprocessing like downcasing and splitting on whitespace is employed. A simple measure of sparseness that affects embedding learning is the number of types. 
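A minimal sketch of such a type-count measure follows: sample verses from an edition and count word types after simple preprocessing (downcasing, whitespace tokenization). The verse-dictionary data layout is an assumption for illustration.

import random

def num_types(edition_verses, sample_size=5000, seed=0):
    """edition_verses: dict mapping verse id -> verse text for one edition."""
    rng = random.Random(seed)
    ids = sorted(edition_verses)
    sample = rng.sample(ids, min(sample_size, len(ids)))
    types = set()
    for vid in sample:
        types.update(edition_verses[vid].lower().split())
    return len(types)

# Editions can then be ranked by this count; the ten with the fewest types
# (Table 2) serve as pivot languages.
toy_edition = {"40001001": "Buk bilong lain tumbuna bilong Jisas Kraist"}
print(num_types(toy_edition, sample_size=1))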
Fewer types is better since their average frequency will be higher. Table 2 shows the ten languages in PBC that have the smallest number of types in 5000 randomly selected verses. We randomly sample 5000 verses per edition and compare the number of types based on this selection because most editions do not contain a few of the selected 6458 verses. 2.2 Character-level modeling (CHAR) We will see that tokenization-based models have poor performance on a subset of the 1259 languages. To overcome tokenization problems, we represent a verse of length m bytes, as a sequence of m −(n −1) + 2 overlapping byte n-grams. In this paper, “n-gram” always refers to “byte ngram”. We pad the verse with initial and final space, resulting in two additional n-grams (hence “+2”). This representation is in the spirit of earlier byte-level processing, e.g., (Gillick et al., 2016). There are several motivations for this. (i) We can take advantage of byte-level generalizations. (ii) This is robust if there is noise in the byte encoding. (iii) Characters have different properties in different languages and encodings, e.g., English 1522 iso name family; (example) region types tokens lhu Lahu Sino-Tibetan; Thailand 1452 268 ahk Akha Sino-Tibetan; China 1550 315 hak Hakka Chinese Chinese; China 1596 242 ium Iu Mien Hmong-Mien; Laos 1779 191 tpi Tok Pisin Creole; PNG 1815 177 mio Pinotepa Mixtec Oto-Manguean; Oaxaca 1828 208 cya Highland Chatino Oto-Manguean; Oaxaca 1868 231 bis Bislama Creole; Vanuatu 1872 226 aji Aji¨e Austronesian; Houa¨ılou 1876 194 sag Sango Creole; Central Africa 1895 192 Table 2: Our ten pivot languages, the languages in PBC with the lowest number of types. Tokens in 1000s. Tok Pisin and Bislama are English-based and Sango is a Ngbandi-based creole. PNG = Papua New Guinea UTF-8 has properties different from Chinese UTF8. Thus, universal language processing is easier to design on the byte level. We refer to this ngram representation as CHAR and to standard tokenization as WORD. 2.3 Dictionary induction Alignment-based dictionary. We use fastalign (Dyer et al., 2013) to compute word alignments and use GDFA for symmetrization. All alignment edges that occurred at least twice are added to the dictionary graph. Initial experiments indicated that alignment-based dictionaries have poor quality for CHAR, probably due to the fact that overlapping ngram representations of sentences have properties quite different from the tokenized sentences that aligners are optimized for. Thus we use this dictionary induction method only for WORD and developed the following alternative for CHAR. Correlation-based dictionary (χ2). χ2 is a greedy algorithm, shown in Figure 2, that selects, in each iteration, the pair of units that has the highest χ2 score for cooccurrence in verses. Each selected pair is added to the dictionary and removed from the corpus. Low-frequency units are selected first and high-frequency units last; this prevents errors due to spurious association of highfrequency units with low-frequency units. We perform dmax = 5 passes; in each pass, the maximum degree of a dictionary node is 1 ≤d ≤dmax. So if the node has reached degree d, it is ineligible for additional edges during this pass. Again, this avoids errors due to spurious association of highfrequency units that already participate in many Algorithm 1 χ2-based dictionary induction 1: procedure DICTIONARYGRAPH(C) 2: A = all-edges(C), E = [] 3: for d ∈[1, 2, . . . 
, dmax] do 4: fmax = 2 5: while fmax ≤|C| do 6: fmin = max(min(5, fmax), 1 10fmax) 7: (χ2, s, t) = max-χ2-edge(A, fmin, fmax, d) 8: if χ2 < χmin then 9: fmax = fmax + 1; continue 10: end if 11: T = extend-ngram(A, fmin, fmax, d, s, t) 12: append(E, s, T) 13: remove-edges(A, s, T) 14: end while 15: end for 16: return dictionary-graph = (nodes(E), E) 17: end procedure Figure 2: χ2-based dictionary induction. C is a sentence-aligned corpus. A is initialized to contain all edges, i.e., the fully connected bipartite graph for each parallel verse. E collects the selected dictionary edges. d is the edge degree: in each pass through the loop only edges are considered whose participating units have a degree less than d. f max is the maximum frequency during this pass. |C| is the number of sentences in the corpus. extend-ngram extends a target ngram to left / right; e.g., if s = “jisas” is aligned with ngram t = “Jesu” in English, then “esus” is added to T. t is always a member of T. remove-edges removes edges in A between s and a member of T. edges with low-frequency units. Recall that this method is only applied for CHAR. Intra-pivot dictionary. We assume that pivot languages are easily tokenizable. Thus we only consider alignment-based dictionaries (in total 45) within the set of pivot languages. Pivot-to-target dictionary. We compute an alignment-based and a χ2-based dictionary between each pivot language and each target edition, yielding a total of 10*1664 dictionaries per dictionary type. (Note that this implies that, for χ2, the WORD version of the pivot language is aligned with its CHAR version.) 2.4 Concepts A concept is defined as a set of units that has two subsets: (i) a defining set of words from the ten pivot languages and (ii) a set of target units (words or n-grams) that are linked, via dictionary edges, 1523 Algorithm 2 CLIQUE concept induction 1: procedure CONCEPTS(I ∈Rn×n, θ, ν) 2: G = ([n], {(i, j) ∈[n] × [n] | Iij > θ}) 3: cliques = get-cliques(G, 3) 4: Gc := (Vc, Ec) = (∅, ∅) 5: for c1, c2 ∈cliques × cliques do 6: if |c1 ∩c2| ≥ν min{|c1|, |c2|} then 7: Vc = Vc ∪{c1, c2}, Ec = Ec ∪{(c1, c2)} 8: end if 9: end for 10: metacliques = get cliques(Gc, 1) 11: concepts = {flatten(c) | c ∈metacliques} 12: return concepts 13: end procedure Figure 3: CLIQUE concept induction. I is a normalized adjacency matrix of a dictionary graph (i.e., relative frequency of alignment edges with respect to possible alignment edges). get-cliques(G, n) returns all cliques in G of size greater or equal to n. flatten(A) flattens a set of sets. [n] denotes {1, 2, . . . , n}. θ = 0.4, ν = 0.6. to the pivot subset. We selected the ten “easiest” of the 1664 editions as pivot languages. Our premise is that semantic information is encoded in a simply accessible form in the pivot languages and so they should offer a good basis for learning concepts. We induce concepts from the dictionary graph, a multipartite graph consisting of ten pivot language node/word sets and all target edition node/unit sets (where units are words or n-grams). Edges either connect pivot nodes with other pivot nodes or pivot nodes with target units. 2.4.1 CLIQUE concept induction If concepts corresponded to each other in the overtly coding pivot languages, if words were not ambiguous and if alignments were perfect, then concepts would be cliques in the pivot part of the dictionary graph. These conditions are too strict for natural languages, so we relax them in our CLIQUE concept induction algorithm (Figure 3). 
The algorithm identifies maximal multilingual cliques (size ≥3) within the dictionary graph of the pivot languages and then merges two cliques if they share enough common words. The merging lets us identify clique-based concepts even if, e.g., a dictionary edge between two words is missing. It also accommodates the situation where more than one word of a pivot language should be part of a concept. The merging step can also be interpreted as metaconcept induction. Once we have identified the cliques, we project N(t) ={bis:Jorim, ium:yo-lim, sag:Yorim, tpi:Jorim} t∈T ={ac0:Yorim,atg0:iJorimu,bav0:Jorim,bom0:Yorim, dik0:Jorim, dtp0:Yorim, duo0:Jorim, eng1:Jorim, engb:Jorim, fij2:Lorima, fij3:Jorima, gor0:Yorim, hvn0:Yorim, ibo0:Jorim, iri0:Jorri, kmr0:Yorˆım, ksd0:Iorim, kwd0:Jorim, lia0:Yorimi, loz0:Jorimi, mbd0:Hurim, mfh0:Yorim, min0:Yorim, mrw0:Yorim,mse0:Jorimma,naq0:Jorimmi, smo1:Iorimo, srn1:Yorim, tsn2:Jorime, yor2:J´or´ım`u} Figure 4: Target neighborhood concept example: N(t) ∪T. N(t) is the target neighborhood for each of the target words in T. them to the target editions: a target-unit is added to a clique if it is connected to a proportion ν = 0.6 of its member words (to allow for missing edges). This identifies around 150k clique concepts that cover around 8k of the total vocabulary of 24k English words (WORD). As an alternative to cliques, Ammar et al. (2016) use connected components (CCs). The reachability relation (induced by CC) is the transitive closure of the edge relation. This results in semantically unrelated words being in the same concept for very low levels of noise. In contrast, cliques are more “strict”: only node subsets are considered whose corresponding edge relation is already transitive (or almost so for ν = 0.6). Transitivity across languages often does not hold in alignments or dictionaries; see, e.g., Simard (1999). This is why we only consider cliques (which reflect already existent transitivity) rather than CCs, which impose transitivity where it does not hold naturally. 2.4.2 N(t) (target neighborhood) concept induction Let N(t) be the neighborhood of target node t in the multipartite dictionary graph, i.e., the set of pivot words that are linked to t. We refer to N(t) as target neighborhood. Figure 4 shows an example of such a target neighborhood, the set N(t) consisting of four words.1 A target neighborhood concept consists of a set T of pivot words and all target words t for which T = N(t) holds. Motivation. Suppose N(t) = N(u) for target nodes t and u from two different languages and |N(t)| covers several pivot languages, e.g., |N(t)| = |N(u)| = 4 as in the figure. Again, if units closely corresponded to concepts, if there were no ambiguity, if the dictionary were perfect, 1We use numbers and lowercase letters at the fourth position of the prefix to distinguish different editions in the same language, e.g., “0”, “3” and “e” in “ace0”, “fij3”, “enge”. 1524 then we could safely conclude that the meanings of t and u are similar; if the meanings of t and u were unrelated, it is unlikely that they would be aligned to the exact same words in four different languages. In reality, there is no exact meaningform correspondence, there is ambiguity and the dictionary is not perfect. Still, we will see below that defining concepts as target neighborhoods works well. 
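A minimal sketch of target-neighborhood concept induction follows: every target unit is keyed by the set of pivot words it is linked to in the dictionary graph, and all units sharing exactly the same neighborhood N(t) form one concept together with that pivot set. The edge-list input format (pivot_word, target_unit) is an illustrative assumption.

from collections import defaultdict

def target_neighborhood_concepts(dictionary_edges):
    neighborhoods = defaultdict(set)        # target unit t -> N(t), its set of pivot words
    for pivot_word, target_unit in dictionary_edges:
        neighborhoods[target_unit].add(pivot_word)
    concepts = defaultdict(set)             # pivot set T -> {t : N(t) = T}
    for target_unit, pivots in neighborhoods.items():
        concepts[frozenset(pivots)].add(target_unit)
    # each concept = its defining pivot words plus all target units sharing that N(t)
    return {pivots: set(pivots) | targets for pivots, targets in concepts.items()}

# Toy example in the spirit of Figure 4.
edges = [
    ("tpi:Jorim", "eng1:Jorim"), ("sag:Yorim", "eng1:Jorim"),
    ("tpi:Jorim", "fij3:Jorima"), ("sag:Yorim", "fij3:Jorima"),
]
for concept in target_neighborhood_concepts(edges).values():
    print(sorted(concept))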
2.4.3 Filtering target neighborhood concepts In contrast to CLIQUE, we do not put any constraint on the pivot-to-pivot connections within target neighborhoods; e.g., in Figure 4, we do not require that “bis:Jorim” and “sag:Yorim” are connected by an edge. We evaluate three postfiltering steps of target neighborhoods to increase their quality: restricting target neighborhoods to those that are cliques in N(t)-CLIQUE; to those that are connected components in N(t)-CC; and to those of size two that are valid edges in the dictionary in N(t)-EDGE. For N(t)-EDGE, we found that taking all edges performs well, so we also consider edges that are proper subsets of target neighborhoods. 2.5 Embedding learning We adopt the framework of embedding learning algorithms that define contexts and then sample pairs of an input word (more generally, an input unit) and a context word (more generally, a context unit) from each context. The only difference is that our contexts are concepts. For simplicity, we use word2vec (Mikolov et al., 2013a) as the implementation of this model.2 2.6 Baselines Baselines for multilingual embedding learning. One baseline is inspired by (Vuli´c and Moens, 2015). We consider words of one aligned verse in the pivot languages and one target language as a bag of words (BOW) and consider this bag as a context.3 Levy et al. (2017) show that sentence ID features (interpretable as an abstract representation of the word’s context) are effective. We use a corpus with lines consisting of pairs of an identifier of a 2We use code.google.com/archive/p/word2vec 3The actual implementation slightly differs to avoid very long lines. It does only consider two pivot languages at a time, but writes each verse multiple times. verse and a unit extracted from that verse as input to word2vec and call this baseline S-ID. Lardilleux and Lepage (2009) propose a simple and efficient baseline: sample-based concept induction. Words that strictly occur in the same verses are assigned to the same concept. To increase coverage, they propose to sample many different subcorpora.4 We induce concepts using this method and project them analogous to CLIQUE. We call this baseline SAMPLE. One novel contribution of this paper is roundtrip evaluation of embeddings. We learn embeddings based on a dictionary. The question arises: are the embeddings simply reproducing the information already in the dictionary or are they improving the performance of roundtrip search? As a baseline, we perform RTSIMPLE, a simple dictionary-based roundtrip translation method. Retrieve the pivot word p in pivot language Lp (i.e., p ∈Lp) that is closest to the query q ∈Lq. Retrieve the target unit t ∈Lt that is closest to p. Retrieve the pivot word p′ ∈Lp that is closest to t. Retrieve the unit q′ ∈Lq that is closest to p′. If q = q′, this is an exact hit. We run this experiment for all pivot and target languages. Note that roundtrip evaluation tests the capability of a system to go from any language to any other language. In an embedding space, this requires two hops. In a highly multilingual dataset of n languages in which not all O(n2) bilingual dictionaries exist, this requires four hops. 3 Experiments and results 3.1 Data We use PBC (Mayer and Cysouw, 2014). The version we pulled on 2017-12-11 contains 1664 Bible editions in 1259 languages (based on ISO 639-3 codes) after we discarded editions that have low coverage of the New Testament. We use 7958 verses that have good coverage in these 1664 editions. 
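A minimal sketch of how the verse-aligned editions can be organized, how well-covered verses can be selected, and how the train/test split can be drawn. The file naming and the tab-separated "verse_id<TAB>text" layout are assumptions for illustration; PBC's actual distribution format may differ, and the coverage threshold is a hypothetical parameter.

import random

def load_edition(path):
    verses = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            verse_id, _, text = line.rstrip("\n").partition("\t")
            if text:
                verses[verse_id] = text
    return verses

def well_covered_verses(editions, min_coverage=0.9):
    """Keep verse ids present in at least min_coverage of all editions."""
    counts = {}
    for verses in editions.values():
        for vid in verses:
            counts[vid] = counts.get(vid, 0) + 1
    threshold = min_coverage * len(editions)
    return sorted(vid for vid, c in counts.items() if c >= threshold)

def train_test_split(verse_ids, n_test=1500, seed=0):
    ids = list(verse_ids)
    random.Random(seed).shuffle(ids)
    return ids[n_test:], ids[:n_test]     # train, test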
The data is verse aligned; a verse of the New Testament can consist of multiple sentences. We randomly split verses 6458/1500 into train/test. 3.2 Evaluation For sentiment analysis, we represent a verse as the IDF-weighted sum of its embeddings. Sentiment classifiers (linear SVMs) are trained on the training set of the World English Bible edition 4We use this implementation: anymalign.limsi.fr 1525 for the two decision problems positive vs. nonpositive and negative vs. non-negative. We create a silver standard by labeling verses in English editions with the NLTK (Bird et al., 2009) sentiment classifier. A positive vs. negative classification is not reasonable for the New Testament because a large number of verses is mixed, e.g., “Now is come salvation . . . the power of his Christ: for the accuser . . . cast down, which accused them before our God . . . ” Note that this verse also cannot be said to be neutral. Splitting the sentiment analysis into two subtasks (“contains positive sentiment: yes/no” and “contains negative sentiment: yes/no”) is an effective solution for this paper. The two trained models are then applied to the test set of all 1664 editions. All embeddings in this paper are learned on the training set only. So no test information was used for learning the embeddings. Roundtrip translation. There are no gold standards for the genre of our corpus (the New Testament); for only a few languages out-of-domain gold standards are available. Roundtrip evaluation is an evaluation method for multilingual embeddings that can be applied if no resources are available for a language. Loosely speaking, for a query q in a query language Lq (in our case English) and a target language Lt, roundtrip translation finds the unit wt in Lt that is closest to q and then the English unit we that is closest to wt. If the semantics of q and we are identical (resp. are unrelated), this is deemed evidence for (resp. counterevidence against) the quality of the embeddings. We work on the level of Bible edition, i.e., two editions in the same language are considered different “languages”. For a query q, we denote the set of its kI nearest neighbors in the target edition e by Ie(q) = {u1, u2, . . . , ukI}. For each intermediate entry we then consider its kT nearest neighbors in English. Overall we get a set Te(q) with kIkT predictions for each intermediate Bible edition e. See Figure 5 for an example. We evaluate the predictions Te(q) using two sets Gs(q) (strict) and Gr(q) (relaxed) of ground-truth semantic equivalences in English. Precision for a query q is defined as pi(q) := 1/|E| P e∈E min{1, |Te(q) ∩Gi(q)|} where E is the set of all Bible editions and i ∈ {s, r}. We report the mean and median across a interquery mediate predictions woman ⇒mujer ⇒wife woman women widows daughters daughter marry married ⇒esposa ⇒marry wife woman married marriage virgin daughters bridegroom Figure 5: Roundtrip translation example for KJV and Americas Bible (Spanish). In this example min{1, |Te(q) ∩Gi(q)|} equals 0 for S1 and R1, and 1 for S4 and S16. connu(3), connais(3), connaissent(3), savez(2), sachant(2), sait(2), sachiez(2), savoir, sc¸ai, ignorez, connaissiez, sache connaissez, connaissais, savent, savaient, connoissez, connue, reconnaˆıtrez, sais, connaissant, savons, connaissait, savait Figure 6: Intermediates aggregated over 17 French editions. q=“know”, N(t) embeddings, S16. Intermediates are correct with two possible exceptions: “ignorez” ‘you do not know’; “reconnaˆıtrez” ‘you recognize’. 
set of 70 queries selected from Swadesh (1946)’s list of 100 universal linguistic concepts. We create Gs and Gr as follows. For WORD, we define Gs(q) = {q} and Gr(q) = L(q) where L(q) is the set of words with the same lemma and POS as q. For CHAR, we need to find ngrams that correspond uniquely to the query q. Given a candidate ngram g we consider cqg := 1/c(g) P q′∈L(q),substring(g,q′) c(q′) where c(x) is the count of character sequence x across all editions in the query language. We add g to Gi(q) if cqg > σi where σs = .75 and σr = .5. We only consider queries where Gs(q) is non-empty. We vary the evaluation parameters (i, kI, kT ) as follows: “S1” represents (s, 1, 1), “S4” (s, 2, 2), “S16” (s, 2, 8), and “R1” (r, 1, 1). 3.3 Corpus generation and hyperparameters We train with the skipgram model and set vector dimensionality to 200; word2vec default parameters are used otherwise. Each concept – the union of a set of pivot words and a set of target units linked to the pivot words – is written out as a line or (if the set is large) as a sequence of shorter lines. Training corpus size is approximately 50 GB for all experiments. We write several copies of each line (shuffling randomly to ensure lines are different) where the multiplication factor is chosen to result in an overall corpus size of approximately 50 GB. There are two exceptions. For BOW, we did not find a good way of reducing the corpus size, so this 1526 roundtrip translation sentiment analysis WORD CHAR WORD CHAR S1 R1 S4 S16 S1 R1 S4 S16 µ Md µ Md µ Md µ Md N µ Md µ Md µ Md µ Md N pos neg pos neg 1 RTSIMPLE 33 24 37 36 67 24 13 32 21 70 2 BOW 7 5 8 7 13 12 26 28 69 3 2 3 2 5 4 10 11 70 33 81 13 83 3 S-ID 46 46 52 55 63 76 79 91 65 9 5 9 5 14 9 25 22 70 79 88 65 86 4 SAMPLE 33 23 43 42 54 59 82 96 65 53 59 59 72 67 85 79 99 58 82 89 77 89 5 CLIQUE 43 36 59 63 67 77 93 99 69 42 46 48 55 60 76 73 98 53 84 89 69 88 6 N(t) 54 59 61 69 80 87 94 100 69 50 53 54 59 73 82 90 99 66 82 89 87 90 7 N(t)-CLIQUE 11 0 11 0 16 0 22 0 18 39 45 41 47 58 74 76 94 56 22 84 61 84 8 N(t)-CC 3 0 3 0 5 0 7 0 5 11 0 11 0 16 0 25 0 21 4 84 40 83 9 N(t)-EDGE 35 30 43 36 56 55 87 94 69 39 29 49 52 64 78 88 100 63 84 90 84 89 Table 3: Roundtrip translation (mean/median accuracy) and sentiment analysis (F1) results for wordbased (WORD) and character-based (CHAR) multilingual embeddings. N (coverage): # queries contained in the embedding space. The best result across WORD and CHAR is set in bold. corpus is 10 times larger than the others. For SID, we use Levy et al. (2017)’s hyperparameters; in particular, we trained for 100 iterations and we wrote each verse-unit pair to the corpus only once, resulting in a corpus of about 4 GB. We set the n parameter of n-grams to n = 4 for Bible editions with ρ < 2, n = 8 for Bible editions with 2 ≤ρ < 3 and n = 12 for Bible editions with ρ ≥3 where ρ is the ratio between size in bytes of the edition and median size of the 1664 editions. In χ2 dictionary induction, we set χmin = 100. In the concept induction algorithm we set θ = 0.4 and ν = 0.6. Except for SAMPLE and CLIQUE, we filter out hapax legomena. 3.4 Results Table 3 presents evaluation results for roundtrip translation and sentiment analysis. Validity of roundtrip (RT) evaluation results. RTSIMPLE (line 1) is not competitive; e.g., its accuracy is lower by almost half compared to N(t). We also see that RT is an excellent differentiator of poor multilingual embeddings (e.g., BOW) vs. higher-quality ones like S-ID and N(t). 
This indicates that RT translation can serve as an effective evaluation measure. The concept-based multilingual embedding learning algorithms CLIQUE and N(t) (lines 5-6) consistently (except S1 WORD) outperform BOW and S-ID (lines 2-3) that are not based on concepts. BOW performs poorly in our low-resource setting; this is not surprising since BOW methods rely on large datasets and are therefore expected to fail in the face of severe sparseness. S-ID performs reasonably well for WORD, but even in that case it is outperformed by N(t), in some cases by a large margin, e.g., µ of 63 for S-ID vs. 80 for N(t) for S4. For CHAR, S-ID results are poor. On sentiment classification, N(t) also consistently outperforms S-ID. While S-ID provides a clearer signal to the embedding learner than BOW, it is still relatively crude to represent a word as – essentially – its binary vector of verse occurrence. Concept-based methods perform better because they can exploit the more informative dictionary graph. Comparison of graph-theoretic definitions of concepts: N(t)-CLIQUE, N(t)-CC. N(t) (line 6) has the most consistent good performance across tasks and evaluation measures. Postfiltering target neighborhoods down to cliques (line 7) and CCs (line 8) does not work. The reason is that the resulting number of concepts is too small; see, e.g., low coverages of N = 18 (N(t)-CLIQUE) and N = 5 (N(t)-CC) for WORD and N = 21 (N(t)-CC) for CHAR. N(t)-CLIQUE results are highly increased for CHAR, but still poorer by a large margin than the best methods. We can interpret this result as an instance of a precision-recall tradeoff: presumably the quality of the concepts found by N(t)-CLIQUE and N(t)-CC is better (higher precision), but there are too few of them (low recall) to get good evaluation numbers. Comparison of graph-theoretic definitions of concepts: CLIQUE. CLIQUE has strong performance for a subset of measures, e.g., ranks consistently second for RT (except S1 WORD) and sentiment analysis in WORD. Although CLIQUE is perhaps the most intuitive way of inducing a concept from a dictionary graph, it may suffer in relatively high-noise settings like ours. Comparison of graph-theoretic definitions of concepts: N(t) vs. N(t)-EDGE. Recall that N(t)-EDGE postfilters target neighborhoods by 1527 Page 1 of 1 extokenise 07/05/2018, 16:31 [ksw] ဒ"#တ◌"ကမၣ◌်လၢအပာ်လၢယလိၤခဲကနံၣ◌်အံၤ⋆, ⋆ထu#ပ(◌ၤအ3 ၣ◌်အသးတန့"ဘၣ◌်⋆. [cso] Hi³⋆sa³jun³⋆lɨ́¹³⋆ma³tson²⋆tsú²⋆ lɨ³ua³⋆cáun²⋆tso³⋆ñí¹⋆hná¹⋆nɨ́²⋆. [eng] Neither⋆can⋆they⋆prove⋆the⋆things⋆ whereof⋆they⋆now⋆accuse⋆me⋆. Figure 7: Verse 44024013. “*” = tokenization boundary. S’gaw Karen (ksw) is difficult to tokenize and CHAR > WORD for N(t). Chinanteco de Sochiapan (cso) has few types, similar to a pivot language, and CHAR < WORD for N(t). N(t) S-ID SAMPLE CLIQUE [CHAR] [WORD] [WORD] [WORD] iso ∆ iso ∆ iso ∆ iso ∆ arb1 54 pua0 61 jpn1 42 mya2 38 arz0 53 sun2 54 khm2 40 jpn1 36 cop3 49 jpn1 53 cap2 40 khm3 34 srp0 44 khm3 53 khm3 40 bsn0 28 cop2 44 khm2 50 mya2 39 khm2 27 . . . . . . ... ... ... ... ... . . . pis0 -23 vie7 -24 eng8 -7 haw0 -22 pcm0 -23 kri0 -25 enm1 -9 eng4 -23 ksw0 -24 tdt0 -27 lzh2 -9 enm2 -26 lzh2 -41 eng2 -27 eng4 -12 enm1 -26 lzh1 -51 vie6 -29 lzh1 -13 engj -28 Table 4: Comparison of N(t)[WORD] with four other methods. Difference in mean performance (across queries) in R1 per edition. Positive number means better performance of N(t)[WORD]. only considering pairs of pivot words that are linked by a dictionary edge. 
This “quality” filter does seem to work in some cases, e.g., best performance S16 Md for CHAR. But results for WORD are much poorer. SAMPLE performs best for CHAR: best results in five out of eight cases. However, its coverage is low: N = 58. This is also the reason that it does not perform well on sentiment analysis for CHAR (F1 = 77 for pos). Target neighborhoods N(t). The overall best method is N(t). It is the best method more often than any other method and in the other cases, it ranks second. This result suggests that the assumption that two target units are semantically similar if they have dictionary edges with exactly the same set of pivot words is a reasonable approximation of reality. Postfiltering by putting constraints on eligible sets of pivot words (i.e., the pivot words themselves must have a certain dictionary link structure) does not consistently improve upon target neighborhoods. WORD vs. CHAR. For roundtrip, WORD is a better representation than CHAR if we just count the bold winners: seven (WORD) vs. three (CHAR), with two ties. For sentiment, the more difficult task is pos and for this task, CHAR is better by 3 points than WORD (F1 = 87, line 6, vs. F1 = 84, lines 9/5). However, Table 4 shows that CHAR < WORD for one subset of editions (exemplified by cso in Figure 7) and CHAR > WORD for a different subset (exemplified by ksw). So there are big differences between CHAR and WORD in both directions, depending on the language. For some languages, WORD performs a lot better, for others, CHAR performs a lot better. We designed RT evaluation as a word-based evaluation that disfavors CHAR in some cases. The fourgram “ady@” in the World English Bible occurs in “already” (32 times), “ready” (31 times) and “lady” (9 times). Our RT evaluation thus disqualifies “ady@” as a strict match for “ready”. But all 17 aligned occurrences of “ady@” are part of “ready” – all others were not aligned. So in the χ2alignment interpretation, P(ready|ady@) = 1.0. In contrast to RT, we only used aligned ngrams in the sentiment evaluation. This discrepancy may explain why the best method for sentiment is a CHAR method whereas the best method for RT is a WORD method. First NLP task evaluation on more than 1000 languages. Table 3 presents results for 1664 editions in 1259 languages. To the best of our knowledge, this is the first detailed evaluation, involving two challenging NLP tasks, that has been done on such a large number of languages. For several methods, the results are above baseline for all 1664 editions; e.g., S1 measures are above 20% for all 1664 editions for N(t) on CHAR. 4 Related Work Following Upadhyay et al. (2016), we group multilingual embedding methods into classes A, B, C, D. Group A trains monolingual embedding spaces and subsequently uses a transformation to create a unified space. Mikolov et al. (2013b) find the transformation by minimizing the Euclidean distance between word pairs. Similarly, Zou et al. (2013), Xiao and Guo (2014) and Faruqui and Dyer (2014) use different data sources for identifying word pairs and creating the transformation (e.g., by CCA). Duong et al. (2017) is also simi1528 lar. These approaches need large datasets to obtain high quality monolingual embedding spaces and are thus inappropriate for a low-resource setting of 150,000 tokens per language. Group B starts from the premise that representation of aligned sentences should be similar. 
Neural network approaches include (Hermann and Blunsom, 2014a) (BiCVM) and (Sarath Chandar et al., 2014) (autoencoders). Again, we have not enough data for training neural networks of this size. Søgaard et al. (2015) learn an interlingual space by using Wikipedia articles as concepts and applying inverted indexing. Levy et al. (2017) show that what we call S-ID is a strongly performing embedding learning method. We use S-ID as a baseline. Group C combines mono- and multilingual information in the embedding learning objective. Klementiev et al. (2012) add a word-alignment based term in the objective. Luong et al. (2015) extend Mikolov et al. (2013a)’s skipgram model to a bilingual model. Gouws et al. (2015) introduce a crosslingual term in the objective, which does not rely on any word-pair or alignment information. For n editions, including O(n2) bilingual terms in the objective function does not scale. Group D creates pseudocorpora by merging data from multiple languages into a single corpus. One such method, due to Vuli´c and Moens (2015), is our baseline BOW. ¨Ostling (2014) generates multilingual concepts using a Chinese Restaurant process, a computationally expensive method. Wang et al. (2016) base their concepts on cliques. We extend their notion of clique from the bilingual to the multilingual case. Ammar et al. (2016) use connected components. Our baseline SAMPLE, based on (Lardilleux and Lepage, 2007, 2009), samples aligned sentences from a multilingual corpus and extracts perfect alignments. Malaviya et al. (2017), Asgari and Sch¨utze (2017), ¨Ostling and Tiedemann (2017) and Tiedemann (2018) perform evaluation on the language level (e.g., typology prediction) for 1000+ languages or perform experiments on 1000+ languages without evaluating each language. We present the first work that evaluates on 1000+ languages on the sentence level on a difficult task. Somers (2005) criticizes RT evaluation on the sentence level; but see Aiken and Park (2010). We demonstrated that when used on the word/unit level, it distinguishes weak from strong embeddings and correlates well with an independent sentiment evaluation. Any alignment algorithm can be used for dictionary induction. We only used a member of the IBM class of models (Dyer et al., 2013), but presumably we could improve results by using either higher performing albeit slower aligners or non-IBM aligners (e.g., (Och and Ney, 2003; Tiedemann, 2003; Melamed, 1997)). Other alignment algorithms include 2D linking (Kobdani et al., 2009), sampling based methods (e.g., Vulic and Moens (2012)) and EFMARAL ( ¨Ostling and Tiedemann, 2016). EFMARAL is especially intriguing as it is based on IBM1 and Agi´c et al. (2016) find IBM2-based models to favor closely related languages more than models based on IBM1. However, the challenge is that we need to compute tens of thousands of alignments, so speed is of the essence. We ran character-based and word-based induction separately; combining them is promising future research; cf. (Heyman et al., 2017). There is much work on embedding learning that does not require parallel corpora, e.g., (Vuli´c and Moens, 2012; Ammar et al., 2016). This work is more generally applicable, but a parallel corpus provides a clearer signal and is more promising (if available) for low-resource research. 5 Summary We presented a new method for estimating vector space representations of words: embedding learning by concept induction. 
We tested this method on a highly parallel corpus and learned semantic representations of words in 1259 different languages in a single common space. Our extensive experimental evaluation on crosslingual word similarity and sentiment analysis indicates that concept-based multilingual embedding learning performs better than previous approaches. The embedding spaces of the 1259 languages (SAMPLE, CLIQUE and N(t)) are available: http://cistern.cis.lmu.de/comult/. We gratefully acknowledge funding from the European Research Council (grants 740516 & 640550) and through a Zentrum Digitalisierung.Bayern fellowship awarded to the first author. We are indebted to Michael Cysouw for making PBC available to us. 1529 References ˇZeljko Agi´c, Anders Johannsen, Barbara Plank, H´ector Alonso Mart´ınez, Natalie Schluter, and Anders Søgaard. 2016. Multilingual projection for parsing truly low-resource languages. Transactions of the Association for Computational Linguistics, 4. Milam Aiken and Mina Park. 2010. The efficacy of round-trip translation for MT evaluation. Translation Journal, 14(1). Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A Smith. 2016. Massively multilingual word embeddings. arXiv preprint arXiv:1602.01925. Ehsaneddin Asgari and Hinrich Sch¨utze. 2017. Past, present, future: A computational investigation of the typology of tense in 1000 languages. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: Analyzing text with the natural language toolkit. O’Reilly Media. Long Duong, Hiroshi Kanayama, Tengfei Ma, Steven Bird, and Trevor Cohn. 2017. Multilingual training of crosslingual word embeddings. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. Chris Dyer, Victor Chahuneau, and Noah A Smith. 2013. A simple, fast, and effective reparameterization of ibm model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics. Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2016. Multilingual language processing from bytes. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. Bilbowa: fast bilingual distributed representations without word alignments. In Proceedings of the 32nd International Conference on International Conference on Machine Learning. Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2016. A representation learning framework for multi-source transfer parsing. In Proceedings of the 30th AAAI Conference on Artificial Intelligence. Karl Moritz Hermann and Phil Blunsom. 2014a. Multilingual distributed representations without word alignment. In Proceedings of the 2014 International Conference on Learning Representations. Karl Moritz Hermann and Phil Blunsom. 2014b. Multilingual models for compositional distributed semantics. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. Geert Heyman, Ivan Vuli´c, and Marie-Francine Moens. 2017. 
Bilingual lexicon induction by learning to combine word-level and character-level representations. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. Proceedings of the 24th International Conference on Computational Linguistics. Hamidreza Kobdani, Alex Fraser, and Hinrich Sch¨utze. 2009. Word alignment by thresholded twodimensional normalization. In Proceeedings of the 12th Machine Translation Summit. Adrien Lardilleux and Yves Lepage. 2007. The contribution of the notion of hapax legomena to word alignment. In Proceedings of the 4th Language and Technology Conference. Adrien Lardilleux and Yves Lepage. 2009. Samplingbased multilingual alignment. In Proceedings of 7th Conference on Recent Advances in Natural Language Processing. Omer Levy, Anders Søgaard, and Yoav Goldberg. 2017. A strong baseline for learning cross-lingual word embeddings from sentence alignments. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Bilingual word representations with monolingual quality in mind. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing. Chaitanya Malaviya, Graham Neubig, and Patrick Littell. 2017. Learning language representations for typology prediction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Thomas Mayer and Michael Cysouw. 2014. Creating a massively parallel bible corpus. In Proceedings of the 9th International Conference on Language Resources and Evaluation. Ryan T. McDonald, Slav Petrov, and Keith B. Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. 1530 I. Dan Melamed. 1997. A word-to-word model of translational equivalence. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and 8th Conference of the European Chapter of the Association for Computational Linguistics. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013b. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1). Robert ¨Ostling. 2014. Bayesian word alignment for massively parallel texts. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics. Robert ¨Ostling and J¨org Tiedemann. 2016. Efficient word alignment with Markov Chain Monte Carlo. Prague Bulletin of Mathematical Linguistics, 106. Robert ¨Ostling and J¨org Tiedemann. 2017. Continuous multilinguality with language vectors. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. AP Sarath Chandar, Stanislas Lauly, Hugo Larochelle, Mitesh M. Khapra, Balaraman Ravindran, Vikas C. Raykar, and Amrita Saha. 2014. An autoencoder approach to learning bilingual word representations. In Proceedings of the 2014 Annual Conference on Neural Information Processing Systems. Michel Simard. 1999. 
Text-translation alignment: Three languages are better than two. In Proceedings of the 1999 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora. Anders Søgaard, ˇZeljko Agi´c, H´ector Mart´ınez Alonso, Barbara Plank, Bernd Bohnet, and Anders Johannsen. 2015. Inverted indexing for cross-lingual nlp. In The 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference of the Asian Federation of Natural Language Processing. Harold Somers. 2005. Round-trip translation: What is it good for? In Proceedings of the Australasian Language Technology Workshop 2005. Morris Swadesh. 1946. South Greenlandic (Eskimo). In Cornelius Osgood, editor, Linguistic Structures of Native America. Viking Fund Inc. (Johnson Reprint Corp.), New York. J¨org Tiedemann. 2003. Combining clues for word alignment. In Proceedings of the 10th Conference of the European Chapter of the Association for Computational Linguistics. J¨org Tiedemann. 2018. Emerging language spaces learned from massively multilingual corpora. arXiv preprint arXiv:1802.00273. Yulia Tsvetkov, Leonid Boytsov, Anatole Gershman, Eric Nyberg, and Chris Dyer. 2014. Metaphor detection with cross-lingual model transfer. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. Shyam Upadhyay, Manaal Faruqui, Chris Dyer, and Dan Roth. 2016. Cross-lingual models of word embeddings: An empirical comparison. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Ivan Vuli´c and Marie-Francine Moens. 2012. Detecting highly confident word translations from comparable corpora without any prior knowledge. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics. Ivan Vulic and Marie-Francine Moens. 2012. Subcorpora sampling with an application to bilingual lexicon extraction. In Proceedings of the 24th International Conference on Computational Linguistics. Ivan Vuli´c and Marie-Francine Moens. 2015. Bilingual word embeddings from non-parallel documentaligned data applied to bilingual lexicon induction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, volume 2. Rui Wang, Hai Zhao, Sabine Ploux, Bao-Liang Lu, Masao Utiyama, and Eiichiro Sumita. 2016. A novel bilingual word embedding method for lexical translation using bilingual sense clique. arXiv preprint arXiv:1607.08692. Min Xiao and Yuhong Guo. 2014. Distributed word representation learning for cross-lingual dependency parsing. In Proceedings of the 18th Conference on Computational Natural Language Learning. Daniel Zeman and Philip Resnik. 2008. Crosslanguage parser adaptation between related languages. In Proceedings of the 3rd International Joint Conference on Natural Language Processing. Will Y Zou, Richard Socher, Daniel Cer, and Christopher D Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing.
2018
141
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1531–1542 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1531 Isomorphic Transfer of Syntactic Structures in Cross-Lingual NLP Edoardo Maria Ponti LTL, University of Cambridge [email protected] Roi Reichart Technion, IIT [email protected] Anna Korhonen LTL, University of Cambridge [email protected] Ivan Vuli´c LTL, University of Cambridge [email protected] Abstract The transfer or share of knowledge between languages is a popular solution to resource scarcity in NLP. However, the effectiveness of cross-lingual transfer can be challenged by variation in syntactic structures. Frameworks such as Universal Dependencies (UD) are designed to be cross-lingually consistent, but even in carefully designed resources trees representing equivalent sentences may not always overlap. In this paper, we measure cross-lingual syntactic variation, or anisomorphism, in the UD treebank collection, considering both morphological and structural properties. We show that reducing the level of anisomorphism yields consistent gains in cross-lingual transfer tasks. We introduce a source language selection procedure that facilitates effective cross-lingual parser transfer, and propose a typologically driven method for syntactic tree processing which reduces anisomorphism. Our results show the effectiveness of this method for both machine translation and cross-lingual sentence similarity, demonstrating the importance of syntactic structure compatibility for boosting cross-lingual transfer in NLP. 1 Introduction Linguistic information can be transferred from resource-rich to resource-poor languages using approaches such as annotation projection, model transfer, and/or translation (Agi´c et al., 2014). Such cross-lingual transfer may rely on syntactic information. Structured and more cross-lingually consistent than linear sequences (Ponti, 2016), syntactic information has proved useful for cross-lingual parsing (Tiedemann, 2015; Rasooli and Collins, 2017), multilingual representation learning (Vuli´c and Korhonen, 2016; Vuli´c, 2017), causal relation identification (Ponti and Korhonen, 2017), and neural machine translation (Eriguchi et al., 2016; Aharoni and Goldberg, 2017). It can also guide the generation of synthetic data for multilingual tasks (Wang and Eisner, 2016). Universal Dependencies (UD) (Nivre et al., 2016) is a collection of treebanks for a variety of languages, annotated with a scheme optimised for knowledge transfer. The tag sets are languageindependent and there are direct links between content words. This reduces the variation of dependency trees, because content words are crosslingually more stable than function words (Croft et al., 2017), and benefits semantically-oriented applications (de Marneffe et al., 2014)1. Importantly, although UD is tailored to offer support to cross-lingual transfer, it also supports monolingual applications with a quality comparable to languagespecific annotations (Vincze et al., 2017, inter alia). Despite the careful design of this resource, there are still substantial variations in morphological richness and strategies employed to express the same syntactic constructions across languages. These variations posit challenges for syntax-based knowledge transfer. The first challenge is how to match the source and target languages so that differences are minimised. 
The common criteria are based on the typology of word order (Naseem et al., 2012; T¨ackstr¨om et al., 2013; Zhang and Barzilay, 2015) or part-of-speech n-grams (Rosa and Zabokrtsky, 2015; Agi´c, 2017). The second one is how to make knowledge transfer effective by harmonising syntactic trees (Smith and Eisner, 2009; Vilares et al., 2016) as to enable a better correspondence between source and target nodes. 1It is controversial whether it improves parsing: e.g., Groß and Osborne (2015, inter alia) argue against whereas Attardi et al. (2015, inter alia) argue in favour. 1532 In this paper we address these two challenges. We propose the concept of isomorphism (i.e., identity of shapes: syntactic structures) and its opposite, anisomorphism, as a probe to measuring quantitatively the extent to which syntactic tree pairs are cross-lingually compatible. We assess the variation of syntactic constructions by a) the average Zhang and Shasha (1989)’s tree edit distance between UD treebanks, and b) the variation in morphology by the Jaccard index of morphological feature sets. We show that these metrics are strong indicators for source language selection, and even preferable over widespread metrics such as genealogical language relatedness. Moreover, the concept of isomorphism facilitates the process of reshaping trees to make them compatible across languages via operations of deletion, addition, and relabeling. To this end, we propose a tree processing method which increases the level of isomorphism between trees of cross-lingually compatible sentences. This method leads to consistent improvements on cross-lingual tasks achieved through transfer. To verify the relevance of isomorphism for crosslingual transfer in NLP, we perform experiments on three tasks. Firstly, we use the Jaccard index of morphological feature sets to choose source languages for cross-lingual dependency parsing. Secondly, we use syntactic trees harmonised by our method in syntax-based neural machine translation of two typologically distant language pairs (Arabic to Dutch; Indonesian to Portuguese). Finally, we evaluate cross-lingual sentence similarity in a real-life resource-lean scenario where the target language has no annotated data. In all experiments, we enhance performance compared to baselines where the source shows a lower degree of isomorphism. In §2, we define the concept of (an)isomorphism, propose novel metrics for measuring it quantitatively, and introduce the tree processing algorithm. We then desribe the data (§3), methods (§4), and experimental results (§5). Related work is summarised in §6 and conclusions are drawn in §7. 2 Anisomorphism The ideal situation for knowledge transfer from one (syntactic) structure into another is when these structures are equivalent. In graph theory, there is isomorphism between the nodes VS of graph S and the nodes VT of graph T if there exists a bijection f(VS) →VT such that ∀si, sj ∈VS, it holds that: si sj ⇔f(si) f(sj), where the symbol stands for adjacency between nodes. In simple words, the mapping must preserve adjacencies between corresponding nodes. Syntactic trees are a special case of such graphs. However, vocabularies (the words in their nodes) are peculiar to each language, making their comparison impractical across languages. In this work, we probe isomorphism on delexicalised trees, where each node is the (cross-lingually consistent) dependency relation of the word in that position. 
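As an illustration of this definition, the following sketch (not the authors' code) checks whether two toy delexicalised trees admit an adjacency-preserving bijection, additionally requiring the dependency-relation labels on the nodes to match, in line with the delexicalised setting; the trees are invented and only schematically echo the possession examples discussed later.

# Sketch: testing (an)isomorphism of delexicalised dependency trees.
import networkx as nx
from networkx.algorithms import isomorphism as iso

def tree(edges):
    """edges: (head_id, dep_id, deprel_of_dependent); node 0 is the root."""
    g = nx.DiGraph()
    g.add_node(0, deprel="root")
    for head, dep, rel in edges:
        g.add_node(dep, deprel=rel)
        g.add_edge(head, dep)
    return g

# A transitive-possession structure and a translation with the same shape.
t_src = tree([(0, 1, "nsubj"), (0, 2, "dobj"), (2, 3, "det")])
t_tgt = tree([(0, 1, "nsubj"), (0, 2, "dobj"), (2, 3, "det")])
# A locative-possession rendering with a different structure.
t_loc = tree([(0, 1, "nsubj"), (1, 2, "amod")])

same_label = iso.categorical_node_match("deprel", None)
print(nx.is_isomorphic(t_src, t_tgt, node_match=same_label))  # True: isomorphic
print(nx.is_isomorphic(t_src, t_loc, node_match=same_label))  # False: anisomorphic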
Even so, however, isomorphic bijection is often impossible between trees of equivalent sentences in different languages owing to typological variation (see §2.1). Adopting the term from Ambati (2008), we define this property as anisomorphism, which can be quantified as the extent to which two structures differ in their morphological and syntactic properties (§2.2). We present a tree processing method to mitigate anisomorphism in §2.3. Afterwards, in §§3-5 we show how the concepts defined in this section facilitate cross-lingual transfer in three NLP tasks. 2.1 Sources of Anisomorphism Two main causes underpin anisomorphism. The first cause is the morphological type of a language: the same grammatical function may be expressed via morphemes, via separate words (so-called function words), or may not be expressed at all (Bybee, 1985, ch. 2). For instance, consider the following Latin-English example: (1) Crimen crime.NOM er-it be-FUT.3SG super-is god-DAT.PL et also me me fec-isse make-INF.PST nocent-em. guilty-ACC ‘It will be a reproach to the gods, that they have made even me guilty.’ The future tense is expressed by inflecting the verb erit in Latin, whereas English has the auxiliary verb will. In addition, Latin can express the English preposition to with the dative case -is. This variation has systematic impact on UD annotation. On one hand, Latin would display the attribute-value pairs TENSE=FUTURE and CASE=DATIVE among the features of erit and superis. On the other hand, in English the function words (will and to) add nodes to the dependency structure, modifying the equivalent words (be and gods). This pattern is not unique to English and Latin: there are similar correspondences between specific function words and morphological features in many other languages. 1533 Arabic Basque Bulgarian Catalan Chinese Coptic Croatian Czech Danish Dutch English Estonian Finnish French Galician German Gothic Greek Hebrew Hindi Hungarian Indonesian Irish Italian Japanese Kazakh Latin Latvian Norwegian Persian Polish Portuguese Romanian Russian Sanskrit Slovak Slovenian Spanish Swedish Tamil Turkish Ukrainian Uyghur Vietnamese Arabic Basque Bulgarian Catalan Chinese Coptic Croatian Czech Danish Dutch English Estonian Finnish French Galician German Gothic Greek Hebrew Hindi Hungarian Indonesian Irish Italian Japanese Kazakh Latin Latvian Norwegian Persian Polish Portuguese Romanian Russian Sanskrit Slovak Slovenian Spanish Swedish Tamil Turkish Ukrainian Uyghur Vietnamese 0.25 0.50 0.75 1.00 (a) Jaccard index of the morphological feature sets. Croatian Danish Finnish Hebrew Spanish Swedish Basque Bulgarian Croatian Danish Estonian Finnish French Hebrew Polish 12 14 16 18 (b) Average tree edit distance. Figure 1: Heatmaps of anisomorphism metrics for UD language pairs. The colours range from blue (low values) to red (high values). The other source of anisomorphism are construction strategies: the same syntactic construction is expressed through different types of strategies (Croft et al., 2017), which results in different kinds of subtrees in UD. An example construction is predicative possession, which conveys the ownership of an item by a possessor through the predicate of a clause (Stassen, 2009). 
Consider these examples in Dutch and Arabic, respectively: (2) Ik I heb have.1SG een a filmidee film+idea ‘I have an idea for a movie.’ (3) Laday-him¯a at-them ‘aˇsy¯a-‘u thing-NOM.PL muˇstarakat-un common-NOM.PL ‘They have things in common.’ In Dutch (Example 2), the owner Ik is the subject and the item filmidee is the object of a transitive verb (hab). However, in Arabic (Example 3) the owner is a predicate with a locative prefix (ladayhim¯a), the item ‘aˇsy¯a‘u is the subject, and there is no verb. These are called transitive and locative strategies, respectively. Each strategy results in a different (delexicalised) subtree, as shown in Figure 2b: this simple example with one construction already suggests that the variation in syntactic constructions affects the compatibility of cross-lingual trees pervasively.2 2Other strategies for predicative possession include topic, conjunctional and genitive. More examples of constructions are available in the supplemental material. 2.2 Measures of Anisomorphism How can the differences described in §2.1 translate into quantitative metrics of compatibility between sentences in different languages? As the first answer to this question, we propose to measure the affinity in morphological type by considering the sets of morphological features attested within each of the UD treebanks.3 Particularly, for each pair of a source language set MS and a target language set MT , we estimate their Jaccard index, which is defined for two sets as the cardinality of their intersection divided by the cardinality of their union, as shown in Equation (4). J(MS, MT ) = ||MS ∩MT || ||MS ∪MT || (4) The values of the Jaccard index lie in [0, 1] A heatmap is displayed in Figure 1a: the morphological similarity between language pairs varies considerably, ranging from low (0.07 in ChineseUyghur), mild (0.48 in Latvian-Tamil), to high (0.72 in Bulgarian-Ukrainian). Note that the Jaccard index 1 is an artifact for languages with no expression of grammatical function (in Vietnamese, among others) or lacking morphological annotation (in Japanese). This metric exhibits other disadvantages: it does not take into account another source of variation, the construction strategies, and is based on general properties of a grammar rather 3The full list of features can be consulted at http:// universaldependencies.org/u/feat/ 1534 ladayhim¯a ‘aˇsy¯a‘u muˇstarakatun NSUBJ AMOD (a) Original (source) tree NOUN case=loc NOUN ADJ VERB NOUN NOUN ADJ NSUBJ AMOD NSUBJ DOBJ AMOD DUMMY (b) Match templates DUMMY him¯a ‘aˇsy¯a‘u muˇstarakatun NSUBJ DOBJ AMOD (c) Processed (source) tree Figure 2: Tree processing steps that transform the locative strategy for predicative possession in an Arabic sentence into a transitive strategy. Tree processing is always applied on source language constructions. than specific individual sentences. Hence, we propose an approach to measure anisomorphism between individual sentences. We parse the texts of the multi-parallel Bible corpus (Christodouloupoulos and Steedman, 2015) with the SyntaxNet parser (see §4). The language pairs taken into account are limited to those present both in our UD sample and in the Bible corpus, and sentence-aligned by book, chapter and verse indices. 
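Referring back to Eq. (4), the Jaccard measure itself is simple to compute. The sketch below (not the authors' code) assumes the attested morphological features of each treebank have already been collected into Python sets; the feature values are invented, and the degenerate all-empty case mirrors the artifact noted above for languages without morphological annotation.

# Sketch of the Jaccard index of morphological feature sets, Eq. (4).
def jaccard(ms: set, mt: set) -> float:
    if not ms and not mt:
        return 1.0          # degenerate case: no attested morphological features
    return len(ms & mt) / len(ms | mt)

feats_latin = {"Case=Nom", "Case=Dat", "Tense=Fut", "Number=Plur"}
feats_english = {"Tense=Past", "Number=Plur", "Degree=Cmp"}
print(jaccard(feats_latin, feats_english))  # ~0.17 -> low morphological overlap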
For a given language pair, we estimate the tree edit distance between every corresponding pair of sentence trees S and T with the Zhang-Sasha algorithm (Zhang and Shasha, 1989) and then average over the number of trees.4 This tree edit distance operates on ordered trees with node (but not edge) labels, hence it is suited for delexicalised dependencies. In particular, it is defined over a map M, which is a list of node pairs where the former belongs to S or ϵ (empty node), and the latter belongs to T or ϵ. If both are non-empty, they trigger an operation of relabeling; if the latter is ϵ, it is deletion; if the former is ϵ, it is addition. The edit distance is the number of operations required for a complete transformation weighted by the factor γ.5 The following equation summarises the tree edit distance measure: γ(M, S, T) = X i,j∈M γ(Si →Tj) + γ(Si →ϵ) + γ(ϵ →Tj) The possible values of this metric are non-negative real numbers. We opted for this metric in particular because it allows the insertion of internal nodes but not transpositions. The former criterion allows to 4We implement this algorithm with the zs Python package, available at https://github.com/timtadh/ zhang-shasha. 5For simplicity, we set γ = 1. capture complex transformations without rebuilding entire subtrees, the latter is aimed at taking into account also variations in word order. In order to evaluate pure syntactic isomorphism one should allow for transpositions and/or operate on unordered trees.6 A heatmap of tree edit distances is shown in Figure 1b. The values reflect the typological affinity of the language pairs: e.g., Spanish is very close to French (both are Romance languages), mildly similar to Polish (Slavic language, but still part of the Indo-European family), but remote from Hebrew (from a different family, Semitic). The values agree in part with the metrics of Figure 1a, where the Jaccard indices of Hebrew (0.26), Polish (0.46), and French (0.59) mirror the same relationships. In §4, we show how these metrics can benefit the source selection for knowledge transfer, sometimes even outranking established criteria such as genealogical closeness. However, they have also weaknesses: the Jaccard index of feature sets is not reliable for languages with a limited number of morphologically expressed grammatical categories. On the other hand, the tree edit distance measure requires resources (such as treebanks and parallel corpora) that are not available for many languages. 2.3 Reduction of Anisomorphism The measures of anisomorphism reveal which languages are structurally similar, which is directly useful for source selection. However, the data available for many tasks are often limited to distant languages. Hence, it is necessary to increase their affinity by gearing one towards the other. We propose to process source dependency trees with an algorithm inspired by the same rules of the tree edit 6For a survey on tree edit distances, see Bille (2005). 1535 distance described in §2.2. We leverage the readily available documentation in typological databases (e.g., World Atlas of Language Structures: WALS) (Dryer and Haspelmath, 2013).7 Given a source and a target language, the documentation informs about their respective strategies. For each strategy, we manually define a ‘template’, i.e. the subtree it corresponds to, in terms of morpho-syntactic features. For instance, see the dashed circles in Figure 2b: note that templates are limited to a head and its immediate dependents. 
Then we explore source trees in a top-down breadth-first fashion, and if a template for a source strategy is identified, it is mapped to the corresponding target template. In order to preserve semantic information, contrary to Zhang and Shasha (1989), the mapping operates on lexicalised edge-labeled trees. Hence, ADD and CHANGE affect both words (nodes) and edges (dependency relations). The whole process is summarised in Algorithm 1. Algorithm 1 Tree processing with rules 1: strategiess ←WALSs ▷Define templates 2: strategiest ←WALSt 3: function CHANGE(s, t(l)) ▷Define operations 4: s ←t(l) 5: function DELETE(s) 6: s ←ϵ 7: function ADD(t(l)) 8: ϵ ←t(l) 9: function MAPPING(rs, strategiest) ▷Define mapping 10: assert(rs ∈strategiess) return {CHANGE, DELETE, ADD}* 11: for subtree in trees do ▷Explore tree 12: if subtree ∈strategiess then 13: list ←MAPPING(subtree) 14: for ns, nt in list do ▷Perform operations 15: if ns ̸= ϵ ∧nt ̸= ϵ then 16: CHANGE(ns, nt) 17: else if nt = ϵ then 18: DELETE(ns) 19: else if ns = ϵ then 20: ADD(nt) For instance, consider the transformation from the locative strategy for predicative possession in Arabic from Example 3 into a transitive strategy. By exploring its dependency graph (Figure 2a), the Algorithm identifies a subtree corresponding to one of the source strategies (left side of Figure 2b). This subtree is mapped to the target template (right 7In particular, we take into account the following relevant WALS features: 116 (polar questions), 122-123 (relativisation on subjects and obliques), 117 (predicative possession), 113-115 (negation), 107 (passive), 37-38 (articles), and 85 (prepositions). side of Figure 2b) with the following operations: it CHANGEs the root noun ladayhim¯a (the possessor) with a dummy node (the verb). The same noun is re-ADDed as a dependent with a new label nsubj. Finally, the dependency relation of the other noun ‘aˇsy¯a-‘u is CHANGEd from nsubj to dobj. The resulting tree uses the source language vocabulary, but target language construction strategies, as shown in Figure 2c. 3 Data In order to validate the usefulness of anisomorphism reduction through guided source selection and tree processing, we experiment with three different cross-lingual tasks: cross-lingual dependency parsing, neural machine translation (NMT), and cross-lingual sentence similarity (STS). In this section, we present the data used in these tasks. The data for dependency parsing are sourced from Universal Dependencies v1.4.8 We sample a group of 21 treebanks ensuring their representativeness by balancing them by family. We filter out all languages but two belonging to same branches of the Indo-European family, and keep those of all the other families.9 We take into account only the language-independent components of the annotation: coarse POS tags, morphological features, and dependency relations. Regarding NMT data, English is ubiquitous in the current datasets, overshadowing the wide spectrum of existing morphological types and syntactic strategies. To address this limitation, we create a new NMT dataset that matches typologically distant languages directly without the need of a bridge/pivot language. We extract aligned sentences from the Open Subtitles 2016 tokenised corpus (Tiedemann, 2009)10 for Arabic-Dutch and Indonesian-Portuguese. This choice was made based on their volume of parallel data in order to produce evaluation data similar in size to those of NMT datasets in shared tasks such as WMT16 (Bojar et al., 2016). 
Training and test sets consist of 3M and 5K sentences, respectively. These sentences come automatically annotated by SyntaxNet. The data for cross-lingual STS are chosen to resemble a real-world scenario with a resource-poor target language. The training data (9,709 sentence 8http://universaldependencies.org/ 9Language names are substituted in this work by their corresponding ISO 639-1 codes. A table of names and codes is provided in the supplemental material. 10http://opus.nlpl.eu/OpenSubtitles.php 1536 pairs) are in English, taken from the STS benchmark, the ensemble of all the datasets from SemEval 2012-2017 STS tasks. The test data (250 sentence pairs) come from Task 1 of SemEval 2017 (Cer et al., 2017); target language is Arabic.11 All the sentence pairs are associated with a label ranging from 0 (dissimilarity) to 5 (equivalence). 4 Methodology Cross-lingual Dependency Parsing. To assess if the anisomorphism metrics devised in §2.2 are reliable in finding compatible languages for knowledge transfer, we use the Jaccard index of the morphological feature sets as a criterion to choose source languages for cross-lingual parser transfer. We adopt the variant of delexicalised model transfer (Zeman and Resnik, 2008) for this task. This technique ignores lexicalised features and leverages only language-independent features instead. For each language from a sample of 7 (typologically diverse) targets, we report LAS scores using three different source languages: (1) the highestranked source according to the Jaccard index; (2) a source sampled from the middle of the list ranked by the Jaccard indices; (3) a very dissimilar language sampled from the bottom of the ranked list. The total number of sentences used for training corresponds to the smallest of the three source language treebanks in order to isolate the effect of treebank size on the final transfer results. We conduct experiments with two well-known transition-based parsers (Nivre, 2006): (1) DeSR (Attardi et al., 2007) and (2) SyntaxNet (Andor et al., 2016; Alberti et al., 2017). The two were selected as they represent two different architectures: the former is an SVM-based model with a polynomial kernel, whereas the latter is a feed-forward neural network with beam search based on conditional random fields. The results are evaluated in terms of LAS and UAS scores. Neural Machine Translation. For NMT, we examine whether the tree processing procedure from §2.3 can reduce anisomorphism between source and target language syntactic structures. We thus run NMT models in two settings: with and without the anisomorphism reduction procedure. For this experiment we rely on a state-of-the-art syntax-aware NMT architecture. We report its performance by BLEU scores (Papineni et al., 2002). 11http://alt.qcri.org/semeval2017/ task1/ In particular, we use an attentional encoder-decoder network that jointly learns to translate and align words (Bahdanau et al., 2015) implemented in the Nematus suite12 (Sennrich et al., 2017). The encoder is a bidirectional gated recurrent network. For each step i, the decoder predicts the next word in output by taking as input the current hidden state hi, the previous word wi−1 and a context vector, i.e., a weighted sum of all the hidden states Pn j=1 wj · h1. The weights are learned by a multilayer perceptron that estimates the likelihood of the alignment between the predicted word and each of the input words: wi,j = P(a|yi, xj). 
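The attention step just described can be written out compactly. The following numpy sketch (not the Nematus implementation) scores each encoder state against the current decoder state with a small MLP and forms the context vector as the softmax-weighted sum of the encoder states; all dimensions and parameters are invented for illustration.

# Sketch of additive (MLP) attention producing alignment weights and a context vector.
import numpy as np

rng = np.random.default_rng(0)
n, d_enc, d_dec, d_att = 6, 8, 8, 5          # source length and layer sizes
H = rng.normal(size=(n, d_enc))              # encoder hidden states h_1..h_n
s = rng.normal(size=(d_dec,))                # current decoder state
W_h = rng.normal(size=(d_att, d_enc))
W_s = rng.normal(size=(d_att, d_dec))
v = rng.normal(size=(d_att,))

scores = np.tanh(H @ W_h.T + W_s @ s) @ v            # MLP alignment scores per source word
weights = np.exp(scores) / np.exp(scores).sum()      # w_{i,j}: softmax over source positions
context = weights @ H                                # weighted sum of encoder states
print(weights.round(3), context.shape)               # (n,) weights, (d_enc,) context vector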
This model is enriched with additional linguistic features on input, as proposed by Sennrich and Haddow (2016). In particular, we select the following which are proven as useful in prior work, and also relevant to our experiment: word form, POS tag, and dependency relations. These features are concatenated and fed to the encoder. Tree processing from §2.3 affects these features (and consequently the sentence representation) by changing the initial tree structure. For instance, the original tree in Figure 2a and the processed one in Figure 2c would correspond to these feature sets: Original Preprocessed ladayhim¯a ⊕N ⊕ROOT him¯a ⊕N ⊕NSUBJ DUMMY ⊕V ⊕ROOT ‘aˇsy¯a‘u ⊕N ⊕NSUBJ ‘aˇsy¯a‘u ⊕N ⊕DOBJ muˇstarakatun ⊕ A ⊕ Amod muˇstarakatun ⊕ A ⊕ Amod Cross-lingual STS. We use cross-lingual STS as another evaluation task to validate if the anisomorphism reduction algorithm from §2.3 generalises beyond the initial application in NMT. The state-ofart approach to this task in the monolingual setting encodes trees of sentence pairs with a TreeLSTM architecture (Tai et al., 2015). The hidden representations of the tree roots of both sentences in each pair are then concatenated and fed to a multi-layer perceptron classifier, which yields a probability distribution over the six classes (from 0=dissimilarity to 5=equivalence). The following TreeLSTM has been implemented in PyTorch. The parameters of an LSTM model are the matrix weights Wq for inputs and Uq for hidden representations, and a bias bq. q corresponds to an input gate it, a forget gate ft, an output gate ot, or a memory cell ct at time step t. The hidden state ht 12https://github.com/EdinburghNLP/ nematus 1537 DA (4302) ES (5240) FI (2262) HE (4797) HR (8096) TA (3849) VI (4476) Parser Transfer: Target Language 10 20 30 40 50 60 LAS Score [%] SV FR ET FA BG RU ID SK PL RU AR ES SV HI EU HE HE FI FI RO NL High-Similarity Source Medium-Similarity Source Low-Similarity Source Figure 3: Results of delexicalised cross-lingual transfer using DeSR. Results with SyntaxNet are omitted as they show very similar patterns. The numbers in parentheses denote the amount of training sentences. is derived from the equations below. To extend this model to dependency trees, we consider ht−1 to equal the sum of the hidden states of the children of a node P k∈C(xt) hk, and provide a different forget gate ftk for each child. qt = σ (Wqxt + Uqht−1 + bq) (5) ct = ft ⊙ct−1 + it ⊙tanh (Wcxt + Ucht−1 + bc) (6) ht = ot ⊙tanh(ct) (7) In our resource-lean cross-lingual scenario the language of the training data (English) differs from that of the target (Arabic). Since TreeLSTM is a lexicalised model, we employ multilingual word embeddings, such that the words of both languages lie in the shared cross-lingual semantic space. In particular, we map English into Arabic through the iterative Procustes method devised by Artetxe et al. (2017). The results are evaluated through the Pearson correlation and the Mean Squared Error (MSE) between predicted and golden labels. Hyperparameters. DeSR has degree 2, γ 0.18, C 0.4, coef0 0.4, and ϵ 1.0. The hyper-parameters for the deep models are shown in Table 1: we have followed the training setup suggestions from prior work for all the models used in our experiments. 5 Results and Discussion Source Selection. The results for cross-lingual parser transfer with the DeSR parser are provided in Figure 3, while the results with SyntaxNet are provided as supplemental material as they follow the same trends. 
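As a concrete rendering of Eqs. (5)-(7) above, the following is a minimal child-sum TreeLSTM node update in PyTorch; it is a sketch following Tai et al. (2015) rather than the implementation used in these experiments, and the layer sizes and the toy two-node tree are invented.

# Sketch of one child-sum TreeLSTM node update: gates use the sum of the
# children's hidden states, and each child receives its own forget gate.
import torch
import torch.nn as nn

class ChildSumTreeLSTMCell(nn.Module):
    def __init__(self, in_dim, mem_dim):
        super().__init__()
        self.W_iou = nn.Linear(in_dim, 3 * mem_dim)
        self.U_iou = nn.Linear(mem_dim, 3 * mem_dim, bias=False)
        self.W_f = nn.Linear(in_dim, mem_dim)
        self.U_f = nn.Linear(mem_dim, mem_dim, bias=False)

    def forward(self, x, child_h, child_c):
        # x: (in_dim,) word vector; child_h, child_c: (num_children, mem_dim)
        # (pass a single row of zeros for leaf nodes).
        h_sum = child_h.sum(dim=0)
        i, o, u = torch.chunk(self.W_iou(x) + self.U_iou(h_sum), 3, dim=-1)
        i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
        f = torch.sigmoid(self.W_f(x) + self.U_f(child_h))   # one forget gate per child
        c = i * u + (f * child_c).sum(dim=0)
        h = o * torch.tanh(c)
        return h, c

cell = ChildSumTreeLSTMCell(in_dim=4, mem_dim=3)
leaf_h, leaf_c = cell(torch.randn(4), torch.zeros(1, 3), torch.zeros(1, 3))
root_h, root_c = cell(torch.randn(4), leaf_h.unsqueeze(0), leaf_c.unsqueeze(0))
print(root_h.shape)  # torch.Size([3])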
The selection of the source for SyntaxNet (Parsing) Nematus (NMT) TreeLSTM (STS) Hidden layers 2 2 1 Hidden size 512 1000 300 Input size 160 280 512 Batch size 256 80 25 Epochs 12 (greed); 10 (beam) Early stopping 5 Learning rate 0.8 1−4 1−2 Optimiser Adam AdaDelta SGD Dropout 0.2 / 0.3 0.1 / 0.2 0 Table 1: Hyper-parameters of the models. delexicalised cross-lingual parsing based on the proposed Jaccard index measure shows than selecting a source language with a lower degree of anisomorphism is crucial for knowledge transfer. The values for the selected languages are listed in Table 2. Target High Mid Low Danish 0.49 0.39 0.19 Spanish 0.59 0.46 0.26 Finnish 0.44 0.23 0.15 Hebrew 0.31 0.24 0.15 Croatian 0.62 0.46 0.25 Tamil 0.48 0.43 0.38 Vietnamese 1.00 0.02 0.01 Table 2: Jaccard indices of source-target pairs. The high-similarity source always outperforms the alternatives with both DeSR and SyntaxNet, and with respect to both LAS and UAS scores. For instance, Swedish is the best source for Danish, Estonian for Finnish, and Bulgarian for Croatian. Similarly, the preference for medium- over low1538 AR-NL ID-PT Baseline 7.01 14.79 +Syntax 14.40 23.70 ++Preprocessing 15.40 24.12 Table 3: NMT results: BLEU scores of a joint translator and aligner (Baseline), fed with linguistic features (+Syntax), and with processed trees to reduce anisomorphism (++Preprocessing). Pearson MSE Mono-lingual 77.9 0.94 Cross-lingual 44.7 1.82 +Preprocessing 48.0 1.64 Table 4: Cross-lingual STS results: Pearson and MSE scores of the TreeLSTM architecture with original and processed trees. ranking languages is pronounced, too, as it holds for 6 groups out of 7. For instance, Slovak is a better source choice for Danish than Basque, Polish is a better source choice for Spanish than Hebrew. Most notably, our findings generalise even to cases when the top-ranking language (e.g. Farsi) does not belong to the language family of the target (e.g. Hebrew) whereas the language with a medium overlap does (e.g. Arabic). Tree Processing. The results of the experiments also corroborate the idea that tree harmonisation informed by linguistic typology, and implemented through our anisomorphism reduction procedure can assist model transfer in cross-lingual tasks. The BLEU scores for Neural Machine Translation, shown in Table 3, reveal consistent improvements. The model enriched with syntactic features outperforms the baseline with joint translation and alignment without syntactic features by 7.39 BLEU points in Arabic-Dutch and 8.91 BLEU points in Indonesian-Portuguese. Importantly, our extension which reduces anisomorphism by processing syntactic trees in the source language leads to further improvements for both language pairs: it surpasses the model with syntactic features by 1.0 BLEU points in Arabic-Dutch, and 0.42 BLEU points in Indonesian-Portuguese. These results support our hypotheses: a) syntax is pivotal in NMT, confirming findings from prior work (Sennrich et al., 2017); b) the tree pro-10 -5 0 5 10 15 -10 0 10 Figure 4: Hidden representations of original (red circles) and processed (blue triangles) sentences. cessing algorithm from §2.3 facilitates the alignment between source and target words, and also grants the encoder-decoder architecture a better leverage of dependency features. This lends support to our argument that anisomorphism limits the ability of models to generalise beyond single languages, and reducing it can help cross-lingual syntax-aware NLP tasks. 
A similar conclusion can be reached by comparing the performance of TreeLSTM-based models on the cross-lingual STS task, reported in Table 4. In particular, the Pearson correlation score increases by 3.3 points and MSE decreases by 0.18 points when our tree processing algorithm is applied. We inspect the hidden representations of both original and processed sentences with t-SNE dimensionality reduction in Figure 4. The impact of the algorithm becomes evident as their clusters are completely separate. However, the comparison against the monolingual STS score obtained on the English test set shows that there is still a wide gap to be bridged by cross-lingual knowledge transfer. Note that our tree processing algorithm is guided by typological knowledge in WALS. The results of the NMT and cross-lingual STS tasks suggest that existing knowledge in such large typological databases (O’Horan et al., 2016; Bender, 2016) can be readily used to support cross-lingual transfer tasks in NLP, as well as the interpretation of polyglot neural models (Ponti et al., 2017). We hope that our work will spark further research on the use of typology in cross-lingual NLP applications. 1539 6 Related Work The need to account for discrepancies in tree structures emerged early in the domain of Information Theory: in particular, the tree edit distance turned out to be useful for correcting programming scripts (Tai, 1979), evolution studies, and most notably accounting for transformations in constituency trees (Selkow, 1977). Although previous works were aware of the problem of anisomorphism in the context of syntax-based NLP applications (Ambati, 2008), to our knowledge we are the first to quantify it formally and to leverage it in cross-lingual NLP. For source selection, similarity metrics from prior work mostly relied on information stored in typological databases (Naseem et al., 2012; T¨ackstr¨om et al., 2013; Zhang and Barzilay, 2015; Deri and Knight, 2016). Otherwise, the metrics were derived empirically: they mostly concerned linear-order properties such as part-of-speech ngrams (Rosa and Zabokrtsky, 2015; Agi´c, 2017). In domain adaptation, the selection also hinges upon topic models (Plank and Van Noord, 2011) or Bayesian Optimisation (Ruder and Plank, 2017). The metrics we defined in §2.2 are instead based on configurational properties of languages, and add another piece to the puzzle of source selection. The idea of tree processing dates back to the attempts to steer source towards target syntactic structures in statistical MT, although they were mostly limited to simple reordering steps. Gildea (2003) proposed cloning operations to relocate subtrees. Other works learned rewrite patterns in an automatic fashion to minimize differences in the order of chunks (Zhang et al., 2007) or labeled dependencies (Habash, 2007). Instead, Smith and Eisner (2009) proposed to learn jointly a translation and a loose alignment of nodes, in order to avoid enforcing the bias of the source structure. Reviving these approaches within the framework of deep learning seems crucial as far as state-of-art models depend on syntactic information (Eriguchi et al., 2016; Dyer et al., 2016). In general, our approach aims at developing and evaluating models focused on specific constructions rather than languages as a whole (Rimell et al., 2009; Bender, 2011; Rimell et al., 2016). 
The gist is that current models have reached a plateau in performance because they excel with frequent and simple phenomena, but they still lag behind with respect to rarer or more complex constructions. 7 Conclusions and Future Work We have demonstrated that syntactic structures differ across languages even in well-developed annotation schemes such as Universal Dependencies. This variation stems from morphological and syntactic differences across languages. This phenomenon, which we have labeled as anismorphism, can challenge the transfer of knowledge from one language to another. We have proposed novel methodology which reduces the degree of anisomorphism crosslingually 1) by selecting the most compatible languages for transfer, and 2) by editing the syntactic structures (i.e., trees) themselves. First, we have provided two measures of anisomorphism based on Jaccard distance of morphological feature sets, as well as average tree edit distance of parallel sentences. These can provide reliable indicators for language compatibility for source selection in cross-lingual parsing. Second, we have proposed a new method for fine-tuning source dependency trees to resemble target language trees in order to reduce anisomorphism. The method does not depend on parallel data, and it leverages readily available information in typological databases. It boosts the performance of standard frameworks in two downstream applications, obtaining competitive or state-of-art results for 1) NMT on a new dataset of Arabic-Dutch and Indonesian-Portuguese and 2) cross-lingual sentence similarity. Future work will look into automating the tree processing procedure. A parametrised model could be trained to imitate the operations performed by Zhang and Shasha (1989)’s algorithm on multiparallel texts, conditioned on the tree features and previous operations. Another possible research direction is learning the mapping between structures from parallel texts jointly with a main task, in the spirit of quasi-synchronous grammars (Smith and Eisner, 2009). Finally, a wider range of syntactic constructions could be covered by inferring typological strategies from texts ( ¨Ostling, 2015; Coke et al., 2016). The data for NMT, and the code for our crosslingual STS are available at the following link: github.com/ducdauge/isotransf. Acknowledgements This work is supported by the ERC Consolidator Grant LEXICAL (no 648909). The authors would like to thank the anonymous reviewers. 1540 References ˇZeljko Agi´c. 2017. Cross-lingual parser selection for low-resource languages. In Proceedings of the NoDaLiDa 2017 Workshop on Universal Dependencies (UDW 2017), pages 1–10. ˇZeljko Agi´c, J¨org Tiedemann, Kaja Dobrovoljc, Simon Krek, Danijela Merkler, and Sara Moˇze. 2014. Cross-lingual dependency parsing of related languages with rich morphosyntactic tagsets. In Proceedings of the EMNLP 2014 Workshop on Language Technology for Closely Related Languages and Language Variants, pages 13–24. Roee Aharoni and Yoav Goldberg. 2017. Towards string-to-tree neural machine translation. In Proceedings of ACL, pages 132–140. Chris Alberti, Daniel Andor, Ivan Bogatyy, Michael Collins, Dan Gillick, Lingpeng Kong, Terry Koo, Ji Ma, Mark Omernick, Slav Petrov, et al. 2017. SyntaxNet models for the CoNLL 2017 shared task. arXiv preprint arXiv:1703.04929. Vamshi Ambati. 2008. Dependency structure trees in syntax based machine translation. In Adv. MT Seminar Course Report, volume 137. 
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1543–1553 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1543 Language Modeling for Code-Mixing: The Role of Linguistic Theory based Synthetic Data Adithya Pratapa1 Gayatri Bhat2∗ Monojit Choudhury1 Sunayana Sitaram1 Sandipan Dandapat3 Kalika Bali1 1 Microsoft Research, Bangalore, India 2 Language Technology Institute, Carnegie Mellon University 3 Microsoft R&D, Hyderabad, India 1{t-pradi, monojitc, sunayana.sitaram, kalikab}@microsoft.com, [email protected], [email protected] Abstract Training language models for Code-mixed (CM) language is known to be a difficult problem because of lack of data compounded by the increased confusability due to the presence of more than one language. We present a computational technique for creation of grammatically valid artificial CM data based on the Equivalence Constraint Theory. We show that when training examples are sampled appropriately from this synthetic data and presented in certain order (aka training curriculum) along with monolingual and real CM data, it can significantly reduce the perplexity of an RNN-based language model. We also show that randomly generated CM data does not help in decreasing the perplexity of the LMs. 1 Introduction Code-switching or code-mixing (CM) refers to the juxtaposition of linguistic units from two or more languages in a single conversation or sometimes even a single utterance.1 It is quite commonly observed in speech conversations of multilingual societies across the world. Although, traditionally, CM has been associated with informal or casual speech, there is evidence that in several societies, such as urban India and Mexico, CM has become the default code of communication (Parshad et al., 2016), and it has also pervaded written text, especially in computer-mediated communication and social media (Rijhwani et al., 2017). ∗Work done during author’s internship at Microsoft Research 1According to some linguists, code-switching refers to inter-sentential mixing of languages, whereas code-mixing refers to intra-sentential mixing. Since the latter is more general, we will use code-mixing in this paper to mean both. It is, therefore, imperative to build NLP technology for CM text and speech. There have been some efforts towards building of Automatic Speech Recognition Systems and TTS for CM speech (Li and Fung, 2013, 2014; Gebhardt, 2011; Sitaram et al., 2016), and tasks like language identification (Solorio et al., 2014; Barman et al., 2014), POS tagging (Vyas et al., 2014; Solorio and Liu, 2008), parsing and sentiment analysis (Sharma et al., 2016; Prabhu et al., 2016; Rudra et al., 2016) for CM text. Nevertheless, the accuracies of all these systems are much lower than their monolingual counterparts, primarily due to lack of enough data. Intuitively, since CM happens between two (or more languages), one would typically need twice as much, if not more, data to train a CM system. Furthermore, any CM corpus will contain large chunks of monolingual fragments, and relatively far fewer code-switching points, which are extremely important to learn patterns of CM from data. This implies that the amount of data required would not just be twice, but probably 10 or 100 times more than that for training a monolingual system with similar accuracy. 
On the other hand, apart from user-generated content on the Web and social media, it is extremely difficult to gather large volumes of CM data because (a) CM is rare in formal text, and (b) speech data is hard to gather and even harder to transcribe. In order to circumvent the data scarcity issue, in this paper we propose the use of linguisticallymotivated synthetically generated CM data (as a supplement to real CM data) for development of CM NLP systems. In particular, we use the Equivalence Constraint Theory (Poplack, 1980; Sankoff, 1998) for generating linguistically valid CM sentences from a pair of parallel sentences in the two languages. We then use these generated sentences, along with monolingual and little 1544 amount of real CM data to train a CM Language Model (LM). Our experiments show that, when trained following certain sampling strategies and training curriculum, the synthetic CM sentences are indeed able to improve the perplexity of the trained LM over a baseline model that uses only monolingual and real CM data. LM is useful for a variety of downstream NLP tasks such as Speech Recognition and Machine Translation. By definition, it is a discriminator between natural and unnatural language data. The fact that linguistically constrained synthetic data can be used to develop better LM for CM text is, on one hand an indirect statistical and task-based validation of the linguistic theory used to generate the data, and on the other hand an indication that the approach in general is promising and can help solve the issue of data scarcity for a variety of NLP tasks for CM text and speech. 2 Generating Synthetic Code-mixed Data There is a large and growing body of linguistic research regarding the occurrence, syntactic structure and pragmatic functions of codemixing in multilingual communities across the world. This includes many attempts to explain the grammatical constraints on CM, with three of the most widely-accepted being the EmbeddedMatrix (Joshi, 1985; Myers-Scotton, 1993, 1995), the Equivalence Constraint (EC) (Poplack, 1980; Sankoff, 1998) and the Functional Head Constraint (DiSciullo et al., 1986; Belazi et al., 1994) theories. For our experiments, we generate CM sentences as per the EC theory, since it explains a range of interesting CM patterns beyond lexical substitution and is also suitable for computational modeling. Further, in a brief human-evaluation we conducted, we found that it is representative of real CM usage. In this section, we list the assumptions made by the EC theory, briefly explain the theory, and then describe how we generate CM sentences as per this theory. 2.1 Assumptions of the EC Theory Consider two languages L1 and L2 that are being mixed. The EC Theory assumes that both languages are defined by context-free grammars G1 and G2. It also assumes that every nonterminal category X1 in G1 has a corresponding non-terminal category X2 in G2 and that every terminal symbol (or word) w1 in G1 has a corresponding terminal symbol w2 in G2. Finally, it assumes that every production rule in L1 has a corresponding rule in L2 - i.e, the non-terminal categories on the left-hand side of the two rules correspond to each other, and every category/symbol on the right-hand side of one rule corresponds to a category/symbol on the right-hand side of the other rule. 
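To make the rule-correspondence assumption concrete, here is a toy sketch of grammar fragments for the example of Fig. 1 ("She lives in a white house" and its Spanish translation). The rule inventory and the correspondence check are purely illustrative and are not taken from any released implementation.

```python
# Toy context-free fragments for the Fig. 1 example. Paired rules share the
# same left-hand side and the same set of right-hand-side symbols; only the
# order of the right-hand side may differ, as the EC theory assumes.
GRAMMAR_EN = {
    "S":  [["NP", "VP"]],
    "NP": [["PRP"], ["DT", "JJ", "NN"]],   # subject pronoun / a white house
    "VP": [["VBZ", "PP"]],
    "PP": [["IN", "NP"]],
}
GRAMMAR_ES = {
    "S":  [["NP", "VP"]],
    "NP": [["PRP"], ["DT", "NN", "JJ"]],   # subject pronoun / una casa blanca
    "VP": [["VBZ", "PP"]],
    "PP": [["IN", "NP"]],
}

def rules_correspond(g1, g2):
    """True if every rule in g1 has a g2 rule over the same right-hand-side
    symbols (possibly reordered) and vice versa."""
    if g1.keys() != g2.keys():
        return False
    return all(sorted(sorted(rhs) for rhs in g1[lhs])
               == sorted(sorted(rhs) for rhs in g2[lhs]) for lhs in g1)

print(rules_correspond(GRAMMAR_EN, GRAMMAR_ES))  # True
```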
All these correspondences must also hold viceversa (between languages L2 and L1), which implies that the two grammars can only differ in the ordering of categories/symbols on the right-hand side of any production rule. As a result, any sentence in L1 has a corresponding translation in L2, with their parse trees being equivalent except for the ordering of sibling nodes. Fig.1(a) and (b) illustrate one such sentence pair in English and Spanish and their parse-trees. The EC Theory describes a CM sentence as a constrained combination of two such equivalent sentences. While the assumptions listed above are quite strong, they do not prevent the EC Theory from being applied to two natural languages whose grammars do not correspond as described above. We apply a simple but effective strategy to reconcile the structures of a sentence and its translation - if any corresponding subtrees of the two parse trees do not have equivalent structures, we collapse each of these subtrees to a single node. Accounting for the actual asymmetry between a pair of languages will certainly allow for the generation of more CM variants of any L1-L2 sentence pair. However, in our experiments, this strategy retains most of the structural information in the parse trees, and allows for the generation of up to thousands of CM variants of a single sentence pair. 2.2 The Equivalence Constraint Theory Sentence production. Given two monolingual sentences (such as those introduced in Fig.1), a CM sentence is created by traversing all the leaf nodes in the parse tree of either of the two sentences. At each node, either the word at that node or at the corresponding node in the other sentence’s parse is generated. While the traversal may start at any leaf node, once the production enters one constituent, it will exhaust all the lexical slots (leaf nodes) in that constituent or its equivalent constituent in the other language before entering into a higher level constituent or a sister 1545 (a) SE VPE PPE NPE NNE house JJE white DTE a INE in VBZE lives NPE PRPE She (b) SS VPS PPS NPS JJS blanca NNS casa DTS una INS en VBZS vive NPS PRPS Elle (c) S VP PP NP JJ* white NNS casa DTS una INS en VBZE lives NPS PRPS Elle (d) S VP PP NPS JJS blanca NNS casa DTS una INE in VBZE lives NPS PRPS Elle Figure 1: Parse trees of a pair of equivalent (a) English and (b) Spanish sentences, with corresponding hierarchical structure (due to production rules), internal nodes (non-terminal categories) and leaf nodes (terminal symbols), and parse trees of (c) incorrectly code-mixed and (d) correctly code-mixed variants of these sentences (as per the EC theory). constituent. (Sankoff, 1998) This guarantees that the parse tree of a sentence so produced will have the same hierarchical structure as the two monolingual parse trees (Fig. 1(c) and (d)). The EC theory also requires that any monolingual fragment that occurs in the CM sentence must occur in one of the monolingual sentences (in the running example, the fragment una blanca would be disallowed since it does not appear in the Spanish sentence). Switch-point identification. To ensure that the CM sentence does not at any point deviate from both monolingual grammars, the EC theory imposes certain constraints on its parse tree. To this end and in order to identify the code-switching points in a generated sentence, nodes in its parse tree are assigned language labels according to the following rules: All leaf nodes are labeled by the languages of their symbols. 
If all the children of any internal node share a common label, the internal node is also labeled with that language. Any node that is out of rank-order among its siblings according to one language is labeled with the other language. (See labeling in Fig.1(c) and (d)) If any node acquires labels of both languages during this process (such as the node marked with an asterisk in Fig.1(c)), the sentence is disallowed as per the EC theory. In the labeled tree, any pair of adjacent sibling nodes with contrasting labels are said to be at a switch-point (SP). Equivalence constraint. Every switch-point identified in the generated sentence must abide by the EC. Let U →U1U2...Un and V →V1V2...Vn be corresponding rules applied in the two monolingual parse trees, and nodes Ui and Vi+1 be adjacent in the CM parse tree. This pair of nodes is a switch-point, and it only abides by the EC if every node in U1...Ui has a corresponding node in V1...Vi. This is true for the switch-point in Fig.1(d), and indicates that the two grammars are ‘equivalent’ at the code-switch point. More importantly, it shows that switching languages at this point does not require another switch later in the sentence. If every switch-point in the generated sentence abides by the EC, the generated sentence is allowed by the EC theory. 2.3 System Description We assume that the input to the generation model is a pair of parallel sentences in L1 and L2, along with word level alignments. For our experiments, L1 and L2 are English and Spanish, and Sec 3.2 describes how we create the input set. We use the Stanford Parser (Klein and Manning, 2003) to parse the English sentence. Projecting parses. We use the alignments to project the English parse tree onto the Spanish sentence in two steps: (1) We first replace every word in the English parse tree with its Spanish equivalent (2) We re-order the child nodes of each internal node in the tree such that their left-to-right order is as in the Spanish sentence. For instance, after replacing every English word in Fig.1(a) with its corresponding Spanish word, we interchange the positions of casa and blanca to arrive Fig.1(b). For a pair of parallel sentences that follow all the assumptions of the EC theory, these steps can be performed without exception and result in the creation of a Spanish parse tree with the same hierarchical structure as the English parse. We use various techniques to address cases in which the grammatical structures of the two sentences deviate. English words that are unaligned to any Spanish words are replaced by empty strings. (See Fig.2 wherein the English word she has no Spanish counterpart, since this pronoun is dropped in the Spanish sentence.) Contiguous word sequences in one sentence that are aligned to the 1546 (a) SE VPE VPE NPE PRPE it VBE do MDE will NPE NNPE She (b) SE VPE NPE PRPE it MD+VBE do will NPE NNPE She (c) SS VPS MD+VBS har´a NPS PRPS lo NPS NNPS <> Figure 2: (a) The parse of an English sentence as per Stanford CoreNLP. This parse is projected onto the parallel Spanish sentence Lo har´a and modified during this process, to produce corresponding (b) English and (c) Spanish parse trees. same word(s) in the other language are collapsed into a single multi-word node, and the entire subtree between these collapsed nodes and their closest common ancestor is flattened to accommodate this change (example in Fig.2). 
While these changes do result in slightly unnatural or simplified parse trees, they are used very sparingly since English and Spanish have very compatible grammars. Generating CS sentences. The number of CS sentences that can be produced by combining a corresponding pair of English and Spanish sentences increases exponentially with the length of the sentences. Instead of generating these sentences exhaustively, we use the parses to construct a finite-state automaton that succinctly captures the acceptable CS sentences. Since the CS sentence must have the same hierarchical structure as the monolingual sentences, we construct the automaton during a post-order traversal of the monolingual parses. An automaton is constructed at each node by (1) concatenating the automatons constructed at its child nodes, (2) splitting states and removing transitions to ensure that the EC theory is not violated. The last automaton to be constructed, which is associated with the root node, accepts all the CS sentences that can be generated using the monolingual parses. We do not provide the exact details of automaton construction here, but we plan to release our code in the near future. 3 Datasets In this work, we use three types of language data: monolingual data in English and Spanish (Mono), real code-mixed data (rCM), and artificial or generated code-mixed data (gCM). In this section, we describe these datasets and their CM properties. We begin with description of some metrics that we shall use for quantification of the complexity of a CM dataset. 3.1 Measuring CM Complexity The CM data, both real and artificial, can vary in the their relative usage and ordering of L1 and L2 words, and thereby, significantly affect downstream applications like language modeling. We use the following metrics to estimate the amount and complexity of code-mixing in the datasets. Switch-point (SP): As defined in the last section, switch-points are points within a sentence where the languages of the words on the two sides are different. Intuitively, sentences that have more number of SPs are inherently more complex. We also define the metric SP Fraction (SPF) as the number of SP in a sentence divided by the total number of word boundaries in the sentence. Code mixing index (CMI): Proposed by Gamback and Das (2014, 2016), CMI quantifies the amount of code mixing in a corpus by accounting for the language distribution as well as the switching between them. Let N be the number of language tokens, x an utterance; let tLi be the tokens in language Li, P be the number of code switching points in x. Then, the Code mixed index per utterance, Cu(x) for x computed as follows, Cu(x) = (N(x) −maxLi∈L{tLi}(x)) + P(x) N(x) (1) Note that all the metrics can be computed at the sentence level as well as at the corpus level by averaging the values for all the sentences in a corpus. 3.2 Real Datasets We chose to conduct all our experiments on English-Spanish CM tweets because EnglishSpanish CM is well documented (Solorio and Liu, 2008), is one of the most commonly mixed language pairs on social media (Rijhwani et al., 2017), and a couple of CM tweet datasets are readily available (Solorio et al., 2014; Rijhwani et al., 2017). 1547 Dataset # Tweets # Words CMI SPF Mono English 100K 850K (48K) 0 0 Spanish 100K 860K (61K) 0 0 rCM Train 100K 1.4M (91K) 0.31 0.105 Validation 100K 1.4M (91K) 0.31 0.106 Test-17 83K 1.1M (82K) 0.31 0.104 Test-14 13K 138K (16K) 0.12 0.06 gCM 31M 463M (79K) 0.75 0.35 Table 1: Size of the datasets. 
Numbers in parenthesis show the vocabulary size, i.e., the no. of unique words. For our experiments, we use a subset of the tweets collected by Rijhwani et al. (2017) that were automatically identified as English, Spanish or English-Spanish CM. The authors provided us around 4.5M monolingual tweets per language, and 283K CM tweets. These were already deduplicated and tagged for hashtags, URLs, emoticons and language labels automatically through the method proposed in the paper. Table 1 shows the sizes of the various datasets, which are also described below. Mono: 50K tweets were sampled for Spanish and English from the entire collection of monolingual tweets. The Spanish tweets were translated to English and vice versa, which gives us a total of 100K monolingual tweets in each language. We shall refer to this dataset as Mono. The sampling strategy and reason for generating translations will become apparent in Sec. 3.3. rCM: We use two real CM datasets in our experiment. The 283K real CM tweets provided by Rijhwani et al. (2017) were randomly divided into training, validation and test sets of nearly equal sizes. Note that for most of our experiments, we will use a very small subset of the training set consisting of 5000 tweets as train data, because the fundamental assumption of this work is that very little amount of CM data is available for most language pairs (which is in fact true for most pairs beyond some very popularly mixed languages like English-Spanish). Nevertheless, the much larger training set is required for studying the effect of varying the amount of real CM data on our models. We shall refer to this training dataset as rCM. The test set with 83K tweets will be referred to as Test-17. We also use another dataset of Figure 3: Average number of gCM sentences (yaxis) vs mean input sentence length (x-axis) English-Spanish CM tweets for testing our models which was released during the language labeling shared task at the Workshop on “Computational Approaches to Code-switching, EMNLP 2014” (Solorio et al., 2014). We mixed the training, validation and test datasets released during this shared task to construct a set of 13K tweets, which we shall refer to as Test-14. The two test datasets are tweets that were collected three years apart, and therefore, will help us estimate the robustness of the language models. As shown in Table 1, these datasets are quite different in terms of CMI and average number of SP per tweet. For computing the CMI and SP, we used a EnglishSpanish LID to language tag the words. In fact, 9500 tweets in the Test-14 dataset are monolingual, but we chose to retain them because it reflects the real distribution of CM data. Further, Test-14 also has manually annotated language labels, which will be helpful while conducting an in-depth analysis of the models. 3.3 Synthetic Code-Mixed Data As described in the previous section, we use parallel monolingual sentences to generate grammatically valid code mixed sentences. The entire process involves the following four steps. Step 1: We created the parallel corpus by generating translations for all the monolingual English and Spanish tweets (4.5M each) using the Bing Translator API.2 We have found, that the translation quality varies widely across different sentences. Thus, we rank the translated sentences using Pseudo Fuzzy-match Score (PFS) 2https://www.microsoft.com/enus/translator/translatorapi.aspx 1548 (He et al., 2010). First, the forward translation engine (eg. 
English-to-Spanish) translates monolingual source sentence s into target t. Then the reverse translation system (eg. Spanish-English) translates target t into pseudo source s′. Equation 2 computes the PFS between s and s′. PFS = EditDistance(s, s′) max(|s|, |s′|) (2) After manual inspection, we decided to select translation pairs whose PFS ≤0.7. The edit distance is based on Wagner and Fischer (1974). Step 2: We used the fast align toolkit3 (Dyer et al., 2013), to generate the word alignments from these parallel sentences. Step 3: The constituency parses for all the English tweets were obtained using the Stanford PCFG parser (Klein and Manning, 2003). Step 4: Using the parallel sentences, alignments and parse trees, we apply the Equivalent constraint theory (Sec 2.2) to generate all syntactically valid CM sentences while allowing for lexical substitution. We randomly selected 50K monolingual Spanish and English tweets whose PFS ≤0.7. This gave us 200K monolingual tweets in all (Mono dataset) and the total amount of generated CM sentences from these 100K translation pairs was 31M, which we shall refer to as gCM. Note that even though we consider the Mono and gCM as two separate sets, in reality the EC model also generates the monolingual sentences; further, existence of gCM presumes existence of Mono. Hence, we also use Mono as part of all training experiments which use gCM. We would also like to point out that the choice of experimenting with a much smaller set of tweets, only 50K per language, was made because the number of generated tweets even from this small set of monolingual tweet pairs is almost prohibitively large to allow experimentation with several models and their respective configurations. 4 Approach Language modeling is a very widely researched topic (Rosenfeld, 2000; Bengio et al., 2003; Sundermeyer et al., 2015). In recent times, deep learning has been successfully employed to build efficient LMs (Mikolov et al., 2010; Sundermeyer et al., 2012; Arisoy et al., 2012; Che et al., 2017). 3https://github.com/clab/fast align Baheti et al. (2017) recently showed that there is significant effect of the training curriculum, that is the order in which data is presented to an RNNbased LM, on the perplexity of the learnt EnglishSpanish CM language model on tweets. Along similar lines, in this study we focus our experiments on training curriculum, especially regarding the use of gCM data during training, which is the primary contribution of this paper. We do not attempt to innovate in terms of the architecture or computational structure of the LM, and use a standard LSTM-based RNN LM (Sundermeyer et al., 2012) for all our experiments. Indeed, there are enough reasons to believe that CM language is not fundamentally different from nonCM language, and therefore, should not require an altogether different LM architecture. Rather, the difference arises in terms of added complexity due to the presence of lexical items and syntactic structures from two linguistic systems that blows up the space of valid grammatical and lexical configurations, which makes it essential to train the models on large volumes of data. 4.1 Training Curricula Baheti et al. (2017) showed that rather than randomly mixing the monolingual and CM data during training, the best performance is achieved when the LM is first trained with a mixture of monolingual texts from both languages in nearly equal proportions, and ending with CM data. 
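For reference, the round-trip filtering used in Step 1 of the generation pipeline above (Eq. 2) can be sketched as follows. The translation calls themselves are omitted, and the edit distance is computed over tokens here, which is an assumption, since the granularity is not stated.

```python
def edit_distance(a, b):
    """Wagner-Fischer dynamic-programming edit distance."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (x != y))  # substitution
    return dp[len(b)]

def pseudo_fuzzy_score(source, pseudo_source):
    """PFS(s, s') = EditDistance(s, s') / max(|s|, |s'|), as in Eq. 2."""
    s, s_back = source.split(), pseudo_source.split()
    return edit_distance(s, s_back) / max(len(s), len(s_back))

def keep_pair(source, pseudo_source, threshold=0.7):
    """Keep a translation pair only if its round-trip PFS is at most 0.7."""
    return pseudo_fuzzy_score(source, pseudo_source) <= threshold

src = "she lives in a white house"
round_trip = "she lives in the white house"    # hypothetical back-translation
print(pseudo_fuzzy_score(src, round_trip))     # 1/6, so the pair is kept
print(keep_pair(src, round_trip))
```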
Motivated by this finding, we define the following basic training curricula (“X | Y” indicates training the model first with data X and then data Y): (1) rCM, (2) Mono, (3) Mono | rCM, (4a) Mono | gCM, (4b) gCM | Mono, (5a) Mono | gCM | rCM, (5b) gCM | Mono | rCM Curricula 1-3 are baselines, where gCM data is not used. Note that curriculum 3 is the best case according to Baheti et al. (2017). Curricula 4a and 4b help us examine how far generated data can substitute real data. Finally, curricula 5a and 5b use all the data, and we would expect them to perform the best. Note that we do not experiment with other potential combinations (e.g., rCM | gCM | Mono) because it is known (and we also see this in our experiments) that adding rCM data at the end always leads to better models. 1549 Figure 4: Scatter plot of fractional increase in word frequency in gCM (y-axis) vs original frequency (x-axis). 4.2 Sampling from gCM As we have seen in Sec 3.3 (Fig. 3), in the EC model, a pair of monolingual parallel tweets gives rise to a large number (typically exponential in the length of the tweet) of CM tweets. On the other hand, in reality, only a few of those tweets would be observed. Further, if all the generated sentences are used to train an LM, it is not only computationally expensive, it also leads to undesirable results because the statistical properties of the distribution of the gCM corpus is very different from real data. We see this in our experiments (not reported in this paper for paucity of space), and also in Fig 4, where we plot the ratio of the frequencies of the words in gCM and Mono corpora (y-axis) against their original frequencies in Mono (x-axis). We can clearly see that the frequencies of the words are scaled up non-uniformly, the ratios varying between 1 and 500,000 for low frequency words. In order to reduce this skew, instead of selecting the entire gCM data, we propose three sampling techniques for creating the training data from gCM: Random: For each monolingual pair of parallel tweets, we randomly pick a fixed number, k, of CM tweets. We shall refer to the resultant training corpus as χ-gCM. CMI-based: For each monolingual pair of parallel tweets, we randomly pick k CM tweets and bucket them using CMI (in 0.1 intervals). Thus, in this case we can define two different curricula, where we present the data in increasing or decreasing order of CMI during training, which will be represented by the notations ↑-gCM and ↓-gCM respectively. SPF-based: For each monolingual pair of parallel tweets, we randomly pick k CM tweets such that the SPF distribution (section 3.1) of these tweets is similar to that of rCM data (as estimated from the validation set). This strategy will be referred to as ρ-gCM. Thus, depending on the gCM sampling strategy used, curricula 4a-b and 5a-b can have three different versions each. Note that since CMI for Mono is 0, ↑-gCM is not meaningful for 4b and 5b and similarly, ↓-gCM not for 4a and 5a. 5 Experiments and Results For all our experiments, we use a 2 layered RNN with LSTM units and hidden layer dimension of 100. While training, we use sampled softmax with 5000 samples instead of a full softmax to speed up the training process. The sampling is based on the word frequency in the training corpus. We use momentum SGD with a learning rate of 0.002. We have used the CNTK toolkit for building our models.4 We use a fixed k=5 (from each monolingual pair) for sampling the gCM data. 
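To make the sampling strategies of Section 4.2 concrete, the sketch below computes SPF and CMI (Eq. 1) over per-token language labels and selects k generated variants per sentence pair so that their SPF values track those of real CM data. The matching procedure shown is one plausible reading, not the exact procedure used in the experiments, and the function names are illustrative.

```python
import random
from typing import List, Sequence

def switch_point_fraction(langs: Sequence[str]) -> float:
    """SPF: number of switch points divided by the number of word boundaries."""
    if len(langs) < 2:
        return 0.0
    switches = sum(a != b for a, b in zip(langs, langs[1:]))
    return switches / (len(langs) - 1)

def code_mixing_index(langs: Sequence[str]) -> float:
    """Cu(x) = ((N(x) - max_Li t_Li(x)) + P(x)) / N(x), following Eq. 1."""
    n = len(langs)
    if n == 0:
        return 0.0
    dominant = max(langs.count(lang) for lang in set(langs))
    switches = sum(a != b for a, b in zip(langs, langs[1:]))
    return ((n - dominant) + switches) / n

def sample_spf_matched(variants: List[List[str]], k: int,
                       real_spfs: List[float]) -> List[List[str]]:
    """rho-gCM style sampling: pick k variants whose SPF values are closest
    to SPF values drawn from the real CM data (illustrative matching)."""
    pool, chosen = list(variants), []
    for target in random.sample(real_spfs, min(k, len(real_spfs), len(pool))):
        best = min(pool, key=lambda v: abs(switch_point_fraction(v) - target))
        chosen.append(best)
        pool.remove(best)
    return chosen

# Toy usage: per-token language labels of three generated variants of one pair.
variants = [["en", "en", "es", "es"], ["en", "es", "en", "es"],
            ["es", "es", "es", "en"]]
print([round(switch_point_fraction(v), 2) for v in variants])  # [0.33, 1.0, 0.33]
print([round(code_mixing_index(v), 2) for v in variants])      # [0.75, 1.25, 0.5]
print(sample_spf_matched(variants, k=2, real_spfs=[0.1, 0.3]))
```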
We observed the performance on ↑-gCM to be the best when trained till CMI 0.4 and similarly on ↓-gCM when trained from 1.0 to 0.6. 5.1 Results Table 2 presents the perplexities on validation, Test-14 and Test-17 datasets for all the models (Col. 3, 4 and 5). We observe the following trends: (1) Model 5(b)-ρ has the least perplexity value (significantly different from the second lowest value in the column, p < 0.00001 for a paired t-test). (2) There is 55 and 90 point reduction in perplexity on Test-17 and Test-14 sets respectively from the baseline experiment 3, that does not use gCM data. Thus, addition of gCM data is helpful. (3) Only the 4a and 4b models are worse than 3, while 5a and 5b models are better. Hence, rCM is indispensable, even though gCM helps. (4) SPF based sampling performs significantly better (again p < 0.00001) than other sampling techniques. To put these numbers in perspective, we also trained our model on 50k monolingual English data, which gave a PPL of 264. This shows that the high PPL values our models obtain are due to the inherent complexity of modeling CM language. This is further substantiated by the PPL 4https://www.microsoft.com/en-us/cognitive-toolkit/ 1550 ID Training curriculum Overall PPL Avg. SP PPL Valid Test-17 Test-14 Valid Test-17 Test-14 1 rCM 1995 2018 1822 5598 5670 8864 2 Mono 1588 1607 892 23378 23790 26901 3 Mono | rCM 1029 1041 861 4734 4824 7913 4(a)-χ Mono | χ-gCM 1749 1771 1119 5752 5869 6065 4(a)-↑ Mono | ↑-gCM 1852 1872 1208 9074 9167 8803 4(a)-ρ Mono | ρ-gCM 1599 1618 1116 6534 6618 7293 4(b)-χ χ-gCM | Mono 1659 1680 903 20634 21028 20300 4(b)-↓ ↓-gCM | Mono 1900 1917 973 28422 28722 25006 4(b)-ρ ρ-gCM | Mono 1622 1641 871 26191 26710 22557 5(a)-χ Mono | χ-gCM | rCM 1026 1038 836 4317 4386 5958 5(a)-↑ Mono | ↑-gCM | rCM 1045 1058 961 4983 5078 6861 5(a)-ρ Mono | ρ-gCM | rCM 999 1011 830 4736 4829 6807 5(b)-χ χ-gCM | Mono | rCM 1006 1019 790 4878 4987 7018 5(b)-↓ ↓-gCM | Mono | rCM 1012 1025 800 5396 5489 7476 5(b)-ρ ρ-gCM | Mono | rCM 976 986 772 4810 4912 6547 Table 2: Perplexity of the LM Models on all tweets and only on SP (right block). RL 3 5(a)-χ 5(a)-ρ 5(a)-↑ 5(b)-↓ 5(b)-χ 5(b)-ρ 1 13222 12815 13717 14017 13761 13494 13077 2 2201 2120 2064 2078 2155 2256 2108 3 970 926 902 896 914 966 911 4 643 594 567 575 573 608 571 5 574 540 509 517 502 553 503 6 593 545 529 543 520 566 529 ≥7 507 465 444 460 431 479 440 Table 3: Perplexities of minor language runs for various run lengths on Test-17. # rCM 0.5K 1K 2.5K 5K 10K 50K 3 1238 1186 1120 1041 991 812 5(b)-ρ 1181 1141 1068 986 951 808 Table 4: Perplexity variation on Test-17 with changes in amount of rCM train data. Similar trends for other models (left for paucity of space) values computed only at the code-switch points, which are shown in Table 2, col. 6, 7 and 8. Even for the best model, which in this case is 5(a)-χ, PPL is four times higher than the overall PPL on Test-17. Run length: The complexity of modeling CM is also apparent from Table 3, which reports the perplexity value of the 3 and 5 models for monolingual fragments of various run lengths. We define run length as the number of words in a maximal monolingual fragment or run within a tweet. In our analysis, we only consider runs of the embedded language, defined as the language that has fewer words. 
As one would expect, model 5(a)χ performs the best for run length 1 (recall that it has lowest PPL at SP), but as the run length increases, the models sampling the gCM data usSample size (k) 1 2 5 10 # tweets 93K 184K 497K 952K 5(b)-ρ 1081 1053 986 1019 Table 5: Variation of PPL on Test-17 with gCM sample size k. Similar trends for other models. ing CMI (5(a)-↑and 5(b)-↓) are better than the randomly sampled (χ) models. Run length 1 are typically cases of word borrowing and lexical substitution; higher run length segments are typically an indication of CM. Clearly, modeling the shorter runs of the embedded language seems to be one of the most challenging aspect of CM LM. Significance of Linguistic Constraints: To understand the importance of the linguistic constraints imposed by EC on generation of gCM, we conducted an experiment where a synthetic CM corpus was created by combining random contiguous segments from the monolingual tweets such that the generated CM tweets’ SPF distribution matched that of rCM. When we replaced gCM by this corpus in 5(b)-ρ, the PPL on test-17 was 1060, which is worse than the baseline PPL. Effect of rCM size: Table 4 shows the PPL values for models 3 and 5(b)-ρ when trained with different amounts of rCM data, keeping other parameters constant. As expected, the PPL drops for both models as rCM size increases. However, even with high rCM data, gCM does help in improving the LM until we have 50k rCM data (comparable to monolingual, and an unrealistic scenario in practice), where the returns of adding gCM starts diminishing. We also observe that in gen1551 eral, model 3 needs twice the amount of rCM data to perform as well as model 5(b)-ρ. Effect of gCM size: In our sampling methods on gCM data, we fixed our sample size, k as 5 for consistency and feasibility of experiments. To understand the effect of k (and hence the size of the gCM data), we experimented with k = 1, 2, and 10 keeping everything else fixed. Table 5 reports the results for the models 3 and 5(b)-ρ. We observe that unlike rCM data, increasing gCM data or k does not necessarily decrease PPL after a point. We speculate that there is trade-off between k and the amount of rCM data, and also probably between these and the amount of monolingual data. We plan to explore this further in future. 6 Related Work We briefly describe the various types of approaches used for building LM for CM text. Bilingual models: These models combine data from monolingual data sources in both languages (Weng et al., 1997). Factored models: Gebhardt (2011) uses Factored Language Models for rescoring n-best lists during ASR decoding. The factors used include POS tags, CS point probability and LID. In Adel et al.(2014b; 2014a; 2013) RNNLMs are combined with n-gram based models, or converted to backoff models, giving improvements in perplexity and mixed error rate. Models that incorporate linguistic constraints: Li and Fung (2013) use inversion constraints to predict CS points and integrates this prediction into the ASR decoding process. Li and Fung (2014) integrates Functional Head constraints (FHC) for code-switching into the Language Model for Mandarin-English speech recognition. This work uses parsing techniques to restrict the lattice paths during decoding of speech to those permissible under the FHC theory. Our method instead imposes grammatical constraints (EC theory) to generate synthetic data, which can potentially be used to augment real CM data. 
This allows flexibility to deploy any sophisticated LM architecture and the synthetic data generated can also be used for CM tasks other than speech recognition. Training curricula for CM: Baheti et al. (2017) show that a training curriculum where an RNN-LM is trained first with interleaved monolingual data in both languages followed by CM data gives the best results for English-Spanish LM. The perplexity of this model is 4544, which then reduces to 298 after interpolation with a statistical n-gram LM. However, these numbers are not directly comparable to our work because the datasets are different. Our work is an extension of this approach showing that adding synthetic data further improves results. We do not know of any work that uses synthetically generated CM data for training LMs. 7 Conclusion In this paper, we presented a computational method for generating synthetic CM data based on the EC theory of code-mixing, and showed that sampling text from the synthetic corpus (according to the distribution of SPF found in real CM data) helps in reduction of PPL of the RNN-LM by an amount which is equivalently achieved by doubling the amount of real CM data. We also showed that randomly generated CM data doesn’t improve the LM. Thus, the linguistic theory based generation is of crucial significance. There is no unanimous theory in linguistics on syntactic structure of CM language. Hence, as a future work, we would like to compare the usefulness of different linguistic theories and different constraints within each theory in our proposed LM framework. This can also provide an indirect validation of the theories. Further, we would like to study sampling techniques motivated by natural distributions of linguistic structures. Acknowledgements We would like to thank the anonymous reviewers for their valuable suggestions. References Heike Adel, K. Kirchhoff, N. T. Vu, D. Telaar, and T. Schultz. 2014a. Combining recurrent neural networks and factored language models during decoding of code-switching speech. In INTERSPEECH, pages 1415–1419. Heike Adel, K Kirchhoff, N T Vu, D Telaar, and T Schultz. 2014b. Comparing approaches to convert recurrent neural networks into backoff language models for efficient decoding. In INTERSPEECH, pages 651–655. Heike Adel, N T Vu, and T Schultz. 2013. Combination of recurrent neural networks and factored language models for code-switching language modeling. In ACL (2), pages 206–211. 1552 Ebru Arisoy, Tara N Sainath, Brian Kingsbury, and Bhuvana Ramabhadran. 2012. Deep neural network language models. In Proceedings of the NAACLHLT 2012 Workshop: Will We Ever Really Replace the N-gram Model? On the Future of Language Modeling for HLT, pages 20–28. Association for Computational Linguistics. Ashutosh Baheti, Sunayana Sitaram, Monojit Choudhury, and Kalika Bali. 2017. Curriculum design for code-switching: Experiments with language identification and language modeling with deep neural networks. In Proc. of ICON-2017, Kolkata, India, pages 65–74. Utsab Barman, Amitava Das, Joachim Wagner, and Jennifer Foster. 2014. Code mixing: A challenge for language identification in the language of social media. In The 1st Workshop on Computational Approaches to Code Switching, EMNLP 2014. Hedi M Belazi, Edward J Rubin, and Almeida Jacqueline Toribio. 1994. Code switching and x-bar theory: The functional head constraint. Linguistic inquiry, pages 221–237. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. 
Journal of machine learning research, 3(Feb):1137–1155. Tong Che, Yanran Li, Ruixiang Zhang, R Devon Hjelm, Wenjie Li, Yangqiu Song, and Yoshua Bengio. 2017. Maximum-likelihood augmented discrete generative adversarial networks. arXiv preprint arXiv:1702.07983. A.-M. DiSciullo, Pieter Muysken, and R. Singh. 1986. Government and code-mixing. Journal of Linguistics, 22:1–24. Chris Dyer, Victor Chahuneau, and N A. Smith. 2013. A simple, fast, and effective reparameterization of ibm model 2. In Proceedings of NAACL-HLT 2013, pages 644–648. Association for Computational Linguistics. B. Gamback and A Das. 2014. On measuring the complexity of code-mixing. In Proc. of the 1st Workshop on Language Technologies for Indian Social Media (Social-India). B. Gamback and A Das. 2016. Comparing the level of code-switching in corpora. In Proc. of the 10th International Conference on Language Resources and Evaluation (LREC). Jan Gebhardt. 2011. Speech recognition on englishmandarin code-switching data using factored language models. Yifan He, Yanjun Ma, Andy Way, and Josef Van Genabith. 2010. Integrating n-best smt outputs into a tm system. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 374–382. Association for Computational Linguistics. A. K. Joshi. 1985. Processing of Sentences with Intrasentential Code Switching. In D. R. Dowty, L. Karttunen, and A. M. Zwicky, editors, Natural Language Parsing: Psychological, Computational, and Theoretical Perspectives, pages 190–205. Cambridge University Press, Cambridge. D Klein and CD Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st annual meeting of the association for computational linguistics. Association of Computational Linguistics. Ying Li and P Fung. 2013. Improved mixed language speech recognition using asymmetric acoustic model and language model with code-switch inversion constraints. In ICASSP, pages 7368–7372. Ying Li and P Fung. 2014. Language modeling with functional head constraint for code switching speech recognition. In EMNLP. Tom´aˇs Mikolov, Martin Karafi´at, Luk´aˇs Burget, Jan ˇCernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association. Carol Myers-Scotton. 1993. Duelling Languages:Grammatical structure in Code-switching. Clarendon Press, Oxford. Carol Myers-Scotton. 1995. A lexically based model of code-switching. In Lesley Milroy and Pieter Muysken, editors, One Speaker, Two Languages: Cross-disciplinary Perspectives on Code-switching, pages 233–256. Cambridge University Press, Cambridge. Rana D. Parshad, Suman Bhowmick, Vineeta Chand, Nitu Kumari, and Neha Sinha. 2016. What is India speaking? Exploring the “Hinglish” invasion. Physica A, 449:375–389. Shana Poplack. 1980. Sometimes Ill start a sentence in Spanish y termino en espaol. Linguistics, 18:581– 618. Ameya Prabhu, Aditya Joshi, Manish Shrivastava, and Vasudeva Varma. 2016. Towards sub-word level compositions for sentiment analysis of hindi-english code mixed text. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2482–2491. Shruti Rijhwani, R Sequiera, M Choudhury, K Bali, and C S Maddila. 2017. Estimating code-switching on Twitter with a novel generalized word-level language identification technique. In ACL. Ronald Rosenfeld. 2000. Two decades of statistical language modeling: Where do we go from here? 
Proceedings of the IEEE, 88(8):1270–1278. Koustav Rudra, S Rijhwani, R Begum, K Bali, M Choudhury, and N Ganguly. 2016. Understanding language preference for expression of opinion 1553 and sentiment: What do Hindi-English speakers do on Twitter? In EMNLP, pages 1131–1141. David Sankoff. 1998. A formal production-based explanation of the facts of code-switching. Bilingualism: language and cognition, 1(01):39–50. A. Sharma, S. Gupta, R. Motlani, P. Bansal, M. Srivastava, R. Mamidi, and D.M Sharma. 2016. Shallow parsing pipeline for hindi-english code-mixed social media text. In Proceedings of NAACL-HLT. Sunayana Sitaram, Sai Krishna Rallabandi, Shruti Rijhwani, and Alan W Black. 2016. Experiments with cross-lingual systems for synthesis of code-mixed text. In 9th ISCA Speech Synthesis Workshop. Thamar Solorio and Yang Liu. 2008. Part-of-speech tagging for english-spanish code-switched text. In Proc. of EMNLP. Thamar Solorio et al. 2014. Overview for the first shared task on language identification in codeswitched data. In 1st Workshop on Computational Approaches to Code Switching, EMNLP, pages 62– 72. Martin Sundermeyer, Hermann Ney, and Ralf Schl¨uter. 2015. From feedforward to recurrent lstm neural networks for language modeling. IEEE Transactions on Audio, Speech, and Language Processing, 23(3):517–529. Martin Sundermeyer, Ralf Schl¨uter, and Hermann Ney. 2012. Lstm neural networks for language modeling. In Thirteenth Annual Conference of the International Speech Communication Association. Yogarshi Vyas, S Gella, J Sharma, K Bali, and M Choudhury. 2014. POS Tagging of EnglishHindi Code-Mixed Social Media Content. In Proc. EMNLP, pages 974–979. Robert A Wagner and Michael J Fischer. 1974. The string-to-string correction problem. Journal of the ACM (JACM), 21(1):168–173. Fuliang Weng, H Bratt, L Neumeyer, and A Stolcke. 1997. A study of multilingual speech recognition. In EUROSPEECH, volume 1997, pages 359–362. Citeseer.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1554–1564 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1554 Chinese NER Using Lattice LSTM Yue Zhang∗and Jie Yang∗ Singapore University of Technology and Design yue [email protected] jie [email protected] Abstract We investigate a lattice-structured LSTM model for Chinese NER, which encodes a sequence of input characters as well as all potential words that match a lexicon. Compared with character-based methods, our model explicitly leverages word and word sequence information. Compared with word-based methods, lattice LSTM does not suffer from segmentation errors. Gated recurrent cells allow our model to choose the most relevant characters and words from a sentence for better NER results. Experiments on various datasets show that lattice LSTM outperforms both word-based and character-based LSTM baselines, achieving the best results. 1 Introduction As a fundamental task in information extraction, named entity recognition (NER) has received constant research attention over the recent years. The task has traditionally been solved as a sequence labeling problem, where entity boundary and category labels are jointly predicted. The current stateof-the-art for English NER has been achieved by using LSTM-CRF models (Lample et al., 2016; Ma and Hovy, 2016; Chiu and Nichols, 2016; Liu et al., 2018) with character information being integrated into word representations. Chinese NER is correlated with word segmentation. In particular, named entity boundaries are also word boundaries. One intuitive way of performing Chinese NER is to perform word segmentation first, before applying word sequence labeling. The segmentation →NER pipeline, however, can suffer the potential issue of error propagation, since NEs are an important source of OOV ∗Equal contribution. 南 South 京 Capital 市 City 长 Long 江 River 大 Big 桥 Bridge 长江 Yangtze River 市长 mayor 南京 Nanjing 大桥 Bridge 长江大桥 Yangtze River Bridge Person? 南京市 Nanjing City Figure 1: Word character lattice. in segmentation, and incorrectly segmented entity boundaries lead to NER errors. This problem can be severe in the open domain since crossdomain word segmentation remains an unsolved problem (Liu and Zhang, 2012; Jiang et al., 2013; Liu et al., 2014; Qiu and Zhang, 2015; Chen et al., 2017; Huang et al., 2017). It has been shown that character-based methods outperform word-based methods for Chinese NER (He and Wang, 2008; Liu et al., 2010; Li et al., 2014). One drawback of character-based NER, however, is that explicit word and word sequence information is not fully exploited, which can be potentially useful. To address this issue, we integrate latent word information into characterbased LSTM-CRF by representing lexicon words from the sentence using a lattice structure LSTM. As shown in Figure 1, we construct a wordcharacter lattice by matching a sentence with a large automatically-obtained lexicon. As a result, word sequences such as “长江大桥(Yangtze River Bridge)”, “长江(Yangtze River)” and “大 桥(Bridge)” can be used to disambiguate potential relevant named entities in a context, such as the person name “江大桥(Daqiao Jiang)”. Since there are an exponential number of wordcharacter paths in a lattice, we leverage a lattice LSTM structure for automatically controlling information flow from the beginning of the sentence to the end. 
As shown in Figure 2, gated cells are used to dynamically route information from 1555 南 South c h 京 Capital c h 市 City c h 长 Long c h 江 River c h 大 Big c h 桥 Bridge c h 长江 Yangtze River 市长 mayor 南京 Nanjing 长江大桥 Yangtze River Bridge 南京市 Nanjing City 大桥 Bridge Figure 2: Lattice LSTM structure. different paths to each character. Trained over NER data, the lattice LSTM can learn to find more useful words from context automatically for better NER performance. Compared with characterbased and word-based NER methods, our model has the advantage of leveraging explicit word information over character sequence labeling without suffering from segmentation error. Results show that our model significantly outperforms both character sequence labeling models and word sequence labeling models using LSTMCRF, giving the best results over a variety of Chinese NER datasets across different domains. Our code and data are released at https:// github.com/jiesutd/LatticeLSTM. 2 Related Work Our work is in line with existing methods using neural network for NER. Hammerton (2003) attempted to solve the problem using a unidirectional LSTM, which was among the first neural models for NER. Collobert et al. (2011) used a CNN-CRF structure, obtaining competitive results to the best statistical models. dos Santos et al. (2015) used character CNN to augment a CNN-CRF model. Most recent work leverages an LSTM-CRF architecture. Huang et al. (2015) uses hand-crafted spelling features; Ma and Hovy (2016) and Chiu and Nichols (2016) use a character CNN to represent spelling characteristics; Lample et al. (2016) use a character LSTM instead. Our baseline word-based system takes a similar structure to this line of work. Character sequence labeling has been the dominant approach for Chinese NER (Chen et al., 2006b; Lu et al., 2016; Dong et al., 2016). There have been explicit discussions comparing statistical word-based and character-based methods for the task, showing that the latter is empirically a superior choice (He and Wang, 2008; Liu et al., 2010; Li et al., 2014). We find that with proper representation settings, the same conclusion holds for neural NER. On the other hand, lattice LSTM is a better choice compared with both word LSTM and character LSTM. How to better leverage word information for Chinese NER has received continued research attention (Gao et al., 2005), where segmentation information has been used as soft features for NER (Zhao and Kit, 2008; Peng and Dredze, 2015; He and Sun, 2017a), and joint segmentation and NER has been investigated using dual decomposition (Xu et al., 2014), multi-task learning (Peng and Dredze, 2016), etc. Our work is in line, focusing on neural representation learning. While the above methods can be affected by segmented training data and segmentation errors, our method does not require a word segmentor. The model is conceptually simpler by not considering multi-task settings. External sources of information has been leveraged for NER. In particular, lexicon features have been widely used (Collobert et al., 2011; Passos et al., 2014; Huang et al., 2015; Luo et al., 2015). Rei (2017) uses a word-level language modeling objective to augment NER training, performing multi-task learning over large raw text. Peters et al. (2017) pretrain a character language model to enhance word representations. Yang et al. (2017b) exploit cross-domain and cross-lingual knowledge via multi-task learning. 
We leverage external data by pretraining word embedding lexicon over large automatically-segmented texts, while semisupervised techniques such as language modeling are orthogonal to and can also be used for our lattice LSTM model. Lattice structured RNNs can be viewed as a natural extension of tree-structured RNNs (Tai et al., 2015) to DAGs. They have been used to model motion dynamics (Sun et al., 2017), dependencydiscourse DAGs (Peng et al., 2017), as well as speech tokenization lattice (Sperber et al., 2017) and multi-granularity segmentation outputs (Su et al., 2017) for NMT encoders. Compared with existing work, our lattice LSTM is different in both motivation and structure. For example, being designed for character-centric lattice-LSTMCRF sequence labeling, it has recurrent cells but not hidden vectors for words. To our knowledge, we are the first to design a novel lattice LSTM representation for mixed characters and lexicon words, and the first to use a word-character lattice for segmentation-free Chinese NER. 1556 3 Model We follow the best English NER model (Huang et al., 2015; Ma and Hovy, 2016; Lample et al., 2016), using LSTM-CRF as the main network structure. Formally, denote an input sentence as s = c1, c2, . . . , cm, where cj denotes the jth character. s can further be seen as a word sequence s = w1, w2, . . . , wn, where wi denotes the ith word in the sentence, obtained using a Chinese segmentor. We use t(i, k) to denote the index j for the kth character in the ith word in the sentence. Take the sentence in Figure 1 for example. If the segmentation is “南京市长江大桥”, and indices are from 1, then t(2, 1) = 4 (长) and t(1, 3) = 3 (市). We use the BIOES tagging scheme (Ratinov and Roth, 2009) for both wordbased and character-based NER tagging. 3.1 Character-Based Model The character-based model is shown in Figure 3(a). It uses an LSTM-CRF model on the character sequence c1, c2, . . . , cm. Each character cj is represented using xc j = ec(cj) (1) ec denotes a character embedding lookup table. A bidirectional LSTM (same structurally as Eq. 11) is applied to x1, x2, . . . , xm to obtain −→h c 1, −→h c 2, . . . , −→h c m and ←−h c 1, ←−h c 2, . . . , ←−h c m in the left-to-right and right-to-left directions, respectively, with two distinct sets of parameters. The hidden vector representation of each character is: hc j = [−→h c j; ←−h c j] (2) A standard CRF model (Eq. 17) is used on hc 1, hc 2, . . . , hc m for sequence labelling. • Char + bichar. Character bigrams have been shown useful for representing characters in word segmentation (Chen et al., 2015; Yang et al., 2017a). We augment the character-based model with bigram information by concatenating bigram embeddings with character embeddings: xc j = [ec(cj); eb(cj, cj+1)], (3) where eb denotes a charater bigram lookup table. • Char + softword. It has been shown that using segmentation as soft features for character-based NER models can lead to improved performance (Zhao and Kit, 2008; Peng and Dredze, 2016). 京 Capital I-­‐LOC 𝒄" # 𝒙" # 𝒉" # E-­‐LOC 𝒄& # 𝒙& # 𝒉& # B-­‐LOC 𝒄' # 𝒙' # 𝒉' # 市 City 长 Long B-­‐LOC 𝒄( # 𝒙( # 𝒉( # 南 South (a) Character-based model. B-­‐LOC 𝒄"# 𝒙" # 𝒉"# E-­‐LOC 𝒄&# 𝒙& # 𝒉&# 市 City 南京 Nanjing B-­‐LOC 𝒄'# 𝒙' # 𝒉'# 长江 Yangtze River E-­‐LOC 𝒄(# 𝒙( # 𝒉(# 大桥 Bridge (b) Word-based model. 京 Capital I-­‐LOC 𝒄" # 𝒙"# 𝒉" # E-­‐LOC 𝒄& # 𝒙&# 𝒉& # 市 City B-­‐LOC 𝒄' # 𝒙'# 𝒉' # 南 South 南京市 Nanjing City 𝒙',& ) 𝒄',& ) (c) Lattice model. 
Figure 3: Models.1 We augment the character representation with segmentation information by concatenating segmentation label embeddings to character embeddings: xc j = [ec(cj); es(seg(cj))], (4) where es represents a segmentation label embedding lookup table. seg(cj) denotes the segmentation label on the character cj given by a word segmentor. We use the BMES scheme for repre1To keep the figure concise, we (i) do not show gate cells, which uses ht−1 for calculating ct; (ii) only show one direction. 1557 senting segmentation (Xue, 2003). hw i = [−→ hw i ; ←− hw i ] (5) Similar to the character-based case, a standard CRF model (Eq. 17) is used on hw 1 , hw 2 , . . . , hw m for sequence labelling. 3.2 Word-Based Model The word-based model is shown in Figure 3(b). It takes the word embedding ew(wi) for representation each word wi: xw i = ew(wi), (6) where ew denotes a word embedding lookup table. A bi-directioanl LSTM (Eq. 11) is used to obtain a left-to-right sequence of hidden states −→ hw 1 , −→ hw 2 , . . . , −→ hw n and a right-to-left sequence of hidden states ←− hw 1 , ←− hw 2 , . . . , ←− hw n for the words w1, w2, . . . , wn, respectively. Finally, for each word wi, −→ hw i and ←− hw i are concatenated as its representation: Integrating character representations Both character CNN (Ma and Hovy, 2016) and LSTM (Lample et al., 2016) have been used for representing the character sequence within a word. We experiment with both for Chinese NER. Denoting the representation of characters within wi as xc i, a new word representation is obtained by concatenation of ew(wi) and xc i: xw i = [ew(wi); xc i] (7) • Word + char LSTM. Denoting the embedding of each input character as ec(cj), we use a bi-directional LSTM (Eq. 11) to learn hidden states −→h c t(i,1), . . . , −→h c t(i,len(i)) and ←−h c t(i,1), . . . , ←−h c t(i,len(i)) for the characters ct(i,1), . . . , ct(i,len(i)) of wi, where len(i) denotes the number of characters in wi. The final character representation for wi is: xc i = [−→h c t(i,len(i)); ←−h c t(i,1)] (8) • Word + char LSTM′. We investigate a variation of word + char LSTM model that uses a single LSTM to obtain −→h c j and ←−h c j for each cj. It is similar with the structure of Liu et al. (2018) but not uses the highway layer. The same LSTM structure as defined in Eq. 11 is used, and the same method as Eq. 8 is used to integrate character hidden states into word representations. • Word + char CNN. A standard CNN (LeCun et al., 1989) structure is used on the character sequence of each word to obtain its character representation xc i. Denoting the embedding of character cj as ec(cj), the vector xc i is given by: xc i = max t(i,1)≤j≤t(i,len(i))(W⊤ CNN   ec(cj−ke−1 2 ) . . . ec(cj+ ke−1 2 )  + bCNN), (9) where WCNN and bCNN are parameters, ke = 3 is the kernal size and max denotes max pooling. 3.3 Lattice Model The overall structure of the word-character lattice model is shown in Figure 2, which can be viewed as an extension of the character-based model, integrating word-based cells and additional gates for controlling information flow. Shown in Figure 3(c), the input to the model is a character sequence c1, c2, . . . , cm, together with all character subsequences that match words in a lexicon D. As indicated in Section 2, we use automatically segmented large raw text for buinding D. Using wd b,e to denote such a subsequence that begins with character index b and ends with character index e, the segment wd 1,2 in Figure 1 is “南 京(Nanjing)” and wd 7,8 is “大桥(Bridge)”. 
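As a concrete illustration of how the lexicon subsequences w^d_{b,e} can be enumerated for the example sentence, consider the brute-force sketch below. A trie would be the more realistic choice for a large lexicon D, and the maximum-span bound and function name are illustrative assumptions rather than details of the released code; single-character spans are skipped, since single-character words are excluded in the model.

```python
from typing import List, Tuple

def match_lexicon_words(chars: List[str], lexicon: set,
                        max_len: int = 4) -> List[Tuple[int, int, str]]:
    """Return all (b, e, word) spans, 1-indexed and inclusive, whose character
    subsequence chars[b..e] is a word in the lexicon. Length-1 spans are skipped."""
    spans = []
    n = len(chars)
    for b in range(n):
        for e in range(b + 1, min(n, b + max_len)):
            word = "".join(chars[b:e + 1])
            if word in lexicon:
                spans.append((b + 1, e + 1, word))
    return spans

lexicon = {"南京", "南京市", "市长", "长江", "大桥", "长江大桥"}
sentence = list("南京市长江大桥")
for b, e, w in match_lexicon_words(sentence, lexicon):
    print(f"w^d_{{{b},{e}}} = {w}")
```

Running this on the example recovers exactly the word nodes of Figure 1, e.g. w^d_{1,2} (南京), w^d_{4,7} (长江大桥) and w^d_{6,7} (大桥).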
Four types of vectors are involved in the model, namely input vectors, output hidden vectors, cell vectors and gate vectors. As basic components, a character input vector is used to represent each character c_j as in the character-based model:

x^c_j = e^c(c_j).   (10)

The basic recurrent structure of the model is constructed using a character cell vector c^c_j and a hidden vector h^c_j on each c_j, where c^c_j serves to record recurrent information flow from the beginning of the sentence to c_j and h^c_j is used for CRF sequence labelling using Eq. 17. The basic recurrent LSTM functions are

[ i^c_j ; o^c_j ; f^c_j ; c̃^c_j ] = [ σ ; σ ; σ ; tanh ] ( W^{c⊤} [ x^c_j ; h^c_{j-1} ] + b^c )
c^c_j = f^c_j ⊙ c^c_{j-1} + i^c_j ⊙ c̃^c_j
h^c_j = o^c_j ⊙ tanh(c^c_j),   (11)

where i^c_j, f^c_j and o^c_j denote a set of input, forget and output gates, respectively, W^{c⊤} and b^c are model parameters, and σ() represents the sigmoid function.

Different from the character-based model, however, the computation of c^c_j now considers lexicon subsequences w^d_{b,e} in the sentence. In particular, each subsequence w^d_{b,e} is represented using

x^w_{b,e} = e^w(w^d_{b,e}),   (12)

where e^w denotes the same word embedding lookup table as in Section 3.2. In addition, a word cell c^w_{b,e} is used to represent the recurrent state of x^w_{b,e} from the beginning of the sentence. The value of c^w_{b,e} is calculated by

[ i^w_{b,e} ; f^w_{b,e} ; c̃^w_{b,e} ] = [ σ ; σ ; tanh ] ( W^{w⊤} [ x^w_{b,e} ; h^c_b ] + b^w )
c^w_{b,e} = f^w_{b,e} ⊙ c^c_b + i^w_{b,e} ⊙ c̃^w_{b,e},   (13)

where i^w_{b,e} and f^w_{b,e} are a set of input and forget gates. There is no output gate for word cells since labeling is performed only at the character level.

With c^w_{b,e}, there are more recurrent paths for information flow into each c^c_j. For example, in Figure 2, input sources for c^c_7 include x^c_7 (桥 Bridge), c^w_{6,7} (大桥 Bridge) and c^w_{4,7} (长江大桥 Yangtze River Bridge).² We link all c^w_{b,e} with b ∈ {b′ | w^d_{b′,e} ∈ D} to the cell c^c_e. We use an additional gate i^c_{b,e} for each subsequence cell c^w_{b,e} to control its contribution into c^c_e:

i^c_{b,e} = σ( W^{l⊤} [ x^c_e ; c^w_{b,e} ] + b^l ).   (14)

The calculation of the cell values c^c_j thus becomes

c^c_j = Σ_{b ∈ {b′ | w^d_{b′,j} ∈ D}} α^c_{b,j} ⊙ c^w_{b,j} + α^c_j ⊙ c̃^c_j.   (15)

In Eq. 15, the gate values i^c_{b,j} and i^c_j are normalised to α^c_{b,j} and α^c_j by setting their sum to 1:

α^c_{b,j} = exp(i^c_{b,j}) / ( exp(i^c_j) + Σ_{b′ ∈ {b′′ | w^d_{b′′,j} ∈ D}} exp(i^c_{b′,j}) )
α^c_j = exp(i^c_j) / ( exp(i^c_j) + Σ_{b′ ∈ {b′′ | w^d_{b′′,j} ∈ D}} exp(i^c_{b′,j}) ).   (16)

The final hidden vectors h^c_j are still computed as described by Eq. 11. During NER training, loss values back-propagate to the parameters W^c, b^c, W^w, b^w, W^l and b^l, allowing the model to dynamically focus on more relevant words during NER labelling.

² We experimented with alternative configurations on indexing word and character path links, finding that this configuration gives the best results in preliminary experiments. Single-character words are excluded; the final performance drops slightly after integrating single-character words.

Table 1: Statistics of datasets.
Dataset     Type       Train     Dev      Test
OntoNotes   Sentence   15.7k     4.3k     4.3k
            Char       491.9k    200.5k   208.1k
MSRA        Sentence   46.4k     –        4.4k
            Char       2169.9k   –        172.6k
Weibo       Sentence   1.4k      0.27k    0.27k
            Char       73.8k     14.5k    14.8k
resume      Sentence   3.8k      0.46k    0.48k
            Char       124.1k    13.9k    15.1k

3.4 Decoding and Training

A standard CRF layer is used on top of h_1, h_2, ..., h_τ, where τ is m for character-based and lattice-based models and n for word-based models. The probability of a label sequence y = l_1, l_2, ..., l_τ is
P(y|s) = exp( Σ_i ( W^{l_i}_CRF h_i + b^{(l_{i-1}, l_i)}_CRF ) ) / Σ_{y′} exp( Σ_i ( W^{l′_i}_CRF h_i + b^{(l′_{i-1}, l′_i)}_CRF ) ).   (17)

Here y′ represents an arbitrary label sequence, W^{l_i}_CRF is a model parameter specific to l_i, and b^{(l_{i-1}, l_i)}_CRF is a bias specific to l_{i-1} and l_i. We use the first-order Viterbi algorithm to find the highest scored label sequence over a word-based or character-based input sequence. Given a set of manually labeled training data {(s_i, y_i)}^N_{i=1}, sentence-level log-likelihood loss with L2 regularization is used to train the model:

L = Σ^N_{i=1} log(P(y_i|s_i)) + (λ/2) ||Θ||²,   (18)

where λ is the L2 regularization parameter and Θ represents the parameter set.

4 Experiments

We carry out an extensive set of experiments to investigate the effectiveness of word-character lattice LSTMs across different domains. In addition, we aim to empirically compare word-based and character-based neural Chinese NER under different settings. Standard precision (P), recall (R) and F1-score (F1) are used as evaluation metrics.

4.1 Experimental Settings

Data. Four datasets are used in this paper, which include OntoNotes 4 (Weischedel et al., 2011), MSRA (Levow, 2006), Weibo NER (Peng and Dredze, 2015; He and Sun, 2017a) and a Chinese resume dataset that we annotate. Statistics of the datasets are shown in Table 1. We take the same data split as Che et al. (2013) on OntoNotes. The development set of OntoNotes is used for reporting development experiments. While the OntoNotes and MSRA datasets are in the news domain, the Weibo NER dataset is drawn from the social media website Sina Weibo.³ For more variety in test domains, we collected a resume dataset from Sina Finance⁴, which consists of resumes of senior executives from listed companies in the Chinese stock market. We randomly selected 1027 resume summaries and manually annotated 8 types of named entities. Statistics of the dataset are shown in Table 2. The inter-annotator agreement is 97.1%. We release this dataset as a resource for further research.

Table 2: Detailed statistics of resume NER.
Statistics                Train   Dev    Test
Country                   260     33     28
Educational Institution   858     106    112
Location                  47      2      6
Personal Name             952     110    112
Organization              4611    523    553
Profession                287     18     33
Ethnicity Background      115     15     14
Job Title                 6308    690    772
Total Entity              13438   1497   1630

Segmentation. For the OntoNotes and MSRA datasets, gold-standard segmentation is available in the training sections. For OntoNotes, gold segmentation is also available for the development and test sections. On the other hand, no segmentation is available for the MSRA test sections, nor the Weibo / resume datasets. As a result, OntoNotes is leveraged for studying oracle situations where gold segmentation is given. We use the neural word segmentor of Yang et al. (2017a) to automatically segment the development and test sets for word-based NER. In particular, for the OntoNotes and MSRA datasets, we train the segmentor using gold segmentation on their respective training sets. For Weibo and resume, we take the best model of Yang et al. (2017a) off the shelf⁵, which is trained using CTB 6.0 (Xue et al., 2005).

³ https://www.weibo.com/
⁴ http://finance.sina.com.cn/stock/index.shtml
⁵ https://github.com/jiesutd/RichWordSegmentor

Table 3: Hyper-parameter values.
Parameter          Value    Parameter          Value
char emb size      50       bigram emb size    50
lattice emb size   50       LSTM hidden        200
char dropout       0.5      lattice dropout    0.5
LSTM layer         1        regularization λ   1e-8
learning rate lr   0.015    lr decay           0.05

Word Embeddings.
We pretrain word embeddings using word2vec (Mikolov et al., 2013) over automatically segmented Chinese Giga-Word6, obtaining 704.4k words in a final lexicon. In particular, the number of single-character, twocharacter and three-character words are 5.7k, 291.5k, 278.1k, respectively. The embedding lexicon is released alongside our code and models as a resource for further research. Word embeddings are fine-tuned during NER training. Character and character bigram embeddings are pretrained on Chinese Giga-Word using word2vec and finetuned at model training. Hyper-parameter settings. Table 3 shows the values of hyper-parameters for our models, which as fixed according to previous work in the literature without grid-search adjustments for each individual dataset. In particular, the embedding sizes are set to 50 and the hidden size of LSTM models to 200. Dropout (Srivastava et al., 2014) is applied to both word and character embeddings with a rate of 0.5. Stochastic gradient descent (SGD) is used for optimization, with an initial learning rate of 0.015 and a decay rate of 0.05. 4.2 Development Experiments We compare various model configurations on the OntoNotes development set, in order to select the best settings for word-based and character-based NER models, and to learn the influence of lattice word information on character-based models. Character-based NER. As shown in Table 4, without using word segmentation, a characterbased LSTM-CRF model gives a development F1score of 62.47%. Adding character-bigram and softword representations as described in Section 3.1 increases the F1-score to 67.63% and 65.71%, respectively, demonstrating the usefulness of both sources of information. In addition, a combination of both gives a 69.64% F1-score, which is the best 6https://catalog.ldc.upenn.edu/ LDC2011T13 1560 Input Models P R F1 Auto seg Word baseline 73.20 57.05 64.12 +char LSTM 71.98 65.41 68.54 +char LSTM′ 71.08 65.83 68.35 +char+bichar LSTM 72.63 67.60 70.03 +char CNN 73.06 66.29 69.51 +char+bichar CNN 72.01 65.50 68.60 No seg Char baseline 67.12 58.42 62.47 +softword 69.30 62.47 65.71 +bichar 71.67 64.02 67.63 +bichar+softword 72.64 66.89 69.64 Lattice 74.64 68.83 71.62 Table 4: Development results. among various character representations. We thus choose this model in the remaining experiments. Word-based NER. Table 4 shows a variety of different settings for word-based Chinese NER. With automatic segmentation, a word-based LSTM CRF baseline gives a 64.12% F1-score, which is higher compared to the character-based baseline. This demonstrates that both word information and character information are useful for Chinese NER. The two methods of using character LSTM to enrich word representations in Section 3.2, namely word+char LSTM and word+char LSTM′, lead to similar improvements. A CNN representation of character sequences gives a slightly higher F1-score compared to LSTM character representations. On the other hand, further using character bigram information leads to increased F1-score over word+char LSTM, but decreased F1-score over word+char CNN. A possible reason is that CNN inherently captures character n-gram information. As a result, we use word+char+bichar LSTM for wordbased NER in the remaining experiments, which gives the best development results, and is structurally consistent with the state-of-the-art English NER models in the literature. Lattice-based NER. Figure 4 shows the F1score of character-based and lattice-based models against the number of training iterations. 
We include models that use concatenated character and character bigram embeddings, where bigrams can play a role in disambiguating characters. As can be seen from the figure, lattice word information is useful for improving character-based NER, improving the best development result from 62.5% to 71.6%. On the other hand, the bigram-enhanced lattice model does not lead to further improvements compared with the original lattice model. 5 10 15 20 25 30 iteration 0.50 0.55 0.60 0.65 0.70 F1-value char_baseline char_lattice char+bichar_baseline char+bichar_lattice Figure 4: F1 against training iteration number. Input Models P R F1 Gold seg Yang et al. (2016) 65.59 71.84 68.57 Yang et al. (2016)*† 72.98 80.15 76.40 Che et al. (2013)* 77.71 72.51 75.02 Wang et al. (2013)* 76.43 72.32 74.32 Word baseline 76.66 63.60 69.52 +char+bichar LSTM 78.62 73.13 75.77 Auto seg Word baseline 72.84 59.72 65.63 +char+bichar LSTM 73.36 70.12 71.70 No seg Char baseline 68.79 60.35 64.30 +bichar+softword 74.36 69.43 71.81 Lattice 76.35 71.56 73.88 Table 5: Main results on OntoNotes. This is likely because words are better sources of information for character disambiguation compared with bigrams, which are also ambiguous. As shown in Table 4, the lattice LSTM-CRF model gives a development F1-score of 71.62%, which is significantly7 higher compared with both the word-based and character-based methods, despite that it does not use character bigrams or word segmentation information. The fact that it significantly outperforms char+softword shows the advantage of lattice word information as compared with segmentor word information. 4.3 Final Results OntoNotes. The OntoNotes test results are shown in Table 58. With gold-standard segmentation, our word-based methods give competitive results to the state-of-the-art on the dataset (Che et al., 2013; Wang et al., 2013), which leverage bilingual data. This demonstrates that LSTM-CRF is a competitive choice for word-based Chinese NER, as it is for other languages. In addition, the results show 7We use a p-value of less than 0.01 from pairwise t-test to indicate statistical significance. 8In Table 5, 6 and 7, we use * to denote a model with external labeled data for semi-supervised learning. † means that the model also uses discrete features. 1561 Models P R F1 Chen et al. (2006a) 91.22 81.71 86.20 Zhang et al. (2006)* 92.20 90.18 91.18 Zhou et al. (2013) 91.86 88.75 90.28 Lu et al. (2016) – – 87.94 Dong et al. (2016) 91.28 90.62 90.95 Word baseline 90.57 83.06 86.65 +char+bichar LSTM 91.05 89.53 90.28 Char baseline 90.74 86.96 88.81 +bichar+softword 92.97 90.80 91.87 Lattice 93.57 92.79 93.18 Table 6: Main results on MSRA. Models NE NM Overall Peng and Dredze (2015) 51.96 61.05 56.05 Peng and Dredze (2016)* 55.28 62.97 58.99 He and Sun (2017a) 50.60 59.32 54.82 He and Sun (2017b)* 54.50 62.17 58.23 Word baseline 36.02 59.38 47.33 +char+bichar LSTM 43.40 60.30 52.33 Char baseline 46.11 55.29 52.77 +bichar+softword 50.55 60.11 56.75 Lattice 53.04 62.25 58.79 Table 7: Weibo NER results. that our word-based models can serve as highly competitive baselines. With automatic segmentation, the F1-score of word+char+bichar LSTM decreases from 75.77% to 71.70%, showing the influence of segmentation to NER. Consistent with observations on the development set, adding lattice word information leads to an 88.81% → 93.18% increasement of F1-score over the character baseline, as compared with 88.81% →91.87% by adding bichar+softword. 
The lattice model gives significantly the best F1-score on automatic segmentation. MSRA. Results on the MSRA dataset are shown in Table 6. For this benchmark, no goldstandard segmentation is available on the test set. Our chosen segmentor gives 95.93% accuracy on 5-fold cross-validated training set. The best statistical models on the dataset leverage rich handcrafted features (Chen et al., 2006a; Zhang et al., 2006; Zhou et al., 2013) and character embedding features (Lu et al., 2016). Dong et al. (2016) exploit neural LSTM-CRF with radical features. Compared with the existing methods, our wordbased and character-based LSTM-CRF models give competitive accuracies. The lattice model significantly outperforms both the best characterbased and word-based models (p < 0.01), achieving the best result on this standard benchmark. Weibo/resume. Results on the Weibo NER dataset are shown in Table 7, where NE, NM and Models P R F1 Word baseline 93.72 93.44 93.58 +char+bichar LSTM 94.07 94.42 94.24 Char baseline 93.66 93.31 93.48 +bichar+softword 94.53 94.29 94.41 Lattice 94.81 94.11 94.46 Table 8: Main results on resume NER. 20< 40 60 80 100 >100 Sentence length 0.65 0.70 0.75 0.80 0.85 F1-value Word baseline Word+char+bichar LSTM Char baseline Char+bichar+softword Lattice Figure 5: F1 against sentence length. Overall denote F1-scores for named entities, nominal entities (excluding named entities) and both, respectively. Gold-standard segmentation is not available for this dataset. Existing state-of-theart systems include Peng and Dredze (2016) and He and Sun (2017b), who explore rich embedding features, cross-domain and semi-supervised data, some of which are orthogonal to our model9. Results on the resume NER test data are shown in Table 8. Consistent with observations on OntoNotes and MSRA, the lattice model significantly outperforms both the word-based mode and the character-based model for Weibo and resume (p < 0.01), giving state-of-the-art results. 4.4 Discussion F1 against sentence length. Figure 5 shows the F1-scores of the baseline models and lattice LSTM-CRF on the OntoNotes dataset. The character-based baseline gives relatively stable F1-scores over different sentence lengths, although the performances are relatively low. The word-based baseline gives substantially higher F1-scores over short sentences, but lower F1scores over long sentences, which can be because of lower segmentation accuracies over longer sentences. Both word+char+bichar and char+bichar+softword give better performances compared to their respective baselines, showing 9The results of Peng and Dredze (2015, 2016) are taken from Peng and Dredze (2017). 1562 Sentence (truncated) 卸下东莞台协会长职务后 After stepping down as president of Taiwan Association in Dongguan. Correct Segmentation 卸下东莞台协会长职务后 step down, Dongguan, Taiwan, association, president, role, after Auto Segmentation 卸下东莞台协会长职务后 step down, Dongguan, Taiwan, association president, role, after Lattice words 卸下下东东莞台协会协会会长长职职务 step down, incorrect word, Dongguan, Taiwan association, association, president, permanent job, role Word+char+bichar LSTM 卸下东莞GPE 台GPE协会长职务后 . .. Dongguan GPE Taiwan GPE ... Char+bichar+softword 卸下东莞台协会ORG长职务后 . .. Taiwan Association in Dongguan ORG ...(ungrammatical) Lattice 卸下东莞台协ORG会长职务后 . .. Taiwan Association in Dongguan ORG ... Table 9: Example. Red and green represent incorrect and correct entities, respectively. that word and character representations are complementary for NER. 
The accuracy of lattice also decreases as the sentence length increases, which can result from exponentially increasing number of word combinations in lattice. Compared with word+char+bichar and char+bichar+softword, the lattice model shows more robustness to increased sentence lengths, demonstrating the more effective use of word information. F1 against sentence length. Table 9 shows a case study comparing char+bichar+softword, word+char+bichar and the lattice model. In the example, there is much ambiguity around the named entity “东莞台协(Taiwan Association in Dongguan)”. Word+char+bichar yields the entities “东 莞(Dongguan)” and “台(Taiwan)” given that “东 莞台协(Taiwan Association in Dongguan)” is not in the segmentor output. Char+bichar+softword recognizes “东莞台协会(Taiwan Association in Dongguan)”, which is valid on its own, but leaves the phrase “长职务后” ungrammatical. In contrast, the lattice model detects the organization name correctly, thanks to the lattice words “东莞(Dongguan)”, “会长(President)” and “职 务(role)”. There are also irrelevant words such as “台协会(Taiwan Association)” and “下东(noisy word)” in the lexicon, which did not affect NER results. Note that both word+char+bichar and lattice use the same source of word information, namely the same pretrained word embedding lexicon. However, word+char+bichar first uses the lexicon in the segmentor, which imposes hard constrains (i.e. fixed words) to its subsequence use in NER. In contrast, lattice LSTM has the freedom of considering all lexicon words. Entities in lexicon. Table 10 shows the total number of entities and their respective match ratios in the lexicon. The error reductions (ER) of the final Dataset Split #Entity #Match Ratio (%) ER (%) OntoNotes Train 13.4k 9.5k 71.04 – Test 7.7k 6.0k 78.72 7.34 MSRA Train 74.7k 54.3k 72.62 – Test 6.2k 4.6k 73.76 16.11 Weibo (all) Train 1.9k 1.1k 58.83 – Test 414 259 62.56 4.72 resume Train 13.4k 3.8k 28.55 – Test 1.6k 483 29.63 0.89 Table 10: Entities in lexicon. lattice model over the best character-based method (i.e. “+bichar+softword”) are also shown. It can be seen that error reductions have a correlation between matched entities in the lexicon. In this respect, our automatic lexicon also played to some extent the role of a gazetteer (Ratinov and Roth, 2009; Chiu and Nichols, 2016), but not fully since there is no explicit knowledge in the lexicon which tokens are entities. The ultimate disambiguation power still lies in the lattice encoder and supervised learning. The quality of the lexicon may affect the accuracy of our NER model since noise words can potentially confuse NER. On the other hand, our lattice model can potentially learn to select more correct words during NER training. We leave the investigation of such influence to future work. 5 Conclusion We empirically investigated a lattice LSTM-CRF representations for Chinese NER, finding that it gives consistently superior performance compared to word-based and character-based LSTM-CRF across different domains. The lattice method is fully independent of word segmentation, yet more effective in using word information thanks to the freedom of choosing lexicon words in a context for NER disambiguation. Acknowledgments We thank the anonymous reviewers for their insightful comments. References Wanxiang Che, Mengqiu Wang, Christopher D Manning, and Ting Liu. 2013. Named entity recognition with bilingual constraints. In HLT-NAACL. pages 52–62. Aitao Chen, Fuchun Peng, Roy Shan, and Gordon Sun. 2006a. 
Chinese named entity recognition with conditional probabilistic models. In Proceedings of the 1563 Fifth SIGHAN Workshop on Chinese Language Processing. pages 173–176. Wenliang Chen, Yujie Zhang, and Hitoshi Isahara. 2006b. Chinese named entity recognition with conditional random fields. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing. pages 118–121. Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Pengfei Liu, and Xuanjing Huang. 2015. Long short-term memory neural networks for chinese word segmentation. In EMNLP. Lisbon, Portugal, pages 1197– 1206. http://aclweb.org/anthology/D15-1141. Xinchi Chen, Zhan Shi, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial multi-criteria learning for Chinese word segmentation. In ACL. volume 1, pages 1193–1203. Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. TACL 4:357–370. https://transacl.org/ojs/index.php/tacl/article/view/792. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12(Aug):2493–2537. Chuanhai Dong, Jiajun Zhang, Chengqing Zong, Masanori Hattori, and Hui Di. 2016. Characterbased LSTM-CRF with radical-level features for Chinese named entity recognition. In International Conference on Computer Processing of Oriental Languages. Springer, pages 239–250. Cıcero dos Santos, Victor Guimaraes, RJ Niter´oi, and Rio de Janeiro. 2015. Boosting named entity recognition with neural character embeddings. In Proceedings of NEWS 2015 The Fifth Named Entities Workshop. page 25. Jianfeng Gao, Mu Li, Chang-Ning Huang, and Andi Wu. 2005. Chinese word segmentation and named entity recognition: A pragmatic approach. Computational Linguistics 31(4):531–574. James Hammerton. 2003. Named entity recognition with long short-term memory. In HLT-NAACL 2003-Volume 4. pages 172–175. Hangfeng He and Xu Sun. 2017a. F-score driven max margin neural network for named entity recognition in Chinese social media. In EACL. volume 2, pages 713–718. Hangfeng He and Xu Sun. 2017b. A unified model for cross-domain and semi-supervised named entity recognition in Chinese social media. In AAAI. pages 3216–3222. Jingzhou He and Houfeng Wang. 2008. Chinese named entity recognition and word segmentation based on character. In Proceedings of the Sixth SIGHAN Workshop on Chinese Language Processing. Shen Huang, Xu Sun, and Houfeng Wang. 2017. Addressing domain adaptation for chinese word segmentation with global recurrent structure. In IJCNLP. volume 1, pages 184–193. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991 . Wenbin Jiang, Meng Sun, Yajuan L¨u, Yating Yang, and Qun Liu. 2013. Discriminative learning with natural annotations: Word segmentation as a case study. In ACL. volume 1, pages 761–769. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In NAACL-HLT. pages 260–270. Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. 1989. Backpropagation applied to handwritten zip code recognition. Neural computation 1(4):541–551. Gina-Anne Levow. 2006. The third international Chinese language processing bakeoff: Word segmentation and named entity recognition. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing. pages 108–117. 
Haibo Li, Masato Hagiwara, Qi Li, and Heng Ji. 2014. Comparison of the impact of word segmentation on name tagging for Chinese and japanese. In LREC. pages 2532–2536. Liyuan Liu, Jingbo Shang, Frank Xu, Xiang Ren, Huan Gui, Jian Peng, and Jiawei Han. 2018. Empower sequence labeling with task-aware neural language model. AAAI . Yang Liu and Yue Zhang. 2012. Unsupervised domain adaptation for joint segmentation and pos-tagging. Proceedings of COLING 2012: Posters pages 745– 754. Yijia Liu, Yue Zhang, Wanxiang Che, Ting Liu, and Fan Wu. 2014. Domain adaptation for crf-based Chinese word segmentation using free annotations. In EMNLP. pages 864–874. Zhangxun Liu, Conghui Zhu, and Tiejun Zhao. 2010. Chinese named entity recognition with a sequence labeling approach: based on characters, or based on words? In Advanced intelligent computing theories and applications. With aspects of artificial intelligence, Springer, pages 634–640. Yanan Lu, Yue Zhang, and Dong-Hong Ji. 2016. Multiprototype Chinese character embedding. In LREC. Gang Luo, Xiaojiang Huang, Chin-Yew Lin, and Zaiqing Nie. 2015. Joint entity recognition and disambiguation. In EMNLP. pages 879–888. 1564 Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via Bi-directional LSTM-CNNsCRF. In ACL. volume 1, pages 1064–1074. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. pages 3111–3119. Alexandre Passos, Vineet Kumar, and Andrew McCallum. 2014. Lexicon infused phrase embeddings for named entity resolution. In CoNLL. pages 78–86. Nanyun Peng and Mark Dredze. 2015. Named entity recognition for Chinese social media with jointly trained embeddings. In EMNLP. pages 548–554. Nanyun Peng and Mark Dredze. 2016. Improving named entity recognition for Chinese social media with word segmentation representation learning. In ACL. volume 2, pages 149–155. Nanyun Peng and Mark Dredze. 2017. Supplementary results for named entity recognition on chinese social media with an updated dataset . Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen-tau Yih. 2017. Cross-sentence n-ary relation extraction with graph lstms. TACL 5:101–115. Matthew Peters, Waleed Ammar, Chandra Bhagavatula, and Russell Power. 2017. Semi-supervised sequence tagging with bidirectional language models. In ACL. volume 1, pages 1756–1765. Likun Qiu and Yue Zhang. 2015. Word segmentation for chinese novels. In AAAI. pages 2440–2446. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In CoNLL. pages 147–155. Marek Rei. 2017. Semi-supervised multitask learning for sequence labeling. In ACL. volume 1, pages 2121–2130. Matthias Sperber, Graham Neubig, Jan Niehues, and Alex Waibel. 2017. Neural lattice-to-sequence models for uncertain inputs. In EMNLP. pages 1380– 1389. https://www.aclweb.org/anthology/D171145. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. JMLR 15(1):1929–1958. Jinsong Su, Zhixing Tan, Deyi Xiong, Rongrong Ji, Xiaodong Shi, and Yang Liu. 2017. Lattice-based recurrent neural network encoders for neural machine translation. In AAAI. pages 3302–3308. Lin Sun, Kui Jia, Kevin Chen, Dit-Yan Yeung, Bertram E. Shi, and Silvio Savarese. 2017. Lattice long short-term memory for human action recognition. In ICCV. 
Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In ACL-IJCNLP. Beijing, China, pages 1556–1566. http://www.aclweb.org/anthology/P151150. Mengqiu Wang, Wanxiang Che, and Christopher D Manning. 2013. Effective bilingual constraints for semi-supervised learning of named entity recognizers. In AAAI. Ralph Weischedel, Sameer Pradhan, Lance Ramshaw, Martha Palmer, Nianwen Xue, Mitchell Marcus, Ann Taylor, Craig Greenberg, Eduard Hovy, Robert Belvin, et al. 2011. Ontonotes release 4.0. LDC2011T03, Philadelphia, Penn.: Linguistic Data Consortium . Y Xu, Y Wang, T Liu, J Liu, Y Fan, Y Qian, J Tsujii, and EI Chang. 2014. Joint segmentation and named entity recognition using dual decomposition in Chinese discharge summaries. JAMIA 21(e1):e84–92. Naiwen Xue, Fei Xia, Fu-Dong Chiou, and Marta Palmer. 2005. The penn Chinese treebank: Phrase structure annotation of a large corpus. Natural language engineering 11(02):207–238. Nianwen Xue. 2003. Chinese word segmentation as character tagging. In International Journal of Computational Linguistics and Chinese Language Processing. Jie Yang, Zhiyang Teng, Meishan Zhang, and Yue Zhang. 2016. Combining discrete and neural features for sequence labeling. In International Conference on Computational Linguistics and Intelligent Text Processing. Jie Yang, Yue Zhang, and Fei Dong. 2017a. Neural word segmentation with rich pretraining. In ACL. Vancouver, Canada, pages 839–849. http://aclweb.org/anthology/P17-1078. Zhilin Yang, Ruslan Salakhutdinov, and William W Cohen. 2017b. Transfer learning for sequence tagging with hierarchical recurrent networks. In ICLR. Suxiang Zhang, Ying Qin, Juan Wen, and Xiaojie Wang. 2006. Word segmentation and named entity recognition for sighan bakeoff3. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing. pages 158–161. Hai Zhao and Chunyu Kit. 2008. Unsupervised segmentation helps supervised learning of character tagging for word segmentation and named entity recognition. In Proceedings of the Sixth SIGHAN Workshop on Chinese Language Processing. Junsheng Zhou, Weiguang Qu, and Fen Zhang. 2013. Chinese named entity recognition via joint identification and categorization. Chinese Journal of Electronics 22(2):225–230.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1565–1574 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1565 Nugget Proposal Networks for Chinese Event Detection Hongyu Lin1,2, Yaojie Lu1,2, Xianpei Han1, Le Sun1 1State Key Laboratory of Computer Science Institute of Software, Chinese Academy of Sciences, Beijing, China 2University of Chinese Academy of Sciences, Beijing, China {hongyu2016,yaojie2017,xianpei,sunle}@iscas.ac.cn Abstract Neural network based models commonly regard event detection as a word-wise classification task, which suffer from the mismatch problem between words and event triggers, especially in languages without natural word delimiters such as Chinese. In this paper, we propose Nugget Proposal Networks (NPNs), which can solve the word-trigger mismatch problem by directly proposing entire trigger nuggets centered at each character regardless of word boundaries. Specifically, NPNs perform event detection in a character-wise paradigm, where a hybrid representation for each character is first learned to capture both structural and semantic information from both characters and words. Then based on learned representations, trigger nuggets are proposed and categorized by exploiting character compositional structures of Chinese event triggers. Experiments on both ACE2005 and TAC KBP 2017 datasets show that NPNs significantly outperform the state-of-the-art methods. 1 Introduction Automatic event extraction is a fundamental task of information extraction. Event detection, which aims to identify event triggers of specific types, is a key step of event extraction. For example, from the sentence “Henry was injured, and then passed away soon”, an event detection system should detect an “Injure” event triggered by “injured”, and a “Die” event triggered by “passed away”. Recently, neural network methods, which transform event detection into a word-wise classification paradigm, have achieved significant progress in event detection (Nguyen and Grishman, 2015; Die 这家/ 公司/ 并购/ 了/ 多家/ 公司/ 。 Injure Transfer_Ownership Merge_Organization The injured solider died. 那个/ 受/ 了/ 伤/ 的/ 士兵/ 不治/ 身亡/ 。 The company acquired and merged with a number of companies. (a) (b) Figure 1: Examples of word-trigger mismatch. Slashes in the figure indicate word boundaries. Chen et al., 2015b; Ghaeini et al., 2016). For instance, a model will detect events in sentence ”Henry was injured” by successively classifying its three words into NIL, NIL and Injure. By automatically extracting features from raw texts, these methods rely little on prior knowledge and achieved promising results. Unfortunately, word-wise event detection models suffer from the word-trigger mismatch problem, because a number of triggers do not exactly match with a word. Specifically, a trigger can be part of a word or cross multiple words, which is impossible to detect using word-wise models. This problem is more severe in languages without natural word delimiters such as Chinese. Figure 1 (a) shows several examples of part-of-word triggers, where two characters in one word “¿ ”(acquire and merge) trigger two different events: a “Merge Org” event triggered by “¿”(merge) and a “Transfer Ownership” event triggered by “ ” (acquire). Figure 1 (b) shows a multi-word trigger, where three words “É”(is), “ ” and “ú”(injured) trigger an Injure event together. 
Table 1 shows the statistics of different types of word-trigger match on two standard datasets. We can see that word-trigger mismatch is crucial for Chinese event detection since nearly 25% of triggers in RichERE and 15% of them in ACE2005 dataset don’t exactly match with a word. To resolve the word-trigger mismatch problem, 1566 Match Type Rich ERE ACE2005 Exact Match 75.52% 85.39% Part of Word 19.55% 11.67% Cross words 4.93% 2.94% Table 1: Percentages of different types of matches between words and triggers. this paper proposes Nugget Proposal Networks (NPNs), which identify triggers by modeling character compositional structures of trigger nuggets regardless of word boundaries. Given a sentence, NPNs regard characters as basic detecting units and are able to 1) directly propose the entire potential trigger nugget at each character by exploiting inner compositional structure of triggers; 2) effectively categorize proposed triggers by learning semantic representation from both characters and words. For example, at character “ú”(injured) in Figure 1 (b), NPNs are not only capable to detect it is part of an Injure event trigger, but also can propose the entire trigger nugget “É ú”(is injured). The main idea behind NPNs is that most Chinese triggers have regular character compositional structure (Li et al., 2012). Concretely, most of Chinese event triggers have one central character which can indicate its event type, e.g. “à”(kill) in “là”(kill by shooting). Furthermore, characters are composed into a trigger based on regular compositional structures, e.g. “manner + verb” for “là”(kill by shooting), “và”(hack to death), as well as “verb + auxiliary + noun” for “É ú”(is injured) and “E ‹”(beaten). Figure 2 shows the architecture of NPNs. Given a character in sentence, a hybrid representation learning module is first used to learn its semantic representation from both characters and words in the sentence. This hybrid representation is then fed into two modules: one is trigger nugget generator, which proposes the entire potential trigger nugget by exploiting inner character compositional structure. Once a trigger is proposed, an event type classifier is applied to determine its event type. Compared with previous methods, NPNs mainly have following advantages: 1) By directly proposing the entire trigger nugget centered at a character, trigger nugget generator can effectively resolve the word-trigger mismatch problem. First, using characters as basic units, NPNs will not suffer from the word-trigger mismatch problem of word-wise methods. Furthermore, by modeling and exploiting character compositional structure 这家/ 公司/ 并购/ 了/ 多家/ 公司 Type Classifier Hybrid Char-Word Representation Learning Nugget Generator … … … … The company acquired and merged with a number of companies. Figure 2: The overall architecture of Nugget Proposal Networks. The concerning character is “ ”. of triggers, our model is more error-tolerant to character-wise classification errors than traditional character-based models, as shown in Section 4.4. 2) By summarizing information from both characters and words, our hybrid representation can effectively capture information for both inner character composition and accurate event categorization. For example, the inner compositional structure of trigger “là”(kill by shooting) can be learned from the character-level sequence. Besides, characters are often ambiguous, therefore the accurate representations must take their word context into consideration. 
For example, the representation “à”(kill) in “ l à”(kill by shooting) should be different from its representation in “à“”(completed). We conducted experiments on both the ACE2005 and the TAC KBP 2017 Event Nugget Detection datasets. Experiment results show that NPNs can effectively solve the word-mismatch problem, and therefore significantly outperform previous state-of-the-art methods1. 2 Hybrid Representation Learning Given a sentence, NPNs will first learn a representation for each character, then the representation is fed into downstream modules. We observe that both characters and words contain rich information for Chinese event detection: characters reveals the inner compositional structure of event 1Our source code, including all hyper-parameter settings and pre-trained word embeddings, is openly available at github.com/sanmusunrise/NPNs. 1567 WE PE Word/Position Embedding 这家 公司 并购 了 多家 公司 ...... Convolutional Feature Map Compositional Feature 0 -1 -2 1 2 max(c11) max(c12) 并购 公司 了 Lexical Feature 3 。4 Conv. Dynamic Multi-Pooling Token Level Feature Figure 3: Token-level feature extractor, where PE is relative positional embeddings and WE is word embeddings. The concerning token is “¿ ”. triggers (Li et al., 2012), while words can provide more accurate and less ambiguous semantics than characters (Chen et al., 2015a). For example, character-level information can tell us that “l à”(kill by shooting) is a trigger constructed of regular pattern “manner + verb”. While wordlevel sequences can provide more explicit information when we distinguish the semantics of “à”(kill) in this context with that character in other words like “à“”(completed). Therefore, we propose to learn a hybrid representation which can summarize information from both characters and words. Specifically, we first learn two separate character-level and word-level representations using token-level neural networks. Then we design three kinds of hybrid paradigms to obtain the hybrid representation. 2.1 Token-level Representation Learning Two token-level neural networks are used to extract features from characters and words respectively. The network architecture is similar to DMCNN (Chen et al., 2015b). Figure 3 shows a word-level example. Given n tokens t1, t2, ..., tn in the sentence and the concerning token tc, let xi be the concatenation of the word embedding of ti and the embedding of ti’s relative position to tc, a convolutional layer with window size as h is introduced to capture compositional semantics: rij = tanh(wi · xj:j+h−1 + bi) (1) Here xi:i+j refers to the concatenation of embeddings from xi to xi+j, wi is the i-th filter of the convolutional layer, bi ∈R is a bias term. Then a dynamic multi-pooling layer is applied to preserve important signals of different parts of the sentence: rleft i = max j<c rij, rright i = max j≥c rij (2) fC f’ char zG fG fN fT zN zT f’ word f’ char f’ word f’ char f’ word (a) Concat Hybrid (b) General Hybrid (c) Task-specific Hybrid Figure 4: Three hybrid representation learning methods. After that we concatenate rleft i and rright i from all feature maps, as well as the embeddings of tokens nearing to tc to obtain the word-level representation fword of tc. Using the same procedure to character sequences, we can obtain the characterlevel representation fchar. 2.2 Hybrid Representation Learning So far we have both character-level feature representation fchar and word-level feature representation fword . This section describes how we mix them up to obtain a hybrid representation. 
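Before turning to the mixing step, here is a minimal sketch of the token-level extractor just described, the convolution of Eq. (1) followed by the dynamic multi-pooling of Eq. (2). It is ours rather than the authors' implementation; it omits the lexical features (embeddings of tokens near t_c) that are concatenated afterwards, and the class name, filter count and embedding size are assumptions.

```python
import torch
import torch.nn as nn

class DMPooledConv(nn.Module):
    """Sketch of the token-level extractor (Eqs. 1-2): a 1D convolution over
    the concatenated token/position embeddings, followed by dynamic
    multi-pooling split at the concerning token."""
    def __init__(self, emb_dim, n_filters, window=3):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=window,
                              padding=window // 2)

    def forward(self, x, c):
        # x: (batch, seq_len, emb_dim); c: index of the concerning token t_c.
        r = torch.tanh(self.conv(x.transpose(1, 2)))   # (batch, filters, seq_len)
        left = r[:, :, :c].max(dim=2).values           # max over positions j < c
        right = r[:, :, c:].max(dim=2).values          # max over positions j >= c
        return torch.cat([left, right], dim=1)         # 2 * n_filters features

# Dummy usage: 10 tokens with 64-dimensional embeddings, concerning token at index 4.
feat = DMPooledConv(emb_dim=64, n_filters=100)(torch.randn(2, 10, 64), c=4)
```

Running the same module over the character sequence instead of the word sequence would give the character-level counterpart fchar.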
Before this, we first project fchar and fword respectively into the same vector space using two dense layers, and we represent the projected d′-dimensional vectors as f′ char and f′ word. Then we design three different paradigms to mix them up: Concat Hybrid, General Hybrid and Task-specific Hybrid, as illustrated in Figure 4. Concat Hybrid is the most simple method, which simply concatenates character-level and word-level representations: fC = f ′ char ⊕f ′ word (3) This simple approach doesn’t introduce any additional parameter, but we find it very effective in our experiments. General Hybrid aims to learn a shared hybrid representation for both trigger nugget proposal and event type classification. Specifically, we design a gated structure to model the information flow from f′ char and f′ word to the general hybrid feature representation fG: zG = s(WGHf ′ char + UGHf ′ word + bGH) (4) fG = zGf ′ char + (1 −zG)f ′ word (5) Here s is the sigmoid function, WGH ∈Rd′×d′ and UGH ∈ Rd′×d′ are weight matrix, and 1568 bGH ∈ Rd′ is the bias term. zG is a d′dimensional vector whose values represent the contribution of f′ char and f′ word to the final hybrid representation, which models the importance of individual features in the given contexts. As two downstream modules of NPNs have individual functions, they might hold different requirements to the input features. Intuitively, trigger nugget generator depends more on finegrained character-level features. In contrast, wordlevel features might play more important roles in the event type classifier since it is enriched with more explicit semantics. As a result, a unified representation may be insufficient and it is better to learn task-specific hybrid representations. Task-specific Hybrid is proposed to tackle this problem, where two gates are introduced for two modules respectively. Formally, we learn one representation for the trigger nugget generator and one for event type classifier as: zN = s(WNf ′ char + UNf ′ word + bN) (6) zT = s(WTf ′ char + UTf ′ word + bT) (7) fN = zNf ′ char + (1 −zN)f ′ word (8) fT = zTf ′ char + (1 −zT)f ′ word (9) Here fN and fT are hybrid features for the trigger nugget generator and the event type classifier respectively and the meanings of other parameters are similar to the ones in Equation (4) and (5). 3 Nugget Proposal Networks Given the hybrid representation of a character in a sentence, the goal of NPNs is to propose the potential trigger nugget, as well as to identify its corresponding event type at each character. For example in Figure 5, centered at the character “ú”(injured), NPNs need to propose “É ú”(is injured) as the entire trigger nugget and identify its event type as “Injure”. For this, NPNs are equipped with two modules: one is called trigger nugget generator, which is used to propose the potential trigger nugget containing the concerning character by exploiting character compositional structures of triggers. Another module, named as event type classifier, is used to determine the specific type of this event once a trigger nugget is detected. 那 受 了 伤 的 士 兵 3 3 受了伤 0.75 2 2 了伤 0.01 1 1 伤 0.05 1 2 伤的 0.02 1 3 伤的士 0.01 2 3 了伤的 0.01 NIL 0.15 Figure 5: Our trigger nugget generator. For each character, there are 7 candidate nuggets including “NIL” if the maximum length of nuggets is 3. 3.1 Trigger Nugget Generator Chinese event triggers have regular inner compositional structures, e.g. 
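As a rough sketch of the task-specific hybrid of Eqs. (6)-(9), not the paper's released code: the module below assumes the projected features f′char and f′word are already available as same-sized vectors, and implements W f′char + U f′word + b as a single linear layer over their concatenation, which is mathematically equivalent. The class and variable names are ours.

```python
import torch
import torch.nn as nn

class TaskSpecificHybrid(nn.Module):
    """Sketch of the task-specific hybrid (Eqs. 6-9): one gate per downstream
    task, each mixing the projected character and word features."""
    def __init__(self, dim):
        super().__init__()
        # z = sigmoid(W f'_char + U f'_word + b), realized as a linear layer
        # over the concatenation [f'_char; f'_word] for each task.
        self.gate_nugget = nn.Linear(2 * dim, dim)   # produces z_N
        self.gate_type = nn.Linear(2 * dim, dim)     # produces z_T

    def forward(self, f_char, f_word):
        cat = torch.cat([f_char, f_word], dim=-1)
        z_n = torch.sigmoid(self.gate_nugget(cat))
        z_t = torch.sigmoid(self.gate_type(cat))
        f_nugget = z_n * f_char + (1.0 - z_n) * f_word   # Eq. (8)
        f_type = z_t * f_char + (1.0 - z_t) * f_word     # Eq. (9)
        return f_nugget, f_type

# Dummy usage with a batch of 2 and projected feature dimension 8:
hybrid = TaskSpecificHybrid(dim=8)
f_char, f_word = torch.randn(2, 8), torch.randn(2, 8)
f_N, f_T = hybrid(f_char, f_word)   # fed to the nugget generator / type classifier
```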
“É ú”(is injured) and “E ‹”(is beaten) have the same “verb + auxiliary + noun” structure, and “l à”(kill by shooting) and “à”(kill by shooting) share the same “manner + verb” pattern. If a model is able to learn this compositional structure regularity, it can effectively detect trigger nuggets at characters. Recent advances have presented that convolutional neural networks are effective at capturing and predicting the region information in object detection (Ren et al., 2015) and semantic segmentation (He et al., 2017), which reveals the strong ability of CNNs to learning spatial and positional information. Inspired by this, we propose a neural network based trigger nugget generator, which is expected to not only be able to predict whether a character belongs to a trigger nugget, but also can point out the entire trigger nugget. Figure 5 is an illustration of our trigger nugget generator. Hybrid representation fN for concerning character is first learned as described in Section 2, which is then fed into a fully-connected layer to compute the scores for different possible trigger nuggets containing that character: OG = WGfN + bG (10) where OG ∈RdN and dN is the amount of candidate nuggets plus one “NIL” label indicating this character doesn’t belong to an trigger. Given the maximum length L of trigger nuggets, there are L2+L 2 possible nuggets containing a specific character, as we shown in Figure 5. In both ACE and Rich ERE corpus, more than 98.5% triggers contain no more than 3 characters, so for a specific character we consider 6 candidate nuggets and 1569 thus dN = 7. We expect NPNs to give a high score to a nugget if it follows a regular compositional structure of triggers. For example in Figure 5, “É ú”(is injured) follows the compositional pattern of “verb + auxiliary + noun”, therefore a high score is given to the category where “ú” is at the 3rd place of a nugget with a length of 3. By contrast “ ú” does not match a regular pattern, then the score for “ú” at the 2nd place of a nugget with a length of 2 will be low in this context. After obtaining the scores for each nugget, a softmax layer is applied to normalize the scores: P(yG i |x; θ) = eOG i PdN j=1 eOG j (11) where OG i is the i-the element in OG and θ is the model parameters. 3.2 Event Type Classifier The event type classifier aims to identify whether the given character in the given context will exhibit an event type. Once we detect an event trigger nugget at one character, the hybrid feature fT extracted previously is then feed into a neural network classifier, which further determines the specific type of this trigger. Following previous work (Chen and Ng, 2012), our event type classifier directly classifies nuggets into event subtypes, while ignores the hierarchy between event types. Formally, given the hybrid feature vector fT of input x, a fully-connected layer is applied to compute its scores assigned to each event subtype: OC = WCfT + bC (12) where OC ∈RdT and dT is the number of event subtypes. Then similar to the trigger nugget generator, a softmax layer is introduced: P(yC i |x; θ) = eOC i PdT j=1 eOC j (13) where OC i is the i-th element in OC, representing the score for i-th subtype. 3.3 Dealing with Conflicts between Proposed Nuggets While NPNs directly propose nugget at each character, there might exists conflicts between proposed nuggets at different characters. 
Generally speaking, there are two types of conflicts: (i) NIL/trigger conflict, which means NPNs propose a trigger nugget at one character, but classify other character in that nugget into “NIL” (e.g., proposing nugget “É ú”(is injured) at “É” and output “NIL” at “ ”); (ii) overlapped conflict, i.e., proposing two overlapped nuggets (e.g., proposing nugget “É ú”(is injured) at “É” and nugget “ú” at “ú”). But we find that overlapped conflict is very rare because NPNs is very effective in capturing positional knowledge and the main challenge of event detection is to distinguish triggers from non-triggers. Therefore in this paper, we employ a redundant prediction strategy by simply adding all proposed nuggets into results and ignoring “NIL” predictions. For example, if NPNs successively propose “É ú”(is injured), “NIL”, “ ú” from “É ú”, then we will ignore the “NIL” and add both two other nuggets into result. We found such a redundant prediction paradigm is an advantage of our model. Compared with conventional character-based models, even NPNs mistakenly classified character “ 0into “NIL0, we can still accurately detect trigger “É ú”(is injured) if we can predict the entire nugget at character “É0or “ú0. This redundant prediction makes our model more error-tolerant to character-wise classification errors, as verified in Section 4.4. 3.4 Model Learning To train the trigger nugget generator, we regard all characters included in trigger nuggets as positive training instances, and randomly sample characters not in any trigger as negative instances and label them as “NIL”. Suppose we have T G training examples in SG = {(xk, yG k )|k = 1, 2, ...T G} to train the trigger nugget generator, as well as T C examples in SC = {(xk, yC k )|k = 1, 2, ...T C} to train the event type classifier, we can define the loss function L(θ) as follow: L(θ) = − X (xk,yG k )∈SG log P(yG k |xk; θ) − X (xk,yC k )∈SC log P(yC k |xk; θ) (14) where θ is parameters in NPNs. Since all modules in NPNs are differentiable, any gradient-based algorithms can be applied to minimize L(θ). 4 Experiments 4.1 Data Preparation and Evaluation We conducted experiments on two standard datasets: ACE2005 and TAC KBP 2017 Even1570 ACE2005 KBPEval2017 Model Trigger Identification Trigger Classification Trigger Identification Trigger Classification P R F1 P R F1 P R F1 P R F1 FBRNN(Char) 61.3 45.6 52.3 57.5 42.8 49.1 57.97 36.92 45.11 51.71 32.94 40.24 DMCNN(Char) 60.1 61.6 60.9 57.1 58.5 57.8 53.67 49.92 51.73 50.03 46.53 48.22 C-BiLSTM* 65.6 66.7 66.1 60.0 60.9 60.4 FBRNN(Word) 64.1 63.7 63.9 59.9 59.6 59.7 65.10 46.86 54.50 60.05 43.22 50.27 DMCNN(Word) 66.6 63.6 65.1 61.6 58.8 60.2 60.43 51.64 55.69 54.81 46.84 50.51 HNN* 74.2 63.1 68.2 77.1 53.1 63.0 Rich-C* 62.2 71.9 66.7 58.9 68.1 63.2 KBP2017 Best* 67.76 45.92 54.74 62.69 42.48 50.64 NPN(Concat) 76.5 59.8 67.1 72.8 56.9 63.9 64.58 50.31 56.56 59.14 46.07 51.80 NPN(General) 71.5 63.2 67.1 67.3 59.6 63.2 63.67 51.32 56.83 57.78 46.58 51.57 NPN(Task-specific) 64.8 73.8 69.0 60.9 69.3 64.8 64.32 53.16 58.21 57.63 47.63 52.15 Table 2: Experiment results on ACE2005 and KBPEval2017. * indicates the result adapted from the original paper. For KBPEval2017, “Trigger Identification” corresponds to the “Span” metric and “Trigger Classification” corresponds to the “Type” metric reported in official evaluation. t Nugget Detection Evaluation (KBPEval2017) datasets. For ACE2005 (LDC2006T06), we used the same setup as Chen and Ji (2009), Feng et al. (2016) and Zeng et al. 
(2016), in which 569/64/64 documents are used as training/development/test set. For KBPEval2017, we evaluated our model on the 2017 Chinese evaluation dataset(LDC2017E55), using previous RichERE annotated Chinese datasets (LDC2015E78, LDC2015E105, LDC2015E112, and LDC2017E02) as the training set except 20 randomly sampled documents reserved as development set. Finally, there were 506/20/167 documents for training/development/test set. We used Stanford CoreNLP toolkit (Manning et al., 2014) to preprocess all documents for sentence splitting and word segmentation. Adadelta update rule (Zeiler, 2012) is applied for optimization. Models are evaluated by micro-averaged Precision(P), Recall(R) and F1-score. For ACE2005, we followed Chen and Ji (2009) to compute the above measures. For KBPEval2017, we used the official evaluation toolkit 2 to obtain these metrics. 4.2 Baselines Three groups of baselines were compared: Character-based NN models. This group of methods solve Chinese Event Detection in a character-level sequential labeling paradigm, which include Convolutional Bi-LSTM model (C-BiLSTM) proposed by Zeng et al. (2016), Forward-backward Recurrent Neural Network2github.com/hunterhector/EvmEval/ tarball/master s (FBRNN) by Ghaeini et al. (2016), and a character-level DMCNN model with a classifier using IOB encoding (Sang and Veenstra, 1999). Word-based NN models. This group of methods directly adopt currently NN models into wordlevel sequences, which includes word-based FBRNN, word-based DMCNN and Hybrid Neural Network proposed by Feng et al. (2016), which incorporates CNN with Bi-LSTM and achieves the SOTA NN based result on ACE2005. To alleviate OOV problem stemming from word-trigger mismatch, we also adopt errata table replacing (Han et al., 2017), which introduce an errata table extracted from the training data and replace those words that part of whom was a trigger nugget with that trigger directly. Feature-enriched Methods. This group of methods includes Rich-C (Chen and Ng, 2012) and CLUZH (KBP2017 Best) (Makarov and Clematide, 2017). Rich-C developed several handcraft Chinese-specific features, which is one of the state-of-the-art on ACE2005. CLUZH incorporated many heuristic features into LSTM encoder, which achieved the best performance in TAC KBP2017 evaluation. 4.3 Overall Results Table 2 shows the results on ACE2005 and KBPEval2017. From this table, we can see that: 1) NPNs steadily outperform all baselines significantly. Compared with baselines, NPN(Taskspecific) gains at least 1.6 (2.5%) and 1.5 (3.0%) F1-score improvements on trigger classification task on ACE2005 and KBPEval2017 respectively. 1571 2) By exploiting compositional structures of triggers, our trigger nugget generator can effectively resolve the word-trigger mismatch problem. As shown in Table 2, NPN(Taskspecific) achieved significant F1-score improvements on trigger identification task on both datasets. It is notable that our method achieved a remarkable high recall on both datasets, which indicates that NPNs do detect a number of triggers which previous methods can not identify. 3) By summarizing information from both characters and words, the hybrid representation learning is effective for event detection. Comparing with corresponding characterbased methods3, word-based methods achieved 2 to 3 F1-score improvements, which indicates that words can provide additional information for event detection. 
By combining character-level and word-level features, NPNs are able to perform character-based event detection meanwhile take word-level knowledge into consideration too. 4.4 Comparing with Conventional Character-based Methods To further investigate the effects of the trigger nugget generator, we compared NPNs with other character-based methods and analyzed behaviors of them. We conducted a supplementary experiment by replacing our trigger nugget generator and event type classifier with an IOB encoding labeling layer. We call this system NPN(IOB). Besides, we also compared the result with FBRNN(Char), which proposes candidate trigger nuggets according to an external trigger table. Model P R F1 FBRNN(Char) 57.97 36.92 45.11 NPN(IOB) 60.96 47.39 53.32 NPN(Task-specific) 64.32 53.16 58.21 Table 3: Performances of character-based methods on KBP2017Eval Trigger Identification task. Table 3 shows the results on KBP2017Eval. We can see that NPN(Task-specific) outperforms other methods significantly. We believe this is because: 1) FBRNN(Char) only regards tokens in the candidate table as potential trigger nuggets, which 3C-BiLSTM and HNN are similar methods to some extent. They both use a hybrid representation from CNN and BiLSTM encoders. limits the choice of possible trigger nuggets and results in a very low recall rate. 2) To accurately identify a trigger, NPN(IOB) and conventional character-based methods require all characters in a trigger being classified correctly, which is very challenging (Zeng et al., 2016): many characters appear in a trigger nugget will not serve as a part of a trigger nugget in the majority of contexts, thus they will be easily classified into “NIL”. For the first example in Table 5, NPN(IOB) was unable to fully recognize the trigger nugget “å>”(congratulatory message) because character “å”(congratulatory) doesn’t often serve as part of ”PhoneWrite” trigger. In fact, “å” serves as a “NIL” in the majority of similar contexts, e.g., “åU”(congratulation) and “6å”(congratulation). 3) NPNs are able to handle above problems. First, NPNs doesn’t rely on candidate tables to generate potential triggers, which guarantees a good generalization ability. Second, NPNs propose the entire trigger nugget at each character, such a redundant prediction paradigm makes NPNs more error-tolerant to character-level errors. For example, even might mistakenly classify “å” into “NIL”, NPNs can still identify the correct nugget “å>” at character “>” because “>” is a common part of “PhoneWrite” event trigger. 4.5 Influence of Word-Trigger Mismatch This subsection investigates the effects of resolving the word-trigger mismatch problem using different methods. According to different types of word-trigger match, we split KBP2017Eval test set into three parts: Exact, Part-of-Word, CrossWords, which are as defined in Table 1. Model Exact Part Cross NPN(IOB) 48.65 29.13 8.54 DMCNN(Word) 57.36 23.28 0.00 - w/o Errata replacing 59.03 0.00 0.00 NPN(Task-specific) 56.47 42.66 26.58 Table 4: Recall rates on three word-trigger match splits on KBP2017Eval Trigger Identification task. Table 4 shows the recall of different methods on each split. NPN(Task-specific) significantly outperform other baselines when trigger-word mismatch exists. This verified that NPNs can resolve different cases of word-trigger mismatch problems robustly, meanwhile retain high performance on exact match cases. In contrast, NPN(IOB) can not 1572 Sentence DMCNN NPN(IOB) NPN Correct å å å> > >/©/Xe,... Full congratulatory message:... 
(å>,PhoneWrite) (>,PhoneWrite) (å>,PhoneWrite) (å>,PhoneWrite) k k kú ú ú//¤k/¬W... all soldiers died and injured... None (k,Die) (ú,Injure) (k,Die) (ú,Injure) (k,Die) (ú,Injure) Table 5: System prediction examples. (X,Y) indicates a trigger nugget X is annotated with event type Y. exactly detect boundaries of trigger nuggets, thus has a low recall on all splits. Conventional DMCNN regards words as potential triggers, which means it can only identify triggers that exactly match with words. As the second example in Table 5, word “kú”(dead or injured) as a whole has never been annotated as a trigger, so DMCNN is unable to recognize it at all. Errata replacing can only solve some of the part-of-word mismatch problem, but it can not handle the cases where one word contains multiple triggers(e.g., “kú” in Table 5) and the cases that a trigger crosses multiple words. 4.6 Effects of Hybrid Representation This section analyzed the effect of feature hybrid in NPNs. First, from Table 2, we can see that Task-specific Hybrid method achieved the best performance in both datasets. Surprisingly, simple Concat Hybrid outperforms the General Hybrid approach. We believe this is because the trigger nugget generator and the event type classifier rely on different information, and therefore using one unified gate is not enough. And Task-specific Hybrid uses two different task-specific gates which can satisfy both sides, thus resulting in the best overall performance. Furthermore, to investigate the necessary of using hybrid features, an auxiliary experiment, called NPN(Char), was conducted by removing word-level features from NPNs. Also, we compared with the model removing character-level features, which is the original DMCNN(Word). Model P R F1 DMCNN(Word) 54.81 46.84 50.51 NPN(Char) 56.19 43.88 49.28 NPN(Task-specific) 57.63 47.63 52.15 Table 6: Results of using different representation on Trigger Classification task on KBP2017Eval. Table 6 shows the experiment results. We can see that neither character-level or wordlevel representation can achieve competitive results with the NPNs. This verified the necessity of hybrid representation. Besides, we can see that NPN(Char) outperforms other character-level methods in Table 2, which further confirms that our trigger nugget generator is still effective even only using character-level information. 5 Related Work Event detection is an important task in information extraction and has attracted many attentions. Traditional methods (Ji and Grishman, 2008; Patwardhan and Riloff, 2009; Liao et al., 2010; McClosky et al., 2011; Hong et al., 2011; Huang and Riloff, 2012; Li et al., 2013a,b, 2014) rely heavily on hand-craft features, which are hard to transfer among languages and annotation standards. Recently, deep learning methods, which automatically extract high-level features and perform token-level classification with neural networks (Chen et al., 2015b; Nguyen and Grishman, 2015), have achieved significant progress. Some improvements have been made by jointly predicting triggers and arguments (Nguyen et al., 2016) and introducing more complicated architectures to capture larger scale of contexts (Feng et al., 2016; Nguyen and Grishman, 2016; Ghaeini et al., 2016). These methods have achieved promising results in English event detection. Unfortunately, the word-trigger mismatch problem significantly undermines the performance of word-level models in Chinese event detection (Chen and Ji, 2009). 
To resolve this problem, Chen and Ji (2009) proposed a feature-driven BIO tagging methods at character-level sequences. Qin et al. (2010) introduced a method which can automatically expand candidate Chinese trigger set. While Li et al. (2012) and Li and Zhou (2012) defined manually character compositional patterns for Chinese event triggers. However, their methods rely on hand-crafted features and patterns, which make them difficult to be integrated into recent Deep Learning models. Recent advances have shown that neural networks can effectively capture spatial and positional information from raw inputs (Ren et al., 2015; He et al., 2017; Wang and Jiang, 2017). 1573 This paper designs Nugget Proposal Networks to capture character compositional structure of event triggers, which is more robust and more effective than previous hand-crafted patterns or characterlevel sequential labeling methods. 6 Conclusions and Future Work This paper proposes Nugget Proposal Networks for Chinese event detection, which can effectively resolve the word-trigger mismatch problem by modeling and exploiting character compositional structure of Chinese event triggers, using hybrid representation which can summarize information from both characters and words. Experiment results have shown that our method significantly outperforms conventional methods. Because the mismatch between words and extraction units is a common problem in information extraction, we believe our method can also be applied to many other languages and tasks for exploiting inner composition structure during extraction, such as Named Entity Recognition. Acknowledgments This work is supported by the National Natural Science Foundation of China under Grants no. 61433015, 61572477 and 61772505, and the Young Elite Scientists Sponsorship Program no. YESS20160177. Moreover, we sincerely thank all reviewers for their valuable comments. References Chen Chen and Vincent Ng. 2012. Joint modeling for chinese event extraction with rich linguistic features. In Proceedings of COLING 2012. Xinxiong Chen, Lei Xu, Zhiyuan Liu, Maosong Sun, and Huanbo Luan. 2015a. Joint learning of character and word embeddings. In Proceedings of IJCAI 2015. Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015b. Event extraction via dynamic multi-pooling convolutional neural networks. In Proceedings of ACL 2015. Zheng Chen and Heng Ji. 2009. Language specific issue and feature exploration in chinese event extraction. In Proceedings of NAACL-HLT 2009. Xiaocheng Feng, Lifu Huang, Duyu Tang, Bing Qin, Heng Ji, and Ting Liu. 2016. A languageindependent neural network for event detection. In Proceedings of ACL 2016. Reza Ghaeini, Xiaoli Z Fern, Liang Huang, and Prasad Tadepalli. 2016. Event nugget detection with forward-backward recurrent neural networks. In Proceedings of ACL 2016. Xianpei Han, Xiliang Song, Hongyu Lin, Qichen Zhu, Yaojie Lu, Le Sun, Jingfang Xu, Mingrong Liu, Ranxu Su, Sheng Shang, Chenwei Ran, and Feifei Xu. 2017. ISCAS Sogou at TAC-KBP 2017. In Proceedings of TAC 2017. Kaiming He, Georgia Gkioxari, Piotr Doll´ar, and Ross Girshick. 2017. Mask r-cnn. arXiv preprint arXiv:1703.06870. Yu Hong, Jianfeng Zhang, Bin Ma, Jianmin Yao, Guodong Zhou, and Qiaoming Zhu. 2011. Using cross-entity inference to improve event extraction. In Proceedings of ACL-HLT 2011. Ruihong Huang and Ellen Riloff. 2012. Modeling textual cohesion for event extraction. In Proceedings of AAAI 2012. Heng Ji and Ralph Grishman. 2008. 
Refining event extraction through cross-document inference. In Proceedings of ACL 2008. Peifeng Li and Guodong Zhou. 2012. Employing morphological structures and sememes for chinese event extraction. In Proceedings of COLING 2012. Peifeng Li, Guodong Zhou, Qiaoming Zhu, and Libin Hou. 2012. Employing compositional semantics and discourse consistency in chinese event extraction. In Proceedings of EMNLP-CoNLL 2012. Peifeng Li, Qiaoming Zhu, and Guodong Zhou. 2013a. Argument inference from relevant event mentions in chinese argument extraction. In Proceedings of ACL 2013. Qi Li, Heng Ji, Yu HONG, and Sujian Li. 2014. Constructing information networks using one single model. In Proceedings of EMNLP 2014. Qi Li, Heng Ji, and Liang Huang. 2013b. Joint event extraction via structured prediction with global features. In Proceedings of ACL 2013. Shasha Liao, New York, Ralph Grishman, and New York. 2010. Using document level cross-event inference to improve event extraction. In Proceedings of ACL 2010. Peter Makarov and Simon Clematide. 2017. UZH at TAC KBP 2017: Event nugget detection via joint learning with softmax-margin objective. In Proceedings of TAC 2017. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of ACL 2014. 1574 David McClosky, Mihai Surdeanu, and Christopher D Manning. 2011. Event extraction as dependency parsing. In Proceedings of ACL-HLT 2011. Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In Proceedings of NAACL-HLT 2016. Thien Huu Nguyen and Ralph Grishman. 2015. Event detection and domain adaptation with convolutional neural networks. In Proceedings of ACL 2015. Thien Huu Nguyen and Ralph Grishman. 2016. Modeling skip-grams for event detection with convolutional neural networks. In Proceedings of EMNLP 2016. Siddharth Patwardhan and Ellen Riloff. 2009. A unified model of phrasal and sentential evidence for information extraction. In Proceedings of EMNLP 2009. Bing Qin, Yanyan Zhao, Xiao Ding, Ting Liu, and Guofu Zhai. 2010. Event type recognition based on trigger expansion. Tsinghua Science and Technology, 15(3):251–258. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. In Proceedings of NIPS 2015. Erik F. Tjong Kim Sang and Jorn Veenstra. 1999. Representing text chunks. In Proceedings of EACL 1999. Shuohang Wang and Jing Jiang. 2017. Machine comprehension using match-lstm and answer pointer. In Proceedings of ICLR 2017. Matthew D. Zeiler. 2012. Adadelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701. Ying Zeng, Honghui Yang, Yansong Feng, Zheng Wang, and Dongyan Zhao. 2016. A convolution bilstm neural network model for chinese event extraction. In Proceedings of NLPCC-ICCPOL 2016.
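As a concrete reference for the micro-averaged Precision, Recall and F1 used in Section 4.1 above, the short sketch below scores predicted trigger nuggets against gold nuggets: a nugget counts as correct when its span (for identification), or its span and event type (for classification), matches a gold annotation. This is an illustrative scoring routine under those assumptions, not the official ACE or KBP evaluation toolkit, and the function name is hypothetical.

def micro_prf(gold_nuggets, pred_nuggets):
    # Each nugget is a hashable tuple, e.g. (doc_id, start, end) for
    # identification or (doc_id, start, end, event_type) for classification.
    gold, pred = set(gold_nuggets), set(pred_nuggets)
    correct = len(gold & pred)                      # global true positives
    p = correct / len(pred) if pred else 0.0        # micro-averaged precision
    r = correct / len(gold) if gold else 0.0        # micro-averaged recall
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

Because the counts are pooled over all documents and event types before the ratios are taken, this corresponds to micro-averaging rather than averaging per-type scores.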
2018
145
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1575–1584 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1575 Higher-order Relation Schema Induction using Tensor Factorization with Back-off and Aggregation Madhav Nimishakavi Indian Institute of Science Bangalore [email protected] Manish Gupta Microsoft Hyderabad [email protected] Partha Talukdar Indian Institute of Science Bangalore [email protected] Abstract Relation Schema Induction (RSI) is the problem of identifying type signatures of arguments of relations from unlabeled text. Most of the previous work in this area have focused only on binary RSI, i.e., inducing only the subject and object type signatures per relation. However, in practice, many relations are high-order, i.e., they have more than two arguments and inducing type signatures of all arguments is necessary. For example, in the sports domain, inducing a schema win(WinningPlayer, OpponentPlayer, Tournament, Location) is more informative than inducing just win(WinningPlayer, OpponentPlayer). We refer to this problem as Higher-order Relation Schema Induction (HRSI). In this paper, we propose Tensor Factorization with Backoff and Aggregation (TFBA), a novel framework for the HRSI problem. To the best of our knowledge, this is the first attempt at inducing higher-order relation schemata from unlabeled text. Using the experimental analysis on three real world datasets, we show how TFBA helps in dealing with sparsity and induce higher order schemata. 1 Introduction Building Knowledge Graphs (KGs) out of unstructured data is an area of active research. Research in this has resulted in the construction of several large scale KGs, such as NELL (Mitchell et al., 2015), Google Knowledge Vault (Dong et al., 2014) and YAGO (Suchanek et al., 2007). These KGs consist of millions of entities and beliefs involving those entities. Such KG construction methods are schema-guided as they require the list of input relations and their schemata (e.g., playerPlaysSport(Player, Sport)). In other words, knowledge of schemata is an important first step towards building such KGs. While beliefs in such KGs are usually binary (i.e., involving two entities), many beliefs of interest go beyond two entities. For example, in the sports domain, one may be interested in beliefs of the form win(Roger Federer, Nadal, Wimbledon, London), which is an instance of the high-order (or n-ary) relation win whose schema is given by win(WinningPlayer, OpponentPlayer, Tournament, Location). We refer to the problem of inducing such relation schemata involving multiple arguments as Higher-order Relation Schema Induction (HRSI). In spite of its importance, HRSI is mostly unexplored. Recently, tensor factorization-based methods have been proposed for binary relation schema induction (Nimishakavi et al., 2016), with gains in both speed and accuracy over previously proposed generative models. To the best of our knowledge, tensor factorization methods have not been used for HRSI. We address this gap in this paper. Due to data sparsity, straightforward adaptation of tensor factorization from (Nimishakavi et al., 2016) to HRSI is not feasible, as we shall see in Section 3.1. We overcome this challenge in this paper, and make the following contributions. • We propose Tensor Factorization with Backoff and Aggregation (TFBA), a novel tensor factorization-based method for Higher-order RSI (HRSI). 
In order to overcome data sparsity, TFBA backs-off and jointly factorizes multiple lower-order tensors derived from an extremely sparse higher-order tensor. • As an aggregation step, we propose a constrained clique mining step which constructs 1576 the higher-order schemata from multiple binary schemata. • Through experiments on multiple real-world datasets, we show the effectiveness of TFBA for HRSI. Source code of TFBA is available at https: //github.com/madhavcsa/TFBA. The remainder of the paper is organized as follows. We discuss related work in Section 2. In Section 3.1, we first motivate why a back-off strategy is needed for HRSI, rather than factorizing the higher-order tensor. Further, we discuss the proposed TFBA framework in Section 3.2. In Section 4, we demonstrate the effectiveness of the proposed approach using multiple real world datasets. We conclude with a brief summary in Section 5. 2 Related Work In this section, we discuss related works in two broad areas: schema induction, and tensor and matrix factorizations. Schema Induction: Most work on inducing schemata for relations has been in the binary setting (Mohamed et al., 2011; Movshovitz-Attias and Cohen, 2015; Nimishakavi et al., 2016). McDonald et al. (2005) and Peng et al. (2017) extract n-ary relations from Biomedical documents, but do not induce the schema, i.e., type signature of the n-ary relations. There has been significant amount of work on Semantic Role Labeling (Lang and Lapata, 2011; Titov and Khoddam, 2015; Roth and Lapata, 2016), which can be considered as nary relation extraction. However, we are interested in inducing the schemata, i.e., the type signature of these relations. Event Schema Induction is the problem of inducing schemata for events in the corpus (Balasubramanian et al., 2013; Chambers, 2013; Nguyen et al., 2015). Recently, a model for event representations is proposed in (Weber et al., 2018). Cheung et al. (2013) propose a probabilistic model for inducing frames from text. Their notion of frame is closer to that of scripts (Schank and Abelson, 1977). Script learning is the process of automatically inferring sequence of events from text (Mooney and DeJong, 1985). There is a fair amount of recent work in statistical script learning (Pichotta and Mooney, 2016), (Pichotta and Mooney, 2014). While script learning deals with the sequence of events, we try to find the schemata of relations at a corpus level. Ferraro and Durme (2016) propose a unified Bayesian model for scripts, frames and events. Their model tries to capture all levels of Minsky Frame structure (Minsky, 1974), however we work with the surface semantic frames. Tensor and Matrix Factorizations: Matrix factorization and joint tensor-matrix factorizations have been used for the problem of predicting links in the Universal Schema setting (Riedel et al., 2013; Singh et al., 2015). Chen et al. (2015) use matrix factorizations for the problem of finding semantic slots for unsupervised spoken language understanding. Tensor factorization methods are also used in factorizing knowledge graphs (Chang et al., 2014; Nickel et al., 2012). Joint matrix and tensor factorization frameworks, where the matrix provides additional information, is proposed in (Acar et al., 2013) and (Wang et al., 2015). These models are based on PARAFAC (Harshman, 1970), a tensor factorization model which approximates the given tensor as a sum of rank1 tensors. A boolean Tucker decomposition for discovering facts is proposed in (Erdos and Miettinen, 2013). 
In this paper, we use a modified version (Tucker2) of Tucker decomposition (Tucker, 1963). RESCAL (Nickel et al., 2011) is a simplified Tucker model suitable for relational learning. Recently, SICTF (Nimishakavi et al., 2016), a variant of RESCAL with side information, is used for the problem of schema induction for binary relations. SICTF cannot be directly used to induce higher order schemata, as the higher-order tensors involved in inducing such schemata tend to be extremely sparse. TFBA overcomes these challenges to induce higher-order relation schemata by performing Non-Negative Tucker-style factorization of sparse tensor while utilizing a back-off strategy, as explained in the next section. 3 Higher Order Relation Schema Induction using Back-off Factorization In this section, we start by discussing the approach of factorizing a higher-order tensor and provide the motivation for back-off strategy. Next, we discuss the proposed TFBA approach in detail. Please refer to Table 1 for notations used in this paper. 1577 Notation Definition R+ Set of non-negative reals. X ∈Rn1×n2×...×nN + N th -order non-negative tensor. X(i) mode-i matricization of tensor X . Please see (Kolda and Bader, 2009) for details. A ∈Rn×r + Non-negative matrix of order n × r. ∗ Hadamard product: (A ∗B)i,j = Ai,j × Bi,j. Table 1: Notations used in the paper. Figure 1: Overview of Step 1 of TFBA. Rather than factorizing the higher-order tensor X, TFBA performs joint Tucker decomposition of multiple 3-mode tensors, X 1, X 2, and X 3, derived out of X. This joint factorization is performed using shared latent factors A, B, and C. This results in binary schemata, each of which is stored as a cell in one of the core tensors G1, G2, and G3. Please see Section 3.2.1 for details. 3.1 Factorizing a Higher-order Tensor Given a text corpus, we use OpenIEv5 (Mausam, 2016) to extract tuples. Consider the following sentence “Federer won against Nadal at Wimbledon.”. Given this sentence, OpenIE extracts the 4-tuple (Federer, won, against Nadal, at Wimbledon). We lemmatize the relations in the tuples and only consider the noun phrases as arguments. Let T represent the set of these 4-tuples. We can construct a 4-order tensor X ∈Rn1×n2×n3×m + from T. Here, n1 is the number of subject noun phrases (NPs), n2 is the number of object NPs, n3 is the number of other NPs, and m is the number of relations in T. Values in the tensor correspond to the frequency of the tuples. In case of 5-tuples of the form (subject, relation, object, other-1, other-2), we split the 5-tuples into two 4-tuples of the form (subject, relation, object, other-1) and (subject, relation, object, other-2) and frequency of these 4tuples is considered to be same as the original 5tuple. Factorizing the tensor X results in discovering latent categories of NPs, which help in inducing the schemata. We propose the following approach to factorize X. min G,A,B,C ∥X −G ×1 A ×2 B ×3 C ×4 I∥2 F + λa ∥A∥2 F + λb ∥B∥2 F + λc ∥C∥2 F , where, A ∈Rn1×r1 + , B ∈Rn2×r2 + , C ∈Rn3×r3 + , G ∈Rr1×r2×r3×m + , λa ≥0, λb ≥0 and λc ≥0. Here, I is the identity matrix. Non-negative updates for the variables can be obtained following (Lee and Seung, 2000). Similar to (Nimishakavi et al., 2016), schemata induced will be of the form relation ⟨Ai, Bj, Ck⟩. Here, Pi represents the ith column of a matrix P. A is the embedding matrix of subject NPs in T (i.e., mode-1 of X), r1 is the embedding rank in mode-1 which is the number of latent categories of subject NPs. 
Similarly, B and 1578 Figure 2: Overview of Step 2 of TFBA. Induction of higher-order schemata from the tri-partite graph formed from the columns of matrices A, B, and C. Triangles in this graph (solid) represent a 3-ary schema, n-ary schemata for n > 3 can be induced from the 3-ary schemata. Please refer to Section 3.2.2 for details. C are the embedding matrices of object NPs and other NPs respectively. r2 and r3 are the number of latent categories of object NPs and other NPs respectively. G is the core tensor. λa, λb and λc are the regularization weights. However, the 4-order tensors are heavily sparse for all the datasets we consider in this work. The sparsity ratio of this 4-order tensor for all the datasets is of the order 1e-7. As a result of the extreme sparsity, this approach fails to learn any schemata. Therefore, we propose a more successful back-off strategy for higher-order RSI in the next section. 3.2 TFBA: Proposed Framework To alleviate the problem of sparsity, we construct three tensors X 3, X 2, and X 1 from T as follows: • X 3 ∈Rn1×n2×m + is constructed out of the tuples in T by dropping the other argument and aggregating resulting tuples, i.e., X 3 i,j,p = Pn3 k=1 Xi,j,k,p. For example, 4tuples ⟨(Federer, Win, Nadal, Wimbledon), 10⟩and ⟨(Federer, Win, Nadal, Australian Open), 5⟩will be aggregated to form a triple ⟨(Federer, Win, Nadal), 15⟩. • X 2 ∈Rn1×n3×m + is constructed out of the tuples in T by dropping the object argument and aggregating resulting tuples i.e., X 2 i,j,p = Pn2 k=1 Xi,k,j,p. • X 1 ∈Rn2×n3×m + constructed out of the tuples in T by dropping the subject argument and aggregating resulting tuples i.e., X 1 i,j,p = Pn1 k=1 Xk,i,j,p. The proposed framework TFBA for inducing higher order schemata involves the following two steps. • Step 1: In this step, TFBA factorizes multiple lower-order overlapping tensors, X 1, X 2, and X 3, derived from X to induce binary schemata. This step is illustrated in Figure 1 and we discuss details in Section 3.2.1. • Step 2: In this step, TFBA connects multiple binary schemata identified above to induce higher-order schemata. The method accomplishes this by solving a constrained clique problem. This step is illustrated in Figure 2 and we discuss the details in Section 3.2.2. 3.2.1 Step 1: Back-off Tensor Factorization A schematic overview of this step is shown in Figure 1. TFBA first preprocesses the corpus and extracts OpenIE tuple set T out of it. The 4-mode tensor X is constructed out of T. Instead of performing factorization of the higher-order tensor X as in Section 3.1, TFBA creates three tensors out of X: X 1 n2×n3×m, X 2 n1×n3×m and X 3 n1×n2×m. TFBA performs a coupled non-negative Tucker factorization of the input tensors X 1, X 2 and X 3 by solving the following optimization problem. min A,B,C G1,G2,G3 f(X 3, G3, A, B) + f(X 2, G2, A, C) + f(X 1, G1, B, C) + λa ∥A∥2 F + λb ∥B∥2 F + λc ∥C∥2 F , (1) where, f(X i, Gi, P, Q) = X i −Gi ×1 P ×2 Q ×3 I 2 F A ∈Rn1×r1 + , B ∈Rn2×r2 + , C ∈Rn3×r3 + G1 ∈Rr2×r3×m + , G2 ∈Rr1×r3×m + , G3 ∈Rr1×r2×m + . We enforce non-negativity constraints on the matrices A, B, C and the core tensors Gi (i ∈ {1, 2, 3}). Non-negativity is essential for learning interpretable latent factors (Murphy et al., 2012). 1579 Each slice of the core tensor G3 corresponds to one of the m relations. Each cell in a slice corresponds to an induced schema in terms of the latent factors from matrices A and B. 
In other words, G3 i,j,k is an induced binary schema for relation k involving induced categories represented by columns Ai and Bj. Cells in G1 and G2 may be interpreted accordingly. We derive non-negative multiplicative updates for A, B and C following the NMF updating rules given in (Lee and Seung, 2000). For the update of A, we consider the mode-1 matricization of first and the second term in Equation 1 along with the regularizer. A ←A ∗ X 3 (1)G⊤ BA + X 2 (1)G⊤ CA A[GBAG⊤ BA + GCAG⊤ CA] + λaA, where, GBA = (G3 ×2 B)(1), GCA = (G2 ×2 C)(1). In order to estimate B, we consider mode-2 matricization of first term and mode-1 matricization of third term in Equation 1, along with the regularization term. We get the following update rule for B B ←B ∗ X 3 (2)G⊤ AB + X 1 (1)G⊤ CB B[GABG⊤ AB + GCBG⊤ CB] + λbB, where, GAB = (G3 ×1 A)(2), GCB = (G1 ×2 C)(1). For updating C, we consider mode-2 matricization of second and third terms in Equation 1 along with the regularization term, and we get C ←C ∗ X 3 (2)G⊤ BC + X 2 (2)G⊤ AC C[GACG⊤ AC + GBCG⊤ BC] + λcC, where, GAC = (G3 ×1 B)(2), GBC = (G2 ×1 A)(2). Finally, we update the three core tensors in Equation 1 following (Kim and Choi, 2007) as follows, G1 ←G1 ∗ X 1 ×1 B⊤×2 C⊤ G1 ×1 B⊤B ×2 C⊤C, G2 ←G2 ∗ X 2 ×1 A⊤×2 C⊤ G2 ×1 A⊤A ×2 C⊤C, G3 ←G3 ∗ X 3 ×1 A⊤×2 B⊤ G3 ×1 A⊤A ×2 B⊤B. In all the above updates, P Q represents elementwise division and I is the identity matrix. Initialization: For initializing the component matrices A, B, and C, we first perform a nonnegative Tucker2 Decomposition of the individual input tensors X 1, X 2, and X 3. Then compute the average of component matrices obtained from each individual decomposition for initialization. We initialize the core tensors G1, G2, and G3 with the core tensors obtained from the individual decompositions. 3.2.2 Step 2: Binary to Higher-Order Schema Induction In this section, we describe how a higher-order schema is constructed from the factorization described in the previous sub-section. Each relation k has three representations given by the slices G1 k, G2 k and G3 k from each core tensor. We need a principled way to produce a joint schema from these representations. For a relation, we select top-n indices (i, j) with highest values from each matrix. The indices i and j from G3 k correspond to column numbers of A and B respectively, indices from G2 k correspond to columns from A and C and columns from G1 k correspond to columns from B and C. We construct a tri-partite graph with the column numbers from each of the component matrices A, B and C as the vertices belonging to independent sets, the top-n indices selected are the edges between these vertices. From this tri-partite graph, we find all the triangles which will give schema with three arguments for a relation, illustrated in Figure 2. We find higher order schemata, i.e., schemata with more than three arguments by merging two third order schemata with same column number from A and B. For example, if we find two schemata (A2, B4, C10) and (A2, B4, C8) then we merge these two to give (A2, B4, C10, C8) as a higher order schema. This can be continued further for even higher order schemata. This process may be thought of as finding a constrained 1580 clique over the tri-partite graph. Here the constraint is that in the maximal clique, there can only be one edge between sets corresponding to columns of A and columns of B. The procedure above is inspired by (McDonald et al., 2005). 
However, we note that (McDonald et al., 2005) solved a different problem, viz., n-ary relation instance extraction, while our focus is on inducing schemata. Though we discuss the case of back-off from 4-order to 3-order, ideas presented above can be extended for even higher orders depending on the sparsity of the tensors. 4 Experiments In this section, we evaluate the performance of TFBA for the task of HRSI. We also propose a baseline model for HRSI called HardClust. HardClust: We propose a baseline model called the Hard Clustering Baseline (HardClust) for the task of higher order relation schema induction. This model induces schemata by grouping perrelation NP arguments from OpenIE extractions. In other words, for each relation, all the Noun Phrases (NPs) in first argument form a cluster that represents the subject of the relation, all the NPs in the second argument form a cluster that represents object and so on. Then from each cluster, the top most frequent NPs are chosen as the representative NPs for the argument type. We note that this method is only able to induce one schema per relation. Datasets: We run our experiments on three datasets. The first dataset (Shootings) is a collection of 1,335 documents constructed from a publicly available database of mass shootings in the United States. The second is New York Times Sports (NYT Sports) dataset which is a collection of 20,940 sports documents from the period 2005 and 2007. And the third dataset (MUC) is a set of 1300 Latin American newswire documents about terrorism events. After performing the processing steps described in Section 3, we obtained 357,914 unique OpenIE extractions from the NYT Sports dataset, 10,847 from Shootings dataset, and 8,318 from the MUC dataset. However, in order to properly analyze and evaluate the model, we consider only the 50 most frequent relations in the datasets and their corresponding OpenIE extractions. This is done to avoid noisy OpenIE extractions to yield better data quality and to aid subsequent manual evaluation of the data. We construct input tensors following the procedure described in Section 3.2. Details on the dimensions of tensors obtained are given in Table 2. Model Selection: In order to select appropriate TFBA parameters, we perform a grid search over the space of hyper-parameters, and select the set of hyper-parameters that give best Average FIT score (AvgFIT). AvgFIT(G1, G2, G3, A, B, C, X 1, X 2, X 3) = 1 3{FIT(X 1, G1, B, C) + FIT(X 2, G2, A, C) + FIT(X 3, G3, A, B)}, where, FIT(X, G, P, Q) = 1−∥X −G ×1 P ×2 Q∥F ∥X∥F . We perform a grid search for the rank parameters between 5 and 20, for the regularization weights we perform a grid search over 0 and 1. Table 3 provides the details of hyper-parameters set for different datasets. Evaluation Protocol: For TFBA, we follow the protocol mentioned in Section 3.2.2 for constructing higher order schemata. For every relation, we consider top 5 binary schemata from the factorization of each tensor. We construct a tripartite graph, as explained in Section 3.2.2, and mine constrained maximal cliques from the tripartite graphs for schemata. Table 4 provides some qualitative examples of higher-order schemata induced by TFBA. Accuracy of the schemata induced by the model is evaluated by human evaluators. In our experiments, we use human judgments from three evaluators. For every relation, the first and second columns given in Table 4 are presented to the evaluators and they are asked to validate the schema. 
We present top 50 schemata based on the score of the constrained maximal clique induced by TFBA to the evaluators. This evaluation protocol was also used in (Movshovitz-Attias and Cohen, 2015) for evaluating ontology induction. All evaluations were blind, i.e., the evaluators were not aware of the model they were evaluating. Difficulty with Computing Recall: Even though recall is a desirable measure, due to the lack of availability of gold higher-order schema annotated corpus, it is not possible to compute recall. Although the MUC dataset has gold annotations for some predefined list of events, it does not have annotations for the relations. 1581 Dataset X 1shape X 2shape X 3shape Shootings 3365 × 1295 × 50 2569 × 1295 × 50 2569 × 3365 × 50 NYT Sports 57, 820 × 20, 356 × 50 49, 659 × 20, 356 × 50 49, 659 × 57, 820 × 50 MUC 2825 × 962 × 50 2555 × 962 × 50 2555 × 2825 × 50 Table 2: Details of dimensions of tensors constructed for each dataset used in the experiments. Dataset (r1, r2, r3) (λa, λb, λc) Shootings (10, 20,15) (0.3, 0.1, 0.7) NYT Sports (20, 15, 15) (0.9, 0.5, 0.7) MUC (15, 12, 12) (0.7, 0.7, 0.4) Table 3: Details of hyper-parameters set for different datasets. Experimental results comparing performance of various models for the task of HRSI are given in Table 5. We present evaluation results from three evaluators represented as E1, E2 and E3. As can be observed from Table 5, TFBA achieves better results than HardClust for the Shootings and NYT Sports datasets, however HardClust achieves better results for the MUC dataset. Percentage agreement of the evaluators for TFBA is 72%, 70% and 60% for Shootings, NYT Sports and MUC datasets respectively. HardClust Limitations: Even though HardClust gives better induction for MUC corpus, this approach has some serious drawbacks. HardClust can only induce one schema per relation. This is a restrictive constraint as multiple senses can exist for a relation. For example, consider the schemata induced for the relation shoot as shown in Table 4. TFBA induces two senses for the relation, but HardClust can induce only one schema. For a set of 4-tuples, HardClust can only induce ternary schemata; the dimensionality of the schemata cannot be varied. Since the latent factors induced by HardClust are entirely based on frequency, the latent categories induced by HardClust are dominated by only a fixed set of noun phrases. For example, in NYT Sports dataset, subject category induced by HardClust for all the relations is ⟨team, yankees, mets⟩. In addition to inducing only one schema per relation, most of the times HardClust only induces a fixed set of categories. Whereas for TFBA, the number of categories depends on the rank of factorization, which is a user provided parameter, thus providing more flexibility to choose the latent categories. 4.1 Using Event Schema Induction for HRSI Event schema induction is defined as the task of learning high-level representations of events, like a tournament, and their entity roles, like winningplayer etc, from unlabeled text. Even though the main focus of event schema induction is to induce the important roles of the events, as a side result most of the algorithms also provide schemata for the relations. In this section, we investigate the effectiveness of these schemata compared to the ones induced by TFBA. Event schemata are represented as a set of (Actor, Rel, Actor) triples in (Balasubramanian et al., 2013). Actors represent groups of noun phrases and Rels represent relations. 
From this style of representation, however, the n-ary schemata for relations cannot be induced. Event schemata generated in (Weber et al., 2018) are similar to that in (Balasubramanian et al., 2013). Event schema induction algorithm proposed in (Nguyen et al., 2015) doesn’t induce schemata for relations, but rather induces the roles for the events. For this investigation we experiment with the following algorithm. Chambers-13 (Chambers, 2013): This model learns event templates from text documents. Each event template provides a distribution over slots, where slots are clusters of NPs. Each event template also provides a cluster of relations, which is most likely to appear in the context of the aforementioned slots. We evaluate the schemata of these relation clusters. As can be observed from Table 5, the proposed TFBA performs much better than Chambers-13. HardClust also performs better than Chambers-13 on all the datasets. From this analysis we infer that there is a need for algorithms which induce higher-order schemata for relations, a gap we fill in this paper. Please note that the experimental results provided in (Chambers, 2013) for MUC dataset are for the task of event schema induction, but in this work we evaluate the relation schemata. Hence the results in (Chambers, 2013) and results in this paper are not comparable. Example 1582 Relation Schema NPs from the induced categories Evaluator Judgment (Human) Suggested Label Shootings leave⟨A6, B0, C7⟩ A6: shooting, shooting incident, double shooting valid < shooting > B0: one person, two people, three people < people > C7: dead, injured, on edge <injured > identify⟨A1, B1, C5, C6⟩ A1: police, officers, huntsville police valid < police > B1: man, victims, four victims < victim(s)> C5: sunday, shooting staurday, wednesday afternoon <day/time > C6: apartment, bedroom, building in the neighborhood <place > shoot⟨A7, B6, C1⟩ A7: gunman, shooter, smith valid < perpetrator > B6: freeman, slain woman, victims <victim > C1: friday, friday night, early monday morning < time> shoot⟨A4, B2, C13⟩ A4: <num>-year-old man, <num>-year-old george reavis, <num>-year-old brockton man valid < victim> B2: in the leg, in the head, in the neck < body part> C13: in macon, in chicago, in an alley < location > say⟨A1, B1, C5⟩ A1: police, officers, huntsville police invalid – B1: man, victims, four victims C5: sunday, shooting staurday, wednesday afternoon NYT sports spend⟨A0, B16, C3⟩ A0: yankees, mets, jets valid < team > B14: $ <num> million, $ <num>, $ <num> billion < money > C3: <num>, year, last season < year > win⟨A2, B10, C3⟩ A2: red sox, team, yankees valid < team > B10: world series, title, world cup < championship > C3: <num>, year, last season < year > get⟨A4, B4, C1⟩ A4: umpire, mike cameron, andre agassi invalid – B4: ball, lives, grounder C1: back, forward, <num>-yard line MUC tell⟨A7, B2, C0⟩ A7: medardo gomez, jose azcona, gregorio roza chavez valid < politician > B2: media, reporters, newsmen <media > C0: today, at <num>, tonight < day/time > occur⟨A9, B5, C10⟩ A9: bomb, blast, explosion valid < bombing > B5: near san salvador, here in madrid, in the same office < place > C10: at <num>, this time, simultaneously < time > suffer⟨A5, B4, C4) A5: justice maria elena diaz, vargas escobar, judge sofia de roldan invalid – B4: casualties , car bomb, grenade C4: settlement of refugees, in san roman, now Table 4: Examples of schemata induced by TFBA. Please note that some of them are 3-ary while others are 4-ary. 
For details about schema induction, please refer to Section 3.2. Shootings NYT Sports MUC E1 E2 E3 Avg E1 E2 E3 Avg E1 E2 E3 Avg HardClust 0.64 0.70 0.64 0.66 0.42 0.28 0.52 0.46 0.64 0.58 0.52 0.58 Chambers-13 0.32 0.42 0.28 0.34 0.08 0.02 0.04 0.07 0.28 0.34 0.30 0.30 TFBA 0.82 0.78 0.68 0.76 0.86 0.6 0.64 0.70 0.58 0.38 0.48 0.48 Table 5: Higher-order RSI accuracies of various methods on the three datasets. Induced schemata for each dataset and method are evaluated by three human evaluators, E1, E2, and E3. TFBA performs better than HardClust for Shootings and NYT Sports datasets. Even though HardClust achieves better accuracy on MUC dataset, it has several limitations, see Section 4 for more details. Chambers-13 solves a slightly different problem called event schema induction, for more details about the comparison with Chambers-13 see Section 4.1. schemata induced by TFBA and (Chambers-13) are provided as part of the supplementary material. 5 Conclusion Higher order Relation Schema Induction (HRSI) is an important first step towards building domainspecific Knowledge Graphs (KGs). In this paper, we proposed TFBA, a tensor factorizationbased method for higher-order RSI. To the best of our knowledge, this is the first attempt at inducing higher-order (n-ary) schemata for relations from unlabeled text. Rather than factorizing a severely sparse higher-order tensor directly, TFBA performs back-off and jointly factorizes multiple lower-order tensors derived out of the higher-order tensor. In the second step, TFBA solves a constrained clique problem to induce schemata out of multiple binary schemata. We are hopeful that the backoff-based factorization idea exploited in TFBA will be useful in other sparse factorization settings. 1583 Acknowledgment We thank the anonymous reviewers for their insightful comments and suggestions. This research has been supported in part by the Ministry of Human Resource Development (Government of India), Accenture, and Google. References Evrim Acar, Morten Arendt Rasmussen, Francesco Savorani, Tormod Ns, and Rasmus Bro. 2013. Understanding data fusion within the framework of coupled matrix and tensor factorizations. Chemometrics and Intelligent Laboratory Systems 129:53–63. Niranjan Balasubramanian, Stephen Soderland, Mausam, and Oren Etzioni. 2013. Generating coherent event schemas at scale. In EMNLP. Nathanael Chambers. 2013. Event schema induction with a probabilistic entity-driven model. In EMNLP. Kai-Wei Chang, Wen tau Yih, Bishan Yang, and Christopher Meek. 2014. Typed tensor decomposition of knowledge bases for relation extraction. In EMNLP. Yun-Nung Chen, William Yang Wang, Anatole Gershman, and Alexander I. Rudnicky. 2015. Matrix factorization with knowledge graph propagation for unsupervised spoken language understanding. In ACL. Jackie Chi Kit Cheung, Hoifung Poon, and Lucy Vanderwende. 2013. Probabilistic frame induction. In NAACL-HLT. Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In KDD. Dora Erdos and Pauli Miettinen. 2013. Discovering facts with boolean tensor tucker decomposition. In CIKM. Francis Ferraro and Benjamin Van Durme. 2016. A unified bayesian model of scripts, frames and language. In AAAI. R. A. Harshman. 1970. Foundations of the PARAFAC procedure: Models and conditions for an” explanatory” multi-modal factor analysis. UCLA Working Papers in Phonetics 16(1):84. 
Yong-Deok Kim and Seungjin Choi. 2007. Nonnegative tucker decomposition. In CVPR. Tamara G Kolda and Brett W Bader. 2009. Tensor decompositions and applications. SIAM review 51(3):455–500. Joel Lang and Mirella Lapata. 2011. Unsupervised semantic role induction via split-merge clustering. In NAACL-HLT. Daniel D. Lee and H. Sebastian Seung. 2000. Algorithms for non-negative matrix factorization. In NIPS. Mausam. 2016. Open information extraction systems and downstream applications. In IJCAI. Ryan McDonald, Fernando Pereira, Seth Kulick, Scott Winters, Yang Jin, and Pete White. 2005. Simple algorithms for complex relation extraction with applications to biomedical ie. In ACL. Marvin Minsky. 1974. A framework for representing knowledge. Technical report. T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, and J. Welling. 2015. Never-ending learning. In AAAI. Thahir P. Mohamed, Jr. Estevam R. Hruschka, and Tom M. Mitchell. 2011. Discovering relations between noun categories. In EMNLP. Raymond Mooney and Gerald DeJong. 1985. Learning schemata for natural language processing. In IJCAI. Dana Movshovitz-Attias and William W. Cohen. 2015. Kb-lda: Jointly learning a knowledge base of hierarchy, relations, and facts. In ACL. Brian Murphy, Partha Talukdar, and Tom Mitchell. 2012. Learning effective and interpretable semantic models using non-negative sparse embedding. In COLING. Kiem-Hieu Nguyen, Xavier Tannier, Olivier Ferret, and Romaric Besanc¸on. 2015. Generative event schema induction with entity disambiguation. In ACL. Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In ICML. Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2012. Factorizing yago: Scalable machine learning for linked data. In WWW. Madhav Nimishakavi, Uday Singh Saini, and Partha Talukdar. 2016. Relation schema induction using tensor factorization with side information. In EMNLP. Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen-tau Yih. 2017. Cross-sentence n-ary relation extraction with graph lstms. TACL 5:101–115. 1584 Karl Pichotta and Raymond J. Mooney. 2014. Statistical script learning with multi-argument events. In EACL. Karl Pichotta and Raymond J. Mooney. 2016. Learning statistical scripts with lstm recurrent neural networks. In AAAI. Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In NAACL-HLT. Michael Roth and Mirella Lapata. 2016. Neural semantic role labeling with dependency path embeddings. In ACL. R. Schank and R. Abelson. 1977. Scripts, plans, goals and understanding: An inquiry into human knowledge structures. Lawrence Erlbaum Associates, Hillsdale, NJ. Sameer Singh, Tim Rockt¨aschel, and Sebastian Riedel. 2015. Towards Combined Matrix and Tensor Factorization for Universal Schema Relation Extraction. In NAACL Workshop on Vector Space Modeling for NLP (VSM). Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In WWW. Ivan Titov and Ehsan Khoddam. 2015. Unsupervised induction of semantic roles within a reconstructionerror minimization framework. In NAACL-HLT. L. R. Tucker. 1963. 
Implications of factor analysis of three-way matrices for measurement of change. In Problems in measuring change., University of Wisconsin Press, Madison WI, pages 122–137. Yichen Wang, Robert Chen, Joydeep Ghosh, Joshua C. Denny, Abel N. Kho, You Chen, Bradley A. Malin, and Jimeng Sun. 2015. Rubik: Knowledge guided tensor factorization and completion for health data analytics. In KDD. Noah Weber, Niranjan Balasubramanian, and Nathanael Chambers. 2018. Event representations with tensor-based compositions. In AAAI.
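To make the constrained clique mining of Section 3.2.2 above concrete, the sketch below mines schemata for a single relation: the top-n cells of the slices of G3, G2 and G1 supply the A-B, A-C and B-C edges of the tri-partite graph, triangles yield 3-ary schemata, and triangles that share the same A and B columns are merged into higher-order schemata. The function and variable names are illustrative assumptions, not the authors' released implementation.

from collections import defaultdict

def mine_schemata(ab_edges, ac_edges, bc_edges):
    # ab_edges, ac_edges, bc_edges: iterables of column-index pairs taken from
    # the top-n cells of one relation's slices of G3, G2 and G1 respectively.
    a_to_c = defaultdict(set)
    for a, c in ac_edges:
        a_to_c[a].add(c)
    b_to_c = defaultdict(set)
    for b, c in bc_edges:
        b_to_c[b].add(c)

    # A triangle (a, b, c) requires all three edges and yields a 3-ary schema
    # of the form rel<A_a, B_b, C_c>.
    ternary = [(a, b, c) for a, b in ab_edges for c in a_to_c[a] & b_to_c[b]]

    # Merge 3-ary schemata that share the same A and B columns, e.g.
    # (A2, B4, C10) and (A2, B4, C8) -> (A2, B4, C8, C10).
    merged = defaultdict(set)
    for a, b, c in ternary:
        merged[(a, b)].add(c)
    higher_order = [(a, b) + tuple(sorted(cs))
                    for (a, b), cs in merged.items() if len(cs) > 1]
    return ternary, higher_order

For example, with ab_edges = {(2, 4)}, ac_edges = {(2, 8), (2, 10)} and bc_edges = {(4, 8), (4, 10)}, the sketch returns the two 3-ary schemata (2, 4, 8) and (2, 4, 10) together with the merged 4-ary schema (2, 4, 8, 10).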
2018
146
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1585–1594 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1585 Discovering Implicit Knowledge with Unary Relations Michael Glass IBM Research AI Knowledge Induction and Reasoning [email protected] Alfio Gliozzo IBM Research AI Knowledge Induction and Reasoning [email protected] Abstract State-of-the-art relation extraction approaches are only able to recognize relationships between mentions of entity arguments stated explicitly in the text and typically localized to the same sentence. However, the vast majority of relations are either implicit or not sententially localized. This is a major problem for Knowledge Base Population, severely limiting recall. In this paper we propose a new methodology to identify relations between two entities, consisting of detecting a very large number of unary relations, and using them to infer missing entities. We describe a deep learning architecture able to learn thousands of such relations very efficiently by using a common deep learning based representation. Our approach largely outperforms state of the art relation extraction technology on a newly introduced web scale knowledge base population benchmark, that we release to the research community. 1 Introduction Knowledge Base Population (KBP) from text is the problem of extracting relations between entities with respect to a given schema, usually defined by a set of types and relations. The facts added to the KB are triples, consisting of two entities connected by a relation. Although providing explicit provenance for the triples is often a subgoal in KBP, we focus on the case where correct triples are gathered from text without necessarily annotating any particular text with a relation. Humans are able to perform very well on the task of understanding relations in text. For example, if the target relation is presidentOf, anyone will be able to detect an occurrence of this relation between the entities TRUMP and UNITED STATES from both the sentences “Trump issued a presidential memorandum for the United States” and “The Houston Astros will visit President Donald Trump and the White House on Monday”. However, the first example expresses an explicit relation between the two entities, while the second states the same relation implicitly and requires some background knowledge and inference to identify it properly. In fact, the entity UNITED STATES is not even mentioned explicitly in the text, and it is up to the reader to recall that US presidents live in the White House, and therefore people visiting it are visiting the US president. Very often, relations expressed in text are implicit. This reflects in the low recall of the current KBP relation extraction methods, that are mostly based on recognizing lexical-syntactic connections between two entities within the same sentence. The state-of-the-art systems are affected by very low performance, close to 16.6% F1, as shown in the latest TAC-KBP evaluation campaigns and in the open KBP evaluation benchmark1. Existing approaches to dealing with implicit information such as textual entailment depend on unsolved problems like inducing entailment rules from text. In this paper, we address the problem of identifying implicit relations in text using a radically different approach, consisting of reducing the problem of identifying binary relations into a much larger set of simpler unary relations. 
For example, to build a Knowledge Base (KB) about presidents in the G8 countries, the presidentOf relation can be expanded to presidentOf:UNITED STATES, presidentOf:GERMANY, presidentOf:JAPAN, and so 1https://kbpo.stanford.edu 1586 on. For all these unary relations, we train a multiclass (and in other cases, multi-label) classifier from all the available training data. This classifier takes textual evidence where only one entity is identified (e.g. ANGELA MERKEL) and predicts a confidence score for each unary relation. In this way, ANGELA MERKEL will be assigned to the unary relation presidentOf:GERMANY, which in turn generates the triple ⟨ANGELA MERKEL presidentOf GERMANY⟩. To implement the idea above, we explore the use of knowledge-level supervision, sometimes called distant supervision, to train a deep learning based approach. The training data in this approach is a knowledge base and an unannotated corpus. A pre-existing Entity Detection and Linking system first identifies and links mentions of entities in the corpus. For each entity, the system gathers its context set, the contexts (e.g. sentences or token windows) where it is mentioned. The context set forms the textual evidence for a multi-class, multilabel deep network. The final layer of the network is vector of unary relation predictions and the intermediate layers are shared. This architecture allows us to efficiently train thousands of unary relations, while reusing the feature representations in the intermediate layers across relations as a form of transfer learning. The predictions of this network represent the probability for the input entity to belong to each unary relation. To demonstrate the effectiveness of our approach we developed a new KBP benchmark, consisting of extracting unseen DBPedia triples from the text of a web crawl, using a portion of DBpedia to train the model. As part of the contributions for this paper, we release the benchmark to the research community providing the software needed to generate it from Common Crawl and DBpedia as an open source project2. As a baseline, we adapt a state of the art deep learning based approach for relation extraction (Lin et al., 2016). Our experiments clearly show that using unary relations to generate new triples greatly complements traditional binary approaches. An analysis of the data shows that our approach is able to capture implicit information from textual mentions and to highlight the reasons why the assignments have been made. The paper is structured as follows. In section 2 we describe the state of the art in distantly super2https://github.com/IBM/cc-dbp vised KBP methodologies, with a focus on knowledge induction applications. Section 3 introduces the use of Unary Relations for KBP and section 4 outlines the process for producing and training them. Section 5 describes a deep learning architecture able to recognize unary relations from textual evidence. In section 6 we describe the benchmark for evaluation. Section 7 provides an extensive evaluation of unary relations, and a saliency map exploration of what the deep learning model has learned. Section 8 concludes the paper highlighting research directions for future work. 2 Related Work Binary relation extraction using distant supervision has a long history (Surdeanu et al., 2012; Mintz et al., 2009). Mentions of entities from the knowledge base are located in text. When two entities are mentioned in the same sentence that sentence becomes part of the evidence for the relation (if any) between those entities. 
The set of sentences mentioning an entity pair is used in a machine learning model to predict how the entities are related, if at all. Deep learning has been applied to binary relation extraction. CNN-based (Zeng et al., 2014), LSTM-based (Xu et al., 2015), attention based (Wang et al., 2016) and compositional embedding based (Gormley et al., 2015) models have been trained successfully using a sentence as the unit of context. Recently, cross sentence approaches have been explored by building paths connecting the two identified arguments through related entities (Peng et al., 2017; Zeng et al., 2016). These approaches are limited by requiring both entities to be mentioned in a textual context. The context aggregation approaches of state-of-the-art neural models, max-pooling (Zeng et al., 2015) and attention (Lin et al., 2016), do not consider that different contexts may contribute to the prediction in different ways. Instead, the context pooling only determines the degree of a sentence’s contribution to the relation prediction. TAC-KBP is a long running challenge for knowledge base population. Effective systems in these competitions combine many approaches such as rule-based relation extraction, directly supervised linear and neural network extractors, distantly supervised neural network models (Zhang et al., 2016) and tensor factorization approaches to relation prediction. Compositional Universal 1587 Schema is an approach based on combining the matrix factorization approach of universal schema (Riedel et al., 2013), with repesentations of textual relations produced by an LSTM (Chang et al., 2016). The rows of the universal schema matrix are entity pairs, and will only be supported by a textual relation if they occur in a sentence together. Other approaches to relational knowledge induction have used distributed representations for words or entities and used a model to predict the relation between two terms based on their semantic vectors (Drozd et al., 2016). This enables the discovery of relations between terms that do not co-occur in the same sentence. However, the distributed representation of the entities is developed from the corpus without any ability to focus on the relations of interest. One example of such work is LexNET, which developed a model using the distributional word vectors of two terms to predict lexical relations between them (DSh). The term vectors are concatenated and used as input to a single hidden layer neural network. Unlike our approach to unary relations the term vectors are produced by a standard relation-independent model of the term’s contexts such as word2vec (Mikolov et al., 2013). Unary relations can be considered to be similar to types. Work on ontology population has considered the general distribution of a term in text to predict its type (Cimiano and V¨olker, 2005). Like the method of DSh, this does not customize the representation of an entity to a set of target relations. 3 Unary vs Binary Relations The basic idea presented in this paper is that in many cases relation extraction problems can be reduced to sets of simpler and inter-related unary relation extraction problems. This is possible by providing a specific value to one of the two arguments, transforming the relations into a set of categories. For example, the livesIn relation between persons and countries can be decomposed into 195 relations (one relation for each country), including livesIn:UNITED STATES, livesIn:CANADA, and so on. 
The argument that is combined with the binary relation to produce the unary relation is called the fixed argument while the other argument is the filler argument. The KB extension of a unary relation is the set of all filler arguments in the KB, and the corpus extension is the subset of the KB extension that occurs in the corpus. A requisite for a unary relation is that in the training KB there should exist many triples that share a relation and a particular entity as one argument, thus providing enough training for each unary classifier. Therefore, in the example above, we will not likely be able to generate predicates for all the 195 countries, because some of them will either not occur at all in the training data or they will be very infrequent. However, even in cases where arguments tend to follow a long tail distribution, it makes sense to generate unary predicates for the most frequent ones. 1 10 100 1000 10000 100000 1000000 1 10 100 1000 10000 Number of Unary Relations Corpus Extension Threshold Figure 1: Minimum Corpus Extension to Number of Unary Relations Figure 1 shows the relationship between the threshold for the size of the corpus extension of a unary relation and the number of different unary relations that can be found in our dataset. The relationship is approximately linear on a log-log scale. There are 26 unary relations with a corpus extension of at least 10,000. These relations include: • hasLocation:UNITED STATES • background:GROUP OR BAND • kingdom:ANIMAL • language:ENGLISH LANGUAGE Lowering the threshold to 100 we have 8711 unary relations and we get close to 1M unary relations with more than 10 entities. In a traditional binary KBP task a triple has a relevant context set if the two entities occur at least once together in the corpus - where the notion of ‘together’ is typically intra-sentential (within a single sentence). In KBP based on unary relations, a triple ⟨FILLER rel FIXED⟩has a relevant context 1588 set if the unary relation rel:FIXED has the filler argument in its corpus extension, i.e. the filler occurs in the corpus. Both approaches are limited in different important respects. KBP with unary relations can only produce triples when fixing a relation and argument provides a relatively large corpus extension. Triples such as ⟨BARACK OBAMA spouse MICHELLE OBAMA⟩ cannot be extracted in this way, since neither Barack nor Michelle Obama have a large set of spouses. The limitation of binary relation extraction is that the arguments must occur together. But for many triples, such as those relating to a person’s occupation, a film’s genre or a company’s product type, the second argument is often not given explicitly. In both cases, a relevant context set is a necessary but not sufficient condition for extracting the triple from text, since the context set may not express (even implicitly) the relation. Figure 2 shows the number of triples in our dataset that have a relevant context set with unary relations exclusively, binary relations exclusively and both unary and binary. The corpus extension threshold for the unary relations is 100. 2,783,357 199,515 566,990 Uniquely Unary Uniquely Binary Both Figure 2: Triples with Relevant Context Sets PerRelation Style Although unary relations could also be viewed as types, we argue that it is preferable to consider them as relations. For example, if the unary relation lives in:UNITED STATES is represented as the type US-PERSON, it has no structured relationship to the type USCOMPANY (based in:UNITED STATES). 
So the inference rule that companies tend to employ people who live in the countries they are based in (⟨company employs person⟩ ∧ ⟨company based in country⟩ ⇒ ⟨person lives in country⟩) is not representable. 4 Training and Using Unary Relation Classifiers A unary relation extraction system is a multi-class, multi-label classifier that takes an entity as input and returns its probability as a slot filler for each relation. In this paper, we represent each entity by the set of contexts (sentences in our experiments) where their mentions have been located; we call them context sets. The process of training and applying a KBP system using unary relations is outlined step-by-step below. • Build a set of unary relations that have a corpus extension above some threshold. • Locate the entities from the knowledge graph in text. • Create a context set for each entity from all the sentences that mention the entity. • Label the context set with the unary relations (if any) for the entity. The negatives for each unary relation will be all the entities where that unary relation is not true. • Train a model to determine the unary relations for any given entity from its context set. • Apply the model to all the entities in the corpus, including those that do not exist in the knowledge graph. • Convert the extracted unary relations back to binary relations and add to the knowledge graph as new edges. Any new entities are added to the knowledge graph as new nodes. A closer look to the generated training data can provide insight in the value of unary relations for distant supervision. Below are example binary contexts relating an organization to a country. The two arguments are shown in bold. Some contexts where two entities occur together (relevant contexts) will imply a relation between them, while others will not. In the first context, Philippines and Eagle Cement are not textually related. While in the second context, Dyna Management Services is explicitly stated to be located in Bermuda. 1589 Domain Corpus Entity Detection and Linking … Entity Context Set Unary Deep Network Knowledge Triples Figure 3: Unary Relational Knowledge Induction Architecture Overview The company competes with Holcim Philippines, the local unit of Swiss company LafargeHolcim, and Eagle Cement, a company backed by diversified local conglomerate San Miguel which is aggressively expanding into infrastructure. ... said Richmond, who is vice president of Dyna Management Services, a Bermuda-based insurance management company. On the other hand, there are many triples that have no relevant context using binary extraction, but can be supported with unary extraction. JB Hi-Fi is a company located in Australia, (unary relation hasLocation:AUSTRALIA). Although “JB Hi-Fi” never occurs together with “Australia” in our corpus, we can gather implicit textual evidence for this relation from its unary relation context sets. Furthermore, even cases where there is a relevant binary context set, the contexts may not provide enough or any textual support for the relation, while the unary context sets might. Woolworths, Coles owner Wesfarmers, JB Hi-Fi and Harvey Norman were also trading higher. JB Hi-Fi in talks to buy The Good Guys In equities news, protective glove and condom maker Ansell and JB Hi-Fi are slated to post half year results, while Bitcoin group is expected to list on ASX. 
The key indicators are: “ASX”, which is an Australian stock exchange, and the other Australian businesses mentioned, such as Woolworths, Wesfarmers, Harvey Norman, The Good Guys, Ansell and Bitcoin group. There is no strict logical entailment, indicating JB Hi-Fi is located in Australia, instead there is textual evidence that makes it probable. 5 Architecture for Unary Relations Figure 3 illustrates the overall architecture. First an Entity Detection and Linking system identifies occurrences in text of entities that are or should be in the knowledge base. Second, the contexts (here we use a sentence as the unit of context) for each entity are then gathered into an entity context set. This context set provides all the sentences that contain a mention of a particular entity and is the textual evidence for what triples are true for the entity. Third, the context set is then fed into a deep neural network, given in Figure 4. The output of the network is a set of predicted triples that can be added to the knowledge base. Figure 4 shows the architecture of the deep learning model for unary relation based KBP. From an entity context set, each sentence is projected into a vector space using a piecewise convolutional neural network (Zeng et al., 2015). The sentence vectors are then aggregated using a Network-in-Network layer (NiN) (Lin et al., 2013). The sentence-to-vector portion of the neural architecture begins by looking up the words in a word embedding table. The word embeddings are initialized with word2vec (Mikolov et al., 2013) and updated during training. The position of each word relative to the entity is also looked up in a position embedding table. Each word vector is concatenated with its position vector to produce each word representation vector. A piecewise max-pooled convolution (PCNN) is applied over 1590 …co-founded Allen & Shariff in 1993… -1 0 0 0 1 2 … … … … … Sentence To Vector Sentence Vector Aggregation Figure 4: Deep Learning Architecture for Unary Relations the resulting sentence matrix, with the pieces defined by the position of the entity argument: before the entity, the entity, and after the entity. A fully connected layer then produces the sentence vector representation. This is a refinement of the Neural Relation Extraction (NRE) (Lin et al., 2016) approach to sentence-to-vector mapping. The presence of only a single argument simply reduces from two position encoding vectors to one. The fully connected layer over the PCNN is an addition. The sentence vector aggregation portion of the neural architecture uses a Network-in-Network over the sentence vectors. Network-in-Network (NiN) (Lin et al., 2013) is an approach of 1x1 CNNs to image processing. The width-1 CNN we use for mention aggregation is an adaptation to a set of sentence vectors. The result is maxpooled and put through a fully connected layer to produce the score for each unary relation. Unlike a maximum aggregation used in many previous works (Riedel et al., 2010; Zeng et al., 2015) for binary relation extraction the evidence from many contexts can be combined to produce a prediction. Unlike attention-based pooling also used previously for binary relation extraction (Lin et al., 2016), the different contexts can contribute to different aspects, not just different degrees. For example, a prediction that a city is in France might depend on the conjunction of several facets of textual evidence linking the city to the French language, the Euro, and Norman history. 
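To make the description above more concrete, the following is a simplified PyTorch-style sketch of the sentence-to-vector PCNN and the Network-in-Network aggregation over a context set. It is not the authors' implementation: the embedding sizes, the position-bucket count, the masking scheme, and all names are assumptions, and practical details (padding, batching of context sets, empty pieces) are omitted.

```python
# Simplified sketch of the unary architecture: piecewise CNN sentence encoder,
# then width-1 (Network-in-Network) convolution and max pooling over sentences.
import torch
import torch.nn as nn

class UnaryRelationModel(nn.Module):
    def __init__(self, vocab_size, n_relations, word_dim=50, pos_dim=5,
                 n_filters=1000, sent_dim=400, nin_filters=400):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.pos_emb = nn.Embedding(200, pos_dim)            # bucketed position relative to the entity
        self.conv = nn.Conv1d(word_dim + pos_dim, n_filters, kernel_size=3, padding=1)
        self.sent_fc = nn.Linear(3 * n_filters, sent_dim)    # 3 pieces: before / entity / after
        self.nin = nn.Conv1d(sent_dim, nin_filters, kernel_size=1)  # width-1 CNN over sentence vectors
        self.out = nn.Linear(nin_filters, n_relations)

    def sentence_vector(self, words, positions, piece_mask):
        # words, positions: (n_sents, seq_len); piece_mask: (n_sents, 3, seq_len) booleans
        x = torch.cat([self.word_emb(words), self.pos_emb(positions)], dim=-1)
        h = torch.tanh(self.conv(x.transpose(1, 2)))          # (n_sents, n_filters, seq_len)
        pieces = [h.masked_fill(~piece_mask[:, i].unsqueeze(1), float("-inf")).max(dim=2).values
                  for i in range(3)]                           # piecewise max pooling
        return torch.tanh(self.sent_fc(torch.cat(pieces, dim=-1)))  # (n_sents, sent_dim)

    def forward(self, words, positions, piece_mask):
        s = self.sentence_vector(words, positions, piece_mask)       # one vector per sentence
        g = self.nin(s.t().unsqueeze(0))                              # NiN over the context set
        pooled = g.max(dim=2).values                                  # max-pool across sentences
        return self.out(pooled)                                       # one score per unary relation
```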
In contrast, the common maximum aggregation approach is to move the final prediction layer to the sentence-to-vector modules and then aggregate by max-pooling the sentence-level predictions. This aggregation strategy means that only the sentence most strongly indicating the relation contributes to its prediction. We measured the impact of the Network-in-Network sentence vector aggregation approach on the validation set. Relative to Network-in-Network aggregation and using the same hyperparameters, a maximum aggregation strategy gets roughly two points lower precision at one thousand: 66.55% compared to 68.49%. There are 790 unary relations with at least one thousand positives in our benchmark. To speed up training, we divided these into eight sets of approximately 100 relations each and trained the models for them in parallel. Unary relations based on the same binary relation were grouped together to share useful learned representations. The resulting split also put similar numbers of positive examples in the training set for each model. Training continued until no improvement was found on the validation set, which occurred at between five and nine epochs. All eight models were trained with the hyperparameters in Table 1. Dropout was applied on the penultimate layer, the max-pooled NiN. Based on validation set performance, we found that when larger numbers of relations are trained together, the NiN filters and sentence vector dimension must be increased. Of all the hyperparameters, the training time is most sensitive to the number of PCNN filters, since these are applied to every sentence in a context set. We found major improvements moving from the 230 filters used for NRE to 1000 filters, but little or no improvement from increases beyond that.

Table 1: Hyperparameters used
  word embedding      50
  position embedding  5
  PCNN filters        1000
  PCNN filter width   3
  sentence vector     400
  NiN filters         400
  dropout             0.5
  learning rate       0.003
  decay multiplier    0.95
  batch size          16
  optimizer           SGD

6 Benchmark

Large KBs and corpora are needed to train KBP systems in order to collect enough mentions for each relation. However, most of the existing Knowledge Base Population tasks are small in size (e.g. NYT-FB (Riedel et al., 2010) and TAC-KBP, https://tac.nist.gov/) or focused on title-oriented documents which are not available for most domains (e.g. WikiReading (Hewlett et al., 2016)). Therefore, we needed to create a new web-scale knowledge base population benchmark that we called CC-DBP (https://github.com/IBM/cc-dbp). It combines the text of Common Crawl (http://commoncrawl.org) with the triples from 298 frequent relations in DBpedia (Auer et al., 2007). Mentions of DBpedia entities are located in text by gazetteer matching of the preferred label. We use the October 2017 Common Crawl and the most recent (2016-10) version of DBpedia, in both cases limited to English. We divided the entity pairs into training, validation and test sets with an 80%, 10%, 10% split. All triples for a given entity pair are in one of the three splits. This split increases the challenge, since many relations could be used to predict others (such as birthPlace implying nationality). The task is to generate new triples for each relation and rank them according to their probability. We show the precision/recall curves and focus on the relative area under the curves to evaluate the quality of different systems. Figure 5 shows the distribution of triples with relevant unary context sets per relation type.
The relations giving rise to the most triples are high-level relations such as hasLocation, a super-relation comprising the sub-relations country, state, city, headquarter, hometown, birthPlace, deathPlace, and others. Interestingly, there are 165 years with enough people born in them to produce unary relations. While these all have at least 100 relevant context sets, typically the context sets do not contain textual evidence for any birth year. Perhaps most importantly, there are a large number of diverse relations that are suitable for a unary KBP approach. This indicates the broad applicability of our method.

[Figure 5: Distribution of unary relation counts, grouped by the underlying relation type (e.g. birthPlace, hometown, kingdom, literary genre, language, award, location).]

To test what improvement can be found by incorporating unary relations into KBP, we combine the output of a state-of-the-art binary relation extraction system with our unary relation extraction system. For binary relation extraction, we use a slightly altered version of the PCNN model from NRE (Lin et al., 2016), with the addition of a fully connected layer for each sentence representation before the max-pooled aggregation over relation predictions. We found this refinement to perform slightly better on NYT-FB (Riedel et al., 2010), a standard dataset for distantly supervised relation extraction. The binary and unary systems are trained from their relevant context sets to predict the triples in the training set. The validation set is used to tune hyperparameters and choose a stopping point for training. We combine the output of the two systems by, for each triple, taking the highest confidence of the two systems.

7 Evaluation

Figure 6 shows the precision-recall curves for unary only, binary only and the combined system. The unary and binary systems alone achieve similar performance, but they are effective on very different triples. This is shown in the large gains from combining these complementary approaches. For example, at 0.5 precision, the combined approach has a recall of more than double (15,750 vs 7,400) compared to binary alone, which represents over 100% relative improvement. The recall is given as a triple count rather than a percentage. Traditional attempts to measure the recall of KBP systems use the set of all triples explicitly stated in text as the denominator of recall. This is unsuitable for evaluating our approach because the system is able to make probabilistic predictions based on implicit and partial textual evidence, thus producing correct triples outside the classic recall basis.

[Figure 6: Precision-recall curves (precision vs. recall count) for the unary, binary, and combined systems.]

7.1 Saliency Maps

To gain some insight into how the unary KBP system is able to extract implicit knowledge we turn to saliency maps (Simonyan et al., 2014). By finding the derivative of a particular prediction with respect to the inputs, we can discover a locally linear approximation of how much each part of the input contributed (Zeiler and Fergus, 2014). Cold Lake Provincial Park (Alberta, Canada) is mentioned in two sentences in the Common Crawl English text. The unary relational knowledge induction system predicts hasLocation:CANADA with the highest confidence (over 90%).
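Before looking at which words drive this prediction, note that a saliency map of this kind takes only a few lines of autograd code. The sketch below is a generic illustration, not the authors' tooling: it assumes a trained model with a word_emb embedding layer (such as the sketch given earlier) and scores each token by the gradient norm of one relation score with respect to that token's embedding.

```python
# Hedged sketch: token-level saliency for one unary relation score, in the style
# of Simonyan et al. (2014). `model` is assumed to expose a `word_emb` layer.
import torch

def token_saliency(model, words, positions, piece_mask, relation_index):
    saved = {}
    def hook(module, inputs, output):
        output.retain_grad()          # keep gradients on the (non-leaf) embedding output
        saved["emb"] = output
    handle = model.word_emb.register_forward_hook(hook)
    scores = model(words, positions, piece_mask)
    handle.remove()
    scores[0, relation_index].backward()
    # One saliency value per token in each sentence of the context set.
    return saved["emb"].grad.norm(dim=-1)
```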
Both sentences contribute to the decision. We see high weight from words including “cold”, “provincial” and “french”. A handful of countries have “provincial parks” including Argentina, Belgium, South Africa and Canada. Belgium and Canada have substantial French speaking populations and Canada has by far the coldest climate. • located within 10 minutes of cold lake with quick access to OOV ridge ski hill , cold lake provincial park and french bay . • welcome to cold lake provincial park on average 4.00 pages are viewed each , by the estimated 959 daily visitors . Rock Kills Kid is a band mentioned twice in the corpus. From this context set, the relation background:GROUP OR BAND is predicted with high confidence. The fact that “Kid” occurs in the name of the entity seems to be important in identifying it as a musical group. The first sentence also draws focus to the band’s connection to rock and pop. 1593 While the second sentence seems to recognize the band - song (year) pattern as well as the comparison to Duran Duran. • the latest stylish pop synth band is rock kills kid . • rock kills kid are you nervous ? ( 2006 ) who ever thought duran duran would become so influential ? The Japanese singer-songwriter Masaki Haruna, aka Klaha is mentioned twice in the corpus. From this context set, the relation background:SOLO SINGER is predicted with high confidence. The first sentence clearly establishes the connection to music while the second indicates that Klaha is a solo artist. The conjunction of these two facets, accomplished through the context vector aggregation using NiN permits the conclusion of SOLO SINGER. • tvk music chat interview klaha parade . • klaha tvk music chat OOV red scarf interview the tv k folks did after klaha went solo . 8 Conclusions In this paper we presented a new methodology to identify relations between entities in text. Our approach, focusing on unary relations, can greatly improve the recall in automatic construction and updating of knowledge bases by making use of implicit and partial textual markers. Our method is extremely effective and complement very nicely existing binary relation extraction methods for KBP. This is just the first step in our wider research program on KBP, whose goal is to improve recall by identifying implicit information from texts. First of all, we plan to explore the use of more advanced forms of entity detection and linking, including propagating features from the EDL system forward for both unary and binary deep models. In addition we plan to exploit unary and binary relations as source of evidence to bootstrap a probabilistic reasoning approach, with the goal of leveraging constraints from the KB schema such as domain, range and taxonomies. We will also integrate the new triples gathered from textual evidence with new triples predicted from existing KB relationships by knowledge base completion. References Sren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. In In 6th Intl Semantic Web Conference, Busan, Korea. Springer, pages 11– 15. H Chang, M Abdurrahman, Ao Liu, J Tian-Zheng Wei, Aaron Traylor, Ajay Nagesh, Nicholas Monath, Patrick Verga, Emma Strubell, and Andrew McCallum. 2016. Extracting multilingual relations under limited resources: Tac 2016 cold-start kb construction and slot-filling using compositional universal schema. Proceedings of TAC . Philipp Cimiano and Johanna V¨olker. 2005. 
Towards large-scale, open-domain and ontology-based named entity classification. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP). Aleksandr Drozd, Anna Gladkova, and Satoshi Matsuoka. 2016. Word embeddings, analogies, and machine learning: Beyond king - man + woman = queen. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics. pages 3519–3530. Matthew R Gormley, Mo Yu, and Mark Dredze. 2015. Improved relation extraction with feature-rich compositional embedding models. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pages 1774–1784. Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, and David Berthelot. 2016. Wikireading: A novel large-scale language understanding task over wikipedia. In Proceedings of the Conference on Association for Computational Linguistics. Min Lin, Qiang Chen, and Shuicheng Yan. 2013. Network in network. arXiv preprint arXiv:1312.4400 . Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of ACL. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed Representations of Words and Phrases and their Compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, Curran Associates, Inc., 1594 pages 3111–3119. http://papers.nips.cc/paper/5021distributed-representations-of-words-and-phrasesand-their-compositionality.pdf. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2 Volume 2. Association for Computational Linguistics, Stroudsburg, PA, USA, ACL ’09, pages 1003–1011. http://dl.acm.org/citation.cfm?id=1690219.1690287. Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen-tau Yih. 2017. Cross-sentence n-ary relation extraction with graph lstms. Transactions of the Association for Computational Linguistics 5:101–115. Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. Machine learning and knowledge discovery in databases pages 148–163. Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 74–84. Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2014. Deep inside convolutional networks: Visualising image classification models and saliency maps. In ICLR Workshop. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning. Association for Computational Linguistics, pages 455–465. Linlin Wang, Zhu Cao, Gerard de Melo, and Zhiyuan Liu. 2016. Relation classification via multi-level attention cnns. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 1298–1307. Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015. Classifying relations via long short term memory networks along shortest dependency paths. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pages 1785–1794. Matthew D Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In European conference on computer vision. Springer, pages 818–833. Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In EMNLP. pages 1753–1762. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers. pages 2335–2344. Wenyuan Zeng, Yankai Lin, Zhiyuan Liu, and Maosong Sun. 2016. Incorporating relation paths in neural relation extraction. In CoRR. Yuhao Zhang, Arun Chaganty, Ashwin Paranjape, Danqi Chen, Jason Bolton, Peng Qi, and Christopher D Manning. 2016. Stanford at tac kbp 2016: Sealing pipeline leaks and understanding chinese. Proceedings of TAC .
2018
147
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1595–1604 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1595 Improving Entity Linking by Modeling Latent Relations between Mentions Phong Le1 and Ivan Titov1,2 1University of Edinburgh 2University of Amsterdam {ple,ititov}@inf.ed.ac.uk Abstract Entity linking involves aligning textual mentions of named entities to their corresponding entries in a knowledge base. Entity linking systems often exploit relations between textual mentions in a document (e.g., coreference) to decide if the linking decisions are compatible. Unlike previous approaches, which relied on supervised systems or heuristics to predict these relations, we treat relations as latent variables in our neural entity-linking model. We induce the relations without any supervision while optimizing the entity-linking system in an end-to-end fashion. Our multirelational model achieves the best reported scores on the standard benchmark (AIDACoNLL) and substantially outperforms its relation-agnostic version. Its training also converges much faster, suggesting that the injected structural bias helps to explain regularities in the training data. 1 Introduction Named entity linking (NEL) is the task of assigning entity mentions in a text to corresponding entries in a knowledge base (KB). For example, consider Figure 1 where a mention “World Cup” refers to a KB entity FIFA WORLD CUP. NEL is often regarded as crucial for natural language understanding and commonly used as preprocessing for tasks such as information extraction (Hoffmann et al., 2011) and question answering (Yih et al., 2015). Potential assignments of mentions to entities are regulated by semantic and discourse constraints. For example, the second and third occurrences of mention “England” in Figure 1 are coreferent and thus should be assigned to the same entity. Besides coreference, there are many other relations between entities which constrain or favor certain alignment configurations. For example, consider relation participant in in Figure 1: if “World Cup” is aligned to the entity FIFA WORLD CUP then we expect the second “England” to refer to a football team rather than a basketball one. NEL methods typically consider only coreference, relying either on off-the-shelf systems or some simple heuristics (Lazic et al., 2015), and exploit them in a pipeline fashion, though some (e.g., Cheng and Roth (2013); Ren et al. (2017)) additionally exploit a range of syntactic-semantic relations such as apposition and possessives. Another line of work ignores relations altogether and models the predicted sequence of KB entities as a bag (Globerson et al., 2016; Yamada et al., 2016; Ganea and Hofmann, 2017). Though they are able to capture some degree of coherence (e.g., preference towards entities from the same general domain) and are generally empirically successful, the underlying assumption is too coarse. For example, they would favor assigning all the occurrences of “England” in Figure 1 to the same entity. We hypothesize that relations useful for NEL can be induced without (or only with little) domain expertise. In order to prove this, we encode relations as latent variables and induce them by optimizing the entity-linking model in an end-to-end fashion. In this way, relations between mentions in documents will be induced in such a way as to be beneficial for NEL. 
As with other recent approaches to NEL (Yamada et al., 2017; Ganea and Hofmann, 2017), we rely on representation learning and learn embeddings of mentions, contexts and relations. This further reduces the amount of human expertise required to construct the system and, in principle, may make it more portable across languages and domains. Our multi-relational neural model achieves an 1596 World Cup 1966 was held in England …. England won… The final saw England beat West Germany . coreference beat located_in FIFA_World_Cup FIBA_Basketball_ World_Cup ... West_Germany Germany_national_ football_team Germany_national_ basketball_team ... England England_national_ football_team England_national_ basketball_team ... England England_national _football_team England_national _basketball_team ... participant_in Figure 1: Example for NEL, linking each mention to an entity in a KB (e.g. “World Cup” to FIFA WORLD CUP rather than FIBA BASKETBALL WORLD CUP). Note that the first and the second “England” are in different relations to “World Cup”. improvement of 0.85% F1 over the best reported scores on the standard AIDA-CoNLL dataset (Ganea and Hofmann, 2017). Substantial improvements over the relation-agnostic version show that the induced relations are indeed beneficial for NEL. Surprisingly its training also converges much faster: training of the full model requires ten times shorter wall-clock time than what is needed for estimating the simpler relationagnostic version. This may suggest that the injected structural bias helps to explain regularities in the training data, making the optimization task easier. We qualitatively examine induced relations. Though we do not observe direct counterparts of linguistic relations, we, for example, see that some of the induced relations are closely related to coreference whereas others encode forms of semantic relatedness between the mentions. 2 Background and Related work 2.1 Named entity linking problem Formally, given a document D containing a list of mentions m1, ..., mn, an entity linker assigns to each mi an KB entity ei or predicts that there is no corresponding entry in the KB (i.e., ei = NILL). Because a KB can be very large, it is standard to use an heuristic to choose potential candidates, eliminating options which are highly unlikely. This preprocessing step is called candidate selection. The task of a statistical model is thus reduced to choosing the best option among a smaller list of candidates Ci = (ei1, ..., eili). In what follows, we will discuss two classes of approaches tackling this problem: local and global modeling. 2.2 Local and global models Local models rely only on local contexts of mentions and completely ignore interdependencies between the linking decisions in the document (these interdependencies are usually referred to as coherence). Let ci be a local context of mention mi and Ψ(ei, ci) be a local score function. A local model then tackles the problem by searching for e∗ i = arg max ei∈Ci Ψ(ei, ci) (1) for each i ∈{1, ..., n} (Bunescu and Pas¸ca, 2006; Lazic et al., 2015; Yamada et al., 2017). A global model, besides using local context within Ψ(ei, ci), takes into account entity coherency. It is captured by a coherence score function Φ(E, D): E∗= arg max E∈C1×...×Cn n ∑ i=1 Ψ(ei, ci) + Φ(E, D) where E = (e1, ..., en). 
The coherence score function, in the simplest form, is a sum over all pairwise scores Φ(ei, ej, D) (Ratinov et al., 2011; Huang et al., 2015; Chisholm and Hachey, 2015; Ganea et al., 2016; Guo and Barbosa, 2016; Globerson et al., 2016; Yamada et al., 2016), resulting in: E∗= arg max E∈C1×...×Cn n ∑ i=1 Ψ(ei, ci)+ ∑ i̸=j Φ(ei, ej, D) (2) A disadvantage of global models is that exact decoding (Equation 2) is NP-hard (Wainwright et al., 2008). Ganea and Hofmann (2017) overcome this using loopy belief propagation (LBP), 1597 an approximate inference method based on message passing (Murphy et al., 1999). Globerson et al. (2016) propose a star model which approximates the decoding problem in Equation 2 by approximately decomposing it into n decoding problems, one per each ei. 2.3 Related work Our work focuses on modeling pairwise score functions Φ and is related to previous approaches in the two following aspects. Relations between mentions A relation widely used by NEL systems is coreference: two mentions are coreferent if they refer to the same entity. Though, as we discussed in Section 1, other linguistic relations constrain entity assignments, only a few approaches (e.g., Cheng and Roth (2013); Ren et al. (2017)), exploit any relations other than coreference. We believe that the reason for this is that predicting and selecting relevant (often semantic) relations is in itself a challenging problem. In Cheng and Roth (2013), relations between mentions are extracted using a labor-intensive approach, requiring a set of hand-crafted rules and a KB containing relations between entities. This approach is difficult to generalize to languages and domains which do not have such KBs or the settings where no experts are available to design the rules. We, in contrast, focus on automating the process using representation learning. Most of these methods relied on relations predicted by external tools, usually a coreference system. One notable exception is Durrett and Klein (2014): they use a joint model of entity linking and coreference resolution. Nevertheless their coreference component is still supervised, whereas our relations are latent even at training time. Representation learning How can we define local score functions Ψ and pairwise score functions Φ? Previous approaches employ a wide spectrum of techniques. At one extreme, extensive feature engineering was used to define useful features. For example, Ratinov et al. (2011) use cosine similarities between Wikipedia titles and local contexts as a feature when computing the local scores. For pairwise scores they exploit information about links between Wikipedia pages. At the other extreme, feature engineering is almost completely replaced by representation learning. These approaches rely on pretrained embeddings of words (Mikolov et al., 2013; Pennington et al., 2014) and entities (He et al., 2013; Yamada et al., 2017; Ganea and Hofmann, 2017) and often do not use virtually any other hand-crafted features. Ganea and Hofmann (2017) showed that such an approach can yield SOTA accuracy on a standard benchmark (AIDA-CoNLL dataset). Their local and pairwise score functions are Ψ(ei, ci) = eT i Bf(ci) Φ(ei, ej, D) = 1 n −1eT i Rej (3) where ei, ej ∈Rd are the embeddings of entity ei, ej, B, R ∈Rd×d are diagonal matrices. The mapping f(ci) applies an attention mechanism to context words in ci to obtain a feature representations of context (f(ci) ∈Rd). 
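As a rough sketch (with illustrative shapes and names, and the context representation f(c_i) assumed to be precomputed, so the attention over context words is omitted), these two diagonal bilinear forms can be written as:

```python
# Minimal sketch of the relation-agnostic score functions in Equation 3.
import torch
import torch.nn as nn

class BagOfEntitiesScores(nn.Module):
    def __init__(self, dim=300):
        super().__init__()
        self.B = nn.Parameter(torch.ones(dim))  # diagonal of B
        self.R = nn.Parameter(torch.ones(dim))  # diagonal of R

    def local_score(self, e_i, f_ci):
        # Psi(e_i, c_i) = e_i^T B f(c_i), with B diagonal
        return (e_i * self.B * f_ci).sum(-1)

    def pairwise_score(self, e_i, e_j, n_mentions):
        # Phi(e_i, e_j, D) = 1/(n-1) * e_i^T R e_j, with R diagonal
        return (e_i * self.R * e_j).sum(-1) / (n_mentions - 1)
```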
Note that the global component (the pairwise scores) is agnostic to any relations between entities or even to their ordering: it models e1, ..., en simply as a bag of entities. Our work is in line with Ganea and Hofmann (2017) in the sense that feature engineering plays no role in computing local and pair-wise scores. Furthermore, we argue that pair-wise scores should take into account relations between mentions which are represented by relation embeddings. 3 Multi-relational models 3.1 General form We assume that there are K latent relations. Each relation k is assigned to a mention pair (mi, mj) with a non-negative weight (‘confidence’) αijk. The pairwise score (mi, mj) is computed as a weighted sum of relation-specific pairwise scores (see Figure 2, top): Φ(ei, ej, D) = K ∑ k=1 αijkΦk(ei, ej, D) Φk(ei, ej, D) can be any pairwise score function, but here we adopt the one from Equation 3. Namely, we represent each relation k by a diagonal matrix Rk ∈Rd×d, and Φk(ei, ej, D) = eT i Rkej 1598 The weights αijk are normalized scores: αijk = 1 Zijk exp {fT (mi, ci)Dkf(mj, cj) √ d } (4) where Zijk is a normalization factor, f(mi, ci) is a function mapping (mi, ci) onto Rd, and Dk ∈ Rd×d is a diagonal matrix. ei,mi,ci ej,mj,cj αij1Φ1(ei,ej,D) αij2Φ2(ei,ej,D) αij3Φ3(ei,ej,D) ei,mi,ci ej,mj,cj (general form) (rel-norm) normalize over relations: αij1 + αij2 + αij3 = 1 ei,mi,ci ej,mj,cj (ment-norm) normalize over mentions: αi12 + αi22 + … + αij2 + … + αin2 = 1 e1,m1,c1 en,mn,cn ... ... Figure 2: Multi-relational models: general form (top), rel-norm (middle) and ment-norm (bottom). Each color corresponds to one relation. In our experiments, we use a single-layer neural network as f (see Figure 3) where ci is a concatenation of the average embedding of words in the left context with the average embedding of words in the right context of the mention.1 As αijk is indexed both by mention index j and relation index k, we have two choices for Zijk: normalization over relations and normalization over mentions. We consider both versions of the model. 1We also experimented with LSTMs but we could not prevent them from severely overfitting, and the results were poor. 3.2 Rel-norm: Relation-wise normalization For rel-norm, coefficients αijk are normalized over relations k, in other words, Zijk = K ∑ k′=1 exp {fT (mi, ci)Dk′f(mj, cj) √ d } so that ∑K k=1 αijk = 1 (see Figure 2, middle). We can also re-write the pairwise scores as Φ(ei, ej, D) = eT i Rijej (5) where Rij = ∑K k=1 αijkRk. In foreign policy Bill Clinton ordered U.S. military tanh, dropout Figure 3: Function f(mi, ci) is a single-layer neural network, with tanh activation function and a layer of dropout on top. Intuitively, αijk is the probability of assigning a k-th relation to a mention pair (mi, mj). For every pair rel-norm uses these probabilities to choose one relation from the pool and relies on the corresponding relation embedding Rk to compute the compatibility score. For K = 1 rel-norm reduces (up to a scaling factor) to the bag-of-entities model defined in Equation 3. In principle, instead of relying on the linear combination of relation embeddings matrices Rk, we could directly predict a context-specific relation embedding Rij = diag{g(mi, ci, mj, cj)} where g is a neural network. However, in preliminary experiments we observed that this resulted in overfitting and poor performance. Instead, we choose to use a small fixed number of relations as a way to constrain the model and improve generalization. 
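A hedged sketch of the rel-norm pairwise score for a single mention pair is given below; the feature vectors f(m_i, c_i) are assumed to be precomputed, and all dimensions, initializations, and names are illustrative rather than the authors' exact choices.

```python
# Sketch of rel-norm: attention weights are softmax-normalized over the K relations.
import torch
import torch.nn as nn

class RelNormPairwise(nn.Module):
    def __init__(self, dim=300, n_relations=6):
        super().__init__()
        self.D = nn.Parameter(torch.randn(n_relations, dim) * 0.1)  # diagonals of D_k
        self.R = nn.Parameter(torch.randn(n_relations, dim) * 0.1)  # diagonals of R_k
        self.dim = dim

    def forward(self, f_i, f_j, e_i, e_j):
        # alpha_ijk proportional to exp(f_i^T D_k f_j / sqrt(d)), normalized over relations k
        logits = (f_i * self.D * f_j).sum(-1) / self.dim ** 0.5     # shape (K,)
        alpha = torch.softmax(logits, dim=-1)
        # Phi(e_i, e_j, D) = sum_k alpha_ijk * e_i^T R_k e_j
        rel_scores = (e_i * self.R * e_j).sum(-1)                   # shape (K,)
        return (alpha * rel_scores).sum()
```

With n_relations set to 1 the softmax is constant, and the score reduces (up to scaling) to the bag-of-entities form above, mirroring the reduction noted in the text.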
3.3 Ment-norm: Mention-wise normalization We can also normalize αijk over j: Zijk = n ∑ j′=1 j′̸=i exp {fT (mi, ci)Dkf(mj′, cj′) √ d } 1599 This implies that ∑n j=1,j̸=i αijk = 1 (see Figure 2, bottom). If we rewrite the pairwise scores as Φ(ei, ej, D) = K ∑ k=1 αijkeT i Rkej, (6) we can see that Equation 3 is a special case of ment-norm when K = 1 and D1 = 0. In other words, Ganea and Hofmann (2017) is our monorelational ment-norm with uniform α. The intuition behind ment-norm is that for each relation k and mention mi, we are looking for mentions related to mi with relation k. For each pair of mi and mj we can distinguish two cases: (i) αijk is small for all k: mi and mj are not related under any relation, (ii) αijk is large for one or more k: there are one or more relations which are predicted for mi and mj. In principle, rel-norm can also indirectly handle both these cases. For example, it can master (i) by dedicating a distinct ‘none’ relation to represent lack of relation between the two mentions (with the corresponding matrix Rk set to 0). Though it cannot assign large weights (i.e., close to 1) to multiple relations (as needed for (ii)), it can in principle use the ‘none’ relation to vary the probability mass assigned to the rest of relations across mention pairs, thus achieving the same effect (up to a multiplicative factor). Nevertheless, in contrast to ment-norm, we do not observe this behavior for rel-norm in our experiments: the inductive basis seems to disfavor such configurations. Ment-norm is in line with the current trend of using the attention mechanism in deep learning (Bahdanau et al., 2014), and especially related to multi-head attention of Vaswani et al. (2017). For each mention mi and for each k, we can interpret αijk as the probability of choosing a mention mj among the set of mentions in the document. Because here we have K relations, each mention mi will have maximally K mentions (i.e. heads in terminology of Vaswani et al. (2017)) to focus on. Note though that they use multi-head attention for choosing input features in each layer, whereas we rely on this mechanism to compute pairwise scoring functions for the structured output (i.e. to compute potential functions in the corresponding undirected graphical model, see Section 3.4). Mention padding A potentially serious drawback of ment-norm is that the model uses all K relations even in cases where some relations are inapplicable. For example, consider applying relation coreference to mention “West Germany” in Figure 1. The mention is non-anaphoric: there are no mentions co-referent with it. Still the ment-norm model has to distribute the weight across the mentions. This problem occurs because of the normalization ∑n j=1,j̸=i αijk = 1. Note that this issue does not affect standard applications of attention: normally the attention-weighted signal is input to another transformation (e.g., a flexible neural model) which can then disregard this signal when it is useless. This is not possible within our model, as it simply uses αijk to weight the bilinear terms without any extra transformation. Luckily, there is an easy way to circumvent this problem. We add to each document a padding mention mpad linked to a padding entity epad. In this way, the model can use the padding mention to damp the probability mass that the other mentions receive. This method is similar to the way some mention-ranking coreference models deal with non-anaphoric mentions (e.g. Wiseman et al. (2015)). 
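For contrast, here is a hedged sketch of ment-norm for one mention i, with the padding mention included as an extra row that can absorb attention mass when no other mention is related under a given relation; shapes, names, and the masking of self-attention are assumptions made for illustration.

```python
# Sketch of ment-norm: for each relation k, attention is normalized over the
# other mentions j (plus one padding mention appended as the last row).
import torch
import torch.nn as nn

class MentNormPairwise(nn.Module):
    def __init__(self, dim=300, n_relations=3):
        super().__init__()
        self.D = nn.Parameter(torch.randn(n_relations, dim) * 0.1)  # diagonals of D_k
        self.R = nn.Parameter(torch.randn(n_relations, dim) * 0.1)  # diagonals of R_k
        self.dim = dim

    def forward(self, i, feats, ents):
        # feats: (n+1, dim) mention-context features, last row = padding mention
        # ents:  (n+1, dim) entity embeddings for the current candidate assignment
        logits = torch.einsum("d,kd,jd->kj", feats[i], self.D, feats) / self.dim ** 0.5
        self_mask = torch.zeros(feats.size(0), dtype=torch.bool)
        self_mask[i] = True
        logits = logits.masked_fill(self_mask, float("-inf"))  # a mention does not attend to itself
        alpha = torch.softmax(logits, dim=1)                    # normalize over mentions j (ment-norm)
        pair = torch.einsum("d,kd,jd->kj", ents[i], self.R, ents)  # e_i^T R_k e_j for every j
        return (alpha * pair).sum(dim=0)                        # Phi(e_i, e_j, D) for each mention j
```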
3.4 Implementation Following Ganea and Hofmann (2017) we use Equation 2 to define a conditional random field (CRF). We use the local score function identical to theirs and the pairwise scores are defined as explained above: q(E|D) ∝exp    n ∑ i=1 Ψ(ei, ci) + ∑ i̸=j Φ(ei, ej, D)    We also use max-product loopy belief propagation (LBP) to estimate the max-marginal probability ˆqi(ei|D) ≈ max e1,...,ei−1 ei+1,...,en q(E|D) for each mention mi. The final score function for mi is given by: ρi(e) = g(ˆqi(e|D), ˆp(e|mi)) where g is a two-layer neural network and ˆp(e|mi) is the probability of selecting e conditioned only on mi. This probability is computed by mixing mention-entity hyperlink count statistics from Wikipedia, a large Web corpus and YAGO.2 2See Ganea and Hofmann (2017, Section 6). 1600 We minimize the following ranking loss: L(θ) = ∑ D∈D ∑ mi∈D ∑ e∈Ci h(mi, e) (7) h(mi, e) = max ( 0, γ −ρi(e∗ i ) + ρi(e) ) where θ are the model parameters, D is a training dataset, and e∗ i is the ground-truth entity. Adam (Kingma and Ba, 2014) is used as an optimizer. For ment-norm, the padding mention is treated like any other mentions. We add fpad = f(mpad, cpad) and epad ∈Rd, an embedding of epad, to the model parameter list, and tune them while training the model. In order to encourage the models to explore different relations, we add the following regularization term to the loss function in Equation 7: λ1 ∑ i,j dist(Ri, Rj) + λ2 ∑ i,j dist(Di, Dj) where λ1, λ2 are set to −10−7 in our experiments, dist(x, y) can be any distance metric. We use: dist(x, y) = x ∥x∥2 − y ∥y∥2 2 Using this regularization to favor diversity is important as otherwise relations tend to collapse: their relation embeddings Rk end up being very similar to each other. 4 Experiments We evaluated four models: (i) rel-norm proposed in Section 3.2; (ii) ment-norm proposed in Section 3.3; (iii) ment-norm (K = 1): the monorelational version of ment-norm; and (iv) mentnorm (no pad): the ment-norm without using mention padding. Recall also that our mono-relational (i.e. K = 1) rel-norm is equivalent to the relationagnostic baseline of Ganea and Hofmann (2017). We implemented our models in PyTorch and run experiments on a Titan X GPU. The source code and trained models will be publicly available at https://github.com/lephong/ mulrel-nel. 4.1 Setup We set up our experiments similarly to those of Ganea and Hofmann (2017), run each model 5 times, and report average and 95% confidence interval of the standard micro F1 score (aggregates over all mentions). Datasets For in-domain scenario, we used AIDA-CoNLL dataset3 (Hoffart et al., 2011). This dataset contains AIDA-train for training, AIDA-A for dev, and AIDA-B for testing, having respectively 946, 216, and 231 documents. For out-domain scenario, we evaluated the models trained on AIDA-train, on five popular test sets: MSNBC, AQUAINT, ACE2004, which were cleaned and updated by Guo and Barbosa (2016); WNEDCWEB (CWEB), WNED-WIKI (WIKI), which were automatically extracted from ClueWeb and Wikipedia (Guo and Barbosa, 2016; Gabrilovich et al., 2013). The first three are small with 20, 50, and 36 documents whereas the last two are much larger with 320 documents each. Following previous works (Yamada et al., 2016; Ganea and Hofmann, 2017), we considered only mentions that have entities in the KB (i.e., Wikipedia). Candidate selection For each mention mi, we selected 30 top candidates using ˆp(e|mi). 
We then kept 4 candidates with the highest ˆp(e|mi) and 3 candidates with the highest scores eT (∑ w∈di w ) , where e, w ∈Rd are entity and word embeddings, di is the 50-word window context around mi. Hyper-parameter setting We set d = 300 and used GloVe (Pennington et al., 2014) word embeddings trained on 840B tokens for computing f in Equation 4, and entity embeddings from Ganea and Hofmann (2017).4 We use the following parameter values: γ = 0.01 (see Equation 7), the number of LBP loops is 10, the dropout rate for f was set to 0.3, the window size of local contexts ci (for the pairwise score functions) is 6. For rel-norm, we initialized diag(Rk) and diag(Dk) by sampling from N(0, 0.1) for all k. For ment-norm, we did the same except that diag(R1) was sampled from N(1, 0.1). To select the best number of relations K, we considered all values of K ≤7 (K > 7 would not fit in our GPU memory, as some of the documents are large). We selected the best ones based on the development scores: 6 for rel-norm, 3 for mentnorm, and 3 for ment-norm (no pad). When training the models, we applied early stopping. For rel-norm, when the model reached 3TAC KBP datasets are no longer available. 4https://github.com/dalab/deep-ed 1601 91% F1 on the dev set, 5 we reduced the learning rate from 10−4 to 10−5. We then stopped the training when F1 was not improved after 20 epochs. We did the same for ment-norm except that the learning rate was changed at 91.5% F1. Note that all the hyper-parameters except K and the turning point for early stopping were set to the values used by Ganea and Hofmann (2017). Systematic tuning is expensive though may have further increased the result of our models. 4.2 Results Methods Aida-B Chisholm and Hachey (2015) 88.7 Guo and Barbosa (2016) 89.0 Globerson et al. (2016) 91.0 Yamada et al. (2016) 91.5 Ganea and Hofmann (2017) 92.22 ± 0.14 rel-norm 92.41 ± 0.19 ment-norm 93.07 ± 0.27 ment-norm (K = 1) 92.89 ± 0.21 ment-norm (no pad) 92.37 ± 0.26 Table 1: F1 scores on AIDA-B (test set). Table 1 shows micro F1 scores on AIDA-B of the SOTA methods and ours, which all use Wikipedia and YAGO mention-entity index. To our knowledge, ours are the only (unsupervisedly) inducing and employing more than one relations on this dataset. The others use only one relation, coreference, which is given by simple heuristics or supervised third-party resolvers. All four our models outperform any previous method, with ment-norm achieving the best results, 0.85% higher than that of Ganea and Hofmann (2017). Table 2 shows micro F1 scores on 5 out-domain test sets. Besides ours, only Cheng and Roth (2013) employs several mention relations. Mentnorm achieves the highest F1 scores on MSNBC and ACE2004. On average, ment-norm’s F1 score is 0.3% higher than that of Ganea and Hofmann (2017), but 0.2% lower than Guo and Barbosa (2016)’s. It is worth noting that Guo and Barbosa (2016) performs exceptionally well on WIKI, but substantially worse than ment-norm on all other datasets. Our other three models, however, have lower average F1 scores compared to the best previous model. The experimental results show that ment-norm outperforms rel-norm, and that mention padding plays an important role. 5We chose the highest F1 that rel-norm always achieved without the learning rate reduction. 4.3 Analysis Mono-relational v.s. 
multi-relational For rel-norm, the mono-relational version (i.e., Ganea and Hofmann (2017)) is outperformed by the multi-relational one on AIDA-CoNLL, but performs significantly better on all five outdomain datasets. This implies that multi-relational rel-norm does not generalize well across domains. For ment-norm, the mono-relational version performs worse than the multi-relational one on all test sets except AQUAINT. We speculate that this is due to multi-relational ment-norm being less sensitive to prediction errors. Since it can rely on multiple factors more easily, a single mistake in assignment is unlikely to have large influence on its predictions. Oracle G&H rel-norm ment-norm (K=1) ment-norm 92 92.5 93 93.5 94 94.5 LBP oracle Figure 4: F1 on AIDA-B when using LBP and the oracle. G&H is Ganea and Hofmann (2017). In order to examine learned relations in a more transparant setting, we consider an idealistic scenario where imperfection of LBP, as well as mistakes in predicting other entities, are taken out of the equation using an oracle. This oracle, when we make a prediction for mention mi, will tell us the correct entity e∗ j for every other mentions mj, j ̸= i. We also used AIDA-A (development set) for selecting the numbers of relations for relnorm and ment-norm. They are set to 6 and 3, respectively. Figure 4 shows the micro F1 scores. Surprisingly, the performance of oracle relnorm is close to that of oracle ment-norm, although without using the oracle the difference was substantial. This suggests that rel-norm is more sensitive to prediction errors than mentnorm. Ganea and Hofmann (2017), even with the help of the oracle, can only perform slightly better than LBP (i.e. non-oracle) ment-norm. This 1602 Methods MSNBC AQUAINT ACE2004 CWEB WIKI Avg Milne and Witten (2008) 78 85 81 64.1 81.7 77.96 Hoffart et al. (2011) 79 56 80 58.6 63 67.32 Ratinov et al. (2011) 75 83 82 56.2 67.2 72.68 Cheng and Roth (2013) 90 90 86 67.5 73.4 81.38 Guo and Barbosa (2016) 92 87 88 77 84.5 85.7 Ganea and Hofmann (2017) 93.7 ± 0.1 88.5 ± 0.4 88.5 ± 0.3 77.9 ± 0.1 77.5 ± 0.1 85.22 rel-norm 92.2 ± 0.3 86.7 ± 0.7 87.9 ± 0.3 75.2 ± 0.5 76.4 ± 0.3 83.67 ment-norm 93.9 ± 0.2 88.3 ± 0.6 89.9 ± 0.8 77.5 ± 0.1 78.0 ± 0.1 85.51 ment-norm (K = 1) 93.2 ± 0.3 88.4 ± 0.4 88.9 ± 1.0 77.0 ± 0.2 77.2 ± 0.1 84.94 ment-norm (no pad) 93.6 ± 0.3 87.8 ± 0.5 90.0 ± 0.3 77.0 ± 0.2 77.3 ± 0.3 85.13 Table 2: F1 scores on five out-domain test sets. Underlined scores show cases where the corresponding model outperforms the baseline. suggests that its global coherence scoring component is indeed too simplistic. Also note that both multi-relational oracle models substantially outperform the two mono-relational oracle models. This shows the benefit of using more than one relations, and the potential of achieving higher accuracy with more accurate inference methods. Relations In this section we qualitatively examine relations that the models learned by looking at the probabilities αijk. See Figure 5 for an example. In that example we focus on mention “Liege” in the sentence at the top and study which mentions are related to it under two versions of our model: rel-norm (leftmost column) and ment-norm (rightmost column). For rel-norm it is difficult to interpret the meaning of the relations. It seems that the first relation dominates the other two, with very high weights for most of the mentions. Nevertheless, the fact that rel-norm outperforms the baseline suggests that those learned relations encode some useful information. 
For ment-norm, the first relation is similar to coreference: the relation prefers those mentions that potentially refer to the same entity (and/or have semantically similar mentions): see Figure 5 (left, third column). The second and third relations behave differently from the first relation as they prefer mentions having more distant meanings and are complementary to the first relation. They assign large weights to (1) “Belgium” and (2) “Brussels” but small weights to (4) and (6) “Liege”. The two relations look similar in this example, however they are not identical in general. See a histogram of bucketed values of their weights in Figure 5 (right): their α have quite different distributions. Complexity The complexity of rel-norm and ment-norm is linear in K, so in principle our models should be considerably more expensive than Ganea and Hofmann (2017). However, our models converge much faster than their relation-agnostic model: on average ours needs 120 epochs, compared to theirs 1250 epochs. We believe that the structural bias helps the model to capture necessary regularities more easily. In terms of wall-clock time, our model requires just under 1.5 hours to train, that is ten times faster than the relation agnostic model (Ganea and Hofmann, 2017). In addition, the difference in testing time is negligible when using a GPU. 5 Conclusion and Future work We have shown the benefits of using relations in NEL. Our models consider relations as latent variables, thus do not require any extra supervision. Representation learning was used to learn relation embeddings, eliminating the need for extensive feature engineering. The experimental results show that our best model achieves the best reported F1 on AIDA-CoNLL with an improvement of 0.85% F1 over the best previous results. Conceptually, modeling multiple relations is substantially different from simply modeling coherence (as in Ganea and Hofmann (2017)). In this way we also hope it will lead to interesting follow-up work, as individual relations can be informed by injecting prior knowledge (e.g., by training jointly with relation extraction models). In future work, we would like to use syntactic and discourse structures (e.g., syntactic dependency paths between mentions) to encourage the models to discover a richer set of relations. We also would like to combine ment-norm and relnorm. Besides, we would like to examine whether 1603 rel-norm on Friday , Liege police said in ment-norm (1) missing teenagers in Belgium . (2) UNK BRUSSELS UNK (3) UNK Belgian police said on (4) , ” a Liege police official told (5) police official told Reuters . (6) eastern town of Liege on Thursday , (7) home village of UNK . (8) link with the Marc Dutroux case , the (9) which has rocked Belgium in the past 0.25 0.30 0.35 0.40 0.45 0.50 0.55 α 0 10 20 30 40 50 60 α •,2 α •,3 Figure 5: (Left) Examples of α. The first and third columns show αijk for oracle rel-norm and oracle ment-norm, respectively. (Right) Histograms of α•k for k = 2, 3, corresponding to the second and third relations from oracle ment-norm. Only α > 0.25 (i.e. high attentions) are shown. the induced latent relations could be helpful for relation extract. Acknowledgments We would like to thank anonymous reviewers for their suggestions and comments. The project was supported by the European Research Council (ERC StG BroadSem 678254), the Dutch National Science Foundation (NWO VIDI 639.022.518), and an Amazon Web Services (AWS) grant. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 
2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Razvan Bunescu and Marius Pas¸ca. 2006. Using encyclopedic knowledge for named entity disambiguation. In 11th Conference of the European Chapter of the Association for Computational Linguistics. Xiao Cheng and Dan Roth. 2013. Relational inference for wikification. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1787–1796, Seattle, Washington, USA. Association for Computational Linguistics. Andrew Chisholm and Ben Hachey. 2015. Entity disambiguation with web links. Transactions of the Association of Computational Linguistics, 3:145–156. Greg Durrett and Dan Klein. 2014. A joint model for entity analysis: Coreference, typing, and linking. Transactions of the Association for Computational Linguistics, 2:477–490. Evgeniy Gabrilovich, Michael Ringgaard, and Amarnag Subramanya. 2013. Facc1: Freebase annotation of clueweb corpora. Octavian-Eugen Ganea, Marina Ganea, Aurelien Lucchi, Carsten Eickhoff, and Thomas Hofmann. 2016. Probabilistic bag-of-hyperlinks model for entity linking. In Proceedings of the 25th International Conference on World Wide Web, pages 927–938. International World Wide Web Conferences Steering Committee. Octavian-Eugen Ganea and Thomas Hofmann. 2017. Deep joint entity disambiguation with local neural attention. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2609–2619. Association for Computational Linguistics. Amir Globerson, Nevena Lazic, Soumen Chakrabarti, Amarnag Subramanya, Michael Ringaard, and Fernando Pereira. 2016. Collective entity resolution with multi-focal attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 621–631. Association for Computational Linguistics. Zhaochen Guo and Denilson Barbosa. 2016. Robust named entity disambiguation with random walks. Semantic Web, (Preprint). Zhengyan He, Shujie Liu, Mu Li, Ming Zhou, Longkai Zhang, and Houfeng Wang. 2013. Learning entity representation for entity disambiguation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 30–34, Sofia, Bulgaria. Association for Computational Linguistics. Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen F¨urstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 782–792. Association for Computational Linguistics. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S. Weld. 2011. 1604 Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 541–550, Portland, Oregon, USA. Association for Computational Linguistics. Hongzhao Huang, Larry Heck, and Heng Ji. 2015. Leveraging deep neural networks and knowledge graphs for entity disambiguation. arXiv preprint arXiv:1504.07678. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Nevena Lazic, Amarnag Subramanya, Michael Ringgaard, and Fernando Pereira. 2015. Plato: A selective context model for entity resolution. 
Transactions of the Association for Computational Linguistics, 3:503–515. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. David Milne and Ian H Witten. 2008. Learning to link with wikipedia. In Proceedings of the 17th ACM conference on Information and knowledge management, pages 509–518. ACM. Kevin P Murphy, Yair Weiss, and Michael I Jordan. 1999. Loopy belief propagation for approximate inference: An empirical study. In Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence, pages 467–475. Morgan Kaufmann Publishers Inc. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Association for Computational Linguistics. Lev Ratinov, Dan Roth, Doug Downey, and Mike Anderson. 2011. Local and global algorithms for disambiguation to wikipedia. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1375–1384. Association for Computational Linguistics. Xiang Ren, Zeqiu Wu, Wenqi He, Meng Qu, Clare R Voss, Heng Ji, Tarek F Abdelzaher, and Jiawei Han. 2017. Cotype: Joint extraction of typed entities and relations with knowledge bases. In Proceedings of the 26th International Conference on World Wide Web, pages 1015–1024. International World Wide Web Conferences Steering Committee. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 6000–6010. Curran Associates, Inc. Martin J Wainwright, Michael I Jordan, et al. 2008. Graphical models, exponential families, and variational inference. Foundations and Trends R⃝in Machine Learning, 1(1–2):1–305. Sam Wiseman, Alexander M. Rush, Stuart Shieber, and Jason Weston. 2015. Learning anaphoricity and antecedent ranking features for coreference resolution. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1416–1426. Association for Computational Linguistics. Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. 2016. Joint learning of the embedding of words and entities for named entity disambiguation. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 250–259. Association for Computational Linguistics. Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. 2017. Learning distributed representations of texts and entities from knowledge base. arXiv preprint arXiv:1705.02494. Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1321–1331, Beijing, China. Association for Computational Linguistics.
2018
148
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1605–1615 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1605 Dating Documents using Graph Convolution Networks Shikhar Vashishth IISc Bangalore [email protected] Shib Sankar Dasgupta IISc Bangalore [email protected] Swayambhu Nath Ray IISc Bangalore [email protected] Partha Talukdar IISc Bangalore [email protected] Abstract Document date is essential for many important tasks, such as document retrieval, summarization, event detection, etc. While existing approaches for these tasks assume accurate knowledge of the document date, this is not always available, especially for arbitrary documents from the Web. Document Dating is a challenging problem which requires inference over the temporal structure of the document. Prior document dating systems have largely relied on handcrafted features while ignoring such documentinternal structures. In this paper, we propose NeuralDater, a Graph Convolutional Network (GCN) based document dating approach which jointly exploits syntactic and temporal graph structures of document in a principled way. To the best of our knowledge, this is the first application of deep learning for the problem of document dating. Through extensive experiments on real-world datasets, we find that NeuralDater significantly outperforms state-of-the-art baseline by 19% absolute (45% relative) accuracy points. 1 Introduction Date of a document, also referred to as the Document Creation Time (DCT), is at the core of many important tasks, such as, information retrieval (Olson et al., 1999; Li and Croft, 2003; Dakka et al., 2008), temporal reasoning (Mani and Wilson, 2000; Llid´o et al., 2001), text summarization (Wan, 2007), event detection (Allan et al., 1998), and analysis of historical text (de Jong et al., 2005a), among others. In all such tasks, the document date is assumed to be available and also DCT (?) AFTER SAME obj subj SAME AFTER Swiss adopted that form of taxation in 1995. The concession was approved by the govt ... last September. Four years after, the IOC BEFORE subj nmod case Correct DCT: 1999 Score Figure 1: Top: An example document annotated with syntactic and temporal dependencies. In order to predict the right value of 1999 for the Document Creation Time (DCT), inference over these document structures is necessary. Bottom: Document date prediction by two state-of-the-art-baselines and NeuralDater, the method proposed in this paper. While the two previous methods are getting misled by the temporal expression (1995) in the document, NeuralDater is able to use the syntactic and temporal structure of the document to predict the right value (1999). accurate – a strong assumption, especially for arbitrary documents from the Web. Thus, there is a need to automatically predict the date of a document based on its content. This problem is referred to as Document Dating. Initial attempts on automatic document dating started with generative models by (de Jong et al., 2005b). This model is later improved by (Kanhabua and Nørv˚ag, 2008a) who incorporate additional features such as POS tags, collocations, etc. Chambers (2012) shows significant improvement over these prior efforts through their discriminative models using handcrafted temporal features. Kotsakos et al. 
(2014) propose a statistical approach for document dating exploiting term burstiness (Lappas et al., 2009).

Figure 2: Overview of NeuralDater. NeuralDater exploits syntactic and temporal structure in a document to learn effective representations, which are in turn used to predict the document time. NeuralDater uses a Bi-directional LSTM (Bi-LSTM), two Graph Convolution Networks (GCN) – one over the dependency tree and the other over the document's temporal graph – along with a softmax classifier, all trained end-to-end jointly. Please see Section 4 for more details.

Document dating is a challenging problem which requires extensive reasoning over the temporal structure of the document. Let us motivate this through the example shown in Figure 1. In the document, four years after plays a crucial role in identifying the creation time of the document. The existing approaches assign higher confidence to the timestamp closest to the year mention 1995. NeuralDater exploits the syntactic and temporal structure of the document to predict the right timestamp (1999) for the document. With the exception of Chambers (2012), all prior works on the document dating problem ignore such informative temporal structure within the document.
Research in document event extraction and ordering has made it possible to extract such temporal structures involving events, temporal expressions, and the (unknown) document date in a document (Mirza and Tonelli, 2016; Chambers et al., 2014). While methods to perform reasoning over such structures exist (Verhagen et al., 2007, 2010; UzZaman et al., 2013; Llorens et al., 2015; Pustejovsky et al., 2003), none of them have exploited advances in deep learning (Krizhevsky et al., 2012; Hinton et al., 2012; Goodfellow et al., 2016). In particular, the recently proposed Graph Convolution Networks (GCN) (Defferrard et al., 2016; Kipf and Welling, 2017) have emerged as a way to learn graph representations while encoding the structural information and constraints represented by the graph. We adapt GCNs for the document dating problem and make the following contributions:
• We propose NeuralDater, a Graph Convolution Network (GCN)-based approach for document dating. To the best of our knowledge, this is the first application of GCNs, and more broadly deep neural network-based methods, for the document dating problem.
• NeuralDater is the first document dating approach which exploits syntactic as well as temporal structure of the document, all within a principled joint model.
• Through extensive experiments on multiple real-world datasets, we demonstrate NeuralDater's effectiveness over state-of-the-art baselines.
NeuralDater's source code and datasets used in the paper are available at http://github.com/malllabiisc/NeuralDater.

2 Related Work
Automatic Document Dating: de Jong et al. (2005b) propose the first approach for automating document dating through a statistical language model. Kanhabua and Nørvåg (2008a) further extend this work by incorporating semantic-based preprocessing and temporal entropy (Kanhabua and Nørvåg, 2008b) based term-weighting. Chambers (2012) proposes a MaxEnt based discriminative model trained on hand-crafted temporal features.
He also proposes a model to learn probabilistic constraints between year mentions and the actual creation time of the document. We draw inspiration from his work for exploiting temporal reasoning for document dating. Kotsakos et al. (2014) propose a purely statistical method which considers lexical similarity alongside burstiness (Lappas et al., 2009) of terms for dating documents. To the best of our knowledge, NeuralDater, our proposed method, is the first method to utilize deep learning techniques for the document dating problem. Event Ordering Systems: Temporal ordering of events is a vast research topic in NLP. The problem is posed as a temporal relation classification between two given temporal entities. Machine Learned classifiers and well crafted linguistic features for this task are used in (Chambers et al., 2007; Mirza and Tonelli, 2014). D’Souza and Ng (2013) use a hybrid approach by adding 437 hand-crafted rules. Chambers and Jurafsky (2008); Yoshikawa et al. (2009) try to classify with many more temporal constraints, while utilizing integer linear programming and Markov logic. CAEVO, a CAscading EVent Ordering architecture (Chambers et al., 2014) use sieve-based architecture (Lee et al., 2013) for temporal event ordering for the first time. They mix multiple learners according to their precision based ranks and use transitive closure for maintaining consistency of temporal graph. Mirza and Tonelli (2016) recently propose CATENA (CAusal and TEmporal relation extraction from NAtural language texts), the first integrated system for the temporal and causal relations extraction between pre-annotated events and time expressions. They also incorporate sieve-based architecture which outperforms existing methods in temporal relation classification domain. We make use of CATENA for temporal graph construction in our work. Graph Convolutional Networks (GCN): GCNs generalize Convolutional Neural Network (CNN) over graphs. GCN is introduced by (Bruna et al., 2014), and later extended by (Defferrard et al., 2016) with efficient localized filter approximation in spectral domain. Kipf and Welling (2017) propose a first-order approximation of localized filters through layer-wise propagation rule. GCNs over syntactic dependency trees have been recently exploited in the field of semantic-role labeling (Marcheggiani and Titov, 2017), neural machine translation (Bastings et al., 2017a), event detection (Bastings et al., 2017b). In our work, we successfully use GCNs for document dating. 3 Background: Graph Convolution Networks (GCN) In this section, we provide an overview of Graph Convolution Networks (GCN) (Kipf and Welling, 2017). GCN learns an embedding for each node of the graph it is applied over. We first present GCN for undirected graphs and then move onto GCN for directed graph setting. 3.1 GCN on Undirected Graph Let G = (V, E) be an undirected graph, where V is a set of n vertices and E the set of edges. The input feature matrix X ∈Rn×m whose rows are input representation of node u, xu ∈Rm, ∀u ∈V. The output hidden representation hv ∈Rd of a node v after a single layer of graph convolution operation can be obtained by considering only the immediate neighbors of v. This can be formulated as: hv = f  X u∈N(v) (Wxu + b)  , ∀v ∈V. Here, model parameters W ∈Rd×m and b ∈Rd are learned in a task-specific setting using firstorder gradient optimization. N(v) refers to the set of neighbors of v and f is any non-linear activation function. 
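To make the update above concrete, here is a minimal NumPy sketch of a single GCN layer over an undirected graph, following h_v = f(Σ_{u∈N(v)}(W x_u + b)); the adjacency-list representation, function names, and toy graph are illustrative choices, not taken from any implementation referenced in the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def gcn_layer(X, neighbors, W, b, f=relu):
    """One GCN layer on an undirected graph.

    X         : (n, m) input node features, row u is x_u
    neighbors : list of lists, neighbors[v] = indices of N(v)
    W, b      : (d, m) weight matrix and (d,) bias
    Returns H : (n, d) hidden representations, row v is h_v
    """
    n, d = X.shape[0], W.shape[0]
    H = np.zeros((n, d))
    for v in range(n):
        # Sum the transformed features of v's immediate neighbors.
        agg = np.zeros(d)
        for u in neighbors[v]:
            agg += W @ X[u] + b
        H[v] = f(agg)
    return H

# Tiny example: a 3-node path graph 0 - 1 - 2 with random 4-d features.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
W = rng.normal(size=(8, 4))
b = np.zeros(8)
H = gcn_layer(X, [[1], [0, 2], [1]], W, b)
print(H.shape)  # (3, 8)
```

In practice the same computation is usually expressed as sparse matrix products for efficiency, but the explicit loop mirrors the equation most directly.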
We have used ReLU (f(x) = max(0, x)) as the activation function in this paper. In order to capture nodes many hops away, multiple GCN layers may be stacked one on top of another. In particular, h_v^{k+1}, the representation of node v after the k-th GCN layer, can be formulated as:

h_v^{k+1} = f\Big( \sum_{u \in N(v)} \big( W^k h_u^k + b^k \big) \Big), \quad \forall v \in V,

where h_u^k is the input to the k-th layer.

3.2 GCN on Labeled and Directed Graph
In this section, we consider the GCN formulation over graphs where each edge is labeled as well as directed. In this setting, an edge from node u to v with label l(u, v) is denoted as (u, v, l(u, v)). While a few recent works focus on GCN over directed graphs (Yasunaga et al., 2017; Marcheggiani and Titov, 2017), none of them consider labeled edges. We handle both direction and label by incorporating label- and direction-specific filters.
Based on the assumption that the information in a directed edge need not propagate only along its direction, following Marcheggiani and Titov (2017) we define an updated edge set E' which expands the original set E by incorporating inverse edges as well as self-loop edges:

E' = E \cup \{(v, u, l(u, v)^{-1}) \mid (u, v, l(u, v)) \in E\} \cup \{(u, u, \top) \mid u \in V\}. \quad (1)

Here, l(u, v)^{-1} is the inverse edge label corresponding to label l(u, v), and \top is a special empty relation symbol for self-loop edges. We now define h_v^{k+1}, the embedding of node v after the k-th GCN layer applied over the directed and labeled graph, as:

h_v^{k+1} = f\Big( \sum_{u \in N(v)} \big( W^k_{l(u,v)} h_u^k + b^k_{l(u,v)} \big) \Big). \quad (2)

We note that the parameters W^k_{l(u,v)} and b^k_{l(u,v)} in this case are edge-label specific.

3.3 Incorporating Edge Importance
In many practical settings, we may not want to give equal importance to all the edges. For example, in the case of automatically constructed graphs, some of the edges may be erroneous and we may want to automatically learn to discard them. Edge-wise gating may be used in a GCN to give importance to relevant edges and subdue the noisy ones. Bastings et al. (2017b) and Marcheggiani and Titov (2017) used gating for similar reasons and obtained high performance gains. At the k-th layer, we compute the gating value for a particular edge (u, v, l(u, v)) as:

g^k_{u,v} = \sigma\big( h^k_u \cdot \hat{w}^k_{l(u,v)} + \hat{b}^k_{l(u,v)} \big),

where \sigma(\cdot) is the sigmoid function, and \hat{w}^k_{l(u,v)} and \hat{b}^k_{l(u,v)} are label-specific gating parameters. Thus, gating helps to make the model robust to the noisy labels and directions of the input graphs. The GCN embedding of a node while incorporating edge gating may be computed as follows:

h_v^{k+1} = f\Big( \sum_{u \in N(v)} g^k_{u,v} \times \big( W^k_{l(u,v)} h^k_u + b^k_{l(u,v)} \big) \Big).

4 NeuralDater Overview
The Document Dating problem may be cast as a multi-class classification problem (Kotsakos et al., 2014; Chambers, 2012). In this section, we present an overview of NeuralDater, the document dating system proposed in this paper. An architectural overview of NeuralDater is shown in Figure 2.
NeuralDater is a deep learning-based multi-class classification system. It takes a document as input and returns its predicted date as output by exploiting the syntactic and temporal structure of the document. The NeuralDater network consists of three layers which learn an embedding for the Document Creation Time (DCT) node corresponding to the document. This embedding is then fed to a softmax classifier which produces a distribution over timestamps. Following prior research (Chambers, 2012; Kotsakos et al., 2014), we work with year granularity for the experiments in this paper.
We however note that NeuralDater can be trained for finer granularity with appropriate training data. The NeuralDater network is trained end-to-end using training data. We briefly present NeuralDater’s various components below. Each component is described in greater detail in subsequent sections. • Context Embedding: In this layer, NeuralDater uses a Bi-directional LSTM (BiLSTM) to learn embedding for each token in the document. Bi-LSTMs have been shown to be quite effective in capturing local context inside token embeddings (Sutskever et al., 2014). • Syntactic Embedding: In this step, NeuralDater revises token embeddings from previous step by running a GCN over the dependency parses of sentences in the document. We refer to this GCN as Syntactic GCN or S-GCN. While the Bi-LSTM captures immediate local context in token embeddings, S1609 GCN augments them by capturing syntactic context. • Temporal Embedding: In this step, NeuralDater further refines embeddings learned by S-GCN to incorporate cues from temporal structure of event and times in the document. NeuralDater uses state-of-the-art causal and temporal relation extraction algorithm (Mirza and Tonelli, 2016) for extracting temporal graph for each document. A GCN is then run over this temporal graph to refine the embeddings from previous layer. We refer to this GCN as Temporal GCN or T-GCN. In this step, a special DCT node is introduced whose embedding is also learned by the T-GCN. • Classifier: Embedding of the DCT node along with average pooled embeddings learned by S-GCN are fed to a fully connected softmax classifier which makes the final prediction about the date of the document. Even though the previous discussion is presented in a sequential manner, the whole network is trained in a joint end-to-end manner using backpropagation. 5 NeuralDater Details In this section, we present detailed description of various components of NeuralDater. 5.1 Context Embedding (Bi-LSTM) Let us consider a document D with n tokens w1, w2, ..., wn. We first represent each token by a k-dimensional word embedding. For the experiments in this paper, we use GloVe (Pennington et al., 2014) embeddings. These token embeddings are stacked together to get the document representation X ∈Rn×k. We then employ a Bi-directional LSTM (Bi-LSTM) (Hochreiter and Schmidhuber, 1997) on the input matrix X to obtain contextual embedding for each token. After stacking contextual embedding of all these tokens, we get the new document representation matrix Hcntx ∈Rn×rcntx. In this new representation, each token is represented in a rcntx-dimensional space. Our choice of LSTMs for learning contextual embeddings for tokens is motivated by the previous success of LSTMs in this task (Sutskever et al., 2014). 5.2 Syntactic Embedding (S-GCN) While the Bi-LSTM is effective at capturing immediate local context of a token, it may not be as effective in capturing longer range dependencies among words in a sentence. For example, in Figure 1, we would like the embedding of token approved to be directly affected by govt, even though they are not immediate neighbors. A dependency parse may be used to capture such longer-range connections. In fact, similar features were exploited by (Chambers, 2012) for the document dating problem. NeuralDater captures such longerrange information by using another GCN run over the syntactic structure of the document. We describe this in detail below. The context embedding, Hcntx ∈Rn×rcntx learned in the previous step is used as input to this layer. 
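As a concrete illustration of how Hcntx is produced in Section 5.1, here is a minimal PyTorch sketch; the module name, the per-direction hidden-size split, and the toy inputs are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class ContextEmbedder(nn.Module):
    """Bi-LSTM over pre-trained token embeddings, as in Section 5.1."""
    def __init__(self, emb_matrix, r_cntx=128):
        super().__init__()
        # emb_matrix: (vocab, k) tensor of GloVe vectors (frozen here for simplicity).
        self.emb = nn.Embedding.from_pretrained(emb_matrix, freeze=True)
        # Each direction gets r_cntx // 2 units so the concatenated
        # forward/backward output is r_cntx-dimensional per token.
        self.bilstm = nn.LSTM(emb_matrix.size(1), r_cntx // 2,
                              batch_first=True, bidirectional=True)

    def forward(self, token_ids):
        # token_ids: (batch, n) -> H_cntx: (batch, n, r_cntx)
        X = self.emb(token_ids)
        H_cntx, _ = self.bilstm(X)
        return H_cntx

# Toy usage with a random stand-in for a GloVe table (1000 words x 300 dims).
emb = torch.randn(1000, 300)
model = ContextEmbedder(emb, r_cntx=128)
H = model(torch.randint(0, 1000, (2, 12)))
print(H.shape)  # torch.Size([2, 12, 128])
```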
For a given document, we first extract its syntactic dependency structure by applying the Stanford CoreNLP’s dependency parser (Manning et al., 2014) on each sentence in the document individually. We now employ the Graph Convolution Network (GCN) over this dependency graph using the GCN formulation presented in Section 3.2. We call this GCN the Syntactic GCN or SGCN, as mentioned in Section 4. Since S-GCN operates over the dependency graph and uses Equation 2 for updating embeddings, the number of parameters in S-GCN is directly proportional to the number of dependency edge types. Stanford CoreNLP’s dependency parser returns 55 different dependency edge types. This large number of edge types is going to significantly over-parameterize S-GCN, thereby increasing the possibility of overfitting. In order to address this, we use only three edge types in SGCN. For each edge connecting nodes wi and wj in E′ (see Equation 1), we determine its new type L(wi, wj) as follows: • L(wi, wj) =→if (wi, wj, l(wi, wj)) ∈E′, i.e., if the edge is an original dependency parse edge • L(wi, wj) =←if (wi, wj, l(wi, wj)−1) ∈E′, i.e., if the edges is an inverse edge • L(wi, wj) = ⊤if (wi, wj, ⊤) ∈E′, i.e., if the edge is a self-loop with wi = wj S-GCN now estimates embedding hsyn wi ∈Rrsyn for each token wi in the document using the for1610 mulation shown below. hsyn wi = f P wj∈N(wi)  WL(wi,wj)hcntx wj + bL(wi,wj)  ! Please note S-GCN’s use of the new edge types L(wi, wj) above, instead of the l(wi, wj) types used in Equation 2. By stacking embeddings for all the tokens together, we get the new embedding matrix Hsyn ∈Rn×rsyn representing the document. AveragePooling: We obtain an embedding havg D for the whole document by average pooling of every token representation. havg D = 1 n n X i=1 hsyn wi . (3) 5.3 Temporal Embedding (T-GCN) In this layer, NeuralDater exploits temporal structure of the document to learn an embedding for the Document Creation Time (DCT) node of the document. First, we describe the construction of temporal graph, followed by GCN-based embedding learning over this graph. Temporal Graph Construction: NeuralDater uses Stanford’s SUTime tagger (Chang and Manning, 2012) for date normalization and the event extraction classifier of (Chambers et al., 2014) for event detection. The annotated document is then passed to CATENA (Mirza and Tonelli, 2016), current state-of-the-art temporal and causal relation extraction algorithm, to obtain a temporal graph for each document. Since our task is to predict the creation time of a given document, we supply DCT as unknown to CATENA. We hypothesize that the temporal relations extracted in absence of DCT are helpful for document dating and we indeed find this to be true, as shown in Section 7. Temporal graph is a directed graph, where nodes correspond to events, time mentions, and the Document Creation Time (DCT). Edges in this graph represent causal and temporal relationships between them. Each edge is attributed with a label representing the type of the temporal relation. CATENA outputs 9 different types of temporal relations, out of which we selected five types, viz., AFTER, BEFORE, SAME, INCLUDES, and IS INCLUDED. The remaining four types were ignored as they were substantially infrequent. Please note that the temporal graph may involve only a small number of tokens in the document. Datasets # Docs Start Year End Year APW 675k 1995 2010 NYT 647k 1987 1996 Table 1: Details of datasets used. Please see Section 6 for details. 
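Before the worked example that follows, here is a minimal Python sketch of how the typed, directed temporal graph could be represented and expanded with inverse and self-loop edges (Equation 1) before being passed to T-GCN; the data structures and names are illustrative, and the second edge triple is invented purely for the sake of the example.

```python
# Node set: event mentions, time mentions, and the special DCT node.
nodes = ["adopted", "approved", "1995", "four_years_after", "DCT"]

# CATENA-style typed edges (head, tail, label); only the five labels kept
# by NeuralDater are allowed. The first triple comes from the Figure 2
# example; the second is made up for illustration.
LABELS = {"AFTER", "BEFORE", "SAME", "INCLUDES", "IS_INCLUDED"}
edges = [
    ("four_years_after", "approved", "BEFORE"),
    ("adopted", "approved", "BEFORE"),
]

def expand_edges(nodes, edges):
    """Add inverse and self-loop edges, as in Equation 1 (Section 3.2)."""
    expanded = []
    for u, v, lab in edges:
        assert lab in LABELS
        expanded.append((u, v, lab))           # original edge
        expanded.append((v, u, lab + "^-1"))   # inverse edge, inverse label
    for u in nodes:
        expanded.append((u, u, "SELF"))        # self-loop with empty relation
    return expanded

for triple in expand_edges(nodes, edges):
    print(triple)
```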
For example, in the temporal graph in Figure 2, there are a total of 5 nodes: two temporal expression nodes (1995 and four years after), two event nodes (adopted and approved), and a special DCT node. This graph also consists of temporal relation edges such as (four years after, approved, BEFORE). Temporal Graph Convolution: NeuralDater employs a GCN over the temporal graph constructed above. We refer to this GCN as the Temporal GCN or T-GCN, as mentioned in Section 4. T-GCN is based on the GCN formulation presented in Section 3.2. Unlike S-GCN, here we consider label and direction specific parameters as the temporal graph consists of only five types of edges. Let nT be the number of nodes in the temporal graph. Starting with Hsyn (Section 5.2), T-GCN learns a rtemp-dimensional embedding for each node in the temporal graph. Stacking all these embeddings together, we get the embedding matrix Htemp ∈RnT ×rtemp. T-GCN embeds the temporal constraints induced by the temporal graph in htemp DCT ∈Rrtemp, embedding of the DCT node of the document. 5.4 Classifier Finally, the DCT embedding htemp DCT and averagepooled syntactic representation havg D (see Equation 3) of document D are concatenated and fed to a fully connected feed forward network followed by a softmax. This allows the NeuralDater to exploit context, syntactic, and temporal structure of the document to predict the final document date y. havg+temp D = [htemp DCT ; havg D ] p(y|D) = Softmax(W · havg+temp D + b). 6 Experimental Setup Datasets: We experiment on Associated Press Worldstream (APW) and New York Times (NYT) sections of Gigaword corpus (Parker et al., 2011). The original dataset contains around 3 million 1611 documents of APW and 2 million documents of NYT from span of multiple years. From both sections, we randomly sample around 650k documents while maintaining balance among years. Documents belonging to years with substantially fewer documents are omitted. Details of the dataset can be found in Table 1. For train, test and validation splits, the dataset was randomly divided in 80:10:10 ratio. Evaluation Criteria: Given a document, the model needs to predict the year in which the document was published. We measure performance in terms of overall accuracy of the model. Baselines: For evaluating NeuralDater, we compared against the following methods: • BurstySimDater Kotsakos et al. (2014): This is a purely statistical method which uses lexical similarity and term burstiness (Lappas et al., 2009) for dating documents in arbitrary length time frame. For our experiments, we took the time frame length as 1 year. Please refer to (Kotsakos et al., 2014) for more details. • MaxEnt-Time-NER: Maximum Entropy (MaxEnt) based classifier trained on hand-crafted temporal and Named Entity Recognizer (NER) based features. More details in (Chambers, 2012). • MaxEnt-Joint: Refers to MaxEnt-TimeNER combined with year mention classifier as described in (Chambers, 2012). • MaxEnt-Uni-Time: MaxEnt based discriminative model which takes bag-of-words representation of input document with normalized time expression as its features. • CNN: A Convolution Neural Network (CNN) (LeCun et al., 1999) based text classification model proposed by (Kim, 2014), which attained state-of-the-art results in several domains. • NeuralDater: Our proposed method, refer Section 4. Hyperparameters: By default, edge gating (Section 3.3) is used in all GCNs. The parameter K represents the number of layers in T-GCN (Section 5.3). 
We use 300-dimensional GloVe embeddings and 128-dimensional hidden states for both GCNs and the Bi-LSTM, with 0.8 dropout. We used Adam (Kingma and Ba, 2014) with a 0.001 learning rate for training.

Method              APW    NYT
BurstySimDater      45.9   38.5
MaxEnt-Time+NER     52.5   42.3
MaxEnt-Joint        52.5   42.5
MaxEnt-Uni-Time     57.5   50.5
CNN                 56.3   50.4
NeuralDater         64.1   58.9
Table 2: Accuracies of different methods on the APW and NYT datasets for the document dating problem (higher is better). NeuralDater significantly outperforms all other competitive baselines. This is our main result. Please see Section 7.1 for more details.

Figure 3: Mean absolute deviation (in years; lower is better) between a model's top prediction and the true year in the APW dataset. We find that NeuralDater, the proposed method, achieves the least deviation. Please see Section 7.1 for details.

Method                                  Accuracy
T-GCN                                   57.3
S-GCN + T-GCN (K = 1)                   57.8
S-GCN + T-GCN (K = 2)                   58.8
S-GCN + T-GCN (K = 3)                   59.1
Bi-LSTM                                 58.6
Bi-LSTM + CNN                           59.0
Bi-LSTM + T-GCN                         60.5
Bi-LSTM + S-GCN + T-GCN (no gate)       62.7
Bi-LSTM + S-GCN + T-GCN (K = 1)         64.1
Bi-LSTM + S-GCN + T-GCN (K = 2)         63.8
Bi-LSTM + S-GCN + T-GCN (K = 3)         63.3
Table 3: Accuracies of different ablated methods on the APW dataset. Overall, we observe that the incorporation of context (Bi-LSTM), syntactic structure (S-GCN) and temporal structure (T-GCN) in NeuralDater achieves the best performance. Please see Section 7.1 for details.

7 Results
7.1 Performance Comparison
In order to evaluate the effectiveness of NeuralDater, our proposed method, we compare it against existing document dating systems and text classification models. The final results are summarized in Table 2. Overall, we find that NeuralDater outperforms all other methods by a significant margin on both datasets. Compared to the previous state of the art in document dating, BurstySimDater (Kotsakos et al., 2014), we get a 19% average absolute improvement in accuracy across both datasets. We observe only a slight gain in the performance of the MaxEnt-based model (MaxEnt-Time+NER) of Chambers (2012) when it is combined with the temporal constraint reasoner (MaxEnt-Joint). This may be attributed to the fact that the model utilizes only year mentions in the document, thus ignoring other signals which might be relevant to the task. BurstySimDater performs considerably better in terms of precision compared to the other baselines, although it significantly underperforms in accuracy. We note that NeuralDater outperforms all these prior models both in terms of precision and accuracy. We find that even generic deep-learning based text classification models, such as CNN (Kim, 2014), are quite effective for the problem. However, since such a model does not give specific attention to temporal features in the document, its performance remains limited. From Figure 3, we observe that NeuralDater's top prediction achieves on average the lowest deviation from the true year.

7.2 Ablation Comparisons
To demonstrate the efficacy of GCNs and the Bi-LSTM for the problem, we evaluate different ablated variants of NeuralDater on the APW dataset. Specifically, we validate the importance of using syntactic and temporal GCNs and the effect of eliminating the Bi-LSTM from the model. Overall results are summarized in Table 3. The first block of rows in the table corresponds to the case when the Bi-LSTM layer is excluded from NeuralDater, while the second block denotes the case when the Bi-LSTM is included.
We also experiment with multiple stacked layers of T-GCN (denoted by K) to observe its effect on the performance of the model. We observe that embeddings from Syntactic GCN (S-GCN) are much better than plain GloVe embeddings for T-GCN as S-GCN encodes the syntactic neighborhood information in event and time embeddings which makes them more relevant for document dating task. Accuracy Figure 4: Evaluating performance of different methods on dating documents with and without time mentions. Please see Section 7.3 for details. Overall, we observe that including BiLSTM in the model improves performance significantly. Single BiLSTM model outperforms all the models listed in the first block of Table 3. Also, some gain in performance is observed on increasing the number of T-GCN layers (K) in absence of BiLSTM, although the same does not follow when BiLSTM is included in the model. This observation is consistent with (Marcheggiani and Titov, 2017), as multiple GCN layers become redundant in the presence of BiLSTM. We also find that eliminating edge gating from our best model deteriorates its overall performance. In summary, these results validate our thesis that joint incorporation of syntactic and temporal structure of a document in NeuralDater results in improved performance. 7.3 Discussion and Error Analysis In this section, we list some of our observations while trying to identify pros and cons of NeuralDater, our proposed method. We divided the development split of the APW dataset into two sets – those with and without any mention of time expressions (year). We apply NeuralDater and other methods to these two sets of documents and report accuracies in Figure 4. We find that overall, NeuralDater performs better in comparison to the existing baselines in both scenarios. Even though the performance of NeuralDater degrades in the absence of time mentions, its performance is still the best relatively. Based on other analysis, we find that NeuralDater fails to identify timestamp of documents reporting local infrequent incidents without explicit time mention. NeuralDater becomes confused in the presence of multiple misleading time mentions; it also loses out on documents discussing events which are outside the time range of the text on which the model was trained. In future, we plan to eliminate these pitfalls by 1613 incorporating additional signals from Knowledge Graphs about entities mentioned in the document. We also plan to utilize free text temporal expression (Kuzey et al., 2016) in documents for improving performance on this problem. 8 Conclusion We propose NeuralDater, a Graph Convolutional Network (GCN) based method for document dating which exploits syntactic and temporal structures in the document in a principled way. To the best of our knowledge, this is the first application of deep learning techniques for the problem of document dating. Through extensive experiments on real-world datasets, we demonstrate the effectiveness of NeuralDater over existing state-of-theart approaches. We are hopeful that the representation learning techniques explored in this paper will inspire further development and adoption of such techniques in the temporal information processing research community. Acknowledgements We thank the anonymous reviewers for their constructive comments. This work is supported in part by the Ministry of Human Resource Development (Government of India) and by a gift from Google. References James Allan, Ron Papka, and Victor Lavrenko. 1998. 
On-line new event detection and tracking. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, New York, NY, USA, SIGIR ’98, pages 37–45. https://doi.org/10.1145/290941.290954. Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Simaan. 2017a. Graph convolutional encoders for syntax-aware neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Copenhagen, Denmark, pages 1957–1967. https://www.aclweb.org/anthology/D17-1209. Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Sima’an. 2017b. Graph convolutional encoders for syntax-aware neural machine translation. CoRR abs/1704.04675. http://arxiv.org/abs/1704.04675. Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann Lecun. 2014. Spectral networks and locally connected networks on graphs. In International Conference on Learning Representations (ICLR2014), CBLS, April 2014. Nathanael Chambers. 2012. Labeling documents with timestamps: Learning from their time expressions. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers Volume 1. Association for Computational Linguistics, Stroudsburg, PA, USA, ACL ’12, pages 98–106. http://dl.acm.org/citation.cfm?id=2390524.2390539. Nathanael Chambers, Taylor Cassidy, Bill McDowell, and Steven Bethard. 2014. Dense event ordering with a multi-pass architecture. Transactions of the Association of Computational Linguistics 2:273– 284. http://www.aclweb.org/anthology/Q14-1022. Nathanael Chambers and Dan Jurafsky. 2008. Jointly combining implicit constraints improves temporal ordering. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Stroudsburg, PA, USA, EMNLP ’08, pages 698–706. http://dl.acm.org/citation.cfm?id=1613715.1613803. Nathanael Chambers, Shan Wang, and Dan Jurafsky. 2007. Classifying temporal relations between events. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions. Association for Computational Linguistics, Stroudsburg, PA, USA, ACL ’07, pages 173–176. http://dl.acm.org/citation.cfm?id=1557769.1557820. Angel X. Chang and Christopher Manning. 2012. Sutime: A library for recognizing and normalizing time expressions. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012). European Language Resources Association (ELRA). http://www.aclweb.org/anthology/L12-1122. Wisam Dakka, Luis Gravano, and Panagiotis G. Ipeirotis. 2008. Answering general time sensitive queries. In Proceedings of the 17th ACM Conference on Information and Knowledge Management. ACM, New York, NY, USA, CIKM ’08, pages 1437–1438. https://doi.org/10.1145/1458082.1458320. Franciska M.G. de Jong, H. Rode, and Djoerd Hiemstra. 2005a. Temporal Language Models for the Disclosure of Historical Text, KNAW, pages 161–168. Imported from EWI/DB PMS [dbutwente:inpr:0000003683]. Franciska M.G. de Jong, H. Rode, and Djoerd Hiemstra. 2005b. Temporal Language Models for the Disclosure of Historical Text, KNAW, pages 161–168. Imported from EWI/DB PMS [dbutwente:inpr:0000003683]. Micha¨el Defferrard, Xavier Bresson, and Pierre Vandergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. 
In 1614 Proceedings of the 30th International Conference on Neural Information Processing Systems. Curran Associates Inc., USA, NIPS’16, pages 3844–3852. http://dl.acm.org/citation.cfm?id=3157382.3157527. Jennifer D’Souza and Vincent Ng. 2013. Classifying temporal relations with rich linguistic knowledge. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 918–927. http://www.aclweb.org/anthology/N131112. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. MIT Press. http://www. deeplearningbook.org. G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury. 2012. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine 29(6):82–97. https://doi.org/10.1109/MSP.2012.2205597. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput. 9(8):1735– 1780. https://doi.org/10.1162/neco.1997.9.8.1735. Nattiya Kanhabua and Kjetil Nørv˚ag. 2008a. Improving temporal language models for determining time of non-timestamped documents. In Proceedings of the 12th European Conference on Research and Advanced Technology for Digital Libraries. SpringerVerlag, Berlin, Heidelberg, ECDL ’08, pages 358– 370. Nattiya Kanhabua and Kjetil Nørv˚ag. 2008b. Improving temporal language models for determining time of non-timestamped documents. In International Conference on Theory and Practice of Digital Libraries. Springer, pages 358–370. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 1746–1751. https://doi.org/10.3115/v1/D14-1181. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980. Thomas N. Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR). Dimitrios Kotsakos, Theodoros Lappas, Dimitrios Kotzias, Dimitrios Gunopulos, Nattiya Kanhabua, and Kjetil Nørv˚ag. 2014. A burstinessaware approach for document dating. In Proceedings of the 37th International ACM SIGIR Conference on Research &#38; Development in Information Retrieval. ACM, New York, NY, USA, SIGIR ’14, pages 1003–1006. https://doi.org/10.1145/2600428.2609495. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1. Curran Associates Inc., USA, NIPS’12, pages 1097–1105. http://dl.acm.org/citation.cfm?id=2999134.2999257. Erdal Kuzey, Vinay Setty, Jannik Str¨otgen, and Gerhard Weikum. 2016. As time goes by: Comprehensive tagging of textual phrases with temporal scopes. In Proceedings of the 25th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland, WWW ’16, pages 915–925. https://doi.org/10.1145/2872427.2883055. Theodoros Lappas, Benjamin Arai, Manolis Platakis, Dimitrios Kotsakos, and Dimitrios Gunopulos. 2009. On burstiness-aware search for document sequences. 
In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, New York, NY, USA, KDD ’09, pages 477–486. https://doi.org/10.1145/1557019.1557075. Yann LeCun, Patrick Haffner, L´eon Bottou, and Yoshua Bengio. 1999. Object recognition with gradient-based learning. In Shape, Contour and Grouping in Computer Vision. Springer-Verlag, London, UK, UK, pages 319–. http://dl.acm.org/citation.cfm?id=646469.691875. Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2013. Deterministic coreference resolution based on entity-centric, precision-ranked rules. Comput. Linguist. 39(4):885–916. Xiaoyan Li and W. Bruce Croft. 2003. Timebased language models. In Proceedings of the Twelfth International Conference on Information and Knowledge Management. ACM, New York, NY, USA, CIKM ’03, pages 469–475. https://doi.org/10.1145/956863.956951. D. Llid´o, R. Berlanga, and M. J. Aramburu. 2001. Extracting temporal references to assign document event-time periods*. In Heinrich C. Mayr, Jiri Lazansky, Gerald Quirchmayr, and Pavel Vogel, editors, Database and Expert Systems Applications. Springer Berlin Heidelberg, Berlin, Heidelberg, pages 62–71. Hector Llorens, Nathanael Chambers, Naushad UzZaman, Nasrin Mostafazadeh, James Allen, and James Pustejovsky. 2015. Semeval-2015 task 5: 1615 Qa tempeval-evaluating temporal information understanding with question answering. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015). pages 792–800. Inderjeet Mani and George Wilson. 2000. Robust temporal processing of news. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, Stroudsburg, PA, USA, ACL ’00, pages 69–76. https://doi.org/10.3115/1075218.1075228. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations. pages 55–60. http://www.aclweb.org/anthology/P/P14/P14-5010. Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. CoRR abs/1703.04826. http://arxiv.org/abs/1703.04826. Paramita Mirza and Sara Tonelli. 2014. Classifying temporal relations with simple features. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, pages 308–317. https://doi.org/10.3115/v1/E14-1033. Paramita Mirza and Sara Tonelli. 2016. Catena: Causal and temporal relation extraction from natural language texts. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, pages 64–75. http://www.aclweb.org/anthology/C16-1007. MA Olson, K Bostic, MI Seltzer, and DB Berkeley. 1999. Usenix annual technical conference, freenix track. Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2011. English gigaword fifth edition ldc2011t07. dvd. Philadelphia: Linguistic Data Consortium . Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP). pages 1532– 1543. http://www.aclweb.org/anthology/D14-1162. 
James Pustejovsky, Patrick Hanks, Roser Sauri, Andrew See, Robert Gaizauskas, Andrea Setzer, Dragomir Radev, Beth Sundheim, David Day, Lisa Ferro, et al. 2003. The timebank corpus. In Corpus linguistics. Lancaster, UK., volume 2003, page 40. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2. MIT Press, Cambridge, MA, USA, NIPS’14, pages 3104–3112. http://dl.acm.org/citation.cfm?id=2969033.2969173. Naushad UzZaman, Hector Llorens, Leon Derczynski, James Allen, Marc Verhagen, and James Pustejovsky. 2013. Semeval-2013 task 1: Tempeval-3: Evaluating time expressions, events, and temporal relations. In Second Joint Conference on Lexical and Computational Semantics (* SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013). volume 2, pages 1–9. Marc Verhagen, Robert Gaizauskas, Frank Schilder, Mark Hepple, Graham Katz, and James Pustejovsky. 2007. Semeval-2007 task 15: Tempeval temporal relation identification. In Proceedings of the 4th international workshop on semantic evaluations. Association for Computational Linguistics, pages 75– 80. Marc Verhagen, Roser Sauri, Tommaso Caselli, and James Pustejovsky. 2010. Semeval-2010 task 13: Tempeval-2. In Proceedings of the 5th international workshop on semantic evaluation. Association for Computational Linguistics, pages 57–62. Xiaojun Wan. 2007. Timedtextrank: Adding the temporal dimension to multi-document summarization. In Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, New York, NY, USA, SIGIR ’07, pages 867–868. https://doi.org/10.1145/1277741.1277949. Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan, and Dragomir R. Radev. 2017. Graph-based neural multi-document summarization. In Proceedings of CoNLL-2017. Association for Computational Linguistics. Katsumasa Yoshikawa, Sebastian Riedel, Masayuki Asahara, and Yuji Matsumoto. 2009. Jointly identifying temporal relations with markov logic. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. Association for Computational Linguistics, pages 405–413. http://www.aclweb.org/anthology/P09-1046.
2018
149
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 152–161 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 152 Retrieve, Rerank and Rewrite: Soft Template Based Neural Summarization Ziqiang Cao1,2 Wenjie Li1,2 Furu Wei3 Sujian Li4 1Department of Computing, The Hong Kong Polytechnic University, Hong Kong 2Hong Kong Polytechnic University Shenzhen Research Institute, China 3Microsoft Research, Beijing, China 4Key Laboratory of Computational Linguistics, Peking University, MOE, China {cszqcao, cswjli}@comp.polyu.edu.hk [email protected] [email protected] Abstract Most previous seq2seq summarization systems purely depend on the source text to generate summaries, which tends to work unstably. Inspired by the traditional template-based summarization approaches, this paper proposes to use existing summaries as soft templates to guide the seq2seq model. To this end, we use a popular IR platform to Retrieve proper summaries as candidate templates. Then, we extend the seq2seq framework to jointly conduct template Reranking and templateaware summary generation (Rewriting). Experiments show that, in terms of informativeness, our model significantly outperforms the state-of-the-art methods, and even soft templates themselves demonstrate high competitiveness. In addition, the import of high-quality external summaries improves the stability and readability of generated summaries. 1 Introduction The exponentially growing online information has necessitated the development of effective automatic summarization systems. In this paper, we focus on an increasingly intriguing task, i.e., abstractive sentence summarization (Rush et al., 2015a), which generates a shorter version of a given sentence while attempting to preserve its original meaning. It can be used to design or refine appealing headlines. Recently, the application of the attentional sequence-to-sequence (seq2seq) framework has attracted growing attention and achieved state-of-the-art performance on this task (Rush et al., 2015a; Chopra et al., 2016; Nallapati et al., 2016). Most previous seq2seq models purely depend on the source text to generate summaries. However, as reported in many studies (Koehn and Knowles, 2017), the performance of a seq2seq model deteriorates quickly with the increase of the length of generation. Our experiments also show that seq2seq models tend to “lose control” sometimes. For example, 3% of summaries contain less than 3 words, while there are 4 summaries repeating a word for even 99 times. These results largely reduce the informativeness and readability of the generated summaries. In addition, we find seq2seq models usually focus on copying source words in order, without any actual “summarization”. Therefore, we argue that, the free generation based on the source sentence is not enough for a seq2seq model. Template based summarization (e.g., Zhou and Hovy (2004)) is a traditional approach to abstractive summarization. In general, a template is an incomplete sentence which can be filled with the input text using the manually defined rules. For instance, a concise template to conclude the stock market quotation is: [REGION] shares [open/close] [NUMBER] percent [lower/higher], e.g., “hong kong shares close #.# percent lower”. Since the templates are written by humans, the produced summaries are usually fluent and informative. 
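To make the hard-template idea concrete, here is a tiny illustrative Python sketch of rule-based template filling; the regular expressions, slot names, and example sentence are invented for illustration and are not part of any system discussed in this paper.

```python
import re

# A hard template for stock-market headlines, with slots to be filled
# by hand-written extraction rules.
TEMPLATE = "{region} shares close {number} percent {direction}"

def fill_template(sentence):
    """Toy rule-based filling: regexes stand in for hand-crafted rules."""
    region = re.search(r"^(hong kong|tokyo|london)", sentence)
    number = re.search(r"(\d+(?:\.\d+)?) percent", sentence)
    direction = "lower" if ("fell" in sentence or "lower" in sentence) else "higher"
    if not (region and number):
        return None
    return TEMPLATE.format(region=region.group(1),
                           number=number.group(1),
                           direction=direction)

print(fill_template("hong kong stocks fell 1.2 percent on tuesday amid trade fears"))
# -> "hong kong shares close 1.2 percent lower"
```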
However, the construction of templates is extremely time-consuming and requires a plenty of domain knowledge. Moreover, it is impossible to develop all templates for summaries in various domains. Inspired by retrieve-based conversation systems (Ji et al., 2014), we assume the golden summaries of the similar sentences can provide a reference point to guide the input sentence summarization process. We call these existing summaries soft templates since no actual rules are nee153 ded to build new summaries from them. Due to the strong rewriting ability of the seq2seq framework (Cao et al., 2017a), in this paper, we propose to combine the seq2seq and template based summarization approaches. We call our summarization system Re3Sum, which consists of three modules: Retrieve, Rerank and Rewrite. We utilize a widely-used Information Retrieval (IR) platform to find out candidate soft templates from the training corpus. Then, we extend the seq2seq model to jointly learn template saliency measurement (Rerank) and final summary generation (Rewrite). Specifically, a Recurrent Neural Network (RNN) encoder is applied to convert the input sentence and each candidate template into hidden states. In Rerank, we measure the informativeness of a candidate template according to its hidden state relevance to the input sentence. The candidate template with the highest predicted informativeness is regarded as the actual soft template. In Rewrite, the summary is generated according to the hidden states of both the sentence and template. We conduct extensive experiments on the popular Gigaword dataset (Rush et al., 2015b). Experiments show that, in terms of informativeness, Re3Sum significantly outperforms the state-ofthe-art seq2seq models, and even soft templates themselves demonstrate high competitiveness. In addition, the import of high-quality external summaries improves the stability and readability of generated summaries. The contributions of this work are summarized as follows: • We propose to introduce soft templates as additional input to improve the readability and stability of seq2seq summarization systems. Code and results can be found at http://www4.comp.polyu. edu.hk/˜cszqcao/ • We extend the seq2seq framework to conduct template reranking and template-aware summary generation simultaneously. • We fuse the popular IR-based and seq2seqbased summarization systems, which fully utilize the supervisions from both sides. 2 Method As shown in Fig. 1, our summarization system consists of three modules, i.e., Retrieve, Rerank and Rewrite. Given the input sentence x, the Retrieve module filters candidate soft templates C = {ri} from the training corpus. For validation and test, we regard the candidate template with the highest predicted saliency (a.k.a informativeness) score as the actual soft template r. For training, we choose the one with the maximal actual saliency score in C, which speeds up convergence and shows no obvious side effect in the experiments. Then, we jointly conduct reranking and rewriting through a shared encoder. Specifically, both the sentence x and the soft template r are converted into hidden states with a RNN encoder. In the Rerank module, we measure the saliency of r according to its hidden state relevance to x. In the Rewrite module, a RNN decoder combines the hidden states of x and r to generate a summary y. More details will be described in the rest of this section 2.1 Retrieve The purpose of this module is to find out candidate templates from the training corpus. 
We assume that similar sentences should hold similar summary patterns. Therefore, given a sentence x, we find out its analogies in the corpus and pick their summaries as the candidate templates. Since the size of our dataset is quite large (over 3M), we leverage the widely-used Information Retrieval (IR) system Lucene (https://lucene.apache.org/) to index and search efficiently. We keep the default settings of Lucene (TextField with EnglishAnalyzer) to build the IR system. For each input sentence, we select the top 30 search results as candidate templates.

Figure 1: Flow chart of the proposed method. We use a dashed line for Retrieve since there is an IR system embedded.

2.2 Jointly Rerank and Rewrite
To conduct template-aware seq2seq generation (rewriting), it is necessary to encode both the source sentence x and the soft template r into hidden states. Considering that matching networks based on hidden states have demonstrated a strong ability to measure the relevance of two pieces of text (e.g., Chen et al. (2016)), we propose to jointly conduct reranking and rewriting through a shared encoding step.
Specifically, we employ a bidirectional Recurrent Neural Network (BiRNN) encoder (Cho et al., 2014) to read x and r. Take the sentence x as an example. Its hidden state of the forward RNN at timestep i can be represented by:

\overrightarrow{h}^x_i = \text{RNN}(x_i, \overrightarrow{h}^x_{i-1}) \quad (1)

The BiRNN consists of a forward RNN and a backward RNN. Suppose the corresponding outputs are [\overrightarrow{h}^x_1; \cdots; \overrightarrow{h}^x_{-1}] and [\overleftarrow{h}^x_1; \cdots; \overleftarrow{h}^x_{-1}], respectively, where the index "-1" stands for the last element. Then, the composite hidden state of a word is the concatenation of the two RNN representations, i.e., h^x_i = [\overrightarrow{h}^x_i; \overleftarrow{h}^x_i]. The entire representation for the source sentence is [h^x_1; \cdots; h^x_{-1}]. Since a soft template r can also be regarded as a readable concise sentence, we use the same BiRNN encoder to convert it into hidden states [h^r_1; \cdots; h^r_{-1}].

2.2.1 Rerank
In Retrieve, the template candidates are ranked according to the text similarity between the corresponding indexed sentences and the input sentence. However, for the summarization task, we expect the soft template r to resemble the actual summary y* as much as possible. Here we use the widely-used summarization evaluation metric ROUGE (Lin, 2004) to measure the actual saliency s*(r, y*) (see Section 3.2). We utilize the hidden states of x and r to predict the saliency s of the template. Specifically, we regard the output of the BiRNN as the representation of the sentence or template:

h_x = [\overleftarrow{h}^x_1; \overrightarrow{h}^x_{-1}] \quad (2)
h_r = [\overleftarrow{h}^r_1; \overrightarrow{h}^r_{-1}] \quad (3)

Next, we use a bilinear network to predict the saliency of the template for the input sentence:

s(r, x) = \text{sigmoid}(h_r W_s h_x^{\mathsf{T}} + b_s), \quad (4)

where W_s and b_s are parameters of the bilinear network, and we add the sigmoid activation function to make the range of s consistent with the actual saliency s*. According to Chen et al. (2016), the bilinear form outperforms multi-layer forward neural networks in relevance measurement. As shown later, the difference between s and s* will provide additional supervision for the seq2seq framework.

2.2.2 Rewrite
The soft template r selected by the Rerank module is already competitive with the state-of-the-art method in terms of ROUGE evaluation (see Table 4). However, r usually contains a lot of named entities that do not appear in the source (see Table 5). Consequently, it is hard to ensure that the soft templates are faithful to the input sentences.
To generate summaries that are more faithful and informative, we therefore leverage the strong rewriting ability of the seq2seq model. Specifically, since the input of our system consists of both the sentence and the soft template, we use simple concatenation to combine the hidden states of the sentence and template (we also attempted more complex combination approaches such as the gate network of Cao et al. (2017b), but failed to achieve an obvious improvement; we assume the Rerank module has partially played the role of the gate network):

$H_c = [h^x_1; \cdots; h^x_{-1}; h^r_1; \cdots; h^r_{-1}]$ (5)

The combined hidden states are fed into the prevailing attentional RNN decoder (Bahdanau et al., 2014) to generate the decoding hidden state at position t:

$s_t = \text{Att-RNN}(s_{t-1}, y_{t-1}, H_c)$, (6)

where $y_{t-1}$ is the previous output summary word. Finally, a softmax layer is introduced to predict the current summary word:

$o_t = \mathrm{softmax}(s_t W_o)$, (7)

where $W_o$ is a parameter matrix.

2.3 Learning

There are two types of costs in our system. For Rerank, we expect the predicted saliency s(r, x) to be close to the actual saliency s*(r, y*). Therefore, we use the cross entropy (CE) between s and s* as the loss function:

$J_R(\theta) = \mathrm{CE}(s(r, x), s^*(r, y^*)) = -s^* \log s - (1 - s^*) \log(1 - s)$, (8)

where $\theta$ stands for the model parameters.

Figure 2: Jointly Rerank and Rewrite.

For Rewrite, the learning goal is to maximize the estimated probability of the actual summary y*. We adopt the common negative log-likelihood (NLL) as the loss function:

$J_G(\theta) = -\log p(y^* \mid x, r) = -\sum_t \log(o_t[y^*_t])$ (9)

To make full use of the supervision from both sides, we combine the above two costs as the final loss function:

$J(\theta) = J_R(\theta) + J_G(\theta)$ (10)

We use mini-batch Stochastic Gradient Descent (SGD) to tune the model parameters. The batch size is 64. To enhance generalization, we introduce dropout (Srivastava et al., 2014) with probability p = 0.3 for the RNN layers. The initial learning rate is 1, and it decays by 50% if the generation loss does not decrease on the validation set.

3 Experiments

3.1 Datasets

We conduct experiments on the Annotated English Gigaword corpus, as in Rush et al. (2015b). This parallel corpus is produced by pairing the first sentence of a news article with its headline as the summary, using heuristic rules. All the training, development and test datasets can be downloaded at https://github.com/harvardnlp/sent-summary. The statistics of the Gigaword corpus are presented in Table 1.

Dataset        Train  Dev.   Test
Count          3.8M   189k   1951
AvgSourceLen   31.4   31.7   29.7
AvgTargetLen   8.3    8.3    8.8
COPY (%)       45     46     36

Table 1: Data statistics for English Gigaword. AvgSourceLen is the average input sentence length and AvgTargetLen is the average summary length. COPY is the copy ratio in the summaries (without stopwords).

3.2 Evaluation Metrics

We adopt ROUGE (Lin, 2004) for automatic evaluation. ROUGE has been the standard evaluation metric for the DUC shared tasks since 2004. It measures the quality of a summary by computing the lexical units overlapping between the candidate summary and the actual summaries, such as uni-grams, bi-grams and the longest common subsequence (LCS). Following common practice, we report ROUGE-1 (uni-gram), ROUGE-2 (bi-gram) and ROUGE-L (LCS) F1 scores in the following experiments. We also measure the actual saliency of a candidate template r by its combined ROUGE scores given the actual summary y*:

$s^*(r, y^*) = \text{RG-1}(r, y^*) + \text{RG-2}(r, y^*)$, (11)

where "RG" stands for ROUGE. ROUGE mainly evaluates informativeness.
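A simplified, self-contained version of this saliency computation might look as follows; plain n-gram F1 is used here in place of the official ROUGE script, which is an approximation, and the example strings are taken from Table 7.

```python
# Simplified stand-in for Equation (11): s*(r, y*) ~ ROUGE-1(r, y*) + ROUGE-2(r, y*),
# approximated with plain uni-gram and bi-gram F1 scores.
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_f1(candidate, reference, n):
    c, r = ngrams(candidate.split(), n), ngrams(reference.split(), n)
    overlap = sum((c & r).values())
    if not overlap:
        return 0.0
    p, rec = overlap / sum(c.values()), overlap / sum(r.values())
    return 2 * p * rec / (p + rec)

def actual_saliency(template, gold_summary):
    return ngram_f1(template, gold_summary, 1) + ngram_f1(template, gold_summary, 2)

print(actual_saliency("grid positions for british grand prix",
                      "indonesian motorcycle grand prix grid positions"))
```

This score is what the Rerank module is trained to predict from hidden states alone, i.e., without access to the gold summary at test time.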
We also introduce a series of metrics to measure the summary quality from the following aspects: LEN DIF The absolute value of the length difference between the generated summaries and the actual summaries. We use mean value ± standard deviation to illustrate this item. The average value partially reflects the readability and informativeness, while the standard deviation links to stability. 4We use the ROUGE evaluation option: -m -n 2 -w 1.2 156 LESS 3 The number of the generated summaries, which contains less than three tokens. These extremely short summaries are usually unreadable. COPY The proportion of the summary words (without stopwords) copied from the source sentence. A seriously large copy ratio indicates that the summarization system pays more attention to compression rather than required abstraction. NEW NE The number of the named entities that do not appear in the source sentence or actual summary. Intuitively, the appearance of new named entities in the summary is likely to bring unfaithfulness. We use Stanford CoreNLP (Manning et al., 2014) to recognize named entities. 3.3 Implementation Details We use the popular seq2seq framework OpenNMT5 as the starting point. To make our model more general, we retain the default settings of OpenNMT to build the network architecture. Specifically, the dimensions of word embeddings and RNN are both 500, and the encoder and decoder structures are two-layer bidirectional Long Short Term Memory Networks (LSTMs). The only difference is that we add the argument “share embeddings” to share the word embeddings between the encoder and decoder. This practice largely reduces model parameters for the monolingual task. On our computer (GPU: GTX 1080, Memory: 16G, CPU: i7-7700K), the training spends about 2 days. During test, we use beam search of size 5 to generate summaries. We add the argument “replace unk” to replace the generated unknown words with the source word that holds the highest attention weight. Since the generated summaries are often shorter than the actual ones, we introduce an additional length penalty argument “alpha 1” to encourage longer generation, like Wu et al. (2016). 3.4 Baselines We compare our proposed model with the following state-of-the-art neural summarization systems: ABS Rush et al. (2015a) used an attentive CNN encoder and a NNLM decoder to summarize 5https://github.com/OpenNMT/OpenNMT-py the sentence. ABS+ Rush et al. (2015a) further tuned the ABS model with additional hand-crafted features to balance between abstraction and extraction. RAS-Elman As the extension of the ABS model, it used a convolutional attention-based encoder and a RNN decoder (Chopra et al., 2016). Featseq2seq Nallapati et al. (2016) used a complete seq2seq RNN model and added the hand-crafted features such as POS tag and NER, to enhance the encoder representation. Luong-NMT Chopra et al. (2016) implemented the neural machine translation model of Luong et al. (2015) for summarization. This model contained two-layer LSTMs with 500 hidden units in each layer. OpenNMT We also implement the standard attentional seq2seq model with OpenNMT. All the settings are the same as our system. It is noted that OpenNMT officially examined the Gigaword dataset. We distinguish the official result6 and our experimental result with suffixes “O” and “I” respectively. FTSum Cao et al. (2017b) encoded the facts extracted from the source sentence to improve both the faithfulness and informativeness of generated summaries. 
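Returning briefly to the linguistic-quality metrics defined in Section 3.2, they are straightforward to compute; the sketch below is a simplified illustration in which whitespace tokenization and the tiny stopword list are assumptions, and NEW NE would additionally require an NER tool such as Stanford CoreNLP.

```python
# Simplified sketch of the LEN DIF, LESS 3 and COPY metrics from Section 3.2.
import statistics

STOPWORDS = {"the", "a", "an", "of", "on", "in", "to", "for", "and"}  # illustrative list

def quality_metrics(generated, references, sources):
    len_dif = [abs(len(g.split()) - len(r.split())) for g, r in zip(generated, references)]
    less_3 = sum(len(g.split()) < 3 for g in generated)
    copy_ratios = []
    for g, s in zip(generated, sources):
        content = [w for w in g.split() if w not in STOPWORDS]
        if content:
            src = set(s.split())
            copy_ratios.append(sum(w in src for w in content) / len(content))
    return {
        "LEN_DIF": (statistics.mean(len_dif), statistics.pstdev(len_dif)),
        "LESS_3": less_3,
        "COPY(%)": 100 * statistics.mean(copy_ratios) if copy_ratios else 0.0,
    }

print(quality_metrics(["stocks rise sharply"],
                      ["european stocks bounce back"],
                      ["european stock markets advanced strongly thursday"]))
```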
In addition, to evaluate the effectiveness of our joint learning framework, we develop a baseline named “PIPELINE”. Its architecture is identical to Re3Sum. However, it trains the Rerank module and Rewrite module in pipeline. 3.5 Informativeness Evaluation Model Perplexity ABS† 27.1 RAS-Elman† 18.9 FTSum† 16.4 OpenNMTI 13.2 PIPELINE 12.5 Re3Sum 12.9 Table 2: Final perplexity on the development set. † indicates the value is cited from the corresponding paper. ABS+, Featseq2seq and Luong-NMT do not provide this value. Let’s first look at the final cost values (Eq. 9) on the development set. From Table 2, we can 6http://opennmt.net/Models/ 157 Model RG-1 RG-2 RG-L ABS† 29.55∗ 11.32∗ 26.42∗ ABS+† 29.78∗ 11.89∗ 26.97∗ Featseq2seq† 32.67∗ 15.59∗ 30.64∗ RAS-Elman† 33.78∗ 15.97∗ 31.15∗ Luong-NMT† 33.10∗ 14.45∗ 30.71∗ FTSum† 37.27 17.65∗ 34.24 OpenNMT† O 33.13∗ 16.09∗ 31.00∗ OpenNMTI 35.01∗ 16.55∗ 32.42∗ PIPELINE 36.49 17.48∗ 33.90 Re3Sum 37.04 19.03 34.46 Table 3: ROUGE F1 (%) performance. “RG” represents “ROUGE” for short. “∗” indicates statistical significance of the corresponding model with respect to the baseline model on the 95% confidence interval in the official ROUGE script. Type RG-1 RG-2 RG-L Random 2.81 0.00 2.72 First 24.44 9.63 22.05 Max 38.90 19.22 35.54 Optimal 52.91 31.92 48.63 Rerank 28.77 12.49 26.40 Table 4: ROUGE F1 (%) performance of different types of soft templates. see that our model achieves much lower perplexity compared against the state-of-the-art systems. It is also noted that PIPELINE slightly outperforms Re3Sum. One possible reason is that Re3Sum additionally considers the cost derived from the Rerank module. The ROUGE F1 scores of different methods are then reported in Table 3. As can be seen, our model significantly outperforms most other approaches. Note that, ABS+ and Featseq2seq have utilized a series of hand-crafted features, but our model is completely data-driven. Even though, our model surpasses Featseq2seq by 22% and ABS+ by 60% on ROUGE-2. When soft templates are ignored, our model is equivalent to the standard atItem Template OpenNMT Re3Sum LEN DIF 2.6±2.6 3.0±4.4 2.7±2.6 LESS 3 0 53 1 COPY(%) 31 80 74 NEW NE 0.51 0.34 0.30 Table 5: Statistics of different types of summaries. Type RG-1 RG-2 RG-L +Random 32.60 14.31 30.19 +First 36.01 17.06 33.21 +Max 41.50 21.97 38.80 +Optimal 46.21 26.71 43.19 +Rerank(Re3Sum) 37.04 19.03 34.46 Table 6: ROUGE F1 (%) performance of Re3Sum generated with different soft templates. tentional seq2seq model OpenNMTI. Therefore, it is safe to conclude that soft templates have great contribute to guide the generation of summaries. We also examine the performance of directly regarding soft templates as output summaries. We introduce five types of different soft templates: Random An existing summary randomly selected from the training corpus. First The top-ranked candidate template given by the Retrieve module. Max The template with the maximal actual ROUGE scores among the 30 candidate templates. Optimal An existing summary in the training corpus which holds the maximal ROUGE scores. Rerank The template with the maximal predicted ROUGE scores among the 30 candidate templates. It is the actual soft template we adopt. As shown in Table 4, the performance of Random is terrible, indicating it is impossible to use one summary template to fit various actual summaries. Rerank largely outperforms First, which verifies the effectiveness of the Rerank module. 
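The template types compared in Table 4 differ only in how a template is selected from the retrieved candidates; a minimal sketch of the First/Max/Rerank selections is given below, where the scoring functions are injected and the lambdas in the demo are trivial placeholders standing in for the combined ROUGE of Equation (11) and the predicted saliency of Equation (4).

```python
# How the First, Max and Rerank templates of Table 4 are obtained from candidates.
def select_templates(candidates, gold_summary, actual_saliency, predicted_saliency):
    return {
        "First": candidates[0],                                          # top IR hit
        "Max": max(candidates, key=lambda r: actual_saliency(r, gold_summary)),
        "Rerank": max(candidates, key=predicted_saliency),               # used at test time
    }

cands = ["grid positions for british grand prix", "skorean schools to ban soda junk food"]
print(select_templates(cands, "indonesian motorcycle grand prix grid positions",
                       actual_saliency=lambda r, y: len(set(r.split()) & set(y.split())),
                       predicted_saliency=lambda r: -len(r.split())))
```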
However, according to Max and Rerank, we find the Rerank performance of Re3Sum is far from perfect. Likewise, comparing Max and First, we observe that the improving capacity of the Retrieve module is high. Notice that Optimal greatly exceeds all the state-of-the-art approaches. This finding strongly supports our practice of using existing summaries to guide the seq2seq models. 3.6 Linguistic Quality Evaluation We also measure the linguistic quality of generated summaries from various aspects, and the results are present in Table 5. As can be seen from the rows “LEN DIF” and “LESS 3”, the performance of Re3Sum is almost the same as that of soft templates. The soft templates indeed well guide the summary generation. Compared with 158 Source grid positions after the final qualifying session in the indonesian motorcycle grand prix at the sentul circuit , west java , saturday : UNK Target indonesian motorcycle grand prix grid positions Template grid positions for british grand prix OpenNMT circuit Re3Sum grid positions for indonesian grand prix Source india ’s children are getting increasingly overweight and unhealthy and the government is asking schools to ban junk food , officials said thursday . Target indian government asks schools to ban junk food Template skorean schools to ban soda junk food OpenNMT india ’s children getting fatter Re3Sum indian schools to ban junk food Table 7: Examples of generated summaries. We use Bold font to indicate the crucial rewriting behavior from the templates to generated summaries. Re3Sum, the standard deviation of LEN DF is 0.7 times larger in OpenNMT, indicating that OpenNMT works quite unstably. Moreover, OpenNMT generates 53 extreme short summaries, which seriously reduces readability. Meanwhile, the copy ratio of actual summaries is 36%. Therefore, the copy mechanism is severely overweighted in OpenNMT. Our model is encouraged to generate according to human-written soft templates, which relatively diminishes copying from the source sentences. Look at the last row “NEW NE”. A number of new named entities appear in the soft templates, which makes them quite unfaithful to source sentences. By contrast, this index in Re3Sum is close to the OpenNMT’s. It highlights the rewriting ability of our seq2seq framework. 3.7 Effect of Templates In this section, we investigate how soft templates affect our model. At the beginning, we feed different types of soft templates (refer to Table 4) into the Rewriting module of Re3Sum. As illustrated in Table 6, the more high-quality templates are provided, the higher ROUGE scores are achieved. It is interesting to see that,while the ROUGE-2 score of Random templates is zero, our model can still generate acceptable summaries with Random templates. It seems that Re3Sum can automatically judge whether the soft templates are trustworthy and ignore the seriously irrelevant ones. We believe that the joint learning with the Rerank model plays a vital role here. Next, we manually inspect the summaries generated by different methods. We find the outputs of Re3Sum are usually longer and more fluent than the outputs of OpenNMT. Some illustrative examples are shown in Table 7. In Example 1, there is no predicate in the source sentence. Since OpenNMT prefers selecting source words around the predicate to form the summary, it fails on this sentence. By contract, Re3Sum rewrites the template and produces an informative summary. 
In Example 2, OpenNMT deems the starting part of the sentences are more important, while our model, guided by the template, focuses on the second part to generate the summary. In the end, we test the ability of our model to generate diverse summaries. In practice, a system that can provide various candidate summaries is probably more welcome. Specifically, two candidate templates with large text dissimilarity are manually fed into the Rewriting module. The corresponding generated summaries are shown in Table 8. For the sake of comparison, we also present the 2-best results of OpenNMT with beam search. As can be seen, with different templates given, our model is likely to generate dissimilar summaries. In contrast, the 2-best results of OpenNMT is almost the same, and often a shorter summary is only a piece of the other one. To sum up, our model demonstrates promising prospect in generation diversity. 4 Related Work Abstractive sentence summarization aims to produce a shorter version of a given sentence while preserving its meaning (Chopra et al., 2016). This task is similar to text simplification (Saggion, 2017) and facilitates headline design and refine. Early studies on sentence summariza159 Source anny ainge said thursday he had two one-hour meetings with the new owners of the boston celtics but no deal has been completed for him to return to the franchise . Target ainge says no deal completed with celtics Templates major says no deal with spain on gibraltar roush racing completes deal with red sox owner Re3Sum ainge says no deal done with celtics ainge talks with new owners OpenNMT ainge talks with celtics owners ainge talks with new owners Source european stock markets advanced strongly thursday on some bargain-hunting and gains by wall street and japanese shares ahead of an expected hike in us interest rates . Target european stocks bounce back UNK UNK with closing levels Templates european stocks bounce back strongly european shares sharply lower on us interest rate fears Re3Sum european stocks bounce back strongly european shares rise strongly on bargain-hunting OpenNMT european stocks rise ahead of expected us rate hike hike european stocks rise ahead of us rate hike Table 8: Examples of generation with diversity. We use Bold font to indicate the difference between two summaries tion include template-based methods (Zhou and Hovy, 2004), syntactic tree pruning (Knight and Marcu, 2002; Clarke and Lapata, 2008) and statistical machine translation techniques (Banko et al., 2000). Recently, the application of the attentional seq2seq framework has attracted growing attention and achieved state-of-the-art performance on this task (Rush et al., 2015a; Chopra et al., 2016; Nallapati et al., 2016). In addition to the direct application of the general seq2seq framework, researchers attempted to integrate various properties of summarization. For example, Nallapati et al. (2016) enriched the encoder with hand-crafted features such as named entities and POS tags. These features have played important roles in traditional feature based summarization systems. Gu et al. (2016) found that a large proportion of the words in the summary were copied from the source text. Therefore, they proposed CopyNet which considered the copying mechanism during generation. Recently, See et al. (2017) used the coverage mechanism to discourage repetition. Cao et al. (2017b) encoded facts extracted from the source sentence to enhance the summary faithfulness. 
There were also studies to modify the loss function to fit the evaluation metrics. For instance, Ayana et al. (2016) applied the Minimum Risk Training strategy to maximize the ROUGE scores of generated summaries. Paulus et al. (2017) used the reinforcement learning algorithm to optimize a mixed objective function of likelihood and ROUGE scores. Guu et al. (2017) also proposed to encode human-written sentences to improvement the performance of neural text generation. However, they handled the task of Language Modeling and randomly picked an existing sentence in the training corpus. In comparison, we develop an IR system to find proper existing summaries as soft templates. Moreover, Guu et al. (2017) used a general seq2seq framework while we extend the seq2seq framework to conduct template reranking and template-aware summary generation simultaneously. 5 Conclusion and Future Work This paper proposes to introduce soft templates as additional input to guide the seq2seq summarization. We use the popular IR platform Lucene to retrieve proper existing summaries as candidate soft templates. Then we extend the seq2seq framework to jointly conduct template reranking and template-aware summary generation. Experiments show that our model can generate informative, readable and stable summaries. In addition, our model demonstrates promising prospect in generation diversity. We believe our work can be extended in vari160 ous aspects. On the one hand, since the candidate templates are far inferior to the optimal ones, we intend to improve the Retrieve module, e.g., by indexing both the sentence and summary fields. On the other hand, we plan to test our system on the other tasks such as document-level summarization and short text conversation. Acknowledgments The work described in this paper was supported by Research Grants Council of Hong Kong (PolyU 152036/17E), National Natural Science Foundation of China (61672445 and 61572049) and The Hong Kong Polytechnic University (G-YBP6, 4BCDV). References Shiqi Shen Ayana, Zhiyuan Liu, and Maosong Sun. 2016. Neural headline generation with minimum risk training. arXiv preprint arXiv:1604.01904. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Michele Banko, Vibhu O Mittal, and Michael J Witbrock. 2000. Headline generation based on statistical translation. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, pages 318–325. Association for Computational Linguistics. Ziqiang Cao, Chuwei Luo, Wenjie Li, and Sujian Li. 2017a. Joint copying and restricted generation for paraphrase. In AAAI, pages 3152–3158. Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2017b. Faithful to the original: Fact aware neural abstractive summarization. arXiv preprint arXiv:1711.04434. Danqi Chen, Jason Bolton, and Christopher D Manning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. arXiv preprint arXiv:1606.02858. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. Sumit Chopra, Michael Auli, Alexander M Rush, and SEAS Harvard. 2016. Abstractive sentence summarization with attentive recurrent neural networks. Proceedings of NAACL-HLT16, pages 93–98. James Clarke and Mirella Lapata. 2008. 
Global inference for sentence compression: An integer linear programming approach. Journal of Artificial Intelligence Research, 31:399–429. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. arXiv preprint arXiv:1603.06393. Kelvin Guu, Tatsunori B Hashimoto, Yonatan Oren, and Percy Liang. 2017. Generating sentences by editing prototypes. arXiv preprint arXiv:1709.08878. Zongcheng Ji, Zhengdong Lu, and Hang Li. 2014. An information retrieval approach to short text conversation. arXiv preprint arXiv:1408.6988. Kevin Knight and Daniel Marcu. 2002. Summarization beyond sentence extraction: A probabilistic approach to sentence compression. Artificial Intelligence, 139(1):91–107. Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. arXiv preprint arXiv:1706.03872. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Proceedings of the ACL Workshop, pages 74–81. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. arXiv preprint arXiv:1508.04025. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of ACL: System Demonstrations, pages 55–60. Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023. Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304. Alexander M Rush, Sumit Chopra, and Jason Weston. 2015a. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015b. A neural attention model for abstractive sentence summarization. In Proceedings of EMNLP, pages 379–389. Horacio Saggion. 2017. Automatic text simplification. Synthesis Lectures on Human Language Technologies, 10(1):1–137. 161 Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Liang Zhou and Eduard Hovy. 2004. Template-filtered headline summarization. Text Summarization Branches Out.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1616–1626 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1616 A Graph-to-Sequence Model for AMR-to-Text Generation Linfeng Song1, Yue Zhang3, Zhiguo Wang2 and Daniel Gildea1 1Department of Computer Science, University of Rochester, Rochester, NY 14627 2IBM T.J. Watson Research Center, Yorktown Heights, NY 10598 3Singapore University of Technology and Design Abstract The problem of AMR-to-text generation is to recover a text representing the same meaning as an input AMR graph. The current state-of-the-art method uses a sequence-to-sequence model, leveraging LSTM for encoding a linearized AMR structure. Although it is able to model non-local semantic information, a sequence LSTM can lose information from the AMR graph structure, and thus faces challenges with large graphs, which result in long sequences. We introduce a neural graph-to-sequence model, using a novel LSTM structure for directly encoding graph-level semantics. On a standard benchmark, our model shows superior results to existing methods in the literature. 1 Introduction Abstract Meaning Representation (AMR) (Banarescu et al., 2013) is a semantic formalism that encodes the meaning of a sentence as a rooted, directed graph. Figure 1 shows an AMR graph in which the nodes (such as “describe-01” and “person”) represent the concepts, and edges (such as “:ARG0” and “:name”) represent the relations between concepts they connect. AMR has been proven helpful on other NLP tasks, such as machine translation (Jones et al., 2012; Tamchyna et al., 2015), question answering (Mitra and Baral, 2015), summarization (Takase et al., 2016) and event detection (Li et al., 2015). The task of AMR-to-text generation is to produce a text with the same meaning as a given input AMR graph. The task is challenging as word tenses and function words are abstracted away when constructing AMR graphs from texts. The translation from AMR nodes to text phrases can :name :ARG0 describe-01 name person "Ryan" :op1 :ARG1 genius :ARG2 Figure 1: An example of AMR graph meaning “Ryan’s description of himself: a genius.” be far from literal. For example, shown in Figure 1, “Ryan” is represented as “(p / person :name (n / name :op1 “Ryan”))”, and “description of” is represented as “(d / describe-01 :ARG1 )”. While initial work used statistical approaches (Flanigan et al., 2016b; Pourdamghani et al., 2016; Song et al., 2017; Lampouras and Vlachos, 2017; Mille et al., 2017; Gruzitis et al., 2017), recent research has demonstrated the success of deep learning, and in particular the sequence-to-sequence model (Sutskever et al., 2014), which has achieved the state-of-the-art results on AMR-to-text generation (Konstas et al., 2017). One limitation of sequence-to-sequence models, however, is that they require serialization of input AMR graphs, which adds to the challenge of representing graph structure information, especially when the graph is large. In particular, closely-related nodes, such as parents, children and siblings can be far away after serialization. It can be difficult for a linear recurrent neural network to automatically induce their original connections from bracketed string forms. To address this issue, we introduce a novel graph-to-sequence model, where a graph-state LSTM is used to encode AMR structures directly. 
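For concreteness, the AMR of Figure 1 can be written down as a set of concept nodes plus labeled, directed edges, which is the node/edge view the graph encoder introduced below operates on; this is a minimal sketch, with variable names chosen for illustration and edge roles following the linearization given in Section 2.1.

```python
# The AMR of Figure 1 ("Ryan's description of himself: a genius") as concept
# nodes and labeled, directed edges (source, label, target).
nodes = {
    "d": "describe-01",
    "p": "person",
    "n": "name",
    "r": '"Ryan"',
    "g": "genius",
}
edges = [
    ("d", "ARG0", "p"),   # the describer
    ("d", "ARG1", "p"),   # the described (re-entrance: the same person node)
    ("d", "ARG2", "g"),   # the content of the description
    ("p", "name", "n"),
    ("n", "op1", "r"),
]
# Incoming/outgoing adjacency, the information used later for state transitions.
incoming = {v: [(s, l) for s, l, t in edges if t == v] for v in nodes}
outgoing = {v: [(t, l) for s, l, t in edges if s == v] for v in nodes}
print(incoming["p"])   # [('d', 'ARG0'), ('d', 'ARG1')]
```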
1617 To capture non-local information, the encoder performs graph state transition by information exchange between connected nodes, with a graph state consisting of all node states. Multiple recurrent transition steps are taken so that information can propagate non-locally, and LSTM (Hochreiter and Schmidhuber, 1997) is used to avoid gradient diminishing and bursting in the recurrent process. The decoder is an attention-based LSTM model with a copy mechanism (Gu et al., 2016; Gulcehre et al., 2016), which helps copy sparse tokens (such as numbers and named entities) from the input. Trained on a standard dataset (LDC2015E86), our model surpasses a strong sequence-tosequence baseline by 2.3 BLEU points, demonstrating the advantage of graph-to-sequence models for AMR-to-text generation compared to sequence-to-sequence models. Our final model achieves a BLEU score of 23.3 on the test set, which is 1.3 points higher than the existing state of the art (Konstas et al., 2017) trained on the same dataset. When using gigaword sentences as additional training data, our model is consistently better than Konstas et al. (2017) using the same amount of gigaword data, showing the effectiveness of our model on large-scale training set. We release our code and models at https: //github.com/freesunshine0316/ neural-graph-to-seq-mp. 2 Baseline: a seq-to-seq model Our baseline is a sequence-to-sequence model, which follows the encoder-decoder framework of Konstas et al. (2017). 2.1 Input representation Given an AMR graph G = (V, E), where V and E denote the sets of nodes and edges, respectively, we use the depth-first traversal of Konstas et al. (2017) to linearize it to obtain a sequence of tokens v1, . . . , vN, where N is the number of tokens. For example, the AMR graph in Figure 1 is serialized as “describe :arg0 ( person :name ( name :op1 ryan ) ) :arg1 person :arg2 genius”. We can see that the distance between “describe” and “genius”, which are directly connected in the original AMR, becomes 14 in the serialization result. A simple way to calculate the representation for each token vj is using its word embedding ej: xj = W1ej + b1, (1) where W1 and b1 are model parameters for compressing the input vector size. To alleviate the data sparsity problem and obtain better word representation as the input, we also adopt a forward LSTM over the characters of the token, and concatenate the last hidden state hc j with the word embedding: xj = W1  [ej; hc j]  + b1 (2) 2.2 Encoder The encoder is a bi-directional LSTM applied on the linearized graph by depth-first traversal, as in Konstas et al. (2017). At each step j, the current states ←hj and → hj are generated given the previous states ←hj+1 and → hj 1 and the current input xj: ←hj = LSTM(←hj+1, xj) → hj = LSTM(→ hj 1, xj) 2.3 Decoder We use an attention-based LSTM decoder (Bahdanau et al., 2015), where the attention memory (A) is the concatenation of the attention vectors among all input words. Each attention vector aj is the concatenation of the encoder states of an input token in both directions (←hj and → hj) and its input vector (xj): aj = [←hj; → hj; xj] (3) A = [a1; a2; . . . ; aN] (4) where N is the number of input tokens. The decoder yields an output sequence w1, w2, . . . , wM by calculating a sequence of hidden states s1, s2 . . . , sM recurrently. 
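The character-enhanced token representation of Equation 2 above can be sketched in PyTorch as follows; the layer sizes, class name, and the use of PyTorch are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of Equation 2: x_j = W1 [e_j ; h^c_j] + b1, where h^c_j is the last
# hidden state of a character LSTM run over the token's characters.
import torch
import torch.nn as nn

class TokenRepresentation(nn.Module):
    def __init__(self, n_words, n_chars, word_dim=300, char_dim=100, out_dim=300):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, word_dim)
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, char_dim, batch_first=True)
        self.proj = nn.Linear(word_dim + char_dim, out_dim)   # W1 [e_j ; h^c_j] + b1

    def forward(self, word_ids, char_ids):
        # word_ids: (num_tokens,)   char_ids: (num_tokens, max_chars)
        e = self.word_emb(word_ids)                    # (num_tokens, word_dim)
        _, (h_c, _) = self.char_lstm(self.char_emb(char_ids))
        h_c = h_c[-1]                                  # last hidden state, (num_tokens, char_dim)
        return self.proj(torch.cat([e, h_c], dim=-1))  # (num_tokens, out_dim)

rep = TokenRepresentation(n_words=1000, n_chars=50)
x = rep(torch.tensor([3, 7]), torch.randint(0, 50, (2, 20)))
print(x.shape)   # torch.Size([2, 300])
```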
While generating the t-th word, the decoder considers five factors: (1) the attention memory A; (2) the previous hidden state of the LSTM model st 1; (3) the embedding of the current input (previously generated word) et; (4) the previous context vector µt 1, which is calculated with attention from A; and (5) the previous coverage vector γt 1, which is the accumulation of all attention distributions so far (Tu et al., 2016). When t = 1, we initialize µ0 and γ0 as zero vectors, set e1 to the embedding of the start token “<s>”, and s0 as the average of all encoder states. For each time-step t, the decoder feeds the concatenation of the embedding of the current input et and the previous context vector µt 1 into the 1618 Time Figure 2: Graph state LSTM. LSTM model to update its hidden state. Then the attention probability αt,i on the attention vector ai ∈A for the time-step is calculated as: ϵt,i = vT 2 tanh(Waai + Wsst + Wγγt 1 + b2) αt,i = exp(ϵt,i) PN j=1 exp(ϵt,j) where Wa, Ws, Wγ, v2 and b2 are model parameters. The coverage vector γt is updated by γt = γt 1 + αt, and the new context vector µt is calculated via µt = PN i=1 αt,iai. The output probability distribution over a vocabulary at the current state is calculated by: Pvocab = softmax(V3[st, µt] + b3), (5) where V3 and b3 are learnable parameters, and the number of rows in V3 represents the number of words in the vocabulary. 3 The graph-to-sequence model Unlike the baseline sequence-to-sequence model, we leverage a recurrent graph encoder to represent each input AMR, which directly models the graph structure without serialization. 3.1 The graph encoder Figure 2 shows the overall structure of our graph encoder. Formally, given a graph G = (V, E), we use a hidden state vector hj to represent each node vj ∈V . The state of the graph can thus be represented as: g = {hj}|vj∈V In order to capture non-local interaction between nodes, we allow information exchange between nodes through a sequence of state transitions, leading to a sequence of states g0, g1, . . . , gt, . . . , where gt = {hj t}|vj∈V . The initial state g0 consists of a set of initial node states hj 0 = h0, where h0 is a hyperparameter of the model. State transition A recurrent neural network is used to model the state transition process. In particular, the transition from gt 1 to gt consists of a hidden state transition for each node, as shown in Figure 2. At each state transition step t, we allow direct communication between a node and all nodes that are directly connected to the node. To avoid gradient diminishing or bursting, LSTM (Hochreiter and Schmidhuber, 1997) is adopted, where a cell cj t is taken to record memory for hj t. We use an input gate ij t, an output gate oj t and a forget gate fj t to control information flow from the inputs and to the output hj t. The inputs include representations of edges that are connected to vj, where vj can be either the source or the target of the edge. We define each edge as a triple (i, j, l), where i and j are indices of the source and target nodes, respectively, and l is the edge label. xl i,j is the representation of edge (i, j, l), detailed in Section 3.3. The inputs for vj are distinguished by incoming and outgoing edges, before being summed up: xi j = X (i,j,l)∈Ein(j) xl i,j xo j = X (j,k,l)∈Eout(j) xl j,k, where Ein(j) and Eout(j) denote the sets of incoming and outgoing edges of vj, respectively. 
In addition to edge inputs, a cell also takes the hidden states of its incoming nodes and outgoing nodes during a state transition. In particular, the states of all incoming nodes and outgoing nodes are summed up before being passed to the cell and gate nodes: hi j = X (i,j,l)∈Ein(j) hi t 1 ho j = X (j,k,l)∈Eout(j) hk t 1, Based on the above definitions of xi j, xo j, hi j and ho j, the state transition from gt 1 to gt, as repre1619 sented by hj t, can be defined as: ij t = σ(Wixi j + ˆ Wixo j + Uihi j + ˆUiho j + bi), oj t = σ(Woxi j + ˆ Woxo j + Uohi j + ˆ Uoho j + bo), fj t = σ(Wfxi j + ˆ Wfxo j + Ufhi j + ˆ Ufho j + bf), uj t = σ(Wuxi j + ˆ Wuxo j + Uuhi j + ˆ Uuho j + bu), cj t = fj t ⊙cj t 1 + ij t ⊙uj t, hj t = oj t ⊙tanh(cj t), where ij t, oj t and fj t are the input, output and forget gates mentioned earlier. Wx, ˆWx, Ux, ˆUx, bx, where x ∈{i, o, f, u}, are model parameters. 3.2 Recurrent steps Using the above state transition mechanism, information from each node propagates to all its neighboring nodes after each step. Therefore, for the worst case where the input graph is a chain of nodes, the maximum number of steps necessary for information from one arbitrary node to reach another is equal to the size of the graph. We experiment with different transition steps to study the effectiveness of global encoding. Note that unlike the sequence LSTM encoder, our graph encoder allows parallelization in nodestate updates, and thus can be highly efficient using a GPU. It is general and can be potentially applied to other tasks, including sequences, syntactic trees and cyclic structures. 3.3 Input Representation Different from sequences, the edges of an AMR graph contain labels, which represent relations between the nodes they connect, and are thus important for modeling the graphs. Similar with Section 2, we adopt two different ways for calculating the representation for each edge (i, j, l): xl i,j = W4  [el; ei]  + b4 (6) xl i,j = W4  [el; ei; hc i]  + b4, (7) where el and ei are the embeddings of edge label l and source node vi, hc i denotes the last hidden state of the character LSTM over vi, and W4 and b4 are trainable parameters. The equations correspond to Equations 1 and 2 in Section 2.1, respectively. 3.4 Decoder We adopt the attention-based LSTM decoder as described in Section 2.3. Since our graph encoder generates a sequence of graph states, only the last graph state is adopted in the decoder. In particular, we make the following changes to the decoder. First, each attention vector becomes aj = [hj T ; xj], where hj T is the last state for node vj. Second, the decoder initial state s 1 is the average of the last states of all nodes. 3.5 Integrating the copy mechanism Open-class tokens, such as dates, numbers and named entities, account for a large portion in the AMR corpus. Most appear only a few times, resulting in a data sparsity problem. To address this issue, Konstas et al. (2017) adopt anonymization for dealing with the data sparsity problem. In particular, they first replace the subgraphs that represent dates, numbers and named entities (such as “(q / quantity :quant 3)” and “(p / person :name (n / name :op1 “Ryan”))”) with predefined placeholders (such as “num 0” and “person name 0”) before decoding, and then recover the corresponding surface tokens (such as “3” and “Ryan”) after decoding. This method involves hand-crafted rules, which can be costly. 
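To make the graph state transition of Section 3.1 concrete, one update step over the small example graph shown earlier can be sketched in numpy as follows; the edge representations and parameters are random, the hidden size is tiny, and this is a simplified single-graph sketch rather than the released implementation.

```python
# One state transition step of the graph state LSTM (Section 3.1), following the
# equations as given: the candidate update u uses a sigmoid, like the gates.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                     # hidden size (small, for illustration)
nodes = ["d", "p", "n", "r", "g"]
edges = [("d", "ARG0", "p"), ("d", "ARG1", "p"), ("d", "ARG2", "g"),
         ("p", "name", "n"), ("n", "op1", "r")]

edge_repr = {e: rng.normal(size=d) for e in edges}   # x^l_{i,j}; in the model these come
                                                     # from the edge label and source node
h = {v: np.zeros(d) for v in nodes}                  # node states at step t-1
c = {v: np.zeros(d) for v in nodes}                  # cell memories

# One (W, W_hat, U, U_hat, b) set per gate g in {i, o, f, u}, randomly initialized.
P = {g: [rng.normal(scale=0.1, size=(d, d)) for _ in range(4)] + [np.zeros(d)]
     for g in "iofu"}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step(h, c):
    new_h, new_c = {}, {}
    for j in nodes:
        # Sums over incoming/outgoing edge representations and neighbor states.
        x_in  = sum((edge_repr[e] for e in edges if e[2] == j), np.zeros(d))
        x_out = sum((edge_repr[e] for e in edges if e[0] == j), np.zeros(d))
        h_in  = sum((h[e[0]] for e in edges if e[2] == j), np.zeros(d))
        h_out = sum((h[e[2]] for e in edges if e[0] == j), np.zeros(d))
        gate = {g: sigmoid(W @ x_in + Wh @ x_out + U @ h_in + Uh @ h_out + b)
                for g, (W, Wh, U, Uh, b) in P.items()}
        new_c[j] = gate["f"] * c[j] + gate["i"] * gate["u"]
        new_h[j] = gate["o"] * np.tanh(new_c[j])
    return new_h, new_c

for _ in range(9):                        # 9 transition steps, as in the experiments
    h, c = step(h, c)
print(h["d"].shape)                       # (8,)
```

All node states are updated simultaneously within a step, which is what allows the parallelization on GPUs noted in Section 3.2.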
Copy We find that most of the open-class tokens in a graph also appear in the corresponding sentence, and thus adopt the copy mechanism (Gulcehre et al., 2016; Gu et al., 2016) to solve this problem. The mechanism works on top of an attention-based RNN decoder by integrating the attention distribution into the final vocabulary distribution. The final probability distribution is defined as the interpolation between two probability distributions: Pfinal = θtPvocab + (1 θt)Pattn, (8) where θt is a switch for controlling generating a word from the vocabulary or directly copying it from the input graph. Pvocab is the probability distribution of directly generating the word, as defined in Equation 5, and Pattn is calculated based on the attention distribution αt by summing the probabilities of the graph nodes that contain identical concept. Intuitively, θt is relevant to the current decoder input et and state st, and the context vector µt. Therefore, we define it as: θt = σ(wT µ µt + wT s st + wT e et + b5), (9) where vectors wµ, ws, we and scalar b5 are model parameters. The copy mechanism favors gener1620 ating words that appear in the input. For AMRto-text generation, it facilitates the generation of dates, numbers, and named entities that appear in AMR graphs. Copying vs anonymization Both copying and anonymization alleviate the data sparsity problem by handling the open-class tokens. However, the copy mechanism has the following advantages over anonymization: (1) anonymization requires significant manual work to define the placeholders and heuristic rules both from subgraphs to placeholders and from placeholders to the surface tokens, (2) the copy mechanism automatically learns what to copy, while anonymization relies on hard rules to cover all types of the open-class tokens, and (3) the copy mechanism is easier to adapt to new domains and languages than anonymization. 4 Training and decoding We train our models using the cross-entropy loss over each gold-standard output sequence W ∗= w∗ 1, . . . , w∗ t , . . . , w∗ M: l = M X t=1 log p(w∗ t |w∗ t 1, . . . , w∗ 1, X; θ), (10) where X is the input graph, and θ is the model parameters. Adam (Kingma and Ba, 2014) with a learning rate of 0.001 is used as the optimizer, and the model that yields the best devset performance is selected to evaluate on the test set. Dropout with rate 0.1 is used during training. Beam search with beam size to 5 is used for decoding. Both training and decoding use Tesla K80 GPUs. 5 Experiments 5.1 Data We use a standard AMR corpus (LDC2015E86) as our experimental dataset, which contains 16,833 instances for training, 1368 for development and 1371 for test. Each instance contains a sentence and an AMR graph. Following Konstas et al. (2017), we supplement the gold data with large-scale automatic data. We take Gigaword as the external data to sample raw sentences, and train our model on both the sampled data and LDC2015E86. We adopt Konstas et al. (2017)’s strategy for sampling sentences from Gigaword, and choose JAMR (Flanigan et al., 2016a) to parse selected sentences into Model BLEU Time Seq2seq 18.8 35.4s Seq2seq+copy 19.9 37.4s Seq2seq+charLSTM+copy 20.6 39.7s Graph2seq 20.4 11.2s Graph2seq+copy 22.2 11.1s Graph2seq+Anon 22.1 9.2s Graph2seq+charLSTM+copy 22.8 16.3s Table 1: DEV BLEU scores and decoding times. AMRs, as the AMR parser of Konstas et al. (2017) only works on the anonymized data. For training on both sampled data and LDC2015E86, we also follow the method of Konstas et al. 
(2017), which is fine-tuning the model on the AMR corpus after every epoch of pretraining on the gigaword data. 5.2 Settings We extract a vocabulary from the training set, which is shared by both the encoder and the decoder. The word embeddings are initialized from Glove pretrained word embeddings (Pennington et al., 2014) on Common Crawl, and are not updated during training. Following existing work, we evaluate the results with the BLEU metric (Papineni et al., 2002). For model hyperparameters, we set the graph state transition number as 9 according to development experiments. Each node takes information from at most 10 neighbors. The hidden vector sizes for both encoder and decoder are set to 300 (They are set to 600 for experiments using largescale automatic data). Both character embeddings and hidden layer sizes for character LSTMs are set 100, and at most 20 characters are taken for each graph node or linearized token. 5.3 Development experiments As shown in Table 1, we compare our model with a set of baselines on the AMR devset to demonstrate how the graph encoder and the copy mechanism can be useful when training instances are not sufficient. Seq2seq is the sequence-to-sequence baseline described in Section 2. Seq2seq+copy extends Seq2seq with the copy mechanism, and Seq2seq+charLSTM+copy further extends Seq2seq+copy with character LSTM. Graph2seq is our graph-to-sequence model, Graph2seq+copy extends Graph2seq with the copy mechanism, and Graph2seq+charLSTM+copy further extends 1621 Graph2seq+copy with the character LSTM. We also try Graph2seq+Anon, which applies our graph-to-sequence model on the anonymized data from Konstas et al. (2017). The graph encoder As can be seen from Table 1, the performance of Graph2seq is 1.6 BLEU points higher than Seq2seq, which shows that our graph encoder is effective when applied alone. Adding the copy mechanism (Graph2seq+copy vs Seq2seq+copy), the gap becomes 2.3. This shows that the graph encoder learns better node representations compared to the sequence encoder, which allows attention and copying to function better. Applying the graph encoder together with the copy mechanism gives a gain of 3.4 BLEU points over the baseline (Graph2seq+copy vs Seq2seq). The graph encoder is consistently better than the sequence encoder no matter whether character LSTMs are used. We also list the encoding part of decoding times on the devset, as the decoders of the seq2seq and the graph2seq models are similar, so the time differences reflect efficiencies of the encoders. Our graph encoder gives consistently better efficiency compared with the sequence encoder, showing the advantage of parallelization. The copy mechanism Table 1 shows that the copy mechanism is effective on both the graph-to-sequence and the sequence-to-sequence models. Anonymization gives comparable overall performance gains on our graph-to-sequence model as the copy mechanism (comparing Graph2seq+Anon with Graph2seq+copy). However, the copy mechanism has several advantages over anonymization as discussed in Section 3.5. Character LSTM Character LSTM helps to increase the performances of both systems by roughly 0.6 BLEU points. This is largely because it further alleviates the data sparsity problem by handling unseen words, which may share common substrings with in-vocabulary words. 5.4 Effectiveness on graph state transitions We report a set of development experiments for understanding the graph LSTM encoder. 
Number of iterations We analyze the influence of the number of state transitions to the model performance on the devset. Figure 3 shows the BLEU scores of different state transition numbers, 1 2 3 4 5 6 7 8 9 10.0 12.0 14.0 16.0 18.0 20.0 22.0 24.0 only-incoming only-outgoing both Figure 3: DEV BLEU scores against transition steps for the graph encoder. 0 5 10 15 20 0 20 40 60 80 100 Figure 4: Percentage of DEV AMRs with different diameters. when both incoming and outgoing edges are taken for calculating the next state (as shown in Figure 2). The system is Graph2seq+charLSTM+copy. Executing only 1 iteration results in a poor BLEU score of 14.1. In this case the state for each node only contains information about immediately adjacent nodes. The performance goes up dramatically to 21.5 when increasing the iteration number to 5. In this case, the state for each node contains information of all nodes within a distance of 5. The performance further goes up to 22.8 when increasing the iteration number from 5 to 9, where all nodes with a distance of less than 10 are incorporated in the state for each node. Graph diameter We analyze the percentage of the AMR graphs in the devset with different graph diameters and show the cumulative distribution in Figure 4. The diameter of an AMR graph is defined as the longest distance between two AMR nodes.1 Even though the diameters for less than 80% of the AMR graphs are less or equal than 10, our development experiments show that it is not necessary to incorporate the whole-graph information for each node. Further increasing state transition number may lead to additional improvement. 1The diameter of single-node graphs is 0. 1622 Model BLEU PBMT 26.9 SNRG 25.6 Tree2Str 23.0 MSeq2seq+Anon 22.0 Graph2seq+copy 22.7 Graph2seq+charLSTM+copy 23.3 MSeq2seq+Anon (200K) 27.4 MSeq2seq+Anon (2M) 32.3 Seq2seq+charLSTM+copy (200K) 27.4 Seq2seq+charLSTM+copy (2M) 31.7 Graph2seq+charLSTM+copy (200K) 28.2 Graph2seq+charLSTM+copy (2M) 33.0 Table 2: TEST results. “(200K)”, “(2M)” and “(20M)” represent training with the corresponding number of additional sentences from Gigaword. We do not perform exhaustive search for finding the optimal state transition number. Incoming and outgoing edges As shown in Figure 3, we analyze the efficiency of state transition when only incoming or outgoing edges are used. From the results, we can see that there is a huge drop when state transition is performed only with incoming or outgoing edges. Using edges of one direction, the node states only contain information of ancestors or descendants. On the other hand, node states contain information of ancestors, descendants, and siblings if edges of both directions are used. From the results, we can conclude that not only the ancestors and descendants, but also the siblings are important for modeling the AMR graphs. This is similar to observations on syntactic parsing tasks (McDonald et al., 2005), where sibling features are adopted. We perform a similar experiment for the Seq2seq+copy baseline by only executing singledirectional LSTM for the encoder. We observe BLEU scores of 11.8 and 12.7 using only forward or backward LSTM, respectively. This is consistent with our graph model in that execution using only one direction leads to a huge performance drop. The contrast is also reminiscent of using the normal input versus the reversed input in neural machine translation (Sutskever et al., 2014). 5.5 Results Table 2 compares our final results with existing work. 
MSeq2seq+Anon (Konstas et al., 2017) is an attentional multi-layer sequence-to-sequence model trained with the anonymized data. PBMT (Pourdamghani et al., 2016) adopts a phrase-based model for machine translation (Koehn et al., 2003) on the input of linearized AMR graph, SNRG (Song et al., 2017) uses synchronous node replacement grammar for parsing the AMR graph while generating the text, and Tree2Str (Flanigan et al., 2016b) converts AMR graphs into trees by splitting the re-entrances before using a tree transducer to generate the results. Graph2seq+charLSTM+copy achieves a BLEU score of 23.3, which is 1.3 points better than MSeq2seq+Anon trained on the same AMR corpus. In addition, our model without character LSTM is still 0.7 BLEU points higher than MSeq2seq+Anon. Note that MSeq2seq+Anon relies on anonymization, which requires additional manual work for defining mapping rules, thus limiting its usability on other languages and domains. The neural models tend to underperform statistical models when trained on limited (16K) gold data, but performs better with scaled silver data (Konstas et al., 2017). Following Konstas et al. (2017), we also evaluate our model using both the AMR corpus and sampled sentences from Gigaword. Using additional 200K or 2M gigaword sentences, Graph2seq+charLSTM+copy achieves BLEU scores of 28.2 and 33.0, respectively, which are 0.8 and 0.7 BLEU points better than MSeq2seq+Anon using the same amount of data, respectively. The BLEU scores are 5.3 and 10.1 points better than the result when it is only trained with the AMR corpus, respectively. This shows that our model can benefit from scaled data with automatically generated AMR graphs, and it is more effective than MSeq2seq+Anon using the same amount of data. Using 2M gigaword data, our model is better than all existing methods. Konstas et al. (2017) also experimented with 20M external data, obtaining a BLEU of 33.8. We did not try this setting due to hardware limitations. The Seq2seq+charLSTM+copy baseline trained on the large-scale data is close to MSeq2seq+Anon using the same amount of training data, yet is much worse than our model. 5.6 Case study We conduct case studies for better understanding the model performances. Table 3 shows example outputs of sequence-to-sequence (S2S), graph-to1623 sequence (G2S) and graph-to-sequence with copy mechanism (G2S+CP). Ref denotes the reference output sentence, and Lin shows the serialization results of input AMRs. The best hyperparameter configuration is chosen for each model. For the first example, S2S fails to recognize the concept “a / account” as a noun and loses the concept “o / old” (both are underlined). The fact that “a / account” is a noun is implied by “a / account :mod (o / old)” in the original AMR graph. Though directly connected in the original graph, their distance in the serialization result (the input of S2S) is 26, which may be why S2S makes these mistakes. In contrast, G2S handles “a / account” and “o / old” correctly. In addition, the copy mechanism helps to copy “look-over” from the input, which rarely appears in the training set. In this case, G2S+CP is incorrect only on hyphens and literal reference to “anti-japanese war”, although the meaning is fully understandable. For the second case, both G2S and G2S+CP correctly generate the noun “agreement” for “a / agree” in the input AMR, while S2S fails to. 
The fact that “a / agree” represents a noun can be determined by the original graph segment “p / provide :ARG0 (a / agree)”, which indicates that “a / agree” is the subject of “p / provide”. In the serialization output, the two nodes are close to each other. Nevertheless, S2S still failed to capture this structural relation, which reflects the fact that a sequence encoder is not designed to explicitly model hierarchical information encoded in the serialized graph. In the training instances, serialized nodes that are close to each other can originate from neighboring graph nodes, or distant graph nodes, which prevents the decoder from confidently deciding the correct relation between them. In contrast, G2S sends the node “p / provide” simultaneously with relation “ARG0” when calculating hidden states for “a / agree”, which facilitates the yielding of “the agreement provides”. 6 Related work Among early statistical methods for AMR-to-text generation, Flanigan et al. (2016b) convert input graphs to trees by splitting re-entrances, and then translate the trees into sentences with a tree-tostring transducer. Song et al. (2017) use a synchronous node replacement grammar to parse input AMRs and generate sentences at the same time. Pourdamghani et al. (2016) linearize input (p / possible-01 :polarity :ARG1 (l / look-over-06 :ARG0 (w / we) :ARG1 (a / account-01 :ARG1 (w2 / war-01 :ARG1 (c2 / country :wiki “Japan” :name (n2 / name :op1 “Japan”)) :time (p2 / previous) :ARG1-of (c / call-01 :mod (s / so))) :mod (o / old)))) Lin: possible :polarity - :arg1 ( look-over :arg0 we :arg1 ( account :arg1 ( war :arg1 ( country :wiki japan :name ( name :op1 japan ) ) :time previous :arg1-of ( call :mod so ) ) :mod old ) ) Ref: we can n’t look over the old accounts of the previous so-called anti-japanese war . S2S: we can n’t be able to account the past drawn out of japan ’s entire war . G2S: we can n’t be able to do old accounts of the previous and so called japan war. G2S+CP: we can n’t look-over the old accounts of the previous so called war on japan . (p / provide-01 :ARG0 (a / agree-01) :ARG1 (a2 / and :op1 (s / staff :prep-for (c / center :mod (r / research-01))) :op2 (f / fund-01 :prep-for c))) Lin: provide :arg0 agree :arg1 ( and :op1 ( staff :prep-for ( center :mod research ) ) :op2 ( fund :prep-for center ) ) Ref: the agreement will provide staff and funding for the research center . S2S: agreed to provide research and institutes in the center . G2S: the agreement provides the staff of research centers and funding . G2S+CP: the agreement provides the staff of the research center and the funding . Table 3: Example system outputs. graphs by breadth-first traversal, and then use a phrase-based machine translation system2 to generate results by translating linearized sequences. Prior work using graph neural networks for NLP include the use graph convolutional networks (GCN) (Kipf and Welling, 2017) for semantic role labeling (Marcheggiani and Titov, 2017) and neural machine translation (Bastings et al., 2017). Both GCN and the graph LSTM update node states by exchanging information between neighboring nodes within each iteration. However, our graph state LSTM adopts gated operations for making updates, while GCN uses a linear transformation. Intuitively, the former has better learning power than the later. Another major difference is that our graph state LSTM keeps a cell vector for each node to remember all history. 
The contrast 2http://www.statmt.org/moses/ 1624 between our model with GCN is reminiscent of the contrast between RNN and CNN. We leave empirical comparison of their effectiveness to future work. In this work our main goal is to show that graph LSTM encoding of AMR is superior compared with sequence LSTM. Closest to our work, Peng et al. (2017) modeled syntactic and discourse structures using DAG LSTM, which can be viewed as extensions to tree LSTMs (Tai et al., 2015). The state update follows the sentence order for each node, and has sequential nature. Our state update is in parallel. In addition, Peng et al. (2017) split input graphs into separate DAGs before their method can be used. To our knowledge, we are the first to apply an LSTM structure to encode AMR graphs. The recurrent information exchange mechanism in our state transition process is remotely related to the idea of loopy belief propagation (LBP) (Murphy et al., 1999). However, there are two major differences. First, messages between LSTM states are gated neural node values, rather than probabilities in LBP. Second, while the goal of LBP is to estimate marginal probabilities, the goal of information exchange between graph states in our LSTM is to find neural representation features, which are directly optimized by a task objective. In addition to NMT (Gulcehre et al., 2016), the copy mechanism has been shown effective on tasks such as dialogue (Gu et al., 2016), summarization (See et al., 2017) and question generation (Song et al., 2018). We investigate the copy mechanism on AMR-to-text generation. 7 Conclusion We introduced a novel graph-to-sequence model for AMR-to-text generation. Compared to sequence-to-sequence models, which require linearization of AMR before decoding, a graph LSTM is leveraged to directly model full AMR structure. Allowing high parallelization, the graph encoder is more efficient than the sequence encoder. In our experiments, the graph model outperforms a strong sequence-to-sequence model, achieving the best performance. Acknowledgments We thank the anonymized reviewers for the insightful comments, and the Center for Integrated Research Computing (CIRC) of University of Rochester for providing computation resources. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR). Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178–186. Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Simaan. 2017. Graph convolutional encoders for syntax-aware neural machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP-17), pages 1957–1967, Copenhagen, Denmark. Jeffrey Flanigan, Chris Dyer, Noah A. Smith, and Jaime Carbonell. 2016a. CMU at semeval-2016 task 8: Graph-based AMR parsing with infinite ramp loss. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1202–1206, San Diego, California. Jeffrey Flanigan, Chris Dyer, Noah A. Smith, and Jaime Carbonell. 2016b. Generation from abstract meaning representation using tree transducers. 
In Proceedings of the 2016 Meeting of the North American chapter of the Association for Computational Linguistics (NAACL-16), pages 731–739. Normunds Gruzitis, Didzis Gosko, and Guntis Barzdins. 2017. RIGOTRIO at SemEval-2017 Task 9: Combining Machine Learning and Grammar Engineering for AMR Parsing and Generation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 924–928, Vancouver, Canada. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL-16), pages 1631–1640, Berlin, Germany. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL-16), pages 140–149, Berlin, Germany. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Bevan Jones, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, and Kevin Knight. 2012. Semantics-based machine translation with hyperedge replacement grammars. In Proceedings of 1625 the International Conference on Computational Linguistics (COLING-12), pages 1359–1376. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Thomas N. Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR). Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Meeting of the North American chapter of the Association for Computational Linguistics (NAACL-03), pages 48–54. Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural AMR: Sequence-to-sequence models for parsing and generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL-17), pages 146–157, Vancouver, Canada. Gerasimos Lampouras and Andreas Vlachos. 2017. Sheffield at semeval-2017 task 9: Transition-based language generation from amr. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 586–591, Vancouver, Canada. Xiang Li, Thien Huu Nguyen, Kai Cao, and Ralph Grishman. 2015. Improving event detection with abstract meaning representation. In Proceedings of the First Workshop on Computing News Storylines, pages 11–15, Beijing, China. Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Conference on Empirical Methods in Natural Language Processing (EMNLP17), pages 1506–1515, Copenhagen, Denmark. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL-05), pages 91–98, Ann Arbor, Michigan. Simon Mille, Roberto Carlini, Alicia Burga, and Leo Wanner. 2017. Forge at semeval-2017 task 9: Deep sentence generation based on a sequence of graph transducers. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval2017), pages 920–923, Vancouver, Canada. Arindam Mitra and Chitta Baral. 2015. Addressing a question answering challenge by combining statistical methods with inductive rule learning and reasoning. 
In Proceedings of the National Conference on Artificial Intelligence (AAAI-16). Kevin P Murphy, Yair Weiss, and Michael I Jordan. 1999. Loopy belief propagation for approximate inference: An empirical study. In Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence, pages 467–475. Morgan Kaufmann Publishers Inc. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL-02), pages 311–318. Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen-tau Yih. 2017. Cross-sentence n-ary relation extraction with graph LSTMs. Transactions of the Association for Computational Linguistics, 5:101–115. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Conference on Empirical Methods in Natural Language Processing (EMNLP14), pages 1532–1543. Nima Pourdamghani, Kevin Knight, and Ulf Hermjakob. 2016. Generating English from abstract meaning representations. In International Conference on Natural Language Generation (INLG-16), pages 21–25, Edinburgh, UK. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL-17), pages 1073–1083, Vancouver, Canada. Linfeng Song, Xiaochang Peng, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2017. AMR-to-text generation with synchronous node replacement grammar. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL-17), pages 7–13, Vancouver, Canada. Linfeng Song, Zhiguo Wang, Wael Hamza, Yue Zhang, and Daniel Gildea. 2018. Leveraging context information for natural question generation. In Proceedings of the 2018 Meeting of the North American chapter of the Association for Computational Linguistics (NAACL-18), New Orleans. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL-15), pages 1556–1566, Beijing, China. 1626 Sho Takase, Jun Suzuki, Naoaki Okazaki, Tsutomu Hirao, and Masaaki Nagata. 2016. Neural headline generation on abstract meaning representation. In Conference on Empirical Methods in Natural Language Processing (EMNLP-16), pages 1054–1059, Austin, Texas. Aleˇs Tamchyna, Chris Quirk, and Michel Galley. 2015. A discriminative model for semanticsto-string translation. In Proceedings of the 1st Workshop on Semantics-Driven Statistical Machine Translation (S2MT 2015), pages 30–36, Beijing, China. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL-16), pages 76–85, Berlin, Germany.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1627–1637 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1627 GTR-LSTM: A Triple Encoder for Sentence Generation from RDF Data Bayu Distiawan Trisedya1, Jianzhong Qi1, Rui Zhang1∗, Wei Wang2 1 The University of Melbourne 2 University of New South Wales [email protected] {jianzhong.qi,rui.zhang}@unimelb.edu.au [email protected] Abstract A knowledge base is a large repository of facts that are mainly represented as RDF triples, each of which consists of a subject, a predicate (relationship), and an object. The RDF triple representation offers a simple interface for applications to access the facts. However, this representation is not in a natural language form, which is difficult for humans to understand. We address this problem by proposing a system to translate a set of RDF triples into natural sentences based on an encoder-decoder framework. To preserve as much information from RDF triples as possible, we propose a novel graph-based triple encoder. The proposed encoder encodes not only the elements of the triples but also the relationships both within a triple and between the triples. Experimental results show that the proposed encoder achieves a consistent improvement over the baseline models by up to 17.6%, 6.0%, and 16.4% in three common metrics BLEU, METEOR, and TER, respectively. 1 Introduction Knowledge bases (KBs) are becoming an enabling resource for many applications including Q&A systems, recommender systems, and summarization tools. KBs are designed based on a W3C standard called the Resource Description Framework (RDF)1. An RDF triple consists of three elements in the form of ⟨subject, predicate (relationship), object⟩. It describes a relationship between an entity (the subject) and another entity or literal (the object) ∗Corresponding author 1https://www.w3.org/RDF/ RDF triples ⟨John Doe,birth place,London⟩ ⟨John Doe,birth date,1967-01-10⟩ ⟨London,capital of,England⟩ Target sentence John Doe was born on 1967-01-10 in London, the capital of England. Table 1: RDF based sentence generation. via the predicate. This representation allows easy data share between KBs. However, usually the elements of a triple are stored as Uniform Resource Identifiers (URIs), and many predicates (words or phrases) are not intuitive; this representation is difficult to comprehend by humans. Translating RDF triples into natural sentences helps humans to comprehend the knowledge embedded in the triples, and building a natural language based user interface is an important task in user interaction studies (Damljanovic et al., 2010). This task has many applications, such as question answering (Bordes et al., 2014; Fader et al., 2014), profile summarizing (Lebret et al., 2016; Chisholm et al., 2017), and automatic weather forecasting (Mei et al., 2016). For example, the SPARQL inference of a Q&A system (Unger et al., 2012) returns a set of RDF triples which need to be translated into natural sentences to provide a more readable answer for the users. Table 1 illustrates such an example. Suppose a user is asking a question about “John Doe”. By querying a KB, a Q&A system retrieves three triples “⟨John Doe,birth place,London⟩”, “⟨John Doe,birth date,1967-01-10⟩”, and “⟨London,capital of,England⟩.” We aim to generate a natural sentence that incorporates the information of the triples and is easier to be understood by the user. 
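As a minimal sketch of the input/output pairing in Table 1, the triples and target sentence might be represented as below; the `Triple` NamedTuple and variable names are illustrative assumptions, not the paper's implementation.

```python
from typing import List, NamedTuple

class Triple(NamedTuple):
    subject: str
    predicate: str
    obj: str

# The three facts retrieved for "John Doe" in the running example.
triples: List[Triple] = [
    Triple("John Doe", "birth place", "London"),
    Triple("John Doe", "birth date", "1967-01-10"),
    Triple("London", "capital of", "England"),
]

# The natural sentence the generator should produce from these triples.
target = "John Doe was born on 1967-01-10 in London, the capital of England."

for t in triples:
    print(f"<{t.subject}, {t.predicate}, {t.obj}>")
```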
In this example, the generated sentence is “John Doe was born on 1967-01-10 in London, 1628 the capital of England.” Most existing studies for this task use domain specific rules. Bontcheva and Wilks (2004) create rules to generate sentences in the medical domain, while Cimiano et al. (2013) create rules to generate step by step cooking instructions. The problem of rule-based methods is that they need a lot of human efforts to create the rules, which mostly cannot deal with complex or novel cases. Recent studies propose neural language generation systems. Lebret et al. (2016) generate the first sentence of a biography by a conditional neural language model. Mei et al. (2016) propose an encoder-aligner-decoder architecture to generate weather forecasts. The model does not need predefined rules and hence generalizes better to open domain data. A straightforward adaptation of neural language generation system is to use the encoder-decoder model by first concatenating the elements of the RDF triples into a linear sequence and then feeding the sequence as the model input to learn the corresponding natural sentence. We implemented such a model (detailed in Section 3.2) that ranked top in the WebNLG Challenge 20172. This Challenge has a primary objective of generating syntactically correct natural sentences from a set of RDF triples. Our model achieves the highest global scores on the automatic evaluation, outperforming competitors that use rule-based methods, statistical machine translation, and neural machine translation (Gardent et al., 2017b). While our previous model achieves a good result, simply concatenating the elements in the RDF triples may lose the relationship between entities that affects the semantics of the resulting sentence (cf. Table 3). To address this issue, in this paper, we propose a novel graph-based triple encoder model that maintain the structure of RDF triples as a small knowledge graph named the GTR-LSTM model. This model computes the hidden state of each entity in a graph to preserve the relationships between entities in a triple (intra-triple relationships) and the relationships between entities in related triples (inter-triple relationships) that helps to achieve even more accurate sentences. This leads to two problems of preserving the relationships in a knowledge graph: (1) how to deal with a cycle in a knowledge graph; (2) how to deal with multiple non-predefined re2http://talc1.loria.fr/webnlg/stories/challenge.html lationships between two entities in a knowledge graph. The proposed model differs from existing non-linear LSTM models such as Tree LSTM (Tai et al., 2015) and Graph LSTM (Liang et al., 2016) in addressing the mentioned problem. In particular, Tree LSTM does not allow cycles, while the proposed model handles cycles by first using a combination of topological sort and breadth-first traversal over a graph, and then using an attention model to capture the global information of the knowledge graph. Meanwhile, Graph LSTM only allows a predefined set of relationships between entities, while the proposed model allows any relationships by treating them as part of the input for the hidden state computation. To further enhance the capability of our model to handle unseen entities, we propose to use entity masking, which maps the entities in the model training pairs to their types, e.g., we map an entity (literal) “1967-01-10” to a type symbol “DATE” in the training pairs. This way, our model can learn to handle any date entities rather than just “1967-01-10”. 
This is particularly helpful when there is a limited training dataset. Our contributions are: • We propose an end-to-end encoder-decoder based framework for the problem of translating RDF triples into natural sentences. • We further propose a graph-based triple encoder to optimize the amount of information preserved in the input of the framework. The proposed model can handle cycles to capture the global information of a knowledge graph. The proposed model also handles nonpredefined relationships between entities. • We evaluate the proposed framework and model over two real datasets. The results show that our model outperforms the stateof-the-art models consistently. The rest of this paper is organized as follows. Section 2 summarizes previous studies on sentence generation. Section 3 details the proposed model. Section 4 presents the experimental results. Section 5 concludes the paper. 2 Related Work The studied problem falls in the area of Natural Language Generation (NLG) (Reiter and Dale, 2000). Bontcheva and Wilks (2004) follow a 1629 traditional NLG approach to generate sentences from RDF data in the medical domain. They start with filtering repetitive RDF data (document planning) and then group coherent triples (microplanning). After that, they aggregate the sentences generated for coherent triples to produce the final sentences (aggregation and realization). Cimiano et al. (2013) generate cooking recipes from semantic web data. They focus on using a large corpus to extract lexicon in the cooking domain. The lexicon is then used with a traditional NLG approach to generate cooking recipes. Duma and Klein (2013) learn a sentence template from a parallel RDF data and text corpora. They first align entities in RDF triples with entities mentioned in sentences. Then, they extract templates from the aligned sentences by replacing the entity mention with a unique token. This method works well on RDF triples in a seen domain but fails on RDF triples in a previously unseen domain. Recently, several methods using neural networks are proposed. Lebret et al. (2016) generate the first sentence of a biography using a conditional neural language model. This model is trained to predict the next word of a sentence not only based on previous words, but also by using features captured from Wikipedia infoboxes. Mei et al. (2016) propose an encoder-aligner-decoder model to generate weather forecasts. The aligner is used to filter the most relevant data to be used to predict the weather forecast. Both studies experiment on cross-domain datasets. The result shows that the neural language generation approach is more flexible to work in an open domain since it is not limited to handcrafted rules. This motivates us to use a neural network based framework. The most similar system to ours is Neural Wikipedian (Vougiouklis et al., 2017), which generates a summary from RDF triples. It uses feedforward neural networks to encode RDF triples and concatenate them as the input of the decoder. The decoder uses LSTM to predict a sequence of words as a summary. There are differences from our work. First, Neural Wikipedian only works with a set of RDF triples with a single entity point of view (i.e., the entity of interest must be in either the subject or the object of every triple). Our system does not have this constraint. Second, Neural Wikipedian uses standard feed-forward neural networks in the encoder. 
We design new triple encoder models to accommodate specific features of … … Encoder Decoder Target Text De-lexicalizer Sentence Normalizer Target Text Pre-processor RDF Triples Entity Type Mapper Masking Module RDF Pre-processor s1 p1 o1 on … w1 w2 wm … Figure 1: RDF sentence generation based on an encoder-decoder architecture. RDF triples. Experimental results show that our framework outperforms Neural Wikipedian. 3 Proposed Model We start with the problem definition. We consider a set of RDF triples as the input, which is denoted by T = [t1, t2, ..., tn] where a triple ti consists of three elements (subject si, predicate pi, and object oi), ti = ⟨si, pi, oi⟩. Every element can contain multiple words. We aim to generate a set of sentences that consist of a sequence of words S = ⟨w1, w2, ..., wm⟩, such that the relationships in the input triples are correctly represented in S while the sentences have a high quality. We use BLEU, METEOR, and TER to assess the quality of the sentence (detailed in Section 4). Table 1 illustrates our problem input and the target output. This section is organized as follows. First we describe the overall framework (Section 3.1). Next, we describe three triple encoder models including the adapted standard BLSTM model (Section 3.2), the adapted standard triple encoder model (Section 3.3), and the proposed GTR-LSTM model (Section 3.4). The decoder which is used for all encoder models is described in Section 3.5. The entity masking is described in Section 3.6 3.1 Solution Framework Our solution framework uses an encoder-decoder architecture as illustrated in Fig. 1. The framework 1630 consists of three components including an RDF pre-processor, a target text pre-processor, and an encoder-decoder module. The RDF pre-processor consists of an entity type mapper and a masking module. The entity type mapper maps the subjects and objects in the triples to their types, such that the sentence patterns learned are based on entity types rather than entities. For example, the input entities in Table 1, “John Doe”, “London”, “England”, and “1967-01-10” can be mapped to “PERSON”, “CITY”, “COUNTRY”, and “DATE”, respectively. The mapping has been shown in our experiments to be highly effective in improving the model output quality. The masking module converts each entity into an entity identifier (eid). The target text pre-processor consists of a text normalizer and a de-lexicalizer. The text normalizer converts abbreviations and dates into the same format as the corresponding entities in the triples. The de-lexicalizer replaces all entities in the target sentences by their eids. The RDF and target text pre-processors are detailed in Section 3.6. The replaced target sentences are combined with the original target sentences and the English Wikipedia articles is used as a corpus to learn the word embeddings of the vocabulary. To accommodate the RDF data, in the encoder side, we consider three triple encoder models: (1) the adapted standard BLSTM encoder; (2) the adapted standard triple encoder; and (3) the proposed GTR-LSTM triple encoder. The adapted standard BLSTM encoder concatenates the tokens in RDF triples as an input sequence, while the standard triple encoder first encodes each RDF triple into a vector representation and then concatenates the vectors of different triples. The latter model better captures intra-triple relationships but suffers in capturing inter-triple relationships. 
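The two baseline input constructions just outlined can be sketched as follows. This is a simplified illustration: plain token lists, a `<pad>` token standing in for the zero padding of embedding vectors, and the function names are all assumptions rather than the system's preprocessing code.

```python
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, predicate, object); each may be multi-word

def tokenize(element: str) -> List[str]:
    return element.split()

def blstm_input(triples: List[Triple]) -> List[str]:
    """Adapted standard BLSTM encoder: concatenate all triple elements
    into one flat token sequence w_{1,1} ... w_{n,j}."""
    seq: List[str] = []
    for s, p, o in triples:
        seq += tokenize(s) + tokenize(p) + tokenize(o)
    return seq

def triple_encoder_input(triples: List[Triple], pad: str = "<pad>") -> List[List[str]]:
    """Adapted standard triple encoder: keep one padded token group per triple,
    so each triple t_i can be encoded into its own vector before concatenation."""
    groups = [tokenize(s) + tokenize(p) + tokenize(o) for s, p, o in triples]
    width = max(len(g) for g in groups)
    return [g + [pad] * (width - len(g)) for g in groups]

triples = [("John Doe", "birth place", "London"),
           ("London", "capital of", "England")]
print(blstm_input(triples))           # one flat sequence for the BLSTM encoder
print(triple_encoder_input(triples))  # one padded group per triple
```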
Considering the native representation of RDF triples as a small knowledge graph, our graph-based GTRLSTM triple encoder captures both intra-triple and inter-triple entity relationships. 3.2 Adapted Standard BLSTM Encoder The standard encoder-decoder model with a BLSTM encoder is a sequence to sequence learning model (Cho et al., 2014). To adapt such a model for our problem, we transform a set of RDF triples input T into a sequence of elements (i.e., T = [w1,1, w1,2, ..., w1,j, ..., wn,j]), where wn,j is John →w1,1 Doe →w1,2 birth →w1,3 place →w1,4 London →w1,5 London →w2,1 capital →w2,2 of →w2,3 England →w2,4 <pad> →w2,5 … →wn,1 … →wn,2 … →wn,3 … →wn,4 … →wn,5 t1 word embedding Input representation LSTM t2 tn hn,1 hn,2 hn,3 hn,4 hn,5 h1,5 h2,5 hn,5 wn,1 wn,2 wn,3 wn,4 wn,5 ... hT Figure 2: LSTM-based standard triple encoder. the word embedding of a word in the n-th triple. For example, following the triples in Table 1, w1,1 is the word embedding of “John”, w1,2 is the word embedding of “Doe”, etc. This sequence forms an input for the encoder. We use zero padding to ensure that each input has the same representation size. The rest of the model is the same as the standard encoder-decoder model with an attention mechanism (Bahdanau et al., 2015). We call this model the adapted standard BLSTM encoder. 3.3 Adapted Standard Triple Encoder The standard BLSTM encoder suffers in capturing the element relationships as the elements are simply concatenated together. Next, we adapt the standard BLSTM encoder to aggregate the word embeddings of the elements of the same triple to retain the intra-triple relationship. We call this the adapted standard triple encoder. The adaptation is done by grouping the elements of each triple, so the input is represented as T = [⟨w1,1, ..., w1,j⟩, ..., ⟨wn,1, ...wn,j⟩], where wn,j is the word embedding of a word in the n-th triple. We use zero padding to ensure that each triple has the same representation size. An LSTM network of the encoder computes a hidden state of each triple and concatenates them together to be the input for the decoder: hT = [f(t1); f(t2); ...; f(tn)] (1) where hT is the input vector representation for the decoder and f is an LSTM network (cf. Fig. 2). 3.4 GTR-LSTM Triple Encoder The adapted standard triple encoder has an advantage in preserving the intra-triple relationship. However, it has not considered the structural rela1631 birth_place capital_of Mary England London John spouse lead_by Figure 3: A small knowledge graph formed by a set of RDF triples. tionships between the entities in different triples. To overcome this limitation, we propose a graphbased triple encoder. We call it the GTR-LSTM triple encoder. This encoder takes the input triples in the form of a graph, which preserves the natural structure of the triples (cf. Fig. 3). GTR-LSTM differs from existing Graph LSTM (Liang et al., 2016) and Tree LSTM (Tai et al., 2015) models in the following aspects. Graph LSTM is proposed for image data. It constructs the graph based on the spatial relationships among super-pixels of an image. Tree LSTM uses the dependency tree as the structure of a sentence. Both models have a predefined relationship between the vertices (Graph LSTM uses spatial relationships: top, bottom, left, or right between super-pixels; Tree LSTM uses dependencies between words in a sentence as the relationship). 
In contrast, a KB has an open set of relationships between the vertices (i.e., the predicate defines the relationship between entities/vertices) which make our problem more difficult to model. Our GTR-LSTM triple encoder overcomes the difficulty as follows. It receives a directed graph G = ⟨V, E⟩as the input, where V is a set of vertices that represent entities or literals, and E is a set of directed edges that represent predicates. Since the graph can contain cycles, we use a combination of topological sort and breadth-first traversal algorithms to traverse the graph. The traversal is used to create an ordering of feeding the vertices into a GTR-LSTM unit to compute their hidden states. We start with running a topological sort to establish an order of the vertices until no further vertex has a zero in-degree. For the remaining vertices, they must be in strongly connected component(s). Then, we run a breadthfirst traversal over the remaining vertices with a random starting vertex, since every vertex can be reached from all vertices of a strongly connected component. When a vertex vi is visited, the hidden states of all adjacent vertices of vi are computed (or updated if the hidden state of the vertex < > John null hjohn h0 Mary hmary England capital_of hengland London hlondon John lead_by h’john spouse birth_place Attention model Figure 4: GTR-LSTM triple encoder. is already computed in the previous step). Following the graph in Fig. 3, the order of hidden state computation is as follows. The process starts with a vertex with zero in-degree. Because there is no such vertex, a vertex is randomly selected as the starting vertex. Assume we pick “John” as the starting vertex, then we compute hjohn using h0 as the previous hidden state. Next, following the breadth-first traversal algorithm, we visit vertex “John” and compute hmary and hlondon by passing hjohn as the previous hidden state. Next step, vertex “Mary” is visited, but no hidden states are computed or updated since it does not have any adjacent vertices. In the last step, vertex “England” is visited and hjohn is updated. Fig. 4 illustrates the overall process. Different from the Graph LSTM, our GTRLSTM model computes a hidden state by taking into account the processed entity and its edge (the edge pointing to the current entity from the previous entity) to handle non-predefined relationships (any relationships between entities in a knowledge graph). Thus, our GTR-LSTM unit (cf. Fig. 4) receives two inputs, i.e., the entity and its relationship. We propose the following model to compute the hidden state of each GTR-LSTM unit. it = σ X e U iexte + W ieht−1  ! (2) fte = σ U fxte + W fht−1  (3) ot = σ X e (U oexte + W oeht−1) ! (4) gt = tanh X e (U gexte + W geht−1) ! (5) ct = ct−1 ∗ X e fte ! + (gt ∗it) (6) ht = tanh(ct) ∗ot (7) Here, U and W are learned parameter matrices, σ denotes the sigmoid function, ∗denotes element1632 x1 x2 xn Decoder hT Decoder previous hidden state hd t Attention model … α = {α1, α2, …, αn } Figure 5: Attention model of GTR-LSTM. wise multiplication, and x is the input at the current time-step. The input gate i determines the weight of the current input. The forget gate f determines the weight of the previous state. The output gate o determines the weight of the cell state forwarded to the next time-step. The state g is the candidate hidden state used to compute the internal memory unit c based on the current input and the previous state. The subscript t is the timestep. 
The subscript/superscript e is the input element (an entity or a predicate). Following Tree LSTM (Tai et al., 2015) and Graph LSTM (Liang et al., 2016), we also use a separate forget gate for each input that allows the GTR-LSTM unit to incorporate information from each input selectively. From Fig. 4, we can see that the traversal creates two branches, one ended in hmary and the other ended in h′ john. After the encoder computes the hidden states of each vertex, h′ john does not include the information of hmary and vice versa. Moreover, the graph can contain cycles that cause difficulty in determining the starting and ending vertices. Our traversal procedure ensures that the hidden states of all vertices are updated based on their adjacent vertices (local neighbors). To further capture the global information of the graph, we apply an attention model on the GTR-LSTM triple encoder. The attention model takes the hidden states of all vertices computed by the encoder and the previous hidden state of the decoder to compute the final input vector of each decoder time-step. Figure 5 illustrates the attention model of GTR-LSTM. Inspired by Luong et al. (2015), we adapt the following equation to compute the weight of each vertex. αn = exp(hdt T Wxn) P|X| j=1 exp(hdt T Wxj) (8) Here, hdt is the previous hidden state of the decoder, |X| is the total number of entities in the triples, W is a learned parameter matrix, xn and xj are hidden states of vertices, and α = {α1, α2, ..., αn} is the weight vector of all vertices. Then the input of the decoder for each timestep can be computed as follows. hT = |X| X n=1 αnxn (9) 3.5 Decoder The decoder of the proposed framework is a standard LSTM. It is trained to generate the output sequence by predicting the next output word wt conditioned on the hidden state hdt. The current hidden state hdt is conditioned on the hidden state of the previous time-step hdt−1, the output of the previous time-step wt−1, and input vector representation hT . The hidden state and the output of the decoder at time-step t are computed as: hdt = f(hdt−1, wt−1, hT ) (10) wt = softmax(V ht) (11) Here, f is a single LSTM unit, and V is the hidden-to-output weight matrix. The encoder and the decoder are trained to maximize the conditional log-likelihood: p(Sn | Tn) = |Sn| X t=1 log wt (12) Hence, the training objective is to minimize the negative conditional log-likelihood: J = N X n=1 −log p(Sn | Tn) (13) where (Sn, Tn) is a pair of output word sequence and input RDF triple set given for the training. 3.6 Entity Masking Entity masking makes our framework generalizes better to unseen entities. This technique addresses the problem of a limited training set which is faced by many NLG problems. Entity masking replaces entity mentions with eids and entity types in both the input triples and the target sentences. However, we do not want our model to be overly generalized either. Thus, we need to have general and specific entity types. For example, the entity “John Doe” is replaced by “ENT-1 PERSON GOVERNOR”. To add the entity types, we use the DBpedia lookup API. The 1633 API returns several entity types. The general and specific entity types are defined by the level of the word in the WordNet (Fellbaum, 1998) hierarchy. In the encoder side, each element of the triple tn = ⟨sn, pn, on⟩is transformed into sn = ⟨lsn, gsn, dsn⟩, pn = ⟨lpn⟩, and on = ⟨lon, gon, don⟩, where l is the label of an element, g is the general entity type, and d is the specific entity type. 
The labels of the subject and the object are latter replaced by eids, while the label of the predicate is preserved, since it indicates the relationship between the subject and the object. On the decoder side, the entities in the target text are also replaced by their corresponding eids. Entity matching is beyond the scope of our study. We simply use a combination of three string matching methods to find entity mentions in the sentence: exact matching, n-gram matching, and parse tree matching. The exact matching is used to find the exact mention; the n-gram matching is used to handle partial matching with the same token length; and parse tree matching is used to find a partial matching with different token length. 4 Experiments We evaluate our framework on two datasets. The first is the dataset from Gardent et al. (2017a). We call it the WebNLG dataset. This dataset contains 25,298 RDF triple set-text pairs, with 9,674 unique sets of RDF triples. The dataset consists of a Train+Dev dataset and a Test Unseen dataset. We split Train+Dev into a training set (80%), a development set (10%), and a Seen testing set (10%). The Train+Dev dataset contains RDF triples in ten categories (topics, e.g., astronaut, monument, food, etc.), while the Test Unseen dataset has five other unseen categories. The maximum number of triples in each RDF triple set is seven. For the second dataset, we collected data from Wikipedia pages regarding landmarks. We call it the GKB dataset. We first extract RDF triples from Wikipedia infoboxes and sentences from the Wikipedia text that contain entities mentioned in the RDF triples. Human annotators then filter out false matches to obtain 1,000 RDF triple set-text pairs. This dataset is split into the training and development set (80%) and the testing set (20%). Table 1 illustrates an example of the data pairs of WebNLG and GKB dataset. We implement the existing models, the adapted model, and the proposed model using Keras3. We use three common evaluation metrics including BLEU (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2011), and TER (Snover et al., 2006). For the metric computation and significance testing, we use MultEval (Clark et al., 2011). 4.1 Tested Models We compare our proposed graph-based triple encoder (GTR-LSTM, Section 3.4) with three existing model including the adapted standard BLSTM encoder (BLSTM, Section 3.2), Neural Wikipedian (Vougiouklis et al., 2017) (TFF), and statistical machine translation (Hoang and Koehn, 2008) (SMT) trained on a 6-gram language model. We also compare with the adapted standard triple encoder (TLSTM, Section 3.3). 4.2 Hyperparameters We use grid search to find the best hyperparameters for the neural networks. We use GloVe (Pennington et al., 2014) trained on the GKB and WebNLG training data and full English Wikipedia data dump to get 300-dimension word embeddings. We use 512 hidden units for both encoder and decoder. We use a 0.5 dropout rate for regularization on both encoder and decoder to avoid overfitting. We train our model on NVIDIA Tesla K40c. We find that using adaptive learning rates for the optimization is efficient and leads the model to converge faster. Thus, we use Adam (Kingma and Ba, 2015) with a learning rate of 0.0002 instead of stochastic gradient descent. The update of parameters in training is computed using a mini batch of 64 instances. We further apply early stopping to detect the convergence. 4.3 Effect of Entity Masking Table 2 shows the overall comparison of model performance. 
It shows that entity masking gives a consistent performance improvement for all models. Generalizing the input triples and target sentences helps the models to learn the relationships between entities from their types. This is particularly helpful when there is limited training data. We use a combination of exact matching, n-gram matching and parse tree matching to find the entity mentions in the sentence. The entity masking accuracy for WebNLG dataset is 87.15%, while for 3https://nmt-keras.readthedocs.io/en/latest/ 1634 Model Metric/Dataset BLEU↑ METEOR↑ TER↓ Seen Unseen GKB Seen Unseen GKB Seen Unseen GKB Entity Unmasking Existing models BLSTM 42.7 23.0 28.0 34.4 28.7 27.5 55.7 69.9 67.7 SMT 41.1 23.9 27.7 33.2 28.3 27.6 57.0 70.1 63.8 TFF 44.6 26.4 26.4 33.9 29.4 27.2 52.4 62.6 60.1 Adapted model TLSTM 45.9 28.1 29.4 34.9 30.1 28.5 50.5 62.7 59.0 Our proposed GTR-LSTM 54.0 29.2 37.1 37.3 27.8 30.6 45.3 59.8 55.1 Entity Masking Existing models BLSTM 49.8 28.0 34.8 38.3 29.4 28.6 49.9 64.9 65.8 SMT 46.5 24.8 32.0 37.1 29.1 28.5 52.3 62.2 67.8 TFF 47.8 28.4 33.7 35.9 30.5 28.9 49.9 61.2 58.4 Adapted Model TLSTM 50.5 31.6 36.7 36.5 30.7 30.1 47.7 60.4 57.2 Our proposed GTR-LSTM 58.6 34.1 40.1 40.6 32.0 34.6 41.7 57.9 50.6 Table 2: Comparison of model performance. RDF inputs ⟨Elizabeth Tower, location, London⟩, ⟨Wembley Stadium, location, London⟩, ⟨London, capital of, England⟩, ⟨Theresa May, prime minister, England⟩ Reference london , england is home to wembley stadium and the elizabeth tower. the name of the leader in england is theresa may. BLSTM england is lead by theresa may and is located in the city of london . the elizabeth tower is located in the city of england and is located in the wembley stadium. SMT wembley stadium is located in london , elizabeth tower . theresa may is the leader of england , england. TFF the elizabeth tower is located in london , england , where wembley stadium is the leader and theresa may is the leader. TLSTM the wembley stadium is located in london , england . the country is the location of elizabeth tower . theresa may is the leader of london. GTR-LSTM the wembley stadium and elizabeth tower are both located in london , england . theresa may is the leader of england. Table 3: Sample output of the system. The error is highlighted in bold. the GKB dataset is 82.45%. Entity masking improves the BLEU score of the proposed GTR-LSTM model by 8.5% (from 54.0 on the Entity Unmasking model to 58.6 on the Entity Masking model), 16.7%, and 8.0% on the WebNLG seen testing data (denoted by “Seen”), WebNLG unseen testing data (denoted by “Unseen”), and the GKB testing data (denoted by “GKB”). Using the entity masking not only improves the performance by recognizing the unknown vocabulary via eid masking but also improves the running time performance by requiring a smaller training vocabulary. 4.4 Effect of Models Table 2 also shows that the proposed GTR-LSTM triple encoder achieves a consistent improvement over the baseline models, and the improvement is statistically significant, with p < 0.01 based on the t-test of all metrics. We use MultEval to compute the p value based on an approximate randomization (Clark et al., 2011). The improvement on the BLEU score indicates that the model reduces the errors in the generated sentence. Our manual inspection confirms this result. The better (lower) TER score suggests that the model generates a more compact output (i.e., better aggregation). Table 3 shows a sample output of all models. 
From this table, we can see that all baseline models produce sentences that contain wrong relationships between entities (e.g., the BLSTM output contains a wrong relationship “the elizabeth tower is located in the city of england”). Moreover, the baseline models generate sentences with a weak aggregation (e.g., “Elizabeth Tower” and “Wembley Stadium” are in separate sentences for TLSTM). The proposed GTR-LSTM model successfully avoids these problems. Model training time. GTR-LSM is slower in training than the baseline models, which is expected as it needs to encode more information. However, its training time is no more than twice as that of any baseline models tested, and the training can complete within one day which seems reasonable. Meanwhile, the number of parameters trained for GTR-LSTM is up to 59% smaller than those of the baseline models, which saves the space cost for model storage. 4.5 Human Evaluation To complement the automatic evaluation, we conduct human evaluations for all of the masked models. We ask five human annotators. Each of them 1635 Model Dataset/Metric Seen Unseen GKB Correctness Grammar Fluency Correctness Grammar Fluency Correctness Grammar Fluency Existing Models BLSTM 2.25 2.33 2.29 1.53 1.71 1.68 1.54 1.84 1.84 SMT 2.03 2.11 2.07 1.36 1.48 1.44 1.81 1.99 1.89 TFF 1.77 1.91 1.88 1.44 1.69 1.66 1.71 1.99 1.96 Adapted Model TLSTM 2.53 2.61 2.55 1.75 1.93 1.86 2.21 2.38 2.35 Our Proposed GTR-LSTM 2.64 2.66 2.57 1.96 2.04 1.99 2.29 2.42 2.41 Table 4: Human evaluation results. has studied English for at least ten years and completed education in a full English environment for at least two years. We provide a website4 that shows them the RDF triples and the generated text. The annotators are given training on the scoring criteria. We also provide scoring examples. We randomly selected 100 sets of triples along with the output of each model. We only select sets of triples that contain more than two triples. Following (Gardent et al., 2017b), we use three evaluation metrics including correctness, grammaticality, and fluency. For each pair of triple set and generated sentences, the annotators are asked to give a score between one to three for each metric. Correctness is used to measure the semantics of the output sentence. A score of 3 is given to generated sentences that contain no errors in the relationships between entities; a score of 2 is given to generated sentences that contain one error in the relationship; and a score of 1 is given to generated sentences that contain more than one errors in the relationships. Grammaticality is used to rate the grammatical and spelling errors of the generated sentences. Similar to the correctness metric, a score of 3 is given to generated sentences with no grammatical and spelling errors; a score of 2 is given to generated sentences with one error; and a score of 1 for the others. The last metric, fluency, is used to measure the fluency of the sentence output. We ask the annotators to give a score based on the aggregation of the sentences and the existence of sentence repetition. Table 4 shows the results of the human evaluations. The results confirm the automatic evaluation in which our proposed model achieves the best scores. Error analysis. We further perform a manual inspection of 100 randomly selected output sentences of GTR-LSTM and BLSTM on the Seen and Unseen test data. We find that 32% of BLSTM output contains wrong relationships between entities. In comparison, only 8% of GTR-LSTM output contains such errors. 
Besides, we find duplicate sub-sentences in 4http://bit.ly/gkb-mappings the output of GTR-LSTM (15%). The following output is an example: “beef kway teow is a dish from singapore, where english language is spoken and the leader is tony tan. the leader of singapore is tony tan.” While the duplicate sentence is not wrong, it affects the reading experience. We conjecture that the LSTM in the decoder caused such an issue. We aim to solve this problem in future work. 5 Conclusions We proposed a novel graph-based triple encoder GTR-LSTM for sentence generation from RDF data. The proposed model maintains the structure of input RDF triples as a small knowledge graph to optimize the amount of information preserved in the input of the model. The proposed model can handle cycles to capture the global information of a knowledge graph and also handle non-predefined relationships between entities of a knowledge graph. Our experiments show that GTR-LSTM offers a better performance than all the competitors. On the WebNLG dataset, our model outperforms the best existing model, the standard BLSTM model, by up to 17.6%, 6.0%, and 16.4% in terms of BLEU, METEOR, and TER scores, respectively. On the GKB dataset, our model outperforms the standard BLSTM model by up to 15.2%, 20.9%, and 23.1% in these three metrics, respectively. Acknowledgments Bayu Distiawan Trisedya is supported by the Indonesian Endowment Fund for Education (LPDP). This work is supported by Australian Research Council (ARC) Discovery Project DP180102050 and Future Fellowships Project FT120100832, and Google Faculty Research Award. This work is partly done while Jianzhong Qi is visiting the University of New South Wales. Wei Wang was partially supported by D2DCRC DC25002, DC25003, ARC DP 170103710 and 180103411. 1636 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR). http://arxiv.org/abs/1409.0473. Kalina Bontcheva and Yorick Wilks. 2004. Automatic Report Generation from Ontologies: The MIAKT Approach, Springer, Berlin, Heidelberg, pages 324–335. https://doi.org/10.1007/978-3-54027779-8 28. Antoine Bordes, Sumit Chopra, and Jason Weston. 2014. Question answering with subgraph embeddings. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 615–620. https://www.aclweb.org/anthology/D/D14/D141067.pdf. Andrew Chisholm, Will Radford, and Ben Hachey. 2017. Learning to generate one-sentence biographies from wikidata. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL). pages 633–642. http://aclweb.org/anthology/E17-1060. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 1724– 1734. http://www.aclweb.org/anthology/D14-1179. Philipp Cimiano, Janna L¨uker, David Nagel, and Christina Unger. 2013. Exploiting ontology lexica for generating natural language texts from rdf data. In Proceedings of the 14th European Workshop on Natural Language Generation (ENLG). pages 10– 19. http://www.aclweb.org/anthology/W13-2102. Jonathan H. Clark, Chris Dyer, Alon Lavie, and Noah A. Smith. 2011. 
Better hypothesis testing for statistical machine translation: Controlling for optimizer instability. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT). pages 176–181. http://www.aclweb.org/anthology/P11-2031. Danica Damljanovic, Milan Agatonovic, and Hamish Cunningham. 2010. Natural language interfaces to ontologies: combining syntactic analysis and ontology-based lookup through the user interaction. In Proceedings of the 7th International Conference on The Semantic Web (ISWC). pages 106–120. https://doi.org/10.1007/978-3-642-13486-9 8. Michael J. Denkowski and Alon Lavie. 2011. Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems. In Proceedings of the Sixth Workshop on Statistical Machine Translation (WMT). pages 85–91. http://aclweb.org/anthology/W11-2107. Daniel Duma and Ewan Klein. 2013. Generating natural language from linked data: Unsupervised template extraction. In Proceedings of the 10th International Conference on Computational Semantics (IWCS). pages 83–94. http://www.aclweb.org/anthology/W13-0108. Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD). pages 1156–1165. https://doi.org/10.1145/2623330.2623677. Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press. http://aclweb.org/anthology/J99-2008. Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017a. Creating training corpora for nlg micro-planners. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL). pages 179–188. http://aclweb.org/anthology/P17-1017. Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017b. The webnlg challenge: Generating text from rdf data. In Proceedings of the 10th International Conference on Natural Language Generation (INLG). pages 124– 133. http://www.aclweb.org/anthology/W17-3518. Hieu Hoang and Philipp Koehn. 2008. Design of the moses decoder for statistical machine translation. In Software Engineering, Testing, and Quality Assurance for Natural Language Processing (SETQA-NLP). pages 58–65. http://www.aclweb.org/anthology/W08-0510. Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR). https://arxiv.org/abs/1412.6980. R´emi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 1203–1213. https://aclweb.org/anthology/D161128. Xiaodan Liang, Xiaohui Shen, Jiashi Feng, Liang Lin, and Shuicheng Yan. 2016. Semantic object parsing with graph lstm. In Proceedings of the 14th European Conference on Computer Vision (ECCV). pages 125–143. https://doi.org/10.1007/978-3-31946448-0 8. 1637 Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 1412–1421. http://aclweb.org/anthology/D15-1166. Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. 2016. What to talk about and how? 
selective generation using lstms with coarse-to-fine alignment. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). pages 720–730. http://www.aclweb.org/anthology/N16-1086. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics (ACL). pages 311–318. http://aclweb.org/anthology/P02-1040.pdf. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 1532–1543. https://aclweb.org/anthology/D14-1162. Ehud Reiter and Robert Dale. 2000. Building natural language generation systems. Cambridge University Press. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of Association for Machine Translation in the Americas (AMTA). pages 223–231. http://mt-archive.info/AMTA-2006-Snover.pdf. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL) and the 7th International Joint Conference on Natural Language Processing (IJCNLP). pages 1556–1566. http://www.aclweb.org/anthology/P151150. Christina Unger, Lorenz Bhmann, Jens Lehmann, Axel-Cyrille Ngonga Ngomo, Daniel Gerber, and Philipp Cimiano. 2012. Template-based question answering over rdf data. In Proceedings of the 21st international conference on World Wide Web (WWW). pages 639–648. https://doi.org/10.1145/2187836.2187923. Pavlos Vougiouklis, Hady Elsahar, Lucie-Aime Kaffee, Christoph Gravier, Frederique Laforest, Jonathon Hare, and Elena Simperl. 2017. Neural wikipedian: Generating textual summaries from knowledge base triples. arXiv preprint arXiv:1711.00155 https://arxiv.org/pdf/1711.00155.pdf.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1638–1649 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1638 Learning to Write with Cooperative Discriminators Ari Holtzman† Jan Buys† Maxwell Forbes† Antoine Bosselut† David Golub† Yejin Choi†‡ †Paul G. Allen School of Computer Science & Engineering, University of Washington ‡Allen Institute for Artificial Intelligence {ahai,jbuys,mbforbes,antoineb,golubd,yejin}@cs.washington.edu Abstract Despite their local fluency, long-form text generated from RNNs is often generic, repetitive, and even self-contradictory. We propose a unified learning framework that collectively addresses all the above issues by composing a committee of discriminators that can guide a base RNN generator towards more globally coherent generations. More concretely, discriminators each specialize in a different principle of communication, such as Grice’s maxims, and are collectively combined with the base RNN generator through a composite decoding objective. Human evaluation demonstrates that text generated by our model is preferred over that of baselines by a large margin, significantly enhancing the overall coherence, style, and information of the generations. 1 Introduction Language models based on Recurrent Neural Networks (RNNs) have brought substantial advancements across a wide range of language tasks (Jozefowicz et al., 2016; Bahdanau et al., 2015; Chopra et al., 2016). However, when used for longform text generation, RNNs often lead to degenerate text that is repetitive, self-contradictory, and overly generic, as shown in Figure 1. We propose a unified learning framework that can address several challenges of long-form text generation by composing a committee of discriminators each specializing in a different principle of communication. Starting with an RNN language model, our framework learns to construct a more powerful generator by training a number of discriminative models that can collectively address limitations of the base RNN generator, and Context: The two guards thudded into the door on the other side and began pounding on it, shouting furiously. Scious raised the large bunch of keys then placed them in a large pocket in his coat. "Come, we have to go," he whispered and moved up the corridor, the guards still hammering on the door. Jared looked around. The table next to the room they had been in was bare. LM: There was a small table set into the center of the table, and a table with two chairs and a table. A chair sat next to the couch. A large man stood in the doorway of the room, his arms crossed over his chest. "What 's up?" He asked. L2W: Empty. It looked like some kind of prison. There were no windows, no doors, no windows. No sign of anyone in the house. No one had ever been here before. Reference: Only a flagon of wine and a couple of mugs plus a crude dice game. Light flickered in the stone corridor from lanterns on the walls. The place was dank and gloomy, moss in scattered patches on the floor and walls. The corridor ended just beyond the door to their former prison. No one else was about. Figure 1: Sample generations from an RNN language model (LM) and our system (L2W) conditioning on the context shown on the top. The red, underlined text highlights repetitions, while the blue, italicized text highlights details that have a direct semantic parallel in the reference text. 
then learns how to weigh these discriminators to form the final decoding objective. These “cooperative” discriminators complement each other and the base language model to form a stronger, more global decoding objective. The design of our discriminators are inspired by Grice’s maxims (Grice et al., 1975) of quantity, quality, relation, and manner. The discriminators learn to encode these qualities through the selection of training data (e.g. distinguishing a true continuation from a randomly sampled one as in §3.2 Relevance Model), which includes generations from partial models (e.g. distinguishing a true continuation from one generated by a language model as in §3.2 Style Model). The system 1639 then learns to balance these discriminators by initially weighing them uniformly, then continually updating its weights by comparing the scores the system gives to its own generated continuations and to the reference continuation. Empirical results (§5) demonstrate that our learning framework is highly effective in converting a generic RNN language model into a substantially stronger generator. Human evaluation confirms that language generated by our model is preferred over that of competitive baselines by a large margin in two distinct domains, and significantly enhances the overall coherence, style, and information content of the generated text. Automatic evaluation shows that our system is both less repetitive and more diverse than baselines. 2 Background RNN language models learn the conditional probability P(xt|x1, ..., xt−1) of generating the next word xt given all previous words. This conditional probability learned by RNNs often assigns higher probability to repetitive, overly generic sentences, as shown in Figure 1 and also in Table 3. Even gated RNNs such as LSTMs (Hochreiter and Schmidhuber, 1997) and GRUs (Cho et al., 2014) have difficulties in properly incorporating long-term context due to explaining-away effects (Yu et al., 2017b), diminishing gradients (Pascanu et al., 2013), and lack of inductive bias for the network to learn discourse structure or global coherence beyond local patterns. Several methods in the literature attempt to address these issues. Overly simple and generic generation can be improved by length-normalizing the sentence probability (Wu et al., 2016), future cost estimation (Schmaltz et al., 2016), or a diversityboosting objective function (Shao et al., 2017; Vijayakumar et al., 2016). Repetition can be reduced by prohibiting recurrence of the trigrams as a hard rule (Paulus et al., 2018). However, such hard constraints do not stop RNNs from repeating through paraphrasing while preventing occasional intentional repetition. We propose a unified framework to address all these related challenges of long-form text generation by learning to construct a better decoding objective, generalizing over various existing modifications to the decoding objective. 3 The Learning Framework We propose a general learning framework for conditional language generation of a sequence y given a fixed context x. The decoding objective for generation takes the general form fλ(x, y) = log(Plm(y|x))+ X k λksk(x, y), (1) where every sk is a scoring function. The proposed objective combines the RNN language model probability Plm (§3.1) with a set of additional scores sk(x, y) produced by discriminatively trained communication models (§3.2), which are weighted with learned mixture coefficients λk (§3.3). 
When the scores sk are log probabilities, this corresponds to a Product of Experts (PoE) model (Hinton, 2002). Generation is performed using beam search (§3.4), scoring incomplete candidate generations y1:i at each time step i. The RNN language model decomposes into per-word probabilities via the chain rule. However, in order to allow for more expressivity over long range context we do not require the discriminative model scores to factorize over the elements of y, addressing a key limitation of RNNs. More specifically, we use an estimated score s′ k(x, y1:i) that can be computed for any prefix of y = y1:n to approximate the objective during beam search, such that s′ k(x, y1:n) = sk(x, y). To ensure that the training method matches this approximation as closely as possible, scorers are trained to discriminate prefixes of the same length (chosen from a predetermined set of prefix lengths), rather than complete continuations, except for the entailment module as described in §3.2 Entailment Model. The prefix scores are re-estimated at each time-step, rather than accumulated over beam search. 3.1 Base Language Model The RNN language model treats the context x and the continuation y as a single sequence s: log Plm(s) = X i log Plm(si|s1:i−1). (2) 3.2 Cooperative Communication Models We introduce a set of discriminators, each of which encodes an aspect of proper writing that RNNs usually fail to capture. Each model is trained to discriminate between good and bad generations; we vary the model parameterization and 1640 training examples to guide each model to focus on a different aspect of Grice’s Maxims. The discriminator scores are interpreted as classification probabilities (scaled with the logistic function where necessary) and interpolated in the objective function as log probabilities. Let D = {(x1, y1), . . . (xn, yn)} be the set of training examples for conditional generation. Dx denote all contexts and Dy all continuations. The scoring functions are trained on prefixes of y to simulate their application to partial continuations at inference time. In all models the first layer embeds each word w into a 300-dimensional vector e(w) initialized with GloVe (Pennington et al., 2014) pretrainedembeddings. Repetition Model This model addresses the maxim of Quantity by biasing the generator to avoid repetitions. The goal of the repetition discriminator is to learn to distinguish between RNN-generated and gold continuations by exploiting our empirical observation that repetitions are more common in completions generated by RNN language models. However, we do not want to completely eliminate repetition, as words do recur in English. In order to model natural levels of repetition, a score di is computed for each position in the continuation y based on pairwise cosine similarity between word embeddings within a fixed window of the previous k words, where di = max j=i−k...i−1(CosSim(e(yj), e(yi))), (3) such that di = 1 if yi is repeated in the window. The score of the continuation is then defined as srep(y) = σ(w⊤ r RNNrep(d)), (4) where RNNrep(d) is the final state of a unidirectional RNN ran over the similarity scores d = d1 . . . dn and wr is a learned vector. The model is trained to maximize the ranking log likelihood Lrep = X (x,yg)∈D, ys∼LM(x) log σ(srep(yg) −srep(ys)), (5) which corresponds to the probability of the gold ending yg receiving a higher score than the ending sampled from the RNN language model. 
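A minimal sketch of the repetition features and score (Eqs. 3-4), assuming `emb` maps a word to its GloVe vector and `window` plays the role of k; the learned RNN over d and the learned vector w_r are abbreviated to a single logistic readout, so this illustrates the feature computation rather than the trained scorer.

```python
import numpy as np

def repetition_features(tokens, emb, window=4):
    """d_i = max cosine similarity between word i and the previous `window` words (Eq. 3)."""
    vecs = [np.asarray(emb[t], dtype=float) for t in tokens]
    d = []
    for i, v in enumerate(vecs):
        prev = vecs[max(0, i - window):i]
        if not prev:
            d.append(0.0)
            continue
        vi = v / (np.linalg.norm(v) + 1e-8)
        d.append(max(float(vi @ (p / (np.linalg.norm(p) + 1e-8))) for p in prev))
    return np.array(d)

def repetition_score(tokens, emb, w=1.0, window=4):
    """Stand-in for s_rep in Eq. (4): a logistic readout over the mean of d.
    The paper instead runs a learned RNN over d and applies a learned vector w_r."""
    d = repetition_features(tokens, emb, window)
    summary = d.mean() if d.size else 0.0
    return 1.0 / (1.0 + np.exp(-w * summary))
```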
Entailment Model Judging textual quality can be related to the natural language inference (NLI) task of recognizing textual entailment (Dagan et al., 2006; Bowman et al., 2015): we would like to guide the generator to neither contradict its own past generation (the maxim of Quality) nor state something that readily follows from the context (the maxim of Quantity). The latter case is driven by the RNNs habit of paraphrasing itself during generation. We train a classifier that takes two sentences a and b as input and predicts the relation between them as either contradiction, entailment or neutral. We use the neutral class probability of the sentence pair as discriminator score, in order to discourage both contradiction and entailment. As entailment classifier we use the decomposable attention model (Parikh et al., 2016), a competitive, parameter-efficient model for entailment classification.1 The classifier is trained on two large entailment datasets, SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2017), which together have more than 940,000 training examples. We train separate models based on the vocabularies of each of the datasets we use for evaluation. In contrast to our other communication models, this classifier cannot be applied directly to the full context and continuation sequences it is scoring. Instead every completed sentence in the continuation should be scored against all preceding sentences in both the context and continuation. Let t(a, b) be the log probability of the neutral class. Let S(y) be the set of complete sentences in y, Slast(y) the last complete sentence, and Sinit(y) the sentences before the last complete sentence. We compute the entailment score of Slast(y) against all preceding sentences in x and y, and use the score of the sentence-pair for which we have the least confidence in a neutral classification: sentail(x, y) = mina∈S(x)∪Sinit(y)t(a, Slast(y)). (6) Intuitively, we only use complete sentences because the ending of a sentence can easily flip entailment. As a result, we carry over entailment score of the last complete sentence in a generation until the end of the next sentence, in order to maintain the presence of the entailment score in the objective. Note that we check that the current 1We use the version without intra-sentence attention. 1641 Data: context x, beam size k, sampling temperature t Result: best continuation best = None beam = [x] for step = 0; step < max steps; step = step +1 do next beam = [] for candidate in beam do next beam.extend(next k(candidate)) if termination score(candidate) > best.score then best = candidate.append(term) end end for candidate in next beam do ▷score with models candidate.score += fλ(candidate) end ▷sample k candidates by score beam = sample(next beam, k, t) end if learning then update λ with gradient descent by comparing best against the gold. end return best Algorithm 1: Inference/Learning in the Learning to Write Framework. sentence is not directly entailed or contradicted by a previous sentence and not the reverse. 2 In contrast to our other models, the score this model returns only corresponds to a subsequence of the given continuation, as the score is not accumulated across sentences during beam search. Instead the decoder is guided locally to continue complete sentences that are not entailed or contradicted by the previous text. Relevance Model The relevance model encodes the maxim of Relation by predicting whether the content of a candidate continuation is relevant to the given context. 
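Before turning to the details of the relevance scorer, here is a minimal recap sketch of the entailment score in Eq. (6), assuming `neutral_logprob(a, b)` stands in for any NLI classifier returning log P(neutral | premise a, hypothesis b); the paper uses a decomposable attention model trained on SNLI and MultiNLI.

```python
from typing import Callable, List

def entailment_score(
    context_sents: List[str],
    continuation_sents: List[str],
    neutral_logprob: Callable[[str, str], float],
) -> float:
    """Score the last complete sentence against every preceding sentence; keep the minimum."""
    if not continuation_sents:
        return 0.0
    last = continuation_sents[-1]
    preceding = context_sents + continuation_sents[:-1]
    if not preceding:
        return 0.0
    return min(neutral_logprob(a, last) for a in preceding)
```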
We train the model to distinguish between true continuations and random continuations sampled from other (human-written) endings in the corpus, conditioned on the given context. First both the context and continuation sequences are passed through a convolutional layer, followed by maxpooling to obtain vector representations of the sequences: a = maxpool(conva(e(x))), (7) b = maxpool(convb(e(y))). (8) 2If the current sentence entails a previous one it may simply be adding more specific information, for instance: “He hated broccoli. Every time he ate broccoli he was reminded that it was the thing he hated most.” The goal of maxpooling is to obtain a vector representing the most important semantic information in each dimension. The scoring function is then defined as srel = wT l · (a ◦b), (9) where element-wise multiplication of the context and continuation vectors will amplify similarities. We optimize the ranking log likelihood Lrel = X (x,yg)∈D, yr∼Dy log σ(srel(x, yg) −srel(x, yr)), (10) where yg is the gold ending and yr is a randomly sampled ending. Lexical Style Model In practice RNNs generate text that exhibit much less lexical diversity than their training data. To counter this effect we introduce a simple discriminator based on observed lexical distributions which captures writing style as expressed through word choice. This classifier therefore encodes aspects of the maxim of Manner. The scoring function is defined as sbow(y) = wT s maxpool(e(y)). (11) The model is trained with a ranking loss using negative examples sampled from the language model, similar to Equation 5. 3.3 Mixture Weight Learning Once all the communication models have been trained, we learn the combined decoding objective. In particular we learn the weight coefficients λk in equation 1 to linearly combine the scoring functions, using a discriminative loss Lmix = X (x,y)∈D (fλ(x, y) −fλ(x, A(x))2, (12) where A is the inference algorithm for beam search decoding. The weight coefficients are thus optimized to minimize the difference between the scores assigned to the gold continuation and the continuation predicted by the current model. Mixture weights are learned online: Each successive generation is performed based on the current values of λ, and a step of gradient descent is then performed based on the prediction. This has the effect that the objective function changes 1642 BookCorpus TripAdvisor Model BLEU Meteor Length Vocab Trigrams BLEU Meteor Length Vocab % Trigrams L2W 0.52 6.8 43.6 73.8 98.9 1.7 11.0 83.8 64.1 96.2 ADAPTIVELM 0.52 6.3 43.5 59.0 92.7 1.94 11.2 94.1 52.6 92.5 CACHELM 0.33 4.6 37.9 31.0 44.9 1.36 7.2 52.1 39.2 57.0 SEQ2SEQ 0.32 4.0 36.7 23.0 33.7 1.84 8.0 59.2 33.9 57.0 SEQGAN 0.18 5.0 28.4 73.4 99.3 0.73 6.7 47.0 57.6 93.4 REFERENCE 100.0 100.0 65.9 73.3 99.7 100.0 100.0 92.8 69.4 99.4 Table 1: Results for automatic evaluation metrics for all systems and domains, using the original continuation as the reference. The metrics are: Length - Average total length per example; Trigrams - % unique trigrams per example; Vocab - % unique words per example. dynamically during training: As the current samples from the model are used to update the mixture weights, it creates its own learning signal by applying the generative model discriminatively. The SGD learning rate is tuned separately for each dataset. 
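A minimal sketch of one online update of the mixture weights under the squared loss in Eq. (12), assuming the language-model log-probabilities and discriminator scores of the gold continuation and of the beam-search output have already been computed. Treating the generated continuation as fixed during the gradient step is an assumption about the exact procedure, not something the paper spells out.

```python
import numpy as np

def update_mixture_weights(lambdas, lm_gold, lm_pred, s_gold, s_pred, lr=1e-3):
    """One SGD step on (f_lambda(x, y_gold) - f_lambda(x, y_pred))^2 with respect to lambda."""
    lambdas = np.asarray(lambdas, dtype=float)
    s_gold = np.asarray(s_gold, dtype=float)
    s_pred = np.asarray(s_pred, dtype=float)
    f_gold = lm_gold + lambdas @ s_gold      # composite score of the gold continuation
    f_pred = lm_pred + lambdas @ s_pred      # composite score of the model's own output
    diff = f_gold - f_pred
    grad = 2.0 * diff * (s_gold - s_pred)    # derivative of the squared difference w.r.t. lambda_k
    return lambdas - lr * grad
```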
3.4 Beam Search Due to the limitations of greedy decoding and the fact that our scoring functions do not decompose across time steps, we perform generation with a beam search procedure, shown in Algorithm 1. The naive approach would be to perform beam search based only on the language model, and then rescore the k best candidate completions with our full model. We found that this approach leads to limited diversity in the beam and therefore cannot exploit the strengths of the full model. Instead we score the current hypotheses in the beam with the full decoding objective: First, each hypothesis is expanded by selecting the k highest scoring next words according to the language model (we use beam size k = 10). Then k sequences are sampled from the k2 candidates according to the (softmax normalized) distribution over the candidate scores given by the full decoding objective. Sampling is performed in order to increase diversity, using a temperature of 1.8, which was tuned by comparing the coherence of continuations on the validation set. At each step, the discriminator scores are recomputed for all candidates, with the exception of the entailment score, which is only recomputed for hypotheses which end with a sentence terminating symbol. We terminate beam search when the termination score, the maximum possible score achievable by terminating generation at the current position, is smaller than the current best score. 4 Experiments 4.1 Corpora We use two English corpora for evaluation. The first is the TripAdvisor corpus (Wang et al., 2010), a collection of hotel reviews with a total of 330 million words.3 The second is the BookCorpus (Zhu et al., 2015), a 980 million word collection of novels by unpublished authors.4 In order to train the discriminators, mixing weights, and the SEQ2SEQ and SEQGAN baselines, we segment both corpora into sections of length ten sentences, and use the first 5 sentence as context and the second 5 as the continuation. See supplementary material for further details. 4.2 Baselines ADAPTIVELM Our first baseline is the same Adaptive Softmax (Grave et al., 2016) language model used as base generator in our framework (§3.1). This enables us to evaluate the effect of our enhanced decoding objective directly. A 100k vocabulary is used and beam search with beam size of 5 is used at decoding time. ADAPTIVELM achieves perplexity of 37.46 and 18.81 on BookCorpus and TripAdvisor respectively. CACHELM As another LM baseline we include a continuous cache language model (Grave et al., 2017) as implemented by Merity et al. (2018), which recently obtained state-of-the-art perplexity on the Penn Treebank corpus (Marcus et al., 1993). Due to memory constraints, we use a vocabulary size of 50k for CACHELM. To generate, beam search decoding is used with a beam size 5. CACHELM obtains perplexities of 70.9 and 29.71 on BookCorpus and TripAdvisor respectively. 3http://times.cs.uiuc.edu/˜wang296/ Data/ 4http://yknzhu.wixsite.com/mbweb 1643 BookCorpus Specific Criteria Overall Quality L2W vs. Repetition Contradiction Relevance Clarity Better Equal Worse ADAPTIVELM +0.48 +0.18 +0.12 +0.11 47% 20% 32% CACHELM +1.61 +0.37 +1.23 +1.21 86% 6% 8% SEQ2SEQ +1.01 +0.54 +0.83 +0.83 72% 7% 21% SEQGAN +0.20 +0.32 +0.61 +0.62 63% 20% 17% LM VS. REFERENCE -0.10 -0.07 -0.18 -0.10 41% 7 % 52% L2W VS. REFERENCE +0.49 +0.37 +0.46 +0.55 53% 18% 29% TripAdvisor Specific Criteria Overall Quality L2W vs. 
Repetition Contradiction Relevance Clarity Better Equal Worse ADAPTIVELM +0.23 -0.02 +0.19 -0.03 47% 19% 34% CACHELM +1.25 +0.12 +0.94 +0.69 77% 9% 14% SEQ2SEQ +0.64 +0.04 +0.50 +0.41 58% 12% 30% SEQGAN +0.53 +0.01 +0.49 +0.06 55% 22% 22% LM VS. REFERENCE -0.10 -0.04 -0.15 -0.06 38% 10% 52% L2W VS. REFERENCE -0.49 -0.36 -0.47 -0.50 25% 18% 57% Table 2: Results of crowd-sourced evaluation on different aspects of the generation quality as well as overall quality judgments. For each sub-criterion we report the average of comparative scores on a scale from -2 to 2. For the overall quality evaluation, decisions are aggregated over 3 annotators per example. SEQ2SEQ As our evaluation can be framed as sequence-to-sequence transduction, we compare against a seq2seq model directly trained to predict 5-sentence continuations from 5 sentences of context, using the OpenNMT attention-based seq2seq implementation (Klein et al., 2017). Similarly to CACHELM, a 50k vocabulary was used and beam search decoding was performed with a beam size of 5. SEQGAN Finally, as our use of discriminators is related to Generative Adversarial Networks (GANs), we use SeqGAN (Yu et al., 2017a), a GAN for discrete sequences trained with policy gradients.5 This model is trained on 10-sentence sequences, which is significantly longer than previous experiments with GANs for text; the vocabulary is restricted to 25k words to make training tractable. Greedy sampling was found to outperform beam search. For implementation details, see the supplementary material. 4.3 Evaluation Setup We pose the evaluation of our model as the task of generating an appropriate continuation given an initial context. In our open-ended generation setting the continuation is not required to be a specific length, so we require our models and baselines to generate 5-sentence continuations, consistent with the way the discriminator and seq2seq baseline datasets are constructed. Previous work has reported that automatic measures 5We use the implementation available at https://github.com/nhynes/abc. such as BLEU (Papineni et al., 2002) and Meteor (Denkowski and Lavie, 2010) do not lead to meaningful evaluation when used for long or creative text generation where there can be high variance among acceptable generation outputs (Wiseman et al., 2017; Vedantam et al., 2015). However, we still report these measures as one component of our evaluation. Additionally we report a number of custom metrics which capture important properties of the generated text: Length – average sequence length per example; Trigrams – percentage of unique trigrams per example; Vocab – percentage of unique words per example. Endings generated by our model and the baselines are compared against the reference endings in the original text. Results are given in Table 1. For open-ended generation tasks such as our own, human evaluation has been found to be the only reliable measure (Li et al., 2016b; Wiseman et al., 2017). For human evaluation, two possible endings are presented to a human, who assesses the text according to several criteria closely inspired by Grice's Maxims: repetition, contradiction, relevance and clarity. See the supplementary material for examples of the evaluation forms we used. For each criterion, the two continuations are compared using a 5-point Likert scale, to which we assign numerical values of −2 to 2. The scale measures whether one generation is strongly or somewhat preferred above the other, or whether they are equal.
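For reference, a minimal sketch of how the automatic diversity statistics reported in Table 1 could be computed; whitespace tokenization and the exact per-example averaging are assumptions rather than the authors' evaluation code.

```python
def diversity_metrics(continuations):
    """Average length, % unique words, and % unique trigrams per generated example."""
    def avg(xs):
        return sum(xs) / len(xs) if xs else 0.0
    lengths, vocab_pcts, trigram_pcts = [], [], []
    for text in continuations:
        tokens = text.split()
        lengths.append(len(tokens))
        if tokens:
            vocab_pcts.append(100.0 * len(set(tokens)) / len(tokens))
        trigrams = [tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)]
        if trigrams:
            trigram_pcts.append(100.0 * len(set(trigrams)) / len(trigrams))
    return {"Length": avg(lengths), "Vocab": avg(vocab_pcts), "Trigrams": avg(trigram_pcts)}
```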
Finally, the human is asked to make a judgement about overall quality: which ending is better, or are they of equal qual1644 CONTEXT We thoroughly enjoyed staying at this hotel. We travelled with our kids aged 4 and 6. The rooms were clean, spacious and well maintained. The beds were great and all of us slept well. The hotel provided vouchers towards breakfast at the adjoining restaurant. L2W We didn’t have breakfast, so can’t comment on that. The concierge was very helpful, and gave us directions, etc. Overall, we would definitely recommend this hotel to others. We would stay here again in a heartbeat. If you’re looking for a great location, and good value, I’d recommend this hotel. ADAPTIVELM The breakfast buffet was very good, and the staff was very friendly and helpful, and it was a great way to start our day! I would definitely stay here again, and would recommend it to my friends, and would stay there again if I ever go back to Montreal. The location was great, and the staff was friendly and helpful. I would definitely stay here again!! CACHELM The staff was very friendly and helpful. The hotel is located in a great location. It is right in the middle of the gaslamp district. I would recommend this hotel to anyone. I would definitely stay here again. SEQ2SEQ The staff were very friendly and helpful. The hotel is in a great location. The staff were very friendly and helpful. The hotel is in a great location. The staff were very friendly and helpful. SEQGAN We had a breakfast at Shula’s & a delicious breakfast. The staff was very helpful and helpful. The breakfast was great as well. The staff was very helpful and friendly. We had a great service and the food was excellent. REFERENCE The restaurant was great and we used the vouchers towards whatever breakfast we ordered. The hotel had amazing grounds with a putting golf course that was fun for everyone. The pool was fantastic and we lucked out with great weather. We spent many hours in the pool, lounging, playing shuffleboard and snacking from the attached bar. The happy hour was great perk. Table 3: Example continuations generated by our model (L2W) and various baselines (all given the same context from TripAdvisor) compared to the reference continuation. For more examples go to https://ari-holtzman.github.io/l2w-demo/. ity? The human evaluation is performed on 100 examples selected from the test set of each corpus, for every pair of generators that are compared. We present the examples to workers on Amazon Mechanical Turk, using three annotators for each example. The results are given in Table 2. For the Likert scale, we report the average scores for each criterion, while for the overall quality judgement we simply aggregate votes across all examples. 5 Results and Analysis 5.1 Quantitative Results The absolute performance of all the evaluated systems on BLEU and Meteor is quite low (Table 1), as expected. However, in relative terms L2W is superior or competitive with all the baselines, of which ADAPTIVELM performs best. In terms of vocabulary and trigram diversity only SEQGAN is competitive with L2W, likely due to the fact that sampling based decoding was used. For generation length only L2W and ADAPTIVELM even approach human levels, with the former better on BookCorpus and the latter on TripAdvisor. Under the crowd-sourced evaluation (Table 2), on BookCorpus our model is consistently favored over the baselines on all dimensions of comparison. 
In particular, our model tends to be much less repetitive, while being more clear and relevant than the baselines. ADAPTIVELM is the most competitive baseline owing partially to the robustness of language models and to greater vocabulary coverage through the adaptive softmax. SEQGAN, while failing to achieve strong coherency, is surprisingly diverse, but tended to produce far shorter sentences than the other models. CACHELM has trouble dealing with the complex vocabulary of our domains without the support of either a hierarchical vocabulary structure (as in ADAPTIVELM) or a structured training method (as with SEQGAN), leading to overall poor results. While the SEQ2SEQ model has low conditional perplexity, we found that in practice it is less able to leverage long-distance dependencies than the base language model, producing more generic output. This reflects our need for more complex evaluations for generation, as such models are rarely evaluated under metrics that inspect characteristics of the text, rather than ability to predict the gold or overlap with the gold. For the TripAdvisor corpus, L2W is ranked higher than the baselines on overall quality, as well as on most individual metrics, with the exception that it fails to improve on contradiction and clarity over the ADAPTIVELM (which is again the most competitive baseline). Our model’s strongest improvements over the baselines are on repetition and relevance. 1645 Trip Advisor Ablation Ablation vs. LM Repetition Contradiction Relevance Clarity Better Neither Worse REPETITION ONLY +0.63 +0.30 +0.37 +0.42 50% 23% 27% ENTAILMENT ONLY +0.01 +0.02 +0.05 -0.10 39% 20% 41% RELEVANCE ONLY -0.19 +0.09 +0.10 +0.060 36% 22% 42% LEXICAL STYLE ONLY +0.11 +0.16 +0.20 +0.16 38% 25% 38% ALL +0.23 -0.02 +0.19 -0.03 47% 19% 34% Table 4: Crowd-sourced ablation evaluation of generations on TripAdvisor. Each ablation uses only one discriminative communication model, and is compared to ADAPTIVELM. Ablation To investigate the effect of individual discriminators on the overall performance, we report the results of ablations of our model in Table 4. For each ablation we include only one of the communication modules, and train a single mixture coefficient for combining that module and the language model. The diagonal of Table 4 contains only positive numbers, indicating that each discriminator does help with the purpose it was designed for. Interestingly, most discriminators help with most aspects of writing, but all except repetition fail to actually improve the overall quality over ADAPTIVELM. The repetition module gives the largest boost by far, consistent with the intuition that many of the deficiencies of RNN as a text generator lie in semantic repetition. The entailment module (which was intended to reduce contradiction) is the weakest, which we hypothesize is the combination of (a) mismatch between training and test data (since the entailment module was trained on SNLI and MultiNLI) and (b) the lack of smoothness in the entailment scorer, whose score could only be updated upon the completion of a sentence. Crowd Sourcing Surprisingly, L2W is even preferred over the original continuation of the initial text on BookCorpus. Qualitative analysis shows that L2W’s continuation is often a straightforward continuation of the original text while the true continuation is more surprising and contains complex references to earlier parts of the book. 
While many of the issues of automatic metrics (Liu et al., 2016; Novikova et al., 2017) have been alleviated by crowd-sourcing, we found it difficult to incentivize crowd workers to spend significant time on any one datum, forcing them to rely on a shallower understanding of the text. 5.2 Qualitative Analysis L2W generations are more topical and stylistically coherent with the context than the baselines. Table 3 shows that L2W, ADAPTIVELM, and SEQGAN all start similarly, commenting on the breakfast buffet, as breakfast was mentioned in the last sentence of the context. The language model immediately offers generic compliments about the breakfast and staff, whereas L2W chooses a reasonable but less obvious path, stating that the previously mentioned vouchers were not used. In fact, L2W is the only system not to use the line “The staff was very friendly and helpful.”, despite this sentence appearing in less than 1% of reviews. The semantics of this sentence, however, is expressed in many different surface forms in the training data (e.g., “The staff were kind and quick to respond.”). The CACHELM begins by generating the same over-used sentence and only produce short, generic sentences throughout. Seq2Seq simply repeats sentences that occur often in the training set, repeating one sentence three times and another twice. This indicates that the encoded context is essentially being ignored as the model fails to align the context and continuation. The SEQGAN system is more detailed, e.g. mentioning a specific location “Shula’s” as would be expected given its highly diverse vocabulary (as seen in Table 1). Yet it repeats itself in the first sentence. (e.g. “had a breakfast”, “and a delicious breakfast”). Consequently SEQGAN quickly devolves into generic language, repeating the incredibly common sentence “The staff was very helpful and friendly.”, similar to SEQ2SEQ. The L2W models do not fix every degenerate characteristic of RNNs. The TripAdvisor L2W generation consists of meaningful but mostly disconnected sentences, whereas human text tends to build on previous sentences, as in the reference continuation. Furthermore, while L2W re1646 peats itself less than any of our baselines, it still paraphrases itself, albeit more subtly: “we would definitely recommend this hotel to others.” compared to “I’d recommend this hotel.” This example also exposes a more fine-grained issue: L2W switches from using “we” to using “I” midgeneration. Such subtle distinctions are hard to capture during beam re-ranking and none of our models address the linguistic issues of this subtlety. 6 Related Work Alternative Decoding Objectives A number of papers have proposed alternative decoding objectives for generation (Shao et al., 2017). Li et al. (2016a) proposed a diversity-promoting objective that interpolates the conditional probability score with negative marginal or reverse conditional probabilities. Yu et al. (2017b) also incorporate the reverse conditional probability through a noisy channel model in order to alleviate the explaining-away problem, but at the cost of significant decoding complexity, making it impractical for paragraph generation. Modified decoding objectives have long been a common practice in statistical machine translation (Koehn et al., 2003; Och, 2003; Watanabe et al., 2007; Chiang et al., 2009) and remain common with neural machine translation, even when an extremely large amount of data is available (Wu et al., 2016). 
Inspired by all the above approaches, our work presents a general learning framework together with a more comprehensive set of composite communication models. Pragmatic Communication Models Models for pragmatic reasoning about communicative goals such as Grice’s maxims have been proposed in the context of referring expression generation (Frank and Goodman, 2012). Andreas and Klein (2016) proposed a neural model where candidate descriptions are sampled from a generatively trained speaker, which are then re-ranked by interpolating the score with that of the listener, a discriminator that predicts a distribution over choices given the speaker’s description. Similar to our work the generator and discriminator scores are combined to select utterances which follow Grice’s maxims. Yu et al. (2017c) proposed a model where the speaker consists of a convolutional encoder and an LSTM decoder, trained with a ranking loss on negative samples in addition to optimizing log-likelihood. Generative Adversarial Networks GANs (Goodfellow et al., 2014) are another alternative to maximum likelihood estimation for generative models. However, backpropagating through discrete sequences and the inherent instability of the training objective (Che et al., 2017) both present significant challenges. While solutions have been proposed to make it possible to train GANs for language (Che et al., 2017; Yu et al., 2017a) they have not yet been shown to produce high quality long-form text, as our results confirm. Generation with Long-term Context Several prior works studied paragraph generation using sequence-to-sequence models for image captions (Krause et al., 2017), product reviews (Lipton et al., 2015; Dong et al., 2017), sport reports (Wiseman et al., 2017), and recipes (Kiddon et al., 2016). While these prior works focus on developing neural architectures for learning domain specific discourse patterns, our work proposes a general framework for learning a generator that is more powerful than maximum likelihood decoding from an RNN language model for an arbitrary target domain. 7 Conclusion We proposed a unified learning framework for the generation of long, coherent texts, which overcomes some of the common limitations of RNNs as text generation models. Our framework learns a decoding objective suitable for generation through a learned combination of sub-models that capture linguistically-motivated qualities of good writing. Human evaluation shows that the quality of the text produced by our model exceeds that of competitive baselines by a large margin. Acknowledgments We thank the anonymous reviewers for their insightful feedback and Omer Levy for helpful discussions. This research was supported in part by NSF (IIS-1524371), DARPA CwC through ARO (W911NF-15-1-0543), Samsung AI Research, and gifts by Tencent, Google, and Facebook. References Jacob Andreas and Dan Klein. 2016. Reasoning about pragmatics with neural listeners and speakers. In 1647 Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1173– 1182. Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 632–642. 
Association for Computational Linguistics. Tong Che, Yanran Li, Ruixiang Zhang, R. Devon Hjelm, Wenjie Li, Yangqiu Song, and Yoshua Bengio. 2017. Maximum-likelihood augmented discrete generative adversarial networks. CoRR, abs/1702.07983. David Chiang, Kevin Knight, and Wei Wang. 2009. 11,001 new features for statistical machine translation. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 218–226, Boulder, Colorado. Association for Computational Linguistics. Kyunghyun Cho, Bart van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder–decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103–111. Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 93–98, San Diego, California. Association for Computational Linguistics. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment challenge. In Proceedings of the First International Conference on Machine Learning Challenges: Evaluating Predictive Uncertainty Visual Object Classification, and Recognizing Textual Entailment, MLCW’05, pages 177–190, Berlin, Heidelberg. Springer-Verlag. Michael Denkowski and Alon Lavie. 2010. Extending the METEOR Machine Translation Evaluation Metric to the Phrase Level. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 250–253. Li Dong, Shaohan Huang, Furu Wei, Mirella Lapata, Ming Zhou, and Ke Xu. 2017. Learning to generate product reviews from attributes. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 623–632, Valencia, Spain. Association for Computational Linguistics. Michael C. Frank and Noah D. Goodman. 2012. Predicting pragmatic reasoning in language games. Science, 336(6084):998–998. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680. Edouard Grave, Armand Joulin, Moustapha Ciss´e, David Grangier, and Herv´e J´egou. 2016. Efficient softmax approximation for gpus. arXiv preprint arXiv:1609.04309. Edouard Grave, Armand Joulin, and Nicolas Usunier. 2017. Improving neural language models with a continuous cache. In International Conference on Learning Representations. H Paul Grice, Peter Cole, Jerry Morgan, et al. 1975. Logic and conversation. 1975, pages 41–58. Geoffrey E Hinton. 2002. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. CoRR, abs/1602.02410. Chlo´e Kiddon, Luke Zettlemoyer, and Yejin Choi. 2016. Globally coherent text generation with neural checklist models. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 329–339. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. In Proceedings of the Association of Computational Linguistics. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, pages 48–54. Association for Computational Linguistics. Jonathan Krause, Justin Johnson, Ranjay Krishna, and Li Fei-Fei. 2017. A hierarchical approach for generating descriptive image paragraphs. In Proceedings of the Conference on Computer Vision and Pattern Recognition. 1648 Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119, San Diego, California. Association for Computational Linguistics. Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016b. Deep reinforcement learning for dialogue generation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1192–1202, Austin, Texas. Association for Computational Linguistics. Zachary Chase Lipton, Sharad Vikram, and Julian McAuley. 2015. Capturing meaning in product reviews with character-level generative text models. CoRR, abs/1511.03683. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 2122–2132, Austin, Texas. Association for Computational Linguistics. Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Computational Linguistics, 19(2):313–330. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. Regularizing and optimizing lstm language models. ICLR. Jekaterina Novikova, Ondˇrej Duˇsek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for nlg. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 2241–2252, Copenhagen, Denmark. Association for Computational Linguistics. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160–167, Sapporo, Japan. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the Association for Computational Linguistics, pages 311–318. Association for Computational Linguistics. Ankur Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 2249–2255. Association for Computational Linguistics. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. 
In International Conference on Machine Learning (ICML), pages 1310–1318. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. CoRR, abs/1705.04304. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Allen Schmaltz, Alexander M. Rush, and Stuart Shieber. 2016. Word ordering without syntax. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 2319– 2324, Austin, Texas. Association for Computational Linguistics. Yuanlong Shao, Stephan Gouws, Denny Britz, Anna Goldie, Brian Strope, and Ray Kurzweil. 2017. Generating high-quality and informative conversation responses with sequence-to-sequence models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 2210–2219. Association for Computational Linguistics. Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4566–4575. Ashwin K Vijayakumar, Michael Cogswell, Ramprasath R Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2016. Diverse beam search: Decoding diverse solutions from neural sequence models. arXiv preprint arXiv:1610.02424. Hongning Wang, Yue Lu, and ChengXiang Zhai. 2010. Latent aspect rating analysis on review text data: a rating regression approach. In SIGKDD Conference on Knowledge Discovery and Data Mining. Taro Watanabe, Jun Suzuki, Hajime Tsukada, and Hideki Isozaki. 2007. Online large-margin training for statistical machine translation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL), pages 764–773, Prague, Czech Republic. Association for Computational Linguistics. Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. CoRR, abs/1704.05426. 1649 Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in data-to-document generation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 2253–2263, Copenhagen, Denmark. Association for Computational Linguistics. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017a. Seqgan: Sequence generative adversarial nets with policy gradient. In Proceedings of the Association for the Advancement of Artificial Intelligence, pages 2852–2858. Lei Yu, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Tomas Kocisky. 2017b. The neural noisy channel. In International Conference on Learning Representations. Licheng Yu, Hao Tan, Mohit Bansal, and Tamara L Berg. 2017c. 
A joint speaker-listener-reinforcer model for referring expressions. In Proceedings of the Conference on Computer Vision and Pattern Recognition, volume 2. Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In arXiv preprint arXiv:1506.06724.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1650–1660 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1650 A Neural Approach to Pun Generation Zhiwei Yu and Jiwei Tan and Xiaojun Wan Institute of Computer Science and Technology, Peking University The MOE Key Laboratory of Computational Linguistics, Peking University {yuzw,tanjiwei,wanxiaojun}@pku.edu.cn Abstract Automatic pun generation is an interesting and challenging text generation task. Previous efforts rely on templates or laboriously manually annotated pun datasets, which heavily constrains the quality and diversity of generated puns. Since sequence-to-sequence models provide an effective technique for text generation, it is promising to investigate these models on the pun generation task. In this paper, we propose neural network models for homographic pun generation, and they can generate puns without requiring any pun data for training. We first train a conditional neural language model from a general text corpus, and then generate puns from the language model with an elaborately designed decoding algorithm. Automatic and human evaluations show that our models are able to generate homographic puns of good readability and quality. 1 Introduction Punning is an ingenious way to make conversation enjoyable and plays important role in entertainment, advertising and literature. A pun is a means of expression, the essence of which is in the given context the word or phrase can be understood in two meanings simultaneously (Mikhalkova and Karyakin, 2017). Puns can be classified according to various standards, and the most essential distinction for our research is between homographic and homophonic puns. A homographic pun exploits distinct meanings of the same written word while a homophonic pun exploits distinct meanings of the same spoken word. Puns can be homographic, homophonic, both, or neither (Miller and Gurevych, 2015). Puns have the potential to combine novelty and familiarity appropriately, which can induce pleasing effect to advertisement (Valitutti et al., 2008). Using puns also contributes to elegancy in literary writing, as laborious manual counts revealed that puns are one of the most commonly used rhetoric of Shakespeare, with the frequency in certain of his plays ranging from 17 to 85 instances per thousand lines (Miller and Gurevych, 2015). It is not an overstatement to say that pun generation has significance in human society. However, as a special branch of humor, generating puns is not easy for humans, let alone automatically generating puns with artificial intelligence techniques. While text generation is a topic of interest in the natural language processing community, pun generation has received little attention. Recent sequence-to-sequence (seq2seq) framework is proved effective on text generation tasks including machine translation (Sutskever et al., 2014), image captioning (Vinyals et al., 2015), and text summarization (Tan et al., 2017). The end-to-end framework has the potential to train a language model which can generate fluent and creative sentences from a large corpus. Great progress has achieved on the tasks with sufficient training data like machine translation, achieving state-of-the-art performance. 
Unfortunately, due to the limited puns which are deemed insufficient for training a language model, there has not been any research concentrated on generating puns based on the seq2seq framework as far as we know. The inherent property of humor makes the pun generation task more challenging. Despite decades devoted to theories and algorithms for humor, computerized humor still lacks of creativity, sophistication of language, world knowledge, 1651 empathy and cognitive mechanisms compared to humans, which are extremely difficult to model (Hossain et al., 2017). In this paper, we study the challenging task of generating puns with seq2seq models without using a pun corpus for training. We propose a brandnew method to generate homographic puns using normal text corpus which can result in good quality of language model and avoid considerable expense of human annotators on the limited pun resources. Our proposed method can generate puns according to the given two senses of a target word. We achieve this by first proposing an improved language model that is able to generate a sentence containing a given word with a specific sense. Based on the improved language model, we are able to generate a pun sentence that is suitable for two specified senses of a homographic word, using a novel joint beam search algorithm we propose. Moreover, based on the observed characteristics of human generated puns, we further enhance the model to generate puns highlighting intended word senses. The proposed method demonstrates the ability to generate homographic puns containing the assigned two senses of a target word. Our approach only requires a general text corpus, and we use the Wikipedia corpus in our experiment. We introduce both manual ways and automatic metrics to evaluate the generated puns. Experimental results demonstrate that our methods are powerful and inspiring in generating homographic puns. The contributions of our work are as follows: • To our knowledge, our work is the first attempt to adopt neural language models on pun generation. And we do not use any templates or pun data sets in training the model. • We propose a brand-new algorithm to generate sentences containing assigned distinct senses of a target word. • We further ameliorate our model with associative words and multinomial sampling to produce better pun sentences. • Our approach yields substantial results on generating homographic puns with high accuracy of assigned senses and low perplexity. 2 Related Work 2.1 Pun Generation In recent decades, exploratory research into computational humor has developed to some extent, but seldom is research specifically concerned with puns. Miller and Gurevych (2015) found that most previous studies on puns tend to focus on phonological or syntactic pattern rather than semantic pattern. In this subsection we briefly review some prior work on pun generation. Lessard and Levison (1992) devised a program to create Tom Swifty, a type of pun which is present in a quoted utterance followed by a punning adverb. Binsted and Ritchie (1994) came up with an early prototype of pun-generator Joke Analysis and Production Engine (JAPE). The model generates question-answer punning with two types of structures: schemata for determining relationships between key words in a joke, and templates for producing the surface form of the joke. Later its successor JAPE-2 (Binsted, 1996; Binsted et al., 1997) and STANDUP (Ritchie et al., 2007) introduced constructing descriptions. 
The Homonym Common Phrase Pun generator (Venour, 1999) could create two-utterance texts: a one-sentence set-up and a punch-line. Venour (1999) used schemata to specify the required lexical items and their intern relations, and used templates to indicate where to fit the lexical items in a skeleton text (Ritchie, 2004). McKay (2002) proposed WISCRAIC program which can produce puns in three forms: question-answer form, single sentence and a two-sentence sequence. The Template-Based Pun Extractor and Generator (Hong and Ong, 2009) utilized phonetic and semantic linguistic resources to extract word relationships in puns automatically. The system stores the extracted knowledge in template form and results in computer-generated puns. Most previous research on pun generation is based on templates which is convenient but lacks linguistic subtlety and can be inflexible. None of the systems aimed to be creative as the skeletons of the sentences are fixed and the generation process based on lexical information rarely needs world knowledge or reasoning (Ritchie, 2004). Recently more and more work focuses on pun detection and interpretation (Miller et al., 2017; Miller and Gurevych, 2015; Doogan et al., 2017), rather than pun generation. 1652 2.2 Natural Language Generation Natural language generation is an important area of NLP and it is an essential foundation for the tasks like machine translation, dialogue response generation, summarization and of course pun generation. In the past, text generation is usually based on the techniques like templates or rules, probabilistic models like n-gram or log-linear models. Those models are fairly interpretable and wellbehaved but require infeasible amounts of handengineering to scale with the increasing training data (Xie, 2017). In most cases larger corpus reveals better what matters, so it is natural to tackle large scale modeling (J´ozefowicz et al., 2016). Recently, neural network language models (Bengio et al., 2003) have shown the good ability to model language and fight the curse of dimensionality. Cho et al. (2014) propose the encoderdecoder structure which proves very efficient to generate text. The encoder produces a fixed-length vector representation of the input sequence and the decoder uses the representation to generate another sequence of symbols. Such model has a simple structure and maps the source to the target directly, which outperforms the prior models in text generation tasks. 3 Our Models The goal of our pun generation model is to generate a sentence containing a given target word as homographic pun. Give two senses of the target word (a polyseme) as input, our model generates a sentence where both senses of the word are appropriate in the sentence. We adopt the encoderdecoder framework to train a conditional language model which can generate sentences containing each given sense of the target word. Then we propose a joint beam search algorithm to generate an appropriate sentence to convey both senses of the target word. We call this Joint Model whose basic structure is illustrated in Figure 1. We further propose an improved model to highlight the different senses of the target word in one sentence, by reminding people the specific senses of the target word, which may not easily come to mind. We achieve this by using Pointwise Mutual Information (PMI) to find the associative words of each sense of the target word and increase their probability of appearance while decoding. 
To improve the diversity of the generated sentence, we use multinomial sampling to decode words in the decoding process. The improved model is named the Highlight Model.

3.1 Joint Model

3.1.1 Conditional Language Model

For a given word as input, we would like to generate a natural sentence containing the target word with the specified sense. We improve the neural language model to achieve this goal, and name it the conditional language model. The conditional language model for pun generation is similar to a seq2seq model whose input is only one word. We use a Long Short-Term Memory (LSTM) network as the encoder to map the input sequence (the target word) to a vector of fixed dimensionality, and another LSTM network as the decoder to decode the target sequence from that vector (Sutskever et al., 2014). Our goal is to generate a sentence containing the target word. However, a vanilla seq2seq model cannot guarantee that the target word always appears in the generated sequence. To solve this problem, we adopt the asynchronous forward/backward generation model proposed by Mou et al. (2015), which employs a mechanism to guarantee that a given word appears in the output of a seq2seq model. The model first generates the backward sequence starting from the target word $w_t$ at position $t$ of the sentence (i.e., the words before $w_t$), ending with "</s>" at position 0 of the sentence. The probability of the backward sequence is denoted as $p(w_t^1)$. We then reverse the output of the backward sequence and feed it as input to the forward model. In this process, the goal of the encoder is to map the generated half sentence to a vector representation, and the decoder generates the latter part accordingly. The probability of the forward sequence is denoted as $p(w_t^n)$. The input and output of the forward model are then concatenated to form the generated sentence. In the asynchronous forward/backward model, the probability of the output sentence can be decomposed as
$$p(w_t^1 \frown w_t^n) = p(w_t)\,\prod_{i=0}^{t} p^{(bw)}(w_{t-i} \mid \cdot)\,\prod_{i=0}^{m-t+1} p^{(fw)}(w_{t+i} \mid \cdot), \qquad (1)$$
where $p(\cdot)$ denotes the probability of a particular backward/forward sequence (Mou et al., 2015), and $p^{(bw)}(w_t \mid \cdot)$ or $p^{(fw)}(w_t \mid \cdot)$ denotes the probability of $w_t$ given the previous sequence in the backward or forward model, respectively.

Figure 1: Framework of the proposed Joint Model. (Top) Two senses of the target word input1 and input2 (e.g., "countv01" and "countv08") are first provided to the backward model, which generates the backward sequence starting from the target senses and ending with "</s>". (Bottom) The backward sequence is then reversed and fed to the forward model, which generates the forward sequence. The inputs and outputs of the forward model are concatenated to form the final output sentence. The joint beam search algorithm is used to generate each word that has the potential to make the generated sentence suitable for both input senses.

The above model can only guarantee the target word to appear in the generated sentence.
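As a concrete illustration of the asynchronous backward/forward generation just described, the following minimal sketch assumes `backward_decode` and `forward_decode` are placeholders for the trained backward and forward LSTM decoders; the names and the omission of terminator handling are illustrative assumptions, not the paper's implementation.

```python
def generate_with_target(target_word, backward_decode, forward_decode):
    # Backward pass: starting from the target word, emit the words that precede it,
    # in reverse order, until the sequence-end symbol (terminator handling omitted).
    backward_seq = backward_decode(target_word)            # [w_{t-1}, w_{t-2}, ..., w_1]
    prefix = list(reversed(backward_seq)) + [target_word]  # [w_1, ..., w_{t-1}, w_t]
    # Forward pass: feed the reversed prefix to the forward model and generate the rest.
    suffix = forward_decode(prefix)                        # [w_{t+1}, ..., w_n]
    return prefix + suffix
```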
Since we hope to generate a sentence containing the specified word sense, we treat different senses of the same word as independent new pseudo-words. We label the senses of words with Word Sense Disambiguation (WSD) tools, and then we train the language model using the corpus with labeled senses so that for each word sense we can generate a sentence accordingly. We use the Python Implementations of WSD Technologies1 for WSD. This tool can return the most possible sense for the target word based on WordNet (Miller, 1995). We attach the sense label to the word and form a new pseudo-word accordingly. Taking “count” for example, “countv01” means “determine the number 1https://github.com/alvations/pywsd or amount of”, while “countv08” means “have faith or confidence in”. 3.1.2 Decoding with Joint Beam Search Algorithm Beam search is a frequently-used algorithm in the decoding stage of seq2seq models to generate the output sequence. It can be viewed as an adaptation of branch-and-bound search that uses an inadmissible pruning rule. In the beam search algorithm, only the most promising nodes at each level of the search graph are selected and the rest nodes are permanently removed. This strategy makes beam search able to find a solution within practical time or memory limits and work well in practical tasks (Zhou and Hansen, 2005; Freitag and Al-Onaizan, 2017). We also use beam search in our pun generation model. According to the definition of homographic puns, at least two senses of the target word 1654 should be interpreted in one sentence. We hope to generate a same sentence for distinct senses of the same word, and in this way the target word in the sentence can be interpreted as various senses. Provided with two senses of a target word as inputs to the encoder in the backward generation process, e.g. “countv01” as input1 and “countv08” as input2, we decode two output sentences in parallel, and the two sentences should be the same except for the input pseudo-words. Assume h(s) t,i denotes the hidden state of the i-th beam at time step t, when given the s-th pseudo-word as input (s =1 or 2). In the traditional beam search algorithm, softmax layer is applied on the hidden state to get the probability distribution on the vocabulary, and the log likelihood of the probability is used to get a word score distribution d(s) t,i : d(s) t,i = log(softmax layer(h(s) t,i )). (2) The accumulated score distribution on the i-th beam is: p(s) t,i = u(s) t−1,i + d(s) t,i , (3) |V | denotes the vocabulary size. u(s) t−1,i is a |V |dimensional vector whose values are all equal to the accumulated score of the generated sequence till time step t −1. Assume the beam width is b, p(s) t is the concatenation of p(s) t,i on all beams and its dimension size is |V |∗b. The beam search algorithm selects b candidate words at each time step according to p(s) t (s =1 or 2). When decoding for input1 and input2 in parallel, at each time step there will be b candidates for each input according to p(1) t and p(2) t respectively. Since input1 and input2 are different, the candidates for two inputs will hardly be the same. However, our goal is to choose candidate words which have the potential to result in candidate sentences suitable for both senses. Our joint beam search algorithm selects b candidates while decoding for the two inputs according to the joint score distribution on all beams. The joint score distribution on the i-th beam is: ot,i=p(1) t,i + p(2) t,i . 
(4) The summation of the log scores can be viewed as the product of original probabilities, which represents the joint probability if the two probability distributions are viewed independent. Given the b candidates selected according to the joint score distribution, our joint beam search algorithm Algorithm 1 Joint Beam Search Algorithm b denotes the beam width. l denotes the number of unfinished beams. BeamId records which beams the candidates come from. WordId records the indices of candidates in the vocabulary where 1 is the index of “<s>”. BEAM t[i] denotes the i-th beam history till time step t. |V | denotes the vocabulary size. Copy(m, n) aims to make an n-dimensional vector by replicating m for n times. The initial states of the decoder (h(1) −1,i,h(2) −1,i) are equal to the final states of the encoder accordingly. m ⊎n denotes appending n to m. BEAM −1[i]= [], i=0, 1, ..., b −1 u(1) −1,i = u(2) −1,i = Copy(0, |V |),i = 0, 1, ..., b −1 BeamId = [0, 1, ..., b −1] WordId = [1, .., 1] ∈Rb Outputs = []; t = 0; l = b while l > 0 do o=[] for i= 0 to b −1 do xt,i is the word embedding corresponding to WordId[i] h(1) t,i= LSTM(xt,i, h(1) t−1,i) h(2) t,i= LSTM(xt,i, h(2) t−1,i) p(1) t,i = u(1) t−1,i + log(softmax layer(h(1) t,i )) p(2) t,i = u(2) t−1,i + log(softmax layer(h(2) t,i )) ot,i = p(1) t,i + p(2) t,i o ⊎ot,i end for WordId = the indices of words with the top b scores in o BeamId = the indices of source beams w.r.t. WordId for i= 0 to b −1 do BEAMt[i] = BEAMt−1[BeamId[i]] ⊎WordId[i] u(1) t,i = u(2) t,i = Copy(ot,BeamId[i][WordId[i]], |V |) if WordId[i] represents “</s>” l = l −1 Outputs = Outputs ⊎BEAMt[i] end if end for t = t + 1 return top b items in Outputs is similar to the vanilla beam search algorithm, which generates the candidate sequences step by step. If any beam selects “</s>” as the candidate, we regard this branch has finished decoding. The decoding process will be finished after all the beams have selected “</s>”. The joint beam search algorithm is described in Algorithm 1. 3.2 Highlight Model 3.2.1 Word Association The joint model we described above is able to generate sentences suitable for both given senses of the target word. But we found this model is prone to generate monotonous sentences, making it difficult to discover that the target word in the sentence can be understood in two ways. For example, in the sentence “He couldn’t count on his friends”, people can easily realize that the common meaning “have faith or confidence in” of the 1655 word “count”, but may ignore other senses of the word. If we add some words and modify the sentence as “The inept mathematician couldn’t count on his friends”, people can also come up with the meaning “determine the number or amount of” due to the word “mathematician”. Comparing the examples above, the two senses are proper in both sentences, but people may interpret “count” in the two sentences differently. Based on such observations, we improve the pun generation model by adding some keywords to the sentence which could remind people some special sense of the target word. We call those keywords associative words, and the improved model is named as Highlight Model. To extract associative words of each sense of the target word, we first build word association norms in our corpus by using pointwise mutual information (PMI). 
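As a rough illustration of how such association norms could be collected (the sentence-level counting scheme and the top-k cutoff below are assumptions; the PMI definition itself is the standard one given in Eq. (5) below):

```python
# Sketch: build PMI-based word association norms from a sense-labeled corpus.
import math
from collections import Counter
from itertools import combinations

def build_association_norms(sentences, top_k=30):
    word_count, pair_count, n_sent = Counter(), Counter(), 0
    for tokens in sentences:                     # each sentence is a list of tokens
        n_sent += 1
        types = set(tokens)
        word_count.update(types)
        pair_count.update(frozenset(p) for p in combinations(sorted(types), 2))

    def pmi(w1, w2):
        joint = pair_count[frozenset((w1, w2))] / n_sent
        if joint == 0.0:
            return float("-inf")
        return math.log2(joint / ((word_count[w1] / n_sent) * (word_count[w2] / n_sent)))

    def associative_words(sense):
        # Keep the top-k co-occurring words with positive PMI for a labeled sense.
        scored = [(w, pmi(sense, w)) for w in word_count if w != sense]
        return sorted([p for p in scored if p[1] > 0], key=lambda p: p[1], reverse=True)[:top_k]

    return associative_words

assoc = build_association_norms([["the", "inept", "mathematician", "could", "not", "countv01"],
                                 ["he", "could", "not", "countv08", "on", "his", "friends"]])
print(assoc("countv01"))
```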
As mutual information compares the probability of observing w1 and w2 together (the joint probability) with the probabilities of observing w1 and w2 independently (chance) (Church and Hanks, 1990), positive PMI scores indicate that the words occur together more than would be expected under an independence assumption, and negative scores indicate that one word tends to appear solely when the other does not (Islam and Inkpen, 2006). In this case we take top k associative words for each sense with relatively high positive PMI scores, which are calculated as follows: PMI(w1, w2) = log2 p(w1, w2) p(w1) · p(w2). (5) During decoding we increase the probability of the associative words to be chosen according to their PMI scores. For each sense of the target word, we normalize the PMI scores of the associative words as follows: Asso(w(s) t , cp) = σ( PMI(w(s) t , cp) maxcjPMI(w(s) t , cj) ), (6) where w(s) t represents the s-th sense of the target word wt, and cp is the p-th associative word for w(s) t . To smooth the PMI scores we use sigmoid function σ which is differentiable and widely used in the neural network models. The final PMI score for each associative word is denoted as Asso(w(s) t , cp). As we choose candidates according to a score distribution on the whole vocabulary, we need a PMI score distribution (S(w(s) t )) rather than single scores, and the value at position q is supposed to be: S  w(s) t  [q]=  Asso  w(s) t ,vq  , vq ∈AssoTK(w(s) t ); 0, else, (7) where vq denotes the q-th word in the vocabulary, and AssoTK(w(s) t ) denotes the top k associative words of w(s) t . 3.2.2 Multinomial Sampling In our highlight model, we add S(w(1) t ) and S(w(2) t ) to ot,i , as: ot,i =ot,i+α1·S(w(1) t )+α2·S(w(2) t ), (8) where we use α1 and α2 as coefficient weights to balance the PMI scores of the two assigned senses and the joint score. In the Highlight Model, we first select 2b candidates according to the scores of words from Eq. 8. Then we use multinomial sampling to select the final b candidates. Sampling is useful in cases where we may want to get a variety of outputs for a particular input. One example of a situation where sampling is meaningful would be in a seq2seq model for a dialog system (Neubig, 2017). In our pun generation model we hope to produce relatively more creative sentences, so we use multinomial sampling to increase the uncertainty when generating the sentence. The multinomial distribution can be seen as a multivariate generalization of the binomial distribution and it is prone to choose the words with relatively high probabilities. If an associative word of one sense has been selected, we decay the scores for all associative words of this sense. In this way we can prevent the sentence obviously being prone to reflect one sense of the target word. 4 Experiments 4.1 Data Preprocessing Most text generation tasks using seq2seq model require large amount of training data. However, for many tasks, like pun generation, it is difficult to get adequate data to train a seq2seq model. In this study, our pun generation model does not rely on training data of puns. We only require a text corpus to train the conditional language model, 1656 which is very cheap to get. In this paper, we use the English Wikipedia corpus to train the language model. The corpus texts are firstly lowercased and tokenized, and all numeric characters are replaced with “#”. We split the texts into sentences and discard the sentences whose length is less than 5 words or more than 50 words. 
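A minimal sketch of this preprocessing step, with a naive regular-expression tokenizer standing in for whatever tokenizer was actually used:

```python
# Sketch of the corpus preprocessing described above: lowercase, tokenize,
# replace numeric characters with "#", and keep sentences of 5-50 words.
import re

def preprocess(raw_sentences, min_len=5, max_len=50):
    kept = []
    for sent in raw_sentences:
        tokens = re.findall(r"\w+|[^\w\s]", sent.lower())    # naive tokenization
        tokens = [re.sub(r"\d", "#", t) for t in tokens]     # mask digits
        if min_len <= len(tokens) <= max_len:
            kept.append(tokens)
    return kept

print(preprocess(["In 1995 the inept mathematician could not count on his friends."]))
```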
We then select polysemes appearing in the homographic pun data set (Miller et al., 2017) and pun websites. Those polysemes in the corpus are replaced by the labeled sense. We restrict that each sentence can be labeled with at most two polysemes in order to train a reliable language model. If there are more polysemes in one sentence, we keep the last two because in our observation we found pun words tend to occur near the end of a sentence. After labeling, we keep the 105,000 most frequently occurring words and other words are replaced with the “<unk>” token. We discard the sentences with two or more “<unk>” tokens. There are totally 3,974 distinct labeled senses corresponding to a total of 772 distinct polysemes. We assume those reserved senses are more likely to generate puns of good quality. While training the language model we use 2,595,435 sentences as the training set, and 741,551 sentences as the development set to decide when to stop training. 4.2 Training Details The number of LSTM layers we use in the seq2seq model is 2 and each layer has 128 units. To avoid overfitting, we set the dropout rate to 0.2. We use Stochastic Gradient Descent (SGD) with a decreasing learning rate schedule as optimizer. The initial learning rate is 1.0 and is halved every 1k steps after training for 8k steps, which is the same as Luong et al. (2017). We set beam size b = 5 while decoding. For each sense we select at most 30 associative words (k=30). To increase the probability of choosing the associative words, we set α1 = 6.0 and α2 = 6.0. If an associative word of some sense of a target word has been chosen, its corresponding α will be set to zero for all the associative words of this sense. 4.3 Baselines Since there is no existing neural model applied on this special task, we implement two baseline models for comparison. We select 100 target words and two senses for each word to test the quality of those models. Normal Language Model: It is trained with an encoder-decoder model and uses beam search while decoding. In the training process, inputs are unlabeled target words and outputs are sentences containing the target words. Pun Language Model: We use the data set of homographic puns from Miller et al. (2017). The model is trained on the data set in asynchronous forward/backward way. As the pun data set is limited, the pun language model has no creativity, which means if we input a word appearing in the training data, then the output will usually be an existing sentence from the training data. Therefore, we remove the sentences which contain words in the 100 target words from the pun data set, and then train the model for test. 4.4 Automatic Evaluation We select 100 target words and two senses for each word for test. We use the language modeling toolkit SRILM2 to train a trigram model with another 7,746,703 sentences extracted from Wikipedia, which are different from the data set used before. The perplexity scores (PPL) of our models and baseline models are estimated based on the trained language model, as shown in Table 1. Normal Language Model has no constraint of generating sentences suitable for both senses. This means at each time step the beam search algorithm can select the candidates with highest probabilities. And thus it is natural that it obtains the lowest perplexity. Taking the constraint of senses into consideration, the perplexity scores of Joint Model and Highlight Model are still comparable to that of Normal Language Model. 
However, the Pun Language Model could not be trained well given the limited pun training data, so it obtains the highest perplexity score. This result suggests that it is not feasible to build a homographic pun generation system from the pun data set alone, since the pun data are far from sufficient. In Table 1 we further compare the diversity of the sentences generated by the four models, following Li et al. (2016). Distinct-1 (d.-1) and distinct-2 (d.-2) are the ratios of distinct unigrams and bigrams in the generated sentences, i.e., the number of distinct unigrams or bigrams divided by the total number of unigrams or bigrams. The results show that our models are more creative than the Normal Language and Pun Language models, and that the Highlight Model generates sentences with the best diversity.

2 http://www.speech.sri.com/projects/srilm/

Table 1: Results of automatic evaluation.
Model            PPL      d.-1 (%)   d.-2 (%)
Highlight        91.80    27.13      62.85
Joint            63.48    22.13      50.59
Normal Language  62.66    19.60      41.62
Pun Language     889.07   14.78      23.11

Figure 2: Results of human evaluation.

4.5 Human Evaluation

Because of the subtle and delicate structure of puns, automatic evaluation is not enough. We therefore sample one sentence for each word from the four models mentioned above, yielding 100 sentences per model generated from the target words, together with 100 puns containing the same target words from the homographic pun data set of Miller et al. (2017). We ask judges on Amazon Mechanical Turk to evaluate all the sentences; the rating score ranges from 1 to 5. Five native English speakers are asked to score each sentence on three aspects, defined as follows: Readability indicates whether the sentence is easy to understand semantically; Accuracy indicates whether the given senses are suitable in the sentence; Fluency indicates whether the sentence is fluent and consistent with the rules of grammar. The results in Figure 2 show that the pun data are not enough to train an ideal language model, while Normal Language Model has enough corpus to train a good language model. However, Normal Language Model is unable to make the given two senses appear in one sentence, and in a few cases it cannot even ensure the appearance of the target words. Joint Model and Highlight Model can generate fluent sentences for the two assigned senses. Although Highlight Model can remind people of specific senses of the target words in most cases, in a few cases the sampled words make the whole sentence unsatisfactory, so it receives a relatively lower accuracy score. As to Readability, the Joint Model performs better than the other three models. Both the Joint Model and the Highlight Model outperform Normal Language Model and Pun Language Model.

Table 2: Results of Soft Turing Test.
Model       # sentences   avg. score
Highlight   15            0.98
Joint       12            0.87
Gold Puns   28            1.38

To test the potential of the sentences generated by our models to be homographic puns, we further design a Soft Turing Test. We select 30 sentences generated by Joint Model and 30 sentences generated by Highlight Model independently, together with 30 gold puns from the homographic pun data set. We mix them up, give the definition of a homographic pun, and ask 10 people on Amazon Mechanical Turk to judge each sentence. People can judge each sentence as one of three categories: definitely by human, might be by human, and definitely by machine.
The three categories correspond to the scores of 2, 1 and 0, respectively. If the average score of one sentence is equal or greater than 1, we regard it as judged to be generated by human. The number of sentences judged as by human for each model and the average score for each model are shown in Table 2. Due to the flexible language structure of Highlight Model, the generated homographic puns outperform those generated by Joint Model in the Soft Turing Test, however still far from gold-standard puns. Our models are adept at generating homographic puns containing assigned senses but weak in making homographic puns humorous. 4.6 Examples We show some examples generated by different models in Table 3. For the two senses of “pitch”, Highlight Model generates a sentence which uses “high” to remind readers the sense related to sound and uses “player” to highlight the sense related to throwing a baseball. Joint Model returns a sentence that can be understood in both way roughly only if we give the two senses in advance, otherwise readers may only think of the 1658 Model Sample pitch: 1) the property of sound that arise with variation in the frequency of vibration; 2) the act of throwing a baseball by a pitcher to a batter. Highlight in one that denotes player may have had a high pitch in the world Joint the object of the game is based on the pitch of the player Normal Language this is a list of high pitch plot Pun Language our bikinis are exciting they are simply the tops on the mouth Gold Puns if you sing while playing baseball you won’t get a good pitch square: 1) a plane rectangle with four equal sides and four right angles, a four-sided regular polygon; 2) someone who doesn’t understand what is going on. Highlight little is known when he goes back to the square of the football club Joint there is a square of the family Normal Language the population density was # people per square mile Pun Language when the pirate captain’s ship ran aground he couldn’t fathom why Gold Puns my advanced geometry class is full of squares problem: 1) a source of difficulty; 2) a question raised for consideration or solution. Highlight you do not know how to find a way to solve the problem which in the state Joint he is said to be able to solve the problem as he was a professor Normal Language in # he was appointed a member of the new york stock exchange Pun Language those who iron clothes have a lot of pressing veteran Gold Puns math teachers have lots of problems Table 3: Examples of outputs by different models. sense related to baseball. For Normal Language Model, it is difficult to be interpreted in two senses we assigned. Pun Language Model has no ability to return a sentence containing the assigned word at all. Observing the gold pun, the context describes a more vivid scene which we need to pay attention to. For “square”, sentences generated by Highlight Model and Joint Model can be interpreted in two senses and Highlight Model results in a sentence with dexterity. Normal Language Model give a sentence where “square” means neither of the two given senses. Pun Language Model cannot return a sentence we need with no surprise. For “problem”, both Highlight Model and Joint Model can generate sentences containing assigned two senses while Normal Language Model and Pun Language Model can not return sentences with the target word. 
Compare to our generated sentences, we find gold puns are more concise and accurate, which takes us into consideration on the delicate structure of puns and the conclusion is still in exploration. 5 Conclusion and Future Work In this paper, we proposed two models for pun generation without using training data of puns. Joint Model makes use of conditional language model and the joint beam search algorithm, which can assure the assigned senses of target words suitable in one sentence. Highlight Model takes associative words into consideration, which makes the distinct senses more obvious in one sentence. The produced puns are evaluated using automatic evaluation and human evaluation, and they outperform the sentences generated by our baseline models. For future work, we hope to improve the results by using the pun data and design a more proper way to select candidates from associative words. Acknowledgment This work was supported by National Natural Science Foundation of China (61772036, 61331011) and Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We thank the anonymous reviewers for their helpful comments. Xiaojun Wan is the corresponding author. 1659 References Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research 3:1137–1155. http://www.jmlr.org/papers/v3/bengio03a.html. Kim Binsted. 1996. Machine humour: An implemented model of puns . Kim Binsted, Helen Pain, and Graeme D Ritchie. 1997. Children’s evaluation of computer-generated punning riddles. Pragmatics & Cognition 5(2):305– 354. Kim Binsted and Graeme Ritchie. 1994. An implemented model of punning riddles. In Proceedings of the 12th National Conference on Artificial Intelligence, Seattle, WA, USA, July 31 - August 4, 1994, Volume 1.. pages 633–638. http://www.aaai.org/Library/AAAI/1994/aaai94096.php. Kyunghyun Cho, Bart van Merrienboer, C¸ aglar G¨ulc¸ehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. http://arxiv.org/abs/1406.1078. Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics 16(1):22–29. Samuel Doogan, Aniruddha Ghosh, Hanyang Chen, and Tony Veale. 2017. Idiom savant at semeval2017 task 7: Detection and interpretation of english puns. In Proceedings of the 11th International Workshop on Semantic Evaluation, SemEval@ACL 2017, Vancouver, Canada, August 3-4, 2017. pages 103–108. https://doi.org/10.18653/v1/S17-2011. Markus Freitag and Yaser Al-Onaizan. 2017. Beam search strategies for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, NMT@ACL 2017, Vancouver, Canada, August 4, 2017. pages 56–60. https://aclanthology.info/papers/W173207/w17-3207. Bryan Anthony Hong and Ethel Ong. 2009. Automatically extracting word relationships as templates for pun generation. In Proceedings of the Workshop on Computational Approaches to Linguistic Creativity. Association for Computational Linguistics, Stroudsburg, PA, USA, CALC ’09, pages 24–31. http://dl.acm.org/citation.cfm?id=1642011.1642015. Nabil Hossain, John Krumm, Lucy Vanderwende, Eric Horvitz, and Henry A. Kautz. 2017. Filling the blanks (hint: plural noun) for mad libs humor. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017. pages 638–647. https://aclanthology.info/papers/D171067/d17-1067. Aminul Islam and Diana Inkpen. 2006. Second order co-occurrence PMI for determining the semantic similarity of words. In Proceedings of the Fifth International Conference on Language Resources and Evaluation, LREC 2006, Genoa, Italy, May 22-28, 2006.. pages 1033–1038. Rafal J´ozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410. http://arxiv.org/abs/1602.02410. Greg Lessard and Michael Levison. 1992. Computational modelling of linguistic humour: Tom swifties. In In ALLC/ACH Joint Annual Conference, Oxford. pages 175–178. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016. pages 110–119. http://aclweb.org/anthology/N/N16/N16-1014.pdf. Minh-Thang Luong, Eugene Brevdo, and Rui Zhao. 2017. Neural machine translation (seq2seq) tutorial. https://github.com/tensorflow/nmt . Justin McKay. 2002. Generation of idiom-based witticisms to aid second language learning. In In Stock et al.. pages 77–87. Elena Mikhalkova and Yuri Karyakin. 2017. Punfields at semeval-2017 task 7: Employing roget’s thesaurus in automatic pun recognition and interpretation. arXiv preprint arXiv:1707.05479. http://arxiv.org/abs/1707.05479. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM 38(11):39– 41. Tristan Miller and Iryna Gurevych. 2015. Automatic disambiguation of english puns. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers. pages 719–729. http://aclweb.org/anthology/P/P15/P15-1070.pdf. Tristan Miller, Christian F. Hempelmann, and Iryna Gurevych. 2017. SemEval-2017 Task 7: Detection and interpretation of English puns. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). pages 59–69. 1660 Lili Mou, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. 2015. Backbone language modeling for constrained natural language generation. arXiv preprint arXiv:1512.06612. http://arxiv.org/abs/1512.06612. Graham Neubig. 2017. Neural machine translation and sequence-to-sequence models: A tutorial. arXiv preprint arXiv:1703.01619. http://arxiv.org/abs/1703.01619. Graeme Ritchie. 2004. The linguistic analysis of jokes. Routledge. Graeme Ritchie, Ruli Manurung, Helen Pain, Annalu Waller, Rolf Black, and Dave O’Mara. 2007. A practical application of computational humour. In Proceedings of the 4th International Joint Conference on Computational Creativity. pages 91–98. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 813 2014, Montreal, Quebec, Canada. pages 3104– 3112. http://papers.nips.cc/paper/5346-sequenceto-sequence-learning-with-neural-networks. 
Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017. Abstractive document summarization with a graphbased attentional neural model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers. Association for Computational Linguistics, pages 1171–1181. https://doi.org/10.18653/v1/P17-1108. Alessandro Valitutti, Carlo Strapparava, and Oliviero Stock. 2008. Textual affect sensing for computational advertising. In Creative Intelligent Systems, Papers from the 2008 AAAI Spring Symposium, Technical Report SS-08-03, Stanford, California, USA, March 26-28, 2008. pages 117–122. Chris Venour. 1999. The computational generation of a class of puns. In Master’s thesis, Queen’s University,Kingston, Ontario. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015. IEEE Computer Society, pages 3156–3164. https://doi.org/10.1109/CVPR.2015.7298935. Ziang Xie. 2017. Neural text generation: A practical guide. arXiv preprint arXiv:1711.09534 . Rong Zhou and Eric A. Hansen. 2005. Beamstack search: Integrating backtracking with beam search. In Proceedings of the Fifteenth International Conference on Automated Planning and Scheduling (ICAPS 2005), June 5-10 2005, Monterey, California, USA. pages 90–98. http://www.aaai.org/Library/ICAPS/2005/icaps05010.php.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1661–1671 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1661 Learning to Generate Move-by-Move Commentary for Chess Games from Large-Scale Social Forum Data Harsh Jhamtani∗, Varun Gangal∗, Eduard Hovy, Graham Neubig, Taylor Berg-Kirkpatrick Language Technologies Institute Carnegie Mellon University {jharsh,vgangal,hovy,gneubig,tberg}@cs.cmu.edu Abstract This paper examines the problem of generating natural language descriptions of chess games. We introduce a new largescale chess commentary dataset and propose methods to generate commentary for individual moves in a chess game. The introduced dataset consists of more than 298K chess move-commentary pairs across 11K chess games. We highlight how this task poses unique research challenges in natural language generation: the data contain a large variety of styles of commentary and frequently depend on pragmatic context. We benchmark various baselines and propose an end-to-end trainable neural model which takes into account multiple pragmatic aspects of the game state that may be commented upon to describe a given chess move. Through a human study on predictions for a subset of the data which deals with direct move descriptions, we observe that outputs from our models are rated similar to ground truth commentary texts in terms of correctness and fluency.1 1 Introduction A variety of work in NLP has sought to produce fluent natural language descriptions conditioned on a contextual grounding. For example, several lines of work explore methods for describing images of scenes and videos (Karpathy and Fei-Fei, 2015), while others have conditioned on structured sources like Wikipedia infoboxes (Lebret et al., ∗HJ and VG contributed equally for this paper 1We will make the code-base (including data collection and processing) publicly available at https://github. com/harsh19/ChessCommentaryGeneration 2016). In most cases, progress has been driven by the availability of large training corpora that pair natural language with examples from the grounding (Lin et al., 2014). One line of work has investigated methods for producing and interpreting language in the context of a game, a space that has rich pragmatic structure, but where training data has been hard to come by. In this paper, we introduce a new large-scale resource for learning to correlate natural language with individual moves in the game of chess. We collect a dataset of more than 298K chess move/commentary pairs across ≈ 11K chess games from online chess forums. To the best of our knowledge, this is the first such dataset of this scale for a game commentary generation task. We provide an analysis of the dataset and highlight the large variety in commentary texts by categorizing them into six different aspects of the game that they respectively discuss. Figure 1: Move commentary generated from our method (Game-aware neural commentary generation (GAC)) and some baseline methods for a sample move. Automated game commentary generation can be a useful learning aid. Novices and experts alike can learn more about the game by hearing expla1662 nations of the motivations behind moves, or their quality. In fact, on sites for game aficionados, these commentaries are standard features, speaking to their interestingness and utility as complements to concrete descriptions of the game boards themselves. 
Game commentary generation poses a number of interesting challenges for existing approaches to language generation. First, modeling human commentary is challenging because human commentators rely both on their prior knowledge of game rules as well as their knowledge of effective strategy when interpreting and referring to the game state. Secondly, there are multiple aspects of the game state that can be talked about for a given move — the commentator’s choice depends on the pragmatic context of the game. For example, for the move shown in Figure 1, one can comment simply that the pawn was moved, or one may comment on how the check was blocked by that move. Both descriptions are true, but the latter is most salient given the player’s goal. However, sometimes, none of the aspects may stand out as being most salient, and the most salient aspect may even change from commentator to commentator. Moreover, a human commentator may introduce variations in the aspects he or she chooses to talk about, in order to reduce monotony in the commentary. This makes the dataset a useful testbed not only for NLG but also for related work on modeling pragmatics in language (Liu et al., 2016). Prior work has explored game commentary generation. Liao and Chang (1990); Sadikov et al. (2006) have explored chess commentary generation, but for lack of large-scale training data their methods have been mainly rule-based. Kameko et al. (2015) have explored commentary generation for the game of Shogi, proposing a twostep process where salient terms are generated from the game state and then composed in a language model. In contrast, given the larger amount of training data available to us, our proposed model uses an end-to-end trainable neural architecture to predict commentaries given the game state. Our model conditions on semantic and pragmatic information about the current state and explicitly learns to compose, conjoin, and select these features in a recurrent decoder module. We perform an experimental evaluation comparing against baselines and variants of our model that ablate various aspects of our proposed archiFigure 2: A multi-move, single commentary example from our data. Here, the sequence of moves Ba4 →b5 →Nd6 → bxa4 →e5 is commented upon. Statistic Value Total Games 11,578 Total Moves 298,008 Average no. of recorded steps in a game 25.73 Frequent Word Types2 39,424 Rare Word Types 167,321 Word Tokens 6,125,921 Unigram Entropy 6.88 Average Comment Length (in #words) 20.55 Long Comments (#words > 5) 230745 (77%) Table 1: Dataset and Vocabulary Statistics tecture. Outputs on the ‘Move Description’ subset of data from our final model were judged by humans to be as good as human written ground truth commentaries on measures of fluency and correctness. 2 Chess Commentary Dataset In this section we introduce our new large-scale Chess Commentary dataset, share some statistics about the data, and discuss the variety in type of commentaries. The data is collected from the online chess discussion forum gameknot.com, which features multiple games self-annotated with move-by-move commentary. The dataset consists of 298K aligned game move/commentary pairs. Some commentaries are written for a sequence of few moves (Figure 2) while others correspond to a single move. For the purpose of initial analysis and modeling, we limit ourselves to only those data points where commentary text corresponds to a single move. 
Additionally, we split the multi-sentence commentary texts to create multiple data points with the same chess board and move inputs. What are commentaries about? We observe that there is a large variety in the commentary 1663 Category Example % in data Val acc. Direct Move Description An attack on the queen 31.4% 71% Move Quality A rook blunder. 8.0% 90% Comparative At this stage I figured I better move my knight. 3.7% 77.7% Planning / Rationale Trying to force a way to eliminate d5 and prevent Bb5. 31.2% 65% Contextual Game Info Somehow, the game I should have lost turned around in my favor . 12.6% 87% General Comment Protect Calvin , Hobbs 29.9% 78% Table 2: Commentary texts have a large variety making the problem of content selection an important challenge in our dataset. We classify the commentaries into 6 different categories using a classifier trained on some hand-labelled data, a fraction of which is kept for validation. % data refers to the percentage of commentary sentences in the tagged data belonging to the respective category. texts. To analyze this variety, we consider labelling the commentary texts in the data with a predefined set of categories. The choice of these categories is made based on a manual inspection of a sub-sample of data. We consider the following set of commentary categories (Also shown in Table 2): • Direct move description (MoveDesc3): Explicitly or implicitly describe the current move. • Quality of move (Quality4): Describe the quality of the current move. • Comparative: Compare multiple possible moves. • Move Rationale or Planning (Planning): Describe the rationale for the current move, in terms of the future gameplay, advantage over other potential moves etc. • Contextual game information: Describe not the current move alone, but the overall game state – such as possibility of win/loss, overall aggression/defence, etc. • General information: General idioms & advice about chess, information about players/tournament, emotional remarks, retorts, etc. The examples in Table 2 illustrate these classes. Note that the commentary texts are not necessarily limited to one tag, though that is true for most 3MoveDesc & ‘Move Description’ used interchangeably 4Quality and ‘Move Quality’ used interchangeably of the data. A total of 1K comments are annotated by two annotators. A SVM classifier (Pedregosa et al., 2011a) is trained for each comment class, considering the annotation as ground truth and using word unigrams as features. This classifier is then used to predict tags for the train, validation and test sets. For “Comparative” category, we found that a classifier with manually defined rules such as presence of word “better” performs better than the classifier, perhaps due to the paucity of data, and thus we use this instead . As can be observed in Table 2, the classifiers used are able to generalize well on the held out dataset 3 Game Aware Neural Commentary Generations (GAC) Our dataset D consists of data points of the form (Si, Mi, Gi), i ∈{1, 2, .., |D|}, where Si is the commentary text for move Mi and Gi is the corresponding chess game. Si is a sequence of m tokens Si1, Si2, ..., Sim We want to model P(Si|Mi, Gi). For simplicity, we use only current board (Ci) and previous board (Ri) information from the game. P(Si|Mi, Gi) = P(Si|Mi, Ci, Ri). We model this using an end-to-end trainable neural model, which models conjunctions of features using feature encoders. Our model employs a selection mechanism to select the salient features for a given chess move. 
Finally a LSTM recurrent neural network (Hochreiter and Schmidhuber, 1997) is used to generate the commentary text based on selected features from encoder. 3.1 Incorporating Domain Knowledge Past work shows that acquiring domain knowledge is critical for NLG systems (Reiter et al., 2003b; Mahamood and Reiter, 2012). Commentary texts cover a range of perspectives, including criticism or goodness of current move, possible alternate moves, quality of alternate moves, etc. To be able to make such comments, the model must learn about the quality of moves, as well as the set of valid moves for a given chess board state. We consider the following features to provide our model with necessary information to generate commentary texts (Figure 3): Move features fmove(Mi, Ci, Ri) encode the current move information such as which piece moved, the position of the moved piece before and after the move was made, the type and position 1664 Figure 3: The figure shows some features extracted using the chess board states before (left) and after (right) a chess move. Our method uses various semantic and pragmatic features of the move, including the location and type of piece being moved, which opposing team pieces attack the piece being moved before as well as after the move, the change in score by Stockfish UCI engine, etc. of the captured piece (if any), whether the current move is castling or not, and whether there was a check or not. Threat features fthreat(Mi, Ci, Ri) encode information about pieces of opposite player attacking the moved piece before and after the move, and the pieces of opposite player being attacked by the piece being moved. To extract this information, we use the python-chess library 5 Score features fscore(Mi, Ci, Ri) capture the quality of move and general progress of the game. This is done using the game evaluation score before and after the move, and average rank of pawns of both the players. We use Stockfish evaluation engine to obtain the game evaluation scores. 6 3.2 Feature Representation In our simplest conditioned language generation model GAC-sparse, we represent the above described features using sparse representations through binaryvalued features. gsparse(Mi, Ci, Ri) = SparseRep(fmove, fthreat, fscore) For our full GAC model we consider representing features through embeddings. This has the advantage of allowing for a shared embedding space, which is pertinent for our problem since attribute values can be shared, e.g. the same piece type can occur as the moved piece as well as the captured piece. For categorical features, such as those indicating which piece was moved, we directly look up the embedding using corresponding token. For real valued features 5https://pypi.org/project/ python-chess/ 6https://stockfishchess.org/about/ such as game scores, we first bin them and then use corresponding number for embedding lookup. Let E represent the embedding matrix. Then E[fj move] represents embeddings of jth move feature, or in general E[fmove] represents the concatenated embeddings of all move features. Similarly, E(fmove, fthreat, fscore) represents concatenated embeddings of all the features. 3.3 Feature Conjunctions We conjecture that explicitly modeling feature conjunctions might improve the performance. So we need an encoder which can handle input sets of features of variable length (features such as pieces attacking the moved piece can be of variable length). 
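As an illustration, features of this kind can be read off a pair of board states with the python-chess library; the exact feature inventory below is an assumption, and a simple material balance stands in for the Stockfish evaluation used for the Score features.

```python
# Sketch of Move/Threat/Score feature extraction for one move with python-chess.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material_balance(board, color):
    # Crude stand-in for an engine score: material of `color` minus the opponent's.
    own = sum(PIECE_VALUES[p.piece_type] for p in board.piece_map().values() if p.color == color)
    opp = sum(PIECE_VALUES[p.piece_type] for p in board.piece_map().values() if p.color != color)
    return own - opp

def extract_features(prev_fen, uci_move):
    prev = chess.Board(prev_fen)
    move = chess.Move.from_uci(uci_move)
    piece = prev.piece_at(move.from_square)              # piece being moved
    captured = prev.piece_at(move.to_square)             # None if nothing is captured
    opponent = not piece.color
    cur = prev.copy()
    cur.push(move)
    return {
        # Move features
        "piece": chess.piece_name(piece.piece_type),
        "from": chess.square_name(move.from_square),
        "to": chess.square_name(move.to_square),
        "captured": chess.piece_name(captured.piece_type) if captured else None,
        "is_castling": prev.is_castling(move),
        "gives_check": cur.is_check(),
        # Threat features: opposing pieces attacking the moved piece before/after the move
        "attackers_before": [chess.square_name(s) for s in prev.attackers(opponent, move.from_square)],
        "attackers_after": [chess.square_name(s) for s in cur.attackers(opponent, move.to_square)],
        # Score feature: change in a simple material balance for the moving side
        "score_delta": material_balance(cur, piece.color) - material_balance(prev, piece.color),
    }

print(extract_features(chess.STARTING_FEN, "e2e4"))
```

Fields such as attackers_before and attackers_after are variable-length lists, which is exactly why the encoder must cope with feature sets of varying size.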
One way to handle this is by picking up a canonical ordering of the features and consider a bidirectional LSTM encoder over the feature embeddings. As shown in Figure 4, this generates conjunctions of features. genc = BiLSTM∗({E(fmove, fthreat, fscore))}) Here E() represents the embedding matrix as described earlier and BiLSTM∗represents a sequential application of the BiLSTM function. Thus, if there a total of m feature keys and embedding dimension is d, E(fmove, fthreat, fscore) is matrix of m ∗d. If hidden size of BILSTM is of size x, then genc is of dimensionality m ∗x. We observe that different orderings gave similar performance. We also experimented with running k encoders, each on different ordering of features, and then letting the decoder access to each of the k encodings. This did not yield any significant gain in performance. The GAC model, unlike GAC-sparse, has some advantages as it uses a shared, continuous space 1665 Previous state piece = pawn color = black from = f7 to = f5 attacked-by = [ bishop,  knight] . . . black pawn f7 f5 ATTACK-BY bishop knight PIECE Feature Extraction Feature Representation Selection mechanism Input Selection vector RNN threatens Black white Decoder <Start> Black threatens white bishop Figure 4: The figure shows a model overview. We first extract various semantic and pragmatic features from the previous and current chess board states. We represent features through embedding in a shared space. We observe that feeding in feature conjunctions helps a lot. We consider a selection mechanism for the model to choose salient attributes from the input at every decoder step. to embed attribute values of different features, and can perform arbitrary feature conjunctions before passing a representation to the decoder, thereby sharing the burden of learning the necessary feature conjunctions. Our experiments confirm this intuition — GAC produces commentaries with higher BLEU as well as more diversity compared to GAC-sparse. 3.4 Decoder We use a LSTM decoder to generate the sentence given the chess move and the features g. At every output step t, the LSTM decoder predicts a distribution over vocabulary words taking into account the current hidden state ht, the input token it, and additional selection vector ct. For GAC-sparse, the selection vector is simply an affine transformation of the features g. For GAC model selection vector is derived via a selection mechanism. ot, hdec t = LSTM(hdec t−1, [concat(Edec(it), ct)]) pt = softmax(Wo[concat(ot, ct)] + bs) where pt represents th probability distribution over the vocabulary, Edec() represents the decoder word embedding matrix and elements of Wo matrix are trainable parameters. Selection/Attention Mechanism: As there are different salient attributes across the different chess moves, we also equip the GAC model with a mechanism to select and identify these attributes. We first transform hdec t by multiplying it with a trainable matrix Wc, and then take dot product of the result with each gi. a(i) t = dot(Wc ∗hdec t , genc i ) αt = softmax(at) ct = i=|g| X i=1 α(i) t genc i We use cross-entropy loss over the decoding outputs to train the model. 4 Experiments We split each of the data subsets in a 70:10:20 ratio into train, validation and test. All our models are implemented in Pytorch version 0.3.1 (Paszke et al., 2017). We use the ADAM optimizer (Kingma and Ba, 2014) with its default parameters and a mini-batch size of 32. Validation set perplexity is used for early-stopping. 
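For concreteness, the selection step of Section 3.4 can be sketched in PyTorch as follows; the layer names, tensor shapes, and batch size of one are assumptions rather than the authors' implementation.

```python
# Minimal sketch of one decoder step with the selection (attention) mechanism.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelectionDecoderStep(nn.Module):
    def __init__(self, feat_dim, hid_dim, emb_dim, vocab_size):
        super().__init__()
        self.W_c = nn.Linear(hid_dim, feat_dim, bias=False)    # maps h_t into feature space
        self.cell = nn.LSTMCell(emb_dim + feat_dim, hid_dim)   # input: [word embedding; c_t]
        self.W_o = nn.Linear(hid_dim + feat_dim, vocab_size)   # output projection

    def forward(self, word_emb, state, g_enc, prev_ctx):
        # word_emb: (1, emb_dim); prev_ctx: (1, feat_dim); g_enc: (num_feats, feat_dim)
        h, c = self.cell(torch.cat([word_emb, prev_ctx], dim=-1), state)
        scores = g_enc @ self.W_c(h).t()                       # a_t^(i) = <W_c h_t, g_i>
        alpha = F.softmax(scores, dim=0)                       # selection weights over features
        ctx = (alpha * g_enc).sum(dim=0, keepdim=True)         # c_t = sum_i alpha_t^(i) g_i
        logits = self.W_o(torch.cat([h, ctx], dim=-1))
        return F.log_softmax(logits, dim=-1), (h, c), ctx

step = SelectionDecoderStep(feat_dim=64, hid_dim=128, emb_dim=32, vocab_size=5000)
g = torch.randn(20, 64)                                        # 20 encoded feature conjunctions
state = (torch.zeros(1, 128), torch.zeros(1, 128))
log_probs, state, ctx = step(torch.randn(1, 32), state, g, torch.zeros(1, 64))
```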
At test-time, we use greedy search to generate the model output. We observed that beam decoding does not lead to any significant improvement in terms of validation BLEU score. We observe the BLEU (Papineni et al., 2002) and BLEU-2 (Vedantam et al., 2015) scores to measure the performance of the models. Addi1666 tionally, we consider a measure to quantify the diversity in the generated outputs. Finally, we also conduct a human evaluation study. In the remainder of this section, we discuss baselines along with various experiments and results. 4.1 Baselines In this subsection we discuss the various baseline methods. Manually-defined template (TEMP) We devise manually defined templates (Reiter, 1995) for ‘Move Description’ and ‘Move Quality’ categories. Note that template-based outputs tend to be repetitive as they lack diversity - drawing from a small, fixed vocabulary and using a largely static sentence structure. We define templates for a fixed set of cases which cover our data (For exact template specifications, refer to Appendix B). Nearest Neighbor (NN): We observe that the same move on similar board states often leads to similar commentary texts. To construct a simple baseline, we find the most similar move NMCR from among training data points for a given previous (R) and current (C) board states and move M. The commentary text corresponding to NMCR is selected as the output. Thus, we need to consider a scoring function to find the closest matching data point in training set. We use the Move, Threat and Score features to compute similarity to do so. By using a sparse representation, we consider total of 148 Move features, 18 Threat features, and 19 Score features. We use sklearn’s (Pedregosa et al., 2011b) NearestNeighbor module to find the closest matching game move. Raw Board Information Only (RAW): The RAW baseline ablates to assess the importance of our pragmatic feature functions. This architecture is similar to GAC, except that instead of our custom features A(f(Ri, Ci)), the encoder encodes raw board information of current and previous board states. ARAW (Ri, Ci) = [Lin(Ri), Lin(Ci)] Lin() for a board denotes it’s representation in a row-linear fashion. Each element of Lin() is a piece name (e.g pawn) denoting the piece at that square with special symbols for empty squares. 4.2 Comment Category Models As shown earlier, we categorize comments into six different categories. Among these, in this paper Dataset Features BLEU BLEU-2 Diversity MoveDesc TEMP 0.72 20.77 4.43 NN (M+T+S) 1.28 21.07 7.85 RAW 1.13 13.74 2.37 GAC-sparse 1.76 21.49 4.29 GAC (M+T) 1.85 23.35 4.72 Quality TEMP 16.17 47.29 1.16 NN (M+T) 5.98 42.97 4.52 RAW 16.92 47.72 1.07 GAC-sparse 14.98 51.46 2.63 GAC(M+T+S) 16.94 47.65 1.01 Comparative NN (M) 1.28 24.49 6.97 RAW 2.80 23.26 3.03 GAC-sparse 3.58 25.28 2.18 GAC(M+T) 3.51 29.48 3.64 Table 3: Performance of baselines and our model with different subsets of features as per various quantitative measures. ( S = Score, M= Move, T = Threat features; ) On all data subsets, our model outperforms the TEMP and NN baselines. Among proposed models, GAC performs better than GACsparse & RAW in general. For NN, GAC-sparse and GAC methods, we experiment with multiple feature combinations and report only the best as per BLEU scores. we consider only the first three as the amount of variance in the last three categories indicates that it would be extremely difficult for a model to learn to reproduce them accurately. 
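As an aside on the baselines above, the NN baseline can be sketched with scikit-learn roughly as follows; the sparse feature construction is elided, and the vectors below are random placeholders whose dimensionality follows the 148 + 18 + 19 = 185 binary features mentioned above.

```python
# Sketch of the nearest-neighbour baseline: retrieve the training move whose sparse
# Move/Threat/Score feature vector is closest to the query, and reuse its commentary.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_nn_baseline(train_features, train_comments):
    index = NearestNeighbors(n_neighbors=1).fit(train_features)

    def predict(query_features):
        _, idx = index.kneighbors(query_features.reshape(1, -1))
        return train_comments[int(idx[0, 0])]

    return predict

rng = np.random.default_rng(0)
predict = build_nn_baseline(rng.integers(0, 2, size=(1000, 185)),
                            ["placeholder commentary"] * 1000)
print(predict(rng.integers(0, 2, size=185)))
```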
The number of data points, as tagged by the trained classifiers, in the subsets ‘Move Description’, ‘Move Quality’ and ‘Comparative’ are 28,228, 793 and 5397 respectively. We consider separate commentary generation models for each of the three categories. Each model is tuned separately on the corresponding validation sets. Table 3 shows the BLEU and BLEU-2 scores for the proposed model under different subsets of features. Overall BLEU scores are low, likely due to the inherent variance in the language generation task (Novikova et al., 2017) , although a precursory examination of the outputs for data points selected randomly from test set indicated that they were reasonable. Figure 5 illustrates commentaries generated by our models through an example (a larger list of qualitative examples can be found in Appendix C). Which features are useful? In general, adding Threat features improves the performance, though the same is not always true for Score features. Qual has higher BLEU scores than the other datasets due to smaller vocabulary and lesser variation in commentary. As can be observed in Table 4, Threat features are useful for both ‘Move Quality’ and ‘Move Description’ subsets of data. Adding Score features helps for ‘Move Quality’ subset. This intuitively makes sense since Score 1667 Figure 5: Outputs from various models on a test example from the MoveDesc subset. Dataset Features BLEU BLEU-2 Diversity MoveDesc GAC (M) 1.41 19.06 4.32 GAC (M+T) 1.85 23.35 4.72 GAC (M+T+S) 1.64 22.82 4.29 Quality GAC (M) 13.05 48.37 1.61 GAC (M+T) 14.22 49.57 1.54 GAC(M+T+S) 14.44 51.79 1.48 Comparative GAC(M) 3.10 19.84 2.88 GAC(M+T) 3.51 29.48 3.64 GAC(M+T+S) 1.15 25.44 3.14 Table 4: Performance of the GAC model with different feature sets. ( S = Score, M= Move, T = Threat features; ) Different subset of features work best for different subsets. For instance, Score features seem to help only in the Quality category. Note that the results for Quality are from 5-fold crossvalidation, since the number of datapoints in the category is much lesser than the other two. features directly encode proxies for move quality as per a chess evaluation engine. 4.3 A Single Model For All Categories In this experiment, we merge the training and validation data of the first three categories and tune a single model for this merged data. We then compare its performance on all test sentences in our data. COMB denotes using the best GAC model for a test example based on its original class (e.g Desc) and computing the BLEU of the sentences so generated with the ground truth. GAC-all represents the GAC model learnt on the merged training data. As can be seen from Table 5, this does not lead to any performance improvements. We investigate this issue further by analyzing whether the board states are predictive of the type of category or not. To achieve this, we construct a multi-class classifier using all the Move, Threat and Score features to predict the three categories under consideration. However, we observe accuracy of around 33.4%, which is very close to the performance of a random prediction model. 
This partially explains why a single model did not fare better even though it had the opportunity to learn Dataset Features BLEU BLEU-2 Diversity All COMB (M) 2.07 20.13 4.50 COMB (M+T) 2.43 25.37 4.88 COMB (M+T+S) 1.83 28.86 4.33 All GAC-all(M) 1.69 20.66 4.67 GAC-all(M+T) 1.94 24.11 5.16 GAC-all (M+T+S) 2.02 24.70 4.97 All CAT (M) 1.90 19.96 3.82 Table 5: The COMB approaches show the combined performance of separately trained models on the respective test subsets. from a larger dataset. Category-aware model (CAT) We observed above that with the considered features, it is not possible to predict the type of comment to be made, and the GAC-all model results are better than COMB results. Hence, we extend the GACall model to explicitly provide with the information about the comment category. We achieve this by adding a one-hot representation of the category of the comment to the input of the RNN decoder at every time step. As can be seen in the Table 5, CAT(M) performs better than GAC-all(M) in terms of BLEU-4, while performing slightly worse on BLEU-2. This demonstrates that explicitly providing information about the comment category can help the model. 4.4 Diversity In Generated Commentaries Humans use some variety in the choice of words and sentence structure. As such, outputs from rule based templates, which demonstrate low variety, may seem repetitive and boring. To capture this quantitatively, and to demonstrate the variety in texts from our method, we calculate the entropy (Shannon, 1951) of the distribution of unigrams, bigrams and trigrams of words in the predicted outputs, and report the geometric mean of these values. Using only a small set of words in similar counts will lead to lower entropy and is undesirable. As can be observed from Table 3, template 1668 baseline performs worse on the said measure compared to our methods for the ’MoveDesc’ subset of the data. 4.5 Human Evaluation Study As discussed in the qualitative examples above, we often found the outputs to be good - though BLEU scores are low. BLEU is known to correlate poorly (Reiter and Belz, 2009; Wiseman et al., 2017; Novikova et al., 2017) with human relevance scores for NLG tasks. Hence, we conduct a human evaluation study for the best 2 neural (GAC,GAC-sparse) and best 2 non-neural methods (TEMP,NN). Setup: Specifically, annotators are shown a chess move through previous board and resulting board snapshots, along with information on which piece moved (a snapshot of a HIT7 is provided in the Appendix D). With this context, they were shown text commentary based on this move and were asked to judge the commentary via three questions, shortened versions of which can be seen in the first column of Table 6. We randomly select 100 data points from the test split of ‘Move Description’ category and collect the predictions from each of the methods under consideration. We hired two Anglophone (Lifetime HIT acceptance % > 80) annotators for every human-evaluated test example. We additionally assess chess proficiency of the annotators using questions from the chess-QA dataset by (Cirik et al., 2015). Within each HIT, we ask two randomly selected questions from the chess-QA dataset. Finally we consider only those HITs wherein the annotator was able to answer the proficiency questions correctly. Results: We conducted a human evaluation study for the MoveDesc subset of the data. As can be observed from Table 6, outputs from our method attain slightly more favorable scores compared to the ground truth commentaries. 
This shows that the predicted outputs from our model are not worse than ground truth on the said measures. This is in spite of the fact that the BLEU-4 score for the predicted outputs is only ∼2 w.r.t. the ground truth outputs. One reason for slightly lower performance of the ground truth outputs on the said measures is that some of the human writ7Human Intelligence Task ten commentaries are either very ungrammatical or too concise. A more surprising observation is that around 30% of human written ground truth outputs were also marked as not valid for given board move. On inspection, it seems that commentary often contains extraneous game information beyond that of move alone, which indicates that an ideal comparison should be over commentary for an entire game, although this is beyond the scope of the current work. The inter-annotator agreement for our experiments (Cohens κ (Cohen, 1968)) is 0.45 for Q1 and 0.32 for Q2. We notice some variation in κ coefficients across different systems. While TEMP and GAC responses had a 0.5-0.7 coefficient range, the responses for CLM had a much lower coefficient. In our setup, each HIT consists of 7 comments, one from each system. For Q3 (fluency), which is on an ordinal scale, we measure rank-order consistency between the responses of the two annotators of a HIT. Mean Kendall τ (Kendall, 1938) across all HITs was found to be 0.39. To measures significance of results, we perform bootstrap tests on 1000 subsets of size 50 with a significance threshold of p = 0.05 for each pair of systems. For Q1, we observe that GAC(M), GAC(M+T) and GAC(M+T+S) methods are significantly better than baselines NN and GAC-sparse. We find that neither of GAC(M+T) and GT significantly outperform each other on Q1 as well as Q2. But we do find that GAC(M+T) does better than GAC(M) on both Q1 and Q2. For fluency scores, we find that GAC(M+T) is more fluent than GT, NN , GAC-sparse, GAC(M). Neither of GAC(M) and GAC(M+T+S) is significantly more fluent than the other. 5 Related Work NLG research has a long history, with systems ranging from completely rule-based to learningbased ones (Reiter et al., 2005, 2003a), which have had both practical successes (Reiter et al., 2005) and failures (Reiter et al., 2003a). Recently, there have been numerous works which propose text generation given structured records, biographies (Lebret et al., 2016), recipes (Yang et al., 2016; Kiddon et al., 2016), etc. A key difference between generation given a game state compared to these inputs is that the game state is an evolving description at a point in a process, as opposed 1669 Question GT GAC (M) GAC (MT) GAC (MTS) GAC -sparse TEMP NN Is commentary correct for the given move? (%Yes) 70.4 42.3 64.8 67.6 56.3 91.5 52.1 Can the move be inferred from the commentary? (%Yes) 45.1 25.3 42.3 36.7 40.8 92.9 42.3 Fluency (scale of (least)1 - 5(most) ) Mean (Std. dev.) 4.03 (1.31) 4.15 (1.20) 4.44 (1.02) 4.54 (0.89) 4.15 (1.26) 4.69 (0.64) 3.72 (1.36) Table 6: Human study results on MoveDesc data category. Outputs from GAC are in general better than ground truth, NN and GAC-sparse. TEMP outperforms other methods, though as shown earlier, outputs from TEMP lack diversity. to recipes (which are independent of each other), records (which are static) and biographies (which are one per person, and again independent). Moreover, our proposed method effectively uses various types of semantic and pragmatic information about the game state. 
In this paper we have introduced a new largescale data for game commentary generation. The commentaries cover a variety of aspects like move description, quality of move, and alternative moves. This leads to a content selection challenge, similar to that noted in Wiseman et al. (2017). Unlike Wiseman et al. (2017), our focus is on generating commentary for individual moves in a game, as opposed to game summaries from aggregate statistics as in their task. One of the first NLG datasets was the SUMTIME-METEO (Reiter et al., 2005) corpus with ≈500 record-text pairs for technical weather forecast generation. Liang et al (2009) worked on common weather forecast generation using the WEATHERGOV dataset, which has ≈10K record-text pairs. A criticism of WEATHERGOV dataset (Reiter, 2017) is that weather records themselves may have used templates and rules with optional human post-editing. There have been prior works on generating commentary for ROBOCUP matches (Chen and Mooney, 2008; Mei et al., 2015). The ROBOCUP dataset, however, is collected from 4 games and contains about 1K events in total. Our dataset is two orders of magnitude larger than the ROBOCUP dataset, and we hope that it provides a promising setting for future NLG research. 6 Conclusions In this paper, we curate a dataset for the task of chess commentary generation and propose methods to perform generation on this dataset. Our proposed method effectively utilizes information related to the rules and pragmatics of the game. A human evaluation study judges outputs from the proposed methods to be as good as human written commentary texts for ‘Move Description’ subset of the data. Our dataset also contains multi-move-single commentary pairs in addition to single movesingle commentary pairs. Generating commentary for such multi-moves is a potential direction for future work. We anticipate this task to require even deeper understanding of the game pragmatics than the single move-single commentary case. Recent work (Silver et al., 2016) has proposed reinforcement learning based game-playing agents which learn to play board games from scratch, learning end-to-end from both recorded games and self-play. An interesting point to explore is whether such pragmatically trained game state representations can be leveraged for the task of game commentary generation. Acknowledgements We thank Volkan Cirik, Daniel Clothiaux, Hiroaki Hayashi and anonymous reviewers for providing valuable comments and feedback. References David L Chen and Raymond J Mooney. 2008. Learning to sportscast: a test of grounded language acquisition. In Proceedings of the 25th international conference on Machine learning. ACM, pages 128– 135. Volkan Cirik, Louis-Philippe Morency, and Eduard Hovy. 2015. Chess q&a: Question Answering on Chess Games. In Reasoning, Attention, Memory (RAM) Workshop, Neural Information Processing Systems. Jacob Cohen. 1968. Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. Psychological bulletin 70(4):213. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. 1670 Hirotaka Kameko, Shinsuke Mori, and Yoshimasa Tsuruoka. 2015. Learning a game commentary generator with grounded move expressions. In Computational Intelligence and Games (CIG), 2015 IEEE Conference on. IEEE, pages 177–184. Andrej Karpathy and Li Fei-Fei. 2015. Deep visualsemantic alignments for generating image descriptions. 
In Proceedings of the IEEE conference on computer vision and pattern recognition. pages 3128–3137. Maurice G Kendall. 1938. A new measure of rank correlation. Biometrika 30(1/2):81–93. Chlo´e Kiddon, Luke Zettlemoyer, and Yejin Choi. 2016. Globally Coherent Text Generation with Neural Checklist Models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. pages 329–339. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . R´emi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. arXiv preprint arXiv:1603.07771 . Percy Liang, Michael I Jordan, and Dan Klein. 2009. Learning semantic correspondences with less supervision. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1. Association for Computational Linguistics, pages 91– 99. Jen-Wen Liao and Jason S Chang. 1990. Computer Generation of Chinese Commentary on Othello Games. In Proceedings of Rocling III Computational Linguistics Conference III. pages 393–415. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision. Springer, pages 740–755. Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023 . Saad Mahamood and Ehud Reiter. 2012. Working with clinicians to improve a patient-information NLG system. In Proceedings of the Seventh International Natural Language Generation Conference. Association for Computational Linguistics, pages 100–104. Hongyuan Mei, Mohit Bansal, and Matthew R Walter. 2015. What to talk about and how? selective generation using LSTMs with coarse-to-fine alignment. arXiv preprint arXiv:1509.00838 . Jekaterina Novikova, Ondˇrej Duˇsek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for nlg. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Copenhagen, Denmark, pages 2241–2252. https://www.aclweb.org/anthology/D17-1238. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics. Association for Computational Linguistics, pages 311–318. Adam Paszke, Sam Gross, Soumith Chintala, and Gregory Chanan. 2017. Pytorch. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011a. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research 12:2825–2830. Fabian Pedregosa, Ga¨el Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011b. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12(Oct):2825–2830. Ehud Reiter. 1995. NLG vs. templates. arXiv preprint cmp-lg/9504013 . Ehud Reiter. 2017. 
You Need to Understand Your Corpora - the Weathergov Example. Blogpost https://ehudreiter.com/2017/05/09/ weathergov/ . Ehud Reiter and Anja Belz. 2009. An investigation into the validity of some metrics for automatically evaluating natural language generation systems. Computational Linguistics 35(4):529–558. Ehud Reiter, Roma Robertson, and Liesl M Osman. 2003a. Lessons from a failure: Generating tailored smoking cessation letters. Artificial Intelligence 144(1-2):41–58. Ehud Reiter, Somayajulu Sripada, Jim Hunter, Jin Yu, and Ian Davy. 2005. Choosing words in computergenerated weather forecasts. Artificial Intelligence 167(1-2):137–169. Ehud Reiter, Somayajulu G Sripada, and Roma Robertson. 2003b. Acquiring correct knowledge for natural language generation. Journal of Artificial Intelligence Research 18:491–516. Aleksander Sadikov, Martin Moina, Matej Guid, Jana Krivec, and Ivan Bratko. 2006. Automated chess tutor. In International Conference on Computers and Games. Springer, pages 13–25. 1671 Claude E Shannon. 1951. Prediction and entropy of printed English. Bell Labs Technical Journal 30(1):50–64. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. 2016. Mastering the game of Go with deep neural networks and tree search. nature 529(7587):484–489. Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition. pages 4566–4575. Sam Wiseman, Stuart M Shieber, and Alexander M Rush. 2017. Challenges in Data-to-Document Generation. arXiv preprint arXiv:1707.08052 . Zichao Yang, Phil Blunsom, Chris Dyer, and Wang Ling. 2016. Reference-Aware Language Models. arXiv preprint arXiv:1611.01628 .
2018
154
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1672–1682 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1672 From Credit Assignment to Entropy Regularization: Two New Algorithms for Neural Sequence Prediction Zihang Dai∗, Qizhe Xie∗, Eduard Hovy Language Technologies Institute Carnegie Mellon University {dzihang, qizhex, hovy}@cs.cmu.edu Abstract In this work, we study the credit assignment problem in reward augmented maximum likelihood (RAML) learning, and establish a theoretical equivalence between the token-level counterpart of RAML and the entropy regularized reinforcement learning. Inspired by the connection, we propose two sequence prediction algorithms, one extending RAML with fine-grained credit assignment and the other improving Actor-Critic with a systematic entropy regularization. On two benchmark datasets, we show the proposed algorithms outperform RAML and Actor-Critic respectively, providing new alternatives to sequence prediction. 1 Introduction Modeling and predicting discrete sequences is the central problem to many natural language processing tasks. In the last few years, the adaption of recurrent neural networks (RNNs) and the sequenceto-sequence model (seq2seq) (Sutskever et al., 2014; Bahdanau et al., 2014) has led to a wide range of successes in conditional sequence prediction, including machine translation (Sutskever et al., 2014; Bahdanau et al., 2014), automatic summarization (Rush et al., 2015), image captioning (Karpathy and Fei-Fei, 2015; Vinyals et al., 2015; Xu et al., 2015) and speech recognition (Chan et al., 2016). Despite the distinct evaluation metrics for the aforementioned tasks, the standard training algorithm has been the same for all of them. Specifically, the algorithm is based on maximum likelihood estimation (MLE), which maximizes the log∗Equal contribution. likelihood of the “ground-truth” sequences empirically observed.1 While largely effective, the MLE algorithm has two obvious weaknesses. Firstly, the MLE training ignores the information of the task specific metric. As a result, the potentially large discrepancy between the log-likelihood during training and the task evaluation metric at test time can lead to a suboptimal solution. Secondly, MLE can suffer from the exposure bias, which refers to the phenomenon that the model is never exposed to its own failures during training, and thus cannot recover from an error at test time. Fundamentally, this issue roots from the difficulty in statistically modeling the exponentially large space of sequences, where most combinations cannot be covered by the observed data. To tackle these two weaknesses, there have been various efforts recently, which we summarize into two broad categories: • A widely explored idea is to directly optimize the task metric for sequences produced by the model, with the specific approaches ranging from minimum risk training (MRT) (Shen et al., 2015) and learning as search optimization (LaSO) (Daum´e III and Marcu, 2005; Wiseman and Rush, 2016) to reinforcement learning (RL) (Ranzato et al., 2015; Bahdanau et al., 2016). 
In spite of the technical differences, the key component to make these training algorithms practically efficient is often a delicate credit assignment scheme, which transforms the sequence-level signal into dedicated smaller units (e.g., token-level or chunk-level), and allocates them to specific decisions, allowing for efficient optimization with a much lower variance. For instance, the beam search optimiza1In this work, we use the terms “ground-truth” and “reference” to refer to the empirical observations interchangeably. 1673 tion (BSO) (Wiseman and Rush, 2016) utilizes the position of margin violations to produce signals to the specific chunks, while the actor-critic (AC) algorithm (Bahdanau et al., 2016) trains a critic to enable token-level signals. • Another alternative idea is to construct a task metric dependent target distribution, and train the model to match this task-specific target instead of the empirical data distribution. As a typical example, the reward augmented maximum likelihood (RAML) (Norouzi et al., 2016) defines the target distribution as the exponentiated pay-off (sequence-level reward) distribution. This way, RAML not only can incorporate the task metric information into training, but it can also alleviate the exposure bias by exposing imperfect outputs to the model. However, RAML only works on the sequence-level training signal. In this work, we are intrigued by the question whether it is possible to incorporate the idea of fine-grained credit assignment into RAML. More specifically, inspired by the token-level signal used in AC, we aim to find the token-level counterpart of the sequence-level RAML, i.e., defining a token-level target distribution for each autoregressive conditional factor to match. Motived by the question, we first formally define the desiderata the token-level counterpart needs to satisfy and derive the corresponding solution (§2). Then, we establish a theoretical connection between the derived token-level RAML and entropy regularized RL (§3). Motivated by this connection, we propose two algorithms for neural sequence prediction, where one is the token-level extension to RAML, and the other a RAML-inspired improvement to the AC (§4). We empirically evaluate the two proposed algorithms, and show different levels of improvement over the corresponding baseline. We further study the importance of various techniques used in our experiments, providing practical suggestions to readers (§6). 2 Token-level Equivalence of RAML We first introduce the notations used throughout the paper. Firstly, capital letters will denote random variables and lower-case letters are the values to take. As we mainly focus on conditional sequence prediction, we use x for the conditional input, and y for the target sequence. With y denoting a sequence, yj i then denotes the subsequence from position i to j inclusively, while yt denotes the single value at position t. Also, we use |y| to indicate the length of the sequence. To emphasize the ground-truth data used for training, we add superscript ∗to the input and target, i.e., x∗and y∗. In addition, we use Y to denote the set of all possible sequences with one and only one eos symbol at the end, and W to denote the set of all possible symbols in a position. Finally, we assume length of sequences in Y is bounded by T. 
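For intuition, the exponentiated pay-off target can be illustrated on a tiny, enumerable candidate set (the formal definition follows in §2.1). The candidates, the toy pay-off function and the temperature values below are invented purely for illustration; real output spaces are far too large to enumerate this way.

```python
import math

def raml_target(candidates, reward_fn, reference, tau):
    """Exponentiated pay-off distribution over an enumerable candidate set:
    P(y) proportional to exp(R(y; y*) / tau)."""
    scores = [math.exp(reward_fn(y, reference) / tau) for y in candidates]
    z = sum(scores)
    return [s / z for s in scores]

def toy_reward(y, y_star):
    """Illustrative pay-off only: fraction of positions where the candidate matches the reference."""
    return sum(a == b for a, b in zip(y, y_star)) / len(y_star)

reference = "the cat sat".split()
candidates = [reference, "the cat slept".split(), "a dog sat".split(), "completely off topic".split()]
for tau in (1.0, 0.1):
    print(tau, [round(p, 3) for p in raml_target(candidates, toy_reward, reference, tau)])
# A smaller tau concentrates the target on the reference; a larger tau spreads mass over
# near-miss candidates, which is how RAML exposes imperfect outputs during training.
```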
2.1 Background: RAML As discussed in §1, given a ground-truth pair (x∗, y∗), RAML defines the target distribution using the exponentiated pay-off of sequences, i.e., PR(y | x∗, y∗) = exp (R(y; y∗)/τ) P y′∈Y exp (R(y′; y∗)/τ), (1) where R(y; y∗) is the sequence-level reward, such as BLEU score, and τ is the temperature hyperparameter controlling the sharpness. With the definition, the RAML algorithm simply minimizes the cross entropy (CE) between the target distribution and the model distribution Pθ(Y | x∗), i.e., min θ CE PR(Y | x∗, y∗)∥Pθ(Y | x∗) . (2) Note that, this is quite similar to the MLE training, except that the target distribution is different. With the particular choice of target distribution, RAML not only makes sure the ground-truth reference remains the mode, but also allows the model to explore sequences that are not exactly the same as the reference but have relatively high rewards. Compared to algorithms trying to directly optimize task metric, RAML avoids the difficulty of tracking and sampling from the model distribution that is consistently changing. Hence, RAML enjoys a much more stable optimization without the need of pretraining. However, in order to optimize the RAML objective (Eqn. (2)), one needs to sample from the exponentiated pay-off distribution, which is quite challenging in practice. Thus, importance sampling is often used (Norouzi et al., 2016; Ma et al., 2017). We leave the details of the practical implementation to Appendix B.1. 2.2 Token-level Target Distribution Despite the appealing properties, RAML only operates on the sequence-level reward. As a result, the reward gap between any two sequences cannot be attributed to the responsible decisions precisely, 1674 which often leads to a low sample efficiency. Ideally, since we rely on the auto-regressive factorization Pθ(y | x∗) = Q|y| t=1 Pθ(yt | yt−1 1 , x∗), the optimization would be much more efficient if we have the target distribution for each token-level factor Pθ(Yt | yt−1 1 , x∗) to match. Conceptually, this is exactly how the AC algorithm improves upon the vanilla sequence-level REINFORCE algorithm (Ranzato et al., 2015). With this idea in mind, we set out to find such a token-level target. Firstly, we assume the tokenlevel target shares the form of a Boltzmann distribution but parameterized by some unknown negative energy function QR, i.e.,2 PQR(yt | yt−1 1 , y∗) = exp QR(yt−1 1 , yt; y∗)/τ P w∈W exp QR(yt−1 1 , w; y∗)/τ. (3) Intuitively, QR(yt−1 1 , w; y∗) measures how much future pay-off one can expect if w is generated, given the current status yt−1 1 and the reference y∗. This quantity highly resembles the action-value function (Q-function) in reinforcement learning. As we will show later, it is indeed the case. Before we state the desiderata for QR, we need to extend the definition of R in order to evaluate the goodness of an unfinished partial prediction, i.e., sequences without an eos suffix. Let Y−be the set of unfinished sequences, following Bahdanau et al. (2016), we define the pay-off function R for a partial sequence ˆy ∈Y−, |ˆy| < T as R(ˆy; y∗) = R(ˆy + eos; y∗), (4) where the + indicates string concatenation. With the extension, we are ready to state two requirements for QR: 1. Marginal match: For PQR to be the token-level equivalence of PR, the sequence-level marginal distribution induced by PQR must match PR, i.e., for any y ∈Y, |y| Y t=1 PQR(yt | yt−1 1 ) = PR(y). (5) Note that there are infinitely many QR’s satisfying Eqn. 
(5), because adding any constant value to QR does not change the Boltzmann distribution, known as shift-invariance w.r.t. the energy. 2To avoid clutter, the conditioning on x∗will be omitted in the sequel, assuming it’s clear from the context. 2. Terminal condition: Secondly, let’s consider the value of QR when emitting an eos symbol to immediately terminate the generation. As mentioned earlier, QR measures the expected future pay-off. Since the emission of eos ends the generation, the future pay-off can only come from the immediate increase of the pay-off. Thus, we require QR to be the incremental pay-off when producing eos, i.e. QR(ˆy, eos; y∗) = R(ˆy + eos; y∗) −R(ˆy; y∗), (6) for any ˆy ∈Y−. Since Eqn. (6) enforces the absolute of QR at a point, it also solves the ambiguity caused by the shift-invariance property. Based on the two requirements, we can derive the form QR, which is summarized by Proposition 1. Proposition 1. PQR and QR satisfy requirements (5) and (6) if and only if for any ground-truth pair (x∗, y∗) and any sequence prediction y ∈Y, QR(yt−1 1 , yt; y∗) = R(yt 1; y∗) −R(yt−1 1 ; y∗) + τ log X w∈W exp  QR(yt 1, w; y∗)/τ  , (7) when t < |y|, and otherwise, i.e., when t = |y| QR(yt−1 1 , yt; y∗) = R(yt 1; y∗) −R(yt−1 1 ; y∗). (8) Proof. See Appendix A.1. Note that, instead of giving an explicit form for the token-level target distribution, Proposition 1 only provides an equivalent condition in the form of an implicit recursion. Thus, we haven’t obtained a practical algorithm yet. However, as we will discuss next, the recursion has a deep connection to entropy regularized RL, which ultimately inspires our proposed algorithms. 3 Connection to Entropy-regularized RL Before we dive into the connection, we first give a brief review of the entropy-regularized RL. For an in-depth treatment, we refer readers to (Ziebart, 2010; Schulman et al., 2017). 3.1 Background: Entropy-regularized RL Following the standard convention of RL, we denote a Markov decision process (MDP) by a tuple M = (S, A, ps, r, γ), where S, A, ps, r, γ are the state space, action space, transition probability, reward function and discounting factor respectively.3 3In sequence prediction, we are only interested in the periodic (finite horizon) case. 1675 Based on the notation, the goal of entropyregularized RL augments is to learn a policy π(at | st) which maximizes the discounted expected future return and causal entropy (Ziebart, 2010), i.e., max π X t E st∼ρs,at∼π(·|st) γt−1[r(st, at) + αH(π(· | st))], where H denotes the entropy and α is a hyperparameter controlling the relative importance between the reward and the entropy. Intuitively, compared to standard RL, the extra entropy term encourages exploration and promotes multi-modal behaviors. Such properties are highly favorable in a complex environment. Given an entropy-regularized MDP, for any fixed policy π, the state-value function V π(s) and the action-value function Qπ can be defined as V π(s) = E a∼π(·|s)[Qπ(s, a)] + αH(π(· | s)), Qπ(s, a) = r(s, a) + E s′∼ρs [γV π(s′)]. (9) With the definitions above, it can further be proved (Ziebart, 2010; Schulman et al., 2017) that the optimal state-value function V ∗, the actionvalue function Q∗and the corresponding optimal policy π∗satisfy the following equations V ∗(s) = α log X a∈A exp Q∗(s, a)/α , (10) Q∗(s, a) = r(s, a) + γ E s′∼ρs [V ∗(s′)], (11) π∗(a | s) = exp (Q∗(s, a)/α) P a′∈A exp (Q∗(s, a′)/α). (12) Here, Eqn. 
(10) and (11) are essentially the entropy-regularized counterparts of the optimal Bellman equations in standard RL. Following previous literature, we will refer to Eqn. (10) and (11) as the optimal soft Bellman equations, and the V ∗ and Q∗as optimal soft value functions. 3.2 An RL Equivalence of the Token-level RAML To reveal the connection, it is convenient to define the incremental pay-off r(yt−1 1 , yt; y∗) = R(yt 1; y∗) −R(yt−1 1 ; y∗), (13) and the last term of Eqn. (7) as VR(yt 1; y∗) = τ log X w∈W exp  QR(yt 1, w; y∗)/τ  (14) Substituting the two definitions into Eqn. (7), the recursion simplifies as QR(yt−1 1 , yt; y∗) = r(yt−1 1 , yt; y∗) + VR(yt 1; y∗). (15) Now, it is easy to see that the Eqn. (14) and (15), which are derived from the token-level RAML, highly resemble the optimal soft Bellman equations (10) and (11) in entropy-regularized RL. The following Corollary formalizes the connection. Corollary 1. For any ground-truth pair (x∗, y∗), the recursion specified by Eqn. (13), (14) and (15) is equivalent to the optimal soft Bellman equation of a “deterministic” MDP in entropy-regularized reinforcement learning, denoted as MR, where • the state space S corresponds to Y−, • the action space A corresponds to W, • the transition probability ρs is a deterministic process defined by string concatenation • the reward function r corresponds to the incremental pay-off defined in Eqn. (13), • the discounting factor γ = 1, • the entropy hyper-parameter α = τ, • and a period terminates either when eos is emitted or when its length reaches T and we enforce the generation of eos. Moreover, the optimal soft value functions V ∗and Q∗of the MDP exactly match the VR and QR defined by Eqn. (14) and (15) respectively. The optimal policy π∗is hence equivalent to the tokenlevel target distribution PQR. Proof. See Appendix A.1. The connection established by Corollary 1 is quite inspiring: • Firstly, it provides a rigorous and generalized view of the connection between RAML and entropy-regularized RL. In the original work, Norouzi et al. (2016) point out RAML can be seen as reversing the direction of KL (Pθ∥PR), which is a sequence-level view of the connection. Now, with the equivalence between the token-level target PQR and the optimal Q∗, it generalizes to matching the future action values consisting of both the reward and the entropy. • Secondly, due to the equivalence, if we solve the optimal soft Q-function of the corresponding MDP, we directly obtain the token-level target distribution. This hints at a practical algorithm with token-level credit assignment. 1676 • Moreover, since RAML is able to improve upon MLE by injecting entropy, the entropyregularized RL counterpart of the standard AC algorithm should also lead to an improvement in a similar manner. 4 Proposed Algorithms In this section, we explore the insights gained from Corollary 1 and present two new algorithms for sequence prediction. 4.1 Value Augmented Maximum Likelihood The first algorithm we consider is the token-level extension of RAML, which we have been discussing since §2. As mentioned at the end of §2.2, Proposition 1 only gives an implicit form of QR, and so is the token-level target distribution PQR (Eqn. (3)). However, thanks to Corollary 1, we now know that QR is the same as the optimal soft action-value function Q∗of the entropyregularized MDP MR. Hence, by finding the Q∗, we will have access to PQR. 
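Concretely, once Q∗ (equivalently QR) is in hand, the token-level target of Eqn. (3) is just a temperature-scaled softmax of the Q-values over the vocabulary, and the soft value of Eqn. (14) is the corresponding log-sum-exp. The short sketch below uses made-up Q-values for a three-symbol vocabulary; it is only meant to illustrate these two quantities and the greedy limit as the temperature goes to 0.

```python
import numpy as np

def soft_value(q_row, tau):
    """V = tau * log sum_w exp(Q(., w) / tau), computed with the usual log-sum-exp shift."""
    z = q_row / tau
    m = z.max()
    return tau * (m + np.log(np.exp(z - m).sum()))

def token_target(q_row, tau):
    """P(w | .) proportional to exp(Q(., w) / tau): a Boltzmann (softmax) distribution."""
    z = (q_row - q_row.max()) / tau
    p = np.exp(z)
    return p / p.sum()

q = np.array([1.0, 0.5, -2.0])          # made-up Q-values over a 3-symbol vocabulary
for tau in (1.0, 0.1, 0.01):
    print(tau, round(soft_value(q, tau), 3), token_target(q, tau).round(3))
# As tau -> 0 the soft value approaches max(q) and the target collapses onto the argmax,
# recovering the un-regularized greedy behaviour; a larger tau spreads probability mass.
```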
At the first sight, it seems recovering Q∗is as difficult as solving the original sequence prediction problem, because solving Q∗from the MDP is essentially the same as learning the optimal policy for sequence prediction. However, it is not true because QR (i.e., PQR) can condition on the correct reference y∗. In contrast, the model distribution Pθ can only depend on x∗. Therefore, the function approximator trained to recover Q∗can take y∗as input, making the estimation task much easier. Intuitively, when recovering Q∗, we are trying to train an ideal “oracle”, which has access to the ground-truth reference output, to decide the best behavior (policy) given any arbitrary (good or not) state. Thus, following the reasoning above, we first train a parametric function approximator Qφ to search the optimal soft action value. In this work, for simplicity, we employ the Soft Qlearning algorithm (Schulman et al., 2017) to perform the policy optimization. In a nutshell, Soft Q-Learning is the entropy-regularized version of Q-Learning, an off-policy algorithm which minimizes the mean squared soft Bellman residual according to Eqn. (11). Specifically, given groundtruth pair (x∗, y∗), for any trajectory y ∈Y, the training objective is min φ |y| X t=1 h Qφ(yt−1 1 , yt; y∗) −ˆQφ(yt−1 1 , yt; y∗) i2 , (16) where ˆQφ(yt−1 1 , yt; y∗) = r(yt−1 1 , yt; y∗) + Vφ(yt 1; y∗) is the one-step look-ahead target Q-value, and Vφ(yt 1; y∗) = τ log P w∈W exp Qφ(yt 1, w; y∗)/τ as defined in Eqn. (10). In the recent instantiation of Q-Learning (Mnih et al., 2015), to stabilize training, the target Q-value is often estimated by a separate slowly updated target network. In our case, as we have access to a significant amount of reference sequences, we find the target network not necessary. Thus, we directly optimize Eqn. (16) using gradient descent, and let the gradient flow through both Qφ(yt−1 1 , yt; y∗) and Vφ(yt 1; y∗) (Baird, 1995). After the training of Qφ converges, we fix the parameters of Qφ, and optimize the cross entropy CE PQφ∥Pθ  w.r.t. the model parameters θ, which is equivalent to4 min θ E y∼PQφ   |y| X t=1 CE PQφ(Yt | yt−1 1 )∥Pθ(Yt | yt−1 1 )   . (17) Compared to the of objective of RAML in Eqn. (2), having access to PQφ(Yt | yt−1 1 ) allows us to provide a distinct token-level target for each conditional factor Pθ(Yt | yt−1 1 ) of the model. While directly sampling from PR is practically infeasible (§2.1), having a parametric target distribution PQφ makes it theoretically possible to sample from PQφ and perform the optimization. However, empirically, we find the samples from PQφ are not diverse enough (§6). Hence, we fall back to the same importance sampling approach (see Appendix B.2) as used in RAML. Finally, since the algorithm utilizes the optimal soft action-value function to construct the tokenlevel target, we will refer to it as value augmented maximum likelihood (VAML) in the sequel. 4.2 Entropy-regularized Actor Critic The second algorithm follows the discussion at the end of §3.2, which is essentially an actor-critic algorithm based on the entropy-regularized MDP in Corollary 1. For this reason, we name the algorithm entropy-regularized actor critic (ERAC). As with standard AC algorithm, the training process interleaves the evaluation of current policy using the parametric critic Qφ and the optimization of the actor policy πθ given the current critic. Critic Training. 
The critic is trained to perform policy evaluation using the temporal difference 4See Appendix A.2 for a detailed derivation. 1677 learning (TD), which minimizes the TD error min φ E y∼πθ |y| X t=1 h Qφ(yt−1 1 , yt; y∗) −ˆQ ¯φ(yt−1 1 , yt; y∗) i2 (18) where the TD target ˆQ¯φ is constructed based on fixed policy iteration in Eqn. (9), i.e., ˆQ ¯φ(yt−1 1 , yt; y∗) = r(yt−1 1 , yt) + τ H(πθ(· | yt 1)) + X w∈W πθ(w | yt 1)Q ¯φ(yt 1, w; y∗). (19) It is worthwhile to emphasize that the objective (18) trains the critic Qφ to evaluate the current policy. Hence, it is entirely different from the objective (16), which is performing policy optimization by Soft Q-Learning. Also, the trajectories y used in (18) are sequences drawn from the actor policy πθ, while objective (16) theoretically accepts any trajectory since Soft Q-Learning can be fully offpolicy.5 Finally, following Bahdanau et al. (2016), the TD target ˆQ¯φ in Eqn. (9) is evaluated using a target network, which is indicated by the bar sign above the parameters, i.e., ¯φ. The target network is slowly updated by linearly interpolating with the up-to-date network, i.e., the update is ¯φ ←βφ+(1−β)¯φ for β in (0, 1) (Lillicrap et al., 2015). We also adapt another technique proposed by Bahdanau et al. (2016), which smooths the critic by minimizing the “variance” of Q-values, i.e., min φ λvar E y∼πθ |y| X t=1 X w∈W  Qφ(yt 1, w; y∗) −¯Qφ(yt 1; y∗) 2 where ¯Qφ(yt 1; y∗) = 1 |W| P w′∈W Qφ(yt 1, w′; y∗) is the mean Q-value, and λvar is a hyper-parameter controlling the relative weight between the TD loss and the smooth loss. Actor Training. Given the critic Qφ, the actor gradient (to maximize the expected return) is given by the policy gradient theorem of the entropyregularized RL (Schulman et al., 2017), which has the form E y∼πθ |y| X t=1 X w∈W ∇θπθ(w | yt−1 1 )Qφ(yt−1 1 , w; y∗) + τ∇θH(πθ(· | yt−1 1 )). (20) Here, for each step t, we follow Bahdanau et al. (2016) to sum over the entire symbol set W, instead of using the single sample estimation often 5Different from Bahdanau et al. (2016), we don’t use a delayed actor network to collect trajectories for critic training. seen in RL. Hence, no baseline is employed. It is worth mentioning that Eqn. (20) is not simply adding an entropy term to the standard policy gradient as in A3C (Mnih et al., 2016). The difference lies in that the critic Qφ trained by Eqn. (18) additionally captures the entropy from future steps, while the ∇θH term only captures the entropy of the current step. Finally, similar to (Bahdanau et al., 2016), we find it necessary to first pretrain the actor using MLE and then pretrain the critic before the actorcritic training. Also, to prevent divergence during actor-critic training, it is helpful to continue performing MLE training along with Eqn. (20), though using a smaller weight λmle. 5 Related Work Task Loss Optimization and Exposure Bias Apart from the previously introduced RAML, BSO, Actor-Critic (§1), MIXER (Ranzato et al., 2015) also utilizes chunk-level signals where the length of chunk grows as training proceeds. In contrast, minimum risk training (Shen et al., 2015) directly optimizes sentence-level BLEU. As a result, it requires a large number (100) of samples per data to work well. To solve the exposure bias, scheduled sampling (Bengio et al., 2015) adopts a curriculum learning strategy to bridge the training and the inference. 
Professor forcing (Lamb et al., 2016) introduces an adversarial training mechanism to encourage the dynamics of the model to be the same at training time and inference time. For image caption, self-critic sequence training (SCST) (Rennie et al., 2016) extends the MIXER algorithm with an improved baseline based on the current model performance. Entropy-regularized RL Entropy regularization been explored by early work in RL and inverse RL (Williams and Peng, 1991; Ziebart et al., 2008). Lately, Schulman et al. (2017) establish the equivalence between policy gradients and Soft Q-Learning under entropy-regularized RL. Motivated by the multi-modal behavior induced by entropy-regularized RL, Haarnoja et al. (2017) apply energy-based policy and Soft Q-Learning to continuous domain. Later, Nachum et al. (2017) proposes path consistency learning, which can be seen as a multi-step extension to Soft Q-Learning. More recently, in the domain of simulated control, Haarnoja et al. (2018) also consider the actor critic algorithm under the framework of en1678 tropy regularized reinforcement learning. Despite the conceptual similarity to ERAC presented here, Haarnoja et al. (2018) focuses on continuous control and employs the advantage actor critic variant as in (Mnih et al., 2016), while ERAC follows the Q actor critic as in (Bahdanau et al., 2016). 6 Experiments 6.1 Experiment Settings In this work, we focus on two sequence prediction tasks: machine translation and image captioning. Due to the space limit, we only present the information necessary to compare the empirical results at this moment. For a more detailed description, we refer readers to Appendix B and the code6. Machine Translation Following Ranzato et al. (2015), we evaluate on IWSLT 2014 German-toEnglish dataset (Mauro et al., 2012). The corpus contains approximately 153K sentence pairs in the training set. We follow the pre-processing procedure used in (Ranzato et al., 2015). Architecture wise, we employ a seq2seq model with dot-product attention (Bahdanau et al., 2014; Luong et al., 2015), where the encoder is a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) with each direction being size 128, and the decoder is another LSTM of size 256. Moreover, we consider two variants of the decoder, one using the input feeding technique (Luong et al., 2015) and the other not. For all algorithms, the sequence-level BLEU score is employed as the pay-off function R, while the corpus-level BLEU score (Papineni et al., 2002) is used for the final evaluation. The sequence-level BLEU score is scaled up by the sentence length so that the scale of the immediate reward at each step is invariant to the length. Image Captioning For image captioning, we consider the MSCOCO dataset (Lin et al., 2014). We adapt the same preprocessing procedure and the train/dev/test split used by Karpathy and FeiFei (2015). The NIC (Vinyals et al., 2015) is employed as the baseline model, where a feature vector of the image is extracted by a pre-trained CNN and then used to initialize the LSTM decoder. Different from the original NIC model, we employ a pretrained 101-layer ResNet (He et al., 2016) rather than a GoogLeNet as the CNN encoder. 6https://github.com/zihangdai/ERAC-VAML For training, each image-caption pair is treated as an i.i.d. sample, and sequence-level BLEU score is used as the pay-off. For testing, the standard multi-reference BLEU4 is used. 
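To make the pay-off concrete, the sketch below computes a length-scaled, smoothed sentence-level BLEU pay-off and its per-token (incremental) decomposition r_t = R(y_{1..t}; y*) − R(y_{1..t-1}; y*). Scaling by the prefix length is one reading of "scaled up by the sentence length", the smoothing method is an arbitrary choice, and the eos handling of Eqn. (4) is omitted, so this should be read as an approximation of the pay-off rather than the authors' implementation.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smooth = SmoothingFunction().method1

def payoff(prefix, reference):
    """Length-scaled sentence BLEU of a (possibly partial) prediction against the reference."""
    if not prefix:
        return 0.0
    return len(prefix) * sentence_bleu([reference], prefix, smoothing_function=smooth)

def incremental_rewards(hypothesis, reference):
    """r_t = R(y_{1..t}) - R(y_{1..t-1}): the per-token decomposition used for credit assignment."""
    return [payoff(hypothesis[:t], reference) - payoff(hypothesis[:t - 1], reference)
            for t in range(1, len(hypothesis) + 1)]

ref = "the cat sat on the mat".split()
hyp = "the cat is on the mat".split()
print([round(r, 3) for r in incremental_rewards(hyp, ref)])
# The per-step rewards telescope, so they sum back to the full-sequence pay-off R(y; y*);
# individual steps can receive negative reward when a token hurts the running BLEU.
```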
6.2 Comparison with the Direct Baseline Firstly, we compare ERAC and VAML with their corresponding direct baselines, namely AC (Bahdanau et al., 2016) and RAML (Norouzi et al., 2016) respectively. As a reference, the performance of MLE is also provided. Due to non-neglected performance variance observed across different runs, we run each algorithm for 9 times with different random seeds,7 and report the average performance, the standard deviation and the performance range (min, max). Machine Translation The results on MT are summarized in the left half of Tab. 1. Firstly, all four advanced algorithms significantly outperform the MLE baseline. More importantly, both VAML and ERAC improve upon their direct baselines, RAML and AC, by a clear margin on average. The result suggests the two proposed algorithms both well combine the benefits of a delicate credit assignment scheme and the entropy regularization, achieving improved performance. Image Captioning The results on image captioning are shown in the right half of Tab. 1. Despite the similar overall trend, the improvement of VAML over RAML is smaller compared to that in MT. Meanwhile, the improvement from AC to ERAC becomes larger in comparison. We suspect this is due to the multi-reference nature of the MSCOCO dataset, where a larger entropy is preferred. As a result, the explicit entropy regularization in ERAC becomes immediately fruitful. On the other hand, with multiple references, it can be more difficult to learn a good oracle Q∗(Eqn. (15)). Hence, the token-level target can be less accurate, resulting in smaller improvement. 6.3 Comparison with Existing Work To further evaluate the proposed algorithms, we compare ERAC and VAML with the large body of existing algorithms evaluated on IWSTL 2014. As a note of caution, previous works don’t employ the exactly same architectures (e.g. number of layers, hidden size, attention type, etc.). Despite that, 7For AC, ERAC and VAML, 3 different critics are trained first, and each critic is then used to train 3 actors. 1679 MT (w/o input feeding) MT (w/ input feeding) Image Captioning Algorithm Mean Min Max Mean Min Max Mean Min Max MLE 27.01 ± 0.20 26.72 27.27 28.06 ± 0.15 27.84 28.22 29.54 ± 0.21 29.27 29.89 RAML 27.74 ± 0.15 27.47 27.93 28.56 ± 0.15 28.35 28.80 29.84 ± 0.21 29.50 30.17 VAML 28.16 ± 0.11 28.00 28.26 28.84 ± 0.10 28.62 28.94 29.93 ± 0.22 29.51 30.24 AC 28.04 ± 0.05 27.97 28.10 29.05 ± 0.06 28.95 29.16 30.90 ± 0.20 30.49 31.16 ERAC 28.30 ± 0.06 28.25 28.42 29.31 ± 0.04 29.26 29.36 31.44 ± 0.22 31.07 31.82 Table 1: Test results on two benchmark tasks. Bold faces highlight the best in the corresponding category. for VAML and ERAC, we use an architecture that is most similar to the majority of previous works, which is the one described in §6.1 with input feeding. Based on the setting, the comparison is summarized in Table 2.8 As we can see, both VAML and ERAC outperform previous methods, with ERAC leading the comparison with a significant margin. This further verifies the effectiveness of the two proposed algorithms. Algorithm BLEU MIXER (Ranzato et al., 2015) 20.73 BSO (Wiseman and Rush, 2016) 27.9 Q(BLEU) (Li et al., 2017) 28.3 AC (Bahdanau et al., 2016) 28.53 RAML (Ma et al., 2017) 28.77 VAML 28.94 ERAC 29.36 Table 2: Comparison with existing algorithms on IWSTL 2014 dataset for MT. All numbers of previous algorithms are from the original work. 
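Before the ablation, it may help to pin down where the two quantities it varies, the target-network update rate β and the smoothing weight λvar from §4.2, enter the critic update. The PyTorch-style sketch below is a simplified rendering under several assumptions (mean rather than sum reductions, TD targets from Eqn. (19) computed elsewhere and treated as constants, toy tensor shapes) and is not the authors' implementation.

```python
import torch

@torch.no_grad()
def update_target(critic, target_critic, beta):
    """Soft target-network update: phi_bar <- beta * phi + (1 - beta) * phi_bar."""
    for p, p_bar in zip(critic.parameters(), target_critic.parameters()):
        p_bar.mul_(1.0 - beta).add_(p, alpha=beta)

def critic_loss(q_all, actions, td_targets, lambda_var):
    """Mean-squared TD error plus the smoothing penalty that pulls each row of Q-values
    towards its own mean, weighted by lambda_var.

    q_all:      [T, V] critic outputs Q(y_<t, w; y*) for every vocabulary symbol w
    actions:    [T]    indices of the symbols actually emitted
    td_targets: [T]    TD targets computed from the (fixed) target critic
    """
    q_taken = q_all.gather(1, actions.unsqueeze(1)).squeeze(1)        # [T]
    td_loss = ((q_taken - td_targets.detach()) ** 2).mean()
    smooth = ((q_all - q_all.mean(dim=1, keepdim=True)) ** 2).mean()  # "variance" of Q rows
    return td_loss + lambda_var * smooth

# Toy shapes only: T = 5 decoding steps, V = 8 symbols.
q_all = torch.randn(5, 8, requires_grad=True)
loss = critic_loss(q_all, torch.randint(0, 8, (5,)), torch.randn(5), lambda_var=1e-3)
loss.backward()

# Hypothetical critic modules, just to show the target-network bookkeeping.
critic, target_critic = torch.nn.Linear(4, 8), torch.nn.Linear(4, 8)
target_critic.load_state_dict(critic.state_dict())   # start the target as a copy
update_target(critic, target_critic, beta=0.001)
```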
6.4 Ablation Study

Due to the overall excellence of ERAC, we study the importance of its various components, hopefully offering a practical guide for readers. As the input feeding technique largely slows down training, we conduct the ablation on the model variant without input feeding.

Firstly, we study the importance of two techniques aimed at training stability, namely the target network and the smoothing technique (§4.2). Based on the MT task, we vary the update speed β of the target critic and the weight λvar, which controls the strength of the smoothness regularization. The average validation performances for the different hyper-parameter values are summarized in Tab. 3.

              β = 0.001   β = 0.01   β = 0.1   β = 1
λvar = 0        27.91      26.27†     28.88    27.38†
λvar = 0.001    29.41      29.26      29.32    27.44
Table 3: Average validation BLEU of ERAC. As a reference, the average BLEU is 28.1 for MLE. λvar = 0 means not using the smoothing technique. β = 1 means not using a target network. † indicates excluding extreme values due to divergence.

8 For a more detailed comparison of performance together with the model architectures, see Table 7 in Appendix C.

• Comparing the two rows of Tab. 3, the smoothing technique consistently leads to performance improvement across all values of β. In fact, removing the smoothing objective often causes the training to diverge, especially when β = 0.01 and 1. But interestingly, we find the divergence does not happen if we update the target network a little bit faster (β = 0.1) or quite slowly (β = 0.001).
• In addition, even with the smoothing technique, the target network is still necessary. When the target network is not used (β = 1), the performance drops below the MLE baseline. However, as long as a target network is employed to ensure training stability, the specific choice of target network update rate does not matter as much. Empirically, it seems using a slower (β = 0.001) update rate yields the best result.

Next, we investigate the effect of enforcing different levels of entropy by varying the entropy hyper-parameter τ. As shown in Fig. 1, it seems there is always a sweet spot for the level of entropy.

[Figure 1: ERAC's average performance over multiple runs on two tasks when varying τ; panels (a) machine translation and (b) image captioning plot Dev and Test BLEU against τ.]

On the one hand, imposing an overly strong entropy regularization can easily cause the actor to diverge. Specifically, the model diverges when τ reaches 0.03 on the image captioning task or 0.06 on the machine translation task. On the other hand, as we decrease τ from the best value to 0, the performance monotonically decreases as well. This observation further verifies the effectiveness of entropy regularization in ERAC, which matches our theoretical analysis well.

Finally, as discussed in §4.2, ERAC takes the effect of future entropy into consideration, and is thus different from simply adding an entropy term to the standard policy gradient as in A3C (Mnih et al., 2016). To verify the importance of explicitly modeling the entropy from future steps, we compare ERAC with the variant that applies the entropy regularization only to the actor but not to the critic. In other words, τ is set to 0 in the policy evaluation step (the TD target in Eqn. (19)), while the τ for the entropy gradient in Eqn. (20) remains. The comparison, based on 9 runs on the test set of IWSLT 2014, is shown in Table 4.
As we can see, simply adding a local entropy gradient does not even improve upon the AC. This further verifies the difference between ERAC and A3C, and shows the importance of taking future entropy into consideration. Algorithm Mean Max ERAC 28.30 ± 0.06 28.42 ERAC w/o Future Ent. 28.06 ± 0.05 28.11 AC 28.04 ± 0.05 28.10 Table 4: Comparing ERAC with the variant without considering future entropy. 7 Discussion In this work, motivated by the intriguing connection between the token-level RAML and the entropy-regularized RL, we propose two algorithms for neural sequence prediction. Despite the distinct training procedures, both algorithms combine the idea of fine-grained credit assignment and the entropy regularization, leading to positive empirical results. However, many problems remain widely open. In particular, the oracle Q-function Qφ we obtain is far from perfect. We believe the ground-truth reference contains sufficient information for such an oracle, and the current bottleneck lies in the RL algorithm. Given the numerous potential applications of such an oracle, we believe improving its accuracy will be a promising future direction. 1681 References Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2016. An actor-critic algorithm for sequence prediction. arXiv preprint arXiv:1607.07086 . Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 . Leemon Baird. 1995. Residual algorithms: Reinforcement learning with function approximation. In Machine Learning Proceedings 1995, Elsevier, pages 30–37. Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems. pages 1171–1179. William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. 2016. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, pages 4960–4964. Hal Daum´e III and Daniel Marcu. 2005. Learning as search optimization: Approximate large margin methods for structured prediction. In Proceedings of the 22nd international conference on Machine learning. ACM, pages 169–176. Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. 2017. Reinforcement learning with deep energy-based policies. arXiv preprint arXiv:1702.08165 . Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. 2018. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290 . Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition. pages 770– 778. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Po-Sen Huang, Chong Wang, Dengyong Zhou, and Li Deng. 2017. Toward neural phrasebased machine translation . Andrej Karpathy and Li Fei-Fei. 2015. Deep visualsemantic alignments for generating image descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition. pages 3128–3137. Alex M Lamb, Anirudh Goyal ALIAS PARTH GOYAL, Ying Zhang, Saizheng Zhang, Aaron C Courville, and Yoshua Bengio. 2016. 
Professor forcing: A new algorithm for training recurrent networks. In Advances In Neural Information Processing Systems. pages 4601–4609. Jiwei Li, Will Monroe, and Dan Jurafsky. 2017. Learning to decode for future success. arXiv preprint arXiv:1701.06549 . Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. 2015. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971 . Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision. Springer, pages 740–755. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. arXiv preprint arXiv:1508.04025 . Xuezhe Ma, Pengcheng Yin, Jingzhou Liu, Graham Neubig, and Eduard Hovy. 2017. Softmax qdistribution estimation for structured prediction: A theoretical interpretation for raml. arXiv preprint arXiv:1705.07136 . Cettolo Mauro, Girardi Christian, and Federico Marcello. 2012. Wit3: Web inventory of transcribed and translated talks. In Conference of European Association for Machine Translation. pages 261–268. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. 2016. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning. pages 1928–1937. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. Nature 518(7540):529. Ofir Nachum, Mohammad Norouzi, Kelvin Xu, and Dale Schuurmans. 2017. Bridging the gap between value and policy based reinforcement learning. In Advances in Neural Information Processing Systems. pages 2772–2782. Mohammad Norouzi, Samy Bengio, Navdeep Jaitly, Mike Schuster, Yonghui Wu, Dale Schuurmans, et al. 2016. Reward augmented maximum likelihood for neural structured prediction. In Advances In Neural Information Processing Systems. pages 1723–1731. 1682 Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics. Association for Computational Linguistics, pages 311–318. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732 . Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. 2016. Self-critical sequence training for image captioning. arXiv preprint arXiv:1612.00563 . Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685 . John Schulman, Pieter Abbeel, and Xi Chen. 2017. Equivalence between policy gradients and soft qlearning. arXiv preprint arXiv:1704.06440 . Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2015. Minimum risk training for neural machine translation. arXiv preprint arXiv:1512.02433 . Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. pages 3104–3112. 
Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on. IEEE, pages 3156–3164. Ronald J Williams and Jing Peng. 1991. Function optimization using connectionist reinforcement learning algorithms. Connection Science 3(3):241–268. Sam Wiseman and Alexander M Rush. 2016. Sequence-to-sequence learning as beam-search optimization. arXiv preprint arXiv:1606.02960 . Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning. pages 2048–2057. Brian D Ziebart. 2010. Modeling purposeful adaptive behavior with the principle of maximum causal entropy. Carnegie Mellon University. Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, and Anind K Dey. 2008. Maximum entropy inverse reinforcement learning. In AAAI. Chicago, IL, USA, volume 8, pages 1433–1438.
2018
155
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1683–1693 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1683 DuoRC: Towards Complex Language Understanding with Paraphrased Reading Comprehension Amrita Saha Rahul Aralikatte Mitesh M. Khapra Karthik Sankaranarayanan IBM Research IBM Research IIT Madras IBM Research {amrsaha4,rahul.a.r,kartsank}@in.ibm.com [email protected] Abstract We propose DuoRC, a novel dataset for Reading Comprehension (RC) that motivates several new challenges for neural approaches in language understanding beyond those offered by existing RC datasets. DuoRC contains 186,089 unique questionanswer pairs created from a collection of 7680 pairs of movie plots where each pair in the collection reflects two versions of the same movie - one from Wikipedia and the other from IMDb - written by two different authors. We asked crowdsourced workers to create questions from one version of the plot and a different set of workers to extract or synthesize answers from the other version. This unique characteristic of DuoRC where questions and answers are created from different versions of a document narrating the same underlying story, ensures by design, that there is very little lexical overlap between the questions created from one version and the segments containing the answer in the other version. Further, since the two versions have different levels of plot detail, narration style, vocabulary, etc., answering questions from the second version requires deeper language understanding and incorporating external background knowledge. Additionally, the narrative style of passages arising from movie plots (as opposed to typical descriptive passages in existing datasets) exhibits the need to perform complex reasoning over events across multiple sentences. Indeed, we observe that state-of-the-art neural RC models which have achieved near human performance on the SQuAD dataset (Rajpurkar et al., 2016b), even when coupled with traditional NLP techniques to address the challenges presented in DuoRC exhibit very poor performance (F1 score of 37.42% on DuoRC v/s 86% on SQuAD dataset). This opens up several interesting research avenues wherein DuoRC could complement other RC datasets to explore novel neural approaches for studying language understanding. 1 Introduction Natural Language Understanding is widely accepted to be one of the key capabilities required for AI systems. Scientific progress on this endeavor is measured through multiple tasks such as machine translation, reading comprehension, questionanswering, and others, each of which requires the machine to demonstrate the ability to “comprehend” the given textual input (apart from other aspects) and achieve their task-specific goals. In particular, Reading Comprehension (RC) systems are required to “understand” a given text passage as input and then answer questions based on it. It is therefore critical, that the dataset benchmarks established for the RC task keep progressing in complexity to reflect the challenges that arise in true language understanding, thereby enabling the development of models and techniques to solve these challenges. 
For RC in particular, there has been significant progress over the recent years with several benchmark datasets, the most popular of which are the SQuAD dataset (Rajpurkar et al., 2016a), TriviaQA (Joshi et al., 2017), MS MARCO (Nguyen et al., 2016), MovieQA (Tapaswi et al., 2016) and clozestyle datasets (Mostafazadeh et al., 2016; Onishi et al., 2016; Hermann et al., 2015). However, these benchmarks, owing to both the nature of the passages and the QA pairs to evaluate the RC task, have 2 primary limitations in studying language understanding: (i) Other than MovieQA, which is 1684 a small dataset of 15K QA pairs, all other largescale RC datasets deal only with factual descriptive passages and not narratives (involving events with causality linkages that require reasoning and background knowledge) which is the case with a lot of real-world content such as story books, movies, news reports, etc. (ii) their questions possess a large lexical overlap with segments of the passage, or have a high noise level in QA pairs themselves. As demonstrated by recent work, this makes it easy for even simple keyword matching algorithms to achieve high accuracy (Weissenborn et al., 2017). In fact, these models have been shown to perform poorly in the presence of adversarially inserted sentences which have a high word overlap with the question but do not contain the answer (Jia and Liang, 2017). While this problem does not exist in TriviaQA it is admittedly noisy because of the use of distant supervision. Similarly, for cloze-style datasets, due to the automatic question generation process, it is very easy for current models to reach near human performance (Cui, 2017). This therefore limits the complexity in language understanding that a machine is required to demonstrate to do well on the RC task. Motivated by these shortcomings and to push the state-of-the-art in language understanding in RC, in this paper we propose DuoRC, which specifically presents the following challenges beyond the existing datasets: 1. DuoRC is especially designed to contain a large number of questions with low lexical overlap between questions and their corresponding passages. 2. It requires the use of background and commonsense knowledge to arrive at the answer and go beyond the content of the passage itself. 3. It contains narrative passages from movie plots that require complex reasoning across multiple sentences to infer the answer. 4. Several of the questions in DuoRC, while seeming relevant, cannot actually be answered from the given passage, thereby requiring the machine to detect the unanswerability of questions. In order to capture these four challenges, DuoRC contains QA pairs created from pairs of documents describing movie plots which were gathered as follows. Each document in a pair is a different version of the same movie plot written by different authors; one version of the plot is taken from the Wikipedia page of the movie whereas the other from its IMDb page (see Fig. 1 for portions of an example pair of plots from the movie “Twelve Monkeys”). We first showed crowd workers on Amazon Mechanical Turk (AMT) the first version of the plot and asked them to create QA pairs from it. We then showed the second version of the plot along with the questions created from the first version to a different set of workers on AMT and asked them to provide answers by reading the second version only. 
Since the two versions contain different levels of plot detail, narration style, vocabulary, etc., answering questions from the second version exhibits all of the four challenges mentioned above. We now make several interesting observations from the example in Fig. 1. For 4 out of the 8 questions (Q1, Q2, Q4, and Q7), though the answers extracted from the two plots are exactly the same, the analysis required to arrive at this answer is very different in the two cases. In particular, for Q1 even though there is no explicit mention of the prisoner living in a subterranean shelter and hence no lexical overlap with the question, the workers were still able to infer that the answer is Philadelphia because that is the city to which James Cole travels to for his mission. Another interesting characteristic of this dataset is that for a few questions (Q6, Q8) alternative but valid answers are obtained from the second plot. Further, note the kind of complex reasoning required for answering Q8 where the machine needs to resolve coreferences over multiple sentences (that man refers to Dr. Peters) and use common sense knowledge that if an item clears an airport screening, then a person can likely board the plane with it. To re-emphasize, these examples exhibit the need for machines to demonstrate new capabilities in RC such as: (i) employing a knowledge graph (e.g. to know that Philadelphia is a city in Q1), (ii) common-sense knowledge (e.g., clearing airport security implies boarding) (iii) paraphrase/semantic understanding (e.g. revolver is a type of handgun in Q7) (iv) multiple-sentence inferencing across events in the passage including coreference resolution of named entities and nouns, and (v) educated guesswork when the question is not directly answerable but there are subtle hints in the passage (as in Q1). Finally, for quite a few questions, there wasn’t sufficient information in the second plot to obtain their answers. In such cases, the workers marked the question as “unanswerable”. This brings out a very important challenge for machines (detect unanswerability of questions) 1685 Figure 1: Example QA pairs obtained from the original movie plot and the paraphrased plot. The relevant spans needed for answering the corresponding question are highlighted in blue and red with the respective question numbers. Note that the span highlighting shown here is for illustrative purposes only and is not available in the dataset. because a practical system should be able to know when it is not possible for it to answer a question given the data available to it, and in such cases, possibly delegate the task to a human instead. Current RC systems built using existing datasets are far from possessing these capabilities to solve the above challenges. In Section 4, we seek to establish solid baselines for DuoRC employing stateof-the-art RC models coupled with a collection of standard NLP techniques to address few of the above challenges. Proposing novel neural models that solve all of the challenges in DuoRC is out of the scope of this paper. Our experiments demonstrate that when the existing state-of-the-art RC systems are trained and evaluated on DuoRC they perform poorly leaving a lot of scope for improvement and open new avenues for research in RC. Do note that this dataset is not a substitute for existing RC datasets but can be coupled with them to collectively address a large set of challenges in language understanding with RC (the more the merrier). 
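To make the notion of question-passage lexical overlap concrete, the sketch below computes the fraction of a question's content words that also appear in a passage; the ParaphraseRC setting is designed to keep this fraction low. This is purely illustrative: the tokenizer and the small stopword list are assumptions for the sketch, not the procedure used to build or analyze DuoRC.

```python
import re

# A tiny illustrative stopword list (an assumption, not the list used for DuoRC).
STOPWORDS = {"the", "a", "an", "of", "to", "in", "is", "was", "does", "do",
             "did", "and", "what", "who", "where", "which", "how"}

def content_words(text):
    """Lowercase, keep alphanumeric tokens, drop stopwords."""
    return {t for t in re.findall(r"[a-z0-9']+", text.lower()) if t not in STOPWORDS}

def question_passage_overlap(question, passage):
    """Fraction of the question's content words that also occur in the passage.

    High overlap makes simple keyword matching effective; ParaphraseRC
    questions are designed so that this value stays low.
    """
    q = content_words(question)
    return len(q & content_words(passage)) / max(1, len(q))

question = "Where does the prisoner live?"
passage = ("James Cole is held in a subterranean compound "
           "beneath the ruins of Philadelphia.")
print(f"overlap = {question_passage_overlap(question, passage):.2f}")  # 0.00
```

Even though the answer (Philadelphia) is present in the passage, none of the question's content words appear in it, which is exactly the situation that defeats keyword-matching systems.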
2 Related Work Over the past few years, there has been a surge in datasets for Reading Comprehension. Most of these datasets differ in the manner in which questions and answers are created. For example, in SQuAD (Rajpurkar et al., 2016a), NewsQA (Trischler et al., 2016), TriviaQA (Joshi et al., 2017) and MovieQA (Tapaswi et al., 2016) the answers correspond to a span in the document. MSMARCO uses web queries as questions and the answers are synthesized by workers from documents relevant to the query. On the other hand, in most cloze-style datasets (Mostafazadeh et al., 2016; Onishi et al., 2016) the questions are created automatically by deleting a word/entity from a sentence. There are also some datasets for RC with multiple choice questions (Richardson et al., 2013; Berant et al., 2014; Lai et al., 2017) where the task is to select one among k given candidate answers. Another notable RC Dataset is NarrativeQA(s Koˇ cisk´y et al., 2018) which contains 40K QA pairs created from plot summaries of movies. It poses two tasks, where the first task involves reading the plot summaries from which the QA pairs were annotated and the second task is read the entire book or movie script (which is usually 60K words long) instead of the summary to answer the question. As acknowledged by the authors, while the first task is similar in scope to the previous datasets, the second task is at present, intractable for existing neural models, owing to the length of the passage. Due to the kind of the challenges presented by their second task, it is not comparable to our dataset and is much more futuristic in nature. 1686 Given that there are already a few datasets for RC, a natural question to ask is “Do we really need any more datasets?”. We believe that the answer to this question is yes. Each new dataset brings in new challenges and contributes towards building better QA systems. It keeps researchers on their toes and prevents research from stagnating once state-of-theart results are achieved on one dataset. A classic example of this is the CoNLL NER dataset (Tjong Kim Sang and De Meulder, 2003). While several NER systems (Passos et al., 2014) gave close to human performance on this dataset, NER on general web text, domain specific text, noisy social media text is still an unsolved problem (mainly due to the lack of representative datasets which cover the real-world challenges of NER). In this context, DuoRC presents 4 new challenges mentioned earlier which are not exhibited in existing RC datasets and would thus enable exploring novel neural approaches in complex language understanding. The hope is that all these datasets (including ours) will collectively help in addressing a wide range of challenges in QA and prevent stagnation via overfitting on a single dataset. 3 Dataset In this section, we elaborate on the three phases of our dataset collection process. Extracting parallel movie plots: We first collected top 40K movies from IMDb across different genres (crime, drama, comedy, etc.) whose plot synopsis were crawled from Wikipedia as well as IMDb. We retained only 7680 movies for which both the plots were available and longer than 100 words. In general, we found that the IMDb plots were usually longer (avg. length 926 words) and more descriptive than the Wikipedia plots (avg. length 580 words). To make sure that the content between the two plots are indeed different and one is not just a subset of another, we calculated wordlevel jaccard distance between them i.e. 
the ratio of intersection to union of the bag-of-words in the two plots and found it to be 26%. This indicates that one of the plots is usually longer and descriptive, and, the two plots are infact quite different, even though the information content is very similar. Collecting QA pairs from shorter version of the plot (SelfRC): As mentioned earlier, on average the longer version of the plot is almost double the size of the shorter version which is itself usually 500 words long. Intuitively, the longer version should have more details and the questions asked from the shorter version should be answerable from the longer one. Hence, we first showed the shorter version of the plot to workers on AMT and asked them to create QA pairs from it. The instructions given to the workers for this phase are as follows: (i) the answer must preferably be a single word or a short phrase, (ii) subjective questions (like asking for opinion) are not allowed, (iii) questions should be answerable only from the passage and not require any external knowledge, and (iv) questions and answers should be well formed and grammatically correct. The workers were also given freedom to either pick an answer which directly matches a span in the document or synthesize the answer from scratch. This option allowed them to be creative and ask hard questions where possible. We found that in 70% of the cases the workers picked an answer directly from the document and in 30% of the cases they synthesized the answer. We thus collected 85,773 such QA pairs along with their corresponding documents. We refer to this as the SelfRC dataset because the answers were derived from the same document from which the questions were asked. Collecting answers from longer version of the plot (ParaphraseRC): We then paired the questions from the SelfRC dataset with the corresponding longer version of the plot and showed it to a different set of AMT workers asking them to answer these questions from the longer version of the plot. They now have the option to either (i) select an answer which matches a span in the longer version, (ii) synthesize the answer from scratch, or (iii) mark the question not-answerable because of lack of information in the given passage. One trick we used to reduce the fatigue of workers (caused by reading long pieces of text), and thus maintain the answer quality is to split the long plots into multiple segments. Every question obtained from the first phase of annotation is paired separately with each of these segments and each (question, segment) pair is posted as a different job. With this approach, we essentially get multiple answers to the same question, if it is answerable from more than one segment. However, on an average we get approximately one unique answer for each question. We found that in 50% of the cases the workers selected an answer which matched a span in the document, whereas in 37% cases they synthesized the answer and in 13% cases they said that question was not answerable. The workers were strictly instructed to 1687 keep the answers short, derive the answer from the plot and use general knowledge or logic to answer the questions. They were not allowed to rely on personal knowledge about the movie (in any case given the large number of movies in our dataset the chance of a worker remembering all the plot details for a given movie is very less). 
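As a concrete illustration of the plot-pairing sanity check described at the start of this section, the word-level Jaccard overlap between two versions of a plot can be computed as below; a value of roughly 26% was reported above for the collected plot pairs. This is a minimal sketch: the tokenization is an assumption, and word sets are used in place of full bags-of-words.

```python
import re

def word_jaccard(plot_a, plot_b):
    """Word-level Jaccard overlap: |A intersect B| / |A union B| over the
    word sets of the two plot versions. A low value indicates the versions
    share little surface vocabulary despite narrating the same story."""
    tokenize = lambda text: set(re.findall(r"[a-z0-9']+", text.lower()))
    a, b = tokenize(plot_a), tokenize(plot_b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Hypothetical snippets standing in for a Wikipedia/IMDb plot pair.
wiki_plot = "James Cole volunteers to travel back in time to Philadelphia ..."
imdb_plot = "A prisoner living in a subterranean shelter is sent to the past ..."
print(f"Jaccard overlap: {word_jaccard(wiki_plot, imdb_plot):.2f}")
```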
For quality assessment purposes, various levels of manual and semi-automated inspections were done, especially in the second phase of annotation, such as:(i) weeding out annotators who mark majority of answers as non-answerable, by taking into account their response time, and (ii) annotators for whom a high percentage of answers have no entity (or noun phrase) overlap with the entire passage were subjected to strict manual inspection and blacklisted if necessary. Further, a wait period of 2-3 weeks was deliberately introduced between the two phases of data collection to ensure the availability of a fresh pool of workers as well as to reduce information bias among workers common to both the tasks. Overall 2559 workers took part in the first phase of the annotation, and 8021 workers in the second phase. Only 703 workers were common between the phases. We refer to this dataset, where the questions are taken from one version of the document and the answers are obtained from a different version, as ParaphraseRC which contains 100,316 such {question, answer, document} triplets. Overall, 62% of the questions in SelfRC and ParaphraseRC have partial overlap in their answers, which is indicative of the fact that quality is reasonable. The remaining 38% where there is no overlap can be attributed to nonanswerablity of the question from the bigger plot, information gap, or paraphrasing of information between the two plots. Figure 2: Analysis of the Question Types Note that the number of unique questions in the ParaphraseRC dataset is the same as that in SelfRC because we do not create any new questions from the longer version of the plot. We end up with a greater number of {question, answer, document} triplets in ParaphraseRC as compared to SelfRC (100,316 v/s 85,773) since movies that are remakes of a previous movie had very little difference in their Wikipedia plots. Therefore, we did not separately collect questions from the Wikipedia plot of the remake. However, the IMDb plots of the two movies are very different and so we have two different longer versions of the movie (one for the original and one for the remake). We can thus pair the questions created from the Wikipedia plot with both the IMDb versions of the plot thus augmenting the {question, answer, document} triplets. Another notable observation is that in many cases the answers to the same question are different in the two versions. Specifically, only 40.7% of the questions have the same answer in the two documents. For around 37.8% of the questions there is no overlap between the words in the two answers. For the remaining 21% of the questions there is a partial overlap between the two answers. For e.g., the answer derived from the shorter version could be “using his wife’s gun” and from the longer version could be “with Dana’s handgun” where Dana is the name of the wife. In Appendix A, we provide a few randomly picked examples from our dataset which should convince the reader of the difficulty of ParaphraseRC and its differences with SelfRC. We refer to this combined dataset containing a total Metrics for Comparative Analysis Movie QA NarrativeQA over plotsummaries SelfRC ParaphraseRC Avg. word distance 20.67 24.94 13.4 45.3 Avg. 
sentence distance 1.67 1.95 1.34 2.7 Number of sentences for inferencing 2.3 1.95 1.51 2.47 % of instances where both Query & Answer entities were found in passage 67.96 59.4 58.79 12.25 % of instances where Only Query entities were found in passage 59.61 61.77 63.39 47.05 % Length of the Longest Common sequence of nonstop words in Query (w.r.t Query Length) and Plot 25 26.26 38 21 Table 1: Comparison between various RC datasets of 186,089 instances as DuoRC1. Fig. 2 shows the distribution of different Wh-type questions in our dataset. Some interesting comparative analysis are presented in Table 1 and also in Appendix B. In Table 1, we compare various RC datasets with two embodiments of our dataset i.e. the SelfRC and ParaphraseRC. We use NER and noun phrase/verb phrase extraction over the entire dataset to iden1The dataset is available at https://duorc.github.io 1688 tify key entities in the question, plot and answer which is in turn used to compute the metrics mentioned in the table. The metrics “Avg word distance” and “Avg sentence distance” indicate the average distance (in terms of words/sentences) between the occurrence of the question entities and closest occurrence of the answer entities in the passage. “Number of sentences for inferencing” is indicative of the minimum number of sentences required to cover all the question and answer entities. It is evident that tackling ParaphraseRC is much harder than the others on account of (i) larger distance between the query and answer, (ii) low word-overlap between query & passage, and (iii) higher number of sentences required to infer an answer. 4 Models In this section, we describe in detail the various state-of-the-art RC and language generation models along with a collection of traditional NLP techniques employed together that will serve to establish baseline performance on the DuoRC dataset. Most of the current state-of-the-art models for RC assume that the answer corresponds to a span in the document and the task of the model is to predict this span. This is indeed true for the SQuAD, TriviaQA and NewsQA datasets. However, in our dataset, in many cases the answers do not correspond to an exact span in the document but are synthesized by humans. Specifically, for the SelfRC version of the dataset around 30% of the answers are synthesized and do not match a span in the document whereas for the ParaphraseRC task this number is 50%. Nevertheless, we could still leverage the advances made on the SQuAD dataset and adapt these span prediction models for our task. To do so, we propose to use two models. The first model is a basic span prediction model which we train and evaluate using only those instances in our dataset where the answer matches a span in the document. The purpose of this model is to establish whether even for instances where the answer matches a span in the document, our dataset is harder than the SQuAD dataset or not. Specifically, we want to explore the performance of state-of-the-art models (such as DCN (Xiong et al., 2016)), which exhibit near human results on the SQuAD dataset, on DuoRC (especially, in the ParaphraseRC setup). To do so, we seek to employ a good span prediction model for which (i) the performance is within 3-5% of the top performing model on the SQuAD leaderboard (Rajpurkar et al., 2016b) and (ii) the results are reproducible based on the code released by the authors of the paper. 
Note that the second criteria is important to ensure that the poor performance of the model is not due to incorrect implementation. The Bidirectional Attention Flow (BiDAF) model (Seo et al., 2016) satisfies these criteria and hence we employ this model. Due to space constraints, we do not provide details of the BiDAF model here and simply refer the reader to the original paper. In the remainder of this paper we will refer to this model as the SpanModel. The second model that we employ is a two stage process which first predicts the span and then synthesizes the answers from the span. Here again, for the first step (i.e., span prediction) we use the BiDAF model (Seo et al., 2016). The job of the second model is to then take the span (minidocument) and question (query) as input and generate the answer. For this, we employ a state-of-theart query based abstractive summarization model (Nema et al., 2017) as this task is very similar to our task. Specifically, in query based abstractive summarization the training data is of the form {query, document, generated summary} and in our case the training data is of the form {query, mini-document, generated answer}. Once again we refer the reader to the original paper (Nema et al., 2017) for details of the model. We refer to this two stage model as the GenModel. Note that (Tan et al., 2017) recently proposed an answer generation model for the MS MARCO dataset. However, the authors have not released their code and therefore, in the interest of reproducibility of our work, we omit incorporating this model in this paper. Additional NLP pre-processing: Referring back to the example cited in Fig. 1, we reiterate that ideally a good model for ParaphraseRC would require: (i) employing a knowledge graph, (ii) common-sense knowledge (iii) paraphrase/semantic understanding (iv) multiplesentence inferencing across events in the passage including coreference resolution of named entities and nouns, and (v) educated guesswork when the question is not directly answerable but there are subtle hints in the passage. While addressing all of these challenges in their entirety is beyond the scope of a single paper, in the interest of establishing a good baseline for DuoRC, we additionally 1689 seek to address some of these challenges to a certain extent by using standard NLP techniques. Specifically, we look at the problems of paraphrase understanding, coreference resolution and handling long passages. To do so, we prune the document and extract only those sentences which are most relevant to the question, so that the span detector does not need to look at the entire 900-word long ParaphraseRC plot. Now, since these relevant sentences are obtained not from the original but the paraphrased version of the document, they may have a very small word overlap with the question. For example, the question might contain the word “hand gun” and the relevant sentence in the document may contain the word “revolver”. Further some of the named entities in the question may not be exactly present in the relevant sentence but may simply be co-referenced. To resolve these coreferences, we first employ the Stanford coreference resolution on the entire document. We then compute the fraction of words in a sentence which match a query word (ignoring stop words). 
Two words are considered to match if (a) they have the same surface form, or (b) one words is an inflected form of the word (e.g., river and rivers), or (c) the Glove (Pennington et al., 2014) and Skip-thought (Kiros et al., 2015) embeddings of the two words are very close to each other (two word vectors are considered to be close if one appears within the top 50 neighbors of the other), or (d) the two words appear in the same synset in Wordnet. We consider a sentence to be relevant for the question if at least 50% of the query words (ignoring stop words) match the words in the sentence. If none of the sentences in the document have atleast 50% overlap with the question, then we pick sentences having atleast a 30% overlap with the question. The selection of this threshold was based on manual observation of a small sample set. This observation gave us an idea of what a decent threshold value should be, that can have a reasonable precision and recall on the relevant snippet extraction step. Since this step was rule-based we could only employ such qualitative inspections to set this parameter. Also, since this step was targeted to have high recall, we relaxed the threshold to 30% if no match was found. 5 Experimental Setup In the following sub-sections we describe (i) the evaluation metrics, and (ii) the choices considered for augmenting the training data for the answer generation model. Note that when creating the train, validation and test set, we ensure that the test set does not contain QA pairs for any movie that was seen during training. We split the movies in such a way that the resulting train, valid, test sets respectively contain 70%, 15% and 15% of the total number of QA pairs. Span-Based Test Set and Full Test Set As mentioned earlier, the SpanModel only predicts the span in the document whereas the GenModel generates the answer after predicting the span. Ideally, the SpanModel should only be evaluated on those instances in the test set where the answer matches a span in the document. We refer to this subset of the test set as the Span-based Test Set. Though not ideal, we also evaluate the SpanModel model on the entire test set. This is not ideal because there are many answers in the test set which do not correspond to a span in the document whereas the model was only trained to predict spans. We refer to this as the Full Test Set. We also evaluate the GenModel on both the test sets. Training Data for the GenModel As mentioned earlier, the GenModel contains two stages; the first stage predicts the span and the second stage then generates an answer from the predicted span. For the first step we plug-in the best performing SpanModel from our earlier exploration. To train the second stage we need training data of the form {x = span, y= answer} which comes from two types of instances: one where the answer matches a span and the other where the answer is synthesized and the span corresponding to it is not known. In the first case x=y and there is nothing interesting for the model to learn (except for copying the input to the output). In the second case x is not known. To overcome this problem, for the second type of instances, we consider various approaches for finding the approximate span from which the answer could have been generated, and augment the training data with {x = approx span, y= answer}. The easiest method was to simply treat the entire document as the true span from which the answer was generated (x = document, y = answer). 
The second alternative that we tried was to first extract the named entities, noun phrases and verb phrases from the question and create a lucene query from these components. We then used the lucene search engine to extract the most relevant portions of the document given this query. We then considered this portion of the document as the true span (as 1690 opposed to treating the entire document as the true span). Note that lucene could return multiple relevant spans in which case we treat all these {x = approx span, y= answer} as training instances. Another alternative was to find the longest common subsequence (LCS) between the document and the question and treat this subsequence as the span from which the answer was generated. Of these, we found that the model trained using {x = approx span, y= answer} pairs created using the LCS based method gave the best results. We report numbers only for this model. Evaluation Metrics Similar to (Rajpurkar et al., 2016a) we use Accuracy and F-score as the evaluation metrics. We also report the BLEU scores for each task. While accuracy, being a stricter metric, considers a predicted answer to be correct only if it exactly matches the true answer, F-score and BLEU also give credit to predictions partially overlapping with the true answer. 6 Results and Discussions The results of our experiments are summarized in Tables 2 to 4 which we discuss in the following sub-sections. Preprocessing step of Relevant Subplot Extraction Plot Compression Answer Recall WordNet synonym + Glove based paraphrase 30% 66.51% WordNet synonym + Glove based paraphrase on Coref resolved plots 50% 84.10% WordNet synonym + Glove + Skip-thought based paraphrase on Coref resolved plots 48% 85% Table 2: Performance of the preprocessing. Plot compression is the % size of the extracted plot w.r.t the original plot size SelfRC Span Test Full Test Acc. F1 BLEU Acc. F1 BLEU SpanModel 46.14 57.49 22.98 37.53 50.56 7.47 GenModel (with augmented training data) 16.45 26.97 7.61 15.31 24.05 5.50 ParaphraseRC Span Test Full Test Acc. F1 BLEU Acc. F1 BLEU SpanModel 17.93 26.27 9.39 9.78 16.33 2.60 SpanModel with Preprocessed Data 27.49 35.10 12.78 14.92 21.53 2.75 GenModel (with augmented training data) 12.66 19.48 4.41 5.42 9.64 1.75 Table 3: Performance of the SpanModel and GenModel on the Span Test subset and the Full Test Set of the Self and ParaphraseRC. SpanModel v/s GenModel: Comparing the first two rows (SelfRC) and the last two rows (ParaphraseRC) of Table 3 we see that the SpanModel clearly outperforms the GenModel. This is not very surprising for two reasons. First, around 70% (and Span Test Full Test Train Test Acc. F1 BLEU Acc. F1 BLEU SelfRC SelfRC 46.14 57.49 22.98 37.53 50.56 7.47 ParaRC 27.85 36.82 14.48 15.16 22.70 3.90 SelfRC+ ParaRC 37.79 48.05 18.72 25.05 35.01 5.34 ParaRC SelfRC 34.85 45.71 16.01 28.25 40.16 5.15 Para RC 19.74 27.57 9.84 10.78 17.13 2.75 SelfRC+ ParaRC 27.94 37.42 13.00 18.50 27.31 3.75 SelfRC + ParaRC SelfRC 49.66 61.45 25.87 40.24 54.04 8.42 ParaRC 29.88 39.34 16.22 16.33 24.25 4.21 SelfRC+ ParaRC 40.62 51.35 21.18 26.90 37.42 5.94 Table 4: Combined and Cross-Testing between Self and ParaphraseRC Dataset, by taking the best performing SpanModel from Table 3.ParaRC is an abbreviation of ParaphraseRC 50%) of the answers in SelfRC (and ParaphraseRC) respectively, match an exact span in the document so the SpanModel still has scope to do well on these answers. 
On the other hand, even if the first stage of the GenModel predicts the span correctly, the second stage could make an error in generating the correct answer from it because generation is a harder problem. For the second stage, it is expected that the GenModel should learn to copy the predicted span to produce the answer output (as is required in most cases) and only occasionally where necessary, generate an answer. However, surprisingly the GenModel fails to even do this. Manual inspection of the generated answers shows that in many cases the generator ends up generating either more or fewer words compared the true answer. This demonstrates the clear scope for the GenModel to perform better. SelfRC v/s ParaphraseRC: Comparing the SelfRC and ParaphraseRC numbers in Table 3, we observe that the performance of the models clearly drops for the latter task, thus validating our hypothesis that ParaphraseRC is a indeed a much harder task. Effect of NLP pre-processing: As mentioned in Section 4, for ParaphraseRC, we first perform a few pre-processing steps to identify relevant sentences in the longer document. In order to evaluate whether the pre-processing method is effective, we compute: (i) the percentage of the document that gets pruned, and (ii) whether the true answer is present in the pruned document (i.e., average recall of the answer). We can compute the recall only for the span-based subset of the data since for the remaining data we do not know the true span. In Table 2, we report these two quantities for the spanbased subset using different pruning strategies. Finally, comparing the SpanModel with and without 1691 Paraphrasing in Table 3 for ParaphraseRC, we observe that the pre-processing step indeed improves the performance of the Span Detection Model. Effect of oracle pre-processing: As noted in Section 3, the ParaphraseRC plot is almost double in length in comparison to the SelfRC plot, which while adding to the complexities of the former task, is clearly not the primary reason of the model’s poor performance on that. To empirically validate this, we perform an Oracle pre-processing step, where, starting with the knowledge of the span containing the true answer, we extract a subplot around it such that the span is randomly located within that subplot and the average length of the subplot is similar to the SelfRC plots. The SpanModel with this Oracle preprocessed data exhibits a minor improvement in performance over that with rulebased preprocessing (1.6% in Accuracy and 4.3% in F1 over the Span Test), still failing to bridge the wide performance gap between the SelfRC and ParaphraseRC task. Cross Testing We wanted to examine whether a model trained on SelfRC performs well on ParaphraseRC and vice-versa. We also wanted to evaluate if merging the two datasets improves the performance of the model. For this we experimented with various combinations of train and test data. The results of these experiments for the SpanModel are summarized in Table 4. The best performance is obtained when the model is trained on both (SelfRC) and ParaphraseRC and tested on SelfRC and the performance is poorest when ParaphraseRC is used for both. We believe this is because learning with the ParaphraseRC is more difficult given the wide range of challenges in this dataset. Based on our experiments and empirical observations we believe that the DuoRC dataset indeed holds a lot of potential for advancing the horizon of complex language understanding by exposing newer challenges in this area. 
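For reference, the accuracy and F-score reported in Tables 2-4 can be computed roughly as follows, in the spirit of the SQuAD-style metrics: accuracy requires an exact match with the ground-truth answer, while token-level F1 gives partial credit for overlapping tokens. The normalization shown here (lowercasing and whitespace splitting) is a simplification of what evaluation scripts typically do.

```python
from collections import Counter

def normalize(text):
    """Simplified normalization; real scripts usually also strip punctuation and articles."""
    return text.lower().split()

def exact_match(prediction, truth):
    """Accuracy-style metric: 1.0 only if the normalized answers are identical."""
    return float(normalize(prediction) == normalize(truth))

def token_f1(prediction, truth):
    """Token-level F1: harmonic mean of precision and recall over shared tokens."""
    pred, gold = normalize(prediction), normalize(truth)
    common = Counter(pred) & Counter(gold)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision, recall = num_same / len(pred), num_same / len(gold)
    return 2 * precision * recall / (precision + recall)

# Partial overlap between the two answer variants discussed in Section 3.
print(exact_match("with Dana's handgun", "using his wife Dana's gun"))        # 0.0
print(round(token_f1("with Dana's handgun", "using his wife Dana's gun"), 2)) # 0.25
```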
7 Conclusion In this paper we introduced DuoRC, a large scale RC dataset of 186K human-generated QA pairs created from 7680 pairs of parallel movie-plots, each pair taken from Wikipedia and IMDb. We then showed that this dataset, by design, ensures very little or no lexical overlap between the questions created from one version and segments containing answers in the other version. With this, we hope to introduce the RC community to new research challenges on QA requiring external knowledge and common-sense driven reasoning, deeper language understanding and multiple-sentence inferencing. Through our experiments, we show how the stateof-the-art RC models, which have achieved near human performance on the SQuAD dataset, perform poorly on our dataset, thus emphasizing the need to explore further avenues for research. References Jonathan Berant, Vivek Srikumar, Pei-Chun Chen, Abby Vander Linden, Brittany Harding, Brad Huang, Peter Clark, and Christopher D. Manning. 2014. Modeling biological processes for reading comprehension. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL. http://aclweb.org/anthology/D/D14/D141159.pdf. Yiming Cui. 2017. Cloze explorer. https://github.com/ymcui/Eval-on-NN-ofRC/. Karl Moritz Hermann, Tom´as Kocisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada. pages 1693– 1701. http://papers.nips.cc/paper/5945-teachingmachines-to-read-and-comprehend. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017. pages 2011–2021. http://aclanthology.info/papers/D171214/d17-1214. Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers. pages 1601–1611. https://doi.org/10.18653/v1/P17-1147. 1692 Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2015. Skipthought vectors. CoRR abs/1506.06726. http://arxiv.org/abs/1506.06726. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard H. Hovy. 2017. RACE: large-scale reading comprehension dataset from examinations. CoRR abs/1704.04683. http://arxiv.org/abs/1704.04683. Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James F. Allen. 2016. A corpus and evaluation framework for deeper understanding of commonsense stories. CoRR abs/1604.01696. http://arxiv.org/abs/1604.01696. Preksha Nema, Mitesh M. Khapra, Anirban Laha, and Balaraman Ravindran. 2017. Diversity driven attention model for query-based abstractive summarization. CoRR abs/1704.08300. http://arxiv.org/abs/1704.08300. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. CoRR abs/1611.09268. 
http://arxiv.org/abs/1611.09268. Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. 2016. Who did what: A large-scale person-centered cloze dataset. arXiv preprint arXiv:1608.05457 . Alexandre Passos, Vineet Kumar, and Andrew McCallum. 2014. Lexicon infused phrase embeddings for named entity resolution. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, CoNLL 2014, Baltimore, Maryland, USA, June 26-27, 2014. pages 78– 86. http://aclweb.org/anthology/W/W14/W141609.pdf. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP). pages 1532–1543. http://www.aclweb.org/anthology/D14-1162. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016a. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250 . Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016b. Squad explorer. https://rajpurkar.github.io/SQuAD-explorer/. Matthew Richardson, Christopher J. C. Burges, and Erin Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL. pages 193– 203. http://aclweb.org/anthology/D/D13/D131020.pdf. Tom´aˇs Koˇ cisk´y, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, G´abor Melis, and Edward Grefenstette. 2018. The NarrativeQA reading comprehension challenge. Transactions of the Association for Computational Linguistics TBD:TBD. https://TBD. Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. CoRR abs/1611.01603. http://arxiv.org/abs/1611.01603. Chuanqi Tan, Furu Wei, Nan Yang, Weifeng Lv, and Ming Zhou. 2017. S-net: From answer extraction to answer generation for machine reading comprehension. CoRR abs/1706.04815. http://arxiv.org/abs/1706.04815. Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2016. Movieqa: Understanding stories in movies through question-answering. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Walter Daelemans and Miles Osborne, editors, Proceedings of CoNLL-2003. Edmonton, Canada, pages 142–147. 1693 Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016. Newsqa: A machine comprehension dataset. CoRR abs/1611.09830. http://arxiv.org/abs/1611.09830. Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. Making neural QA as simple as possible but not simpler. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), Vancouver, Canada, August 3-4, 2017. pages 271–280. https://doi.org/10.18653/v1/K17-1028. Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. CoRR abs/1611.01604. http://arxiv.org/abs/1611.01604.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1694–1704 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1694 Stochastic Answer Networks for Machine Reading Comprehension Xiaodong Liu†, Yelong Shen†, Kevin Duh‡ and Jianfeng Gao† † Microsoft Research, Redmond, WA, USA ‡ Johns Hopkins University, Baltimore, MD, USA †{xiaodl,yeshen,jfgao}@microsoft.com ‡[email protected] Abstract We propose a simple yet robust stochastic answer network (SAN) that simulates multi-step reasoning in machine reading comprehension. Compared to previous work such as ReasoNet which used reinforcement learning to determine the number of steps, the unique feature is the use of a kind of stochastic prediction dropout on the answer module (final layer) of the neural network during the training. We show that this simple trick improves robustness and achieves results competitive to the state-of-the-art on the Stanford Question Answering Dataset (SQuAD), the Adversarial SQuAD, and the Microsoft MAchine Reading COmprehension Dataset (MS MARCO). 1 Introduction Machine reading comprehension (MRC) is a challenging task: the goal is to have machines read a text passage and then answer any question about the passage. This task is an useful benchmark to demonstrate natural language understanding, and also has important applications in e.g. conversational agents and customer service support. It has been hypothesized that difficult MRC problems require some form of multi-step synthesis and reasoning. For instance, the following example from the MRC dataset SQuAD (Rajpurkar et al., 2016) illustrates the need for synthesis of information across sentences and multiple steps of reasoning: Q: What collection does the V&A Theator & Performance galleries hold? P: The V&A Theator & Performance galleries opened in March 2009. ... They hold the UK’s biggest national collection of material about live performance. To infer the answer (the underlined portion of the passage P), the model needs to first perform coreference resolution so that it knows “They” refers “V&A Theator”, then extract the subspan in the direct object corresponding to the answer. This kind of iterative process can be viewed as a form of multi-step reasoning. Several recent MRC models have embraced this kind of multistep strategy, where predictions are generated after making multiple passes through the same text and integrating intermediate information in the process. The first models employed a predetermined fixed number of steps (Hill et al., 2016; Dhingra et al., 2016; Sordoni et al., 2016; Kumar et al., 2015). Later, Shen et al. (2016) proposed using reinforcement learning to dynamically determine the number of steps based on the complexity of the question. Further, Shen et al. (2017) empirically showed that dynamic multi-step reasoning outperforms fixed multi-step reasoning, which in turn outperforms single-step reasoning on two distinct MRC datasets (SQuAD and MS MARCO). In this work, we derive an alternative multi-step reasoning neural network for MRC. During training, we fix the number of reasoning steps, but perform stochastic dropout on the answer module (final layer predictions). During decoding, we generate answers based on the average of predictions in all steps, rather than the final step. 
We call this a stochastic answer network (SAN) because the stochastic dropout is applied to the answer module; albeit simple, this technique significantly improves the robustness and overall accuracy of the model. Intuitively this works because while the model successively refines its prediction over multiple steps, each step is still trained to generate the same answer; we are performing a kind of stochastic ensemble over the model’s successive predic1695 st-1 st st+1 x Figure 1: Illustration of “stochastic prediction dropout” in the answer module during training. At each reasoning step t, the model combines memory (bottom row) with hidden states st−1 to generate a prediction (multinomial distribution). Here, there are three steps and three predictions, but one prediction is dropped and the final result is an average of the remaining distributions. tion refinements. Stochastic prediction dropout is illustrated in Figure 1. 2 Proposed model: SAN The machine reading comprehension (MRC) task as defined here involves a question Q = {q0, q1, ..., qm−1} and a passage P = {p0, p1, ..., pn−1} and aims to find an answer span A = {astart, aend} in P. We assume that the answer exists in the passage P as a contiguous text string. Here, m and n denote the number of tokens in Q and P, respectively. The learning algorithm for reading comprehension is to learn a function f(Q, P) →A. The training data is a set of the query, passage and answer tuples < Q, P, A >. We now describe our model from the ground up. The main contribution of this work is the answer module, but in order to understand what goes into this module, we will start by describing how Q and P are processed by the lower layers. Note the lower layers also have some novel variations that are not used in previous work. As shown in Figure 2, our model contains four different layers to capture different concept of representations. The detailed description of our model is provided as follows. Lexicon Encoding Layer. The purpose of the first layer is to extract information from Q and P at the word level and normalize for lexical variants. A typical technique to obtain lexicon embedding is concatenation of its word embedding with other linguistic embedding such as those derived from Part-Of-Speech (POS) tags. For word embeddings, we use the pre-trained 300-dimensional GloVe vectors (Pennington et al., 2014) for the both Q and P. Following Chen et al. (2017), we use three additional types of linguistic features for each token pi in the passage P: • 9-dimensional POS tagging embedding for total 56 different types of the POS tags. • 8-dimensional named-entity recognizer (NER) embedding for total 18 different types of the NER tags. We utilized small embedding sizes for POS and NER to reduce model size. They mainly serve the role of coarse-grained word clusters. • A 3-dimensional binary exact match feature defined as fexact match(pi) = I(pi ∈ Q). This checks whether a passage token pi matches the original, lowercase or lemma form of any question token. • Question enhanced passages word embeddings: falign(pi) = P j γi,jg(GloV e(qj)), where g(·) is a 280-dimensional single layer neural network ReLU(W0x) and γi,j = exp(g(GloV e(pj))·g(GloV e(qi))) P j′ exp(g(GloV e(pi))·g(GloV e(qj′))) measures the similarity in word embedding space between a token pi in the passage and a token qj in the question. Compared to the exact matching features, these embeddings encode soft alignments between similar but notidentical words. 
In summary, each token pi in the passage is represented as a 600-dimensional vector and each token qj is represented as a 300-dimensional vector. Due to different dimensions for the passages and questions, in the next layer two different bidirectional LSTM (BiLSTM) (Hochreiter and Schmidhuber, 1997) may be required to encode the contextual information. This, however, introduces a large number of parameters. To prevent this, we employ an idea inspired by (Vaswani et al., 2017): use two separate two-layer positionwise Feed-Forward Networks (FFN), FFN(x) = W2ReLU(W1x+b1)+b2, to map both the passage and question lexical encodings into the same number of dimensions. Note that this FFN has fewer 1696 Question Lexicon Encoding Layer Document Word Embedding Surface Feature Beyoncé is … what religion? 2 Layers Position-Wise FFN Beyoncé was born ... in a Methodist household. 2 Layers Position-Wise FFN Beyoncé was born ... in a Methodist household. 2 Layers Position-Wise FFN Contextual Encoding Layer Attention Self Attention 2 Layers BiLSTM with Maxout Memory Self Attended Sum GRU st-1 st st+1 Figure 2: Architecture of the SAN for Reading Comprehension: The first layer is a lexicon encoding layer that maps words to their embeddings independently for the question (left) and the passage (right): this is a concatenation of word embeddings, POS embeddings, etc. followed by a position-wise FFN. The next layer is a context encoding layer, where a BiLSTM is used on the top of the lexicon embedding layer to obtain the context representation for both question and passage. In order to reduce the parameters, a maxout layer is applied on the output of BiLSTM. The third layer is the working memory: First we compute an alignment matrix between the question and passage using an attention mechanism, and use this to derive a question-aware passage representation. Then we concatenate this with the context representation of passage and the word embedding, and employ a self attention layer to re-arrange the information gathered. Finally, we use another LSTM to generate a working memory for the passage. At last, the fourth layer is the answer module, which is a GRU that outputs predictions at each state st. parameters compared to a BiLSTM. Thus, we obtain the final lexicon embeddings for the tokens in Q as a matrix Eq ∈Rd×m and tokens in P as Ep ∈Rd×n. Contextual Encoding Layer. Both passage and question use a shared two-layers BiLSTM as the contextual encoding layer, which projects the lexicon embeddings to contextual embeddings. We concatenate a pre-trained 600-dimensional CoVe vectors1 (McCann et al., 2017) trained on German-English machine translation dataset, with 1https://github.com/salesforce/cove the aforementioned lexicon embeddings as the final input of the contextual encoding layer, and also with the output of the first contextual encoding layer as the input of its second encoding layer. To reduce the parameter size, we use a maxout layer (Goodfellow et al., 2013) at each BiLSTM layer to shrink its dimension. By a concatenation of the outputs of two BiLSTM layers, we obtain Hq ∈R2d×m as representation of Q and Hp ∈R2d×n as representation of P, where d is the hidden size of the BiLSTM. Memory Generation Layer. In the memory 1697 generation layer, We construct the working memory, a summary of information from both Q and P. First, a dot-product attention is adopted like in (Vaswani et al., 2017) to measure the similarity between the tokens in Q and P. 
Instead of using a scalar to normalize the scores as in (Vaswani et al., 2017), we use a one-layer network to transform the contextual information of both Q and P:

$C = \text{dropout}(f_{\text{attention}}(\hat{H}^q, \hat{H}^p)) \in \mathbb{R}^{m \times n} \quad (1)$

C is an attention matrix. Note that $\hat{H}^q$ and $\hat{H}^p$ are obtained from $H^q$ and $H^p$, respectively, by a one-layer neural network $\text{ReLU}(W_3 x)$. Next, we gather all the information on the passage by a simple concatenation of its contextual information $H^p$ and its question-aware representation $H^q \cdot C$:

$U^p = \text{concat}(H^p, H^q C) \in \mathbb{R}^{4d \times n} \quad (2)$

Typically, a passage may contain hundreds of tokens, making it hard to learn the long dependencies within it. Inspired by (Lin et al., 2017), we apply a self-attended layer to rearrange the information in $U^p$ as:

$\hat{U}^p = U^p \, \text{drop}_{\text{diag}}(f_{\text{attention}}(U^p, U^p)). \quad (3)$

In other words, we first obtain an $n \times n$ attention matrix of $U^p$ onto itself, apply dropout, then multiply this matrix with $U^p$ to obtain an updated $\hat{U}^p$. Instead of using a penalization term as in (Lin et al., 2017), we drop out the diagonal of the similarity matrix, forcing each token in the passage to align to tokens other than itself. At last, the working memory is generated by using another BiLSTM based on all the information gathered:

$M = \text{BiLSTM}([U^p; \hat{U}^p]) \quad (4)$

where the semicolon ; indicates the vector/matrix concatenation operator.

Answer module. There is a Chinese proverb that says: “wisdom of the masses exceeds that of any individual.” Unlike other multi-step reasoning models, which use only a single output, either at the last step or at some dynamically determined final step, our answer module employs all the outputs of multi-step reasoning. Intuitively, by applying dropout, it avoids a “step bias problem” (where the model places too much emphasis on one particular step’s predictions) and forces the model to produce good predictions at every individual step. Further, during decoding, we reuse the wisdom of the masses instead of a single individual step to achieve a better result. We call this method “stochastic prediction dropout” because dropout is applied to the final predictive distributions.

Formally, our answer module computes over T memory steps and outputs the answer span. This module is a memory network and has some similarities to other multi-step reasoning networks: namely, it maintains a state vector, one state per step. At the beginning, the initial state $s_0$ is the summary of Q: $s_0 = \sum_j \alpha_j H^q_j$, where $\alpha_j = \frac{\exp(w_4 \cdot H^q_j)}{\sum_{j'} \exp(w_4 \cdot H^q_{j'})}$. At time step t in the range $\{1, 2, ..., T-1\}$, the state is defined by $s_t = \text{GRU}(s_{t-1}, x_t)$. Here, $x_t$ is computed from the previous state $s_{t-1}$ and memory M: $x_t = \sum_j \beta_j M_j$ and $\beta_j = \text{softmax}(s_{t-1} W_5 M)$. Finally, a bilinear function is used to find the begin and end points of the answer span at each reasoning step $t \in \{0, 1, \ldots, T-1\}$:

$P^{begin}_t = \text{softmax}(s_t W_6 M) \quad (5)$

$P^{end}_t = \text{softmax}([s_t; \textstyle\sum_j P^{begin}_{t,j} M_j] W_7 M). \quad (6)$

From a pair of begin and end points, the answer string can be extracted from the passage. However, rather than outputting the results (start/end points) from the final step (which is fixed at $T-1$ as in Memory Networks or dynamically determined as in ReasoNet), we utilize all of the T outputs by averaging the scores:

$P^{begin} = \text{avg}([P^{begin}_0, P^{begin}_1, ..., P^{begin}_{T-1}]) \quad (7)$

$P^{end} = \text{avg}([P^{end}_0, P^{end}_1, ..., P^{end}_{T-1}]) \quad (8)$

Each $P^{begin}_t$ or $P^{end}_t$ is a multinomial distribution over $\{1, \ldots, n\}$, so the average distribution is straightforward to compute. During training, we apply stochastic dropout before the above averaging operation.
For example, as illustrated in Figure 1, we randomly delete several steps’ predictions in Equations 7 and 8 so that P begin might be avg([P begin 1 , P begin 3 ]) and P end might be avg([P end 0 , P end 3 , P end 4 ]). The use of averaged predictions and dropout during training improves robustness. Our stochastic prediction dropout is similar in motivation to the dropout introduced by (Srivastava et al., 2014). The difference is that theirs 1698 is dropout at the intermediate node-level, whereas ours is dropout at the final layer-level. Dropout at the node-level prevents correlation between features. Dropout at the final layer level, where randomness is introduced to the averaging of predictions, prevents our model from relying exclusively on a particular step to generate correct output. We used a dropout rate of 0.4 in experiments. 3 Experiment Setup Dataset: We evaluate on the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016). This contains about 23K passages and 100K questions. The passages come from approximately 500 Wikipedia articles and the questions and answers are obtained by crowdsourcing. The crowdsourced workers are asked to read a passage (a paragraph), come up with questions, then mark the answer span. All results are on the official development set, unless otherwise noted. Two evaluation metrics are used: Exact Match (EM), which measures the percentage of span predictions that matched any one of the ground truth answer exactly, and Macro-averaged F1 score, which measures the average overlap between the prediction and the ground truth answer. Implementation details: The spaCy tool2 is used to tokenize the both passages and questions, and generate lemma, part-of-speech and named entity tags. We use 2-layer BiLSTM with d = 128 hidden units for both passage and question encoding. The mini-batch size is set to 32 and Adamax (Kingma and Ba, 2014) is used as our optimizer. The learning rate is set to 0.002 at first and decreased by half after every 10 epochs. We set the dropout rate for all the hidden units of LSTM, and the answer module output layer to 0.4. To prevent degenerate output, we ensure that at least one step in the answer module is active during training. 4 Results The main experimental question we would like to answer is whether the stochastic dropout and averaging in the answer module is an effective technique for multi-step reasoning. To do so, we fixed all lower layers and compared different architectures for the answer module: 1. Standard 1-step: generate prediction from s0, the first initial state. 2https://spacy.io 2. 5-step memory network: this is a memory network fixed at 5 steps. We try two variants: the standard variant outputs result from the final step sT−1. The averaged variant outputs results by averaging across all 5 steps, and is like SAN without the stochastic dropout. 3. ReasoNet3: this answer module dynamically decides the number of steps and outputs results conditioned on the final step. 4. SAN: proposed answer module that uses stochastic dropout and prediction averaging. The main results in terms of EM and F1 are shown in Table 1. We observe that SAN achieves 76.235 EM and 84.056 F1, outperforming all other models. Standard 1-step model only achieves 75.139 EM and dynamic steps (via ReasoNet) achieves only 75.355 EM. SAN also outperforms a 5-step memory net with averaging, which implies averaging predictions is not the only thing that led to SAN’s superior results; indeed, stochastic prediction dropout is an effective technique. 
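To make the decoding-time averaging of Equations 7-8 and the training-time stochastic prediction dropout concrete, the sketch below averages per-step answer distributions while randomly dropping whole steps, keeping at least one step active as required in the implementation details above. The use of NumPy and the specific tensor shapes are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def stochastic_prediction_dropout(step_distributions, drop_rate=0.4,
                                  training=True, rng=None):
    """Average per-step answer distributions, randomly dropping whole steps.

    step_distributions has shape (T, n): one multinomial distribution over
    the n passage positions per reasoning step (e.g., P^begin_t or P^end_t).
    During training each step's prediction is dropped with probability
    drop_rate, but at least one step is always kept; at test time all T
    steps are averaged.
    """
    rng = rng or np.random.default_rng()
    dists = np.asarray(step_distributions)      # shape (T, n)
    T = dists.shape[0]
    if training:
        keep = rng.random(T) >= drop_rate       # Boolean mask over steps
        if not keep.any():                      # ensure at least one active step
            keep[rng.integers(T)] = True
    else:
        keep = np.ones(T, dtype=bool)
    return dists[keep].mean(axis=0)             # averaged distribution over positions

# Toy example: T = 5 reasoning steps over a 4-token passage.
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 4))
p_begin_steps = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(stochastic_prediction_dropout(p_begin_steps, drop_rate=0.4, rng=rng))
```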
The K-best oracle results is shown in Figure 3. The K-best spans are computed by ordering the spans according the their probabilities P begin × P end. We limit K in the range 1 to 4 and then pick the span with the best EM or F1 as oracle. SAN also outperforms the other models in terms of K-best oracle scores. Impressively, these models achieve human performance at K = 2 for EM and K = 3 for F1. Finally, we compare our results with other top models in Table 2. Note that all the results in Table 2 are taken from the published papers. We see that SAN is very competitive in both single and ensemble settings (ranked in second) despite its simplicity. Note that the best-performing model (Peters et al., 2018) used a large-scale language model as an extra contextual embedding, which gave a significant improvement (+4.3% dev F1). We expect significant improvements if we add this to SAN in future work. 3The ReasoNet here is not an exact re-implementation of (Shen et al., 2017). The answer module is the same as (Shen et al., 2017) but the lower layers are set to be the same as SAN, 5-step memory network, and standard 1-step as described in Figure 2. We only vary the answer module in our experiments for a fair comparison. 1699 Answer Module EM F1 Standard 1-step 75.139 83.367 Fixed 5-step with Memory Network (prediction from final step) 75.033 83.327 Fixed 5-step with Memory Network (prediction averaged from all steps) 75.256 83.215 Dynamic steps (max 5) with ReasoNet 75.355 83.360 Stochastic Answer Network (SAN ), Fixed 5-step 76.235 84.056 Table 1: Main results—Comparison of different answer module architectures. Note that SAN performs best in both Exact Match and F1 metrics. Ensemble model results: Dev Set (EM/F1) Test Set (EM/F1) BiDAF + Self Attention + ELMo (Peters et al., 2018) -/81.003/87.432 SAN (Ensemble model) 78.619/85.866 79.608/86.496 AIR-FusionNet (Huang et al., 2017) -/78.978/86.016 DCN+ (Xiong et al., 2017) -/78.852/85.996 M-Reader (Hu et al., 2017) -/77.678/84.888 Conductor-net (Liu et al., 2017b) 74.8 / 83.3 76.996/84.630 r-net (Wang et al., 2017) 77.7/83.7 76.9/84.0 ReasoNet++ (Shen et al., 2017) 75.4/82.9 75.0/82.6 Individual model results: BiDAF + Self Attention + ELMo(Peters et al., 2018) -/78.580/85.833 SAN (single model) 76.235/84.056 76.828/84.396 AIR-FusionNet(Huang et al., 2017) 75.3/83.6 75.968/83.900 RaSoR + TR (Salant and Berant, 2017) -/75.789/83.261 DCN+(Xiong et al., 2017) 74.5/83.1 75.087/83.081 r-net(Wang et al., 2017) 72.3/80.6 72.3/80.7 ReasoNet++(Shen et al., 2017) 70.8/79.4 70.6/79.36 BiDAF (Seo et al., 2016) 67.7/77.3 68.0/77.3 Human Performance 80.3/90.5 82.3/91.2 Table 2: Test performance on SQuAD. Results are sorted by Test F1. 5 Analysis 5.1 How robust are the results? We are interested in whether the proposed model is sensitive to different random initial conditions. Table 3 shows the development set scores of SAN trained from initialization with different random seeds. We observe that the SAN results are consistently strong regardless of the 10 different initializations. For example, the mean EM score is 76.131 and the lowest EM score is 75.922, both of which still outperform the 75.355 EM of the Dynamic step ReasoNet in Table 1.4 We are also interested in how sensitive are the results to the number of reasoning steps, which 4Note the Dev EM/F1 scores of ReasoNet in Table 1 do not match those of ReasoNet++ in Table 2. While the answer module is the same architecture, the lower encoding layers are different. is a fixed hyper-parameter. 
Since we are using dropout, a natural question is whether we can extend the number of steps to an extremely large number. Table 4 shows the development set scores for T = 1 to T = 10. We observe that there is a gradual improvement as we increase T = 1 to T = 5, but after 5 steps the improvements have saturated. In fact, the EM/F1 scores drop slightly, but considering that the random initialization results in Table 3 show a standard deviation of 0.142 and a spread of 0.426 (for EM), we believe that the T = 10 result does not statistically differ from the T = 5 result. In summary, we think it is useful to perform some approximate hyper-parameter tuning for the number of steps, but it is not necessary to find the exact optimal value. Finally, we test SAN on two Adversarial SQuAD datasets, AddSent and AddOneSent (Jia and Liang, 2017), where the passages contain 1700 (a) EM comparison on different systems. (b) F1 score comparison on different systems. Figure 3: K-Best Oracle results auto-generated adversarial distracting sentences to fool computer systems that are developed to answer questions about the passages. For example, AddSent is constructed by adding sentences that look similar to the question, but do not actually contradict the correct answer. AddOneSent is constructed by appending a random human-approved sentence to the passage. We evaluate the single SAN model (i.e., the one presented in Table 2) on both AddSent and AddOneSent. The results in Table 5 show that SAN achieves the new state-of-the-art performance and SAN’s superior result is mainly attributed to the multi-step answer module, which leads to significant improvement in F1 score over the Standard 1-step answer module, i.e., +1.2 on AddSent and +0.7 on AddOneSent. 5.2 Is it possible to use different numbers of steps in test vs. train? For practical deployment scenarios, prediction speed at test time is an important criterion. Therefore, one question is whether SAN can train with, e.g. T = 5 steps but test with T = 1 steps. Table 6 shows the results of a SAN trained on T = 5 steps, but tested with different number of steps. As exSeed# EM F1 Seed# EM F1 Seed 1 76.24 84.06 Seed 6 76.23 83.99 Seed 2 76.30 84.13 Seed 7 76.35 84.09 Seed 3 75.92 83.90 Seed 8 76.07 83.71 Seed 4 76.00 83.95 Seed 9 75.93 83.85 Seed 5 76.12 83.99 Seed 10 76.15 84.11 Mean: 76.131, Std. deviation: 0.142 (EM) Mean: 83.977, Std. deviation: 0.126 (F1) Table 3: Robustness of SAN (5-step) on different random seeds for initialization: best and worst scores are boldfaced. Note that our official submit is trained on seed 1. SAN EM F1 SAN EM F1 1 step 75.38 83.29 6 step 75.99 83.72 2 step 75.43 83.41 7 step 76.04 83.92 3 step 75.89 83.57 8 step 76.03 83.82 4 step 75.92 83.85 9 step 75.95 83.75 5 step 76.24 84.06 10 step 76.04 83.89 Table 4: Effect of number of steps: best and worst results are boldfaced. pected, the results are best when T matches during training and test; however, it is important to note that small numbers of steps T = 1 and T = 2 nevertheless achieve strong results. For example, prediction at T = 1 achieves 75.58, which outperforms a standard 1-step model (75.14 EM) as in Table 1 that has approximate equivalent prediction time. 5.3 How does the training time compare? The average training time per epoch is comparable: our implementation running on a GTX Titan X is 22 minutes for 5-step memory net, 30 minutes for ReasoNet, and 24 minutes for SAN. The learning curve is shown in Figure 4. 
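As a side note to Section 5.2 above, testing with fewer reasoning steps than were used in training requires no architectural change, because every step shares the same prediction head and the per-step outputs are simply averaged. A minimal sketch follows, with an assumed one_step interface that is our own simplification.

```python
def predict_with_t_steps(answer_module, memory, s0, t_steps):
    """Run a trained multi-step answer module for only t_steps steps at test time.

    answer_module is assumed to expose one_step(state, memory) returning
    (next_state, p_begin, p_end).  A module trained with T=5 can thus still
    be run with T=1 or T=2, at the cost of some accuracy.
    """
    state = s0
    begins, ends = [], []
    for _ in range(t_steps):
        state, p_begin, p_end = answer_module.one_step(state, memory)
        begins.append(p_begin)
        ends.append(p_end)
    return sum(begins) / len(begins), sum(ends) / len(ends)
```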
We observe that all systems improve at approximately the same rate up to 10 or 15 epochs. However, SAN continues to improve afterwards as other models start to saturate. This observation is consistent with previous works using dropout (Srivastava et al., 2014). We believe that while training time per epoch is similar between SAN and other models, it is recommended to train SAN for more epochs in order to achieve gains in EM/F1. 1701 Single model: AddSent AddOneSent LR (Rajpurkar et al., 2016) 23.2 30.3 SEDT (Liu et al., 2017a) 33.9 44.8 BiDAF (Seo et al., 2016) 34.3 45.7 jNet (Zhang et al., 2017) 37.9 47.0 ReasoNet(Shen et al., 2017) 39.4 50.3 RaSoR(Lee et al., 2016) 39.5 49.5 Mnemonic(Hu et al., 2017) 46.6 56.0 QANet(Yu et al., 2018) 45.2 55.7 Standard 1-step in Table 1 45.4 55.8 SAN 46.6 56.5 Table 5: Test performance on the adversarial SQuAD dataset in F1 score. T = EM F1 T = EM F1 1 75.58 83.86 4 76.12 83.98 2 75.85 83.90 5 76.24 84.06 3 75.98 83.95 10 75.89 83.88 Table 6: Prediction on different steps T. Note that the SAN model is trained using 5 steps. (a) EM (b) F1 Figure 4: Learning curve measured on Dev set. Figure 5: Score breakdown by question type. 5.4 How does SAN perform by question type? To see whether SAN performs well on a particular type of question, we divided the development set by questions type based on their respective Whword, such as “who” and “where”. The score breakdown by F1 is shown in Figure 5. We observe that SAN seems to outperform other models uniformly across all types. The only exception is the Why questions, but there is too little data to derive strong conclusions. 5.5 Experiments results on MS MARCO MS MARCO (Nguyen et al., 2016) is a large scale real-word RC dataset which contains 100,100 (100K) queries collected from anonymized user logs from the Bing search engine. The characteristic of MS MARCO is that all the questions are real user queries and passages are extracted from real web documents. For each query, approximate 10 passages are extracted from public web documents. The answers are generated by humans. The data is partitioned into a 82,430 training, a 10,047 development and 9,650 test tuples. The evaluation metrics are BLEU(Papineni et al., 2002) and ROUGE-L (Lin, 2004) due to its free-form text answer style. To apply the same RC model, we search for a span in MS MARCO’s passages that maximizes the ROUGE-L score with the raw freeform answer. It has an upper bound of 93.45 BLEU and 93.82 ROUGE-L on the development set. The MS MARCO dataset contains multiple passages per query. Our model as shown in Figure 2 is developed to generate answer from a single passage. Thus, we need to extend it to handle multiple passages. Following (Shen et al., 2017), we take two steps to generate an answer to a query Q from J passages, P 1, ..., P J. First, we run SAN on ev1702 SingleModel ROUGE BLEU ReasoNet++(Shen et al., 2017) 38.01 38.62 V-Net(Wang et al., 2018) 45.65 Standard 1-step in Table 1 42.30 42.39 SAN 46.14 43.85 Table 7: MS MARCO devset results. ery (P j, Q) pair, generating J candidate answer spans, one from each passage. Then, we multiply the SAN score of each candidate answer span with its relevance score r(P j, Q) assigned by a passage ranker, and output the span with the maximum score as the answer. In our experiments, we use the passage ranker described in (Liu et al., 2018)5. The ranker is trained on the same MS MARCO training data, and achieves 37.1 p@1 on the development set. 
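A minimal sketch of this two-step multi-passage procedure is given below; the method names best_span and relevance are placeholders for whatever interfaces the span predictor and the passage ranker expose.

```python
def answer_from_multiple_passages(san_model, ranker, question, passages):
    """Combine per-passage span scores with passage-level relevance scores.

    For each passage P_j, take the best candidate span and its SAN score,
    multiply it by the ranker's relevance r(P_j, Q), and return the span
    with the maximum combined score.
    """
    best = None
    for passage in passages:
        span, span_score = san_model.best_span(question, passage)
        combined = span_score * ranker.relevance(passage, question)
        if best is None or combined > best[0]:
            best = (combined, span)
    return best[1]
```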
The results in Table 7 show that SAN outperforms V-Net (Wang et al., 2018) and becomes the new state of the art6. 6 Related Work The recent big progress on MRC is largely due to the availability of the large-scale datasets (Rajpurkar et al., 2016; Nguyen et al., 2016; Richardson et al., 2013; Hill et al., 2016), since it is possible to train large end-to-end neural network models. In spite of the variety of model structures and attenion types (Bahdanau et al., 2015; Chen et al., 2016; Xiong et al., 2016; Seo et al., 2016; Shen et al., 2017; Wang et al., 2017), a typical neural network MRC model first maps the symbolic representation of the documents and questions into a neural space, then search answers on top of it. We categorize these models into two groups based on the difference of the answer module: singlestep and multi-step reasoning. The key difference between the two is what strategies are applied to search the final answers in the neural space. A single-step model matches the question and document only once and produce the final answers. It is simple yet efficient and can be trained using the classical back-propagation algorithm, thus it is adopted by most systems (Chen et al., 2016; Seo et al., 2016; Wang et al., 2017; Liu et al., 2017b; Chen et al., 2017; Weissenborn et al., 2017; 5It is the same model structure as (Liu et al., 2018) by using softmax over all candidate passages. A simple baseline, TF-IDF, obtains 20.1 p@1 on MS MARCO development. 6The official evaluation on MS MARCO on test is closed, thus here we only report the results on the development set. Hu et al., 2017). However, since humans often solve question answering tasks by re-reading and re-digesting the document multiple times before reaching the final answers (this may be based on the complexity of the questions/documents), it is natural to devise an iterative way to find answers as multi-step reasoning. Pioneered by (Hill et al., 2016; Dhingra et al., 2016; Sordoni et al., 2016; Kumar et al., 2015), who used a predetermined fixed number of reasoning steps, Shen et al (2016; 2017) showed that multi-step reasoning outperforms single-step ones and dynamic multi-step reasoning further outperforms the fixed multi-step ones on two distinct MRC datasets (SQuAD and MS MARCO). But these models have to be trained using reinforcement learning methods, e.g., policy gradient, which are tricky to implement due to the instability issue. Our model is different in that we fix the number of reasoning steps, but perform stochastic dropout to prevent step bias. Further, our model can also be trained by using the back-propagation algorithm, which is simple and yet efficient. 7 Conclusion We introduce Stochastic Answer Networks (SAN), a simple yet robust model for machine reading comprehension. The use of stochastic dropout in training and averaging in test at the answer module leads to robust improvements on SQuAD, outperforming both fixed step memory networks and dynamic step ReasoNet. We further empirically analyze the properties of SAN in detail. The model achieves results competitive with the state-of-the-art on the SQuAD leaderboard, as well as on the Adversarial SQuAD and MS MARCO datasets. Due to the strong connection between the proposed model with memory networks and ReasoNet, we would like to delve into the theoretical link between these models and its training algorithms. 
Further, we also would like to explore SAN on other tasks, such as text classification and natural language inference for its generalization in the future. Acknowledgments We thank Pengcheng He, Yu Wang and Xinying Song for help to set up dockers. We also thank Pranav Samir Rajpurkar for help on SQuAD evaluations, and the anonymous reviewers for valuable discussions and comments. 1703 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. International Conference on Learning Representations (ICLR2015) . Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 2358–2367. http://www.aclweb.org/anthology/P16-1223. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer opendomain questions. In Association for Computational Linguistics (ACL). Bhuwan Dhingra, Hanxiao Liu, William W Cohen, and Ruslan Salakhutdinov. 2016. Gated-attention readers for text comprehension. arXiv preprint arXiv:1606.01549 . Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. 2013. Maxout networks. arXiv preprint arXiv:1302.4389 . Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. The goldilocks principle: Reading children’s books with explicit memory representations. ICLR . Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Minghao Hu, Yuxing Peng, and Xipeng Qiu. 2017. Mnemonic reader for machine comprehension. arXiv preprint arXiv:1705.02798 . Hsin-Yuan Huang, Chenguang Zhu, Yelong Shen, and Weizhu Chen. 2017. Fusionnet: Fusing via fullyaware attention with application to machine comprehension. arXiv preprint arXiv:1711.07341 . Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pages 2021–2031. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Ankit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, and Richard Socher. 2015. Ask me anything: Dynamic memory networks for natural language processing. CoRR abs/1506.07285. http://arxiv.org/abs/1506.07285. Kenton Lee, Tom Kwiatkowski, Ankur Parikh, and Dipanjan Das. 2016. Learning recurrent span representations for extractive question answering. arXiv preprint arXiv:1611.01436 . Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130 . Rui Liu, Junjie Hu, Wei Wei, Zi Yang, and Eric Nyberg. 2017a. Structural embedding of syntactic trees for machine comprehension. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pages 815–824. Rui Liu, Wei Wei, Weiguang Mao, and Maria Chikina. 2017b. Phase conductor on multi-layered attentions for machine comprehension. arXiv preprint arXiv:1710.10504 . Xiaodong Liu, Kevin Duh, and Jianfeng Gao. 2018. Stochastic answer networks for natural language inference. 
arXiv preprint arXiv:1804.07888 . Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. arXiv preprint arXiv:1708.00107 . Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268 . Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics. Association for Computational Linguistics, pages 311–318. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 1532–1543. http://www.aclweb.org/anthology/D14-1162. M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer. 2018. Deep contextualized word representations. ArXiv e-prints . Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text pages 2383–2392. https://aclweb.org/anthology/D16-1264. 1704 Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Seattle, Washington, USA, pages 193–203. S. Salant and J. Berant. 2017. Contextualized Word Representations for Reading Comprehension. ArXiv e-prints . Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603 . Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. 2016. Reasonet: Learning to stop reading in machine comprehension. arXiv preprint arXiv:1609.05284 . Yelong Shen, Xiaodong Liu, Kevin Duh, and Jianfeng Gao. 2017. An empirical analysis of multiple-turn reasoning strategies in reading comprehension tasks. arXiv preprint arXiv:1711.03230 . Alessandro Sordoni, Philip Bachman, Adam Trischler, and Yoshua Bengio. 2016. Iterative alternating neural attention for machine reading. arXiv preprint arXiv:1606.02245 . Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15(1):1929–1958. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762 . Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 189–198. Y. Wang, K. Liu, J. Liu, W. He, Y. Lyu, H. Wu, S. Li, and H. Wang. 2018. Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification. ArXiv e-prints . Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. Fastqa: A simple and efficient neural architecture for question answering. arXiv preprint arXiv:1703.04816 . Caiming Xiong, Victor Zhong, and Richard Socher. 
2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604. Caiming Xiong, Victor Zhong, and Richard Socher. 2017. Dcn+: Mixed objective and deep residual coattention for question answering. arXiv preprint arXiv:1711.00106. Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V. Le. 2018. Qanet: Combining local convolution with global self-attention for reading comprehension. arXiv preprint arXiv:1804.09541. Junbei Zhang, Xiaodan Zhu, Qian Chen, Lirong Dai, and Hui Jiang. 2017. Exploring question understanding and adaptation in neural-network-based question answering. arXiv preprint arXiv:1703.04617.
2018
157
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1705–1714 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1705 Multi-Granularity Hierarchical Attention Fusion Networks for Reading Comprehension and Question Answering Wei Wang∗, Ming Yan∗, Chen Wu∗ Alibaba Group, 969 West Wenyi Road, Hangzhou 311121, China {hebian.ww,ym119608,wuchen.wc}@alibaba-inc.com Abstract This paper describes a novel hierarchical attention network for reading comprehension style question answering, which aims to answer questions for a given narrative paragraph. In the proposed method, attention and fusion are conducted horizontally and vertically across layers at different levels of granularity between question and paragraph. Specifically, it first encode the question and paragraph with fine-grained language embeddings, to better capture the respective representations at semantic level. Then it proposes a multi-granularity fusion approach to fully fuse information from both global and attended representations. Finally, it introduces a hierarchical attention network to focuses on the answer span progressively with multi-level softalignment. Extensive experiments on the large-scale SQuAD and TriviaQA datasets validate the effectiveness of the proposed method. At the time of writing the paper (Jan. 12th 2018), our model achieves the first position on the SQuAD leaderboard for both single and ensemble models. We also achieves state-of-the-art results on TriviaQA, AddSent and AddOneSent datasets. 1 Introduction As a brand new field in question answering community, reading comprehension is one of the key problems in artificial intelligence, which aims to read and comprehend a given text, and then answer questions based on it. This task is challenging which requires a comprehensive understanding of natural languages and the ability to do further inference and reasoning. Restricted by the limited volume of the annotated dataset, early studies mainly rely on a pipeline of NLP models to complete this task, such as semantic parsing and linguistic annotation (Das et al., 2014). Not until the release of large-scale clozestyle dataset, such as Children’s Book Test (Hill et al., 2015) and CNN/Daily Mail (Hermann et al., 2015), some preliminary end-to-end deep learning methods have begun to bloom and achieve superior results in reading comprehension task (Hermann et al., 2015; Chen et al., 2016; Cui et al., 2016). However, these cloze-style datasets still have their limitations, where the goal is to predict the single missing word (often a named entity) in a passage. It requires less reasoning than previously thought and no need to comprehend the whole passage (Chen et al., 2016). Therefore, Stanford publish a new large-scale dataset SQuAD (Rajpurkar et al., 2016), in which all the question and answers are manually created through crowdsourcing. Different from cloze-style reading comprehension dataset, SQuAD constrains answers to all possible text spans within the reference passage, which requires more logical reasoning and content understanding. Benefiting from the availability of SQuAD benchmark dataset, rapid progress has been made these years. The work (Wang and Jiang, 2016) and (Seo et al., 2016) are among the first to investigate into this dataset, where Wang and Jiang propose an end-to-end architecture based on match-LSTM and pointer networks (Wang and Jiang, 2016), and Seo et al. 
introduce the bi-directional attention flow network which captures the questiondocument context at different levels of granularity (Seo et al., 2016). Chen et al. devise a simple and effective document reader, by introducing a bilinear match function and a few manual features (Chen et al., 2017a). Wang et al. propose 1706 a gated attention-based recurrent network where self-match attention mechanism is first incorporated (Wang et al., 2017). In (Liu et al., 2017b) and (Shen et al., 2017), the multi-turn memory networks are designed to simulate multi-step reasoning in machine reading comprehension. The idea of our approach derives from the normal human reading pattern. First, people scan through the whole passage to catch a glimpse of the main body of the passage. Then with the question in mind, people make connection between passage and question, and understand the main intent of the question related with the passage theme. A rough answer span is then located from the passage and the attention can be focused on to the located context. Finally, to prevent from forgetting the question, people come back to the question and select a best answer according to the previously located answer span. Inspired by this, we propose a hierarchical attention network which can gradually focus the attention on the right part of the answer boundary, while capturing the relation between the question and passage at different levels of granularity, as illustrated in Figure 1. Our model mainly consists of three joint layers: 1) encoder layer where pretrained language models and recurrent neural networks are used to build representation for questions and passages separately; 2) attention layer in which hierarchical attention networks are designed to capture the relation between question and passage at different levels of granularity; 3) match layer where refined question and passage are matched under a pointer-network (Vinyals et al., 2015) answer boundary predictor. In encoder layer, to better represent the questions and passages in multiple aspects, we combine two different embeddings to give the fundamental word representations. In addition to the typical glove word embeddings, we also utilize the ELMo embeddings (Peters et al., 2018) derived from a pre-trained language model, which shows superior performance in a wide range of NLP problems. Different from the original fusion way for intermediate layer representations, we design a representation-aware fusion method to compute the output ELMo embeddings and the context information is also incorporated by further passing through a bi-directional LSTM network. The key in machine reading comprehension solution lies in how to incorporate the question context into the paragraph, in which attention mechanism is most widely used. Recently, many different attention functions and types have been designed (Xiong et al., 2016; Seo et al., 2016; Wang et al., 2017), which aims at properly aligning the question and passage. In our attention layer, we propose a hierarchical attention network by leveraging both the co-attention and self-attention mechanism, to gradually focus our attention on the best answer span. Different from the previous attention-based methods, we constantly complement the aligned representations with global information from the previous layer, and an additional fusion layer is used to further refine the representations. In this way, our model can make some minor adjustment so that the attention will always be on the right place. 
Based on the refined question and passage representation, a bilinear match layer is finally used to identify the best answer span with respect to the question. Following the work of (Wang and Jiang, 2016), we predict the start and end boundary within a pointer-network output layer. The proposed method achieves state-of-the-art results against strong baselines. Our single model achieves 79.2% EM and 86.6% F1 score on the hidden test set, while the ensemble model further boosts the performance to 82.4% EM and 88.6% F1 score. At the time of writing the paper (Jan. 12th 2018), our model SLQA+ (Semantic Learning for Question Answering) achieves the first position on the SQuAD leaderboard 1 for both single and ensemble models. Besides, we are also among the first to surpass human EM performance on this golden benchmark dataset. 2 Related Work 2.1 Machine Reading Comprehension Traditional reading comprehension style question answering systems rely on a pipeline of NLP models, which make heavy use of linguistic annotation, structured world knowledge, semantic parsing and similar NLP pipeline outputs (Hermann et al., 2015). Recently, the rapid progress of machine reading comprehension has largely benefited from the availability of large-scale benchmark datasets and it is possible to train large end-to-end neural network models. Among them, CNN/Daily Mail (Hermann et al., 2015) and Children’s Book Test (Hill et al., 2015) are the first 1 https://rajpurkar.github.io/SQuAD-explorer/ 1707 large-scale datasets for reading comprehension task. However, these datasets are in cloze-style, in which the goal is to predict the missing word (often a named entity) in a passage. Moreover, Chen at al. have also shown that these clozestyle datasets requires less reasoning than previously thought (Chen et al., 2016). Different from the previous datasets, the SQuAD provides a more challenging benchmark dataset, where the goal is to extract an arbitrary answer span from the original passage. 2.2 Attention-based Neural Networks The key in MRC task lies in how to incorporate the question context into the paragraph, in which attention mechanism is most widely used. In spite of a variety of model structures and attention types (Cui et al., 2016; Xiong et al., 2016; Seo et al., 2016; Wang et al., 2017; Clark and Gardner, 2017), a typical attention-based neural network model for MRC first encodes the symbolic representation of the question and passage in an embedding space, then identify answers with particular attention functions in that space. In terms of the question and passage attention or matching strategy, we roughly categorize these attention-based models into two large groups: one-way attention and two-way attention. In one-way attention model, question is first summarized into a single vector and then directly matched with the passage. Most of the end-toend neural network methods on the cloze-style datasets are based on this model (Hermann et al., 2015; Kadlec et al., 2016; Chen et al., 2016; Dhingra et al., 2016). Hermann et al. are the first to apply the attention-based neural network methods to MRC task and introduce an attentive reader and an impatient reader (Hermann et al., 2015), by leveraging a two layer LSTM network. Chen et al. (Chen et al., 2016) further design a bilinear attention function based on the attentive reader, which shows superior performance on CNN/Daily Mail dataset. 
However, part of information may be lost when summarizing the question and a finegrained attention on both the question and passage words should be more reasonable. Therefore, the two-way attention model unfolds both the question and passage into respective word embeddings, and compute the attention in a two-dimensional matrix. Most of the top-ranking methods on SQuAD leaderboard are based on this attention mechanism (Wang et al., 2017; Huang et al., 2017; Xiong et al., 2017; Liu et al., 2017b,a). (Cui et al., 2016) and (Xiong et al., 2016) introduce the co-attention mechanism to better couple the representations of the question and document. Seo et al. propose a bi-directional attention flow network to capture the relevance at different levels of granularity (Seo et al., 2016). (Wang et al., 2017) further introduce the self-attention mechanism to refine the representation by matching the passage against itself, to better capture the global passage information. Huang et al. introduce a fully-aware attention mechanism with a novel history-of-word concept (Huang et al., 2017). We propose a hierarchical attention network by leveraging both co-attention and self-attention mechanisms in different layers, which can capture the relevance between the question and passage at different levels of granularity. Different from the above methods, we further devise a fusion function to combine both the aligned representation and the original representation from the previous layer within each attention. In this way, the model can always focus on the right part of the passage, while keeping the global passage topic in mind. 3 Machine Comprehension Model 3.1 Task Description Typical machine comprehension systems take an evidence text and a question as input, and predict a span within the evidence that answers the question. Based on this definition, given a passage and a question, the machine needs to first read and understand the passage, and then finds the answer to the question. The passage is described as a sequence of word tokens P = n wP t on t=1 and the question is described as Q = n wQ t om t=1, where n is the number of words in the passage, and m is the number of words in the question. In general, n ≫m. The answer can have different types depending on the task. In the SQuAD dataset (Rajpurkar et al., 2016), the answer A is guaranteed to be a continuous span in the passage P. The object function for machine reading comprehension is to learn a function f(q, p) = arg maxa∈A(p) P(a|q, p). The training data is a set of the question, passage and answer tuples < Q, P, A >. 1708 Figure 1: Hierarchical Attention Fusion Network. 3.2 Encode-Interaction-Pointer Framework We will now describe our framework from the bottom up. As show in Figure 1, the proposed framework consists of four typical layers to learn different concepts of semantic representations: • Encoder Layer as a language model, utilizes contextual cues from surrounding words to refine the embedding of the words. It converts the passage and question from tokens to semantic representation; • Attention Layer attempts to capture relations between question and passage. Besides the aligned context, the contextual embeddings are also merged by a fusion function. Moreover, the multi-level of this operation forms a ”working memory”; • Match Layer employs a bi-linear match function to compute the relevance between the question and passage representation on a span level; • Output Layer uses a pointer network to search the answer span of question. 
The main contribution of this work is the attention layer, in order to capture the relationship between question and passage, a hierarchical strategy is used to progressively make the answer boundary clear with the refined attention mechanism. A fine-grained fusion function is also introduced to better align the contextual representations from different levels. The detailed description of the model is provided as follows. 3.3 Hierarchical Attention Fusion Network Our design is based on a simple but natural intuition: performing fine-grained mechanism requires first to roughly see the potential answer domain and then progressively locate the most discriminative parts of the domain. The overall framework of our Hierarchical Attention Fusion Network is shown in Figure 1. It consists of several parts: a basic co-attention layer with shallow semantic fusion, a self-attention layer with deep semantic fusion and a memorywise bilinear alignment function. The proposed network has two distinctive characteristics: (i) A fine-grained fusion approach to blend attention vectors for a better understanding of the relationship between question and passage; (ii) A multi-granularity attention mechanism applied at the word and sentence-level, enabling it to properly attend to the most important content when constructing the question and passage representation. Experiments conducted on SQuAD and adversarial example datasets (Jia and Liang, 2017) demonstrate that the proposed framework outperform previous methods by a large margin. Details of different components will be described in the following sections. 3.4 Language Model & Encoder Layer Encoder layer of the model transform the discrete word tokens of question and passage to a sequence of continuous vector representations. We use a pre-trained word embedding model and a char embedding model to lay the foundation for our model. For the word embedding model, we adopt the popular glove embeddings (Pennington et al., 2014) which are widely used in deep learning-based NLP domain. For the char embedding model, the ELMo language model (Peters et al., 2018) is used due to its superior performance in a wide range of NLP tasks. As a result, we obtain two types of encoding vectors, i.e., word embeddings n eQ t om t=1, n eP t on t=1 and char embeddings n cQ t om t=1, n cP t on t=1. To further utilize contextual cues from surrounding words to refine the embedding of the words, we then put a shared Bi-LSTM network on top of the embeddings provided by the previous layers to model the temporal interactions between words. Before feeding into the Bi-LSTM 1709 contextual network, we concat the word embeddings and char embeddings for a full understanding of each word. The final output of our encoder layer is shown as below, uQ t = h BiLSTMQ([eQ t , cQ t ]), cQ t i (1) uP t = h BiLSTMP([eP t , cP t ]), cP t i (2) where we further concat the output of the contextual Bi-LSTM network with the pre-trained char embeddings for its good performance (Peters et al., 2018). This can be regarded as a residual connection between word representations in different levels. 3.5 Hierarchical Attention & Fusion Layer The attention layer is responsible for linking and fusing information from the question and passage representation, which is the most critical in most MRC tasks. It aims to align the question and passage so that we can better locate on the most relevant passage span with respect to the question. 
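As a brief aside before the attention layers, the encoder of Equations 1 and 2 above can be sketched as follows; the dimensions are illustrative and the ELMo vectors stand in for the char embeddings c_t.

```python
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    """Sketch of Equations 1-2: contextualise [GloVe; ELMo] with a shared
    BiLSTM, then re-concatenate the ELMo vector as a residual-style connection.
    """
    def __init__(self, word_dim=300, char_dim=1024, hidden_dim=128):
        super().__init__()
        self.bilstm = nn.LSTM(word_dim + char_dim, hidden_dim,
                              batch_first=True, bidirectional=True)

    def forward(self, word_emb, char_emb):
        # word_emb: (batch, seq, word_dim); char_emb: (batch, seq, char_dim)
        h, _ = self.bilstm(torch.cat([word_emb, char_emb], dim=-1))
        return torch.cat([h, char_emb], dim=-1)   # u_t = [BiLSTM([e_t, c_t]), c_t]
```

The same module would be applied to both the question and the passage, since the contextual Bi-LSTM is shared between the two sides.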
We propose a hierarchical attention structure by combining the co-attention and self-attention mechanism in a multi-hop style. Besides, we think that the original representation and the aligned representation via attention can reflect the content semantics in different granularities. Therefore, we also apply a particular fusion function after each attention function, so that different levels of semantics can be better incorporated towards a better understanding. 3.5.1 Co-attention & Fusion Given the question and passage representation uQ t and uP t , a soft-alignment matrix S has been built to calculate the shallow semantic similarity between question and passage as follows: Sij = Att(uQ t , uP t ) = ReLU(W⊤ linuQ t )⊤· ReLU(W⊤ linuP t ) (3) where Wlin is a trainable weight matrix. This decomposition avoids the quadratic complexity that is trivially parallelizable (Parikh et al., 2016). Now we use the unnormalized attention weights Sij to compute the attentions between question and passage, which is further used to obtain the attended vectors in passage to question and question to passage direction, respectively. P2Q Attention signifies which question words are most relevant to each passage word, given as below: αj = softmax(S:j) (4) where αj represents the attention weights on the question words. The aligned passage representation from question Q = n uQ t om t=1 can thus be derived as, ˜Q:t = X j αtj · Q:j, ∀j ∈[1, ..., m] (5) Q2P Attention signifies which passage words have the closest similarity to one of the question words and are hence critical for answering the question. We utilize the same way to calculate this attention as in the passage to question attention (P2Q), except for that in the opposite direction: βi = softmax(Si:) (6) ˜Pk: = X i βik · Pi:, ∀i ∈[1, ..., n] (7) where ˜P indicates the weighted sum of the most important words in the passage with respect to the question. With the aligned passage and question representations ˜Q and ˜P derived, a particular fusion unit has been designed to combine the original contextual representations and the corresponding attention vectors for question and passage separately: P′ = Fuse(P, ˜Q) (8) Q′ = Fuse(Q, ˜P) (9) where Fuse(·, ·) is a typical fusion kernel. The simplest way of fusion is a concatenation or addition of the two representations, followed by some linear or non-linear transformation. Recently, a heuristic matching trick with difference and element-wise product is found effective in combining different representations (Mou et al., 2016; Chen et al., 2017b): m(P, ˜Q) = tanh(Wf[P; ˜Q; P ◦˜Q; P −˜Q] + bf) (10) where ◦denotes the element-wise product, and Wf, bf are trainable parameters. The output dimension is projected back to the same size as the original representation P or Q via the projected matrix Wf. Since we find that the original contextual representations are important in reflecting the semantics at a more global level, we also introduce different levels of gating mechanism to incorporate the 1710 projected representations m(·, ·) with the original contextual representations. As a result, the final fused representations of passage and question can be formulated as: P′ = g(P, ˜Q)·m(P, ˜Q)+(1−g(P, ˜Q))·P (11) Q′ = g(Q, ˜P)·m(Q, ˜P)+(1−g(Q, ˜P))·Q (12) where g(·, ·) is a gating function. To capture the relation between the representations in different granularities, we also design a scalar-based, a vector-based and a matrix-based sigmoid gating function, which are compared in Section 4.5. 
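A condensed sketch of the co-attention and fusion computations of Equations 3-12 is given below. Dimensions are illustrative, the gate shown is the vector-based variant, and the normalisation direction used for the Q2P attention reflects our reading of Equations 6-7 and may differ from the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttentionFusion(nn.Module):
    """Sketch of Equations 3-12: shallow co-attention plus gated heuristic fusion."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim, bias=False)    # W_lin in Eq. 3
        self.match = nn.Linear(4 * dim, dim)          # W_f, b_f in Eq. 10
        self.gate = nn.Linear(4 * dim, 1)             # vector-based gate g(., .)

    def fuse(self, x, aligned):
        feats = torch.cat([x, aligned, x * aligned, x - aligned], dim=-1)
        m = torch.tanh(self.match(feats))             # m(x, aligned), Eq. 10
        g = torch.sigmoid(self.gate(feats))
        return g * m + (1 - g) * x                    # Eqs. 11-12

    def forward(self, Q, P):
        # Q: (batch, m, dim) question; P: (batch, n, dim) passage
        S = torch.bmm(F.relu(self.lin(P)), F.relu(self.lin(Q)).transpose(1, 2))  # Eq. 3, (batch, n, m)
        Q_aligned = torch.bmm(F.softmax(S, dim=2), Q)                  # P2Q: question summary per passage word
        P_aligned = torch.bmm(F.softmax(S, dim=1).transpose(1, 2), P)  # Q2P: passage summary per question word
        return self.fuse(P, Q_aligned), self.fuse(Q, P_aligned)        # P', Q'
```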
3.5.2 Self-attention & Fusion Borrowing the idea from wide and deep network (Cheng et al., 2016), manual features have also been added to combine with the outputs of previous layer for a more comprehensive representation. In our model, these features are concatenated with the refined question-aware passage representation as below: D = BiLSTM([P′; featman]) (13) where featman denotes the word-level manual passage features. In this layer, we separately consider the semantic representations of question and passage, and further refine the obtained information from the co-attention layer. Since fusing information among context words allows contextual information to flow close to the correct answer, the self-attention layer is used to further align the question and passage representation against itself, so as to keep the global sequence information in memory. Benefiting from the advantage of self-alignment attention in addressing the longdistance dependence (Wang et al., 2017), we adopt a self-alignment fusion process in this level. To allow for more freedom of the aligning process, we introduce a bilinear self-alignment attention function on the passage representation: L = softmax(D · Wl · D⊤) (14) ˜D = L · D (15) Another fusion function Fuse(·, ·) is again adopted to combine the question-aware passage representation D and self-aware representation ˜D, as below: D′ = Fuse(D, ˜D) (16) Finally, a bidirectional LSTM is used to get the final contextual passage representation: D′′ = BiLSTM(D′) (17) As for question side, since it is generally shorter in length and could be adequately represented with less information, we follow the question encoding method used in (Chen et al., 2017a) and adopt a linear transformation to encode the question representation to a single vector. First, another contextual bidirectional LSTM network is applied on top of the fused question representation: Q′′ = BiLSTM(Q′). Then we aggregate the resulting hidden units into one single question vector, with a linear self-alignment: γ = softmax(w⊤ q · Q′′) (18) q = X j γj · Q′′ :j, ∀j ∈[1, ..., m] (19) where wq is a weight vector to learn, we self-align the refined question representation to a single vector according to the question self-attention weight, which can be further used to compute the matching with the passage words. 3.6 Model & Output Layer Instead of predicting the start and end positions based only on D′′, a top-level bilinear match function is used to capture the semantic relation between question q and paragraph D′′ in a matching style, which actually works as a multi-hop matching mechanism. Different from the co-attention layer that generates coarse candidate answers and the selfattention layer that focus the relevant context of passage to a certain intent of question, the top model layer uses a bilinear matching function to capture the interaction between outputs from previous layers and finally locate on the right answer span. The start and end distribution of the passage words are calculated in a bilinear matching way as below, Pstart = softmax(q · W⊤ s · D′′) (20) Pend = softmax(q · W⊤ e · D′′) (21) where Ws and We are trainable matrices of the bilinear match function. The output layer is application-specific, in MRC task, we use pointer networks to predict the 1711 start and end position of the answer, since it requires the model to find the sub-phrase of the passage to answer the question. 
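The self-alignment, question summarisation, and bilinear start/end scoring of Equations 14-21 can be sketched as follows; the fusion step and the intermediate contextual BiLSTMs are omitted for brevity, so D_tilde stands in for the final passage representation D''.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAlignAndMatch(nn.Module):
    """Sketch of Equations 14-21 (fusion and contextual BiLSTMs omitted)."""
    def __init__(self, dim):
        super().__init__()
        self.W_l = nn.Linear(dim, dim, bias=False)   # W_l in Eq. 14
        self.w_q = nn.Linear(dim, 1, bias=False)     # w_q in Eq. 18
        self.W_s = nn.Linear(dim, dim, bias=False)   # W_s in Eq. 20
        self.W_e = nn.Linear(dim, dim, bias=False)   # W_e in Eq. 21

    def forward(self, D, Q):
        # D: (batch, n, dim) question-aware passage; Q: (batch, m, dim) fused question
        L = F.softmax(torch.bmm(self.W_l(D), D.transpose(1, 2)), dim=-1)   # Eq. 14
        D_tilde = torch.bmm(L, D)                                          # Eq. 15
        gamma = F.softmax(self.w_q(Q).squeeze(-1), dim=-1)                 # Eq. 18
        q = torch.bmm(gamma.unsqueeze(1), Q).squeeze(1)                    # Eq. 19
        p_start = F.softmax(torch.bmm(self.W_s(q).unsqueeze(1),
                                      D_tilde.transpose(1, 2)).squeeze(1), dim=-1)  # Eq. 20
        p_end = F.softmax(torch.bmm(self.W_e(q).unsqueeze(1),
                                    D_tilde.transpose(1, 2)).squeeze(1), dim=-1)    # Eq. 21
        return p_start, p_end
```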
In training process, with cross entropy as metric, the loss for start and end position is the sum of the negative log probabilities of the true start and end indices by the predicted distributions, averaged over all examples: L(θ) = −1 N N X i log ps(ys i ) + log pe(ye i ) (22) where θ is the set of all trainable weights in the model, and ps is the probability of start index, pe is the probability of end index, respectively. ys i and ye i are the true start and end indices. During prediction, we choose the answer span with the maximum value of ps · pe under a constraint that s ≤e ≤s + 15, which is selected via a dynamic programming algorithm in linear time. 4 Experiments In this section, we first present the datasets used for evaluation. Then we compare our end-to-end Hierarchical Attention Fusion Networks with existing machine reading models. Finally, we conduct experiments to validate the effectiveness of our proposed components. We evaluate our model on the task of question answering using recently released SQuAD and TriviaQA Wikipedia (Joshi et al., 2017), which have gained a huge attention over the past year. An adversarial evaluation for the Stanford Question Answering SQuAD is also used to demonstrate the robust of our model under adversarial attacks (Jia and Liang, 2017). 4.1 Dataset We focus on the SQuAD dataset to train and evaluate our model. SQuAD is a popular machine comprehension dataset consisting of 100,000+ questions created by crowd workers on 536 Wikipedia articles. Each context is a paragraph from an article and the answer to each question is guaranteed to be a span in the context. The answer to each question is always a span in the context. The model is given a credit if its answer matches one of the human chosen answers. Two metrics are used to evaluate the model performance: Exact Match (EM) and a softer metric F1 score, which measures the weighted average of the precision and recall rate at a character level. Table 1: The performance of our SLQA model and competing approaches on SQuAD. Dev Set Test Set Single model EM / F1 EM / F1 LR Baseline (Rajpurkar et al., 2016) 40.0 / 51.0 40.4 / 51.0 Match-LSTM (Wang and Jiang, 2016) 64.1 / 73.9 64.7 / 73.7 DrQA (Chen et al., 2017a) - / 70.7 / 79.4 DCN+ (Xiong et al., 2017) 74.5 / 83.1 75.1 / 83.1 Interactive AoA Reader+ (Cui et al., 2016) - / 75.8 / 83.8 FusionNet (Huang et al., 2017) - / 76.0 / 83.9 SAN (Liu et al., 2017b) 76.2 / 84.0 76.8 / 84.4 AttentionReader+ (unpublished) - / 77.3 / 84.9 BiDAF + Self Attention + ELMo (Peters et al., 2018) - / 78.6 / 85.8 r-net+ (Wang et al., 2017) - / 79.9 / 86.5 SLQA+ 80.0 / 87.0 80.4 / 87.0 Ensemble model FusionNet (Huang et al., 2017) - / 78.8 / 85.9 DCN+ (Xiong et al., 2017) - / 78.9 / 86.0 Interactive AoA Reader+ (Cui et al., 2016) - / 79.0 / 86.4 SAN (Liu et al., 2017b) 78.6 / 85.9 79.6 / 86.5 BiDAF + Self Attention + ELMo (Peters et al., 2018) - / 81.0 / 87.4 AttentionReader+ (unpublished) - / 81.8 / 88.2 r-net+ (Wang et al., 2017) - / 82.6 / 88.5 SLQA+ 82.0 / 88.4 82.4 / 88.6 Human Performance 80.3 / 90.5 82.3 / 91.2 TriviaQA is a newly available machine comprehension dataset consisting of over 650K contextquery-answer triples. The contexts are automatically generated from either Wikipedia or Web search results. The length of contexts in TriviaQA (average 2895 words) is much more longer than the one in SQuAD (average 122 words). 4.2 Training Details We use the AdaMax optimizer, with a mini-batch size of 32 and initial learning rate of 0.002. 
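Returning to the prediction procedure above, the constrained decoding that maximises p_s · p_e subject to s ≤ e ≤ s + 15 can be sketched as below; for a fixed maximum span length this runs in time linear in the passage length.

```python
def decode_span(p_start, p_end, max_len=15):
    """Pick (s, e) maximising p_start[s] * p_end[e] under s <= e <= s + max_len."""
    best_score, best_span = -1.0, (0, 0)
    for e in range(len(p_end)):
        for s in range(max(0, e - max_len), e + 1):
            score = p_start[s] * p_end[e]
            if score > best_score:
                best_score, best_span = score, (s, e)
    return best_span
```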
A dropout rate of 0.4 is used for all LSTM layers. To directly optimize our target against the evaluation metrics, we further fine-tune the model with some well-defined strategy. During fine-tuning, Focal Loss (Lin et al., 2017) and Reinforce Loss which take F1 score as reward are incorporated with Cross Entropy Loss. The training process takes roughly 20 hours on a single Nvidia Tesla M40 GPU. We also train an ensemble model consisting of 15 training runs with the identical framework and hyper-parameters. At test time, we choose the answer with the highest sum of confidence scores amongst the 15 runs for each question. 4.3 Main Results The results of our model and competing approaches on the hidden test set are summarized in Table 1. The proposed SLQA+ ensemble model achieves an EM score of 82.4 and F1 score of 88.6, outperforming all previous approaches, which validates the effectiveness of our hierarchical attention and fusion network structure. We also conduct experiments on the adversarial 1712 Table 2: The F1 scores of different models on AddSent and AddOneSent datasets (S: Single Model, E: Ensemble). Model AddSent AddOneSent Logistic (Rajpurkar et al., 2016) 23.2 30.4 Match-S (Wang and Jiang, 2016) 27.3 39.0 Match-E (Wang and Jiang, 2016) 29.4 41.8 BiDAF-S (Seo et al., 2016) 34.3 45.7 BiDAF-E (Seo et al., 2016) 34.2 46.9 ReasoNet-S (Shen et al., 2017) 39.4 50.3 ReasoNet-E (Shen et al., 2017) 39.4 49.8 Mnemonic-S (Hu et al., 2017) 46.6 56.0 Mnemonic-E (Hu et al., 2017) 46.2 55.3 QANet-S (Yu et al., 2018) 45.2 55.7 FusionNet-E (Huang et al., 2017) 51.4 60.7 SLQA-S (our) 52.1 62.7 SLQA-E (our) 54.8 64.2 SQuAD dataset (Jia and Liang, 2017) to study the robustness of the proposed model. In the dataset, one or more sentences are appended to the original SQuAD context, aiming to mislead the trained models. We use exactly the same model as in our SQuAD dataset, the performance comparison result is shown in Table 2. It can be seen that the proposed model can still get superior results than all the other competing approaches. 4.4 Ablations In order to evaluate the individual contribution of each model component, we run an ablation study. Table 3 shows the performance of our model and its ablations on SQuAD dev set. The bi-linear alignment plus fusion between passage and question is most critical to the performance on both metrics which results in a drop of nearly 15%. The reason may be that in top-level attention layer, the similar semantics between question and passage are strong evidence to locate the correct answer span. The ELMo accounts for about 5% of the performance degradation, which clearly shows the effectiveness of language model. We conjecture that language model layer efficiently encodes different types of syntactic and semantic information about words-in-context, and improves the task performance. To evaluate the performance of hierarchical architecture, we reduce the multi-hop fusion with the standard LSTM network. The result shows that multi-hop fusion outperforms the standard LSTM by nearly 5% on both metrics. 4.5 Fusion Functions In this section, we experimentally demonstrate how different choices of the fusion kernel impact the performance of our model. The compared fusion kernels are described as follows: Simple Concat: a simple concatenation of two Table 3: Ablation tests of SLQA single model on the SQuAD dev set. 
SLQA single model EM / F1 SLQA+ 80.0 / 87.0 -Manual Features 79.2 / 86.2 -Language Embedding (ELMo) 77.6 / 84.9 -Self Matching 79.5 / 86.4 -Multi-hop 79.1 / 86.1 -Bi-linear Match 65.4 / 72.0 -Fusion (simple concat) 78.8 / 85.8 -Fusion, -Multi-hop 77.5 / 84.8 -Fusion, -Bi-linear Match 63.1 / 69.6 Table 4: Comparison of different fusion kernels on the SQuAD dev set. Fusion Kernel EM / F1 Simple Concat 78.8 / 85.8 Add Full Projection (FPU) 79.1 / 86.1 Scalar-based Fusion (SFU) 79.5 / 86.5 Vector-based Fusion (VFU) 80.0 / 87.0 Matrix-based Fusion (MFU) 79.8 / 86.8 channel inputs. Full Projection: the heuristic matching and projecting function as in Equ. 10. Scalar-based Fusion: the gating function is a trainable scalar parameter (a coarse fusion level): g(P, ˜Q) = gp (23) where gp is a trainable scalar parameter. Vector-based Fusion: the gating function contains a weight vector to learn, which acts as a onedimensional sigmoid gating, g(P, ˜Q) = σ(w⊤ g ·[P; ˜Q; P◦˜Q; P−˜Q]+bg) (24) where wg is trainable weight vector, bg is trainable bias, and σ is sigmoid function. Matrix-based Fusion: the gating function contains a weight matrix to learn, which acts as a twodimensional sigmoid gating, g(P, ˜Q) = σ(W⊤ g ·[P; ˜Q; P◦˜Q; P−˜Q]+bg) (25) where Wg is a trainable weight matrix. The comparison results of different fusion kernels can be found in Table 4. We can see that different fusion methods contribute differently to the final performances, and the vector-based fusion method performs best, with a moderate parameter size. 4.6 Attention Hierarchy and Function In the proposed model, attention layer is the most important part of the framework. At the bottom of Table 5 we show the performances on SQuAD 1713 Table 5: Comparison of different attention styles on the SQuAD dev set. Attention Hierarchy EM / F1 1-layer attention (only qp co-attention) 61.9 / 68.4 2-layer attention (add self-attention) 65.4 / 71.7 3-layer attention (add bilinear match) 80.0 / 87.0 Attention Function EM / F1 dot product 62.9 / 69.3 linear attention 78.0 / 84.9 bilinear attention (linear + relu) 80.0 / 87.0 trilinear attention 78.9 / 85.8 Table 6: Published and unpublished results on the TriviaQA wikipedia leaderboard. Full Verified Model EM / F1 EM / F1 BiDAF (Seo et al., 2016) 40.26 / 45.74 47.47 / 53.70 MEMEN (Pan et al., 2017) 43.16 / 46.90 49.28 / 55.83 M-Reader (Hu et al., 2017) 46.94 / 52.85 54.45 / 59.46 QANet (Yu et al., 2018) 51.10 / 56.60 53.30 / 59.20 document-qa (Clark and Gardner, 2017) 63.99 / 68.93 67.98 / 72.88 dirkweissenborn (unpublished) 64.60 / 69.90 72.77 / 77.44 SLQA-Single 66.56 / 71.39 74.83 / 78.74 for four common attention functions. Empirically, we find bilinear attention which add ReLU after linearly transforming does significantly better than the others. At the top of Table 5 we show the effect of varying the number of attention layers on the final performance. We see a steep and steady rise in accuracy as the number of layers is increased from N = 1 to 3. 4.7 Experiments on TriviaQA To further examine the robustness of the proposed model, we also test the model performance on TriviaQA dataset. The test performance of different methods on the leaderboard (on Jan. 12th 2018) is shown in Table 6. From the results, we can see that the proposed model can also obtain state-of-the-art performance in the more complex TriviaQA dataset. 
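For reference, the three gating variants compared in Section 4.5 (Equations 23-25) can be sketched as a single module; we read the scalar-based gate as the sigmoid of a learned scalar, and the heuristic-matching projection of Equation 10 is reused for all variants.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Sketch of the scalar-, vector- and matrix-based gates of Equations 23-25."""
    def __init__(self, dim, mode="vector"):
        super().__init__()
        self.match = nn.Linear(4 * dim, dim)                  # m(., .) of Eq. 10
        self.mode = mode
        if mode == "scalar":
            self.gate_param = nn.Parameter(torch.zeros(1))    # Eq. 23
        elif mode == "vector":
            self.gate = nn.Linear(4 * dim, 1)                 # Eq. 24
        else:  # "matrix"
            self.gate = nn.Linear(4 * dim, dim)               # Eq. 25

    def forward(self, P, Q_aligned):
        feats = torch.cat([P, Q_aligned, P * Q_aligned, P - Q_aligned], dim=-1)
        m = torch.tanh(self.match(feats))
        g = torch.sigmoid(self.gate_param if self.mode == "scalar" else self.gate(feats))
        return g * m + (1 - g) * P
```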
5 Conclusions We introduce a novel hierarchical attention network, a state-of-the-art reading comprehension model which conducts attention and fusion horizontally and vertically across layers at different levels of granularity between question and paragraph. We show that our proposed method is very powerful and robust, which outperforms the previous state-of-the-art methods in various largescale golden MRC datasets: SQuAD, TriviaQA, AddSent and AddOneSent. Figure 2: Learning curve of F1 / EM score on the SQuAD dev set Acknowledgments We thank the Stanford NLP Group and the University of Washington NLP Group for evaluating our results on the SQuAD and the TriviaQA test set. References Danqi Chen, Jason Bolton, and Christopher D Manning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. arXiv preprint arXiv:1606.02858. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017a. Reading wikipedia to answer open-domain questions. arXiv preprint arXiv:1704.00051. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, and Diana Inkpen. 2017b. Natural language inference with external knowledge. arXiv preprint arXiv:1711.04289. Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, et al. 2016. Wide & deep learning for recommender systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, pages 7–10. ACM. Christopher Clark and Matt Gardner. 2017. Simple and effective multi-paragraph reading comprehension. arXiv preprint arXiv:1710.10723. Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. 2016. Attention-overattention neural networks for reading comprehension. arXiv preprint arXiv:1607.04423. Dipanjan Das, Desai Chen, Andr´e FT Martins, Nathan Schneider, and Noah A Smith. 2014. Framesemantic parsing. Computational linguistics, 40(1):9–56. 1714 Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. 2016. Gated-attention readers for text comprehension. arXiv preprint arXiv:1606.01549. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693– 1701. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children’s books with explicit memory representations. arXiv preprint arXiv:1511.02301. Minghao Hu, Yuxing Peng, and Xipeng Qiu. 2017. Reinforced mnemonic reader for machine comprehension. CoRR, abs/1705.02798. Hsin-Yuan Huang, Chenguang Zhu, Yelong Shen, and Weizhu Chen. 2017. Fusionnet: Fusing via fullyaware attention with application to machine comprehension. arXiv preprint arXiv:1711.07341. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021–2031. Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551. Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. 2016. Text understanding with the attention sum reader network. arXiv preprint arXiv:1603.01547. Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Doll´ar. 2017. Focal loss for dense object detection. arXiv preprint arXiv:1708.02002. 
Rui Liu, Wei Wei, Weiguang Mao, and Maria Chikina. 2017a. Phase conductor on multi-layered attentions for machine comprehension. arXiv preprint arXiv:1710.10504. Xiaodong Liu, Yelong Shen, Kevin Duh, and Jianfeng Gao. 2017b. Stochastic answer networks for machine reading comprehension. arXiv preprint arXiv:1712.03556. Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2016. Natural language inference by tree-based convolution and heuristic matching. In The 54th Annual Meeting of the Association for Computational Linguistics, page 130. Boyuan Pan, Hao Li, Zhou Zhao, Bin Cao, Deng Cai, and Xiaofei He. 2017. Memen: Multi-layer embedding with memory networks for machine comprehension. arXiv preprint arXiv:1707.09098. Ankur Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2249–2255. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603. Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. 2017. Reasonet: Learning to stop reading in machine comprehension. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1047–1055. ACM. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692–2700. Shuohang Wang and Jing Jiang. 2016. Machine comprehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905. Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 189–198. Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604. Caiming Xiong, Victor Zhong, and Richard Socher. 2017. Dcn+: Mixed objective and deep residual coattention for question answering. arXiv preprint arXiv:1711.00106. Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2018. Qanet: Combining local convolution with global self-attention for reading comprehension. arXiv preprint arXiv:1804.09541.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1715–1724 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1715 Joint Training of Candidate Extraction and Answer Selection for Reading Comprehension Zhen Wang Jiachen Liu Xinyan Xiao Yajuan Lyu Tian Wu Baidu Inc., Beijing, China {wangzhen24, liujiachen, xiaoxinyan, lvyajuan, wutian}@baidu.com Abstract While sophisticated neural-based techniques have been developed in reading comprehension, most approaches model the answer in an independent manner, ignoring its relations with other answer candidates. This problem can be even worse in open-domain scenarios, where candidates from multiple passages should be combined to answer a single question. In this paper, we formulate reading comprehension as an extract-then-select twostage procedure. We first extract answer candidates from passages, then select the final answer by combining information from all the candidates. Furthermore, we regard candidate extraction as a latent variable and train the two-stage process jointly with reinforcement learning. As a result, our approach has improved the state-ofthe-art performance significantly on two challenging open-domain reading comprehension datasets. Further analysis demonstrates the effectiveness of our model components, especially the information fusion of all the candidates and the joint training of the extract-then-select procedure. 1 Introduction Teaching machines to read and comprehend human languages is a long-standing objective in natural language processing. In order to evaluate this ability, reading comprehension (RC) is designed to answer questions through reading relevant passages. In recent years, RC has attracted intense interest. Various advanced neural models have been proposed along with newly released datasets (Hermann et al., 2015; Rajpurkar et al., 2016; Dunn et al., 2017; Dhingra et al., 2017b; He et al., 2017). Q Cocktails: Rum, lime, and cola drink make a . A Cuba Libre P1 Daiquiri, the custom of mixing lime with rum for a cooling drink on a hot Cuban day, has been around a long time. P2 Cocktail recipe for a Daiquiri, a classic rum and lime drink that every bartender should know. P3 Hemingway Special Daiquiri: Daiquiris are a family of cocktails whose main ingredients are rum and lime juice. P4 A homemade Cuba Libre Preparation To make a Cuba Libre properly, fill a highball glass with ice and half fill with cola. P5 The difference between the Cuba Libre and Rum is a lime wedge at the end. Table 1: The answer candidates are in a bold font. The key information is marked in italic, which should be combined from different text pieces to select the correct answer ”Cuba Libre”. Most existing approaches mainly focus on modeling the interactions between questions and passages (Dhingra et al., 2017a; Seo et al., 2017; Wang et al., 2017), paying less attention to information concerning answer candidates. However, when human solve this problem, we often first read each piece of text, collect some answer candidates, then focus on these candidates and combine their information to select the final answer. This collect-then-select process can be more significant in open-domain scenarios, which require the combination of candidates from multiple passages to answer one single question. This phenomenon is illustrated by the example in Table 1. 
With this motivation, we formulate an extractthen-select two-stage architecture to simulate the above procedure. The architecture contains two 1716 components: (1) an extraction model, which generates answer candidates, (2) a selection model, which combines all these candidates and finds out the final answer. However, answer candidates to be focused on are often unobservable, as most RC datasets only provide golden answers. Therefore, we treat candidate extraction as a latent variable and train these two stages jointly with reinforcement learning (RL). In conclusion, our work makes the following contributions: 1. We formulate open-domain reading comprehension as a two-stage procedure, which first extracts answer candidates and then selects the final answer. With joint training, we optimize these two correlated stages as a whole. 2. We propose a novel answer selection model, which combines the information from all the extracted candidates using an attention-based correlation matrix. As shown in experiments, the information fusion is greatly helpful for answer selection. 3. With the two-stage framework and the joint training strategy, our method significantly surpasses the state-of-the-art performance on two challenging public RC datasets Quasar-T (Dhingra et al., 2017b) and SearchQA (Dunn et al., 2017). 2 Related Work In recent years, reading comprehension has made remarkable progress in methodology and dataset construction. Most existing approaches mainly focus on modeling sophisticated interactions between questions and passages, then use the pointer networks (Vinyals et al., 2015) to directly model the answers (Dhingra et al., 2017a; Wang and Jiang, 2017; Seo et al., 2017; Wang et al., 2017). These methods prove to be effective in existing close-domain datasets (Hermann et al., 2015; Hill et al., 2015; Rajpurkar et al., 2016). More recently, open-domain RC has attracted increasing attention (Nguyen et al., 2016; Dunn et al., 2017; Dhingra et al., 2017b; He et al., 2017) and raised new challenges for question answering techniques. In these scenarios, a question is paired with multiple passages, which are often collected by exploiting unstructured documents or web data. Aforementioned approaches often rely on recurrent neural networks and sophisticated attentions, which are prohibitively time-consuming if passages are concatenated altogether. Therefore, some work tried to alleviate this problem in a coarse-to-fine schema. Wang et al. (2018a) combined a ranker for selecting the relevant passage and a reader for producing the answer from it. However, this approach only depended on one passage when producing the answer, hence put great demands on the precisions of both components. Worse still, this framework cannot handle the situation where multiple passages are needed to answer correctly. In consideration of evidence aggregation, Wang et al. (2018b) proposed a re-ranking method to resolve the above issue. However, their re-ranking stage was totally isolated from the candidate extraction procedure. Being different from the re-ranking perspective, we propose a novel selection model to combine the information from all the extracted candidates. Moreover, with reinforcement learning, our candidate extraction and answer selection models can be learned in a joint manner. Trischler et al. (2016) also proposed a two-step extractor-reasoner model, which first extracted K most probable single-token answer candidates and then compared the hypotheses with all the sentences in the passage. 
However, in their work, each candidate was considered isolatedly, and their objective only took into account the ground truths compared with our RL treatment. The training strategy employed in our paper is reinforcement learning, which is inspired by recent work exploiting it into question answering problem. The above mentioned coarse-to-fine framework (Choi et al., 2017; Wang et al., 2018a) treated sentence selection as a latent variable and jointly trained the sentence selection module with the answer generation module via RL. Shen et al. (2017) modeled the multi-hop reasoning procedure with a termination state to decide when it is adequate to produce an answer. RL is suitable to capture this stochastic behavior. Hu et al. (2018) merely modeled the extraction process, using F1 as rewards in addition to maximum likelihood estimation. RL was utilized in their training process, as the F1 measure is not differentiable. 3 Two-stage RC Framework In this work, we mainly consider the open-domain extractive reading comprehension. In this scenario, a given question Q is paired with multiple passages P = {P1, P2, ..., PN}, based on which we aim to find out the answer A. Moreover, the golden answers are almost subspans shown in 1717 Q P P P 1 2 N ... C A Candidate Extraction Answer Selection Figure 1: Two-stage RC Framework. The first part extracts candidates (denoted with circles) from all the passages. The second part establishes interactions among all these candidates to select the final answer. The different gray scales of dashed lines between candidates represent different intensities of interactions. some passages in P. Our main framework consists of two parts, which are: (1) extracting answer candidates C = {C1, C2, ..., CM} from passages P and (2) selecting the final answer A from candidates C. This process is illustrated in Figure 1. We design different models for each part and optimize them as a whole with joint reinforcement learning. 3.1 Candidate Extraction We build candidate set C by independently extracting K candidates from each passage Pi according to the following distribution: p(C|Q, P) = N Y i p({Cij}K j=1|Q, Pi) C = N [ i=1 {Cij}K j=1 (1) where Cij denotes the jth candidate extracted from the ith passage. K is set as a constant number in our formulation. Taking K as 2 for an example, we denote each probability shown on the right side of Equation 1 through sampling without replacement: p({Ci1, Ci2}) = p(Ci1)p(Ci2)/(1 −p(Ci1)) + p(Ci1)p(Ci2)/(1 −p(Ci2)) (2) where we neglect Q, Pi to abbreviate the conditional distributions in Equation 1. Consequently, the basic block of our candidate extraction stage turns out to be the distribution of each candidate P(Cij|Q, Pi). In the rest of this subsection, we will elaborate on the model archiAttention Start End Question & Passage Representation Question & Passage Interaction Candidate Scoring Question Passage 1 xP lxP P 2 xP 1 xQ lxQ Q HQ HP HP ~ GP bP eP … … … … … Q P Figure 2: Candidate Extraction Model Architecture. tecture concerning candidate extraction, which is displayed in Figure 2. Question & Passage Representation Firstly, we embed the question Q = {xk Q}lQ k=1 and its relevant passage P = {xt P }lP t=1 ∈P with word vectors to form Q ∈Rdw×lQ and P ∈Rdw×lP respectively, where dw is the dimension of word embeddings, lQ and lP are the length of Q and P. 
We then feed Q and P to a bidirectional LSTM to form their contextual representations HQ ∈ Rdh×lQ and HP ∈Rdh×lP : HQ = BiLSTM(Q) HP = BiLSTM(P) (3) Question & Passage Interaction Modeling the interactions between questions and passages is a critical step in reading comprehension. Here, we adopt the attention mechanism similar to (Lee et al., 2016) to generate question-dependent passage representation eHP . Assume HQ = {hk Q}lQ k=1, HP = {ht P }lP t=1 , we have: αtk = ehk Q·ht P PlQ k=1 ehk Q·ht P 1 ≤k ≤lQ, 1 ≤t ≤lP eh t P = lQ X k=1 αtkhk Q 1 ≤t ≤lP eHP ={eh t P }lP t=1 (4) After concatenating two kinds of passage representations HP and eHP , we use another bidirectional LSTM to get the final representation of every position in passage P as GP ∈Rdg×lP : GP = BiLSTM([HP ; eHP ]) (5) 1718 Candidate Scoring Then we use two linear transformations wb ∈R1×dg and we ∈R1×dg to calculate the begin and the end scores for each position: {bt P }lQ t=1 = bP = wbGP {et P }lQ t=1 = eP = weGP (6) At last, we model the probability of every subspan in passage P as a candidate C = {xt P }Ce t=Cb according to its begin and end position: p(C|Q, P) = exp(bCb P + eCe P ) PlP k=1 PlP t=k exp(bk P + et P ) (7) In this definition, the probabilities of all the valid answer candidates are already normalized. 3.2 Answer Selection As the second part of our framework, the answer selection model finds out the most probable answer by calculating p(C|Q, P, C) for each candidate C ∈C. The model architecture is illustrated in Figure 3. Notably, selection model receives candidate set C as additional information. This more focused information allows the model to combine evidences from all the candidates, which would be useful for selecting the best answer. For ease of understanding, we briefly describe the selection stage as follows. After being extracted from a single passage, a candidate borrows information from other candidates across different passages. With this global information, the passage is reread to confirm the correctness of the candidate further. The following are details about the selection model. Question Representation Questions are fundamental for finding out the correct answer. As did for the extraction model, we embed the question Q with word vectors to form Q ∈Rdw×lQ. Then we use a bidirectional LSTM to establish its contextual representation: Sq = BiLSTM(Q) (8) A max-pooling operation across all the positions is followed to get the condensed vector representation: rq = MaxPooling(Sq) (9) Question Representation Passage Representation Candidate Representation Answer Scoring Question Passage 1 xP lxP P 2 xP 1 xQ lxQ Q SQ SP … … … … MaxPooling rQ ... MaxPooling Sc rC FP rC ~ rC rC rC 1 2 M zC s Candidates Q RP Figure 3: Answer Selection Model Architecture. Passage Representation Assume the candidate C is extracted from the passage P ∈P. To be informed of C, we first build the representation of P. For every word in P, three kinds of features are utilized: • Word embedding: each word expresses its basic feature with the word vector. • Common word: the feature has value 1 when the word occurs in the question, otherwise 0. • Question independent representation: the condensed representation rq. With these features, information not only in Q but also in P is considered. By concatenating them, we get rt P corresponding to every position t in passage P. 
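Returning briefly to the candidate scoring step, the span-level normalization of Equation 7 can be sketched with NumPy as follows; this is an illustrative reimplementation rather than the released model, with b and e standing for the position-wise begin and end scores of Equation 6.

import numpy as np

def span_probs(b, e):
    """b, e: 1-D arrays of begin/end scores (Eq. 6). Returns an (l_P, l_P) matrix
    whose (k, t) entry is p(span k..t) from Eq. 7, normalised over valid spans."""
    l = len(b)
    scores = b[:, None] + e[None, :]               # b_k + e_t for every pair (k, t)
    scores = scores - scores.max()                 # shift for numerical stability
    valid = np.triu(np.ones((l, l), dtype=bool))   # keep only spans with k <= t
    exp_scores = np.where(valid, np.exp(scores), 0.0)
    return exp_scores / exp_scores.sum()

b = np.array([1.2, 0.1, -0.5, 2.0])
e = np.array([0.3, 1.5, 0.0, 0.7])
probs = span_probs(b, e)
print(probs[0, 1])   # probability that the candidate covers positions 0..1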
Then with another bidirectional LSTM, we fuse these features to form the contextual representation of P as SP ∈Rds×lP : RP = {rt P }lP t=1 SP = BiLSTM(RP ) (10) Candidate Representation Candidates provide more focused information for answer selection. Therefore, for each candidate, we first build its independent representation according to its position in the passage, then construct candidates fused representation through combination of other correlated candidates. Given the candidate C = {xt P }Ce t=Cb in the passage P, we extract its corresponding span from SP = {st P }lP t=1 to form SC = {st P }Ce t=Cb as its contextual encoding. Moreover, we calculate its condensed vector representation through its begin 1719 and end positions: rC = tanh(WbsCb P + WesCe P ) (11) where Wb ∈Rdc×ds, We ∈Rdc×ds. To model the interactions among all the answer candidates, we calculate the correlations of the candidate C, which is assumed to be indexed by j in C, with others {Cm}M m=1,m̸=j via attention mechanism: Vjm = wvtanh(WcrC + WorCm) (12) where Wc ∈Rdc×dc, Wo ∈Rdc×dc and wv ∈ R1×dc are linear transformations to capture the intensity of each interaction. In this way, we form a correlation matrix V ∈ RM×M, where M is the total number of candidates. With the correlation matrix, for the candidate C, we normalize its interactions via a softmax operation, which emphasizes the influence of stronger interactions: αm = eVjm PM m=1,m̸=j eVjm (13) To take into account different influences of all the other candidates, it is sensible to generate a candidates fused representation according to the above normalized interactions: erC = M X m=1,m̸=j αmrCm (14) In this formulation, all the other candidates contribute their influences to the fused representation by their interactions with C, thus information from different passages is gathered altogether. In our experiments, this kind of information fusion is the key point for performance improvements. Passage Advanced Representation As more focused information of the candidate C is available, we are provided with a better way to confirm its correctness by rereading its corresponding passage P. Specifically, we equip each position t in P with following advanced features: • Passage contextual representation: the former passage representation st P . • Candidate-dependent passage representation: replace HQ with SC and HP with SP in Equation 4 to model the interactions between candidates and passages to form est P . • Candidate related distance feature: the relative distance to the candidate C can be a reference of the importance of each position. • Candidate independent representation: use rC to consider the concerned candidate C. • Candidates fused representation: use erC to consider all the other candidates interacting with the concerned candidate C. With these features, we capture the information from the question, the passages and all the candidates. By concatenating them, we get ut P in every position in the passage P. 
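The candidates fused representation of Equations 12-14 can be sketched as follows; the shapes and variable names are ours, and the random tensors are only placeholders for the learned parameters and candidate vectors.

import numpy as np

def fused_representations(R, Wc, Wo, wv):
    """R: (M, d_c) condensed candidate vectors r_C; Wc, Wo: (d_c, d_c); wv: (d_c,)."""
    M = R.shape[0]
    V = np.empty((M, M))
    for j in range(M):
        for m in range(M):
            V[j, m] = wv @ np.tanh(Wc @ R[j] + Wo @ R[m])   # Eq. 12
    fused = np.empty_like(R)
    for j in range(M):
        scores = np.delete(V[j], j)              # exclude the candidate itself
        others = np.delete(R, j, axis=0)
        alpha = np.exp(scores - scores.max())
        alpha /= alpha.sum()                     # Eq. 13: softmax over the other candidates
        fused[j] = alpha @ others                # Eq. 14: weighted sum of the others
    return fused

rng = np.random.default_rng(0)
M, d = 4, 8
print(fused_representations(rng.normal(size=(M, d)),
                            rng.normal(size=(d, d)),
                            rng.normal(size=(d, d)),
                            rng.normal(size=d)).shape)   # (4, 8)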
Combining these features with a bidirectional LSTM, we get: UP = {ut P }lP t=1 FP = BiLSTM(UP ) (15) Answer Scoring At last, the max pooling of each dimension of FP is performed, resulting in a condensed vector representation, which contains all the concerned information in a candidate: zC = MaxPooling(FP ) (16) The final score of this candidate as the answer is calculated via a linear transformation, which is then normalized across all the candidates: s = wzzC p(C|Q, P, C) = es PM k=1 esk (17) 3.3 Joint Training with RL In our formulation, the answer candidate set influences the result of answer selection to a large extent. However, with only golden answers provided in the training data, it is not apparent which candidates should be considered further. To alleviate the above problem, we treat candidate extraction as a latent variable, jointly train the extraction model and the selection model with reinforcement learning. Formally, in the extraction and selection stages, two kinds of actions are modeled. The action space for the extraction model is to select from different candidate sets, which is formulated by Equation 1. The action space for the selection model is to select from all extracted candidates, which is formulated by Equation 17. Our goal is to select the final answer that leads to a high reward. Inspired by Wang et al. (2018a), 1720 we define the reward of a candidate to reflect its accordance with the golden answer: r(C, A) =    2 if C == A f1(C, A) else if C ∩A ̸= ∅ −1 else (18) where f1(., .) ∈[0, 1] is the function to measure word-level F1 score between two sequences. Incorporating this reward can alleviate the overstrict requirements set by traditional maximum likelihood estimation as well as keep consistent with our evaluation methods in experiments. The learning objective becomes to maximize the expected reward modeled by our framework, where θ stands for all the parameters involved: L(θ) = −EC∼P(C|Q,P)[EC∼P(C|Q,P,C)r(C, A)] = −EC∼P(C|Q,P)[ X C P(C|Q, P, C)r(C, A)] (19) Following REINFORCE algorithm, we approximate the gradient of the above objective with a sampled candidate set, C ∼P(C|Q, P), resulting in the following form: ∇L(θ) ≈− X C ∇P(C|Q, P, C)r(C, A) −∇logP(C|Q, P)[ X C P(C|Q, P, C)r(C, A)] (20) 4 Experiments 4.1 Datasets We evaluate our models on two publicly available open-domain RC datasets, which are commonly adopted in related work. Quasar-T (Dhingra et al., 2017b) consists of 43,000 open-domain trivia questions and corresponding answers obtained from various internet sources. Each question is paired with 100 sentence-level passages retrieved from ClueWeb09 (Callan et al., 2009) based on Lucene. SearchQA (Dunn et al., 2017) starts from existing question-answer pairs, which are crawled from J!Archive, and is augmented with text snippets retrieved by Google, resulting in more than 140,000 question-answer pairs with each pair having 49.6 snippets on average. The detailed statistics of these two datasets is shown in Table 2. #q(train) #q(dev) #q(test) #p Quasar-T 28,496 3,000 3,000 100 SearchQA 99,811 13,893 27,247 50 Table 2: The statistics of our experimental datasets. #q represents the number of questions for each split of the datasets. #p is the number of passages for each question. 4.2 Model Settings We initialize word embeddings with the 300dimensional Glove vectors1. All the bidirectional LSTMs hold 1 layer and 100 hidden units. All the linear transformations take the size of 100 as output dimension. 
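For reference, the reward of Equation 18 can be computed along the following lines. This is our own sketch: the paper does not spell out how the overlap test C ∩ A ≠ ∅ is implemented, so word-level overlap is assumed here, consistent with the word-level F1 used in the second case. This reward then enters the expected-reward objective of Equations 19-20, which is optimized with REINFORCE.

from collections import Counter

def word_f1(pred, gold):
    pred_toks, gold_toks = pred.split(), gold.split()
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

def reward(candidate, answer):
    if candidate == answer:
        return 2.0                                  # exact match
    if set(candidate.split()) & set(answer.split()):
        return word_f1(candidate, answer)           # partial overlap
    return -1.0                                     # no overlap

print(reward("cuba libre", "cuba libre"))    # 2.0
print(reward("a cuba libre", "cuba libre"))  # 0.8
print(reward("daiquiri", "cuba libre"))      # -1.0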
The common word feature and the candidate related distance feature are embedded with vectors of dimension 4 and 50 respectively. By default, we set K as 2 in Equation 1, which means each passage generates two candidates based on the extraction model. For ease of training, we first initialize our models by maximum likelihood estimation and finetune them with RL. The similar training strategy is commonly employed when RL process is involved (Ranzato et al., 2015; Li et al., 2016a; Hu et al., 2018). To pre-train the extraction model, we only use passages containing ground truths as training data. The log likelihood of Equation 7 is taken as the training objective for each question and passage pair. After pre-training the extraction model, we use it to generate two top-scoring candidates from each passage, forming the training data to pre-train our selection model, and maximize the log likelihood of the Equation 17 as our second objective. In pre-training, we use the batch size of 30 for the extraction model, 20 for the selection model and RMSProp (Tieleman and Hinton, 2012) with an initial learning rate of 2e-3. In fine-tuning with RL, we use the batch size of 5 and RMSProp with an initial learning rate of 1e-4. Also, we use a dropout rate of 0.1 in each training procedure. 4.3 Experimental Results In addition to results of previous work, we add two baselines to demonstrate the effectiveness of our framework. The first baseline only applies the extraction model to score the answers, which is aimed at explaining the importance of the selection model. The second one only uses the pre-trained extraction model and selection model 1http://nlp.stanford.edu/data/wordvecs/glove.840B.300d.zip 1721 Quasar-T SearchQA EM F1 EM F1 GA (Dhingra et al., 2017a) 26.4 26.4 BIDAF (Seo et al., 2017) 25.9 28.5 28.6 34.6 AQA (Buck et al., 2018) 38.7 45.6 R3 (Wang et al., 2018a) 35.3 41.7 49.0 55.3 Re-Ranker (Wang et al., 2018b) Strength-Based Re-Ranker (Probability) 36.1 42.4 50.4 56.5 Strength-Based Re-Ranker (Counting) 37.1 46.7 54.2 61.6 Coverage-Based Re-Raner 40.6 49.1 53.6 60.6 Full Re-Ranker 42.3 49.6 57.0 63.2 Our Methods Extraction Model 35.4 41.6 44.7 51.2 Extraction + Selection (Isolated Training) 41.6 49.5 49.7 56.6 Extraction + Selection (Joint Training) 45.9 53.9 58.3 64.2 Table 3: Experimental results on the test set of Quasar-T and SearchQA. Full re-ranker is the ensemble of three different re-rankers in (Wang et al., 2018b). to illustrate the benefits from our joint training schema. The often used evaluation metrics for extractive RC are exact match (EM) and F1 (Rajpurkar et al., 2016). The experimental results on Quasar-T and SearchQA are shown in Table 3. As seen from the results on Quasar-T, our quite simple extraction model alone almost reaches the state-of-the-art result compared with other methods without re-rankers. The combination of the extraction and selection models exceeds our extraction baseline by a great margin, and also results in performance surpassing the best single reranker in (Wang et al., 2018b). This result illustrates the necessity of introducing the selection model, which incorporates information from all the candidates. In the end, by joint training with RL, our method produces better performance even compared with the ensemble of three different rerankers. On SearchQA, we find that our extraction model alone performs not that well compared with the state-of-the-art model without re-rankers. 
However, the improvement brought by our selection model isolatedly or jointly trained still demonstrates the importance of our two-stage framework. Not surprisingly, comparing the results, our isolated training strategy still lags behind the single re-ranker proposed in (Wang et al., 2018b), partly because of the deficiency with our extraction model. However, uniting our extraction and selection models with RL makes up the disparity, and the performance surpasses the ensemble of three different re-rankers, let alone the result of Quasar-T EM F1 Extraction + Selection (Joint Training) 45.9 53.9 -question representation 42.5 50.5 -question and passage common words 41.0 48.7 -candidate independent representation 44.5 53.3 -candidate related distance feature 44.7 53.0 -candidate dependent passage representation 44.4 52.3 -candidates fused representation 39.2 45.8 Table 4: Ablation results concerning the selection model on the test set of Quasar-T. Obviously, candidates fused representation is the most evident feature when modeling the answer selection procedure. any single re-ranker. 4.4 Further Analysis Effect of Features in Selection Model As the incorporation of the selection model improves the overall performance significantly, we conduct ablation analysis on the Quasar-T to prove the effectiveness of its major components. As shown in Table 4, all these components modeling the selection procedure play important roles in our final architecture. Specifically, introducing the independent representation of the question and its common words with the passage seems an efficient way to consider the information of questions, which is consistent with previous work (Li et al., 2016b; Chen et al., 2017). As for features related to candidates, the incorporation of the candidate independent information 1722 Q Cocktails : Rum , lime , and cola drink make a . A Cuba Libre P1 In Nicaragua , when it is mixed using Flor de Ca a -LRB- the national brand of rum -RRB- and cola , it is called a Nica Libre . P2 The drink ... Daiquiri The custom of mixing lime with rum for a cooling drink on a hot Cuban day has been around a long time . P3 If you only learn to make two cocktails , the Manhattan should be one of them . P4 Daiquiri Cocktail recipe for a Daiquiri , a classic rum and lime drink that every bartender should know . P5 Hemingway Special Daiquiri : Daiquiris are a family of cocktails whose main ingredients are rum and lime juice . P6 In the Netherlands the drink is commonly called Baco , from the two ingredients of Bacardi rum and cola . P7 A homemade Cuba Libre Preparation To make a Cuba Libre properly , fill a highball glass with ice and half fill with cola . P8 Bacardi Cocktail Cocktail recipe for a Bacardi Cocktail , a classic cocktail of Bacardi rum , lemon or lime juice and grenadine Roy Rogers -LRB- non-alcoholic -RRB- Cocktail recipe for a Roy Rogers , P9 Margarita Cocktail recipe for a Margarita , a popular refreshing tequila and lime drink for summer . P10 The difference between the Cuba Libre and Rum is a lime wedge at the end . Table 5: An example from Quasar-T to illustrate the necessity of fused information. Candidates extracted from passages are in a bold font. To correctly answer the question, information in P7 and P10 should be combined. contributes to the final result more or less. These features include candidate-dependent passage representation, candidate independent representation and candidate related distance feature. 
Most importantly, the candidates fused representation, which combines the information from all the candidates, demonstrates its indispensable role in candidate modeling, with a performance drop of nearly 8% when discarded. This phenomenon also verifies the necessity of our extractthen-select procedure, showing the importance of combining information scattered in different text pieces when picking out the final answer. Example for Candidates Fused Representation We conduct a case study to demonstrate the importance of candidates fused information further. In Table 5, each candidate only partly matches the description of the question in its independent context. To correctly answer the question, information in P7 and P10 should be combined. In experiments, our selection model provides the correct answer, while the wrong candidate ”Daiquiri”, a different kind of cocktail, is selected if candidates fused representation is discarded. The attention map established when modeling the fusion of candidates (corresponding to Equation 13) in this example is illustrated in Figure 4, in which we can see the interactions among all the candidates from Nica Libre Daiquiri Manhattan Daiquiri Daiquiri Baco Cuba Libre Bacardi Margarita Cuba Libre Nica Libre Daiquiri Manhattan Daiquiri Daiquiri Baco Cuba Libre Bacardi Margarita Cuba Libre 0.0 0.2 0.4 0.6 0.8 1.0 Figure 4: The attention map generated when modeling candidates fused representations for the example in Table 5. Quasar-T EM F1 K=1 43.9 52.4 K=2 45.9 53.9 K=3 45.8 53.9 Table 6: Different number of extracted candidates results in different final performance on the test set of Quasar-T. different passages. In this figure, it is obvious that the interaction of ”Cuba Libre” in P7 and P10 is the key point to answer the question correctly. Effect of Candidate Number The candidate extraction stage takes an important role to decide what information should be focused on further. Therefore, we also test the influence of different K when extracting candidates from each passage. The results are shown in Table 6. Taking K = 1 degrades the performance, which conforms to the expectation, as the correct candidates become less in this stricter situation. However, taking K = 3 can not improve the performance further. Although a larger K means a higher possibility to include good answers, it raises more challenges for the selection model to pick out the correct one from candidates with more varieties. 5 Conclusion In this paper, we formulate the problem of RC as a two-stage process, which first generates candidates with an extraction model, then selects the final answer by combining the information from 1723 all the candidates. Furthermore, we treat candidate extraction as a latent variable and jointly train these two stages with RL. Experiments on public open-domain RC datasets Quasar-T and SearchQA show the necessity of introducing the selection model and the effectiveness of fusing candidates information when modeling. Moreover, our joint training strategy leads to significant improvements in performance. Acknowledgments This work is supported by the National Basic Research Program of China (973 program, No. 2014CB340505). We thank Ying Chen and anonymous reviewers for valuable feedback. References Christian Buck, Jannis Bulian, Massimiliano Ciaramita, Andrea Gesmundo, Neil Houlsby, Wojciech Gajewski, and Wei Wang. 2018. Ask the right questions: Active question reformulation with reinforcement learning. In ICLR. Jamie Callan, Mark Hoy, Changkuk Yoo, and Le Zhao. 
2009. Clueweb09 data set. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer opendomain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Vancouver, Canada, pages 1870–1879. Eunsol Choi, Daniel Hewlett, Jakob Uszkoreit, Illia Polosukhin, Alexandre Lacoste, and Jonathan Berant. 2017. Coarse-to-fine question answering for long documents. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Vancouver, Canada, pages 209–220. Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. 2017a. Gatedattention readers for text comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Vancouver, Canada, pages 1832–1846. Bhuwan Dhingra, Kathryn Mazaitis, and William W Cohen. 2017b. Quasar: Datasets for question answering by search and reading. arXiv preprint arXiv:1707.03904 . Matthew Dunn, Levent Sagun, Mike Higgins, Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with context from a search engine. arXiv preprint arXiv:1704.05179 . Wei He, Kai Liu, Yajuan Lyu, Shiqi Zhao, Xinyan Xiao, Yuan Liu, Yizhong Wang, Hua Wu, Qiaoqiao She, Xuan Liu, Tian Wu, and Haifeng Wang. 2017. Dureader: a chinese machine reading comprehension dataset from real-world applications. arXiv preprint arXiv:1711.05073 . Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada. pages 1693–1701. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children’s books with explicit memory representations. arXiv preprint arXiv:1511.02301 . Minghao Hu, Yuxing Peng, and Xipeng Qiu. 2018. Reinforced mnemonic reader for machine comprehension. In IJCAI. Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, and Jonathan Berant. 2016. Learning recurrent span representations for extractive question answering. arXiv preprint arXiv:1611.01436 . Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016a. Deep reinforcement learning for dialogue generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 1192–1202. Peng Li, Wei Li, Zhengyan He, Xuguang Wang, Ying Cao, Jie Zhou, and Wei Xu. 2016b. Dataset and neural recurrent sequence labeling model for opendomain factoid question answering. arXiv preprint arXiv:1607.06275 . Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In Proceedings of the Workshop on Cognitive Computation: Integrating neural and symbolic approaches 2016 colocated with the 30th Annual Conference on Neural Information Processing Systems (NIPS 2016). Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. 
Association for Computational Linguistics, Austin, Texas, pages 2383–2392. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732 . 1724 Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In ICLR. Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. 2017. Reasonet: Learning to stop reading in machine comprehension. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, pages 1047–1055. Tijmen Tieleman and Geoffrey Hinton. 2012. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning 4(2):26–31. Adam Trischler, Zheng Ye, Xingdi Yuan, Philip Bachman, Alessandro Sordoni, and Kaheer Suleman. 2016. Natural language comprehension with the epireader. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 128–137. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada. pages 2692–2700. Shuohang Wang and Jing Jiang. 2017. Machine comprehension using match-lstm and answer pointer. In ICLR. Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerald Tesauro, Bowen Zhou, and Jing Jiang. 2018a. R3: Reinforced reader-ranker for open-domain question answering. In AAAI. Shuohang Wang, Mo Yu, Jing Jiang, Wei Zhang, Xiaoxiao Guo, Shiyu Chang, Zhiguo Wang, Tim Klinger, Gerald Tesauro, and Murray Campbell. 2018b. Evidence aggregation for answer re-ranking in open-domain question answering. In ICLR. Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. volume 1, pages 189–198.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 162–173 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 162 Simple and Effective Text Simplification Using Semantic and Neural Methods Elior Sulem, Omri Abend, Ari Rappoport Department of Computer Science, The Hebrew University of Jerusalem {eliors|oabend|arir}@cs.huji.ac.il Abstract Sentence splitting is a major simplification operator. Here we present a simple and efficient splitting algorithm based on an automatic semantic parser. After splitting, the text is amenable for further fine-tuned simplification operations. In particular, we show that neural Machine Translation can be effectively used in this situation. Previous application of Machine Translation for simplification suffers from a considerable disadvantage in that they are overconservative, often failing to modify the source in any way. Splitting based on semantic parsing, as proposed here, alleviates this issue. Extensive automatic and human evaluation shows that the proposed method compares favorably to the stateof-the-art in combined lexical and structural simplification. 1 Introduction Text Simplification (TS) is generally defined as the conversion of a sentence into one or more simpler sentences. It has been shown useful both as a preprocessing step for tasks such as Machine Translation (MT; Mishra et al., 2014; ˇStajner and Popovi´c, 2016) and relation extraction (Niklaus et al., 2016), as well as for developing reading aids, e.g. for people with dyslexia (Rello et al., 2013) or non-native speakers (Siddharthan, 2002). TS includes both structural and lexical operations. The main structural simplification operation is sentence splitting, namely rewriting a single sentence into multiple sentences while preserving its meaning. While recent improvement in TS has been achieved by the use of neural MT (NMT) approaches (Nisioi et al., 2017; Zhang et al., 2017; Zhang and Lapata, 2017), where TS is considered a case of monolingual translation, the sentence splitting operation has not been addressed by these systems, potentially due to the rareness of this operation in the training corpora (Narayan and Gardent, 2014; Xu et al., 2015). We show that the explicit integration of sentence splitting in the simplification system could also reduce conservatism, which is a grave limitation of NMT-based TS systems (Alva-Manchego et al., 2017). Indeed, experimenting with a stateof-the-art neural system (Nisioi et al., 2017), we find that 66% of the input sentences remain unchanged, while none of the corresponding references is identical to the source. Human and automatic evaluation of the references (against other references), confirm that the references are indeed simpler than the source, indicating that the observed conservatism is excessive. Our methods for performing sentence splitting as pre-processing allows the TS system to perform other structural (e.g. deletions) and lexical (e.g. word substitutions) operations, thus increasing both structural and lexical simplicity. For combining linguistically informed sentence splitting with data-driven TS, two main methods have been proposed. The first involves handcrafted syntactic rules, whose compilation and validation are laborious (Shardlow, 2014). For example, Siddharthan and Angrosh (2014) used 111 rules for relative clauses, appositions, subordination and coordination. 
Moreover, syntactic splitting rules, which form a substantial part of the rules, are usually language specific, requiring the development of new rules when ported to other languages (Alu´ısio and Gasperin, 2010; Seretan, 2012; Hung et al., 2012; Barlacchi and Tonelli, 2013, for Portuguese, French, Vietnamese, and Italian respectively). The second method uses linguistic information for detecting potential splitting points, while splitting probabilities are learned us163 ing a parallel corpus. For example, in the system of Narayan and Gardent (2014) (henceforth, HYBRID), the state-of-the-art for joint structural and lexical TS, potential splitting points are determined by event boundaries. In this work, which is the first to combine structural semantics and neural methods for TS, we propose an intermediate way for performing sentence splitting, presenting Direct Semantic Splitting (DSS), a simple and efficient algorithm based on a semantic parser which supports the direct decomposition of the sentence into its main semantic constituents. After splitting, NMT-based simplification is performed, using the NTS system. We show that the resulting system outperforms HYBRID in both automatic and human evaluation. We use the UCCA scheme for semantic representation (Abend and Rappoport, 2013), where the semantic units are anchored in the text, which simplifies the splitting operation. We further leverage the explicit distinction in UCCA between types of Scenes (events), applying a specific rule for each of the cases. Nevertheless, the DSS approach can be adapted to other semantic schemes, like AMR (Banarescu et al., 2013). We collect human judgments for multiple variants of our system, its sub-components, HYBRID and similar systems that use phrase-based MT. This results in a sizable human evaluation benchmark, which includes 28 systems, totaling at 1960 complex-simple sentence pairs, each annotated by three annotators using four criteria.1 This benchmark will support the future analysis of TS systems, and evaluation practices. Previous work is discussed in §2, the semantic and NMT components we use in §3 and §4 respectively. The experimental setup is detailed in §5. Our main results are presented in §6, while §7 presents a more detailed analysis of the system’s sub-components and related settings. 2 Related Work MT-based sentence simplification. Phrasebased Machine Translation (PBMT; Koehn et al., 2003) was first used for TS by Specia (2010), who showed good performance on lexical simplification and simple rewriting, but under-prediction of other operations. ˇStajner et al. (2015) took a similar approach, finding that it is beneficial to use training data where the source side is 1The benchmark can be found in https://github. com/eliorsulem/simplification-acl2018. highly similar to the target. Other PBMT for TS systems include the work of Coster and Kauchak (2011b), which uses Moses (Koehn et al., 2007), the work of Coster and Kauchak (2011a), where the model is extended to include deletion, and PBMT-R (Wubben et al., 2012), where Levenshtein distance to the source is used for re-ranking to overcome conservatism. The NTS NMT-based system (Nisioi et al., 2017) (henceforth, N17) reported superior performance over PBMT in terms of BLEU and human evaluation scores, and serves as a component in our system (see Section 4). Zhang et al. (2017) took a similar approach, adding lexical constraints to an NMT model. 
Zhang and Lapata (2017) combined NMT with reinforcement learning, using SARI (Xu et al., 2016), BLEU, and cosine similarity to the source as the reward. None of these models explicitly addresses sentence splitting. Alva-Manchego et al. (2017) proposed to reduce conservatism, observed in PBMT and NMT systems, by first identifying simplification operations in a parallel corpus and then using sequencelabeling to perform the simplification. However, they did not address common structural operations, such as sentence splitting, and claimed that their method is not applicable to them. Xu et al. (2016) used Syntax-based Machine Translation (SBMT) for sentence simplification, using a large scale paraphrase dataset (Ganitketitch et al., 2013) for training. While it does not target structural simplification, we include it in our evaluation for completeness. Structural sentence simplification. Syntactic hand-crafted sentence splitting rules were proposed by Chandrasekar et al. (1996), Siddharthan (2002), Siddhathan (2011) in the context of rulebased TS. The rules separate relative clauses and coordinated clauses and un-embed appositives. In our method, the use of semantic distinctions instead of syntactic ones reduces the number of rules. For example, relative clauses and appositives can correspond to the same semantic category. In syntax-based splitting, a generation module is sometimes added after the split (Siddharthan, 2004), addressing issues such as reordering and determiner selection. In our model, no explicit regeneration is applied to the split sentences, which are fed directly to an NMT system. Glavaˇs and ˇStajner (2013) used a rule-based system conditioned on event extraction and syntax 164 for defining two simplification models. The eventwise simplification one, which separates events to separate output sentences, is similar to our semantic component. Differences are in that we use a single semantic representation for defining the rules (rather than a combination of semantic and syntactic criteria), and avoid the need for complex rules for retaining grammaticality by using a subsequent neural component. Combined structural and lexical TS. Earlier TS models used syntactic information for splitting. Zhu et al. (2010) used syntactic information on the source side, based on the SBMT model of Yamada and Knight (2001). Syntactic structures were used on both sides in the model of Woodsend and Lapata (2011), based on a quasi-synchronous grammar (Smith and Eisner, 2006), which resulted in 438 learned splitting rules. The model of Siddharthan and Angrosh (2014) is similar to ours in that it combines linguistic rules for structural simplification and statistical methods for lexical simplification. However, we use 2 semantic splitting rules instead of their 26 syntactic rules for relative clauses and appositions, and 85 syntactic rules for subordination and coordination. Narayan and Gardent (2014) argued that syntactic structures do not always capture the semantic arguments of a frame, which may result in wrong splitting boundaries. Consequently, they proposed a supervised system (HYBRID) that uses semantic structures (Discourse Semantic Representations, (Kamp, 1981)) for sentence splitting and deletion. Splitting candidates are pairs of event variables associated with at least one core thematic role (e.g., agent or patient). Semantic annotation is used on the source side in both training and test. Lexical simplification is performed using the Moses system. 
HYBRID is the most similar system to ours architecturally, in that it uses a combination of a semantic structural component and an MT component. Narayan and Gardent (2016) proposed instead an unsupervised pipeline, where sentences are split based on a probabilistic model trained on the semantic structures of Simple Wikipedia as well as a language model trained on the same corpus. Lexical simplification is there performed using the unsupervised model of Biran et al. (2011). As their BLEU and adequacy scores are lower than HYBRID’s, we use the latter for comparison. ˇStajner and Glavaˇs (2017) combined rule-based simplification conditioned on event extraction, together with an unsupervised lexical simplifier. They tackle a different setting, and aim to simplify texts (rather than sentences), by allowing the deletion of entire input sentences. Split and Rephrase. Narayan et al. (2017) recently proposed the Split and Rephrase task, focusing on sentence splitting. For this purpose they presented a specialized parallel corpus, derived from the WebNLG dataset (Gardent et al., 2017). The latter is obtained from the DBPedia knowledge base (Mendes et al., 2012) using content selection and crowdsourcing, and is annotated with semantic triplets of subject-relation-object, obtained semi-automatically. They experimented with five systems, including one similar to HYBRID, as well as sequence-to-sequence methods for generating sentences from the source text and its semantic forms. The present paper tackles both structural and lexical simplification, and examines the effect of sentence splitting on the subsequent application of a neural system, in terms of its tendency to perform other simplification operations. For this purpose, we adopt a semantic corpus-independent approach for sentence splitting that can be easily integrated in any simplification system. Another difference is that the semantic forms in Split and Rephrase are derived semi-automatically (during corpus compilation), while we automatically extract the semantic form, using a UCCA parser. 3 Direct Semantic Splitting 3.1 Semantic Representation UCCA (Universal Cognitive Conceptual Annotation; Abend and Rappoport, 2013) is a semantic annotation scheme rooted in typological and cognitive linguistic theory (Dixon, 2010b,a, 2012; Langacker, 2008). It aims to represent the main semantic phenomena in the text, abstracting away from syntactic forms. UCCA has been shown to be preserved remarkably well across translations (Sulem et al., 2015) and has also been successfully used for the evaluation of machine translation (Birch et al., 2016) and, recently, for the evaluation of TS (Sulem et al., 2018) and grammatical error correction (Choshen and Abend, 2018). Formally, UCCA structures are directed acyclic graphs whose nodes (or units) correspond either to the leaves of the graph or to several elements viewed as a single entity according to some semantic or cognitive consideration. 165 Figure 1: Example applications of rules 1 (Figure 1a) and 2 (Figure 1b). In both cases, the original sentence, the semantic parse, the extracted Scenes with the required modifications, and the output of the rules are presented top to bottom. The UCCA categories used are: Parallel Scene (H), Linker (L), Participant (A), Process/State (P/S), Center (C), Elaborator (E), Relator (R). A Scene is UCCA’s notion of an event or a frame, and is a unit that corresponds to a movement, an action or a state which persists in time. 
Every Scene contains one main relation, which can be either a Process or a State. Scenes contain one or more Participants, interpreted in a broad sense to include locations and destinations. For example, the sentence “He went to school” has a single Scene whose Process is “went”. The two Participants are “He” and “to school”. Scenes can have several roles in the text. First, they can provide additional information about an established entity (Elaborator Scenes), commonly participles or relative clauses. For example, “(child) who went to school” is an Elaborator Scene in “The child who went to school is John” (“child” serves both as an argument in the Elaborator Scene and as the Center). A Scene may also be a Participant in another Scene. For example, “John went to school” in the sentence: “He said John went to school”. In other cases, Scenes are annotated as Parallel Scenes (H), which are flat structures and may include a Linker (L), as in: “WhenL [he arrives]H, [he will call them]H”. With respect to units which are not Scenes, the category Center denotes the semantic head. For example, “dogs” is the Center of the expression “big brown dogs”, and “box” is the center of “in the box”. There could be more than one Center in a unit, for example in the case of coordination, where all conjuncts are Centers. We define the minimal center of a UCCA unit u to be the UCCA graph’s leaf reached by starting from u and iteratively selecting the child tagged as Center. For generating UCCA’s structures we use TUPA, a transition-based parser (Hershcovich et al., 2017) (specifically, the TUPABiLSTM model). TUPA uses an expressive set of transitions, able to support all structural properties required by the UCCA scheme. Its transition classifier is based on an MLP that receives a BiLSTM encoding of elements in the parser state (buffer, stack and intermediate graph), given word embeddings and other features. 3.2 The Semantic Rules For performing DSS, we define two simple splitting rules, conditioned on UCCA’s categories. We currently only consider Parallel Scenes and Elaborator Scenes, not separating Participant Scenes, in order to avoid splitting in cases of nominalizations or indirect speech. For example, the sentence “His arrival surprised everyone”, which has, in addition to the Scene evoked by “surprised”, a Participant Scene evoked by “arrival”, is not split here. Rule #1. Parallel Scenes of a given sentence are extracted, separated in different sentences, and concatenated according to the order of appearance. More formally, given a decomposition of a sentence S into parallel Scenes Sc1, Sc2, · · · Scn (indexed by the order of the first token), we obtain the 166 following rule, where “|” is the sentence delimiter: S −→Sc1|Sc2| · · · |Scn As UCCA allows argument sharing between Scenes, the rule may duplicate the same sub-span of S across sentences. For example, the rule will convert “He came back home and played piano” into “He came back home”|“He played piano.” Rule #2. Given a sentence S, the second rule extracts Elaborator Scenes and corresponding minimal centers. Elaborator Scenes are then concatenated to the original sentence, where the Elaborator Scenes, except for the minimal center they elaborate, are removed. Pronouns such as “who”, “which” and “that” are also removed. Formally, if {(Sc1, C1) · · · (Scn, Cn)} are the Elaborator Scenes of S and their corresponding minimal centers, the rewrite is: S −→S − n [ i=1 (Sci −Ci)|Sc1| · · · |Scn where S−A is S without the unit A. 
For example, this rule converts the sentence “He observed the planet which has 14 known satellites” to “He observed the planet| Planet has 14 known satellites.”. Article regeneration is not covered by the rule, as its output is directly fed into the NMT component. After the extraction of Parallel Scenes and Elaborator Scenes, the resulting simplified Parallel Scenes are placed before the Elaborator Scenes. See Figure 1. 4 Neural Component The split sentences are run through the NTS stateof-the-art neural TS system (Nisioi et al., 2017), built using the OpenNMT neural machine translation framework (Klein et al., 2017). The architecture includes two LSTM layers, with hidden states of 500 units in each, as well as global attention combined with input feeding (Luong et al., 2015). Training is done with a 0.3 dropout probability (Srivastava et al., 2014). This model uses alignment probabilities between the predictions and the original sentences, rather than characterbased models, to retrieve the original words. We here consider the w2v initialization for NTS (N17), where word2vec embeddings of size 300 are trained on Google News (Mikolov et al., 2013a) and local embeddings of size 200 are trained on the training simplification corpus ( ˇReh˚uˇrek and Sojka, 2010; Mikolov et al., 2013b). Local embeddings for the encoder are trained on the source side of the training corpus, while those for the decoder are trained on the simplified side. For sampling multiple outputs from the system, beam search is performed during decoding by generating the first 5 hypotheses at each step ordered by the log-likelihood of the target sentence given the input sentence. We here explore both the highest (h1) and fourth-ranked (h4) hypotheses, which we show to increase the SARI score and to be much less conservative.2 We thus experiment with two variants of the neural component, denoted by NTS-h1 and NTS-h4. The pipeline application of the rules and the neural system results in two corresponding models: SENTS-h1 and SENTS-h4. 5 Experimental Setup Corpus All systems are tested on the test corpus of Xu et al. (2016),3 comprising 359 sentences from the PWKP corpus (Zhu et al., 2010) with 8 references collected by crowdsourcing for each of the sentences. Semantic component. The TUPA parser4 is trained on the UCCA-annotated Wiki corpus.5 Neural component. We use the NTS-w2v model6 provided by N17, obtained by training on the corpus of Hwang et al. (2015) and tuning on the corpus of Xu et al. (2016). The training set is based on manual and automatic alignments between standard English Wikipedia and Simple English Wikipedia, including both good matches and partial matches whose similarity score is above the 0.45 scale threshold (Hwang et al., 2015). The total size of the training set is about 280K aligned sentences, of which 150K sentences are full matches and 130K are partial matches.7 Comparison systems. We compare our findings to HYBRID, which is the state of the art for joint structural and lexical simplification, imple2Similarly, N17 considered the first two hypotheses and showed that h2 has an higher SARI score and is less conservative than h1. 3https://github.com/cocoxu/ simplification (This also includes SARI tools and the SBMT-SARI system.) 4https://github.com/danielhers/tupa 5http://www.cs.huji.ac.il/˜oabend/ ucca.html 6https://github.com/senisioi/ NeuralTextSimplification 7We also considered the default initialization for the neural component, using the NTS model without word embeddings. 
5 Experimental Setup

Corpus. All systems are tested on the test corpus of Xu et al. (2016) (available at https://github.com/cocoxu/simplification, which also includes SARI tools and the SBMT-SARI system), comprising 359 sentences from the PWKP corpus (Zhu et al., 2010) with 8 references collected by crowdsourcing for each of the sentences.

Semantic component. The TUPA parser (https://github.com/danielhers/tupa) is trained on the UCCA-annotated Wiki corpus (http://www.cs.huji.ac.il/~oabend/ucca.html).

Neural component. We use the NTS-w2v model (https://github.com/senisioi/NeuralTextSimplification) provided by N17, obtained by training on the corpus of Hwang et al. (2015) and tuning on the corpus of Xu et al. (2016). The training set is based on manual and automatic alignments between standard English Wikipedia and Simple English Wikipedia, including both good matches and partial matches whose similarity score is above the 0.45 scale threshold (Hwang et al., 2015). The total size of the training set is about 280K aligned sentences, of which 150K sentences are full matches and 130K are partial matches. (We also considered the default initialization for the neural component, using the NTS model without word embeddings; experimenting on the tuning set, the w2v approach obtained higher BLEU and SARI scores, for h1 and h4 respectively, than the default approach.)

Comparison systems. We compare our findings to HYBRID, which is the state of the art for joint structural and lexical simplification, implemented by Zhang and Lapata (2017) (https://github.com/XingxingZhang/dress). We use the released output of HYBRID, trained on a corpus extracted from Wikipedia, which includes the aligned sentence pairs from Kauchak (2013), the aligned revision sentence pairs in Woodsend and Lapata (2011), and the PWKP corpus, totaling about 296K sentence pairs. The tuning set is the same as for the above systems. In order to isolate the effect of NMT, we also implement SEMoses, where the neural-based component is replaced by the phrase-based MT system Moses (http://www.statmt.org/moses/), which is also used in HYBRID. The training, tuning and test sets are the same as in the case of SENTS. MGIZA (https://github.com/moses-smt/mgiza) is used for word alignment. The KenLM language model is trained using the target side of the training corpus.

Additional baselines. We report human and automatic evaluation scores for Identity (where the output is identical to the input), for Simple Wikipedia (where the output is the corresponding aligned sentence in the PWKP corpus), and for the SBMT-SARI system, tuned against SARI (Xu et al., 2016), which maximized the SARI score on this test set in previous work (Nisioi et al., 2017; Zhang and Lapata, 2017).

Automatic evaluation. The automatic metrics used for the evaluation are: (1) BLEU (Papineni et al., 2002); (2) SARI (System output Against References and against the Input sentence; Xu et al., 2016), which compares the n-grams of the system output with those of the input and the human references, separately evaluating the quality of words that are added, deleted and kept by the systems; (3) Fadd: the addition component of the SARI score (F-score); (4) Fkeep: the keeping component of the SARI score (F-score); (5) Pdel: the deletion component of the SARI score (precision). Uniform tokenization and truecasing styles for all systems are obtained using the Moses toolkit. Each metric is computed against the 8 available references. We also assess system conservatism, reporting the percentage of sentences copied from the input (%Same), the averaged Levenshtein distance from the source (LDSC, which considers additions, deletions, and substitutions), and the number of source sentences that are split (#Split); the NLTK package (Loper and Bird, 2002) is used for these computations.
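The conservatism statistics can be computed in a few lines; the sketch below follows the description above and uses NLTK, as in the tooling note, but the exact normalization (casing, tokenization, and whether the Levenshtein distance is taken over characters or tokens) is an assumption made for illustration.

```python
# Sketch of the conservatism statistics (%Same, LDSC, #Split) described above.
# Character-level edit distance and the sentence-count proxy for #Split are
# assumptions, not necessarily the authors' exact procedure.
import nltk  # sent_tokenize requires the 'punkt' models

def conservatism_stats(sources, outputs):
    """Percentage of copied sentences, mean Levenshtein distance, and #Split."""
    n = len(sources)
    same = sum(s.strip() == o.strip() for s, o in zip(sources, outputs))
    ldsc = sum(nltk.edit_distance(s, o) for s, o in zip(sources, outputs)) / n
    n_split = sum(len(nltk.sent_tokenize(o)) > len(nltk.sent_tokenize(s))
                  for s, o in zip(sources, outputs))
    return {"%Same": 100.0 * same / n, "LDSC": ldsc, "#Split": n_split}
```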
Human evaluation. Human evaluation is carried out by 3 in-house native English annotators, who rated the different input-output pairs for the different systems according to 4 parameters: Grammaticality (G), Meaning preservation (M), Simplicity (S) and Structural Simplicity (StS). Each input-output pair is rated by all 3 annotators. Elicitation questions are given in Table 1. As the selection process of the input-output pairs in the test corpus of Xu et al. (2016), as well as their crowdsourced references, is explicitly biased towards lexical simplification, the use of human evaluation permits us to evaluate the structural aspects of the system outputs, even where structural operations are not attested in the references. Indeed, we show that system outputs may receive considerably higher structural simplicity scores than the source, in spite of the sample selection bias.

Table 1: Questions for the human evaluation.
  G:   Is the output fluent and grammatical?
  M:   Does the output preserve the meaning of the input?
  S:   Is the output simpler than the input?
  StS: Is the output simpler than the input, ignoring the complexity of the words?

Following previous work (e.g., Narayan and Gardent, 2014; Xu et al., 2016; Nisioi et al., 2017), Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale. Note that in the first question, the input sentence is not taken into account. The grammaticality of the input is assessed by evaluating the Identity transformation (see Table 2), providing a baseline for the grammaticality scores of the other systems. Following N17, a -2 to +2 scale is used for measuring simplicity, where a score of 0 indicates that the input and the output are equally complex. Compared to the standard 1 to 5 scale, this scale permits a better differentiation between cases where simplicity is hurt (the output is more complex than the original) and cases where the output is as simple as the original, for example in the case of the identity transformation. Structural simplicity is also evaluated with a -2 to +2 scale. The question eliciting StS is accompanied by a negative example, showing a case of lexical simplification, where a complex word is replaced by a simple one (the other questions appear without examples). A positive example is not included so as not to bias the annotators by revealing the nature of the operations we focus on (splitting and deletion).

We follow N17 in applying human evaluation on the first 70 sentences of the test corpus (we do not exclude system outputs identical to the source, as done by N17). The resulting corpus, totaling 1960 sentence pairs, each annotated by 3 annotators, also includes the additional experiments described in Section 7, as well as the outputs of the NTS and SENTS systems used with the default initialization. The inter-annotator agreement, using Cohen’s quadratic weighted κ (Cohen, 1968), is computed as the average agreement of the 3 annotator pairs. The obtained rates are 0.56, 0.75, 0.47 and 0.48 for G, M, S and StS respectively. System scores are computed by averaging over the 3 annotators and the 70 sentences.
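For concreteness, the agreement computation described above (quadratic weighted κ averaged over the three annotator pairs) can be sketched as follows; the use of scikit-learn and the toy ratings in the usage comment are illustrative assumptions, not the authors' actual tooling.

```python
# Sketch of the inter-annotator agreement computation: quadratic weighted kappa
# averaged over the three annotator pairs.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

def average_pairwise_kappa(annotator_ratings):
    """annotator_ratings: one equal-length list of item scores per annotator."""
    kappas = [cohen_kappa_score(a, b, weights="quadratic")
              for a, b in combinations(annotator_ratings, 2)]
    return sum(kappas) / len(kappas)

# e.g., with made-up 1-5 grammaticality ratings from three annotators:
# average_pairwise_kappa([[5, 4, 3], [5, 4, 4], [4, 4, 3]])
```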
6 Results

Human evaluation. Results are presented in Table 2. First, we can see that the two SENTS systems outperform HYBRID in terms of G, M, and S. SENTS-h1 is the best-scoring system under all human measures. In comparison to NTS, SENTS scores markedly higher on the simplicity judgments. Meaning preservation and grammaticality are lower for SENTS, which is likely due to the more conservative nature of NTS. Interestingly, the application of the splitting rules by themselves does not yield a considerably simpler sentence. This likely stems from the rules not necessarily yielding grammatical sentences (NTS often serves as a grammatical error corrector over their output), and from the incorporation of deletions, which are also structural operations and are performed by the neural system. An example of high structural simplicity scores for SENTS resulting from deletions is presented in Table 5, together with the outputs of the other systems and the corresponding human evaluation scores. NTS here performs lexical simplification, replacing the word “incursions” by “raids” or “attacks”. On the other hand, the high StS scores obtained by DSS and SEMoses are due to sentence splitting.

Table 2: Human evaluation of the different NMT-based systems. Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale. A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence. The highest score in each column appears in bold. Structural simplification systems are those that explicitly model structural operations.
                                        G     M     S      StS
  Identity                             4.80  5.00  0.00   0.00
  Simple Wikipedia                     4.60  4.21  0.83   0.38
  Only MT-Based Simplification
  SBMT-SARI                            3.71  3.96  0.14  -0.15
  NTS-h1                               4.56  4.48  0.22   0.15
  NTS-h4                               4.29  3.90  0.31   0.19
  Only Structural Simplification
  DSS                                  3.42  4.15  0.16   0.16
  Structural+MT-Based Simplification
  HYBRID                               2.96  2.46  0.43   0.43
  SEMoses                              3.27  3.98  0.16   0.13
  SENTS-h1                             3.98  3.33  0.68   0.63
  SENTS-h4                             3.54  2.98  0.50   0.36

Automatic evaluation. Results are presented in Table 3. Identity obtains much higher BLEU scores than any other system, suggesting that BLEU may not be informative in this setting. SARI seems more informative, and assigns the lowest score to Identity and the second highest to the reference. Both SENTS systems outperform HYBRID in terms of SARI and all its 3 sub-components. The h4 setting (hypothesis #4 in the beam) is generally best, both with and without the splitting rules. Comparing SENTS to using NTS alone (without splitting), we see that SENTS obtains higher SARI scores when hypothesis #1 is used and that NTS obtains higher scores when hypothesis #4 is used. This may result from NTS being more conservative than SENTS (and HYBRID), which is rewarded by SARI (conservatism is indicated by the %Same column). Indeed, for h1, %Same is reduced from around 66% for NTS to around 7% for SENTS. Conservatism further decreases when h4 is used (for both NTS and SENTS). Examining SARI’s components, we find that SENTS outperforms NTS on Fadd, and is comparable (or even superior in the h1 setting) to NTS on Pdel. The superior SARI score of NTS over SENTS is thus entirely a result of a superior Fkeep, which is easier for a conservative system to maximize. Comparing HYBRID with SEMoses, both of which use Moses, we find that SEMoses obtains higher BLEU and SARI scores, as well as G and M human scores, and splits many more sentences. HYBRID scores higher on the human simplicity measures. We note, however, that applying NTS alone is inferior to HYBRID in terms of simplicity, and that both components are required to obtain high simplicity scores (with SENTS).

Table 3: The left-hand side of the table presents BLEU and SARI scores for the combinations of NTS and DSS, as well as for the baselines. The highest score in each column appears in bold. The right-hand side presents lexical and structural properties of the outputs. %Same: proportion of sentences copied from the input; LDSC: averaged Levenshtein distance from the source; #Split: number of split sentences. Structural simplification systems are those that explicitly model structural operations.
                             BLEU   SARI   Fadd  Fkeep  Pdel   %Same  LDSC   #Split
  Identity                  94.93  25.44  0.00  76.31   0.00  100     0.00     0
  Simple Wikipedia          69.58  39.50  8.46  61.71  48.32    0.00  33.34    0
  Only MT-Based Simplification
  SBMT-SARI                 74.44  41.46  6.77  69.92  47.68    4.18  23.31    0
  NTS-h1                    88.67  28.73  0.80  70.95  14.45   66.02  17.13    0
  NTS-h4                    79.88  36.55  2.59  65.93  41.13    2.79  24.18    1
  Only Structural Simplification
  DSS                       76.57  36.76  3.82  68.45  38.01    8.64  25.03  208
  Structural+MT-Based Simplification
  HYBRID                    52.82  27.40  2.41  43.09  36.69    1.39  61.53    3
  SEMoses                   74.45  36.68  3.77  67.66  38.62    7.52  27.44  208
  SENTS-h1                  58.94  30.27  3.01  51.52  36.28    6.69  59.18    0
  SENTS-h4                  57.71  31.90  3.95  51.86  39.90    0.28  54.47   17

We also compare the sentence splitting component used in our systems (namely DSS) to that used in HYBRID, abstracting away from deletion-based and lexical simplification. We therefore apply DSS to the test set (554 sentences) of the WEB-SPLIT corpus (Narayan et al., 2017) (see Section 2), which focuses on sentence splitting.
We compare our results to those reported for a variant of HYBRID used without the deletion module, and trained on WEB-SPLIT (Narayan et al., 2017). DSS gets a higher BLEU score (46.45 vs. 39.97) and performs more splittings (1.73 vs. 1.26 output sentences per input sentence).

Table 4: Automatic and human evaluation for the different combinations of Moses and DSS. The automatic metrics as well as the lexical and structural properties reported (%Same: proportion of sentences copied from the input; LDSC: averaged Levenshtein distance from the source; #Split: number of split sentences) concern the 359 sentences of the test corpus. Human evaluation, with the G, M, S, and StS parameters, is applied to the first 70 sentences of the corpus. The highest score in each column appears in bold.
                         BLEU   SARI   Fadd  Fkeep  Pdel   %Same  LDSC  #Split   G     M     S      StS
  Moses                 92.58  28.19  0.16  75.73   8.70  79.67   3.22    0    4.25  4.78  0      0.04
  SEMoses               74.45  36.68  3.77  67.66  38.62   7.52  27.44  208    3.27  3.98  0.16   0.13
  SETrain1-Moses        91.24  33.06  0.41  76.07  22.69  60.72   4.47    1    4.23  4.54  -0.12  -0.13
  SETrain2-Moses        94.31  26.71  0.07  76.20   3.85  92.76   1.45    0    4.73  4.99  0.01   -0.005
  Moses_LM              92.66  28.19  0.18  75.68   8.71  79.39   3.43    0    4.55  4.82  -0.01  -0.04
  SEMoses_LM            74.49  36.70  3.79  67.67  38.65   7.52  27.45  208    3.32  4.08  0.15   0.14
  SETrain1-Moses_LM     85.68  36.52  2.34  72.85  34.37  27.30   6.71   33    4.03  4.63  -0.11  -0.12
  SETrain2-Moses_LM     94.22  26.66  0.10  76.19   3.69  92.20   1.43    0    4.75  4.99  0.01   -0.01

7 Additional Experiments

Replacing the parser by manual annotation. In order to isolate the influence of the parser on the results, we implement a semi-automatic version of the semantic component, which uses manual UCCA annotation instead of the parser, focusing on the first 70 sentences of the test corpus. We employ a single expert UCCA annotator and use the UCCAApp annotation tool (Abend et al., 2017). Results are presented in Table 6, for both SENTS and SEMoses. In the case of SEMoses, meaning preservation is improved when manual UCCA annotation is used. On the other hand, simplicity degrades, possibly due to the larger number of Scenes marked by the human annotator (TUPA tends to under-predict Scenes). This effect does not show with SENTS, where trends are similar to the automatic-parses case and high simplicity scores are obtained. This demonstrates that UCCA parsing technology is sufficiently mature to be used to carry out structural simplification.

We also directly evaluate the performance of the parser by computing F1, Recall and Precision DAG scores (Hershcovich et al., 2017) against the manual UCCA annotation, using the evaluation tools provided in https://github.com/danielhers/ucca and ignoring 9 sentences for which different tokenizations of proper nouns are used in the automatic and manual parsing. For primary edges (i.e., edges that form a tree structure) we obtain scores of 68.9%, 70.5%, and 67.4% for F1, Recall and Precision respectively. For remote edges (i.e., additional edges, forming a DAG), the scores are 45.3%, 40.5%, and 51.5%. These results are comparable with the out-of-domain results reported by Hershcovich et al. (2017).
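The official evaluation tools ship with the ucca repository noted above; the sketch below only illustrates the kind of labeled-edge precision/recall/F1 computation involved, assuming edges are represented as (yield, label) pairs, which simplifies the actual matching criteria.

```python
# Illustration of the labeled-edge precision/recall/F1 scores reported above.
# Representing edges as (yield, label) pairs is a simplification assumed here.
def edge_prf(predicted, gold):
    """Precision, recall and F1 over two collections of (yield, label) edges."""
    predicted, gold = set(predicted), set(gold)
    matched = len(predicted & gold)
    precision = matched / len(predicted) if predicted else 0.0
    recall = matched / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Primary ("tree") edges and remote edges are scored separately, e.g.:
# p, r, f = edge_prf(pred_primary_edges, gold_primary_edges)
```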
Table 5: System outputs for one of the test sentences, with the corresponding human evaluation scores (averaged over the 3 annotators). Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale. A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.
  Identity (G 5.00, M 5.00, S 0.00, StS 0.00): In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the incursions of other Viking groups.
  Simple Wikipedia (G 4.67, M 5.00, S 1.00, StS 0.00): In return, Rollo swore fealty to Charles, converted to Christianity, and swore to defend the northern region of France against raids by other Viking groups.
  SBMT-SARI (G 4.67, M 4.67, S 0.67, StS 0.00): In return, Rollo swore fealty to Charles, converted to Christianity, and set out to defend the north of France from the raids of other viking groups.
  NTS-h1 (G 5.00, M 5.00, S 1.00, StS 0.00): In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the raids of other Viking groups.
  NTS-h4 (G 4.67, M 5.00, S 1.00, StS 0.00): In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the attacks of other Viking groups.
  DSS (G 4.00, M 4.33, S 1.33, StS 1.33): Rollo swore fealty to Charles. Rollo converted to Christianity. Rollo undertook to defend the northern region of France against the incursions of other viking groups.
  HYBRID (G 2.33, M 2.00, S 0.33, StS 0.33): In return Rollo swore, and undertook to defend the region of France., Charles, converted
  SEMoses (G 3.33, M 4.00, S 1.33, StS 1.33): Rollo swore put his seal to Charles. Rollo converted to Christianity. Rollo undertook to defend the northern region of France against the incursions of other viking groups.
  SENTS-h1 (G 5.00, M 2.00, S 2.00, StS 2.00): Rollo swore fealty to Charles.
  SENTS-h4 (G 5.00, M 2.67, S 1.33, StS 1.33): Rollo swore fealty to Charles and converted to Christianity.

Table 6: Human evaluation using manual UCCA annotation. Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale. A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence. X_m refers to the semi-automatic version of the system X.
                   G     M     S      StS
  DSS_m           3.38  3.91  -0.16  -0.16
  SENTS_m-h1      4.12  3.34   0.61   0.58
  SENTS_m-h4      3.60  3.24   0.26   0.12
  SEMoses_m       3.32  4.27  -0.25  -0.25
  SEMoses_m,LM    3.43  4.28  -0.18  -0.19

Experiments on Moses. We test other variants of SEMoses, where phrase-based MT is used instead of NMT. Specifically, we incorporate semantic information in a different manner by implementing two additional models: (1) SETrain1-Moses, where a new training corpus is obtained by applying the splitting rules to the target side of the training corpus; (2) SETrain2-Moses, where the rules are applied to the source side. The resulting parallel corpus is concatenated to the original training corpus. We also examine whether training a language model (LM) on split sentences has a positive effect, and train the LM on the split target side. For each system X, the version with the LM trained on split sentences is denoted by X_LM.
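The construction of the SETrain corpora described above can be sketched as follows; dss_split is the hypothetical splitting function from the earlier sketch, and the tokenization and file handling details are assumptions. The X_LM variants would additionally train the language model on the split target side.

```python
# Sketch of how the SETrain1/SETrain2 training corpora could be assembled.
# `dss_split` is a hypothetical stand-in for the splitting rules above.
def build_setrain_corpus(pairs, dss_split, split_side="target"):
    """pairs: list of (source, target) sentence pairs.
    Returns the original corpus concatenated with a copy in which the splitting
    rules were applied to one side (target = SETrain1, source = SETrain2)."""
    augmented = []
    for src, tgt in pairs:
        if split_side == "target":                      # SETrain1-Moses
            augmented.append((src, " ".join(dss_split(tgt))))
        else:                                           # SETrain2-Moses
            augmented.append((" ".join(dss_split(src)), tgt))
    return pairs + augmented
```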
We repeat the same human and automatic evaluation protocol as in §6, presenting results in Table 4. Simplicity scores are much higher in the case of SENTS (which uses NMT) than with Moses. The two best systems according to SARI are SEMoses and SEMoses_LM, which use DSS. In fact, they resemble the performance of DSS applied alone (Tables 2 and 3), which confirms the high degree of conservatism observed by Moses in simplification (Alva-Manchego et al., 2017). Indeed, all Moses-based systems that do not apply DSS as preprocessing are conservative, obtaining high scores for BLEU, grammaticality and meaning preservation, but low scores for simplicity. Training the LM on split sentences shows little improvement.

8 Conclusion

We presented the first simplification system combining semantic structures and neural machine translation, showing that it outperforms existing lexical and structural systems. The proposed approach addresses the over-conservatism of MT-based systems for TS, which often fail to modify the source in any way. The semantic component performs sentence splitting without relying on a specialized corpus, requiring only an off-the-shelf semantic parser. The consideration of sentence splitting as a decomposition of a sentence into its Scenes is further supported by recent work on structural TS evaluation (Sulem et al., 2018), which proposes the SAMSA metric. The two works, which apply this assumption to different ends (TS system construction, and TS evaluation), confirm its validity. Future work will leverage UCCA’s cross-linguistic applicability to support multi-lingual TS and TS pre-processing for MT.

Acknowledgments

We would like to thank Shashi Narayan for sharing his data and the annotators for participating in our evaluation and UCCA annotation experiments. We also thank Daniel Hershcovich and the anonymous reviewers for their helpful advice. This work was partially supported by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI) and by the Israel Science Foundation (grant No. 929/17), as well as by the HUJI Cyber Security Research Center in conjunction with the Israel National Cyber Bureau in the Prime Minister’s Office.

References

Omri Abend and Ari Rappoport. 2013. Universal Conceptual Cognitive Annotation (UCCA). In Proc. of ACL’13, pages 228–238.
Omri Abend, Shai Yerushalmi, and Ari Rappoport. 2017. UCCAApp: Web-application for syntactic and semantic phrase-based annotation. In Proc. of ACL’17, System Demonstrations, pages 109–114.
Sandra Maria Aluísio and Caroline Gasperin. 2010. Fostering digital inclusion and accessibility: The PorSimples project for simplification of Portuguese texts. In Proc. of the NAACL HLT 2010 Young Investigators Workshop on Computational Approaches to Languages of the Americas, pages 46–53.
Fernando Alva-Manchego, Joachim Bingel, Gustavo H. Paetzold, Carolina Scarton, and Lucia Specia. 2017. Learning how to simplify from explicit labeling of complex-simplified text pairs. In Proc. of IJCNLP’17, pages 295–305.
Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sembanking. In Proc. of the Linguistic Annotation Workshop and Interoperability with Discourse, pages 178–186.
Gianni Barlacchi and Sara Tonelli. 2013. ERNESTA: A sentence simplification tool for children’s stories in Italian. In Proc. of CICLing’13, pages 476–487.
Or Biran, Samuel Brody, and Noémie Elhadad. 2011. Putting it simply: a context-aware approach to lexical simplification. In Proc. of ACL’11, pages 465–501.
Alexandra Birch, Omri Abend, Ondřej Bojar, and Barry Haddow. 2016. HUME: Human UCCA-based evaluation of machine translation. In Proc. of EMNLP’16, pages 1264–1274.
Raman Chandrasekar, Christine Doran, and Bangalore Srinivas. 1996. Motivations and methods for sentence simplification. In Proc. of COLING’96, pages 1041–1044.
Leshem Choshen and Omri Abend. 2018. Reference-less measure of faithfulness for grammatical error correction. In Proc. of NAACL’18 (Short papers). To appear.
Jacob Cohen. 1968. Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. Psychological Bulletin, 70(4):213.
William Coster and David Kauchak. 2011a. Learning to simplify sentences using Wikipedia. In Proc. of ACL’11, Short Papers, pages 1–9.
William Coster and David Kauchak. 2011b. Simple English Wikipedia: A new text simplification task. In Proc. of ACL’11, pages 665–669.
Robert M.W. Dixon. 2010a. Basic Linguistic Theory: Grammatical Topics, volume 2. Oxford University Press.
Robert M.W. Dixon. 2010b. Basic Linguistic Theory: Methodology, volume 1. Oxford University Press.
Robert M.W. Dixon. 2012. Basic Linguistic Theory: Further Grammatical Topics, volume 3. Oxford University Press.
Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In Proc. of NAACL-HLT’13, pages 758–764.
Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. Creating training corpora for NLG micro-planning. In Proc. of ACL’17, pages 179–188.
Goran Glavaš and Sanja Štajner. 2013. Event-centered simplification of news stories. In Proc. of the Student Research Workshop associated with RANLP 2013, pages 71–78.
Daniel Hershcovich, Omri Abend, and Ari Rappoport. 2017. A transition-based directed acyclic graph parser for UCCA. In Proc. of ACL’17, pages 1127–1138.
Bui Thanh Hung, Nguyen Le Minh, and Akira Shimazu. 2012. Sentence splitting for Vietnamese-English machine translation. In Knowledge and Systems Engineering, 2012 Fourth International Conference, pages 156–160.
William Hwang, Hannaneh Hajishirzi, Mari Ostendorf, and Wei Wu. 2015. Aligning sentences from Standard Wikipedia to Simple Wikipedia. In Proc. of NAACL’15, pages 211–217.
Hans Kamp. 1981. A theory of truth and semantic representation. In Formal Methods in the Study of Language. Mathematisch Centrum. Number pt.1 in Mathematical Centre tracts.
David Kauchak. 2013. Improving text simplification language modeling using unsimplified text data. In Proc. of ACL’13, pages 1537–1546.
Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. ArXiv:1701.02810 [cs.CL].
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: open source toolkit for statistical machine translation. In Proc. of ACL’07 on interactive poster and demonstration sessions, pages 177–180.
Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proc. of NAACL’03, pages 48–54.
Ronald W. Langacker. 2008. Cognitive Grammar: A Basic Introduction. Oxford University Press, USA.
Edward Loper and Steven Bird. 2002. NLTK: the Natural Language Toolkit. In Proc. of EMNLP’02, pages 63–70.
Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proc. of EMNLP’15, pages 1412–1421.
Pablo N. Mendes, Max Jakob, and Christian Bizer. 2012. DBpedia: A multilingual cross-domain knowledge base. In Proc. of LREC’12, pages 1813–1817.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In Proc. of Workshop at the International Conference on Learning Representations.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119.
Kshitij Mishra, Ankush Soni, Rahul Sharma, and Dipti Misra Sharma. 2014. Exploring the effects of sentence simplification on Hindi to English machine translation systems. In Proc. of the Workshop on Automatic Text Simplification: Methods and Applications in the Multilingual Society, pages 21–29.
Shashi Narayan and Claire Gardent. 2014. Hybrid simplification using deep semantics and machine translation. In Proc. of ACL’14, pages 435–445.
Shashi Narayan and Claire Gardent. 2016. Unsupervised sentence simplification using deep semantics. In Proc. of INLG’16, pages 111–120.
Shashi Narayan, Claire Gardent, Shay B. Cohen, and Anastasia Shimorina. 2017. Split and rephrase. In Proc. of EMNLP’17, pages 617–627.
Christina Niklaus, Bernhard Bermeitinger, Siegfried Handschuh, and André Freitas. 2016. A sentence simplification system for improving relation extraction. In Proc. of COLING’16.
Sergiu Nisioi, Sanja Štajner, Simone Paolo Ponzetto, and Liviu P. Dinu. 2017. Exploring neural text simplification models. In Proc. of ACL’17 (Short paper), pages 85–91.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proc. of ACL’02, pages 311–318.
Radim Řehůřek and Petr Sojka. 2010. Software framework for topic modelling with large corpora. In Proc. of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45–50, Valletta, Malta. ELRA.
Luz Rello, Ricardo Baeza-Yates, Stefan Bott, and Horacio Saggion. 2013. Simplify or help?: text simplification strategies for people with dyslexia. In Proc. of the 10th International Cross-Disciplinary Conference on Web Accessibility, pages 15:1–15:10.
Violeta Seretan. 2012. Acquisition of syntactic simplification rules for French. In Proc. of LREC’12, pages 4019–4026.
Matthew Shardlow. 2014. A survey of automated text simplification. International Journal of Advanced Computer Science and Applications.
Advaith Siddharthan. 2002. An architecture for a text simplification system. In Proc. of LEC, pages 64–71.
Advaith Siddharthan. 2004. Syntactic simplification and text cohesion. Technical Report 597, University of Cambridge.
Advaith Siddharthan and M. A. Angrosh. 2014. Hybrid text simplification using synchronous dependency grammars with hand-written and automatically harvested rules. In Proc. of EACL’14, pages 722–731.
Advaith Siddharthan. 2011. Text simplification using typed dependencies: A comparison of the robustness of different generation strategies. In Proc. of the 13th European Workshop on Natural Language Generation, pages 2–11. Association for Computational Linguistics.
David A. Smith and Jason Eisner. 2006. Quasi-synchronous grammars: Alignment by soft projection of syntactic dependencies. In Proc. of the 1st Workshop on Statistical Machine Translation, pages 23–30.
Lucia Specia. 2010. Translating from complex to simplified sentences. In Proc. of the 9th International Conference on Computational Processing of the Portuguese Language, pages 30–39.
Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958.
Sanja Štajner, Hannah Bechara, and Horacio Saggion. 2015. A deeper exploration of the standard PB-SMT approach to text simplification and its evaluation. In Proc. of ACL’15 (Short papers), pages 823–828.
Sanja Štajner and Goran Glavaš. 2017. Leveraging event-based semantics for automated text simplification. Expert Systems with Applications, 82:383–395.
Sanja Štajner and Maja Popović. 2016. Can text simplification help machine translation? Baltic J. Modern Computing, 4:230–242.
Elior Sulem, Omri Abend, and Ari Rappoport. 2015. Conceptual annotations preserve structure across translations. In Proc. of the 1st Workshop on Semantics-Driven Statistical Machine Translation (S2MT 2015), pages 11–22.
Elior Sulem, Omri Abend, and Ari Rappoport. 2018. Semantic structural evaluation for text simplification. In Proc. of NAACL’18. To appear.
Kristian Woodsend and Mirella Lapata. 2011. Learning to simplify sentences with quasi-synchronous grammar and integer programming. In Proc. of EMNLP’11, pages 409–420.
Sander Wubben, Antal van den Bosch, and Emiel Krahmer. 2012. Sentence simplification by monolingual machine translation. In Proc. of ACL’12, pages 1015–1024.
Wei Xu, Chris Callison-Burch, and Courtney Napoles. 2015. Problems in current text simplification research: new data can help. TACL, 3:283–297.
Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. TACL, 4:401–415.
Kenji Yamada and Kevin Knight. 2001. A syntax-based statistical translation model. In Proc. of ACL’01, pages 523–530.
Xingxing Zhang and Mirella Lapata. 2017. Sentence simplification with deep reinforcement learning. In Proc. of EMNLP’17, pages 595–605.
Yaoyuan Zhang, Zhenxu Ye, Dongyan Zhao, and Rui Yan. 2017. A constrained sequence-to-sequence neural model for sentence simplification. ArXiv:1704.02312 [cs.CL].
Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. In Proc. of COLING’10, pages 1353–1361.